WorldWideScience

Sample records for covariates stratified models

  1. Partially linear varying coefficient models stratified by a functional covariate

    KAUST Repository

    Maity, Arnab; Huang, Jianhua Z.

    2012-01-01

    We consider the problem of estimation in semiparametric varying coefficient models where the covariate modifying the varying coefficients is functional and is modeled nonparametrically. We develop a kernel-based estimator of the nonparametric

  2. Partially linear varying coefficient models stratified by a functional covariate

    KAUST Repository

    Maity, Arnab

    2012-10-01

    We consider the problem of estimation in semiparametric varying coefficient models where the covariate modifying the varying coefficients is functional and is modeled nonparametrically. We develop a kernel-based estimator of the nonparametric component and a profiling estimator of the parametric component of the model and derive their asymptotic properties. Specifically, we show the consistency of the nonparametric functional estimates and derive the asymptotic expansion of the estimates of the parametric component. We illustrate the performance of our methodology using a simulation study and a real data application.
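    As a toy illustration of kernel-based estimation of a varying coefficient (a much-simplified scalar-index analogue of the paper's functional-covariate setting, with synthetic data), a local-constant, kernel-weighted least squares estimate of β(u) in y = xβ(u) + ε can be sketched as:

```python
import numpy as np

def kernel_wls_beta(u0, u, x, y, h):
    """Local-constant kernel estimate of beta(u0) in y = x*beta(u) + noise."""
    w = np.exp(-0.5 * ((u - u0) / h) ** 2)        # Gaussian kernel weights
    return np.sum(w * x * y) / np.sum(w * x * x)  # kernel-weighted least squares

rng = np.random.default_rng(0)
n = 2000
u = rng.uniform(0, 1, n)                          # index modifying the coefficient
x = rng.normal(size=n)                            # regressor
beta_true = np.sin(2 * np.pi * u)                 # true varying coefficient
y = x * beta_true + 0.1 * rng.normal(size=n)

est = kernel_wls_beta(0.25, u, x, y, h=0.05)      # true value: sin(pi/2) = 1
```

The bandwidth h plays the usual role: smaller values reduce bias from curvature in β(·) at the cost of higher variance.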

  3. A Powerful Approach to Estimating Annotation-Stratified Genetic Covariance via GWAS Summary Statistics.

    Science.gov (United States)

    Lu, Qiongshi; Li, Boyang; Ou, Derek; Erlendsdottir, Margret; Powles, Ryan L; Jiang, Tony; Hu, Yiming; Chang, David; Jin, Chentian; Dai, Wei; He, Qidu; Liu, Zefeng; Mukherjee, Shubhabrata; Crane, Paul K; Zhao, Hongyu

    2017-12-07

    Despite the success of large-scale genome-wide association studies (GWASs) on complex traits, our understanding of their genetic architecture is far from complete. Jointly modeling multiple traits' genetic profiles has provided insights into the shared genetic basis of many complex traits. However, large-scale inference sets a high bar for both statistical power and biological interpretability. Here we introduce a principled framework to estimate annotation-stratified genetic covariance between traits using GWAS summary statistics. Through theoretical and numerical analyses, we demonstrate that our method provides accurate covariance estimates, thereby enabling researchers to dissect both the shared and distinct genetic architecture across traits to better understand their etiologies. Among 50 complex traits with publicly accessible GWAS summary statistics (N_total ≈ 4.5 million), we identified more than 170 pairs with statistically significant genetic covariance. In particular, we found strong genetic covariance between late-onset Alzheimer disease (LOAD) and amyotrophic lateral sclerosis (ALS), two major neurodegenerative diseases, in single-nucleotide polymorphisms (SNPs) with high minor allele frequencies and in SNPs located in the predicted functional genome. Joint analysis of LOAD, ALS, and other traits highlights LOAD's correlation with cognitive traits and hints at an autoimmune component for ALS. Copyright © 2017 American Society of Human Genetics. Published by Elsevier Inc. All rights reserved.

  4. Multivariate covariance generalized linear models

    DEFF Research Database (Denmark)

    Bonat, W. H.; Jørgensen, Bent

    2016-01-01

    We propose a general framework for non-normal multivariate data analysis called multivariate covariance generalized linear models, designed to handle multivariate response variables, along with a wide range of temporal and spatial correlation structures defined in terms of a covariance link function combined with a matrix linear predictor involving known matrices. The models are fitted by using an efficient Newton scoring algorithm based on quasi-likelihood and Pearson estimating functions, using only second-moment assumptions. This provides a unified approach to a wide variety of types of response variables and covariance structures, including multivariate extensions … The method is motivated by three data examples that are not easily handled by existing methods. The first example concerns multivariate count data, the second involves response variables of mixed types, combined with repeated …
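    The covariance-link-plus-matrix-linear-predictor idea can be made concrete with a tiny numpy sketch. This is only an illustration of the construction, not the authors' fitting algorithm: an identity covariance link is assumed, and the known matrices and dispersion parameters are made up.

```python
import numpy as np

# Hypothetical matrix linear predictor for one response (identity link):
# Omega = tau0 * Z0 + tau1 * Z1, with Z0, Z1 known design matrices.
n = 5
Z0 = np.eye(n)                       # independent-errors component
Z1 = np.ones((n, n))                 # exchangeable (shared-group) component
tau = np.array([1.0, 0.3])           # made-up dispersion parameters

Omega = tau[0] * Z0 + tau[1] * Z1    # matrix linear predictor
eigvals = np.linalg.eigvalsh(Omega)  # positive => valid covariance matrix
```

In a real McGLM fit the tau parameters are estimated from Pearson estimating equations; here they are fixed only to show how known matrices encode the correlation structure.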

  5. Properties of the endogenous post-stratified estimator using a random forests model

    Science.gov (United States)

    John Tipton; Jean Opsomer; Gretchen G. Moisen

    2012-01-01

    Post-stratification is used in survey statistics as a method to improve variance estimates. In traditional post-stratification methods, the variable on which the data is being stratified must be known at the population level. In many cases this is not possible, but it is possible to use a model to predict values using covariates, and then stratify on these predicted...
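    A minimal sketch of the endogenous post-stratification idea (not the paper's estimator or its variance theory; synthetic population, scikit-learn's RandomForestRegressor assumed available): fit the model on the sample, stratify the whole population on its predictions, then weight sample stratum means by population stratum shares.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(1)
N, n = 10000, 500
X = rng.uniform(size=(N, 3))                      # population covariates
y_pop = 2 * X[:, 0] + X[:, 1] ** 2 + 0.1 * rng.normal(size=N)

idx = rng.choice(N, size=n, replace=False)        # simple random sample
rf = RandomForestRegressor(n_estimators=50, random_state=0)
rf.fit(X[idx], y_pop[idx])                        # model fitted on the sample

pred_pop = rf.predict(X)                          # predictions for everyone
edges = np.quantile(pred_pop, [0, .25, .5, .75, 1.])
strata_pop = np.clip(np.searchsorted(edges, pred_pop, side="right") - 1, 0, 3)
strata_smp = strata_pop[idx]

# Post-stratified mean: population stratum shares times sample stratum means
shares = np.bincount(strata_pop, minlength=4) / N
means = np.array([y_pop[idx][strata_smp == h].mean() for h in range(4)])
y_ps = np.sum(shares * means)
```

The "endogenous" twist studied in the paper is visible here: the strata themselves depend on a model estimated from the same sample, which is what complicates the classical variance results.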

  6. Modeling Covariance Breakdowns in Multivariate GARCH

    OpenAIRE

    Jin, Xin; Maheu, John M

    2014-01-01

    This paper proposes a flexible way of modeling dynamic heterogeneous covariance breakdowns in multivariate GARCH (MGARCH) models. During periods of normal market activity, volatility dynamics are governed by an MGARCH specification. A covariance breakdown is any significant temporary deviation of the conditional covariance matrix from its implied MGARCH dynamics. This is captured through a flexible stochastic component that allows for changes in the conditional variances, covariances and impl...
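    As context for MGARCH-style conditional covariance dynamics, here is a minimal RiskMetrics-style EWMA recursion in numpy. It is only a stand-in for the MGARCH specification governing "normal market activity"; the paper's stochastic breakdown component is not modelled here.

```python
import numpy as np

def ewma_cov(returns, lam=0.94):
    """EWMA conditional covariance path H_t = lam*H_{t-1} + (1-lam)*r_t r_t'."""
    T, k = returns.shape
    H = np.cov(returns.T)                 # initialise with the sample covariance
    path = np.empty((T, k, k))
    for t in range(T):
        r = returns[t:t + 1].T            # column vector of period-t returns
        H = lam * H + (1 - lam) * (r @ r.T)
        path[t] = H
    return path

rng = np.random.default_rng(2)
rets = rng.normal(scale=0.01, size=(500, 2))
H_path = ewma_cov(rets)
```

Each update mixes the previous conditional covariance with a rank-one outer product of returns, so the matrix stays symmetric positive definite along the whole path.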

  7. MC3D modelling of stratified explosion

    International Nuclear Information System (INIS)

    Picchi, S.; Berthoud, G.

    1999-01-01

    It is known that a steam explosion can occur in a stratified geometry and that the observed yields are lower than in the case of an explosion in a premixture configuration. However, very few models are available to quantify the amount of melt which can be involved and the pressure peak that can be developed. In the stratified application of the MC3D code, mixing and fragmentation of the melt are explained by the growth of Kelvin-Helmholtz instabilities due to the shear flow of the two-phase coolant above the melt. Such a model is then used to recalculate the Frost-Ciccarelli tin-water experiment. Pressure peak, speed of propagation, bubble shape and erosion height are well reproduced, as is the influence of the inertial constraint (height of the water pool). (author)

  8. MC3D modelling of stratified explosion

    Energy Technology Data Exchange (ETDEWEB)

    Picchi, S.; Berthoud, G. [DTP/SMTH/LM2, CEA, 38 - Grenoble (France)

    1999-07-01

    It is known that a steam explosion can occur in a stratified geometry and that the observed yields are lower than in the case of an explosion in a premixture configuration. However, very few models are available to quantify the amount of melt which can be involved and the pressure peak that can be developed. In the stratified application of the MC3D code, mixing and fragmentation of the melt are explained by the growth of Kelvin-Helmholtz instabilities due to the shear flow of the two-phase coolant above the melt. Such a model is then used to recalculate the Frost-Ciccarelli tin-water experiment. Pressure peak, speed of propagation, bubble shape and erosion height are well reproduced, as is the influence of the inertial constraint (height of the water pool). (author)

  9. Modelling of vapour explosion in stratified geometrie

    International Nuclear Information System (INIS)

    Picchi, St.

    1999-01-01

    When a hot liquid comes into contact with a colder volatile liquid, one can under some conditions obtain an explosive vaporization, called a vapour explosion, whose consequences for neighbouring structures can be severe. This explosion requires intimate mixing and fine fragmentation of the two liquids. In a stratified vapour explosion, the two liquids are initially superposed and separated by a vapour film. Triggering the explosion can induce a propagation of the explosion along the film. A study of experimental results and existing models led to the following main points: - the explosion propagation is due to a pressure wave propagating through the medium; - the mixing is due to the development of Kelvin-Helmholtz instabilities induced by the shear velocity between the two liquids behind the pressure wave. The presence of vapour in the volatile liquid explains the experimentally observed propagation velocity and the velocity difference between the two fluids at the passage of the pressure wave. A first model was proposed by Brayer in 1994 to describe the fragmentation and mixing of the two fluids, but his results do not show explosion propagation. We have therefore built a new mixing-fragmentation model based on the atomization phenomenon that develops during the passage of the pressure wave. We have also taken into account the transient nature of the heat transfer between fuel drops and the volatile liquid, and elaborated a model of transient heat transfer. These two models have been introduced into the multi-component thermal-hydraulic code MC3D. Calculation results show qualitative and quantitative agreement with experimental results and confirm the basic options of the model. (author)

  10. EQUIVALENT MODELS IN COVARIANCE STRUCTURE-ANALYSIS

    NARCIS (Netherlands)

    LUIJBEN, TCW

    1991-01-01

    Defining equivalent models as those that reproduce the same set of covariance matrices, necessary and sufficient conditions are stated for the local equivalence of two expanded identified models M1 and M2 when fitting the more restricted model M0. Assuming several regularity conditions, the rank

  11. Covariant, chirally symmetric, confining model of mesons

    International Nuclear Information System (INIS)

    Gross, F.; Milana, J.

    1991-01-01

    We introduce a new model of mesons as quark-antiquark bound states. The model is covariant, confining, and chirally symmetric. Our equations give an analytic solution for a zero-mass pseudoscalar bound state in the case of exact chiral symmetry, and also reduce to the familiar, highly successful nonrelativistic linear potential models in the limit of heavy-quark mass and lightly bound systems. In this fashion we are constructing a unified description of all the mesons from the π through the Υ. Numerical solutions for other cases are also presented

  12. A special covariance structure for random coefficient models with both between and within covariates

    International Nuclear Information System (INIS)

    Riedel, K.S.

    1990-07-01

    We review random coefficient (RC) models in linear regression and propose a bias correction to the maximum likelihood (ML) estimator. Asymptotic expansions of the ML equations are given when the between-individual variance is much larger or smaller than the variance from within-individual fluctuations. The standard model assumes that all but one covariate vary within each individual (we denote the within covariates by the vector χ₁). We consider random coefficient models where some of the covariates do not vary in any single individual (we denote the between covariates by the vector χ₀). The regression coefficients, vector βₖ, can only be estimated in the subspace Xₖ of X. Thus the number of individuals necessary to estimate vector β and the covariance matrix Δ of vector β increases significantly in the presence of more than one between covariate. When the number of individuals is sufficient to estimate vector β but not the entire matrix Δ, additional assumptions must be imposed on the structure of Δ. A simple reduced model is that the between component of vector β is fixed and only the within component varies randomly. This model fails because it is not invariant under linear coordinate transformations and it can significantly overestimate the variance of new observations. We propose a covariance structure for Δ without these difficulties by first projecting the within covariates onto the space perpendicular to the between covariates. (orig.)
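    The final projection step can be sketched in numpy (with hypothetical between covariates X0 and a within covariate X1 made up for illustration): projecting X1 onto the orthogonal complement of the column space of X0 removes its between-covariate component, which is the geometric operation the proposed covariance structure relies on.

```python
import numpy as np

rng = np.random.default_rng(3)
n = 200
X0 = rng.normal(size=(n, 2))                  # between covariates
X1 = X0 @ np.array([[1.0], [0.5]]) + rng.normal(size=(n, 1))  # correlated within covariate

# Projector onto the orthogonal complement of span(X0)
P = np.eye(n) - X0 @ np.linalg.solve(X0.T @ X0, X0.T)
X1_perp = P @ X1                              # orthogonalised within covariate

cross = X0.T @ X1_perp                        # should vanish (orthogonality)
```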

  13. Inferences from Genomic Models in Stratified Populations

    DEFF Research Database (Denmark)

    Janss, Luc; de los Campos, Gustavo; Sheehan, Nuala

    2012-01-01

    Unaccounted population stratification can lead to spurious associations in genome-wide association studies (GWAS) and in this context several methods have been proposed to deal with this problem. An alternative line of research uses whole-genome random regression (WGRR) models that fit all marker...

  14. Flexible Bayesian Dynamic Modeling of Covariance and Correlation Matrices

    KAUST Repository

    Lan, Shiwei; Holbrook, Andrew; Fortin, Norbert J.; Ombao, Hernando; Shahbaba, Babak

    2017-01-01

    Modeling covariance (and correlation) matrices is a challenging problem due to the large dimensionality and positive-definiteness constraint. In this paper, we propose a novel Bayesian framework based on decomposing the covariance matrix

  15. Nonparametric Bayesian models for a spatial covariance.

    Science.gov (United States)

    Reich, Brian J; Fuentes, Montserrat

    2012-01-01

    A crucial step in the analysis of spatial data is to estimate the spatial correlation function that determines the relationship between a spatial process at two locations. The standard approach to selecting the appropriate correlation function is to use prior knowledge or exploratory analysis, such as a variogram analysis, to select the correct parametric correlation function. Rather than selecting a particular parametric correlation function, we treat the covariance function as an unknown function to be estimated from the data. We propose a flexible prior for the correlation function to provide robustness to the choice of correlation function. We specify the prior for the correlation function using spectral methods and the Dirichlet process prior, which is a common prior for an unknown distribution function. Our model does not require Gaussian data or spatial locations on a regular grid. The approach is demonstrated using a simulation study as well as an analysis of California air pollution data.
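    A toy version of the spectral idea (not the authors' full model): by Bochner's theorem, a nonnegative mixture of cosines is a valid stationary covariance function in one dimension, and Dirichlet-process-style stick-breaking weights give the mixture a nonparametric flavour. The truncation level, concentration parameter, and frequency distribution below are arbitrary choices for illustration.

```python
import numpy as np

rng = np.random.default_rng(4)
K, alpha = 50, 2.0
v = rng.beta(1, alpha, size=K)
w = v * np.concatenate(([1.0], np.cumprod(1 - v)[:-1]))  # stick-breaking weights
w /= w.sum()                                             # renormalise the truncation
omega = rng.gamma(2.0, 1.0, size=K)                      # random spectral frequencies

def C(h):
    """Stationary covariance as a weighted mixture of cosines (Bochner)."""
    return np.sum(w * np.cos(omega * h))

lags = np.linspace(0, 5, 6)
vals = np.array([C(h) for h in lags])
```

Because the weights are nonnegative and sum to one, C(0) = 1 and |C(h)| ≤ 1 for every lag, so any Gram matrix built from C is positive semidefinite by construction.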

  16. Validity of covariance models for the analysis of geographical variation

    DEFF Research Database (Denmark)

    Guillot, Gilles; Schilling, Rene L.; Porcu, Emilio

    2014-01-01

    1. Due to the availability of large molecular data-sets, covariance models are increasingly used to describe the structure of genetic variation as an alternative to more heavily parametrised biological models. 2. We focus here on a class of parametric covariance models that received sustained att...

  17. Theory of Covariance Equivalent ARMAV Models of Civil Engineering Structures

    DEFF Research Database (Denmark)

    Andersen, P.; Brincker, Rune; Kirkegaard, Poul Henning

    1996-01-01

    In this paper the theoretical background for using covariance equivalent ARMAV models in modal analysis is discussed. It is shown how to obtain a covariance equivalent ARMA model for a univariate linear second order continuous-time system excited by Gaussian white noise. This result is generalized...

  18. Theory of Covariance Equivalent ARMAV Models of Civil Engineering Structures

    DEFF Research Database (Denmark)

    Andersen, P.; Brincker, Rune; Kirkegaard, Poul Henning

    In this paper the theoretical background for using covariance equivalent ARMAV models in modal analysis is discussed. It is shown how to obtain a covariance equivalent ARMA model for a univariate linear second order continuous-time system excited by Gaussian white noise. This result is generalized...

  19. Bayesian hierarchical model for large-scale covariance matrix estimation.

    Science.gov (United States)

    Zhu, Dongxiao; Hero, Alfred O

    2007-12-01

    Many bioinformatics problems implicitly depend on estimating large-scale covariance matrix. The traditional approaches tend to give rise to high variance and low accuracy due to "overfitting." We cast the large-scale covariance matrix estimation problem into the Bayesian hierarchical model framework, and introduce dependency between covariance parameters. We demonstrate the advantages of our approaches over the traditional approaches using simulations and OMICS data analysis.
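    For flavour, a related (but different) remedy for the same overfitting problem is linear shrinkage of the sample covariance toward a structured target. The sketch below uses scikit-learn's LedoitWolf estimator, not the authors' Bayesian hierarchical model, on made-up data with more variables than samples.

```python
import numpy as np
from sklearn.covariance import LedoitWolf

rng = np.random.default_rng(5)
n, p = 40, 100                       # fewer samples than variables ("large-scale")
X = rng.normal(size=(n, p))

S = np.cov(X.T)                      # sample covariance: rank-deficient, high variance
lw = LedoitWolf().fit(X)             # shrinkage estimate: full rank, well-conditioned
```

The sample covariance here cannot be inverted (its rank is at most n - 1), while the shrunk estimate is positive definite, which is the practical payoff of regularising large covariance matrices.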

  20. Modelling the Covariance Structure in Marginal Multivariate Count Models

    DEFF Research Database (Denmark)

    Bonat, W. H.; Olivero, J.; Grande-Vega, M.

    2017-01-01

    The main goal of this article is to present a flexible statistical modelling framework to deal with multivariate count data along with longitudinal and repeated measures structures. The covariance structure for each response variable is defined in terms of a covariance link function combined with a matrix linear predictor involving known matrices. In order to specify the joint covariance matrix for the multivariate response vector, the generalized Kronecker product is employed. We take into account the count nature of the data by means of the power dispersion function associated with the Poisson … be used to indicate whether there was statistical evidence of a decline in blue duikers and other species hunted during the study period. Determining whether observed drops in the number of animals hunted are indeed true is crucial to assess whether species depletion effects are taking place in exploited …

  1. Effect on Prediction when Modeling Covariates in Bayesian Nonparametric Models.

    Science.gov (United States)

    Cruz-Marcelo, Alejandro; Rosner, Gary L; Müller, Peter; Stewart, Clinton F

    2013-04-01

    In biomedical research, it is often of interest to characterize biologic processes giving rise to observations and to make predictions of future observations. Bayesian nonparametric methods provide a means for carrying out Bayesian inference while making as few restrictive parametric assumptions as possible. There are several proposals in the literature for extending Bayesian nonparametric models to include dependence on covariates. Limited attention, however, has been directed to the following two aspects. In this article, we examine the effect on fitting and predictive performance of incorporating covariates in a class of Bayesian nonparametric models by one of two primary ways: either in the weights or in the locations of a discrete random probability measure. We show that different strategies for incorporating continuous covariates in Bayesian nonparametric models can result in big differences when used for prediction, even though they lead to otherwise similar posterior inferences. When one needs the predictive density, as in optimal design, and this density is a mixture, it is better to make the weights depend on the covariates. We demonstrate these points via a simulated data example and in an application in which one wants to determine the optimal dose of an anticancer drug used in pediatric oncology.

  2. HIGH DIMENSIONAL COVARIANCE MATRIX ESTIMATION IN APPROXIMATE FACTOR MODELS.

    Science.gov (United States)

    Fan, Jianqing; Liao, Yuan; Mincheva, Martina

    2011-01-01

    The variance-covariance matrix plays a central role in the inferential theories of high dimensional factor models in finance and economics. Popular regularization methods that directly exploit sparsity are not applicable to many financial problems. Classical methods of estimating the covariance matrices are based on strict factor models, which assume independent idiosyncratic components. This assumption, however, is restrictive in practical applications. By assuming a sparse error covariance matrix, we allow the presence of cross-sectional correlation even after taking out common factors, which enables us to combine the merits of both methods. We estimate the sparse covariance using the adaptive thresholding technique as in Cai and Liu (2011), taking into account the fact that direct observations of the idiosyncratic components are unavailable. The impact of high dimensionality on the covariance matrix estimation based on the factor structure is then studied.
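    The factor-plus-thresholding recipe can be sketched as follows. This is a simplified caricature: it uses a universal hard threshold on the residual covariances rather than the adaptive entry-wise threshold of Cai and Liu (2011), and all sizes and the threshold value are made up.

```python
import numpy as np

rng = np.random.default_rng(6)
T, p, K = 400, 30, 2
F = rng.normal(size=(T, K))                   # latent factors
B = rng.normal(size=(p, K))                   # factor loadings
X = F @ B.T + rng.normal(scale=0.5, size=(T, p))

S = np.cov(X.T)                               # sample covariance
vals, vecs = np.linalg.eigh(S)
lead = vecs[:, -K:] * np.sqrt(vals[-K:])      # top-K principal directions
S_fac = lead @ lead.T                         # low-rank "factor" part
R = S - S_fac                                 # residual (idiosyncratic) covariance

thr = 0.2                                     # made-up universal threshold
R_thr = np.where(np.abs(R) >= thr, R, 0.0)    # hard-threshold small entries
np.fill_diagonal(R_thr, np.diag(R))           # never threshold the variances
S_hat = S_fac + R_thr                         # final structured estimate
```

Thresholding the residual block is what encodes the "sparse error covariance" assumption: cross-sectional correlation is allowed to survive after removing common factors, but only where it is large enough to be distinguishable from noise.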

  3. Improvements to TRAC models of condensing stratified flow. Pt. 1

    International Nuclear Information System (INIS)

    Zhang, Q.; Leslie, D.C.

    1991-12-01

    Direct contact condensation in stratified flow is an important phenomenon in LOCA analyses. In this report, the TRAC interfacial heat transfer model for stratified condensing flow has been assessed against the Bankoff experiments. A rectangular channel option has been added to the code to represent the experimental geometry. In almost all cases the TRAC heat transfer coefficient (HTC) over-predicts the condensation rates and in some cases it is so high that the predicted steam is sucked in from the normal outlet in order to conserve mass. Based on their cocurrent and countercurrent condensing flow experiments, Bankoff and his students (Lim 1981, Kim 1985) developed HTC models from the two cases. The replacement of the TRAC HTC with either of Bankoff's models greatly improves the predictions of condensation rates in the experiment with cocurrent condensing flow. However, the Bankoff HTC for countercurrent flow is preferable because it is based only on the local quantities rather than on the quantities averaged from the inlet. (author)

  4. Stratified flow model for convective condensation in an inclined tube

    International Nuclear Information System (INIS)

    Lips, Stéphane; Meyer, Josua P.

    2012-01-01

    Highlights: ► Convective condensation in an inclined tube is modelled. ► The heat transfer coefficient is the highest for about 20° below the horizontal. ► Capillary forces have a strong effect on the liquid–vapour interface shape. ► A good agreement between the model and the experimental results was observed. - Abstract: Experimental data are reported for condensation of R134a in an 8.38 mm inner diameter smooth tube in inclined orientations with a mass flux of 200 kg/m² s. Under these conditions, the flow is stratified and there is an optimum inclination angle, which leads to the highest heat transfer coefficient. There is a need for a model to better understand and predict the flow behaviour. In this paper, the state of the art of existing models of stratified two-phase flows in inclined tubes is presented, whereafter a new mechanistic model is proposed. The liquid–vapour distribution in the tube is determined by taking into account the gravitational and the capillary forces. The comparison between the experimental data and the model prediction showed a good agreement in terms of heat transfer coefficients and pressure drops. The effect of the interface curvature on the heat transfer coefficient has been quantified and has been found to be significant. The optimum inclination angle is due to a balance between an increase of the void fraction and an increase in the falling liquid film thickness when the tube is inclined downwards. The effect of the mass flux and the vapour quality on the optimum inclination angle has also been studied.

  5. Generalized Extreme Value model with Cyclic Covariate Structure ...

    Indian Academy of Sciences (India)

    … enhances the estimation of the return period; however, its application is … Final non-stationary GEV models with covariate structures shortlisted based on …

  6. A modified stratified model for the 3C 273 jet

    International Nuclear Information System (INIS)

    Liu Wenpo; Shen Zhiqiang

    2009-01-01

    We present a modified stratified jet model to interpret the observed spectral energy distributions of knots in the 3C 273 jet. Based on the hypothesis of the single index of the particle energy spectrum at injection and identical emission processes among all the knots, the observed difference of spectral shape among different 3C 273 knots can be understood as a manifestation of the deviation of the equivalent Doppler factor of stratified emission regions in an individual knot from a characteristic one. The summed spectral energy distributions of all ten knots in the 3C 273 jet can be well fitted by two components: a low-energy component (radio to optical) dominated by synchrotron radiation and a high-energy component (UV, X-ray and γ-ray) dominated by inverse Compton scattering of the cosmic microwave background. This gives a consistent spectral index of α = 0.88 (S_ν ∝ ν^(−α)) and a characteristic Doppler factor of 7.4. Assuming the average of the summed spectrum as the characteristic spectrum of each knot in the 3C 273 jet, we further get a distribution of Doppler factors. We discuss the possible implications of these results for the physical properties in the 3C 273 jet. Future GeV observations with GLAST could separate the γ-ray emission of 3C 273 from the large scale jet and the small scale jet (i.e. the core) through measuring the GeV spectrum.

  7. Optimal covariance selection for estimation using graphical models

    OpenAIRE

    Vichik, Sergey; Oshman, Yaakov

    2011-01-01

    We consider a problem encountered when trying to estimate a Gaussian random field using a distributed estimation approach based on Gaussian graphical models. Because of constraints imposed by estimation tools used in Gaussian graphical models, the a priori covariance of the random field is constrained to embed conditional independence constraints among a significant number of variables. The problem is, then: given the (unconstrained) a priori covariance of the random field, and the conditiona...

  8. Matérn-based nonstationary cross-covariance models for global processes

    KAUST Repository

    Jun, Mikyoung

    2014-01-01

    … cross-covariance models, based on the Matérn covariance model class, that are suitable for describing prominent nonstationary characteristics of the global processes. In particular, we seek nonstationary versions of Matérn covariance models whose smoothness parameters …

  9. Modeling corporate defaults: Poisson autoregressions with exogenous covariates (PARX)

    DEFF Research Database (Denmark)

    Agosto, Arianna; Cavaliere, Guiseppe; Kristensen, Dennis

    We develop a class of Poisson autoregressive models with additional covariates (PARX) that can be used to model and forecast time series of counts. We establish the time series properties of the models, including conditions for stationarity and existence of moments. These results are in turn used...

  10. Bayes factor covariance testing in item response models

    NARCIS (Netherlands)

    Fox, J.P.; Mulder, J.; Sinharay, Sandip

    2017-01-01

    Two marginal one-parameter item response theory models are introduced, by integrating out the latent variable or random item parameter. It is shown that both marginal response models are multivariate (probit) models with a compound symmetry covariance structure. Several common hypotheses concerning

  11. Bayes Factor Covariance Testing in Item Response Models

    NARCIS (Netherlands)

    Fox, Jean-Paul; Mulder, Joris; Sinharay, Sandip

    2017-01-01

    Two marginal one-parameter item response theory models are introduced, by integrating out the latent variable or random item parameter. It is shown that both marginal response models are multivariate (probit) models with a compound symmetry covariance structure. Several common hypotheses concerning

  12. Covariate selection for the semiparametric additive risk model

    DEFF Research Database (Denmark)

    Martinussen, Torben; Scheike, Thomas

    2009-01-01

    This paper considers covariate selection for the additive hazards model. This model is particularly simple to study theoretically and its practical implementation has several major advantages over the similar methodology for the proportional hazards model. One complication compared … and study their large sample properties for the situation where the number of covariates p is smaller than the number of observations. We also show that the adaptive Lasso has the oracle property. In many practical situations, it is more relevant to tackle the situation with large p compared with the number of observations. We do this by studying the properties of the so-called Dantzig selector in the setting of the additive risk model. Specifically, we establish a bound on how close the solution is to a true sparse signal in the case where the number of covariates is large. In a simulation study, we also compare …

  13. Bayes Factor Covariance Testing in Item Response Models.

    Science.gov (United States)

    Fox, Jean-Paul; Mulder, Joris; Sinharay, Sandip

    2017-12-01

    Two marginal one-parameter item response theory models are introduced, by integrating out the latent variable or random item parameter. It is shown that both marginal response models are multivariate (probit) models with a compound symmetry covariance structure. Several common hypotheses concerning the underlying covariance structure are evaluated using (fractional) Bayes factor tests. The support for a unidimensional factor (i.e., assumption of local independence) and differential item functioning are evaluated by testing the covariance components. The posterior distribution of common covariance components is obtained in closed form by transforming latent responses with an orthogonal (Helmert) matrix. This posterior distribution is defined as a shifted-inverse-gamma, thereby introducing a default prior and a balanced prior distribution. Based on that, an MCMC algorithm is described to estimate all model parameters and to compute (fractional) Bayes factor tests. Simulation studies are used to show that the (fractional) Bayes factor tests have good properties for testing the underlying covariance structure of binary response data. The method is illustrated with two real data studies.
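    The Helmert-transformation step is easy to verify numerically: an orthogonal Helmert matrix diagonalises any compound symmetry covariance, which is what lets the posterior of the covariance components be written in closed form. The dimension and correlation below are illustrative, and scipy's `helmert` helper is assumed available.

```python
import numpy as np
from scipy.linalg import helmert

# Compound symmetry covariance: equal variances, equal correlations rho
p, rho = 6, 0.4
Sigma = rho * np.ones((p, p)) + (1 - rho) * np.eye(p)

H = helmert(p, full=True)            # orthogonal p x p Helmert matrix
D = H @ Sigma @ H.T                  # diagonal: one "mean" and p-1 "contrast" eigenvalues

offdiag = D - np.diag(np.diag(D))    # should be (numerically) zero
```

The first diagonal entry is 1 + (p − 1)ρ (the constant vector's eigenvalue) and the remaining ones all equal 1 − ρ, so hypotheses about the compound symmetry components reduce to hypotheses about two scalar variance parameters.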

  14. Stratified flows with variable density: mathematical modelling and numerical challenges.

    Science.gov (United States)

    Murillo, Javier; Navas-Montilla, Adrian

    2017-04-01

    Stratified flows appear in a wide variety of fundamental problems in hydrological and geophysical sciences. They may range from hyperconcentrated floods carrying sediment that causes collapse, landslides and debris flows, to suspended material in turbidity currents where turbulence is a key process. Stratified flows also exhibit variable horizontal density: depending on the case, density varies according to the volumetric concentration of the different components or species, which can represent transported or suspended materials or soluble substances. Multilayer approaches based on the shallow water equations provide suitable models but are not free from difficulties when moving to the numerical resolution of the governing equations. Considering the variety of temporal and spatial scales, transfer of mass and energy among layers may strongly differ from one case to another. As a consequence, in order to provide accurate solutions, very high order methods of proved quality are demanded. Under these complex scenarios it is necessary to verify that the numerical solution not only provides the expected order of accuracy but also converges to the physically based solution, which is not an easy task. To this purpose, this work focuses on the use of energy-balanced augmented solvers, in particular the Augmented Roe Flux ADER scheme. References: J. Murillo, P. García-Navarro, Wave Riemann description of friction terms in unsteady shallow flows: Application to water and mud/debris floods. J. Comput. Phys. 231 (2012) 1963-2001. J. Murillo, B. Latorre, P. García-Navarro, A Riemann solver for unsteady computation of 2D shallow flows with variable density. J. Comput. Phys. 231 (2012) 4775-4807. A. Navas-Montilla, J. Murillo, Energy balanced numerical schemes with very high order. The Augmented Roe Flux ADER scheme. Application to the shallow water equations. J. Comput. Phys. 290 (2015) 188-218. A. Navas-Montilla, J. Murillo, Asymptotically and exactly energy balanced augmented flux …

  15. Merons in a generally covariant model with Gursey term

    International Nuclear Information System (INIS)

    Akdeniz, K.G.; Smailagic, A.

    1982-10-01

    We study meron solutions of the generally covariant and Weyl invariant fermionic model with Gursey term. We find that, due to the presence of this term, merons can exist even without the cosmological constant. This is a new feature compared to previously studied models. (author)

  16. Modeling and Forecasting Large Realized Covariance Matrices and Portfolio Choice

    NARCIS (Netherlands)

    Callot, Laurent A.F.; Kock, Anders B.; Medeiros, Marcelo C.

    2017-01-01

    We consider modeling and forecasting large realized covariance matrices by penalized vector autoregressive models. We consider Lasso-type estimators to reduce the dimensionality and provide strong theoretical guarantees on the forecast capability of our procedure. We show that we can forecast
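The penalized-VAR idea in this abstract can be sketched numerically. The following is a minimal illustration, not the authors' estimator: a sparse VAR(1) on stacked covariance elements is fitted equation-by-equation with a hand-rolled coordinate-descent lasso (the simulated data and the regularization level are invented for the example).

```python
import numpy as np

def soft_threshold(rho, lam):
    """Soft-thresholding operator used in coordinate-descent lasso."""
    return np.sign(rho) * max(abs(rho) - lam, 0.0)

def lasso_cd(X, y, lam, n_iter=200):
    """Minimal coordinate-descent lasso: min (1/2n)||y - Xb||^2 + lam*||b||_1."""
    n, p = X.shape
    beta = np.zeros(p)
    col_ss = (X ** 2).sum(axis=0) / n
    for _ in range(n_iter):
        for j in range(p):
            r_j = y - X @ beta + X[:, j] * beta[j]   # partial residual
            rho = X[:, j] @ r_j / n
            beta[j] = soft_threshold(rho, lam) / col_ss[j]
    return beta

# Simulate vech-stacked "realized covariance" elements following a sparse
# VAR(1): y_t = A y_{t-1} + e_t, with most entries of A equal to zero.
rng = np.random.default_rng(0)
p, T = 6, 400
A = np.zeros((p, p))
A[np.arange(p), np.arange(p)] = 0.5          # persistence on the diagonal only
y = np.zeros((T, p))
for t in range(1, T):
    y[t] = A @ y[t - 1] + rng.standard_normal(p)

X, Y = y[:-1], y[1:]
A_hat = np.vstack([lasso_cd(X, Y[:, i], lam=0.05) for i in range(p)])
forecast = A_hat @ y[-1]                      # one-step-ahead forecast
```

With a sparse true transition matrix, the lasso shrinks most off-diagonal coefficients to (near) zero while retaining the persistent diagonal, which is the dimensionality reduction the abstract refers to.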

  17. Modeling the Conditional Covariance between Stock and Bond Returns

    NARCIS (Netherlands)

    P. de Goeij (Peter); W.A. Marquering (Wessel)

    2002-01-01

    To analyze the intertemporal interaction between stock and bond market returns, we allow the conditional covariance matrix to vary over time according to a multivariate GARCH model similar to Bollerslev, Engle and Wooldridge (1988). We extend the model such that it allows for
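A covariance recursion of the kind referenced (the multivariate GARCH family of Bollerslev, Engle and Wooldridge) can be illustrated with a simple scalar-VECH filter; the parameter values and simulated returns below are placeholders, not estimates from the paper.

```python
import numpy as np

def scalar_vech_garch(returns, a=0.05, b=0.9):
    """Filter conditional covariance matrices H_t for a return series with a
    scalar VECH recursion: H_t = C + a * r_{t-1} r_{t-1}' + b * H_{t-1}.
    C is set by covariance targeting so the long-run level matches the sample
    covariance; with C PSD and a, b >= 0, every H_t stays PSD."""
    T, k = returns.shape
    S = np.cov(returns, rowvar=False)       # sample covariance (targeting)
    C = (1.0 - a - b) * S
    H = np.empty((T, k, k))
    H[0] = S
    for t in range(1, T):
        r = returns[t - 1][:, None]
        H[t] = C + a * (r @ r.T) + b * H[t - 1]
    return H

rng = np.random.default_rng(1)
rets = rng.standard_normal((500, 2)) * 0.01   # toy stock/bond return series
H = scalar_vech_garch(rets)
```

Each `H[t]` is a valid conditional covariance matrix, so the implied stock-bond conditional covariance `H[t][0, 1]` varies over time as in the paper's setting.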

  18. A reduced covariant string model for the extrinsic string

    International Nuclear Information System (INIS)

    Botelho, L.C.L.

    1989-01-01

    A reduced covariant string model for the extrinsic string is studied using Polyakov's path integral formalism. On the basis of this reduced model it is suggested that the extrinsic string has critical dimension 13. Additionally, Polyakov's renormalization group law for the string rigidity coupling constants is calculated in a simple way. (A.C.A.S.) [pt

  19. Using Covariation Reasoning to Support Mathematical Modeling

    Science.gov (United States)

    Jacobson, Erik

    2014-01-01

    For many students, making connections between mathematical ideas and the real world is one of the most intriguing and rewarding aspects of the study of mathematics. In the Common Core State Standards for Mathematics (CCSSI 2010), mathematical modeling is highlighted as a mathematical practice standard for all grades. To engage in mathematical…

  20. Simultaneous treatment of unspecified heteroskedastic model error distribution and mismeasured covariates for restricted moment models.

    Science.gov (United States)

    Garcia, Tanya P; Ma, Yanyuan

    2017-10-01

    We develop consistent and efficient estimation of parameters in general regression models with mismeasured covariates. We assume the model error and covariate distributions are unspecified, and the measurement error distribution is a general parametric distribution with unknown variance-covariance. We construct root-n consistent, asymptotically normal and locally efficient estimators using the semiparametric efficient score. We do not estimate any unknown distribution or model error heteroskedasticity. Instead, we form the estimator under possibly incorrect working distribution models for the model error, error-prone covariate, or both. Empirical results demonstrate robustness to different incorrect working models in homoscedastic and heteroskedastic models with error-prone covariates.

  1. Identifying nonproportional covariates in the Cox model

    Czech Academy of Sciences Publication Activity Database

    Kraus, David

    2008-01-01

    Roč. 37, č. 4 (2008), s. 617-625 ISSN 0361-0926 R&D Projects: GA AV ČR(CZ) IAA101120604; GA MŠk(CZ) 1M06047; GA ČR(CZ) GD201/05/H007 Institutional research plan: CEZ:AV0Z10750506 Keywords : Cox model * goodness of fit * proportional hazards assumption * time-varying coefficients Subject RIV: BB - Applied Statistics, Operational Research Impact factor: 0.324, year: 2008

  2. Globally covering a-priori regional gravity covariance models

    Directory of Open Access Journals (Sweden)

    D. Arabelos

    2003-01-01

    Gravity anomaly data generated using Wenzel's GPM98A model complete to degree 1800, from which OSU91A has been subtracted, have been used to estimate covariance functions for a set of globally covering equal-area blocks of size 22.5° × 22.5° at the Equator, having a 2.5° overlap. For each block an analytic covariance function model was determined. The models are based on 4 parameters: the depth to the Bjerhammar sphere (which determines the correlation), the free-air gravity anomaly variance, a scale factor of the OSU91A error degree-variances and a maximal summation index, N, of the error degree-variances. The depth of the Bjerhammar sphere varies from -134 km to nearly zero, N varies from 360 to 40, the scale factor from 0.03 to 38.0 and the gravity variance from 1081 to 24 (10 µm s⁻²)². The parameters are interpreted in terms of the quality of the data used to construct OSU91A and GPM98A and of general conditions such as the occurrence of mountain chains. The variation of the parameters shows that it is necessary to use regional covariance models in order to obtain a realistic signal-to-noise ratio in global applications. Key words: GOCE mission, covariance function, spacewise approach
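Fitting an analytic covariance model to empirical values can be sketched as a nonlinear least-squares problem. The Gaussian-shaped model below is a hypothetical stand-in for the paper's 4-parameter Bjerhammar-sphere model, and the "empirical" values are synthetic:

```python
import numpy as np
from scipy.optimize import curve_fit

def cov_model(d, c0, L):
    """Hypothetical isotropic analytic covariance model: Gaussian shape with
    variance c0 and correlation length L (a simple stand-in for the paper's
    Bjerhammar-sphere parameterization)."""
    return c0 * np.exp(-(d / L) ** 2)

# Synthetic "empirical covariance" values at a set of spherical distances.
rng = np.random.default_rng(2)
d = np.linspace(0.0, 5.0, 40)                 # distance in degrees
true_c0, true_L = 400.0, 1.5                  # variance, correlation length
emp = cov_model(d, true_c0, true_L) + rng.normal(0.0, 2.0, d.size)

(c0_hat, L_hat), _ = curve_fit(cov_model, d, emp, p0=(100.0, 1.0))
```

Per-block fits of this kind yield one parameter set per region, which is the sense in which the paper builds "regional" covariance models.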

  3. Flexible Bayesian Dynamic Modeling of Covariance and Correlation Matrices

    KAUST Repository

    Lan, Shiwei

    2017-11-08

    Modeling covariance (and correlation) matrices is a challenging problem due to the large dimensionality and the positive-definiteness constraint. In this paper, we propose a novel Bayesian framework based on decomposing the covariance matrix into variance and correlation matrices. The highlight is that the correlations are represented as products of vectors on unit spheres. We propose a variety of distributions on spheres (e.g. the squared-Dirichlet distribution) to induce flexible prior distributions for covariance matrices that go beyond the commonly used inverse-Wishart prior. To handle the intractability of the resulting posterior, we introduce the adaptive Δ-Spherical Hamiltonian Monte Carlo. We also extend our structured framework to dynamic cases and introduce unit-vector Gaussian process priors for modeling the evolution of correlation among multiple time series. Using an example of the Normal-Inverse-Wishart problem, a simulated periodic process, and an analysis of local field potential data (collected from the hippocampus of rats performing a complex sequence memory task), we demonstrate the validity and effectiveness of our proposed framework for (dynamic) modeling of covariance and correlation matrices.
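The core geometric idea, representing correlations as products of unit vectors, can be demonstrated in a few lines: any Gram matrix of unit vectors is automatically a valid correlation matrix. This sketch shows only the representation, not the sphere priors or the Δ-Spherical HMC sampler.

```python
import numpy as np

def correlation_from_unit_vectors(U):
    """Build a correlation matrix as the Gram matrix of unit vectors:
    R[i, j] = u_i . u_j. Unit rows give ones on the diagonal, and any Gram
    matrix is positive semi-definite, so R is automatically a valid
    correlation matrix -- the representation exploited by the sphere priors."""
    U = U / np.linalg.norm(U, axis=1, keepdims=True)
    return U @ U.T

rng = np.random.default_rng(3)
U = rng.standard_normal((5, 5))      # 5 variables, one vector each in R^5
R = correlation_from_unit_vectors(U)
```

Because the positive-definiteness constraint is satisfied by construction, a sampler can move freely on the spheres without ever proposing an invalid correlation matrix.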

  4. Markov modulated Poisson process models incorporating covariates for rainfall intensity.

    Science.gov (United States)

    Thayakaran, R; Ramesh, N I

    2013-01-01

    Time series of rainfall bucket tip times at the Beaufort Park station, Bracknell, in the UK are modelled by a class of Markov modulated Poisson processes (MMPP) which may be thought of as a generalization of the Poisson process. Our main focus in this paper is to investigate the effects of including covariate information into the MMPP model framework on statistical properties. In particular, we look at three types of time-varying covariates namely temperature, sea level pressure, and relative humidity that are thought to be affecting the rainfall arrival process. Maximum likelihood estimation is used to obtain the parameter estimates, and likelihood ratio tests are employed in model comparison. Simulated data from the fitted model are used to make statistical inferences about the accumulated rainfall in the discrete time interval. Variability of the daily Poisson arrival rates is studied.

  5. Working covariance model selection for generalized estimating equations.

    Science.gov (United States)

    Carey, Vincent J; Wang, You-Gan

    2011-11-20

    We investigate methods for data-based selection of working covariance models in the analysis of correlated data with generalized estimating equations. We study two selection criteria: Gaussian pseudolikelihood and a geodesic distance based on discrepancy between model-sensitive and model-robust regression parameter covariance estimators. The Gaussian pseudolikelihood is found in simulation to be reasonably sensitive for several response distributions and noncanonical mean-variance relations for longitudinal data. Application is also made to a clinical dataset. Assessment of adequacy of both correlation and variance models for longitudinal data should be routine in applications, and we describe open-source software supporting this practice. Copyright © 2011 John Wiley & Sons, Ltd.
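The Gaussian pseudolikelihood criterion can be sketched as follows. This is a simplified illustration, not the paper's implementation: residuals are assumed pre-standardized, a single moment estimate of the correlation parameter is shared across working models, and the working model with the highest pseudolikelihood is selected.

```python
import numpy as np

def gaussian_pseudolikelihood(resid, R):
    """Gaussian log pseudolikelihood of cluster residuals under working
    correlation R (residuals assumed pre-scaled to unit variance)."""
    sign, logdet = np.linalg.slogdet(R)
    Rinv = np.linalg.inv(R)
    return sum(-0.5 * (logdet + r @ Rinv @ r) for r in resid)

def working_R(kind, m, rho):
    """Common working correlation structures for GEE."""
    if kind == "independence":
        return np.eye(m)
    if kind == "exchangeable":
        return (1 - rho) * np.eye(m) + rho * np.ones((m, m))
    if kind == "ar1":
        idx = np.arange(m)
        return rho ** np.abs(idx[:, None] - idx[None, :])

# Simulate clustered residuals with a strong exchangeable correlation.
rng = np.random.default_rng(4)
m, n_clusters, rho_true = 4, 200, 0.6
L = np.linalg.cholesky(working_R("exchangeable", m, rho_true))
resid = rng.standard_normal((n_clusters, m)) @ L.T

# Moment estimate of rho, then score each working model.
rho_hat = np.mean([r[i] * r[j] for r in resid
                   for i in range(m) for j in range(i + 1, m)])
scores = {k: gaussian_pseudolikelihood(resid, working_R(k, m, rho_hat))
          for k in ("independence", "exchangeable", "ar1")}
best = max(scores, key=scores.get)
```

With truly exchangeable residuals, the exchangeable working model attains the highest pseudolikelihood, which is the selection behavior the criterion is designed to produce.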

  6. Chiral phase transition in a covariant nonlocal NJL model

    International Nuclear Information System (INIS)

    General, I.; Scoccola, N.N.

    2001-01-01

    The properties of the chiral phase transition at finite temperature and chemical potential are investigated within a nonlocal covariant extension of the NJL model based on a separable quark-quark interaction. We find that for low values of T the chiral transition is always of first order and that, for finite quark masses, at a certain end point the transition turns into a smooth crossover. Our prediction for the position of this point is similar to, although somewhat smaller than, previous estimates. (author)

  7. Some remarks on estimating a covariance structure model from a sample correlation matrix

    OpenAIRE

    Maydeu Olivares, Alberto; Hernández Estrada, Adolfo

    2000-01-01

    A popular model in structural equation modeling involves a multivariate normal density with a structured covariance matrix that has been categorized according to a set of thresholds. In this setup one may estimate the covariance structure parameters from the sample tetrachoric/polychoric correlations, but only if the covariance structure is scale invariant. Doing so when the covariance structure is not scale invariant results in estimating a more restricted covariance structure than the one i...

  8. Experimental Validation of a Domestic Stratified Hot Water Tank Model in Modelica for Annual Performance Assessment

    DEFF Research Database (Denmark)

    Carmo, Carolina; Dumont, Olivier; Nielsen, Mads Pagh

    2015-01-01

    The use of stratified hot water tanks in solar energy systems - including ORC systems - as well as heat pump systems is paramount for a better performance of these systems. However, the availability of effective and reliable models to predict the annual performance of stratified hot water tanks...

  9. Stratified turbulent Bunsen flames : flame surface analysis and flame surface density modelling

    NARCIS (Netherlands)

    Ramaekers, W.J.S.; Oijen, van J.A.; Goey, de L.P.H.

    2012-01-01

    In this paper it is investigated whether the Flame Surface Density (FSD) model, developed for turbulent premixed combustion, is also applicable to stratified flames. Direct Numerical Simulations (DNS) of turbulent stratified Bunsen flames have been carried out, using the Flamelet Generated Manifold

  10. Statistical mechanics of learning orthogonal signals for general covariance models

    International Nuclear Information System (INIS)

    Hoyle, David C

    2010-01-01

    Statistical mechanics techniques have proved to be useful tools in quantifying the accuracy with which signal vectors are extracted from experimental data. However, analysis has previously been limited to specific model forms for the population covariance C, which may be inappropriate for real-world data sets. In this paper we obtain new statistical mechanical results for a general population covariance matrix C. For data sets consisting of p sample points in R^N we use the replica method to study the accuracy of orthogonal signal vectors estimated from the sample data. In the asymptotic limit N, p → ∞ at fixed α = p/N, we derive analytical results for the signal direction learning curves. In the asymptotic limit the learning curves follow a single universal form, each displaying a retarded learning transition. An explicit formula for the location of the retarded learning transition is obtained, and we find marked variation in its location depending on the distribution of population covariance eigenvalues. The results of the replica analysis are confirmed against simulation

  11. Emergent gravity on covariant quantum spaces in the IKKT model

    Energy Technology Data Exchange (ETDEWEB)

    Steinacker, Harold C. [Faculty of Physics, University of Vienna,Boltzmanngasse 5, A-1090 Vienna (Austria)

    2016-12-30

    We study perturbations of 4-dimensional fuzzy spheres as backgrounds in the IKKT or IIB matrix model. Gauge fields and metric fluctuations are identified among the excitation modes with lowest spin, supplemented by a tower of higher-spin fields. They arise from an internal structure which can be viewed as a twisted bundle over S^4, leading to a covariant noncommutative geometry. The linearized 4-dimensional Einstein equations are obtained from the classical matrix model action under certain conditions, modified by an IR cutoff. Some one-loop contributions to the effective action are computed using the formalism of string states.

  12. Structural Equation Models in a Redundancy Analysis Framework With Covariates.

    Science.gov (United States)

    Lovaglio, Pietro Giorgio; Vittadini, Giorgio

    2014-01-01

    A recent method to specify and fit structural equation modeling in the Redundancy Analysis framework based on so-called Extended Redundancy Analysis (ERA) has been proposed in the literature. In this approach, the relationships between the observed exogenous variables and the observed endogenous variables are moderated by the presence of unobservable composites, estimated as linear combinations of exogenous variables. However, in the presence of direct effects linking exogenous and endogenous variables, or concomitant indicators, the composite scores are estimated by ignoring the presence of the specified direct effects. To fit structural equation models, we propose a new specification and estimation method, called Generalized Redundancy Analysis (GRA), allowing us to specify and fit a variety of relationships among composites, endogenous variables, and external covariates. The proposed methodology extends the ERA method, using a more suitable specification and estimation algorithm, by allowing for covariates that affect endogenous indicators indirectly through the composites and/or directly. To illustrate the advantages of GRA over ERA we propose a simulation study of small samples. Moreover, we propose an application aimed at estimating the impact of formal human capital on the initial earnings of graduates of an Italian university, utilizing a structural model consistent with well-established economic theory.

  13. Ultracentrifuge separative power modeling with multivariate regression using covariance matrix

    International Nuclear Information System (INIS)

    Migliavacca, Elder

    2004-01-01

    In this work, the least-squares methodology with covariance matrix is applied to fit a curve to the data and obtain a performance function for the separative power δU of an ultracentrifuge as a function of variables that are experimentally controlled. The experimental data refer to 460 experiments on the ultracentrifugation process for uranium isotope separation. The experimental uncertainties related to these independent variables are considered in the calculation of the experimental separative power values, determining an experimental data input covariance matrix. The process variables which significantly influence the δU values are chosen so as to give information on the ultracentrifuge behaviour when submitted to several levels of feed flow rate F, cut θ and product line pressure P_p. After the goodness-of-fit validation of the model, a residual analysis is carried out to verify the assumed basis concerning its randomness and independence and, mainly, the existence of residual heteroscedasticity with respect to any explanatory variable of the regression model. Surface curves are constructed relating the separative power to the control variables F, θ and P_p in order to compare the fitted model with the experimental data and, finally, to calculate their optimized values. (author)
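The least-squares machinery with a covariance matrix is ordinary generalized least squares. The sketch below uses a hypothetical linear surface in F and θ (the paper's actual model form and its 460-point dataset are not reproduced here); with a noiseless response the fit recovers the coefficients exactly.

```python
import numpy as np

def gls_fit(X, y, Sigma):
    """Least squares with a full data covariance matrix Sigma:
    beta = (X' W X)^{-1} X' W y with W = Sigma^{-1}; the returned covariance
    of beta, (X' W X)^{-1}, propagates the experimental uncertainties."""
    W = np.linalg.inv(Sigma)
    XtWX = X.T @ W @ X
    beta = np.linalg.solve(XtWX, X.T @ W @ y)
    cov_beta = np.linalg.inv(XtWX)
    return beta, cov_beta

# Toy separative-power surface: dU = b0 + b1*F + b2*theta (names hypothetical).
rng = np.random.default_rng(5)
n = 60
F = rng.uniform(1.0, 3.0, n)
theta = rng.uniform(0.3, 0.7, n)
X = np.column_stack([np.ones(n), F, theta])
beta_true = np.array([0.5, 2.0, -1.0])
Sigma = np.diag(rng.uniform(0.01, 0.04, n))   # heteroscedastic measurement errors
y = X @ beta_true                              # noiseless, to check exact recovery

beta_hat, cov_beta = gls_fit(X, y, Sigma)
```

The diagonal of `cov_beta` gives the parameter variances implied by the input covariance matrix, which is what supports the residual and goodness-of-fit analysis described in the abstract.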

  14. Theoretical study of evaporation heat transfer in horizontal microfin tubes: stratified flow model

    Energy Technology Data Exchange (ETDEWEB)

    Honda, H; Wang, Y S [Kyushu Univ., Inst. for Materials Chemistry and Engineering, Kasuga, Fukuoka (Japan)

    2004-08-01

    A stratified flow model of evaporation heat transfer in helically grooved, horizontal microfin tubes has been developed. The profile of the stratified liquid was determined by a theoretical model previously developed for condensation in horizontal microfin tubes. For the region above the stratified liquid, the meniscus profile in the groove between adjacent fins was determined by a force balance between the gravity and surface tension forces. The thin film evaporation model was applied to predict heat transfer in the thin film region of the meniscus. Heat transfer through the stratified liquid was estimated using an empirical correlation proposed by Mori et al. The theoretical predictions of the circumferential average heat transfer coefficient were compared with available experimental data for four tubes and three refrigerants. Good agreement was obtained for the region Fr_0 < 2.5 as long as partial dryout of the tube surface did not occur. (Author)

  15. The impact of covariance misspecification in group-based trajectory models for longitudinal data with non-stationary covariance structure.

    Science.gov (United States)

    Davies, Christopher E; Glonek, Gary Fv; Giles, Lynne C

    2017-08-01

    One purpose of a longitudinal study is to gain a better understanding of how an outcome of interest changes among a given population over time. In what follows, a trajectory will be taken to mean the series of measurements of the outcome variable for an individual. Group-based trajectory modelling methods seek to identify subgroups of trajectories within a population, such that trajectories that are grouped together are more similar to each other than to trajectories in distinct groups. Group-based trajectory models generally assume a certain structure in the covariances between measurements, for example conditional independence, homogeneous variance between groups or stationary variance over time. Violations of these assumptions could be expected to result in poor model performance. We used simulation to investigate the effect of covariance misspecification on misclassification of trajectories in commonly used models under a range of scenarios. To do this we defined a measure of performance relative to the ideal Bayesian correct classification rate. We found that the more complex models generally performed better over a range of scenarios. In particular, incorrectly specified covariance matrices could significantly bias the results but using models with a correct but more complicated than necessary covariance matrix incurred little cost.
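The effect of covariance misspecification on trajectory classification can be mimicked in a small simulation, in the spirit of (but far simpler than) the paper's study: two trajectory groups share an AR(1) covariance, and classification under the true covariance is compared with classification under a conditional-independence working model.

```python
import numpy as np

def mvn_logpdf(x, mean, cov):
    """Multivariate normal log density (dense covariance)."""
    k = len(mean)
    d = x - mean
    sign, logdet = np.linalg.slogdet(cov)
    return -0.5 * (k * np.log(2 * np.pi) + logdet + d @ np.linalg.solve(cov, d))

# Two trajectory groups with different mean curves and a shared AR(1) covariance.
rng = np.random.default_rng(6)
T, n_per, rho = 6, 200, 0.7
t = np.arange(T)
means = [0.0 * t, 0.4 * t]                            # flat vs rising trajectories
cov_true = rho ** np.abs(t[:, None] - t[None, :])     # non-diagonal (AR(1))
L = np.linalg.cholesky(cov_true)
data, labels = [], []
for g, mu in enumerate(means):
    data.append(mu + rng.standard_normal((n_per, T)) @ L.T)
    labels += [g] * n_per
data = np.vstack(data)
labels = np.array(labels)

def classification_rate(cov):
    """Fraction correctly classified by the Bayes rule under covariance cov."""
    scores = np.array([[mvn_logpdf(x, mu, cov) for mu in means] for x in data])
    return (scores.argmax(axis=1) == labels).mean()

rate_true = classification_rate(cov_true)   # correctly specified covariance
rate_diag = classification_rate(np.eye(T))  # conditional-independence working model
```

Comparing `rate_true` and `rate_diag` illustrates, on a toy scale, the classification cost of assuming conditional independence when the true covariance is non-stationary or correlated.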

  16. Simulation model of stratified thermal energy storage tank using finite difference method

    Science.gov (United States)

    Waluyo, Joko

    2016-06-01

    A stratified TES tank is normally used in cogeneration plants. Stratified TES tanks are simple, low cost, and equal or superior in thermal performance. The advantage of a TES tank is that it enables shifting energy usage from off-peak to on-peak demand periods. To increase energy utilization of a stratified TES tank, it is necessary to build a simulation model capable of simulating the charging phenomenon in the stratified TES tank precisely. This paper aims to develop a novel model addressing the aforementioned problem. The model incorporates a chiller into the charging of the stratified TES tank system in a closed loop. The model is one-dimensional and accounts for heat transfer. It covers the main factors affecting the degradation of the temperature distribution, namely conduction through the tank wall, conduction between cool and warm water, the mixing effect of the initial flow during charging, and heat loss to the surroundings. The simulation model is developed with the finite difference method, utilizing the buffer concept theory, and is solved explicitly. Validation of the simulation model is carried out using observed data obtained from an operating stratified TES tank in a cogeneration plant. The temperature distribution of the model is capable of representing the S-curve pattern as well as simulating the decrease in charging temperature after reaching the full condition. The coefficients of determination between the observed data and the model were higher than 0.88, meaning that the model is capable of simulating the charging phenomenon in the stratified TES tank. The model not only generates the temperature distribution but can also be enhanced to represent transient conditions during charging of the stratified TES tank. This successful model can be applied to solve the temperature limitation that occurs in charging of the stratified TES tank with the absorption chiller. Further, the stratified TES tank can be
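One ingredient of such a model, inter-layer conduction degrading the temperature distribution, can be sketched with an explicit finite-difference update. This is a minimal conduction-only toy with insulated boundaries, not the paper's full charging model with wall losses and mixing.

```python
import numpy as np

def destratify(T0, alpha=1.5e-7, dz=0.05, dt=60.0, steps=2000):
    """Explicit finite-difference model of inter-layer conduction in a
    stratified tank (insulated top/bottom, no wall losses): a minimal sketch
    of one temperature-degradation mechanism only."""
    r = alpha * dt / dz ** 2
    assert r <= 0.5, "explicit scheme stability limit violated"
    T = T0.astype(float).copy()
    for _ in range(steps):
        Tn = T.copy()
        Tn[1:-1] = T[1:-1] + r * (T[2:] - 2 * T[1:-1] + T[:-2])
        Tn[0] = T[0] + r * (T[1] - T[0])        # insulated bottom
        Tn[-1] = T[-1] + r * (T[-2] - T[-1])    # insulated top
        T = Tn
    return T

# Initially sharp thermocline: warm water above cool water.
T_init = np.concatenate([np.full(10, 12.0), np.full(10, 40.0)])  # degrees C
T_final = destratify(T_init)
```

With insulated ends the scheme conserves the mean temperature exactly while the thermocline smears out, reproducing the conduction-driven destratification that charging models must counteract.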

  17. The breaking of Bjorken scaling in the covariant parton model

    International Nuclear Information System (INIS)

    Polkinghorne, J.C.

    1976-01-01

    Scale breaking is investigated in terms of a covariant parton model formulation of deep inelastic processes. It is shown that a consistent theory requires that the convergence properties of parton-hadron amplitudes be modified as well as the parton being given form factors. Purely logarithmic violation is possible, and the resulting model has many features in common with asymptotically free gauge theories. The behaviour at large and small ω at fixed q² is investigated: γW₂ should increase with q² at large ω and decrease with q² at small ω. Heuristic arguments are also given which suggest that the model would lead only to logarithmic modifications of dimensional-counting results in purely hadronic deep scattering. (Auth.)

  18. ARMA Cholesky Factor Models for the Covariance Matrix of Linear Models.

    Science.gov (United States)

    Lee, Keunbaik; Baek, Changryong; Daniels, Michael J

    2017-11-01

    In longitudinal studies, serial dependence of repeated outcomes must be taken into account to make correct inferences on covariate effects. As such, care must be taken in modeling the covariance matrix. However, estimation of the covariance matrix is challenging because there are many parameters in the matrix and the estimated covariance matrix should be positive definite. To overcome these limitations, two Cholesky decomposition approaches have been proposed: modified Cholesky decomposition for autoregressive (AR) structure and moving average Cholesky decomposition for moving average (MA) structure, respectively. However, the correlations of repeated outcomes are often not captured parsimoniously using either approach separately. In this paper, we propose a class of flexible, nonstationary, heteroscedastic models, denoted ARMACD, that exploits the structure allowed by combining the AR and MA modeling of the covariance matrix. We analyze a recent lung cancer study to illustrate the power of our proposed methods.
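The modified Cholesky decomposition underlying the AR part can be computed directly: for a covariance matrix S one finds a unit lower-triangular T and diagonal D with T S T' = D, where the rows of T hold negated autoregressive coefficients. A small sketch:

```python
import numpy as np

def modified_cholesky(S):
    """Modified Cholesky decomposition of a covariance matrix S (Pourahmadi):
    unit lower-triangular T and diagonal D with T S T' = D. Row t of T holds
    the negated coefficients of the regression of outcome t on its
    predecessors, giving the entries a direct autoregressive interpretation."""
    m = S.shape[0]
    T = np.eye(m)
    D = np.zeros(m)
    D[0] = S[0, 0]
    for t in range(1, m):
        phi = np.linalg.solve(S[:t, :t], S[:t, t])   # AR coefficients
        T[t, :t] = -phi
        D[t] = S[t, t] - S[:t, t] @ phi              # innovation variance
    return T, D

# Example on an AR(1)-type correlation matrix.
m, rho = 5, 0.6
idx = np.arange(m)
S = rho ** np.abs(idx[:, None] - idx[None, :])
T, D = modified_cholesky(S)
S_rebuilt = np.linalg.inv(T) @ np.diag(D) @ np.linalg.inv(T).T
```

For AR(1) correlation only the first sub-diagonal of T is nonzero (each outcome regresses on its immediate predecessor), illustrating the parsimony of the AR parameterization that ARMACD combines with an MA counterpart.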

  19. Mathematical models for two-phase stratified pipe flow

    Energy Technology Data Exchange (ETDEWEB)

    Biberg, Dag

    2005-06-01

    The simultaneous transport of oil, gas and water in a single multiphase flow pipeline has, for economical and practical reasons, become common practice in the gas and oil fields operated by the oil industry. The optimal design and safe operation of these pipelines require reliable estimates of liquid inventory, pressure drop and flow regime. Computer simulations of multiphase pipe flow have thus become an important design tool for field developments. Computer simulations yielding on-line monitoring and look-ahead predictions are invaluable in day-to-day field management. Inaccurate predictions may have large consequences, so the accuracy and reliability of multiphase pipe flow models are important issues. Simulating events in large pipelines or pipeline systems is relatively computer intensive. Pipelines carrying e.g. gas and liquefied gas (condensate) may cover distances of several hundred km, in which transient phenomena may go on for months. The evaluation times associated with contemporary 3-D CFD models are thus not compatible with field applications. Multiphase flow lines are therefore normally simulated using specially dedicated 1-D models. The closure relations of multiphase pipe flow models are mainly based on lab data. The maximum pipe inner diameter, pressure and temperature in a multiphase pipe flow lab are limited to approximately 0.3 m, 90 bar and 60 °C, respectively. The corresponding field values are, however, much higher, i.e. 1 m, 1000 bar and 200 °C, respectively. Lab data thus do not cover the actual field conditions, and field predictions are consequently frequently based on model extrapolation. Applying field data or establishing more advanced labs will not solve this problem. It is in fact not practically possible to acquire sufficient data to cover all aspects of multiphase pipe flow; the parameter range involved is simply too large. Liquid levels and pressure drop in three-phase flow are e.g. determined by 13 dimensionless parameters

  20. The Influence of Normalization Weight in Population Pharmacokinetic Covariate Models.

    Science.gov (United States)

    Goulooze, Sebastiaan C; Völler, Swantje; Välitalo, Pyry A J; Calvier, Elisa A M; Aarons, Leon; Krekels, Elke H J; Knibbe, Catherijne A J

    2018-03-23

    In covariate (sub)models of population pharmacokinetic models, most covariates are normalized to the median value; however, for body weight, normalization to 70 kg or 1 kg is often applied. In this article, we illustrate the impact of normalization weight on the precision of population clearance (CL_pop) parameter estimates. The influence of normalization weight (70 kg, 1 kg or median weight) on the precision of the CL_pop estimate, expressed as relative standard error (RSE), was illustrated using data from a pharmacokinetic study in neonates with a median weight of 2.7 kg. In addition, a simulation study was performed to show the impact of normalization to 70 kg in pharmacokinetic studies with paediatric or obese patients. The RSE of the CL_pop parameter estimate in the neonatal dataset was lowest with normalization to the median weight (8.1%), compared with normalization to 1 kg (10.5%) or 70 kg (48.8%). Typical clearance (CL) predictions were independent of the normalization weight used. Simulations showed that the increase in RSE of the CL_pop estimate with 70 kg normalization was highest in studies with a narrow weight range and a geometric mean weight away from 70 kg. When, instead of normalizing with the median weight, a weight outside the observed range is used, the RSE of the CL_pop estimate will be inflated, and should therefore not be used for model selection. Instead, established mathematical principles can be used to calculate the RSE of the typical CL (CL_TV) at a relevant weight to evaluate the precision of CL predictions.
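The observation that typical clearance predictions are independent of the normalization weight follows directly from the allometric model: changing the normalization weight only rescales CL_pop. A short numerical check (the weights and parameter values below are invented for illustration):

```python
import numpy as np

def clearance(cl_pop, wt, wt_norm, exponent=0.75):
    """Allometric clearance model: CL_i = CL_pop * (WT_i / WT_norm)**exponent.
    Changing wt_norm only rescales cl_pop; individual CL predictions are
    unchanged, which is why typical predictions do not depend on the
    normalization weight (only the precision of CL_pop does)."""
    return cl_pop * (wt / wt_norm) ** exponent

wt = np.array([2.0, 2.7, 3.5])          # hypothetical neonatal weights in kg
cl_pop_70 = 10.0                        # L/h for a hypothetical 70 kg individual

# Re-express the same model normalized to 1 kg instead of 70 kg:
cl_pop_1 = cl_pop_70 * (1.0 / 70.0) ** 0.75

pred_70 = clearance(cl_pop_70, wt, 70.0)
pred_1 = clearance(cl_pop_1, wt, 1.0)
```

The two prediction vectors are identical, so normalization weight is purely a reparameterization; it affects the RSE of the CL_pop estimate, not the typical predictions themselves.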

  1. FDTD Modelling of Electromagnetic waves in Stratified Medium ...

    African Journals Online (AJOL)

    The technique is an adaptation of the finite-difference time-domain (FDTD) approach usually applied to model electromagnetic wave propagation. In this paper a simple 2D implementation of the FDTD algorithm in the Mathematica environment is presented. Source implementation and the effect of conductivity on the incident field ...
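A 1D analogue of such an FDTD scheme, including the effect of conductivity through lossy update coefficients, can be sketched in a few lines (the paper's implementation is 2D and in Mathematica; this Python sketch only illustrates the update equations):

```python
import numpy as np

def fdtd_1d(n_cells=400, n_steps=600, sigma=0.0):
    """Minimal 1D FDTD (Yee) sketch in free space with optional uniform
    conductivity sigma in the right half of the grid, showing how loss enters
    the E-field update coefficients."""
    eps0, mu0 = 8.854e-12, 4e-7 * np.pi
    c0 = 1.0 / np.sqrt(eps0 * mu0)
    dx = 1e-3
    dt = 0.5 * dx / c0                          # Courant number 0.5 (stable)
    cond = np.zeros(n_cells)
    cond[n_cells // 2:] = sigma                 # lossy half-space
    ca = (1 - cond * dt / (2 * eps0)) / (1 + cond * dt / (2 * eps0))
    cb = (dt / (eps0 * dx)) / (1 + cond * dt / (2 * eps0))
    Ez = np.zeros(n_cells)
    Hy = np.zeros(n_cells - 1)
    for n in range(n_steps):
        Hy += dt / (mu0 * dx) * (Ez[1:] - Ez[:-1])
        Ez[1:-1] = ca[1:-1] * Ez[1:-1] + cb[1:-1] * (Hy[1:] - Hy[:-1])
        Ez[40] += np.exp(-((n - 60) / 20.0) ** 2)   # soft Gaussian source
    return Ez

Ez_lossless = fdtd_1d(sigma=0.0)
Ez_lossy = fdtd_1d(sigma=0.05)
```

Comparing the two runs shows the pulse attenuated in the conductive region, which is the conductivity effect the paper examines on the incident field.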

  2. Potential Flow Model for Compressible Stratified Rayleigh-Taylor Instability

    Science.gov (United States)

    Rydquist, Grant; Reckinger, Scott; Owkes, Mark; Wieland, Scott

    2017-11-01

    The Rayleigh-Taylor instability (RTI) is an instability that occurs when a heavy fluid lies on top of a lighter fluid in a gravitational field, or under a gravity-like acceleration. It occurs in many fluid flows of a highly compressible nature. In this study potential flow analysis (PFA) is used to model the early stages of RTI growth for compressible fluids. In the localized region near the bubble tip the effects of vorticity are negligible, so PFA is applicable, as opposed to later stages, where the flow is dominated by the velocity induced by vortices generated as the instability grows. The incompressible PFA is extended for compressibility effects by applying the growth rate and the associated spatial decay of the perturbation from compressible linear stability theory. The PFA model predicts theoretical values for the bubble terminal velocity of single-mode compressible RTI, dependent upon the Atwood number (A) and the Mach number (M), a parameter that measures both the strength of the stratification and the intrinsic compressibility. The theoretical bubble terminal velocities are compared against numerical simulations. The PFA model correctly predicts the M dependence at high A, but the model must be further extended with additional physics to capture the behavior at low A. Undergraduate Scholars Program - Montana State University.
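For reference, the classical incompressible potential-flow result for the single-mode bubble terminal velocity (Goncharov, 2002) can be written down directly; the compressible PFA model described in the abstract generalizes expressions of this kind with M-dependent corrections, so the formula below is the incompressible limit only:

```python
import numpy as np

def bubble_terminal_velocity(A, g, k, three_d=True):
    """Incompressible potential-flow (Layzer-type) terminal bubble velocity
    for single-mode RTI, following Goncharov (2002):
    U_b = sqrt(2A/(1+A) * g/(C k)), with C = 3 in 3D and C = 6 in 2D.
    A is the Atwood number, g the acceleration, k the perturbation wavenumber."""
    C = 3.0 if three_d else 6.0
    return np.sqrt(2.0 * A / (1.0 + A) * g / (C * k))

g, k = 9.81, 2 * np.pi / 0.1         # wavelength 0.1 m
U_high_A = bubble_terminal_velocity(0.9, g, k)
U_low_A = bubble_terminal_velocity(0.1, g, k)
```

The monotone increase of U_b with A, and the slower 2D bubbles relative to 3D, are the incompressible baselines against which M-dependent corrections are measured.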

  3. Modelling of vapour explosion in a stratified geometry

    International Nuclear Information System (INIS)

    Brayer, Claude

    1994-01-01

    A vapour explosion is the explosive vaporisation of a volatile liquid in contact with another, hotter liquid. Such a violent vaporisation requires an intimate mixing and a fine fragmentation of both liquids. Based on a synthesis of published experimental results, the author of this research thesis reports the development of a new physical model describing the explosion. In this model, the propagation of the explosion is due to the propagation of the associated pressure wave along the vapour film which initially separates the two liquids. The author takes into account the presence of vapour in the liquid initially located above the film; this presence of vapour explains the experimentally observed propagation rates. Another consequence of the passage of the pressure wave is an acceleration of the liquids at different rates below and above the film. The author considers that a mixing layer then forms between the two liquids, starting from the point where the film disappears, and that fragmentation is due to the turbulence in this mixing layer. This fragmentation model is then introduced into MC3D, a three-dimensional, multi-constituent Eulerian thermodynamic computation code, to study the influence of fragmentation on the thermal exchanges between the various constituents and on the vaporisation of the volatile liquid [fr

  4. Contributions to Estimation and Testing Block Covariance Structures in Multivariate Normal Models

    OpenAIRE

    Liang, Yuli

    2015-01-01

    This thesis concerns inference problems in balanced random effects models with a so-called block circular Toeplitz covariance structure. This class of covariance structures describes the dependency of some specific multivariate two-level data when both compound symmetry and circular symmetry appear simultaneously. We derive two covariance structures under two different invariance restrictions. The obtained covariance structures reflect both circularity and exchangeability present in the data....

  5. Structure of Poincare covariant tensor operators in quantum mechanical models

    International Nuclear Information System (INIS)

    Polyzou, W.N.; Klink, W.H.

    1988-01-01

    The structure of operators that transform covariantly in Poincare invariant quantum mechanical models is analyzed. These operators are shown to have an interaction dependence that comes from the geometry of the Poincare group. The operators can be expressed in terms of matrix elements in a complete set of eigenstates of the mass and spin operators associated with the dynamical representation of the Poincare group. The matrix elements are factored into geometrical coefficients (Clebsch--Gordan coefficients for the Poincare group) and invariant matrix elements. The geometrical coefficients are fixed by the transformation properties of the operator and the eigenvalue spectrum of the mass and spin. The invariant matrix elements, which distinguish between different operators with the same transformation properties, are given in terms of a set of invariant form factors. copyright 1988 Academic Press, Inc

  6. Numerical Simulations of a Multiscale Model of Stratified Langmuir Circulation

    Science.gov (United States)

    Malecha, Ziemowit; Chini, Gregory; Julien, Keith

    2012-11-01

    Langmuir circulation (LC), a prominent form of wind and surface-wave driven shear turbulence in the ocean surface boundary layer (BL), is commonly modeled using the Craik-Leibovich (CL) equations, a phase-averaged variant of the Navier-Stokes (NS) equations. Although surface-wave filtering renders the CL equations more amenable to simulation than are the instantaneous NS equations, simulations in wide domains, hundreds of times the BL depth, currently earn the ``grand challenge'' designation. To facilitate simulations of LC in such spatially-extended domains, we have derived multiscale CL equations by exploiting the scale separation between submesoscale and BL flows in the upper ocean. The numerical algorithm for simulating this multiscale model resembles super-parameterization schemes used in meteorology, but retains a firm mathematical basis. We have validated our algorithm and here use it to perform multiscale simulations of the interaction between LC and upper ocean density stratification. ZMM, GPC, KJ gratefully acknowledge funding from NSF CMG Award 0934827.

  7. AN ANALYTIC MODEL OF DUSTY, STRATIFIED, SPHERICAL H ii REGIONS

    Energy Technology Data Exchange (ETDEWEB)

    Rodríguez-Ramírez, J. C.; Raga, A. C. [Instituto de Ciencias Nucleares, Universidad Nacional Autónoma de México, Ap. 70-543, 04510 D.F., México (Mexico); Lora, V. [Astronomisches Rechen-Institut, Zentrum für Astronomie der Universität, Mönchhofstr. 12-14, D-69120 Heidelberg (Germany); Cantó, J., E-mail: juan.rodriguez@nucleares.unam.mx [Instituto de Astronomía, Universidad Nacional Autónoma de México, Ap. 70-468, 04510 D. F., México (Mexico)

    2016-12-20

    We study analytically the effect of radiation pressure (associated with photoionization processes and with dust absorption) on spherical, hydrostatic H ii regions. We consider two basic equations, one for the hydrostatic balance between the radiation-pressure components and the gas pressure, and another for the balance among the recombination rate, the dust absorption, and the ionizing photon rate. Based on appropriate mathematical approximations, we find a simple analytic solution for the density stratification of the nebula, which is defined by specifying the radius of the external boundary, the cross section of dust absorption, and the luminosity of the central star. We compare the analytic solution with numerical integrations of the model equations of Draine, and find a wide range of the physical parameters for which the analytic solution is accurate.

  8. Numerical simulation of stratified flows with different k-ε turbulence models

    International Nuclear Information System (INIS)

    Dagestad, S.

    1991-01-01

    The thesis comprises the numerical simulation of stratified flows with different k-ε models. In a k-ε model, two equations are solved to describe the turbulence: the k-equation represents the turbulent kinetic energy and the ε-equation its dissipation rate. Different k-ε models predict stratified flows differently. The standard k-ε model leads to higher turbulent mixing than the low-Reynolds model does; for lower Froude numbers, F₀, this effect becomes more pronounced. Buoyancy extension of the k-ε model also leads to less vertical mixing in cases with strong stratification, and its influence grows as the stratification increases. The turbulent Prandtl number has a large impact on the transport of heat and the development of the flow; two different formulae expressing turbulent Prandtl number effects have been tested. For unstably stratified flows, the rapid mixing and three-dimensionality of the flow can in fact be computed with a k-ε model when buoyancy extension is employed. The turbulent heat transfer, and thus the turbulent production, in unstably stratified flows depends strongly on the turbulent Prandtl number. The main conclusions are: stably stratified flows should be computed with a buoyancy-extended low-Reynolds k-ε model; unstably stratified flows should be computed with a buoyancy-extended standard k-ε model; turbulent Prandtl number effects should be included in the computations; buoyancy extension has led to a more correct description of the physics for all of the investigated flows. 78 refs., 128 figs., 17 tabs

  9. Parametric Covariance Model for Horizon-Based Optical Navigation

    Science.gov (United States)

    Hikes, Jacob; Liounis, Andrew J.; Christian, John A.

    2016-01-01

    This Note presents an entirely parametric version of the covariance for horizon-based optical navigation measurements. The covariance can be written as a function of only the spacecraft position, two sensor design parameters, the illumination direction, the size of the observed planet, the size of the lit arc to be used, and the total number of observed horizon points. As a result, one may now more clearly understand the sensitivity of horizon-based optical navigation performance as a function of these key design parameters, which is insight that was obscured in previous (and nonparametric) versions of the covariance. Finally, the new parametric covariance is shown to agree with both the nonparametric analytic covariance and results from a Monte Carlo analysis.

  10. Measures to assess the prognostic ability of the stratified Cox proportional hazards model

    DEFF Research Database (Denmark)

    The Fibrinogen Studies Collaboration (The Copenhagen City Heart Study); Tybjærg-Hansen, Anne

    2009-01-01

    Many measures have been proposed to summarize the prognostic ability of the Cox proportional hazards (CPH) survival model, although none is universally accepted for general use. By contrast, little work has been done to summarize the prognostic ability of the stratified CPH model; such measures...

  11. Effect of correlation on covariate selection in linear and nonlinear mixed effect models.

    Science.gov (United States)

    Bonate, Peter L

    2017-01-01

    The effect of correlation among covariates on covariate selection was examined with linear and nonlinear mixed effect models. Demographic covariates were extracted from the National Health and Nutrition Examination Survey III database. Concentration-time profiles were Monte Carlo simulated where only one covariate affected apparent oral clearance (CL/F). A series of univariate covariate population pharmacokinetic models was fit to the data and compared with the reduced model without the covariate. The "best" covariate was identified using either the likelihood ratio test (LRT) statistic or the AIC. Weight and body surface area (calculated using the Gehan and George equation, 1970) were highly correlated (r = 0.98). Body surface area was often selected as a better covariate than weight, sometimes as often as 1 in 5 times, when weight was the covariate used in the data-generating mechanism. In a second simulation, parent drug concentration and three metabolites were simulated from a thorough QT study and used as covariates in a series of univariate linear mixed effects models of ddQTc interval prolongation. The covariate with the largest significant LRT statistic was deemed the "best" predictor. When the metabolite was formation-rate limited and only parent concentrations affected ddQTc intervals, the metabolite was chosen as the better predictor as often as 1 in 5 times, depending on the slope of the relationship between parent concentrations and ddQTc intervals. A correlated covariate can thus be chosen as a better predictor than the true covariate in a linear or nonlinear population analysis by sheer correlation. These results explain why, for the same drug, different covariates may be identified in different analyses. Copyright © 2016 John Wiley & Sons, Ltd.
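
    A minimal sketch of the selection mechanism described above (not the authors' mixed-effect analysis): simulated log-clearance depends only on weight, body surface area is computed from the Gehan and George (1970) formula, and whichever covariate yields the lower AIC "wins". All sample sizes and parameter values are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

def aic_linear(x, y):
    """AIC of a simple linear regression y ~ 1 + x with Gaussian errors."""
    X = np.column_stack([np.ones_like(x), x])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    rss = np.sum((y - X @ beta) ** 2)
    n = len(y)
    return n * np.log(rss / n) + 2 * 3  # k = 3 params: intercept, slope, sigma

n_sim, n_subj = 500, 40
bsa_wins = 0
for _ in range(n_sim):
    wt = rng.normal(70, 12, n_subj).clip(40, 120)          # weight (kg)
    ht = rng.normal(170, 8, n_subj)                        # height (cm)
    # Gehan & George (1970) body surface area -- highly correlated with weight
    bsa = 0.0235 * wt**0.51456 * ht**0.42246
    # data-generating mechanism: only weight drives log-clearance
    log_cl = np.log(10.0) + 0.75 * np.log(wt / 70) + rng.normal(0, 0.15, n_subj)
    if aic_linear(np.log(bsa), log_cl) < aic_linear(np.log(wt), log_cl):
        bsa_wins += 1
print(f"BSA selected over weight in {bsa_wins / n_sim:.0%} of simulations")
```

    Because the two covariates are nearly collinear, sampling noise alone is enough for the "wrong" covariate to win a non-trivial fraction of the time.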

  12. Analysing stratified medicine business models and value systems: innovation-regulation interactions.

    Science.gov (United States)

    Mittra, James; Tait, Joyce

    2012-09-15

    Stratified medicine offers both opportunities and challenges to the conventional business models that drive pharmaceutical R&D. Given the increasingly unsustainable blockbuster model of drug development, due in part to maturing product pipelines, alongside increasing demands from regulators, healthcare providers and patients for higher standards of safety, efficacy and cost-effectiveness of new therapies, stratified medicine promises a range of benefits to pharmaceutical and diagnostic firms as well as healthcare providers and patients. However, the transition from 'blockbusters' to what might now be termed 'niche-busters' will require the adoption of new, innovative business models, the identification of different and perhaps novel types of value along the R&D pathway, and a smarter approach to regulation to facilitate innovation in this area. In this paper we apply the Innogen Centre's interdisciplinary ALSIS methodology, which we have developed for the analysis of life science innovation systems in contexts where the value creation process is lengthy, expensive and highly uncertain, to this emerging field of stratified medicine. In doing so, we consider the complex collaboration, timing, coordination and regulatory interactions that shape business models, value chains and value systems relevant to stratified medicine. More specifically, we explore in some depth two convergence models for co-development of a therapy and diagnostic before market authorisation, highlighting the regulatory requirements and policy initiatives within the broader value system environment that have a key role in determining the probable success and sustainability of these models. Copyright © 2012 Elsevier B.V. All rights reserved.

  13. Real-time probabilistic covariance tracking with efficient model update.

    Science.gov (United States)

    Wu, Yi; Cheng, Jian; Wang, Jinqiao; Lu, Hanqing; Wang, Jun; Ling, Haibin; Blasch, Erik; Bai, Li

    2012-05-01

    The recently proposed covariance region descriptor has been proven robust and versatile for a modest computational cost. The covariance matrix enables efficient fusion of different types of features, where the spatial and statistical properties, as well as their correlation, are characterized. The similarity between two covariance descriptors is measured on Riemannian manifolds. Based on the same metric but with a probabilistic framework, we propose a novel tracking approach on Riemannian manifolds with a novel incremental covariance tensor learning (ICTL). To address the appearance variations, ICTL incrementally learns a low-dimensional covariance tensor representation and efficiently adapts online to appearance changes of the target with only O(1) computational complexity, resulting in a real-time performance. The covariance-based representation and the ICTL are then combined with the particle filter framework to allow better handling of background clutter, as well as the temporary occlusions. We test the proposed probabilistic ICTL tracker on numerous benchmark sequences involving different types of challenges including occlusions and variations in illumination, scale, and pose. The proposed approach demonstrates excellent real-time performance, both qualitatively and quantitatively, in comparison with several previously proposed trackers.
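
    The two core ingredients can be sketched as follows: a covariance region descriptor built from per-pixel features, and the affine-invariant Riemannian metric commonly used to compare such descriptors. The feature set and array sizes below are illustrative, not the paper's.

```python
import numpy as np

def covariance_descriptor(features):
    """Covariance region descriptor: the d x d covariance of per-pixel feature
    vectors, e.g. [x, y, intensity, |Ix|, |Iy|]; `features` is (n_pixels, d)."""
    return np.cov(features, rowvar=False)

def riemannian_distance(A, B):
    """Affine-invariant metric between SPD matrices:
    d(A, B) = sqrt(sum_i ln^2(lambda_i)), where the lambda_i are the
    generalized eigenvalues of the pair (A, B)."""
    lam = np.linalg.eigvals(np.linalg.solve(A, B))
    return np.sqrt(np.sum(np.log(lam.real) ** 2))

# two similar regions and one very different covariance
rng = np.random.default_rng(1)
f1 = rng.normal(size=(500, 5))
f2 = f1 + 0.1 * rng.normal(size=(500, 5))
C1, C2 = covariance_descriptor(f1), covariance_descriptor(f2)
print(riemannian_distance(C1, C2))               # small: similar regions
print(riemannian_distance(C1, np.eye(5) * 9.0))  # larger: dissimilar
```

    The metric is invariant to affine transformations of the feature space, which is what makes the descriptor robust to illumination and scale changes.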

  14. A cautionary note on generalized linear models for covariance of unbalanced longitudinal data

    KAUST Repository

    Huang, Jianhua Z.

    2012-03-01

    Missing data in longitudinal studies can create enormous challenges in data analysis when coupled with the positive-definiteness constraint on a covariance matrix. For complete balanced data, the Cholesky decomposition of a covariance matrix makes it possible to remove the positive-definiteness constraint and use a generalized linear model setup to jointly model the mean and covariance using covariates (Pourahmadi, 2000). However, this approach may not be directly applicable when the longitudinal data are unbalanced, as coherent regression models for the dependence across all times and subjects may not exist. Within the existing generalized linear model framework, we show how to overcome this and other challenges by embedding the covariance matrix of the observed data for each subject in a larger covariance matrix and employing the familiar EM algorithm to compute the maximum likelihood estimates of the parameters and their standard errors. We illustrate and assess the methodology using real data sets and simulations. © 2011 Elsevier B.V.
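
    The Cholesky-based reparametrization the abstract refers to (Pourahmadi, 2000) can be sketched for the balanced case; the AR(1) covariance used to demonstrate it is a hypothetical example.

```python
import numpy as np

def modified_cholesky(sigma):
    """Pourahmadi (2000) reparametrization: T @ sigma @ T.T = D with T unit
    lower-triangular and D diagonal with positive entries. The below-diagonal
    entries of T (negated generalized autoregressive coefficients) and
    log(diag(D)) are unconstrained, so covariates can be regressed on them
    without risking a non-positive-definite covariance."""
    L = np.linalg.cholesky(sigma)   # sigma = L @ L.T, L lower triangular
    d = np.diag(L)
    T = np.linalg.inv(L / d)        # L / d rescales columns to unit diagonal
    return T, np.diag(d ** 2)

# hypothetical example: AR(1) covariance over 4 equally spaced times
t = np.arange(4)
sigma = 0.8 ** np.abs(t[:, None] - t[None, :])
T, D = modified_cholesky(sigma)
# the reverse map recovers sigma, and yields a valid covariance for *any*
# unit lower-triangular T and positive-diagonal D -- that is what removes
# the positive-definiteness constraint:
sigma_back = np.linalg.solve(T, D) @ np.linalg.inv(T).T
```

    The unbalanced case discussed in the paper is harder precisely because each subject contributes only a sub-block of such a matrix.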

  15. Mathematical modeling of turbulent stratified flows. Application to liquid metal fast breeders

    Energy Technology Data Exchange (ETDEWEB)

    Villand, M; Grand, D [CEA-Service des Transferts Thermiques, Grenoble (France)

    1983-07-01

    A mathematical model of turbulent stratified flow is proposed under the following assumptions: Newtonian fluid; incompressible fluid; coupling between temperature and momentum fields according to the Boussinesq approximation; two-dimensional invariance under translation or rotation; cartesian or curvilinear coordinates. Solutions obtained by the proposed method are presented.

  16. Computational Fluid Dynamics model of stratified atmospheric boundary-layer flow

    DEFF Research Database (Denmark)

    Koblitz, Tilman; Bechmann, Andreas; Sogachev, Andrey

    2015-01-01

    For wind resource assessment, the wind industry is increasingly relying on computational fluid dynamics models of the neutrally stratified surface-layer. So far, physical processes that are important to the whole atmospheric boundary-layer, such as the Coriolis effect, buoyancy forces and heat...

  17. Covariance approximation for large multivariate spatial data sets with an application to multiple climate model errors

    KAUST Repository

    Sang, Huiyan; Jun, Mikyoung; Huang, Jianhua Z.

    2011-01-01

    This paper investigates the cross-correlations across multiple climate model errors. We build a Bayesian hierarchical model that accounts for the spatial dependence of individual models as well as cross-covariances across different climate models

  18. Modeling the Conducting Stably-Stratified Layer of the Earth's Core

    Science.gov (United States)

    Petitdemange, L.; Philidet, J.; Gissinger, C.

    2017-12-01

    Observations of the Earth's magnetic field, as well as recent theoretical work, tend to show that the Earth's outer liquid core mostly comprises a convective zone in which the Earth's magnetic field is generated, likely by dynamo action, but also features a thin, stably stratified layer at the top of the core. We carry out direct numerical simulations modelling this thin layer as an axisymmetric spherical Couette flow for a stably stratified fluid embedded in a dipolar magnetic field. The dynamo region is modelled by a conducting inner core rotating slightly faster than the insulating mantle, due to magnetic torques acting on it, such that a weak differential rotation (low Rossby limit) can develop in the stably stratified layer. In the case of a non-stratified fluid, the combined action of the differential rotation and the magnetic field leads to the well-known regime of `super-rotation', in which the fluid rotates faster than the inner core. Whereas in the classical case this super-rotation is known to vanish in the magnetostrophic limit, we show here that fluid stratification significantly extends the magnitude of the super-rotation, keeping this phenomenon relevant for the Earth's core. Finally, we study how the shear layers generated by this new state might give rise to magnetohydrodynamic instabilities or waves impacting the secular variations or jerks of the Earth's magnetic field.

  19. Quantum mechanics vs. general covariance in gravity and string models

    International Nuclear Information System (INIS)

    Martinec, E.J.

    1984-01-01

    Quantization of simple low-dimensional systems embodying general covariance is studied. Functional methods are employed in the calculation of effective actions for fermionic strings and 1 + 1 dimensional gravity. The author finds that regularization breaks apparent symmetries of the theory, providing new dynamics for the string and non-trivial dynamics for 1 + 1 gravity. The author moves on to consider the quantization of some generally covariant systems with a finite number of physical degrees of freedom, assuming the existence of an invariant cutoff. The author finds that the wavefunction of the universe in these cases is given by the solution to simple quantum mechanics problems

  20. Existence and uniqueness of the maximum likelihood estimator for models with a Kronecker product covariance structure

    NARCIS (Netherlands)

    Ros, B.P.; Bijma, F.; de Munck, J.C.; de Gunst, M.C.M.

    2016-01-01

    This paper deals with multivariate Gaussian models for which the covariance matrix is a Kronecker product of two matrices. We consider maximum likelihood estimation of the model parameters, in particular of the covariance matrix. There is no explicit expression for the maximum likelihood estimator
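
    In practice, the maximum likelihood estimator this paper studies is usually computed with "flip-flop" iterations in the style of Dutilleul; below is a sketch under hypothetical dimensions and factor matrices, with the usual trace normalization to fix the scale indeterminacy of the Kronecker factors.

```python
import numpy as np

def flip_flop(X, n_iter=50):
    """Flip-flop iterations for the MLE of a Kronecker product covariance
    Sigma = A kron B from n i.i.d. (p x q) matrix-valued samples X.
    The factors are identified only up to a scalar, so A is rescaled to
    trace q after every sweep."""
    n, p, q = X.shape
    A, B = np.eye(q), np.eye(p)
    for _ in range(n_iter):
        Ai = np.linalg.inv(A)
        B = sum(x @ Ai @ x.T for x in X) / (n * q)   # update row factor
        Bi = np.linalg.inv(B)
        A = sum(x.T @ Bi @ x for x in X) / (n * p)   # update column factor
        A *= q / np.trace(A)                         # fix scale indeterminacy
    return A, B

# hypothetical ground truth: diagonal row factor B, Toeplitz column factor A
rng = np.random.default_rng(2)
p, q, n = 3, 4, 200
B_true = np.diag([1.0, 2.0, 3.0])
A_true = 0.5 ** np.abs(np.subtract.outer(np.arange(q), np.arange(q)))
Lb, La = np.linalg.cholesky(B_true), np.linalg.cholesky(A_true)
X = np.array([Lb @ z @ La.T for z in rng.normal(size=(n, p, q))])
A_hat, B_hat = flip_flop(X)
```

    The existence and uniqueness questions the paper addresses concern exactly when such iterations converge to a well-defined maximizer.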

  1. Forecasting Co-Volatilities via Factor Models with Asymmetry and Long Memory in Realized Covariance

    NARCIS (Netherlands)

    M. Asai (Manabu); M.J. McAleer (Michael)

    2014-01-01

    Modelling covariance structures is known to suffer from the curse of dimensionality. In order to avoid this problem for forecasting, the authors propose a new factor multivariate stochastic volatility (fMSV) model for realized covariance measures that accommodates

  2. Fitting direct covariance structures by the MSTRUCT modeling language of the CALIS procedure.

    Science.gov (United States)

    Yung, Yiu-Fai; Browne, Michael W; Zhang, Wei

    2015-02-01

    This paper demonstrates the usefulness and flexibility of the general structural equation modelling (SEM) approach to fitting direct covariance patterns or structures (as opposed to fitting implied covariance structures from functional relationships among variables). In particular, the MSTRUCT modelling language (or syntax) of the CALIS procedure (SAS/STAT version 9.22 or later: SAS Institute, 2010) is used to illustrate the SEM approach. The MSTRUCT modelling language supports a direct covariance pattern specification of each covariance element. It also supports the input of additional independent and dependent parameters. Model tests, fit statistics, estimates, and their standard errors are then produced under the general SEM framework. By using numerical and computational examples, the following tests of basic covariance patterns are illustrated: sphericity, compound symmetry, and multiple-group covariance patterns. Specification and testing of two complex correlation structures, the circumplex pattern and the composite direct product models with or without composite errors and scales, are also illustrated by the MSTRUCT syntax. It is concluded that the SEM approach offers a general and flexible modelling of direct covariance and correlation patterns. In conjunction with the use of SAS macros, the MSTRUCT syntax provides an easy-to-use interface for specifying and fitting complex covariance and correlation structures, even when the number of variables or parameters becomes large. © 2014 The British Psychological Society.
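
    Outside SAS, the same kind of direct covariance-pattern test can be sketched in a few lines: below, a compound-symmetry pattern is fitted by maximum likelihood and tested against an unstructured covariance with the standard likelihood-ratio statistic. The data and dimensions are illustrative, not from the paper.

```python
import numpy as np
from scipy import stats

def fit_compound_symmetry(S):
    """ML estimate of a compound-symmetry pattern from the sample covariance S:
    a common variance on the diagonal and a common covariance off it (for this
    pattern, averaging is the exact multivariate-normal ML solution)."""
    p = S.shape[0]
    var = np.trace(S) / p
    cov = (S.sum() - np.trace(S)) / (p * (p - 1))
    return var * np.eye(p) + cov * (1 - np.eye(p))

def lrt_covariance_pattern(S, sigma0, n, n_free):
    """-2 log likelihood ratio of a structured covariance sigma0 (n_free free
    parameters) against the unstructured alternative, for n observations."""
    p = S.shape[0]
    stat = n * (np.log(np.linalg.det(sigma0))
                + np.trace(np.linalg.solve(sigma0, S))
                - np.log(np.linalg.det(S)) - p)
    df = p * (p + 1) // 2 - n_free
    return stat, stats.chi2.sf(stat, df)

rng = np.random.default_rng(3)
p, n = 4, 300
true_cs = 2.0 * (0.6 * np.ones((p, p)) + 0.4 * np.eye(p))  # var 2, cov 1.2
X = rng.multivariate_normal(np.zeros(p), true_cs, size=n)
S = np.cov(X, rowvar=False, ddof=0)        # ML sample covariance
sigma0 = fit_compound_symmetry(S)
stat, pval = lrt_covariance_pattern(S, sigma0, n, n_free=2)
print(stat, pval)
```

    This mirrors what the MSTRUCT syntax does declaratively: specify each covariance element directly, then obtain fit statistics under the general SEM framework.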

  3. Implementing the Keele stratified care model for patients with low back pain: an observational impact study.

    Science.gov (United States)

    Bamford, Adrian; Nation, Andy; Durrell, Susie; Andronis, Lazaros; Rule, Ellen; McLeod, Hugh

    2017-02-03

    The Keele stratified care model for management of low back pain comprises use of the prognostic STarT Back Screening Tool to allocate patients into one of three risk-defined categories leading to associated risk-specific treatment pathways, such that high-risk patients receive enhanced treatment and more sessions than medium- and low-risk patients. The Keele model is associated with economic benefits and is being widely implemented. The objective was to assess the use of the stratified model following its introduction in an acute hospital physiotherapy department setting in Gloucestershire, England. Physiotherapists recorded data on 201 patients treated using the Keele model in two audits in 2013 and 2014. To assess whether implementation of the stratified model was associated with the anticipated range of treatment sessions, regression analysis of the audit data was used to determine whether high- or medium-risk patients received significantly more treatment sessions than low-risk patients. The analysis controlled for patient characteristics, year, physiotherapists' seniority and physiotherapist. To assess the physiotherapists' views on the usefulness of the stratified model, audit data on this were analysed using framework methods. To assess the potential economic consequences of introducing the stratified care model in Gloucestershire, published economic evaluation findings on back-related National Health Service (NHS) costs, quality-adjusted life years (QALYs) and societal productivity losses were applied to audit data on the proportion of patients by risk classification and estimates of local incidence. When the Keele model was implemented, patients received significantly more treatment sessions as the risk-rating increased, in line with the anticipated impact of targeted treatment pathways. Physiotherapists were largely positive about using the model. The potential annual impact of rolling out the model across Gloucestershire is a gain in approximately 30

  4. Experimental determination and modelling of interface area concentration in horizontal stratified flow

    International Nuclear Information System (INIS)

    Junqua-Moullet, Alexandra

    2003-01-01

    This research thesis concerns the modelling of and experiments on two-phase liquid/gas (water/air) flows using the two-fluid model, a six-equation model. The author first addresses the modelling of interfacial quantities for a known topology (the closure problem of the two-fluid model, closure relationships for some variables, an equation for a given configuration) and reports the development of an equation system for interfacial quantities. The next parts deal with experiments, reporting the study of stratified flows in the THALC experiment, and more particularly of the interfacial area concentration and the liquid velocities in such flows. Results are discussed, as well as their consistency

  5. Simultaneous genetic analysis of longitudinal means and covariance structure in the simplex model using twin data

    NARCIS (Netherlands)

    Dolan, C.V.; Molenaar, P.C.M.; Boomsma, D.I.

    1991-01-01

    D. Soerbom's (1974, 1976) simplex model approach to simultaneous analysis of means and covariance structure was applied to analysis of means observed in a single group. The present approach to the simultaneous biometric analysis of covariance and mean structure is based on the testable assumption

  6. Robustness studies in covariance structure modeling - An overview and a meta-analysis

    NARCIS (Netherlands)

    Hoogland, Jeffrey J.; Boomsma, A

    In covariance structure modeling, several estimation methods are available. The robustness of an estimator against specific violations of assumptions can be determined empirically by means of a Monte Carlo study. Many such studies in covariance structure analysis have been published, but the

  7. A Systematic Approach for Identifying Level-1 Error Covariance Structures in Latent Growth Modeling

    Science.gov (United States)

    Ding, Cherng G.; Jane, Ten-Der; Wu, Chiu-Hui; Lin, Hang-Rung; Shen, Chih-Kang

    2017-01-01

    It has been pointed out in the literature that misspecification of the level-1 error covariance structure in latent growth modeling (LGM) has detrimental impacts on the inferences about growth parameters. Since correct covariance structure is difficult to specify by theory, the identification needs to rely on a specification search, which,…

  8. Assessment of horizontal in-tube condensation models using MARS code. Part I: Stratified flow condensation

    Energy Technology Data Exchange (ETDEWEB)

    Jeon, Seong-Su [Department of Engineering Project, FNC Technology Co., Ltd., Bldg. 135-308, Seoul National University, Gwanak-gu, Seoul 151-744 (Korea, Republic of); Department of Nuclear Engineering, Seoul National University, Gwanak-gu, Seoul 151-744 (Korea, Republic of); Hong, Soon-Joon, E-mail: sjhong90@fnctech.com [Department of Engineering Project, FNC Technology Co., Ltd., Bldg. 135-308, Seoul National University, Gwanak-gu, Seoul 151-744 (Korea, Republic of); Park, Ju-Yeop; Seul, Kwang-Won [Korea Institute of Nuclear Safety, 19 Kuseong-dong, Yuseong-gu, Daejon (Korea, Republic of); Park, Goon-Cherl [Department of Nuclear Engineering, Seoul National University, Gwanak-gu, Seoul 151-744 (Korea, Republic of)

    2013-01-15

    Highlights: • This study collected 11 horizontal in-tube condensation models for stratified flow. • This study assessed the predictive capability of the models for steam condensation. • Purdue-PCCS experiments were simulated using the MARS code incorporating the models. • The Cavallini et al. (2006) model predicts the data well for stratified flow conditions. • The results of this study can be used to improve the condensation model in RELAP5 or MARS. - Abstract: The accurate prediction of horizontal in-tube condensation heat transfer is a primary concern in the optimum design and safety analysis of horizontal heat exchangers of passive safety systems such as the passive containment cooling system (PCCS), the emergency condenser system (ECS) and the passive auxiliary feed-water system (PAFS). It is essential to analyze and assess the predictive capability of previous horizontal in-tube condensation models for each flow regime using various experimental data. This study assessed a total of 11 condensation models for the stratified flow, one of the main flow regimes encountered in horizontal condensers, against heat transfer data from the Purdue-PCCS experiment using the multi-dimensional analysis of reactor safety (MARS) code. From the assessments, it was found that the models by Akers and Rosson, Chato, Tandon et al., Sweeney and Chato, and Cavallini et al. (2002) under-predicted the data in the main condensation heat transfer region; on the contrary, the models by Rosson and Meyers, Jaster and Kosky, Fujii, Dobson and Chato, and Thome et al. similarly- or over-predicted the data, and the Cavallini et al. (2006) model in particular shows good predictive capability for all test conditions. The results of this study can be used to improve the condensation models in thermal-hydraulic codes such as RELAP5 or MARS.

  9. RANS Modeling of Stably Stratified Turbulent Boundary Layer Flows in OpenFOAM®

    Directory of Open Access Journals (Sweden)

    Wilson Jordan M.

    2015-01-01

    Quantifying mixing processes relating to the transport of heat, momentum, and scalar quantities in stably stratified turbulent geophysical flows remains a substantial task. In a stably stratified flow, such as the stable atmospheric boundary layer (SABL), buoyancy forces have a significant impact on the flow characteristics. This study investigates constant and stability-dependent turbulent Prandtl number (Prt) formulations linking the turbulent viscosity (νt) and diffusivity (κt) for modeling applications of boundary layer flows. Numerical simulations of plane Couette flow and pressure-driven channel flow are performed using the Reynolds-averaged Navier-Stokes (RANS) framework with the standard k-ε turbulence model. Results are compared with DNS data to evaluate model efficacy for predicting mean velocity and density fields. In channel flow simulations, a Prandtl number formulation for wall-bounded flows is introduced to alleviate overmixing of the mean density field. This research reveals that appropriate specification of Prt can improve predictions of stably stratified turbulent boundary layer flows.

  10. Matérn-based nonstationary cross-covariance models for global processes

    KAUST Repository

    Jun, Mikyoung

    2014-07-01

    Many spatial processes in environmental applications, such as climate variables and climate model errors on a global scale, exhibit complex nonstationary dependence structure, in not only their marginal covariance but also their cross-covariance. Flexible cross-covariance models for processes on a global scale are critical for an accurate description of each spatial process as well as the cross-dependences between them and also for improved predictions. We propose various ways to produce cross-covariance models, based on the Matérn covariance model class, that are suitable for describing prominent nonstationary characteristics of the global processes. In particular, we seek nonstationary versions of Matérn covariance models whose smoothness parameters vary over space, coupled with a differential operators approach for modeling large-scale nonstationarity. We compare their performance to the performance of some existing models in terms of the aic and spatial predictions in two applications: joint modeling of surface temperature and precipitation, and joint modeling of errors in climate model ensembles. © 2014 Elsevier Inc.
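
    For reference, the stationary, isotropic Matérn model underlying these constructions can be sketched directly. Parameter names follow the common (σ², ρ, ν) convention; ν is the smoothness parameter that the paper's nonstationary versions allow to vary over space, and ν = 1/2 recovers the exponential covariance.

```python
import numpy as np
from scipy.special import gamma, kv  # kv: modified Bessel function K_nu

def matern(h, sigma2=1.0, rho=1.0, nu=0.5):
    """Matérn covariance as a function of distance h:
    C(h) = sigma2 * 2^(1-nu)/Gamma(nu) * (sqrt(2 nu) h/rho)^nu
           * K_nu(sqrt(2 nu) h/rho),  with C(0) = sigma2."""
    h = np.asarray(h, dtype=float)
    out = np.full(h.shape, sigma2)
    pos = h > 0
    u = np.sqrt(2 * nu) * h[pos] / rho
    out[pos] = sigma2 * (2 ** (1 - nu) / gamma(nu)) * u ** nu * kv(nu, u)
    return out

h = np.linspace(0.0, 3.0, 7)
print(matern(h, nu=0.5))   # nu = 1/2: matches exp(-h) for sigma2 = rho = 1
print(matern(h, nu=1.5))   # larger nu: a smoother process
```

    Cross-covariance models extend this by coupling two such processes; the nonstationary versions let σ², ρ and ν depend on location.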

  11. The error and covariance structures of the mean approach model of pooled cross-section and time series data

    International Nuclear Information System (INIS)

    Nuamah, N.N.N.N.

    1991-01-01

    This paper postulates the assumptions underlying the Mean Approach model and recasts the re-expressions of the normal equations of this model in partitioned matrices of covariances. These covariance structures have been analysed. (author). 16 refs

  12. Stratified turbulent Bunsen flames: flame surface analysis and flame surface density modelling

    Science.gov (United States)

    Ramaekers, W. J. S.; van Oijen, J. A.; de Goey, L. P. H.

    2012-12-01

    In this paper it is investigated whether the Flame Surface Density (FSD) model, developed for turbulent premixed combustion, is also applicable to stratified flames. Direct Numerical Simulations (DNS) of turbulent stratified Bunsen flames have been carried out, using the Flamelet Generated Manifold (FGM) reduction method for reaction kinetics. Before examining the suitability of the FSD model, flame surfaces are characterized in terms of thickness, curvature and stratification. All flames are in the Thin Reaction Zones regime, and the maximum equivalence ratio range covers 0.1⩽φ⩽1.3. For all flames, local flame thicknesses correspond very well to those observed in stretchless, steady premixed flamelets. Extracted curvature radii and mixing length scales are significantly larger than the flame thickness, implying that the stratified flames all burn in a premixed mode. The remaining challenge is accounting for the large variation in (subfilter) mass burning rate. In this contribution, the FSD model is proven to be applicable for Large Eddy Simulations (LES) of stratified flames for the equivalence ratio range 0.1⩽φ⩽1.3. Subfilter mass burning rate variations are taken into account by a subfilter Probability Density Function (PDF) for the mixture fraction, on which the mass burning rate directly depends. A priori analysis points out that for small stratifications (0.4⩽φ⩽1.0), the replacement of the subfilter PDF (obtained from DNS data) by the corresponding Dirac function is appropriate. Integration of the Dirac function with the mass burning rate m=m(φ) can then adequately model the filtered mass burning rate obtained from filtered DNS data. For a larger stratification (0.1⩽φ⩽1.3) and filter widths up to ten flame thicknesses, a β-function for the subfilter PDF yields substantially better predictions than a Dirac function. Finally, inclusion of a simple algebraic model for the FSD results in only small additional deviations from DNS data.

  13. The optical interface of a photonic crystal: Modeling an opal with a stratified effective index

    OpenAIRE

    Maurin, Isabelle; Moufarej, Elias; Laliotis, Athanasios; Bloch, Daniel

    2014-01-01

An artificial opal is a compact arrangement of transparent spheres, and is an archetype of a three-dimensional photonic crystal. Here, we describe the optics of an opal using a flexible model based upon a stratified medium whose (effective) index is governed by the opal density in a small planar slice of the opal. We take into account the effect of the substrate and assume a well-controlled number of layers, as occurs for an opal fabricated by Langmuir-Blodgett deposition. The calculation...

  14. A Standardized Generalized Dimensionality Discrepancy Measure and a Standardized Model-Based Covariance for Dimensionality Assessment for Multidimensional Models

    Science.gov (United States)

    Levy, Roy; Xu, Yuning; Yel, Nedim; Svetina, Dubravka

    2015-01-01

    The standardized generalized dimensionality discrepancy measure and the standardized model-based covariance are introduced as tools to critique dimensionality assumptions in multidimensional item response models. These tools are grounded in a covariance theory perspective and associated connections between dimensionality and local independence.…

  15. Implications of the modelling of stratified hot water storage tanks in the simulation of CHP plants

    Energy Technology Data Exchange (ETDEWEB)

    Campos Celador, A., E-mail: alvaro.campos@ehu.es [ENEDI Research Group-University of the Basque Country, Departamento de Maquinas y Motores Termicos, E.T.S.I. de Bilbao Alameda de Urquijo, s/n 48013 Bilbao, Bizkaia (Spain); Odriozola, M.; Sala, J.M. [ENEDI Research Group-University of the Basque Country, Departamento de Maquinas y Motores Termicos, E.T.S.I. de Bilbao Alameda de Urquijo, s/n 48013 Bilbao, Bizkaia (Spain)

    2011-08-15

Highlights: • Three different modelling approaches for the simulation of hot water tanks are presented. • The three models are simulated within a residential cogeneration plant. • Small differences in the results are found by an energy and exergy analysis. • Big differences between the results are found by an advanced exergy analysis. • Results of the feasibility study are explained by the advanced exergy analysis. - Abstract: This paper considers the effect that different hot water storage tank modelling approaches have on the global simulation of residential CHP plants as well as their impact on their economic feasibility. While a simplified assessment of the heat storage is usually considered in the feasibility studies of CHP plants in buildings, this paper deals with three different levels of modelling of the hot water tank: actual stratified model, ideal stratified model and fully mixed model. These three approaches are presented and comparatively evaluated on the same case study, a cogeneration plant with thermal storage meeting the loads of an urbanisation located in the Bilbao metropolitan area (Spain). The case study is simulated in TRNSYS for each of the three modelling cases and the annual results thus obtained are analysed from both a First- and Second-Law-based viewpoint. While the global energy and exergy efficiencies of the plant for the three modelling cases agree quite well, important differences are found between the economic results of the feasibility study. These results can be predicted by means of an advanced exergy analysis of the storage tank considering the endogenous and exogenous exergy destruction terms caused by the hot water storage tank.
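The simplest of the three modelling levels, the fully mixed tank, reduces to a single lumped energy balance; a stratified model applies the same balance per layer with inter-layer flow and conduction. The sketch below is a generic single-node balance with hypothetical parameter names and values, not the TRNSYS component used in the paper.

```python
def mixed_tank_step(T, dt, m_dot, T_in, volume, ua, T_amb,
                    rho=1000.0, cp=4186.0):
    """One explicit Euler step of a fully mixed tank energy balance:
    charging/discharging flow m_dot [kg/s] entering at T_in [degC],
    plus ambient losses through the loss coefficient ua [W/K]."""
    heat_capacity = rho * cp * volume                 # J/K
    q = m_dot * cp * (T_in - T) - ua * (T - T_amb)    # net power, W
    return T + dt * q / heat_capacity
```

With the inlet and ambient at the tank temperature the balance is stationary; with no flow and a cooler ambient the tank loses heat, as expected.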

  16. Testing Constancy of the Error Covariance Matrix in Vector Models against Parametric Alternatives using a Spectral Decomposition

    DEFF Research Database (Denmark)

    Yang, Yukay

    I consider multivariate (vector) time series models in which the error covariance matrix may be time-varying. I derive a test of constancy of the error covariance matrix against the alternative that the covariance matrix changes over time. I design a new family of Lagrange-multiplier tests against...... to consider multivariate volatility modelling....

  17. The Misspecification of the Covariance Structures in Multilevel Models for Single-Case Data: A Monte Carlo Simulation Study

    Science.gov (United States)

    Moeyaert, Mariola; Ugille, Maaike; Ferron, John M.; Beretvas, S. Natasha; Van den Noortgate, Wim

    2016-01-01

    The impact of misspecifying covariance matrices at the second and third levels of the three-level model is evaluated. Results indicate that ignoring existing covariance has no effect on the treatment effect estimate. In addition, the between-case variance estimates are unbiased when covariance is either modeled or ignored. If the research interest…

  18. On adjustment for auxiliary covariates in additive hazard models for the analysis of randomized experiments

    DEFF Research Database (Denmark)

    Vansteelandt, S.; Martinussen, Torben; Tchetgen, E. J Tchetgen

    2014-01-01

    We consider additive hazard models (Aalen, 1989) for the effect of a randomized treatment on a survival outcome, adjusting for auxiliary baseline covariates. We demonstrate that the Aalen least-squares estimator of the treatment effect parameter is asymptotically unbiased, even when the hazard...... that, in view of its robustness against model misspecification, Aalen least-squares estimation is attractive for evaluating treatment effects on a survival outcome in randomized experiments, and the primary reasons to consider baseline covariate adjustment in such settings could be interest in subgroup......'s dependence on time or on the auxiliary covariates is misspecified, and even away from the null hypothesis of no treatment effect. We furthermore show that adjustment for auxiliary baseline covariates does not change the asymptotic variance of the estimator of the effect of a randomized treatment. We conclude...

  19. Covariance approximation for large multivariate spatial data sets with an application to multiple climate model errors

    KAUST Repository

    Sang, Huiyan

    2011-12-01

    This paper investigates the cross-correlations across multiple climate model errors. We build a Bayesian hierarchical model that accounts for the spatial dependence of individual models as well as cross-covariances across different climate models. Our method allows for a nonseparable and nonstationary cross-covariance structure. We also present a covariance approximation approach to facilitate the computation in the modeling and analysis of very large multivariate spatial data sets. The covariance approximation consists of two parts: a reduced-rank part to capture the large-scale spatial dependence, and a sparse covariance matrix to correct the small-scale dependence error induced by the reduced rank approximation. We pay special attention to the case that the second part of the approximation has a block-diagonal structure. Simulation results of model fitting and prediction show substantial improvement of the proposed approximation over the predictive process approximation and the independent blocks analysis. We then apply our computational approach to the joint statistical modeling of multiple climate model errors. © 2012 Institute of Mathematical Statistics.
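The two-part structure of the approximation — a reduced-rank term capturing large-scale dependence plus a block-diagonal correction for the small-scale residual — can be sketched as follows. The eigendecomposition here stands in for the paper's predictive-process-style basis, and the rank and block size are arbitrary illustrative choices.

```python
import numpy as np

def reduced_rank_plus_sparse(C, rank, block_size):
    """Approximate a covariance matrix C by a low-rank part (top
    eigenpairs, large-scale dependence) plus a block-diagonal
    correction for the small-scale residual. Structure only; the
    paper builds the low-rank part from a predictive-process basis."""
    w, V = np.linalg.eigh(C)
    order = np.argsort(w)[::-1][:rank]
    low_rank = (V[:, order] * w[order]) @ V[:, order].T
    resid = C - low_rank
    correction = np.zeros_like(C)
    n = C.shape[0]
    for start in range(0, n, block_size):
        stop = min(start + block_size, n)
        correction[start:stop, start:stop] = resid[start:stop, start:stop]
    return low_rank + correction
```

Because the correction keeps the diagonal blocks of the residual exactly, the approximation reproduces all marginal variances and strictly improves on the low-rank part alone.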

  20. Accounting for covariate measurement error in a Cox model analysis of recurrence of depression.

    Science.gov (United States)

    Liu, K; Mazumdar, S; Stone, R A; Dew, M A; Houck, P R; Reynolds, C F

    2001-01-01

    When a covariate measured with error is used as a predictor in a survival analysis using the Cox model, the parameter estimate is usually biased. In clinical research, covariates measured without error such as treatment procedure or sex are often used in conjunction with a covariate measured with error. In a randomized clinical trial of two types of treatments, we account for the measurement error in the covariate, log-transformed total rapid eye movement (REM) activity counts, in a Cox model analysis of the time to recurrence of major depression in an elderly population. Regression calibration and two variants of a likelihood-based approach are used to account for measurement error. The likelihood-based approach is extended to account for the correlation between replicate measures of the covariate. Using the replicate data decreases the standard error of the parameter estimate for log(total REM) counts while maintaining the bias reduction of the estimate. We conclude that covariate measurement error and the correlation between replicates can affect results in a Cox model analysis and should be accounted for. In the depression data, these methods render comparable results that have less bias than the results when measurement error is ignored.
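Regression calibration in its simplest form replaces the error-prone covariate by an estimate of E[X | W], shrinking each subject's replicate mean toward the grand mean by an estimated reliability ratio. The moment-based, balanced-design sketch below is generic and is not the likelihood-based variants developed in the paper.

```python
def regression_calibration(replicates):
    """Given replicate measurements W_ij = X_i + U_ij of an error-prone
    covariate (one inner list per subject, equal numbers of replicates),
    return calibrated values E[X_i | W-bar_i] from moment estimates of
    the between- and within-subject variance components."""
    n = len(replicates)
    k = len(replicates[0])
    means = [sum(r) / k for r in replicates]
    grand = sum(means) / n
    # within-subject variability estimates the measurement error variance
    sigma_u2 = sum((w - m) ** 2
                   for r, m in zip(replicates, means) for w in r) / (n * (k - 1))
    # variance of subject means = sigma_x^2 + sigma_u^2 / k
    var_means = sum((m - grand) ** 2 for m in means) / (n - 1)
    sigma_x2 = max(var_means - sigma_u2 / k, 0.0)
    lam = sigma_x2 / (sigma_x2 + sigma_u2 / k)   # reliability of the mean
    return [grand + lam * (m - grand) for m in means]
```

The calibrated values would then be used as the covariate in the Cox model fit in place of the raw measurements.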

  1. A Proportional Hazards Regression Model for the Subdistribution with Covariates-adjusted Censoring Weight for Competing Risks Data

    DEFF Research Database (Denmark)

    He, Peng; Eriksson, Frank; Scheike, Thomas H.

    2016-01-01

    function by fitting the Cox model for the censoring distribution and using the predictive probability for each individual. Our simulation study shows that the covariate-adjusted weight estimator is basically unbiased when the censoring time depends on the covariates, and the covariate-adjusted weight......With competing risks data, one often needs to assess the treatment and covariate effects on the cumulative incidence function. Fine and Gray proposed a proportional hazards regression model for the subdistribution of a competing risk with the assumption that the censoring distribution...... and the covariates are independent. Covariate-dependent censoring sometimes occurs in medical studies. In this paper, we study the proportional hazards regression model for the subdistribution of a competing risk with proper adjustments for covariate-dependent censoring. We consider a covariate-adjusted weight...

  2. Model-Based Prediction of Pulsed Eddy Current Testing Signals from Stratified Conductive Structures

    International Nuclear Information System (INIS)

    Zhang, Jian Hai; Song, Sung Jin; Kim, Woong Ji; Kim, Hak Joon; Chung, Jong Duk

    2011-01-01

Excitation and propagation of the electromagnetic field of a cylindrical coil above an arbitrary number of conductive plates in pulsed eddy current testing (PECT) are very complex problems due to their complicated physical properties. In this paper, an analytical model of PECT is established via Fourier series, based on the truncated region eigenfunction expansion (TREE) method, for a single air-cored coil above stratified conductive structures (SCS), to investigate their integrity. From the presented expression of PECT, the coil impedance due to the SCS is calculated analytically using the generalized reflection coefficient in series form. Then multilayered structures manufactured from non-ferromagnetic (STS301L) and ferromagnetic (SS400) materials are investigated with the developed PECT model. The good predictive performance of the analytical PECT model not only contributes to the development of an efficient solver but can also be applied to optimize the experimental setup in PECT.

  3. Promotion time cure rate model with nonparametric form of covariate effects.

    Science.gov (United States)

    Chen, Tianlei; Du, Pang

    2018-05-10

    Survival data with a cured portion are commonly seen in clinical trials. Motivated from a biological interpretation of cancer metastasis, promotion time cure model is a popular alternative to the mixture cure rate model for analyzing such data. The existing promotion cure models all assume a restrictive parametric form of covariate effects, which can be incorrectly specified especially at the exploratory stage. In this paper, we propose a nonparametric approach to modeling the covariate effects under the framework of promotion time cure model. The covariate effect function is estimated by smoothing splines via the optimization of a penalized profile likelihood. Point-wise interval estimates are also derived from the Bayesian interpretation of the penalized profile likelihood. Asymptotic convergence rates are established for the proposed estimates. Simulations show excellent performance of the proposed nonparametric method, which is then applied to a melanoma study. Copyright © 2018 John Wiley & Sons, Ltd.

  4. Stochastic modeling of the Earth's magnetic field: Inversion for covariances over the observatory era

    DEFF Research Database (Denmark)

    Gillet, N.; Jault, D.; Finlay, Chris

    2013-01-01

    Inferring the core dynamics responsible for the observed geomagnetic secular variation requires knowledge of the magnetic field at the core-mantle boundary together with its associated model covariances. However, most currently available field models have been built using regularization conditions...... variation error model in core flow inversions and geomagnetic data assimilation studies....

  5. Stochastic modelling of the Earth’s magnetic field: inversion for covariances over the observatory era

    DEFF Research Database (Denmark)

    Gillet, Nicolas; Jault, D.; Finlay, Chris

    2013-01-01

    Inferring the core dynamics responsible for the observed geomagnetic secular variation requires knowledge of the magnetic field at the core mantle boundary together with its associated model covariances. However, all currently available field models have been built using regularization conditions...... variation error model in core flow inversions and geomagnetic data assimilation studies....

  6. Estimating model error covariances in nonlinear state-space models using Kalman smoothing and the expectation-maximisation algorithm

    KAUST Repository

    Dreano, Denis; Tandeo, P.; Pulido, M.; Ait-El-Fquih, Boujemaa; Chonavel, T.; Hoteit, Ibrahim

    2017-01-01

    Specification and tuning of errors from dynamical models are important issues in data assimilation. In this work, we propose an iterative expectation-maximisation (EM) algorithm to estimate the model error covariances using classical extended

  7. Simulations and cosmological inference: A statistical model for power spectra means and covariances

    International Nuclear Information System (INIS)

    Schneider, Michael D.; Knox, Lloyd; Habib, Salman; Heitmann, Katrin; Higdon, David; Nakhleh, Charles

    2008-01-01

    We describe an approximate statistical model for the sample variance distribution of the nonlinear matter power spectrum that can be calibrated from limited numbers of simulations. Our model retains the common assumption of a multivariate normal distribution for the power spectrum band powers but takes full account of the (parameter-dependent) power spectrum covariance. The model is calibrated using an extension of the framework in Habib et al. (2007) to train Gaussian processes for the power spectrum mean and covariance given a set of simulation runs over a hypercube in parameter space. We demonstrate the performance of this machinery by estimating the parameters of a power-law model for the power spectrum. Within this framework, our calibrated sample variance distribution is robust to errors in the estimated covariance and shows rapid convergence of the posterior parameter constraints with the number of training simulations.

  8. Prediction of stably stratified homogeneous shear flows with second-order turbulence models

    International Nuclear Information System (INIS)

    Pereira, J C F; Rocha, J M P

    2010-01-01

The present study investigated the role of pressure-correlation second-order turbulence modelling schemes in the predicted behaviour of stably stratified homogeneous vertically sheared turbulence. The pressure-correlation terms were modelled with a nonlinear formulation (Craft 1991), which was compared with a linear pressure-strain model and the 'isotropization of production' model for the pressure-scalar correlation. Two additional modelling issues were investigated: the influence of the buoyancy term in the kinetic energy dissipation rate equation and the time scale in the thermal production term in the scalar variance dissipation equation. The predicted effects of increasing the Richardson number on turbulence characteristics were compared against a comprehensive set of direct numerical simulation databases. The linear models provide a broadly satisfactory description of the major effects of the Richardson number on stratified shear flow. The buoyancy term in the dissipation equation of the turbulent kinetic energy generates excessively low levels of dissipation. For moderate and large Richardson numbers, the term yields unrealistic linear oscillations in the shear and buoyancy production terms, and therefore should be dropped in this flow (or at least its coefficient c_ε3 should be substantially reduced from its standard value). The mechanical dissipation time scale provides marginal improvements in comparison to the scalar time scale in the production. The observed inaccuracy of the linear model in predicting the magnitude of the effects on the velocity anisotropy was demonstrated to be attributable mainly to the defective behaviour of the pressure-correlation model, especially for stronger stratification. The turbulence closure embodying a nonlinear formulation for the pressure-correlations and specific versions of the dissipation equations failed to predict the tendency of the flow to anisotropy with increasing stratification. By isolating the effects of the

  9. Model test on partial expansion in stratified subsidence during foundation pit dewatering

    Science.gov (United States)

    Wang, Jianxiu; Deng, Yansheng; Ma, Ruiqiang; Liu, Xiaotian; Guo, Qingfeng; Liu, Shaoli; Shao, Yule; Wu, Linbo; Zhou, Jie; Yang, Tianliang; Wang, Hanmei; Huang, Xinlei

    2018-02-01

Partial expansion was observed in stratified subsidence during foundation pit dewatering. However, the phenomenon was suspected to be an error because the compression of layers is known to occur when subsidence occurs. A slice of the subsidence cone induced by drawdown was selected as the prototype. Model tests were performed to investigate the phenomenon. The underlying confined aquifer was represented by a movable rigid plate with a hinge at one end. The overlying layers were simulated with remolded materials collected from a construction site. Model tests performed under the conceptual model indicated that partial expansion occurred in stratified settlements under coordinated deformation and consolidation conditions. During foundation pit dewatering, rapid drawdown resulted in rapid subsidence in the dewatered confined aquifer. The rapidly subsiding confined aquifer top was the bottom deformation boundary of the overlying layers. Non-coordinated deformation was observed at the top and bottom of the subsiding overlying layers. The subsidence of the overlying layers was larger at the bottom than at the top. The layers expanded and became thicker. The phenomenon was verified using a numerical simulation based on the finite difference method. Compared with the numerical simulation results, the boundary effect of the physical tests was obvious at the observation point close to the movable endpoint. The tensile stress in the overlying soil layers induced by the settlement of the underlying dewatered confined aquifer contributed to the expansion phenomenon. The partial expansion of the overlying soil layers was defined as inversed rebound, induced by inversed coordination deformation. Compression was induced by consolidation in the overlying soil layers because of drainage. Partial expansion occurred when the expansion exceeded the compression. Considering the inversed rebound, the traditional layer-wise summation method for calculating subsidence should be

  10. A multivariate multilevel Gaussian model with a mixed effects structure in the mean and covariance part.

    Science.gov (United States)

    Li, Baoyue; Bruyneel, Luk; Lesaffre, Emmanuel

    2014-05-20

A traditional Gaussian hierarchical model assumes a nested multilevel structure for the mean and a constant variance at each level. We propose a Bayesian multivariate multilevel factor model that assumes a multilevel structure for both the mean and the covariance matrix. That is, in addition to a multilevel structure for the mean, we also assume that the covariance matrix depends on covariates and random effects. This allows us to explore whether the covariance structure depends on the values of the higher levels, and as such models heterogeneity in the variances and correlation structure of the multivariate outcome across the higher-level values. The approach is applied to the three-dimensional vector of burnout measurements collected on nurses in a large European study to answer the research question of whether the covariance matrix of the outcomes depends on recorded system-level features in the organization of nursing care, but also on unrecorded factors that vary with countries, hospitals, and nursing units. Simulations illustrate the performance of our modeling approach. Copyright © 2013 John Wiley & Sons, Ltd.

  11. Univariate and Multivariate Specification Search Indices in Covariance Structure Modeling.

    Science.gov (United States)

    Hutchinson, Susan R.

    1993-01-01

    Simulated population data were used to compare relative performances of the modification index and C. Chou and P. M. Bentler's Lagrange multiplier test (a multivariate generalization of a modification index) for four levels of model misspecification. Both indices failed to recover the true model except at the lowest level of misspecification. (SLD)

  12. Nucleon quark distributions in a covariant quark-diquark model

    Energy Technology Data Exchange (ETDEWEB)

    Cloet, I.C. [Special Research Centre for the Subatomic Structure of Matter and Department of Physics and Mathematical Physics, University of Adelaide, SA 5005 (Australia) and Jefferson Lab, 12000 Jefferson Avenue, Newport News, VA 23606 (United States)]. E-mail: icloet@physics.adelaide.edu.au; Bentz, W. [Department of Physics, School of Science, Tokai University, Hiratsuka-shi, Kanagawa 259-1292 (Japan)]. E-mail: bentz@keyaki.cc.u-tokai.ac.jp; Thomas, A.W. [Jefferson Lab, 12000 Jefferson Avenue, Newport News, VA 23606 (United States)]. E-mail: awthomas@jlab.org

    2005-08-18

    Spin-dependent and spin-independent quark light-cone momentum distributions and structure functions are calculated for the nucleon. We utilize a modified Nambu-Jona-Lasinio model in which confinement is simulated by eliminating unphysical thresholds for nucleon decay into quarks. The nucleon bound state is obtained by solving the Faddeev equation in the quark-diquark approximation, where both scalar and axial-vector diquark channels are included. We find excellent agreement between our model results and empirical data.

  13. Adaptive Non-Interventional Heuristics for Covariation Detection in Causal Induction: Model Comparison and Rational Analysis

    Science.gov (United States)

    Hattori, Masasi; Oaksford, Mike

    2007-01-01

    In this article, 41 models of covariation detection from 2 x 2 contingency tables were evaluated against past data in the literature and against data from new experiments. A new model was also included based on a limiting case of the normative phi-coefficient under an extreme rarity assumption, which has been shown to be an important factor in…
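The normative φ-coefficient for a 2×2 contingency table, and the limiting form it takes under an extreme rarity assumption (the no-cause/no-effect cell d dominating), can be written down directly. The cell counts in the usage check are arbitrary; the rarity limit is the geometric mean of P(effect|cause) and P(cause|effect).

```python
import math

def phi_coefficient(a, b, c, d):
    """phi for the 2x2 table with cells a = cause & effect,
    b = cause & no effect, c = no cause & effect, d = neither."""
    num = a * d - b * c
    den = math.sqrt((a + b) * (c + d) * (a + c) * (b + d))
    return num / den

def rarity_limit(a, b, c):
    # limiting case of phi as d -> infinity (extreme rarity):
    # sqrt( P(effect|cause) * P(cause|effect) )
    return a / math.sqrt((a + b) * (a + c))
```

With a large d cell the full φ-coefficient and its rarity limit nearly coincide, which is the sense in which the limiting case can serve as a non-interventional heuristic.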

14. P2: A random effects model with covariates for directed graphs

    NARCIS (Netherlands)

    van Duijn, M.A.J.; Snijders, T.A.B.; Zijlstra, B.J.H.

    A random effects model is proposed for the analysis of binary dyadic data that represent a social network or directed graph, using nodal and/or dyadic attributes as covariates. The network structure is reflected by modeling the dependence between the relations to and from the same actor or node.

  15. Modeling and Forecasting (Un)Reliable Realized Covariances for More Reliable Financial Decisions

    DEFF Research Database (Denmark)

    Bollerslev, Tim; Patton, Andrew J.; Quaedvlieg, Rogier

    We propose a new framework for modeling and forecasting common financial risks based on (un)reliable realized covariance measures constructed from high-frequency intraday data. Our new approach explicitly incorporates the effect of measurement errors and time-varying attenuation biases into the c......We propose a new framework for modeling and forecasting common financial risks based on (un)reliable realized covariance measures constructed from high-frequency intraday data. Our new approach explicitly incorporates the effect of measurement errors and time-varying attenuation biases...

  16. MODELS OF COVARIANCE FUNCTIONS OF GAUSSIAN RANDOM FIELDS ESCAPING FROM ISOTROPY, STATIONARITY AND NON NEGATIVITY

    Directory of Open Access Journals (Sweden)

    Pablo Gregori

    2014-03-01

This paper surveys recent advances in the modeling of space and space-time Gaussian Random Fields (GRF), tools of Geostatistics at hand for understanding special cases of noise in image analysis. They can be used when stationarity or isotropy are unrealistic assumptions, or even when negative covariances between some pairs of locations are evident. We show some strategies for escaping these restrictions, building on rich classes of well-known stationary or isotropic nonnegative covariance models through suitable operations, such as linear combinations, generalized means, or particular Fourier transforms.

  17. Modelling of ground penetrating radar data in stratified media using the reflectivity technique

    International Nuclear Information System (INIS)

    Sena, Armando R; Sen, Mrinal K; Stoffa, Paul L

    2008-01-01

    Horizontally layered media are often encountered in shallow exploration geophysics. Ground penetrating radar (GPR) data in these environments can be modelled by techniques that are more efficient than finite difference (FD) or finite element (FE) schemes because the lateral homogeneity of the media allows us to reduce the dependence on the horizontal spatial variables through Fourier transforms on these coordinates. We adapt and implement the invariant embedding or reflectivity technique used to model elastic waves in layered media to model GPR data. The results obtained with the reflectivity and FDTD modelling techniques are in excellent agreement and the effects of the air–soil interface on the radiation pattern are correctly taken into account by the reflectivity technique. Comparison with real wide-angle GPR data shows that the reflectivity technique can satisfactorily reproduce the real GPR data. These results and the computationally efficient characteristics of the reflectivity technique (compared to FD or FE) demonstrate its usefulness in interpretation and possible model-based inversion schemes of GPR data in stratified media
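The invariant-embedding idea — recursively combining each interface's reflection with the embedded response of the stack below it — can be illustrated at normal incidence for a lossless layered medium. This is the textbook recursion only, far simpler than the paper's full wide-angle, dispersive GPR formulation; all indices and thicknesses below are hypothetical.

```python
import cmath
import math

def layered_reflection(n, d, wavelength):
    """Normal-incidence complex reflection coefficient of a layer
    stack, built by invariant embedding: start from the bottom
    half-space and fold each layer's phase and top interface into
    the running total. n = [incident, layer_1, ..., layer_L,
    substrate]; d = thicknesses of layer_1..layer_L."""
    k0 = 2 * math.pi / wavelength

    def fresnel(n1, n2):
        return (n1 - n2) / (n1 + n2)

    r = fresnel(n[-2], n[-1])               # bottom interface
    for j in range(len(d) - 1, -1, -1):     # embed layers upward
        phase = cmath.exp(2j * k0 * n[j + 1] * d[j])
        r_top = fresnel(n[j], n[j + 1])
        r = (r_top + r * phase) / (1 + r_top * r * phase)
    return r
```

Two standard sanity checks: a layer index-matched to the substrate reflects only at its top interface, and a half-wave layer is optically invisible.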

  18. On the Likely Utility of Hybrid Weights Optimized for Variances in Hybrid Error Covariance Models

    Science.gov (United States)

    Satterfield, E.; Hodyss, D.; Kuhl, D.; Bishop, C. H.

    2017-12-01

    Because of imperfections in ensemble data assimilation schemes, one cannot assume that the ensemble covariance is equal to the true error covariance of a forecast. Previous work demonstrated how information about the distribution of true error variances given an ensemble sample variance can be revealed from an archive of (observation-minus-forecast, ensemble-variance) data pairs. Here, we derive a simple and intuitively compelling formula to obtain the mean of this distribution of true error variances given an ensemble sample variance from (observation-minus-forecast, ensemble-variance) data pairs produced by a single run of a data assimilation system. This formula takes the form of a Hybrid weighted average of the climatological forecast error variance and the ensemble sample variance. Here, we test the extent to which these readily obtainable weights can be used to rapidly optimize the covariance weights used in Hybrid data assimilation systems that employ weighted averages of static covariance models and flow-dependent ensemble based covariance models. Univariate data assimilation and multi-variate cycling ensemble data assimilation are considered. In both cases, it is found that our computationally efficient formula gives Hybrid weights that closely approximate the optimal weights found through the simple but computationally expensive process of testing every plausible combination of weights.
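The central formula described here — the mean of the distribution of true error variances given an ensemble sample variance, written as a hybrid weighted average — has the generic shape below. The signal-to-(signal+noise) form of the weight is an illustrative assumption standing in for the coefficients the paper estimates from (observation-minus-forecast, ensemble-variance) pairs.

```python
def hybrid_weight(var_of_true_variances, var_of_sampling_noise):
    """Weight on the ensemble sample variance: how much of the spread
    in observed variances reflects real flow-dependent variation versus
    finite-ensemble sampling noise (assumed signal-to-noise form)."""
    return var_of_true_variances / (var_of_true_variances + var_of_sampling_noise)

def posterior_mean_variance(sample_var, clim_var, w):
    # hybrid weighted average of the ensemble sample variance and the
    # climatological (static) forecast error variance
    return w * sample_var + (1.0 - w) * clim_var
```

When sampling noise dominates, the weight shrinks the estimate toward the static climatological variance, mirroring how Hybrid data assimilation systems blend static and ensemble-based covariance models.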

  19. A Matérn model of the spatial covariance structure of point rain rates

    KAUST Repository

    Sun, Ying

    2014-07-15

    It is challenging to model a precipitation field due to its intermittent and highly scale-dependent nature. Many models of point rain rates or areal rainfall observations have been proposed and studied for different time scales. Among them, the spectral model based on a stochastic dynamical equation for the instantaneous point rain rate field is attractive, since it naturally leads to a consistent space–time model. In this paper, we note that the spatial covariance structure of the spectral model is equivalent to the well-known Matérn covariance model. Using high-quality rain gauge data, we estimate the parameters of the Matérn model for different time scales and demonstrate that the Matérn model is superior to an exponential model, particularly at short time scales.
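For reference, the Matérn covariance admits simple closed forms at half-integer smoothness ν, with the exponential model recovered at ν = 1/2 — exactly the comparison the paper draws. A minimal sketch (isotropic, unit parameters by default; the parameter names are generic, not the paper's notation):

```python
import math

def matern(h, sigma2=1.0, scale=1.0, nu=1.5):
    """Matern covariance at lag h, using the closed forms available
    at half-integer smoothness; nu = 0.5 is the exponential model."""
    if h == 0:
        return sigma2
    t = h / scale
    if nu == 0.5:
        return sigma2 * math.exp(-t)
    if nu == 1.5:
        c = math.sqrt(3.0) * t
        return sigma2 * (1.0 + c) * math.exp(-c)
    if nu == 2.5:
        c = math.sqrt(5.0) * t
        return sigma2 * (1.0 + c + c * c / 3.0) * math.exp(-c)
    raise ValueError("only nu in {0.5, 1.5, 2.5} implemented here")
```

Fitting amounts to choosing (sigma2, scale, nu) to match empirical covariances of the rain gauge data; larger ν gives a smoother field near the origin, which is where the Matérn and exponential models differ most at short time scales.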

  20. A Matérn model of the spatial covariance structure of point rain rates

    KAUST Repository

    Sun, Ying; Bowman, Kenneth P.; Genton, Marc G.; Tokay, Ali

    2014-01-01

    It is challenging to model a precipitation field due to its intermittent and highly scale-dependent nature. Many models of point rain rates or areal rainfall observations have been proposed and studied for different time scales. Among them, the spectral model based on a stochastic dynamical equation for the instantaneous point rain rate field is attractive, since it naturally leads to a consistent space–time model. In this paper, we note that the spatial covariance structure of the spectral model is equivalent to the well-known Matérn covariance model. Using high-quality rain gauge data, we estimate the parameters of the Matérn model for different time scales and demonstrate that the Matérn model is superior to an exponential model, particularly at short time scales.

  1. Covariant introduction of quark spin into the dual resonance model

    International Nuclear Information System (INIS)

    Iroshnikov, G.S.

    1979-01-01

A very simple method of inserting quark spin into the dual resonance model of hadron interactions is proposed. The method is suitable for amplitudes with an arbitrary number of particles. The amplitude for the interaction of real particles is presented as a product of the contribution of oscillatory excitations in the (q anti q) system and a spin factor. The latter is equal to the trace of the product of the external particle wave functions, constructed from constituent quarks and satisfying the relativistic Bargmann-Wigner equations. Two examples of calculating meson interaction amplitudes are presented.

  2. Transversity quark distributions in a covariant quark-diquark model

    Energy Technology Data Exchange (ETDEWEB)

    Cloet, I.C. [Physics Division, Argonne National Laboratory, Argonne, IL 60439-4843 (United States)], E-mail: icloet@anl.gov; Bentz, W. [Department of Physics, School of Science, Tokai University, Hiratsuka-shi, Kanagawa 259-1292 (Japan)], E-mail: bentz@keyaki.cc.u-tokai.ac.jp; Thomas, A.W. [Jefferson Lab, 12000 Jefferson Avenue, Newport News, VA 23606 (United States); College of William and Mary, Williamsburg, VA 23187 (United States)], E-mail: awthomas@jlab.org

    2008-01-17

    Transversity quark light-cone momentum distributions are calculated for the nucleon. We utilize a modified Nambu-Jona-Lasinio model in which confinement is simulated by eliminating unphysical thresholds for nucleon decay into quarks. The nucleon bound state is obtained by solving the relativistic Faddeev equation in the quark-diquark approximation, where both scalar and axial-vector diquark channels are included. Particular attention is paid to comparing our results with the recent experimental extraction of the transversity distributions by Anselmino et al. We also compare our transversity results with earlier spin-independent and helicity quark distributions calculated in the same approach.

  3. On the possibility of constructing covariant chromomagnetic field models

    International Nuclear Information System (INIS)

    Cabo, A.; Penaranda, S.; Martinez, R.

    1995-03-01

    Expressions for SO(4)-invariant Euclidean QCD generating functionals are introduced which should produce non-vanishing gluon condensates. Their investigation is started here by initially considering the loop expansion of the corresponding effective action, searching for a description differing from the usual perturbation theory. At this level, we consider special free propagators showing a sort of off-diagonal long-range order. The calculation of the polarization tensor leads to a gluon mass term which is proportional to the square root of the (also finite) value of the gluon condensate ⟨G²⟩. The summation of all the one-loop contributions to the energy having only mass insertions indicates the spontaneous generation of the condensate from the perturbative ground state, in a way resembling the similar effect in the chromomagnetic field models. This initial inspection suggests the need for a closer investigation, which will be considered elsewhere. (author). 22 refs

  4. New numerical approaches for modeling thermochemical convection in a compositionally stratified fluid

    Science.gov (United States)

    Puckett, Elbridge Gerry; Turcotte, Donald L.; He, Ying; Lokavarapu, Harsha; Robey, Jonathan M.; Kellogg, Louise H.

    2018-03-01

    Geochemical observations of mantle-derived rocks favor a nearly homogeneous upper mantle, the source of mid-ocean ridge basalts (MORB), and heterogeneous lower mantle regions. Plumes that generate ocean island basalts are thought to sample the lower mantle regions and exhibit more heterogeneity than MORB. These regions have been associated with lower mantle structures known as large low shear velocity provinces (LLSVPs) below Africa and the South Pacific. The isolation of these regions is attributed to compositional differences and density stratification that, consequently, have been the subject of computational and laboratory modeling designed to determine the parameter regime in which layering is stable and to understand how layering evolves. Mathematical models of persistent compositional interfaces in the Earth's mantle may be inherently unstable, at least in some regions of the parameter space relevant to the mantle. Computing approximations to solutions of such problems presents severe challenges, even to state-of-the-art numerical methods. Some numerical algorithms for modeling the interface between distinct compositions smear the interface at the boundary between compositions, such as methods that add numerical diffusion or 'artificial viscosity' in order to stabilize the algorithm. We present two new algorithms for maintaining high-resolution and sharp computational boundaries in computations of these types of problems: a discontinuous Galerkin method with a bound-preserving limiter and a Volume-of-Fluid interface tracking algorithm. We compare these new methods with two approaches widely used for modeling the advection of two distinct thermally driven compositional fields in mantle convection computations: a high-order accurate finite element advection algorithm with entropy viscosity and a particle method that carries a scalar quantity representing the location of each compositional field. All four algorithms are implemented in the open source finite

  5. Two-phase pressurized thermal shock investigations using a 3D two-fluid modeling of stratified flow with condensation

    International Nuclear Information System (INIS)

    Yao, W.; Coste, P.; Bestion, D.; Boucker, M.

    2003-01-01

    In this paper, a local 3D two-fluid model for a turbulent stratified flow with/without condensation, which can be used to predict two-phase pressurized thermal shock, is presented. A modified turbulent K-ε model is proposed with turbulence production induced by interfacial friction. A model of interfacial friction based on an interfacial sublayer concept and three interfacial heat transfer models, namely, a model based on the small-eddies-controlled surface renewal concept (HDM, Hughes and Duffey, 1991), a model based on the asymptotic behavior of the eddy viscosity (EVM), and a model based on the interfacial sublayer concept (ISM), are implemented into a preliminary version of the NEPTUNE code based on the 3D module of the CATHARE code. As a first step toward applying the above models to predict two-phase thermal shock, the models are evaluated by comparing calculated profiles with several experiments: a turbulent air-water stratified flow without interfacial heat transfer; a turbulent steam-water stratified flow with condensation; and turbulence induced by the impact of a water jet in a water pool. The prediction results agree well with the experimental data. In addition, the comparison of the three interfacial heat transfer models shows that EVM and ISM gave better predictions, while HDM highly overestimated the interfacial heat transfer compared to the experimental data of a steam-water stratified flow.

  6. TRANSPOSABLE REGULARIZED COVARIANCE MODELS WITH AN APPLICATION TO MISSING DATA IMPUTATION.

    Science.gov (United States)

    Allen, Genevera I; Tibshirani, Robert

    2010-06-01

    Missing data estimation is an important challenge with high-dimensional data arranged in the form of a matrix. Typically this data matrix is transposable, meaning that either the rows, columns or both can be treated as features. To model transposable data, we present a modification of the matrix-variate normal, the mean-restricted matrix-variate normal, in which the rows and columns each have a separate mean vector and covariance matrix. By placing additive penalties on the inverse covariance matrices of the rows and columns, these so-called transposable regularized covariance models allow for maximum likelihood estimation of the mean and non-singular covariance matrices. Using these models, we formulate EM-type algorithms for missing data imputation in both the multivariate and transposable frameworks. We present theoretical results exploiting the structure of our transposable models that allow these models and imputation methods to be applied to high-dimensional data. Simulations and results on microarray data and the Netflix data show that these imputation techniques often outperform existing methods and offer a greater degree of flexibility.
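For intuition, under a matrix-variate normal with row covariance Σ and column covariance Δ, the covariance of the vectorized data matrix is the Kronecker product Σ ⊗ Δ; the mean-restricted variant above additionally gives rows and columns separate mean vectors. A generic sketch of the Kronecker structure (not the authors' code; the matrices are made up):

```python
def kron(A, B):
    """Kronecker product of two matrices given as nested lists:
    entry ((i, k), (j, l)) equals A[i][j] * B[k][l]."""
    m, n = len(A), len(A[0])
    p, q = len(B), len(B[0])
    return [[A[i][j] * B[k][l] for j in range(n) for l in range(q)]
            for i in range(m) for k in range(p)]

# Cov(vec(X)) for a 2x2 matrix X with row covariance Sigma, column covariance Delta
Sigma = [[2.0, 1.0], [1.0, 2.0]]
Delta = [[1.0, 0.5], [0.5, 1.0]]
C = kron(Sigma, Delta)   # 4x4 covariance of the vectorized matrix
```

Penalizing the inverses of Σ and Δ separately, as the record describes, keeps both factors non-singular even when the matrix has more rows or columns than observations.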

  7. A Flexible Spatio-Temporal Model for Air Pollution with Spatial and Spatio-Temporal Covariates

    OpenAIRE

    Lindström, Johan; Szpiro, Adam A; Sampson, Paul D; Oron, Assaf P; Richards, Mark; Larson, Tim V; Sheppard, Lianne

    2013-01-01

    The development of models that provide accurate spatio-temporal predictions of ambient air pollution at small spatial scales is of great importance for the assessment of potential health effects of air pollution. Here we present a spatio-temporal framework that predicts ambient air pollution by combining data from several different monitoring networks and deterministic air pollution model(s) with geographic information system (GIS) covariates. The model presented in this paper has been implem...

  8. Modeling the Thickness of Perennial Ice Covers on Stratified Lakes of the Taylor Valley, Antarctica

    Science.gov (United States)

    Obryk, M. K.; Doran, P. T.; Hicks, J. A.; McKay, C. P.; Priscu, J. C.

    2016-01-01

    A one-dimensional ice cover model was developed to predict and constrain drivers of long-term ice thickness trends in chemically stratified lakes of Taylor Valley, Antarctica. The model is driven by surface radiative heat fluxes and heat fluxes from the underlying water column. The model successfully reproduced 16 years (between 1996 and 2012) of ice thickness changes for the west lobe of Lake Bonney (average ice thickness = 3.53 m; RMSE = 0.09 m, n = 118) and Lake Fryxell (average ice thickness = 4.22 m; RMSE = 0.21 m, n = 128). Long-term ice thickness trends require coupling with the thermal structure of the water column. The heat stored within the temperature maximum of lakes exceeding a liquid water column depth of 20 m can either impede or facilitate ice thickness change depending on the predominant climatic trend (temperature cooling or warming). As such, shallow (< 20 m deep water columns) perennially ice-covered lakes without deep temperature maxima are more sensitive indicators of climate change. The long-term ice thickness trends are a result of surface energy flux and heat flux from the deep temperature maximum in the water column, the latter of which results from absorbed solar radiation.

  9. Evaluation of a Stratified National Breast Screening Program in the United Kingdom: An Early Model-Based Cost-Effectiveness Analysis

    NARCIS (Netherlands)

    Gray, Ewan; Donten, Anna; Karssemeijer, Nico; van Gils, Carla; Evans, D. Gareth R.; Astley, Sue; Payne, Katherine

    Objectives: To identify the incremental costs and consequences of stratified national breast screening programs (stratified NBSPs) and drivers of relative cost-effectiveness. Methods: A decision-analytic model (discrete event simulation) was conceptualized to represent four stratified NBSPs (risk 1,

  10. Evaluation of a Stratified National Breast Screening Program in the United Kingdom: An Early Model-Based Cost-Effectiveness Analysis

    NARCIS (Netherlands)

    Gray, E.; Donten, A.; Karssemeijer, N.; Gils, C. van; Evans, D.G.; Astley, S.; Payne, K.

    2017-01-01

    OBJECTIVES: To identify the incremental costs and consequences of stratified national breast screening programs (stratified NBSPs) and drivers of relative cost-effectiveness. METHODS: A decision-analytic model (discrete event simulation) was conceptualized to represent four stratified NBSPs (risk 1,

  11. Integrating lysimeter drainage and eddy covariance flux measurements in a groundwater recharge model

    DEFF Research Database (Denmark)

    Vasquez, Vicente; Thomsen, Anton Gårde; Iversen, Bo Vangsø

    2015-01-01

    Field scale water balance is difficult to characterize because controls exerted by soils and vegetation are mostly inferred from local scale measurements with relatively small support volumes. Eddy covariance flux and lysimeters have been used to infer and evaluate field scale water balances... because they have larger footprint areas than local soil moisture measurements. This study quantifies heterogeneity of soil deep drainage (D) in four 12.5 m2 repacked lysimeters, compares evapotranspiration from eddy covariance (ETEC) and mass balance residuals of lysimeters (ETwbLys), and models D...

  12. Meson form factors and covariant three-dimensional formulation of composite model

    International Nuclear Information System (INIS)

    Skachkov, N.B.; Solovtsov, I.L.

    1978-01-01

    An approach is developed which is applied in the framework of the relativistic quark model to obtain explicit expressions for meson form factors in terms of covariant wave functions of the two-quark system. These wave functions obey the two-particle quasipotential equation in which the relative motion of the quarks is singled out in a covariant way. The exact form of the wave functions is found by passing to the relativistic configurational representation, using harmonic analysis on the Lorentz group instead of the usual Fourier expansion, and then solving the resulting relativistic difference equation. The expressions found for the form factors are transformed into a three-dimensional covariant form which is a direct geometrical relativistic generalization of the analogous expressions of nonrelativistic quantum mechanics, and provides the decrease of the meson form factor according to the F_π(t) ∼ t⁻¹ law as −t → ∞, in the Coulomb field.

  13. Bias-Corrected Estimation of Noncentrality Parameters of Covariance Structure Models

    Science.gov (United States)

    Raykov, Tenko

    2005-01-01

    A bias-corrected estimator of noncentrality parameters of covariance structure models is discussed. The approach represents an application of the bootstrap methodology for purposes of bias correction, and utilizes the relation between the average of resampled conventional noncentrality parameter estimates and their sample counterpart. The…

  14. Promoting Modeling and Covariational Reasoning among Secondary School Students in the Context of Big Data

    Science.gov (United States)

    Gil, Einat; Gibbs, Alison L.

    2017-01-01

    In this study, we follow students' modeling and covariational reasoning in the context of learning about big data. A three-week unit was designed to allow 12th grade students in a mathematics course to explore big and mid-size data using concepts such as trend and scatter to describe the relationships between variables in multivariate settings.…

  15. Meson form factors and covariant three-dimensional formulation of the composite model

    International Nuclear Information System (INIS)

    Skachkov, N.B.; Solovtsov, I.L.

    1979-01-01

    A formalism is developed which allows, within the relativistic quark model, explicit expressions for meson form factors to be found in terms of the wave functions of the two-quark system, which obey the covariant two-particle quasipotential equation. The exact form of the wave functions is obtained by passing to the relativistic configurational representation. As an example, the quark Coulomb interaction is considered.

  16. Quality analysis applied on eddy covariance measurements at complex forest sites using footprint modelling

    Czech Academy of Sciences Publication Activity Database

    Rebmann, C.; Göckede, M.; Foken, T.; Aubinet, M.; Aurela, M.; Berbigier, P.; Bernhofer, C.; Buchmann, N.; Carrara, A.; Cescatti, A.; Ceulemans, R.; Clement, R.; Elbers, J. A.; Granier, A.; Grünwald, T.; Guyon, D.; Havránková, Kateřina; Heinesch, B.; Knohl, A.; Laurila, T.; Longdoz, B.; Marcolla, B.; Markkanen, T.; Miglietta, F.; Moncrieff, J.; Montagnani, L.; Moors, E.; Nardino, M.; Ourcival, J.-M.; Rambal, S.; Rannik, Ü.; Rotenberg, E.; Sedlák, Pavel; Unterhuber, G.; Vesala, T.; Yakir, D.

    2005-01-01

    Roč. 80, - (2005), s. 121-141 ISSN 0177-798X Grant - others:Carboeuroflux(XE) EVK-2-CT-1999-00032 Institutional research plan: CEZ:AV0Z30420517; CEZ:AV0Z6087904 Keywords : Eddy covariance * Quality assurance * Quality control * Footprint modelling * Heterogeneity Subject RIV: DG - Athmosphere Sciences, Meteorology Impact factor: 1.295, year: 2005

  17. Deriving Genomic Breeding Values for Residual Feed Intake from Covariance Functions of Random Regression Models

    DEFF Research Database (Denmark)

    Strathe, Anders B; Mark, Thomas; Nielsen, Bjarne

    2014-01-01

    Random regression models were used to estimate covariance functions between cumulated feed intake (CFI) and body weight (BW) in 8424 Danish Duroc pigs. Random regressions on second order Legendre polynomials of age were used to describe genetic and permanent environmental curves in BW and CFI...
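The covariance-function idea can be sketched directly: with random regressions on Legendre polynomials of standardized age, the genetic covariance between two ages is a quadratic form in the coefficient covariance matrix. A toy illustration using plain (unnormalized) Legendre polynomials; the matrix G below is made up, and actual analyses typically use normalized polynomials:

```python
def legendre_basis(x):
    """Legendre polynomials P0, P1, P2 at standardized age x in [-1, 1]."""
    return [1.0, x, 0.5 * (3.0 * x * x - 1.0)]

def covariance_between_ages(G, a1, a2):
    """Genetic covariance between trait values at standardized ages a1 and a2,
    given the covariance matrix G of the random regression coefficients."""
    p1, p2 = legendre_basis(a1), legendre_basis(a2)
    return sum(p1[i] * G[i][j] * p2[j]
               for i in range(3) for j in range(3))
```

Breeding values for a derived trait such as residual feed intake are linear functions of the same coefficients, so their (co)variances follow from G by the same kind of quadratic form.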

  18. Lagged PM2.5 effects in mortality time series: Critical impact of covariate model

    Science.gov (United States)

    The two most common approaches to modeling the effects of air pollution on mortality are the Harvard and the Johns Hopkins (NMMAPS) approaches. These two approaches, which use different sets of covariates, result in dissimilar estimates of the effect of lagged fine particulate ma...

  19. Exact sampling of the unobserved covariates in Bayesian spline models for measurement error problems.

    Science.gov (United States)

    Bhadra, Anindya; Carroll, Raymond J

    2016-07-01

    In truncated polynomial spline or B-spline models where the covariates are measured with error, a fully Bayesian approach to model fitting requires the covariates and model parameters to be sampled at every Markov chain Monte Carlo iteration. Sampling the unobserved covariates poses a major computational problem and usually Gibbs sampling is not possible. This forces the practitioner to use a Metropolis-Hastings step which might suffer from unacceptable performance due to poor mixing and might require careful tuning. In this article we show that for truncated polynomial spline or B-spline models of degree equal to one, the complete conditional distribution of the covariates measured with error is available explicitly as a mixture of double-truncated normals, thereby enabling a Gibbs sampling scheme. We demonstrate via a simulation study that our technique performs favorably in terms of computational efficiency and statistical performance. Our results indicate up to 62% and 54% increases in mean integrated squared error efficiency compared to existing alternatives when using truncated polynomial splines and B-splines, respectively. Furthermore, there is evidence that the gain in efficiency increases with the measurement error variance, indicating the proposed method is a particularly valuable tool for challenging applications that present high measurement error. We conclude with a demonstration on a nutritional epidemiology data set from the NIH-AARP study and by pointing out some possible extensions of the current work.
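The computational payoff described above is that each Gibbs update only requires draws from (a mixture of) double-truncated normals, which are cheap to generate. A generic inverse-CDF sketch using the standard library (not the authors' implementation; the parameters in the usage are arbitrary):

```python
import random
from statistics import NormalDist

def rtruncnorm(mu, sigma, lo, hi, rng=random):
    """Draw from N(mu, sigma^2) truncated to [lo, hi] by inverting the CDF."""
    nd = NormalDist(mu, sigma)
    a, b = nd.cdf(lo), nd.cdf(hi)
    u = a + (b - a) * rng.random()   # uniform over the truncated CDF mass
    return nd.inv_cdf(u)
```

A mixture draw then just picks a component by its weight and calls `rtruncnorm` with that component's parameters.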

  20. Bayesian semiparametric mixture Tobit models with left censoring, skewness, and covariate measurement errors.

    Science.gov (United States)

    Dagne, Getachew A; Huang, Yangxin

    2013-09-30

    Common problems to many longitudinal HIV/AIDS, cancer, vaccine, and environmental exposure studies are the presence of a lower limit of quantification of an outcome with skewness and time-varying covariates with measurement errors. There has been relatively little work published simultaneously dealing with these features of longitudinal data. In particular, left-censored data falling below a limit of detection may sometimes have a proportion larger than expected under a usually assumed log-normal distribution. In such cases, alternative models, which can account for a high proportion of censored data, should be considered. In this article, we present an extension of the Tobit model that incorporates a mixture of true undetectable observations and those values from a skew-normal distribution for an outcome with possible left censoring and skewness, and covariates with substantial measurement error. To quantify the covariate process, we offer a flexible nonparametric mixed-effects model within the Tobit framework. A Bayesian modeling approach is used to assess the simultaneous impact of left censoring, skewness, and measurement error in covariates on inference. The proposed methods are illustrated using real data from an AIDS clinical study. Copyright © 2013 John Wiley & Sons, Ltd.

  1. Object Tracking Using Adaptive Covariance Descriptor and Clustering-Based Model Updating for Visual Surveillance

    Directory of Open Access Journals (Sweden)

    Lei Qin

    2014-05-01

    Full Text Available We propose a novel approach for tracking an arbitrary object in video sequences for visual surveillance. The first contribution of this work is an automatic feature extraction method that is able to extract compact discriminative features from a feature pool before computing the region covariance descriptor. As the feature extraction method is adaptive to a specific object of interest, we refer to the region covariance descriptor computed using the extracted features as the adaptive covariance descriptor. The second contribution is to propose a weakly supervised method for updating the object appearance model during tracking. The method performs a mean-shift clustering procedure among the tracking result samples accumulated during a period of time and selects a group of reliable samples for updating the object appearance model. As such, the object appearance model is kept up-to-date and is prevented from contamination even in case of tracking mistakes. We conducted comparative experiments on real-world video sequences, which confirmed the effectiveness of the proposed approaches. The tracking system that integrates the adaptive covariance descriptor and the clustering-based model updating method accomplished stable object tracking on challenging video sequences.

  2. Characterization and modeling of turbidity density plume induced into stratified reservoir by flood runoffs.

    Science.gov (United States)

    Chung, S W; Lee, H S

    2009-01-01

    In monsoon climate areas, turbidity flows typically induced by flood runoffs cause numerous environmental impacts such as impairment of fish habitat and river attraction, and degradation of water supply efficiency. This study aimed to characterize the physical dynamics of a turbidity plume induced into a stratified reservoir using field monitoring and numerical simulations, and to assess the effect of different withdrawal scenarios on the control of downstream water quality. Three different turbidity models (RUN1, RUN2, RUN3) were developed based on a two-dimensional laterally averaged hydrodynamic and transport model, and validated against field data. RUN1 assumed a constant settling velocity of suspended sediment, while RUN2 estimated the settling velocity as a function of particle size, density, and water temperature to account for vertical stratification. RUN3 included a lumped first-order turbidity attenuation rate taking into account the effects of particle aggregation and degradable organic particles. RUN3 showed the best performance in replicating the observed variations of in-reservoir and release turbidity. Numerical experiments implemented to assess the effectiveness of different withdrawal depths showed that alterations of the withdrawal depth can modify the pathway and flow regimes of the turbidity plume, but its effect on the control of release water quality could be trivial.
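RUN2's dependence of settling velocity on particle size, density, and water temperature can be illustrated with Stokes' law, where temperature enters through the dynamic viscosity of water. This is a hedged sketch: the viscosity fit and the use of Stokes' law are illustrative assumptions, not necessarily the paper's exact formulation.

```python
def water_viscosity(temp_c):
    """Approximate dynamic viscosity of water (Pa*s) at temp_c (deg C),
    via a common empirical fit (illustrative)."""
    return 2.414e-5 * 10.0 ** (247.8 / (temp_c + 133.15))

def stokes_settling_velocity(d, rho_p, temp_c, rho_w=1000.0, g=9.81):
    """Terminal settling velocity (m/s) of a small sphere of diameter d (m)
    and density rho_p (kg/m^3) in water at temp_c (deg C), by Stokes' law."""
    mu = water_viscosity(temp_c)
    return g * (rho_p - rho_w) * d ** 2 / (18.0 * mu)
```

Colder, deeper water is more viscous, so identical particles settle more slowly there, which is one route by which thermal stratification shapes the plume.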

  3. Estimation of group means when adjusting for covariates in generalized linear models.

    Science.gov (United States)

    Qu, Yongming; Luo, Junxiang

    2015-01-01

    Generalized linear models are commonly used to analyze categorical data such as binary, count, and ordinal outcomes. Adjusting for important prognostic factors or baseline covariates in generalized linear models may improve the estimation efficiency. The model-based mean for a treatment group produced by most software packages estimates the response at the mean covariate, not the mean response for this treatment group for the studied population. Although this is not an issue for linear models, the model-based group mean estimates in generalized linear models could be seriously biased for the true group means. We propose a new method to estimate the group mean consistently with the corresponding variance estimation. Simulation showed the proposed method produces an unbiased estimator for the group means and provided the correct coverage probability. The proposed method was applied to analyze hypoglycemia data from clinical trials in diabetes. Copyright © 2014 John Wiley & Sons, Ltd.
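The bias described above follows from Jensen's inequality: with a nonlinear inverse link, the response evaluated at the mean covariate differs from the mean response. A toy logistic illustration (the coefficients and covariate values are made up):

```python
import math

def expit(z):
    """Inverse logit link."""
    return 1.0 / (1.0 + math.exp(-z))

# hypothetical fitted logistic model: logit(p) = b0 + b1 * x
b0, b1 = -1.0, 2.0
x = [0.0, 0.2, 0.9, 1.5, 3.0]          # covariate values in the study sample

# "model-based" group mean: response evaluated at the mean covariate
at_mean_covariate = expit(b0 + b1 * sum(x) / len(x))

# consistent group mean: average of predicted responses over the covariates
mean_response = sum(expit(b0 + b1 * xi) for xi in x) / len(x)
```

For a linear model the two quantities coincide; here they do not, which is the bias the proposed estimator is designed to avoid.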

  4. SOMAR-LES: A framework for multi-scale modeling of turbulent stratified oceanic flows

    Science.gov (United States)

    Chalamalla, Vamsi K.; Santilli, Edward; Scotti, Alberto; Jalali, Masoud; Sarkar, Sutanu

    2017-12-01

    A new multi-scale modeling technique, SOMAR-LES, is presented in this paper. Localized grid refinement gives SOMAR (the Stratified Ocean Model with Adaptive Resolution) access to small scales of the flow which are normally inaccessible to general circulation models (GCMs). SOMAR-LES drives a LES (Large Eddy Simulation) on SOMAR's finest grids, forced with large scale forcing from the coarser grids. Three-dimensional simulations of internal tide generation, propagation and scattering are performed to demonstrate this multi-scale modeling technique. In the case of internal tide generation at a two-dimensional bathymetry, SOMAR-LES is able to balance the baroclinic energy budget and accurately model turbulence losses at only 10% of the computational cost required by a non-adaptive solver running at SOMAR-LES's fine grid resolution. This relative cost is significantly reduced in situations with intermittent turbulence or where the location of the turbulence is not known a priori because SOMAR-LES does not require persistent, global, high resolution. To illustrate this point, we consider a three-dimensional bathymetry with grids adaptively refined along the tidally generated internal waves to capture remote mixing in regions of wave focusing. The computational cost in this case is found to be nearly 25 times smaller than that of a non-adaptive solver at comparable resolution. In the final test case, we consider the scattering of a mode-1 internal wave at an isolated two-dimensional and three-dimensional topography, and we compare the results with Legg (2014) numerical experiments. We find good agreement with theoretical estimates. SOMAR-LES is less dissipative than the closure scheme employed by Legg (2014) near the bathymetry. Depending on the flow configuration and resolution employed, a reduction of more than an order of magnitude in computational costs is expected, relative to traditional existing solvers.

  5. Model selection for marginal regression analysis of longitudinal data with missing observations and covariate measurement error.

    Science.gov (United States)

    Shen, Chung-Wei; Chen, Yi-Hau

    2015-10-01

    Missing observations and covariate measurement error commonly arise in longitudinal data. However, existing methods for model selection in marginal regression analysis of longitudinal data fail to address the potential bias resulting from these issues. To tackle this problem, we propose a new model selection criterion, the Generalized Longitudinal Information Criterion, which is based on an approximately unbiased estimator for the expected quadratic error of a considered marginal model, accounting for both data missingness and covariate measurement error. The simulation results reveal that the proposed method performs quite well in the presence of missing data and covariate measurement error. In contrast, naive procedures that do not take care of such complexity in the data may perform quite poorly. The proposed method is applied to data from the Taiwan Longitudinal Study on Aging to assess the relationship of depression with health and social status in the elderly, accommodating measurement error in the covariate as well as missing observations. © The Author 2015. Published by Oxford University Press. All rights reserved. For permissions, please e-mail: journals.permissions@oup.com.

  6. Yield response of winter wheat cultivars to environments modeled by different variance-covariance structures in linear mixed models

    Energy Technology Data Exchange (ETDEWEB)

    Studnicki, M.; Mądry, W.; Noras, K.; Wójcik-Gront, E.; Gacek, E.

    2016-11-01

    The main objectives of multi-environmental trials (METs) are to assess cultivar adaptation patterns under different environmental conditions and to investigate genotype by environment (G×E) interactions. Linear mixed models (LMMs) with more complex variance-covariance structures have become recognized and widely used for analyzing METs data. Best practice in METs analysis is to carry out a comparison of competing models with different variance-covariance structures. Improperly chosen variance-covariance structures may lead to biased estimation of means, resulting in incorrect conclusions. In this work we focused on the adaptive response of cultivars to the environments, modeled by LMMs with different variance-covariance structures. We identified possible limitations of inference when using an inadequate variance-covariance structure. In the presented study we used a dataset on grain yield for 63 winter wheat cultivars, evaluated across 18 locations during three growing seasons (2008/2009-2010/2011), from the Polish Post-registration Variety Testing System. For the evaluation of variance-covariance structures and the description of cultivar adaptation to environments, we calculated adjusted means for the combination of cultivar and location in models with different variance-covariance structures. We concluded that in order to fully describe cultivars' adaptive patterns, modelers should use the unrestricted variance-covariance structure. The restricted compound symmetry structure may interfere with proper interpretation of cultivars' adaptive patterns. We found that the factor-analytic structure is also a good tool to describe cultivar response to environments, and it can be successfully used in METs data after determining the optimal component number for each dataset. (Author)
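The structures compared above can be made concrete by constructing the implied environment-by-environment covariance matrices: compound symmetry forces one common covariance for all pairs of environments, whereas a factor-analytic structure lets them differ. A generic illustration (not the authors' code; a single factor for simplicity):

```python
def compound_symmetry(k, sigma2, rho):
    """Compound-symmetry covariance: common variance, one shared correlation."""
    return [[sigma2 if i == j else rho * sigma2 for j in range(k)]
            for i in range(k)]

def factor_analytic(loadings, psi):
    """Single-factor factor-analytic covariance: lambda*lambda' + diag(psi)."""
    k = len(loadings)
    return [[loadings[i] * loadings[j] + (psi[i] if i == j else 0.0)
             for j in range(k)]
            for i in range(k)]
```

With loadings (1.0, 2.0, 0.5) the off-diagonal covariances are heterogeneous (2.0, 0.5, 1.0), something compound symmetry cannot represent; the unrestricted structure simply estimates every entry freely.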

  7. Covariant quantization of infinite spin particle models, and higher order gauge theories

    International Nuclear Information System (INIS)

    Edgren, Ludde; Marnelius, Robert

    2006-01-01

    Further properties of a recently proposed higher order infinite spin particle model are derived. Infinitely many classically equivalent but different Hamiltonian formulations are shown to exist. This leads to a condition of uniqueness in the quantization process. A consistent covariant quantization is shown to exist. Also a recently proposed supersymmetric version for half-odd integer spins is quantized. A general algorithm to derive gauge invariances of higher order Lagrangians is given and applied to the infinite spin particle model, and to a new higher order model for a spinning particle which is proposed here, as well as to a previously given higher order rigid particle model. The latter two models are also covariantly quantized.

  8. Data Fusion of Gridded Snow Products Enhanced with Terrain Covariates and a Simple Snow Model

    Science.gov (United States)

    Snauffer, A. M.; Hsieh, W. W.; Cannon, A. J.

    2017-12-01

    Hydrologic planning requires accurate estimates of regional snow water equivalent (SWE), particularly areas with hydrologic regimes dominated by spring melt. While numerous gridded data products provide such estimates, accurate representations are particularly challenging under conditions of mountainous terrain, heavy forest cover and large snow accumulations, contexts which in many ways define the province of British Columbia (BC), Canada. One promising avenue of improving SWE estimates is a data fusion approach which combines field observations with gridded SWE products and relevant covariates. A base artificial neural network (ANN) was constructed using three of the best performing gridded SWE products over BC (ERA-Interim/Land, MERRA and GLDAS-2) and simple location and time covariates. This base ANN was then enhanced to include terrain covariates (slope, aspect and Terrain Roughness Index, TRI) as well as a simple 1-layer energy balance snow model driven by gridded bias-corrected ANUSPLIN temperature and precipitation values. The ANN enhanced with all aforementioned covariates performed better than the base ANN, but most of the skill improvement was attributable to the snow model with very little contribution from the terrain covariates. The enhanced ANN improved station mean absolute error (MAE) by an average of 53% relative to the composing gridded products over the province. Interannual peak SWE correlation coefficient was found to be 0.78, an improvement of 0.05 to 0.18 over the composing products. This nonlinear approach outperformed a comparable multiple linear regression (MLR) model by 22% in MAE and 0.04 in interannual correlation. The enhanced ANN has also been shown to produce better estimates than the Variable Infiltration Capacity (VIC) hydrologic model calibrated and run for four BC watersheds, improving MAE by 22% and correlation by 0.05.
The performance improvements of the enhanced ANN are statistically significant at the 5% level across the province.
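The multiple linear regression (MLR) fusion baseline that the abstract compares against can be sketched as follows. The snow products, biases, and noise levels below are synthetic stand-ins (not the BC station data or the actual ERA-Interim/Land, MERRA and GLDAS-2 grids); the point is only that a least-squares combination of several imperfect products beats each product alone.

```python
import numpy as np

rng = np.random.default_rng(42)

# Synthetic "true" station SWE (mm) and three gridded products, each a
# biased, noisy view of the truth (hypothetical values for illustration).
n = 2000
swe_true = rng.gamma(shape=4.0, scale=150.0, size=n)          # station SWE
products = np.column_stack([
    0.7 * swe_true + rng.normal(0, 120, n),                   # low-biased product
    1.2 * swe_true + rng.normal(0, 150, n),                   # high-biased product
    0.9 * swe_true + rng.normal(0, 100, n),                   # mildly biased product
])

# MLR fusion: least-squares weights (plus intercept) that best
# reproduce the station observations.
X = np.column_stack([np.ones(n), products])
beta, *_ = np.linalg.lstsq(X, swe_true, rcond=None)
fused = X @ beta

mae = lambda est: np.mean(np.abs(est - swe_true))
mae_products = [mae(products[:, j]) for j in range(3)]
mae_fused = mae(fused)
print(f"product MAEs: {np.round(mae_products, 1)}, fused MAE: {mae_fused:.1f}")
```

The study's ANN replaces the linear combination with a nonlinear one and adds terrain and snow-model covariates as extra input columns.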

  9. A stratified percolation model for saturated and unsaturated flow through natural fractures

    International Nuclear Information System (INIS)

    Pyrak-Nolte, L.J.

    1990-01-01

    The geometry of the asperities of contact between the two surfaces of a fracture and of the adjacent void spaces determines fluid flow through a fracture and the mechanical deformation across a fracture. Heuristically we have developed a stratified continuum percolation model to describe this geometry based on a fractal construction that includes scale invariance and correlation of void apertures. Deformation under stress is analyzed using conservation of rock volume to correct for asperity interpenetration. Single phase flow is analyzed using a critical path along which the principal resistance is a result of laminar flow across the critical neck in this path. Results show that flow decreases with apparent aperture raised to a variable power greater than cubic, as is observed in flow experiments on natural fractures. For two phases, flow of the non-wetting phase is likewise governed by the critical neck along the critical path of largest aperture but flow of the wetting phase is governed by tortuosity. 17 refs., 10 figs

  10. Robust entry guidance using linear covariance-based model predictive control

    Directory of Open Access Journals (Sweden)

    Jianjun Luo

    2017-02-01

Full Text Available For atmospheric entry vehicles, guidance design can be accomplished by solving an optimization problem using optimal control theory. However, traditional design methods generally focus on the nominal performance and do not consider robustness in the design process. This paper proposes a linear covariance-based model predictive control method for robust entry guidance design. Firstly, linear covariance analysis is employed to incorporate robustness directly into the guidance design. The closed-loop covariance with the feedback-updated control command is first formulated to provide the expected errors of the nominal state variables in the presence of uncertainties. Then, the closed-loop covariance is innovatively used as a component of the cost function to guarantee robustness by reducing the guidance law's sensitivity to uncertainties. After that, model predictive control is used to solve the optimization problem, and the control commands (bank angles) are calculated. Finally, a series of simulations for different missions demonstrates the high precision of the method and its robustness with respect to initial perturbations as well as uncertainties in the entry process. The 3σ confidence-region results in the presence of uncertainties show that the robustness of the guidance has been improved, and the errors of the state variables are decreased by approximately 35%.
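The closed-loop covariance that underlies linear covariance analysis can be sketched with a discrete-time propagation. The dynamics, gain and noise below are illustrative (a double-integrator-like toy system, not the entry-vehicle model); the sketch shows why including the feedback term shrinks the expected state-error covariance.

```python
import numpy as np

# Linear error dynamics x_{k+1} = A x_k + B u_k + w_k with feedback
# u_k = -K x_k and process-noise covariance Q (all values illustrative).
A = np.array([[1.0, 0.1],
              [0.0, 1.0]])          # double-integrator-like dynamics
B = np.array([[0.0],
              [0.1]])
K = np.array([[4.0, 6.0]])          # stabilizing feedback gain
Q = 1e-4 * np.eye(2)                # process-noise covariance
P_open = np.diag([0.5, 0.5])        # initial state-error covariance
P_closed = np.diag([0.5, 0.5])

# Propagate the covariance with and without the feedback update:
#   open loop:   P <- A P A^T + Q
#   closed loop: P <- (A - B K) P (A - B K)^T + Q
Acl = A - B @ K
for _ in range(200):
    P_open = A @ P_open @ A.T + Q
    P_closed = Acl @ P_closed @ Acl.T + Q

print("open-loop trace:  ", np.trace(P_open))
print("closed-loop trace:", np.trace(P_closed))
```

In the paper this closed-loop covariance enters the MPC cost function; here it is only propagated to show the robustness effect of the feedback term.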

  11. Quark model with chiral-symmetry breaking and confinement in the Covariant Spectator Theory

    Energy Technology Data Exchange (ETDEWEB)

Biernat, Elmer P. [CFTP, Instituto Superior Técnico, Universidade de Lisboa, 1049-001 Lisboa, Portugal]; Pena, Maria Teresa [CFTP, Instituto Superior Técnico, Universidade de Lisboa, 1049-001 Lisboa, Portugal; Departamento de Física, Instituto Superior Técnico, Universidade de Lisboa, 1049-001 Lisboa, Portugal]; Ribeiro, José Emílio F. [CeFEMA, Instituto Superior Técnico, Universidade de Lisboa, 1049-001 Lisboa, Portugal]; Stadler, Alfred [Departamento de Física, Universidade de Évora, 7000-671 Évora, Portugal]; Gross, Franz L. [Thomas Jefferson National Accelerator Facility (TJNAF), Newport News, VA (United States)]

    2016-03-01

We propose a model for the quark-antiquark interaction in Minkowski space using the Covariant Spectator Theory. We show that with an equal-weighted scalar-pseudoscalar structure for the confining part of our interaction kernel the axial-vector Ward-Takahashi identity is preserved, and our model complies with the Adler-zero constraint for π-π scattering imposed by chiral symmetry.

  12. Applications of Multidimensional Item Response Theory Models with Covariates to Longitudinal Test Data. Research Report. ETS RR-16-21

    Science.gov (United States)

    Fu, Jianbin

    2016-01-01

    The multidimensional item response theory (MIRT) models with covariates proposed by Haberman and implemented in the "mirt" program provide a flexible way to analyze data based on item response theory. In this report, we discuss applications of the MIRT models with covariates to longitudinal test data to measure skill differences at the…

  13. Instabilities of continuously stratified zonal equatorial jets in a periodic channel model

    Directory of Open Access Journals (Sweden)

    S. Masina

    2002-05-01

Full Text Available Several numerical experiments are performed in a nonlinear, multi-level periodic channel model centered on the equator with different zonally uniform background flows which resemble the South Equatorial Current (SEC). Analysis of the simulations focuses on identifying stability criteria for a continuously stratified fluid near the equator. A 90 m deep frontal layer is required to destabilize a zonally uniform, 10° wide, westward surface jet that is symmetric about the equator and has a maximum velocity of 100 cm/s. In this case, the phase velocity of the excited unstable waves is very similar to the phase speed of the Tropical Instability Waves (TIWs) observed in the eastern Pacific Ocean. The vertical scale of the baroclinic waves corresponds to the frontal layer depth and their phase speed increases as the vertical shear of the jet is doubled. When the westward surface parabolic jet is made asymmetric about the equator, in order to simulate more realistically the structure of the SEC in the eastern Pacific, two kinds of instability are generated. The oscillations that grow north of the equator have a baroclinic nature, while those generated on and very close to the equator have a barotropic nature. This study shows that the potential for baroclinic instability in the equatorial region can be as large as at mid-latitudes, if the tendency of isotherms to have a smaller slope for a given zonal velocity, when the Coriolis parameter vanishes, is compensated for by the wind effect. Key words. Oceanography: general (equatorial oceanography; numerical modeling) – Oceanography: physics (fronts and jets)


  15. Horizontal stratified flow model for the 1-D module of WCOBRA/TRAC-TF2: modeling and validation

    Energy Technology Data Exchange (ETDEWEB)

Liao, J.; Frepoli, C.; Ohkawa, K., E-mail: liaoj@westinghouse.com [Westinghouse Electric Company LLC, LOCA Integrated Services I, Cranberry Twp, Pennsylvania (United States)]

    2011-07-01

    For a two-phase flow in a horizontal pipe, the individual phases may separate by gravity. This horizontal stratification significantly impacts the interfacial drag, interfacial heat transfer and wall drag of the two phase flow. For a PWR small break LOCA, the horizontal stratification in cold legs is a highly important phenomenon during loop seal clearance, boiloff and recovery periods. The low interfacial drag in the stratified flow directly controls the time period for the loop clearance and the level of residual water in the loop seal. Horizontal stratification in hot legs also impacts the natural circulation stage of a small break LOCA. In addition, the offtake phenomenon and cold leg condensation phenomenon are also affected by the occurrence of horizontal stratification in the cold legs. In the 1-D module of the WCOBRA/TRAC-TF2 computer code, a horizontal stratification criterion was developed by combining the Taitel-Dukler model and the Wallis-Dobson model, which approximates the viscous Kelvin-Helmholtz neutral stability boundary. The objective of this paper is to present the horizontal stratification model implemented in the code and its assessment against relevant data. The adequacy of the horizontal stratification transition criterion is confirmed by examining the code-predicted flow regime in a horizontal pipe with the measured data in the flow regime map. The void fractions (or liquid level) for the horizontal stratified flow in cold leg or hot leg are predicted with a reasonable accuracy. (author)

  16. Robust estimation for partially linear models with large-dimensional covariates.

    Science.gov (United States)

    Zhu, LiPing; Li, RunZe; Cui, HengJian

    2013-10-01

We are concerned with robust estimation procedures to estimate the parameters in partially linear models with large-dimensional covariates. To enhance the interpretability, we suggest implementing a nonconcave regularization method in the robust estimation procedure to select important covariates from the linear component. We establish the consistency for both the linear and the nonlinear components when the covariate dimension diverges at the rate of [Formula: see text], where n is the sample size. We show that the robust estimate of the linear component performs asymptotically as well as its oracle counterpart, which assumes that the baseline function and the unimportant covariates are known a priori. With a consistent estimator of the linear component, we estimate the nonparametric component by a robust local linear regression. It is proved that the robust estimate of the nonlinear component performs asymptotically as well as if the linear component were known in advance. Comprehensive simulation studies are carried out and an application is presented to examine the finite-sample performance of the proposed procedures.
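The robust-estimation ingredient can be sketched with Huber M-estimation solved by iteratively reweighted least squares (IRLS). This is a minimal stand-in for the linear component only; the paper's nonconcave penalty and nonparametric component are omitted, and all data below are simulated.

```python
import numpy as np

rng = np.random.default_rng(3)

def huber_irls(X, y, delta=1.345, n_iter=50):
    """Huber M-estimation of a linear model via IRLS."""
    beta = np.linalg.lstsq(X, y, rcond=None)[0]        # OLS start
    for _ in range(n_iter):
        r = y - X @ beta
        scale = np.median(np.abs(r)) / 0.6745 + 1e-12  # robust scale (MAD)
        u = np.abs(r / scale)
        w = np.where(u <= delta, 1.0, delta / u)       # Huber weights
        # Weighted least squares: minimize sum w_i (y_i - x_i beta)^2.
        beta = np.linalg.lstsq(X * np.sqrt(w)[:, None],
                               y * np.sqrt(w), rcond=None)[0]
    return beta

n = 500
X = np.column_stack([np.ones(n), rng.normal(size=n)])
beta_true = np.array([1.0, 2.0])
y = X @ beta_true + rng.normal(0, 0.5, n)
y[:25] += 30.0                                         # 5% gross outliers

beta_ols = np.linalg.lstsq(X, y, rcond=None)[0]
beta_huber = huber_irls(X, y)
print("OLS:", beta_ols, " Huber:", beta_huber)
```

The OLS intercept is dragged upward by the contaminated observations, while the Huber fit stays close to the true coefficients.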

  17. Flexible Modeling of Survival Data with Covariates Subject to Detection Limits via Multiple Imputation.

    Science.gov (United States)

    Bernhardt, Paul W; Wang, Huixia Judy; Zhang, Daowen

    2014-01-01

    Models for survival data generally assume that covariates are fully observed. However, in medical studies it is not uncommon for biomarkers to be censored at known detection limits. A computationally-efficient multiple imputation procedure for modeling survival data with covariates subject to detection limits is proposed. This procedure is developed in the context of an accelerated failure time model with a flexible seminonparametric error distribution. The consistency and asymptotic normality of the multiple imputation estimator are established and a consistent variance estimator is provided. An iterative version of the proposed multiple imputation algorithm that approximates the EM algorithm for maximum likelihood is also suggested. Simulation studies demonstrate that the proposed multiple imputation methods work well while alternative methods lead to estimates that are either biased or more variable. The proposed methods are applied to analyze the dataset from a recently-conducted GenIMS study.

  18. Evaluation of a plot-scale methane emission model using eddy covariance observations and footprint modelling

    Directory of Open Access Journals (Sweden)

    A. Budishchev

    2014-09-01

Full Text Available Most plot-scale methane emission models – of which many have been developed in the recent past – are validated using data collected with the closed-chamber technique. This method, however, suffers from a low spatial representativeness and a poor temporal resolution. Also, during a chamber-flux measurement the air within a chamber is separated from the ambient atmosphere, which negates the influence of wind on emissions. Additionally, some methane models are validated by upscaling fluxes based on the area-weighted averages of modelled fluxes, and by comparing those to the eddy covariance (EC) flux. This technique is rather inaccurate, as the area of upscaling might be different from the EC tower footprint, therefore introducing significant mismatch. In this study, we present an approach to validate plot-scale methane models with EC observations using the footprint-weighted average method. Our results show that the fluxes obtained by the footprint-weighted average method are of the same magnitude as the EC flux. More importantly, the temporal dynamics of the EC flux on a daily timescale are also captured (r2 = 0.7). In contrast, using the area-weighted average method yielded a low correlation (r2 = 0.14) with the EC measurements. This shows that the footprint-weighted average method is preferable when validating methane emission models with EC fluxes for areas with a heterogeneous and irregular vegetation pattern.
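The difference between the two aggregation methods is simple arithmetic once per-class fluxes and weights are in hand. The class fluxes and weight fractions below are hypothetical; in practice the footprint fractions come from a footprint model and vary with wind direction and atmospheric stability at each averaging interval.

```python
import numpy as np

# Modelled methane fluxes for three vegetation classes
# (hypothetical values, nmol CH4 m-2 s-1).
flux_by_class = np.array([12.0, 45.0, 3.0])   # e.g. grass, wet sedge, bare soil

# Fraction of each class in the mapped area vs. in the EC footprint
# for one averaging interval (hypothetical weights; both sum to 1).
area_fraction      = np.array([0.50, 0.20, 0.30])
footprint_fraction = np.array([0.15, 0.70, 0.15])

area_weighted      = float(area_fraction @ flux_by_class)
footprint_weighted = float(footprint_fraction @ flux_by_class)

print(f"area-weighted flux:      {area_weighted:.2f}")
print(f"footprint-weighted flux: {footprint_weighted:.2f}")
```

When a high-emitting class (here, wet sedge) dominates the footprint but not the mapped area, the two averages diverge strongly, which is the mismatch the paper documents.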

  19. Gaussian covariance graph models accounting for correlated marker effects in genome-wide prediction.

    Science.gov (United States)

    Martínez, C A; Khare, K; Rahman, S; Elzo, M A

    2017-10-01

Several statistical models used in genome-wide prediction assume uncorrelated marker allele substitution effects, but it is known that these effects may be correlated. In statistics, graphical models have been identified as a useful tool for covariance estimation in high-dimensional problems and it is an area that has recently experienced a great expansion. In Gaussian covariance graph models (GCovGM), the joint distribution of a set of random variables is assumed to be Gaussian and the pattern of zeros of the covariance matrix is encoded in terms of an undirected graph G. In this study, methods adapting the theory of GCovGM to genome-wide prediction were developed (Bayes GCov, Bayes GCov-KR and Bayes GCov-H). In simulated data sets, improvements in correlation between phenotypes and predicted breeding values and in accuracies of predicted breeding values were found. Our models account for correlation of marker effects and permit accommodation of general structures, as opposed to models proposed in previous studies, which consider spatial correlation only. In addition, they allow incorporation of biological information into the prediction process through its use in constructing the graph G, and their extension to the multi-allelic loci case is straightforward. © 2017 Blackwell Verlag GmbH.

  20. lme4qtl: linear mixed models with flexible covariance structure for genetic studies of related individuals.

    Science.gov (United States)

    Ziyatdinov, Andrey; Vázquez-Santiago, Miquel; Brunel, Helena; Martinez-Perez, Angel; Aschard, Hugues; Soria, Jose Manuel

    2018-02-27

Quantitative trait locus (QTL) mapping in genetic data often involves analysis of correlated observations, which need to be accounted for to avoid false association signals. This is commonly performed by modeling such correlations as random effects in linear mixed models (LMMs). The R package lme4 is a well-established tool that implements major LMM features using sparse matrix methods; however, it is not fully adapted for QTL mapping in association and linkage studies. In particular, two LMM features are lacking in the base version of lme4: the definition of random effects by custom covariance matrices, and parameter constraints, which are essential in advanced QTL models. Apart from applications in linkage studies of related individuals, such functionalities are of high interest for association studies in situations where multiple covariance matrices need to be modeled, a scenario not covered by most genome-wide association study (GWAS) software. To address the aforementioned limitations, we developed a new R package, lme4qtl, as an extension of lme4. First, lme4qtl contributes new models for genetic studies within a single tool integrated with lme4 and its companion packages. Second, lme4qtl offers a flexible framework for scenarios with multiple levels of relatedness and becomes efficient when covariance matrices are sparse. We showed the value of our package using real family-based data in the Genetic Analysis of Idiopathic Thrombophilia 2 (GAIT2) project. Our software lme4qtl enables QTL mapping models with a versatile structure of random effects and efficient computation for sparse covariances. lme4qtl is available at https://github.com/variani/lme4qtl.

  1. Influence of covariate distribution on the predictive performance of pharmacokinetic models in paediatric research

    Science.gov (United States)

    Piana, Chiara; Danhof, Meindert; Della Pasqua, Oscar

    2014-01-01

    Aims The accuracy of model-based predictions often reported in paediatric research has not been thoroughly characterized. The aim of this exercise is therefore to evaluate the role of covariate distributions when a pharmacokinetic model is used for simulation purposes. Methods Plasma concentrations of a hypothetical drug were simulated in a paediatric population using a pharmacokinetic model in which body weight was correlated with clearance and volume of distribution. Two subgroups of children were then selected from the overall population according to a typical study design, in which pre-specified body weight ranges (10–15 kg and 30–40 kg) were used as inclusion criteria. The simulated data sets were then analyzed using non-linear mixed effects modelling. Model performance was assessed by comparing the accuracy of AUC predictions obtained for each subgroup, based on the model derived from the overall population and by extrapolation of the model parameters across subgroups. Results Our findings show that systemic exposure as well as pharmacokinetic parameters cannot be accurately predicted from the pharmacokinetic model obtained from a population with a different covariate range from the one explored during model building. Predictions were accurate only when a model was used for prediction in a subgroup of the initial population. Conclusions In contrast to current practice, the use of pharmacokinetic modelling in children should be limited to interpolations within the range of values observed during model building. Furthermore, the covariate point estimate must be kept in the model even when predictions refer to a subset different from the original population. PMID:24433411
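The extrapolation failure described above can be illustrated with a small simulation. The "true" clearance model below, combining allometric scaling with a maturation term, and all numbers are hypothetical (not from the paper); the sketch only shows that a covariate model fitted on a narrow weight range can predict AUC poorly outside that range even when it interpolates well inside it.

```python
import numpy as np

# Hypothetical "true" clearance: allometric scaling times a maturation
# term (illustrative functional form and constants).
def true_cl(wt):
    return 2.0 * (wt / 70.0) ** 0.75 * wt / (wt + 5.0)

# Fit a simple power model CL = a * WT^b on the 10-15 kg subgroup only.
wt_fit = np.linspace(10.0, 15.0, 30)
b, log_a = np.polyfit(np.log(wt_fit), np.log(true_cl(wt_fit)), 1)
pred_cl = lambda wt: np.exp(log_a) * wt ** b

# AUC = Dose / CL; compare predictions inside vs outside the fitted range.
dose = 100.0
rel_err = lambda wt: abs(dose / pred_cl(wt) - dose / true_cl(wt)) / (dose / true_cl(wt))
err_interp = rel_err(12.0)   # inside the 10-15 kg inclusion range
err_extrap = rel_err(35.0)   # extrapolated to the 30-40 kg range
print(f"interpolation error: {err_interp:.3%}, extrapolation error: {err_extrap:.3%}")
```

The fitted exponent absorbs the maturation effect over the narrow range, so extrapolated clearance (and hence AUC) is systematically biased, mirroring the paper's conclusion that predictions should stay within the covariate range observed during model building.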

  2. Covariance matrices for nuclear cross sections derived from nuclear model calculations

    International Nuclear Information System (INIS)

    Smith, D. L.

    2005-01-01

    The growing need for covariance information to accompany the evaluated cross section data libraries utilized in contemporary nuclear applications is spurring the development of new methods to provide this information. Many of the current general purpose libraries of evaluated nuclear data used in applications are derived either almost entirely from nuclear model calculations or from nuclear model calculations benchmarked by available experimental data. Consequently, a consistent method for generating covariance information under these circumstances is required. This report discusses a new approach to producing covariance matrices for cross sections calculated using nuclear models. The present method involves establishing uncertainty information for the underlying parameters of nuclear models used in the calculations and then propagating these uncertainties through to the derived cross sections and related nuclear quantities by means of a Monte Carlo technique rather than the more conventional matrix error propagation approach used in some alternative methods. The formalism to be used in such analyses is discussed in this report along with various issues and caveats that need to be considered in order to proceed with a practical implementation of the methodology
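The Monte Carlo propagation step described above can be sketched as follows. The two-parameter "nuclear model", parameter values and parameter covariance are purely illustrative; the pattern, sampling parameters and taking the empirical covariance of the calculated cross sections, is the method the report describes.

```python
import numpy as np

rng = np.random.default_rng(7)

# Toy cross-section model sigma(E; p) (illustrative form and numbers).
def model(params, energies):
    p0, p1 = params
    return p0 * np.exp(-p1 * energies)

energies = np.array([1.0, 2.0, 5.0, 10.0])   # MeV
p_mean = np.array([3.0, 0.15])               # "best-fit" model parameters
p_cov = np.array([[0.04,  0.001],            # parameter covariance
                  [0.001, 0.0004]])

# Monte Carlo propagation: sample parameters, evaluate the model, and
# take the empirical covariance of the calculated cross sections.
samples = rng.multivariate_normal(p_mean, p_cov, size=100_000)
sigmas = np.array([model(p, energies) for p in samples])
cov_sigma = np.cov(sigmas, rowvar=False)

# Correlation matrix of the calculated cross sections across energies.
sd = np.sqrt(np.diag(cov_sigma))
corr_sigma = cov_sigma / np.outer(sd, sd)
print(np.round(corr_sigma, 3))
```

Because every sampled cross section shares the same two parameters, strong inter-energy correlations appear in the result, exactly the kind of structure a covariance matrix must carry and a variance-only evaluation would miss.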

  3. Covariance of random stock prices in the Stochastic Dividend Discount Model

    OpenAIRE

    Agosto, Arianna; Mainini, Alessandra; Moretto, Enrico

    2016-01-01

    Dividend discount models have been developed in a deterministic setting. Some authors (Hurley and Johnson, 1994 and 1998; Yao, 1997) have introduced randomness in terms of stochastic growth rates, delivering closed-form expressions for the expected value of stock prices. This paper extends such previous results by determining a formula for the covariance between random stock prices when the dividends' rates of growth are correlated. The formula is eventually applied to real market data.
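The quantity the paper derives in closed form can be approximated by simulation. The sketch below uses a Gordon-style price with a random growth rate, P = D(1+g)/(r−g), and illustrative numbers (not calibrated market data and not the paper's exact multi-period setting): correlated growth rates induce a positive covariance between the two stock prices.

```python
import numpy as np

rng = np.random.default_rng(0)

# Gordon-style price with random growth g: P = D * (1 + g) / (r - g).
# D, r and the growth-rate distribution are illustrative.
D, r = 1.0, 0.08
g_mean, g_sd, n = 0.02, 0.01, 200_000

def prices(rho):
    """Sample prices of two stocks whose growth rates have correlation rho."""
    cov = np.array([[g_sd**2, rho * g_sd**2],
                    [rho * g_sd**2, g_sd**2]])
    g = rng.multivariate_normal([g_mean, g_mean], cov, size=n)
    return D * (1.0 + g) / (r - g)      # columns: stock 1, stock 2

cov_corr  = np.cov(prices(0.6), rowvar=False)[0, 1]   # correlated growth
cov_indep = np.cov(prices(0.0), rowvar=False)[0, 1]   # independent growth
print(f"cov (rho=0.6): {cov_corr:.3f}, cov (rho=0): {cov_indep:.3f}")
```

With independent growth rates the sample covariance is near zero, while correlated rates produce a clearly positive covariance, the quantity needed for portfolio-level risk in the stochastic dividend discount model.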

  4. Covariances for neutron cross sections calculated using a regional model based on local-model fits to experimental data

    Energy Technology Data Exchange (ETDEWEB)

    Smith, D.L.; Guenther, P.T.

    1983-11-01

    We suggest a procedure for estimating uncertainties in neutron cross sections calculated with a nuclear model descriptive of a specific mass region. It applies standard error propagation techniques, using a model-parameter covariance matrix. Generally, available codes do not generate covariance information in conjunction with their fitting algorithms. Therefore, we resort to estimating a relative covariance matrix a posteriori from a statistical examination of the scatter of elemental parameter values about the regional representation. We numerically demonstrate our method by considering an optical-statistical model analysis of a body of total and elastic scattering data for the light fission-fragment mass region. In this example, strong uncertainty correlations emerge and they conspire to reduce estimated errors to some 50% of those obtained from a naive uncorrelated summation in quadrature. 37 references.

  5. Covariances for neutron cross sections calculated using a regional model based on local-model fits to experimental data

    International Nuclear Information System (INIS)

    Smith, D.L.; Guenther, P.T.

    1983-11-01

    We suggest a procedure for estimating uncertainties in neutron cross sections calculated with a nuclear model descriptive of a specific mass region. It applies standard error propagation techniques, using a model-parameter covariance matrix. Generally, available codes do not generate covariance information in conjunction with their fitting algorithms. Therefore, we resort to estimating a relative covariance matrix a posteriori from a statistical examination of the scatter of elemental parameter values about the regional representation. We numerically demonstrate our method by considering an optical-statistical model analysis of a body of total and elastic scattering data for the light fission-fragment mass region. In this example, strong uncertainty correlations emerge and they conspire to reduce estimated errors to some 50% of those obtained from a naive uncorrelated summation in quadrature. 37 references

  6. Geological modeling of a stratified deposit with CAD-Based solid model automation

    Directory of Open Access Journals (Sweden)

    Ayten Eser

Full Text Available Abstract The planning stages of mining activities require many comprehensive and detailed analyses. Determining the correct orebody model is the first stage and one of the most important. Three-dimensional solid modeling is one of the significant methods that can examine the position and shape of the ore deposit. Although there are many different types of mining software for determining a solid model, many users try to build geological models in the computer without knowing how these software packages work. As researchers on the subject, we wanted to answer the question "How would we do it?" For this purpose, a system was developed for generating solid models using data obtained from boreholes. Obtaining this model in an AutoCAD environment will be important for geologists and engineers. The developed programs were first tested with virtual borehole data belonging to a virtual deposit. Then the real borehole data of a cement raw material site were successfully applied. This article allows readers not only to see a clear example of the programming approach to layered deposits but also to produce more complicated software in this context. Our study serves as a window to understanding the geological modeling process.
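The core of building a stratified solid from boreholes, interpolating each horizon's surface between holes and differencing the surfaces, can be sketched without any CAD machinery. The borehole coordinates and elevations below are hypothetical, the interpolator is simple inverse-distance weighting, and the AutoCAD-specific automation in the article is omitted.

```python
import numpy as np

# Borehole collars (x, y) and the elevations of one stratigraphic
# horizon's top and base at each hole (hypothetical survey values, m).
bh_xy     = np.array([[0., 0.], [100., 0.], [0., 100.], [100., 100.], [50., 50.]])
top_elev  = np.array([20., 22., 19., 21., 23.])
base_elev = np.array([12., 15., 11., 14., 13.])

def idw(xy_query, xy_known, z_known, power=2.0, eps=1e-12):
    """Inverse-distance-weighted interpolation of horizon elevations."""
    d = np.linalg.norm(xy_query[:, None, :] - xy_known[None, :, :], axis=2)
    w = 1.0 / (d + eps) ** power
    return (w @ z_known) / w.sum(axis=1)

# Interpolate both surfaces onto a regular grid; the layer solid is the
# volume between them, estimated as sum(thickness) * cell area.
gx, gy = np.meshgrid(np.linspace(0, 100, 51), np.linspace(0, 100, 51))
grid = np.column_stack([gx.ravel(), gy.ravel()])
thickness = idw(grid, bh_xy, top_elev) - idw(grid, bh_xy, base_elev)
cell_area = 2.0 * 2.0                    # 51 points over 100 m -> 2 m spacing
volume = thickness.sum() * cell_area
print(f"mean thickness: {thickness.mean():.2f} m, volume: {volume:.0f} m3")
```

A production workflow would triangulate the surfaces and emit CAD solids, but the surface-differencing logic is the same.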

  7. Covariance evaluation system

    International Nuclear Information System (INIS)

    Kawano, Toshihiko; Shibata, Keiichi.

    1997-09-01

A covariance evaluation system for the evaluated nuclear data library was established. The parameter estimation method and the least squares method with a spline function are used to generate the covariance data. Uncertainties of nuclear reaction model parameters are estimated from experimental data uncertainties; the covariance of the evaluated cross sections is then calculated by means of error propagation. Computer programs ELIESE-3, EGNASH4, ECIS, and CASTHY are used. Covariances of 238U reaction cross sections were calculated with this system. (author)
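The error-propagation step described above follows the standard sandwich formula, Cov_σ = J C_p Jᵀ, where J is the Jacobian of the calculated cross sections with respect to the model parameters. The toy model, parameters and parameter covariance below are illustrative (the report's calculations use ELIESE-3, EGNASH4, ECIS and CASTHY, not this toy); the sketch shows the propagation mechanics with a numerical Jacobian.

```python
import numpy as np

# Illustrative cross-section model and parameter uncertainties.
def model(p, energies):
    return p[0] * np.exp(-p[1] * energies)

energies = np.array([1.0, 2.0, 5.0])
p_best = np.array([3.0, 0.15])
p_cov = np.array([[0.04,  0.001],
                  [0.001, 0.0004]])

def jacobian(f, p, h=1e-6):
    """Numerical Jacobian by central differences."""
    J = np.empty((len(f(p)), len(p)))
    for j in range(len(p)):
        dp = np.zeros_like(p)
        dp[j] = h
        J[:, j] = (f(p + dp) - f(p - dp)) / (2 * h)
    return J

# Propagate parameter covariance to cross-section covariance.
J = jacobian(lambda p: model(p, energies), p_best)
cov_sigma = J @ p_cov @ J.T
print(np.round(cov_sigma, 5))
```

For this exponential model the Jacobian columns are d sigma/d p0 = exp(-p1 E) and d sigma/d p1 = -p0 E exp(-p1 E), which the central differences reproduce to high accuracy.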

  8. Numerical Differentiation Methods for Computing Error Covariance Matrices in Item Response Theory Modeling: An Evaluation and a New Proposal

    Science.gov (United States)

    Tian, Wei; Cai, Li; Thissen, David; Xin, Tao

    2013-01-01

    In item response theory (IRT) modeling, the item parameter error covariance matrix plays a critical role in statistical inference procedures. When item parameters are estimated using the EM algorithm, the parameter error covariance matrix is not an automatic by-product of item calibration. Cai proposed the use of Supplemented EM algorithm for…

  9. Inverse modeling of the terrestrial carbon flux in China with flux covariance among inverted regions

    Science.gov (United States)

    Wang, H.; Jiang, F.; Chen, J. M.; Ju, W.; Wang, H.

    2011-12-01

    Quantitative understanding of the role of ocean and terrestrial biosphere in the global carbon cycle, their response and feedback to climate change is required for the future projection of the global climate. China has the largest amount of anthropogenic CO2 emission, diverse terrestrial ecosystems and an unprecedented rate of urbanization. Thus information on spatial and temporal distributions of the terrestrial carbon flux in China is of great importance in understanding the global carbon cycle. We developed a nested inversion with focus in China. Based on Transcom 22 regions for the globe, we divide China and its neighboring countries into 17 regions, making 39 regions in total for the globe. A Bayesian synthesis inversion is made to estimate the terrestrial carbon flux based on GlobalView CO2 data. In the inversion, GEOS-Chem is used as the transport model to develop the transport matrix. A terrestrial ecosystem model named BEPS is used to produce the prior surface flux to constrain the inversion. However, the sparseness of available observation stations in Asia poses a challenge to the inversion for the 17 small regions. To obtain additional constraint on the inversion, a prior flux covariance matrix is constructed using the BEPS model through analyzing the correlation in the net carbon flux among regions under variable climate conditions. The use of the covariance among different regions in the inversion effectively extends the information content of CO2 observations to more regions. The carbon flux over the 39 land and ocean regions are inverted for the period from 2004 to 2009. In order to investigate the impact of introducing the covariance matrix with non-zero off-diagonal values to the inversion, the inverted terrestrial carbon flux over China is evaluated against ChinaFlux eddy-covariance observations after applying an upscaling methodology.
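The way a prior flux covariance with non-zero off-diagonal terms extends the reach of sparse observations can be shown with a two-region toy version of the Bayesian synthesis inversion. The transport operator H, error covariances and observation below are illustrative stand-ins (not the GEOS-Chem/BEPS setup): the observation "sees" only region 1, yet a correlated prior lets it constrain region 2 as well.

```python
import numpy as np

# Toy Bayesian synthesis inversion for two regions; the single
# observation is sensitive to region 1 only (illustrative numbers).
H = np.array([[1.0, 0.0]])        # transport operator
R = np.array([[0.04]])            # observation-error covariance
x_prior = np.array([0.0, 0.0])    # prior fluxes
y = np.array([1.0])               # observed CO2 signal

def invert(B):
    """Posterior mean and covariance for prior flux covariance B."""
    K = B @ H.T @ np.linalg.inv(H @ B @ H.T + R)   # Kalman-type gain
    x_post = x_prior + (K @ (y - H @ x_prior)).ravel()
    B_post = (np.eye(2) - K @ H) @ B
    return x_post, B_post

B_diag = np.diag([1.0, 1.0])                       # independent regions
B_corr = np.array([[1.0, 0.8], [0.8, 1.0]])        # correlated regions

x_d, P_d = invert(B_diag)
x_c, P_c = invert(B_corr)
print("diagonal prior:   x =", x_d, " var(region 2) =", P_d[1, 1])
print("correlated prior: x =", x_c, " var(region 2) =", P_c[1, 1])
```

With a diagonal prior the unobserved region keeps its prior mean and variance; with a correlated prior (here built in the spirit of the BEPS-derived covariance) the observation updates region 2 and shrinks its posterior variance.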

  10. Tests for detecting overdispersion in models with measurement error in covariates.

    Science.gov (United States)

    Yang, Yingsi; Wong, Man Yu

    2015-11-30

    Measurement error in covariates can affect the accuracy in count data modeling and analysis. In overdispersion identification, the true mean-variance relationship can be obscured under the influence of measurement error in covariates. In this paper, we propose three tests for detecting overdispersion when covariates are measured with error: a modified score test and two score tests based on the proposed approximate likelihood and quasi-likelihood, respectively. The proposed approximate likelihood is derived under the classical measurement error model, and the resulting approximate maximum likelihood estimator is shown to have superior efficiency. Simulation results also show that the score test based on approximate likelihood outperforms the test based on quasi-likelihood and other alternatives in terms of empirical power. By analyzing a real dataset containing the health-related quality-of-life measurements of a particular group of patients, we demonstrate the importance of the proposed methods by showing that the analyses with and without measurement error correction yield significantly different results. Copyright © 2015 John Wiley & Sons, Ltd.
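The kind of score statistic the paper modifies can be sketched for the error-free case. The statistic below is a Dean-style score test for Poisson overdispersion, T = Σ[(y_i − μ̂_i)² − y_i] / √(2 Σ μ̂_i²), approximately standard normal under the Poisson null; the measurement-error correction that is the paper's contribution is omitted, and the data are simulated with an intercept-only mean.

```python
import numpy as np

rng = np.random.default_rng(123)

def overdispersion_score(y, mu):
    """Dean-style score statistic for Poisson overdispersion,
    approximately N(0, 1) under the Poisson null."""
    return np.sum((y - mu) ** 2 - y) / np.sqrt(2.0 * np.sum(mu ** 2))

n = 2000
# Poisson data (no overdispersion) vs negative-binomial data
# (variance > mean), both with mean 5; intercept-only fit: mu = ybar.
y_pois = rng.poisson(5.0, size=n)
y_nb = rng.negative_binomial(n=2, p=2.0 / 7.0, size=n)   # mean 5, var 17.5

t_pois = overdispersion_score(y_pois, np.full(n, y_pois.mean()))
t_nb = overdispersion_score(y_nb, np.full(n, y_nb.mean()))
print(f"Poisson data: T = {t_pois:.2f}; NB data: T = {t_nb:.2f}")
```

The statistic stays near zero for genuinely Poisson data and becomes very large for overdispersed data; the paper's point is that covariate measurement error distorts the fitted means μ̂ and hence this comparison, motivating the corrected tests.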

  11. A Heat Transfer Model for a Stratified Corium-Metal Pool in the Lower Plenum of a Nuclear Reactor

    International Nuclear Information System (INIS)

    Sohal, M.S.; Siefken, L.J.

    1999-01-01

    This preliminary design report describes a model for heat transfer in a corium-metal stratified pool. It was decided to make use of the existing COUPLE model. Currently available correlations for natural convection heat transfer in a pool with and without internal heat generation were obtained. The appropriate correlations will be incorporated in the existing COUPLE model. Heat conduction and solidification modeling will be done with existing algorithms in the COUPLE. Assessment of the new model will be done by simple energy conservation problems

  12. Bayesian nonparametric generative models for causal inference with missing at random covariates.

    Science.gov (United States)

    Roy, Jason; Lum, Kirsten J; Zeldow, Bret; Dworkin, Jordan D; Re, Vincent Lo; Daniels, Michael J

    2018-03-26

    We propose a general Bayesian nonparametric (BNP) approach to causal inference in the point treatment setting. The joint distribution of the observed data (outcome, treatment, and confounders) is modeled using an enriched Dirichlet process. The combination of the observed data model and causal assumptions allows us to identify any type of causal effect-differences, ratios, or quantile effects, either marginally or for subpopulations of interest. The proposed BNP model is well-suited for causal inference problems, as it does not require parametric assumptions about the distribution of confounders and naturally leads to a computationally efficient Gibbs sampling algorithm. By flexibly modeling the joint distribution, we are also able to impute (via data augmentation) values for missing covariates within the algorithm under an assumption of ignorable missingness, obviating the need to create separate imputed data sets. This approach for imputing the missing covariates has the additional advantage of guaranteeing congeniality between the imputation model and the analysis model, and because we use a BNP approach, parametric models are avoided for imputation. The performance of the method is assessed using simulation studies. The method is applied to data from a cohort study of human immunodeficiency virus/hepatitis C virus co-infected patients. © 2018, The International Biometric Society.

  13. Nonparametric modeling of longitudinal covariance structure in functional mapping of quantitative trait loci.

    Science.gov (United States)

    Yap, John Stephen; Fan, Jianqing; Wu, Rongling

    2009-12-01

    Estimation of the covariance structure of longitudinal processes is a fundamental prerequisite for the practical deployment of functional mapping designed to study the genetic regulation and network of quantitative variation in dynamic complex traits. We present a nonparametric approach for estimating the covariance structure of a quantitative trait measured repeatedly at a series of time points. Specifically, we adopt Huang et al.'s (2006, Biometrika 93, 85-98) approach of invoking the modified Cholesky decomposition and converting the problem into modeling a sequence of regressions of responses. A regularized covariance estimator is obtained using a normal penalized likelihood with an L(2) penalty. This approach, embedded within a mixture likelihood framework, leads to enhanced accuracy, precision, and flexibility of functional mapping while preserving its biological relevance. Simulation studies are performed to reveal the statistical properties and advantages of the proposed method. A real example from a mouse genome project is analyzed to illustrate the utilization of the methodology. The new method will provide a useful tool for genome-wide scanning for the existence and distribution of quantitative trait loci underlying a dynamic trait important to agriculture, biology, and health sciences.
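
    The regression view of the modified Cholesky decomposition can be sketched in plain NumPy: each response is regressed on its predecessors with an L2 (ridge) penalty, and the fitted coefficients and innovation variances assemble a guaranteed positive-definite covariance estimate. This is a generic illustration on simulated AR(1)-like data, not the authors' mixture-likelihood implementation; all settings are invented for the sketch.

```python
import numpy as np

rng = np.random.default_rng(0)
n, T = 200, 6
# Simulate longitudinal responses with an AR(1)-like covariance.
Y = np.zeros((n, T))
Y[:, 0] = rng.normal(size=n)
for t in range(1, T):
    Y[:, t] = 0.6 * Y[:, t - 1] + rng.normal(scale=0.8, size=n)

lam = 0.1       # L2 (ridge) penalty on the autoregressive coefficients
L = np.eye(T)   # unit lower-triangular matrix of the decomposition
d = np.empty(T) # innovation variances
d[0] = Y[:, 0].var()
for t in range(1, T):
    # Regress y_t on y_1, ..., y_{t-1} with an L2 penalty
    X, y = Y[:, :t], Y[:, t]
    phi = np.linalg.solve(X.T @ X + lam * np.eye(t), X.T @ y)
    L[t, :t] = -phi
    d[t] = np.mean((y - X @ phi) ** 2)

# Regularized covariance estimate: Sigma^{-1} = L' D^{-1} L is positive definite
Sigma_inv = L.T @ np.diag(1.0 / d) @ L
Sigma = np.linalg.inv(Sigma_inv)
```

    The positive-definiteness constraint is satisfied by construction, which is the main appeal of the Cholesky-based regression formulation.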

  14. Multi-subject hierarchical inverse covariance modelling improves estimation of functional brain networks.

    Science.gov (United States)

    Colclough, Giles L; Woolrich, Mark W; Harrison, Samuel J; Rojas López, Pedro A; Valdes-Sosa, Pedro A; Smith, Stephen M

    2018-05-07

    A Bayesian model for sparse, hierarchical inverse covariance estimation is presented, and applied to multi-subject functional connectivity estimation in the human brain. It enables simultaneous inference of the strength of connectivity between brain regions at both subject and population level, and is applicable to fMRI, MEG and EEG data. Two versions of the model can encourage sparse connectivity, either using continuous priors to suppress irrelevant connections, or using an explicit description of the network structure to estimate the connection probability between each pair of regions. A large evaluation of this model, and thirteen methods that represent the state of the art of inverse covariance modelling, is conducted using both simulated and resting-state functional imaging datasets. Our novel Bayesian approach has similar performance to the best extant alternative, Ng et al.'s Sparse Group Gaussian Graphical Model algorithm, which also is based on a hierarchical structure. Using data from the Human Connectome Project, we show that these hierarchical models are able to reduce the measurement error in MEG beta-band functional networks by 10%, producing concomitant increases in estimates of the genetic influence on functional connectivity. Copyright © 2018. Published by Elsevier Inc.
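
    The value of hierarchical pooling for multi-subject covariance estimation can be illustrated with a deliberately simple shrinkage scheme on simulated data. This is not the paper's Bayesian model or its sparse priors; the subject count, dimensions, and shrinkage weight are all illustrative.

```python
import numpy as np

rng = np.random.default_rng(1)
n_subj, n_samp, p = 8, 300, 5

# Common population network: a sparse precision matrix shared across subjects.
prec_pop = np.eye(p)
prec_pop[0, 1] = prec_pop[1, 0] = 0.4
prec_pop[2, 3] = prec_pop[3, 2] = -0.3
cov_pop = np.linalg.inv(prec_pop)

subj_covs = []
for _ in range(n_subj):
    X = rng.multivariate_normal(np.zeros(p), cov_pop, size=n_samp)
    subj_covs.append(np.cov(X, rowvar=False))

# Hierarchical shrinkage: pull each subject covariance toward the group mean
group_cov = np.mean(subj_covs, axis=0)
w = 0.5  # shrinkage weight toward the group estimate
shrunk = [w * group_cov + (1 - w) * S for S in subj_covs]
prec_est = [np.linalg.inv(S) for S in shrunk]

# Pooling reduces the subject-to-subject scatter of an edge estimate
edge_raw = np.std([np.linalg.inv(S)[0, 1] for S in subj_covs])
edge_pooled = np.std([P[0, 1] for P in prec_est])
```

    The pooled edge estimates scatter less across subjects than the unpooled ones, which is the intuition behind the reduction in measurement error reported in the abstract.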

  15. From Near-Neutral to Strongly Stratified: Adequately Modelling the Clear-Sky Nocturnal Boundary Layer at Cabauw.

    Science.gov (United States)

    Baas, P; van de Wiel, B J H; van der Linden, S J A; Bosveld, F C

    2018-01-01

    The performance of an atmospheric single-column model (SCM) is studied systematically for stably-stratified conditions. To this end, 11 years (2005-2015) of daily SCM simulations were compared to observations from the Cabauw observatory, The Netherlands. Each individual clear-sky night was classified in terms of the ambient geostrophic wind speed with a 1 m s^{-1} bin-width. Nights with overcast conditions were filtered out by selecting only those nights with an average net radiation of less than -30 W m^{-2}. A similar procedure was applied to the observational dataset. A comparison of observed and modelled ensemble-averaged profiles of wind speed and potential temperature and time series of turbulent fluxes showed that the model represents the dynamics of the nocturnal boundary layer (NBL) at Cabauw very well for a broad range of mechanical forcing conditions. No obvious difference in model performance was found between near-neutral and strongly-stratified conditions. Furthermore, observed NBL regime transitions are represented in a natural way. The reference model version performs much better than a model version that applies excessive vertical mixing as is done in several (global) operational models. Model sensitivity runs showed that for weak-wind conditions the inversion strength depends much more on details of the land-atmosphere coupling than on the turbulent mixing. The presented results indicate that in principle the physical parametrizations of large-scale atmospheric models are sufficiently equipped for modelling stably-stratified conditions for a wide range of forcing conditions.

  16. From Near-Neutral to Strongly Stratified: Adequately Modelling the Clear-Sky Nocturnal Boundary Layer at Cabauw

    Science.gov (United States)

    Baas, P.; van de Wiel, B. J. H.; van der Linden, S. J. A.; Bosveld, F. C.

    2018-02-01

    The performance of an atmospheric single-column model (SCM) is studied systematically for stably-stratified conditions. To this end, 11 years (2005-2015) of daily SCM simulations were compared to observations from the Cabauw observatory, The Netherlands. Each individual clear-sky night was classified in terms of the ambient geostrophic wind speed with a 1 m s^{-1} bin-width. Nights with overcast conditions were filtered out by selecting only those nights with an average net radiation of less than -30 W m^{-2}. A similar procedure was applied to the observational dataset. A comparison of observed and modelled ensemble-averaged profiles of wind speed and potential temperature and time series of turbulent fluxes showed that the model represents the dynamics of the nocturnal boundary layer (NBL) at Cabauw very well for a broad range of mechanical forcing conditions. No obvious difference in model performance was found between near-neutral and strongly-stratified conditions. Furthermore, observed NBL regime transitions are represented in a natural way. The reference model version performs much better than a model version that applies excessive vertical mixing as is done in several (global) operational models. Model sensitivity runs showed that for weak-wind conditions the inversion strength depends much more on details of the land-atmosphere coupling than on the turbulent mixing. The presented results indicate that in principle the physical parametrizations of large-scale atmospheric models are sufficiently equipped for modelling stably-stratified conditions for a wide range of forcing conditions.
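
    The night-classification procedure described above (a clear-sky filter on net radiation, then 1 m s^{-1} geostrophic wind-speed bins) can be sketched on synthetic nightly summaries; the data below merely mimic the paper's selection rules, with invented variable names.

```python
import numpy as np

rng = np.random.default_rng(2)
n_nights = 500
# Hypothetical nightly summaries: geostrophic wind speed (m/s), net radiation (W/m^2)
wind_geo = rng.uniform(0, 15, n_nights)
net_rad = rng.uniform(-60, 20, n_nights)

# Keep clear-sky nights only: average net radiation below -30 W/m^2
clear = net_rad < -30.0

# Classify each clear-sky night into 1 m/s geostrophic wind-speed bins
bins = np.arange(0, 16, 1.0)
labels = np.digitize(wind_geo[clear], bins)       # bin index per night
counts = np.bincount(labels, minlength=len(bins) + 1)
```

    Ensemble-averaged profiles per bin would then be computed by averaging the observed and modelled nights that share a bin label.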

  17. Estimating model error covariances in nonlinear state-space models using Kalman smoothing and the expectation-maximisation algorithm

    KAUST Repository

    Dreano, Denis

    2017-04-05

    Specification and tuning of errors from dynamical models are important issues in data assimilation. In this work, we propose an iterative expectation-maximisation (EM) algorithm to estimate the model error covariances using classical extended and ensemble versions of the Kalman smoother. We show that, for additive model errors, the estimate of the error covariance converges. We also investigate other forms of model error, such as parametric or multiplicative errors. We show that additive Gaussian model error is able to compensate for non-additive sources of error in the algorithms we propose. We also demonstrate the limitations of the extended version of the algorithm and recommend the use of the more robust and flexible ensemble version. This article is a proof of concept of the methodology with the Lorenz-63 attractor. We developed an open-source Python library to enable future users to apply the algorithm to their own nonlinear dynamical models.
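
    A minimal scalar analogue of the additive-error case can be sketched with a linear Kalman filter, a Rauch-Tung-Striebel smoother, and the standard EM update for the model error variance Q. This toy version assumes a known linear model and known observation noise, unlike the extended and ensemble smoothers used in the paper; all settings are illustrative.

```python
import numpy as np

rng = np.random.default_rng(3)
a, Q_true, R = 0.9, 0.5, 0.2
T = 2000
x = np.zeros(T)
for t in range(1, T):
    x[t] = a * x[t - 1] + rng.normal(scale=np.sqrt(Q_true))
y = x + rng.normal(scale=np.sqrt(R), size=T)

def smoother(y, a, Q, R):
    """Kalman filter + RTS smoother for x_t = a x_{t-1} + w, y_t = x_t + v."""
    T = len(y)
    xf = np.zeros(T); Pf = np.zeros(T)   # filtered mean / variance
    xp = np.zeros(T); Pp = np.zeros(T)   # predicted mean / variance
    xf[0], Pf[0] = y[0], R
    for t in range(1, T):
        xp[t], Pp[t] = a * xf[t - 1], a * a * Pf[t - 1] + Q
        K = Pp[t] / (Pp[t] + R)
        xf[t] = xp[t] + K * (y[t] - xp[t])
        Pf[t] = (1 - K) * Pp[t]
    xs = xf.copy(); Ps = Pf.copy(); Pcross = np.zeros(T)  # lag-one covariance
    for t in range(T - 2, -1, -1):
        J = a * Pf[t] / Pp[t + 1]
        xs[t] = xf[t] + J * (xs[t + 1] - a * xf[t])
        Ps[t] = Pf[t] + J * J * (Ps[t + 1] - Pp[t + 1])
        Pcross[t + 1] = J * Ps[t + 1]
    return xs, Ps, Pcross

Q = 1.0  # initial guess for the model error variance
for _ in range(30):  # EM iterations
    xs, Ps, Pc = smoother(y, a, Q, R)
    # M-step: expected squared model-error residual under the smoothing density
    num = (xs[1:] - a * xs[:-1]) ** 2 + Ps[1:] + a * a * Ps[:-1] - 2 * a * Pc[1:]
    Q = num.mean()
```

    Each EM iteration smooths with the current Q and then replaces it by the mean expected squared model-error residual; the estimate settles near the true value.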

  18. Thermalhydraulic study of a stratified flow in a piping elbow (Application to the model Coufast)

    International Nuclear Information System (INIS)

    Peniguel, C.; Stephan, J.M.

    1992-11-01

    In PWRs, mechanical damage (cracks) has been detected at the internal faces of steam generator feedwater piping and also in dead legs when thermal stratification occurs. To gain some understanding of these issues, experimental and numerical programs have been set up at EDF. This paper reports a thermalhydraulic study of an elbow geometry under operating conditions leading to the establishment of a stable stratified flow. Results obtained with ESTET (a three-dimensional finite differences-finite volume code solving the averaged Navier-Stokes equations) and comparisons with experimental data obtained on COUFAST (an analytical mock-up, at scale 1, of a French 900-MW PWR steam generator pipe elbow) are shown.

  19. Relating covariant and canonical approaches to triangulated models of quantum gravity

    International Nuclear Information System (INIS)

    Arnsdorf, Matthias

    2002-01-01

    In this paper we explore the relation between covariant and canonical approaches to quantum gravity and BF theory. We will focus on the dynamical triangulation and spin-foam models, which have in common that they can be defined in terms of sums over spacetime triangulations. Our aim is to show how we can recover these covariant models from a canonical framework by providing two regularizations of the projector onto the kernel of the Hamiltonian constraint. This link is important for the understanding of the dynamics of quantum gravity. In particular, we will see how in the simplest dynamical triangulation model we can recover the Hamiltonian constraint via our definition of the projector. Our discussion of spin-foam models will show how the elementary spin-network moves in loop quantum gravity, which were originally assumed to describe the Hamiltonian constraint action, are in fact related to the time-evolution generated by the constraint. We also show that the Immirzi parameter is important for the understanding of a continuum limit of the theory

  20. Simulation of parametric model towards the fixed covariate of right censored lung cancer data

    Science.gov (United States)

    Afiqah Muhamad Jamil, Siti; Asrul Affendi Abdullah, M.; Kek, Sie Long; Ridwan Olaniran, Oyebayo; Enera Amran, Syahila

    2017-09-01

    In this study, a simulation procedure was applied to measure the effect of a fixed covariate on right-censored data using a parametric survival model. The scale and shape parameters were varied to differentiate the analyses of the parametric regression survival model. Statistically, biases, mean biases, and coverage probabilities were used in this analysis. Different sample sizes of 50, 100, 150 and 200 were employed to distinguish the impact of the parametric regression model on right-censored data. The R statistical software was used to develop the simulation code for right-censored data. Besides, the final simulated model was compared with right-censored lung cancer data from Malaysia. It was found that varying the shape and scale parameters with different sample sizes helps improve the simulation strategy for right-censored data, and that the Weibull regression survival model provides a suitable fit for simulating the survival of lung cancer patients in Malaysia.
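
    Such a simulation can be written compactly in NumPy: generate right-censored Weibull survival data, then recover the shape and scale parameters by maximizing the profile likelihood over a grid. This is a hedged sketch of the general procedure, not the paper's full study design; the sample size and true parameters are illustrative.

```python
import numpy as np

rng = np.random.default_rng(4)
n, shape, scale = 400, 1.5, 10.0           # true Weibull parameters
t_event = scale * rng.weibull(shape, n)    # latent event times
t_cens = rng.uniform(0, 25, n)             # random right-censoring times
time = np.minimum(t_event, t_cens)
event = (t_event <= t_cens).astype(int)    # 1 = event observed, 0 = censored

def profile_loglik(k, time, event):
    # For fixed shape k, the MLE of scale^k is sum(t_i^k) / (number of events),
    # so the censored Weibull log-likelihood can be profiled over the shape.
    d = event.sum()
    lam_k = (time ** k).sum() / d
    return (d * np.log(k) + (k - 1) * np.log(time[event == 1]).sum()
            - d * np.log(lam_k) - d)

ks = np.linspace(0.2, 5.0, 2000)
ll = np.array([profile_loglik(k, time, event) for k in ks])
k_hat = ks[ll.argmax()]
lam_hat = ((time ** k_hat).sum() / event.sum()) ** (1.0 / k_hat)
```

    Repeating this over many replicates at several sample sizes yields the biases and coverage probabilities the study reports.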

  1. A versatile method for confirmatory evaluation of the effects of a covariate in multiple models

    DEFF Research Database (Denmark)

    Pipper, Christian Bressen; Ritz, Christian; Bisgaard, Hans

    2012-01-01

    Modern epidemiology often requires testing of the effect of a covariate on multiple end points from the same study. However, popular state-of-the-art methods for multiple testing require the tests to be evaluated within the framework of a single model unifying all end points. This severely limits... ...to provide a fine-tuned control of the overall type I error in a wide range of epidemiological experiments where in reality no other useful alternative exists. The methodology proposed is applied to a multiple-end-point study of the effect of neonatal bacterial colonization on the development of childhood asthma.
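
    For contrast, the conventional baseline such methods improve on can be sketched as separate per-end-point regressions of each outcome on the shared covariate, followed by a blunt Bonferroni correction (synthetic data; the proposed methodology achieves finer control of the overall type I error than this).

```python
import numpy as np
from math import erf, sqrt

rng = np.random.default_rng(7)
n, n_end = 200, 5
x = rng.normal(size=n)        # shared covariate (e.g. colonization status)
shared = rng.normal(size=n)   # latent factor making the end points correlated
# Only the first end point truly depends on the covariate
Y = np.column_stack([0.5 * x + shared + rng.normal(size=n)] +
                    [shared + rng.normal(size=n) for _ in range(n_end - 1)])

pvals = []
for j in range(n_end):
    slope, intercept = np.polyfit(x, Y[:, j], 1)
    resid = Y[:, j] - (slope * x + intercept)
    se = resid.std(ddof=2) / (x.std() * sqrt(n))   # standard error of the slope
    z = slope / se
    pvals.append(2.0 * (1.0 - 0.5 * (1.0 + erf(abs(z) / sqrt(2.0)))))

adj = np.minimum(np.array(pvals) * n_end, 1.0)     # Bonferroni adjustment
```

    Bonferroni ignores the correlation between end points induced by the shared factor, which is exactly the slack a joint, fine-tuned procedure can exploit.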

  2. On the fit of models to covariances and methodology to the Bulletin.

    Science.gov (United States)

    Bentler, P M

    1992-11-01

    It is noted that 7 of the 10 top-cited articles in the Psychological Bulletin deal with methodological topics. One of these is the Bentler-Bonett (1980) article on the assessment of fit in covariance structure models. Some context is provided on the popularity of this article. In addition, a citation study of methodology articles appearing in the Bulletin since 1978 was carried out. It verified that publications in design, evaluation, measurement, and statistics continue to be important to psychological research. Some thoughts are offered on the role of the journal in making developments in these areas more accessible to psychologists.

  3. A cautionary note on generalized linear models for covariance of unbalanced longitudinal data

    KAUST Repository

    Huang, Jianhua Z.; Chen, Min; Maadooliat, Mehdi; Pourahmadi, Mohsen

    2012-01-01

    Missing data in longitudinal studies can create enormous challenges in data analysis when coupled with the positive-definiteness constraint on a covariance matrix. For complete balanced data, the Cholesky decomposition of a covariance matrix makes

  4. Comparing the performance of geostatistical models with additional information from covariates for sewage plume characterization.

    Science.gov (United States)

    Del Monego, Maurici; Ribeiro, Paulo Justiniano; Ramos, Patrícia

    2015-04-01

    In this work, kriging with covariates is used to model and map the spatial distribution of salinity measurements gathered by an autonomous underwater vehicle in a sea outfall monitoring campaign, aiming to distinguish the effluent plume from the receiving waters and characterize its spatial variability in the vicinity of the discharge. Four different geostatistical linear models for salinity were assumed, where the distance to diffuser, the west-east positioning, and the south-north positioning were used as covariates. Sample variograms were fitted by Matérn models using weighted least squares and maximum likelihood estimation methods as a way to detect eventual discrepancies. Typically, the maximum likelihood method estimated very low ranges, which limited the kriging process. So, at least for these data sets, weighted least squares proved to be the most appropriate estimation method for variogram fitting. The kriged maps show clearly the spatial variation of salinity, and it is possible to identify the effluent plume in the area studied. The results obtained provide some guidelines for sewage monitoring when a geostatistical analysis of the data is intended. It is important to treat properly the existence of anomalous values and to adopt a sampling strategy that includes transects parallel and perpendicular to the effluent dispersion.
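
    The flavour of kriging with covariates can be shown in a compact NumPy sketch: a generalized-least-squares trend in a distance-to-diffuser covariate, plus simple kriging of the spatially correlated residual. For brevity it uses an exponential covariance (the Matérn model with smoothness 0.5) with fixed parameters rather than the paper's variogram-fitting workflow, and all data are synthetic.

```python
import numpy as np

rng = np.random.default_rng(5)
n = 120
coords = rng.uniform(0, 100, size=(n, 2))               # sampling locations (m)
dist_diff = np.hypot(coords[:, 0] - 50, coords[:, 1])   # distance to a diffuser at (50, 0)

def cov_exp(h, sill=1.0, rng_par=20.0):
    # Exponential covariance = Matérn covariance with smoothness 0.5
    return sill * np.exp(-h / rng_par)

# Synthetic salinity: a trend in the covariate plus a correlated residual field
H = np.linalg.norm(coords[:, None] - coords[None, :], axis=2)
C = cov_exp(H) + 1e-8 * np.eye(n)
L = np.linalg.cholesky(C)
salinity = 35.0 - 0.05 * dist_diff + L @ rng.normal(size=n)

# Universal kriging: GLS trend estimate, then simple kriging of the residuals
X = np.column_stack([np.ones(n), dist_diff])
Ci = np.linalg.inv(C)
beta = np.linalg.solve(X.T @ Ci @ X, X.T @ Ci @ salinity)
resid = salinity - X @ beta

x0 = np.array([60.0, 10.0])                             # prediction location
c0 = cov_exp(np.linalg.norm(coords - x0, axis=1))
trend0 = beta[0] + beta[1] * np.hypot(x0[0] - 50, x0[1])
pred = trend0 + c0 @ Ci @ resid
```

    The GLS step recovers the covariate effect while respecting the spatial correlation, and the kriging step interpolates what the trend leaves unexplained.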

  5. Mass spectra and wave functions of meson systems and the covariant oscillator quark model as an expansion basis

    International Nuclear Information System (INIS)

    Oda, Ryuichi; Ishida, Shin; Wada, Hiroaki; Yamada, Kenji; Sekiguchi, Motoo

    1999-01-01

    We examine mass spectra and wave functions of the nn-bar, cc-bar and bb-bar meson systems within the framework of the covariant oscillator quark model with the boosted LS-coupling scheme. We solve nonperturbatively an eigenvalue problem for the squared-mass operator, which incorporates the four-dimensional color-Coulomb-type interaction, by taking a set of covariant oscillator wave functions as an expansion basis. We obtain mass spectra of these meson systems, which reproduce quite well their experimental behavior. The resultant manifestly covariant wave functions, which are applicable to analyses of various reaction phenomena, are given. Our results seem to suggest that the present model may be considered effectively as a covariant version of the nonrelativistic linear-plus-Coulomb potential quark model. (author)

  6. Full-Scale Approximations of Spatio-Temporal Covariance Models for Large Datasets

    KAUST Repository

    Zhang, Bohai

    2014-01-01

    Various continuously-indexed spatio-temporal process models have been constructed to characterize spatio-temporal dependence structures, but the computational complexity of model fitting and prediction grows cubically with the size of the dataset, so the application of such models is not feasible for large datasets. This article extends the full-scale approximation (FSA) approach by Sang and Huang (2012) to the spatio-temporal context to reduce computational complexity. A reversible jump Markov chain Monte Carlo (RJMCMC) algorithm is proposed to select knots automatically from a discrete set of spatio-temporal points. Our approach is applicable to nonseparable and nonstationary spatio-temporal covariance models. We illustrate the effectiveness of our method through simulation experiments and application to an ozone measurement dataset.
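
    The core full-scale-approximation idea, a low-rank predictive-process component built from knots plus a tapered short-range residual correction, can be sketched in one spatial dimension. The exponential covariance, linear taper, and fixed knots below are illustrative simplifications; the paper's nonseparable spatio-temporal models and RJMCMC knot selection are not reproduced.

```python
import numpy as np

rng = np.random.default_rng(6)
n, m = 300, 20                        # data points and knots
s = np.sort(rng.uniform(0, 10, n))    # 1-D locations for simplicity
knots = np.linspace(0, 10, m)

def cov(h, rng_par=2.0):
    return np.exp(-np.abs(h) / rng_par)

C = cov(s[:, None] - s[None, :])                  # exact covariance matrix
C_sk = cov(s[:, None] - knots[None, :])           # cross-covariance to knots
C_kk = cov(knots[:, None] - knots[None, :])
low_rank = C_sk @ np.linalg.solve(C_kk, C_sk.T)   # predictive-process part

# Taper the residual: keep only short-range corrections, so the correction
# matrix is sparse in practice (here a simple linear taper of range 1)
resid = C - low_rank
taper = np.clip(1 - np.abs(s[:, None] - s[None, :]) / 1.0, 0, 1)
C_fsa = low_rank + resid * taper
```

    The low-rank part captures the long-range dependence cheaply, while the tapered residual restores the short-range structure the knots miss.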

  7. Model-driven development of covariances for spatiotemporal environmental health assessment.

    Science.gov (United States)

    Kolovos, Alexander; Angulo, José Miguel; Modis, Konstantinos; Papantonopoulos, George; Wang, Jin-Feng; Christakos, George

    2013-01-01

    Known conceptual and technical limitations of mainstream environmental health data analysis have directed research to new avenues. The goal is to deal more efficiently with the inherent uncertainty and composite space-time heterogeneity of key attributes, account for multi-sourced knowledge bases (health models, survey data, empirical relationships etc.), and generate more accurate predictions across space-time. Based on a versatile, knowledge synthesis methodological framework, we introduce new space-time covariance functions built by integrating epidemic propagation models and we apply them in the analysis of existing flu datasets. Within the knowledge synthesis framework, the Bayesian maximum entropy theory is our method of choice for the spatiotemporal prediction of the ratio of new infectives (RNI) for a case study of flu in France. The space-time analysis is based on observations during a period of 15 weeks in 1998-1999. We present general features of the proposed covariance functions, and use these functions to explore the composite space-time RNI dependency. We then implement the findings to generate sufficiently detailed and informative maps of the RNI patterns across space and time. The predicted distributions of RNI suggest substantive relationships in accordance with the typical physiographic and climatologic features of the country.

  8. Analysis of Milk Production Traits in Early Lactation Using a Reaction Norm Model with Unknown Covariates

    DEFF Research Database (Denmark)

    Mahdi Shariati, Mohammad; Su, Guosheng; Madsen, Per

    2007-01-01

    The reaction norm model is becoming a popular approach to study genotype x environment interaction (GxE), especially when there is a continuum of environmental effects. These effects are typically unknown, and an approximation used in the literature is to replace them by the phenotypic means of each environment. It has been shown that this method results in poor inferences and that a more satisfactory alternative is to infer environmental effects jointly with the other parameters of the model. Such a reaction norm model with unknown covariates and heterogeneous residual variances across herds was fitted to milk, protein, and fat yield of first-lactation Danish Holstein cows to investigate the presence of GxE. Data included 188,502 first test-day records from 299 herds and 3,775 herd-years in a time period ranging from 1991 to 2003. Variance components and breeding values were...

  9. Generalized linear longitudinal mixed models with linear covariance structure and multiplicative random effects

    DEFF Research Database (Denmark)

    Holst, René; Jørgensen, Bent

    2015-01-01

    The paper proposes a versatile class of multiplicative generalized linear longitudinal mixed models (GLLMM) with additive dispersion components, based on explicit modelling of the covariance structure. The class incorporates a longitudinal structure into the random effects models and retains a marginal as well as a conditional interpretation. The estimation procedure is based on a computationally efficient quasi-score method for the regression parameters combined with a REML-like bias-corrected Pearson estimating function for the dispersion and correlation parameters. This avoids the multidimensional integral of the conventional GLMM likelihood and allows an extension of the robust empirical sandwich estimator for use with both association and regression parameters. The method is applied to a set of otolith data used for age determination of fish.

  10. Covariant Transform

    OpenAIRE

    Kisil, Vladimir V.

    2010-01-01

    The paper develops the theory of the covariant transform, which is inspired by the wavelet construction. It was observed that many interesting types of wavelets (or coherent states) arise from group representations which are not square integrable or vacuum vectors which are not admissible. The covariant transform extends the applicability of the popular wavelets construction to classic examples like the Hardy space H_2, Banach spaces, covariant functional calculus and many others. Keywords: Wavelets, cohe...

  11. Covariant field equations, gauge fields and conservation laws from Yang-Mills matrix models

    International Nuclear Information System (INIS)

    Steinacker, Harold

    2009-01-01

    The effective geometry and the gravitational coupling of nonabelian gauge and scalar fields on generic NC branes in Yang-Mills matrix models is determined. Covariant field equations are derived from the basic matrix equations of motion, known as the Yang-Mills algebra. Remarkably, the equations of motion for the Poisson structure and for the nonabelian gauge fields follow from a matrix Noether theorem, and are therefore protected from quantum corrections. This provides a transparent derivation and generalization of the effective action governing the SU(n) gauge fields obtained in [1], including the would-be topological term. In particular, the IKKT matrix model is capable of describing 4-dimensional NC space-times with a general effective metric. Metric deformations of flat Moyal-Weyl space are briefly discussed.

  12. Mathematical modeling of two phase stratified flow in a microchannel with curved interface

    Science.gov (United States)

    Dandekar, Rajat; Picardo, Jason R.; Pushpavanam, S.

    2017-11-01

    Stratified or layered two-phase flows are encountered in several applications of microchannels, such as solvent extraction. Assuming steady, unidirectional creeping flow, it is possible to solve the Stokes equations by the method of eigenfunctions, provided the interface is flat and meets the wall with a 90 degree contact angle. However, in reality the contact angle depends on the pair of liquids and the material of the channel, and differs significantly from 90 degrees in many practical cases. For unidirectional flow, this implies that the interface is a circular arc (of constant curvature). We solve this problem within the framework of eigenfunctions, using the procedure developed by Shankar. We consider two distinct cases: (a) the interface meets the wall with the equilibrium contact angle; (b) the interface is pinned by surface treatment of the walls, so that the flow rates determine the apparent contact angle. We show that the contact angle appreciably affects the velocity profile and the volume fractions of the liquids, while limiting the range of flow rates that can be sustained without the interface touching the top/bottom walls. Non-intuitively, we find that the pressure drop is reduced when the more viscous liquid wets the wall.

  13. A robust and accurate approach to computing compressible multiphase flow: Stratified flow model and AUSM+-up scheme

    International Nuclear Information System (INIS)

    Chang, Chih-Hao; Liou, Meng-Sing

    2007-01-01

    In this paper, we propose a new approach to compute compressible multifluid equations. Firstly, a single-pressure compressible multifluid model based on the stratified flow model is proposed. The stratified flow model, which defines different fluids in separated regions, is shown to be amenable to the finite volume method. We can apply the conservation law to each subregion and obtain a set of balance equations. Secondly, the AUSM+ scheme, which was originally designed for compressible gas flow, is extended to solve compressible liquid flows. By introducing additional dissipation terms into the numerical flux, the new scheme, called AUSM+-up, can be applied to both liquid and gas flows. Thirdly, the contribution to the numerical flux due to interactions between different phases is taken into account and solved by the exact Riemann solver. We will show that the proposed approach yields an accurate and robust method for computing compressible multiphase flows involving discontinuities, such as shock waves and fluid interfaces. Several one-dimensional test problems are used to demonstrate the capability of our method, including Ransom's water faucet problem and the air-water shock tube problem. Finally, several two-dimensional problems show the capability to capture fine details and complicated wave patterns in flows having large disparities in fluid density and velocities, such as interactions between a water shock wave and an air bubble, between an air shock wave and water column(s), and underwater explosion.

  14. New theoretical model for two-phase flow discharged from stratified two-phase region through small break

    International Nuclear Information System (INIS)

    Yonomoto, Taisuke; Tasaka, Kanji

    1988-01-01

    A theoretical and experimental study was conducted to understand two-phase flow discharged from a stratified two-phase region through a small break. This problem is important for the analysis of a small break loss-of-coolant accident (LOCA) in a light water reactor (LWR). The present theoretical results show that the break quality is a function of h/h_b, where h is the elevation difference between the bulk water level in the upstream region and the break, and the subscript b denotes entrainment initiation. This result is consistent with existing experimental results in the literature. An air-water experiment was also conducted, changing the break orientation as an experimental parameter, to develop and assess the model. Comparisons between the model and the experimental results show that the present model can satisfactorily predict the flow rate and the quality at the break without using any adjusting constant when liquid entrainment occurs in a stratified two-phase region. When gas entrainment occurs, the experimental data are correlated well by using a single empirical constant. (author)

  15. AMPTRACT: an algebraic model for computing pressure tube circumferential and steam temperature transients under stratified channel coolant conditions

    International Nuclear Information System (INIS)

    Gulshani, P.; So, C.B.

    1986-10-01

    In a number of postulated accident scenarios in a CANDU reactor, some of the horizontal fuel channels are predicted to experience periods of stratified channel coolant condition which can lead to a circumferential temperature gradient around the pressure tube. To study pressure tube strain and integrity under stratified flow channel conditions, it is necessary to determine the pressure tube circumferential temperature distribution. This paper presents an algebraic model, called AMPTRACT (Algebraic Model for Pressure Tube TRAnsient Circumferential Temperature), developed to give the transient temperature distribution in closed form. AMPTRACT models the following modes of heat transfer: radiation from the outermost elements to the pressure tube and from the pressure to calandria tube, convection between the fuel elements and the pressure tube and superheated steam, and circumferential conduction from the exposed to submerged part of the pressure tube. An iterative procedure is used to solve the mass and energy equations in closed form for axial steam and fuel-sheath transient temperature distributions. The one-dimensional conduction equation is then solved to obtain the pressure tube circumferential transient temperature distribution in a cosine series expansion. In the limit of large times and in the absence of convection and radiation to the calandria tube, the predicted pressure tube temperature distribution reduces identically to a parabolic profile. In this limit, however, radiation cannot be ignored because the temperatures are generally high. Convection and radiation tend to flatten the parabolic distribution.

  16. Numerical modelling of disintegration of basin-scale internal waves in a tank filled with stratified water

    Directory of Open Access Journals (Sweden)

    N. Stashchuk

    2005-01-01

    We present the results of numerical experiments performed with a fully non-linear, non-hydrostatic numerical model to study the baroclinic response of a long narrow tank filled with stratified water to an initially tilted interface. Upon release, the system starts to oscillate with an eigenfrequency corresponding to basin-scale baroclinic gravitational seiches. Field observations suggest that the disintegration of basin-scale internal waves into packets of solitary waves, shear instabilities, billows and spots of mixed water are important mechanisms for the transfer of energy within stratified lakes. Laboratory experiments performed by D. A. Horn, J. Imberger and G. N. Ivey (JFM, 2001) reproduced several regimes, which include damped linear waves and solitary waves. The generation of billows and shear instabilities induced by the basin-scale wave was, however, not sufficiently studied. The developed numerical model computes a variety of flows which were not observed with the experimental set-up. In particular, the model results showed that under conditions of low dissipation, the regimes of billows and supercritical flows may transform into a solitary wave regime. The obtained results can help in the interpretation of numerous observations of mixing processes in real lakes.

  17. The covariance matrix of the Potts model: A random cluster analysis

    International Nuclear Information System (INIS)

    Borgs, C.; Chayes, J.T.

    1996-01-01

    We consider the covariance matrix, G_{mn} = q^2 <δ(σ_x, m); δ(σ_y, n)>, of the d-dimensional q-state Potts model, rewriting it in the random cluster representation of Fortuin and Kasteleyn. In many of the q ordered phases, we identify the eigenvalues of this matrix both in terms of representations of the unbroken symmetry group of the model and in terms of random cluster connectivities and covariances, thereby attributing algebraic significance to these stochastic geometric quantities. We also show that the correlation length corresponding to the decay rate of one of the eigenvalues is the same as the inverse decay rate of the diameter of finite clusters. In dimension d=2, we show that this correlation length and the correlation length of the two-point function with free boundary conditions at the corresponding dual temperature are equal up to a factor of two. For systems with first-order transitions, this relation helps to resolve certain inconsistencies between recent exact and numerical work on correlation lengths at the self-dual point β_o. For systems with second-order transitions, this relation implies the equality of the correlation length exponents from above and below threshold, as well as an amplitude ratio of two. In the course of proving the above results, we establish several properties of independent interest, including left continuity of the inverse correlation length with free boundary conditions and upper semicontinuity of the decay rate for finite clusters in all dimensions, and left continuity of the two-dimensional free-boundary-condition percolation probability at β_o. We also introduce DLR equations for the random cluster model and use them to establish ergodicity of the free measure. In order to prove these results, we introduce a new class of events which we call decoupling events and two inequalities for these events.

  18. A joint logistic regression and covariate-adjusted continuous-time Markov chain model.

    Science.gov (United States)

    Rubin, Maria Laura; Chan, Wenyaw; Yamal, Jose-Miguel; Robertson, Claudia Sue

    2017-12-10

    The use of longitudinal measurements to predict a categorical outcome is an increasingly common goal in research studies. Joint models are commonly used to describe two or more models simultaneously by considering the correlated nature of their outcomes and the random error present in the longitudinal measurements. However, there is limited research on joint models with longitudinal predictors and categorical cross-sectional outcomes. Perhaps the most challenging task is how to model the longitudinal predictor process such that it represents the true biological mechanism that dictates the association with the categorical response. We propose a joint logistic regression and Markov chain model to describe a binary cross-sectional response, where the unobserved transition rates of a two-state continuous-time Markov chain are included as covariates. We use the method of maximum likelihood to estimate the parameters of our model. In a simulation study, coverage probabilities of about 95%, standard deviations close to standard errors, and low biases for the parameter values show that our estimation method is adequate. We apply the proposed joint model to a dataset of patients with traumatic brain injury to describe and predict a 6-month outcome based on physiological data collected post-injury and admission characteristics. Our analysis indicates that the information provided by physiological changes over time may help improve prediction of long-term functional status of these severely ill subjects. Copyright © 2017 John Wiley & Sons, Ltd.
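
    The two-state continuous-time Markov chain at the heart of such a joint model has closed-form transition probabilities, which is what makes its unobserved transition rates tractable as covariates. A minimal sketch of that structure (not the authors' implementation; the rate and coefficient names are illustrative):

```python
import math

def ctmc_transition_matrix(lam01, lam10, t):
    """Transition probabilities P(t) = expm(Q*t) for the two-state chain with
    generator Q = [[-lam01, lam01], [lam10, -lam10]], in closed form."""
    s = lam01 + lam10
    decay = math.exp(-s * t)
    p01 = (lam01 / s) * (1.0 - decay)   # state 0 -> state 1 within time t
    p10 = (lam10 / s) * (1.0 - decay)   # state 1 -> state 0 within time t
    return [[1.0 - p01, p01], [p10, 1.0 - p10]]

def logistic_response(beta0, beta1, beta2, lam01, lam10):
    """Probability of the binary cross-sectional outcome, with the chain's
    transition rates entering a logistic regression as covariates."""
    eta = beta0 + beta1 * lam01 + beta2 * lam10
    return 1.0 / (1.0 + math.exp(-eta))
```

    As t grows, each row of P(t) converges to the stationary distribution (lam10/(lam01+lam10), lam01/(lam01+lam10)), so short-term dynamics and long-run behavior are both captured by the two rates.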

  19. Relationships of the phase velocity with the microarchitectural parameters in bovine trabecular bone in vitro: Application of a stratified model

    Science.gov (United States)

    Lee, Kang Il

    2012-08-01

    The present study aims to provide insight into the relationships of the phase velocity with the microarchitectural parameters in bovine trabecular bone in vitro. The frequency-dependent phase velocity was measured in 22 bovine femoral trabecular bone samples by using a pair of transducers with a diameter of 25.4 mm and a center frequency of 0.5 MHz. The phase velocity exhibited positive correlation coefficients of 0.48 and 0.32 with the ratio of bone volume to total volume and the trabecular thickness, respectively, but a negative correlation coefficient of -0.62 with the trabecular separation. The best univariate predictor of the phase velocity was the trabecular separation, yielding an adjusted squared correlation coefficient of 0.36. The multivariate regression models yielded adjusted squared correlation coefficients of 0.21-0.36. The theoretical phase velocity predicted by using a stratified model for wave propagation in periodically stratified media consisting of alternating parallel solid-fluid layers showed reasonable agreement with the experimental measurements.
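
    The adjusted squared correlation coefficients quoted above follow from the standard small-sample adjustment. A quick check (a sketch; it assumes the usual adjustment formula was used, and the reported 0.36 reflects rounding of r):

```python
def adjusted_r_squared(r, n, p):
    """Adjusted R^2 for a regression with p predictors and n observations."""
    r2 = r * r
    return 1.0 - (1.0 - r2) * (n - 1) / (n - p - 1)

# Best univariate predictor: trabecular separation, r = -0.62, n = 22 samples.
adj = adjusted_r_squared(-0.62, 22, 1)   # about 0.35, close to the reported 0.36
```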

  20. Comparison of exact, Efron and Breslow parameter approach methods on hazard ratio and stratified Cox regression model

    Science.gov (United States)

    Fatekurohman, Mohamat; Nurmala, Nita; Anggraeni, Dian

    2018-04-01

    The lungs are the most important organs of the respiratory system. Disorders of the lungs are various, i.e. pneumonia, emphysema, tuberculosis and lung cancer; of these, lung cancer is the most harmful. With this in mind, this research applies survival analysis to the factors affecting the endurance of lung cancer patients, comparing the exact, Efron and Breslow parameter approach methods on the hazard ratio in a stratified Cox regression model. The data are based on the medical records of lung cancer patients at the Jember Paru-paru (lung) hospital, East Java, Indonesia, in 2016. The factors affecting the endurance of the lung cancer patients can be classified into several criteria, i.e. sex, age, hemoglobin, leukocytes, erythrocytes, blood sedimentation rate, therapy status, general condition and body weight. The results show that the exact method in the stratified Cox regression model performs better than the others. Moreover, the endurance of the patients is affected by their age and general condition.
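
    The exact, Efron and Breslow methods differ only in how tied event times enter the Cox partial likelihood. A sketch of the Breslow and Efron log-denominators for one event time with d tied events (risk scores are illustrative; the exact method, which enumerates all possible orderings of the ties, is omitted for brevity):

```python
import math

def breslow_log_denominator(risk_scores, tied_idx):
    """Breslow approximation: the full risk-set sum is reused for each of
    the d tied events."""
    total = sum(risk_scores)
    return len(tied_idx) * math.log(total)

def efron_log_denominator(risk_scores, tied_idx):
    """Efron approximation: the tied subjects' scores are progressively
    down-weighted, giving a denominator between Breslow's and the exact one."""
    d = len(tied_idx)
    total = sum(risk_scores)
    tied_sum = sum(risk_scores[i] for i in tied_idx)
    return sum(math.log(total - (l / d) * tied_sum) for l in range(d))
```

    With no ties (d = 1) the two coincide; with ties, Efron's denominator is strictly smaller, which is why the two methods yield different hazard-ratio estimates on tied data.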

  1. Covariant boost and structure functions of baryons in Gross-Neveu models

    International Nuclear Information System (INIS)

    Brendel, Wieland; Thies, Michael

    2010-01-01

    Baryons in the large N limit of two-dimensional Gross-Neveu models are reconsidered. The time-dependent Dirac-Hartree-Fock approach is used to boost a baryon to any inertial frame and shown to yield the covariant energy-momentum relation. Momentum distributions are computed exactly in arbitrary frames and used to interpolate between the rest frame and the infinite momentum frame, where they are related to structure functions. Effects from the Dirac sea depend sensitively on the occupation fraction of the valence level and the bare fermion mass and do not vanish at infinite momentum. In the case of the kink baryon, they even lead to divergent quark and antiquark structure functions at x=0.

  2. Modelling anisotropic covariance using stochastic development and sub-Riemannian frame bundle geometry

    DEFF Research Database (Denmark)

    Sommer, Stefan Horst; Svane, Anne Marie

    2017-01-01

    We discuss the geometric foundation behind the use of stochastic processes in the frame bundle of a smooth manifold to build stochastic models with applications in statistical analysis of non-linear data. The transition densities for the projection to the manifold of Brownian motions developed in the frame bundle lead to a family of probability distributions on the manifold. We explain how data mean and covariance can be interpreted as points in the frame bundle or, more precisely, in the bundle of symmetric positive definite 2-tensors, analogously to the parameters describing Euclidean normal distributions. We discuss a factorization of the frame bundle projection map through this bundle, the natural sub-Riemannian structure of the frame bundle, the effect of holonomy, and the existence of subbundles where the Hörmander condition is satisfied such that the Brownian motions have smooth transition densities.

  3. Quarkonia and heavy-light mesons in a covariant quark model

    Directory of Open Access Journals (Sweden)

    Leitão Sofia

    2016-01-01

    Preliminary calculations using the Covariant Spectator Theory (CST) employed a scalar linear confining interaction and an additional constant vector potential to compute the mesonic mass spectra. In this work we generalize the confining interaction to include more general structures, in particular a vector and also a pseudoscalar part, as suggested by a recent study [1]. A one-gluon-exchange kernel is also implemented to describe the short-range part of the interaction. We solve the simplest CST approximation to the complete Bethe-Salpeter equation, the one-channel spectator equation, using a numerical technique that eliminates all singularities from the kernel. The parameters of the model are determined through a fit to the experimental pseudoscalar meson spectra, with good agreement for both quarkonia and heavy-light states.

  4. Heuristic algorithms for feature selection under Bayesian models with block-diagonal covariance structure.

    Science.gov (United States)

    Foroughi Pour, Ali; Dalton, Lori A

    2018-03-21

    Many bioinformatics studies aim to identify markers, or features, that can be used to discriminate between distinct groups. In problems where strong individual markers are not available, or where interactions between gene products are of primary interest, it may be necessary to consider combinations of features as a marker family. To this end, recent work proposes a hierarchical Bayesian framework for feature selection that places a prior on the set of features we wish to select and on the label-conditioned feature distribution. While an analytical posterior under Gaussian models with block covariance structures is available, the optimal feature selection algorithm for this model remains intractable since it requires evaluating the posterior over the space of all possible covariance block structures and feature-block assignments. To address this computational barrier, in prior work we proposed a simple suboptimal algorithm, 2MNC-Robust, with robust performance across the space of block structures. Here, we present three new heuristic feature selection algorithms. The proposed algorithms outperform 2MNC-Robust and many other popular feature selection algorithms on synthetic data. In addition, enrichment analysis on real breast cancer, colon cancer, and leukemia data indicates they also output many of the genes and pathways linked to the cancers under study. Bayesian feature selection is a promising framework for small-sample high-dimensional data, in particular for biomarker discovery applications. When applied to cancer data, these algorithms identified many genes already shown to be involved in cancer as well as potentially new biomarkers. Furthermore, one of the proposed algorithms, SPM, outputs blocks of heavily correlated genes, which is particularly useful for studying gene interactions and gene networks.
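
    The computational appeal of the block-diagonal covariance assumption is that the Gaussian log-likelihood decomposes into a sum over blocks, so each feature block can be scored independently. A numpy sketch of that decomposition (illustrative values; this is not the authors' 2MNC-Robust or SPM algorithm):

```python
import numpy as np

def block_diag(blocks):
    """Assemble a block-diagonal covariance matrix from square blocks."""
    n = sum(b.shape[0] for b in blocks)
    S = np.zeros((n, n))
    i = 0
    for b in blocks:
        k = b.shape[0]
        S[i:i + k, i:i + k] = b
        i += k
    return S

def gaussian_loglik(x, mu, S):
    """Log-density of a multivariate normal N(mu, S) at x."""
    d = len(mu)
    _, logdet = np.linalg.slogdet(S)
    r = x - mu
    return -0.5 * (d * np.log(2 * np.pi) + logdet + r @ np.linalg.solve(S, r))
```

    Because the determinant and quadratic form both factor over blocks, the full log-likelihood equals the sum of per-block log-likelihoods, which keeps posterior evaluation within a fixed block assignment cheap.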

  5. Do gamblers eat more salt? Testing a latent trait model of covariance in consumption.

    Science.gov (United States)

    Goodwin, Belinda C; Browne, Matthew; Rockloff, Matthew; Donaldson, Phillip

    2015-09-01

    A diverse class of stimuli, including certain foods, substances, media, and economic behaviours, may be described as 'reward-oriented' in that they provide immediate reinforcement with little initial investment. Neurophysiological and personality concepts, including dopaminergic dysfunction, reward sensitivity and rash impulsivity, each predict the existence of a latent behavioural trait that leads to increased consumption of all stimuli in this class. Whilst bivariate relationships (co-morbidities) are often reported in the literature, to our knowledge, a multivariate investigation of this possible trait has not been done. We surveyed 1,194 participants (550 male) on their typical weekly consumption of 11 types of reward-oriented stimuli, including fast food, salt, caffeine, television, gambling products, and illicit drugs. Confirmatory factor analysis was used to compare models in a 3×3 structure, based on the definition of a single latent factor (none, fixed loadings, or estimated loadings), and assumed residual covariance structure (none, a-priori / literature based, or post-hoc / data-driven). The inclusion of a single latent behavioural 'consumption' factor significantly improved model fit in all cases. Also confirming theoretical predictions, estimated factor loadings on reward-oriented indicators were uniformly positive, regardless of assumptions regarding residual covariances. Additionally, the latent trait was found to be negatively correlated with the non-reward-oriented indicators of fruit and vegetable consumption. The findings support the notion of a single behavioural trait leading to increased consumption of reward-oriented stimuli across multiple modalities. We discuss implications regarding the concentration of negative lifestyle-related health behaviours.

  6. Computing the transport time scales of a stratified lake on the basis of Tonolli’s model

    Directory of Open Access Journals (Sweden)

    Marco Pilotti

    2014-05-01

    This paper deals with a simple model to evaluate the transport time scales in thermally stratified lakes that do not necessarily completely mix on a regular annual basis. The model is based on the formalization of an idea originally proposed in Italian by Tonolli in 1964, who presented a mass balance of the water initially stored within a lake, taking into account the known seasonal evolution of its thermal structure. The numerical solution of this mass balance provides an approximation to the water age distribution for the conceptualised lake, from which an upper bound to the typical time scales widely used in limnology can be obtained. After discussing the original test case considered by Tonolli, we apply the model to Lake Iseo, a deep lake located in the North of Italy, presenting the results obtained on the basis of a 30 year series of data.
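
    The mass-balance idea can be illustrated with a deliberately oversimplified sketch: a completely mixed lake that exports a fixed outflow volume each year. (Tonolli's formulation, by contrast, restricts the exchange to the seasonally evolving mixed layer, which is what lengthens the time scales and yields the upper bound mentioned above; the numbers below are illustrative.)

```python
def original_water_remaining(volume, annual_outflow, years):
    """Fraction of the initially stored water still in the lake after each
    annual cycle, assuming complete mixing before every export."""
    f = 1.0 - annual_outflow / volume
    return [f ** k for k in range(years + 1)]
```

    With volume V and outflow Q, the fraction decays geometrically with ratio 1 − Q/V, so the renewal time is of order V/Q; incomplete mixing can only slow this decay.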

  7. A Nakanishi-based model illustrating the covariant extension of the pion GPD overlap representation and its ambiguities

    Science.gov (United States)

    Chouika, N.; Mezrag, C.; Moutarde, H.; Rodríguez-Quintero, J.

    2018-05-01

    A systematic approach for the model building of Generalized Parton Distributions (GPDs), based on their overlap representation within the DGLAP kinematic region and a further covariant extension to the ERBL one, is applied to the valence-quark pion's case, using light-front wave functions inspired by the Nakanishi representation of the pion Bethe-Salpeter amplitudes (BSA). This simple but fruitful pion GPD model illustrates the general model building technique and, in addition, allows for the ambiguities related to the covariant extension, grounded on the Double Distribution (DD) representation, to be constrained by requiring a soft-pion theorem to be properly observed.

  8. An integrative model of evolutionary covariance: a symposium on body shape in fishes.

    Science.gov (United States)

    Walker, Jeffrey A

    2010-12-01

    A major direction of current and future biological research is to understand how multiple, interacting functional systems coordinate in producing a body that works. This understanding is complicated by the fact that organisms need to work well in multiple environments, with both predictable and unpredictable environmental perturbations. Furthermore, organismal design reflects a history of past environments and not a plan for future environments. How complex, interacting functional systems evolve, then, is a truly grand challenge. In accepting the challenge, an integrative model of evolutionary covariance is developed. The model combines quantitative genetics, functional morphology/physiology, and functional ecology. The model is used to convene scientists ranging from geneticists, to physiologists, to ecologists, to engineers to facilitate the emergence of body shape in fishes as a model system for understanding how complex, interacting functional systems develop and evolve. Body shape of fish is a complex morphology that (1) results from many developmental paths and (2) functions in many different behaviors. Understanding the coordination and evolution of the many paths from genes to body shape, body shape to function, and function to a working fish body in a dynamic environment is now possible given new technologies from genetics to engineering and new theoretical models that integrate the different levels of biological organization (from genes to ecology).

  9. Simulation modeling for stratified breast cancer screening - a systematic review of cost and quality of life assumptions.

    Science.gov (United States)

    Arnold, Matthias

    2017-12-02

    The economic evaluation of stratified breast cancer screening is gaining momentum but also produces very diverse results. Systematic reviews have so far focused on modeling techniques and epidemiologic assumptions; cost and utility parameters, however, have received only little attention. This systematic review assesses simulation models for stratified breast cancer screening based on their cost and utility parameters in each phase of breast cancer screening and care. A literature review was conducted to compare economic evaluations with simulation models of personalized breast cancer screening. Study quality was assessed using reporting guidelines. Cost and utility inputs were extracted, standardized and structured using a care delivery framework. Studies were then clustered according to their study aim and parameters were compared within the clusters. Eighteen studies were identified within three study clusters. Reporting quality was very diverse in all three clusters. Only two studies in cluster 1, four studies in cluster 2 and one study in cluster 3 scored high in the quality appraisal. In addition to the quality appraisal, this review assessed whether the simulation models were consistent in integrating all relevant phases of care, whether utility parameters were consistent and methodologically sound, and whether costs were compatible and consistent in the actual parameters used for screening, diagnostic work-up and treatment. Of 18 studies, only three did not show signs of potential bias. This systematic review shows that a closer look into the cost and utility parameters can help to identify potential bias. Future simulation models should focus on integrating all relevant phases of care, using methodologically sound utility parameters and avoiding inconsistent cost parameters.

  10. Using Fit Indexes to Select a Covariance Model for Longitudinal Data

    Science.gov (United States)

    Liu, Siwei; Rovine, Michael J.; Molenaar, Peter C. M.

    2012-01-01

    This study investigated the performance of fit indexes in selecting a covariance structure for longitudinal data. Data were simulated to follow a compound symmetry, first-order autoregressive, first-order moving average, or random-coefficients covariance structure. We examined the ability of the likelihood ratio test (LRT), root mean square error…
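
    The candidate covariance structures compared in such simulations are easy to write down explicitly, e.g. compound symmetry versus first-order autoregressive for n repeated measures. A numpy sketch (illustrative parameters):

```python
import numpy as np

def compound_symmetry(n, sigma2, rho):
    """Equal variance, equal correlation between any two occasions."""
    return sigma2 * ((1 - rho) * np.eye(n) + rho * np.ones((n, n)))

def ar1(n, sigma2, rho):
    """Correlation decays geometrically with the lag between occasions."""
    idx = np.arange(n)
    return sigma2 * rho ** np.abs(idx[:, None] - idx[None, :])
```

    Both matrices use the same two parameters, yet imply very different longitudinal dependence, which is precisely what a fit index must discriminate between.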

  11. A comparative study of covariance selection models for the inference of gene regulatory networks.

    Science.gov (United States)

    Stifanelli, Patrizia F; Creanza, Teresa M; Anglani, Roberto; Liuzzi, Vania C; Mukherjee, Sayan; Schena, Francesco P; Ancona, Nicola

    2013-10-01

    The inference, or 'reverse-engineering', of gene regulatory networks from expression data and the description of the complex dependency structures among genes are open issues in modern molecular biology. In this paper we compared three regularized methods of covariance selection for the inference of gene regulatory networks, developed to circumvent the problems arising when the number of observations n is smaller than the number of genes p. The examined approaches provided three alternative estimates of the inverse covariance matrix: (a) the 'PINV' method is based on the Moore-Penrose pseudoinverse, (b) the 'RCM' method performs correlation between regression residuals and (c) the 'ℓ(2C)' method maximizes a properly regularized log-likelihood function. Our extensive simulation studies showed that ℓ(2C) outperformed the other two methods, having the most predictive partial correlation estimates and the highest sensitivity to infer conditional dependencies between genes even when only a small number of observations was available. The application of this method for inferring gene networks of the isoprenoid biosynthesis pathways in Arabidopsis thaliana revealed a negative partial correlation coefficient between the two hubs in the two isoprenoid pathways and, more importantly, provided evidence of cross-talk between genes in the plastidial and the cytosolic pathways. When applied to gene expression data relative to a signature of the HRAS oncogene in human cell cultures, the method revealed 9 genes (p-value<0.0005) directly interacting with HRAS, sharing the same Ras-responsive binding site for the transcription factor RREB1. This result suggests that the transcriptional activation of these genes is mediated by a common transcription factor downstream of Ras signaling. Software implementing the methods in the form of Matlab scripts is available at: http://users.ba.cnr.it/issia/iesina18/CovSelModelsCodes.zip. Copyright © 2013 The Authors. Published by
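
    Of the three estimators, the pseudoinverse approach is the simplest to sketch: pseudoinvert the sample covariance and rescale the resulting precision matrix to partial correlations. A numpy sketch of that idea (not the paper's Matlab implementation):

```python
import numpy as np

def pinv_partial_correlations(X):
    """Estimate partial correlations from an n-by-p data matrix via the
    Moore-Penrose pseudoinverse of the sample covariance ('PINV'-style)."""
    S = np.cov(X, rowvar=False)
    T = np.linalg.pinv(S)                 # (pseudo-)precision matrix
    d = np.sqrt(np.diag(T))
    P = -T / np.outer(d, d)               # rho_ij = -T_ij / sqrt(T_ii * T_jj)
    np.fill_diagonal(P, 1.0)
    return P
```

    The pseudoinverse is what keeps the estimate defined even when n < p and the sample covariance is singular, the regime these methods were designed for.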

  12. Alcohol advertising, consumption and abuse: a covariance-structural modelling look at Strickland's data.

    Science.gov (United States)

    Adlaf, E M; Kohn, P M

    1989-07-01

    Re-analysis employing covariance-structural models was conducted on Strickland's (1983) survey data on 772 drinking students from Grades 7, 9 and 11. These data bear on the relations among alcohol consumption, alcohol abuse, association with drinking peers and exposure to televised alcohol advertising. Whereas Strickland used a just-identified model which, therefore, could not be tested for goodness of fit, our re-analysis tested several alternative models, which could be contradicted by the data. One model did fit his data particularly well. Its major implications are as follows: (1) Symptomatic consumption, negative consequences and self-rated severity of alcohol-related problems apparently reflect a common underlying factor, namely alcohol abuse. (2) Use of alcohol to relieve distress and frequency of intoxication, however, appear not to reflect abuse, although frequent intoxication contributes substantially to it. (3) Alcohol advertising affects consumption directly and abuse indirectly, although peer association has far greater impact on both consumption and abuse. These findings are interpreted as lending little support to further restrictions on advertising.

  13. Entanglement entropy production in gravitational collapse: covariant regularization and solvable models

    Science.gov (United States)

    Bianchi, Eugenio; De Lorenzo, Tommaso; Smerlak, Matteo

    2015-06-01

    We study the dynamics of vacuum entanglement in the process of gravitational collapse and subsequent black hole evaporation. In the first part of the paper, we introduce a covariant regularization of entanglement entropy tailored to curved spacetimes; this regularization allows us to propose precise definitions for the concepts of black hole "exterior entropy" and "radiation entropy." For a Vaidya model of collapse we find results consistent with the standard thermodynamic properties of Hawking radiation. In the second part of the paper, we compute the vacuum entanglement entropy of various spherically-symmetric spacetimes of interest, including the nonsingular black hole model of Bardeen, Hayward, Frolov and Rovelli-Vidotto and the "black hole fireworks" model of Haggard-Rovelli. We discuss specifically the role of event and trapping horizons in connection with the behavior of the radiation entropy at future null infinity. We observe in particular that ( i) in the presence of an event horizon the radiation entropy diverges at the end of the evaporation process, ( ii) in models of nonsingular evaporation (with a trapped region but no event horizon) the generalized second law holds only at early times and is violated in the "purifying" phase, ( iii) at late times the radiation entropy can become negative (i.e. the radiation can be less correlated than the vacuum) before going back to zero leading to an up-down-up behavior for the Page curve of a unitarily evaporating black hole.

  14. Entanglement entropy production in gravitational collapse: covariant regularization and solvable models

    International Nuclear Information System (INIS)

    Bianchi, Eugenio; Lorenzo, Tommaso De; Smerlak, Matteo

    2015-01-01

    We study the dynamics of vacuum entanglement in the process of gravitational collapse and subsequent black hole evaporation. In the first part of the paper, we introduce a covariant regularization of entanglement entropy tailored to curved spacetimes; this regularization allows us to propose precise definitions for the concepts of black hole “exterior entropy” and “radiation entropy.” For a Vaidya model of collapse we find results consistent with the standard thermodynamic properties of Hawking radiation. In the second part of the paper, we compute the vacuum entanglement entropy of various spherically-symmetric spacetimes of interest, including the nonsingular black hole model of Bardeen, Hayward, Frolov and Rovelli-Vidotto and the “black hole fireworks” model of Haggard-Rovelli. We discuss specifically the role of event and trapping horizons in connection with the behavior of the radiation entropy at future null infinity. We observe in particular that (i) in the presence of an event horizon the radiation entropy diverges at the end of the evaporation process, (ii) in models of nonsingular evaporation (with a trapped region but no event horizon) the generalized second law holds only at early times and is violated in the “purifying” phase, (iii) at late times the radiation entropy can become negative (i.e. the radiation can be less correlated than the vacuum) before going back to zero leading to an up-down-up behavior for the Page curve of a unitarily evaporating black hole.

  15. Modeling gross primary production in semi-arid Inner Mongolia using MODIS imagery and eddy covariance data

    Science.gov (United States)

    Ranjeet John; Jiquan Chen; Asko Noormets; Xiangming Xiao; Jianye Xu; Nan Lu; Shiping Chen

    2013-01-01

    We evaluate the modelling of carbon fluxes from eddy covariance (EC) tower observations in different water-limited land-cover/land-use (LCLU) and biome types in semi-arid Inner Mongolia, China. The vegetation photosynthesis model (VPM) and modified VPM (MVPM), driven by the enhanced vegetation index (EVI) and land-surface water index (LSWI), which were derived from the...
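
    The VPM referenced here is a light-use-efficiency model of the general form GPP = ε_g × FPAR_chl × PAR, with the maximum efficiency down-regulated by temperature and water scalars driven by EVI and LSWI. A hedged sketch of that structure (the parameter values below are illustrative, not those calibrated in the study):

```python
def vpm_gpp(evi, lswi, par, temp_c,
            eps0=0.05, lswi_max=0.65, t_min=0.0, t_max=40.0, t_opt=22.0):
    """Gross primary production in light-use-efficiency form: the maximum
    efficiency eps0 is scaled down by temperature and water stress."""
    t_scalar = (((temp_c - t_min) * (temp_c - t_max))
                / ((temp_c - t_min) * (temp_c - t_max) - (temp_c - t_opt) ** 2))
    w_scalar = (1.0 + lswi) / (1.0 + lswi_max)
    fpar_chl = evi                       # common simplification: FPAR ~ EVI
    return eps0 * t_scalar * w_scalar * fpar_chl * par
```

    The temperature scalar peaks at 1 at the optimum temperature and falls toward 0 at the limits, so GPP responds to the same remotely sensed indices (EVI, LSWI) that the study uses as model drivers.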

  16. Weak instruments and the first stage F-statistic in IV models with a nonscalar error covariance structure

    NARCIS (Netherlands)

    Bun, M.; de Haan, M.

    2010-01-01

    We analyze the usefulness of the first stage F-statistic for detecting weak instruments in the IV model with a nonscalar error covariance structure. In particular, we question the validity of the rule of thumb of a first stage F-statistic of 10 or higher for models with correlated errors.
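
    The statistic under scrutiny is the joint-significance F from the first-stage regression of the endogenous regressor on the instruments. A sketch of the conventional homoskedastic version, the one whose "F ≥ 10" rule of thumb the authors question under correlated errors (numpy, simulated data):

```python
import numpy as np

def first_stage_F(Z, x):
    """Conventional F-statistic for H0: all instrument coefficients are zero
    in x = const + Z @ pi + error (valid under spherical errors only)."""
    n, q = Z.shape
    Zc = np.column_stack([np.ones(n), Z])
    pi_hat, *_ = np.linalg.lstsq(Zc, x, rcond=None)
    rss_u = np.sum((x - Zc @ pi_hat) ** 2)
    rss_r = np.sum((x - x.mean()) ** 2)   # restricted model: intercept only
    return ((rss_r - rss_u) / q) / (rss_u / (n - q - 1))
```

    Under a nonscalar error covariance this homoskedastic F is no longer the right diagnostic, which is the paper's point; robust alternatives would replace the variance estimate in the denominator.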

  17. Modeling heterogeneous (co)variances from adjacent-SNP groups improves genomic prediction for milk protein composition traits

    DEFF Research Database (Denmark)

    Gebreyesus, Grum; Lund, Mogens Sandø; Buitenhuis, Albert Johannes

    2017-01-01

    Accurate genomic prediction requires a large reference population, which is problematic for traits that are expensive to measure. Traits related to milk protein composition are not routinely recorded due to costly procedures and are considered to be controlled by a few quantitative trait loci of large effect. The amount of variation explained may vary between regions, leading to heterogeneous (co)variance patterns across the genome. Genomic prediction models that can efficiently take such heterogeneity of (co)variances into account can result in improved prediction reliability. In this study, we developed and implemented novel univariate and bivariate Bayesian prediction models, based on estimates of heterogeneous (co)variances for genome segments (BayesAS). Available data consisted of milk protein composition traits measured on cows and de-regressed proofs of total protein yield derived for bulls...

  18. 4He(γ,dd) and 3He(γ,pd) reactions in a nonlocal covariant model

    Directory of Open Access Journals (Sweden)

    Kasatkin Yu. A.

    2014-03-01

    Photonuclear reaction research is of great interest to obtain information about the structure of nuclei. The investigation of structural effects requires certain insights into the reaction mechanisms, which have to be identified on the basis of the fundamental principles of covariance and gauge invariance. The major achievement of the chosen model is its ability to reproduce the cross-section dependence using the minimal necessary set of parameters. We analyze the two-particle disintegration of 3He nuclei by photons. Our interest was raised by the fact that 3He is the simplest many-particle system which admits an exact solution. We also consider the process 4He(γ,dd). This process proceeds through the quadrupole absorption of γ-rays, while the dipole transition is suppressed. This property is a consequence of isospin selection as well as the identity of the particles in the final state. The obtained results describe the energy range from threshold (20 MeV) to 140 MeV. Therefore, the model has the peculiarity of being valid not only in the low-energy regime, but also at higher energies. The present paper is devoted to determining the roles of different reaction mechanisms and to solving the problems above.

  19. Application of a plane-stratified emission model to predict the effects of vegetation in passive microwave radiometry

    Directory of Open Access Journals (Sweden)

    K. Lee

    2002-01-01

    This paper reports the application to vegetation canopies of a coherent model for the propagation of electromagnetic radiation through a stratified medium. The resulting multi-layer vegetation model is plausibly realistic in that it recognises the dielectric permittivity of the vegetation matter, the mixing of the dielectric permittivities for vegetation and air within the canopy and, in simplified terms, the overall vertical distribution of dielectric permittivity and temperature through the canopy. Any sharp changes in the dielectric profile of the canopy resulted in interference effects manifested as oscillations in the microwave brightness temperature as a function of canopy height or look angle. However, when Gaussian broadening of the top and bottom of the canopy (reflecting the natural variability between plants was included within the model, these oscillations were eliminated. The model parameters required to specify the dielectric profile within the canopy, particularly the parameters that quantify the dielectric mixing between vegetation and air in the canopy, are not usually available in typical field experiments. Thus, the feasibility of specifying these parameters using an advanced single-criterion, multiple-parameter optimisation technique was investigated by automatically minimizing the difference between the modelled and measured brightness temperatures. The results imply that the mixing parameters can be so determined but only if other parameters that specify vegetation dry matter and water content are measured independently. The new model was then applied to investigate the sensitivity of microwave emission to specific vegetation parameters. Keywords: passive microwave, soil moisture, vegetation, SMOS, retrieval
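
    Coherent stratified-medium models of this kind are typically implemented with the characteristic (transfer) matrix of each layer. A normal-incidence sketch for lossless layers, which reproduces the interference effects mentioned above (a simplification: the paper's canopy layers have complex, mixed dielectric permittivities):

```python
import cmath

def layer_matrix(n, d, wavelength):
    """Characteristic matrix of one homogeneous layer at normal incidence."""
    delta = 2 * cmath.pi * n * d / wavelength
    return [[cmath.cos(delta), 1j * cmath.sin(delta) / n],
            [1j * n * cmath.sin(delta), cmath.cos(delta)]]

def matmul2(A, B):
    """2x2 matrix product."""
    return [[A[0][0]*B[0][0] + A[0][1]*B[1][0], A[0][0]*B[0][1] + A[0][1]*B[1][1]],
            [A[1][0]*B[0][0] + A[1][1]*B[1][0], A[1][0]*B[0][1] + A[1][1]*B[1][1]]]

def reflectance(n0, layers, ns, wavelength):
    """Power reflectance of a stack of (index, thickness) layers between
    half-spaces with indices n0 and ns."""
    M = [[1, 0], [0, 1]]
    for n, d in layers:
        M = matmul2(M, layer_matrix(n, d, wavelength))
    num = n0 * M[0][0] + n0 * ns * M[0][1] - M[1][0] - ns * M[1][1]
    den = n0 * M[0][0] + n0 * ns * M[0][1] + M[1][0] + ns * M[1][1]
    return abs(num / den) ** 2
```

    The layer phase thickness is what produces the oscillations with canopy height; smoothly graded (Gaussian-broadened) profiles can be approximated by many thin layers, washing the oscillations out.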

  20. Evaluation of a Stratified National Breast Screening Program in the United Kingdom: An Early Model-Based Cost-Effectiveness Analysis.

    Science.gov (United States)

    Gray, Ewan; Donten, Anna; Karssemeijer, Nico; van Gils, Carla; Evans, D Gareth; Astley, Sue; Payne, Katherine

    2017-09-01

    To identify the incremental costs and consequences of stratified national breast screening programs (stratified NBSPs) and drivers of relative cost-effectiveness. A decision-analytic model (discrete event simulation) was conceptualized to represent four stratified NBSPs (risk 1, risk 2, masking [supplemental screening for women with higher breast density], and masking and risk 1) compared with the current UK NBSP and no screening. The model assumed a lifetime horizon, the health service perspective to identify costs (£, 2015), and measured consequences in quality-adjusted life-years (QALYs). Multiple data sources were used: systematic reviews of effectiveness and utility, published studies reporting costs, and cohort studies embedded in existing NBSPs. Model parameter uncertainty was assessed using probabilistic sensitivity analysis and one-way sensitivity analysis. The base-case analysis, supported by probabilistic sensitivity analysis, suggested that the risk stratified NBSPs (risk 1 and risk 2) were relatively cost-effective when compared with the current UK NBSP, with incremental cost-effectiveness ratios of £16,689 per QALY and £23,924 per QALY, respectively. Stratified NBSP including masking approaches (supplemental screening for women with higher breast density) was not a cost-effective alternative, with incremental cost-effectiveness ratios of £212,947 per QALY (masking) and £75,254 per QALY (risk 1 and masking). When compared with no screening, all stratified NBSPs could be considered cost-effective. Key drivers of cost-effectiveness were discount rate, natural history model parameters, mammographic sensitivity, and biopsy rates for recalled cases. A key assumption was that the risk model used in the stratification process was perfectly calibrated to the population. This early model-based cost-effectiveness analysis provides indicative evidence for decision makers to understand the key drivers of costs and QALYs for exemplar stratified NBSP. Copyright
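
    The incremental cost-effectiveness ratios quoted above are simply ratios of cost and QALY differences against the comparator. A trivial sketch (the cost and QALY figures below are invented for illustration, not outputs of the model):

```python
def icer(cost, qaly, cost_ref, qaly_ref):
    """Incremental cost-effectiveness ratio: extra cost per extra QALY."""
    return (cost - cost_ref) / (qaly - qaly_ref)

# Hypothetical strategy vs. comparator: +£500 cost, +0.03 QALYs per person.
example = icer(10500.0, 10.03, 10000.0, 10.00)   # about £16,667 per QALY
```

    A strategy is typically deemed cost-effective when its ICER falls below the willingness-to-pay threshold, commonly £20,000-£30,000 per QALY in the UK.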

  1. Stratified spherical model for microwave imaging of the brain: Analysis and experimental validation of transmitted power

    DEFF Research Database (Denmark)

    Bjelogrlic, Mina; Volery, Maxime; Fuchs, Benjamin

    2018-01-01

    This work presents the analysis of power transmission of a radiating field inside the human head for microwave imaging applications. For this purpose, a spherical layered model composed of dispersive biological tissues is investigated in the 0.5–4 GHz range and is confronted with experimental ...

  2. Model-based estimation of finite population total in stratified sampling

    African Journals Online (AJOL)

    The work presented in this paper concerns the estimation of a finite population total under a model-based framework. A nonparametric regression approach as a method of estimating the finite population total is explored. The asymptotic properties of the estimators based on nonparametric regression are also developed under ...
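
    The model-based approach in this record generalizes the classical stratified expansion estimator of a finite population total, T_hat = sum_h N_h * mean(y_h). A minimal sketch of that baseline estimator (illustrative data and function name; not the paper's nonparametric method):

```python
import numpy as np

def stratified_total(samples, stratum_sizes):
    """Classical stratified expansion estimator of a finite population
    total: T_hat = sum_h N_h * mean(y_h).

    samples       -- dict mapping stratum label -> array of sampled y values
    stratum_sizes -- dict mapping stratum label -> population size N_h
    """
    return sum(stratum_sizes[h] * np.mean(y) for h, y in samples.items())

# Two strata: 100 units averaging 5, 50 units averaging 20.
samples = {"A": np.array([4.0, 5.0, 6.0]), "B": np.array([19.0, 21.0])}
sizes = {"A": 100, "B": 50}
print(stratified_total(samples, sizes))  # 100*5 + 50*20 = 1500.0
```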

  3. Modeling light use efficiency in a subtropical mangrove forest equipped with CO2 eddy covariance

    Directory of Open Access Journals (Sweden)

    J. G. Barr

    2013-03-01

    Full Text Available Despite the importance of mangrove ecosystems in the global carbon budget, the relationships between environmental drivers and carbon dynamics in these forests remain poorly understood. This limited understanding is partly a result of the challenges associated with in situ flux studies. Tower-based CO2 eddy covariance (EC) systems are installed in only a few mangrove forests worldwide, and the longest EC record from the Florida Everglades contains less than 9 years of observations. A primary goal of the present study was to develop a methodology to estimate canopy-scale photosynthetic light use efficiency in this forest. These tower-based observations represent a basis for associating CO2 fluxes with canopy light use properties, and thus provide the means for utilizing satellite-based reflectance data for larger scale investigations. We present a model for mangrove canopy light use efficiency utilizing the enhanced vegetation index (EVI) derived from the Moderate Resolution Imaging Spectroradiometer (MODIS) that is capable of predicting changes in mangrove forest CO2 fluxes caused by a hurricane disturbance and changes in regional environmental conditions, including temperature and salinity. Model parameters are solved for in a Bayesian framework. The model structure requires estimates of ecosystem respiration (RE), and we present the first ever tower-based estimates of mangrove forest RE derived from nighttime CO2 fluxes. Our investigation is also the first to show the effects of salinity on mangrove forest CO2 uptake, which declines 5% for each 10 parts per thousand (ppt) increase in salinity. Light use efficiency in this forest declines with increasing daily photosynthetically active radiation, which is an important departure from the assumption of constant light use efficiency typically applied in satellite-driven models. The model developed here provides a framework for estimating CO2 uptake by these forests from reflectance data and
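
    As an illustration of the light-use-efficiency structure described in this record, the sketch below encodes GPP = eps * EVI * PAR with a multiplicative salinity decline of 5% per 10 ppt, as reported above. The VPD scalar form, parameter values, and function name are hypothetical, not the paper's calibrated model:

```python
def gpp_lue(par, evi, vpd, salinity_ppt, eps_max=1.5):
    """Toy light-use-efficiency model of the form GPP = eps * EVI * PAR.

    The efficiency term declines with vapour pressure deficit (a
    hypothetical hyperbolic scalar) and with salinity, using the 5% per
    10 ppt sensitivity reported in the record. Parameter values and
    units are illustrative only.
    """
    f_vpd = 1.0 / (1.0 + vpd)            # hypothetical VPD down-regulation
    f_sal = 1.0 - 0.005 * salinity_ppt   # 5% decline per 10 ppt salinity
    return eps_max * f_vpd * f_sal * evi * par

# Salinity of 10 ppt reduces modelled GPP by 5% relative to fresh water.
print(gpp_lue(par=100.0, evi=0.5, vpd=1.0, salinity_ppt=10.0) /
      gpp_lue(par=100.0, evi=0.5, vpd=1.0, salinity_ppt=0.0))  # ~0.95
```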

  4. Research Article Comparing covariance matrices: random skewers method compared to the common principal components model

    Directory of Open Access Journals (Sweden)

    James M. Cheverud

    2007-03-01

    Full Text Available Comparisons of covariance patterns are becoming more common as interest in the evolution of relationships between traits and in the evolutionary phenotypic diversification of clades has grown. We present parallel analyses of covariance matrix similarity for cranial traits in 14 New World Monkey genera using the Random Skewers (RS), T-statistics, and Common Principal Components (CPC) approaches. We find that the CPC approach is very powerful in that, with adequate sample sizes, it can be used to detect significant differences in matrix structure, even between matrices that are virtually identical in their evolutionary properties, as indicated by the RS results. We suggest that in many instances the assumption that population covariance matrices are identical be rejected out of hand. The more interesting and relevant question is: How similar are two covariance matrices with respect to their predicted evolutionary responses? This issue is addressed by the random skewers method described here.
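
    The random skewers method compared in this record can be sketched as follows: the same random unit-length selection vectors are applied to both covariance (G) matrices through the multivariate breeder's equation (dz = Gb), and the vector correlations of the predicted responses are averaged. An illustrative implementation, not the authors' code:

```python
import numpy as np

def random_skewers(G1, G2, n_vectors=1000, rng=None):
    """Random skewers similarity between two covariance matrices:
    average vector correlation (cosine) of the responses dz = G b to
    shared random unit-length selection vectors b."""
    rng = np.random.default_rng(rng)
    p = G1.shape[0]
    b = rng.normal(size=(n_vectors, p))
    b /= np.linalg.norm(b, axis=1, keepdims=True)   # unit skewers
    r1, r2 = b @ G1.T, b @ G2.T                     # response vectors
    dots = np.sum(r1 * r2, axis=1)
    corr = dots / (np.linalg.norm(r1, axis=1) * np.linalg.norm(r2, axis=1))
    return corr.mean()

G = np.array([[2.0, 0.5], [0.5, 1.0]])
print(random_skewers(G, G, rng=0))          # identical matrices -> 1.0
print(random_skewers(G, np.eye(2), rng=0))  # < 1 for differing structure
```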

  5. Developing and evaluating polygenic risk prediction models for stratified disease prevention.

    Science.gov (United States)

    Chatterjee, Nilanjan; Shi, Jianxin; García-Closas, Montserrat

    2016-07-01

    Knowledge of genetics and its implications for human health is rapidly evolving in accordance with recent events, such as discoveries of large numbers of disease susceptibility loci from genome-wide association studies, the US Supreme Court ruling of the non-patentability of human genes, and the development of a regulatory framework for commercial genetic tests. In anticipation of the increasing relevance of genetic testing for the assessment of disease risks, this Review provides a summary of the methodologies used for building, evaluating and applying risk prediction models that include information from genetic testing and environmental risk factors. Potential applications of models for primary and secondary disease prevention are illustrated through several case studies, and future challenges and opportunities are discussed.

  6. An Econometric Analysis of Modulated Realised Covariance, Regression and Correlation in Noisy Diffusion Models

    DEFF Research Database (Denmark)

    Kinnebrock, Silja; Podolskij, Mark

    This paper introduces a new estimator to measure the ex-post covariation between high-frequency financial time series under market microstructure noise. We provide an asymptotic limit theory (including feasible central limit theorems) for standard methods such as regression and correlation analysis … process can be relaxed and how our method can be applied to non-synchronous observations. We also present an empirical study of how high-frequency correlations, regressions and covariances change through time …

  7. A 3-D Riesz-Covariance Texture Model for Prediction of Nodule Recurrence in Lung CT

    OpenAIRE

    Cirujeda Pol; Dicente Cid Yashin; Müller Henning; Rubin Daniel L.; Aguilera Todd A.; Loo Billy W. Jr.; Diehn Maximilian; Binefa Xavier; Depeursinge Adrien

    2016-01-01

    This paper proposes a novel imaging biomarker of lung cancer relapse from 3-D texture analysis of CT images. Three-dimensional morphological nodular tissue properties are described in terms of 3-D Riesz wavelets. The responses of the latter are aggregated within nodular regions by means of feature covariances, which leverage rich intra- and inter-variations of the feature space dimensions. When compared to the classical use of the average for feature aggregation, feature covariances preserve sp...

  8. Time dependent approach of TeV blazars based on a model of inhomogeneous stratified jet

    International Nuclear Information System (INIS)

    Boutelier, T.

    2009-05-01

    The study of the emission and variability mechanisms of TeV blazars has been the subject of intensive research for years. The homogeneous one-zone model commonly used is puzzling since it yields very high Lorentz factors, in contradiction with other observational evidence. In this work, I describe a new time-dependent multi-zone approach, in the framework of the two-flow model. I compute the emission of a full jet, where relativistic electron-positron pairs distributed in pileups propagate. The evolution and the emission of the plasma are computed taking into account a turbulent heating term, some radiative cooling, and a pair production term due to the photo-annihilation process. Applied to PKS 2155-304, the model allows the reproduction of the full spectra, as well as the simultaneous multi-wavelength variability, with a relatively small Lorentz factor. The variability is explained by the instability of the pair creation process. Nonetheless, the value is still too high to agree with other observational evidence in radio. Hence, I show in the last part of this work how to reconcile high Lorentz factors with the absence of apparent superluminal movement in radio, by taking into account the effect of the opening angle on the appearance of relativistic jets. (author)

  9. Unstructured grid modelling of offshore wind farm impacts on seasonally stratified shelf seas

    Science.gov (United States)

    Cazenave, Pierre William; Torres, Ricardo; Allen, J. Icarus

    2016-06-01

    Shelf seas comprise approximately 7% of the world's oceans and host enormous economic activity. Development of energy installations (e.g. Offshore Wind Farms (OWFs), tidal turbines) in response to increased demand for renewable energy requires a careful analysis of potential impacts. Recent remote sensing observations have identified kilometre-scale impacts from OWFs. Existing modelling evaluating monopile impacts has fallen into two camps: small-scale models with individually resolved turbines looking at local effects; and large-scale analyses but with sub-grid scale turbine parameterisations. This work straddles both scales through a 3D unstructured grid model (FVCOM): wind turbine monopiles in the eastern Irish Sea are explicitly described in the grid whilst the overall grid domain covers the south-western UK shelf. Localised regions of decreased velocity extend up to 250 times the monopile diameter away from the monopile. Shelf-wide, the amplitude of the M2 tidal constituent increases by up to 7%. The turbines enhance localised vertical mixing which decreases seasonal stratification. The spatial extent of this extends well beyond the turbines into the surrounding seas. With significant expansion of OWFs on continental shelves, this work highlights the importance of how OWFs may impact coastal (e.g. increased flooding risk) and offshore (e.g. stratification and nutrient cycling) areas.

  10. Non-stationary covariance function modelling in 2D least-squares collocation

    Science.gov (United States)

    Darbeheshti, N.; Featherstone, W. E.

    2009-06-01

    Standard least-squares collocation (LSC) assumes 2D stationarity and 3D isotropy, and relies on a covariance function to account for spatial dependence in the observed data. However, the assumption that the spatial dependence is constant throughout the region of interest may sometimes be violated. Assuming a stationary covariance structure can result in over-smoothing of, e.g., the gravity field in mountains and under-smoothing in great plains. We introduce the kernel convolution method from spatial statistics for non-stationary covariance structures, and demonstrate its advantage for dealing with non-stationarity in geodetic data. We then compared stationary and non-stationary covariance functions in 2D LSC to the empirical example of gravity anomaly interpolation near the Darling Fault, Western Australia, where the field is anisotropic and non-stationary. The results with non-stationary covariance functions are better than standard LSC in terms of formal errors and cross-validation against data not used in the interpolation, demonstrating that the use of non-stationary covariance functions can improve upon standard (stationary) LSC.
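
    For reference, the standard (stationary) LSC predictor that this record improves upon takes the form s = C_pn C_nn^{-1} y, with a covariance function depending only on separation distance. A minimal 1D sketch with a Gaussian covariance model (illustrative parameters; noise-free observations, so the predictor interpolates the data exactly):

```python
import numpy as np

def lsc_predict(x_obs, y_obs, x_new, c0=1.0, alpha=1.0):
    """Least-squares collocation with a stationary Gaussian covariance
    C(d) = c0 * exp(-(d/alpha)^2): prediction s = C_pn C_nn^{-1} y."""
    d_nn = np.abs(x_obs[:, None] - x_obs[None, :])
    d_pn = np.abs(np.asarray(x_new)[:, None] - x_obs[None, :])
    C_nn = c0 * np.exp(-(d_nn / alpha) ** 2)
    C_pn = c0 * np.exp(-(d_pn / alpha) ** 2)
    return C_pn @ np.linalg.solve(C_nn, y_obs)

x = np.array([0.0, 1.0, 2.5, 4.0])
y = np.sin(x)
print(lsc_predict(x, y, [1.0]))   # reproduces the observation at x = 1
print(lsc_predict(x, y, [1.7]))   # smooth interpolated value
```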

  11. Methods for significance testing of categorical covariates in logistic regression models after multiple imputation: power and applicability analysis

    NARCIS (Netherlands)

    Eekhout, I.; Wiel, M.A. van de; Heymans, M.W.

    2017-01-01

    Background. Multiple imputation is a recommended method to handle missing data. For significance testing after multiple imputation, Rubin’s Rules (RR) are easily applied to pool parameter estimates. In a logistic regression model, to consider whether a categorical covariate with more than two levels

  12. Evaluating effectiveness of down-sampling for stratified designs and unbalanced prevalence in Random Forest models of tree species distributions in Nevada

    Science.gov (United States)

    Elizabeth A. Freeman; Gretchen G. Moisen; Tracy S. Frescino

    2012-01-01

    Random Forests is frequently used to model species distributions over large geographic areas. Complications arise when data used to train the models have been collected in stratified designs that involve different sampling intensity per stratum. The modeling process is further complicated if some of the target species are relatively rare on the landscape leading to an...

  13. A Poisson Log-Normal Model for Constructing Gene Covariation Network Using RNA-seq Data.

    Science.gov (United States)

    Choi, Yoonha; Coram, Marc; Peng, Jie; Tang, Hua

    2017-07-01

    Constructing expression networks using transcriptomic data is an effective approach for studying gene regulation. A popular approach for constructing such a network is based on the Gaussian graphical model (GGM), in which an edge between a pair of genes indicates that the expression levels of these two genes are conditionally dependent, given the expression levels of all other genes. However, GGMs are not appropriate for non-Gaussian data, such as those generated in RNA-seq experiments. We propose a novel statistical framework that maximizes a penalized likelihood, in which the observed count data follow a Poisson log-normal distribution. To overcome the computational challenges, we use Laplace's method to approximate the likelihood and its gradients, and apply the alternating direction method of multipliers to find the penalized maximum likelihood estimates. The proposed method is evaluated and compared with GGMs using both simulated and real RNA-seq data. The proposed method shows improved performance in detecting edges that represent covarying pairs of genes, particularly for edges connecting low-abundance genes and edges around regulatory hubs.
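
    The observation model underlying this framework can be illustrated directly: counts are Poisson given a log-normally distributed rate, which produces the overdispersion (variance exceeding the mean) that makes a plain Gaussian model on RNA-seq counts inappropriate. A sketch of the marginal behaviour only, not the penalized-likelihood network estimator itself (parameter values are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(42)

# Poisson log-normal observation model: the log-rate is Gaussian and the
# count is Poisson given the rate.
mu, sigma = 2.0, 0.8
log_rate = rng.normal(mu, sigma, size=200_000)
counts = rng.poisson(np.exp(log_rate))

m, v = counts.mean(), counts.var()
print(m, v)   # overdispersed: variance well above the mean
# Theory: E[Y] = exp(mu + sigma^2/2),
#         Var[Y] = E[Y] + E[Y]^2 * (exp(sigma^2) - 1) > E[Y].
```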

  14. Spatial-temporal-covariance-based modeling, analysis, and simulation of aero-optics wavefront aberrations.

    Science.gov (United States)

    Vogel, Curtis R; Tyler, Glenn A; Wittich, Donald J

    2014-07-01

    We introduce a framework for modeling, analysis, and simulation of aero-optics wavefront aberrations that is based on spatial-temporal covariance matrices extracted from wavefront sensor measurements. Within this framework, we present a quasi-homogeneous structure function to analyze nonhomogeneous, mildly anisotropic spatial random processes, and we use this structure function to show that phase aberrations arising in aero-optics are, for an important range of operating parameters, locally Kolmogorov. This strongly suggests that the d^(5/3) power law for adaptive optics (AO) deformable mirror fitting error, where d denotes actuator separation, holds for certain important aero-optics scenarios. This framework also allows us to compute bounds on AO servo lag error and predictive control error. In addition, it provides us with the means to accurately simulate AO systems for the mitigation of aero-effects, and it may provide insight into underlying physical processes associated with turbulent flow. The techniques introduced here are demonstrated using data obtained from the Airborne Aero-Optics Laboratory.
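
    The structure-function analysis mentioned in this record can be illustrated with a simple empirical estimator of D(r) = <(phi(x+r) - phi(x))^2> and a log-log slope fit. The sketch below uses a Brownian-motion signal (slope near 1) purely to demonstrate the estimator; a locally Kolmogorov aero-optics phase screen would instead follow the r^(5/3) scaling discussed above:

```python
import numpy as np

def structure_function(phi, lags):
    """Empirical second-order structure function D(r) = <(phi(x+r) - phi(x))^2>."""
    return np.array([np.mean((phi[r:] - phi[:-r]) ** 2) for r in lags])

# Brownian motion has D(r) ~ r^1; a Kolmogorov screen would give r^(5/3).
rng = np.random.default_rng(3)
phi = np.cumsum(rng.normal(size=200_000))
lags = np.array([1, 2, 4, 8, 16, 32, 64])
D = structure_function(phi, lags)
slope = np.polyfit(np.log(lags), np.log(D), 1)[0]
print(slope)   # power-law exponent recovered from the data
```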

  15. A model for warfare in stratified small-scale societies: The effect of within-group inequality

    Science.gov (United States)

    Pandit, Sagar; van Schaik, Carel

    2017-01-01

    In order to predict the features of non-raiding human warfare in small-scale, socially stratified societies, we study a coalitionary model of war that assumes that individuals participate voluntarily because their decisions serve to maximize fitness. Individual males join the coalition if war results in a net economic and thus fitness benefit. Within the model, viable offensive war ensues if the attacking coalition of males can overpower the defending coalition. We assume that the two groups will eventually fuse after a victory, with ranks arranged according to the fighting abilities of all males and that the new group will adopt the winning group’s skew in fitness payoffs. We ask whether asymmetries in skew, group size and the amount of resources controlled by a group affect the likelihood of successful war. The model shows, other things being equal, that (i) egalitarian groups are more likely to defeat their more despotic enemies, even when these are stronger, (ii) defection to enemy groups will be rare, unless the attacked group is far more despotic than the attacking one, and (iii) genocidal war is likely under a variety of conditions, in particular when the group under attack is more egalitarian. This simple optimality model accords with several empirically observed correlations in human warfare. Its success underlines the important role of egalitarianism in warfare. PMID:29228014

  16. A free-surface hydrodynamic model for density-stratified flow in the weakly to strongly non-hydrostatic regime

    International Nuclear Information System (INIS)

    Shen, Colin Y.; Evans, Thomas E.

    2004-01-01

    A non-hydrostatic density-stratified hydrodynamic model with a free surface has been developed from the vorticity equations rather than the usual momentum equations. This approach has enabled the model to be obtained in two different forms, weakly non-hydrostatic and fully non-hydrostatic, with the computationally efficient weakly non-hydrostatic form applicable to motions having horizontal scales greater than the local water depth. The hydrodynamic model in both its weakly and fully non-hydrostatic forms is validated numerically using exact nonlinear non-hydrostatic solutions given by the Dubriel-Jacotin-Long equation for periodic internal gravity waves, internal solitary waves, and flow over a ridge. The numerical code is developed based on a semi-Lagrangian scheme and higher order finite-difference spatial differentiation and interpolation. To demonstrate the applicability of the model to coastal ocean situations, the problem of tidal generation of internal solitary waves at a shelf-break is considered. Simulations carried out with the model obtain the evolution of solitary wave generation and propagation consistent with past results. Moreover, the weakly non-hydrostatic simulation is shown to compare favorably with the fully non-hydrostatic simulation. The capability of the present model to simulate efficiently relatively large scale non-hydrostatic motions suggests that the weakly non-hydrostatic form of the model may be suitable for application in a large-area domain while the computationally intensive fully non-hydrostatic form of the model may be used in an embedded sub-domain where higher resolution is needed

  17. Representation of physiological drought at ecosystem level based on model and eddy covariance measurements

    Science.gov (United States)

    Zhang, Y.; Novick, K. A.; Song, C.; Zhang, Q.; Hwang, T.

    2017-12-01

    Drought and heat waves are expected to increase both in frequency and amplitude, representing a major disturbance to global carbon and water cycles under future climate change. However, how these climate anomalies translate into physiological drought, or ecosystem moisture stress, is still not clear, especially under the co-limitations from soil moisture supply and atmospheric demand for water. In this study, we characterized the ecosystem-level moisture stress in a deciduous forest in the southeastern United States using the Coupled Carbon and Water (CCW) model and in-situ eddy covariance measurements. Physiologically, vapor pressure deficit (VPD) as an atmospheric water demand indicator largely controls the openness of leaf stomata, and regulates atmospheric carbon and water exchanges during periods of hydrological stress. Here, we tested three forms of VPD-related moisture scalars, i.e. exponent (K2), hyperbola (K3), and logarithm (K4), to quantify the sensitivity of light-use efficiency to VPD along different soil moisture conditions. The sensitivity indicators of K values were calibrated based on the framework of CCW using Monte Carlo simulations on the hourly scale, in which VPD and soil water content (SWC) are largely decoupled and the full carbon and water exchange information is retained. We found that the three K values show similar performance in the predictions of ecosystem-level photosynthesis and transpiration after calibration. However, all K values show consistent gradient changes along SWC, indicating that this deciduous forest is less responsive to VPD as soil moisture decreases, a phenomenon of isohydricity in which plants tend to close stomata to keep the leaf water potential constant and reduce the risk of hydraulic failure. Our study suggests that accounting for such isohydric information, or the spectrum of moisture stress along different soil moisture conditions, in models can significantly improve our ability to predict ecosystem responses to future

  18. Comparison between splines and fractional polynomials for multivariable model building with continuous covariates: a simulation study with continuous response.

    Science.gov (United States)

    Binder, Harald; Sauerbrei, Willi; Royston, Patrick

    2013-06-15

    In observational studies, many continuous or categorical covariates may be related to an outcome. Various spline-based procedures or the multivariable fractional polynomial (MFP) procedure can be used to identify important variables and functional forms for continuous covariates. This is the main aim of an explanatory model, as opposed to a model only for prediction. The type of analysis often guides the complexity of the final model. Spline-based procedures and MFP have tuning parameters for choosing the required complexity. To compare model selection approaches, we perform a simulation study in the linear regression context based on a data structure intended to reflect realistic biomedical data. We vary the sample size, variance explained and complexity parameters for model selection. We consider 15 variables. A sample size of 200 (1000) and R² = 0.2 (0.8) is the scenario with the smallest (largest) amount of information. For assessing performance, we consider prediction error, correct and incorrect inclusion of covariates, qualitative measures for judging selected functional forms and further novel criteria. From limited information, a suitable explanatory model cannot be obtained. Prediction performance from all types of models is similar. With a medium amount of information, MFP performs better than splines on several criteria. MFP better recovers simpler functions, whereas splines better recover more complex functions. For a large amount of information and no local structure, MFP and the spline procedures often select similar explanatory models. Copyright © 2012 John Wiley & Sons, Ltd.
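
    The fractional polynomial side of the comparison can be sketched compactly: degree-1 fractional polynomials fit y = b0 + b1*x^p over the conventional power set {-2, -1, -0.5, 0, 0.5, 1, 2, 3} (with p = 0 read as log x) and keep the best-fitting power. An illustrative selection routine on noise-free data; this is not the MFP software, which also handles degree-2 functions and significance-based simplification:

```python
import numpy as np

POWERS = (-2, -1, -0.5, 0, 0.5, 1, 2, 3)   # standard FP1 power set

def fp_transform(x, p):
    return np.log(x) if p == 0 else x ** p

def best_fp1(x, y):
    """Degree-1 fractional polynomial selection: fit y = b0 + b1*x^p by
    least squares for each power in the conventional FP set and return
    the power with the smallest residual sum of squares."""
    best = None
    for p in POWERS:
        X = np.column_stack([np.ones_like(x), fp_transform(x, p)])
        beta, *_ = np.linalg.lstsq(X, y, rcond=None)
        resid = y - X @ beta
        rss = float(resid @ resid)
        if best is None or rss < best[1]:
            best = (p, rss)
    return best[0]

x = np.linspace(0.5, 5.0, 200)
print(best_fp1(x, np.log(x)))   # recovers p = 0 (the log transform)
print(best_fp1(x, 1.0 / x))     # recovers p = -1
```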

  19. Parametric estimation of covariance function in Gaussian-process based Kriging models. Application to uncertainty quantification for computer experiments

    International Nuclear Information System (INIS)

    Bachoc, F.

    2013-01-01

    The parametric estimation of the covariance function of a Gaussian process is studied, in the framework of the Kriging model. Maximum Likelihood and Cross Validation estimators are considered. The correctly specified case, in which the covariance function of the Gaussian process does belong to the parametric set used for estimation, is first studied in an increasing-domain asymptotic framework. The sampling considered is a randomly perturbed multidimensional regular grid. Consistency and asymptotic normality are proved for the two estimators. It is then shown that strong perturbations of the regular grid are always beneficial to Maximum Likelihood estimation. The incorrectly specified case, in which the covariance function of the Gaussian process does not belong to the parametric set used for estimation, is then studied. It is shown that Cross Validation is more robust than Maximum Likelihood in this case. Finally, two applications of the Kriging model with Gaussian processes are carried out on industrial data. For a validation problem of the friction model of the thermal-hydraulic code FLICA 4, where experimental results are available, it is shown that Gaussian process modeling of the FLICA 4 code model error considerably improves its predictions. Finally, for a metamodeling problem of the GERMINAL thermal-mechanical code, the interest of the Kriging model with Gaussian processes, compared to neural network methods, is shown. (author)
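
    The Maximum Likelihood estimator studied in this work can be sketched in one dimension: simulate a Gaussian process with a known squared-exponential length scale, then recover the length scale by maximizing the Gaussian log-likelihood. A minimal grid-search illustration; the covariance family, nugget, and parameter values are for demonstration only:

```python
import numpy as np

def neg_log_lik(ell, x, y, var=1.0, nugget=1e-6):
    """Negative log-likelihood of a zero-mean Gaussian process with
    covariance k(d) = var * exp(-d^2 / (2 ell^2)) plus a small nugget."""
    d = x[:, None] - x[None, :]
    K = var * np.exp(-d**2 / (2 * ell**2)) + nugget * np.eye(len(x))
    L = np.linalg.cholesky(K)
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, y))
    return 0.5 * y @ alpha + np.sum(np.log(np.diag(L)))

# Simulate from a GP with true length scale 1.5 on an irregular design.
rng = np.random.default_rng(1)
x = np.sort(rng.uniform(0.0, 10.0, 80))
d = x[:, None] - x[None, :]
K_true = np.exp(-d**2 / (2 * 1.5**2)) + 1e-6 * np.eye(len(x))
y = np.linalg.cholesky(K_true) @ rng.normal(size=len(x))

# Maximum-likelihood estimate of the length scale by a simple grid search.
grid = np.linspace(0.5, 3.0, 51)
ell_hat = grid[np.argmin([neg_log_lik(ell, x, y) for ell in grid])]
print(ell_hat)
```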

  20. Design of dry sand soil stratified sampler

    Science.gov (United States)

    Li, Erkang; Chen, Wei; Feng, Xiao; Liao, Hongbo; Liang, Xiaodong

    2018-04-01

    This paper presents a design of a stratified sampler for dry sand soil, which can be used for stratified sampling of loose sand under certain conditions. Our group designed the mechanical structure of a portable, single-person, dry sandy soil stratified sampler. We have set up a mathematical model for the sampler. It lays the foundation for further development of design research.

  1. Trust but Verify: a spot check for the new stratified model of upper mantle anisotropy beneath North America

    Science.gov (United States)

    Levin, V. L.; Yuan, H.

    2011-12-01

    A newly developed 3D model of shear wave velocity and anisotropy beneath the North American continent (Yuan et al., 2011) offers a Solomonic solution to the long-standing dispute regarding the provenance of seismic anisotropy, with directional dependency of wave speed placed into both the lithosphere and the asthenosphere. However, due to its continent-wide coverage, the new model has lateral resolution on the scale of 500 km and is expected to average, and thus misrepresent, structure in regions with abrupt lateral changes in properties. The north-eastern US, especially along the coast, presents an example of such a complex region. One of the earliest cases for stratified anisotropy was built on data from this part of North America (Levin et al., 1999), and this is also a region with significant, and enigmatic, lateral changes in isotropic velocity (van der Lee and Nolet, 1997; Nettles and Dziewonski, 2008). A decade after the initial studies of the region were performed, we have vastly more data that facilitate a new look at the seismic anisotropy parameters of the upper mantle beneath this region. We use shear wave splitting observations and anisotropy-aware receiver functions to develop high-quality constraints on the vertical and lateral variation in attributes of anisotropy, which we then compare (and contrast) with structure predicted for this region by the Yuan et al. (2011) model. Our goals are both to test the new model in one place, and to develop a strategy for such testing. Our primary data set comes from one of the longest-operating broad-band stations, HRV (Harvard, MA). Here, P wave receiver functions (PRFs) confirm the presence of features previously associated with the LAB and a mid-lithosphere discontinuity by Rychert et al. (2007). Notably, both features have very significant anisotropic components, with likely orientations of anisotropic symmetry axes being ~130° SE or ~220° SW. Similar symmetry is seen in PRFs constructed for other nearby sites

  2. Evaluation of an unsteady flamelet progress variable model for autoignition and flame development in compositionally stratified mixtures

    Science.gov (United States)

    Mukhopadhyay, Saumyadip; Abraham, John

    2012-07-01

    The unsteady flamelet progress variable (UFPV) model has been proposed by Pitsch and Ihme ["An unsteady/flamelet progress variable method for LES of nonpremixed turbulent combustion," AIAA Paper No. 2005-557, 2005] for modeling the averaged/filtered chemistry source terms in Reynolds-averaged simulations and large eddy simulations of reacting non-premixed combustion. In the UFPV model, a look-up table of source terms is generated as a function of mixture fraction Z, scalar dissipation rate χ, and progress variable C by solving the unsteady flamelet equations. The assumption is that the unsteady flamelet represents the evolution of the reacting mixing layer in the non-premixed flame. We assess the accuracy of the model in predicting autoignition and flame development in compositionally stratified n-heptane/air mixtures using direct numerical simulations (DNS). The focus in this work is primarily on the assessment of the accuracy of the probability density functions (PDFs) employed for obtaining averaged source terms. The performance of commonly employed presumed functions, such as the Dirac delta distribution function, the β distribution function, and the statistically most likely distribution (SMLD) approach, in approximating the shapes of the PDFs of the reactive and the conserved scalars is evaluated. For unimodal distributions, it is observed that functions that need two-moment information, e.g., the β distribution function and the SMLD approach with two-moment closure, are able to reasonably approximate the actual PDF. As the distribution becomes multimodal, higher-moment information is required. Differences are observed between the ignition trends obtained from DNS and those predicted by the look-up table, especially for smaller gradients where the flamelet assumption becomes less applicable. The formulation assumes that the shape of the χ(Z) profile can be modeled by an error function which remains unchanged in the presence of heat release. We show that this
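
    The two-moment presumed β-PDF evaluated in this record has a closed-form closure: for a conserved scalar on [0, 1] with mean m and variance v (v < m(1 - m)), the β parameters are a = m*g and b = (1 - m)*g with g = m(1 - m)/v - 1. A small sketch verifying that the resulting β distribution reproduces the target moments:

```python
def beta_params(mean, var):
    """Two-moment closure for the presumed beta PDF of a conserved
    scalar on [0, 1]; requires var < mean * (1 - mean)."""
    g = mean * (1 - mean) / var - 1.0
    return mean * g, (1 - mean) * g

a, b = beta_params(0.3, 0.05)
print(a, b)
# Check: the beta(a, b) distribution reproduces the target moments.
print(a / (a + b))                           # mean   -> 0.3
print(a * b / ((a + b) ** 2 * (a + b + 1)))  # variance -> 0.05
```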

  3. Threat Object Detection using Covariance Matrix Modeling in X-ray Images

    International Nuclear Information System (INIS)

    Jeon, Byoun Gil; Kim, Jong Yul; Moon, Myung Kook

    2016-01-01

    The X-ray imaging system for aviation security is one such application. In airports, all passengers and properties should be inspected and accepted by security machines before boarding aircraft, to exclude all threat factors. These threat factors might be directly connected to terrorist threats that are extremely hazardous not only to passengers but also to people in highly populated areas such as major cities or buildings. Because the performance of the system increases along with the growth of IT technology, information of various types and good quality can be provided for security checks. However, the inspections still depend mainly on human factors. This means that human inspectors should remain proficient as the technology grows, for efficient and effective inspection, but there is a clear limit to proficiency: a human being is not a computer. Because of this limitation, aviation security techniques tend to provide not only abundant, high-quality information but also effective assistance for security inspectors. Many image processing applications have already been developed to provide efficient assistance for security systems. Naturally, the security check procedure should not be altered by automatic software, because it is not guaranteed that an automatic system will never make a mistake. This paper addresses an application of threat object detection using covariance matrix modeling. The algorithm is implemented in the MATLAB environment and its performance is evaluated by comparison with other detection algorithms. Considering that the shape of an object in an image changes with its attitude relative to the imaging machine, the implemented detector is robust to rotation and scale of an object
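
    The covariance-matrix modeling idea can be sketched with a region covariance descriptor in the style of Tuzel et al.: per-pixel features are summarized by their covariance matrix, and regions are compared with an affine-invariant distance on generalized eigenvalues. This is an illustrative reconstruction of the general technique, not the paper's MATLAB implementation:

```python
import numpy as np

def region_covariance(patch):
    """Covariance descriptor of an image patch: per-pixel features
    [x, y, intensity, |dI/dx|, |dI/dy|] summarized by their 5x5
    covariance matrix."""
    h, w = patch.shape
    ys, xs = np.mgrid[0:h, 0:w]
    gy, gx = np.gradient(patch.astype(float))   # gradients along rows, cols
    F = np.stack([xs, ys, patch, np.abs(gx), np.abs(gy)], axis=-1)
    return np.cov(F.reshape(-1, 5), rowvar=False)

def cov_distance(C1, C2):
    """Affine-invariant distance: sqrt(sum of log^2 of the generalized
    eigenvalues of (C1, C2)); zero iff the descriptors coincide."""
    lam = np.real(np.linalg.eigvals(np.linalg.solve(C1, C2)))
    return np.sqrt(np.sum(np.log(lam) ** 2))

rng = np.random.default_rng(0)
p1 = rng.random((16, 16))
p2 = rng.random((16, 16))
C1, C2 = region_covariance(p1), region_covariance(p2)
print(cov_distance(C1, C1))   # ~0 for identical regions
print(cov_distance(C1, C2))   # > 0 for differing regions
```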

  4. Threat Object Detection using Covariance Matrix Modeling in X-ray Images

    Energy Technology Data Exchange (ETDEWEB)

    Jeon, Byoun Gil; Kim, Jong Yul; Moon, Myung Kook [KAERI, Daejeon (Korea, Republic of)

    2016-05-15

    The X-ray imaging system for aviation security is one such application. In airports, all passengers and properties should be inspected and accepted by security machines before boarding aircraft, to exclude all threat factors. These threat factors might be directly connected to terrorist threats that are extremely hazardous not only to passengers but also to people in highly populated areas such as major cities or buildings. Because the performance of the system increases along with the growth of IT technology, information of various types and good quality can be provided for security checks. However, the inspections still depend mainly on human factors. This means that human inspectors should remain proficient as the technology grows, for efficient and effective inspection, but there is a clear limit to proficiency: a human being is not a computer. Because of this limitation, aviation security techniques tend to provide not only abundant, high-quality information but also effective assistance for security inspectors. Many image processing applications have already been developed to provide efficient assistance for security systems. Naturally, the security check procedure should not be altered by automatic software, because it is not guaranteed that an automatic system will never make a mistake. This paper addresses an application of threat object detection using covariance matrix modeling. The algorithm is implemented in the MATLAB environment and its performance is evaluated by comparison with other detection algorithms. Considering that the shape of an object in an image changes with its attitude relative to the imaging machine, the implemented detector is robust to rotation and scale of an object.

  5. Background stratified Poisson regression analysis of cohort data.

    Science.gov (United States)

    Richardson, David B; Langholz, Bryan

    2012-03-01

    Background stratified Poisson regression is an approach that has been used in the analysis of data derived from a variety of epidemiologically important studies of radiation-exposed populations, including uranium miners, nuclear industry workers, and atomic bomb survivors. We describe a novel approach to fit Poisson regression models that adjust for a set of covariates through background stratification while directly estimating the radiation-disease association of primary interest. The approach makes use of an expression for the Poisson likelihood that treats the coefficients for stratum-specific indicator variables as 'nuisance' variables and avoids the need to explicitly estimate the coefficients for these stratum-specific parameters. Log-linear models, as well as other general relative rate models, are accommodated. This approach is illustrated using data from the Life Span Study of Japanese atomic bomb survivors and data from a study of underground uranium miners. The point estimate and confidence interval obtained from this 'conditional' regression approach are identical to the values obtained using unconditional Poisson regression with model terms for each background stratum. Moreover, it is shown that the proposed approach allows estimation of background stratified Poisson regression models of non-standard form, such as models that parameterize latency effects, as well as regression models in which the number of strata is large, thereby overcoming the limitations of previously available statistical software for fitting background stratified Poisson regression models.
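
    A minimal sketch of the likelihood device described above: within each background stratum the intercept can be profiled out of the Poisson likelihood, leaving a multinomial-style term, so the exposure coefficient can be maximized directly without ever estimating the stratum-specific parameters. The log-linear rate form and all names here are illustrative assumptions, not the authors' software.

```python
import numpy as np

def profile_loglik(beta, dose, pyr, cases, stratum):
    """Background-stratified Poisson profile log-likelihood (up to a constant).

    Stratum intercepts are treated as nuisance parameters and profiled out:
    within each stratum, expected counts are proportional to
    pyr * exp(beta * dose), so each stratum contributes a multinomial-style
    term in the cell counts and the intercepts never need to be estimated.
    """
    ll = 0.0
    for s in np.unique(stratum):
        m = stratum == s
        mu = pyr[m] * np.exp(beta * dose[m])
        ll += float(np.sum(cases[m] * np.log(mu / mu.sum())))
    return ll
```

    Maximizing this one-dimensional function (by grid search or Newton steps) recovers the same dose-response estimate as unconditional Poisson regression with an indicator term per stratum.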

  6. Background stratified Poisson regression analysis of cohort data

    International Nuclear Information System (INIS)

    Richardson, David B.; Langholz, Bryan

    2012-01-01

    Background stratified Poisson regression is an approach that has been used in the analysis of data derived from a variety of epidemiologically important studies of radiation-exposed populations, including uranium miners, nuclear industry workers, and atomic bomb survivors. We describe a novel approach to fit Poisson regression models that adjust for a set of covariates through background stratification while directly estimating the radiation-disease association of primary interest. The approach makes use of an expression for the Poisson likelihood that treats the coefficients for stratum-specific indicator variables as 'nuisance' variables and avoids the need to explicitly estimate the coefficients for these stratum-specific parameters. Log-linear models, as well as other general relative rate models, are accommodated. This approach is illustrated using data from the Life Span Study of Japanese atomic bomb survivors and data from a study of underground uranium miners. The point estimate and confidence interval obtained from this 'conditional' regression approach are identical to the values obtained using unconditional Poisson regression with model terms for each background stratum. Moreover, it is shown that the proposed approach allows estimation of background stratified Poisson regression models of non-standard form, such as models that parameterize latency effects, as well as regression models in which the number of strata is large, thereby overcoming the limitations of previously available statistical software for fitting background stratified Poisson regression models. (orig.)

  7. Evaluating measurement models in clinical research: covariance structure analysis of latent variable models of self-conception.

    Science.gov (United States)

    Hoyle, R H

    1991-02-01

    Indirect measures of psychological constructs are vital to clinical research. On occasion, however, the meaning of indirect measures of psychological constructs is obfuscated by statistical procedures that do not account for the complex relations between items and latent variables and among latent variables. Covariance structure analysis (CSA) is a statistical procedure for testing hypotheses about the relations among items that indirectly measure a psychological construct and relations among psychological constructs. This article introduces clinical researchers to the strengths and limitations of CSA as a statistical procedure for conceiving and testing structural hypotheses that are not tested adequately with other statistical procedures. The article is organized around two empirical examples that illustrate the use of CSA for evaluating measurement models with correlated error terms, higher-order factors, and measured and latent variables.

  8. Three Cs in Measurement Models: Causal Indicators, Composite Indicators, and Covariates

    OpenAIRE

    Bollen, Kenneth A.; Bauldry, Shawn

    2011-01-01

    In the last two decades attention to causal (and formative) indicators has grown. Accompanying this growth has been the belief that we can classify indicators into two categories, effect (reflective) indicators and causal (formative) indicators. This paper argues that the dichotomous view is too simple. Instead, there are effect indicators and three types of variables on which a latent variable depends: causal indicators, composite (formative) indicators, and covariates (the “three Cs”). Caus...

  9. Semiparametric approach for non-monotone missing covariates in a parametric regression model

    KAUST Repository

    Sinha, Samiran

    2014-02-26

    Missing covariate data often arise in biomedical studies, and analysis of such data that ignores subjects with incomplete information may lead to inefficient and possibly biased estimates. A great deal of attention has been paid to handling a single missing covariate or a monotone pattern of missing data when the missingness mechanism is missing at random. In this article, we propose a semiparametric method for handling non-monotone patterns of missing data. The proposed method relies on the assumption that the missingness mechanism of a variable does not depend on the missing variable itself but may depend on the other missing variables. This mechanism is somewhat less general than the completely non-ignorable mechanism but is sometimes more flexible than the missing at random mechanism where the missingness mechanism is allowed to depend only on the completely observed variables. The proposed approach is robust to misspecification of the distribution of the missing covariates, and the proposed mechanism helps to nullify (or reduce) the problems due to non-identifiability that result from the non-ignorable missingness mechanism. The asymptotic properties of the proposed estimator are derived. Finite sample performance is assessed through simulation studies. Finally, for the purpose of illustration we analyze an endometrial cancer dataset and a hip fracture dataset.

  10. The impact of covariance misspecification in multivariate Gaussian mixtures on estimation and inference: an application to longitudinal modeling.

    Science.gov (United States)

    Heggeseth, Brianna C; Jewell, Nicholas P

    2013-07-20

    Multivariate Gaussian mixtures are a class of models that provide a flexible parametric approach for the representation of heterogeneous multivariate outcomes. When the outcome is a vector of repeated measurements taken on the same subject, there is often inherent dependence between observations. However, a common covariance assumption is conditional independence; that is, given the mixture component label, the outcomes for subjects are independent. In this paper, we study, through asymptotic bias calculations and simulation, the impact of covariance misspecification in multivariate Gaussian mixtures. Although maximum likelihood estimators of regression and mixing probability parameters are not consistent under misspecification, they have little asymptotic bias when mixture components are well separated or if the assumed correlation is close to the truth even when the covariance is misspecified. We also present a robust standard error estimator and show that it outperforms conventional estimators in simulations and can indicate that the model is misspecified. Body mass index data from a national longitudinal study are used to demonstrate the effects of misspecification on potential inferences made in practice. Copyright © 2013 John Wiley & Sons, Ltd.
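
    To make the conditional-independence working assumption concrete, the toy EM below fits a two-component Gaussian mixture with diagonal covariances to data drawn from correlated components; with well-separated components the mixing proportions and means are still recovered well, echoing the paper's asymptotic-bias message. This is an illustrative sketch, not the authors' code.

```python
import numpy as np

def em_diag_gmm(X, k=2, iters=100):
    """EM for a Gaussian mixture with diagonal ('conditionally independent')
    covariances, applied even when the data-generating components are
    correlated (the misspecification studied above)."""
    n, d = X.shape
    mu = np.quantile(X, np.linspace(0.1, 0.9, k), axis=0)  # deterministic init
    var = np.tile(X.var(axis=0), (k, 1))
    pi = np.full(k, 1.0 / k)
    for _ in range(iters):
        # E-step: responsibilities under the diagonal-covariance model
        logp = (-0.5 * (((X[:, None, :] - mu) ** 2 / var).sum(-1)
                        + np.log(var).sum(-1) + d * np.log(2 * np.pi))
                + np.log(pi))
        logp -= logp.max(axis=1, keepdims=True)
        r = np.exp(logp)
        r /= r.sum(axis=1, keepdims=True)
        # M-step: weighted moments per component
        nk = r.sum(axis=0)
        pi = nk / n
        mu = (r.T @ X) / nk[:, None]
        var = (r.T @ X ** 2) / nk[:, None] - mu ** 2 + 1e-9
    return pi, mu, var
```
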

  11. Improvement of Modeling HTGR Neutron Physics by Uncertainty Analysis with the Use of Cross-Section Covariance Information

    Science.gov (United States)

    Boyarinov, V. F.; Grol, A. V.; Fomichenko, P. A.; Ternovykh, M. Yu.

    2017-01-01

    This work is aimed at improvement of HTGR neutron physics design calculations by application of uncertainty analysis with the use of cross-section covariance information. Methodology and codes for preparation of multigroup libraries of covariance information for individual isotopes from the basic 44-group library of SCALE-6 code system were developed. A 69-group library of covariance information in a special format for main isotopes and elements typical for high temperature gas cooled reactors (HTGR) was generated. This library can be used for estimation of uncertainties, associated with nuclear data, in analysis of HTGR neutron physics with design codes. As an example, calculations of one-group cross-section uncertainties for fission and capture reactions for main isotopes of the MHTGR-350 benchmark, as well as uncertainties of the multiplication factor (k∞) for the MHTGR-350 fuel compact cell model and fuel block model were performed. These uncertainties were estimated by the developed technology with the use of WIMS-D code and modules of SCALE-6 code system, namely, by TSUNAMI, KENO-VI and SAMS. Eight most important reactions on isotopes for MHTGR-350 benchmark were identified, namely: 10B(capt), 238U(n,γ), ν5, 235U(n,γ), 238U(el), natC(el), 235U(fiss)-235U(n,γ), 235U(fiss).
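
    Uncertainty estimates of the kind described above rest on the standard first-order "sandwich rule" combining a response's sensitivity profile with a cross-section covariance matrix. A minimal sketch (the function name and the two-group numbers are illustrative, not taken from the benchmark):

```python
import numpy as np

def response_uncertainty(sensitivity, covariance):
    """First-order 'sandwich rule': the relative variance of a response
    (e.g. k-inf) is S^T C S, where S holds relative sensitivities of the
    response to each group-wise cross section and C is the relative
    covariance matrix of those cross sections."""
    S = np.asarray(sensitivity, float)
    C = np.asarray(covariance, float)
    return float(S @ C @ S)
```

    Tools in the SCALE system (TSUNAMI/SAMS) compute `S` by perturbation theory and supply `C` from the processed covariance library; here both are simply passed in.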

  12. An evolutionary-network model reveals stratified interactions in the V3 loop of the HIV-1 envelope.

    Directory of Open Access Journals (Sweden)

    Art F Y Poon

    2007-11-01

    Full Text Available The third variable loop (V3) of the human immunodeficiency virus type 1 (HIV-1) envelope is a principal determinant of antibody neutralization and progression to AIDS. Although it is undoubtedly an important target for vaccine research, extensive genetic variation in V3 remains an obstacle to the development of an effective vaccine. Comparative methods that exploit the abundance of sequence data can detect interactions between residues of rapidly evolving proteins such as the HIV-1 envelope, revealing biological constraints on their variability. However, previous studies have relied implicitly on two biologically unrealistic assumptions: (1) that founder effects in the evolutionary history of the sequences can be ignored; and (2) that statistical associations between residues occur exclusively in pairs. We show that comparative methods that neglect the evolutionary history of extant sequences are susceptible to a high rate of false positives (20%-40%). Therefore, we propose a new method to detect interactions that relaxes both of these assumptions. First, we reconstruct the evolutionary history of extant sequences by maximum likelihood, shifting focus from extant sequence variation to the underlying substitution events. Second, we analyze the joint distribution of substitution events among positions in the sequence as a Bayesian graphical model, in which each branch in the phylogeny is a unit of observation. We perform extensive validation of our models using both simulations and a control case of known interactions in HIV-1 protease, and apply this method to detect interactions within V3 from a sample of 1,154 HIV-1 envelope sequences. Our method greatly reduces the number of false positives due to founder effects, while capturing several higher-order interactions among V3 residues. By mapping these interactions to a structural model of the V3 loop, we find that the loop is stratified into distinct evolutionary clusters. We extend our model to

  13. Earth Observing System Covariance Realism

    Science.gov (United States)

    Zaidi, Waqar H.; Hejduk, Matthew D.

    2016-01-01

    The purpose of covariance realism is to properly size a primary object's covariance in order to add validity to the calculation of the probability of collision. The covariance realism technique in this paper consists of three parts: collection/calculation of definitive state estimates through orbit determination, calculation of covariance realism test statistics at each covariance propagation point, and proper assessment of those test statistics. An empirical cumulative distribution function (ECDF) Goodness-of-Fit (GOF) method is employed to determine if a covariance is properly sized by comparing the empirical distribution of Mahalanobis distance calculations to the hypothesized parent 3-DoF chi-squared distribution. To realistically size a covariance for collision probability calculations, this study uses a state noise compensation algorithm that adds process noise to the definitive epoch covariance to account for uncertainty in the force model. Process noise is added until the GOF tests pass a group significance level threshold. The results of this study indicate that when outliers attributed to persistently high or extreme levels of solar activity are removed, the aforementioned covariance realism compensation method produces a tuned covariance with up to 80 to 90% of the covariance propagation timespan passing the GOF tests (against a 60% minimum passing threshold), a quite satisfactory and useful result.
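
    The core test described above can be sketched directly: if the covariance is realistic, squared Mahalanobis distances of the propagated position errors follow a 3-DoF chi-squared law, and a Kolmogorov-style distance between the ECDF and that law flags departures. The closed-form 3-DoF CDF and the pass/fail thresholds below are illustrative assumptions, not the paper's exact test battery.

```python
import numpy as np
from math import erf, sqrt, exp, pi

def chi2_3_cdf(x):
    """CDF of the chi-squared distribution with 3 degrees of freedom:
    F(x) = erf(sqrt(x/2)) - sqrt(2/pi) * sqrt(x) * exp(-x/2)."""
    return np.array([erf(sqrt(v / 2.0)) - sqrt(2.0 / pi) * sqrt(v) * exp(-v / 2.0)
                     for v in np.asarray(x, float)])

def covariance_gof(errors, cov):
    """ECDF goodness-of-fit statistic for covariance realism: the supremum
    distance between the empirical distribution of squared Mahalanobis
    distances and the hypothesized 3-DoF chi-squared parent."""
    P = np.linalg.inv(cov)
    d2 = np.einsum('ij,jk,ik->i', errors, P, errors)  # squared Mahalanobis
    d2.sort()
    n = len(d2)
    ecdf = np.arange(1, n + 1) / n
    return float(np.max(np.abs(ecdf - chi2_3_cdf(d2))))
```

    An undersized covariance inflates every Mahalanobis distance, so the statistic grows sharply; process noise would be added until it drops below the chosen significance threshold.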

  14. Bayesian structural equations model for multilevel data with missing responses and missing covariates

    CSIR Research Space (South Africa)

    Kim, S

    2008-03-01

    Full Text Available used in Table 4 are as follows: βk (direct effect); βTk (total effect); and βsbk (superbeta). There are some interesting findings from the results presented in Table 4. For outcome variable Customer satisfaction, the superbeta measure was strongest... corresponding 95% HPD interval contains 0. This suggests that ignoring the heterogeneity and/or covariates gives different conclusions based on the total-effect measure. Also from Table 4, we see that for outcome variable Customer satisfaction, all the 3...

  15. Analysis of fMRI data using noise-diffusion network models: a new covariance-coding perspective.

    Science.gov (United States)

    Gilson, Matthieu

    2018-04-01

    Since the middle of the 1990s, studies of resting-state fMRI/BOLD data have explored the correlation patterns of activity across the whole brain, which is referred to as functional connectivity (FC). Among the many methods that have been developed to interpret FC, a recently proposed model-based approach describes the propagation of fluctuating BOLD activity within the recurrently connected brain network by inferring the effective connectivity (EC). In this model, EC quantifies the strengths of directional interactions between brain regions, viewed from the proxy of BOLD activity. In addition, the tuning procedure for the model provides estimates for the local variability (input variances) to explain how the observed FC is generated. Generalizing, the network dynamics can be studied in the context of an input-output mapping (determined by EC) for the second-order statistics of fluctuating nodal activities. The present paper focuses on the following detection paradigm: observing output covariances, how discriminative is the (estimated) network model with respect to various input covariance patterns? An application with the model fitted to experimental fMRI data (movie viewing versus resting state) illustrates that changes in local variability and changes in brain coordination go hand in hand.
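
    For a linear noise-diffusion network of the kind described, the model's output covariance (the FC proxy) is the stationary solution of a Lyapunov equation in the effective connectivity and the input covariance. A minimal sketch, with the vectorization-based solver an assumed implementation detail rather than the paper's method:

```python
import numpy as np

def stationary_covariance(J, sigma):
    """Stationary covariance ('model FC') of the linear noise-diffusion
    network dx = J x dt + dW with input covariance sigma: solves the
    Lyapunov equation J Q + Q J^T + sigma = 0 by Kronecker vectorization.
    J must be stable (all eigenvalues with negative real part)."""
    n = J.shape[0]
    A = np.kron(J, np.eye(n)) + np.kron(np.eye(n), J)
    return (-np.linalg.solve(A, sigma.reshape(-1))).reshape(n, n)
```

    This input-output view makes the detection question concrete: different input covariance patterns `sigma` map through the same `J` to different, and possibly discriminable, output covariances.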

  16. Evaluation of Global Photosynthesis and BVOC Emission Covariance with Climate in NASA ModelE2-Y

    Science.gov (United States)

    Unger, N.

    2012-12-01

    -dependent fluxes across a broad range of different ecosystem types. In tropical ecosystems, the model simulates the campaign-average diurnal cycle with remarkable fidelity (root-mean-square error = 0.20 mgC/m2/hr; normalized mean bias = -5%). The model underpredicts in broadleaf deciduous ecosystems in the United States and Europe. We probe the GPP and BVOC emission covariance with climate in tropical, temperate and boreal ecosystems, and the GPP-HCHO correlation using fire-free HCHO columns from OMI and SCIAMACHY 2005-2008.

  17. Covariation in Natural Causal Induction.

    Science.gov (United States)

    Cheng, Patricia W.; Novick, Laura R.

    1991-01-01

    Biases and models usually offered by cognitive and social psychology and by philosophy to explain causal induction are evaluated with respect to focal sets (contextually determined sets of events over which covariation is computed). A probabilistic contrast model is proposed as underlying covariation computation in natural causal induction. (SLD)
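
    The probabilistic contrast model can be stated in a few lines: over a focal set of events, the contrast is ΔP = P(effect | cause present) − P(effect | cause absent). A toy sketch (the (cause, effect) pair representation is an assumption for illustration):

```python
def probabilistic_contrast(events):
    """Probabilistic contrast dP = P(effect | cause) - P(effect | no cause),
    computed over a focal set of events given as (cause, effect) 0/1 pairs."""
    with_cause = [e for c, e in events if c]
    without_cause = [e for c, e in events if not c]
    p_with = sum(with_cause) / len(with_cause)
    p_without = sum(without_cause) / len(without_cause)
    return p_with - p_without
```

    Restricting `events` to different focal sets can reverse or erase the contrast, which is the paper's point about contextually determined covariation.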

  18. A global bioheat model with self-tuning optimal regulation of body temperature using Hebbian feedback covariance learning.

    Science.gov (United States)

    Ong, M L; Ng, E Y K

    2005-12-01

    In the lower brain, body temperature is continually being regulated almost flawlessly despite huge fluctuations in ambient and physiological conditions that constantly threaten the well-being of the body. The underlying control problem defining thermal homeostasis is one of great enormity: Many systems and sub-systems are involved in temperature regulation and physiological processes are intrinsically complex and intertwined. Thus the defining control system has to take into account the complications of nonlinearities, system uncertainties, delayed feedback loops as well as internal and external disturbances. In this paper, we propose a self-tuning adaptive thermal controller based upon Hebbian feedback covariance learning where the system is to be regulated continually to best suit its environment. This hypothesis is supported in part by postulations of the presence of adaptive optimization behavior in biological systems of certain organisms which face limited resources vital for survival. We demonstrate the use of Hebbian feedback covariance learning as a possible self-adaptive controller in body temperature regulation. The model postulates an important role of Hebbian covariance adaptation as a means of reinforcement learning in the thermal controller. The passive system is based on a simplified 2-node core and shell representation of the body, where global responses are captured. Model predictions are consistent with observed thermoregulatory responses to conditions of exercise and rest, and heat and cold stress. An important implication of the model is that optimal physiological behaviors arising from self-tuning adaptive regulation in the thermal controller may be responsible for the departure from homeostasis in abnormal states, e.g., fever. This was previously unexplained using the conventional "set-point" control theory.
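
    The covariance form of the Hebbian rule invoked above updates a weight by the product of pre- and post-synaptic deviations from their mean activity, so correlated fluctuations reinforce a pathway and anti-correlated ones weaken it. A minimal scalar sketch (the learning rate and function name are assumptions, not the paper's controller):

```python
def hebbian_covariance_update(w, x, y, x_bar, y_bar, lr=0.01):
    """Covariance form of the Hebbian rule: the weight grows when pre-
    (x) and post-synaptic (y) activity fluctuate together about their
    running means (x_bar, y_bar), and shrinks when they fluctuate in
    opposition."""
    return w + lr * (x - x_bar) * (y - y_bar)
```
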

  19. Filtering remotely sensed chlorophyll concentrations in the Red Sea using a space-time covariance model and a Kalman filter

    KAUST Repository

    Dreano, Denis

    2015-04-27

    A statistical model is proposed to filter satellite-derived chlorophyll concentration from the Red Sea, and to predict future chlorophyll concentrations. The seasonal trend is first estimated after filling missing chlorophyll data using an Empirical Orthogonal Function (EOF)-based algorithm (Data Interpolation EOF). The anomalies are then modeled as a stationary Gaussian process. A method proposed by Gneiting (2002) is used to construct positive-definite space-time covariance models for this process. After choosing an appropriate statistical model and identifying its parameters, Kriging is applied in the space-time domain to make a one step ahead prediction of the anomalies. The latter serves as the prediction model of a reduced-order Kalman filter, which is applied to assimilate and predict future chlorophyll concentrations. The proposed method decreases the root mean square (RMS) prediction error by about 11% compared with the seasonal average.
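
    Gneiting's (2002) construction yields positive-definite, nonseparable space-time covariance functions. The sketch below implements one commonly quoted member of the class; the particular parameterization (and the default parameter values) are assumptions for illustration, not the model fitted in the paper:

```python
import numpy as np

def gneiting_cov(h, u, sigma2=1.0, a=1.0, c=1.0,
                 alpha=1.0, gamma=1.0, beta=1.0, tau=1.0):
    """One member of Gneiting's (2002) nonseparable space-time class:
        C(h, u) = sigma2 * psi**(-tau) * exp(-c * |h|**(2*gamma) / psi**(beta*gamma)),
        psi     = a * |u|**(2*alpha) + 1,
    where h is spatial lag and u temporal lag; beta in [0, 1] controls the
    strength of space-time interaction."""
    psi = a * np.abs(u) ** (2 * alpha) + 1.0
    return sigma2 * psi ** (-tau) * np.exp(-c * np.abs(h) ** (2 * gamma)
                                           / psi ** (beta * gamma))
```

    Kriging the anomalies then amounts to evaluating this covariance between all observation and prediction points in the space-time domain.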

  20. Filtering remotely sensed chlorophyll concentrations in the Red Sea using a space-time covariance model and a Kalman filter

    KAUST Repository

    Dreano, Denis; Mallick, Bani; Hoteit, Ibrahim

    2015-01-01

    A statistical model is proposed to filter satellite-derived chlorophyll concentration from the Red Sea, and to predict future chlorophyll concentrations. The seasonal trend is first estimated after filling missing chlorophyll data using an Empirical Orthogonal Function (EOF)-based algorithm (Data Interpolation EOF). The anomalies are then modeled as a stationary Gaussian process. A method proposed by Gneiting (2002) is used to construct positive-definite space-time covariance models for this process. After choosing an appropriate statistical model and identifying its parameters, Kriging is applied in the space-time domain to make a one step ahead prediction of the anomalies. The latter serves as the prediction model of a reduced-order Kalman filter, which is applied to assimilate and predict future chlorophyll concentrations. The proposed method decreases the root mean square (RMS) prediction error by about 11% compared with the seasonal average.

  1. Bootstrapping integrated covariance matrix estimators in noisy jump-diffusion models with non-synchronous trading

    DEFF Research Database (Denmark)

    Hounyo, Ulrich

    to a general class of estimators of integrated covolatility. We then show the first-order asymptotic validity of this method in the multivariate context with a potential presence of jumps, dependent microstructure noise, irregularly spaced and non-synchronous data. Due to our focus on non...... covariance estimator. As an application of our results, we also consider the bootstrap for regression coefficients. We show that the wild blocks of blocks bootstrap, appropriately centered, is able to mimic both the dependence and heterogeneity of the scores, thus justifying the construction of bootstrap percentile...... intervals as well as variance estimates in this context. This contrasts with the traditional pairs bootstrap, which is not able to mimic the score heterogeneity even in the simple case where no microstructure noise is present. Our Monte Carlo simulations show that the wild blocks of blocks bootstrap improves...

  2. Covariance Bell inequalities

    Science.gov (United States)

    Pozsgay, Victor; Hirsch, Flavien; Branciard, Cyril; Brunner, Nicolas

    2017-12-01

    We introduce Bell inequalities based on covariance, one of the most common measures of correlation. Explicit examples are discussed, and violations in quantum theory are demonstrated. A crucial feature of these covariance Bell inequalities is their nonlinearity; this has nontrivial consequences for the derivation of their local bound, which is not reached by deterministic local correlations. For our simplest inequality, we derive analytically tight bounds for both local and quantum correlations. An interesting application of covariance Bell inequalities is that they can act as "shared randomness witnesses": specifically, the value of the Bell expression gives device-independent lower bounds on both the dimension and the entropy of the shared random variable in a local model.

  3. One-stage individual participant data meta-analysis models: estimation of treatment-covariate interactions must avoid ecological bias by separating out within-trial and across-trial information.

    Science.gov (United States)

    Hua, Hairui; Burke, Danielle L; Crowther, Michael J; Ensor, Joie; Tudur Smith, Catrin; Riley, Richard D

    2017-02-28

    Stratified medicine utilizes individual-level covariates that are associated with a differential treatment effect, also known as treatment-covariate interactions. When multiple trials are available, meta-analysis is used to help detect true treatment-covariate interactions by combining their data. Meta-regression of trial-level information is prone to low power and ecological bias, and therefore, individual participant data (IPD) meta-analyses are preferable to examine interactions utilizing individual-level information. However, one-stage IPD models are often wrongly specified, such that interactions are based on amalgamating within- and across-trial information. We compare, through simulations and an applied example, fixed-effect and random-effects models for a one-stage IPD meta-analysis of time-to-event data where the goal is to estimate a treatment-covariate interaction. We show that it is crucial to centre patient-level covariates by their mean value in each trial, in order to separate out within-trial and across-trial information. Otherwise, bias and coverage of interaction estimates may be adversely affected, leading to potentially erroneous conclusions driven by ecological bias. We revisit an IPD meta-analysis of five epilepsy trials and examine age as a treatment effect modifier. The interaction is -0.011 (95% CI: -0.019 to -0.003; p = 0.004), and thus highly significant, when amalgamating within-trial and across-trial information. However, when separating within-trial from across-trial information, the interaction is -0.007 (95% CI: -0.019 to 0.005; p = 0.22), and thus its magnitude and statistical significance are greatly reduced. We recommend that meta-analysts should only use within-trial information to examine individual predictors of treatment effect and that one-stage IPD models should separate within-trial from across-trial information to avoid ecological bias. © 2016 The Authors. Statistics in Medicine published by John Wiley & Sons Ltd
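
    The paper's central recommendation, centering each patient-level covariate by its trial-specific mean so that a one-stage model's treatment-covariate interaction draws only on within-trial information, is easy to sketch (function and variable names are illustrative):

```python
import numpy as np

def center_within_trial(x, trial):
    """Center a patient-level covariate by its mean within each trial.

    Forming the interaction term as treatment * centered covariate makes the
    interaction estimate use only within-trial information, avoiding the
    ecological bias that arises when within- and across-trial information
    are amalgamated."""
    xc = np.asarray(x, float).copy()
    trial = np.asarray(trial)
    for t in np.unique(trial):
        m = trial == t
        xc[m] -= xc[m].mean()
    return xc
```

    The trial means themselves can still enter the model as a separate trial-level term, so the across-trial association is estimated, and reported, separately.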

  4. Modelling carbon and water exchange of a grazed pasture in New Zealand constrained by eddy covariance measurements.

    Science.gov (United States)

    Kirschbaum, Miko U F; Rutledge, Susanna; Kuijper, Isoude A; Mudge, Paul L; Puche, Nicolas; Wall, Aaron M; Roach, Chris G; Schipper, Louis A; Campbell, David I

    2015-04-15

    We used two years of eddy covariance (EC) measurements collected over an intensively grazed dairy pasture to better understand the key drivers of changes in soil organic carbon stocks. Analysing grazing systems with EC measurements poses significant challenges as the respiration from grazing animals can result in large short-term CO2 fluxes. As paddocks are grazed only periodically, EC observations derive from a mosaic of paddocks with very different exchange rates. This violates the assumptions implicit in the use of EC methodology. To test whether these challenges could be overcome, and to develop a tool for wider scenario testing, we compared EC measurements with simulation runs with the detailed ecosystem model CenW 4.1. Simulations were run separately for 26 paddocks around the EC tower and coupled to a footprint analysis to estimate net fluxes at the EC tower. Overall, we obtained good agreement between modelled and measured fluxes, especially for the comparison of evapotranspiration rates, with model efficiency of 0.96 for weekly averaged values of the validation data. For net ecosystem productivity (NEP) comparisons, observations were omitted when cattle grazed the paddocks immediately around the tower. With those points omitted, model efficiencies for weekly averaged values of the validation data were 0.78, 0.67 and 0.54 for daytime, night-time and 24-hour NEP, respectively. While not included for model parameterisation, simulated gross primary production also agreed closely with values inferred from eddy covariance measurements (model efficiency of 0.84 for weekly averages). The study confirmed that CenW simulations could adequately model carbon and water exchange in grazed pastures. It highlighted the critical role of animal respiration for net CO2 fluxes, and showed that EC studies of grazed pastures need to consider the best approach of accounting for this important flux to avoid unbalanced accounting. Copyright © 2015. Published by Elsevier B.V.

  5. Stratifying Parkinson's Patients With STN-DBS Into High-Frequency or 60 Hz-Frequency Modulation Using a Computational Model.

    Science.gov (United States)

    Khojandi, Anahita; Shylo, Oleg; Mannini, Lucia; Kopell, Brian H; Ramdhani, Ritesh A

    2017-07-01

    High frequency stimulation (HFS) of the subthalamic nucleus (STN) is a well-established therapy for Parkinson's disease (PD), particularly the cardinal motor symptoms and levodopa induced motor complications. Recent studies have suggested the possible role of 60 Hz stimulation in STN-deep brain stimulation (DBS) for patients with gait disorder. The objective of this study was to develop a computational model, which stratifies patients a priori based on symptomatology into different frequency settings (i.e., high frequency or 60 Hz). We retrospectively analyzed preoperative MDS-Unified Parkinson's Disease Rating Scale III scores (32 indicators) collected from 20 PD patients implanted with STN-DBS at Mount Sinai Medical Center on either 60 Hz stimulation (ten patients) or HFS (130-185 Hz) (ten patients) for an average of 12 months. Predictive models using the Random Forest classification algorithm were built to associate patient/disease characteristics at surgery to the stimulation frequency. These models were evaluated objectively using a leave-one-out cross-validation approach. The resulting computational models stratified patients into 60 Hz or HFS (130-185 Hz) groups with 95% accuracy. The best models relied on two or three predictors out of the 32 analyzed for classification. Across all predictors, gait and rest tremor of the right hand were consistently the most important. Computational models were developed using preoperative clinical indicators in PD patients treated with STN-DBS. These models were able to accurately stratify PD patients into 60 Hz stimulation or HFS (130-185 Hz) groups a priori, offering a unique potential to enhance the utilization of this therapy based on clinical subtypes. © 2017 International Neuromodulation Society.
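
    The evaluation protocol above (leave-one-out cross-validation of a classifier mapping preoperative indicators to a stimulation-frequency group) can be sketched as follows; a 1-nearest-neighbour rule stands in for the paper's Random Forest purely for illustration, and all names are assumptions:

```python
import numpy as np

def loocv_accuracy(X, y, predict):
    """Leave-one-out cross-validation: train on all patients but one,
    predict the held-out patient's group, and average over all patients."""
    n = len(y)
    hits = 0
    for i in range(n):
        m = np.arange(n) != i
        hits += int(predict(X[m], y[m], X[i]) == y[i])
    return hits / n

def nn1_predict(Xtr, ytr, x):
    """1-nearest-neighbour stand-in for the paper's Random Forest classifier."""
    return ytr[np.argmin(((Xtr - x) ** 2).sum(axis=1))]
```
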

  6. Modelling carbon fluxes of forest and grassland ecosystems in Western Europe using the CARAIB dynamic vegetation model: evaluation against eddy covariance data.

    Science.gov (United States)

    Henrot, Alexandra-Jane; François, Louis; Dury, Marie; Hambuckers, Alain; Jacquemin, Ingrid; Minet, Julien; Tychon, Bernard; Heinesch, Bernard; Horemans, Joanna; Deckmyn, Gaby

    2015-04-01

    Eddy covariance measurements are an essential resource to understand how ecosystem carbon fluxes react in response to climate change, and to help to evaluate and validate the performance of land surface and vegetation models at regional and global scale. In the framework of the MASC project (« Modelling and Assessing Surface Change impacts on Belgian and Western European climate »), vegetation dynamics and carbon fluxes of forest and grassland ecosystems simulated by the CARAIB dynamic vegetation model (Dury et al., iForest - Biogeosciences and Forestry, 4:82-99, 2011) are evaluated and validated by comparison of the model predictions with eddy covariance data. Here carbon fluxes (e.g. net ecosystem exchange (NEE), gross primary productivity (GPP), and ecosystem respiration (RECO)) and evapotranspiration (ET) simulated with the CARAIB model are compared with the fluxes measured at several eddy covariance flux tower sites in Belgium and Western Europe, chosen from the FLUXNET global network (http://fluxnet.ornl.gov/). CARAIB is forced either with surface atmospheric variables derived from the global CRU climatology, or with in situ meteorological data. Several tree (e.g. Pinus sylvestris, Fagus sylvatica, Picea abies) and grass species (e.g. Poaceae, Asteraceae) are simulated, depending on the species encountered on the studied sites. The aim of our work is to assess the model's ability to reproduce the daily, seasonal and interannual variability of carbon fluxes and the carbon dynamics of forest and grassland ecosystems in Belgium and Western Europe.

  7. Order-Constrained Reference Priors with Implications for Bayesian Isotonic Regression, Analysis of Covariance and Spatial Models

    Science.gov (United States)

    Gong, Maozhen

    Selecting an appropriate prior distribution is a fundamental issue in Bayesian Statistics. In this dissertation, under the framework provided by Berger and Bernardo, I derive the reference priors for several models which include: Analysis of Variance (ANOVA)/Analysis of Covariance (ANCOVA) models with a categorical variable under common ordering constraints, the conditionally autoregressive (CAR) models and the simultaneous autoregressive (SAR) models with a spatial autoregression parameter rho considered. The performances of reference priors for ANOVA/ANCOVA models are evaluated by simulation studies with comparisons to Jeffreys' prior and Least Squares Estimation (LSE). The priors are then illustrated in a Bayesian model of the "Risk of Type 2 Diabetes in New Mexico" data, where the relationship between the type 2 diabetes risk (through Hemoglobin A1c) and different smoking levels is investigated. In both simulation studies and real data set modeling, the reference priors that incorporate internal order information show good performances and can be used as default priors. The reference priors for the CAR and SAR models are also illustrated in the "1999 SAT State Average Verbal Scores" data with a comparison to a Uniform prior distribution. Due to the complexity of the reference priors for both CAR and SAR models, only a portion (12 states in the Midwest) of the original data set is considered. The reference priors can give a different marginal posterior distribution compared to a Uniform prior, which provides an alternative for prior specifications for areal data in Spatial statistics.

  8. Generalized Linear Covariance Analysis

    Science.gov (United States)

    Carpenter, James R.; Markley, F. Landis

    2014-01-01

    This talk presents a comprehensive approach to filter modeling for generalized covariance analysis of both batch least-squares and sequential estimators. We review and extend in two directions the results of prior work that allowed for partitioning of the state space into "solve-for" and "consider" parameters, accounted for differences between the formal values and the true values of the measurement noise, process noise, and a priori solve-for and consider covariances, and explicitly partitioned the errors into subspaces containing only the influence of the measurement noise, process noise, and solve-for and consider covariances. In this work, we explicitly add sensitivity analysis to this prior work, and relax an implicit assumption that the batch estimator's epoch time occurs prior to the definitive span. We also apply the method to an integrated orbit and attitude problem, in which gyro and accelerometer errors, though not estimated, influence the orbit determination performance. We illustrate our results using two graphical presentations, which we call the "variance sandpile" and the "sensitivity mosaic," and we compare the linear covariance results to confidence intervals associated with ensemble statistics from a Monte Carlo analysis.
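
The solve-for/consider partitioning described above can be illustrated with the textbook form of consider-covariance analysis; the matrices below are randomly generated stand-ins, not the paper's filter models.

```python
import numpy as np

rng = np.random.default_rng(1)
m, n_x, n_c = 30, 3, 2           # measurements, solve-for, consider params
Hx = rng.normal(size=(m, n_x))   # partials w.r.t. solve-for parameters
Hc = rng.normal(size=(m, n_c))   # partials w.r.t. consider parameters
W = np.eye(m)                    # measurement weights (inverse noise covariance)
Ccc = 0.1 * np.eye(n_c)          # a priori covariance of consider parameters

# Noise-only covariance of the solve-for estimate
P_noise = np.linalg.inv(Hx.T @ W @ Hx)
# Sensitivity of the estimate to the unestimated consider parameters
S = -P_noise @ Hx.T @ W @ Hc
# Consider covariance: noise term plus the consider-parameter contribution
P_total = P_noise + S @ Ccc @ S.T
```

The consider term `S @ Ccc @ S.T` is positive semi-definite, so accounting for uncertain-but-unestimated parameters can only inflate the reported covariance, never shrink it.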

  9. Forecasting Multivariate Volatility using the VARFIMA Model on Realized Covariance Cholesky Factors

    DEFF Research Database (Denmark)

    Halbleib, Roxana; Voev, Valeri

    2011-01-01

    This paper analyzes the forecast accuracy of the multivariate realized volatility model introduced by Chiriac and Voev (2010), subject to different degrees of model parametrization and economic evaluation criteria. By modelling the Cholesky factors of the covariance matrices, the model generates …, regardless of the type of utility function or return distribution, would be better off from using this model than from using some standard approaches.
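
The appeal of forecasting Cholesky factors rather than covariance elements directly is that any factor forecast maps back to a valid covariance matrix. A minimal numpy sketch, in which the shrinkage "forecast" is a hypothetical stand-in for the paper's VARFIMA dynamics:

```python
import numpy as np

rng = np.random.default_rng(2)
R = rng.normal(size=(40, 3))
cov = R.T @ R / 40                   # a realized covariance matrix

L = np.linalg.cholesky(cov)          # lower-triangular Cholesky factor
# Forecast the elements of L (here: hypothetical shrinkage toward the
# identity, standing in for the VARFIMA forecast of the factor series) ...
L_fc = 0.9 * L + 0.1 * np.eye(3)
# ... and rebuild the covariance forecast; L_fc @ L_fc.T is positive
# definite by construction, whatever the element-wise forecasts were.
cov_fc = L_fc @ L_fc.T
eigvals = np.linalg.eigvalsh(cov_fc)
```

Forecasting the covariance elements directly would require constraints to keep the matrix positive definite; the factor parameterization makes the constraint automatic.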

  10. Covariance-based synaptic plasticity in an attractor network model accounts for fast adaptation in free operant learning.

    Science.gov (United States)

    Neiman, Tal; Loewenstein, Yonatan

    2013-01-23

    In free operant experiments, subjects alternate at will between targets that yield rewards stochastically. Behavior in these experiments is typically characterized by (1) an exponential distribution of stay durations, (2) matching of the relative time spent at a target to its relative share of the total number of rewards, and (3) adaptation after a change in the reward rates that can be very fast. The neural mechanism underlying these regularities is largely unknown. Moreover, current decision-making neural network models typically aim at explaining behavior in discrete-time experiments in which a single decision is made once in every trial, making these models hard to extend to the more natural case of free operant decisions. Here we show that a model based on attractor dynamics, in which transitions are induced by noise and preference is formed via covariance-based synaptic plasticity, can account for the characteristics of behavior in free operant experiments. We compare a specific instance of such a model, in which two recurrently excited populations of neurons compete for higher activity, to the behavior of rats responding on two levers for rewarding brain stimulation on a concurrent variable interval reward schedule (Gallistel et al., 2001). We show that the model is consistent with the rats' behavior, and in particular, with the observed fast adaptation to matching behavior. Further, we show that the neural model can be reduced to a behavioral model, and we use this model to deduce a novel "conservation law," which is consistent with the behavior of the rats.
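
The covariance-based plasticity rule at the heart of the model above can be sketched in a few lines: each target's weight grows in proportion to the covariance between its activity and the reward. The reward probabilities and learning rates below are hypothetical, and the full attractor network is not reproduced.

```python
import numpy as np

rng = np.random.default_rng(3)
eta = 0.05
w = np.zeros(2)                      # synaptic weights for two targets
r_bar, R_bar = np.zeros(2), 0.0      # running means of activity and reward

for t in range(2000):
    choice = rng.integers(2)
    activity = np.eye(2)[choice]     # 1 for the active population, else 0
    # Target 0 pays reward with probability 0.8, target 1 with 0.2
    reward = float(rng.random() < (0.8 if choice == 0 else 0.2))
    # Covariance rule: weight change tracks cov(activity, reward)
    w += eta * (activity - r_bar) * (reward - R_bar)
    r_bar += 0.01 * (activity - r_bar)   # slow estimates of the means
    R_bar += 0.01 * (reward - R_bar)
```

Because the update is driven by deviations from the running means, the weight of the richer target climbs while the poorer target's weight stays near or below zero, biasing future transitions toward the rewarded state.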

  11. Multilevel Regression Models for Mean and (Co)variance: with Applications in Nursing Research

    OpenAIRE

    Li, Bayoue

    2014-01-01

    In this chapter, a concise overview is provided for the statistical techniques that are applied in this thesis. This includes two classes of statistical modeling approaches which have been commonly applied in many research areas for many decades. Namely, we will describe the fundamental ideas about mixed effects models and factor analytic (FA) models. To be specific, this chapter covers several types of these two classes of modeling approaches. For the mixed ...

  12. RADIAL STABILITY IN STRATIFIED STARS

    International Nuclear Information System (INIS)

    Pereira, Jonas P.; Rueda, Jorge A.

    2015-01-01

    We formulate, within a generalized distributional approach, the treatment of the stability against radial perturbations for both neutral and charged stratified stars in Newtonian and Einstein's gravity. We obtain from this approach the boundary conditions connecting any two phases within a star and underline its relevance for realistic models of compact stars with phase transitions, owing to the modification of the star's set of eigenmodes with respect to the continuous case.

  13. Branching fractions of semileptonic D and D{sub s} decays from the covariant light-front quark model

    Energy Technology Data Exchange (ETDEWEB)

    Cheng, Hai-Yang; Kang, Xian-Wei [Academia Sinica, Institute of Physics, Taipei (China)

    2017-09-15

    Based on the predictions of the relevant form factors from the covariant light-front quark model, we show the branching fractions for the D(D_s) → (P, S, V, A) l ν_l (l = e or μ) decays, where P denotes the pseudoscalar meson, S the scalar meson with a mass above 1 GeV, V the vector meson and A the axial-vector one. Comparisons with the available experimental results are made, and we find excellent agreement. The predictions for other decay modes can be tested in a charm factory, e.g., the BESIII detector. The future measurements will definitely further enrich our knowledge of the hadronic transition form factors as well as the inner structure of the even-parity mesons (S and A). (orig.)

  14. Evaluation of protocol change in burn-care management using the Cox proportional hazards model with time-dependent covariates.

    Science.gov (United States)

    Ichida, J M; Wassell, J T; Keller, M D; Ayers, L W

    1993-02-01

    Survival analysis methods are valuable for detecting intervention effects because detailed information from patient records and sensitive outcome measures are used. The burn unit at a large university hospital replaced routine bathing with total body bathing using chlorhexidine gluconate for antimicrobial effect. A Cox proportional hazards model was used to analyse time from admission until either infection with Staphylococcus aureus or discharge for 155 patients, controlling for burn severity and two time-dependent covariates: days until first wound excision and days until first administration of prophylactic antibiotics. The risk of infection was 55 per cent higher in the historical control group, although not statistically significant. There was also some indication that early wound excision may be important as an infection-control measure for burn patients.
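
Time-dependent covariates such as "days until first wound excision" are typically encoded for a Cox model in counting-process (start, stop] format, one row per interval over which the covariates are constant. A small sketch with a hypothetical patient record (the field names and dates are illustrative, not those of the study):

```python
# Hypothetical patient: admitted day 0, first wound excision on day 4,
# prophylactic antibiotics started on day 7, infected on day 10.
# Each row covers (start, stop] with covariate values constant over it.
def expand_episodes(pid, event_day, infected, change_days):
    """Split follow-up into episodes at each covariate-change day."""
    cuts = sorted({0, *[d for d in change_days.values() if d < event_day],
                   event_day})
    rows = []
    for start, stop in zip(cuts, cuts[1:]):
        rows.append({
            "id": pid, "start": start, "stop": stop,
            "excised": int(change_days["excision"] <= start),
            "antibiotics": int(change_days["antibiotics"] <= start),
            # The event flag is 1 only on the episode where infection occurs
            "infection": int(infected and stop == event_day),
        })
    return rows

episodes = expand_episodes(1, 10, True,
                           {"excision": 4, "antibiotics": 7})
```

A Cox fitter that accepts start/stop data can then estimate hazard ratios (such as the 55% excess risk quoted above) while letting each covariate switch on at the day it actually occurred.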

  15. Bias Correction in the Dynamic Panel Data Model with a Nonscalar Disturbance Covariance Matrix

    NARCIS (Netherlands)

    Bun, M.J.G.

    2003-01-01

    Approximation formulae are developed for the bias of ordinary and generalized Least Squares Dummy Variable (LSDV) estimators in dynamic panel data models. Results from Kiviet [Kiviet, J. F. (1995), on bias, inconsistency, and efficiency of various estimators in dynamic panel data models, J.

  16. Uncertainty in eddy covariance measurements and its application to physiological models

    Science.gov (United States)

    D.Y. Hollinger; A.D. Richardson; A.D. Richardson

    2005-01-01

    Flux data are noisy, and this uncertainty is largely due to random measurement error. Knowledge of uncertainty is essential for the statistical evaluation of modeled and measured fluxes, for comparison of parameters derived by fitting models to measured fluxes and in formal data-assimilation efforts. We used the difference between simultaneous measurements from two...

  17. A Second-Order Conditionally Linear Mixed Effects Model with Observed and Latent Variable Covariates

    Science.gov (United States)

    Harring, Jeffrey R.; Kohli, Nidhi; Silverman, Rebecca D.; Speece, Deborah L.

    2012-01-01

    A conditionally linear mixed effects model is an appropriate framework for investigating nonlinear change in a continuous latent variable that is repeatedly measured over time. The efficacy of the model is that it allows parameters that enter the specified nonlinear time-response function to be stochastic, whereas those parameters that enter in a…

  18. Bias correction in the dynamic panel data model with a nonscalar disturbance covariance matrix

    NARCIS (Netherlands)

    Bun, M.J.G.

    2001-01-01

    Approximation formulae are developed for the bias of ordinary and generalized Least Squares Dummy Variable (LSDV) estimators in dynamic panel data models. Results from Kiviet (1995, 1999) are extended to higher-order dynamic panel data models with general covariance structure. The focus is on estimation

  19. A Two-Stage Approach to Synthesizing Covariance Matrices in Meta-Analytic Structural Equation Modeling

    Science.gov (United States)

    Cheung, Mike W. L.; Chan, Wai

    2009-01-01

    Structural equation modeling (SEM) is widely used as a statistical framework to test complex models in behavioral and social sciences. When the number of publications increases, there is a need to systematically synthesize them. Methodology of synthesizing findings in the context of SEM is known as meta-analytic SEM (MASEM). Although correlation…

  20. Utilizing a Coupled Nonlinear Schrödinger Model to Solve the Linear Modal Problem for Stratified Flows

    Science.gov (United States)

    Liu, Tianyang; Chan, Hiu Ning; Grimshaw, Roger; Chow, Kwok Wing

    2017-11-01

    The spatial structure of small disturbances in stratified flows without background shear, governed by the "Taylor-Goldstein equation", is studied by employing the Boussinesq approximation (variation in density ignored except in the buoyancy). Analytical solutions are derived for special wavenumbers when the Brunt-Väisälä frequency is quadratic in hyperbolic secant, by comparison with coupled systems of nonlinear Schrödinger equations intensively studied in the literature. Cases of coupled Schrödinger equations with four, five and six components are utilized as concrete examples. Dispersion curves for arbitrary wavenumbers are obtained numerically. The computations of the group velocity, second harmonic, induced mean flow, and the second derivative of the angular frequency can all be facilitated by these exact linear eigenfunctions of the Taylor-Goldstein equation in terms of hyperbolic functions, leading to a cubic Schrödinger equation for the evolution of a wavepacket. The occurrence of internal rogue waves can be predicted if the dispersion and cubic nonlinearity terms of the Schrödinger equations are of the same sign. Partial financial support has been provided by the Research Grants Council contract HKU 17200815.
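
For arbitrary wavenumbers, the shear-free modal problem reduces to a matrix eigenvalue problem after discretisation. A finite-difference sketch (constant stratification is used here only so the result can be checked against the exact answer; the hyperbolic-secant profiles of the paper would simply change `N2`):

```python
import numpy as np

# Internal-wave modes without shear: w'' + (N(z)**2 / c**2 - k**2) w = 0,
# with w = 0 at z = 0 and z = H.  Rearranged as an eigenproblem for 1/c**2:
#   diag(-1/N**2) @ (D2 - k**2 I) w = (1/c**2) w
H, k, npts = 1.0, 1.0, 200
h = H / (npts + 1)
N2 = np.ones(npts)                  # constant stratification for the check

# Second-derivative matrix with Dirichlet boundary conditions
D2 = (np.diag(np.full(npts - 1, 1.0), -1)
      - 2.0 * np.eye(npts)
      + np.diag(np.full(npts - 1, 1.0), 1)) / h**2
A = np.diag(-1.0 / N2) @ (D2 - k**2 * np.eye(npts))
lam = np.sort(np.linalg.eigvals(A).real)
c_modes = 1.0 / np.sqrt(lam[lam > 0])
c1 = c_modes.max()                  # gravest mode: largest phase speed
# For constant N, the exact gravest speed is N / sqrt((pi/H)**2 + k**2)
```

Sweeping `k` and recording the mode speeds traces out dispersion curves of the kind computed numerically in the paper.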

  1. A cautionary note on the use of information fit indexes in covariance structure modeling with means

    NARCIS (Netherlands)

    Wicherts, J.M.; Dolan, C.V.

    2004-01-01

    Information fit indexes such as Akaike Information Criterion, Consistent Akaike Information Criterion, Bayesian Information Criterion, and the expected cross validation index can be valuable in assessing the relative fit of structural equation models that differ regarding restrictiveness. In cases
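
For reference, the two most common information fit indexes reduce to one-line formulas; the log-likelihoods below are made-up numbers chosen only to show the trade-off between fit and parameter count (lower values indicate better relative fit):

```python
import math

def aic(log_lik, n_params):
    # Akaike Information Criterion: penalty of 2 per free parameter
    return 2 * n_params - 2 * log_lik

def bic(log_lik, n_params, n_obs):
    # Bayesian Information Criterion: penalty grows with sample size
    return n_params * math.log(n_obs) - 2 * log_lik

# A restricted model and a less restrictive one fit to the same data
aic_restricted = aic(log_lik=-520.0, n_params=8)
aic_free = aic(log_lik=-518.5, n_params=11)
```

Here the three extra parameters buy only 1.5 units of log-likelihood, so the restricted model attains the lower AIC; with BIC and a nontrivial sample size the restricted model is favoured even more strongly.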

  2. Covariant two-particle wave functions for model quasipotentials admitting exact solutions

    International Nuclear Information System (INIS)

    Kapshaj, V.N.; Skachkov, N.B.

    1983-01-01

    Two formulations of quasipotential equations in the relativistic configurational representation are considered for the wave function of the internal motion of the bound system of two relativistic particles. Exact solutions of these equations are found for some model quasipotentials

  3. Covariant two-particle wave functions for model quasipotential allowing exact solutions

    International Nuclear Information System (INIS)

    Kapshaj, V.N.; Skachkov, N.B.

    1982-01-01

    Two formulations of quasipotential equations in the relativistic configurational representation are considered for the wave function of relative motion of a bound state of two relativistic particles. Exact solutions of these equations are found for some model quasipotentials

  4. How to deal with the high condition number of the noise covariance matrix of gravity field functionals synthesised from a satellite-only global gravity field model?

    Science.gov (United States)

    Klees, R.; Slobbe, D. C.; Farahani, H. H.

    2018-03-01

    The posed question arises for instance in regional gravity field modelling using weighted least-squares techniques if the gravity field functionals are synthesised from the spherical harmonic coefficients of a satellite-only global gravity model (GGM), and are used as one of the noisy datasets. The associated noise covariance matrix appeared to be extremely ill-conditioned, with a singular value spectrum that decayed gradually to zero without any noticeable gap. We analysed three methods to deal with the ill-conditioned noise covariance matrix: Tikhonov regularisation of the noise covariance matrix in combination with the standard formula for the weighted least-squares estimator, a formula of the weighted least-squares estimator which does not involve the inverse noise covariance matrix, and an estimator based on Rao's unified theory of least-squares. Our analysis was based on a numerical experiment involving a set of height anomalies synthesised from the GGM GOCO05s, which is provided with a full noise covariance matrix. We showed that the three estimators perform similarly, provided that the two regularisation parameters involved in each method were chosen properly. As standard regularisation parameter choice rules do not apply here, we suggested a new parameter choice rule, and demonstrated its performance. Using this rule, we found that the differences between the three least-squares estimates were within noise. For the standard formulation of the weighted least-squares estimator with regularised noise covariance matrix, this required an exceptionally strong regularisation, much larger than one would expect from the condition number of the noise covariance matrix. The preferred method is the inversion-free formulation of the weighted least-squares estimator, because of its simplicity with respect to the choice of the two regularisation parameters.

  5. Improved Monte Carlo modelling of multi-energy γ-ray penetration through thick stratified shielding slabs

    International Nuclear Information System (INIS)

    Bakos, G.C.

    2001-01-01

    This paper deals with the application of the Monte Carlo method to the calculation of the dose build-up factor of mixed 1.37 and 2.75 MeV γ-rays penetrating through stratified shielding slabs. Six double-layer shielding slabs, namely 12 Al+Fe, 12 Al+Pb, 6 Fe+Al, 6 Fe+Pb, 4 Pb+Al, 4 Pb+Fe, were examined. Furthermore, experimental and theoretical results are also presented. The experimental results were taken from the experimental facility installed at the Universities Research Reactor Centre (Risley, UK). An activated Na2SO3 solution provided a uniform Na-24 disc source of γ-rays at both energies (1.37 and 2.75 MeV) with equal intensity. The theoretical results were calculated using the Bowman and Trubey formula. This formula takes into account an exponentially decaying function of the shield thickness (in mfp) to the end point of the multi-layer slab. The experimental and theoretical results were used to evaluate the simulation results produced by a Monte Carlo program (the DUTMONCA code), which was developed at Democritus University of Thrace (Xanthi, Greece). The DUTMONCA code was written in Pascal and run on an Intel PIII-800 microprocessor. The developed code (an improved version of an existing Monte Carlo program) is able to produce good results for thick shielding slabs, overcoming the problems encountered in the older version. The simulation results are compared with experimental and theoretical results. Good agreement can be observed, even for thick-layer shielding slabs, although there are some wayward experimental values due to sources of error associated with the experimental procedure.

  6. Temperature Covariance in Tree Ring Reconstructions and Model Simulations Over the Past Millennium

    Czech Academy of Sciences Publication Activity Database

    Hartl-Meier, C. T. M.; Büntgen, Ulf; Smerdon, J. E.; Zorita, E.; Krusic, P. J.; Ljungqvist, F. C.; Schneider, L.; Esper, J.

    2017-01-01

    Roč. 44, č. 18 (2017), s. 9458-9469 ISSN 0094-8276 R&D Projects: GA MŠk(CZ) LO1415 Institutional support: RVO:68378076 Keywords : last millennium * northern-hemisphere * summer temperatures * american southwest * volcanic-eruptions * tibetan plateau * sierra-nevada * system model * central-asia * climate * paleoclimate * spatial temperature synchrony * millennial scale * radiative forcing * proxy model comparison Subject RIV: EH - Ecology, Behaviour OBOR OECD: Environmental sciences (social aspects to be 5.7) Impact factor: 4.253, year: 2016

  7. Full-Scale Approximations of Spatio-Temporal Covariance Models for Large Datasets

    KAUST Repository

    Zhang, Bohai; Sang, Huiyan; Huang, Jianhua Z.

    2014-01-01

    of dataset and application of such models is not feasible for large datasets. This article extends the full-scale approximation (FSA) approach by Sang and Huang (2012) to the spatio-temporal context to reduce computational complexity. A reversible jump Markov

  8. Seasonal variation of photosynthetic model parameters and leaf area index from global Fluxnet eddy covariance data

    NARCIS (Netherlands)

    Groenendijk, M.; Dolman, A.J.; Ammann, C.; Arneth, A.; Cescatti, A.; Molen, van der M.K.; Moors, E.J.

    2011-01-01

    Global vegetation models require the photosynthetic parameters, maximum carboxylation capacity (Vcm) and quantum yield (α), to parameterize their plant functional types (PFTs). The purpose of this work is to determine how much the scaling of the parameters from leaf to ecosystem level through a

  9. Multilevel Regression Models for Mean and (Co)variance: with Applications in Nursing Research

    NARCIS (Netherlands)

    B. Li (Bayoue)

    2014-01-01

    In this chapter, a concise overview is provided for the statistical techniques that are applied in this thesis. This includes two classes of statistical modeling approaches which have been commonly applied in many research areas for many decades. Namely, we

  10. A Covariance Structure Model Test of Antecedents of Adolescent Alcohol Misuse and a Prevention Effort.

    Science.gov (United States)

    Dielman, T. E.; And Others

    1989-01-01

    Questionnaires were administered to 4,157 junior high school students to determine levels of alcohol misuse, exposure to peer use and misuse of alcohol, susceptibility to peer pressure, internal health locus of control, and self-esteem. Conceptual model of antecendents of adolescent alcohol misuse and effectiveness of a prevention effort was…

  11. ADVANCES IN RENEWAL DECISION-MAKING UTILISING THE PROPORTIONAL HAZARDS MODEL WITH VIBRATION COVARIATES

    Directory of Open Access Journals (Sweden)

    Pieter-Jan Vlok

    2012-01-01

    Full Text Available

    ENGLISH ABSTRACT: Increased competitiveness in the production world necessitates improved maintenance strategies to increase availabilities and drive down cost. The maintenance engineer is thus faced with the need to make more intelligent preventive renewal decisions. Two of the main techniques to achieve this are Condition Monitoring (such as vibration monitoring and oil analysis) and Statistical Failure Analysis (typically using probabilistic techniques). The present paper discusses these techniques, their uses and weaknesses, and then presents the Proportional Hazards Model as a solution to most of these weaknesses. It then goes on to compare the results of the different techniques in monetary terms, using a South African case study. This comparison shows clearly that the Proportional Hazards Model is superior to the present techniques and should be the preferred model for many actual maintenance situations.

    AFRIKAANS ABSTRACT (translated): Increased levels of competition in the production environment necessitate improved maintenance strategies to increase equipment availability and minimise cost. Maintenance engineers must consequently make more intelligent preventive renewal decisions. Two prominent techniques to achieve this goal are Condition Monitoring (such as vibration monitoring or oil analysis) and Statistical Failure Analysis (usually by means of probabilistic methods). In this article we consider both of these techniques, their uses and shortcomings, and then propose the Proportional Hazards Model as a solution to most of the shortcomings. The article also compares the different techniques in monetary terms using a South African case study. This comparison clearly shows that the Proportional Hazards Model holds greater promise than the current techniques and that it should be the preferred solution in many actual maintenance situations.

  12. Keratinocytes propagated in serum-free, feeder-free culture conditions fail to form stratified epidermis in a reconstituted skin model.

    Directory of Open Access Journals (Sweden)

    Rebecca Lamb

    Full Text Available

    Primary human epidermal stem cells isolated from skin tissues and subsequently expanded in tissue culture are used therapeutically to reconstitute skin on patients and to generate artificial skin in culture for academic and commercial research. Classically, epidermal cells, known as keratinocytes, required fibroblast feeder support and serum-containing media for serial propagation. In alignment with global efforts to remove potential animal contaminants, many serum-free, feeder-free culture methods have been developed that support derivation and growth of these cells in 2-dimensional culture. Here we show that keratinocytes grown continually in serum-free and feeder-free conditions were unable to form a stratified, mature epidermis in a skin equivalent model. This is not due to loss of cell potential, as keratinocytes propagated in serum-free, feeder-free conditions retain their ability to form stratified epidermis when re-introduced to classic serum-containing media. Extracellular calcium supplementation failed to improve epidermis development. In contrast, the addition of serum to commercial growth media developed for serum-free expansion of keratinocytes facilitated 3-dimensional stratification in our skin equivalent model. Moreover, the addition of heat-inactivated serum improved the epidermis structure and thickness, suggesting that serum contains factors that both aid and inhibit stratification.

  13. Relationships of the phase velocity with the microarchitectural parameters in bovine trabecular bone in vitro: application of a stratified model

    Energy Technology Data Exchange (ETDEWEB)

    Lee, Kang Il [Kangwon National University, Chuncheon (Korea, Republic of)

    2012-08-15

    The present study aims to provide insight into the relationships of the phase velocity with the microarchitectural parameters in bovine trabecular bone in vitro. The frequency-dependent phase velocity was measured in 22 bovine femoral trabecular bone samples by using a pair of transducers with a diameter of 25.4 mm and a center frequency of 0.5 MHz. The phase velocity exhibited positive correlation coefficients of 0.48 and 0.32 with the ratio of bone volume to total volume and the trabecular thickness, respectively, but a negative correlation coefficient of -0.62 with the trabecular separation. The best univariate predictor of the phase velocity was the trabecular separation, yielding an adjusted squared correlation coefficient of 0.36. The multivariate regression models yielded adjusted squared correlation coefficients of 0.21 - 0.36. The theoretical phase velocity predicted by using a stratified model for wave propagation in periodically stratified media consisting of alternating parallel solid-fluid layers showed reasonable agreements with the experimental measurements.
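
The adjusted squared correlation coefficients quoted above follow from the standard adjustment for the number of predictors. A quick check with the reported univariate numbers (n = 22 samples, one predictor, r = -0.62; the small mismatch with the reported 0.36 is rounding in r):

```python
def adjusted_r2(r2, n_obs, n_predictors):
    # Penalise R-squared for the number of predictors used
    return 1 - (1 - r2) * (n_obs - 1) / (n_obs - n_predictors - 1)

r = -0.62                        # trabecular separation vs. phase velocity
adj = adjusted_r2(r**2, n_obs=22, n_predictors=1)
```

The adjustment always pulls the raw R-squared downward, and more so in the multivariate models, which is why adding predictors here yields adjusted values no better than the univariate one.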

  14. Relationships of the phase velocity with the microarchitectural parameters in bovine trabecular bone in vitro: application of a stratified model

    International Nuclear Information System (INIS)

    Lee, Kang Il

    2012-01-01

    The present study aims to provide insight into the relationships of the phase velocity with the microarchitectural parameters in bovine trabecular bone in vitro. The frequency-dependent phase velocity was measured in 22 bovine femoral trabecular bone samples by using a pair of transducers with a diameter of 25.4 mm and a center frequency of 0.5 MHz. The phase velocity exhibited positive correlation coefficients of 0.48 and 0.32 with the ratio of bone volume to total volume and the trabecular thickness, respectively, but a negative correlation coefficient of -0.62 with the trabecular separation. The best univariate predictor of the phase velocity was the trabecular separation, yielding an adjusted squared correlation coefficient of 0.36. The multivariate regression models yielded adjusted squared correlation coefficients of 0.21 - 0.36. The theoretical phase velocity predicted by using a stratified model for wave propagation in periodically stratified media consisting of alternating parallel solid-fluid layers showed reasonable agreements with the experimental measurements.

  15. Combining eddy-covariance measurements and Penman-Monteith type models to estimate evapotranspiration of flooded and aerobic rice

    Science.gov (United States)

    Facchi, Arianna; Masseroni, Daniele; Gharsallah, Olfa; Gandolfi, Claudio

    2014-05-01

    Rice is of great importance both from a food supply point of view, since it represents the main food in the diet of over half the world's population, and from a water resources point of view, since it consumes almost 40% of the water used for irrigation. About 90% of global production takes place in Asia, while European production is quantitatively modest (about 3 million tons). However, Italy is Europe's leading producer, with over half of total production, almost totally concentrated in a large traditional paddy rice area between the Lombardy and Piedmont regions, in the north-western part of the country. In this area, irrigation of rice is traditionally carried out by continuous flooding. The high water requirement of this irrigation regime encourages the introduction of water-saving irrigation practices, such as flood irrigation after sowing in dry soil and intermittent irrigation (aerobic rice). In the agricultural season 2013, an intense monitoring activity was conducted on three experimental fields located in the Padana plain (northern Italy) and characterized by different irrigation regimes (traditional flood irrigation, flood irrigation after sowing in dry soil, intermittent irrigation), with the aim of comparing the water balance terms for the three irrigation treatments. Actual evapotranspiration (ET) is one of these terms, but, unlike other water balance components, its field monitoring requires expensive instrumentation. This work explores the possibility of using only one eddy covariance system and Penman-Monteith (PM) type models for the determination of ET fluxes for the three irrigation regimes. An eddy covariance station was installed on the levee between the traditional flooded and the aerobic rice fields, to contemporaneously monitor the ET fluxes from these two treatments as a function of the wind direction. A detailed footprint analysis was conducted, through the application of three different analytical models, to determine the position

  16. LuxGLM: a probabilistic covariate model for quantification of DNA methylation modifications with complex experimental designs.

    Science.gov (United States)

    Äijö, Tarmo; Yue, Xiaojing; Rao, Anjana; Lähdesmäki, Harri

    2016-09-01

    5-methylcytosine (5mC) is a widely studied epigenetic modification of DNA. The ten-eleven translocation (TET) dioxygenases oxidize 5mC into oxidized methylcytosines (oxi-mCs): 5-hydroxymethylcytosine (5hmC), 5-formylcytosine (5fC) and 5-carboxylcytosine (5caC). DNA methylation modifications have multiple functions. For example, 5mC is shown to be associated with diseases, and oxi-mC species are reported to have a role in active DNA demethylation through 5mC oxidation and DNA repair, among others, but the detailed mechanisms are poorly understood. Bisulphite sequencing and its various derivatives can be used to gain information about all methylation modifications at single nucleotide resolution. Analysis of bisulphite-based sequencing data is complicated due to the convoluted read-outs and experiment-specific variation in biochemistry. Moreover, statistical analysis is often complicated by various confounding effects. How to analyse 5mC and oxi-mC data sets with arbitrary and complex experimental designs is an open and important problem. We propose the first method to quantify oxi-mC species with arbitrary covariate structures from bisulphite-based sequencing data. Our probabilistic modeling framework combines a previously proposed hierarchical generative model for oxi-mC-seq data and a general linear model component to account for confounding effects. We show that our method provides accurate methylation level estimates and accurate detection of differential methylation when compared with existing methods. Analysis of novel and published data gave insights into the demethylation of the forkhead box P3 (Foxp3) locus during induced T regulatory cell differentiation. We also demonstrate how our covariate model accurately predicts methylation levels of the Foxp3 locus. Collectively, the LuxGLM method improves the analysis of DNA methylation modifications, particularly for oxi-mC species. An implementation of the proposed method is available under the MIT license at https

  17. Mixed model with spatial variance-covariance structure for accommodating of local stationary trend and its influence on multi-environmental crop variety trial assessment

    Energy Technology Data Exchange (ETDEWEB)

    Negash, A. W.; Mwambi, H.; Zewotir, T.; Eweke, G.

    2014-06-01

    The most common procedure for analyzing multi-environmental trials is based on the assumption that the residual error variance is homogenous across all locations considered. However, this may often be unrealistic, and therefore limit the accuracy of variety evaluation or the reliability of variety recommendations. The objectives of this study were to show the advantages of mixed models with spatial variance-covariance structures, and direct implications of model choice on the inference of varietal performance, ranking and testing based on two multi-environmental data sets from realistic national trials. A model comparison with a χ²-test for the trials in the two data sets (wheat data set BW00RVTI and barley data set BW01RVII) suggested that selected spatial variance-covariance structures fitted the data significantly better than the ANOVA model. The form of the optimally fitted spatial variance-covariance structure, the ranking and the consistency-ratio test were not the same from one trial (location) to the other. Linear mixed models with single-stage analysis, including a spatial variance-covariance structure with a group factor of location in the random model, also improved the estimation of genotype effects and their ranking. The model also improved varietal performance estimation through its capacity to handle additional sources of variation, namely location and genotype-by-location (environment) interaction variation, and to accommodate local stationary trends. (Author)

  18. Poincare covariance and κ-Minkowski spacetime

    International Nuclear Information System (INIS)

    Dabrowski, Ludwik; Piacitelli, Gherardo

    2011-01-01

    A fully Poincare covariant model is constructed as an extension of the κ-Minkowski spacetime. Covariance is implemented by a unitary representation of the Poincare group, and thus complies with the original Wigner approach to quantum symmetries. This provides yet another example (besides the DFR model), where Poincare covariance is realised a la Wigner in the presence of two characteristic dimensionful parameters: the light speed and the Planck length. In other words, a Doubly Special Relativity (DSR) framework may well be realised without deforming the meaning of 'Poincare covariance'. -- Highlights: → We construct a 4d model of noncommuting coordinates (quantum spacetime). → The coordinates are fully covariant under the undeformed Poincare group. → Covariance a la Wigner holds in presence of two dimensionful parameters. → Hence we are not forced to deform covariance (e.g. as quantum groups). → The underlying κ-Minkowski model is unphysical; covariantisation does not cure this.

  19. Self-Dual Configurations in a Generalized Abelian Chern-Simons-Higgs Model with Explicit Breaking of the Lorentz Covariance

    International Nuclear Information System (INIS)

    Sourrouille, Lucas; Casana, Rodolfo

    2016-01-01

    We have studied the existence of self-dual solitonic solutions in a generalization of the Abelian Chern-Simons-Higgs model. Such a generalization introduces two different nonnegative functions, ω₁(|ϕ|) and ω(|ϕ|), which split the kinetic term of the Higgs field, |D_μϕ|² → ω₁(|ϕ|)|D₀ϕ|² − ω(|ϕ|)|D_kϕ|², explicitly breaking the Lorentz covariance. We have shown that a clean implementation of the Bogomolnyi procedure can only be carried out if ω(|ϕ|) ∝ β|ϕ|^(2β−2) with β ≥ 1. The self-dual or Bogomolnyi equations produce an infinite number of soliton solutions obtained by conveniently choosing the generalizing function ω₁(|ϕ|), which must be able to provide a finite magnetic field. Also, we have shown that by properly choosing the generalizing functions it is possible to reproduce the Bogomolnyi equations of the Abelian Maxwell-Higgs and Chern-Simons-Higgs models. Finally, some new self-dual |ϕ|⁶-vortex solutions have been analyzed from both theoretical and numerical points of view.

  20. Seasonal variation of photosynthetic model parameters and leaf area index from global Fluxnet eddy covariance data

    Science.gov (United States)

    Groenendijk, M.; Dolman, A. J.; Ammann, C.; Arneth, A.; Cescatti, A.; Dragoni, D.; Gash, J. H. C.; Gianelle, D.; Gioli, B.; Kiely, G.; Knohl, A.; Law, B. E.; Lund, M.; Marcolla, B.; van der Molen, M. K.; Montagnani, L.; Moors, E.; Richardson, A. D.; Roupsard, O.; Verbeeck, H.; Wohlfahrt, G.

    2011-12-01

    Global vegetation models require the photosynthetic parameters, maximum carboxylation capacity (Vcm) and quantum yield (α), to parameterize their plant functional types (PFTs). The purpose of this work is to determine how much the scaling of the parameters from leaf to ecosystem level through a seasonally varying leaf area index (LAI) explains the parameter variation within and between PFTs. Using Fluxnet data, we simulate a seasonally variable LAIF for a large range of sites, comparable to the LAIM derived from MODIS. There are discrepancies when LAIF reaches zero while LAIM still provides a small positive value. We find that temperature is the most common constraint on LAIF in 55% of the simulations, while global radiation and vapor pressure deficit are the key constraints for 18% and 27% of the simulations, respectively; large differences in this forcing nevertheless exist when looking at specific PFTs. Despite these differences, the annual photosynthesis simulations are comparable when using LAIF or LAIM (r2 = 0.89). We further investigated the seasonal variation of ecosystem-scale parameters derived with LAIF. Vcm has the largest seasonal variation. This holds for all vegetation types and climates. The parameter α is less variable. By including ecosystem-scale parameter seasonality we can explain a considerable part of the ecosystem-scale parameter variation between PFTs. The remaining unexplained leaf-scale PFT variation still needs further work, including elucidating the precise role of leaf- and soil-level nitrogen.

  1. A Note on the Power Provided by Sibships of Sizes 2, 3, and 4 in Genetic Covariance Modeling of a Codominant QTL.

    NARCIS (Netherlands)

    Dolan, C.V.; Boomsma, D.I.; Neale, M.C.

    1999-01-01

    The contribution of size 3 and size 4 sibships to power in covariance structure modeling of a codominant QTL is investigated. Power calculations are based on the noncentral chi-square distribution. Sixteen sets of parameter values are considered. Results indicate that size 3 and size 4 sibships

  2. Modeling the safety impacts of driving hours and rest breaks on truck drivers considering time-dependent covariates.

    Science.gov (United States)

    Chen, Chen; Xie, Yuanchang

    2014-12-01

    Driving hours and rest breaks are closely related to driver fatigue, which is a major contributor to truck crashes. This study investigates the effects of driving hours and rest breaks on commercial truck driver safety. A discrete-time logistic regression model is used to evaluate the crash odds ratios of driving hours and rest breaks. Driving time is divided into 11 one-hour intervals. These intervals and rest breaks are modeled as dummy variables. In addition, a Cox proportional hazards regression model with time-dependent covariates is used to assess the transient effects of rest breaks, which consist of a fixed effect and a variable effect. Data collected from two national truckload carriers in 2009 and 2010 are used. The discrete-time logistic regression result indicates that only the crash odds ratio of the 11th driving hour is statistically significant. Taking one, two, and three rest breaks can reduce drivers' crash odds by 68%, 83%, and 85%, respectively, compared to drivers who did not take any rest breaks. The Cox regression result shows clear transient effects for rest breaks. It also suggests that drivers may need some time to adjust themselves to normal driving tasks after a rest break. Overall, the third rest break's safety benefit is very limited based on the results of both models. The findings of this research can help policy makers better understand the impact of driving time and rest breaks and develop more effective rules to improve commercial truck safety. Copyright © 2014 National Safety Council and Elsevier Ltd. All rights reserved.
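    The discrete-time logistic setup with dummy-coded driving-hour intervals can be sketched on simulated data. The data-generating process, sample size, and all effect sizes below are assumptions for illustration, not the study's data:

    ```python
    import numpy as np
    import statsmodels.api as sm

    rng = np.random.default_rng(0)
    n = 20000
    # Hypothetical records: driving-hour interval (1..11) and rest-break count (0..3)
    hour = rng.integers(1, 12, n)
    breaks = rng.integers(0, 4, n)
    # Simulate crash indicators with elevated risk in the 11th hour,
    # reduced by each rest break (effect sizes assumed)
    lin = -3.0 + 0.9 * (hour == 11) - 0.8 * breaks
    crash = rng.binomial(1, 1 / (1 + np.exp(-lin)))

    # Dummy-code hour intervals (hour 1 = reference) and rest-break counts
    X = np.column_stack(
        [(hour == h).astype(float) for h in range(2, 12)]
        + [(breaks == b).astype(float) for b in range(1, 4)]
    )
    model = sm.Logit(crash, sm.add_constant(X)).fit(disp=0)
    odds_ratios = np.exp(model.params[1:])  # crash odds ratios vs. reference levels
    ```

    Exponentiated coefficients then play the role of the crash odds ratios reported in the abstract.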

  3. Clustered multistate models with observation level random effects, mover-stayer effects and dynamic covariates: modelling transition intensities and sojourn times in a study of psoriatic arthritis.

    Science.gov (United States)

    Yiu, Sean; Farewell, Vernon T; Tom, Brian D M

    2018-02-01

    In psoriatic arthritis, it is important to understand the joint activity (represented by swelling and pain) and damage processes because both are related to severe physical disability. The paper aims to provide a comprehensive investigation into both processes occurring over time, in particular their relationship, by specifying a joint multistate model at the individual hand joint level, which also accounts for many of their important features. As there are multiple hand joints, such an analysis will be based on the use of clustered multistate models. Here we consider an observation level random-effects structure with dynamic covariates and allow for the possibility that a subpopulation of patients is at minimal risk of damage. Such an analysis is found to provide further understanding of the activity-damage relationship beyond that provided by previous analyses. Consideration is also given to the modelling of mean sojourn times and jump probabilities. In particular, a novel model parameterization which allows easily interpretable covariate effects to act on these quantities is proposed.

  4. Serial grey-box model of a stratified thermal tank for hierarchical control of a solar plant

    Energy Technology Data Exchange (ETDEWEB)

    Arahal, Manuel R. [Universidad de Sevilla, Dpto. de Ingenieria de Sistemas y Automatica, Camino de los Descubrimientos s/n, 41092 Sevilla (Spain); Cirre, Cristina M. [Convenio Universidad de Almeria-Plataforma Solar de Almeria, Ctra. Senes s/n, 04200 Tabernas, Almeria (Spain); Berenguel, Manuel [Universidad de Almeria, Dpto. Lenguajes y Computacion, Ctra. Sacramento s/n, 04120, Almeria (Spain)

    2008-05-15

    The ACUREX collector field, together with a thermal storage tank and a power conversion system, forms the Small Solar Power Systems plant of the Plataforma Solar de Almeria, a facility that has been used for research for the last 25 years. A simulator of the collector field developed by the last author has been available and used as a test-bed for control strategies. Until now, however, there has been no model of the whole plant. Such a model is needed for the hierarchical control schemes also proposed by the authors. In this paper a model of the thermal storage tank is derived using the Simultaneous Perturbation Stochastic Approximation technique to adjust the parameters of a serial grey-box model structure. The benefits of the proposed approach are discussed in the context of the intended use, which requires a model capable of simulating the behavior of the storage tank with low computational load and low error over medium to large horizons. The model is tested against real data in a variety of situations, showing its performance in terms of simulation error in the temperature profile and in the usable energy stored in the tank. The results obtained demonstrate the viability of the proposed approach. (author)
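    Simultaneous Perturbation Stochastic Approximation estimates a gradient from just two loss evaluations per iteration, whatever the parameter dimension. A minimal sketch, not the authors' implementation; the gain constants and the quadratic toy loss standing in for the simulation-error criterion are assumptions:

    ```python
    import numpy as np

    def spsa(loss, theta0, n_iter=500, a=0.1, c=0.1, alpha=0.602, gamma=0.101):
        """Minimise `loss` by Simultaneous Perturbation Stochastic Approximation.

        Each iteration perturbs all parameters at once with Rademacher draws
        and uses two loss evaluations to form a gradient estimate; the gain
        sequences follow the standard power-law recursions (constants assumed).
        """
        rng = np.random.default_rng(0)
        theta = np.asarray(theta0, dtype=float).copy()
        for k in range(n_iter):
            ak = a / (k + 1) ** alpha                         # step-size gain
            ck = c / (k + 1) ** gamma                         # perturbation gain
            delta = rng.choice([-1.0, 1.0], size=theta.size)  # simultaneous perturbation
            diff = loss(theta + ck * delta) - loss(theta - ck * delta)
            theta -= ak * diff / (2.0 * ck) / delta           # gradient-estimate step
        return theta

    # Toy stand-in for a grey-box simulation-error criterion: recover known parameters
    target = np.array([1.5, -0.5])
    theta = spsa(lambda t: np.sum((t - target) ** 2), np.zeros(2))
    ```

    In the tank-model setting, `loss` would be the simulation error of the serial grey-box model over a data record rather than this toy quadratic.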

  5. Multi-state model for studying an intermediate event using time-dependent covariates: application to breast cancer.

    Science.gov (United States)

    Meier-Hirmer, Carolina; Schumacher, Martin

    2013-06-20

    The aim of this article is to propose several methods that make it possible to investigate how and whether the shape of the hazard ratio after an intermediate event depends on the waiting time to occurrence of this event and/or the sojourn time in this state. A simple multi-state model, the illness-death model, is used as a framework to investigate the occurrence of this intermediate event. Several approaches are shown and their advantages and disadvantages are discussed. All these approaches are based on Cox regression. As different time-scales are used, these models go beyond Markov models. Different estimation methods for the transition hazards are presented. Additionally, time-varying covariates are included in the model using an approach based on fractional polynomials. The different methods of this article are then applied to a dataset consisting of four studies conducted by the German Breast Cancer Study Group (GBSG). The occurrence of the first isolated locoregional recurrence (ILRR) is studied. The results contribute to the debate on the role of the ILRR with respect to the course of the breast cancer disease and the resulting prognosis. We have investigated different modelling strategies for the transition hazard after ILRR or, in general, after an intermediate event. Including time-dependent structures altered the resulting hazard functions considerably, and it was shown that this time-dependent structure has to be taken into account in the case of our breast cancer dataset. The results indicate that an early recurrence increases the risk of death. A late ILRR increases the hazard function much less, and after the successful removal of the second tumour the risk of death is almost the same as before the recurrence. With respect to distant disease, the appearance of the ILRR only slightly increases the risk of death if the recurrence was treated successfully. It is important to realize that there are several modelling strategies for the intermediate event and that

  6. Numerical simulations with a FSI-calibrated actuator disk model of wind turbines operating in stratified ABLs

    Science.gov (United States)

    Gohari, S. M. Iman; Sarkar, Sutanu; Korobenko, Artem; Bazilevs, Yuri

    2017-11-01

    Numerical simulations of wind turbines operating under different regimes of stability are performed using LES. A reduced model, based on the generalized actuator disk model (ADM), is implemented to represent the wind turbines within the ABL. Data from the fluid-solid interaction (FSI) simulations of wind turbines have been used to calibrate and validate the reduced model. The computational cost of this method to include wind turbines is affordable and incurs an overhead as low as 1.45%. Using this reduced model, we study the coupling of unsteady turbulent flow with the wind turbine under different ABL conditions: (i) a neutral ABL with zero heat flux and an inversion layer at 350 m, in which the incoming wind has its maximum mean shear between the upper-tip and lower-tip heights; and (ii) a shallow ABL with a surface cooling rate of -1 K/hr, wherein the low-level jet occurs at the wind-turbine hub height. We will discuss how the differences in the unsteady flow between the two ABL regimes impact the wind turbine performance.

  7. Activities on covariance estimation in Japanese Nuclear Data Committee

    Energy Technology Data Exchange (ETDEWEB)

    Shibata, Keiichi [Japan Atomic Energy Research Inst., Tokai, Ibaraki (Japan). Tokai Research Establishment

    1997-03-01

    Described are activities on covariance estimation in the Japanese Nuclear Data Committee. Covariances are obtained from measurements by using the least-squares methods. A simultaneous evaluation was performed to deduce covariances of fission cross sections of U and Pu isotopes. A code system, KALMAN, is used to estimate covariances of nuclear model calculations from uncertainties in model parameters. (author)
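    Estimating covariances of model calculations from parameter uncertainties, in the spirit of systems such as KALMAN, amounts to the first-order "sandwich" propagation C_sigma = S C_p S'. A hedged numerical sketch with made-up sensitivities and parameter covariances (not actual evaluation data):

    ```python
    import numpy as np

    # Hypothetical sensitivity matrix S[i, j] = d(sigma_i)/d(p_j): how each
    # calculated cross section responds to each model parameter (values assumed)
    S = np.array([[1.2, 0.3],
                  [0.5, 0.8],
                  [0.1, 1.5]])
    # Assumed covariance matrix of the model parameters
    C_p = np.array([[0.04, 0.01],
                    [0.01, 0.09]])
    # First-order propagation to the calculated cross sections
    C_sigma = S @ C_p @ S.T
    std = np.sqrt(np.diag(C_sigma))       # absolute uncertainties of the calculation
    corr = C_sigma / np.outer(std, std)   # correlation matrix of the calculation
    ```

    The resulting C_sigma is symmetric positive semi-definite by construction, which is what makes such propagated covariances usable in downstream least-squares evaluations.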

  8. Illustration of Step-Wise Latent Class Modeling With Covariates and Taxometric Analysis in Research Probing Children's Mental Models in Learning Sciences.

    Science.gov (United States)

    Stamovlasis, Dimitrios; Papageorgiou, George; Tsitsipis, Georgios; Tsikalas, Themistoklis; Vaiopoulou, Julie

    2018-01-01

    This paper illustrates two psychometric methods, latent class analysis (LCA) and taxometric analysis (TA), using empirical data from research probing children's mental representations in science learning. LCA is used to obtain a typology based on observed variables and to further investigate how the encountered classes might be related to external variables, where the effectiveness of the classification process and unbiased estimation of parameters become the main concerns. In step-wise LCA, class membership is assigned first and its relationship with covariates is established subsequently. This step-wise approach, however, suffers from severely downward-biased estimates. The illustration of LCA is focused on alternative bias-correction approaches and demonstrates the effect of modal and proportional class-membership assignment along with the BCH and ML correction procedures. The illustration of LCA is presented with three covariates, which are psychometric variables operationalizing formal reasoning, divergent thinking and field dependence-independence, respectively. Moreover, taxometric analysis, a method designed to detect the type of the latent structural model, categorical or dimensional, is introduced, along with the relevant basic concepts and tools. TA was applied complementarily to the same data sets to answer the fundamental hypothesis about children's naïve knowledge on the matters under study, and it comprises an additional asset in building theory, which is fundamental for educational practices. Taxometric analysis provided results that were ambiguous as to the type of the latent structure. This finding initiates further discussion and sets a problematization within this framework, rethinking fundamental assumptions and epistemological issues.

  9. Efficacy of deferoxamine in animal models of intracerebral hemorrhage: a systematic review and stratified meta-analysis.

    Directory of Open Access Journals (Sweden)

    Han-Jin Cui

    Intracerebral hemorrhage (ICH) is a subtype of stroke associated with high morbidity and mortality rates. No proven treatments are available for this condition. Iron-mediated free radical injury is associated with secondary damage following ICH. Deferoxamine (DFX), a ferric-iron chelator, is a candidate drug for the treatment of ICH. We performed a systematic review of studies involving the administration of DFX following ICH. In total, 20 studies were identified that described the efficacy of DFX in animal models of ICH and assessed changes in the brain water content, neurobehavioral score, or both. DFX reduced the brain water content by 85.7% in animal models of ICH (-0.86, 95% CI: -1.48 to -0.23; P < 0.01; 23 comparisons) and improved the neurobehavioral score by -1.08 (95% CI: -1.23 to -0.92; P < 0.01; 62 comparisons). DFX was most efficacious when administered 2-4 h after ICH at a dose of 10-50 mg/kg depending on species, and this beneficial effect remained for up to 24 h postinjury. The efficacy was higher with phenobarbital anesthesia, intramuscular injection, and lysed erythrocyte infusion, and in Fischer 344 rats or aged animals. Overall, although DFX was found to be effective in experimental ICH, additional confirmation is needed before clinical trials are conducted, given possible publication bias, poor study quality, and the limited number of studies.
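    Pooled effects of the kind reported in such stratified meta-analyses are typically obtained by inverse-variance weighting. The sketch below uses made-up per-study effect sizes and standard errors, not the DFX data:

    ```python
    import numpy as np

    # Made-up per-study effect sizes (e.g. standardized mean differences)
    # and their standard errors; NOT the data of the review above
    effects = np.array([-1.1, -0.7, -0.9, -0.5])
    se = np.array([0.30, 0.25, 0.40, 0.20])

    w = 1.0 / se**2                            # inverse-variance weights
    pooled = np.sum(w * effects) / np.sum(w)   # fixed-effect pooled estimate
    pooled_se = np.sqrt(1.0 / np.sum(w))
    ci = (pooled - 1.96 * pooled_se, pooled + 1.96 * pooled_se)
    ```

    Stratified analyses (by dose, timing, species, etc.) repeat this pooling within each stratum and compare the stratum-level estimates.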

  10. Asymptotic Theory for the QMLE in GARCH-X Models with Stationary and Non-Stationary Covariates

    DEFF Research Database (Denmark)

    Han, Heejoon; Kristensen, Dennis

    as captured by its long-memory parameter dx; in particular, we allow for both stationary and non-stationary covariates. We show that the QMLE's of the regression coefficients entering the volatility equation are consistent and normally distributed in large samples independently of the degree of persistence. ... This implies that standard inferential tools, such as t-statistics, do not have to be adjusted to the level of persistence. On the other hand, the intercept in the volatility equation is not identified when the covariate is non-stationary, which is akin to the results of Jensen and Rahbek (2004, Econometric...

  11. Evaluation of logistic regression models and effect of covariates for case-control study in RNA-Seq analysis.

    Science.gov (United States)

    Choi, Seung Hoan; Labadorf, Adam T; Myers, Richard H; Lunetta, Kathryn L; Dupuis, Josée; DeStefano, Anita L

    2017-02-06

    Next generation sequencing provides a count of RNA molecules in the form of short reads, yielding discrete, often highly non-normally distributed gene expression measurements. Although Negative Binomial (NB) regression has been generally accepted in the analysis of RNA sequencing (RNA-Seq) data, its appropriateness has not been exhaustively evaluated. We explore logistic regression as an alternative method for RNA-Seq studies designed to compare cases and controls, where disease status is modeled as a function of RNA-Seq reads using simulated and Huntington disease data. We evaluate the effect of adjusting for covariates that have an unknown relationship with gene expression. Finally, we incorporate the data adaptive method in order to compare false positive rates. When the sample size is small or the expression levels of a gene are highly dispersed, the NB regression shows inflated Type-I error rates but the Classical logistic and Bayes logistic (BL) regressions are conservative. Firth's logistic (FL) regression performs well or is slightly conservative. Large sample size and low dispersion generally make Type-I error rates of all methods close to nominal alpha levels of 0.05 and 0.01. However, Type-I error rates are controlled after applying the data adaptive method. The NB, BL, and FL regressions gain increased power with large sample size, large log2 fold-change, and low dispersion. The FL regression has comparable power to NB regression. We conclude that implementing the data adaptive method appropriately controls Type-I error rates in RNA-Seq analysis. Firth's logistic regression provides a concise statistical inference process and reduces spurious associations from inaccurately estimated dispersion parameters in the negative binomial framework.
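    Modelling disease status as a function of read-count-derived expression can be sketched with a plain logistic regression. The simulation settings below (fold change, dispersion, sample size) are assumptions, and Firth's penalized variant is not shown:

    ```python
    import numpy as np
    import statsmodels.api as sm

    rng = np.random.default_rng(1)
    n = 200
    status = np.repeat([0, 1], n // 2)   # control / case labels
    # Simulated NB-distributed read counts for one gene: cases express it
    # ~1.5x higher (dispersion and fold change assumed for illustration)
    mu = np.where(status == 1, 150.0, 100.0)
    size = 5.0                           # NB "size" (inverse dispersion)
    counts = rng.negative_binomial(size, size / (size + mu))
    log_expr = np.log2(counts + 1.0)
    covariate = rng.normal(size=n)       # a covariate of unknown relevance

    # Disease status modelled as a function of expression plus the covariate
    X = sm.add_constant(np.column_stack([log_expr, covariate]))
    fit = sm.Logit(status, X).fit(disp=0)
    ```

    Reversing the regression this way (outcome on expression, rather than counts on group) sidesteps the dispersion estimation that drives the NB model's inflated Type-I error in small samples.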

  12. Spatiotemporal Variability and Covariability of Temperature, Precipitation, Soil Moisture, and Vegetation in North America for Regional Climate Model Applications

    Science.gov (United States)

    Castro, C. L.; Beltran-Przekurat, A. B.; Pielke, R. A.

    2007-05-01

    Previous work has established that the dominant modes of Pacific SSTs influence the summer climate of North America through large-scale forcing, and this effect is most pronounced during the early part of the season. It is hypothesized, then, that land surface influences become more dominant in the latter part of the season as remote teleconnection influences diminish. As a first step toward investigation of this hypothesis in a regional climate model (RCM) framework, the statistically significant spatiotemporal patterns of variability and covariability in North American precipitation (specified by the standardized precipitation index, or SPI), soil moisture, and vegetation are determined for timescales from a month to six months. To specify these respective data we use: CPC gauge-derived precipitation (1950-2000), Variable Infiltration Capacity (VIC) Model and NOAH Model NLDAS soil moisture and temperature, and the Global Inventory Modeling and Mapping Studies Normalized Difference Vegetation Index (GIMMS-NDVI). The principal statistical tool used is multiple taper frequency singular value decomposition (MTM-SVD), and this is supplemented by wavelet analysis for specific areas of interest. The significant interannual variability in all of these data occurs at a timescale of about 7 to 9 years and appears to be the integrated effect of remote SST forcing from the Pacific. Considering the entire year, the spatial pattern for precipitation resembles the typical ENSO winter signature. If the summer season is considered separately, the out-of-phase relationship between precipitation anomalies in the central U.S. and the core monsoon region is apparent. The largest soil moisture anomalies occur in the central U.S., since precipitation in this region has a consistent relationship to Pacific SSTs for the entire year. This helps to explain the approximately 20-year periodicity in drought conditions there. Unlike soil moisture, the largest anomalies in vegetation occur in the

  13. Free Falling in Stratified Fluids

    Science.gov (United States)

    Lam, Try; Vincent, Lionel; Kanso, Eva

    2017-11-01

    Leaves falling in air and discs falling in water are examples of unsteady descents due to complex interaction between gravitational and aerodynamic forces. Understanding these descent modes is relevant to many branches of engineering and science, ranging from estimating the behavior of re-entry space vehicles to studying the biomechanics of seed dispersal. For regularly shaped objects falling in homogeneous fluids, the motion is relatively well understood. However, less is known about how density stratification of the fluid medium affects the falling behavior. Here, we experimentally investigate the descent of discs in both pure water and in stable linearly stratified fluids for Froude numbers Fr 1 and Reynolds numbers Re between 1000 and 2000. We found that stable stratification (1) enhances the radial dispersion of the disc at landing, (2) increases the descent time, (3) decreases the inclination (or nutation) angle, and (4) decreases the fluttering amplitude while falling. We conclude by commenting on how the corresponding information can be used as a predictive model for objects free falling in stratified fluids.

  14. A generalized partially linear mean-covariance regression model for longitudinal proportional data, with applications to the analysis of quality of life data from cancer clinical trials.

    Science.gov (United States)

    Zheng, Xueying; Qin, Guoyou; Tu, Dongsheng

    2017-05-30

    Motivated by the analysis of quality of life data from a clinical trial on early breast cancer, we propose in this paper a generalized partially linear mean-covariance regression model for longitudinal proportional data, which are bounded in a closed interval. Cholesky decomposition of the covariance matrix for within-subject responses and generalized estimating equations are used to estimate unknown parameters and the nonlinear function in the model. Simulation studies are performed to evaluate the performance of the proposed estimation procedures. Our new model is also applied to analyze the data from the cancer clinical trial that motivated this research. In comparison with available models in the literature, the proposed model does not require specific parametric assumptions on the density function of the longitudinal responses and the probability function of the boundary values and can capture dynamic changes of time or other interested variables on both mean and covariance of the correlated proportional responses. Copyright © 2017 John Wiley & Sons, Ltd.
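    The Cholesky route to modelling a within-subject covariance matrix can be sketched numerically. Assuming the Pourahmadi-style modified Cholesky convention (the matrix below is arbitrary, not from the trial data), one factors T Sigma T' = D with T unit lower triangular and D diagonal:

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    A = rng.normal(size=(4, 4))
    Sigma = A @ A.T + 4.0 * np.eye(4)   # an arbitrary SPD within-subject covariance

    # Modified Cholesky decomposition T Sigma T' = D: the below-diagonal
    # entries of T are minus the autoregressive coefficients of each response
    # on its predecessors, and D holds the innovation variances
    C = np.linalg.cholesky(Sigma)       # standard factorization Sigma = C C'
    d = np.diag(C)
    T = np.diag(d) @ np.linalg.inv(C)   # unit lower-triangular factor
    D = np.diag(d**2)                   # diagonal innovation variances
    ```

    The appeal of this parameterization in mean-covariance regression is that the entries of T and log-entries of D are unconstrained, so they can themselves be modelled with covariates while Sigma stays positive definite.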

  15. The stratified Boycott effect

    Science.gov (United States)

    Peacock, Tom; Blanchette, Francois; Bush, John W. M.

    2005-04-01

    We present the results of an experimental investigation of the flows generated by monodisperse particles settling at low Reynolds number in a stably stratified ambient with an inclined sidewall. In this configuration, upwelling beneath the inclined wall associated with the Boycott effect is opposed by the ambient density stratification. The evolution of the system is determined by the relative magnitudes of the container depth, h, and the neutral buoyancy height, hn = c0(ρp-ρf)/|dρ/dz|, where c0 is the particle concentration, ρp the particle density, ρf the mean fluid density and dρ/dz the ambient density gradient. For weak stratification, h < hn, the Boycott layer transports dense fluid from the bottom to the top of the system; subsequently, the upper clear layer of dense saline fluid is mixed by convection. For sufficiently strong stratification, h > hn, layering occurs. The lowermost layer is created by clear fluid transported from the base to its neutral buoyancy height, and has a vertical extent hn; subsequently, smaller overlying layers develop. Within each layer, convection erodes the initially linear density gradient, generating a step-like density profile throughout the system that persists after all the particles have settled. Particles are transported across the discrete density jumps between layers by plumes of particle-laden fluid.
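    The neutral buoyancy height can be evaluated directly from its definition; every number below is assumed for illustration, not an experimental value:

    ```python
    # All quantities assumed for illustration (not the experimental values)
    c0 = 0.02          # initial particle volume fraction
    rho_p = 2500.0     # particle density [kg/m^3]
    rho_f = 1020.0     # mean fluid density [kg/m^3]
    drho_dz = -50.0    # ambient density gradient [kg/m^4]

    # Neutral buoyancy height hn = c0 (rho_p - rho_f) / |drho/dz|
    hn = c0 * (rho_p - rho_f) / abs(drho_dz)

    h = 1.0            # container depth [m]
    regime = "layering" if h > hn else "whole-tank convection"
    ```

    Comparing h with hn then selects between the layered and fully convecting regimes described above.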

  16. The "covariation method" for estimating the parameters of the standard Dynamic Energy Budget model II: Properties and preliminary patterns

    Science.gov (United States)

    Lika, Konstadia; Kearney, Michael R.; Kooijman, Sebastiaan A. L. M.

    2011-11-01

    The covariation method for estimating the parameters of the standard Dynamic Energy Budget (DEB) model provides a single-step method of accessing all the core DEB parameters from commonly available empirical data. In this study, we assess the robustness of this parameter estimation procedure and analyse the role of pseudo-data using elasticity coefficients. In particular, we compare the performance of Maximum Likelihood (ML) vs. Weighted Least Squares (WLS) approaches and find that the two approaches tend to converge in performance as the number of uni-variate data sets increases, but that WLS is more robust when data sets comprise single points (zero-variate data). The efficiency of the approach is shown to be high, and the prior parameter estimates (pseudo-data) have very little influence if the real data contain information about the parameter values. For instance, the effect of the pseudo-value for the allocation fraction κ is reduced when there is information for both growth and reproduction, that for the energy conductance is reduced when information on age at birth and puberty is given, and the effect of the pseudo-value for the maturity maintenance rate coefficient is insignificant. The estimation of some parameters (e.g., the zoom factor and the shape coefficient) requires little information, while that of others (e.g., maturity maintenance rate, puberty threshold and reproduction efficiency) requires data at several food levels. The generality of the standard DEB model, in combination with the estimation of all of its parameters, allows comparison of species on the basis of parameter values. We discuss a number of preliminary patterns emerging from the present collection of parameter estimates across a wide variety of taxa. We make the observation that the estimated value of the fraction κ of mobilised reserve that is allocated to soma is far away from the value that maximises reproduction. We recognise this as the reason why two very different

  17. Brownian distance covariance

    OpenAIRE

    Székely, Gábor J.; Rizzo, Maria L.

    2010-01-01

    Distance correlation is a new class of multivariate dependence coefficients applicable to random vectors of arbitrary and not necessarily equal dimension. Distance covariance and distance correlation are analogous to product-moment covariance and correlation, but generalize and extend these classical bivariate measures of dependence. Distance correlation characterizes independence: it is zero if and only if the random vectors are independent. The notion of covariance with...
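    The sample distance covariance is computed directly from double-centred pairwise distance matrices. A minimal one-dimensional sketch of the Székely-Rizzo V-statistic form (the data are simulated for illustration):

    ```python
    import numpy as np

    def distance_covariance(x, y):
        """Sample distance covariance of two 1-D samples (V-statistic form).

        Double-centre the pairwise distance matrices and average their
        element-wise product; the result is zero (asymptotically) iff the
        samples are independent.
        """
        x = np.asarray(x, dtype=float)[:, None]
        y = np.asarray(y, dtype=float)[:, None]
        a = np.abs(x - x.T)                 # pairwise distances |x_i - x_j|
        b = np.abs(y - y.T)
        A = a - a.mean(axis=0) - a.mean(axis=1)[:, None] + a.mean()
        B = b - b.mean(axis=0) - b.mean(axis=1)[:, None] + b.mean()
        return np.sqrt(np.mean(A * B))

    rng = np.random.default_rng(0)
    x = rng.normal(size=500)
    dep = distance_covariance(x, x**2)                    # nonlinear dependence
    indep = distance_covariance(x, rng.normal(size=500))  # independent noise
    ```

    Note that x and x² are uncorrelated in the Pearson sense, yet their distance covariance is clearly nonzero, which is exactly the property the abstract highlights.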

  18. Cartographic modeling of heterogeneous landscape for footprint analysis of Eddy Covariance Measurements (Central Forest and Central Chernozem reserves, Russia)

    Science.gov (United States)

    Kozlov, Daniil

    2014-05-01

    The topographical, soil and vegetation maps of FLUXNET study areas are widely used for interpretation of eddy covariance measurements, for calibration of biogeochemical models and for making regional assessments of carbon balance. The poster presents methodological problems and results of ecosystem mapping using GIS, remote sensing, statistical and field methods on the example of two RusFluxNet sites in the Central Forest (33° E, 56°30'N) and Central Chernozem (36°10' E, 51°36'N) reserves. In the Central Forest reserve, tacheometric measurements were used for topographical and peat surveys of a bogged sphagnum spruce forest over a 20-hectare area. Its boundaries and the areas affected by windfall were determined, and the stocks and spatial distribution of organic matter were obtained. The datasets of groundwater monitoring measurements from ten wells were compared with each other, and an analysis of spatial and temporal groundwater variability was performed. A map of the typical ecosystems of the reserve and its surroundings was created on the basis of analysis of multi-temporal Landsat images. In the Central Chernozem reserve, a GNSS topographical survey was used for flux-tower footprint mapping (22 ha). Features of the microrelief predetermine the development of different soils within the footprint. A close relationship between soil (73 drilling sites) and terrain attributes (DEM with 2.5 m resolution) made it possible to build maps of soils and soil properties: carbon content, bulk density, and the upper boundary of secondary carbonates. The position for chamber-based soil respiration measurements was defined on the basis of these maps. Detailed geodetic and soil surveys of virgin land and plowland were performed in order to estimate the effect of agrogenic processes such as dehumification, compaction and erosion on soils over the whole period of agricultural use of the Central Chernozem reserve area and its surroundings. The choice of analogous soils was based on the similarity of their position within the

  19. Covariant diagrams for one-loop matching

    Energy Technology Data Exchange (ETDEWEB)

    Zhang, Zhengkang [Michigan Center for Theoretical Physics (MCTP), University of Michigan,450 Church Street, Ann Arbor, MI 48109 (United States); Deutsches Elektronen-Synchrotron (DESY),Notkestraße 85, 22607 Hamburg (Germany)

    2017-05-30

    We present a diagrammatic formulation of recently-revived covariant functional approaches to one-loop matching from an ultraviolet (UV) theory to a low-energy effective field theory. Various terms following from a covariant derivative expansion (CDE) are represented by diagrams which, unlike conventional Feynman diagrams, involve gauge-covariant quantities and are thus dubbed “covariant diagrams.” The use of covariant diagrams helps organize and simplify one-loop matching calculations, which we illustrate with examples. Of particular interest is the derivation of UV model-independent universal results, which reduce matching calculations of specific UV models to applications of master formulas. We show how such derivation can be done in a more concise manner than the previous literature, and discuss how additional structures that are not directly captured by existing universal results, including mixed heavy-light loops, open covariant derivatives, and mixed statistics, can be easily accounted for.

  20. Covariant diagrams for one-loop matching

    International Nuclear Information System (INIS)

    Zhang, Zhengkang

    2017-01-01

    We present a diagrammatic formulation of recently-revived covariant functional approaches to one-loop matching from an ultraviolet (UV) theory to a low-energy effective field theory. Various terms following from a covariant derivative expansion (CDE) are represented by diagrams which, unlike conventional Feynman diagrams, involve gauge-covariant quantities and are thus dubbed “covariant diagrams.” The use of covariant diagrams helps organize and simplify one-loop matching calculations, which we illustrate with examples. Of particular interest is the derivation of UV model-independent universal results, which reduce matching calculations of specific UV models to applications of master formulas. We show how such derivation can be done in a more concise manner than the previous literature, and discuss how additional structures that are not directly captured by existing universal results, including mixed heavy-light loops, open covariant derivatives, and mixed statistics, can be easily accounted for.

  1. Covariant Lyapunov vectors

    International Nuclear Information System (INIS)

    Ginelli, Francesco; Politi, Antonio; Chaté, Hugues; Livi, Roberto

    2013-01-01

    Recent years have witnessed a growing interest in covariant Lyapunov vectors (CLVs) which span local intrinsic directions in the phase space of chaotic systems. Here, we review the basic results of ergodic theory, with a specific reference to the implications of Oseledets’ theorem for the properties of the CLVs. We then present a detailed description of a ‘dynamical’ algorithm to compute the CLVs and show that it generically converges exponentially in time. We also discuss its numerical performance and compare it with other algorithms presented in the literature. We finally illustrate how CLVs can be used to quantify deviations from hyperbolicity with reference to a dissipative system (a chain of Hénon maps) and a Hamiltonian model (a Fermi–Pasta–Ulam chain). This article is part of a special issue of Journal of Physics A: Mathematical and Theoretical devoted to ‘Lyapunov analysis: from dynamical systems theory to applications’. (paper)
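
    The forward stage of the 'dynamical' algorithm reviewed here is a QR iteration along an orbit: the log-magnitudes of the R diagonal accumulate the Lyapunov exponents, while recovering the CLVs themselves additionally requires the backward pass, omitted in this sketch. A minimal illustration on the Hénon map with its classic chaotic parameters:

```python
import numpy as np

# Hénon map with the classic chaotic parameters, and its Jacobian.
a, b = 1.4, 0.3
def f(v):
    x, y = v
    return np.array([1.0 - a * x * x + y, b * x])
def jac(v):
    return np.array([[-2.0 * a * v[0], 1.0], [b, 0.0]])

v = np.array([0.1, 0.1])
for _ in range(1000):              # discard a transient
    v = f(v)

# Forward QR (Gram-Schmidt) iteration: log|diag(R)| accumulates the
# Lyapunov exponents; the CLVs need an additional backward pass.
Q = np.eye(2)
lyap = np.zeros(2)
n = 20000
for _ in range(n):
    Q, R = np.linalg.qr(jac(v) @ Q)
    lyap += np.log(np.abs(np.diag(R)))
    v = f(v)
lyap /= n
print(lyap)    # lyap[0] close to the known value 0.419
```

    As a built-in consistency check, the exponents must sum to the average log-volume contraction, which for the Hénon map is exactly log b, since |det J| = b at every point.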

  2. Covariant three-dimensional equation for the wave function of π meson in the composite model of spinor quarks

    International Nuclear Information System (INIS)

    Savron, V.I.; Skachkov, N.B.; Tyumenkov, G.Yu.

    1982-01-01

    A covariant three-dimensional equation is derived for the wave function of a pseudoscalar particle composed of two equal-mass spin-1/2 quarks (a quark and an antiquark). This equation describes the relative motion of the two quarks in the π meson. The asymptotics of the solution of this equation is found in the momentum representation for a quark interaction chosen in the form of a one-gluon-exchange amplitude

  3. Covariance Function for Nearshore Wave Assimilation Systems

    Science.gov (United States)

    2018-01-30

    …which is applicable for any spectral wave model. The four-dimensional variational (4DVar) assimilation methods are based on the mathematical … covariance can be modeled by a parameterized Gaussian function; for nearshore wave assimilation applications, the covariance function depends primarily on …

  4. Treatment Effects with Many Covariates and Heteroskedasticity

    DEFF Research Database (Denmark)

    Cattaneo, Matias D.; Jansson, Michael; Newey, Whitney K.

    The linear regression model is widely used in empirical work in Economics. Researchers often include many covariates in their linear model specification in an attempt to control for confounders. We give inference methods that allow for many covariates and heteroskedasticity. Our results...

  5. Spatial prediction of Soil Organic Carbon contents in croplands, grasslands and forests using environmental covariates and Generalized Additive Models (Southern Belgium)

    Science.gov (United States)

    Chartin, Caroline; Stevens, Antoine; van Wesemael, Bas

    2015-04-01

    Spatially continuous Soil Organic Carbon (SOC) data are needed to support decisions regarding soil management and to inform the political debate with quantified estimates of the status and change of the soil resource. Digital Soil Mapping (DSM) techniques are based on relations existing between a soil parameter (measured at different locations in space at a defined period) and relevant covariates (spatially continuous data) that are factors controlling soil formation and explaining the spatial variability of the target variable. This study aimed to apply DSM techniques to recent SOC content measurements (2005-2013) in three different land uses, i.e. cropland, grassland, and forest, in the Walloon region (Southern Belgium). For this purpose, the SOC databases of two regional Soil Monitoring Networks (CARBOSOL for croplands and grasslands, and IPRFW for forests) were first harmonized, totalising about 1,220 observations. Median values of SOC content for croplands, grasslands, and forests are respectively 12.8, 29.0, and 43.1 g C kg-1. Then, a set of spatial layers was prepared with a resolution of 40 meters and with the same grid topology, containing environmental covariates such as land use, a Digital Elevation Model and its derivatives, soil texture, the C factor, carbon inputs by manure, and climate. Here, in addition to the three classical texture classes (clay, silt, and sand), we tested the use of clay + fine silt content (particles < 20 µm, related to the stable carbon fraction) as a soil covariate explaining SOC variations. For each of the three land uses (cropland, grassland and forest), a Generalized Additive Model (GAM) was calibrated on two thirds of the respective dataset. The remaining samples were assigned to a test set to assess model performance. A backward stepwise procedure was followed to select the relevant environmental covariates using their approximate p-values (the level of significance was set at p < 0.05).
Standard errors were estimated for each of

  6. Illustration of Step-Wise Latent Class Modeling With Covariates and Taxometric Analysis in Research Probing Children's Mental Models in Learning Sciences

    Directory of Open Access Journals (Sweden)

    Dimitrios Stamovlasis

    2018-04-01

    This paper illustrates two psychometric methods, latent class analysis (LCA) and taxometric analysis (TA), using empirical data from research probing children's mental representations in science learning. LCA is used to obtain a typology based on observed variables and to further investigate how the encountered classes might be related to external variables, where the effectiveness of the classification process and unbiased estimation of parameters become the main concerns. In step-wise LCA, class membership is assigned first and its relationship with covariates is established subsequently. This leading-edge modeling approach, however, can suffer from severely downward-biased estimates. The illustration of LCA focuses on alternative bias-correction approaches and demonstrates the effect of modal and proportional class-membership assignment along with the BCH and ML correction procedures. The illustration of LCA is presented with three covariates, which are psychometric variables operationalizing formal reasoning, divergent thinking and field dependence-independence, respectively. Moreover, taxometric analysis, a method designed to detect the type of the latent structural model, categorical or dimensional, is introduced, along with the relevant basic concepts and tools. TA was applied complementarily to the same data sets to answer the fundamental hypothesis about children's naïve knowledge on the matters under study, and it constitutes an additional asset in building theory, which is fundamental for educational practice. Taxometric analysis provided results that were ambiguous as to the type of the latent structure. This finding invites further discussion and problematization within this framework, rethinking fundamental assumptions and epistemological issues.

  7. Estimation of the lifetime distribution of mechatronic systems in the presence of a covariate: A comparison among parametric, semiparametric and nonparametric models

    International Nuclear Information System (INIS)

    Bobrowski, Sebastian; Chen, Hong; Döring, Maik; Jensen, Uwe; Schinköthe, Wolfgang

    2015-01-01

    In practice manufacturers may have lots of failure data of similar products using the same technology basis under different operating conditions. Thus, one can try to derive predictions for the distribution of the lifetime of newly developed components or new application environments through the existing data using regression models based on covariates. Three categories of such regression models are considered: a parametric, a semiparametric and a nonparametric approach. First, we assume that the lifetime is Weibull distributed, where its parameters are modelled as linear functions of the covariate. Second, the Cox proportional hazards model, well-known in Survival Analysis, is applied. Finally, a kernel estimator is used to interpolate between empirical distribution functions. In particular the last case is new in the context of reliability analysis. We propose a goodness of fit measure (GoF), which can be applied to all three types of regression models. Using this GoF measure we discuss a new model selection procedure. To illustrate this method of reliability prediction, the three classes of regression models are applied to real test data of motor experiments. Further the performance of the approaches is investigated by Monte Carlo simulations. - Highlights: • We estimate the lifetime distribution in the presence of a covariate. • Three types of regression models are considered and compared. • A new nonparametric estimator based on our particular data structure is introduced. • We propose a goodness of fit measure and show a new model selection procedure. • A case study with real data and Monte Carlo simulations are performed
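
    The parametric approach described in the abstract can be sketched as maximum-likelihood estimation of a Weibull model whose scale parameter is a log-linear function of the covariate. The covariate, sample size and generating parameter values below are synthetic stand-ins, not the paper's motor-experiment data:

```python
import numpy as np
from scipy.optimize import minimize

# Synthetic lifetimes: T ~ Weibull(shape k, scale exp(b0 + b1*z)), where z
# is an operating-condition covariate. All values are made up.
rng = np.random.default_rng(2)
n = 2000
z = rng.uniform(0.0, 1.0, n)
k_true, b0_true, b1_true = 2.0, 1.0, -0.8
T = np.exp(b0_true + b1_true * z) * rng.weibull(k_true, n)

def negloglik(theta):
    logk, b0, b1 = theta
    k = np.exp(logk)                     # keeps the shape positive
    lam = np.exp(b0 + b1 * z)
    u = T / lam
    # Weibull log-density: log k - log lam + (k - 1) log u - u^k
    return -np.sum(np.log(k) - np.log(lam) + (k - 1.0) * np.log(u) - u**k)

fit = minimize(negloglik, x0=[0.0, 0.0, 0.0], method="Nelder-Mead",
               options={"maxiter": 5000, "xatol": 1e-8, "fatol": 1e-8})
k_hat, b0_hat, b1_hat = np.exp(fit.x[0]), fit.x[1], fit.x[2]
print(k_hat, b0_hat, b1_hat)   # close to (2.0, 1.0, -0.8)
```

    A negative b1 here encodes shorter lifetimes under harsher operating conditions; the semiparametric (Cox) and nonparametric (kernel) alternatives in the paper relax this fully parametric assumption.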

  8. Uncertainty covariances in robotics applications

    International Nuclear Information System (INIS)

    Smith, D.L.

    1984-01-01

    The application of uncertainty covariance matrices in the analysis of robot trajectory errors is explored. First, relevant statistical concepts are reviewed briefly. Then, a simple, hypothetical robot model is considered to illustrate methods for error propagation and performance test data evaluation. The importance of including error correlations is emphasized
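
    The error-propagation step can be illustrated with first-order covariance propagation through the kinematic Jacobian of a hypothetical planar two-link arm (link lengths and uncertainties invented for the sketch). Comparing a correlated joint-error covariance with an uncorrelated one shows why dropping the correlations changes the error budget:

```python
import numpy as np

# Planar two-link arm (link lengths are illustrative, not from the report).
l1, l2 = 1.0, 0.8

def jacobian(q):
    s1, s12 = np.sin(q[0]), np.sin(q[0] + q[1])
    c1, c12 = np.cos(q[0]), np.cos(q[0] + q[1])
    return np.array([[-l1 * s1 - l2 * s12, -l2 * s12],
                     [ l1 * c1 + l2 * c12,  l2 * c12]])

q = np.array([0.3, 0.5])
sig = 0.01                          # 10 mrad joint-angle standard deviation

# Joint covariance with and without a correlation term (rho = 0.8);
# ignoring the correlation is the omission the abstract cautions against.
Sq_corr = sig**2 * np.array([[1.0, 0.8], [0.8, 1.0]])
Sq_ind = sig**2 * np.eye(2)

J = jacobian(q)
Sx_corr = J @ Sq_corr @ J.T         # first-order end-effector covariance
Sx_ind = J @ Sq_ind @ J.T
print(np.trace(Sx_corr), np.trace(Sx_ind))
```

    The traces (total position variance) differ whenever the Jacobian columns are not orthogonal, so an error budget computed from uncorrelated joint errors misstates the end-effector uncertainty.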

  9. Modelling the fatigue behaviour of a stratified glass-epoxy composite: theoretical and experimental aspects

    Energy Technology Data Exchange (ETDEWEB)

    Verdiere, N.; Suri, C. [Laboratoire de mecanique appliquee, 25 - Besancon (France)

    1996-01-01

    Composite materials are used in the manufacture of water transport pipework for use in PWRs. Estimation of their life expectancy relies on long and costly tests (ASTM D2992B standard). It would be extremely advantageous to have another method relying only on short laboratory tests, which could be based on a mechanical behaviour and damage model. For several years, the Laboratoire de Mecanique Appliquee de Besancon has been developing a mechanical behaviour model for composite material tubes under different types of multiaxial stresses. However, this model does not take into account the fatigue behaviour. We therefore needed to find out how this type of stress could be incorporated into the model. To this end, research was undertaken in the form of a thesis (by E. Joseph) both to perfect the multiaxial fatigue stress testing machines and to take this type of behaviour into account in the mechanical model. This study covered glass fibre/epoxy resin composite material tubes and allowed their behaviour to be modelled. An important part of the work concerned the instrumentation and adaptation of test machines, which hitherto did not exist, so that the research could be carried out. For each of the stress axes (traction, internal pressure without end effect (Σ^zz = 0) and internal pressure with end effect (Σ^zz = (1/2)Σ^θθ)), instantaneous behaviour was studied. Three stress levels and frequency values were used to define the fatigue behaviour. (authors). 23 refs., 41 figs., 5 tabs.

  10. Covariant w∞ gravity

    NARCIS (Netherlands)

    Bergshoeff, E.; Pope, C.N.; Stelle, K.S.

    1990-01-01

    We discuss the notion of higher-spin covariance in w∞ gravity. We show how a recently proposed covariant w∞ gravity action can be obtained from non-chiral w∞ gravity by making field redefinitions that introduce new gauge-field components with corresponding new gauge transformations.

  11. A class of covariate-dependent spatiotemporal covariance functions

    Science.gov (United States)

    Reich, Brian J; Eidsvik, Jo; Guindani, Michele; Nail, Amy J; Schmidt, Alexandra M.

    2014-01-01

    In geostatistics, it is common to model spatially distributed phenomena through an underlying stationary and isotropic spatial process. However, these assumptions are often untenable in practice because of the influence of local effects in the correlation structure. Therefore, it has been of prolonged interest in the literature to provide flexible and effective ways to model non-stationarity in the spatial effects. Arguably, due to the local nature of the problem, we might envision that the correlation structure would be highly dependent on local characteristics of the domain of study, namely the latitude, longitude and altitude of the observation sites, as well as other locally defined covariate information. In this work, we provide a flexible and computationally feasible way for allowing the correlation structure of the underlying processes to depend on local covariate information. We discuss the properties of the induced covariance functions and discuss methods to assess its dependence on local covariate information by means of a simulation study and the analysis of data observed at ozone-monitoring stations in the Southeast United States. PMID:24772199
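
    One concrete way to let the correlation structure depend on local covariate information (a simple illustrative construction, not necessarily the one proposed in this paper) is a Gibbs-type kernel whose length-scale at each site is driven by a covariate such as altitude:

```python
import numpy as np

# Gibbs-type nonstationary kernel: the local length-scale l(x) is driven by
# a covariate (here a hypothetical 'altitude' field along a 1-D transect).
def lengthscale(alt, l0=1.0, beta=0.5):
    return l0 * np.exp(beta * alt)         # illustrative link function

def gibbs_cov(s, alt, sigma2=1.0):
    l = lengthscale(alt)
    li, lj = l[:, None], l[None, :]
    d2 = (s[:, None] - s[None, :])**2
    pref = np.sqrt(2.0 * li * lj / (li**2 + lj**2))
    return sigma2 * pref * np.exp(-d2 / (li**2 + lj**2))

s = np.linspace(0.0, 5.0, 40)
alt = np.sin(s)                            # synthetic covariate field
C = gibbs_cov(s, alt)
print(np.allclose(C, C.T), np.linalg.eigvalsh(C).min())
```

    The prefactor keeps the kernel positive definite even though the length-scale varies from site to site, and the stationary Gaussian kernel is recovered when beta = 0.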

  12. Competing risks and time-dependent covariates

    DEFF Research Database (Denmark)

    Cortese, Giuliana; Andersen, Per K

    2010-01-01

    Time-dependent covariates are frequently encountered in regression analysis for event history data and competing risks. They are often essential predictors, which cannot be substituted by time-fixed covariates. This study briefly recalls the different types of time-dependent covariates…, as classified by Kalbfleisch and Prentice [The Statistical Analysis of Failure Time Data, Wiley, New York, 2002] with the intent of clarifying their role and emphasizing the limitations in standard survival models and in the competing risks setting. If random (internal) time-dependent covariates

  13. Evaluation of covariance in theoretical calculation of nuclear data

    International Nuclear Information System (INIS)

    Kikuchi, Yasuyuki

    1981-01-01

    Covariances of cross sections calculated with the statistical model are discussed. Two categories of covariance are considered: one caused by the model approximation and the other by errors in the model parameters. As an example, the covariances are calculated for ¹⁰⁰Ru. (author)

  14. Covariant representations of nuclear *-algebras

    International Nuclear Information System (INIS)

    Moore, S.M.

    1978-01-01

    Extensions of the C*-algebra theory for covariant representations to nuclear *-algebras are considered. Irreducible covariant representations are essentially unique, an invariant state produces a covariant representation with stable vacuum, and the usual relation between ergodic states and covariant representations holds. There exist construction and decomposition theorems and a possible relation between derivations and covariant representations

  15. Covariant Noncommutative Field Theory

    Energy Technology Data Exchange (ETDEWEB)

    Estrada-Jimenez, S [Licenciaturas en Fisica y en Matematicas, Facultad de Ingenieria, Universidad Autonoma de Chiapas Calle 4a Ote. Nte. 1428, Tuxtla Gutierrez, Chiapas (Mexico); Garcia-Compean, H [Departamento de Fisica, Centro de Investigacion y de Estudios Avanzados del IPN P.O. Box 14-740, 07000 Mexico D.F., Mexico and Centro de Investigacion y de Estudios Avanzados del IPN, Unidad Monterrey Via del Conocimiento 201, Parque de Investigacion e Innovacion Tecnologica (PIIT) Autopista nueva al Aeropuerto km 9.5, Lote 1, Manzana 29, cp. 66600 Apodaca Nuevo Leon (Mexico); Obregon, O [Instituto de Fisica de la Universidad de Guanajuato P.O. Box E-143, 37150 Leon Gto. (Mexico); Ramirez, C [Facultad de Ciencias Fisico Matematicas, Universidad Autonoma de Puebla, P.O. Box 1364, 72000 Puebla (Mexico)

    2008-07-02

    The covariant approach to noncommutative field and gauge theories is revisited. In the process the formalism is applied to field theories invariant under diffeomorphisms. Local differentiable forms are defined in this context. The lagrangian and hamiltonian formalism is consistently introduced.

  16. Covariant Noncommutative Field Theory

    International Nuclear Information System (INIS)

    Estrada-Jimenez, S.; Garcia-Compean, H.; Obregon, O.; Ramirez, C.

    2008-01-01

    The covariant approach to noncommutative field and gauge theories is revisited. In the process the formalism is applied to field theories invariant under diffeomorphisms. Local differentiable forms are defined in this context. The lagrangian and hamiltonian formalism is consistently introduced

  17. Covariant diagrams for one-loop matching

    International Nuclear Information System (INIS)

    Zhang, Zhengkang

    2016-10-01

    We present a diagrammatic formulation of recently-revived covariant functional approaches to one-loop matching from an ultraviolet (UV) theory to a low-energy effective field theory. Various terms following from a covariant derivative expansion (CDE) are represented by diagrams which, unlike conventional Feynman diagrams, involve gauge-covariant quantities and are thus dubbed “covariant diagrams.” The use of covariant diagrams helps organize and simplify one-loop matching calculations, which we illustrate with examples. Of particular interest is the derivation of UV model-independent universal results, which reduce matching calculations of specific UV models to applications of master formulas. We show how such derivation can be done in a more concise manner than the previous literature, and discuss how additional structures that are not directly captured by existing universal results, including mixed heavy-light loops, open covariant derivatives, and mixed statistics, can be easily accounted for.

  18. Covariant diagrams for one-loop matching

    Energy Technology Data Exchange (ETDEWEB)

    Zhang, Zhengkang [Michigan Univ., Ann Arbor, MI (United States). Michigan Center for Theoretical Physics; Deutsches Elektronen-Synchrotron (DESY), Hamburg (Germany)

    2016-10-15

    We present a diagrammatic formulation of recently-revived covariant functional approaches to one-loop matching from an ultraviolet (UV) theory to a low-energy effective field theory. Various terms following from a covariant derivative expansion (CDE) are represented by diagrams which, unlike conventional Feynman diagrams, involve gauge-covariant quantities and are thus dubbed “covariant diagrams.” The use of covariant diagrams helps organize and simplify one-loop matching calculations, which we illustrate with examples. Of particular interest is the derivation of UV model-independent universal results, which reduce matching calculations of specific UV models to applications of master formulas. We show how such derivation can be done in a more concise manner than the previous literature, and discuss how additional structures that are not directly captured by existing universal results, including mixed heavy-light loops, open covariant derivatives, and mixed statistics, can be easily accounted for.

  19. On estimating cosmology-dependent covariance matrices

    International Nuclear Information System (INIS)

    Morrison, Christopher B.; Schneider, Michael D.

    2013-01-01

    We describe a statistical model to estimate the covariance matrix of matter tracer two-point correlation functions with cosmological simulations. Assuming a fixed number of cosmological simulation runs, we describe how to build a 'statistical emulator' of the two-point function covariance over a specified range of input cosmological parameters. Because the simulation runs with different cosmological models help to constrain the form of the covariance, we predict that the cosmology-dependent covariance may be estimated with a comparable number of simulations as would be needed to estimate the covariance for fixed cosmology. Our framework is a necessary first step in planning a simulations campaign for analyzing the next generation of cosmological surveys
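
    The baseline ingredient of such an emulator, the sample covariance of a binned two-point function over an ensemble of realizations at fixed cosmology, can be sketched with synthetic realizations standing in for simulation runs:

```python
import numpy as np

# Synthetic ensemble: each 'run' yields a binned two-point function equal
# to a smooth mean plus correlated noise across bins (a stand-in for the
# sample variance of actual simulation runs).
rng = np.random.default_rng(3)
n_runs, n_bins = 200, 10
xi_true = 1.0 / (1.0 + np.arange(n_bins))
C_true = 0.01 * np.exp(-0.3 * np.abs(
    np.subtract.outer(np.arange(n_bins), np.arange(n_bins))))
L = np.linalg.cholesky(C_true)
xi_runs = xi_true + rng.normal(size=(n_runs, n_bins)) @ L.T

# Unbiased sample covariance across realizations.
dev = xi_runs - xi_runs.mean(axis=0)
C_hat = dev.T @ dev / (n_runs - 1)
rel_err = np.linalg.norm(C_hat - C_true) / np.linalg.norm(C_true)
print(C_hat.shape, rel_err)
```

    The emulator idea is then to fit a smooth model for C_hat as a function of the input cosmological parameters, so that runs at different cosmologies jointly constrain the covariance instead of each cosmology requiring its own full ensemble.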

  20. Validation of 3D-CMCC Forest Ecosystem Model (v.5.1) against eddy covariance data for 10 European forest sites

    DEFF Research Database (Denmark)

    Collalti, A.; Marconi, S.; Ibrom, Andreas

    2016-01-01

    This study evaluates the performance of the new version (v.5.1) of the 3D-CMCC Forest Ecosystem Model (FEM) in simulating gross primary productivity (GPP) against eddy covariance GPP data for 10 FLUXNET forest sites across Europe. A new carbon allocation module, coupled with both new phenological… over Europe without a site-related calibration, the model has been deliberately parametrized with a single set of species-specific parametrizations for each forest ecosystem. The model consistently reproduces, both in timing and in magnitude, daily and monthly GPP variability across all sites… sites we evaluate whether a more accurate representation of forest structural characteristics (i.e. cohorts, forest layers) and species composition can improve model results. In two of the three sites, results reveal that the model slightly increases its performance although, statistically speaking

  1. Analysis of Temporal-spatial Co-variation within Gene Expression Microarray Data in an Organogenesis Model

    Science.gov (United States)

    Ehler, Martin; Rajapakse, Vinodh; Zeeberg, Barry; Brooks, Brian; Brown, Jacob; Czaja, Wojciech; Bonner, Robert F.

    The gene networks underlying closure of the optic fissure during vertebrate eye development are poorly understood. We used a novel clustering method based on Laplacian Eigenmaps, a nonlinear dimension reduction method, to analyze microarray data from laser capture microdissected (LCM) cells at the site and developmental stages (days 10.5 to 12.5) of optic fissure closure. Our new method provided greater biological specificity than classical clustering algorithms in terms of identifying more biological processes and functions related to eye development as defined by Gene Ontology at lower false discovery rates. This new methodology builds on the advantages of LCM to isolate pure phenotypic populations within complex tissues and improves the ability to identify critical gene products expressed at low copy number. The combination of LCM of embryonic organs, gene expression microarrays, and extraction of spatial and temporal co-variations appears to be a powerful approach to understanding the gene regulatory networks that specify mammalian organogenesis.
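
    The embedding step of Laplacian Eigenmaps can be sketched as follows: build a Gaussian affinity graph over the samples, form the normalized graph Laplacian, and keep its bottom nontrivial eigenvectors as low-dimensional coordinates for clustering. The two synthetic 'expression profile' groups below are invented stand-ins for LCM samples:

```python
import numpy as np

def laplacian_eigenmaps(X, dim=2, sigma=1.0):
    """Embed rows of X into the bottom nontrivial Laplacian eigenvectors."""
    d2 = ((X[:, None, :] - X[None, :, :])**2).sum(-1)
    W = np.exp(-d2 / (2.0 * sigma**2))     # Gaussian affinities
    np.fill_diagonal(W, 0.0)
    deg = W.sum(axis=1)
    dm = 1.0 / np.sqrt(deg)
    # symmetric normalized Laplacian: I - D^{-1/2} W D^{-1/2}
    L = np.eye(len(X)) - (dm[:, None] * W) * dm[None, :]
    vals, vecs = np.linalg.eigh(L)
    return vecs[:, 1:dim + 1]              # drop the trivial eigenvector

# Two well-separated synthetic 'expression profile' groups of 30 samples.
rng = np.random.default_rng(4)
X = np.vstack([rng.normal(0.0, 0.3, (30, 5)),
               rng.normal(3.0, 0.3, (30, 5))])
Y = laplacian_eigenmaps(X, dim=1)
side = Y[:, 0] > np.median(Y[:, 0])        # split on the first coordinate
print(side[:30].mean(), side[30:].mean())
```

    Because the affinity graph is nearly disconnected between the two groups, the first nontrivial eigenvector is almost piecewise constant, so a simple split on it recovers the two populations.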

  2. Covariance data processing code. ERRORJ

    International Nuclear Information System (INIS)

    Kosako, Kazuaki

    2001-01-01

    The covariance data processing code, ERRORJ, was developed to process the covariance data of JENDL-3.2. ERRORJ has the processing functions of covariance data for cross sections including resonance parameters, angular distribution and energy distribution. (author)

  3. Forecasting Covariance Matrices: A Mixed Frequency Approach

    DEFF Research Database (Denmark)

    Halbleib, Roxana; Voev, Valeri

    This paper proposes a new method for forecasting covariance matrices of financial returns. The model mixes volatility forecasts from a dynamic model of daily realized volatilities estimated with high-frequency data with correlation forecasts based on daily data. This new approach allows for flexible dependence patterns for volatilities and correlations, and can be applied to covariance matrices of large dimensions. The separate modeling of volatility and correlation forecasts considerably reduces the estimation and measurement error implied by the joint estimation and modeling of covariance…
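
    The decomposition this approach exploits, forecasting volatilities and correlations separately and then recombining them, amounts to Sigma = D R D, with D a diagonal matrix of volatility forecasts and R a forecast correlation matrix. A sketch with made-up forecast values:

```python
import numpy as np

# D holds volatility forecasts (e.g. from realized-volatility dynamics),
# R a correlation forecast (e.g. from daily returns); all numbers invented.
vol_forecast = np.array([0.012, 0.020, 0.015])
R_forecast = np.array([[1.0, 0.3, 0.1],
                       [0.3, 1.0, 0.4],
                       [0.1, 0.4, 1.0]])

D = np.diag(vol_forecast)
Sigma = D @ R_forecast @ D          # covariance forecast: Sigma = D R D

print(np.sqrt(np.diag(Sigma)))      # recovers the volatility forecasts
print(np.linalg.eigvalsh(Sigma).min() > 0)   # PD whenever R is PD
```

    A practical appeal of the factorization is that Sigma inherits positive definiteness from R regardless of how the two components are forecast, which is what makes the mixed-frequency combination well defined.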

  4. Removal of clouds, dust and shadow pixels from hyperspectral imagery using a non-separable and stationary spatio-temporal covariance model

    KAUST Repository

    Angel, Yoseline

    2016-10-25

    Hyperspectral remote sensing images are usually affected by atmospheric conditions such as clouds and their shadows, which represent a contamination of reflectance data and complicate the extraction of biophysical variables to monitor phenological cycles of crops. This paper explores a cloud removal approach based on reflectance prediction using multi-temporal data and spatio-temporal statistical models. In particular, a covariance model that captures the behavior of spatial and temporal components in data simultaneously (i.e. non-separable) is considered. Eight weekly images collected from the Hyperion hyper-spectrometer instrument over an agricultural region of Saudi Arabia were used to reconstruct a scene with the presence of cloud-affected pixels over a center-pivot crop. A subset of reflectance values of cloud-free pixels from 50 bands in the spectral range from 426.82 to 884.7 nm at each date were used as input to fit a parametric family of non-separable and stationary spatio-temporal covariance functions. Applying simple kriging as an interpolator, cloud-affected pixels were replaced by cloud-free predicted values per band, obtaining their respective predicted spectral profiles at the same time. An exercise of reconstructing simulated cloudy pixels in a different swath was conducted to assess the model accuracy, achieving root mean square error (RMSE) values per band less than or equal to 3%. The spatial coherence of the results was also checked through absolute error distribution maps, demonstrating their consistency.
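
    The prediction step can be illustrated with simple kriging on a one-dimensional transect: given a covariance model and a known mean, the masked ('cloudy') value is a covariance-weighted combination of the cloud-free observations. The Gaussian covariance and data below are toy stand-ins for the fitted non-separable spatio-temporal model:

```python
import numpy as np

# Gaussian covariance model; sill and range are illustrative, not fitted.
def cov(h, sill=1.0, rng_par=2.0):
    return sill * np.exp(-(h / rng_par)**2)

rs = np.random.default_rng(5)
x_obs = np.array([0.0, 1.0, 2.0, 4.0, 5.0, 6.0])    # cloud-free locations
z_obs = np.sin(0.8 * x_obs) + rs.normal(0.0, 0.02, x_obs.size)
mean = z_obs.mean()                 # simple kriging assumes a known mean

x0 = 3.0                            # the masked ('cloudy') location
K = cov(np.abs(x_obs[:, None] - x_obs[None, :]))
k0 = cov(np.abs(x_obs - x0))
w = np.linalg.solve(K, k0)          # simple-kriging weights
z0 = mean + w @ (z_obs - mean)
print(z0)                           # prediction at the gap
```

    Kriging is an exact interpolator, so predictions at cloud-free locations reproduce the observations; in the paper this prediction is repeated per band, yielding a full predicted spectral profile for each masked pixel.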

  5. Removal of clouds, dust and shadow pixels from hyperspectral imagery using a non-separable and stationary spatio-temporal covariance model

    KAUST Repository

    Angel, Yoseline; Houborg, Rasmus; McCabe, Matthew

    2016-01-01

    Hyperspectral remote sensing images are usually affected by atmospheric conditions such as clouds and their shadows, which represent a contamination of reflectance data and complicate the extraction of biophysical variables to monitor phenological cycles of crops. This paper explores a cloud removal approach based on reflectance prediction using multi-temporal data and spatio-temporal statistical models. In particular, a covariance model that captures the behavior of spatial and temporal components in data simultaneously (i.e. non-separable) is considered. Eight weekly images collected from the Hyperion hyper-spectrometer instrument over an agricultural region of Saudi Arabia were used to reconstruct a scene with the presence of cloud-affected pixels over a center-pivot crop. A subset of reflectance values of cloud-free pixels from 50 bands in the spectral range from 426.82 to 884.7 nm at each date were used as input to fit a parametric family of non-separable and stationary spatio-temporal covariance functions. Applying simple kriging as an interpolator, cloud-affected pixels were replaced by cloud-free predicted values per band, obtaining their respective predicted spectral profiles at the same time. An exercise of reconstructing simulated cloudy pixels in a different swath was conducted to assess the model accuracy, achieving root mean square error (RMSE) values per band less than or equal to 3%. The spatial coherence of the results was also checked through absolute error distribution maps, demonstrating their consistency.

  6. Removal of clouds, dust and shadow pixels from hyperspectral imagery using a non-separable and stationary spatio-temporal covariance model

    Science.gov (United States)

    Angel, Yoseline; Houborg, Rasmus; McCabe, Matthew F.

    2016-10-01

    Hyperspectral remote sensing images are usually affected by atmospheric conditions such as clouds and their shadows, which contaminate the reflectance data and complicate the extraction of biophysical variables used to monitor the phenological cycles of crops. This paper explores a cloud removal approach based on reflectance prediction using multi-temporal data and spatio-temporal statistical models. In particular, a covariance model that captures the behavior of the spatial and temporal components in the data simultaneously (i.e., a non-separable model) is considered. Eight weekly images collected by the Hyperion hyper-spectrometer instrument over an agricultural region of Saudi Arabia were used to reconstruct a scene containing cloud-affected pixels over a center-pivot crop. A subset of reflectance values of cloud-free pixels from 50 bands in the spectral range from 426.82 to 884.7 nm at each date was used as input to fit a parametric family of non-separable and stationary spatio-temporal covariance functions. Applying simple kriging as an interpolator, cloud-affected pixels were replaced by cloud-free predicted values per band, yielding their respective predicted spectral profiles at the same time. An exercise of reconstructing simulated cloudy pixels in a different swath was conducted to assess the model accuracy, achieving root mean square error (RMSE) values per band of less than or equal to 3%. The spatial coherence of the results was also checked through absolute error distribution maps, demonstrating their consistency.

  7. Removal of clouds, dust and shadow pixels from hyperspectral imagery using a non-separable and stationary spatio-temporal covariance model

    KAUST Repository

    Angel, Yoseline

    2016-09-26

    Hyperspectral remote sensing images are usually affected by atmospheric conditions such as clouds and their shadows, which contaminate the reflectance data and complicate the extraction of biophysical variables used to monitor the phenological cycles of crops. This paper explores a cloud removal approach based on reflectance prediction using multi-temporal data and spatio-temporal statistical models. In particular, a covariance model that captures the behavior of the spatial and temporal components in the data simultaneously (i.e., a non-separable model) is considered. Eight weekly images collected by the Hyperion hyper-spectrometer instrument over an agricultural region of Saudi Arabia were used to reconstruct a scene containing cloud-affected pixels over a center-pivot crop. A subset of reflectance values of cloud-free pixels from 50 bands in the spectral range from 426.82 to 884.7 nm at each date was used as input to fit a parametric family of non-separable and stationary spatio-temporal covariance functions. Applying simple kriging as an interpolator, cloud-affected pixels were replaced by cloud-free predicted values per band, yielding their respective predicted spectral profiles at the same time. An exercise of reconstructing simulated cloudy pixels in a different swath was conducted to assess the model accuracy, achieving root mean square error (RMSE) values per band of less than or equal to 3%. The spatial coherence of the results was also checked through absolute error distribution maps, demonstrating their consistency.

  8. Covariant electromagnetic field lines

    Science.gov (United States)

    Hadad, Y.; Cohen, E.; Kaminer, I.; Elitzur, A. C.

    2017-08-01

    Faraday introduced electric field lines as a powerful tool for understanding the electric force, and these field lines are still used today in classrooms and textbooks teaching the basics of electromagnetism within the electrostatic limit. However, despite attempts at generalizing this concept beyond the electrostatic limit, such a fully relativistic field line theory still appears to be missing. In this work, we propose such a theory and define covariant electromagnetic field lines that naturally extend electric field lines to relativistic systems and general electromagnetic fields. We derive a closed-form formula for the field-line curvature in the vicinity of a charge, and show that it is related to the world line of the charge. This demonstrates how the kinematics of a charge can be derived from the geometry of the electromagnetic field lines. Such a theory may also provide new tools in modeling and analyzing electromagnetic phenomena, and may entail new insights regarding long-standing problems such as radiation-reaction and self-force. In particular, the field-line curvature has the attractive property of being non-singular everywhere, thus eliminating all self-field singularities without using renormalization techniques.

  9. B→D**ℓν̄ semileptonic decays in the framework of covariant models of Bakamjian-Thomas-type form factors

    International Nuclear Information System (INIS)

    Morenas, Vincent

    1997-01-01

    The study of semileptonic decays is of crucial importance for the physics of beauty. It was usually believed that the rates of these reactions were saturated by the channels leading to the production of ground state D and D* mesons only. Yet, experimental results have recently shown that the contribution of orbitally excited mesons is not that small. In this thesis, a study of the semileptonic decays of B mesons into the first orbitally excited charmed states D** is presented: by using the Bakamjian-Thomas formalism to construct the mesonic states, together with the hypothesis of the infinite mass limit of the heavy quark, we provide a covariant description of the hadronic transition amplitude; moreover, all the 'good' properties of the heavy quark symmetries are naturally fulfilled. We then fixed the dynamics of the bound states of quarks by introducing four spectroscopic models and made numerical predictions, which are discussed and compared to other theoretical and experimental data where available. Finally, we also applied this formalism to the study of annihilation processes: the transition amplitudes are then also written in a covariant way and the properties of the heavy quark symmetries fulfilled. Numerical predictions of decay constants were made with the same four spectroscopic models. (author)

  10. Models for the a subunits of the Thermus thermophilus V/A-ATPase and Saccharomyces cerevisiae V-ATPase enzymes by cryo-EM and evolutionary covariance

    Science.gov (United States)

    Schep, Daniel G.; Rubinstein, John L.

    2016-01-01

    Rotary ATPases couple ATP synthesis or hydrolysis to proton translocation across a membrane. However, understanding proton translocation has been hampered by a lack of structural information for the membrane-embedded a subunit. The V/A-ATPase from the eubacterium Thermus thermophilus is similar in structure to the eukaryotic V-ATPase but has a simpler subunit composition and functions in vivo to synthesize ATP rather than pump protons. We determined the T. thermophilus V/A-ATPase structure by cryo-EM at 6.4 Å resolution. Evolutionary covariance analysis allowed tracing of the a subunit sequence within the map, providing a complete model of the rotary ATPase. Comparing the membrane-embedded regions of the T. thermophilus V/A-ATPase and eukaryotic V-ATPase from Saccharomyces cerevisiae allowed identification of the α-helices that belong to the a subunit and revealed the existence of previously unknown subunits in the eukaryotic enzyme. Subsequent evolutionary covariance analysis enabled construction of a model of the a subunit in the S. cerevisiae V-ATPase that explains numerous biochemical studies of that enzyme. Comparing the two a subunit structures determined here with a structure of the distantly related a subunit from the bovine F-type ATP synthase revealed a conserved pattern of residues, suggesting a common mechanism for proton transport in all rotary ATPases. PMID:26951669
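Evolutionary covariance analysis of the kind used above infers spatially close residues from pairs of alignment columns that mutate in a correlated way. A purely illustrative sketch with a toy alignment and a simple frequency-based covariance score (real pipelines use corrected statistics such as direct coupling analysis, not this raw measure):

```python
def column_covariance(msa, i, j):
    # Frobenius norm of C_ij(a, b) = f_ij(a, b) - f_i(a) * f_j(b),
    # where f are observed residue frequencies in alignment columns i and j.
    n = len(msa)
    letters = sorted({s[i] for s in msa} | {s[j] for s in msa})
    score = 0.0
    for a in letters:
        fi = sum(s[i] == a for s in msa) / n
        for b in letters:
            fj = sum(s[j] == b for s in msa) / n
            fij = sum(s[i] == a and s[j] == b for s in msa) / n
            score += (fij - fi * fj) ** 2
    return score ** 0.5

# Toy alignment: columns 0 and 1 co-vary perfectly, column 2 is unrelated.
msa = ["AAG", "AAC", "TTG", "TTC", "AAG", "TTC"]
print(column_covariance(msa, 0, 1), column_covariance(msa, 0, 2))
```

The perfectly co-varying pair (0, 1) scores higher than the unrelated pair (0, 2), which is the signal used to constrain which α-helices pack against each other in the a subunit.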

  11. Electromagnetic waves in stratified media

    CERN Document Server

    Wait, James R; Fock, V A; Wait, J R

    2013-01-01

    International Series of Monographs in Electromagnetic Waves, Volume 3: Electromagnetic Waves in Stratified Media provides information pertinent to the electromagnetic waves in media whose properties differ in one particular direction. This book discusses the important feature of the waves that enables communications at global distances. Organized into 13 chapters, this volume begins with an overview of the general analysis for the electromagnetic response of a plane stratified medium comprising of any number of parallel homogeneous layers. This text then explains the reflection of electromagne

  12. The covariant chiral ring

    Energy Technology Data Exchange (ETDEWEB)

    Bourget, Antoine; Troost, Jan [Laboratoire de Physique Théorique, École Normale Supérieure, 24 rue Lhomond, 75005 Paris (France)

    2016-03-23

    We construct a covariant generating function for the spectrum of chiral primaries of symmetric orbifold conformal field theories with N=(4,4) supersymmetry in two dimensions. For seed target spaces K3 and T⁴, the generating functions capture the SO(21) and SO(5) representation theoretic content of the chiral ring respectively. Via string dualities, we relate the transformation properties of the chiral ring under these isometries of the moduli space to the Lorentz covariance of perturbative string partition functions in flat space.

  13. Dimension from covariance matrices.

    Science.gov (United States)

    Carroll, T L; Byers, J M

    2017-02-01

    We describe a method to estimate embedding dimension from a time series. This method includes an estimate of the probability that the dimension estimate is valid. Such validity estimates are not common in algorithms for calculating the properties of dynamical systems. The algorithm described here compares the eigenvalues of covariance matrices created from an embedded signal to the eigenvalues for a covariance matrix of a Gaussian random process with the same dimension and number of points. A statistical test gives the probability that the eigenvalues for the embedded signal did not come from the Gaussian random process.
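The core comparison in this method, eigenvalues of the covariance of a delay-embedded signal against the flat spectrum expected for an isotropic Gaussian process, can be illustrated directly. The embedding length, signal choice, and thresholds below are our own illustrative assumptions, not the paper's test:

```python
import numpy as np

def embed(x, m):
    # Time-delay embedding: rows are (x[t], x[t+1], ..., x[t+m-1]).
    return np.lib.stride_tricks.sliding_window_view(x, m)

def eig_spectrum(x, m):
    # Sorted (descending), normalized eigenvalues of the covariance
    # matrix of the embedded signal.
    X = embed(x, m)
    ev = np.linalg.eigvalsh(np.cov(X.T))[::-1]
    return ev / ev.sum()

t = np.arange(5000)
sine = np.sin(2 * np.pi * t / 37.0)                     # orbit lies on a 2-D ellipse
noise = np.random.default_rng(3).standard_normal(5000)  # fills all embedding dimensions

ev_sine = eig_spectrum(sine, 5)
ev_noise = eig_spectrum(noise, 5)
print(ev_sine, ev_noise)
```

For the sine, essentially all variance falls on the first two eigenvalues, revealing an effective dimension of 2, while the Gaussian noise spreads variance evenly across all five; the paper's contribution is to attach a statistical test, and hence a validity probability, to exactly this kind of comparison.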

  14. State-dependent errors in a land surface model across biomes inferred from eddy covariance observations on multiple timescales

    NARCIS (Netherlands)

    Wang, T.; Brender, P.; Ciais, P.; Piao, S.; Mahecha, M.D.; Chevallier, F.; Reichstein, M.; Ottle, C.; Maignan, F.; Arain, A.; Bohrer, G.; Cescatti, A.; Kiely, G.; Law, B.E.; Lutz, M.; Montagnani, L.; Moors, E.J.

    2012-01-01

    Characterization of state-dependent model biases in land surface models can highlight model deficiencies, and provide new insights into model development. In this study, artificial neural networks (ANNs) are used to estimate the state-dependent biases of a land surface model (ORCHIDEE: ORganising

  15. In situ measurements of tritium evapotranspiration (³H-ET) flux over grass and soil using the gradient and eddy covariance experimental methods and the FAO-56 model.

    Science.gov (United States)

    Connan, O; Maro, D; Hébert, D; Solier, L; Caldeira Ideas, P; Laguionie, P; St-Amant, N

    2015-10-01

    The behaviour of tritium in the environment is linked to the water cycle. We compare three methods of calculating the tritium evapotranspiration flux from grassland cover. The gradient and eddy covariance methods, together with a method based on the theoretical Penman-Monteith model, were tested in a study carried out in 2013 in an environment characterised by high levels of tritium activity. The results show that each of the three methods gave similar results. The various constraints applying to each method are discussed. The results show a tritium evapotranspiration flux of around 15 mBq m⁻² s⁻¹ in this environment. These results will be used to improve the entry parameters for the general models of tritium transfers in the environment. Copyright © 2015 Elsevier Ltd. All rights reserved.
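The FAO-56 model that underpins the third method rests on a published closed-form equation for reference evapotranspiration. A sketch of the standard daily FAO-56 Penman-Monteith formulation (the input values in the call are invented for illustration and are not the study's measurements):

```python
import math

def fao56_et0(t_mean, rn, g, u2, rh_mean, altitude=0.0):
    """FAO-56 Penman-Monteith reference evapotranspiration (mm/day).
    t_mean: air temperature (degC), rn: net radiation (MJ/m2/day),
    g: soil heat flux (MJ/m2/day), u2: wind speed at 2 m (m/s),
    rh_mean: relative humidity (%), altitude: station elevation (m)."""
    es = 0.6108 * math.exp(17.27 * t_mean / (t_mean + 237.3))  # saturation vapour pressure (kPa)
    ea = es * rh_mean / 100.0                                  # actual vapour pressure (kPa)
    delta = 4098.0 * es / (t_mean + 237.3) ** 2                # slope of vapour pressure curve
    p = 101.3 * ((293.0 - 0.0065 * altitude) / 293.0) ** 5.26  # atmospheric pressure (kPa)
    gamma = 0.000665 * p                                       # psychrometric constant (kPa/degC)
    num = 0.408 * delta * (rn - g) + gamma * 900.0 / (t_mean + 273.0) * u2 * (es - ea)
    return num / (delta + gamma * (1.0 + 0.34 * u2))

print(fao56_et0(t_mean=20.0, rn=13.0, g=0.3, u2=2.0, rh_mean=65.0))
```

Multiplying such a water flux by the tritium concentration of the evaporating water gives the model-based ³H-ET flux that the study compares against the gradient and eddy covariance measurements.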

  16. Pedigree-based estimation of covariance between dominance deviations and additive genetic effects in closed rabbit lines considering inbreeding and using a computationally simpler equivalent model.

    Science.gov (United States)

    Fernández, E N; Legarra, A; Martínez, R; Sánchez, J P; Baselga, M

    2017-06-01

    Inbreeding generates covariances between additive and dominance effects (breeding values and dominance deviations). In this work, we developed and applied models for estimation of dominance and additive genetic variances and their covariance, a model that we call "full dominance," from pedigree and phenotypic data. Estimates from this model, such as those presented here, are very scarce both in livestock and in wild genetics. First, we estimated pedigree-based condensed probabilities of identity using recursion. Second, we developed an equivalent linear model in which variance components can be estimated using closed-form algorithms such as REML or Gibbs sampling and existing software. Third, we present a new method to refer the estimated variance components to meaningful parameters in a particular population, i.e., final partially inbred generations as opposed to outbred base populations. We applied these developments to three closed rabbit lines (A, V and H) selected for number weaned at the Polytechnic University of Valencia. Pedigrees and phenotypes are complete and span 43, 39 and 14 generations, respectively. Estimates of broad-sense heritability are 0.07, 0.07 and 0.05 at the base versus 0.07, 0.07 and 0.09 in the final generations. Narrow-sense heritability estimates are 0.06, 0.06 and 0.02 at the base versus 0.04, 0.04 and 0.01 in the final generations. There is also a reduction in the genotypic variance due to the negative additive-dominance correlation. Thus, the contribution of dominance variation is fairly large and increases with inbreeding and (over)compensates for the loss in additive variation. In addition, estimates of the additive-dominance correlation are -0.37, -0.31 and 0.00, in agreement with the few published estimates and theoretical considerations. © 2017 Blackwell Verlag GmbH.
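To make the variance bookkeeping concrete: narrow- and broad-sense heritabilities are ratios of variance components, and a negative additive-dominance correlation shrinks the genotypic variance through the covariance term. A toy calculation with invented numbers, using the simplified decomposition V_G = V_A + V_D + 2·Cov(A, D) and ignoring the inbreeding-specific terms the full model tracks:

```python
def heritabilities(va, vd, r_ad, ve):
    # Genotypic variance with an additive-dominance correlation r_ad:
    # V_G = V_A + V_D + 2 * Cov(A, D), Cov(A, D) = r_ad * sqrt(V_A * V_D).
    cov_ad = r_ad * (va * vd) ** 0.5
    vg = va + vd + 2.0 * cov_ad
    vp = vg + ve                  # phenotypic variance
    return va / vp, vg / vp       # narrow-sense h2, broad-sense H2

# Invented components: modest additive and dominance variances, a negative
# additive-dominance correlation of the size reported in the abstract.
h2, H2 = heritabilities(va=0.4, vd=0.5, r_ad=-0.3, ve=9.0)
print(round(h2, 3), round(H2, 3))
```

Setting r_ad to 0 in the same call yields a larger broad-sense heritability, which is the sense in which the negative correlation reduces genotypic variance.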

  17. Stratified medicine and reimbursement issues

    NARCIS (Netherlands)

    Fugel, Hans-Joerg; Nuijten, Mark; Postma, Maarten

    2012-01-01

    Stratified Medicine (SM) has the potential to target patient populations who will most benefit from a therapy while reducing unnecessary health interventions associated with side effects. The link between clinical biomarkers/diagnostics and therapies provides new opportunities for value creation to

  18. Assessing parameter variability in a photosynthesis model within and between plant functional types using global Fluxnet eddy covariance data

    NARCIS (Netherlands)

    Groenendijk, M.; Dolman, A.J.; Molen, van der M.K.; Leuning, R.; Arneth, A.; Delpierre, N.; Gash, J.H.C.; Lindroth, A.; Richardson, A.D.; Verbeeck, H.; Wohlfahrt, G.

    2011-01-01

    The vegetation component in climate models has advanced since the late 1960s from a uniform prescription of surface parameters to plant functional types (PFTs). PFTs are used in global land-surface models to provide parameter values for every model grid cell. With a simple photosynthesis model we

  19. Evaluation of covariance for 238U cross sections

    International Nuclear Information System (INIS)

    Kawano, Toshihiko; Nakamura, Masahiro; Matsuda, Nobuyuki; Kanda, Yukinori

    1995-01-01

    Covariances of ²³⁸U are generated using analytic functions for representation of the cross sections. The covariances of the (n,2n) and (n,3n) reactions are derived with a spline function, while the covariances of the total and the inelastic scattering cross sections are estimated with a linearized nuclear model calculation. (author)

  20. Are your covariates under control? How normalization can re-introduce covariate effects.

    Science.gov (United States)

    Pain, Oliver; Dudbridge, Frank; Ronald, Angelica

    2018-04-30

    Many statistical tests rely on the assumption that the residuals of a model are normally distributed. Rank-based inverse normal transformation (INT) of the dependent variable is one of the most popular approaches to satisfy the normality assumption. When covariates are included in the analysis, a common approach is to first adjust for the covariates and then normalize the residuals. This study investigated the effect of regressing covariates against the dependent variable and then applying rank-based INT to the residuals. The correlation between the dependent variable and covariates at each stage of processing was assessed. An alternative approach was tested in which rank-based INT was applied to the dependent variable before regressing covariates. Analyses based on both simulated and real data examples demonstrated that applying rank-based INT to the dependent variable residuals after regressing out covariates re-introduces a linear correlation between the dependent variable and covariates, increasing type-I errors and reducing power. On the other hand, when rank-based INT was applied prior to controlling for covariate effects, residuals were normally distributed and linearly uncorrelated with covariates. This latter approach is therefore recommended in situations where normality of the dependent variable is required.
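The ordering effect is easy to reproduce in a small simulation: with a skewed dependent variable, applying rank-based INT to covariate-adjusted residuals re-correlates them with the covariate, whereas INT-then-adjust leaves residuals exactly orthogonal. The skewed data-generating model below is our own choice for illustration, not the paper's simulation design:

```python
import numpy as np
from statistics import NormalDist

def rank_int(v):
    # Rank-based inverse normal transform (Blom offset).
    n = len(v)
    ranks = np.argsort(np.argsort(v)) + 1
    inv = NormalDist().inv_cdf
    return np.array([inv((r - 0.375) / (n + 0.25)) for r in ranks])

def ols_resid(y, x):
    # Residuals from a least-squares fit of y on an intercept and x.
    X = np.column_stack([np.ones_like(x), x])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return y - X @ beta

rng = np.random.default_rng(1)
n = 5000
x = rng.normal(size=n)                 # covariate
y = np.exp(x + rng.normal(size=n))     # skewed dependent variable

r1 = rank_int(ols_resid(y, x))         # adjust first, then INT (problematic)
r2 = ols_resid(rank_int(y), x)         # INT first, then adjust (recommended)

print(np.corrcoef(r1, x)[0, 1], np.corrcoef(r2, x)[0, 1])
```

OLS residuals are linearly uncorrelated with the covariate by construction, but they remain nonlinearly dependent on it when the raw scale is skewed; the nonlinear INT then converts that residual dependence back into a linear correlation, which is the mechanism behind the inflated type-I errors.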

  1. Covariate analysis of bivariate survival data

    Energy Technology Data Exchange (ETDEWEB)

    Bennett, L.E.

    1992-01-01

    The methods developed are used to analyze the effects of covariates on bivariate survival data when censoring and ties are present. The proposed method provides models for bivariate survival data that include differential covariate effects and censored observations. The proposed models are based on an extension of the univariate Buckley-James estimators which replace censored data points by their expected values, conditional on the censoring time and the covariates. For the bivariate situation, it is necessary to determine the expectation of the failure times for one component conditional on the failure or censoring time of the other component. Two different methods have been developed to estimate these expectations. In the semiparametric approach these expectations are determined from a modification of Burke's estimate of the bivariate empirical survival function. In the parametric approach censored data points are also replaced by their conditional expected values where the expected values are determined from a specified parametric distribution. The model estimation will be based on the revised data set, comprised of uncensored components and expected values for the censored components. The variance-covariance matrix for the estimated covariate parameters has also been derived for both the semiparametric and parametric methods. Data from the Demographic and Health Survey was analyzed by these methods. The two outcome variables are post-partum amenorrhea and breastfeeding; education and parity were used as the covariates. Both the covariate parameter estimates and the variance-covariance estimates for the semiparametric and parametric models will be compared. In addition, a multivariate test statistic was used in the semiparametric model to examine contrasts. The significance of the statistic was determined from a bootstrap distribution of the test statistic.
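The core step of the parametric approach, replacing each censored observation by its conditional expectation under an assumed distribution, is easy to illustrate in the univariate case. For an exponential lifetime with known rate λ, memorylessness gives E[Y | Y > c] = c + 1/λ; the known-rate assumption and all numbers below are purely for illustration (in practice the parameters are estimated from the data):

```python
import numpy as np

rng = np.random.default_rng(7)
n, lam = 20000, 0.5                  # true exponential rate; mean lifetime 1/lam = 2
y = rng.exponential(1.0 / lam, n)    # latent failure times
c = 2.0                              # fixed censoring time
observed = np.minimum(y, c)          # what is actually recorded
censored = y > c

# Buckley-James-style revision: censored points -> conditional expectation,
# E[Y | Y > c] = c + 1/lam for the exponential (memorylessness).
revised = observed.copy()
revised[censored] = c + 1.0 / lam

print(observed.mean(), revised.mean())
```

The mean of the naively censored data is biased low, while the mean of the revised data recovers the true mean lifetime; the thesis extends this idea to two failure times, conditioning each component on the failure or censoring time of the other.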

  2. Strong decays of sc̄ mesons in the covariant oscillator quark model with the Ũ(4)_DS × O(3,1)_L classification scheme

    International Nuclear Information System (INIS)

    Maeda, Tomohito; Yamada, Kenji; Oda, Masuho; Ishida, Shin

    2010-01-01

    We investigate the strong decays with one-pseudoscalar emission of charmed strange mesons in the covariant oscillator quark model. The wave functions of composite sc̄ mesons are constructed as irreducible representations of Ũ(4)_DS × O(3,1)_L. Through the observed masses and the results of the decay study, we discuss a novel assignment of the observed charmed strange mesons from the viewpoint of the Ũ(4)_DS × O(3,1)_L classification scheme. It is shown that D*_s0(2317) and D_s1(2460) are consistently explained as ground-state chiralons, which appear in the Ũ(4)_DS × O(3,1)_L scheme. Furthermore, it is also found that the recently observed D*_s1(2710) could be described as a first excited-state chiralon. (author)

  3. Spatiotemporal noise covariance estimation from limited empirical magnetoencephalographic data

    International Nuclear Information System (INIS)

    Jun, Sung C; Plis, Sergey M; Ranken, Doug M; Schmidt, David M

    2006-01-01

    The performance of parametric magnetoencephalography (MEG) and electroencephalography (EEG) source localization approaches can be degraded by the use of poor background noise covariance estimates. In general, estimation of the noise covariance for spatiotemporal analysis is difficult mainly due to the limited noise information available. Furthermore, its estimation requires a large amount of storage and a one-time but very large (and sometimes intractable) calculation or its inverse. To overcome these difficulties, noise covariance models consisting of one pair or a sum of multi-pairs of Kronecker products of spatial covariance and temporal covariance have been proposed. However, these approaches cannot be applied when the noise information is very limited, i.e., the amount of noise information is less than the degrees of freedom of the noise covariance models. A common example of this is when only averaged noise data are available for a limited prestimulus region (typically at most a few hundred milliseconds duration). For such cases, a diagonal spatiotemporal noise covariance model consisting of sensor variances with no spatial or temporal correlation has been the common choice for spatiotemporal analysis. In this work, we propose a different noise covariance model which consists of diagonal spatial noise covariance and Toeplitz temporal noise covariance. It can easily be estimated from limited noise information, and no time-consuming optimization and data-processing are required. Thus, it can be used as an alternative choice when one-pair or multi-pair noise covariance models cannot be estimated due to lack of noise information. To verify its capability we used Bayesian inference dipole analysis and a number of simulated and empirical datasets. We compared this covariance model with other existing covariance models such as conventional diagonal covariance, one-pair and multi-pair noise covariance models, when noise information is sufficient to estimate them. 
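The proposed model is compact to construct: per-sensor variances give a diagonal spatial factor, a stationary temporal autocorrelation gives a Toeplitz factor, and the two combine into the full spatiotemporal covariance via a Kronecker product. A sketch of this construction, where the AR(1)-style temporal correlation, the sizes, and the variances are arbitrary choices for illustration:

```python
import numpy as np

n_sensors, n_times, rho = 4, 6, 0.7

# Diagonal spatial factor: independent per-sensor noise variances.
sensor_var = np.array([1.0, 2.5, 0.8, 1.7])
S = np.diag(sensor_var)

# Toeplitz temporal factor: correlation depends only on the lag |i - j|.
lags = np.abs(np.subtract.outer(np.arange(n_times), np.arange(n_times)))
T = rho ** lags

# Full spatiotemporal noise covariance, (n_sensors*n_times) square.
C = np.kron(S, T)
print(C.shape)
```

The practical payoff of the Kronecker structure is that the big matrix never needs to be stored or inverted directly, since kron(S, T)⁻¹ = kron(S⁻¹, T⁻¹), and a diagonal S and Toeplitz T can be estimated from very limited prestimulus noise data.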

  4. Cross-covariance functions for multivariate geostatistics

    KAUST Repository

    Genton, Marc G.

    2015-05-01

    Continuously indexed datasets with multiple variables have become ubiquitous in the geophysical, ecological, environmental and climate sciences, and pose substantial analysis challenges to scientists and statisticians. For many years, scientists developed models that aimed at capturing the spatial behavior for an individual process; only within the last few decades has it become commonplace to model multiple processes jointly. The key difficulty is in specifying the cross-covariance function, that is, the function responsible for the relationship between distinct variables. Indeed, these cross-covariance functions must be chosen to be consistent with marginal covariance functions in such a way that the second-order structure always yields a nonnegative definite covariance matrix. We review the main approaches to building cross-covariance models, including the linear model of coregionalization, convolution methods, the multivariate Matérn and nonstationary and space-time extensions of these among others. We additionally cover specialized constructions, including those designed for asymmetry, compact support and spherical domains, with a review of physics-constrained models. We illustrate select models on a bivariate regional climate model output example for temperature and pressure, along with a bivariate minimum and maximum temperature observational dataset; we compare models by likelihood value as well as via cross-validation co-kriging studies. The article closes with a discussion of unsolved problems. © Institute of Mathematical Statistics, 2015.
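The nonnegative-definiteness requirement described here is concrete: the marginal and cross-covariance functions must jointly produce a valid covariance matrix at any set of locations. A minimal sketch using an intrinsic-coregionalization construction with a shared exponential correlation, one of the simplest members of the model classes reviewed (all parameter values are invented):

```python
import numpy as np

def bivariate_cov(sites, sigma1, sigma2, rho, scale):
    # Shared spatial correlation R(h) = exp(-h/scale) for both variables,
    # with coregionalization matrix B = [[s1^2, rho*s1*s2], [rho*s1*s2, s2^2]].
    # The joint covariance is the Kronecker product B (x) R, which is
    # nonnegative definite whenever |rho| <= 1 (B and R are both PSD).
    h = np.abs(sites[:, None] - sites[None, :])
    R = np.exp(-h / scale)
    B = np.array([[sigma1**2, rho * sigma1 * sigma2],
                  [rho * sigma1 * sigma2, sigma2**2]])
    return np.kron(B, R)

sites = np.linspace(0.0, 10.0, 25)   # 1-D locations for simplicity
C = bivariate_cov(sites, sigma1=1.0, sigma2=2.0, rho=0.9, scale=3.0)
print(C.shape, np.linalg.eigvalsh(C).min())
```

Here validity is automatic because the cross-covariance inherits the marginal correlation structure; the richer constructions reviewed in the article (multivariate Matérn, convolution methods, nonstationary extensions) exist precisely because choosing different marginal structures per variable makes this positive-definiteness check nontrivial.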

  5. Cross-covariance functions for multivariate geostatistics

    KAUST Repository

    Genton, Marc G.; Kleiber, William

    2015-01-01

    Continuously indexed datasets with multiple variables have become ubiquitous in the geophysical, ecological, environmental and climate sciences, and pose substantial analysis challenges to scientists and statisticians. For many years, scientists developed models that aimed at capturing the spatial behavior for an individual process; only within the last few decades has it become commonplace to model multiple processes jointly. The key difficulty is in specifying the cross-covariance function, that is, the function responsible for the relationship between distinct variables. Indeed, these cross-covariance functions must be chosen to be consistent with marginal covariance functions in such a way that the second-order structure always yields a nonnegative definite covariance matrix. We review the main approaches to building cross-covariance models, including the linear model of coregionalization, convolution methods, the multivariate Matérn and nonstationary and space-time extensions of these among others. We additionally cover specialized constructions, including those designed for asymmetry, compact support and spherical domains, with a review of physics-constrained models. We illustrate select models on a bivariate regional climate model output example for temperature and pressure, along with a bivariate minimum and maximum temperature observational dataset; we compare models by likelihood value as well as via cross-validation co-kriging studies. The article closes with a discussion of unsolved problems. © Institute of Mathematical Statistics, 2015.

  6. Survival analysis with covariates in combination with multinomial analysis to parametrize time to event for multi-state models

    NARCIS (Netherlands)

    Feenstra, T.L.; Postmus, D.; Quik, E.H.; Langendijk, H.; Krabbe, P.F.M.

    Objectives: Recent ISPOR good practice guidelines, as well as the literature, encourage the use of a single distribution rather than the latent failure approach to model time to event for patient-level simulation models with multiple competing outcomes. The aim was to apply the preferred method of a single

  7. Survival analysis with covariates in combination with multinomial analysis to parametrize time to event for multi-state models

    NARCIS (Netherlands)

    Feenstra, T.L.; Postmus, D.; Quik, E.H.; Langendijk, H.; Krabbe, P.F.M.

    2013-01-01

    Objectives: Recent ISPOR good practice guidelines, as well as the literature, encourage the use of a single distribution rather than the latent failure approach to model time to event for patient-level simulation models with multiple competing outcomes. The aim was to apply the preferred method of a single

  8. The Stratified Legitimacy of Abortions.

    Science.gov (United States)

    Kimport, Katrina; Weitz, Tracy A; Freedman, Lori

    2016-12-01

    Roe v. Wade was heralded as an end to unequal access to abortion care in the United States. However, today, despite being common and safe, abortion is performed only selectively in hospitals and private practices. Drawing on 61 interviews with obstetrician-gynecologists in these settings, we examine how they determine which abortions to perform. We find that they distinguish between more and less legitimate abortions, producing a narrative of stratified legitimacy that privileges abortions for intended pregnancies, when the fetus is unhealthy, and when women perform normative gendered sexuality, including distress about the abortion, guilt about failure to contracept, and desire for motherhood. This stratified legitimacy can perpetuate socially-inflected inequality of access and normative gendered sexuality. Additionally, we argue that the practice by physicians of distinguishing among abortions can legitimate legislative practices that regulate and restrict some kinds of abortion, further constraining abortion access. © American Sociological Association 2016.

  9. Covariant field equations in supergravity

    Energy Technology Data Exchange (ETDEWEB)

    Vanhecke, Bram [KU Leuven, Institute for Theoretical Physics, Leuven (Belgium); Ghent University, Faculty of Physics, Gent (Belgium); Proeyen, Antoine van [KU Leuven, Institute for Theoretical Physics, Leuven (Belgium)

    2017-12-15

    Covariance is a useful property for handling supergravity theories. In this paper, we prove a covariance property of supergravity field equations: under reasonable conditions, field equations of supergravity are covariant modulo other field equations. We prove that for any supergravity there exist such covariant equations of motion, other than the regular equations of motion, that are equivalent to the latter. The relations that we find between field equations and their covariant form can be used to obtain multiplets of field equations. In practice, the covariant field equations are easily found by simply covariantizing the ordinary field equations. (copyright 2017 WILEY-VCH Verlag GmbH and Co. KGaA, Weinheim)

  10. Covariant field equations in supergravity

    International Nuclear Information System (INIS)

    Vanhecke, Bram; Proeyen, Antoine van

    2017-01-01

    Covariance is a useful property for handling supergravity theories. In this paper, we prove a covariance property of supergravity field equations: under reasonable conditions, field equations of supergravity are covariant modulo other field equations. We prove that for any supergravity there exist such covariant equations of motion, other than the regular equations of motion, that are equivalent to the latter. The relations that we find between field equations and their covariant form can be used to obtain multiplets of field equations. In practice, the covariant field equations are easily found by simply covariantizing the ordinary field equations. (copyright 2017 WILEY-VCH Verlag GmbH and Co. KGaA, Weinheim)

  11. Generally covariant gauge theories

    International Nuclear Information System (INIS)

    Capovilla, R.

    1992-01-01

    A new class of generally covariant gauge theories in four space-time dimensions is investigated. The field variables are taken to be a Lie algebra valued connection 1-form and a scalar density. Modulo an important degeneracy, complex [euclidean] vacuum general relativity corresponds to a special case in this class. A canonical analysis of the generally covariant gauge theories with the same gauge group as general relativity shows that they describe two degrees of freedom per space point, qualifying therefore as a new set of neighbors of general relativity. The modification of the algebra of the constraints with respect to the general relativity case is computed; this is used in addressing the question of how general relativity stands out from its neighbors. (orig.)

  12. The Bayesian Covariance Lasso.

    Science.gov (United States)

    Khondker, Zakaria S; Zhu, Hongtu; Chu, Haitao; Lin, Weili; Ibrahim, Joseph G

    2013-04-01

    Estimation of sparse covariance matrices and their inverse subject to positive definiteness constraints has drawn a lot of attention in recent years. The abundance of high-dimensional data, where the sample size (n) is less than the dimension (d), requires shrinkage estimation methods since the maximum likelihood estimator is not positive definite in this case. Furthermore, when n is larger than d but not sufficiently larger, shrinkage estimation is more stable than maximum likelihood as it reduces the condition number of the precision matrix. Frequentist methods have utilized penalized likelihood methods, whereas Bayesian approaches rely on matrix decompositions or Wishart priors for shrinkage. In this paper we propose a new method, called the Bayesian Covariance Lasso (BCLASSO), for the shrinkage estimation of a precision (covariance) matrix. We consider a class of priors for the precision matrix that leads to the popular frequentist penalties as special cases, develop a Bayes estimator for the precision matrix, and propose an efficient sampling scheme that does not precalculate boundaries for positive definiteness. The proposed method is permutation invariant and performs shrinkage and estimation simultaneously for non-full rank data. Simulations show that the proposed BCLASSO performs similarly to frequentist methods for non-full rank data.

  13. White dwarf stars with chemically stratified atmospheres

    Science.gov (United States)

    Muchmore, D.

    1982-01-01

    Recent observations and theory suggest that some white dwarfs may have chemically stratified atmospheres - thin layers of hydrogen lying above helium-rich envelopes. Models of such atmospheres show that a discontinuous temperature inversion can occur at the boundary between the layers. Model spectra for layered atmospheres at 30,000 K and 50,000 K tend to have smaller decrements at 912 Å, 504 Å, and 228 Å than uniform atmospheres would have. On the basis of their continuous extreme ultraviolet spectra, it is possible to distinguish observationally between uniform and layered atmospheres for hot white dwarfs.

  14. Item Response Theory with Covariates (IRT-C): Assessing Item Recovery and Differential Item Functioning for the Three-Parameter Logistic Model

    Science.gov (United States)

    Tay, Louis; Huang, Qiming; Vermunt, Jeroen K.

    2016-01-01

    In large-scale testing, the use of multigroup approaches is limited for assessing differential item functioning (DIF) across multiple variables as DIF is examined for each variable separately. In contrast, the item response theory with covariate (IRT-C) procedure can be used to examine DIF across multiple variables (covariates) simultaneously. To…

  15. On an extension of covariance

    International Nuclear Information System (INIS)

    Sebestyen, A.

    1975-07-01

    The principle of covariance is extended to coordinates corresponding to internal degrees of freedom. The conditions for a system to be isolated are given. It is shown how internal forces arise in such systems. Equations for internal fields are derived. By an interpretation of the generalized coordinates based on group theory it is shown how particles in the ordinary sense enter into the model; as a simple application, the gravitational interaction of two pointlike particles is considered and the shift of the perihelion is deduced. (Sz.Z.)

  16. Estimating bat and bird mortality occurring at wind energy turbines from covariates and carcass searches using mixture models.

    Science.gov (United States)

    Korner-Nievergelt, Fränzi; Brinkmann, Robert; Niermann, Ivo; Behr, Oliver

    2013-01-01

    Environmental impacts of wind energy facilities increasingly cause concern, a central issue being bats and birds killed by rotor blades. Two approaches have been employed to assess collision rates: carcass searches and surveys of animals prone to collisions. Carcass searches can provide an estimate for the actual number of animals being killed but they offer little information on the relation between collision rates and, for example, weather parameters due to the time of death not being precisely known. In contrast, a density index of animals exposed to collision is sufficient to analyse the parameters influencing the collision rate. However, quantification of the collision rate from animal density indices (e.g. acoustic bat activity or bird migration traffic rates) remains difficult. We combine carcass search data with animal density indices in a mixture model to investigate collision rates. In a simulation study we show that the collision rates estimated by our model were at least as precise as conventional estimates based solely on carcass search data. Furthermore, if certain conditions are met, the model can be used to predict the collision rate from density indices alone, without data from carcass searches. This can reduce the time and effort required to estimate collision rates. We applied the model to bat carcass search data obtained at 30 wind turbines in 15 wind facilities in Germany. We used acoustic bat activity and wind speed as predictors for the collision rate. The model estimates correlated well with conventional estimators. Our model can be used to predict the average collision rate. It enables an analysis of the effect of parameters such as rotor diameter or turbine type on the collision rate. The model can also be used in turbine-specific curtailment algorithms that predict the collision rate and reduce this rate with a minimal loss of energy production.
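The core idea of linking recovered carcasses to an animal density index can be sketched with a toy thinned-Poisson model (all numbers hypothetical; the published model is a richer mixture model that also handles searcher efficiency and carcass persistence formally):

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical setup: nightly acoustic activity index and a known
# carcass detection probability (in practice itself estimated).
activity = rng.uniform(0.5, 5.0, size=200)   # density index per night
p_detect = 0.4                               # searcher efficiency x persistence
true_rate = 0.8                              # collisions per unit activity

collisions = rng.poisson(true_rate * activity)   # latent, unobserved
found = rng.binomial(collisions, p_detect)       # carcasses actually recovered

# Thinning a Poisson keeps it Poisson: found ~ Poisson(p_detect * rate * activity),
# so the MLE of the rate has a closed form (identity link, single covariate).
rate_hat = found.sum() / (p_detect * activity.sum())
print(rate_hat)   # recovers true_rate up to sampling noise
```

Once `rate_hat` is calibrated against carcass searches, the same relation predicts expected collisions from the activity index alone, which mirrors the paper's point that density indices can substitute for searches under certain conditions.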

  17. Estimating bat and bird mortality occurring at wind energy turbines from covariates and carcass searches using mixture models.

    Directory of Open Access Journals (Sweden)

    Fränzi Korner-Nievergelt

    Full Text Available Environmental impacts of wind energy facilities increasingly cause concern, a central issue being bats and birds killed by rotor blades. Two approaches have been employed to assess collision rates: carcass searches and surveys of animals prone to collisions. Carcass searches can provide an estimate for the actual number of animals being killed but they offer little information on the relation between collision rates and, for example, weather parameters due to the time of death not being precisely known. In contrast, a density index of animals exposed to collision is sufficient to analyse the parameters influencing the collision rate. However, quantification of the collision rate from animal density indices (e.g. acoustic bat activity or bird migration traffic rates) remains difficult. We combine carcass search data with animal density indices in a mixture model to investigate collision rates. In a simulation study we show that the collision rates estimated by our model were at least as precise as conventional estimates based solely on carcass search data. Furthermore, if certain conditions are met, the model can be used to predict the collision rate from density indices alone, without data from carcass searches. This can reduce the time and effort required to estimate collision rates. We applied the model to bat carcass search data obtained at 30 wind turbines in 15 wind facilities in Germany. We used acoustic bat activity and wind speed as predictors for the collision rate. The model estimates correlated well with conventional estimators. Our model can be used to predict the average collision rate. It enables an analysis of the effect of parameters such as rotor diameter or turbine type on the collision rate. The model can also be used in turbine-specific curtailment algorithms that predict the collision rate and reduce this rate with a minimal loss of energy production.

  18. Exploring a physico-chemical multi-array explanatory model with a new multiple covariance-based technique: structural equation exploratory regression.

    Science.gov (United States)

    Bry, X; Verron, T; Cazes, P

    2009-05-29

    In this work, we consider chemical and physical variable groups describing a common set of observations (cigarettes). One of the groups, minor smoke compounds (minSC), is assumed to depend on the others (minSC predictors). PLS regression (PLSR) of minSC on the set of all predictors appears not to lead to a satisfactory analytic model, because it does not take into account the expert's knowledge. PLS path modeling (PLSPM) does not use the multidimensional structure of predictor groups. Indeed, the expert needs to separate the influence of several pre-designed predictor groups on minSC, in order to see what dimensions this influence involves. To meet these needs, we consider a multi-group component-regression model, and propose a method to extract from each group several strong uncorrelated components that fit the model. Estimation is based on a global multiple covariance criterion, used in combination with an appropriate nesting approach. Compared to PLSR and PLSPM, the structural equation exploratory regression (SEER) we propose fully uses predictor group complementarity, both conceptually and statistically, to predict the dependent group.

  19. Impact of covariate models on the assessment of the air pollution-mortality association in a single- and multipollutant context.

    Science.gov (United States)

    Sacks, Jason D; Ito, Kazuhiko; Wilson, William E; Neas, Lucas M

    2012-10-01

    With the advent of multicity studies, uniform statistical approaches have been developed to examine air pollution-mortality associations across cities. To assess the sensitivity of the air pollution-mortality association to different model specifications in a single and multipollutant context, the authors applied various regression models developed in previous multicity time-series studies of air pollution and mortality to data from Philadelphia, Pennsylvania (May 1992-September 1995). Single-pollutant analyses used daily cardiovascular mortality, fine particulate matter (particles with an aerodynamic diameter ≤2.5 µm; PM(2.5)), speciated PM(2.5), and gaseous pollutant data, while multipollutant analyses used source factors identified through principal component analysis. In single-pollutant analyses, risk estimates were relatively consistent across models for most PM(2.5) components and gaseous pollutants. However, risk estimates were inconsistent for ozone in all-year and warm-season analyses. Principal component analysis yielded factors with species associated with traffic, crustal material, residual oil, and coal. Risk estimates for these factors exhibited less sensitivity to alternative regression models compared with single-pollutant models. Factors associated with traffic and crustal material showed consistently positive associations in the warm season, while the coal combustion factor showed consistently positive associations in the cold season. Overall, mortality risk estimates examined using a source-oriented approach yielded more stable and precise risk estimates, compared with single-pollutant analyses.

  20. Lorentz Covariance of Langevin Equation

    International Nuclear Information System (INIS)

    Koide, T.; Denicol, G.S.; Kodama, T.

    2008-01-01

    Relativistic covariance of a Langevin type equation is discussed. The requirement of Lorentz invariance generates an entanglement between the force and noise terms so that the noise itself should not be a covariant quantity. (author)

  1. Benefits of statistical molecular design, covariance analysis, and reference models in QSAR: a case study on acetylcholinesterase

    Science.gov (United States)

    Andersson, C. David; Hillgren, J. Mikael; Lindgren, Cecilia; Qian, Weixing; Akfur, Christine; Berg, Lotta; Ekström, Fredrik; Linusson, Anna

    2015-03-01

    Scientific disciplines such as medicinal and environmental chemistry, pharmacology, and toxicology deal with questions related to the effects small organic compounds exert on biological targets and the compounds' physicochemical properties responsible for these effects. A common strategy in this endeavor is to establish structure-activity relationships (SARs). The aim of this work was to illustrate the benefits of performing a statistical molecular design (SMD) and proper statistical analysis of the molecules' properties before SAR and quantitative structure-activity relationship (QSAR) analysis. Our SMD followed by synthesis yielded a set of inhibitors of the enzyme acetylcholinesterase (AChE) that had very few inherent dependencies between the substructures in the molecules. If such dependencies exist, they cause severe errors in SAR interpretation and predictions by QSAR models, and leave a set of molecules less suitable for future decision-making. In our study, SAR and QSAR models could show which molecular substructures and physicochemical features were advantageous for AChE inhibition. Finally, the QSAR model was used for the prediction of AChE inhibition by an external prediction set of molecules. The accuracy of these predictions was assessed by statistical significance tests and by comparisons to simple but relevant reference models.

  2. A Mixture Rasch Model with a Covariate: A Simulation Study via Bayesian Markov Chain Monte Carlo Estimation

    Science.gov (United States)

    Dai, Yunyun

    2013-01-01

    Mixtures of item response theory (IRT) models have been proposed as a technique to explore response patterns in test data related to cognitive strategies, instructional sensitivity, and differential item functioning (DIF). Estimation proves challenging due to difficulties in identification and questions of effect size needed to recover underlying…

  3. Differential Item Functioning Analysis Using a Mixture 3-Parameter Logistic Model with a Covariate on the TIMSS 2007 Mathematics Test

    Science.gov (United States)

    Choi, Youn-Jeng; Alexeev, Natalia; Cohen, Allan S.

    2015-01-01

    The purpose of this study was to explore what may be contributing to differences in performance in mathematics on the Trends in International Mathematics and Science Study 2007. This was done by using a mixture item response theory modeling approach to first detect latent classes in the data and then to examine differences in performance on items…

  4. Structural equation modeling in the genetically informative study of the covariation of intelligence, working memory and planning

    Directory of Open Access Journals (Sweden)

    Voronin I.

    2016-01-01

    Full Text Available Structural equation modelling (SEM) has become an important tool in behaviour genetic research. The application of SEM for multivariate twin analysis allows revealing the structure of genetic and environmental factors underlying individual differences in human traits. We outline the framework of the twin method and SEM, describe the SEM implementation of a multivariate twin model and provide an example of a multivariate twin study. The study included 901 adolescent twin pairs from Russia. We measured general cognitive ability and characteristics of working memory and planning. The individual differences in working memory and planning were explained mostly by person-specific environment. The variability of intelligence is related to genes, family environment, and person-specific environment. Moderate and weak associations between intelligence, working memory, and planning were entirely explained by shared environmental effects.

  5. Distance covariance for stochastic processes

    DEFF Research Database (Denmark)

    Matsui, Muneya; Mikosch, Thomas Valentin; Samorodnitsky, Gennady

    2017-01-01

    The distance covariance of two random vectors is a measure of their dependence. The empirical distance covariance and correlation can be used as statistical tools for testing whether two random vectors are independent. We propose an analog of the distance covariance for two stochastic processes...
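A minimal numpy implementation of the empirical (squared) distance covariance for scalar samples, illustrating that it detects nonlinear dependence which ordinary covariance misses (the double-centering construction below is the standard V-statistic form; variable names are ours):

```python
import numpy as np

def dcov2(x, y):
    """Squared empirical distance covariance of two 1-D samples."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    a = np.abs(x[:, None] - x[None, :])      # pairwise distance matrices
    b = np.abs(y[:, None] - y[None, :])
    # Double-center each distance matrix (subtract row/column means, add grand mean).
    A = a - a.mean(0) - a.mean(1)[:, None] + a.mean()
    B = b - b.mean(0) - b.mean(1)[:, None] + b.mean()
    return (A * B).mean()                    # nonnegative V-statistic

rng = np.random.default_rng(0)
x = rng.standard_normal(500)
d_dep = dcov2(x, x**2)                       # dependent but uncorrelated pair
d_ind = dcov2(x, rng.standard_normal(500))   # independent pair
print(d_dep, d_ind)                          # d_dep clearly exceeds d_ind
```

For symmetric x, the ordinary covariance of x and x² is near zero, yet the distance covariance is clearly positive; for truly independent samples it shrinks toward zero as n grows, which is what makes it usable as a test statistic.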

  6. ENDF-6 File 30: Data covariances obtained from parameter covariances and sensitivities

    International Nuclear Information System (INIS)

    Muir, D.W.

    1989-01-01

    File 30 is provided as a means of describing the covariances of tabulated cross sections, multiplicities, and energy-angle distributions that result from propagating the covariances of a set of underlying parameters (for example, the input parameters of a nuclear-model code), using an evaluator-supplied set of parameter covariances and sensitivities. Whenever nuclear data are evaluated primarily through the application of nuclear models, the covariances of the resulting data can be described very adequately, and compactly, by specifying the covariance matrix for the underlying nuclear parameters, along with a set of sensitivity coefficients giving the rate of change of each nuclear datum of interest with respect to each of the model parameters. Although motivated primarily by these applications of nuclear theory, use of File 30 is not restricted to any one particular evaluation methodology. It can be used to describe data covariances of any origin, so long as they can be formally separated into a set of parameters with specified covariances and a set of data sensitivities
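The propagation described here, data covariances obtained from parameter covariances plus sensitivities, is the first-order "sandwich" rule D = S C_p S^T. A toy numpy sketch with hypothetical numbers (the matrices are illustrative, not from any evaluation):

```python
import numpy as np

# Hypothetical case: 3 tabulated cross sections depending on 2 model parameters.
S = np.array([[1.0, 0.2],      # sensitivity coefficients d(sigma_i)/d(p_j)
              [0.5, 0.8],
              [0.1, 1.5]])
C_par = np.array([[0.04, 0.01],   # parameter covariance matrix
                  [0.01, 0.09]])

# Propagated data covariance: D = S C_par S^T (first-order sandwich rule).
D = S @ C_par @ S.T
print(D)   # 3x3, symmetric, positive semidefinite
```

This also shows why the representation is compact: storing the small parameter covariance and the sensitivity table is far cheaper than tabulating the full data covariance matrix when the number of data points is large.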

  7. Generalized linear mixed model for binary outcomes when covariates are subject to measurement errors and detection limits.

    Science.gov (United States)

    Xie, Xianhong; Xue, Xiaonan; Strickler, Howard D

    2018-01-15

    Longitudinal measurement of biomarkers is important in determining risk factors for binary endpoints such as infection or disease. However, biomarkers are subject to measurement error, and some are also subject to left-censoring due to a lower limit of detection. Statistical methods to address these issues are few. We herein propose a generalized linear mixed model and estimate the model parameters using the Monte Carlo Newton-Raphson (MCNR) method. Inferences regarding the parameters are made by applying Louis's method and the delta method. Simulation studies were conducted to compare the proposed MCNR method with existing methods including the maximum likelihood (ML) method and the ad hoc approach of replacing the left-censored values with half of the detection limit (HDL). The results showed that the performance of the MCNR method is superior to ML and HDL with respect to the empirical standard error, as well as the coverage probability for the 95% confidence interval. The HDL method uses an incorrect imputation method, and the computation is constrained by the number of quadrature points; while the ML method also suffers from the constrain for the number of quadrature points, the MCNR method does not have this limitation and approximates the likelihood function better than the other methods. The improvement of the MCNR method is further illustrated with real-world data from a longitudinal study of local cervicovaginal HIV viral load and its effects on oncogenic HPV detection in HIV-positive women. Copyright © 2017 John Wiley & Sons, Ltd.

  8. Modeling spatially explicit fire impact on gross primary production in interior Alaska using satellite images coupled with eddy covariance

    Science.gov (United States)

    Huang, Shengli; Liu, Heping; Dahal, Devendra; Jin, Suming; Welp, Lisa R.; Liu, Jinxun; Liu, Shuguang

    2013-01-01

    In interior Alaska, wildfires change gross primary production (GPP) after the initial disturbance. The impact of fires on GPP is spatially heterogeneous, which is difficult to evaluate by limited point-based comparisons or is insufficient to assess by satellite vegetation index. The direct prefire and postfire comparison is widely used, but the recovery identification may become biased due to interannual climate variability. The objective of this study is to propose a method to quantify the spatially explicit GPP change caused by fires and succession. We collected three Landsat images acquired on 13 July 2004, 5 August 2004, and 6 September 2004 to examine the GPP recovery of burned area from 1987 to 2004. A prefire Landsat image acquired in 1986 was used to reconstruct satellite images assuming that the fires of 1987–2004 had not occurred. We used a light-use efficiency model to estimate the GPP. This model was driven by maximum light-use efficiency (Emax) and fraction of photosynthetically active radiation absorbed by vegetation (FPAR). We applied this model to two scenarios (i.e., an actual postfire scenario and an assuming-no-fire scenario), where the changes in Emax and FPAR were taken into account. The changes in Emax were represented by the change in land cover of evergreen needleleaf forest, deciduous broadleaf forest, and shrub/grass mixed, whose Emax was determined from three fire chronosequence flux towers as 1.1556, 1.3336, and 0.5098 gC/MJ PAR. The changes in FPAR were inferred from NDVI change between the actual postfire NDVI and the reconstructed NDVI. After GPP quantification for July, August, and September 2004, we calculated the difference between the two scenarios in absolute and percent GPP changes. Our results showed rapid recovery of GPP post-fire with a 24% recovery immediately after burning and 43% one year later. For the fire scars with an age range of 2–17 years, the recovery rate ranged from 54% to 95%. In addition to the averaging
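The two-scenario light-use efficiency calculation can be sketched as follows. Only the three Emax values come from the abstract; the pixels, FPAR values, and PAR are hypothetical:

```python
import numpy as np

# Light-use efficiency model: GPP = Emax * FPAR * PAR, evaluated per pixel
# under two scenarios (actual post-fire vs. reconstructed no-fire).
EMAX = {"evergreen": 1.1556, "deciduous": 1.3336, "shrub_grass": 0.5098}  # gC/MJ PAR

par = 250.0  # hypothetical monthly PAR, MJ/m^2

# Hypothetical 4-pixel burn scar: land cover and FPAR under each scenario.
cover_nofire   = ["evergreen", "evergreen", "evergreen", "deciduous"]
fpar_nofire    = np.array([0.80, 0.75, 0.82, 0.70])
cover_postfire = ["shrub_grass", "shrub_grass", "deciduous", "deciduous"]
fpar_postfire  = np.array([0.30, 0.45, 0.60, 0.68])

def gpp(cover, fpar):
    return np.array([EMAX[c] for c in cover]) * fpar * par

recovery_pct = 100 * gpp(cover_postfire, fpar_postfire).sum() \
                   / gpp(cover_nofire, fpar_nofire).sum()
print(round(recovery_pct, 1))   # percent GPP recovery relative to the no-fire scenario
```

The scenario difference isolates the fire effect from interannual climate variability, since both scenarios use the same year's PAR.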

  9. Remarks on Bousso's covariant entropy bound

    CERN Document Server

    Mayo, A E

    2002-01-01

    Bousso's covariant entropy bound is put to the test in the context of a non-singular cosmological solution of general relativity found by Bekenstein. Although the model complies with every assumption made in Bousso's original conjecture, the entropy bound is violated due to the occurrence of negative energy density associated with the interaction of some of the matter components in the model. We demonstrate how this property allows the test model to 'elude' a proof of Bousso's conjecture which was given recently by Flanagan, Marolf and Wald. This corroborates the view that the covariant entropy bound should be applied only to stable systems for which every matter component carries positive energy density.

  10. B → D** lν̄ semileptonic decays in the frame of covariant models of Bakamjian-Thomas-type form factors; Désintégrations semileptoniques B → D** lν̄ dans le cadre de modèles covariants de facteurs de forme à la Bakamjian-Thomas

    Energy Technology Data Exchange (ETDEWEB)

    Morenas, Vincent [Ecole Doctorale des Sciences Fondamentales, Universite Blaise Pascal, U.F.R. de Recherche Scientifique et Technique, F-63177 Aubiere (France)

    1997-12-19

    The study of semileptonic decays is of crucial importance for the physics of beauty. It was usually believed that the rates of these reactions were saturated by the channels leading to the production of ground state D and D* mesons only. Yet, experimental results have recently shown that the contribution of orbitally excited mesons is not that small. This thesis presents a study of the semileptonic decays of B mesons into the first orbitally excited charmed states D**: using the Bakamjian-Thomas formalism to construct the mesonic states, together with the infinite-mass limit of the heavy quark, we provide a covariant description of the hadronic transition amplitude; moreover, all the 'good' properties of the heavy quark symmetries are naturally fulfilled. We then fixed the dynamics of the quark bound states by introducing four spectroscopic models and made numerical predictions, which are discussed and compared to other theoretical and experimental data when available. Finally, we also applied this formalism to the study of annihilation processes: the transition amplitudes are again written in a covariant way and the properties of the heavy quark symmetries fulfilled. Numerical predictions of decay constants were made with the same four spectroscopic models. (author) 87 refs., 20 figs., 13 tabs.

  11. Behavioral, cellular and molecular maladaptations covary with exposure to pyridostigmine bromide in a rat model of gulf war illness pain.

    Science.gov (United States)

    Cooper, B Y; Flunker, L D; Johnson, R D; Nutter, T J

    2018-08-01

    Many veterans of Operation Desert Storm (ODS) struggle with the chronic pain of Gulf War Illness (GWI). Exposure to insecticides and pyridostigmine bromide (PB) have been implicated in the etiology of this multisymptom disease. We examined the influence of 3 (DEET (N,N-diethyl-meta-toluamide), permethrin, chlorpyrifos) or 4 GW agents (DEET, permethrin, chlorpyrifos, pyridostigmine bromide (PB)) on the post-exposure ambulatory and resting behaviors of rats. In three independent studies, rats that were exposed to all 4 agents consistently developed both immediate and delayed ambulatory deficits that persisted at least 16 weeks after exposures had ceased. Rats exposed to a 3 agent protocol (PB excluded) did not develop any ambulatory deficits. Cellular and molecular studies on nociceptors harvested from 16WP (weeks post-exposure) rats indicated that vascular nociceptor Nav1.9-mediated currents were chronically potentiated following the 4 agent protocol but not following the 3 agent protocol. Muscarinic linkages to muscle nociceptor TRPA1 were also potentiated in the 4 agent but not the 3 agent, PB excluded, protocol. Although Kv7 activity changes diverged from the behavioral data, a Kv7 opener, retigabine, transiently reversed ambulation deficits. We concluded that PB played a critical role in the development of pain-like signs in a GWI rat model and that shifts in Nav1.9 and TRPA1 activity were critical to the expression of these pain behaviors. Copyright © 2018 Elsevier Inc. All rights reserved.

  12. Stratified Medicine and Reimbursement Issues

    Directory of Open Access Journals (Sweden)

    Hans-Joerg Fugel

    2012-10-01

    Full Text Available Stratified Medicine (SM) has the potential to target patient populations who will most benefit from a therapy while reducing unnecessary health interventions associated with side effects. The link between clinical biomarkers/diagnostics and therapies provides new opportunities for value creation to strengthen the value proposition to pricing and reimbursement (P&R) authorities. However, the introduction of SM challenges current reimbursement schemes in many EU countries and the US, as different P&R policies have been adopted for drugs and diagnostics. Also, there is a lack of a consistent process for value assessment of more complex diagnostics in these markets. New, innovative approaches and more flexible P&R systems are needed to reflect the added value of diagnostic tests and to stimulate investments in new technologies. Yet, the framework for access of diagnostic-based therapies still requires further development, while setting the right incentives and appropriately aligning stakeholders' interests to realize long-term patient benefits. This article addresses the reimbursement challenges of SM approaches in several EU countries and the US, outlining some options to overcome existing reimbursement barriers for stratified medicine.

  13. Evaluation of cellular effects of fine particulate matter from combustion of solid fuels used for indoor heating on the Navajo Nation using a stratified oxidative stress response model

    Science.gov (United States)

    Li, Ning; Champion, Wyatt M.; Imam, Jemal; Sidhu, Damansher; Salazar, Joseph R.; Majestic, Brian J.; Montoya, Lupita D.

    2018-06-01

    Communities in the Navajo Nation face public health burdens caused in part by the combustion of wood and coal for indoor heating using stoves that are old or in disrepair. Wood and coal combustion emits particulate matter (PM) with aerodynamic diameter ≤2.5 µm (PM2.5), and little is known about the effects of combustion-derived PM2.5 on Navajo Nation residents. This study tested the hypothesis that PM2.5 generated from solid fuel combustion in stoves commonly used by Navajo residents would induce stratified oxidative stress responses ranging from activation of antioxidant defense to inflammation and cell death in mouse macrophages (RAW 264.7). PM2.5 emitted from burning Ponderosa Pine (PP) and Utah Juniper (UJ) wood and Black Mesa (BM) and Fruitland (FR) coal in a stove representative of those widely used by Navajo residents were collected, and their aqueous suspensions used for cellular exposure. PM from combustion of wood had significantly more elemental carbon (EC) (15%) and soluble Ni (0.0029%) than the samples from coal combustion (EC: 3%; Ni: 0.0019%) and was also a stronger activator of antioxidant enzyme heme oxygenase-1 (11-fold increase vs. control) than that from coal (5-fold increase). Only PM from PP-wood (12-fold) and BM-coal (3-fold) increased the release of inflammatory cytokine tumor necrosis factor alpha. Among all samples, PP-wood consistently had the strongest oxidative stress and inflammatory effects. PM components, i.e. low-volatility organic carbon, EC, Cu, Ni and K, were positively correlated with the cellular responses. Results showed that, at the concentrations tested, emissions from all fuels did not have significant cytotoxicity. These findings suggest that PM2.5 emitted from combustion of wood and coal commonly used by Navajo residents may negatively impact the health of this community.

  14. Soil mixing of stratified contaminated sands.

    Science.gov (United States)

    Al-Tabba, A; Ayotamuno, M J; Martin, R J

    2000-02-01

    Validation of soil mixing for the treatment of contaminated ground is needed in a wide range of site conditions to widen the application of the technology and to understand the mechanisms involved. Since very limited work has been carried out in heterogeneous ground conditions, this paper investigates the effectiveness of soil mixing in stratified sands using laboratory-scale augers. This enabled a low cost investigation of factors such as grout type and form, auger design, installation procedure, mixing mode, curing period, thickness of soil layers and natural moisture content on the unconfined compressive strength, leachability and leachate pH of the soil-grout mixes. The results showed that the auger design plays a very important part in the mixing process in heterogeneous sands. The variability of the properties measured in the stratified soils and the measurable variations caused by the various factors considered, highlighted the importance of duplicating appropriate in situ conditions, the usefulness of laboratory-scale modelling of in situ conditions and the importance of modelling soil and contaminant heterogeneities at the treatability study stage.

  15. Contributions to Large Covariance and Inverse Covariance Matrices Estimation

    OpenAIRE

    Kang, Xiaoning

    2016-01-01

    Estimation of covariance matrix and its inverse is of great importance in multivariate statistics with broad applications such as dimension reduction, portfolio optimization, linear discriminant analysis and gene expression analysis. However, accurate estimation of covariance or inverse covariance matrices is challenging due to the positive definiteness constraint and large number of parameters, especially in the high-dimensional cases. In this thesis, I develop several approaches for estimat...

  16. Suppression of stratified explosive interactions

    Energy Technology Data Exchange (ETDEWEB)

    Meeks, M.K.; Shamoun, B.I.; Bonazza, R.; Corradini, M.L. [Wisconsin Univ., Madison, WI (United States). Dept. of Nuclear Engineering and Engineering Physics

    1998-01-01

    Stratified Fuel-Coolant Interaction (FCI) experiments with Refrigerant-134a and water were performed in a large-scale system. Air was uniformly injected into the coolant pool to establish a pre-existing void which could suppress the explosion. Two competing effects due to the variation of the air flow rate seem to influence the intensity of the explosion in this geometrical configuration. At low flow rates, although the injected air increases the void fraction, the concurrent agitation and mixing increases the intensity of the interaction. At higher flow rates, the increase in void fraction tends to attenuate the propagated pressure wave generated by the explosion. Experimental results show a complete suppression of the vapor explosion at high rates of air injection, corresponding to an average void fraction of larger than 30%. (author)

  17. Deriving covariant holographic entanglement

    Energy Technology Data Exchange (ETDEWEB)

    Dong, Xi [School of Natural Sciences, Institute for Advanced Study, Princeton, NJ 08540 (United States); Lewkowycz, Aitor [Jadwin Hall, Princeton University, Princeton, NJ 08544 (United States); Rangamani, Mukund [Center for Quantum Mathematics and Physics (QMAP), Department of Physics, University of California, Davis, CA 95616 (United States)

    2016-11-07

    We provide a gravitational argument in favour of the covariant holographic entanglement entropy proposal. In general time-dependent states, the proposal asserts that the entanglement entropy of a region in the boundary field theory is given by a quarter of the area of a bulk extremal surface in Planck units. The main element of our discussion is an implementation of an appropriate Schwinger-Keldysh contour to obtain the reduced density matrix (and its powers) of a given region, as is relevant for the replica construction. We map this contour into the bulk gravitational theory, and argue that the saddle point solutions of these replica geometries lead to a consistent prescription for computing the field theory Rényi entropies. In the limiting case where the replica index is taken to unity, a local analysis suffices to show that these saddles lead to the extremal surfaces of interest. We also comment on various properties of holographic entanglement that follow from this construction.

  18. Networks of myelin covariance.

    Science.gov (United States)

    Melie-Garcia, Lester; Slater, David; Ruef, Anne; Sanabria-Diaz, Gretel; Preisig, Martin; Kherif, Ferath; Draganski, Bogdan; Lutti, Antoine

    2018-04-01

    Networks of anatomical covariance have been widely used to study connectivity patterns in both normal and pathological brains based on the concurrent changes of morphometric measures (i.e., cortical thickness) between brain structures across subjects (Evans, 2013). However, the existence of networks of microstructural changes within brain tissue has been largely unexplored so far. In this article, we studied in vivo the concurrent myelination processes among brain anatomical structures that, gathered together, emerge to form nonrandom networks. We name these "networks of myelin covariance" (Myelin-Nets). The Myelin-Nets were built from quantitative Magnetization Transfer data, an in-vivo magnetic resonance imaging (MRI) marker of myelin content. The synchronicity of the variations in myelin content between anatomical regions was measured by computing the Pearson's correlation coefficient. We were especially interested in elucidating the effect of age on the topological organization of the Myelin-Nets. We therefore selected two age groups: Young-Age (20-31 years old) and Old-Age (60-71 years old) and a pool of participants from 48 to 87 years old for a Myelin-Nets aging trajectory study. We found that the topological organization of the Myelin-Nets is strongly shaped by aging processes. The global myelin correlation strength, between homologous regions and locally in different brain lobes, showed a significant dependence on age. Interestingly, we also showed that the aging process modulates the resilience of the Myelin-Nets to damage of principal network structures. In summary, this work sheds light on the organizational principles driving myelination and myelin degeneration in brain gray matter and how such patterns are modulated by aging. © 2017 The Authors Human Brain Mapping Published by Wiley Periodicals, Inc.

  19. Partial covariance based functional connectivity computation using Ledoit-Wolf covariance regularization.

    Science.gov (United States)

    Brier, Matthew R; Mitra, Anish; McCarthy, John E; Ances, Beau M; Snyder, Abraham Z

    2015-11-01

    Functional connectivity refers to shared signals among brain regions and is typically assessed in a task free state. Functional connectivity is commonly quantified between signal pairs using Pearson correlation. However, resting-state fMRI is a multivariate process exhibiting a complicated covariance structure. Partial covariance assesses the unique variance shared between two brain regions excluding any widely shared variance, hence is appropriate for the analysis of multivariate fMRI datasets. However, calculation of partial covariance requires inversion of the covariance matrix, which, in most functional connectivity studies, is not invertible owing to rank deficiency. Here we apply Ledoit-Wolf shrinkage (L2 regularization) to invert the high dimensional BOLD covariance matrix. We investigate the network organization and brain-state dependence of partial covariance-based functional connectivity. Although resting-state networks (RSNs) are conventionally defined in terms of shared variance, removal of widely shared variance, surprisingly, improved the separation of RSNs in a spring embedded graphical model. This result suggests that pair-wise unique shared variance plays a heretofore unrecognized role in RSN covariance organization. In addition, application of partial correlation to fMRI data acquired in the eyes open vs. eyes closed states revealed focal changes in uniquely shared variance between the thalamus and visual cortices. This result suggests that partial correlation of resting state BOLD time series reflects functional processes in addition to structural connectivity. Copyright © 2015 Elsevier Inc. All rights reserved.
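The pipeline described here (shrinkage-regularized covariance inversion, then partial correlations from the precision matrix) can be sketched on toy data. This is an illustrative sketch, not the authors' fMRI code; the sample and variable counts are invented:

```python
import numpy as np
from sklearn.covariance import LedoitWolf

rng = np.random.default_rng(0)
# Toy "regional time series": 30 samples, 10 variables.
X = rng.standard_normal((30, 10))

# Ledoit-Wolf shrinkage yields a well-conditioned covariance estimate
# that can be inverted even when the sample covariance is near-singular.
lw = LedoitWolf().fit(X)
precision = np.linalg.inv(lw.covariance_)

# Partial correlation between i and j, given all other variables:
# r_ij = -P_ij / sqrt(P_ii * P_jj), with P the precision matrix.
d = np.sqrt(np.diag(precision))
partial_corr = -precision / np.outer(d, d)
np.fill_diagonal(partial_corr, 1.0)
```

The same recipe applies unchanged when the variables are regional BOLD signals and the samples are time points.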

  20. COVARIANCE ASSISTED SCREENING AND ESTIMATION.

    Science.gov (United States)

    Ke, By Tracy; Jin, Jiashun; Fan, Jianqing

    2014-11-01

    Consider a linear model Y = Xβ + z, where X = X_{n,p} and z ~ N(0, I_n). The vector β is unknown and it is of interest to separate its nonzero coordinates from the zero ones (i.e., variable selection). Motivated by examples in long-memory time series (Fan and Yao, 2003) and the change-point problem (Bhattacharya, 1994), we are primarily interested in the case where the Gram matrix G = X'X is non-sparse but sparsifiable by a finite-order linear filter. We focus on the regime where signals are both rare and weak, so that successful variable selection is very challenging but still possible. We approach this problem by a new procedure called Covariance Assisted Screening and Estimation (CASE). CASE first uses linear filtering to reduce the original setting to a new regression model where the corresponding Gram (covariance) matrix is sparse. The new covariance matrix induces a sparse graph, which guides us to conduct multivariate screening without visiting all the submodels. By interacting with the signal sparsity, the graph enables us to decompose the original problem into many separated small-size subproblems (if only we knew where they are!). Linear filtering also induces a so-called problem of information leakage, which can be overcome by the newly introduced patching technique. Together, these give rise to CASE, a two-stage Screen and Clean (Fan and Song, 2010; Wasserman and Roeder, 2009) procedure, where we first identify candidates of these submodels by patching and screening, and then re-examine each candidate to remove false positives. For any variable selection procedure β̂, we measure performance by the minimax Hamming distance between the sign vectors of β̂ and β. We show that in a broad class of situations where the Gram matrix is non-sparse but sparsifiable, CASE achieves the optimal rate of convergence. The results are successfully applied to long-memory time series and the change-point model.
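The premise that a dense Gram matrix can be "sparsifiable by a finite order linear filter" can be illustrated with a toy equicorrelation matrix (my example, not the paper's): first differencing turns a fully dense G into a tridiagonal, hence sparse, matrix.

```python
import numpy as np

p, rho = 8, 0.7
# A dense (non-sparse) Gram matrix: equicorrelation, every entry nonzero.
G = (1 - rho) * np.eye(p) + rho * np.ones((p, p))

# First-order difference filter D: (Dx)_i = x_i - x_{i+1}.
D = np.eye(p - 1, p) - np.eye(p - 1, p, k=1)

# After filtering, the Gram matrix D G D^T is tridiagonal: the constant
# off-diagonal part of G is annihilated because each row of D sums to 0.
H = D @ G @ D.T
off = np.triu(np.abs(H), k=2)   # entries beyond the first off-diagonal
```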

  1. Development of covariance data for fast reactor cores. 3

    International Nuclear Information System (INIS)

    Shibata, Keiichi; Hasegawa, Akira

    1999-03-01

    Covariances have been estimated for nuclear data contained in JENDL-3.2. For Cr and Ni, the physical quantities for which covariances are deduced are cross sections and the first-order Legendre-polynomial coefficient for the angular distribution of elastically scattered neutrons. The covariances were estimated using the same methodology that had been used in the JENDL-3.2 evaluation, in order to keep consistency between mean values and their covariances. In cases where evaluated data were based on experimental data, the covariances were estimated from the same experimental data. For cross sections that had been evaluated by nuclear model calculations, the same model was applied to generate the covariances. The covariances obtained were compiled into ENDF-6 format files. The covariances, which had been prepared in the previous fiscal year, were re-examined, and some improvements were made. Parts of the Fe and 235U covariances were updated. Covariances of nu-p and nu-d for 241Pu and of fission neutron spectra for 233,235,238U and 239,240Pu were newly added to the data files. (author)

  2. General Galilei Covariant Gaussian Maps

    Science.gov (United States)

    Gasbarri, Giulio; Toroš, Marko; Bassi, Angelo

    2017-09-01

    We characterize general non-Markovian Gaussian maps which are covariant under Galilean transformations. In particular, we consider translational and Galilean covariant maps and show that they reduce to the known Holevo result in the Markovian limit. We apply the results to discuss measures of macroscopicity based on classicalization maps, specifically addressing dissipation, Galilean covariance and non-Markovianity. We further suggest a possible generalization of the macroscopicity measure defined by Nimmrichter and Hornberger [Phys. Rev. Lett. 110, 16 (2013)].

  3. Fast Computing for Distance Covariance

    OpenAIRE

    Huo, Xiaoming; Szekely, Gabor J.

    2014-01-01

    Distance covariance and distance correlation have been widely adopted in measuring dependence of a pair of random variables or random vectors. If the computation of distance covariance and distance correlation is implemented directly according to its definition, then its computational complexity is O($n^2$), which is a disadvantage compared to other faster methods. In this paper we show that the computation of distance covariance and distance correlation of real valued random variables can be...
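For reference, the direct O(n²) computation that this paper speeds up can be written straightforwardly from the definition (pairwise distance matrices, double-centering, then an entrywise product); the fast algorithm itself is not reproduced here:

```python
import numpy as np

def dcov2(x, y):
    """Squared sample distance covariance, computed directly from the
    definition: O(n^2) pairwise distances followed by double-centering."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    a = np.abs(x[:, None] - x[None, :])   # pairwise distances within x
    b = np.abs(y[:, None] - y[None, :])   # pairwise distances within y
    # Double-center each distance matrix (subtract row/column means, add grand mean).
    A = a - a.mean(0) - a.mean(1)[:, None] + a.mean()
    B = b - b.mean(0) - b.mean(1)[:, None] + b.mean()
    return (A * B).mean()

rng = np.random.default_rng(1)
x = rng.standard_normal(200)
y = x**2 + 0.1 * rng.standard_normal(200)   # dependent on x, but nonlinearly
```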

  4. Introduction to covariant formulation of superstring (field) theory

    International Nuclear Information System (INIS)

    Anon.

    1987-01-01

    The author discusses the covariant formulation of superstring theories based on BRS invariance. A new formulation of the superstring was first constructed by Green and Schwarz in the light-cone gauge, and a covariant action was discovered afterwards. The covariant action has an interesting geometrical interpretation; however, covariant quantization is difficult to perform because of the existence of local supersymmetries. Introducing extra variables into the action, a modified action has been proposed. However, it would be difficult to prescribe constraints to define a physical subspace, or to reproduce the correct physical spectrum. Hence the old formulation, i.e., the Neveu-Schwarz-Ramond (NSR) model, is used for covariant quantization. The author begins by quantizing the NSR model in a covariant way using BRS charges. Then the author discusses the field theory of (free) superstrings

  5. Non-Critical Covariant Superstrings

    CERN Document Server

    Grassi, P A

    2005-01-01

    We construct a covariant description of non-critical superstrings in even dimensions. We construct explicitly supersymmetric hybrid type variables in a linear dilaton background, and study an underlying N=2 twisted superconformal algebra structure. We find similarities between non-critical superstrings in 2n+2 dimensions and critical superstrings compactified on CY_(4-n) manifolds. We study the spectrum of the non-critical strings, and in particular the Ramond-Ramond massless fields. We use the supersymmetric variables to construct the non-critical superstrings sigma-model action in curved target space backgrounds with coupling to the Ramond-Ramond fields. We consider as an example non-critical type IIA strings on AdS_2 background with Ramond-Ramond 2-form flux.

  6. Optimal covariate designs theory and applications

    CERN Document Server

    Das, Premadhis; Mandal, Nripes Kumar; Sinha, Bikas Kumar

    2015-01-01

    This book primarily addresses the optimality aspects of covariate designs. A covariate model is a combination of ANOVA and regression models. Optimal estimation of the parameters of the model using a suitable choice of designs is of great importance; as such choices allow experimenters to extract maximum information for the unknown model parameters. The main emphasis of this monograph is to start with an assumed covariate model in combination with some standard ANOVA set-ups such as CRD, RBD, BIBD, GDD, BTIBD, BPEBD, cross-over, multi-factor, split-plot and strip-plot designs, treatment control designs, etc. and discuss the nature and availability of optimal covariate designs. In some situations, optimal estimations of both ANOVA and the regression parameters are provided. Global optimality and D-optimality criteria are mainly used in selecting the design. The standard optimality results of both discrete and continuous set-ups have been adapted, and several novel combinatorial techniques have been applied for...

  7. Monte Carlo stratified source-sampling

    International Nuclear Information System (INIS)

    Blomquist, R.N.; Gelbard, E.M.

    1997-01-01

    In 1995, at a conference on criticality safety, a special session was devoted to the Monte Carlo "eigenvalue of the world" problem. Argonne presented a paper at that session in which the anomalies originally observed in that problem were reproduced in a much simplified model-problem configuration, and removed by a version of stratified source-sampling. The original test problem was treated by a special code designed specifically for that purpose. Recently ANL started work on a method for dealing with more realistic "eigenvalue of the world" configurations, and has been incorporating this method into VIM. The original method has been modified to take into account real-world statistical noise sources not included in the model problem. This paper constitutes a status report on work still in progress
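The variance-reduction idea behind stratified source-sampling can be seen in a generic one-dimensional toy (this is not the VIM eigenvalue implementation; the integrand and sample sizes are invented): partition the domain, force an equal number of samples into each stratum, and average.

```python
import numpy as np

rng = np.random.default_rng(0)
f = lambda x: x**2                  # toy integrand; E[f(U)] = 1/3 on [0, 1]
n_strata, per_stratum = 50, 20

# Stratified sampling: draw the same number of points in each equal-width
# stratum of [0, 1]; with equal allocation the overall mean is the
# stratified estimator.
edges = np.linspace(0.0, 1.0, n_strata + 1)
u = rng.uniform(edges[:-1, None], edges[1:, None],
                size=(n_strata, per_stratum))
stratified_est = f(u).mean()

# Plain Monte Carlo with the same total budget, for comparison.
plain_est = f(rng.uniform(0, 1, n_strata * per_stratum)).mean()
```

Because each stratum is guaranteed its share of samples, the stratified estimator removes the between-stratum component of the variance, which is the same mechanism exploited when stratifying a fission source.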

  8. Multilevel maximum likelihood estimation with application to covariance matrices

    Czech Academy of Sciences Publication Activity Database

    Turčičová, Marie; Mandel, J.; Eben, Kryštof

    Published online: 23 January (2018) ISSN 0361-0926 R&D Projects: GA ČR GA13-34856S Institutional support: RVO:67985807 Keywords: Fisher information * High dimension * Hierarchical maximum likelihood * Nested parameter spaces * Spectral diagonal covariance model * Sparse inverse covariance model Subject RIV: BB - Applied Statistics, Operational Research Impact factor: 0.311, year: 2016

  9. Convex Banding of the Covariance Matrix.

    Science.gov (United States)

    Bien, Jacob; Bunea, Florentina; Xiao, Luo

    2016-01-01

    We introduce a new sparse estimator of the covariance matrix for high-dimensional models in which the variables have a known ordering. Our estimator, which is the solution to a convex optimization problem, is equivalently expressed as an estimator which tapers the sample covariance matrix by a Toeplitz, sparsely-banded, data-adaptive matrix. As a result of this adaptivity, the convex banding estimator enjoys theoretical optimality properties not attained by previous banding or tapered estimators. In particular, our convex banding estimator is minimax rate adaptive in Frobenius and operator norms, up to log factors, over commonly-studied classes of covariance matrices, and over more general classes. Furthermore, it correctly recovers the bandwidth when the true covariance is exactly banded. Our convex formulation admits a simple and efficient algorithm. Empirical studies demonstrate its practical effectiveness and illustrate that our exactly-banded estimator works well even when the true covariance matrix is only close to a banded matrix, confirming our theoretical results. Our method compares favorably with all existing methods, in terms of accuracy and speed. We illustrate the practical merits of the convex banding estimator by showing that it can be used to improve the performance of discriminant analysis for classifying sound recordings.
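A minimal sketch of the banding idea for ordered variables (plain hard banding with a fixed bandwidth, i.e., the simpler estimator the convex method improves upon; all dimensions and the bandwidth below are illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)
n, p, bandwidth = 100, 8, 2

# Sample from a covariance that decays with |i - j| (variables have a
# known ordering, as the banding setting assumes).
idx = np.arange(p)
true_cov = 0.5 ** np.abs(np.subtract.outer(idx, idx))
X = rng.multivariate_normal(np.zeros(p), true_cov, size=n)

S = np.cov(X, rowvar=False)          # sample covariance, p x p
# Hard banding: zero out entries more than `bandwidth` off the diagonal.
mask = np.abs(np.subtract.outer(idx, idx)) <= bandwidth
S_banded = np.where(mask, S, 0.0)
```

The convex banding estimator of the paper replaces this all-or-nothing mask with a data-adaptive Toeplitz taper obtained by solving a convex program.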

  10. The covariant entropy bound in gravitational collapse

    International Nuclear Information System (INIS)

    Gao, Sijie; Lemos, Jose P. S.

    2004-01-01

    We study the covariant entropy bound in the context of gravitational collapse. First, we discuss critically the heuristic arguments advanced by Bousso. Then we solve the problem through an exact model: a Tolman-Bondi dust shell collapsing into a Schwarzschild black hole. After the collapse, a new black hole with a larger mass is formed. The horizon, L, of the old black hole then terminates at the singularity. We show that the entropy crossing L does not exceed a quarter of the area of the old horizon. Therefore, the covariant entropy bound is satisfied in this process. (author)

  11. PHOTOSPHERIC EMISSION FROM STRATIFIED JETS

    International Nuclear Information System (INIS)

    Ito, Hirotaka; Nagataki, Shigehiro; Ono, Masaomi; Lee, Shiu-Hang; Mao, Jirong; Yamada, Shoichi; Pe'er, Asaf; Mizuta, Akira; Harikae, Seiji

    2013-01-01

    We explore photospheric emissions from stratified two-component jets, wherein a highly relativistic spine outflow is surrounded by a wider and less relativistic sheath outflow. Thermal photons are injected in regions of high optical depth and propagated until the photons escape at the photosphere. Because of the presence of shear in velocity (Lorentz factor) at the boundary of the spine and sheath region, a fraction of the injected photons are accelerated by a Fermi-like acceleration mechanism such that a high-energy power-law tail is formed in the resultant spectrum. We show, in particular, that if a velocity shear with a considerable variance in the bulk Lorentz factor is present, the high-energy part of the observed gamma-ray burst (GRB) photon spectrum can be explained by this photon acceleration mechanism. We also show that the accelerated photons might account for the origin of the extra-hard power-law component above the bump of the thermal-like peak seen in some peculiar bursts (e.g., GRB 090510, 090902B, 090926A). We demonstrate that time-integrated spectra can also reproduce the low-energy spectrum of GRBs consistently using a multi-temperature effect when time evolution of the outflow is considered. Last, we show that the empirical E_p-L_p relation can be explained by differences in the outflow properties of individual sources

  12. Conformally covariant composite operators in quantum chromodynamics

    International Nuclear Information System (INIS)

    Craigie, N.S.; Dobrev, V.K.; Todorov, I.T.

    1983-03-01

    Conformal covariance is shown to determine renormalization properties of composite operators in QCD and in the C_6^3-model at the one-loop level. Its relevance to higher order (renormalization group improved) perturbative calculations in the short distance limit is also discussed. Light cone operator product expansions and spectral representations for wave functions in QCD are derived. (author)

  13. Information content of household-stratified epidemics

    Directory of Open Access Journals (Sweden)

    T.M. Kinyanjui

    2016-09-01

    Household structure is a key driver of many infectious diseases, as well as a natural target for interventions such as vaccination programs. Many theoretical and conceptual advances on household-stratified epidemic models are relatively recent, but have successfully managed to increase the applicability of such models to practical problems. To be of maximum realism and hence benefit, they require parameterisation from epidemiological data, and while household-stratified final size data has been the traditional source, increasingly time-series infection data from households are becoming available. This paper is concerned with the design of studies aimed at collecting time-series epidemic data in order to maximize the amount of information available to calibrate household models. A design decision involves a trade-off between the number of households to enrol and the sampling frequency. Two commonly used epidemiological study designs are considered: cross-sectional, where different households are sampled at every time point, and cohort, where the same households are followed over the course of the study period. The search for an optimal design uses Bayesian computationally intensive methods to explore the joint parameter-design space combined with the Shannon entropy of the posteriors to estimate the amount of information in each design. For the cross-sectional design, the amount of information increases with the sampling intensity, i.e., the designs with the highest number of time points have the most information. On the other hand, the cohort design often exhibits a trade-off between the number of households sampled and the intensity of follow-up. Our results broadly support the choices made in existing epidemiological data collection studies. Prospective problem-specific use of our computational methods can bring significant benefits in guiding future study designs.

  14. Information content of household-stratified epidemics.

    Science.gov (United States)

    Kinyanjui, T M; Pellis, L; House, T

    2016-09-01

    Household structure is a key driver of many infectious diseases, as well as a natural target for interventions such as vaccination programs. Many theoretical and conceptual advances on household-stratified epidemic models are relatively recent, but have successfully managed to increase the applicability of such models to practical problems. To be of maximum realism and hence benefit, they require parameterisation from epidemiological data, and while household-stratified final size data has been the traditional source, increasingly time-series infection data from households are becoming available. This paper is concerned with the design of studies aimed at collecting time-series epidemic data in order to maximize the amount of information available to calibrate household models. A design decision involves a trade-off between the number of households to enrol and the sampling frequency. Two commonly used epidemiological study designs are considered: cross-sectional, where different households are sampled at every time point, and cohort, where the same households are followed over the course of the study period. The search for an optimal design uses Bayesian computationally intensive methods to explore the joint parameter-design space combined with the Shannon entropy of the posteriors to estimate the amount of information in each design. For the cross-sectional design, the amount of information increases with the sampling intensity, i.e., the designs with the highest number of time points have the most information. On the other hand, the cohort design often exhibits a trade-off between the number of households sampled and the intensity of follow-up. Our results broadly support the choices made in existing epidemiological data collection studies. Prospective problem-specific use of our computational methods can bring significant benefits in guiding future study designs. Copyright © 2016 The Authors. Published by Elsevier B.V. All rights reserved.
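The information criterion used in this study, the Shannon entropy of the posterior, can be sketched with a discretized posterior; the parameter grid and the two posterior shapes below are hypothetical. A lower entropy indicates a more informative design.

```python
import numpy as np

def shannon_entropy(p):
    """Shannon entropy (in nats) of a discretized posterior distribution."""
    p = np.asarray(p, float)
    p = p / p.sum()                 # normalize to a probability vector
    nz = p[p > 0]                   # convention: 0 * log 0 = 0
    return -np.sum(nz * np.log(nz))

grid = np.linspace(0.01, 1.0, 100)
# Hypothetical posteriors over a transmission-rate parameter under two
# designs: sharply peaked (informative) vs. diffuse (uninformative).
peaked = np.exp(-0.5 * ((grid - 0.4) / 0.02) ** 2)
diffuse = np.exp(-0.5 * ((grid - 0.4) / 0.30) ** 2)
```

Comparing `shannon_entropy(peaked)` against `shannon_entropy(diffuse)` ranks the two designs, which is the role the entropy plays in the design search described above.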

  15. Unravelling spatiotemporal tree-ring signals in Mediterranean oaks: a variance-covariance modelling approach of carbon and oxygen isotope ratios.

    Science.gov (United States)

    Shestakova, Tatiana A; Aguilera, Mònica; Ferrio, Juan Pedro; Gutiérrez, Emilia; Voltas, Jordi

    2014-08-01

    Identifying how physiological responses are structured across environmental gradients is critical to understanding in what manner ecological factors determine tree performance. Here, we investigated the spatiotemporal patterns of signal strength of carbon isotope discrimination (Δ(13)C) and oxygen isotope composition (δ(18)O) for three deciduous oaks (Quercus faginea (Lam.), Q. humilis Mill. and Q. petraea (Matt.) Liebl.) and one evergreen oak (Q. ilex L.) co-occurring in Mediterranean forests along an aridity gradient. We hypothesized that contrasting strategies in response to drought would lead to differential climate sensitivities between functional groups. Such differential sensitivities could result in a contrasting imprint on stable isotopes, depending on whether the spatial or temporal organization of tree-ring signals was analysed. To test these hypotheses, we proposed a mixed modelling framework to group isotopic records into potentially homogeneous subsets according to taxonomic or geographical criteria. To this end, carbon and oxygen isotopes were modelled through different variance-covariance structures for the variability among years (at the temporal level) or sites (at the spatial level). Signal-strength parameters were estimated from the outcome of selected models. We found striking differences between deciduous and evergreen oaks in the organization of their temporal and spatial signals. Therefore, the relationships with climate were examined independently for each functional group. While Q. ilex exhibited a large spatial dependence of isotopic signals on the temperature regime, deciduous oaks showed a greater dependence on precipitation, confirming their higher susceptibility to drought. Such contrasting responses to drought among oak types were also observed at the temporal level (interannual variability), with stronger associations with growing-season water availability in deciduous oaks. Thus, our results indicate that Mediterranean deciduous

  16. Performance of STICS model to predict rainfed corn evapotranspiration and biomass evaluated for 6 years between 1995 and 2006 using daily aggregated eddy covariance fluxes and ancillary measurements.

    Science.gov (United States)

    Pattey, Elizabeth; Jégo, Guillaume; Bourgeois, Gaétan

    2010-05-01

    Verifying the performance of process-based crop growth models in predicting evapotranspiration and crop biomass is a key component of adapting agricultural crop production to climate variations. STICS, developed by INRA, was among the models selected by Agriculture and Agri-Food Canada to be implemented for environmental assessment studies on climate variations, because of its built-in ability to assimilate biophysical descriptors such as LAI derived from satellite imagery and its open architecture. The model prediction of shoot biomass was calibrated using destructive biomass measurements over one season, by adjusting six cultivar parameters and three generic plant parameters to define two grain corn cultivars adapted to the 1000-km long Mixedwood Plains ecozone. Its performance was then evaluated using a database of 40 site-years of corn destructive biomass and yield. In this study we evaluate the temporal response of STICS evapotranspiration and biomass accumulation predictions against estimates from daily aggregated eddy covariance fluxes. The flux tower was located on an experimental farm south of Ottawa, and measurements were carried out over corn fields in 1995, 1996, 1998, 2000, 2002 and 2006. Daytime and nighttime fluxes were quality-checked and gap-filled separately. Soil respiration was partitioned to calculate the corn net daily CO2 uptake, which was converted into dry biomass. Of the six growing seasons, three (1995, 1998, 2002) had water stress periods during corn grain filling. Year 2000 was cool and wet, while 1996 had heat and rainfall distributed evenly over the season and 2006 had a wet spring. STICS can predict evapotranspiration using either crop coefficients, when wind speed and air moisture are not available, or a resistance approach. The first approach yielded higher predictions for all years than the resistance approach and the flux measurements. The dynamics of the STICS evapotranspiration predictions were very good for the growing seasons without

  17. Covariant single-hole optical potential

    International Nuclear Information System (INIS)

    Kam, J. de

    1982-01-01

    In this investigation a covariant optical potential model is constructed for scattering processes of mesons from nuclei in which the meson interacts repeatedly with one of the target nucleons. The nuclear binding interactions in the intermediate scattering state are consistently taken into account. In particular for pions and K - projectiles this is important in view of the strong energy dependence of the elementary projectile-nucleon amplitude. Furthermore, this optical potential satisfies unitarity and relativistic covariance. The starting point in our discussion is the three-body model for the optical potential. To obtain a practical covariant theory I formulate the three-body model as a relativistic quasi two-body problem. Expressions for the transition interactions and propagators in the quasi two-body equations are found by imposing the correct s-channel unitarity relations and by using dispersion integrals. This is done in such a way that the correct non-relativistic limit is obtained, avoiding clustering problems. Corrections to the quasi two-body treatment from the Pauli principle and the required ground-state exclusion are taken into account. The covariant equations that we arrive at are amenable to practical calculations. (orig.)

  18. Smooth individual level covariates adjustment in disease mapping.

    Science.gov (United States)

    Huque, Md Hamidul; Anderson, Craig; Walton, Richard; Woolford, Samuel; Ryan, Louise

    2018-05-01

    Spatial models for disease mapping should ideally account for covariates measured both at individual and area levels. The newly available "indiCAR" model fits the popular conditional autoregressive (CAR) model by accommodating both individual and group level covariates while adjusting for spatial correlation in the disease rates. This algorithm has been shown to be effective but assumes log-linear associations between individual level covariates and outcome. In many studies, the relationship between individual level covariates and the outcome may be non-log-linear, and methods to track such nonlinearity between individual level covariate and outcome in spatial regression modeling are not well developed. In this paper, we propose a new algorithm, smooth-indiCAR, to fit an extension to the popular conditional autoregressive model that can accommodate both linear and nonlinear individual level covariate effects while adjusting for group level covariates and spatial correlation in the disease rates. In this formulation, the effect of a continuous individual level covariate is accommodated via penalized splines. We describe a two-step estimation procedure to obtain reliable estimates of individual and group level covariate effects where both individual and group level covariate effects are estimated separately. This distributed computing framework enhances its application in the Big Data domain with a large number of individual/group level covariates. We evaluate the performance of smooth-indiCAR through simulation. Our results indicate that the smooth-indiCAR method provides reliable estimates of all regression and random effect parameters. We illustrate our proposed methodology with an analysis of data on neutropenia admissions in New South Wales (NSW), Australia. © 2018 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
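The penalized-spline component can be sketched in isolation (a generic truncated-power-basis fit with a ridge penalty, ignoring the spatial CAR part; the knot placement, penalty weight, and simulated covariate effect are illustrative choices, not the smooth-indiCAR defaults):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 300
x = rng.uniform(0, 1, n)
y = np.sin(2 * np.pi * x) + 0.1 * rng.standard_normal(n)  # nonlinear effect

# Penalized spline: cubic truncated-power basis, ridge penalty applied to
# the knot coefficients only (the polynomial part is left unpenalized).
knots = np.linspace(0.05, 0.95, 20)
B = np.column_stack([np.ones(n), x, x**2, x**3,
                     np.clip(x[:, None] - knots, 0, None) ** 3])
lam = 1e-3
P = np.eye(B.shape[1]); P[:4, :4] = 0.0
beta = np.linalg.solve(B.T @ B + lam * P, B.T @ y)
fitted = B @ beta
mse = np.mean((fitted - np.sin(2 * np.pi * x)) ** 2)
```

In the full model this smooth term would enter the linear predictor alongside the group-level covariates and the CAR spatial random effect.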

  19. Networks of myelin covariance

    Science.gov (United States)

    Slater, David; Ruef, Anne; Sanabria‐Diaz, Gretel; Preisig, Martin; Kherif, Ferath; Draganski, Bogdan; Lutti, Antoine

    2017-01-01

    Abstract Networks of anatomical covariance have been widely used to study connectivity patterns in both normal and pathological brains based on the concurrent changes of morphometric measures (i.e., cortical thickness) between brain structures across subjects (Evans, 2013). However, the existence of networks of microstructural changes within brain tissue has been largely unexplored so far. In this article, we studied in vivo the concurrent myelination processes among brain anatomical structures that gathered together emerge to form nonrandom networks. We name these “networks of myelin covariance” (Myelin‐Nets). The Myelin‐Nets were built from quantitative Magnetization Transfer data—an in‐vivo magnetic resonance imaging (MRI) marker of myelin content. The synchronicity of the variations in myelin content between anatomical regions was measured by computing the Pearson's correlation coefficient. We were especially interested in elucidating the effect of age on the topological organization of the Myelin‐Nets. We therefore selected two age groups: Young‐Age (20–31 years old) and Old‐Age (60–71 years old) and a pool of participants from 48 to 87 years old for a Myelin‐Nets aging trajectory study. We found that the topological organization of the Myelin‐Nets is strongly shaped by aging processes. The global myelin correlation strength, between homologous regions and locally in different brain lobes, showed a significant dependence on age. Interestingly, we also showed that the aging process modulates the resilience of the Myelin‐Nets to damage of principal network structures. In summary, this work sheds light on the organizational principles driving myelination and myelin degeneration in brain gray matter and how such patterns are modulated by aging. PMID:29271053
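The construction of such covariance networks, Pearson correlation of a regional measure across subjects followed by thresholding, can be sketched with synthetic data (subject counts, factor loadings, and the 0.4 threshold below are invented for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)
n_subjects, n_regions = 40, 12

# Toy per-region myelin values: a shared subject-level factor (e.g., an
# "aging" component) induces covariance between regions across subjects.
aging = rng.standard_normal((n_subjects, 1))
loadings = rng.uniform(0.3, 1.0, n_regions)
M = aging * loadings + 0.5 * rng.standard_normal((n_subjects, n_regions))

# Network: Pearson correlation across subjects, thresholded to an
# adjacency matrix with no self-connections.
R = np.corrcoef(M, rowvar=False)            # regions x regions
A = (np.abs(R) > 0.4) & ~np.eye(n_regions, dtype=bool)
```

Graph-theoretical measures (correlation strength, resilience to node removal) would then be computed on `A`, as done for the Myelin-Nets.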

  20. Thermal instability in a stratified plasma

    International Nuclear Information System (INIS)

    Hermanns, D.F.M.; Priest, E.R.

    1989-01-01

    The thermal instability mechanism has been studied in connection with observed coronal features such as prominences and cool cores in loops. Although these features show a lot of structure, most studies concern the thermal instability in a uniform medium. In this paper, we investigate the thermal instability and the interaction between thermal modes and the slow magneto-acoustic subspectrum for a stratified plasma slab. We formulate the relevant system of equations and give some straightforward properties of the linear spectrum of a non-uniform plasma slab, i.e. the existence of continuous parts in the spectrum. We present a numerical scheme with which we can investigate the linear spectrum for equilibrium states with stratification. The slow and thermal subspectra of a crude coronal model are given as a preliminary result. (author). 6 refs.; 1 fig.

  1. The initial establishment and epithelial morphogenesis of the esophagus: a new model of tracheal–esophageal separation and transition of simple columnar into stratified squamous epithelium in the developing esophagus

    Science.gov (United States)

    Que, Jianwen

    2016-01-01

    The esophagus and trachea are tubular organs that initially share a single common lumen in the anterior foregut. Several models have been proposed to explain how this single-lumen developmental intermediate generates two tubular organs. However, new evidence suggests that these models are not comprehensive. I will first briefly review these models and then propose a novel ‘splitting and extension’ model based on our in vitro modeling of the foregut separation process. Signaling molecules (e.g., SHHs, WNTs, BMPs) and transcription factors (e.g., NKX2.1 and SOX2) are critical for the separation of the foregut. Intriguingly, some of these molecules continue to play essential roles during the transition of simple columnar into stratified squamous epithelium in the developing esophagus, and they are also closely involved in epithelial maintenance in the adults. Alterations in the levels of these molecules have been associated with the initiation and progression of several esophageal diseases and cancer in adults. PMID:25727889

  2. Grassland gross carbon dioxide uptake based on an improved model tree ensemble approach considering human interventions: global estimation and covariation with climate.

    Science.gov (United States)

    Liang, Wei; Lü, Yihe; Zhang, Weibin; Li, Shuai; Jin, Zhao; Ciais, Philippe; Fu, Bojie; Wang, Shuai; Yan, Jianwu; Li, Junyi; Su, Huimin

    2017-07-01

    Grassland ecosystems play a crucial role in the global carbon cycle and provide vital ecosystem services for many species. However, these low-productivity, water-limited ecosystems are sensitive and vulnerable to climate perturbations and human intervention, the latter of which is often not considered owing to a lack of spatial information on grassland management. Here, by applying a model tree ensemble (MTE-GRASS) trained on local eddy covariance data, with gridded climate and management intensity fields (grazing and cutting) as predictors, we first provide an estimate of global grassland gross primary production (GPP). GPP from our study compares well (modeling efficiency NSE = 0.85 spatially; NSE between 0.69 and 0.94 interannually) with that from flux measurements. Global grassland GPP averaged 11 ± 0.31 Pg C yr⁻¹ and exhibited a significantly increasing trend at both annual and seasonal scales, with an annual increase of 0.023 Pg C (0.2%) from 1982 to 2011. Meanwhile, we found that at both annual and seasonal scales, the trend (except for northern summer) and the interannual variability of GPP are primarily driven by arid/semiarid ecosystems, the latter because of the larger variation in precipitation there. Grasslands in arid/semiarid regions have a stronger (33 g C m⁻² yr⁻¹ per 100 mm) and faster (0- to 1-month time lag) response to precipitation than those in other regions. Although spatial gradients (71%) and interannual changes (51%) in GPP were mainly driven by precipitation, mostly in arid/semiarid climate zones, temperature and radiation together accounted for half of the GPP variability, mainly in high-latitude or cold regions. Our findings and the results of other studies suggest the overwhelming importance of arid/semiarid regions as a control on the grassland ecosystem carbon cycle. Similarly, under the projected future climate change, grassland ecosystems in these regions will

  3. A scale invariant covariance structure on jet space

    DEFF Research Database (Denmark)

    Pedersen, Kim Steenstrup; Loog, Marco; Markussen, Bo

    2005-01-01

    This paper considers scale invariance of statistical image models. We study statistical scale invariance of the covariance structure of jet space under scale space blurring and derive the necessary structure and conditions of the jet covariance matrix in order for it to be scale invariant. As par...

  4. A three domain covariance framework for EEG/MEG data

    NARCIS (Netherlands)

    Ros, B.P.; Bijma, F.; de Gunst, M.C.M.; de Munck, J.C.

    2015-01-01

    In this paper we introduce a covariance framework for the analysis of single subject EEG and MEG data that takes into account observed temporal stationarity on small time scales and trial-to-trial variations. We formulate a model for the covariance matrix, which is a Kronecker product of three

  5. Covariance Manipulation for Conjunction Assessment

    Science.gov (United States)

    Hejduk, M. D.

    2016-01-01

    The manipulation of space object covariances to try to provide additional or improved information to conjunction risk assessment is not an uncommon practice. Types of manipulation include fabricating a covariance when it is missing or unreliable to force the probability of collision (Pc) to a maximum value ('PcMax'), scaling a covariance to try to improve its realism or see the effect of covariance volatility on the calculated Pc, and constructing the equivalent of an epoch covariance at a convenient future point in the event ('covariance forecasting'). In bringing these methods to bear for Conjunction Assessment (CA) operations, however, some do not remain fully consistent with best practices for conducting risk management, some seem to be of relatively low utility, and some require additional information before they can contribute fully to risk analysis. This study describes some basic principles of modern risk management (following the Kaplan construct) and then examines the PcMax and covariance forecasting paradigms for alignment with these principles; it then further examines the expected utility of these methods in the modern CA framework. Both paradigms are found to be not without utility, but only in situations that are somewhat carefully circumscribed.
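The covariance-scaling manipulation discussed in this abstract can be sketched with a Monte Carlo estimate of a 2-D probability of collision (Pc): sample relative positions around the nominal miss vector using the combined covariance and count how often they fall inside the hard-body radius. All numbers are hypothetical and this is a didactic sketch, not an operational Pc algorithm:

```python
import numpy as np

rng = np.random.default_rng(1)

def collision_probability(cov, miss_vector, hard_body_radius, n_samples=200_000):
    """Monte Carlo Pc: fraction of sampled relative positions (Gaussian with
    mean `miss_vector` and covariance `cov`) inside the hard-body radius."""
    samples = rng.multivariate_normal(miss_vector, cov, size=n_samples)
    return np.mean(np.hypot(samples[:, 0], samples[:, 1]) < hard_body_radius)

cov = np.array([[100.0, 20.0],
                [20.0, 400.0]])   # assumed combined position covariance, m^2
miss = np.array([50.0, 0.0])      # assumed nominal miss vector, m
r = 10.0                          # assumed combined hard-body radius, m

# Scaling the covariance (one manipulation described above) changes Pc;
# for large enough scaling Pc eventually decreases again, which is why a
# maximum Pc ("PcMax") over scale factors exists.
for k in (0.25, 1.0, 4.0):
    print(k, collision_probability(k * cov, miss, r))
```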

  6. Covariance matrices of experimental data

    International Nuclear Information System (INIS)

    Perey, F.G.

    1978-01-01

    A complete statement of the uncertainties in data is given by its covariance matrix. It is shown how the covariance matrix of data can be generated using the information available to obtain their standard deviations. Determination of resonance energies by the time-of-flight method is used as an example. The procedure for combining data when the covariance matrix is non-diagonal is given. The method is illustrated by means of examples taken from the recent literature to obtain an estimate of the energy of the first resonance in carbon and for five resonances of ²³⁸U
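The combination procedure for data with a non-diagonal covariance matrix can be sketched as a generalized least-squares fit: for measurements y of a common quantity with covariance V and design matrix A, the combined estimate is x̂ = (AᵀV⁻¹A)⁻¹AᵀV⁻¹y with variance (AᵀV⁻¹A)⁻¹. The numbers below are invented for illustration:

```python
import numpy as np

# Two correlated measurements of the same quantity, with an assumed
# non-diagonal covariance V (off-diagonal term = shared systematic error).
y = np.array([10.2, 9.8])
V = np.array([[0.04, 0.01],
              [0.01, 0.09]])
A = np.ones((2, 1))  # both measurements observe the same single quantity

Vinv = np.linalg.inv(V)
var_hat = np.linalg.inv(A.T @ Vinv @ A)   # variance of the combined value
x_hat = var_hat @ A.T @ Vinv @ y          # GLS combined value
print(x_hat[0], var_hat[0, 0])
```

Note that with a non-diagonal V the result differs from the naive inverse-variance weighting that a diagonal covariance would give.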

  7. Evaluation and processing of covariance data

    International Nuclear Information System (INIS)

    Wagner, M.

    1993-01-01

    These proceedings of a specialists' meeting on evaluation and processing of covariance data are divided into four parts: Part 1, needs for evaluated covariance data (2 papers); Part 2, generation of covariance data (15 papers); Part 3, processing of covariance files (2 papers); Part 4, experience in the use of evaluated covariance data (2 papers)

  8. A New Approach for Nuclear Data Covariance and Sensitivity Generation

    International Nuclear Information System (INIS)

    Leal, L.C.; Larson, N.M.; Derrien, H.; Kawano, T.; Chadwick, M.B.

    2005-01-01

    Covariance data are required to correctly assess uncertainties in design parameters in nuclear applications. The error estimation of calculated quantities relies on the nuclear data uncertainty information available in the basic nuclear data libraries, such as the U.S. Evaluated Nuclear Data File, ENDF/B. The uncertainty files in the ENDF/B library are obtained from the analysis of experimental data and are stored as variance and covariance data. The computer code SAMMY is used in the analysis of the experimental data in the resolved and unresolved resonance energy regions. The data fitting of cross sections is based on generalized least-squares formalism (Bayes' theory) together with the resonance formalism described by R-matrix theory. Two approaches are used in SAMMY for the generation of resonance-parameter covariance data. In the evaluation process SAMMY generates a set of resonance parameters that fit the data, and, in addition, it also provides the resonance-parameter covariances. For existing resonance-parameter evaluations where no resonance-parameter covariance data are available, the alternative is to use an approach called the 'retroactive' resonance-parameter covariance generation. In the high-energy region the methodology for generating covariance data consists of least-squares fitting and model parameter adjustment. The least-squares fitting method calculates covariances directly from experimental data. The parameter adjustment method employs a nuclear model calculation such as the optical model and the Hauser-Feshbach model, and estimates a covariance for the nuclear model parameters. In this paper we describe the application of the retroactive method and the parameter adjustment method to generate covariance data for the gadolinium isotopes

  9. On Galilean covariant quantum mechanics

    International Nuclear Information System (INIS)

    Horzela, A.; Kapuscik, E.; Kempczynski, J.; Joint Inst. for Nuclear Research, Dubna

    1991-08-01

    Formalism exhibiting the Galilean covariance of wave mechanics is proposed. A new notion of quantum mechanical forces is introduced. The formalism is illustrated on the example of the harmonic oscillator. (author)

  10. The effect of existing turbulence on stratified shear instability

    Science.gov (United States)

    Kaminski, Alexis; Smyth, William

    2017-11-01

    Ocean turbulence is an essential process governing, for example, heat uptake by the ocean. In the stably-stratified ocean interior, this turbulence occurs in discrete events driven by vertical variations of the horizontal velocity. Typically, these events have been modelled by assuming an initially laminar stratified shear flow which develops wavelike instabilities, becomes fully turbulent, and then relaminarizes into a stable state. However, in the real ocean there is always some level of turbulence left over from previous events, and it is not yet understood how this turbulence impacts the evolution of future mixing events. Here, we perform a series of direct numerical simulations of turbulent events developing in stratified shear flows that are already at least weakly turbulent. We do so by varying the amplitude of the initial perturbations, and examine the subsequent development of the instability and the impact on the resulting turbulent fluxes. This work is supported by NSF Grant OCE1537173.

  11. Covariate-adjusted measures of discrimination for survival data

    DEFF Research Database (Denmark)

    White, Ian R; Rapsomaniki, Eleni; Frikke-Schmidt, Ruth

    2015-01-01

    by the study design (e.g. age and sex) influence discrimination and can make it difficult to compare model discrimination between studies. Although covariate adjustment is a standard procedure for quantifying disease-risk factor associations, there are no covariate adjustment methods for discrimination...... statistics in censored survival data. OBJECTIVE: To develop extensions of the C-index and D-index that describe the prognostic ability of a model adjusted for one or more covariate(s). METHOD: We define a covariate-adjusted C-index and D-index for censored survival data, propose several estimators......, and investigate their performance in simulation studies and in data from a large individual participant data meta-analysis, the Emerging Risk Factors Collaboration. RESULTS: The proposed methods perform well in simulations. In the Emerging Risk Factors Collaboration data, the age-adjusted C-index and D-index were...

  12. FDTD scattered field formulation for scatterers in stratified dispersive media.

    Science.gov (United States)

    Olkkonen, Juuso

    2010-03-01

    We introduce a simple scattered field (SF) technique that enables finite difference time domain (FDTD) modeling of light scattering from dispersive objects residing in stratified dispersive media. The introduced SF technique is verified against the total field scattered field (TFSF) technique. As an application example, we study surface plasmon polariton enhanced light transmission through a 100 nm wide slit in a silver film.

  13. Cross-covariance functions for multivariate random fields based on latent dimensions

    KAUST Repository

    Apanasovich, T. V.; Genton, M. G.

    2010-01-01

    The problem of constructing valid parametric cross-covariance functions is challenging. We propose a simple methodology, based on latent dimensions and existing covariance models for univariate random fields, to develop flexible, interpretable

  14. Development of covariance capabilities in EMPIRE code

    Energy Technology Data Exchange (ETDEWEB)

    Herman,M.; Pigni, M.T.; Oblozinsky, P.; Mughabghab, S.F.; Mattoon, C.M.; Capote, R.; Cho, Young-Sik; Trkov, A.

    2008-06-24

    The nuclear reaction code EMPIRE has been extended to provide evaluation capabilities for neutron cross section covariances in the thermal, resolved resonance, unresolved resonance and fast neutron regions. The Atlas of Neutron Resonances by Mughabghab is used as a primary source of information on uncertainties at low energies. Care is taken to ensure consistency among the resonance parameter uncertainties and those for thermal cross sections. The resulting resonance parameter covariances are formatted in the ENDF-6 File 32. In the fast neutron range our methodology is based on model calculations with the code EMPIRE combined with experimental data through several available approaches. The model-based covariances can be obtained using deterministic (Kalman) or stochastic (Monte Carlo) propagation of model parameter uncertainties. We show that these two procedures yield comparable results. The Kalman filter and/or the generalized least square fitting procedures are employed to incorporate experimental information. We compare the two approaches by analyzing results for the major reaction channels on ⁸⁹Y. We also discuss a long-standing issue of unreasonably low uncertainties and link it to the rigidity of the model.
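The stochastic (Monte Carlo) propagation of model-parameter uncertainties mentioned above can be sketched with a toy cross-section model standing in for the actual nuclear-model calculation (the functional form, parameter values, and parameter covariance below are all assumptions for illustration, not EMPIRE's physics):

```python
import numpy as np

rng = np.random.default_rng(2)

def model_cross_section(energies, strength, slope):
    """Toy stand-in for a nuclear-model calculation: cross section as a
    smooth function of energy and two model parameters."""
    return strength * np.exp(-slope * energies)

energies = np.linspace(0.1, 5.0, 20)       # hypothetical energy grid, MeV
par_mean = np.array([10.0, 0.5])           # nominal (strength, slope)
par_cov = np.array([[0.25, 0.01],
                    [0.01, 0.0025]])       # assumed parameter covariance

# Monte Carlo propagation: sample parameter sets, evaluate the model for
# each, then take the sample covariance of the resulting curves.
pars = rng.multivariate_normal(par_mean, par_cov, size=5000)
xs = np.array([model_cross_section(energies, s, b) for s, b in pars])
xs_cov = np.cov(xs, rowvar=False)          # (20, 20) cross-section covariance
print(xs_cov.shape)
```

The deterministic (Kalman) alternative would instead linearize the model and map the parameter covariance through the sensitivity matrix.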

  15. Large Eddy Simulation of stratified flows over structures

    OpenAIRE

    Brechler J.; Fuka V.

    2013-01-01

    We tested the ability of the LES model CLMM (Charles University Large-Eddy Microscale Model) to model the stratified flow around three-dimensional hills. We compared quantities such as the height of the dividing streamline, the recirculation zone length and the length of the lee waves with experiments by Hunt and Snyder[3] and numerical computations by Ding, Calhoun and Street[5]. The results mostly agreed with the references, but some important differences are present.

  16. Large Eddy Simulation of stratified flows over structures

    Directory of Open Access Journals (Sweden)

    Brechler J.

    2013-04-01

    We tested the ability of the LES model CLMM (Charles University Large-Eddy Microscale Model) to model the stratified flow around three-dimensional hills. We compared quantities such as the height of the dividing streamline, the recirculation zone length and the length of the lee waves with experiments by Hunt and Snyder[3] and numerical computations by Ding, Calhoun and Street[5]. The results mostly agreed with the references, but some important differences are present.

  17. Large Eddy Simulation of stratified flows over structures

    Science.gov (United States)

    Fuka, V.; Brechler, J.

    2013-04-01

    We tested the ability of the LES model CLMM (Charles University Large-Eddy Microscale Model) to model the stratified flow around three-dimensional hills. We compared quantities such as the height of the dividing streamline, the recirculation zone length and the length of the lee waves with experiments by Hunt and Snyder[3] and numerical computations by Ding, Calhoun and Street[5]. The results mostly agreed with the references, but some important differences are present.

  18. Covariant holography of a tachyonic accelerating universe

    Energy Technology Data Exchange (ETDEWEB)

    Rozas-Fernandez, Alberto [Consejo Superior de Investigaciones Cientificas, Instituto de Fisica Fundamental, Madrid (Spain); University of Portsmouth, Institute of Cosmology and Gravitation, Portsmouth (United Kingdom)

    2014-08-15

    We apply the holographic principle to a flat dark energy dominated Friedmann-Robertson-Walker spacetime filled with a tachyon scalar field with constant equation of state w = p/ρ, both for w > -1 and w < -1. By using a geometrical covariant procedure, which allows the construction of holographic hypersurfaces, we have obtained for each case the position of the preferred screen and have then compared these with those obtained by using the holographic dark energy model with the future event horizon as the infrared cutoff. In the phantom scenario, one of the two obtained holographic screens is placed on the big rip hypersurface, both for the covariant holographic formalism and the holographic phantom model. It is also analyzed whether the existence of these preferred screens allows a mathematically consistent formulation of fundamental theories based on the existence of an S-matrix at infinite distances. (orig.)

  19. Torsion and geometrostasis in covariant superstrings

    International Nuclear Information System (INIS)

    Zachos, C.

    1985-01-01

    The covariant action for freely propagating heterotic superstrings consists of a metric and a torsion term with a special relative strength. It is shown that the strength for which torsion flattens the underlying 10-dimensional superspace geometry is precisely that which yields free oscillators on the light cone. This is in complete analogy with the geometrostasis of two-dimensional sigma-models with Wess-Zumino interactions. 13 refs

  20. Torsion and geometrostasis in covariant superstrings

    Energy Technology Data Exchange (ETDEWEB)

    Zachos, C.

    1985-01-01

    The covariant action for freely propagating heterotic superstrings consists of a metric and a torsion term with a special relative strength. It is shown that the strength for which torsion flattens the underlying 10-dimensional superspace geometry is precisely that which yields free oscillators on the light cone. This is in complete analogy with the geometrostasis of two-dimensional sigma-models with Wess-Zumino interactions. 13 refs.

  1. Multiple feature fusion via covariance matrix for visual tracking

    Science.gov (United States)

    Jin, Zefenfen; Hou, Zhiqiang; Yu, Wangsheng; Wang, Xin; Sun, Hui

    2018-04-01

    To address complicated dynamic scenes in visual target tracking, a multi-feature fusion tracking algorithm based on the covariance matrix is proposed to improve the robustness of tracking. Within the framework of a quantum genetic algorithm, this paper uses the region covariance descriptor to fuse color, edge and texture features. It also uses a fast covariance intersection algorithm to update the model. The low dimension of the region covariance descriptor, the fast convergence and strong global optimization ability of the quantum genetic algorithm, and the speed of the fast covariance intersection algorithm together improve the computational efficiency of the fusion, matching and updating processes, so that the algorithm achieves fast and effective multi-feature fusion tracking. Experiments show that the proposed algorithm not only achieves fast and robust tracking but also effectively handles interference from occlusion, rotation, deformation, motion blur and so on.
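The region covariance descriptor underlying this fusion can be sketched as follows: each pixel of a patch is mapped to a feature vector, and the patch is summarized by the covariance of those vectors, giving a compact descriptor whose size is independent of the patch size. The feature set here (coordinates, intensity, gradient magnitudes) is one common choice and not necessarily the paper's exact feature stack:

```python
import numpy as np

def region_covariance(patch):
    """Region covariance descriptor: covariance of per-pixel feature vectors
    (x, y, intensity, |gx|, |gy|) over an image patch."""
    h, w = patch.shape
    ys, xs = np.mgrid[0:h, 0:w]
    gy, gx = np.gradient(patch.astype(float))      # image gradients
    feats = np.stack([xs.ravel(), ys.ravel(), patch.ravel(),
                      np.abs(gx).ravel(), np.abs(gy).ravel()])
    return np.cov(feats)   # (5, 5) descriptor, regardless of patch size

rng = np.random.default_rng(3)
patch = rng.random((32, 32))   # hypothetical grayscale patch
C = region_covariance(patch)
print(C.shape)
```

Adding color, edge, or texture channels to `feats` extends the descriptor to the multi-feature case described above.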

  2. Precomputing Process Noise Covariance for Onboard Sequential Filters

    Science.gov (United States)

    Olson, Corwin G.; Russell, Ryan P.; Carpenter, J. Russell

    2017-01-01

    Process noise is often used in estimation filters to account for unmodeled and mismodeled accelerations in the dynamics. The process noise covariance acts to inflate the state covariance over propagation intervals, increasing the uncertainty in the state. In scenarios where the acceleration errors change significantly over time, the standard process noise covariance approach can fail to provide effective representation of the state and its uncertainty. Consider covariance analysis techniques provide a method to precompute a process noise covariance profile along a reference trajectory using known model parameter uncertainties. The process noise covariance profile allows significantly improved state estimation and uncertainty representation over the traditional formulation. As a result, estimation performance on par with the consider filter is achieved for trajectories near the reference trajectory without the additional computational cost of the consider filter. The new formulation also has the potential to significantly reduce the trial-and-error tuning currently required of navigation analysts. A linear estimation problem as described in several previous consider covariance analysis studies is used to demonstrate the effectiveness of the precomputed process noise covariance, as well as a nonlinear descent scenario at the asteroid Bennu with optical navigation.
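The inflation of state covariance by process noise over a propagation interval can be sketched with a single Kalman prediction step for a 1-D position/velocity state (the dynamics, interval, and noise level below are illustrative assumptions, not the paper's scenario):

```python
import numpy as np

# One Kalman-filter propagation step: P_pred = F P F^T + Q, where the
# process noise covariance Q accounts for unmodeled accelerations over dt.
dt = 10.0
F = np.array([[1.0, dt],
              [0.0, 1.0]])                 # constant-velocity transition
q = 1e-4                                   # assumed acceleration noise density
Q = q * np.array([[dt**3 / 3, dt**2 / 2],
                  [dt**2 / 2, dt]])        # discrete white-noise accel. model

P = np.diag([4.0, 0.01])                   # current state covariance
P_pred = F @ P @ F.T + Q                   # propagated covariance, inflated by Q
print(np.trace(P_pred) - np.trace(F @ P @ F.T))  # extra uncertainty from Q
```

A precomputed process noise profile, as proposed in the abstract, would replace the constant `Q` with a time-varying `Q(t)` derived from consider covariance analysis along the reference trajectory.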

  3. Duality ensures modular covariance

    International Nuclear Information System (INIS)

    Li Miao; Yu Ming

    1989-11-01

    We show that the modular transformations for one point functions on the torus, S(n), satisfy the polynomial equations derived by Moore and Seiberg, provided the duality property of the model is ensured. The formula for S(n) is derived by us previously and should be valid for any conformal field theory. As a consequence, the full consistency conditions for modular invariance at higher genus are completely guaranteed by duality of the theory on the sphere. (orig.)

  4. Analysis of stratified flow mixing

    International Nuclear Information System (INIS)

    Soo, S.L.; Lyczkowski, R.W.

    1985-01-01

    The Creare 1/5-scale Phase II experiments, which model fluid and thermal mixing of relatively cold high pressure injection (HPI) water into a cold leg of a full-scale pressurized water reactor (PWR) with loop flow, are analyzed; it is found that they cannot achieve complete similarity with respect to the characteristic Reynolds and Froude numbers and the developing hydrodynamic entry length. Several analyses show that these experiments fall into two distinct regimes of mixing: momentum controlled and gravity controlled (stratification). 18 refs., 9 figs

  5. Experimental study of unsteady thermally stratified flow

    International Nuclear Information System (INIS)

    Lee, Sang Jun; Chung, Myung Kyoon

    1985-01-01

    Unsteady thermally stratified flow caused by two-dimensional surface discharge of warm water into an oblong channel was investigated. The experimental study focused on rapidly developing thermal diffusion at small Richardson number. The basic objectives were to study the interfacial mixing between a flowing layer of warm water and an underlying body of cold water and to accumulate experimental data for testing computational turbulence models. Mean velocity field measurements were carried out using NMR-CT (Nuclear Magnetic Resonance-Computerized Tomography), which captures a quantitative flow image of any desired section, in any flow direction, in a short time. Results show that at small Richardson number the warm layer rapidly penetrates into the cold layer because of strong turbulent mixing and instability between the two layers. It is found that the transfer of heat across the interface is more vigorous than that of momentum. It is also shown that the NMR-CT technique is a very valuable tool for measuring unsteady three-dimensional flow fields. (Author)

  6. GLq(N)-covariant quantum algebras and covariant differential calculus

    International Nuclear Information System (INIS)

    Isaev, A.P.; Pyatov, P.N.

    1992-01-01

    GL q (N)-covariant quantum algebras with generators satisfying quadratic polynomial relations are considered. It is shown that, up to some inessential arbitrariness, there are only two kinds of such quantum algebras, namely the algebras with q-deformed commutation and q-deformed anticommutation relations. 25 refs

  7. GLq(N)-covariant quantum algebras and covariant differential calculus

    International Nuclear Information System (INIS)

    Isaev, A.P.; Pyatov, P.N.

    1993-01-01

    We consider GL q (N)-covariant quantum algebras with generators satisfying quadratic polynomial relations. We show that, up to some inessential arbitrariness, there are only two kinds of such quantum algebras, namely, the algebras with q-deformed commutation and q-deformed anticommutation relations. The connection with the bicovariant differential calculus on the linear quantum groups is discussed. (orig.)

  8. Cosmic censorship conjecture revisited: covariantly

    International Nuclear Information System (INIS)

    Hamid, Aymen I M; Goswami, Rituparno; Maharaj, Sunil D

    2014-01-01

    In this paper we study the dynamics of the trapped region using a frame independent semi-tetrad covariant formalism for general locally rotationally symmetric (LRS) class II spacetimes. We covariantly prove some important geometrical results for the apparent horizon, and state the necessary and sufficient conditions for a singularity to be locally naked. These conditions bring out, for the first time in a quantitative and transparent manner, the importance of the Weyl curvature in deforming and delaying the trapped region during continual gravitational collapse, making the central singularity locally visible. (paper)

  9. Covariant transport theory

    Energy Technology Data Exchange (ETDEWEB)

    Pang, Yang [Columbia Univ., New York, NY (United States)]|[Brookhaven National Labs., Upton, NY (United States)

    1997-09-22

    Many phenomenological models for relativistic heavy ion collisions share a common framework - the relativistic Boltzmann equations. Within this framework, a nucleus-nucleus collision is described by the evolution of phase-space distributions of several species of particles. The equations can be effectively solved with the cascade algorithm by sampling each phase-space distribution with points, i.e. δ-functions, and by treating the interaction terms as collisions of these points. In between collisions, each point travels on a straight line trajectory. In most implementations of the cascade algorithm, each physical particle, e.g. a hadron or a quark, is often represented by one point. Thus, the cross-section for a collision of two points is just the cross-section of the physical particles, which can be quite large compared to the local density of the system. For an ultra-relativistic nucleus-nucleus collision, this could lead to a large violation of the Lorentz invariance. By using the invariance property of the Boltzmann equation under a scale transformation, a Lorentz invariant cascade algorithm can be obtained. The General Cascade Program - GCP - is a tool for solving the relativistic Boltzmann equation with any number of particle species and very general interactions with the cascade algorithm.

  10. Modeling the Covariance Structure of Complex Datasets Using Cognitive Models: An Application to Individual Differences and the Heritability of Cognitive Ability.

    Science.gov (United States)

    Evans, Nathan J; Steyvers, Mark; Brown, Scott D

    2018-06-05

    Understanding individual differences in cognitive performance is an important part of understanding how variations in underlying cognitive processes can result in variations in task performance. However, the exploration of individual differences in the components of the decision process-such as cognitive processing speed, response caution, and motor execution speed-in previous research has been limited. Here, we assess the heritability of the components of the decision process, with heritability having been a common aspect of individual differences research within other areas of cognition. Importantly, a limitation of previous work on cognitive heritability is the underlying assumption that variability in response times solely reflects variability in the speed of cognitive processing. This assumption has been problematic in other domains, due to the confounding effects of caution and motor execution speed on observed response times. We extend a cognitive model of decision-making to account for relatedness structure in a twin study paradigm. This approach can separately quantify different contributions to the heritability of response time. Using data from the Human Connectome Project, we find strong evidence for the heritability of response caution, and more ambiguous evidence for the heritability of cognitive processing speed and motor execution speed. Our study suggests that the assumption made in previous studies-that the heritability of cognitive ability is based on cognitive processing speed-may be incorrect. More generally, our methodology provides a useful avenue for future research in complex data that aims to analyze cognitive traits across different sources of related data, whether the relation is between people, tasks, experimental phases, or methods of measurement. © 2018 Cognitive Science Society, Inc.

  11. Large eddy simulation of turbulent and stably-stratified flows

    International Nuclear Information System (INIS)

    Fallon, Benoit

    1994-01-01

    The unsteady turbulent flow over a backward-facing step is studied by means of Large Eddy Simulation with a structure-function subgrid model, in both isothermal and stably-stratified configurations. Without stratification, the flow develops highly-distorted Kelvin-Helmholtz billows which undergo helical pairing, with A-shaped vortices shed downstream. We show that forcing injected by recirculation fluctuations governs the development of these oblique-mode instabilities. The statistical results show good agreement with the experimental measurements. For stably-stratified configurations, the flow remains more two-dimensional. We show how, with increasing stratification, shear-layer growth is frozen by inhibition of the pairing process and then of the Kelvin-Helmholtz instabilities, and how gravity waves or stable density interfaces develop. Eddy structures of the flow present striking analogies with the stratified mixing layer. Additional computations show the development of secondary Kelvin-Helmholtz instabilities on the vorticity layers between two primary structures. This important mechanism, based on baroclinic effects (horizontal density gradients), constitutes an additional part of the turbulent mixing process. Finally, the feasibility of Large Eddy Simulation for industrial flows is demonstrated by studying a complex stratified cavity. Temperature fluctuations are compared with experimental measurements. We also develop three-dimensional time-dependent animations in order to understand and visualize turbulent interactions. (author) [fr

  12. Graphical representation of covariant-contravariant modal formulae

    Directory of Open Access Journals (Sweden)

    Miguel Palomino

    2011-08-01

    Covariant-contravariant simulation is a combination of standard (covariant) simulation, its contravariant counterpart, and bisimulation. We have previously studied its logical characterization by means of the covariant-contravariant modal logic. Moreover, we have investigated the relationships between this model and that of modal transition systems, where two kinds of transitions (the so-called may and must transitions) were combined in order to obtain a simple framework to express a notion of refinement over state-transition models. In a classic paper, Boudol and Larsen established a precise connection between the graphical approach to system specification, by means of modal transition systems, and the logical approach, based on Hennessy-Milner logic without negation. They obtained a (graphical) representation theorem proving that a formula can be represented by a term if, and only if, it is consistent and prime. We show in this paper that the formulae from the covariant-contravariant modal logic that admit a "graphical" representation by means of processes, modulo the covariant-contravariant simulation preorder, are also the consistent and prime ones. In order to obtain the desired graphical representation result, we first restrict ourselves to the case of covariant-contravariant systems without bivariant actions. Bivariant actions can be incorporated later by means of an encoding that splits each bivariant action into its covariant and its contravariant parts.

  13. Covariance matrix estimation for stationary time series

    OpenAIRE

    Xiao, Han; Wu, Wei Biao

    2011-01-01

    We obtain a sharp convergence rate for banded covariance matrix estimates of stationary processes. A precise order of magnitude is derived for the spectral radius of sample covariance matrices. We also consider a thresholded covariance matrix estimator that can better characterize sparsity if the true covariance matrix is sparse. As our main tool, we apply the idea of Toeplitz [Math. Ann. 70 (1911) 351–376] and relate eigenvalues of covariance matrices to the spectral densities or Fourier transforms...
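The banding idea described in the abstract above can be sketched in a few lines: estimate autocovariances only up to a cutoff lag and zero out the rest of the Toeplitz matrix. This is an illustrative sketch, not the authors' code; the function name and the fixed `band` parameter are our own, whereas the paper studies how the banding parameter should grow with the sample size.

```python
import numpy as np

def banded_cov_estimate(x, band):
    """Banded covariance estimate for a stationary series: sample
    autocovariances up to lag `band`, arranged as a Toeplitz matrix,
    with all longer-lag entries set to zero."""
    x = np.asarray(x, dtype=float)
    n = len(x)
    xc = x - x.mean()
    # Sample autocovariances gamma_hat(k), k = 0..band
    gamma = np.array([xc[:n - k] @ xc[k:] / n for k in range(band + 1)])
    # Toeplitz matrix with entries gamma(|i - j|), banded at `band`
    idx = np.abs(np.subtract.outer(np.arange(n), np.arange(n)))
    return np.where(idx <= band, gamma[np.minimum(idx, band)], 0.0)

rng = np.random.default_rng(0)
x = rng.standard_normal(200)
S = banded_cov_estimate(x, band=5)
assert S.shape == (200, 200)
assert np.allclose(S, S.T)
assert S[0, 10] == 0.0  # entries beyond the band are zeroed
```

Thresholding, the other estimator mentioned in the record, would instead zero entries whose magnitude falls below a cutoff rather than entries beyond a fixed lag.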

  14. Covariance Partition Priors: A Bayesian Approach to Simultaneous Covariance Estimation for Longitudinal Data.

    Science.gov (United States)

    Gaskins, J T; Daniels, M J

    2016-01-02

    The estimation of the covariance matrix is a key concern in the analysis of longitudinal data. When data consist of multiple groups, it is often assumed that the covariance matrices are either equal across groups or completely distinct. We seek methodology to allow borrowing of strength across potentially similar groups to improve estimation. To that end, we introduce a covariance partition prior which proposes a partition of the groups at each measurement time. Groups in the same set of the partition share dependence parameters for the distribution of the current measurement given the preceding ones, and the sequence of partitions is modeled as a Markov chain to encourage similar structure at nearby measurement times. This approach additionally encourages a lower-dimensional structure of the covariance matrices by shrinking the parameters of the Cholesky decomposition toward zero. We demonstrate the performance of our model through two simulation studies and the analysis of data from a depression study. This article includes Supplementary Material available online.
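The shrinkage in the abstract above acts on the parameters of the modified Cholesky decomposition of a longitudinal covariance matrix. As a hedged illustration of that parametrization only (not the authors' Bayesian model), the sketch below rebuilds a covariance matrix from autoregressive coefficients and innovation variances; any choice of coefficients, including ones shrunk to zero, yields a valid covariance matrix.

```python
import numpy as np

def cov_from_cholesky_params(phi, log_var):
    """Modified Cholesky parametrization: T Sigma T' = D, where T is
    unit lower triangular with entries -phi[j, k] (regression of
    measurement j on earlier measurement k) and D holds innovation
    variances. Returns Sigma = T^{-1} D T^{-T}."""
    p = len(log_var)
    T = np.eye(p)
    for j in range(p):
        for k in range(j):
            T[j, k] = -phi[j, k]
    D = np.diag(np.exp(log_var))
    Tinv = np.linalg.inv(T)
    return Tinv @ D @ Tinv.T

p = 4
phi = np.zeros((p, p))
for j in range(1, p):
    phi[j, j - 1] = 0.5  # illustrative AR(1)-like dependence on the previous time
sigma = cov_from_cholesky_params(phi, np.zeros(p))
assert sigma.shape == (4, 4)
assert np.all(np.linalg.eigvalsh(sigma) > 0)  # unconstrained phi still gives a valid covariance
```

This unconstrained property is what makes the Cholesky parameters convenient targets for shrinkage priors: setting any of them to zero still produces a positive-definite matrix.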

  15. Grain distinct stratified nanolayers in aluminium alloys

    Energy Technology Data Exchange (ETDEWEB)

    Donatus, U., E-mail: uyimedonatus@yahoo.com [School of Materials, The University of Manchester, Manchester, M13 9PL, England (United Kingdom); Thompson, G.E.; Zhou, X.; Alias, J. [School of Materials, The University of Manchester, Manchester, M13 9PL, England (United Kingdom); Tsai, I.-L. [Oxford Instruments NanoAnalysis, HP12 2SE, High Wycombe (United Kingdom)

    2017-02-15

    The grains of aluminium alloys have stratified nanolayers which determine their mechanical and chemical responses. In this study, the nanolayers were revealed in the grains of AA6082 (T6 and T7 conditions), AA5083-O and AA2024-T3 alloys by etching the alloys in a solution comprising 20 g Cr{sub 2}O{sub 3} + 30 ml HPO{sub 3} in 1 L H{sub 2}O. Microstructural examination was conducted on selected grains of interest using scanning electron microscopy and the electron backscatter diffraction technique. It was observed that the nanolayers are orientation dependent and are parallel to the {100} planes. They have ordered and repeated tunnel squares, flawed at the sides, which are aligned in the <100> directions. These flawed tunnel squares dictate the tunnelling corrosion morphology and also appear to affect the arrangement and sizes of the precipitation-hardening particles. The inclination of the stratified nanolayers, their interspacing, and the groove sizes have a significant influence on the corrosion behaviour and an apparent influence on the strengthening mechanism of the investigated aluminium alloys. - Highlights: • Stratified nanolayers in aluminium alloy grains. • Relationship of the stratified nanolayers with grain orientation. • Influence of the inclinations of the stratified nanolayers on corrosion. • Influence of the nanolayers interspacing and groove sizes on hardness and corrosion.

  16. Condition Number Regularized Covariance Estimation.

    Science.gov (United States)

    Won, Joong-Ho; Lim, Johan; Kim, Seung-Jean; Rajaratnam, Bala

    2013-06-01

    Estimation of high-dimensional covariance matrices is known to be a difficult problem, has many applications, and is of current interest to the larger statistics community. In many applications, including the so-called "large p small n" setting, the estimate of the covariance matrix is required to be not only invertible, but also well-conditioned. Although many regularization schemes attempt to do this, none of them address the ill-conditioning problem directly. In this paper, we propose a maximum likelihood approach, with the direct goal of obtaining a well-conditioned estimator. No sparsity assumption on either the covariance matrix or its inverse is imposed, thus making our procedure more widely applicable. We demonstrate that the proposed regularization scheme is computationally efficient, yields a type of Steinian shrinkage estimator, and has a natural Bayesian interpretation. We investigate the theoretical properties of the regularized covariance estimator comprehensively, including its regularization path, and proceed to develop an approach that adaptively determines the level of regularization that is required. Finally, we demonstrate the performance of the regularized estimator in decision-theoretic comparisons and in the financial portfolio optimization setting. The proposed approach has desirable properties, and can serve as a competitive procedure, especially when the sample size is small and when a well-conditioned estimator is required.
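The goal described above, a well-conditioned covariance estimate, can be illustrated by clipping the sample eigenvalues so that the condition number is bounded. Note that this sketch fixes the clipping interval directly from a user-supplied bound, whereas the paper selects it by maximum likelihood; the function and parameter names are ours.

```python
import numpy as np

def cond_regularize(S, kappa_max):
    """Clip the eigenvalues of a sample covariance S into
    [lmax / kappa_max, lmax], so the condition number of the
    result is at most kappa_max. Simplified sketch of the idea."""
    vals, vecs = np.linalg.eigh(S)
    lmax = vals.max()
    clipped = np.clip(vals, lmax / kappa_max, lmax)
    return (vecs * clipped) @ vecs.T

rng = np.random.default_rng(1)
X = rng.standard_normal((20, 50))   # "large p small n": n = 20, p = 50
S = X.T @ X / 20                    # rank-deficient sample covariance
R = cond_regularize(S, kappa_max=100.0)
vals = np.linalg.eigvalsh(R)
assert vals.min() > 0                        # now invertible
assert vals.max() / vals.min() <= 100.0001   # condition number bounded
```

Because clipping only shrinks extreme eigenvalues toward the interior of the spectrum, this is a form of shrinkage estimator, consistent with the Steinian interpretation the record mentions.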

  17. Condition Number Regularized Covariance Estimation*

    Science.gov (United States)

    Won, Joong-Ho; Lim, Johan; Kim, Seung-Jean; Rajaratnam, Bala

    2012-01-01

    Estimation of high-dimensional covariance matrices is known to be a difficult problem, has many applications, and is of current interest to the larger statistics community. In many applications, including the so-called “large p small n” setting, the estimate of the covariance matrix is required to be not only invertible, but also well-conditioned. Although many regularization schemes attempt to do this, none of them address the ill-conditioning problem directly. In this paper, we propose a maximum likelihood approach, with the direct goal of obtaining a well-conditioned estimator. No sparsity assumption on either the covariance matrix or its inverse is imposed, thus making our procedure more widely applicable. We demonstrate that the proposed regularization scheme is computationally efficient, yields a type of Steinian shrinkage estimator, and has a natural Bayesian interpretation. We investigate the theoretical properties of the regularized covariance estimator comprehensively, including its regularization path, and proceed to develop an approach that adaptively determines the level of regularization that is required. Finally, we demonstrate the performance of the regularized estimator in decision-theoretic comparisons and in the financial portfolio optimization setting. The proposed approach has desirable properties, and can serve as a competitive procedure, especially when the sample size is small and when a well-conditioned estimator is required. PMID:23730197

  18. Covariant Gauss law commutator anomaly

    International Nuclear Information System (INIS)

    Dunne, G.V.; Trugenberger, C.A.; Massachusetts Inst. of Tech., Cambridge

    1990-01-01

    Using a (fixed-time) Hamiltonian formalism we derive a covariant form for the anomaly in the commutator algebra of Gauss law generators for chiral fermions interacting with a dynamical non-abelian gauge field in 3+1 dimensions. (orig.)

  19. Covariant gauges for constrained systems

    International Nuclear Information System (INIS)

    Gogilidze, S.A.; Khvedelidze, A.M.; Pervushin, V.N.

    1995-01-01

    A method of constructing an extended phase space for singular theories, which permits the consideration of covariant gauges without the introduction of ghost fields, is proposed. The extension of the phase space is carried out by identifying the initial theory with an equivalent theory with higher derivatives and applying to it the Ostrogradsky method of Hamiltonian description. 7 refs

  20. Covariant quantization of heterotic strings in supersymmetric chiral boson formulation

    International Nuclear Information System (INIS)

    Yu, F.

    1992-01-01

    This dissertation presents the covariant supersymmetric chiral boson formulation of the heterotic strings. The main feature of this formulation is the covariant quantization of the so-called leftons and rightons -- the (1,0) supersymmetric generalizations of the world-sheet chiral bosons -- that constitute basic building blocks of general heterotic-type string models. Although the (Neveu-Schwarz-Ramond or Green-Schwarz) heterotic strings provide the most realistic string models, their covariant quantization, with the widely-used Siegel formalism, has never been rigorously carried out. It is clarified in this dissertation that the covariant Siegel formalism is pathological upon quantization. As a test, a general classical covariant (NSR) heterotic string action that has the Siegel symmetry is constructed in arbitrary curved space-time coupled to (1,0) world-sheet supergravity. In the light-cone gauge quantization, the critical dimensions are derived for such an action with leftons and rightons compactified on group manifolds G_L x G_R. The covariant quantization of this action does not agree with the physical results in the light-cone gauge quantization. This dissertation establishes a new formalism for the covariant quantization of heterotic strings. The desired consistent covariant path integral quantization of supersymmetric chiral bosons, and thus of the general (NSR) heterotic-type strings with leftons and rightons compactified on the torus (⊗^{d_L} S^1) x (⊗^{d_R} S^1), is carried out. An infinite set of auxiliary (1,0) scalar superfields is introduced to convert the second-class chiral constraint into first-class ones. The covariant gauge-fixed action has an extended BRST symmetry described by the graded algebra GL(1|1). A regularization respecting this symmetry is proposed to deal with the contributions of the infinite towers of auxiliary fields and associated ghosts.

  1. Cosmology of a covariant Galilean field.

    Science.gov (United States)

    De Felice, Antonio; Tsujikawa, Shinji

    2010-09-10

    We study the cosmology of a covariant scalar field respecting a Galilean symmetry in flat space-time. We show the existence of a tracker solution that finally approaches a de Sitter fixed point responsible for cosmic acceleration today. The viable region of model parameters is clarified by deriving conditions under which ghosts and Laplacian instabilities of scalar and tensor perturbations are absent. The field equation of state exhibits a peculiar phantom-like behavior along the tracker, which opens the possibility of observationally distinguishing Galileon gravity from the cold dark matter model with a cosmological constant.

  2. Empirical Likelihood in Nonignorable Covariate-Missing Data Problems.

    Science.gov (United States)

    Xie, Yanmei; Zhang, Biao

    2017-04-20

    Missing covariate data occurs often in regression analysis, which frequently arises in the health and social sciences as well as in survey sampling. We study methods for the analysis of a nonignorable covariate-missing data problem in an assumed conditional mean function when some covariates are completely observed but other covariates are missing for some subjects. We adopt the semiparametric perspective of Bartlett et al. (Improving upon the efficiency of complete case analysis when covariates are MNAR. Biostatistics 2014;15:719-30) on regression analyses with nonignorable missing covariates, in which they have introduced the use of two working models, the working probability model of missingness and the working conditional score model. In this paper, we study an empirical likelihood approach to nonignorable covariate-missing data problems with the objective of effectively utilizing the two working models in the analysis of covariate-missing data. We propose a unified approach to constructing a system of unbiased estimating equations, where there are more equations than unknown parameters of interest. One useful feature of these unbiased estimating equations is that they naturally incorporate the incomplete data into the data analysis, making it possible to seek efficient estimation of the parameter of interest even when the working regression function is not specified to be the optimal regression function. We apply the general methodology of empirical likelihood to optimally combine these unbiased estimating equations. We propose three maximum empirical likelihood estimators of the underlying regression parameters and compare their efficiencies with other existing competitors. We present a simulation study to compare the finite-sample performance of various methods with respect to bias, efficiency, and robustness to model misspecification. The proposed empirical likelihood method is also illustrated by an analysis of a data set from the US National Health and

  3. Covariant effective action for loop quantum cosmology from order reduction

    International Nuclear Information System (INIS)

    Sotiriou, Thomas P.

    2009-01-01

    Loop quantum cosmology (LQC) seems to predict modified effective Friedmann equations without extra degrees of freedom. A puzzle arises if one decides to seek a covariant effective action that would lead to the given Friedmann equation: the Einstein-Hilbert action is the only action that leads to second-order field equations and, hence, there exists no covariant action which, under metric variation, leads to a modified Friedmann equation without extra degrees of freedom. It is shown that, at least for isotropic models in LQC, this issue is naturally resolved and a covariant effective action can be found if one considers higher-order theories of gravity but faithfully follows effective field theory techniques. However, our analysis also raises doubts on whether a covariant description without background structures can be found for anisotropic models.

  4. Using Covariant Lyapunov Vectors to Understand Spatiotemporal Chaos in Fluids

    Science.gov (United States)

    Paul, Mark; Xu, Mu; Barbish, Johnathon; Mukherjee, Saikat

    2017-11-01

    The spatiotemporal chaos of fluids presents many difficult and fascinating challenges. Recent progress in computing covariant Lyapunov vectors for a variety of model systems has made it possible to probe fundamental ideas from dynamical systems theory, including the degree of hyperbolicity, the fractal dimension, the dimension of the inertial manifold, and the decomposition of the dynamics into a finite number of physical modes and spurious modes. We are interested in building upon insights such as these for fluid systems. We first demonstrate the power of covariant Lyapunov vectors using a system of maps on a lattice with a nonlinear coupling. We then compute the covariant Lyapunov vectors for chaotic Rayleigh-Bénard convection under experimentally accessible conditions. We show that chaotic convection is non-hyperbolic and we quantify the spatiotemporal features of the spectrum of covariant Lyapunov vectors. NSF DMS-1622299 and DARPA/DSO Models, Dynamics, and Learning (MoDyL).

  5. Stratified charge rotary engine for general aviation

    Science.gov (United States)

    Mount, R. E.; Parente, A. M.; Hady, W. F.

    1986-01-01

    A development history, a current development status assessment, and a design feature and performance capabilities account are given for stratified-charge rotary engines applicable to aircraft propulsion. Such engines are capable of operating on Jet-A fuel with substantial cost savings, improved altitude capability, and lower fuel consumption by comparison with gas turbine powerplants. Attention is given to the current development program of a 400-hp engine scheduled for initial operations in early 1990. Stratified charge rotary engines are also applicable to ground power units, airborne APUs, shipboard generators, and vehicular engines.

  6. Galaxy-galaxy lensing estimators and their covariance properties

    Science.gov (United States)

    Singh, Sukhdeep; Mandelbaum, Rachel; Seljak, Uroš; Slosar, Anže; Vazquez Gonzalez, Jose

    2017-11-01

    We study the covariance properties of real space correlation function estimators - primarily galaxy-shear correlations, or galaxy-galaxy lensing - using SDSS data for both shear catalogues and lenses (specifically the BOSS LOWZ sample). Using mock catalogues of lenses and sources, we disentangle the various contributions to the covariance matrix and compare them with a simple analytical model. We show that not subtracting the lensing measurement around random points from the measurement around the lens sample is equivalent to performing the measurement using the lens density field instead of the lens overdensity field. While the measurement using the lens density field is unbiased (in the absence of systematics), its error is significantly larger due to an additional term in the covariance. Therefore, this subtraction should be performed regardless of its beneficial effects on systematics. Comparing the error estimates from data and mocks for estimators that involve the overdensity, we find that the errors are dominated by shape noise and lens clustering, that empirically estimated covariances (jackknife and standard deviation across mocks) are consistent with theoretical estimates, and that both the connected parts of the four-point function and the supersample covariance can be neglected for the current levels of noise. While the trade-off between different terms in the covariance depends on the survey configuration (area, source number density), the diagnostics that we use in this work should be useful for future works to test their empirically determined covariances.
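The random-point subtraction discussed in this record can be illustrated with a one-dimensional toy model: stacking a "signal" around lens positions picks up any additive systematic, and subtracting the same measurement made around random points removes it. Everything below (positions, amplitudes, the `stacked_signal` helper) is invented for illustration and is not the estimator used in the paper.

```python
import numpy as np

rng = np.random.default_rng(2)

def stacked_signal(centers, positions, values, rmax):
    """Toy 'stacked' measurement in 1-D: mean of `values` carried by
    sources within rmax of each center, averaged over centers."""
    means = []
    for c in centers:
        sel = np.abs(positions - c) < rmax
        if sel.any():
            means.append(values[sel].mean())
    return float(np.mean(means))

# Sources carry a true signal of 0.1 near lenses plus an additive
# systematic of 0.05 everywhere (all numbers invented).
positions = rng.uniform(0, 1000, 20000)
lenses = rng.uniform(0, 1000, 10)
randoms = rng.uniform(0, 1000, 200)
near_lens = np.array([np.abs(lenses - p).min() < 0.5 for p in positions])
values = 0.1 * near_lens + 0.05

raw = stacked_signal(lenses, positions, values, rmax=0.5)
corrected = raw - stacked_signal(randoms, positions, values, rmax=0.5)
assert abs(corrected - 0.1) < abs(raw - 0.1)  # subtraction removes the offset
```

The raw stack is biased high by the constant systematic; the random-point stack measures that systematic alone, so the difference recovers the lens-only signal.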

  7. Galaxy–galaxy lensing estimators and their covariance properties

    International Nuclear Information System (INIS)

    Singh, Sukhdeep; Mandelbaum, Rachel; Seljak, Uros; Slosar, Anze; Gonzalez, Jose Vazquez

    2017-01-01

    Here, we study the covariance properties of real space correlation function estimators – primarily galaxy–shear correlations, or galaxy–galaxy lensing – using SDSS data for both shear catalogues and lenses (specifically the BOSS LOWZ sample). Using mock catalogues of lenses and sources, we disentangle the various contributions to the covariance matrix and compare them with a simple analytical model. We show that not subtracting the lensing measurement around random points from the measurement around the lens sample is equivalent to performing the measurement using the lens density field instead of the lens overdensity field. While the measurement using the lens density field is unbiased (in the absence of systematics), its error is significantly larger due to an additional term in the covariance. Therefore, this subtraction should be performed regardless of its beneficial effects on systematics. Comparing the error estimates from data and mocks for estimators that involve the overdensity, we find that the errors are dominated by shape noise and lens clustering, that empirically estimated covariances (jackknife and standard deviation across mocks) are consistent with theoretical estimates, and that both the connected parts of the four-point function and the supersample covariance can be neglected for the current levels of noise. While the trade-off between different terms in the covariance depends on the survey configuration (area, source number density), the diagnostics that we use in this work should be useful for future works to test their empirically determined covariances.

  8. Properties of Endogenous Post-Stratified Estimation using remote sensing data

    Science.gov (United States)

    John Tipton; Jean Opsomer; Gretchen Moisen

    2013-01-01

    Post-stratification is commonly used to improve the precision of survey estimates. In traditional post-stratification methods, the stratification variable must be known at the population level. When suitable covariates are available at the population level, an alternative approach consists of fitting a model on the covariates, making predictions for the population and...
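The traditional post-stratified estimator mentioned above weights within-stratum sample means by known population stratum sizes. A minimal sketch, with invented data and stratum counts:

```python
import numpy as np

def post_stratified_mean(y_sample, strata_sample, strata_pop_counts):
    """Classical post-stratified estimator of the population mean:
    weight each within-stratum sample mean by the known population
    share N_h / N of its stratum."""
    N = sum(strata_pop_counts.values())
    est = 0.0
    for h, N_h in strata_pop_counts.items():
        y_h = y_sample[strata_sample == h]
        est += (N_h / N) * y_h.mean()
    return est

rng = np.random.default_rng(3)
# Invented population: two strata (sizes 800 and 200) with very
# different means, so post-stratification matters.
pop_strata = np.array([0] * 800 + [1] * 200)
pop_y = np.where(pop_strata == 0, 10.0, 50.0) + rng.standard_normal(1000)
sample = rng.choice(1000, size=100, replace=False)
est = post_stratified_mean(pop_y[sample], pop_strata[sample],
                           {0: 800, 1: 200})
assert abs(est - pop_y.mean()) < 1.0
```

The endogenous variant studied in the record replaces the known stratum labels with strata formed from model predictions built on population-level covariates, such as remote sensing data.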

  9. Stratified source-sampling techniques for Monte Carlo eigenvalue analysis

    International Nuclear Information System (INIS)

    Mohamed, A.

    1998-01-01

    In 1995, at a conference on criticality safety, a special session was devoted to the Monte Carlo ''Eigenvalue of the World'' problem. Argonne presented a paper at that session in which the anomalies originally observed in that problem were reproduced in a much simplified model-problem configuration, and removed by a version of stratified source-sampling. In this paper, stratified source-sampling techniques are generalized and applied to three different Eigenvalue of the World configurations which take into account real-world statistical noise sources not included in the model problem, but which differ in the amount of neutronic coupling among the constituents of each configuration. It is concluded that, in Monte Carlo eigenvalue analysis of loosely-coupled arrays, the use of stratified source-sampling reduces the probability of encountering an anomalous result relative to conventional source-sampling methods. However, this gain in reliability is substantially less than that observed in the model-problem results.
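The contrast between conventional and stratified source-sampling described above can be illustrated with a toy source bank: conventional sampling can leave a loosely-coupled site with few or no source particles, while stratification guarantees every site its share. This is a schematic of the sampling idea only, not the Monte Carlo eigenvalue algorithm itself; all names are ours.

```python
import numpy as np

rng = np.random.default_rng(4)

def simple_source_sample(n_sites, n_samples):
    """Conventional sampling: each source particle independently
    picks a site, so a site can end up under-represented."""
    return np.bincount(rng.integers(0, n_sites, n_samples),
                       minlength=n_sites)

def stratified_source_sample(n_sites, n_samples):
    """Stratified sampling: the source bank is first split evenly
    across sites; only the remainder is sampled at random."""
    base = n_samples // n_sites
    counts = np.full(n_sites, base)
    extra = n_samples - base * n_sites
    counts += np.bincount(rng.integers(0, n_sites, extra),
                          minlength=n_sites)
    return counts

simple = simple_source_sample(n_sites=10, n_samples=50)
strat = stratified_source_sample(n_sites=10, n_samples=50)
assert simple.sum() == 50 and strat.sum() == 50
assert strat.min() >= 5  # every site is guaranteed its share
```

In a loosely-coupled array, a site that receives no source particles in one generation can distort the eigenvalue estimate; the stratified scheme removes that failure mode by construction.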

  10. Group covariance and metrical theory

    International Nuclear Information System (INIS)

    Halpern, L.

    1983-01-01

    The a priori introduction of a Lie group of transformations into a physical theory has often proved to be useful; it usually serves to describe special simplified conditions before a general theory can be worked out. Newton's assumptions of absolute space and time are examples where the Euclidean group and the translation group have been introduced. These groups were extended to the Galilei group and modified in the special theory of relativity to the Poincare group to describe physics under the given conditions covariantly in the simplest way. The criticism of the a priori character leads to the formulation of the general theory of relativity. The general metric theory does not really give preference to a particular invariance group - even the principle of equivalence can be adapted to a whole family of groups. The physical laws covariantly inserted into the metric space are however adapted to the Poincare group. 8 references

  11. Nitrogen transformations in stratified aquatic microbial ecosystems

    DEFF Research Database (Denmark)

    Revsbech, N. P.; Risgaard-Petersen, N.; Schramm, A.

    2006-01-01

    New analytical methods such as advanced molecular techniques and microsensors have resulted in new insights about how nitrogen transformations in stratified microbial systems such as sediments and biofilms are regulated at a µm-mm scale. A large and ever-expanding knowledge base about n...

  12. Direct contact condensation induced transition from stratified to slug flow

    International Nuclear Information System (INIS)

    Strubelj, Luka; Ezsoel, Gyoergy; Tiselj, Iztok

    2010-01-01

    Selected condensation-induced water hammer experiments performed on the PMK-2 device were numerically modelled with three-dimensional two-fluid models of the computer codes NEPTUNE_CFD and CFX. The experimental setup consists of a horizontal pipe filled with hot steam that is slowly flooded with cold water. In most of the experimental cases, slow flooding of the pipe was abruptly interrupted by strong slugging and water hammer, while in the selected experimental runs analysed in the present work, performed at higher initial pressures and temperatures, the transition from the stratified into the slug flow was not accompanied by a water hammer pressure peak. That makes these cases more suitable tests for evaluation of the various condensation models in horizontally stratified flows and puts them within the range of the available CFD (Computational Fluid Dynamics) codes. The key models for successful simulation appear to be the model for condensation of hot vapour on cold liquid and the interfacial momentum transfer model. Surface-renewal types of condensation correlations, developed for condensation in stratified flows, were used in the simulations and were applied also in the regions of slug flow. The 'large interface' model for inter-phase momentum transfer was compared to the bubble drag model. The CFD simulations quantitatively captured the main phenomena of the experiments, while the stochastic nature of the particular condensation-induced water hammer experiments did not allow detailed prediction of the time and position of slug formation in the pipe. We have clearly shown that even the selected experiments without water hammer present a tough test for the applied CFD codes, while modelling of the water hammer pressure peaks in two-phase flow, being a strongly compressible flow phenomenon, is beyond the capability of the current CFD codes.

  13. Phenotypic covariance at species' borders.

    Science.gov (United States)

    Caley, M Julian; Cripps, Edward; Game, Edward T

    2013-05-28

    Understanding the evolution of species limits is important in ecology, evolution, and conservation biology. Despite its likely importance in the evolution of these limits, little is known about phenotypic covariance in geographically marginal populations, and the degree to which it constrains, or facilitates, responses to selection. We investigated phenotypic covariance in morphological traits at species' borders by comparing phenotypic covariance matrices (P), including the degree of shared structure, the distribution of strengths of pair-wise correlations between traits, the degree of morphological integration of traits, and the ranks of matrices, between central and marginal populations of three species-pairs of coral reef fishes. Greater structural differences in P were observed between populations close to range margins and conspecific populations toward range centres than between pairs of conspecific populations that were both more centrally located within their ranges. Approximately 80% of all pair-wise trait correlations within populations were greater in the north, but these differences were unrelated to the position of the sampled population with respect to the geographic range of the species. Neither the degree of morphological integration nor the ranks of P indicated greater evolutionary constraint at range edges. Characteristics of P observed here provide no support for constraint contributing to the formation of these species' borders, but may instead reflect structural change in P caused by selection or drift, and their potential to evolve in the future.

  14. Analysis of photonic band-gap structures in stratified medium

    DEFF Research Database (Denmark)

    Tong, Ming-Sze; Yinchao, Chen; Lu, Yilong

    2005-01-01

    Purpose - To demonstrate the flexibility and advantages of a non-uniform pseudo-spectral time domain (nu-PSTD) method through studies of the wave propagation characteristics on photonic band-gap (PBG) structures in stratified medium. Design/methodology/approach - A nu-PSTD method is proposed in solving the Maxwell's equations numerically. It expands the temporal derivatives using finite differences, while it adopts the Fourier transform (FT) properties to expand the spatial derivatives in Maxwell's equations. In addition, the method makes use of the chain-rule property in calculus together ... in electromagnetic and microwave applications once the Maxwell's equations are appropriately modeled. Originality/value - The method validates its values and properties through extensive studies on regular and defective 1D PBG structures in stratified medium, and it can be further extended to solving more...

  15. Visualization and assessment of spatio-temporal covariance properties

    KAUST Repository

    Huang, Huang

    2017-11-23

    Spatio-temporal covariances are important for describing the spatio-temporal variability of underlying random fields in geostatistical data. For second-order stationary random fields, there exist subclasses of covariance functions that assume a simpler spatio-temporal dependence structure with separability and full symmetry. However, it is challenging to visualize and assess separability and full symmetry from spatio-temporal observations. In this work, we propose a functional data analysis approach that constructs test functions using the cross-covariances from time series observed at each pair of spatial locations. These test functions of temporal lags summarize the properties of separability or symmetry for the given spatial pairs. We use functional boxplots to visualize the functional median and the variability of the test functions, where the extent of departure from zero at all temporal lags indicates the degree of non-separability or asymmetry. We also develop a rank-based nonparametric testing procedure for assessing the significance of the non-separability or asymmetry. Essentially, the proposed methods only require the analysis of temporal covariance functions. Thus, a major advantage over existing approaches is that there is no need to estimate any covariance matrix for selected spatio-temporal lags. The performance of the proposed methods is examined through simulations with various commonly used spatio-temporal covariance models. To illustrate our methods in practical applications, we apply them to real datasets, including weather station data and climate model outputs.
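The test functions described above are built from empirical cross-covariances at a pair of locations; under full symmetry, C12(u) - C21(u) should hover around zero at every temporal lag u. A simplified sketch of that one ingredient (the normalization and the functional-boxplot machinery of the paper are omitted, and all names are ours):

```python
import numpy as np

def asymmetry_test_function(x1, x2, max_lag):
    """Empirical C12(u) - C21(u) for temporal lags u = 1..max_lag.
    Under full symmetry of the underlying field this curve should
    fluctuate around zero at all lags."""
    x1 = np.asarray(x1, float) - np.mean(x1)
    x2 = np.asarray(x2, float) - np.mean(x2)
    n = len(x1)

    def ccov(a, b, u):
        # empirical cross-covariance cov(a_t, b_{t+u})
        return a[:n - u] @ b[u:] / n

    return np.array([ccov(x1, x2, u) - ccov(x2, x1, u)
                     for u in range(1, max_lag + 1)])

rng = np.random.default_rng(5)
z = rng.standard_normal(5000)

# Symmetric pair: two noisy copies of the same series
a = z + 0.1 * rng.standard_normal(5000)
b = z + 0.1 * rng.standard_normal(5000)
f_sym = asymmetry_test_function(a, b, max_lag=3)
assert np.all(np.abs(f_sym) < 0.1)

# Asymmetric pair: the second series lags the first by one step
c = np.roll(z, 1) + 0.1 * rng.standard_normal(5000)
f_asym = asymmetry_test_function(a, c, max_lag=3)
assert np.abs(f_asym).max() > 0.5
```

In the paper, curves like these are computed for many spatial pairs and summarized with functional boxplots; here the assertions simply show the curve staying near zero for a symmetric pair and departing from zero for a lagged one.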

  16. A three domain covariance framework for EEG/MEG data.

    Science.gov (United States)

    Roś, Beata P; Bijma, Fetsje; de Gunst, Mathisca C M; de Munck, Jan C

    2015-10-01

    In this paper we introduce a covariance framework for the analysis of single subject EEG and MEG data that takes into account observed temporal stationarity on small time scales and trial-to-trial variations. We formulate a model for the covariance matrix, which is a Kronecker product of three components that correspond to space, time and epochs/trials, and consider maximum likelihood estimation of the unknown parameter values. An iterative algorithm that finds approximations of the maximum likelihood estimates is proposed. Our covariance model is applicable in a variety of cases where spontaneous EEG or MEG acts as source of noise and realistic noise covariance estimates are needed, such as in evoked activity studies, or where the properties of spontaneous EEG or MEG are themselves the topic of interest, like in combined EEG-fMRI experiments in which the correlation between EEG and fMRI signals is investigated. We use a simulation study to assess the performance of the estimator and investigate the influence of different assumptions about the covariance factors on the estimated covariance matrix and on its components. We apply our method to real EEG and MEG data sets. Copyright © 2015 Elsevier Inc. All rights reserved.
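The three-domain framework above writes the noise covariance as a Kronecker product of epoch, time, and space factors. A small sketch of how such a covariance is assembled; the sizes and factor structures below (AR(1) temporal factor, uniform spatial correlation, independent epochs) are illustrative choices, not the estimates from the paper.

```python
import numpy as np

def ar1_cov(n, rho):
    """AR(1)-structured correlation matrix, a common simple choice
    for the temporal factor."""
    idx = np.arange(n)
    return rho ** np.abs(np.subtract.outer(idx, idx))

sigma_space = np.eye(4) + 0.3 * (1 - np.eye(4))  # 4 sensors, uniform correlation
sigma_time = ar1_cov(10, 0.8)                    # 10 time samples
sigma_epoch = np.eye(3)                          # 3 independent trials

# Full covariance: Sigma = Sigma_epoch (x) Sigma_time (x) Sigma_space
# (the ordering of the Kronecker factors is a modeling convention).
sigma = np.kron(sigma_epoch, np.kron(sigma_time, sigma_space))
assert sigma.shape == (120, 120)
assert np.allclose(sigma, sigma.T)
assert np.all(np.linalg.eigvalsh(sigma) > 0)
```

The appeal of the Kronecker structure is the parameter count: three small factors (4x4, 10x10, 3x3) stand in for a free 120x120 matrix, which is what makes maximum likelihood estimation tractable for single-subject EEG/MEG noise.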

  17. Using eddy covariance of CO2, 13CO2 and CH4, continuous soil respiration measurements, and PhenoCams to constrain a process-based biogeochemical model for carbon market-funded wetland restoration

    Science.gov (United States)

    Oikawa, P. Y.; Baldocchi, D. D.; Knox, S. H.; Sturtevant, C. S.; Verfaillie, J. G.; Dronova, I.; Jenerette, D.; Poindexter, C.; Huang, Y. W.

    2015-12-01

    We use multiple data streams in a model-data fusion approach to reduce uncertainty in predicting CO2 and CH4 exchange in drained and flooded peatlands. Drained peatlands in the Sacramento-San Joaquin River Delta, California are a strong source of CO2 to the atmosphere and flooded peatlands or wetlands are a strong CO2 sink. However, wetlands are also large sources of CH4 that can offset the greenhouse gas mitigation potential of wetland restoration. Reducing uncertainty in model predictions of annual CO2 and CH4 budgets is critical for including wetland restoration in Cap-and-Trade programs. We have developed and parameterized the Peatland Ecosystem Photosynthesis, Respiration, and Methane Transport model (PEPRMT) in a drained agricultural peatland and a restored wetland. Both ecosystem respiration (Reco) and CH4 production are functions of two soil carbon (C) pools (i.e., recently fixed C and soil organic C), temperature, and water table height. Photosynthesis is predicted using a light use efficiency model. To estimate parameters we use a Markov chain Monte Carlo approach with an adaptive Metropolis-Hastings algorithm. Multiple data streams are used to constrain model parameters including eddy covariance of CO2, 13CO2 and CH4, continuous soil respiration measurements and digital photography. Digital photography is used to estimate leaf area index, an important input variable for the photosynthesis model. Soil respiration and 13CO2 fluxes allow partitioning of eddy covariance data between Reco and photosynthesis. Partitioned fluxes of CO2 with associated uncertainty are used to parameterize the Reco and photosynthesis models within PEPRMT. Overall, PEPRMT model performance is high. For example, we observe high agreement between modeled and observed partitioned Reco (r2 = 0.68; slope = 1; RMSE = 0.59 g C-CO2 m-2 d-1). 
Model validation demonstrated the model's ability to accurately predict annual budgets of CO2 and CH4 in a wetland system (within 14% and 1

  18. Proofs of Contracted Length Non-covariance

    International Nuclear Information System (INIS)

    Strel'tsov, V.N.

    1994-01-01

    Different proofs of contracted length non-covariance are discussed. The approach based on establishing the inconstancy of the interval (its dependence on velocity) seems the most convincing one. It is stressed that the known non-covariance of the electromagnetic field energy and momentum of a moving charge ('the 4/3 problem') is a direct consequence of contracted length non-covariance. 8 refs

  19. Structural Analysis of Covariance and Correlation Matrices.

    Science.gov (United States)

    Joreskog, Karl G.

    1978-01-01

    A general approach to analysis of covariance structures is considered, in which the variances and covariances or correlations of the observed variables are directly expressed in terms of the parameters of interest. The statistical problems of identification, estimation and testing of such covariance or correlation structures are discussed.…

  20. Construction of covariance matrix for experimental data

    International Nuclear Information System (INIS)

    Liu Tingjin; Zhang Jianhua

    1992-01-01

    For evaluators and experimenters, the information is complete only when the covariance matrix is given. The covariance matrix of indirectly measured data has been constructed and discussed. As an example, the covariance matrix of the 23Na(n, 2n) cross section is constructed. A reasonable result is obtained.

  1. Behavior of genetic (co)variance components in populations simulated from non-additive genetic models of dominance and overdominance

    Directory of Open Access Journals (Sweden)

    Elizângela Emídio Cunha

    2010-09-01

    Full Text Available The aim of this work was to investigate the short-term behavior of the genetic variability of quantitative traits simulated from models with additive and non-additive gene action in control and phenotypic-selection populations. Both traits, one with low (h² = 0.10) and the other with high (h² = 0.60) heritability, were controlled by 600 biallelic loci. From a standard genome, six genetic models were obtained, which included: only additive gene effects; complete and positive dominance for 25, 50, 75 and 100% of the loci; and positive overdominance for 50% of the loci. In the models with dominance deviation, the additive allelic effects were also included for 100% of the loci. Genetic variability was quantified from generation to generation using the genetic variance components. In the absence of selection, genotypic and additive genetic variances were higher. In the models with non-additive gene action, a small-magnitude covariance component arose between the additive and dominance genetic effects, whose correlation tended to be positive in the control population and negative under selection. Dominance variance increased as the number of loci with dominance deviation or the value of the deviation increased, implying an increase in genotypic and additive genetic variances across the successive models.

  2. Equipment for extracting and conveying stratified minerals

    Energy Technology Data Exchange (ETDEWEB)

    Blumenthal, G.; Kunzer, H.; Plaga, K.

    1991-08-14

    This invention relates to equipment for extracting stratified minerals and conveying the said minerals along the working face, comprising a trough shaped conveyor run assembled from lengths, a troughed extraction run in lengths matching the lengths of conveyor troughing, which is linked to the top edge of the working face side of the conveyor troughing with freedom to swivel vertically, and a positively guided chain carrying extraction tools and scrapers along the conveyor and extraction runs.

  3. Inviscid incompressible limits of strongly stratified fluids

    Czech Academy of Sciences Publication Activity Database

    Feireisl, Eduard; Jin, B.J.; Novotný, A.

    2014-01-01

    Roč. 89, 3-4 (2014), s. 307-329 ISSN 0921-7134 R&D Projects: GA ČR GA201/09/0917 Institutional support: RVO:67985840 Keywords : compressible Navier-Stokes system * anelastic approximation * stratified fluid Subject RIV: BA - General Mathematics Impact factor: 0.528, year: 2014 http://iospress.metapress.com/content/d71255745tl50125/?p=969b60ae82634854ab8bd25505ce1f71&pi=3

  4. Ethanol dehydration to ethylene in a stratified autothermal millisecond reactor.

    Science.gov (United States)

    Skinner, Michael J; Michor, Edward L; Fan, Wei; Tsapatsis, Michael; Bhan, Aditya; Schmidt, Lanny D

    2011-08-22

    The concurrent decomposition and deoxygenation of ethanol was accomplished in a stratified reactor with 50-80 ms contact times. The stratified reactor comprised an upstream oxidation zone that contained Pt-coated Al2O3 beads and a downstream dehydration zone consisting of H-ZSM-5 zeolite films deposited on Al2O3 monoliths. Ethanol conversion, product selectivity, and reactor temperature profiles were measured for a range of fuel:oxygen ratios for two autothermal reactor configurations using two different sacrificial fuel mixtures: a parallel hydrogen-ethanol feed system and a series methane-ethanol feed system. Increasing the amount of oxygen relative to the fuel resulted in a monotonic increase in ethanol conversion in both reaction zones. The majority of the converted carbon was in the form of ethylene, where the ethanol carbon-carbon bonds stayed intact while the oxygen was removed. Over 90% yield of ethylene was achieved by using methane as a sacrificial fuel. These results demonstrate that noble metals can be successfully paired with zeolites to create a stratified autothermal reactor capable of removing oxygen from biomass model compounds in a compact, continuous flow system that can be configured to have multiple feed inputs, depending on process restrictions. Copyright © 2011 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  5. Lorentz covariant theory of gravitation

    International Nuclear Information System (INIS)

    Fagundes, H.V.

    1974-12-01

    An alternative method for the calculation of second-order effects, like the secular shift of Mercury's perihelion, is developed. This method uses the basic ideas of Thirring combined with the more mathematical approach of Feynman. In the case of a static source, the treatment used is greatly simplified. Besides, the Einstein-Infeld-Hoffmann Lagrangian for a system of two particles, and the spin-orbit and spin-spin interactions of two particles with classical spin, i.e., internal angular momentum in Møller's sense, are obtained from the Lorentz covariant theory

  6. Covariant gauges at finite temperature

    CERN Document Server

    Landshoff, Peter V

    1992-01-01

    A prescription is presented for real-time finite-temperature perturbation theory in covariant gauges, in which only the two physical degrees of freedom of the gauge-field propagator acquire thermal parts. The propagators for the unphysical degrees of freedom of the gauge field, and for the Faddeev-Popov ghost field, are independent of temperature. This prescription is applied to the calculation of the one-loop gluon self-energy and the two-loop interaction pressure, and is found to be simpler to use than the conventional one.

  7. Large Covariance Estimation by Thresholding Principal Orthogonal Complements

    Science.gov (United States)

    Fan, Jianqing; Liao, Yuan; Mincheva, Martina

    2012-01-01

    This paper deals with the estimation of a high-dimensional covariance with a conditional sparsity structure and fast-diverging eigenvalues. By assuming sparse error covariance matrix in an approximate factor model, we allow for the presence of some cross-sectional correlation even after taking out common but unobservable factors. We introduce the Principal Orthogonal complEment Thresholding (POET) method to explore such an approximate factor structure with sparsity. The POET estimator includes the sample covariance matrix, the factor-based covariance matrix (Fan, Fan, and Lv, 2008), the thresholding estimator (Bickel and Levina, 2008) and the adaptive thresholding estimator (Cai and Liu, 2011) as specific examples. We provide mathematical insights when the factor analysis is approximately the same as the principal component analysis for high-dimensional data. The rates of convergence of the sparse residual covariance matrix and the conditional sparse covariance matrix are studied under various norms. It is shown that the impact of estimating the unknown factors vanishes as the dimensionality increases. The uniform rates of convergence for the unobserved factors and their factor loadings are derived. The asymptotic results are also verified by extensive simulation studies. Finally, a real data application on portfolio allocation is presented. PMID:24348088

  8. Massive data compression for parameter-dependent covariance matrices

    Science.gov (United States)

    Heavens, Alan F.; Sellentin, Elena; de Mijolla, Damien; Vianello, Alvise

    2017-12-01

    We show how the massive data compression algorithm MOPED can be used to reduce, by orders of magnitude, the number of simulated data sets which are required to estimate the covariance matrix required for the analysis of Gaussian-distributed data. This is relevant when the covariance matrix cannot be calculated directly. The compression is especially valuable when the covariance matrix varies with the model parameters. In this case, it may be prohibitively expensive to run enough simulations to estimate the full covariance matrix throughout the parameter space. This compression may be particularly valuable for the next generation of weak lensing surveys, such as proposed for Euclid and the Large Synoptic Survey Telescope, for which the number of summary data (such as band power or shear correlation estimates) is very large, ∼10⁴, due to the large number of tomographic redshift bins which the data will be divided into. In the pessimistic case where the covariance matrix is estimated separately for all points in a Markov chain Monte Carlo analysis, this may require an unfeasible 10⁹ simulations. We show here that MOPED can reduce this number by a factor of 1000, or a factor of ∼10⁶ if some regularity in the covariance matrix is assumed, reducing the number of simulations required to a manageable 10³, making an otherwise intractable analysis feasible.

  9. Large Covariance Estimation by Thresholding Principal Orthogonal Complements.

    Science.gov (United States)

    Fan, Jianqing; Liao, Yuan; Mincheva, Martina

    2013-09-01

    This paper deals with the estimation of a high-dimensional covariance with a conditional sparsity structure and fast-diverging eigenvalues. By assuming sparse error covariance matrix in an approximate factor model, we allow for the presence of some cross-sectional correlation even after taking out common but unobservable factors. We introduce the Principal Orthogonal complEment Thresholding (POET) method to explore such an approximate factor structure with sparsity. The POET estimator includes the sample covariance matrix, the factor-based covariance matrix (Fan, Fan, and Lv, 2008), the thresholding estimator (Bickel and Levina, 2008) and the adaptive thresholding estimator (Cai and Liu, 2011) as specific examples. We provide mathematical insights when the factor analysis is approximately the same as the principal component analysis for high-dimensional data. The rates of convergence of the sparse residual covariance matrix and the conditional sparse covariance matrix are studied under various norms. It is shown that the impact of estimating the unknown factors vanishes as the dimensionality increases. The uniform rates of convergence for the unobserved factors and their factor loadings are derived. The asymptotic results are also verified by extensive simulation studies. Finally, a real data application on portfolio allocation is presented.
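
    The POET construction in this record lends itself to a compact sketch. The following is an illustrative reimplementation under simplifying assumptions (hard thresholding with a single constant threshold tau, and the factor number K taken as known), not the authors' code.

```python
import numpy as np

def poet(X, K, tau):
    """POET sketch: low-rank part from the leading K principal components of
    the sample covariance, plus a thresholded principal orthogonal complement.
    X is an n x p data matrix, tau a hard threshold for residual covariances."""
    S = np.cov(X, rowvar=False)               # p x p sample covariance
    vals, vecs = np.linalg.eigh(S)            # eigenvalues in ascending order
    vals, vecs = vals[::-1], vecs[:, ::-1]    # reorder to descending
    low_rank = (vecs[:, :K] * vals[:K]) @ vecs[:, :K].T
    resid = S - low_rank                      # principal orthogonal complement
    # Hard-threshold the residual covariance, keeping the diagonal untouched.
    R = np.where(np.abs(resid) >= tau, resid, 0.0)
    np.fill_diagonal(R, np.diag(resid))
    return low_rank + R

# Toy approximate factor model: 2 latent factors driving 20 variables.
rng = np.random.default_rng(0)
factors = rng.normal(size=(500, 2))
loadings = rng.normal(size=(20, 2))
X = factors @ loadings.T + rng.normal(scale=0.5, size=(500, 20))

sigma_hat = poet(X, K=2, tau=0.1)
print(sigma_hat.shape)  # (20, 20)
```

    With K = 0 this reduces to pure thresholding of the sample covariance, and with tau = 0 it returns the sample covariance itself, mirroring the special cases listed in the abstract.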

  10. Piecewise linear regression splines with hyperbolic covariates

    International Nuclear Information System (INIS)

    Cologne, John B.; Sposto, Richard

    1992-09-01

    Consider the problem of fitting a curve to data that exhibit a multiphase linear response with smooth transitions between phases. We propose substituting hyperbolas as covariates in piecewise linear regression splines to obtain curves that are smoothly joined. The method provides an intuitive and easy way to extend the two-phase linear hyperbolic response models of Griffiths and Miller, and of Watts and Bacon, to accommodate more than two linear segments. The resulting regression spline with hyperbolic covariates may be fit by nonlinear regression methods to estimate the degree of curvature between adjoining linear segments. The added complexity of fitting nonlinear, as opposed to linear, regression models is not great. The extra effort is particularly worthwhile when investigators are unwilling to assume that the slope of the response changes abruptly at the join points. We can also estimate the join points (the values of the abscissas where the linear segments would intersect if extrapolated) if their number and approximate locations may be presumed known. An example using data on changing age at menarche in a cohort of Japanese women illustrates the use of the method for exploratory data analysis. (author)
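
    The hyperbolic substitution can be made concrete with a small sketch (the knot location and curvature parameter are fixed here for illustration; in practice both would be estimated by nonlinear regression as the abstract describes).

```python
import numpy as np

def hyper(x, knot, gamma):
    """Hyperbolic covariate: a smooth version of the broken-stick basis
    max(0, x - knot); gamma controls the curvature at the join point."""
    z = x - knot
    return 0.5 * (z + np.sqrt(z ** 2 + 4.0 * gamma ** 2))

x = np.linspace(0.0, 10.0, 201)

# As gamma -> 0 the hyperbola collapses onto the sharp broken-stick basis.
print(np.max(np.abs(hyper(x, 5.0, 1e-8) - np.maximum(0.0, x - 5.0))))  # ~0

# With gamma held fixed, the two-phase model b0 + b1*x + b2*hyper(x) is
# linear in (b0, b1, b2) and can be fit by ordinary least squares.
rng = np.random.default_rng(1)
y = 1.0 + 0.5 * x + 1.5 * hyper(x, 5.0, 0.3) + rng.normal(scale=0.1, size=x.size)
design = np.column_stack([np.ones_like(x), x, hyper(x, 5.0, 0.3)])
beta, *_ = np.linalg.lstsq(design, y, rcond=None)
print(beta)  # close to (1.0, 0.5, 1.5)
```

    Estimating gamma (and, where permitted, the knot) on top of the linear coefficients is what turns this into the nonlinear regression problem the record refers to.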

  11. Nitrogen transformations in stratified aquatic microbial ecosystems

    DEFF Research Database (Denmark)

    Revsbech, Niels Peter; Risgaard-Petersen, N.; Schramm, Andreas

    2006-01-01

    New analytical methods such as advanced molecular techniques and microsensors have resulted in new insights about how nitrogen transformations in stratified microbial systems such as sediments and biofilms are regulated at a µm-mm scale. A large and ever-expanding knowledge base about nitrogen fixation, nitrification, denitrification, and dissimilatory reduction of nitrate to ammonium, and about the microorganisms performing the processes, has been produced by use of these techniques. During the last decade the discovery of anammox bacteria and migrating, nitrate-accumulating bacteria performing dissimilatory reduction of nitrate to ammonium have given new dimensions to the understanding of nitrogen cycling in nature, and the occurrence of these organisms and processes in stratified microbial communities will be described in detail.

  12. Large eddy simulation of stably stratified turbulence

    International Nuclear Information System (INIS)

    Shen Zhi; Zhang Zhaoshun; Cui Guixiang; Xu Chunxiao

    2011-01-01

    Stably stratified turbulence is a common phenomenon in the atmosphere and ocean. In this paper large eddy simulation is utilized to investigate homogeneous stably stratified turbulence numerically at Reynolds number Re = uL/ν = 10²∼10³ and Froude number Fr = u/NL = 10⁻²∼10⁰, in which u is the root mean square of the velocity fluctuations, L is the integral scale and N is the Brunt-Väisälä frequency. Three sets of computation cases are designed with different initial conditions, namely isotropic turbulence, Taylor-Green vortex and internal waves, to investigate the statistical properties from different origins. The computed horizontal and vertical energy spectra are consistent with observations in the atmosphere and ocean when the composite parameter ReFr² is greater than O(1). It has also been found in this paper that stratified turbulence can develop under different initial velocity conditions and that internal wave energy dominates in the developed stably stratified turbulence.

  13. Covariance Evaluation Methodology for Neutron Cross Sections

    Energy Technology Data Exchange (ETDEWEB)

    Herman, M.; Arcilla, R.; Mattoon, C.M.; Mughabghab, S.F.; Oblozinsky, P.; Pigni, M.; Pritychenko, B.; Sonzogni, A.A.

    2008-09-01

    We present the NNDC-BNL methodology for estimating neutron cross section covariances in thermal, resolved resonance, unresolved resonance and fast neutron regions. The three key elements of the methodology are Atlas of Neutron Resonances, nuclear reaction code EMPIRE, and the Bayesian code implementing Kalman filter concept. The covariance data processing, visualization and distribution capabilities are integral components of the NNDC methodology. We illustrate its application on examples including relatively detailed evaluation of covariances for two individual nuclei and massive production of simple covariance estimates for 307 materials. Certain peculiarities regarding evaluation of covariances for resolved resonances and the consistency between resonance parameter uncertainties and thermal cross section uncertainties are also discussed.

  14. Integrated age-structured length-based stock assessment model with uncertain process variances, structural uncertainty and environmental covariates: case of Central Baltic herring

    DEFF Research Database (Denmark)

    Mäntyniemi, Samu; Uusitalo, Laura; Peltonen, Heikki

    2013-01-01

    We developed a generic, age-structured, state-space stock assessment model that can be used as a platform for including information elicited from stakeholders. The model tracks the mean size-at-age and then uses it to explain rates of natural and fishing mortality. The fishery selectivity is divided into two components, which makes it possible to model the active seeking of the fleet for certain sizes of fish, as well as the selectivity of the gear itself. The model can account for uncertainties that are not currently accounted for in state-of-the-art models for integrated assessments: (i) the form of the stock–recruitment function is considered uncertain and is accounted for by using Bayesian model averaging; (ii) in addition to recruitment variation, process variation in natural mortality, growth parameters, and fishing mortality can also be treated as uncertain parameters...

  15. Sparse reduced-rank regression with covariance estimation

    KAUST Repository

    Chen, Lisha

    2014-12-08

    Improving the predicting performance of the multiple response regression compared with separate linear regressions is a challenging question. On the one hand, it is desirable to seek model parsimony when facing a large number of parameters. On the other hand, for certain applications it is necessary to take into account the general covariance structure for the errors of the regression model. We assume a reduced-rank regression model and work with the likelihood function with general error covariance to achieve both objectives. In addition we propose to select relevant variables for reduced-rank regression by using a sparsity-inducing penalty, and to estimate the error covariance matrix simultaneously by using a similar penalty on the precision matrix. We develop a numerical algorithm to solve the penalized regression problem. In a simulation study and real data analysis, the new method is compared with two recent methods for multivariate regression and exhibits competitive performance in prediction and variable selection.

  16. Sparse reduced-rank regression with covariance estimation

    KAUST Repository

    Chen, Lisha; Huang, Jianhua Z.

    2014-01-01

    Improving the predicting performance of the multiple response regression compared with separate linear regressions is a challenging question. On the one hand, it is desirable to seek model parsimony when facing a large number of parameters. On the other hand, for certain applications it is necessary to take into account the general covariance structure for the errors of the regression model. We assume a reduced-rank regression model and work with the likelihood function with general error covariance to achieve both objectives. In addition we propose to select relevant variables for reduced-rank regression by using a sparsity-inducing penalty, and to estimate the error covariance matrix simultaneously by using a similar penalty on the precision matrix. We develop a numerical algorithm to solve the penalized regression problem. In a simulation study and real data analysis, the new method is compared with two recent methods for multivariate regression and exhibits competitive performance in prediction and variable selection.
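
    The reduced-rank backbone of this record can be sketched in a few lines. This is the classical unpenalized reduced-rank regression under an identity error covariance, not the sparsity-penalized, covariance-estimating method the record proposes; the toy dimensions and names are assumptions for illustration.

```python
import numpy as np

def reduced_rank_regression(X, Y, rank):
    """Classical reduced-rank regression sketch (identity error covariance):
    project the OLS coefficient matrix onto the leading right singular
    subspace of the fitted values, constraining the coefficient rank."""
    B_ols, *_ = np.linalg.lstsq(X, Y, rcond=None)
    _, _, Vt = np.linalg.svd(X @ B_ols, full_matrices=False)
    V = Vt[:rank].T                 # leading response-space directions
    return B_ols @ V @ V.T          # p x q coefficient matrix of rank <= rank

# Toy multivariate regression with a true rank-2 coefficient matrix.
rng = np.random.default_rng(2)
n, p, q, r = 200, 8, 5, 2
A = rng.normal(size=(p, r))
C = rng.normal(size=(r, q))
X = rng.normal(size=(n, p))
Y = X @ (A @ C) + 0.1 * rng.normal(size=(n, q))

B_hat = reduced_rank_regression(X, Y, rank=2)
print(np.linalg.matrix_rank(B_hat))  # 2
```

    The record's method augments this setup with a sparsity-inducing penalty on the coefficients and a matching penalty on the error precision matrix, estimating both jointly rather than in the two-step fashion shown here.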

  17. Covariance and correlation estimation in electron-density maps.

    Science.gov (United States)

    Altomare, Angela; Cuocci, Corrado; Giacovazzo, Carmelo; Moliterni, Anna; Rizzi, Rosanna

    2012-03-01

    Quite recently two papers have been published [Giacovazzo & Mazzone (2011). Acta Cryst. A67, 210-218; Giacovazzo et al. (2011). Acta Cryst. A67, 368-382] which calculate the variance in any point of an electron-density map at any stage of the phasing process. The main aim of the papers was to associate a standard deviation to each pixel of the map, in order to obtain a better estimate of the map reliability. This paper deals with the covariance estimate between points of an electron-density map in any space group, centrosymmetric or non-centrosymmetric, regardless of the correlation between the model and target structures. The aim is to verify whether the electron density in one point of the map is amplified or depressed as an effect of the electron density in one or more other points of the map. High values of the covariances are usually connected with undesired features of the map. The phases are the primitive random variables of our probabilistic model; the covariance changes with the quality of the model and therefore with the quality of the phases. The conclusive formulas show that the covariance is also influenced by the Patterson map. Uncertainty on measurements may influence the covariance, particularly in the final stages of the structure refinement; a general formula is obtained taking into account both phase and measurement uncertainty, valid at any stage of the crystal structure solution.

  18. Fast Component Pursuit for Large-Scale Inverse Covariance Estimation.

    Science.gov (United States)

    Han, Lei; Zhang, Yu; Zhang, Tong

    2016-08-01

    The maximum likelihood estimation (MLE) for the Gaussian graphical model, which is also known as the inverse covariance estimation problem, has gained increasing interest recently. Most existing works assume that inverse covariance estimators contain sparse structure and then construct models with ℓ1 regularization. In this paper, different from existing works, we study the inverse covariance estimation problem from another perspective by efficiently modeling the low-rank structure in the inverse covariance, which is assumed to be a combination of a low-rank part and a diagonal matrix. One motivation for this assumption is that the low-rank structure is common in many applications including climate and financial analysis, and another is that such an assumption can reduce the computational complexity when computing its inverse. Specifically, we propose an efficient COmponent Pursuit (COP) method to obtain the low-rank part, where each component can be sparse. For optimization, the COP method greedily learns a rank-one component in each iteration by maximizing the log-likelihood. Moreover, the COP algorithm enjoys several appealing properties including the existence of an efficient solution in each iteration and the theoretical guarantee on the convergence of this greedy approach. Experiments on large-scale synthetic and real-world datasets including thousands of millions of variables show that the COP method is faster than the state-of-the-art techniques for the inverse covariance estimation problem when achieving comparable log-likelihood on test data.

  19. Assessment and simulation of global terrestrial latent heat flux by synthesis of CMIP5 climate models and surface eddy covariance observations

    Science.gov (United States)

    Yunjun Yao; Shunlin Liang; Xianglan Li; Shaomin Liu; Jiquan Chen; Xiaotong Zhang; Kun Jia; Bo Jiang; Xianhong Xie; Simon Munier; Meng Liu; Jian Yu; Anders Lindroth; Andrej Varlagin; Antonio Raschi; Asko Noormets; Casimiro Pio; Georg Wohlfahrt; Ge Sun; Jean-Christophe Domec; Leonardo Montagnani; Magnus Lund; Moors Eddy; Peter D. Blanken; Thomas Grunwald; Sebastian Wolf; Vincenzo Magliulo

    2016-01-01

    The latent heat flux (LE) between the terrestrial biosphere and atmosphere is a major driver of the global hydrological cycle. In this study, we evaluated LE simulations by 45 general circulation models (GCMs) in the Coupled Model Intercomparison Project Phase 5 (CMIP5) by a comparison...

  20. Estimation of Covariance Matrix on Bi-Response Longitudinal Data Analysis with Penalized Spline Regression

    Science.gov (United States)

    Islamiyati, A.; Fatmawati; Chamidah, N.

    2018-03-01

    In longitudinal data with bi-response, correlation occurs in the measurements both between the subjects of observation and between the responses. This causes auto-correlation of the errors, which can be handled by using a covariance matrix. In this article, we estimate the covariance matrix based on the penalized spline regression model. Penalized splines involve knot points and smoothing parameters simultaneously in controlling the smoothness of the curve. Based on our simulation study, the estimated regression model of the weighted penalized spline with a covariance matrix gives a smaller error value than the model without a covariance matrix.