Partially linear varying coefficient models stratified by a functional covariate
Maity, Arnab; Huang, Jianhua Z
2012-10-01
We consider the problem of estimation in semiparametric varying coefficient models where the covariate modifying the varying coefficients is functional and is modeled nonparametrically. We develop a kernel-based estimator of the nonparametric component and a profiling estimator of the parametric component of the model and derive their asymptotic properties. Specifically, we show the consistency of the nonparametric functional estimates and derive the asymptotic expansion of the estimates of the parametric component. We illustrate the performance of our methodology using a simulation study and a real data application.
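The kernel-based estimation described above can be sketched in a much-simplified form. Below is a minimal local-constant (kernel-weighted least squares) estimator of a varying coefficient, using a scalar index in place of the paper's functional covariate; the simulated model and all names are illustrative, not the authors' actual setup.

```python
import numpy as np

def local_beta(u, U, X, Y, h):
    """Local-constant estimate of the varying coefficient beta(u):
    weighted least squares with Gaussian kernel weights K_h(U_i - u)."""
    w = np.exp(-0.5 * ((U - u) / h) ** 2)      # kernel weights
    XtWX = (X * w[:, None]).T @ X
    XtWY = (X * w[:, None]).T @ Y
    return np.linalg.solve(XtWX, XtWY)

rng = np.random.default_rng(0)
n = 500
U = rng.uniform(0, 1, n)                       # index modifying the coefficients
X = np.column_stack([np.ones(n), rng.normal(size=n)])
beta = lambda u: np.column_stack([np.sin(2 * np.pi * u), np.cos(2 * np.pi * u)])
Y = (X * beta(U)).sum(axis=1) + 0.1 * rng.normal(size=n)

b_hat = local_beta(0.25, U, X, Y, h=0.05)
print(b_hat)   # should be close to (sin(pi/2), cos(pi/2)) = (1, 0)
```

In a profiling scheme, an estimator of this form for the nonparametric part would be plugged into the criterion for the parametric component.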
Hussey, Michael A; Koch, Gary G; Preisser, John S; Saville, Benjamin R
2016-01-01
Time-to-event or dichotomous outcomes in randomized clinical trials are often analyzed using the Cox proportional hazards model or conditional logistic regression, respectively, to obtain covariate-adjusted log hazard (or odds) ratios. Nonparametric randomization-based analysis of covariance (NPANCOVA) can be applied to unadjusted log hazard (or odds) ratios estimated from a model containing treatment as the only explanatory variable. The resulting adjusted estimates are stratified population-averaged treatment effects; they require only a valid randomization to the two treatment groups and avoid key modeling assumptions (e.g., proportional hazards in the case of a Cox model) for the adjustment variables. The methodology has application in the regulatory environment, where such assumptions cannot be verified a priori. Application of the methodology is illustrated through three examples on real data from two randomized trials.
Szekeres models: a covariant approach
Apostolopoulos, Pantelis S
2016-01-01
We exploit the 1+1+2 formalism to covariantly describe the inhomogeneous and anisotropic Szekeres models. It is shown that an average scale length can be defined covariantly which satisfies a 2d equation of motion driven by the effective gravitational mass (EGM) contained in the dust cloud. The contributions to the EGM are encoded in the energy density of the dust fluid and the free gravitational field $E_{ab}$. In addition, the notions of the Apparent and Absolute Apparent Horizons are briefly discussed, and we give an alternative gauge-invariant form to define them in terms of the kinematical variables of the spacelike congruences. We argue that the proposed program can be used to express the Sachs optical equations in covariant form and to analyze the confrontation of a spatially inhomogeneous irrotational overdense fluid model with the observational data.
Multivariate covariance generalized linear models
DEFF Research Database (Denmark)
Bonat, W. H.; Jørgensen, Bent
2016-01-01
We propose a general framework for non-normal multivariate data analysis called multivariate covariance generalized linear models, designed to handle multivariate response variables, along with a wide range of temporal and spatial correlation structures defined in terms of a covariance link...... function combined with a matrix linear predictor involving known matrices. The method is motivated by three data examples that are not easily handled by existing methods. The first example concerns multivariate count data, the second involves response variables of mixed types, combined with repeated...... are fitted by using an efficient Newton scoring algorithm based on quasi-likelihood and Pearson estimating functions, using only second-moment assumptions. This provides a unified approach to a wide variety of types of response variables and covariance structures, including multivariate extensions...
Covariance structure models of expectancy.
Henderson, M J; Goldman, M S; Coovert, M D; Carnevalla, N
1994-05-01
Antecedent variables under the broad categories of genetic, environmental and cultural influences have been linked to the risk for alcohol abuse. Such risk factors have not been shown to result in high correlations with alcohol consumption, leaving unclear the mechanism by which these variables lead to increased risk. This study employed covariance structure modeling to examine the mediational influence of stored information in memory about alcohol (alcohol expectancies) in relation to two biologically and environmentally driven antecedent variables, family history of alcohol abuse and a sensation-seeking temperament, in a college population. We also examined the effect of criterion contamination on the relationship between sensation-seeking and alcohol consumption. Results indicated that alcohol expectancy acts as a significant partial mediator of the relationship between sensation-seeking and consumption, that family history of alcohol abuse is not related to drinking outcome, and that overlap in items on sensation-seeking and alcohol consumption measures may falsely inflate their relationship.
Wishart distributions for decomposable covariance graph models
Khare, Kshitij; 10.1214/10-AOS841
2011-01-01
Gaussian covariance graph models encode marginal independence among the components of a multivariate random vector by means of a graph $G$. These models are distinctly different from the traditional concentration graph models (often also referred to as Gaussian graphical models or covariance selection models) since the zeros in the parameter are now reflected in the covariance matrix $\\Sigma$, as compared to the concentration matrix $\\Omega =\\Sigma^{-1}$. The parameter space of interest for covariance graph models is the cone $P_G$ of positive definite matrices with fixed zeros corresponding to the missing edges of $G$. As in Letac and Massam [Ann. Statist. 35 (2007) 1278--1323], we consider the case where $G$ is decomposable. In this paper, we construct on the cone $P_G$ a family of Wishart distributions which serve a similar purpose in the covariance graph setting as those constructed by Letac and Massam [Ann. Statist. 35 (2007) 1278--1323] and Dawid and Lauritzen [Ann. Statist. 21 (1993) 1272--1317] do in ...
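For Gaussian vectors, the distinction this abstract draws between zeros in $\Sigma$ (marginal independence, covariance graph models) and zeros in $\Omega = \Sigma^{-1}$ (conditional independence, concentration graph models) can be checked numerically. A minimal sketch with an illustrative 3-variable example:

```python
import numpy as np

# Covariance graph: the missing edge (1,3) means Sigma[0,2] = 0,
# i.e. X1 and X3 are *marginally* independent (for a Gaussian vector).
Sigma = np.array([[1.0, 0.5, 0.0],
                  [0.5, 1.0, 0.4],
                  [0.0, 0.4, 1.0]])
Omega = np.linalg.inv(Sigma)   # concentration matrix

print(Sigma[0, 2])   # 0.0 -> marginal independence of X1 and X3
print(Omega[0, 2])   # nonzero -> X1 and X3 remain conditionally dependent given X2
```

This is why the parameter space here is the cone $P_G$ of positive definite matrices with fixed zeros in $\Sigma$, a genuinely different object from the concentration-graph cone.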
Equivalent models in covariance structure analysis
Luijben, T. C. W.
1991-01-01
Defining equivalent models as those that reproduce the same set of covariance matrices, necessary and sufficient conditions are stated for the local equivalence of two expanded identified models M1 and M2 when fitting the more restricted model M0. Assuming several regularity conditions, the rank def
Drainage in a model stratified porous medium
Datta, Sujit S; 10.1209/0295-5075/101/14002
2013-01-01
We show that when a non-wetting fluid drains a stratified porous medium at sufficiently small capillary numbers Ca, it flows only through the coarsest stratum of the medium; by contrast, above a threshold Ca, the non-wetting fluid is also forced laterally, into part of the adjacent, finer strata. The spatial extent of this partial invasion increases with Ca. We quantitatively understand this behavior by balancing the stratum-scale viscous pressure driving the flow with the capillary pressure required to invade individual pores. Because geological formations are frequently stratified, we anticipate that our results will be relevant to a number of important applications, including understanding oil migration, preventing groundwater contamination, and sub-surface CO$_{2}$ storage.
Adaptive Covariance Estimation with model selection
Biscay, Rolando; Loubes, Jean-Michel
2012-01-01
We provide in this paper a fully adaptive penalized procedure to select a covariance among a collection of models, given i.i.d. replications of the process observed at fixed observation points. For this we generalize previous results of Bigot et al. and propose to use a data-driven penalty to obtain an oracle inequality for the estimator. We prove that this method is an extension to the matricial regression model of the work by Baraud.
Model selection for Poisson processes with covariates
Sart, Mathieu
2011-01-01
We observe $n$ inhomogeneous Poisson processes with covariates and aim at estimating their intensities. To handle this problem, we assume that the intensity of each Poisson process is of the form $s(\cdot, x)$, where $x$ is the covariate and $s$ is an unknown function. We propose a model selection approach where the models are used to approximate the multivariate function $s$. We show that our estimator satisfies an oracle-type inequality under very weak assumptions both on the intensities and the models. By using a Hellinger-type loss, we establish non-asymptotic risk bounds and specify them under various kinds of assumptions on the target function $s$, such as being smooth or composite. Besides, we show that our estimation procedure is robust with respect to these assumptions.
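An inhomogeneous Poisson process with a covariate-dependent intensity of the form $s(\cdot, x)$ can be simulated by Lewis-Shedler thinning; the sketch below is purely illustrative (the intensity function is invented, not from the paper).

```python
import numpy as np

def thinning(intensity, T, lam_max, rng):
    """Simulate an inhomogeneous Poisson process on [0, T] by thinning:
    propose homogeneous events at rate lam_max, accept each with
    probability intensity(t) / lam_max (requires intensity <= lam_max)."""
    t, events = 0.0, []
    while True:
        t += rng.exponential(1.0 / lam_max)
        if t > T:
            return np.array(events)
        if rng.uniform() < intensity(t) / lam_max:
            events.append(t)

rng = np.random.default_rng(1)
# A covariate-dependent intensity s(t, x), in the spirit of the abstract:
s = lambda t, x: 5.0 * (1 + np.sin(2 * np.pi * t)) * np.exp(x)
x = 0.3
pts = thinning(lambda t: s(t, x), T=10.0, lam_max=5.0 * 2 * np.exp(x), rng=rng)
print(len(pts))
```

Estimation then amounts to recovering $s$ from several such realizations observed at different covariate values.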
Fuel Burning Rate Model for Stratified Charge Engine
Institute of Scientific and Technical Information of China (English)
SONG Jin'ou; JIANG Zejun; YAO Chunde; WANG Hongfu
2006-01-01
A zero-dimensional single-zone double-curve model is presented to predict the fuel burning rate in stratified charge engines, and it is integrated with GT-Power to predict the overall performance of such engines. The model consists of two exponential functions for calculating the fuel burning rate in different charge zones. The model factors are determined by a non-linear curve fitting technique, based on experimental data obtained from 30 cases at middle and low loads. The results show good agreement between the measured and calculated cylinder pressures, with a deviation of less than 5%. The zero-dimensional single-zone double-curve model is thus successful in combustion modeling for stratified charge engines.
Inferences from Genomic Models in Stratified Populations
DEFF Research Database (Denmark)
Janss, Luc; de los Campos, Gustavo; Sheehan, Nuala
2012-01-01
Unaccounted population stratification can lead to spurious associations in genome-wide association studies (GWAS) and in this context several methods have been proposed to deal with this problem. An alternative line of research uses whole-genome random regression (WGRR) models that fit all markers...... are unsatisfactory. Here we address this problem and describe a reparameterization of a WGRR model, based on an eigenvalue decomposition, for simultaneous inference of parameters and unobserved population structure. This allows estimation of genomic parameters with and without inclusion of marker......-derived eigenvectors that account for stratification. The method is illustrated with grain yield in wheat typed for 1279 genetic markers, and with height, HDL cholesterol and systolic blood pressure from the British 1958 cohort study typed for 1 million SNP genotypes. Both sets of data show signs of population...
SINDA/FLUINT Stratified Tank Modeling for Cryogenic Propellant Tanks
Sakowski, Barbara
2014-01-01
A general-purpose SINDA/FLUINT (S/F) stratified tank model was created to simulate self-pressurization and axial jet TVS. Stratified layers in the vapor and liquid are modeled using S/F lumps. The stratified tank model was constructed to permit incorporating the following additional features: multiple or singular lumps in the liquid and vapor regions of the tank; real gases (also mixtures) and compressible liquids; venting, pressurizing, and draining; condensation and evaporation/boiling; wall heat transfer; and elliptical, cylindrical, and spherical tank geometries. Extensive user logic is used to allow detailed tailoring, so one does not have to rebuild everything from scratch. Most code input for a specific case is done through the Registers Data Block. Lump volumes are determined through user input: geometric tank dimensions (height, width, etc.) and liquid level, which can be input either as a volume percentage of fill level or as the actual liquid level height.
Baryon Wave Functions in Covariant Relativistic Quark Models
Dillig, M
2002-01-01
We derive covariant baryon wave functions for arbitrary Lorentz boosts. Modeling baryons as quark-diquark systems, we reduce their manifestly covariant Bethe-Salpeter equation to a covariant 3-dimensional form by projecting on the relative quark-diquark energy. Guided by a phenomenological multigluon exchange representation of a covariant confining kernel, we derive for practical applications explicit solutions for harmonic confinement and for the MIT Bag Model. We briefly comment on the interplay of boosts and center-of-mass corrections in relativistic quark models.
Validity of covariance models for the analysis of geographical variation
DEFF Research Database (Denmark)
Guillot, Gilles; Schilling, Rene L.; Porcu, Emilio
2014-01-01
1. Due to the availability of large molecular data-sets, covariance models are increasingly used to describe the structure of genetic variation as an alternative to more heavily parametrised biological models. 2. We focus here on a class of parametric covariance models that received sustained attention lately and show that the conditions under which they are valid mathematical models have been overlooked so far. 3. We provide rigorous results for the construction of valid covariance models in this family. 4. We also outline how to construct alternative covariance models for the analysis...
Theory of Covariance Equivalent ARMAV Models of Civil Engineering Structures
DEFF Research Database (Denmark)
Andersen, P.; Brincker, Rune; Kirkegaard, Poul Henning
1996-01-01
In this paper the theoretical background for using covariance equivalent ARMAV models in modal analysis is discussed. It is shown how to obtain a covariance equivalent ARMA model for a univariate linear second order continuous-time system excited by Gaussian white noise. This result is generalized...
Effect on Prediction when Modeling Covariates in Bayesian Nonparametric Models.
Cruz-Marcelo, Alejandro; Rosner, Gary L; Müller, Peter; Stewart, Clinton F
2013-04-01
In biomedical research, it is often of interest to characterize biologic processes giving rise to observations and to make predictions of future observations. Bayesian nonparametric methods provide a means for carrying out Bayesian inference making as few assumptions about restrictive parametric models as possible. There are several proposals in the literature for extending Bayesian nonparametric models to include dependence on covariates. Limited attention, however, has been directed to the following two aspects. In this article, we examine the effect on fitting and predictive performance of incorporating covariates in a class of Bayesian nonparametric models by one of two primary ways: either in the weights or in the locations of a discrete random probability measure. We show that different strategies for incorporating continuous covariates in Bayesian nonparametric models can result in big differences when used for prediction, even though they lead to otherwise similar posterior inferences. When one needs the predictive density, as in optimal design, and this density is a mixture, it is better to make the weights depend on the covariates. We demonstrate these points via a simulated data example and in an application in which one wants to determine the optimal dose of an anticancer drug used in pediatric oncology.
Covariance of the selfdual vector model
2004-01-01
The Poisson algebra between the fields involved in the vector self-dual action is obtained by means of the reduced action. The conserved charges associated with invariance under the inhomogeneous Lorentz group are obtained, together with their action on the fields. The covariance of the theory is proved using the Schwinger-Dirac algebra. The spin of the excitations is discussed.
A pure S-wave covariant model for the nucleon
Gross, Franz; Peña, M. T.
2006-01-01
Using the manifestly covariant spectator theory, and modeling the nucleon as a system of three constituent quarks with their own electromagnetic structure, we show that all four nucleon electromagnetic form factors can be very well described by a manifestly covariant nucleon wave function with zero orbital angular momentum.
Some covariance models based on normal scale mixtures
Schlather, Martin
2011-01-01
Modelling spatio-temporal processes has become an important issue in current research. Since Gaussian processes are essentially determined by their second order structure, broad classes of covariance functions are of interest. Here, a new class is described that merges and generalizes various models presented in the literature, in particular models in Gneiting (J. Amer. Statist. Assoc. 97 (2002) 590--600) and Stein (Nonstationary spatial covariance functions (2005) Univ. Chicago). Furthermore, new models and a multivariate extension are introduced.
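One classical way to see why scale mixtures yield valid covariance functions is to check positive semidefiniteness of the induced covariance matrix. A small sketch with $C(h) = 1/(1+h^2)$, which is an exact normal scale mixture with exponential mixing, since $\int_0^\infty e^{-s h^2} e^{-s}\,ds = 1/(1+h^2)$ (the grid and example are illustrative, not from the paper):

```python
import numpy as np

# C(h) = 1 / (1 + h^2) arises as a scale mixture of Gaussian kernels
# exp(-s h^2) with exponential mixing density exp(-s), and is therefore
# a valid (positive definite) isotropic covariance function.
x = np.linspace(0, 10, 60)
H = np.abs(x[:, None] - x[None, :])   # pairwise distances
C = 1.0 / (1.0 + H ** 2)              # induced covariance matrix

eig = np.linalg.eigvalsh(C)
print(eig.min() > -1e-8)   # True: matrix is positive semidefinite (up to rounding)
```

Broad classes such as those in Gneiting (2002) are built from exactly this kind of mixture representation.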
High dimensional covariance matrix estimation in approximate factor models.
Fan, Jianqing; Liao, Yuan; Mincheva, Martina
2011-01-01
The variance covariance matrix plays a central role in the inferential theories of high dimensional factor models in finance and economics. Popular regularization methods of directly exploiting sparsity are not directly applicable to many financial problems. Classical methods of estimating the covariance matrices are based on the strict factor models, assuming independent idiosyncratic components. This assumption, however, is restrictive in practical applications. By assuming sparse error covariance matrix, we allow the presence of the cross-sectional correlation even after taking out common factors, and it enables us to combine the merits of both methods. We estimate the sparse covariance using the adaptive thresholding technique as in Cai and Liu (2011), taking into account the fact that direct observations of the idiosyncratic components are unavailable. The impact of high dimensionality on the covariance matrix estimation based on the factor structure is then studied.
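The thresholding idea can be illustrated with a deliberately simplified universal threshold of order $\sqrt{\log p / n}$ applied to residual covariances; note that the Cai-Liu estimator is entrywise adaptive, so this sketch is not their exact procedure, and all names are illustrative.

```python
import numpy as np

def threshold_cov(resid, c=1.0):
    """Simplified sketch of thresholded covariance estimation: keep
    off-diagonal entries of the sample covariance only when they exceed
    a universal threshold of order sqrt(log p / n). (Adaptive
    thresholding would scale the threshold entrywise instead.)"""
    n, p = resid.shape
    S = np.cov(resid, rowvar=False)
    tau = c * np.sqrt(np.log(p) / n)
    S_thr = np.where(np.abs(S) >= tau, S, 0.0)
    np.fill_diagonal(S_thr, np.diag(S))   # variances are never thresholded
    return S_thr

rng = np.random.default_rng(2)
resid = rng.normal(size=(200, 50))        # truly independent idiosyncratic errors
S_hat = threshold_cov(resid, c=2.0)
frac_zero = (S_hat[np.triu_indices(50, 1)] == 0).mean()
print(frac_zero)   # most spurious off-diagonal entries are zeroed out
```

In the factor-model setting the residuals themselves are estimated after removing common factors, which is the extra complication the paper analyzes.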
Bayes linear covariance matrix adjustment for multivariate dynamic linear models
Wilkinson, Darren J
2008-01-01
A methodology is developed for the adjustment of the covariance matrices underlying a multivariate constant time series dynamic linear model. The covariance matrices are embedded in a distribution-free inner-product space of matrix objects which facilitates such adjustment. This approach helps to make the analysis simple, tractable and robust. To illustrate the methods, a simple model is developed for a time series representing sales of certain brands of a product from a cash-and-carry depot. The covariance structure underlying the model is revised, and the benefits of this revision on first order inferences are then examined.
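The Bayes linear adjustment underlying this methodology is standard and can be sketched directly: beliefs about a quantity B are revised by observed data d via second-order (moment) specifications only. The sales-style numbers below are hypothetical, not from the paper.

```python
import numpy as np

def bayes_linear_adjust(EB, VB, ED, VD, CBD, d):
    """Bayes linear adjustment of beliefs about B given observed data d:
    E_d(B)   = E(B) + Cov(B,D) Var(D)^{-1} (d - E(D))
    Var_d(B) = Var(B) - Cov(B,D) Var(D)^{-1} Cov(D,B)"""
    K = CBD @ np.linalg.inv(VD)
    return EB + K @ (d - ED), VB - K @ CBD.T

# Hypothetical prior beliefs about weekly sales B, given an observed total D.
EB, VB = np.array([10.0]), np.array([[4.0]])
ED, VD = np.array([20.0]), np.array([[9.0]])
CBD = np.array([[3.0]])          # Cov(B, D)
m, v = bayes_linear_adjust(EB, VB, ED, VD, CBD, d=np.array([26.0]))
print(m, v)   # [12.] [[3.]]
```

The paper's contribution is to carry out this kind of adjustment on the covariance matrices themselves, embedded in an inner-product space of matrix objects.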
Generalized linear models with coarsened covariates: a practical Bayesian approach.
Johnson, Timothy R; Wiest, Michelle M
2014-06-01
Coarsened covariates are a common and sometimes unavoidable phenomenon encountered in statistical modeling. Covariates are coarsened when their values or categories have been grouped. This may be done to protect privacy or to simplify data collection or analysis when researchers are not aware of their drawbacks. Analyses with coarsened covariates based on ad hoc methods can compromise the validity of inferences. One valid method for accounting for a coarsened covariate is to use a marginal likelihood derived by summing or integrating over the unknown realizations of the covariate. However, algorithms for estimation based on this approach can be tedious to program and can be computationally expensive. These are significant obstacles to their use in practice. To overcome these limitations, we show that when expressed as a Bayesian probability model, a generalized linear model with a coarsened covariate can be posed as a tractable missing data problem where the missing data are due to censoring. We also show that this model is amenable to widely available general-purpose software for simulation-based inference for Bayesian probability models, providing researchers a very practical approach for dealing with coarsened covariates.
Modeling corporate defaults: Poisson autoregressions with exogenous covariates (PARX)
DEFF Research Database (Denmark)
Agosto, Arianna; Cavaliere, Guiseppe; Kristensen, Dennis
We develop a class of Poisson autoregressive models with additional covariates (PARX) that can be used to model and forecast time series of counts. We establish the time series properties of the models, including conditions for stationarity and existence of moments. These results are in turn used...
Royle, J. Andrew; Sutherland, Christopher S.; Fuller, Angela K.; Sun, Catherine C.
2015-01-01
We develop a likelihood analysis framework for fitting spatial capture-recapture (SCR) models to data collected on class structured or stratified populations. Our interest is motivated by the necessity of accommodating the problem of missing observations of individual class membership. This is particularly problematic in SCR data arising from DNA analysis of scat, hair or other material, which frequently yields individual identity but fails to identify the sex. Moreover, this can represent a large fraction of the data and, given the typically small sample sizes of many capture-recapture studies based on DNA information, utilization of the data with missing sex information is necessary. We develop the class structured likelihood for the case of missing covariate values, and then we address the scaling of the likelihood so that models with and without class structured parameters can be formally compared regardless of missing values. We apply our class structured model to black bear data collected in New York in which sex could be determined for only 62 of 169 uniquely identified individuals. The models containing sex-specificity of both the intercept of the SCR encounter probability model and the distance coefficient, and including a behavioral response are strongly favored by log-likelihood. Estimated population sex ratio is strongly influenced by sex structure in model parameters illustrating the importance of rigorous modeling of sex differences in capture-recapture models.
A model for evaluating the ballistic resistance of stratified packs
Pirvu, C.; Georgescu, C.; Badea, S.; Deleanu, L.
2016-08-01
Models for evaluating the ballistic performance of stratified packs are useful in reducing the time for laboratory tests, understanding the failure process and identifying key factors to improve the architecture of the packs. The authors present the results of simulating bullet impact on a pack made of 24 layers, taking into consideration the friction between layers (μ = 0.4) and the friction between bullet and layers (μ = 0.3). The aim of this study is to obtain a number of layers that allows for bullet arrest in the pack while leaving several layers undamaged, in order to offer a high level of safety for packs of this kind, which could be included in individual armor. The model takes into account the yield and fracture limits of the two materials the bullet is made of and those of one layer, here considered an orthotropic material with a maximum equivalent plastic strain of 0.06. All materials are considered to have bilinear isotropic hardening behavior. The model was designed as isothermal because the thermal influence of the impact is considered low at these impact velocities. The model was developed with the help of Ansys 14.5. Each layer measures 200 mm × 200 mm × 0.35 mm. The bullet velocity just before impact was 400 m/s, a velocity characterizing the average values obtained at close range with a ballistic barrel, and the bullet model follows the shape and dimensions of the 9 mm FMJ (full metal jacket). The model and the results concerning the number of broken layers were validated by experiments, as the number of broken layers for the actual pack (made of 24 layers of LFT SB1) was also seven to eight. The models for ballistic impact are useful when they are formulated specifically to resemble the actual projectile-target system.
Modeling the Conditional Covariance between Stock and Bond Returns
P. de Goeij (Peter); W.A. Marquering (Wessel)
2002-01-01
To analyze the intertemporal interaction between the stock and bond market returns, we allow the conditional covariance matrix to vary over time according to a multivariate GARCH model similar to Bollerslev, Engle and Wooldridge (1988). We extend the model such that it allows for asymmet...
Stratified flows with variable density: mathematical modelling and numerical challenges.
Murillo, Javier; Navas-Montilla, Adrian
2017-04-01
Stratified flows appear in a wide variety of fundamental problems in the hydrological and geophysical sciences. They range from hyperconcentrated floods carrying sediment that causes collapse, landslides and debris flows, to suspended material in turbidity currents where turbulence is a key process. Stratified flows also exhibit variable horizontal density: depending on the case, density varies according to the volumetric concentration of the different components or species, which can represent transported or suspended materials or soluble substances. Multilayer approaches based on the shallow water equations provide suitable models but are not free from difficulties when moving to the numerical resolution of the governing equations. Considering the variety of temporal and spatial scales, transfer of mass and energy among layers may differ strongly from one case to another. As a consequence, in order to provide accurate solutions, very high order methods of proved quality are demanded. Under these complex scenarios it is necessary to verify that the numerical solution not only provides the expected order of accuracy but also converges to the physically based solution, which is not an easy task. To this purpose, this work focuses on the use of energy-balanced augmented solvers, in particular the Augmented Roe Flux ADER scheme. References: J. Murillo, P. García-Navarro, Wave Riemann description of friction terms in unsteady shallow flows: Application to water and mud/debris floods. J. Comput. Phys. 231 (2012) 1963-2001. J. Murillo, B. Latorre, P. García-Navarro, A Riemann solver for unsteady computation of 2D shallow flows with variable density. J. Comput. Phys. 231 (2012) 4775-4807. A. Navas-Montilla, J. Murillo, Energy balanced numerical schemes with very high order: The Augmented Roe Flux ADER scheme. Application to the shallow water equations. J. Comput. Phys. 290 (2015) 188-218. A. Navas-Montilla, J. Murillo, Asymptotically and exactly energy balanced augmented flux...
Globally covering a-priori regional gravity covariance models
Directory of Open Access Journals (Sweden)
D. Arabelos
2003-01-01
Gravity anomaly data generated using Wenzel's GPM98A model complete to degree 1800, from which OSU91A has been subtracted, have been used to estimate covariance functions for a set of globally covering equal-area blocks of size 22.5° × 22.5° at the Equator, having a 2.5° overlap. For each block an analytic covariance function model was determined. The models are based on 4 parameters: the depth to the Bjerhammar sphere (which determines the correlation), the free-air gravity anomaly variance, a scale factor of the OSU91A error degree-variances, and a maximal summation index, N, of the error degree-variances. The depth of the Bjerhammar sphere varies from -134 km to nearly zero, N varies from 360 to 40, the scale factor from 0.03 to 38.0, and the gravity variance from 1081 to 24 (10 µm s⁻²)². The parameters are interpreted in terms of the quality of the data used to construct OSU91A and GPM98A and of general conditions such as the occurrence of mountain chains. The variation of the parameters shows that it is necessary to use regional covariance models in order to obtain a realistic signal-to-noise ratio in global applications. Key words: GOCE mission, covariance function, spacewise approach
Covariance in models of loop quantum gravity: Gowdy systems
Bojowald, Martin
2015-01-01
Recent results in the construction of anomaly-free models of loop quantum gravity have shown obstacles when local physical degrees of freedom are present. Here, a set of no-go properties is derived in polarized Gowdy models, raising the question whether these systems can be covariant beyond a background treatment. As a side product, it is shown that normal deformations in classical polarized Gowdy models can be Abelianized.
A model selection approach to analysis of variance and covariance.
Alber, Susan A; Weiss, Robert E
2009-06-15
An alternative to analysis of variance is a model selection approach where every partition of the treatment means into clusters with equal value is treated as a separate model. The null hypothesis that all treatments are equal corresponds to the partition with all means in a single cluster. The alternative hypothesis corresponds to the set of all other partitions of treatment means. A model selection approach can also be used for a treatment by covariate interaction, where the null hypothesis and each alternative correspond to a partition of treatments into clusters with equal covariate effects. We extend the partition-as-model approach to simultaneous inference for both the treatment main effect and the treatment interaction with a continuous covariate, with separate partitions for the intercepts and treatment-specific slopes. The model space is the Cartesian product of the intercept partition and the slope partition, and we develop five joint priors for this model space. In four of these priors the intercept and slope partitions are dependent. We advise on setting priors over models, and we use the model to analyze an orthodontic data set that compares the frictional resistance created by orthodontic fixtures. Copyright (c) 2009 John Wiley & Sons, Ltd.
Markov modulated Poisson process models incorporating covariates for rainfall intensity.
Thayakaran, R; Ramesh, N I
2013-01-01
Time series of rainfall bucket tip times at the Beaufort Park station, Bracknell, in the UK are modelled by a class of Markov modulated Poisson processes (MMPP), which may be thought of as a generalization of the Poisson process. Our main focus in this paper is to investigate the effect of including covariate information in the MMPP model framework on its statistical properties. In particular, we look at three types of time-varying covariates, namely temperature, sea level pressure, and relative humidity, that are thought to affect the rainfall arrival process. Maximum likelihood estimation is used to obtain the parameter estimates, and likelihood ratio tests are employed in model comparison. Simulated data from the fitted model are used to make statistical inferences about the accumulated rainfall in discrete time intervals. Variability of the daily Poisson arrival rates is studied.
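As a rough illustration of what an MMPP is (my own sketch, not the authors' fitted model), the following simulates a two-state hidden Markov chain whose current state sets the Poisson rate of bucket-tip arrivals; the switch and event rates are invented numbers, and covariates would enter by modulating these rates.

```python
import random

def simulate_mmpp(t_end, switch_rates=(0.1, 0.5), event_rates=(0.02, 2.0), seed=1):
    """Two-state MMPP: a hidden Markov chain modulates the Poisson rate."""
    rng = random.Random(seed)
    t, state, events = 0.0, 0, []          # state 0 = "dry", state 1 = "wet"
    while t < t_end:
        dwell = rng.expovariate(switch_rates[state])   # time until next switch
        t_next = min(t + dwell, t_end)
        s = t
        while True:                        # Poisson events during this dwell
            s += rng.expovariate(event_rates[state])
            if s >= t_next:
                break
            events.append(s)
        t, state = t_next, 1 - state
    return events

tips = simulate_mmpp(1000.0)
```

The resulting event times show the clustered, bursty pattern typical of rainfall tip data: long quiet stretches in the "dry" state interspersed with dense runs of tips in the "wet" state.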
A hierarchical nest survival model integrating incomplete temporally varying covariates
Converse, Sarah J.; Royle, J. Andrew; Adler, Peter H.; Urbanek, Richard P.; Barzan, Jeb A.
2013-01-01
Nest success is a critical determinant of the dynamics of avian populations, and nest survival modeling has played a key role in advancing avian ecology and management. Beginning with the development of daily nest survival models, and proceeding through subsequent extensions, the capacity for modeling the effects of hypothesized factors on nest survival has expanded greatly. We extend nest survival models further by introducing an approach to deal with incompletely observed, temporally varying covariates using a hierarchical model. Hierarchical modeling offers a way to separate process and observational components of demographic models to obtain estimates of the parameters of primary interest, and to evaluate structural effects of ecological and management interest. We built a hierarchical model for daily nest survival to analyze nest data from reintroduced whooping cranes (Grus americana) in the Eastern Migratory Population. This reintroduction effort has been beset by poor reproduction, apparently due primarily to nest abandonment by breeding birds. We used the model to assess support for the hypothesis that nest abandonment is caused by harassment from biting insects. We obtained indices of blood-feeding insect populations based on the spatially interpolated counts of insects captured in carbon dioxide traps. However, insect trapping was not conducted daily, and so we had incomplete information on a temporally variable covariate of interest. We therefore supplemented our nest survival model with a parallel model for estimating the values of the missing insect covariates. We used Bayesian model selection to identify the best predictors of daily nest survival. Our results suggest that the black fly Simulium annulus may be negatively affecting nest survival of reintroduced whooping cranes, with decreasing nest survival as abundance of S. annulus increases. The modeling framework we have developed will be applied in the future to a larger data set to evaluate the
Using time-varying covariates in multilevel growth models
Directory of Open Access Journals (Sweden)
D. Betsy McCoach
2010-06-01
This article provides an illustration of growth curve modeling within a multilevel framework. Specifically, we demonstrate coding schemes that allow the researcher to model discontinuous longitudinal data using a linear growth model in conjunction with time-varying covariates. Our focus is on developing a level-1 model that accurately reflects the shape of the growth trajectory. We demonstrate the importance of adequately modeling the shape of the level-1 growth trajectory in order to make inferences about the importance of both level-1 and level-2 predictors.
Model Order Selection Rules for Covariance Structure Classification in Radar
Carotenuto, Vincenzo; De Maio, Antonio; Orlando, Danilo; Stoica, Petre
2017-10-01
The adaptive classification of the interference covariance matrix structure for radar signal processing applications is addressed in this paper. This represents a key issue because many detection architectures are synthesized assuming a specific covariance structure which may not necessarily coincide with the actual one due to the joint action of the system and environment uncertainties. The considered classification problem is cast in terms of a multiple hypotheses test with some nested alternatives and the theory of Model Order Selection (MOS) is exploited to devise suitable decision rules. Several MOS techniques, such as the Akaike, Takeuchi, and Bayesian information criteria are adopted and the corresponding merits and drawbacks are discussed. At the analysis stage, illustrating examples for the probability of correct model selection are presented showing the effectiveness of the proposed rules.
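A toy version of covariance-structure selection by information criteria can be sketched as follows. This is my own simplified setup, not the radar-specific formulation in the paper: three nested Gaussian covariance models (scaled identity, diagonal, full) are scored by penalized likelihood and the minimizer is selected.

```python
import numpy as np

def gaussian_loglik(X, Sigma):
    """Zero-mean Gaussian log-likelihood of the rows of X under Sigma."""
    n, p = X.shape
    _, logdet = np.linalg.slogdet(Sigma)
    quad = np.einsum("ij,jk,ik->", X, np.linalg.inv(Sigma), X)
    return -0.5 * (n * p * np.log(2 * np.pi) + n * logdet + quad)

def select_structure(X, criterion="bic"):
    """Pick among nested covariance structures by penalized likelihood."""
    n, p = X.shape
    S = X.T @ X / n                        # ML covariance for zero-mean data
    models = {
        "scaled identity": (np.trace(S) / p * np.eye(p), 1),
        "diagonal":        (np.diag(np.diag(S)), p),
        "full":            (S, p * (p + 1) // 2),
    }
    penalty = np.log(n) if criterion == "bic" else 2.0
    scores = {name: -2.0 * gaussian_loglik(X, Sig) + penalty * k
              for name, (Sig, k) in models.items()}
    return min(scores, key=scores.get)

rng = np.random.default_rng(0)
X_white = rng.standard_normal((500, 4))                 # truly white data
C = 0.9 * np.ones((4, 4)) + 0.1 * np.eye(4)             # strongly correlated
X_corr = rng.multivariate_normal(np.zeros(4), C, size=500)
best_white = select_structure(X_white)
best_corr = select_structure(X_corr)
```

With these synthetic inputs the BIC rule selects the simplest structure for white data and the full covariance for strongly correlated data, mirroring the nested-hypotheses logic described in the abstract.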
Spin Structure Functions in a Covariant Spectator Quark Model
Energy Technology Data Exchange (ETDEWEB)
G. Ramalho, Franz Gross and M. T. Peña
2010-12-01
We apply the covariant spectator quark–diquark model, already probed in the description of the nucleon elastic form factors, to the calculation of the deep inelastic scattering (DIS) spin-independent and spin-dependent structure functions of the nucleon. The nucleon wave function is given by a combination of quark–diquark orbital states, corresponding to S, D and P-waves. A simple form for the quark distribution functions associated with the P and D waves is tested.
A Covariant model for the nucleon and the $\\Delta$
Ramalho, G; Gross, Franz
2008-01-01
The covariant spectator formalism is used to model the nucleon and the $\Delta$(1232) as a system of three constituent quarks with their own electromagnetic structure. The definition of the "fixed-axis" polarization states for the diquark emitted from the initial state vertex and absorbed into the final state vertex is discussed. The helicity sum over those states is evaluated and seen to be covariant. Using this approach, all four electromagnetic form factors of the nucleon, together with the magnetic form factor, $G_M^*$, for the $\gamma N \to \Delta$ transition, can be described using manifestly covariant nucleon and $\Delta$ wave functions with zero orbital angular momentum $L$, but a successful description of $G_M^*$ near $Q^2=0$ requires the addition of a pion cloud term not included in the class of valence quark models considered here. We also show that the pure $S$-wave model gives electric, $G_E^*$, and Coulomb, $G^*_C$, transition form factors that are identically zero, showing that th...
DEFF Research Database (Denmark)
Carmo, Carolina; Dumont, Olivier; Nielsen, Mads Pagh
2015-01-01
The use of stratified hot water tanks in solar energy systems - including ORC systems - as well as heat pump systems is paramount for a better performance of these systems. However, the availability of effective and reliable models to predict the annual performance of stratified hot water tanks c...
Covariance in models of loop quantum gravity: Spherical symmetry
Bojowald, Martin; Reyes, Juan D
2015-01-01
Spherically symmetric models of loop quantum gravity have been studied recently by different methods that aim to deal with structure functions in the usual constraint algebra of gravitational systems. As noticed by Gambini and Pullin, a linear redefinition of the constraints (with phase-space dependent coefficients) can be used to eliminate structure functions, even Abelianizing the more-difficult part of the constraint algebra. The Abelianized constraints can then easily be quantized or modified by putative quantum effects. As pointed out here, however, the method does not automatically provide a covariant quantization, defined as an anomaly-free quantum theory with a classical limit in which the usual (off-shell) gauge structure of hypersurface deformations in space-time appears. The holonomy-modified vacuum theory based on Abelianization is covariant in this sense, but matter theories with local degrees of freedom are not. Detailed demonstrations of these statements show complete agreement with results of ...
FBST for covariance structures of generalized Gompertz models
Maranhão, Viviane Teles de Lucca; Lauretto, Marcelo De Souza; Stern, Julio Michael
2012-10-01
The Gompertz distribution is commonly used in biology for modeling fatigue and mortality. This paper studies a class of models proposed by Adham and Walker, featuring a Gompertz-type distribution where the dependence structure is modeled by a lognormal distribution, and develops a new multivariate formulation that facilitates several numerical and computational aspects. This paper also implements the FBST, the Full Bayesian Significance Test, for pertinent sharp (precise) hypotheses on the lognormal covariance structure. The FBST's e-value, ev(H), gives the epistemic value of the hypothesis H, or the value of the evidence in the observed data in support of H.
On the Problem of Permissible Covariance and Variogram Models
Christakos, George
1984-02-01
The covariance and variogram models (ordinary or generalized) are important statistical tools used in various estimation and simulation techniques which have been recently applied to diverse hydrologic problems. For example, the efficacy of kriging, a method for interpolating, filtering, or averaging spatial phenomena, depends, to a large extent, on the covariance or variogram model chosen. The aim of this article is to provide the users of these techniques with convenient criteria that may help them to judge whether a function which arises in a particular problem, and is not included among the known covariance or variogram models, is permissible as such a model. This is done by investigating the properties of the candidate model in both the space and frequency domains. In the present article this investigation covers stationary random functions as well as intrinsic random functions (i.e., nonstationary functions for which increments of some order are stationary). Then, based on the theoretical results obtained, a procedure is outlined and successfully applied to a number of candidate models. In order to give to this procedure a more practical context, we employ "stereological" equations that essentially transfer the investigations to one-dimensional space, together with approximations in terms of polygonal functions and Fourier-Bessel series expansions. There are many benefits and applications of such a procedure. Polygonal models can be fit arbitrarily closely to the data. Also, the approximation of a particular model in the frequency domain by a Fourier-Bessel series expansion can be very effective. This is shown by theory and by example.
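In one dimension, the permissibility criterion reduces to Bochner's theorem: a stationary covariance model is permissible only if its spectral density (the Fourier transform of the covariance) is non-negative. The numerical check below is a crude grid-based stand-in for the paper's analytic procedure; both candidate functions are my own examples, not models from the article.

```python
import numpy as np

def spectral_density(cov, h_max=50.0, n=4096):
    """Approximate the 1-D Fourier transform of cov(|h|) on a grid."""
    h = np.linspace(-h_max, h_max, n, endpoint=False)
    c = cov(np.abs(h))
    dh = 2 * h_max / n
    # FFT approximates the continuous Fourier transform of c(h)
    return np.fft.fftshift(np.fft.fft(np.fft.ifftshift(c))).real * dh

# Exponential model: known to be permissible (positive spectral density).
exp_cov = lambda h: np.exp(-h)
# A made-up candidate with a persistent negative tail: not permissible.
bad_cov = lambda h: np.clip(1.5 - h, -0.5, 1.0)

spec_exp = spectral_density(exp_cov)
spec_bad = spectral_density(bad_cov)
```

The exponential model's numerical spectrum stays non-negative everywhere, while the made-up candidate's spectrum dips strongly negative near zero frequency, flagging it as inadmissible as a covariance model.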
GARCH modelling of covariance in dynamical estimation of inverse solutions
Energy Technology Data Exchange (ETDEWEB)
Galka, Andreas [Institute of Experimental and Applied Physics, University of Kiel, 24098 Kiel (Germany) and Institute of Statistical Mathematics (ISM), Minami-Azabu 4-6-7, Tokyo 106-8569 (Japan)]. E-mail: galka@physik.uni-kiel.de; Yamashita, Okito [ATR Computational Neuroscience Laboratories, Hikaridai 2-2-2, Kyoto 619-0288 (Japan); Ozaki, Tohru [Institute of Statistical Mathematics (ISM), Minami-Azabu 4-6-7, Tokyo 106-8569 (Japan)
2004-12-06
The problem of estimating unobserved states of spatially extended dynamical systems poses an inverse problem, which can be solved approximately by a recently developed variant of Kalman filtering; in order to provide the model of the dynamics with more flexibility with respect to space and time, we suggest to combine the concept of GARCH modelling of covariance, well known in econometrics, with Kalman filtering. We formulate this algorithm for spatiotemporal systems governed by stochastic diffusion equations and demonstrate its feasibility by presenting a numerical simulation designed to imitate the situation of the generation of electroencephalographic recordings by the human cortex.
U(1)-covariant gauge for the two-Higgs doublet model
Indian Academy of Sciences (India)
C G Honorato; J J Toscano
2009-12-01
A U(1)-covariant gauge for the two-Higgs doublet model based on BRST (Becchi–Rouet–Stora–Tyutin) symmetry is introduced. This gauge allows one to remove a significant number of nonphysical vertices appearing in conventional linear gauges, which greatly simplifies loop calculations, since the resultant theory satisfies QED-like Ward identities. The presence of four-ghost interactions in these types of gauges and their connection with the BRST symmetry are stressed. The Feynman rules for those new vertices that arise in this gauge, as well as for those couplings already present in the linear gauge but modified by this gauge-fixing procedure, are presented.
Conditioning of the stationary kriging matrices for some well-known covariance models
Energy Technology Data Exchange (ETDEWEB)
Posa, D. (IRMA-CNR, Bari (Italy))
1989-10-01
In this paper, the condition number of the stationary kriging matrix is studied for some well-known covariance models. Indeed, the robustness of the kriging weights is strongly affected by this measure. Such an analysis can justify the choice of a covariance function among other admissible models which could fit a given experimental covariance equally well.
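The effect described above is easy to reproduce numerically. The sketch below (my own construction, not Posa's exact setup) builds the stationary kriging covariance matrix on a 1-D grid for a Gaussian and an exponential covariance model with the same range and compares condition numbers.

```python
import numpy as np

def kriging_matrix(x, cov):
    """Stationary (simple) kriging covariance matrix on sites x."""
    H = np.abs(x[:, None] - x[None, :])        # pairwise distances
    return cov(H)

x = np.linspace(0.0, 10.0, 30)                 # 30 sites on a line (assumed)
gauss = lambda h: np.exp(-((h / 3.0) ** 2))    # Gaussian model, range ~3
expon = lambda h: np.exp(-h / 3.0)             # exponential model, range ~3

cond_gauss = np.linalg.cond(kriging_matrix(x, gauss))
cond_expon = np.linalg.cond(kriging_matrix(x, expon))
```

With these assumed settings the Gaussian model yields a condition number many orders of magnitude worse than the exponential model, which illustrates why kriging weights can be numerically unstable under some covariance choices even when both models fit an experimental covariance equally well.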
Davies, Christopher E; Glonek, Gary Fv; Giles, Lynne C
2017-08-01
One purpose of a longitudinal study is to gain a better understanding of how an outcome of interest changes among a given population over time. In what follows, a trajectory will be taken to mean the series of measurements of the outcome variable for an individual. Group-based trajectory modelling methods seek to identify subgroups of trajectories within a population, such that trajectories that are grouped together are more similar to each other than to trajectories in distinct groups. Group-based trajectory models generally assume a certain structure in the covariances between measurements, for example conditional independence, homogeneous variance between groups or stationary variance over time. Violations of these assumptions could be expected to result in poor model performance. We used simulation to investigate the effect of covariance misspecification on misclassification of trajectories in commonly used models under a range of scenarios. To do this we defined a measure of performance relative to the ideal Bayesian correct classification rate. We found that the more complex models generally performed better over a range of scenarios. In particular, incorrectly specified covariance matrices could significantly bias the results but using models with a correct but more complicated than necessary covariance matrix incurred little cost.
Simulation model of stratified thermal energy storage tank using finite difference method
Waluyo, Joko
2016-06-01
Stratified TES tanks are normally used in cogeneration plants. They are simple, low cost, and equal or superior in thermal performance to alternatives. The advantage of a TES tank is that it enables shifting of energy usage from off-peak to on-peak demand periods. To increase energy utilization of a stratified TES tank, a simulation model is required that can precisely simulate the charging phenomenon in the tank. This paper aims to develop a novel model addressing this problem. The model incorporates the chiller into the charging of the stratified TES tank in a closed system. It is one-dimensional and includes heat transfer, covering the main factors that degrade the temperature distribution, namely conduction through the tank wall, conduction between cool and warm water, the mixing effect during the initial flow of charging, and heat loss to the surroundings. The simulation model is developed with a finite difference method utilizing buffer concept theory and solved with an explicit scheme. The model is validated using observed data obtained from an operating stratified TES tank in a cogeneration plant. The simulated temperature distribution reproduces the S-curve pattern as well as the decrease in charging temperature after the tank reaches the full condition. The coefficient of determination between the observed data and the model is higher than 0.88, meaning that the model is capable of simulating the charging phenomenon in the stratified TES tank. The model not only generates temperature distributions but can also be enhanced to represent transient conditions during charging. This model can address the temperature limitation that occurs when charging the stratified TES tank with an absorption chiller. Further, the stratified TES tank can be
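A minimal sketch of the one-dimensional explicit finite-difference idea is given below. It is far simpler than the paper's model: only conduction between warm and cool water is retained, while wall conduction, mixing, heat loss and the chiller coupling are omitted; the grid size, diffusivity and temperatures are assumed values.

```python
import numpy as np

def charge_step(T, alpha, dz, dt):
    """One explicit (FTCS) diffusion step; stable if alpha*dt/dz**2 <= 0.5."""
    Tn = T.copy()                          # boundary nodes stay fixed
    Tn[1:-1] = T[1:-1] + alpha * dt / dz**2 * (T[2:] - 2.0 * T[1:-1] + T[:-2])
    return Tn

n, dz = 50, 0.04                           # ~2 m tall tank, 50 nodes (assumed)
alpha = 1.4e-7                             # thermal diffusivity of water, m^2/s
dt = 0.4 * dz**2 / alpha                   # safely below the stability limit
T = np.where(np.arange(n) < n // 2, 12.0, 40.0)   # cool bottom, warm top
for _ in range(200):
    T = charge_step(T, alpha, dz, dt)      # thermocline smears into an S-curve
```

After a few hundred steps the sharp initial thermocline relaxes into the S-shaped profile mentioned in the abstract, while the temperatures remain bounded between the cool and warm values and the profile stays monotone from bottom to top.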
A Covariant OBE Model for $\\eta$ Production in NN Collisions
Gedalin, E; Razdolskaya, L A
1998-01-01
A relativistic covariant one-boson-exchange model, previously applied to describe elastic nucleon-nucleon scattering, is extended to study $\eta$ production in NN collisions. The transition amplitude for the elementary $BN \to \eta N$ process, with B being the meson exchanged (B = $\pi$, $\sigma$, $\eta$), includes s- and u-channels with a nucleon or a nucleon isobar N*(1535 MeV) in the intermediate states. Taking the relative phases of the various exchange amplitudes to be +1, the model reproduces the cross sections for the $NN \to X\eta$ reactions in a consistent manner. In this limit, the overall contributions from the exchange of pseudoscalar and scalar mesons largely cancel with those of vector mesons. Consequently, much of the ambiguity in the model predictions due to unknown relative phases of different vector and pseudoscalar exchanges is strongly reduced.
A covariant model for the nucleon spin structure
Ramalho, G
2015-01-01
We present the results of the covariant spectator quark model applied to the nucleon structure function $f(x)$ measured in unpolarized deep inelastic scattering, and the structure functions $g_1(x)$ and $g_2(x)$ measured in deep inelastic scattering using polarized beams and targets ($x$ is the Bjorken scaling variable). The nucleon is modeled by a valence quark-diquark structure with $S,P$ and $D$ components. The shape of the wave functions and the relative strength of each component are fixed by making fits to the deep inelastic scattering data for the structure functions $f(x)$ and $g_1(x)$. The model is then used to make predictions on the function $g_2(x)$ for the proton and neutron.
Royle, J. Andrew; Converse, Sarah J.
2014-01-01
Capture–recapture studies are often conducted on populations that are stratified by space, time or other factors. In this paper, we develop a Bayesian spatial capture–recapture (SCR) modelling framework for stratified populations – when sampling occurs within multiple distinct spatial and temporal strata. We describe a hierarchical model that integrates distinct models for both the spatial encounter history data from capture–recapture sampling, and also for modelling variation in density among strata. We use an implementation of data augmentation to parameterize the model in terms of a latent categorical stratum or group membership variable, which provides a convenient implementation in popular BUGS software packages. We provide an example application to an experimental study involving small-mammal sampling on multiple trapping grids over multiple years, where the main interest is in modelling a treatment effect on population density among the trapping grids. Many capture–recapture studies involve some aspect of spatial or temporal replication that requires some attention to modelling variation among groups or strata. We propose a hierarchical model that allows explicit modelling of group or strata effects. Because the model is formulated for individual encounter histories and is easily implemented in the BUGS language and other free software, it also provides a general framework for modelling individual effects, such as are present in SCR models.
A dynamic subgrid-scale model for the large eddy simulation of stratified flow
Institute of Scientific and Technical Information of China (English)
刘宁宇; 陆夕云; 庄礼贤
2000-01-01
A new dynamic subgrid-scale (SGS) model, including subgrid turbulent stress and heat flux models for stratified shear flow, is proposed by using Yoshizawa's eddy viscosity model as a base model. Based on our calculated results, the dynamic subgrid-scale model developed here is effective for the large eddy simulation (LES) of stratified turbulent channel flows. The new SGS model is then applied to the large eddy simulation of stratified turbulent channel flow under gravity to investigate the coupled shear and buoyancy effects on the near-wall turbulent statistics and the turbulent heat transfer at different Richardson numbers. The critical Richardson number predicted by the present calculation is in good agreement with the value of theoretical analysis.
Chiral Dynamics of Baryons in a Lorentz Covariant Quark Model
Lyubovitskij, V E; Pumsa-ard, K; Faessler, Amand; Gutsche, Th.
2006-01-01
We develop a manifestly Lorentz covariant chiral quark model for the study of baryons as bound states of constituent quarks dressed by a cloud of pseudoscalar mesons. The approach is based on a non-linear chirally symmetric Lagrangian, which involves effective degrees of freedom - constituent quarks and the chiral (pseudoscalar meson) fields. In a first step, this Lagrangian can be used to perform a dressing of the constituent quarks by a cloud of light pseudoscalar mesons and other heavy states using the calculational technique of infrared dimensional regularization of loop diagrams. We calculate the dressed transition operators with a proper chiral expansion which are relevant for the interaction of quarks with external fields in the presence of a virtual meson cloud. In a second step, these dressed operators are used to calculate baryon matrix elements. Applications are worked out for the masses of the baryon octet, the meson-nucleon sigma terms, the magnetic moments of the baryon octet, the nucleon charge...
A Non-Fickian Mixing Model for Stratified Turbulent Flows
2013-09-30
Berselli et al., 2011) and in ocean models (Marques and Özgökmen, 2012). Our approach in Özgökmen et al. (2012) is perhaps the first truly multi-scale...Transport in Star Eddies: Star eddies have been observed from MODIS SST images in both the summer 2011 and winter 2012 LatMix cruises. I have...published, refereed]. Marques, G.M. and T.M. Özgökmen: On modeling the turbulent exchange in buoyancy-driven fronts. Ocean Modelling [submitted]
Mathematical models for two-phase stratified pipe flow
Energy Technology Data Exchange (ETDEWEB)
Biberg, Dag
2005-06-01
The simultaneous transport of oil, gas and water in a single multiphase flow pipe line has for economic and practical reasons become common practice in the gas and oil fields operated by the oil industry. The optimal design and safe operation of these pipe lines require reliable estimates of liquid inventory, pressure drop and flow regime. Computer simulations of multiphase pipe flow have thus become an important design tool for field developments. Computer simulations yielding on-line monitoring and look-ahead predictions are invaluable in day-to-day field management. Inaccurate predictions may have large consequences. The accuracy and reliability of multiphase pipe flow models are thus important issues. Simulating events in large pipelines or pipeline systems is relatively computer intensive. Pipe lines carrying e.g. gas and liquefied gas (condensate) may cover distances of several hundred km in which transient phenomena may go on for months. The evaluation times associated with contemporary 3-D CFD models are thus not compatible with field applications. Multiphase flow lines are therefore normally simulated using specially dedicated 1-D models. The closure relations of multiphase pipe flow models are mainly based on lab data. The maximum pipe inner diameter, pressure and temperature in a multiphase pipe flow lab is limited to approximately 0.3 m, 90 bar and 60 °C respectively. The corresponding field values are, however, much higher, i.e. 1 m, 1000 bar and 200 °C respectively. Lab data thus does not cover the actual field conditions. Field predictions are consequently frequently based on model extrapolation. Applying field data or establishing more advanced labs will not solve this problem. It is in fact not practically possible to acquire sufficient data to cover all aspects of multiphase pipe flow. The parameter range involved is simply too large. Liquid levels and pressure drop in three-phase flow are e.g. determined by 13 dimensionless parameters.
A NONHYDROSTATIC NUMERICAL MODEL FOR DENSITY STRATIFIED FLOW AND ITS APPLICATIONS
Institute of Scientific and Technical Information of China (English)
[No author listed]
2008-01-01
A modular numerical model was developed for simulating density-stratified flow in domains with irregular bottom topography. The model was designed for examining interactions between stratified flow and topography, e.g., tidally driven flow over two-dimensional sills or internal solitary waves propagating over a shoaling bed. The model was based on the non-hydrostatic vorticity-stream function equations for a continuously stratified fluid in a rotating frame. A self-adaptive grid was adopted in the vertical coordinate, the Alternating Direction Implicit (ADI) scheme was used for the time marching equations, while the Poisson equation for the stream function was solved based on Successive Over-Relaxation (SOR) iteration with Chebyshev acceleration. The numerical techniques are described and three applications of the model are presented.
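The stream-function Poisson solve can be sketched with plain SOR (the paper additionally uses Chebyshev acceleration, i.e. a relaxation factor updated during the sweeps, which is omitted here). The grid, boundary condition and relaxation factor below are assumptions for illustration, verified against a manufactured solution.

```python
import numpy as np

def solve_poisson_sor(f, h, omega=1.8, tol=1e-8, max_iter=20000):
    """Solve laplacian(psi) = f on a square grid with psi = 0 on the boundary."""
    psi = np.zeros_like(f)
    n, m = f.shape
    for _ in range(max_iter):
        max_change = 0.0
        for i in range(1, n - 1):
            for j in range(1, m - 1):
                gs = 0.25 * (psi[i - 1, j] + psi[i + 1, j]
                             + psi[i, j - 1] + psi[i, j + 1] - h * h * f[i, j])
                change = gs - psi[i, j]
                psi[i, j] += omega * change          # over-relaxed update
                max_change = max(max_change, abs(change))
        if max_change < tol:
            break
    return psi

# Manufactured solution: psi = sin(pi x) sin(pi y), so f = -2 pi^2 psi.
n = 33
h = 1.0 / (n - 1)
x = np.linspace(0.0, 1.0, n)
X, Y = np.meshgrid(x, x, indexing="ij")
f = -2.0 * np.pi**2 * np.sin(np.pi * X) * np.sin(np.pi * Y)
psi = solve_poisson_sor(f, h)
```

The computed `psi` matches the manufactured solution to within the second-order discretization error of the five-point Laplacian, confirming that the iteration converges on this grid.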
Numerical and Experimental Models of the Thermally Stratified Boundary Layer
Directory of Open Access Journals (Sweden)
Michalcová Vladimíra
2016-12-01
The article describes changes in selected turbulent variables in the surroundings of a flow around a thermally loaded object. The problem is solved numerically in the software Ansys Fluent using a Transition SST model, which is able to take into account the difference between high and low turbulence at the interface between the wake behind an obstacle and the free stream. The results are verified with experimental measurements in the wind tunnel.
Highly covariant quantum lattice gas model of the Dirac equation
Yepez, Jeffrey
2011-01-01
We revisit the quantum lattice gas model of a spinor quantum field theory: the smallest-scale particle dynamics is partitioned into unitary collide and stream operations. The construction is covariant (on all scales down to a small length $\ell$ and small time $\tau = \ell/c$) with respect to Lorentz transformations. The mass $m$ and momentum $p$ of the modeled Dirac particle depend on $\ell$ according to the newfound relations $m = m_o \cos(2\pi\ell/\lambda)$ and $p = (h/2\pi\ell) \sin(2\pi\ell/\lambda)$, respectively, where $\lambda$ is the Compton wavelength of the modeled particle. These relations represent departures from a relativistically invariant mass and the de Broglie relation; when taken as quantifying numerical errors, the model is physically accurate when $\ell \ll \lambda$. Calculating the vacuum energy in the special case of a massless spinor field, we find that it vanishes (or can have a small positive value) for a sufficiently large wave number cutoff. This is a marked departure from th...
Models of ash-laden intrusions in a stratified atmosphere
Hogg, Andrew; Johnson, Chris; Sparks, Steve; Huppert, Herbert; Woodhouse, Mark; Phillips, Jeremy
2013-04-01
Recent volcanic eruptions and the associated dispersion of ash through the atmosphere have led to widespread closures of airspace, for example the 2010 eruption of Eyjafjallajokull and 2011 eruption of Puyehue-Cordón Caulle. These episodes bring into sharp focus the need to predict quantitatively the transport and deposition of fine ash and in particular, its interaction with atmospheric wind. Many models of this process are based upon capturing the physics of advection with the wind, turbulence-induced diffusion and gravitational settling. Buoyancy-induced processes, associated with the density of the ash cloud and the background stratification of the atmosphere, are neglected and it is this issue that we address in this contribution. In particular, we suggest that the buoyancy-induced motion may account for the relatively thin distal ash layers that have been observed in the atmosphere and their relatively weak cross-wind spreading. We formulate a new model for buoyancy-driven spreading in the atmosphere in which we treat the evolving ash layer as relatively shallow so that its motion is predominantly horizontal and the pressure locally hydrostatic. The motion is driven by horizontal pressure gradients along with interfacial drag between the flowing ash layer and the surrounding atmosphere. Ash-laden fluid is delivered to this intrusion from a plume source and has risen through the atmosphere to its height of neutral buoyancy. The ash particles are then transported horizontally by the intrusion and progressively settle out of it to sediment through the atmosphere and form the deposit on the ground. This model is integrated numerically and analysed asymptotically in various regimes, including scenarios in which the atmosphere is quiescent and in which there is a sustained wind. The results yield predictions for the variation of the thickness of the intrusion with distance from the source and for how the concentration of ash is reduced due to settling. They
Measures to assess the prognostic ability of the stratified Cox proportional hazards model
DEFF Research Database (Denmark)
The Fibrinogen Studies Collaboration (The Copenhagen City Heart Study); Tybjærg-Hansen, Anne
2009-01-01
Many measures have been proposed to summarize the prognostic ability of the Cox proportional hazards (CPH) survival model, although none is universally accepted for general use. By contrast, little work has been done to summarize the prognostic ability of the stratified CPH model; such measures w...
STRATIFIED MODEL FOR ESTIMATING FATIGUE CRACK GROWTH RATE OF METALLIC MATERIALS
Institute of Scientific and Technical Information of China (English)
YANG Yong-yu; LIU Xin-wei; YANG Fan
2005-01-01
The curve relating fatigue crack growth rate to the stress intensity factor amplitude represents an important fatigue property in the design of damage tolerance limits and the prediction of the life of metallic components. In order to make more reasonable use of testing data, samples from the population were stratified as suggested by the stratified random sample model (SRAM). The data in each stratum corresponded to the same experimental conditions. A suitable weight was assigned to each stratified sample according to the actual working states of the pressure vessel, so that the estimation of the fatigue crack growth rate equation is more accurate for practice. An empirical study shows that the SRAM estimation using fatigue crack growth rate data from different stoves is obviously better than the estimation from a simple random sample model.
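The stratified weighting idea can be illustrated with a weighted log-log least-squares fit of the Paris law da/dN = C (ΔK)^m, where each stratum (test condition) receives its own weight. The strata, weights and noise level below are synthetic assumptions, not data from the paper.

```python
import numpy as np

def fit_paris_law(strata, weights):
    """Weighted log-log LS fit of da/dN = C * dK**m across strata."""
    logK, logR, w = [], [], []
    for (dK, dadN), wt in zip(strata, weights):
        logK.extend(np.log(dK))
        logR.extend(np.log(dadN))
        w.extend([wt] * len(dK))           # one common weight per stratum
    A = np.vstack([np.ones(len(logK)), logK]).T
    W = np.diag(w)
    beta = np.linalg.solve(A.T @ W @ A, A.T @ W @ np.array(logR))
    return np.exp(beta[0]), beta[1]        # (C, m)

rng = np.random.default_rng(3)
true_C, true_m = 1e-11, 3.0
strata = []
for _ in range(3):                         # three strata = three test conditions
    dK = rng.uniform(10.0, 60.0, 20)
    dadN = true_C * dK**true_m * np.exp(rng.normal(0.0, 0.05, 20))
    strata.append((dK, dadN))
C_hat, m_hat = fit_paris_law(strata, weights=[0.5, 0.3, 0.2])
```

With low log-scale noise the weighted fit recovers the exponent m closely; in the paper's setting the weights would reflect the actual working states of the pressure vessel rather than arbitrary values.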
A cautionary note on generalized linear models for covariance of unbalanced longitudinal data
Huang, Jianhua Z.
2012-03-01
Missing data in longitudinal studies can create enormous challenges in data analysis when coupled with the positive-definiteness constraint on a covariance matrix. For complete balanced data, the Cholesky decomposition of a covariance matrix makes it possible to remove the positive-definiteness constraint and use a generalized linear model setup to jointly model the mean and covariance using covariates (Pourahmadi, 2000). However, this approach may not be directly applicable when the longitudinal data are unbalanced, as coherent regression models for the dependence across all times and subjects may not exist. Within the existing generalized linear model framework, we show how to overcome this and other challenges by embedding the covariance matrix of the observed data for each subject in a larger covariance matrix and employing the familiar EM algorithm to compute the maximum likelihood estimates of the parameters and their standard errors. We illustrate and assess the methodology using real data sets and simulations. © 2011 Elsevier B.V.
Analysing stratified medicine business models and value systems: innovation-regulation interactions.
Mittra, James; Tait, Joyce
2012-09-15
Stratified medicine offers both opportunities and challenges to the conventional business models that drive pharmaceutical R&D. Given the increasingly unsustainable blockbuster model of drug development, due in part to maturing product pipelines, alongside increasing demands from regulators, healthcare providers and patients for higher standards of safety, efficacy and cost-effectiveness of new therapies, stratified medicine promises a range of benefits to pharmaceutical and diagnostic firms as well as healthcare providers and patients. However, the transition from 'blockbusters' to what might now be termed 'niche-busters' will require the adoption of new, innovative business models, the identification of different and perhaps novel types of value along the R&D pathway, and a smarter approach to regulation to facilitate innovation in this area. In this paper we apply the Innogen Centre's interdisciplinary ALSIS methodology, which we have developed for the analysis of life science innovation systems in contexts where the value creation process is lengthy, expensive and highly uncertain, to this emerging field of stratified medicine. In doing so, we consider the complex collaboration, timing, coordination and regulatory interactions that shape business models, value chains and value systems relevant to stratified medicine. More specifically, we explore in some depth two convergence models for co-development of a therapy and diagnostic before market authorisation, highlighting the regulatory requirements and policy initiatives within the broader value system environment that have a key role in determining the probable success and sustainability of these models.
Spatiotemporal noise covariance model for MEG/EEG data source analysis
Plis, S M; Jun, S C; Pare-Blagoev, J; Ranken, D M; Schmidt, D M; Wood, C C
2005-01-01
A new method for approximating spatiotemporal noise covariance for use in MEG/EEG source analysis is proposed. Our proposed approach extends a parameterized one pair approximation consisting of a Kronecker product of a temporal covariance and a spatial covariance into 1) an unparameterized one pair approximation and then 2) into a multi-pair approximation. These models are motivated by the need to better describe correlated background and make estimation of these models more efficient. The effects of these different noise covariance models are compared using a multi-dipole inverse algorithm and simulated data consisting of empirical MEG background data as noise and simulated dipole sources.
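The Kronecker structure described above can be sketched as follows (an illustrative NumPy construction of the one-pair and multi-pair approximations, not the authors' estimation procedure; all matrices are random stand-ins):

```python
import numpy as np

rng = np.random.default_rng(0)

def random_spd(n, rng):
    """Random symmetric positive-definite matrix."""
    a = rng.standard_normal((n, n))
    return a @ a.T + n * np.eye(n)

n_sensors, n_times = 4, 3

# One-pair approximation: C = T (temporal) Kronecker S (spatial).
T = random_spd(n_times, rng)
S = random_spd(n_sensors, rng)
C_one = np.kron(T, S)

# Multi-pair approximation: a sum of Kronecker products, which can
# represent correlated background structure a single pair cannot.
pairs = [(random_spd(n_times, rng), random_spd(n_sensors, rng)) for _ in range(2)]
C_multi = sum(np.kron(Tk, Sk) for Tk, Sk in pairs)

# Both approximations remain symmetric positive definite.
for C in (C_one, C_multi):
    assert np.allclose(C, C.T)
    assert np.all(np.linalg.eigvalsh(C) > 0)
print(C_one.shape)  # (12, 12)
```

The appeal of the Kronecker form is that a full spatiotemporal covariance of size (sensors × times) squared never has to be stored or inverted directly.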
Selection between Linear Factor Models and Latent Profile Models Using Conditional Covariances
Halpin, Peter F.; Maraun, Michael D.
2010-01-01
A method for selecting between K-dimensional linear factor models and (K + 1)-class latent profile models is proposed. In particular, it is shown that the conditional covariances of observed variables are constant under factor models but nonlinear functions of the conditioning variable under latent profile models. The performance of a convenient…
DEFF Research Database (Denmark)
Carmo, Carolina; Dumont, Olivier; Nielsen, Mads Pagh
2015-01-01
coupled with energy system solutions is limited. In this poster, a discretized model of a stratified tank developed in Modelica is presented. The physical phenomena to be considered are the thermal transfers by conduction and convection – stratification, heat loss to ambient, charging and discharging...
Computational Fluid Dynamics model of stratified atmospheric boundary-layer flow
DEFF Research Database (Denmark)
Koblitz, Tilman; Bechmann, Andreas; Sogachev, Andrey
2015-01-01
For wind resource assessment, the wind industry is increasingly relying on computational fluid dynamics models of the neutrally stratified surface-layer. So far, physical processes that are important to the whole atmospheric boundary-layer, such as the Coriolis effect, buoyancy forces and heat...
Numerical modeling of mixing in large stably stratified enclosures using TRACMIX++
Christensen, Jakob
This PhD dissertation focuses on the numerical modeling of stably stratified large enclosures. In stably stratified volumes, the distributions of temperature, species concentration, etc. become essentially 1-D throughout most of the enclosure. When the fluid in an enclosure is stratified, wall-boundary buoyant jets, forced buoyant jets (injection of fluid) and natural convection plumes become the primary sources of mixing. The time constants for the buoyant jets may be considered much smaller than the time constant for the mixing of the stratified ambient fluid, provided the combined volume occupied by the buoyant jets is small compared to the volume of the enclosure. Therefore, fluid transport by the buoyant jets may be considered as occurring instantaneously. For this reason this work focuses on deriving a numerical method which is able to solve the 1-D vertical fluid conservation equations, as given in Peterson (1994). Starting with the Eulerian fluid conservation equations given in Peterson (1994), a set of Lagrangian fluid conservation equations was derived. Combining the Lagrangian approach with operator splitting, such that the convective step and the diffusive step are separated, renders a very efficient, accurate, and stable numerical method, as shown in this text. Since the stratified flow field frequently exhibits very strong gradients or so-called fronts, the generation of these fronts has to be accurately detected and tracked by the numerical method. Flow in stably stratified large enclosures has typically been modeled in the past using 1- or 2-zone models. The present model is new in that it belongs to the K-zone models, where the number of zones is arbitrarily large and depends on the complexity of the solution and the accuracy requirement set by the user. Because fronts are present in the flow field, a Lagrangian-type numerical method is used. A Lagrangian method facilitates front tracking and prevents numerical diffusion from altering the shape of
Modeling Heterogeneous Variance-Covariance Components in Two-Level Models
Leckie, George; French, Robert; Charlton, Chris; Browne, William
2014-01-01
Applications of multilevel models to continuous outcomes nearly always assume constant residual variance and constant random effects variances and covariances. However, modeling heterogeneity of variance can prove a useful indicator of model misspecification, and in some educational and behavioral studies, it may even be of direct substantive…
Institute of Scientific and Technical Information of China (English)
ZHONG Fengquan (仲峰泉); LIU Nansheng (刘难生); LU Xiyun (陆夕云); ZHUANG Lixian (庄礼贤)
2002-01-01
In the present paper, a new dynamic subgrid-scale (SGS) model of turbulent stress and heat flux for stratified shear flow is proposed. Based on our calculated results of stratified channel flow, the dynamic subgrid-scale model developed in this paper is shown to be effective for large eddy simulation (LES) of stratified turbulent shear flows. The new SGS model is then applied to the LES of the stratified turbulent channel flow to investigate the coupled shear and buoyancy effects on the behavior of turbulent statistics, turbulent heat transfer and flow structures at different Richardson numbers.
Robustness studies in covariance structure modeling - An overview and a meta-analysis
Boomsma, A
1998-01-01
In covariance structure modeling, several estimation methods are available. The robustness of an estimator against specific violations of assumptions can be determined empirically by means of a Monte Carlo study. Many such studies in covariance structure analysis have been published, but the conclus
Realized mixed-frequency factor models for vast dimensional covariance estimation
K. Bannouh (Karim); M.P.E. Martens (Martin); R.C.A. Oomen (Roel); D.J.C. van Dijk (Dick)
2012-01-01
We introduce a Mixed-Frequency Factor Model (MFFM) to estimate vast dimensional covariance matrices of asset returns. The MFFM uses high-frequency (intraday) data to estimate factor (co)variances and idiosyncratic risk and low-frequency (daily) data to estimate the factor loadings. We
A trade-off solution between model resolution and covariance in surface-wave inversion
Xia, J.; Xu, Y.; Miller, R.D.; Zeng, C.
2010-01-01
Regularization is necessary for inversion of ill-posed geophysical problems. Appraisal of inverse models is essential for meaningful interpretation of these models. Because uncertainties are associated with regularization parameters, extra conditions are usually required to determine proper parameters for assessing inverse models. Commonly used techniques for assessment of a geophysical inverse model derived (generally iteratively) from a linear system are based on calculating the model resolution and the model covariance matrices. Because the model resolution and the model covariance matrices of the regularized solutions are controlled by the regularization parameter, direct assessment of inverse models using only the covariance matrix may provide incorrect results. To assess an inverted model, we use the concept of a trade-off between model resolution and covariance to find a proper regularization parameter with singular values calculated in the last iteration. We plot the singular values from large to small to form a singular value plot. A proper regularization parameter is normally the first singular value that approaches zero in the plot. With this regularization parameter, we obtain a trade-off solution between model resolution and model covariance in the vicinity of a regularized solution. The unit covariance matrix can then be used to calculate error bars of the inverse model at a resolution level determined by the regularization parameter. We demonstrate this approach with both synthetic and real surface-wave data. © 2010 Birkhäuser / Springer Basel AG.
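The resolution–covariance trade-off can be illustrated on a toy damped least-squares problem (a generic SVD-based sketch using the standard Tikhonov filter factors, not the authors' surface-wave inversion code; `G` is an arbitrary random forward operator):

```python
import numpy as np

rng = np.random.default_rng(1)
G = rng.standard_normal((20, 8))          # linearized forward operator
U, s, Vt = np.linalg.svd(G, full_matrices=False)

def resolution_and_covariance(lam):
    """Damped least squares with filter factors f_i = s_i^2 / (s_i^2 + lam^2).
    Model resolution R = V diag(f) V^T; unit model covariance
    C = V diag(f^2 / s^2) V^T (unit data variance assumed)."""
    f = s**2 / (s**2 + lam**2)
    R = Vt.T @ np.diag(f) @ Vt
    C = Vt.T @ np.diag(f**2 / s**2) @ Vt
    return np.trace(R), np.trace(C)

res = [resolution_and_covariance(lam) for lam in (0.0, 0.5, 2.0)]
tr_R, tr_C = zip(*res)
# More damping lowers resolution but also shrinks model covariance:
# the regularization parameter sets where on this trade-off one sits.
assert tr_R[0] > tr_R[1] > tr_R[2]
assert tr_C[0] > tr_C[1] > tr_C[2]
```

Choosing the damping near the first singular value that approaches zero, as the abstract describes, picks a point on this curve where resolution is nearly maximal without the covariance blowing up.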
Forecasting Multivariate Volatility using the VARFIMA Model on Realized Covariance Cholesky Factors
DEFF Research Database (Denmark)
Halbleib, Roxana; Voev, Valeri
2011-01-01
This paper analyzes the forecast accuracy of the multivariate realized volatility model introduced by Chiriac and Voev (2010), subject to different degrees of model parametrization and economic evaluation criteria. By modelling the Cholesky factors of the covariance matrices, the model generates...... positive definite, but biased covariance forecasts. In this paper, we provide empirical evidence that parsimonious versions of the model generate the best covariance forecasts in the absence of bias correction. Moreover, we show by means of stochastic dominance tests that any risk averse investor...
Bello, Nora M; Steibel, Juan P; Tempelman, Robert J
2010-06-01
Bivariate mixed effects models are often used to jointly infer upon covariance matrices for both random effects (u) and residuals (e) between two different phenotypes in order to investigate the architecture of their relationship. However, these (co)variances themselves may additionally depend upon covariates as well as additional sets of exchangeable random effects that facilitate borrowing of strength across a large number of clusters. We propose a hierarchical Bayesian extension of the classical bivariate mixed effects model by embedding additional levels of mixed effects modeling of reparameterizations of u-level and e-level (co)variances between two traits. These parameters are based upon a recently popularized square-root-free Cholesky decomposition and are readily interpretable, each conveniently facilitating a generalized linear model characterization. Using Markov Chain Monte Carlo methods, we validate our model based on a simulation study and apply it to a joint analysis of milk yield and calving interval phenotypes in Michigan dairy cows. This analysis indicates that the e-level relationship between the two traits is highly heterogeneous across herds and depends upon systematic herd management factors.
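The square-root-free Cholesky reparameterization that this model builds on can be sketched for a single 2×2 covariance (the numbers are purely illustrative, not the milk-yield/calving-interval estimates):

```python
import numpy as np

# A bivariate covariance between two traits (illustrative values only).
Sigma = np.array([[4.0, 1.2],
                  [1.2, 2.5]])

# Square-root-free (LDL^T) Cholesky: Sigma = L D L^T with unit
# lower-triangular L and diagonal D.
s11, s12, s22 = Sigma[0, 0], Sigma[0, 1], Sigma[1, 1]
phi21 = s12 / s11          # regression of trait 2 on trait 1
psi2 = s22 - s12**2 / s11  # innovation (conditional) variance of trait 2

L = np.array([[1.0, 0.0], [phi21, 1.0]])
D = np.diag([s11, psi2])
assert np.allclose(L @ D @ L.T, Sigma)

# phi21 is unconstrained and the variances are positive, so each can be
# linked to covariates in a generalized-linear-model-type specification.
print(round(phi21, 3), round(psi2, 3))  # 0.3 2.14
```

Because `phi21` and the log-variances live on unconstrained scales, positive definiteness holds automatically no matter what covariates or random effects drive them, which is what makes the hierarchical extension tractable.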
A Model for Predicting Holdup and Pressure Drop in Gas-Liquid Stratified Flow
Institute of Scientific and Technical Information of China (English)
(no author listed)
2001-01-01
The time-dependent liquid film thickness and pressure drop were measured by using parallel-wire conductance probes and capacitance differential-pressure transducers. Applying the eddy viscosity theory and an appropriate correlation of interfacial shear stress, a new two-dimensional separated model of holdup and pressure drop of turbulent/turbulent gas-liquid stratified flow was presented. Prediction results agreed well with experimental data.
A Model of Turbulent-Laminar Gas-Liquid Stratified Flow
Institute of Scientific and Technical Information of China (English)
(no author listed)
2001-01-01
The time-dependent liquid film thickness and pressure drop are measured by using parallel-wire conductance probes and a capacitance differential-pressure transducer. A mathematical model with an iterative procedure to calculate holdup and pressure drop in horizontal and inclined gas-liquid stratified flow is developed. The predictions agree well with over a hundred experimental data points in 0.024 and 0.04 m diameter pipelines.
Optics of an opal modeled with a stratified effective index and the effect of the interface
Maurin, Isabelle; Laliotis, Athanasios; Bloch, Daniel
2015-01-01
Reflection and transmission for an artificial opal are described through a model of stratified medium based upon a one-dimensional variation of an effective index. The model is notably applicable to a Langmuir-Blodgett type disordered opal. Light scattering is accounted for by a phenomenological absorption. The interface region between the opal and the substrate -or the vacuum- induces a periodicity break in the photonic crystal arrangement, which exhibits a prominent influence on the reflection, notably away from the Bragg reflection peak. Experimental results are compared to our model. The model is extendable to inverse opals, stacked cylinders, or irradiation by evanescent waves
Lakghomi, B; Lawryshyn, Y; Hofmann, R
2015-01-01
An analytical model and a computational fluid dynamic model of particle removal in dissolved air flotation were developed that included the effects of stratified flow and bubble-particle clustering. The models were applied to study the effect of operating conditions and formation of stratified flow on particle removal. Both modeling approaches demonstrated that the presence of stratified flow enhanced particle removal in the tank. A higher air fraction was shown to be needed at higher loading rates to achieve the same removal efficiency. The model predictions showed that an optimum bubble size was present that increased with an increase in particle size.
Identifiability of the Sign of Covariate Effects in the Competing Risks Model
DEFF Research Database (Denmark)
Lo, Simon M.S.; Wilke, Ralf
2017-01-01
We present a new framework for the identification of competing risks models, which also include Roy models. We show that by establishing a Hicksian-type decomposition, the direction of covariate effects on the marginal distributions of the competing risks model can be identified under weak...... of the range of durations for which the direction of the covariate effect is identified, particularly for long duration....
Matérn-based nonstationary cross-covariance models for global processes
Jun, Mikyoung
2014-07-01
Many spatial processes in environmental applications, such as climate variables and climate model errors on a global scale, exhibit complex nonstationary dependence structure, in not only their marginal covariance but also their cross-covariance. Flexible cross-covariance models for processes on a global scale are critical for an accurate description of each spatial process as well as the cross-dependences between them and also for improved predictions. We propose various ways to produce cross-covariance models, based on the Matérn covariance model class, that are suitable for describing prominent nonstationary characteristics of the global processes. In particular, we seek nonstationary versions of Matérn covariance models whose smoothness parameters vary over space, coupled with a differential operators approach for modeling large-scale nonstationarity. We compare their performance to the performance of some existing models in terms of the AIC and spatial predictions in two applications: joint modeling of surface temperature and precipitation, and joint modeling of errors in climate model ensembles. © 2014 Elsevier Inc.
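The stationary Matérn class that these cross-covariance models extend can be sketched directly (the standard parameterization with variance `sigma2`, range `rho`, and smoothness `nu`; the paper's nonstationary versions let `nu` vary over space):

```python
import numpy as np
from scipy.special import gamma, kv

def matern(d, sigma2=1.0, rho=1.0, nu=1.5):
    """Matern covariance:
    C(d) = sigma2 * 2^(1-nu)/Gamma(nu) * (sqrt(2*nu)*d/rho)^nu
           * K_nu(sqrt(2*nu)*d/rho),  with C(0) = sigma2."""
    d = np.asarray(d, dtype=float)
    out = np.full(d.shape, sigma2)
    nz = d > 0
    arg = np.sqrt(2 * nu) * d[nz] / rho
    out[nz] = sigma2 * (2 ** (1 - nu) / gamma(nu)) * arg**nu * kv(nu, arg)
    return out

d = np.linspace(0.0, 3.0, 50)
# nu = 1/2 reduces to the exponential covariance exp(-d / rho).
assert np.allclose(matern(d, nu=0.5), np.exp(-d), atol=1e-10)
# Covariance decays monotonically with distance for any smoothness.
assert np.all(np.diff(matern(d, nu=2.5)) <= 0)
```

The smoothness parameter `nu` controls the mean-square differentiability of the process, which is why letting it vary over space is a natural route to nonstationarity.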
Covariance of Light-Front Models Pair Current
Pacheco-Bicudo-Cabral de Melo, J; Naus, H W L; Sauer, P U
1999-01-01
We compute the "+" component of the electromagnetic current of a composite spin-one two-fermion system for vanishing momentum transfer component $q^+=q^0+q^3$. In particular, we extract the nonvanishing pair production amplitude on the light-front. It is a consequence of the longitudinal zero momentum mode, contributing to the light-front current in the Breit-frame. The covariance of the current is violated, if such pair terms are not included in its matrix elements. We illustrate our discussion with some numerical examples.
National Research Council Canada - National Science Library
Aoki, Yasunori; Nordgren, Rikard; Hooker, Andrew C
2016-01-01
... a bottleneck in the analysis. We propose a preconditioning method for non-linear mixed effects models used in pharmacometric analyses to stabilise the computation of the variance-covariance matrix...
Gonzalez-Andrades, Miguel; Alonso-Pastor, Luis; Mauris, Jérôme; Cruzat, Andrea; Dohlman, Claes H; Argüeso, Pablo
2016-01-13
The repair of wounds through collective movement of epithelial cells is a fundamental process in multicellular organisms. In stratified epithelia such as the cornea and skin, healing occurs in three phases: latent, migratory, and reconstruction. Several simple and inexpensive assays have been developed to study the biology of cell migration in vitro. However, these assays are mostly based on monolayer systems that fail to reproduce the differentiation processes associated with multilayered systems. Here, we describe a straightforward in vitro wound assay to evaluate the healing and restoration of barrier function in stratified human corneal epithelial cells. In this assay, circular punch injuries lead to the collective migration of the epithelium as coherent sheets. The closure of the wound was associated with the restoration of the transcellular barrier and the re-establishment of apical intercellular junctions. Altogether, this new model of wound healing provides an important research tool to study the mechanisms leading to barrier function in stratified epithelia and may facilitate the development of future therapeutic applications.
Background error covariance modelling for convective-scale variational data assimilation
Petrie, R. E.
An essential component in data assimilation is the background error covariance matrix (B). This matrix regularizes the ill-posed data assimilation problem, describes the confidence of the background state and spreads information. Since the B-matrix is too large to represent explicitly it must be modelled. In variational data assimilation it is essentially a climatological approximation of the true covariances. Such a conventional covariance model additionally relies on the imposition of balance conditions. A toy model which is derived from the Euler equations (by making appropriate simplifications and introducing tuneable parameters) is used as a convective-scale system to investigate these issues. Its behaviour is shown to exhibit large-scale geostrophic and hydrostatic balance while permitting small-scale imbalance. A control variable transform (CVT) approach to modelling the B-matrix where the control variables are taken to be the normal modes (NM) of the linearized model is investigated. This approach is attractive for convective-scale covariance modelling as it allows for unbalanced as well as appropriately balanced relationships. Although the NM-CVT is not applied to a data assimilation problem directly, it is shown to be a viable approach to convective-scale covariance modelling. A new mathematically rigorous method to incorporate flow-dependent error covariances with the otherwise static B-matrix estimate is also proposed. This is an extension to the reduced rank Kalman filter (RRKF) where its Hessian singular vector calculation is replaced by an ensemble estimate of the covariances, and is known as the ensemble RRKF (EnRRKF). Ultimately it is hoped that together the NM-CVT and the EnRRKF would improve the predictability of small-scale features in convective-scale weather forecasting through the relaxation of inappropriate balance and the inclusion of flow-dependent covariances.
Shen, Chung-Wei; Chen, Yi-Hau
2015-10-01
Missing observations and covariate measurement error commonly arise in longitudinal data. However, existing methods for model selection in marginal regression analysis of longitudinal data fail to address the potential bias resulting from these issues. To tackle this problem, we propose a new model selection criterion, the Generalized Longitudinal Information Criterion, which is based on an approximately unbiased estimator for the expected quadratic error of a considered marginal model accounting for both data missingness and covariate measurement error. The simulation results reveal that the proposed method performs quite well in the presence of missing data and covariate measurement error. On the contrary, the naive procedures without taking care of such complexity in data may perform quite poorly. The proposed method is applied to data from the Taiwan Longitudinal Study on Aging to assess the relationship of depression with health and social status in the elderly, accommodating measurement error in the covariate as well as missing observations.
Testing of RANS Turbulence Models for Stratified Flows Based on DNS Data
Venayagamoorthy, S. K.; Koseff, J. R.; Ferziger, J. H.; Shih, L. H.
2003-01-01
In most geophysical flows, turbulence occurs at the smallest scales, and one of the two most important additional physical phenomena to account for is stratification (the other being rotation). In this paper, the main objective is to investigate proposed changes to RANS turbulence models which include the effects of stratification more explicitly. These proposed changes were developed using a DNS database on stratified and sheared homogeneous turbulence developed by Shih et al. (2000) and are described more fully in Ferziger et al. (2003). The data generated by Shih et al. (2000) (hereinafter referred to as SKFR) are used to study the parameters in the k-ε model as a function of the turbulent Froude number, Frk. A modified version of the standard k-ε model based on the local turbulent Froude number is proposed. The proposed model is applied to a stratified open channel flow, a test case that differs significantly from the flows from which the modified parameters were derived. The turbulence modeling and results are discussed in the next two sections followed by suggestions for future work.
Estimating Cosmological Parameter Covariance
Taylor, Andy
2014-01-01
We investigate the bias and error in estimates of the cosmological parameter covariance matrix, due to sampling or modelling the data covariance matrix, for likelihood width and peak scatter estimators. We show that these estimators do not coincide unless the data covariance is exactly known. For sampled data covariances, with Gaussian distributed data and parameters, the parameter covariance matrix estimated from the width of the likelihood has a Wishart distribution, from which we derive the mean and covariance. This mean is biased and we propose an unbiased estimator of the parameter covariance matrix. Comparing our analytic results to a numerical Wishart sampler of the data covariance matrix we find excellent agreement. An accurate ansatz for the mean parameter covariance for the peak scatter estimator is found, and we fit its covariance to our numerical analysis. The mean is again biased and we propose an unbiased estimator for the peak parameter covariance. For sampled data covariances the width estimat...
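The bias incurred by inverting a sampled data covariance can be demonstrated numerically. The correction below is the classic Anderson/Hartlap factor for Gaussian data, shown as background for the abstract's discussion rather than as the paper's specific proposed estimator:

```python
import numpy as np

rng = np.random.default_rng(2)
p, n, trials = 5, 30, 2000
true_prec = np.eye(p)                      # true covariance is the identity

inv_S = np.zeros((p, p))
for _ in range(trials):
    x = rng.standard_normal((n, p))
    S = np.cov(x, rowvar=False)            # sample covariance, n-1 dof
    inv_S += np.linalg.inv(S)
inv_S /= trials

# For Gaussian data, E[S^{-1}] = (n-1)/(n-p-2) * Sigma^{-1}:
# the raw inverse overestimates the precision (here by 29/23 ~ 1.26).
raw_bias = np.trace(inv_S) / np.trace(true_prec)
assert raw_bias > 1.05

# Multiplying by (n-p-2)/(n-1) removes the bias.
debiased = (n - p - 2) / (n - 1) * inv_S
assert abs(np.trace(debiased) / p - 1.0) < 0.05
```

Since parameter covariances estimated from the likelihood width inherit this inverse-covariance bias, the debiasing factor matters whenever the number of samples is not much larger than the data dimension.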
Generating survival times to simulate Cox proportional hazards models with time-varying covariates.
Austin, Peter C
2012-12-20
Simulations and Monte Carlo methods serve an important role in modern statistical research. They allow for an examination of the performance of statistical procedures in settings in which analytic and mathematical derivations may not be feasible. A key element in any statistical simulation is the existence of an appropriate data-generating process: one must be able to simulate data from a specified statistical model. We describe data-generating processes for the Cox proportional hazards model with time-varying covariates when event times follow an exponential, Weibull, or Gompertz distribution. We consider three types of time-varying covariates: first, a dichotomous time-varying covariate that can change at most once from untreated to treated (e.g., organ transplant); second, a continuous time-varying covariate such as cumulative exposure at a constant dose to radiation or to a pharmaceutical agent used for a chronic condition; third, a dichotomous time-varying covariate with a subject being able to move repeatedly between treatment states (e.g., current compliance or use of a medication). In each setting, we derive closed-form expressions that allow one to simulate survival times so that survival times are related to a vector of fixed or time-invariant covariates and to a single time-varying covariate. We illustrate the utility of our closed-form expressions for simulating event times by using Monte Carlo simulations to estimate the statistical power to detect as statistically significant the effect of different types of binary time-varying covariates. This is compared with the statistical power to detect as statistically significant a binary time-invariant covariate.
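For the first covariate type above (a dichotomous covariate that switches once from untreated to treated), the inversion of the cumulative hazard with an exponential baseline can be sketched as follows (a minimal illustration consistent with that setting; the parameter values `lam`, `beta`, `t0` are arbitrary):

```python
import numpy as np

def simulate_time(lam, beta, t0, u):
    """Survival time under hazard h(t) = lam * exp(beta * Z(t)), where
    Z(t) = 1{t >= t0} is a dichotomous time-varying covariate that can
    change at most once (e.g. treatment starting at t0).
    Inverts the cumulative hazard H(t) at the target -log(1 - u)."""
    target = -np.log(1.0 - u)
    if target < lam * t0:                  # event occurs before the switch
        return target / lam
    return t0 + (target - lam * t0) / (lam * np.exp(beta))

rng = np.random.default_rng(3)
lam, beta, t0 = 0.5, np.log(2.0), 1.0      # hazard doubles at t0 = 1
times = np.array([simulate_time(lam, beta, t0, u)
                  for u in rng.uniform(size=100_000)])

# Before t0, T behaves like Exponential(lam): P(T > t) = exp(-lam * t).
assert abs((times > 0.5).mean() - np.exp(-lam * 0.5)) < 0.01
# After t0 the hazard is lam*exp(beta): P(T > t0+s) = exp(-lam*t0 - lam*exp(beta)*s).
expected = np.exp(-lam * t0 - lam * np.exp(beta) * 1.0)
assert abs((times > t0 + 1.0).mean() - expected) < 0.01
```

The same inversion idea extends to the other covariate types in the abstract: as long as the cumulative hazard is available in closed form and is invertible, event times follow from a single uniform draw.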
Garaud, Pascale; Gagnier, Damien; Verhoeven, Jan
2017-03-01
Shear-induced turbulence could play a significant role in mixing momentum and chemical species in stellar radiation zones, as discussed by Zahn. In this paper we analyze the results of direct numerical simulations of stratified plane Couette flows, in the limit of rapid thermal diffusion, to measure the turbulent viscosity and the turbulent diffusivity of a passive tracer as a function of the local shear and the local stratification. We find that the stability criterion proposed by Zahn, namely that the product of the gradient Richardson number and the Prandtl number must be smaller than a critical value (J Pr)_c for instability, adequately accounts for the transition to turbulence in the flow, with (J Pr)_c ≃ 0.007. This result recovers and confirms the prior findings of Prat et al. Zahn's model for the turbulent diffusivity and viscosity, namely that the mixing coefficient should be proportional to the ratio of the thermal diffusivity to the gradient Richardson number, does not satisfactorily match our numerical data. It fails (as expected) in the limit of large stratification, where the Richardson number exceeds the aforementioned threshold for instability, but it also fails in the limit of low stratification, where the turbulent eddy scale becomes limited by the computational domain size. We propose a revised model for turbulent mixing by diffusive stratified shear instabilities that properly accounts for both limits, fits our data satisfactorily, and recovers Zahn's model in the limit of large Reynolds numbers.
Analysis of capture-recapture models with individual covariates using data augmentation
Royle, J. Andrew
2009-01-01
I consider the analysis of capture-recapture models with individual covariates that influence detection probability. Bayesian analysis of the joint likelihood is carried out using a flexible data augmentation scheme that facilitates analysis by Markov chain Monte Carlo methods, and a simple and straightforward implementation in freely available software. This approach is applied to a study of meadow voles (Microtus pennsylvanicus) in which auxiliary data on a continuous covariate (body mass) are recorded, and it is thought that detection probability is related to body mass. In a second example, the model is applied to an aerial waterfowl survey in which a double-observer protocol is used. The fundamental unit of observation is the cluster of individual birds, and the size of the cluster (a discrete covariate) is used as a covariate on detection probability.
DEFF Research Database (Denmark)
Vansteelandt, S.; Martinussen, Torben; Tchetgen, E. J Tchetgen
2014-01-01
We consider additive hazard models (Aalen, 1989) for the effect of a randomized treatment on a survival outcome, adjusting for auxiliary baseline covariates. We demonstrate that the Aalen least-squares estimator of the treatment effect parameter is asymptotically unbiased, even when the hazard's dependence on time or on the auxiliary covariates is misspecified, and even away from the null hypothesis of no treatment effect. We furthermore show that adjustment for auxiliary baseline covariates does not change the asymptotic variance of the estimator of the effect of a randomized treatment. We conclude that, in view of its robustness against model misspecification, Aalen least-squares estimation is attractive for evaluating treatment effects on a survival outcome in randomized experiments, and the primary reasons to consider baseline covariate adjustment in such settings could be interest in subgroup...
Strathe, Anders B; Mark, Thomas; Nielsen, Bjarne; Do, Duy Ngoc; Kadarmideen, Haja N.; Jensen, Just
2014-01-01
Random regression models were used to estimate covariance functions between cumulated feed intake (CFI) and body weight (BW) in 8424 Danish Duroc pigs. Random regressions on second order Legendre polynomials of age were used to describe genetic and permanent environmental curves in BW and CFI. Based on covariance functions, residual feed intake (RFI) was defined and derived as the conditional genetic variance in feed intake given mid-test breeding value for BW and rate of gain. The heritabili...
Automated model selection in covariance estimation and spatial whitening of MEG and EEG signals.
Engemann, Denis A; Gramfort, Alexandre
2015-03-01
Magnetoencephalography and electroencephalography (M/EEG) measure non-invasively the weak electromagnetic fields induced by post-synaptic neural currents. The estimation of the spatial covariance of the signals recorded on M/EEG sensors is a building block of modern data analysis pipelines. Such covariance estimates are used in brain-computer interfaces (BCI) systems, in nearly all source localization methods for spatial whitening as well as for data covariance estimation in beamformers. The rationale for such models is that the signals can be modeled by a zero mean Gaussian distribution. While maximizing the Gaussian likelihood seems natural, it leads to a covariance estimate known as empirical covariance (EC). It turns out that the EC is a poor estimate of the true covariance when the number of samples is small. To address this issue the estimation needs to be regularized. The most common approach downweights off-diagonal coefficients, while more advanced regularization methods are based on shrinkage techniques or generative models with low rank assumptions: probabilistic PCA (PPCA) and factor analysis (FA). Using cross-validation all of these models can be tuned and compared based on Gaussian likelihood computed on unseen data. We investigated these models on simulations, one electroencephalography (EEG) dataset as well as magnetoencephalography (MEG) datasets from the most common MEG systems. First, our results demonstrate that different models can be the best, depending on the number of samples, heterogeneity of sensor types and noise properties. Second, we show that the models tuned by cross-validation are superior to models with hand-selected regularization. Hence, we propose an automated solution to the often overlooked problem of covariance estimation of M/EEG signals. The relevance of the procedure is demonstrated here for spatial whitening and source localization of MEG signals.
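The cross-validation idea in this abstract — tune a regularized covariance estimator by the Gaussian likelihood it assigns to unseen data — can be sketched in a few lines of numpy. The shrinkage-to-scaled-identity target, the function names, and the zero-mean assumption below are illustrative choices, not details from the paper:

```python
import numpy as np

def gaussian_loglik(S_model, X):
    """Average Gaussian log-likelihood of the rows of X under N(0, S_model)."""
    p = X.shape[1]
    _, logdet = np.linalg.slogdet(S_model)
    quad = np.einsum('ij,jk,ik->i', X, np.linalg.inv(S_model), X)
    return float(np.mean(-0.5 * (p * np.log(2 * np.pi) + logdet + quad)))

def cv_shrunk_covariance(X, alphas, n_folds=3, seed=0):
    """Pick the shrinkage weight toward a scaled identity by cross-validated
    Gaussian likelihood on held-out samples (zero-mean data assumed)."""
    rng = np.random.default_rng(seed)
    folds = np.array_split(rng.permutation(len(X)), n_folds)
    scores = dict.fromkeys(alphas, 0.0)
    for k in range(n_folds):
        train = np.concatenate([folds[j] for j in range(n_folds) if j != k])
        emp = np.cov(X[train], rowvar=False, bias=True)
        target = np.trace(emp) / X.shape[1] * np.eye(X.shape[1])
        for a in alphas:
            scores[a] += gaussian_loglik((1 - a) * emp + a * target, X[folds[k]])
    best = max(scores, key=scores.get)
    emp = np.cov(X, rowvar=False, bias=True)
    target = np.trace(emp) / X.shape[1] * np.eye(X.shape[1])
    return best, (1 - best) * emp + best * target
```

PPCA and FA candidates could be scored with the same held-out likelihood, which is how the paper compares model families.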
Accounting for covariate measurement error in a Cox model analysis of recurrence of depression.
Liu, K; Mazumdar, S; Stone, R A; Dew, M A; Houck, P R; Reynolds, C F
2001-01-01
When a covariate measured with error is used as a predictor in a survival analysis using the Cox model, the parameter estimate is usually biased. In clinical research, covariates measured without error such as treatment procedure or sex are often used in conjunction with a covariate measured with error. In a randomized clinical trial of two types of treatments, we account for the measurement error in the covariate, log-transformed total rapid eye movement (REM) activity counts, in a Cox model analysis of the time to recurrence of major depression in an elderly population. Regression calibration and two variants of a likelihood-based approach are used to account for measurement error. The likelihood-based approach is extended to account for the correlation between replicate measures of the covariate. Using the replicate data decreases the standard error of the parameter estimate for log(total REM) counts while maintaining the bias reduction of the estimate. We conclude that covariate measurement error and the correlation between replicates can affect results in a Cox model analysis and should be accounted for. In the depression data, these methods render comparable results that have less bias than the results when measurement error is ignored.
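Of the correction methods named above, regression calibration is the easiest to sketch: replace the error-prone covariate by its estimated conditional expectation given the mean of the replicates, then fit the Cox model on the calibrated values. The classical additive-error model and variable names below are illustrative assumptions, not the paper's exact implementation:

```python
import numpy as np

def regression_calibration(W):
    """W: n x k matrix of k replicate measurements per subject.
    Returns calibrated covariate values E[X | Wbar] under a classical
    additive measurement error model W = X + U (standard approximation)."""
    n, k = W.shape
    wbar = W.mean(axis=1)
    mu = wbar.mean()
    sigma2_u = np.mean(np.var(W, axis=1, ddof=1))      # within-subject error variance
    sigma2_wbar = np.var(wbar, ddof=1)
    sigma2_x = max(sigma2_wbar - sigma2_u / k, 1e-12)  # between-subject variance
    lam = sigma2_x / (sigma2_x + sigma2_u / k)         # attenuation factor
    return mu + lam * (wbar - mu)
```

The calibrated values then stand in for log(total REM) counts in the Cox regression, shrinking the bias of the naive estimate.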
Sang, Huiyan
2011-12-01
This paper investigates the cross-correlations across multiple climate model errors. We build a Bayesian hierarchical model that accounts for the spatial dependence of individual models as well as cross-covariances across different climate models. Our method allows for a nonseparable and nonstationary cross-covariance structure. We also present a covariance approximation approach to facilitate the computation in the modeling and analysis of very large multivariate spatial data sets. The covariance approximation consists of two parts: a reduced-rank part to capture the large-scale spatial dependence, and a sparse covariance matrix to correct the small-scale dependence error induced by the reduced rank approximation. We pay special attention to the case that the second part of the approximation has a block-diagonal structure. Simulation results of model fitting and prediction show substantial improvement of the proposed approximation over the predictive process approximation and the independent blocks analysis. We then apply our computational approach to the joint statistical modeling of multiple climate model errors. © 2012 Institute of Mathematical Statistics.
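The two-part covariance approximation described here — a reduced-rank term for large-scale dependence plus a sparse small-scale correction — has a simple block-diagonal special case. The numpy sketch below uses eigenvalue truncation for the reduced-rank part and is illustrative only (the paper uses a predictive-process-style construction):

```python
import numpy as np

def low_rank_plus_block(S, rank, blocks):
    """Approximate covariance S by a rank-`rank` part (large-scale dependence)
    plus a block-diagonal residual correction (small-scale dependence)."""
    vals, vecs = np.linalg.eigh(S)
    top = np.argsort(vals)[::-1][:rank]
    L = vecs[:, top] * np.sqrt(np.maximum(vals[top], 0.0))
    low_rank = L @ L.T
    resid = S - low_rank
    correction = np.zeros_like(S)
    for b in blocks:                 # keep the residual only within blocks
        ix = np.ix_(b, b)
        correction[ix] = resid[ix]
    return low_rank + correction
```

Because every diagonal entry lies inside some block, the approximation reproduces the marginal variances exactly while storing far fewer off-diagonal terms.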
Implications of the modelling of stratified hot water storage tanks in the simulation of CHP plants
Energy Technology Data Exchange (ETDEWEB)
Campos Celador, A., E-mail: alvaro.campos@ehu.es [ENEDI Research Group-University of the Basque Country, Departamento de Maquinas y Motores Termicos, E.T.S.I. de Bilbao Alameda de Urquijo, s/n 48013 Bilbao, Bizkaia (Spain); Odriozola, M.; Sala, J.M. [ENEDI Research Group-University of the Basque Country, Departamento de Maquinas y Motores Termicos, E.T.S.I. de Bilbao Alameda de Urquijo, s/n 48013 Bilbao, Bizkaia (Spain)
2011-08-15
Highlights: • Three different modelling approaches for simulation of hot water tanks are presented. • The three models are simulated within a residential cogeneration plant. • Small differences in the results are found by an energy and exergy analysis. • Big differences between the results are found by an advanced exergy analysis. • Results of the feasibility study are explained by the advanced exergy analysis. - Abstract: This paper considers the effect that different hot water storage tank modelling approaches have on the global simulation of residential CHP plants, as well as their impact on economic feasibility. While a simplified assessment of the heat storage is usually considered in feasibility studies of CHP plants in buildings, this paper deals with three different levels of modelling of the hot water tank: an actual stratified model, an ideal stratified model and a fully mixed model. These three approaches are presented and comparatively evaluated on the same case study, a cogeneration plant with thermal storage meeting the loads of an urbanisation located in the Bilbao metropolitan area (Spain). The case study is simulated in TRNSYS for each of the three modelling cases, and the annual results thus obtained are analysed from both a First- and Second-Law-based viewpoint. While the global energy and exergy efficiencies of the plant for the three modelling cases agree quite well, important differences are found between the economic results of the feasibility study. These results can be predicted by means of an advanced exergy analysis of the storage tank, considering the endogenous and exogenous exergy destruction terms caused by the hot water storage tank.
Forecasting Multivariate Volatility using the VARFIMA Model on Realized Covariance Cholesky Factors
DEFF Research Database (Denmark)
Halbleib, Roxana; Voev, Valeri
2011-01-01
This paper analyzes the forecast accuracy of the multivariate realized volatility model introduced by Chiriac and Voev (2010), subject to different degrees of model parametrization and economic evaluation criteria. By modelling the Cholesky factors of the covariance matrices, the model generates positive definite, but biased, covariance forecasts. In this paper, we provide empirical evidence that parsimonious versions of the model generate the best covariance forecasts in the absence of bias correction. Moreover, we show by means of stochastic dominance tests that any risk-averse investor, regardless of the type of utility function or return distribution, would be better off using this model than some standard approaches...
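The key device mentioned in the abstract — forecasting Cholesky factors so that the reconstructed covariance matrices are positive (semi-)definite by construction — can be shown in miniature. This is a generic sketch of the reconstruction step, not the VARFIMA model itself:

```python
import numpy as np

def cov_from_chol(vech_elements, p):
    """Rebuild a covariance matrix from forecast Cholesky elements: whatever
    values a forecasting model returns for the lower-triangular factor L,
    L @ L.T is positive semi-definite by construction - the property the
    Cholesky parametrization buys (at the cost of some forecast bias)."""
    L = np.zeros((p, p))
    L[np.tril_indices(p)] = vech_elements   # fill lower triangle row by row
    return L @ L.T
```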
Thiele, Uwe; Frastia, Lubor
2007-01-01
A dynamical model is proposed to describe the coupled decomposition and profile evolution of a free surface film of a binary mixture. An example is a thin film of a polymer blend on a solid substrate undergoing simultaneous phase separation and dewetting. The model is based on model-H describing the coupled transport of the mass of one component (convective Cahn-Hilliard equation) and momentum (Navier-Stokes-Korteweg equations) supplemented by appropriate boundary conditions at the solid substrate and the free surface. General transport equations are derived using phenomenological non-equilibrium thermodynamics for a general non-isothermal setting taking into account Soret and Dufour effects and interfacial viscosity for the internal diffuse interface between the two components. Focusing on an isothermal setting the resulting model is compared to literature results and its base states corresponding to homogeneous or vertically stratified flat layers are analysed.
Application of an Error Statistics Estimation Method to the PSAS Forecast Error Covariance Model
Institute of Scientific and Technical Information of China (English)
[No author listed]
2006-01-01
In atmospheric data assimilation systems, the forecast error covariance model is an important component. However, the parameters required by a forecast error covariance model are difficult to obtain due to the absence of the truth. This study applies an error statistics estimation method to the Physical-space Statistical Analysis System (PSAS) height-wind forecast error covariance model. This method consists of two components: the first computes the error statistics by using the National Meteorological Center (NMC) method, a lagged-forecast difference approach, within the framework of the PSAS height-wind forecast error covariance model; the second obtains a calibration formula to rescale the error standard deviations provided by the NMC method. The calibration is against the error statistics estimated by maximum-likelihood estimation (MLE) with rawinsonde height observed-minus-forecast residuals. A complete set of formulas for estimating the error statistics and for the calibration is applied to a one-month-long dataset generated by a general circulation model of the Global Modeling and Assimilation Office (GMAO), NASA. There is a clear constant relationship between the error statistics estimates of the NMC method and the MLE. The final product provides a full set of 6-hour error statistics required by the PSAS height-wind forecast error covariance model over the globe. The features of these error statistics are examined and discussed.
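Both components of the method — lagged-forecast differences as an error proxy, followed by a rescaling calibration — reduce to short computations. The least-squares calibration factor below is an assumption for illustration, not the paper's exact formula:

```python
import numpy as np

def nmc_error_stats(f48, f24):
    """NMC method sketch: treat differences between 48 h and 24 h forecasts
    valid at the same time as proxies for forecast error, and return the
    per-grid-point standard deviation over the sample of valid times."""
    d = f48 - f24
    return d.std(axis=0, ddof=1)

def calibrate(nmc_sd, mle_sd):
    """Second component sketch: a single rescaling factor, fitted by least
    squares, so NMC standard deviations match MLE-based estimates on average."""
    c = float(np.dot(nmc_sd, mle_sd) / np.dot(nmc_sd, nmc_sd))
    return c * nmc_sd
```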
Su, Li; Daniels, Michael J
2015-05-30
In long-term follow-up studies, irregular longitudinal data are observed when individuals are assessed repeatedly over time but at uncommon and irregularly spaced time points. Modeling the covariance structure for this type of data is challenging, as it requires specification of a covariance function that is positive definite. Moreover, in certain settings, careful modeling of the covariance structure for irregular longitudinal data can be crucial in order to ensure no bias arises in the mean structure. Two common settings where this occurs are studies with 'outcome-dependent follow-up' and studies with 'ignorable missing data'. 'Outcome-dependent follow-up' occurs when individuals with a history of poor health outcomes had more follow-up measurements, and the intervals between the repeated measurements were shorter. When the follow-up time process only depends on previous outcomes, likelihood-based methods can still provide consistent estimates of the regression parameters, given that both the mean and covariance structures of the irregular longitudinal data are correctly specified and no model for the follow-up time process is required. For 'ignorable missing data', the missing data mechanism does not need to be specified, but valid likelihood-based inference requires correct specification of the covariance structure. In both cases, flexible modeling approaches for the covariance structure are essential. In this paper, we develop a flexible approach to modeling the covariance structure for irregular continuous longitudinal data using the partial autocorrelation function and the variance function. In particular, we propose semiparametric non-stationary partial autocorrelation function models, which do not suffer from complex positive definiteness restrictions like the autocorrelation function. We describe a Bayesian approach, discuss computational issues, and apply the proposed methods to CD4 count data from a pediatric AIDS clinical trial. © 2015 The Authors
DEFF Research Database (Denmark)
Yang, Yukay
I consider multivariate (vector) time series models in which the error covariance matrix may be time-varying. I derive a test of constancy of the error covariance matrix against the alternative that the covariance matrix changes over time. I design a new family of Lagrange-multiplier tests against...
Peters, Elisabeth; Stuke, Maik
2016-01-01
In this manuscript we study the modeling of experimental data and its impact on the resulting integral experimental covariance and correlation matrices. By investigating a set of three low-enriched, water-moderated UO2 fuel rod arrays, we found that modeling the same set of data with different, yet reasonable, assumptions concerning the fuel rod composition and its geometric properties leads to significantly different covariance matrices or correlation coefficients. Following a Monte Carlo sampling approach, we show for nine different modeling assumptions the corresponding correlation coefficients and sensitivity profiles for each pair of effective neutron multiplication factors keff. Within the 95% confidence interval the correlation coefficients vary from 0 to 1, depending on the modeling assumptions. Our findings show that the choice of modeling can have a huge impact on integral experimental covariance matrices. When the latter are used in a validation procedure to derive a bias, this procedure can be...
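A toy version of the Monte Carlo sampling approach can illustrate how shared uncertain inputs induce correlations between keff values of different benchmark models. The linearized "models" and their sensitivity vectors here are hypothetical, not taken from the benchmark set:

```python
import numpy as np

def keff_correlations(model_sensitivities, n_samples=20000, seed=0):
    """Sample shared uncertain inputs (e.g. enrichment, rod pitch), propagate
    them through linearized models keff_i = 1 + s_i . x, and estimate the
    correlation matrix of the outputs across models."""
    S = np.asarray(model_sensitivities)     # one row of sensitivities per model
    rng = np.random.default_rng(seed)
    x = rng.standard_normal((n_samples, S.shape[1]))
    keff = 1.0 + x @ S.T
    return np.corrcoef(keff, rowvar=False)
```

Models sensitive to the same inputs come out almost perfectly correlated; models sensitive to disjoint inputs come out uncorrelated, which is the dependence-on-assumptions effect the abstract reports.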
Modeling and Assessment of Buoyancy-Driven Stratified Airflow in High-Space Industrial Hall
Institute of Scientific and Technical Information of China (English)
WANG Han-qing; CHEN Ke; HU Jian-jun; KOU Guang-xiao; WANG Zhi-yong
2009-01-01
In industrial environments, heat sources are often contaminant sources, and health-threatening contaminants are mainly passive, so a detailed understanding of the airflow mode can assist in work-environment hygiene measurement and prevention. This paper presented a numerical investigation of a stratified airflow scenario in a high-space industrial hall with a validated commercial code and experimentally acquired boundary conditions. Based upon an actual ongoing engineering project, this study investigated the performance of buoyancy-driven displacement ventilation in a large welding hall where big components are manufactured. The results have demonstrated that stratified airflow sustained by thermal buoyancy provides a zoning effect in terms of clean and polluted regions, except for minor stagnant eddy areas. The competition between negative buoyant jets from displacement radial diffusers and the positive buoyant plume from the bulk object constitutes the complex transport characteristics under and above the stratification interface. Entrainment, downdraft and turbulent eddy motion complicate the upper mixing zone, but the exhaust outlet plays a less important role in the whole flow field. Corresponding suggestions concerning computational stability and convergence, and further improvements in modeling and measurements, are given.
Li, Baoyue; Bruyneel, Luk; Lesaffre, Emmanuel
2014-05-20
A traditional Gaussian hierarchical model assumes a nested multilevel structure for the mean and a constant variance at each level. We propose a Bayesian multivariate multilevel factor model that assumes a multilevel structure for both the mean and the covariance matrix. That is, in addition to a multilevel structure for the mean, we also assume that the covariance matrix depends on covariates and random effects. This allows us to explore whether the covariance structure depends on the values of the higher levels, and as such it models heterogeneity in the variances and correlation structure of the multivariate outcome across the higher-level values. The approach is applied to the three-dimensional vector of burnout measurements collected on nurses in a large European study to answer the research question of whether the covariance matrix of the outcomes depends on recorded system-level features in the organization of nursing care, but also on unrecorded factors that vary with countries, hospitals, and nursing units. Simulations illustrate the performance of our modeling approach. Copyright © 2013 John Wiley & Sons, Ltd.
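A minimal sketch of a covariance matrix that depends on a higher-level covariate, in the spirit of this model: the outcome standard deviations follow a log-linear model in a covariate z while the correlation structure is shared across units. The coefficients gamma and the base correlation matrix are illustrative, not from the nurse study:

```python
import numpy as np

def heteroscedastic_cov(z, base_corr, gamma):
    """Covariance for a unit with covariate value z: sd_j = exp(gamma_j * z),
    so different z values give different covariance matrices with a common
    correlation structure (a simple instance of covariate-dependent covariance)."""
    sd = np.exp(np.asarray(gamma) * z)
    return np.outer(sd, sd) * base_corr
```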
Indian Academy of Sciences (India)
Deepak Swami; P K Sharma; C S P Ojha
2014-12-01
In this paper, we have studied the behaviour of reactive solute transport through a stratified porous medium under a multi-process non-equilibrium transport model. Various experiments were carried out in the laboratory, and experimental breakthrough curves were observed at spatially placed sampling points in the stratified porous medium. Batch sorption studies were also performed to estimate the sorption parameters of the material used in the stratified aquifer system. The effects of distance-dependent dispersion and tailing are visible in the experimental breakthrough curves. The presence of physical and chemical non-equilibrium is observed from the pattern of the breakthrough curves. The multi-process non-equilibrium model represents the combined effect of physical and chemical non-ideality in the stratified aquifer system. The results show that incorporating distance-dependent dispersivity in the multi-process non-equilibrium model provides the best fit of the observed data through stratified porous media. Also, exponential distance-dependent dispersivity is more suitable for large distances, while at small distances a linear or constant dispersivity function can be considered for simulating reactive solutes in a stratified porous medium.
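For intuition on how dispersivity shapes the breakthrough curves discussed above, a first-term Ogata-Banks approximation suffices. This is a textbook simplification for a conservative solute in one dimension, not the multi-process non-equilibrium model itself:

```python
from math import erfc, sqrt

def breakthrough(x, t, v, alpha):
    """First-term Ogata-Banks approximation of the breakthrough curve C/C0 for
    1-D advection-dispersion: alpha is the dispersivity, so the dispersion
    coefficient is D = alpha * v. Larger alpha spreads the solute front;
    a distance-dependent alpha(x) can be plugged in per observation point."""
    D = alpha * v
    return 0.5 * erfc((x - v * t) / (2.0 * sqrt(D * t)))
```

At the advective front x = v*t the relative concentration is exactly 0.5, and ahead of the front a larger dispersivity gives earlier arrival, which is the tailing/spreading behaviour seen in the experiments.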
Zhou, Xinyao; Bi, Shaojie; Yang, Yonghui; Tian, Fei; Ren, Dandan
2014-11-01
The three-temperature (3T) model is a simple model which estimates plant transpiration from temperature data alone. In-situ field experimental results have shown that the 3T model is a reliable evapotranspiration (ET) estimation model. Despite encouraging results from recent efforts extending the 3T model to remote sensing applications, the literature shows limited comparisons of the 3T model with other remote-sensing-driven ET models. This research used ET obtained from eddy covariance to evaluate the 3T model and in turn compared the model-simulated ET with that of the more traditional SEBAL (Surface Energy Balance Algorithm for Land) model. A field experiment was conducted in the cotton fields of the Taklamakan desert oasis in Xinjiang, Northwest China. Radiation and surface temperature were obtained from hyperspectral and thermal infrared images for clear days in 2013. The images covered the time period of 0900-1800 h at four different phenological stages of cotton. Meteorological data were automatically recorded at a station located at the center of the cotton field. Results showed that the 3T model accurately captured daily and seasonal variations in ET. As low dry-soil surface temperatures induced significant errors in the 3T model, it was unsuitable for estimating ET in the early morning and late afternoon periods. The model-simulated ET was relatively more accurate for the squaring, bolling and boll-opening stages than for the seedling stage of cotton, when ET was generally low. Wind speed was apparently not a limiting factor of ET in the 3T model. This was attributed to the fact that surface temperature, a vital input of the model, indirectly accounted for the effect of wind speed on ET. Although the 3T model slightly overestimated ET compared with SEBAL and eddy covariance, it was generally reliable for estimating daytime ET during 0900-1600 h.
Pion generalized parton distributions within a fully covariant constituent quark model
Energy Technology Data Exchange (ETDEWEB)
Fanelli, Cristiano [Massachusetts Institute of Technology, Cambridge, MA (United States). Lab. for Nuclear Science]; Pace, Emanuele ['Tor Vergata' Univ., Rome (Italy). Physics Dept.; INFN Sezione di Tor Vergata, Rome (Italy)]; Romanelli, Giovanni [Rutherford-Appleton Laboratory, Didcot (United Kingdom). STFC]; Salme, Giovanni [Istituto Nazionale di Fisica Nucleare, Rome (Italy)]; Salmistraro, Marco [Rome La Sapienza Univ. (Italy). Physics Dept.; I.I.S. G. De Sanctis, Rome (Italy)]
2016-05-15
We extend the investigation of the generalized parton distribution for a charged pion within a fully covariant constituent quark model, in two respects: (1) calculating the tensor distribution and (2) adding the treatment of the evolution, needed for achieving a meaningful comparison with both the experimental parton distribution and the lattice evaluation of the so-called generalized form factors. Distinct features of our phenomenological covariant quark model are: (1) a 4D Ansatz for the pion Bethe-Salpeter amplitude, to be used in the Mandelstam formula for matrix elements of the relevant current operators, and (2) only two parameters, namely a quark mass assumed to be m_q = 220 MeV and a free parameter fixed through the value of the pion decay constant. The possibility of increasing the dynamical content of our covariant constituent quark model is briefly discussed in the context of the Nakanishi integral representation of the Bethe-Salpeter amplitude. (orig.)
Institute of Scientific and Technical Information of China (English)
Yee LEUNG; WU Kefa; DONG Tianxin
2001-01-01
In this paper, a multivariate linear functional relationship model, in which the covariance matrix of the observational errors is not restricted, is considered. The parameter estimation of this model is discussed. The estimators are shown to be strongly consistent under some mild conditions on the incidental parameters.
Survival prediction based on compound covariate under Cox proportional hazard models.
Directory of Open Access Journals (Sweden)
Takeshi Emura
Survival prediction from a large number of covariates is a current focus of statistical and medical research. In this paper, we study a methodology known as the compound covariate prediction performed under univariate Cox proportional hazard models. We demonstrate via simulations and real data analysis that the compound covariate method generally competes well with ridge regression and Lasso methods, both already well-studied methods for predicting survival outcomes with a large number of covariates. Furthermore, we develop a refinement of the compound covariate method by incorporating likelihood information from multivariate Cox models. The new proposal is an adaptive method that borrows information contained in both the univariate and multivariate Cox regression estimators. We show that the new proposal has a theoretical justification from a statistical large sample theory and is naturally interpreted as a shrinkage-type estimator, a popular class of estimators in statistical literature. Two datasets, the primary biliary cirrhosis of the liver data and the non-small-cell lung cancer data, are used for illustration. The proposed method is implemented in R package "compound.Cox" available in CRAN at http://cran.r-project.org/.
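The compound covariate idea can be sketched by weighting each covariate with its univariate Cox coefficient and summing into one prognostic score per subject. The one-step estimator below (score over information at beta = 0) is a cheap stand-in for the full univariate Cox MLE used in the paper and in the compound.Cox package:

```python
import numpy as np

def cox_onestep(x, time, event):
    """One-step univariate Cox estimate: partial-likelihood score and
    information evaluated at beta = 0 (an approximation to the MLE)."""
    U = I = 0.0
    for i in np.where(event == 1)[0]:
        risk = x[time >= time[i]]        # risk set at this event time
        U += x[i] - risk.mean()          # observed minus risk-set mean
        I += risk.var()                  # risk-set variance
    return U / I if I > 0 else 0.0

def compound_covariate(X, time, event):
    """Compound covariate: sum of covariates weighted by their univariate
    Cox coefficients, giving one score per subject."""
    betas = np.array([cox_onestep(X[:, j], time, event) for j in range(X.shape[1])])
    return X @ betas, betas
```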
Covariant 4-dimensional fuzzy spheres, matrix models and higher spin
Sperling, Marcus; Steinacker, Harold C.
2017-09-01
We study in detail generalized 4-dimensional fuzzy spheres with twisted extra dimensions. These spheres can be viewed as SO(5)-equivariant projections of quantized coadjoint orbits of SO(6). We show that they arise as solutions in Yang-Mills matrix models, which naturally leads to higher-spin gauge theories on S^4. Several types of embeddings in matrix models are found, including one with self-intersecting fuzzy extra dimensions...
Model Checking for a General Linear Model with Nonignorable Missing Covariates
Institute of Scientific and Technical Information of China (English)
Zhi-hua SUN; Wai-Cheung IP; Heung WONG
2012-01-01
In this paper, we investigate the model checking problem for a general linear model with nonignorable missing covariates. We show that, without any parametric model assumption for the response probability, the least squares method yields consistent estimators for the linear model even if only the complete data are used. This makes it feasible to propose two testing procedures for the corresponding model checking problem: a score-type lack-of-fit test and a test based on the empirical process. The asymptotic properties of the test statistics are investigated. Both tests are shown to have asymptotic power 1 for local alternatives converging to the null at the rate n^{-r}, 0 ≤ r < 1/2. Simulation results show that both tests perform satisfactorily.
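The estimator whose consistency is asserted above is simply least squares on the complete cases. A sketch (with random missingness for simplicity of simulation, whereas the paper's result covers nonignorable missingness in the covariates):

```python
import numpy as np

def complete_case_ols(y, X):
    """Complete-case least squares: drop rows with any missing covariate and
    fit OLS with an intercept on the remaining rows."""
    keep = ~np.isnan(X).any(axis=1)
    Xc, yc = X[keep], y[keep]
    design = np.column_stack([np.ones(len(yc)), Xc])
    beta, *_ = np.linalg.lstsq(design, yc, rcond=None)
    return beta
```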
A Matérn model of the spatial covariance structure of point rain rates
Sun, Ying
2014-07-15
It is challenging to model a precipitation field due to its intermittent and highly scale-dependent nature. Many models of point rain rates or areal rainfall observations have been proposed and studied for different time scales. Among them, the spectral model based on a stochastic dynamical equation for the instantaneous point rain rate field is attractive, since it naturally leads to a consistent space–time model. In this paper, we note that the spatial covariance structure of the spectral model is equivalent to the well-known Matérn covariance model. Using high-quality rain gauge data, we estimate the parameters of the Matérn model for different time scales and demonstrate that the Matérn model is superior to an exponential model, particularly at short time scales.
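The Matérn covariance model referred to above has a standard closed form. The sketch below uses scipy's modified Bessel function; the check that nu = 0.5 recovers the exponential model mirrors the comparison made in the paper:

```python
import numpy as np
from scipy.special import gamma, kv

def matern(d, sigma2=1.0, rho=1.0, nu=0.5):
    """Matern covariance C(d) with variance sigma2, range rho, smoothness nu.
    nu = 0.5 reduces to the exponential model sigma2 * exp(-d / rho)."""
    d = np.asarray(d, dtype=float)
    out = np.full(d.shape, sigma2)          # C(0) = sigma2
    m = d > 0
    u = np.sqrt(2.0 * nu) * d[m] / rho
    out[m] = sigma2 * (2.0 ** (1.0 - nu) / gamma(nu)) * u ** nu * kv(nu, u)
    return out
```

Fitting (sigma2, rho, nu) separately per time scale, as the paper does with rain gauge data, lets the smoothness parameter nu capture the extra short-time-scale roughness that the exponential model cannot.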
Covariance Functions and Random Regression Models in the ...
African Journals Online (AJOL)
ARC-IRENE
modelled to account for heterogeneity of variance by AY. ... Results suggest that selection for CW could be effective and that RRM could be .... permanent environmental effects; and εij is the temporary environmental effect or measurement error. .... (1999), however, obtained correlations that were variable as low as 0.23 ...
Time-of-flight estimation based on covariance models
van der Heijden, Ferdinand; Tuquerres, G.; Regtien, Paulus P.L.
We address the problem of estimating the time-of-flight (ToF) of a waveform that is disturbed heavily by additional reflections from nearby objects. These additional reflections cause interference patterns that are difficult to predict. The introduction of a model for the reflection in terms of a
Utility covariances and context effects in conjoint MNP models
Haaijer, M.E.; Wedel, M.; Vriens, M.; Wansbeek, T.J.
1998-01-01
Experimental conjoint choice analysis is among the most frequently used methods for measuring and analyzing consumer preferences. The data from such experiments have typically been analyzed with the Multinomial Logit (MNL) model. However, there are several problems associated with the standard MNL...
Directory of Open Access Journals (Sweden)
Gianola Daniel
2007-09-01
Multivariate linear models are increasingly important in quantitative genetics. In high-dimensional specifications, factor analysis (FA) may provide an avenue for structuring (co)variance matrices, thus reducing the number of parameters needed for describing (co)dispersion. We describe how FA can be used to model genetic effects in the context of a multivariate linear mixed model. An orthogonal common factor structure is used to model genetic effects under a Gaussian assumption, so that the marginal likelihood is multivariate normal with a structured genetic (co)variance matrix. Under standard prior assumptions, all fully conditional distributions have closed form, and samples from the joint posterior distribution can be obtained via Gibbs sampling. The model and the algorithm developed for its Bayesian implementation were used to describe five repeated records of milk yield in dairy cattle, and a one-common-factor FA model was compared with a standard multiple-trait model. The Bayesian Information Criterion favored the FA model.
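The parameter saving from the factor-analytic structure is easy to see: with t traits and k common factors, Lambda Lambda' + diag(psi) needs t*k + t parameters instead of the t(t+1)/2 of an unstructured (co)variance matrix. A numpy sketch with illustrative values:

```python
import numpy as np

def fa_covariance(Lambda, psi):
    """Factor-analytic (co)variance structure: loadings Lambda (t x k) plus
    trait-specific variances psi give G = Lambda Lambda' + diag(psi),
    which is positive definite whenever all psi > 0."""
    Lambda = np.atleast_2d(Lambda)
    return Lambda @ Lambda.T + np.diag(psi)
```

For the five milk-yield records with one common factor, this is 5 + 5 = 10 parameters against 15 for the unstructured genetic covariance.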
TRANSPOSABLE REGULARIZED COVARIANCE MODELS WITH AN APPLICATION TO MISSING DATA IMPUTATION.
Allen, Genevera I; Tibshirani, Robert
2010-06-01
Missing data estimation is an important challenge with high-dimensional data arranged in the form of a matrix. Typically this data matrix is transposable, meaning that either the rows, columns or both can be treated as features. To model transposable data, we present a modification of the matrix-variate normal, the mean-restricted matrix-variate normal, in which the rows and columns each have a separate mean vector and covariance matrix. By placing additive penalties on the inverse covariance matrices of the rows and columns, these so called transposable regularized covariance models allow for maximum likelihood estimation of the mean and non-singular covariance matrices. Using these models, we formulate EM-type algorithms for missing data imputation in both the multivariate and transposable frameworks. We present theoretical results exploiting the structure of our transposable models that allow these models and imputation methods to be applied to high-dimensional data. Simulations and results on microarray data and the Netflix data show that these imputation techniques often outperform existing methods and offer a greater degree of flexibility.
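The EM-type imputation can be sketched for ordinary multivariate data: fill each missing entry with its conditional mean under the current Gaussian fit, then re-estimate the mean and a ridge-regularized covariance. This simplified loop omits the conditional-covariance correction of full EM and the transposable row-and-column structure of the paper's models:

```python
import numpy as np

def mvn_impute(X, n_iter=50, ridge=1e-3):
    """Iterative regression imputation under a multivariate normal model.
    Missing entries (NaN) are replaced by conditional means given the
    observed entries of the same row; mean and covariance are refit each pass."""
    X = np.array(X, dtype=float)
    miss = np.isnan(X)
    Xf = np.where(miss, np.nanmean(X, axis=0), X)   # start from column means
    for _ in range(n_iter):
        mu = Xf.mean(axis=0)
        S = np.cov(Xf, rowvar=False, bias=True) + ridge * np.eye(X.shape[1])
        for i in range(X.shape[0]):
            m = miss[i]
            if m.any() and not m.all():
                o = ~m
                coef = np.linalg.solve(S[np.ix_(o, o)], S[np.ix_(o, m)])
                Xf[i, m] = mu[m] + (Xf[i, o] - mu[o]) @ coef
    return Xf
```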
BAYESIAN LOCAL INFLUENCE ASSESSMENTS IN A GROWTH CURVE MODEL WITH GENERAL COVARIANCE STRUCTURE
Institute of Scientific and Technical Information of China (English)
2000-01-01
The objective of this paper is to present a Bayesian approach, based on the Kullback-Leibler divergence, for assessing local influence in a growth curve model with general covariance structure. Under certain prior distribution assumptions, the Kullback-Leibler divergence is used to measure the influence of a minor perturbation on the posterior distribution of the unknown parameter. This leads to a diagnostic statistic for detecting which response is locally influential. As an application, the common covariance-weighted perturbation scheme is thoroughly considered.
DEFF Research Database (Denmark)
Janssen, Anja; Mikosch, Thomas Valentin; Rezapour, Mohsen
2017-01-01
We consider a multivariate heavy-tailed stochastic volatility model and analyze the large-sample behavior of its sample covariance matrix. We study the limiting behavior of its entries in the infinite-variance case and derive results for the ordered eigenvalues and corresponding eigenvectors of the sample covariance matrix. While we show that in the case of heavy-tailed innovations the limiting behavior resembles that of completely independent observations, we also derive that in the case of a heavy-tailed volatility sequence the possible limiting behavior is more diverse, i.e. allowing...
Integrating lysimeter drainage and eddy covariance flux measurements in a groundwater recharge model
DEFF Research Database (Denmark)
Vasquez, Vicente; Thomsen, Anton Gårde; Iversen, Bo Vangsø;
2015-01-01
Field scale water balance is difficult to characterize because controls exerted by soils and vegetation are mostly inferred from local scale measurements with relatively small support volumes. Eddy covariance flux and lysimeters have been used to infer and evaluate field scale water balances because they have larger footprint areas than local soil moisture measurements. This study quantifies heterogeneity of soil deep drainage (D) in four 12.5 m2 repacked lysimeters, compares evapotranspiration from eddy covariance (ETEC) and mass balance residuals of lysimeters (ETwbLys), and models D...
Advantages of vertically adaptive coordinates in numerical models of stratified shelf seas
Gräwe, Ulf; Holtermann, Peter; Klingbeil, Knut; Burchard, Hans
2015-08-01
Shelf seas such as the North Sea and the Baltic Sea are characterised by spatially and temporally varying stratification that is highly relevant for their physical dynamics and the evolution of their ecosystems. Stratification may vary from unstably stratified (e.g., due to convective surface cooling) to strongly stratified with density jumps of up to 10 kg/m3 per m (e.g., in overflows into the Baltic Sea). Stratification has a direct impact on vertical turbulent transports (e.g., of nutrients) and influences the entrainment rate of ambient water into dense bottom currents, which in turn determines the stratification of, and oxygen supply to, e.g., the central Baltic Sea. Moreover, the suppression of the vertical diffusivity at the summer thermocline is one of the limiting factors for the vertical exchange of nutrients in the North Sea. Due to limitations of computational resources, and since the locations of such density jumps (either by salinity or temperature) are predicted by the model simulation itself, predefined vertical coordinates cannot always reliably resolve these features. Thus, all shelf sea models with a predefined vertical coordinate distribution are inherently subject to under-resolution of the density structure. To solve this problem, Burchard and Beckers (2004) and Hofmeister et al. (2010) developed the concept of vertically adaptive coordinates for ocean models, where zooming of vertical coordinates at locations of strong stratification (and shear) is imposed. This is achieved by solving a diffusion equation for the position of the coordinates (with the diffusivity being proportional to the stratification or shear frequencies). We will show for a coupled model system of the North Sea and the Baltic Sea (resolution ~1.8 km) how numerical mixing is substantially reduced and model results become significantly more realistic when vertically adaptive coordinates are applied. We additionally demonstrate that vertically adaptive coordinates perform well
Latour, G; Elias, M; Frigerio, J M
2007-10-01
The diffuse reflectance spectra and the trichromatic coordinates of diffusing stratified paints are modeled. Each layer contains its own pigments, and their optical properties are first determined from experiments. The radiative transfer equation is then solved by the auxiliary function method for modeling the total light scattered by the stratified systems. The results are in good agreement with experimental spectra and validate the modeling. The calculations are then applied on the same stratified systems to study the influence of the observation angle in a bidirectional configuration and to study the influence of the thickness of the layers in a given configuration. In both cases, the reflectance spectra and the trichromatic coordinates are calculated and compared.
Lagged PM2.5 effects in mortality time series: Critical impact of covariate model
The two most common approaches to modeling the effects of air pollution on mortality are the Harvard and the Johns Hopkins (NMMAPS) approaches. These two approaches, which use different sets of covariates, result in dissimilar estimates of the effect of lagged fine particulate ma...
Ali, Sanni; Groenwold, Rolf H.H.; Belitser, S.; Roes, Kit C.B.; Hoes, Arno W.; De Boer, Anthonius; Klungel, Olaf H.
2015-01-01
Background: In building propensity score (PS) model, inclusion of interaction/square terms in addition to the main terms and the use of balance measures has been suggested. However, the impact of assessing balance of several sets of covariates and their interactions/squares on bias/precision is not
The Consequences of Ignoring Multilevel Data Structures in Nonhierarchical Covariance Modeling.
Julian, Marc W.
2001-01-01
Examined the effects of ignoring multilevel data structures in nonhierarchical covariance modeling using a Monte Carlo simulation. Results suggest that when the magnitudes of intraclass correlations are less than 0.05 and the group size is small, the consequences of ignoring the data dependence within the multilevel data structures seem to be…
Lee, Sik-Yum; Song, Xin-Yuan; Tang, Nian-Sheng
2007-01-01
The analysis of interaction among latent variables has received much attention. This article introduces a Bayesian approach to analyze a general structural equation model that accommodates the general nonlinear terms of latent variables and covariates. This approach produces a Bayesian estimate that has the same statistical optimal properties as a…
Directory of Open Access Journals (Sweden)
Lei Qin
2014-05-01
We propose a novel approach for tracking an arbitrary object in video sequences for visual surveillance. The first contribution of this work is an automatic feature extraction method that is able to extract compact discriminative features from a feature pool before computing the region covariance descriptor. As the feature extraction method is adaptive to a specific object of interest, we refer to the region covariance descriptor computed using the extracted features as the adaptive covariance descriptor. The second contribution is to propose a weakly supervised method for updating the object appearance model during tracking. The method performs a mean-shift clustering procedure among the tracking result samples accumulated during a period of time and selects a group of reliable samples for updating the object appearance model. As such, the object appearance model is kept up-to-date and is prevented from contamination even in case of tracking mistakes. We conducted comparative experiments on real-world video sequences, which confirmed the effectiveness of the proposed approaches. The tracking system that integrates the adaptive covariance descriptor and the clustering-based model updating method accomplished stable object tracking on challenging video sequences.
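A region covariance descriptor is simply the sample covariance of per-pixel feature vectors over an image region. A minimal sketch follows; the feature layout and the generalized-eigenvalue dissimilarity are standard choices in this literature, not details taken from the abstract:

```python
import numpy as np
from scipy.linalg import eigh

def region_covariance(F):
    """Region covariance descriptor: d x d sample covariance of per-pixel
    feature vectors (rows = pixels; columns might be x, y, intensity and
    gradient magnitudes -- an illustrative choice)."""
    F = np.asarray(F, dtype=float)
    Z = F - F.mean(axis=0)
    return Z.T @ Z / (F.shape[0] - 1)

def cov_distance(C1, C2):
    """Dissimilarity of two covariance descriptors from the generalized
    eigenvalues of the pencil (C1, C2)."""
    lam = eigh(C1, C2, eigvals_only=True)
    return np.sqrt(np.sum(np.log(lam) ** 2))
```

Identical regions give a distance of zero; tracking then amounts to finding the candidate region whose descriptor minimizes this distance to the model.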
Excess covariance and dynamic instability in a multi-asset model
Anufriev, M.; Bottazzi, G.; Marsili, M.; Pin, P.
2011-01-01
The presence of excess covariance in financial price returns is an accepted empirical fact: the price dynamics of financial assets tend to be more correlated than their fundamentals would justify. We propose an intertemporal equilibrium multi-assets model of financial markets with an explicit and
Cai, Li; Lee, Taehun
2009-01-01
We apply the Supplemented EM algorithm (Meng & Rubin, 1991) to address a chronic problem with the "two-stage" fitting of covariance structure models in the presence of ignorable missing data: the lack of an asymptotically chi-square distributed goodness-of-fit statistic. We show that the Supplemented EM algorithm provides a…
Bhadra, Anindya; Carroll, Raymond J
2016-07-01
In truncated polynomial spline or B-spline models where the covariates are measured with error, a fully Bayesian approach to model fitting requires the covariates and model parameters to be sampled at every Markov chain Monte Carlo iteration. Sampling the unobserved covariates poses a major computational problem, and usually Gibbs sampling is not possible. This forces the practitioner to use a Metropolis-Hastings step, which might suffer from unacceptable performance due to poor mixing and might require careful tuning. In this article we show that, for truncated polynomial spline or B-spline models of degree equal to one, the complete conditional distribution of the covariates measured with error is available explicitly as a mixture of double-truncated normals, thereby enabling a Gibbs sampling scheme. We demonstrate via a simulation study that our technique performs favorably in terms of computational efficiency and statistical performance. Our results indicate up to 62% and 54% increases in mean integrated squared error efficiency when compared to existing alternatives while using truncated polynomial splines and B-splines, respectively. Furthermore, there is evidence that the gain in efficiency increases with the measurement error variance, indicating the proposed method is a particularly valuable tool for challenging applications that present high measurement error. We conclude with a demonstration on a nutritional epidemiology data set from the NIH-AARP study and by pointing out some possible extensions of the current work.
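The computational point, that a mixture of double-truncated normals can be sampled directly (so a Gibbs step replaces Metropolis-Hastings), can be sketched with scipy's truncnorm. The mixture weights, means, scales and truncation intervals below are invented for illustration:

```python
import numpy as np
from scipy.stats import truncnorm

def sample_double_truncated_normal(mu, sigma, lo, hi, rng):
    """Draw from N(mu, sigma^2) truncated to [lo, hi]."""
    a, b = (lo - mu) / sigma, (hi - mu) / sigma   # truncnorm standardizes bounds
    return truncnorm.rvs(a, b, loc=mu, scale=sigma, random_state=rng)

def sample_mixture(weights, mus, sigmas, los, his, rng):
    """One exact draw from a mixture of double-truncated normals:
    pick a component, then sample from that truncated normal."""
    j = rng.choice(len(weights), p=weights)
    return sample_double_truncated_normal(mus[j], sigmas[j], los[j], his[j], rng)

rng = np.random.default_rng(0)
x = sample_mixture([0.4, 0.6], [0.0, 2.0], [1.0, 0.5], [-1.0, 1.5], [1.0, 2.5], rng)
```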
A new procedure to build a model covariance matrix: first results
Barzaghi, R.; Marotta, A. M.; Splendore, R.; Borghi, A.
2012-04-01
In order to validate the results of geophysical models, a common procedure is to compare model predictions with observations by means of statistical tests. A limit of this approach is the lack of a covariance matrix associated with the model results, which may frustrate the achievement of a confident statistical significance of the results. Trying to overcome this limit, we have implemented a new procedure to build a model covariance matrix that could allow a more reliable statistical analysis. This procedure has been developed in the framework of the thermo-mechanical model described in Splendore et al. (2010), which predicts the present-day crustal velocity field in the Tyrrhenian due to Africa-Eurasia convergence and to lateral rheological heterogeneities of the lithosphere. The modelled tectonic velocity field has been compared to the available surface velocity field based on GPS observations, determining the best-fit model and the degree of fitting through the use of a χ2 test. Once we had identified the key model parameters and defined their appropriate ranges of variability, we ran 100 different models for 100 sets of parameter values randomly extracted within the corresponding intervals, obtaining a stack of 100 velocity fields. Then, we calculated the variance and empirical covariance for the stack of results, taking into account also cross-correlation, obtaining a positive-definite, diagonal matrix that represents the covariance matrix of the model. This empirical approach allows us to define a more robust statistical analysis with respect to the classic approach. Reference: Splendore, Marotta, Barzaghi, Borghi and Cannizzaro, 2010. Block model versus thermomechanical model: new insights on the present-day regional deformation in the surroundings of the Calabrian Arc. In: Spalla, Marotta and Gosso (Eds) Advances in Interpretation of Geological Processes: Refinement of Multi scale Data and Integration in Numerical Modelling. Geological Society, London, Special
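The ensemble step described above amounts to stacking the model runs and taking an empirical covariance, which can then feed a χ2 comparison with observations. A minimal numpy sketch with synthetic stand-in fields (the sizes and random data are placeholders):

```python
import numpy as np

rng = np.random.default_rng(42)
n_runs, n = 100, 50
# stand-in for 100 model runs, each a flattened velocity field of length n
runs = rng.normal(size=(n_runs, n))

mean_field = runs.mean(axis=0)
# empirical model covariance, including cross-correlations between grid points
C_model = np.cov(runs, rowvar=False)          # shape (n, n), symmetric

# chi^2-type misfit of synthetic "observations" against the model mean,
# weighted by the (pseudo-)inverse of the model covariance
obs = rng.normal(size=n)
resid = obs - mean_field
chi2 = resid @ np.linalg.pinv(C_model) @ resid
```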
Aoki, Yasunori; Nordgren, Rikard; Hooker, Andrew C
2016-03-01
As the importance of pharmacometric analysis increases, more and more complex mathematical models are introduced and computational error resulting from computational instability starts to become a bottleneck in the analysis. We propose a preconditioning method for non-linear mixed effects models used in pharmacometric analyses to stabilise the computation of the variance-covariance matrix. Roughly speaking, the method reparameterises the model with a linear combination of the original model parameters so that the Hessian matrix of the likelihood of the reparameterised model becomes close to an identity matrix. This approach will reduce the influence of computational error, for example rounding error, to the final computational result. We present numerical experiments demonstrating that the stabilisation of the computation using the proposed method can recover failed variance-covariance matrix computations, and reveal non-identifiability of the model parameters.
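Roughly, such a preconditioning amounts to reparameterising with an inverse square root of the Hessian, so that the Hessian in the new parameters is close to the identity. A toy numpy sketch of that linear-algebra step (not NONMEM's actual implementation):

```python
import numpy as np

def precondition(H):
    """Given an estimate of the likelihood Hessian H (assumed symmetric positive
    definite), return a matrix T = H^(-1/2) so that, for the reparameterised
    model theta = T theta', the new Hessian T^T H T is approximately identity."""
    w, V = np.linalg.eigh(H)                  # H = V diag(w) V^T
    return V @ np.diag(1.0 / np.sqrt(w)) @ V.T

H = np.array([[4.0, 1.0], [1.0, 9.0]])        # toy ill-conditioned Hessian
T = precondition(H)
H_new = T.T @ H @ T                           # ~ identity: well-conditioned
```

Working in the preconditioned parameters reduces the amplification of rounding error when the variance-covariance matrix is computed.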
Modeling the Thickness of Perennial Ice Covers on Stratified Lakes of the Taylor Valley, Antarctica
Obryk, M. K.; Doran, P. T.; Hicks, J. A.; McKay, C. P.; Priscu, J. C.
2016-01-01
A one-dimensional ice cover model was developed to predict and constrain drivers of long term ice thickness trends in chemically stratified lakes of Taylor Valley, Antarctica. The model is driven by surface radiative heat fluxes and heat fluxes from the underlying water column. The model successfully reproduced 16 years (between 1996 and 2012) of ice thickness changes for west lobe of Lake Bonney (average ice thickness = 3.53 m; RMSE = 0.09 m, n = 118) and Lake Fryxell (average ice thickness = 4.22 m; RMSE = 0.21 m, n = 128). Long-term ice thickness trends require coupling with the thermal structure of the water column. The heat stored within the temperature maximum of lakes exceeding a liquid water column depth of 20 m can either impede or facilitate ice thickness change depending on the predominant climatic trend (temperature cooling or warming). As such, shallow (< 20 m deep water columns) perennially ice-covered lakes without deep temperature maxima are more sensitive indicators of climate change. The long-term ice thickness trends are a result of surface energy flux and heat flux from the deep temperature maximum in the water column, the latter of which results from absorbed solar radiation.
Energy Technology Data Exchange (ETDEWEB)
Muroki, T. [Kanagawa Inst. of Technology, Dept. of Mechanical Engineering, Kanagawa (Japan); Moriyoshi, Y. [Chiba Univ., Dept. of Electronics and Mechanical Engineering, Chiba (Japan)
2000-11-01
In a stratified charge engine, a glow plug pilot flame ignition system has been compared with a spark-ignition system for a model stratified charge Wankel combustion chamber. A motored two-stroke diesel engine was operated as a rapid compression and expansion machine with the cylinder head replaced by a model Wankel combustion chamber designed to simulate the temporal changes of air flow and pressure fields inside the chamber of an actual engine. It was found that the pilot flame ignition system had better ignitability and improved combustion characteristics, especially in the lean mixture range, relative to the spark-ignition system. (Author)
Energy Technology Data Exchange (ETDEWEB)
Studnicki, M.; Mądry, W.; Noras, K.; Wójcik-Gront, E.; Gacek, E.
2016-11-01
The main objectives of multi-environmental trials (METs) are to assess cultivar adaptation patterns under different environmental conditions and to investigate genotype by environment (G×E) interactions. Linear mixed models (LMMs) with more complex variance-covariance structures have become recognized and widely used for analyzing METs data. Best practice in METs analysis is to carry out a comparison of competing models with different variance-covariance structures. Improperly chosen variance-covariance structures may lead to biased estimation of means resulting in incorrect conclusions. In this work we focused on adaptive response of cultivars on the environments modeled by the LMMs with different variance-covariance structures. We identified possible limitations of inference when using an inadequate variance-covariance structure. In the presented study we used the dataset on grain yield for 63 winter wheat cultivars, evaluated across 18 locations, during three growing seasons (2008/2009-2010/2011) from the Polish Post-registration Variety Testing System. For the evaluation of variance-covariance structures and the description of cultivars adaptation to environments, we calculated adjusted means for the combination of cultivar and location in models with different variance-covariance structures. We concluded that in order to fully describe cultivars adaptive patterns modelers should use the unrestricted variance-covariance structure. The restricted compound symmetry structure may interfere with proper interpretation of cultivars adaptive patterns. We found, that the factor-analytic structure is also a good tool to describe cultivars reaction on environments, and it can be successfully used in METs data after determining the optimal component number for each dataset. (Author)
Directory of Open Access Journals (Sweden)
Marcin Studnicki
2016-06-01
The main objectives of multi-environmental trials (METs) are to assess cultivar adaptation patterns under different environmental conditions and to investigate genotype by environment (G×E) interactions. Linear mixed models (LMMs) with more complex variance-covariance structures have become recognized and widely used for analyzing METs data. Best practice in METs analysis is to carry out a comparison of competing models with different variance-covariance structures. Improperly chosen variance-covariance structures may lead to biased estimation of means resulting in incorrect conclusions. In this work we focused on adaptive response of cultivars on the environments modeled by the LMMs with different variance-covariance structures. We identified possible limitations of inference when using an inadequate variance-covariance structure. In the presented study we used the dataset on grain yield for 63 winter wheat cultivars, evaluated across 18 locations, during three growing seasons (2008/2009-2010/2011) from the Polish Post-registration Variety Testing System. For the evaluation of variance-covariance structures and the description of cultivars adaptation to environments, we calculated adjusted means for the combination of cultivar and location in models with different variance-covariance structures. We concluded that in order to fully describe cultivars adaptive patterns modelers should use the unrestricted variance-covariance structure. The restricted compound symmetry structure may interfere with proper interpretation of cultivars adaptive patterns. We found that the factor-analytic structure is also a good tool to describe cultivars reaction on environments, and it can be successfully used in METs data after determining the optimal component number for each dataset.
Models with orthogonal block structure, with diagonal blockwise variance-covariance matrices
Carvalho, Francisco; Mexia, João T.; Covas, Ricardo
2017-07-01
We intend to show that in the family of models with orthogonal block structure, OBS, we may single out those with blockwise diagonal variance-covariance matrices, DOBS. Namely, we show that for every model with observation vector y with OBS, there is a model y° = Py, with P orthogonal, which is DOBS, and that the estimation of relevant parameters may be carried out for y°.
Garaud, P; Verhoeven, J
2016-01-01
Shear-induced turbulence could play a significant role in mixing momentum and chemical species in stellar radiation zones, as discussed by Zahn (1974). In this paper we analyze the results of direct numerical simulations of stratified plane Couette flows, in the limit of rapid thermal diffusion, to measure the turbulent diffusivity and turbulent viscosity as a function of the local shear and the local stratification. We find that the stability criterion proposed by Zahn (1974), namely that the product of the gradient Richardson number and the Prandtl number must be smaller than a critical value $(J\Pr)_c$ for instability, adequately accounts for the transition to turbulence in the flow, with $(J\Pr)_c \simeq 0.007$. This result recovers and confirms the prior findings of Prat et al. (2016). Zahn's model for the turbulent diffusivity and viscosity (Zahn 1992), namely that the mixing coefficient should be proportional to the ratio of the thermal diffusivity to the gradient Richardson number, does not satisfact...
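The criterion and mixing model just described can be stated compactly. The critical value 0.007 is taken from the abstract; the order-unity calibration constant C is an assumption for illustration, not a value from the paper:

```python
JPR_CRIT = 0.007   # critical value of J * Pr reported in the abstract

def is_turbulent(J, Pr):
    """Zahn (1974) criterion: shear instability when J * Pr < (J Pr)_c,
    where J is the gradient Richardson number and Pr the Prandtl number."""
    return J * Pr < JPR_CRIT

def turbulent_diffusivity(kappa_T, J, C=1.0):
    """Zahn (1992) mixing model: D_turb proportional to kappa_T / J,
    with kappa_T the thermal diffusivity. C is an assumed constant."""
    return C * kappa_T / J
```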
Wanstall, Taber; Hadji, Layachi
2016-11-01
The convective stability associated with carbon sequestration is modeled by adopting an unstably stratified basic profile having a step-function density with a top-heavy carbon-saturated layer overlying a lighter carbon-free layer. The model takes into account the anisotropy in both permeability and carbon dioxide diffusion, and chemical reactions between the CO2-rich brine and the host mineralogy. We carry out a linear stability analysis to derive the instability threshold parameters for a variety of CO2 boundary conditions. We solve for the minimum thickness of the carbon-rich layer at which convection sets in and quantify how its value is influenced by diffusion, anisotropy, permeability, reaction and the type of boundary conditions. The discontinuity leads to convective concentration contours that have the shape of an asymmetric lens, which we quantify by deriving and making use of the CO2 flux expressions at the interface. The linear problem is extended to the nonlinear regime, the analysis of which leads to the determination of a uniformly valid supercritical steady solution.
Madrasi, Kumpal; Chaturvedula, Ayyappa; Haberer, Jessica E; Sale, Mark; Fossler, Michael J; Bangsberg, David; Baeten, Jared M; Celum, Connie; Hendrix, Craig W
2016-12-06
Adherence is a major factor in the effectiveness of preexposure prophylaxis (PrEP) for HIV prevention. Modeling patterns of adherence helps to identify influential covariates of different types of adherence as well as to enable clinical trial simulation so that appropriate interventions can be developed. We developed a Markov mixed-effects model to understand the covariates influencing adherence patterns to daily oral PrEP. Electronic adherence records (date and time of medication bottle cap opening) from the Partners PrEP ancillary adherence study with a total of 1147 subjects were used. This study included once-daily dosing regimens of placebo, oral tenofovir disoproxil fumarate (TDF), and TDF in combination with emtricitabine (FTC), administered to HIV-uninfected members of serodiscordant couples. One-coin and first- to third-order Markov models were fit to the data using NONMEM® 7.2. Model selection criteria included objective function value (OFV), Akaike information criterion (AIC), visual predictive checks, and posterior predictive checks. Covariates were included based on forward addition (α = 0.05) and backward elimination (α = 0.001). Markov models better described the data than one-coin models. A third-order Markov model gave the lowest OFV and AIC, but the simpler first-order model was used for covariate model building because no additional benefit on prediction of target measures was observed for higher-order models. Female sex and older age had a positive impact on adherence, whereas Sundays, sexual abstinence, and sex with a partner other than the study partner had a negative impact on adherence. Our findings suggest adherence interventions should consider the role of these factors.
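A first-order Markov chain of the kind selected above can be simulated in a few lines: today's probability of taking the dose depends only on yesterday's state. The transition probabilities and day-1 convention below are invented for illustration, not estimates from the study:

```python
import numpy as np

def simulate_adherence(p_take_after_take, p_take_after_miss, n_days, rng):
    """First-order Markov chain for daily pill taking
    (state 1 = dose taken, 0 = dose missed)."""
    states = np.empty(n_days, dtype=int)
    states[0] = 1                                  # assume dose taken on day 1
    for t in range(1, n_days):
        p = p_take_after_take if states[t - 1] == 1 else p_take_after_miss
        states[t] = rng.random() < p
    return states

rng = np.random.default_rng(1)
days = simulate_adherence(0.95, 0.60, 180, rng)    # hypothetical probabilities
adherence = days.mean()                            # fraction of days dose taken
```

Simulating many such subjects, with covariates shifting the transition probabilities, is the basis for the clinical trial simulation mentioned in the abstract.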
Development of a Curved, Stratified, In Vitro Model to Assess Ocular Biocompatibility: e96448
National Research Council Canada - National Science Library
Cameron K Postnikoff; Robert Pintwala; Sara Williams; Ann M Wright; Denise Hileeto; Maud B Gorbet
2014-01-01
... Methods: Immortalized human corneal epithelial cells were grown to confluency on curved cellulose filters for seven days, and were then differentiated and stratified using an air-liquid interface...
Cai, Gaigai; Chen, Xuefeng; Li, Bing; Chen, Baojia; He, Zhengjia
2012-09-25
The reliability of cutting tools is critical to machining precision and production efficiency. The conventional statistic-based reliability assessment method aims at providing a general and overall estimation of reliability for a large population of identical units under given and fixed conditions. However, it has limited effectiveness in depicting the operational characteristics of a cutting tool. To overcome this limitation, this paper proposes an approach to assess the operation reliability of cutting tools. A proportional covariate model is introduced to construct the relationship between operation reliability and condition monitoring information. The wavelet packet transform and an improved distance evaluation technique are used to extract sensitive features from vibration signals, and a covariate function is constructed based on the proportional covariate model. Ultimately, the failure rate function of the cutting tool being assessed is calculated using the baseline covariate function obtained from a small sample of historical data. Experimental results and a comparative study show that the proposed method is effective for assessing the operation reliability of cutting tools.
DEFF Research Database (Denmark)
He, Peng; Eriksson, Frank; Scheike, Thomas H.
2016-01-01
With competing risks data, one often needs to assess the treatment and covariate effects on the cumulative incidence function. Fine and Gray proposed a proportional hazards regression model for the subdistribution of a competing risk with the assumption that the censoring distribution and the covariates are independent. Covariate-dependent censoring sometimes occurs in medical studies. In this paper, we study the proportional hazards regression model for the subdistribution of a competing risk with proper adjustments for covariate-dependent censoring. We consider a covariate-adjusted weight function by fitting the Cox model for the censoring distribution and using the predictive probability for each individual. Our simulation study shows that the covariate-adjusted weight estimator is basically unbiased when the censoring time depends on the covariates, and the covariate-adjusted weight...
Quark model with chiral-symmetry breaking and confinement in the Covariant Spectator Theory
Energy Technology Data Exchange (ETDEWEB)
Biernat, Elmer P. [CFTP, Instituto Superior Técnico, Universidade de Lisboa, 1049-001 Lisboa, Portugal; Pena, Maria Teresa [CFTP, Instituto Superior Técnico, Universidade de Lisboa, 1049-001 Lisboa, Portugal; Departamento de Física, Instituto Superior Técnico, Universidade de Lisboa, 1049-001 Lisboa, Portugal; Ribeiro, José Emílio F. [CeFEMA, Instituto Superior Técnico, Universidade de Lisboa, 1049-001 Lisboa, Portugal; Stadler, Alfred [Departamento de Física, Universidade de Évora, 7000-671 Évora, Portugal; Gross, Franz L. [Thomas Jefferson National Accelerator Facility (TJNAF), Newport News, VA (United States)
2016-03-01
We propose a model for the quark-antiquark interaction in Minkowski space using the Covariant Spectator Theory. We show that with an equal-weighted scalar-pseudoscalar structure for the confining part of our interaction kernel the axial-vector Ward-Takahashi identity is preserved and our model complies with the Adler-zero constraint for pi-pi scattering imposed by chiral symmetry.
Gvirtzman, Haim; Shalev, Eyal; Dahan, Ofer; Hatzor, Yossef H.
2008-01-01
SummaryTwo large-scale field experiments were conducted to track water flow through unsaturated stratified loess deposits. In the experiments, a trench was flooded with water, and water infiltration was allowed until full saturation of the sediment column, to a depth of 20 m, was achieved. The water penetrated through a sequence of alternating silty-sand and sandy-clay loess deposits. The changes in water content over time were monitored at 28 points beneath the trench, using time domain reflectometry (TDR) probes placed in four boreholes. Detailed records were obtained from a 21-day-period of wetting, followed by a 3-month-period of drying, and finally followed by a second 14-day-period of re-wetting. These processes were simulated using a two-dimensional numerical code that solves the flow equation. The model was calibrated using PEST. The simulations demonstrate that the propagation of the wetting front is hampered due to alternating silty-sand and sandy-clay loess layers. Moreover, wetting front propagation is further hampered by the extremely low values of the initial, unsaturated, hydraulic conductivity; thereby increasing the water content within the onion-shaped wetted zone up to full saturation. Numerical simulations indicate that above-hydrostatic pressure is developed within intermediate saturated layers, enhancing wetting front propagation.
The optical interface of a photonic crystal: Modeling an opal with a stratified effective index
Maurin, Isabelle; Laliotis, Athanasios; Bloch, Daniel
2014-01-01
An artificial opal is a compact arrangement of transparent spheres, and is an archetype of a three-dimensional photonic crystal. Here, we describe the optics of an opal using a flexible model based upon a stratified medium whose (effective) index is governed by the opal density in a small planar slice of the opal. We take into account the effect of the substrate and assume a well-controlled number of layers, as occurs for an opal fabricated by Langmuir-Blodgett deposition. The calculations are performed with transfer matrices, and an absorptive component in the effective index is introduced to account for the light scattering. This one-dimensional formalism allows quantitative predictions for reflection and transmission, notably as a function of the ratio between the irradiation wavelength and the sphere diameter, or as a function of the incidence angle or of the polarization. It can be used for an irradiation from the substrate side or from the vacuum side and can account for defect layers. The interface...
Institute of Scientific and Technical Information of China (English)
None
2009-01-01
For the numerical simulation of an active scalar, a new explicit algebraic expression for the active scalar flux was derived based on the Wikström, Wallin and Johansson model (aWWJ model). The Reynolds stress algebraic expressions were augmented by a term to account for the buoyancy effect. The new explicit Reynolds stress and active scalar flux model was then established. The governing equations of this model were solved by a finite volume method with unstructured grids. The thermally sheared, stratified cylinder wake flow was computed with this new model. The computational results are in good agreement with laboratory measurements. This work is a development in the modeling of explicit algebraic Reynolds stress and scalar fluxes, and is also a further modification of the aWWJ model for complex situations such as shear stratified flows.
Cognola, Guido; Sebastiani, Lorenzo; Vagnozzi, Sunny; Zerbini, Sergio
2016-01-01
We consider the Nojiri-Odintsov covariant Horava-like gravitational model, where diffeomorphism invariance is broken dynamically via a non-standard coupling to a perfect fluid. The theory allows one to address some of the potential instability problems present in Horava-Lifshitz gravity due to explicit diffeomorphism invariance breaking. The fluid is instead constructed from a scalar field constrained by a Lagrange multiplier. This construction allows one to identify the scalar field with the mimetic field of the recently proposed mimetic gravity. Subsequently, we thoroughly explore the consequences of this identification. By adding a potential for the scalar field, we show how one can reproduce a number of interesting cosmological scenarios. We then turn to the study of perturbations around a flat FLRW background, showing that the fluid in question behaves as an irrotational fluid, with zero sound speed. To address this problem, we consider a modified version of the theory, adding higher derivative terms in a way wh...
Zhang, Zhongrui; Zhong, Quanlin; Niklas, Karl J.; Cai, Liang; Yang, Yusheng; Cheng, Dongliang
2016-08-01
Metabolic scaling theory (MST) posits that the scaling exponents among plant height H, diameter D, and biomass M will covary across phyletically diverse species. However, the relationships between scaling exponents and normalization constants remain unclear. Therefore, we developed a predictive model for the covariation of H, D, and stem volume V scaling relationships and used data from Chinese fir (Cunninghamia lanceolata) in Jiangxi province, China to test it. As predicted by the model and supported by the data, normalization constants are positively correlated with their associated scaling exponents for D vs. V and H vs. V, whereas normalization constants are negatively correlated with the scaling exponents of H vs. D. The prediction model also yielded reliable estimations of V (mean absolute percentage error = 10.5 ± 0.32 SE across 12 model calibrated sites). These results (1) support a totally new covariation scaling model, (2) indicate that differences in stem volume scaling relationships at the intra-specific level are driven by anatomical or ecophysiological responses to site quality and/or management practices, and (3) provide an accurate non-destructive method for predicting Chinese fir stem volume.
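Scaling relationships of this kind are conventionally estimated as power laws, V = c·D^a, fitted on log-log axes, so the normalization constant c is the back-transformed intercept and the exponent a is the slope. A minimal sketch of such a fit on simulated diameter-volume data (all parameter values below are assumed for illustration and are not the paper's calibration):

```python
import numpy as np

# Hypothetical diameter (cm) and stem volume (m^3) measurements.
rng = np.random.default_rng(0)
D = rng.uniform(5.0, 40.0, size=200)
true_a, true_c = 2.4, 1.2e-4                      # assumed exponent / constant
V = true_c * D**true_a * np.exp(rng.normal(0.0, 0.1, size=200))

# Fit log V = log c + a log D by ordinary least squares.
a, log_c = np.polyfit(np.log(D), np.log(V), 1)
print(f"scaling exponent a ~ {a:.2f}, normalization constant c ~ {np.exp(log_c):.2e}")
```

The same regression applied separately at each site would yield the site-level (c, a) pairs whose correlation the covariation model describes.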
Directory of Open Access Journals (Sweden)
A. Budishchev
2014-09-01
Most plot-scale methane emission models – of which many have been developed in the recent past – are validated using data collected with the closed-chamber technique. This method, however, suffers from low spatial representativeness and poor temporal resolution. Also, during a chamber-flux measurement the air within the chamber is separated from the ambient atmosphere, which negates the influence of wind on emissions. Additionally, some methane models are validated by upscaling fluxes based on area-weighted averages of modelled fluxes and comparing those to the eddy covariance (EC) flux. This technique is rather inaccurate, as the area of upscaling may differ from the EC tower footprint, introducing significant mismatch. In this study, we present an approach to validate plot-scale methane models with EC observations using the footprint-weighted average method. Our results show that the fluxes obtained by the footprint-weighted average method are of the same magnitude as the EC flux. More importantly, the temporal dynamics of the EC flux on a daily timescale are also captured (r² = 0.7). In contrast, using the area-weighted average method yielded a low correlation (r² = 0.14) with the EC measurements. This shows that the footprint-weighted average method is preferable when validating methane emission models with EC fluxes for areas with a heterogeneous and irregular vegetation pattern.
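The contrast between the two upscaling schemes comes down to which set of weights multiplies the per-class modelled fluxes: whole-area cover fractions versus flux-footprint fractions. A toy numerical sketch with hypothetical vegetation-class fluxes and fractions (all numbers assumed):

```python
import numpy as np

# Hypothetical modelled CH4 fluxes for three vegetation classes (mg m-2 h-1).
modelled_flux = np.array([2.0, 0.5, 4.5])

# Fractions of each class in the whole study area vs. in the EC tower footprint.
area_fraction      = np.array([0.50, 0.30, 0.20])   # assumed
footprint_fraction = np.array([0.20, 0.10, 0.70])   # assumed, flux-footprint weighted

area_avg      = np.sum(area_fraction * modelled_flux)
footprint_avg = np.sum(footprint_fraction * modelled_flux)
print(area_avg, footprint_avg)
```

When the footprint over-samples a high-emitting class, as here, the two averages diverge substantially, which is exactly the mismatch the footprint-weighted method avoids.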
Instabilities of continuously stratified zonal equatorial jets in a periodic channel model
Directory of Open Access Journals (Sweden)
S. Masina
Several numerical experiments are performed in a nonlinear, multi-level periodic channel model centered on the equator with different zonally uniform background flows which resemble the South Equatorial Current (SEC). Analysis of the simulations focuses on identifying stability criteria for a continuously stratified fluid near the equator. A 90 m deep frontal layer is required to destabilize a zonally uniform, 10° wide, westward surface jet that is symmetric about the equator and has a maximum velocity of 100 cm/s. In this case, the phase velocity of the excited unstable waves is very similar to the phase speed of the Tropical Instability Waves (TIWs) observed in the eastern Pacific Ocean. The vertical scale of the baroclinic waves corresponds to the frontal layer depth, and their phase speed increases as the vertical shear of the jet is doubled. When the westward surface parabolic jet is made asymmetric about the equator, in order to simulate more realistically the structure of the SEC in the eastern Pacific, two kinds of instability are generated. The oscillations that grow north of the equator have a baroclinic nature, while those generated on and very close to the equator have a barotropic nature.
This study shows that the potential for baroclinic instability in the equatorial region can be as large as at mid-latitudes, if the tendency of isotherms to have a smaller slope for a given zonal velocity, when the Coriolis parameter vanishes, is compensated for by the wind effect.
Key words. Oceanography: general (equatorial oceanography; numerical modeling) – Oceanography: physics (fronts and jets)
Correcting for Measurement Error in Time-Varying Covariates in Marginal Structural Models.
Kyle, Ryan P; Moodie, Erica E M; Klein, Marina B; Abrahamowicz, Michał
2016-08-01
Unbiased estimation of causal parameters from marginal structural models (MSMs) requires a fundamental assumption of no unmeasured confounding. Unfortunately, the time-varying covariates used to obtain inverse probability weights are often error-prone. Although substantial measurement error in important confounders is known to undermine control of confounders in conventional unweighted regression models, this issue has received comparatively limited attention in the MSM literature. Here we propose a novel application of the simulation-extrapolation (SIMEX) procedure to address measurement error in time-varying covariates, and we compare 2 approaches. The direct approach to SIMEX-based correction targets outcome model parameters, while the indirect approach corrects the weights estimated using the exposure model. We assess the performance of the proposed methods in simulations under different clinically plausible assumptions. The simulations demonstrate that measurement errors in time-dependent covariates may induce substantial bias in MSM estimators of causal effects of time-varying exposures, and that both proposed SIMEX approaches yield practically unbiased estimates in scenarios featuring low-to-moderate degrees of error. We illustrate the proposed approach in a simple analysis of the relationship between sustained virological response and liver fibrosis progression among persons infected with hepatitis C virus, while accounting for measurement error in γ-glutamyltransferase, using data collected in the Canadian Co-infection Cohort Study from 2003 to 2014.
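The core SIMEX recipe, deliberately adding extra measurement error with variance λσ²_u, refitting, and extrapolating the estimate back to λ = -1, can be sketched on a plain linear regression with one error-prone covariate (a simplified analogue, not the authors' MSM implementation; all parameter values are assumed):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 5000
x = rng.normal(0, 1, n)                  # true covariate
y = 1.0 + 2.0 * x + rng.normal(0, 1, n)  # outcome; true slope = 2
sigma_u = 0.5                            # known measurement-error SD
w = x + rng.normal(0, sigma_u, n)        # error-prone observed covariate

# Simulation step: add extra error with variance lambda * sigma_u^2, refit.
lambdas = np.array([0.0, 0.5, 1.0, 1.5, 2.0])
slopes = []
for lam in lambdas:
    est = np.mean([
        np.polyfit(w + rng.normal(0, np.sqrt(lam) * sigma_u, n), y, 1)[0]
        for _ in range(50)  # average over B pseudo-replicates
    ])
    slopes.append(est)

# Extrapolation step: fit a quadratic in lambda, evaluate at lambda = -1.
coef = np.polyfit(lambdas, slopes, 2)
simex_slope = np.polyval(coef, -1.0)
naive_slope = slopes[0]
print(naive_slope, simex_slope)
```

The naive slope is attenuated toward zero (about 1.6 here for a true slope of 2), while the quadratic extrapolation recovers most of the attenuation; the article's indirect variant applies the same loop to the exposure-model weights rather than the outcome model.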
Maximum Likelihood Inference for the Cox Regression Model with Applications to Missing Covariates.
Chen, Ming-Hui; Ibrahim, Joseph G; Shao, Qi-Man
2009-10-01
In this paper, we carry out an in-depth theoretical investigation for existence of maximum likelihood estimates for the Cox model (Cox, 1972, 1975) both in the full data setting as well as in the presence of missing covariate data. The main motivation for this work arises from missing data problems, where models can easily become difficult to estimate with certain missing data configurations or large missing data fractions. We establish necessary and sufficient conditions for existence of the maximum partial likelihood estimate (MPLE) for completely observed data (i.e., no missing data) settings as well as sufficient conditions for existence of the maximum likelihood estimate (MLE) for survival data with missing covariates via a profile likelihood method. Several theorems are given to establish these conditions. A real dataset from a cancer clinical trial is presented to further illustrate the proposed methodology.
S-wave $\\pi^0$ Production in pp Collision in a Covariant OBE Model
Gedalin, E; Razdolskaya, L A
1999-01-01
The total cross section for the $pp \to pp\pi^0$ reaction at energies close to threshold is calculated in a covariant one-boson-exchange model. The amplitudes for the elementary $BN \to N\pi^0$ processes are taken to be the sum of s-, u- and t-channel pole terms. The main contribution to the primary production amplitude is due to sigma meson exchange. Both the scale and the energy dependence of the cross section are perfectly reproduced.
A covariant model for the $\\gamma^\\ast N \\to N^\\ast(1520)$ reaction
Ramalho, G
2014-01-01
We apply the covariant spectator quark model to the study of the electromagnetic structure of the $N^\\ast(1520)$ state ($J^{P}= \\frac{3}{2}^-$), an important resonance from the second resonance region in both spacelike and timelike regimes. The contributions from the valence quark effects are calculated for the $\\gamma^\\ast N \\to N^\\ast(1520)$ helicity amplitudes. The results are used to parametrize the meson cloud dominant at low $Q^2$.
Tian, Wei; Cai, Li; Thissen, David; Xin, Tao
2013-01-01
In item response theory (IRT) modeling, the item parameter error covariance matrix plays a critical role in statistical inference procedures. When item parameters are estimated using the EM algorithm, the parameter error covariance matrix is not an automatic by-product of item calibration. Cai proposed the use of Supplemented EM algorithm for…
Inverse modeling of the terrestrial carbon flux in China with flux covariance among inverted regions
Wang, H.; Jiang, F.; Chen, J. M.; Ju, W.; Wang, H.
2011-12-01
Quantitative understanding of the role of the ocean and terrestrial biosphere in the global carbon cycle, and of their response and feedback to climate change, is required for future projections of the global climate. China has the largest amount of anthropogenic CO2 emissions, diverse terrestrial ecosystems and an unprecedented rate of urbanization. Thus information on the spatial and temporal distributions of the terrestrial carbon flux in China is of great importance in understanding the global carbon cycle. We developed a nested inversion with a focus on China. Based on the 22 TransCom regions for the globe, we divide China and its neighboring countries into 17 regions, making 39 regions in total for the globe. A Bayesian synthesis inversion is made to estimate the terrestrial carbon flux based on GlobalView CO2 data. In the inversion, GEOS-Chem is used as the transport model to develop the transport matrix. A terrestrial ecosystem model named BEPS is used to produce the prior surface flux to constrain the inversion. However, the sparseness of available observation stations in Asia poses a challenge to the inversion for the 17 small regions. To obtain an additional constraint on the inversion, a prior flux covariance matrix is constructed using the BEPS model by analyzing the correlation in the net carbon flux among regions under variable climate conditions. The use of the covariance among different regions in the inversion effectively extends the information content of CO2 observations to more regions. The carbon fluxes over the 39 land and ocean regions are inverted for the period from 2004 to 2009. In order to investigate the impact of introducing a covariance matrix with non-zero off-diagonal values into the inversion, the inverted terrestrial carbon flux over China is evaluated against ChinaFlux eddy-covariance observations after applying an upscaling methodology.
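For Gaussian errors, a Bayesian synthesis inversion reduces to the linear update s_post = s_prior + C_s Hᵀ(H C_s Hᵀ + C_d)⁻¹(d − H s_prior); non-zero off-diagonals in the prior flux covariance C_s are what let an observation constrain regions it does not directly sample. A toy numpy sketch (the dimensions, transport matrix, and all covariances are assumed for illustration, not the actual GEOS-Chem/BEPS system):

```python
import numpy as np

rng = np.random.default_rng(2)
# Hypothetical setup: 39 regions, 100 CO2 observations.
n_reg, n_obs = 39, 100
H = rng.normal(size=(n_obs, n_reg))      # transport (Jacobian) matrix, assumed
s_true = rng.normal(size=n_reg)          # "true" regional fluxes
d = H @ s_true + rng.normal(0, 0.1, n_obs)

s_prior = np.zeros(n_reg)
C_d = 0.01 * np.eye(n_obs)               # observation-error covariance

def invert(C_s):
    """Bayesian synthesis inversion: posterior mean of regional fluxes."""
    K = C_s @ H.T @ np.linalg.inv(H @ C_s @ H.T + C_d)
    return s_prior + K @ (d - H @ s_prior)

# Diagonal prior vs. a prior with off-diagonal covariance between regions.
C_diag = np.eye(n_reg)
C_corr = 0.3 * np.ones((n_reg, n_reg))
np.fill_diagonal(C_corr, 1.0)
s_hat_diag = invert(C_diag)
s_hat_corr = invert(C_corr)
print(np.abs(s_hat_diag - s_true).mean(), np.abs(s_hat_corr - s_true).mean())
```

With a realistically sparse observation network, fewer rows in H, the off-diagonal prior terms carry proportionally more of the constraint, which is the effect the abstract describes.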
DEFF Research Database (Denmark)
Strathe, Anders B; Mark, Thomas; Nielsen, Bjarne
Random regression models were used to estimate covariance functions between cumulated feed intake (CFI) and body weight (BW) in 8424 Danish Duroc pigs. Random regressions on second order Legendre polynomials of age were used to describe genetic and permanent environmental curves in BW and CFI. Based on covariance functions, residual feed intake (RFI) was defined and derived as the conditional genetic variance in feed intake given mid-test breeding value for BW and rate of gain. The heritability of RFI over the entire period was 0.36, but more interestingly, the genetic variance of RFI was 6% of the genetic variance in feed intake, revealing that a minor component of feed intake was genetically independent of maintenance and growth. In conclusion, the approach derived herein led to a consistent definition of RFI, where genomic breeding values were easily obtained...
Tanaka, S
2004-01-01
Noncommutative field theory on Yang's quantized space-time algebra (YSTA) is studied. It gives a theoretical framework to reformulate the matrix model as quantum mechanics of $D_0$ branes in a Lorentz-covariant form. The so-called kinetic term ($\sim {\hat{P_i}}^2)$ and potential term ($\sim {[\hat{X_i},\hat{X_j}]}^2)$ of $D_0$ branes in the matrix model are described now in terms of the Casimir operator of $SO(D,1)$, a subalgebra of the primary algebra $SO(D+1,1)$ which underlies YSTA with two contraction parameters, $\lambda$ and $R$. $D$-dimensional noncommutative space-time and momentum operators $\hat{X_\mu}$ and $\hat{P_\mu}$ in YSTA show a distinctive spectral structure, that is, space-components $\hat{X_i}$ and $\hat{P_i}$ have discrete eigenvalues, and time-components $\hat{X_0}$ and $\hat{P_0}$ continuous eigenvalues, consistently with Lorentz-covariance. According to the method of Lorentz-covariant Moyal star product proper to YSTA, the field equation of $D_0$ brane on YSTA is derived in a nontrivial ...
Covariant quark model of form factors in the heavy mass limit
Yaouanc, A. Le; Oliver, L; Pène, O.; Raynal, J. -C.
1995-01-01
We show that quark models of current matrix-elements based on the Bakamjian-Thomas construction of relativistic states with a fixed number of particles, plus the additivity assumption, are covariant in the heavy-quark limit and satisfy the full set of heavy-quark symmetry relations discovered by Isgur and Wise. We find the lower bound of $\\rho^2$ in such models to be $3/4$ for ground state mesons, independently of any parameter. Another welcome property of these models is that in the infinite...
Spatially Homogeneous Bianchi Type V Cosmological Model in the Scale-Covariant Theory of Gravitation
Institute of Scientific and Technical Information of China (English)
Shri Ram; M.K.Verma; Mohd.Zeyauddin
2009-01-01
We discuss spatially homogeneous and anisotropic Bianchi type-V spacetime filled with a perfect fluid in the framework of the scale-covariant theory of gravitation proposed by Canuto et al. By applying the law of variation for Hubble's parameter, exact solutions of the field equations are obtained, which correspond to a model of the universe having a big-bang type singularity at the initial time t = 0. The cosmological model, evolving from the initial singularity, expands with power-law expansion and gives essentially an empty space for large time. The physical and dynamical properties of the model are also discussed.
[A variance-covariance model for analysis of pedigrees with inbreeding].
Svishcheva, G R
2007-08-01
A variance-covariance model is suggested for describing the distribution of a quantitative trait analyzed in animal pedigrees resulting from crosses of outbred lines. The model takes inbreeding into account. A special parameter characterizing the degree of inbreeding has been introduced, which makes the model versatile. Pedigrees with the same structure that do or do not contain inbred individuals have been compared to analyze the effect of inbreeding on the parameters of the trait distribution, such as the mean genotypic value and the variance of the trait.
Diallo, Thierno M O; Morin, Alexandre J S; Lu, HuiZhong
2017-03-01
This article evaluates the impact of partial or total covariate inclusion or exclusion on the class enumeration performance of growth mixture models (GMMs). Study 1 examines the effect of including an inactive covariate when the population model is specified without covariates. Study 2 examines the case in which the population model is specified with 2 covariates influencing only the class membership. Study 3 examines a population model including 2 covariates influencing the class membership and the growth factors. In all studies, we contrast the accuracy of various indicators to correctly identify the number of latent classes as a function of different design conditions (sample size, mixing ratio, invariance or noninvariance of the variance-covariance matrix, class separation, and correlations between the covariates in Studies 2 and 3) and covariate specification (exclusion, partial or total inclusion as influencing class membership only, and partial or total inclusion as influencing class membership and the growth factors in a class-invariant or class-varying manner). The accuracy of the indicators shows important variation across studies, indicators, design conditions, and specification of the covariate effects. However, the results suggest that the GMM class enumeration process should be conducted without covariates, and should rely mostly on the Bayesian information criterion (BIC) and consistent Akaike information criterion (CAIC) as the most reliable indicators under conditions of high class separation (as indicated by higher entropy), versus the sample size adjusted BIC or CAIC (SBIC, SCAIC) and bootstrapped likelihood ratio test (BLRT) under conditions of low class separation (indicated by lower entropy).
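As a simplified analogue of the class-enumeration step (using a plain Gaussian mixture rather than a growth mixture model), the BIC-based procedure can be sketched with scikit-learn; the data and the separation level are assumed for illustration:

```python
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(3)
# Two well-separated latent classes (a high-entropy analogue).
X = np.vstack([rng.normal(0.0, 1.0, (200, 2)),
               rng.normal(6.0, 1.0, (200, 2))])

# Fit candidate models with 1-4 components and pick the lowest BIC.
bic = {k: GaussianMixture(n_components=k, random_state=0).fit(X).bic(X)
       for k in (1, 2, 3, 4)}
best_k = min(bic, key=bic.get)
print(best_k)
```

With well-separated classes, as here, BIC tends to recover the true number of components; under low separation the article's results favour SBIC/SCAIC or the BLRT instead.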
Lu, Dan; Ye, Ming; Meyer, Philip D.; Curtis, Gary P.; Shi, Xiaoqing; Niu, Xu-Feng; Yabusaki, Steve B.
2013-01-01
When conducting model averaging for assessing groundwater conceptual model uncertainty, the averaging weights are often evaluated using model selection criteria such as AIC, AICc, BIC, and KIC (Akaike Information Criterion, Corrected Akaike Information Criterion, Bayesian Information Criterion, and Kashyap Information Criterion, respectively). However, this method often leads to an unrealistic situation in which the best model receives overwhelmingly large averaging weight (close to 100%), which cannot be justified by available data and knowledge. It was found in this study that this problem was caused by using the covariance matrix, CE, of measurement errors for estimating the negative log likelihood function common to all the model selection criteria. This problem can be resolved by using the covariance matrix, Cek, of total errors (including model errors and measurement errors) to account for the correlation between the total errors. An iterative two-stage method was developed in the context of maximum likelihood inverse modeling to iteratively infer the unknown Cek from the residuals during model calibration. The inferred Cek was then used in the evaluation of model selection criteria and model averaging weights. While this method was limited to serial data using time series techniques in this study, it can be extended to spatial data using geostatistical techniques. The method was first evaluated in a synthetic study and then applied to an experimental study, in which alternative surface complexation models were developed to simulate column experiments of uranium reactive transport. It was found that the total errors of the alternative models were temporally correlated due to the model errors. The iterative two-stage method using Cekresolved the problem that the best model receives 100% model averaging weight, and the resulting model averaging weights were supported by the calibration results and physical understanding of the alternative models. Using Cek
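The averaging weights in question are the standard information-criterion weights, w_k ∝ exp(−Δ_k/2) with Δ_k = IC_k − min_j IC_j. A short sketch (the IC values below are hypothetical) showing how large Δ values produce the near-100% weight problem, and how smaller, correlation-adjusted Δ values spread the weights:

```python
import numpy as np

def averaging_weights(ic_values):
    """Information-criterion based model-averaging weights:
    w_k proportional to exp(-0.5 * (IC_k - IC_min))."""
    ic = np.asarray(ic_values, dtype=float)
    delta = ic - ic.min()
    w = np.exp(-0.5 * delta)
    return w / w.sum()

# With the measurement-error covariance only, IC differences are often huge,
# so the best model gets essentially all of the weight:
print(averaging_weights([100.0, 140.0, 165.0]))

# Accounting for correlated total errors typically shrinks the differences:
print(averaging_weights([100.0, 102.0, 103.0]))
```

In the first case the leading weight is indistinguishable from 1; in the second the weights are spread across the candidate models, which is the behaviour the iterative two-stage method restores.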
Rathbun, Stephen L; Shiffman, Saul
2016-03-01
Cigarette smoking is a prototypical example of a recurrent event. The pattern of recurrent smoking events may depend on time-varying covariates including mood and environmental variables. Fixed effects and frailty models for recurrent events data assume that smokers have a common association with time-varying covariates. We develop a mixed effects version of a recurrent events model that may be used to describe variation among smokers in how they respond to those covariates, potentially leading to the development of individual-based smoking cessation therapies. Our method extends the modified EM algorithm of Steele (1996) for generalized mixed models to recurrent events data with partially observed time-varying covariates. It is offered as an alternative to the method of Rizopoulos, Verbeke, and Lesaffre (2009), who extended Steele's (1996) algorithm to a joint model for the recurrent events data and time-varying covariates. Our approach does not require a model for the time-varying covariates, but instead assumes that the time-varying covariates are sampled according to a Poisson point process with known intensity. Our methods are well suited to data collected using Ecological Momentary Assessment (EMA), a method of data collection widely used in the behavioral sciences to collect data on emotional state and recurrent events in the everyday environments of study subjects using electronic devices such as Personal Digital Assistants (PDAs) or smart phones.
de Brito, G P; Gomes, Y M P; Junior, J T Guaitolini; Nikoofard, V
2016-01-01
In this paper we introduce a modified covariant quantum algebra based on the so-called Quesne-Tkachuk algebra. By means of a deformation procedure we arrive at a class of higher derivative models of gravity. The study of the particle spectra of these models reveals an equivalence with the physical content of the well-known renormalizable and super-renormalizable higher derivative gravities. The particle spectrum exhibits the presence of spurious complex ghosts and, in light of this problem, we suggest an interesting interpretation in the context of minimal length theories. A discussion regarding the non-relativistic potential energy is also proposed.
An immersed interface method for two-dimensional modelling of stratified flow in pipes
Berthelsen, Petter Andreas
2004-01-01
This thesis deals with the construction of a numerical method for solving two-dimensional elliptic interface problems, such as fully developed stratified flow in pipes. Interface problems are characterized by its non-smooth and often discontinuous behaviour along a sharp boundary separating the fluids or other materials. Classical numerical schemes are not suitable for these problems due to the irregular geometry of the interface. Standard finite difference discretization across the interface...
Schuurman, N K; Grasman, R P P P; Hamaker, E L
2016-01-01
Multilevel autoregressive models are especially suited for modeling between-person differences in within-person processes. Fitting these models with Bayesian techniques requires the specification of prior distributions for all parameters. Often it is desirable to specify prior distributions that have negligible effects on the resulting parameter estimates. However, the conjugate prior distribution for covariance matrices, the Inverse-Wishart distribution, tends to be informative when variances are close to zero. This is problematic for multilevel autoregressive models, because autoregressive parameters are usually small for each individual, so that the variance of these parameters will be small. We performed a simulation study to compare the performance of three Inverse-Wishart prior specifications suggested in the literature, when one or more variances for the random effects in the multilevel autoregressive model are small. Our results show that the prior specification that uses plug-in ML estimates of the variances performs best. We advise always including a sensitivity analysis for the prior specification for covariance matrices of random parameters, especially in autoregressive models, and including a data-based prior specification in this analysis. We illustrate such an analysis by means of an empirical application on repeated measures data on worrying and positive affect.
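The problem can be made concrete by drawing from the Inverse-Wishart prior itself: with an identity scale matrix the prior puts essentially no mass near a small random-effect variance, whereas a plug-in (data-based) scale does. A univariate sketch with scipy (the degrees of freedom and the "true" variance are assumed for illustration):

```python
import numpy as np
from scipy.stats import invwishart

rng = np.random.default_rng(4)
true_var = 0.01  # small random-effect variance (e.g. of an AR parameter)

# "Default" Inverse-Wishart prior with identity scale vs. a data-based
# (plug-in) scale matched to the small variance.
default_prior   = invwishart(df=3, scale=np.eye(1))
databased_prior = invwishart(df=3, scale=true_var * np.eye(1))

d_draws = default_prior.rvs(size=5000, random_state=rng)
b_draws = databased_prior.rvs(size=5000, random_state=rng)

# The default prior puts almost no mass near the true small variance,
# so it will pull posterior variance estimates upward.
print(np.mean(d_draws < 0.05), np.mean(b_draws < 0.05))
```

The first proportion is essentially zero while the second is close to one, illustrating why the plug-in ML specification performed best in the simulations.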
Liu, Siwei; Rovine, Michael J; Molenaar, Peter C M
2012-03-01
With increasing popularity, growth curve modeling is more and more often considered the first choice for analyzing longitudinal data. Although the growth curve approach is often a good choice, other modeling strategies may more directly answer questions of interest. It is common to see researchers fit growth curve models without considering alternative modeling strategies. In this article we compare 3 approaches for analyzing longitudinal data: repeated measures analysis of variance, covariance pattern models, and growth curve models. As all are members of the general linear mixed model family, they represent somewhat different assumptions about the way individuals change. These assumptions result in different patterns of covariation among the residuals around the fixed effects. In this article, we first indicate the kinds of data that are appropriately modeled by each and use real data examples to demonstrate possible problems associated with the blanket selection of the growth curve model. We then present a simulation that indicates the utility of the Akaike information criterion and Bayesian information criterion in the selection of a proper residual covariance structure. The results cast doubt on the popular practice of automatically using growth curve modeling for longitudinal data without comparing the fit of different models. Finally, we provide some practical advice for assessing mean changes in the presence of correlated data.
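The model comparison the simulation supports rests on penalized likelihood: AIC = 2k − 2ℓ and BIC = k ln n − 2ℓ, computed for each candidate residual-covariance structure. A minimal sketch of the bookkeeping (the log-likelihoods and parameter counts below are hypothetical):

```python
import math

def aic(loglik, k):          # k: number of estimated parameters
    return 2 * k - 2 * loglik

def bic(loglik, k, n):       # n: number of subjects (or observations)
    return k * math.log(n) - 2 * loglik

# Hypothetical fits of the same longitudinal data (n = 100 subjects) under
# three residual-covariance structures:
fits = {
    "compound symmetry (RM-ANOVA)":            (-612.4, 6),
    "unstructured (covariance pattern)":       (-598.7, 14),
    "random intercept + slope (growth curve)": (-603.1, 8),
}
for name, (ll, k) in fits.items():
    print(f"{name}: AIC={aic(ll, k):.1f}  BIC={bic(ll, k, 100):.1f}")
```

The structure with the lowest criterion value is preferred; because BIC penalizes parameters more heavily than AIC for n > 7 or so, the two criteria can disagree, which is why the article examines both.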
Dreano, D.
2017-04-05
Specification and tuning of errors from dynamical models are important issues in data assimilation. In this work, we propose an iterative expectation-maximisation (EM) algorithm to estimate the model error covariances using classical extended and ensemble versions of the Kalman smoother. We show that, for additive model errors, the estimate of the error covariance converges. We also investigate other forms of model error, such as parametric or multiplicative errors. We show that additive Gaussian model error is able to compensate for non-additive sources of error in the algorithms we propose. We also demonstrate the limitations of the extended version of the algorithm and recommend the use of the more robust and flexible ensemble version. This article is a proof of concept of the methodology with the Lorenz-63 attractor. We developed an open-source Python library to enable future users to apply the algorithm to their own nonlinear dynamical models.
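For a linear-Gaussian state-space model the ensemble machinery reduces to an exact Kalman smoother, and the EM update for an additive model-error covariance has a closed form: Q ← (1/(T−1)) Σ_t E[(x_t − a x_{t−1})² | y_{1:T}]. A scalar sketch of this iteration (this toy AR(1) system merely stands in for the Lorenz-63 experiments; all parameter values are assumed):

```python
import numpy as np

rng = np.random.default_rng(6)

# Toy linear-Gaussian state-space model:
#   x_t = a x_{t-1} + w_t,  w_t ~ N(0, Q_true)   (additive model error)
#   y_t = x_t + v_t,        v_t ~ N(0, R)
a, Q_true, R, T = 0.9, 0.1, 0.5, 2000
x = np.empty(T); x[0] = 0.0
for t in range(1, T):
    x[t] = a * x[t - 1] + rng.normal(0, np.sqrt(Q_true))
y = x + rng.normal(0, np.sqrt(R), T)

def em_Q(y, a, R, Q0, n_iter=100):
    """EM estimate of the model-error variance Q via a Kalman smoother."""
    T = len(y); Q = Q0
    for _ in range(n_iter):
        # E-step, part 1: Kalman filter (predicted xp/Pp, filtered xf/Pf).
        xf = np.empty(T); Pf = np.empty(T); xp = np.empty(T); Pp = np.empty(T)
        xp[0], Pp[0] = 0.0, 1.0
        for t in range(T):
            if t > 0:
                xp[t] = a * xf[t - 1]; Pp[t] = a**2 * Pf[t - 1] + Q
            K = Pp[t] / (Pp[t] + R)
            xf[t] = xp[t] + K * (y[t] - xp[t]); Pf[t] = (1 - K) * Pp[t]
        # E-step, part 2: RTS smoother with lag-one cross-covariances.
        xs = xf.copy(); Ps = Pf.copy()
        Pcross = np.empty(T)          # Pcross[t] = Cov(x_t, x_{t-1} | y_{1:T})
        for t in range(T - 2, -1, -1):
            J = Pf[t] * a / Pp[t + 1]
            xs[t] = xf[t] + J * (xs[t + 1] - xp[t + 1])
            Ps[t] = Pf[t] + J**2 * (Ps[t + 1] - Pp[t + 1])
            Pcross[t + 1] = J * Ps[t + 1]
        # M-step: Q = mean of E[(x_t - a x_{t-1})^2] over t = 1..T-1.
        Ex2 = xs**2 + Ps
        Exx = xs[1:] * xs[:-1] + Pcross[1:]
        Q = np.mean(Ex2[1:] - 2 * a * Exx + a**2 * Ex2[:-1])
    return Q

Q_hat = em_Q(y, a, R, Q0=1.0)
print(f"estimated Q ~ {Q_hat:.3f} (true {Q_true})")
```

Starting from a badly mis-specified Q0, the iteration contracts toward the maximum-likelihood value; the extended and ensemble smoothers in the article replace the exact filter/smoother recursions with linearized and Monte Carlo approximations for nonlinear dynamics.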
Modeling and Forecasting (Un)Reliable Realized Covariances for More Reliable Financial Decisions
DEFF Research Database (Denmark)
Bollerslev, Tim; Patton, Andrew J.; Quaedvlieg, Rogier
We propose a new framework for modeling and forecasting common financial risks based on (un)reliable realized covariance measures constructed from high-frequency intraday data. Our new approach explicitly incorporates the effect of measurement errors and time-varying attenuation biases...... turnover and statistically superior positions compared to existing procedures. Translating these statistical improvements into economic gains, we find that under empirically realistic assumptions a risk-averse investor would be willing to pay up to 170 basis points per year to shift to using the new class...
The shape of the $\\Delta$ baryon in a covariant spectator quark model
Ramalho, G; Stadler, A
2012-01-01
Using a covariant spectator quark model that describes the recent lattice QCD data for the $\\Delta$ electromagnetic form factors and all available experimental data on $\\gamma N \\to \\Delta$ transitions, we analyze the charge and magnetic dipole distributions of the $\\Delta$ baryon and discuss its shape. We conclude that the quadrupole moment of the $\\Delta$ is a good indicator of the deformation and that the $\\Delta^+$ charge distribution has an oblate shape. We also calculate transverse moments and find that they do not lead to unambiguous conclusions about the underlying shape.
Simulation of parametric model towards the fixed covariate of right censored lung cancer data
Afiqah Muhamad Jamil, Siti; Asrul Affendi Abdullah, M.; Kek, Sie Long; Ridwan Olaniran, Oyebayo; Enera Amran, Syahila
2017-09-01
In this study, a simulation procedure was applied to measure the fixed covariate of right censored data using a parametric survival model. The scale and shape parameters were modified to differentiate the analysis of the parametric regression survival model. Statistically, the biases, mean biases and the coverage probability were used in this analysis. Different sample sizes of 50, 100, 150 and 200 were employed to distinguish the impact of the parametric regression model on right censored data. The R statistical software was used to develop the simulation code for right censored data. The final right censored simulation model was then compared with right censored lung cancer data from Malaysia. It was found that different values of the shape and scale parameters under different sample sizes help to improve the simulation strategy for right censored data, and that the Weibull regression survival model provides a suitable fit for the simulated survival data of lung cancer patients in Malaysia.
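The simulation design described above can be sketched as follows. This is a hedged illustration in Python rather than the authors' R code: it draws right-censored Weibull data, fits the shape and scale by censored maximum likelihood (profiling out the scale), and accumulates biases over replications; the censoring mechanism and all parameter values are our assumptions.

```python
import numpy as np

def weibull_mle_censored(t, d):
    """Weibull MLE under right censoring (d=1 event, d=0 censored).
    For fixed shape k the scale satisfies lam^k = sum(t^k)/sum(d),
    so only a 1-D search over the shape is needed."""
    def profile_loglik(k):
        lam = (np.sum(t ** k) / np.sum(d)) ** (1.0 / k)
        z = t / lam
        return np.sum(d * (np.log(k / lam) + (k - 1) * np.log(z))) - np.sum(z ** k)
    lo, hi = np.log(0.2), np.log(10.0)       # ternary search on log-shape
    for _ in range(60):
        m1, m2 = lo + (hi - lo) / 3, hi - (hi - lo) / 3
        if profile_loglik(np.exp(m1)) < profile_loglik(np.exp(m2)):
            lo = m1
        else:
            hi = m2
    k = np.exp((lo + hi) / 2)
    lam = (np.sum(t ** k) / np.sum(d)) ** (1.0 / k)
    return k, lam

rng = np.random.default_rng(7)
shape_true, scale_true, n = 1.5, 2.0, 200
biases = []
for _ in range(100):
    event = scale_true * rng.weibull(shape_true, n)   # latent event times
    cens = rng.exponential(4.0, n)                    # independent censoring
    t = np.minimum(event, cens)
    d = (event <= cens).astype(float)                 # 1 = event observed
    k_hat, lam_hat = weibull_mle_censored(t, d)
    biases.append([k_hat - shape_true, lam_hat - scale_true])
mean_bias = np.mean(biases, axis=0)                   # near zero for large n
```

Coverage probabilities would additionally require standard errors (e.g. from the observed information), which this sketch omits.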
Bromaghin, Jeffrey F.; McDonald, Trent L.; Amstrup, Steven C.
2013-01-01
Mark-recapture models are extensively used in quantitative population ecology, providing estimates of population vital rates, such as survival, that are difficult to obtain using other methods. Vital rates are commonly modeled as functions of explanatory covariates, adding considerable flexibility to mark-recapture models, but also increasing the subjectivity and complexity of the modeling process. Consequently, model selection and the evaluation of covariate structure remain critical aspects of mark-recapture modeling. The difficulties involved in model selection are compounded in Cormack-Jolly-Seber models because they are composed of separate sub-models for survival and recapture probabilities, which are conceptualized independently even though their parameters are not statistically independent. The construction of models as combinations of sub-models, together with multiple potential covariates, can lead to a large model set. Although desirable, estimation of the parameters of all models may not be feasible. Strategies to search a model space and base inference on a subset of all models exist and enjoy widespread use. However, even though the methods used to search a model space can be expected to influence parameter estimation, the assessment of covariate importance, and therefore the ecological interpretation of the modeling results, the performance of these strategies has received limited investigation. We present a new strategy for searching the space of a candidate set of Cormack-Jolly-Seber models and explore its performance relative to existing strategies using computer simulation. The new strategy provides an improved assessment of the importance of covariates and covariate combinations used to model survival and recapture probabilities, while requiring only a modest increase in the number of models on which inference is based in comparison to existing techniques.
A Semi-parametric Multivariate Gap-filling Model for Eddy Covariance Latent Heat Flux
Li, M.; Chen, Y.
2010-12-01
Quantitative descriptions of latent heat fluxes are important to study the water and energy exchanges between terrestrial ecosystems and the atmosphere. The eddy covariance approach has been recognized as the most reliable technique for measuring surface fluxes over time scales ranging from hours to years. However, unfavorable micrometeorological conditions, instrument failures, and applicable measurement limitations may cause inevitable flux gaps in time series data. Development and application of suitable gap-filling techniques are crucial to estimate long term fluxes. In this study, a semi-parametric multivariate gap-filling model was developed to fill latent heat flux gaps for eddy covariance measurements. Our approach combines the advantages of a multivariate statistical analysis (principal component analysis, PCA) and a nonlinear interpolation technique (K-nearest-neighbors, KNN). The PCA method was first used to resolve the multicollinearity relationships among various hydrometeorological factors, such as radiation, soil moisture deficit, LAI, and wind speed. The KNN method was then applied as a nonlinear interpolation tool to estimate the flux gaps as the weighted sum of latent heat fluxes at the K nearest distances in the PCs’ domain. Two years, 2008 and 2009, of eddy covariance and hydrometeorological data from a subtropical mixed evergreen forest (the Lien-Hua-Chih Site) were collected to calibrate and validate the proposed approach with artificial gaps after standard QC/QA procedures. The optimal K values and weighting factors were determined by the maximum likelihood test. The gap-filled latent heat fluxes show that the developed model successfully preserves energy balances at daily, monthly, and yearly time scales. Annual amounts of evapotranspiration from this study forest were 747 mm and 708 mm for 2008 and 2009, respectively. Nocturnal evapotranspiration was estimated with filled gaps and the results are comparable with other studies
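A minimal sketch of the PCA+KNN gap-filling idea (our own illustration with synthetic data; the site data, the maximum-likelihood choice of K and weights, and the QC/QA steps of the study are not reproduced):

```python
import numpy as np

def pca_knn_fill(drivers, flux, gap_idx, k=5, n_pc=2):
    """Fill flux gaps by inverse-distance-weighted KNN interpolation in the
    space of the leading principal components of the standardized drivers."""
    Z = (drivers - drivers.mean(0)) / drivers.std(0)   # standardize drivers
    _, _, Vt = np.linalg.svd(Z, full_matrices=False)   # PCA via SVD
    pcs = Z @ Vt[:n_pc].T                              # project onto leading PCs
    filled = flux.copy()
    obs = np.setdiff1d(np.arange(len(flux)), gap_idx)  # time steps with data
    for i in gap_idx:
        dist = np.linalg.norm(pcs[obs] - pcs[i], axis=1)
        order = np.argsort(dist)[:k]                   # K nearest neighbours
        w = 1.0 / (dist[order] + 1e-9)                 # inverse-distance weights
        filled[i] = np.sum(w * flux[obs[order]]) / w.sum()
    return filled

rng = np.random.default_rng(3)
drivers = rng.normal(size=(500, 4))   # stand-ins for radiation, SMD, LAI, wind
flux = drivers @ np.array([2.0, -1.0, 0.5, 0.0]) + rng.normal(0, 0.1, 500)
gap_idx = np.arange(100, 120)         # artificial gap for validation
filled = pca_knn_fill(drivers, flux, gap_idx)
```

Because each gap is filled with a weighted average of observed fluxes, the filled values always stay within the observed range.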
Directory of Open Access Journals (Sweden)
Berge Léonie
2016-01-01
Full Text Available As the need for precise handling of nuclear data covariances grows ever stronger, no information about covariances of prompt fission neutron spectra (PFNS) is available in the evaluated library JEFF-3.2, although it is present in the ENDF/B-VII.1 and JENDL-4.0 libraries for the main fissile isotopes. The aim of this work is to provide an estimation of covariance matrices related to PFNS, in the frame of some commonly used models for the evaluated files, such as the Maxwellian spectrum, the Watt spectrum, or the Madland-Nix spectrum. The evaluation of PFNS through these models involves an adjustment of model parameters to available experimental data, and the calculation of the spectrum variance-covariance matrix arising from experimental uncertainties. We present the results for thermal neutron induced fission of 235U. The systematic experimental uncertainties are propagated via the marginalization technique available in the CONRAD code. They have a great influence on the final covariance matrix, and therefore on the width of the spectrum uncertainty band. In addition to this covariance estimation work, we have also investigated the importance of the fission spectrum model choice for a reactor calculation. A study of the vessel fluence depending on the PFNS model is presented. This is done through the propagation of neutrons emitted from a fission source in a simplified PWR using the TRIPOLI-4® code. This last study includes thermal fission spectra from the FIFRELIN Monte-Carlo code dedicated to the simulation of prompt particle emission during fission.
Berge, Léonie; Litaize, Olivier; Serot, Olivier; Archier, Pascal; De Saint Jean, Cyrille; Pénéliau, Yannick; Regnier, David
2016-02-01
As the need for precise handling of nuclear data covariances grows ever stronger, no information about covariances of prompt fission neutron spectra (PFNS) is available in the evaluated library JEFF-3.2, although it is present in the ENDF/B-VII.1 and JENDL-4.0 libraries for the main fissile isotopes. The aim of this work is to provide an estimation of covariance matrices related to PFNS, in the frame of some commonly used models for the evaluated files, such as the Maxwellian spectrum, the Watt spectrum, or the Madland-Nix spectrum. The evaluation of PFNS through these models involves an adjustment of model parameters to available experimental data, and the calculation of the spectrum variance-covariance matrix arising from experimental uncertainties. We present the results for thermal neutron induced fission of 235U. The systematic experimental uncertainties are propagated via the marginalization technique available in the CONRAD code. They have a great influence on the final covariance matrix, and therefore on the width of the spectrum uncertainty band. In addition to this covariance estimation work, we have also investigated the importance of the fission spectrum model choice for a reactor calculation. A study of the vessel fluence depending on the PFNS model is presented. This is done through the propagation of neutrons emitted from a fission source in a simplified PWR using the TRIPOLI-4® code. This last study includes thermal fission spectra from the FIFRELIN Monte-Carlo code dedicated to the simulation of prompt particle emission during fission.
Performance of growth mixture models in the presence of time-varying covariates.
Diallo, Thierno M O; Morin, Alexandre J S; Lu, HuiZhong
2016-10-31
Growth mixture modeling is often used to identify unobserved heterogeneity in populations. Despite the usefulness of growth mixture modeling in practice, little is known about the performance of this data analysis technique in the presence of time-varying covariates. In the present simulation study, we examined the impacts of five design factors: the proportion of the total variance of the outcome explained by the time-varying covariates, the number of time points, the error structure, the sample size, and the mixing ratio. More precisely, we examined the impact of these factors on the accuracy of parameter and standard error estimates, as well as on the class enumeration accuracy. Our results showed that the consistent Akaike information criterion (CAIC), the sample-size-adjusted CAIC (SCAIC), the Bayesian information criterion (BIC), and the integrated completed likelihood criterion (ICL-BIC) proved to be highly reliable indicators of the true number of latent classes in the data, across design conditions, and that the sample-size-adjusted BIC (SBIC) also proved quite accurate, especially in larger samples. In contrast, the Akaike information criterion (AIC), the entropy, the normalized entropy criterion (NEC), and the classification likelihood criterion (CLC) proved to be unreliable indicators of the true number of latent classes in the data. Our results also showed that substantial biases in the parameter and standard error estimates tended to be associated with growth mixture models that included only four time points.
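The information criteria compared in the study can be written compactly. The snippet below gives one common set of formulas as a sketch (the exact sample-size adjustment the study uses for SCAIC is not reproduced here; SBIC uses the usual (n+2)/24 adjustment, and ICL-BIC penalizes BIC by twice the classification entropy):

```python
import numpy as np

def info_criteria(loglik, p, n, class_entropy=0.0):
    """Class-enumeration criteria for mixture models; class_entropy is
    E = -sum_ik p_ik ln p_ik from the posterior class probabilities."""
    aic = -2 * loglik + 2 * p
    bic = -2 * loglik + p * np.log(n)
    caic = -2 * loglik + p * (np.log(n) + 1)
    sbic = -2 * loglik + p * np.log((n + 2) / 24)   # sample-size-adjusted BIC
    icl_bic = bic + 2 * class_entropy               # entropy-penalized BIC
    return {'AIC': aic, 'BIC': bic, 'CAIC': caic, 'SBIC': sbic, 'ICL-BIC': icl_bic}

crit = info_criteria(loglik=-1000.0, p=10, n=500)
```

For a fixed log-likelihood the per-parameter penalties order as CAIC > BIC > SBIC > AIC (for n large enough), which is why CAIC and BIC are the most conservative class enumerators.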
A covariant model for the gamma N -> N(1535) transition at high momentum transfer
Ramalho, G
2011-01-01
A relativistic constituent quark model is applied to the gamma N -> N(1535) transition. The N(1535) wave function is determined by extending the covariant spectator quark model, previously developed for the nucleon, to the S11 resonance. The model allows us to calculate the valence quark contributions to the gamma N -> N(1535) transition form factors. Because of the nucleon and N(1535) structure the model is valid only for Q^2> 2.3 GeV^2. The results are compared with the experimental data for the electromagnetic form factors F1* and F2* and the helicity amplitudes A_1/2 and S_1/2, at high Q^2.
A covariant model for the gamma N -> N(1535) transition at high momentum transfer
Energy Technology Data Exchange (ETDEWEB)
G. Ramalho, M.T. Pena
2011-08-01
A relativistic constituent quark model is applied to the gamma N -> N(1535) transition. The N(1535) wave function is determined by extending the covariant spectator quark model, previously developed for the nucleon, to the S11 resonance. The model allows us to calculate the valence quark contributions to the gamma N -> N(1535) transition form factors. Because of the nucleon and N(1535) structure the model is valid only for Q^2> 2.3 GeV^2. The results are compared with the experimental data for the electromagnetic form factors F1* and F2* and the helicity amplitudes A_1/2 and S_1/2, at high Q^2.
Full-Scale Approximations of Spatio-Temporal Covariance Models for Large Datasets
Zhang, Bohai
2014-01-01
Various continuously-indexed spatio-temporal process models have been constructed to characterize spatio-temporal dependence structures, but the computational complexity for model fitting and predictions grows in a cubic order with the size of dataset and application of such models is not feasible for large datasets. This article extends the full-scale approximation (FSA) approach by Sang and Huang (2012) to the spatio-temporal context to reduce computational complexity. A reversible jump Markov chain Monte Carlo (RJMCMC) algorithm is proposed to select knots automatically from a discrete set of spatio-temporal points. Our approach is applicable to nonseparable and nonstationary spatio-temporal covariance models. We illustrate the effectiveness of our method through simulation experiments and application to an ozone measurement dataset.
Senocak, I.; Ackerman, A. S.; Kirkpatrick, M. P.; Stevens, D. E.; Mansour, N. N.
2004-01-01
Large-eddy simulation (LES) is a widely used technique in atmospheric modeling research. In LES, large, unsteady, three-dimensional structures are resolved and small structures that are not resolved on the computational grid are modeled. A filtering operation is applied to distinguish between resolved and unresolved scales. We present two near-surface models that have found use in atmospheric modeling. We also suggest a simpler eddy viscosity model that adopts Prandtl's mixing length model (Prandtl 1925) in the vicinity of the surface and blends with the dynamic Smagorinsky model (Germano et al., 1991) away from the surface. We evaluate the performance of these surface models by simulating a neutrally stratified atmospheric boundary layer.
Deformed Hamilton-Jacobi Method in Covariant Quantum Gravity Effective Models
Benrong, Mu; Yang, Haitang
2014-01-01
We first briefly revisit the original Hamilton-Jacobi method and show that the Hamilton-Jacobi equation for the action $I$ of tunnelings of a fermionic particle from a charged black hole can be written in the same form as that of a scalar particle. For the low energy quantum gravity effective models which respect covariance of the curved spacetime, we derive the deformed model-independent KG/Dirac and Hamilton-Jacobi equations using the methods of effective field theory. We then find that, to all orders of the effective theories, the deformed Hamilton-Jacobi equations can be obtained from the original ones by simply replacing the mass of emitted particles $m$ with a parameter $m_{eff}$ that includes all the quantum gravity corrections. Therefore, in this scenario, there will be no corrections to the Hawking temperature of a black hole from the quantum gravity effects if its original Hawking temperature is independent of the mass of emitted particles. As a consequence, our results show that breaking covariance...
DEFF Research Database (Denmark)
Holst, René; Jørgensen, Bent
2015-01-01
The paper proposes a versatile class of multiplicative generalized linear longitudinal mixed models (GLLMM) with additive dispersion components, based on explicit modelling of the covariance structure. The class incorporates a longitudinal structure into the random effects models and retains...... a marginal as well as a conditional interpretation. The estimation procedure is based on a computationally efficient quasi-score method for the regression parameters combined with a REML-like bias-corrected Pearson estimating function for the dispersion and correlation parameters. This avoids...... the multidimensional integral of the conventional GLMM likelihood and allows an extension of the robust empirical sandwich estimator for use with both association and regression parameters. The method is applied to a set of otolith data, used for age determination of fish....
Numerical model for macroscopic quantum superpositions based on phase-covariant quantum cloning
Buraczewski, Adam
2011-01-01
We present a numerical model of macroscopic quantum superpositions generated by universally covariant optimal quantum cloning. It requires fast computation of the Gaussian hypergeometric function for moderate values of its parameters and argument as well as evaluation of infinite sums involving this function. We developed a method of dynamical estimation of cutoff for these sums. We worked out algorithms performing efficient summation of values of orders ranging from $10^{-100}$ to $10^{100}$ which neither lose precision nor accumulate errors, but provide the summation with acceleration. Our model is well adapted to experimental conditions. It optimizes computation by parallelization and choice of the most efficient algorithm. The methods presented here can be adjusted for analysis of similar experimental schemes. Including decoherence and realistic detection greatly improved the reliability and usability of our model for scientific research.
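The summation of terms spanning roughly 10^-100 to 10^100 without overflow or precision loss can be illustrated with log-space accumulation (a standard log-sum-exp sketch, not the authors' acceleration scheme):

```python
import numpy as np

def log_sum_terms(log_terms):
    """Sum positive terms given by their logarithms without overflow or
    underflow, by factoring out the largest term (log-sum-exp)."""
    m = np.max(log_terms)
    return m + np.log(np.sum(np.exp(log_terms - m)))

# terms of size 1e-100, 1, and 1e100; their sum is dominated by 1e100
logs = np.array([-100.0, 0.0, 100.0]) * np.log(10.0)
total_log10 = log_sum_terms(logs) / np.log(10.0)   # close to 100
```

Subtracting the maximum before exponentiating keeps every intermediate value in the representable range, so no term is lost to overflow and small terms degrade gracefully to zero rather than corrupting the sum.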
B --> D$**$ semileptonic decay in covariant quark models à la Bakamjian-Thomas
Morénas, V; Oliver, L; Pène, O; Raynal, J C
1996-01-01
Once the dynamics is chosen in one frame, for example the rest frame, the Bakamjian-Thomas method allows one to define relativistic quark models in any frame. These models have been shown to provide, in the heavy quark limit, fully covariant current form factors as matrix elements of the quark current operator. They also verify the Isgur-Wise scaling and give a slope parameter \\rho^2>3/4 for all the possible choices of the dynamics. In this paper we study the L=1 excited states and derive the general formula, valid for any dynamics, for the scaling invariant form factors \\tau_{1/2}^{(n)}(w) and \\tau_{3/2}^{(n)}(w). We also check the Bjorken-Isgur-Wise sum rule already demonstrated elsewhere in this class of models.
Space-Time Modelling of Groundwater Level Using Spartan Covariance Function
Varouchakis, Emmanouil; Hristopulos, Dionissios
2014-05-01
Geostatistical models often need to handle variables that change in space and in time, such as the groundwater level of aquifers. A major advantage of space-time observations is that a higher number of data supports parameter estimation and prediction. In a statistical context, space-time data can be considered as realizations of random fields that are spatially extended and evolve in time. The combination of spatial and temporal measurements in sparsely monitored watersheds can provide very useful information by incorporating spatiotemporal correlations. Spatiotemporal interpolation is usually performed by applying the standard Kriging algorithms extended in a space-time framework. Spatiotemporal covariance functions for groundwater level modelling, however, have not been widely developed. We present a new non-separable theoretical spatiotemporal variogram function which is based on the Spartan covariance family and evaluate its performance in spatiotemporal Kriging (STRK) interpolation. The original spatial expression (Hristopulos and Elogne 2007) that has been successfully used for the spatial interpolation of groundwater level (Varouchakis and Hristopulos 2013) is modified by defining the normalized space-time distance h = √(h_r² + α h_τ²), with h_r = r/ξ_r and h_τ = τ/ξ_τ, where r is the spatial lag vector, τ the temporal lag, ξ_r the correlation length in position space and ξ_τ the correlation time, h the normalized space-time lag vector, h = |h| its Euclidean norm, and α the coefficient that determines the relative weight of the time lag. The space-time experimental semivariogram is determined from the biannual (wet and dry period) time series of groundwater level residuals (obtained from the original series after trend removal) between the years 1981 and 2003 at ten sampling stations located in the Mires hydrological basin on the island of Crete (Greece). After the hydrological year 2002-2003 there is a significant
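The normalized space-time distance entering the variogram can be sketched directly (function and argument names are ours):

```python
import numpy as np

def spacetime_distance(r, tau, xi_r, xi_tau, alpha):
    """Normalized space-time lag h = sqrt(h_r^2 + alpha*h_tau^2), with
    h_r = |r|/xi_r (spatial lag over correlation length) and
    h_tau = tau/xi_tau (temporal lag over correlation time)."""
    h_r = np.linalg.norm(r) / xi_r
    h_tau = tau / xi_tau
    return np.sqrt(h_r ** 2 + alpha * h_tau ** 2)

# spatial lag (3,4) -> |r| = 5, so h_r = 1; temporal lag 2 -> h_tau = 1
h = spacetime_distance(r=[3.0, 4.0], tau=2.0, xi_r=5.0, xi_tau=2.0, alpha=1.0)
```

The coefficient alpha controls how strongly the temporal lag contributes to the combined distance relative to the spatial lag.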
DEFF Research Database (Denmark)
Mahdi Shariati, Mohammad; Su, Guosheng; Madsen, Per
2007-01-01
means of each environment. It has been shown that this method results in poor inferences and that a more satisfactory alternative is to infer environmental effects jointly with the other parameters of the model. Such a reaction norm model with unknown covariates and heterogeneous residual variances...... across herds was fitted to milk, protein, and fat yield of first-lactation Danish Holstein cows to investigate the presence of GxE. Data included 188,502 first test-day records from 299 herds and 3,775 herd-years in a time period ranging from 1991 to 2003. Variance components and breeding values were...... estimated with a Bayesian approach implemented using Markov chain Monte Carlo. The posterior distribution of the variance of genetic slopes was markedly shifted away from zero for all traits under study, supporting the presence of GxE. The ratio of the genetic slope variance to the genetic level variance...
Emergent 4D gravity on covariant quantum spaces in the IKKT model
Steinacker, Harold C
2016-01-01
We study perturbations of the 4-dimensional fuzzy sphere as a background in the IKKT or IIB matrix model. The linearized 4-dimensional Einstein equations are shown to arise from the classical matrix model action, without adding an Einstein-Hilbert term. The excitation modes with lowest spin are identified as gauge fields, metric and connection fields. In addition to the usual gravitational waves, there are also physical "torsion" wave excitations. The quantum structure of the geometry encodes a twisted bundle of self-dual 2-forms, which leads to a covariant 4-dimensional noncommutative geometry. The formalism of string states is used to compute one-loop corrections to the effective action. This leads to a mass term for the gravitons which is significant for $S^4$, but argued to be small in the Minkowski case.
Directory of Open Access Journals (Sweden)
Y. I. Troitskaya
2006-01-01
Full Text Available The objective of the present paper is to develop a theoretical model describing the evolution of a turbulent wake behind a towed sphere in a stably stratified fluid at large Froude and Reynolds numbers. The wake flow is considered as a quasi two-dimensional (2-D) turbulent jet flow whose dynamics is governed by the momentum transfer from the mean flow to a quasi-2-D sinuous mode growing due to hydrodynamic instability. The model employs a quasi-linear approximation to describe this momentum transfer. The model scaling coefficients are defined with the use of available experimental data, and the performance of the model is verified by comparison with the results of a direct numerical simulation of a 2-D turbulent jet flow. The model prediction for the temporal development of the wake axis mean velocity is found to be in good agreement with the experimental data obtained by Spedding (1997).
Kwun, Jihye; Song, Hyo-Jong; Park, Jong-Im
2013-04-01
Background error covariance matrix is very important for variational data assimilation system, determining how the information from observed variables is spread to unobserved variables and spatial points. The full representation of the matrix is impossible because of the huge size so the matrix is constructed implicitly by means of a variable transformation. It is assumed that the forecast errors in the control variables chosen are statistically independent. We used the cubed-sphere geometry based on the spectral element method which is better for parallel application. In cubed-sphere grids, the grid points are located at Gauss-Legendre-Lobatto points on each local element of 6 faces on the sphere. The two stages of the transformation were used in this study. The first is the variable transformation from model to a set of control variables whose errors are assumed to be uncorrelated, which was developed on the cubed sphere-using Galerkin method. Winds are decomposed into rotational part and divergent part by introducing stream function and velocity potential as control variables. The dynamical constraint for balance between mass and wind were made by applying linear balance operator. The second is spectral transformation which is to remove the remaining spatial correlation. The bases for the spectral transform were generated for the cubed-sphere grid. 6-hr difference fields of shallow water equation (SWE) model run initialized by variational data assimilation system were used to obtain forecast error statistics. In the horizontal background error covariance modeling, the regression analysis of the control variables was performed to define the unbalanced variables as the difference between full and correlated part. Regression coefficient was used to remove the remaining correlations between variables.
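The regression step that defines the unbalanced control variables can be illustrated on synthetic fields (a sketch under our own toy assumptions; the actual operator acts on cubed-sphere fields and multivariate balance relations):

```python
import numpy as np

def unbalanced_part(full, predictor):
    """Split a control variable into a balanced part (linearly explained by
    the predictor via a regression coefficient) and an unbalanced residual:
    unbalanced = full - K * predictor."""
    K = np.cov(full, predictor)[0, 1] / np.var(predictor, ddof=1)
    return full - K * predictor, K

rng = np.random.default_rng(5)
psi = rng.normal(size=10000)                    # e.g. stream-function increments
temp = 0.7 * psi + rng.normal(0.0, 0.5, 10000)  # mass field: balanced + residual
temp_u, K = unbalanced_part(temp, psi)          # K recovers roughly 0.7
```

By construction the unbalanced residual is (sample-)uncorrelated with the predictor, which is exactly why forecast errors in the resulting control variables can be treated as independent.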
Directory of Open Access Journals (Sweden)
N. Stashchuk
2005-01-01
Full Text Available We present the results of numerical experiments performed with the use of a fully non-linear non-hydrostatic numerical model to study the baroclinic response of a long narrow tank filled with stratified water to an initially tilted interface. Upon release, the system starts to oscillate with an eigen frequency corresponding to basin-scale baroclinic gravitational seiches. Field observations suggest that the disintegration of basin-scale internal waves into packets of solitary waves, shear instabilities, billows and spots of mixed water are important mechanisms for the transfer of energy within stratified lakes. Laboratory experiments performed by D. A. Horn, J. Imberger and G. N. Ivey (JFM, 2001 reproduced several regimes, which include damped linear waves and solitary waves. The generation of billows and shear instabilities induced by the basin-scale wave was, however, not sufficiently studied. The developed numerical model computes a variety of flows, which were not observed with the experimental set-up. In particular, the model results showed that under conditions of low dissipation, the regimes of billows and supercritical flows may transform into a solitary wave regime. The obtained results can help in the interpretation of numerous observations of mixing processes in real lakes.
DEFF Research Database (Denmark)
Kinnebrock, Silja; Podolskij, Mark
This paper introduces a new estimator to measure the ex-post covariation between high-frequency financial time series under market microstructure noise. We provide an asymptotic limit theory (including feasible central limit theorems) for standard methods such as regression, correlation analysis...... and covariance, for which we obtain the optimal rate of convergence. We demonstrate some positive semidefinite estimators of the covariation and construct a positive semidefinite estimator of the conditional covariance matrix in the central limit theorem. Furthermore, we indicate how the assumptions on the noise...
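The positive-semidefiniteness issue can be illustrated with a toy realized-covariance computation: bias corrections for microstructure noise can push an estimate outside the PSD cone, and eigenvalue clipping is one simple way to project it back. This sketch is not the paper's estimator, and the "noise correction" below is purely illustrative.

```python
import numpy as np

def realized_cov(returns):
    """Realized covariance: sum of outer products of intraday return vectors."""
    return returns.T @ returns

def nearest_psd(S):
    """Project a symmetric matrix onto the PSD cone by clipping negative
    eigenvalues to zero in its eigendecomposition."""
    w, V = np.linalg.eigh((S + S.T) / 2)
    return (V * np.clip(w, 0.0, None)) @ V.T

rng = np.random.default_rng(2)
r = rng.normal(0.0, 0.01, size=(390, 3))   # 390 one-minute returns, 3 assets
RC = realized_cov(r)
# hypothetical additive bias correction that may leave the PSD cone
corrected = RC - 0.04 * np.eye(3)
RC_psd = nearest_psd(corrected)
```

The raw sum of outer products is always PSD; it is the noise-robust corrections that can break the property, motivating estimators that are PSD by construction.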
DEFF Research Database (Denmark)
Kinnebrock, Silja; Podolskij, Mark
This paper introduces a new estimator to measure the ex-post covariation between high-frequency financial time series under market microstructure noise. We provide an asymptotic limit theory (including feasible central limit theorems) for standard methods such as regression, correlation analysis...... and covariance, for which we obtain the optimal rate of convergence. We demonstrate some positive semidefinite estimators of the covariation and construct a positive semidefinite estimator of the conditional covariance matrix in the central limit theorem. Furthermore, we indicate how the assumptions on the noise......
Exact distribution of MLE of covariance matrix in a GMANOVA-MANOVA model
Institute of Scientific and Technical Information of China (English)
BAI; Peng
2005-01-01
For a GMANOVA-MANOVA model with normal error: Y = XB₁Z₁ᵀ + B₂Z₂ᵀ + E, E ~ N_{q×n}(0, Iₙ ⊗ Σ), the present paper is devoted to the study of the distribution of the MLE, Σ̂, of the covariance matrix Σ. The main results are as follows: (1) When rk(Z) - rk(Z₂) ≥ q - rk(X), the exact distribution of Σ̂ is derived, where Z = (Z₁, Z₂) and rk(A) denotes the rank of the matrix A. (2) The exact distribution of |Σ̂| is obtained. (3) It is proved that n·tr{[Σ⁻¹ - Σ⁻¹XM(MᵀXᵀΣ⁻¹XM)⁻¹MᵀXᵀΣ⁻¹]Σ̂} has a χ²((q - rk(X))(n - rk(Z₂))) distribution, where M is the matrix whose columns are the standardized orthogonal eigenvectors corresponding to the nonzero eigenvalues of XᵀΣ⁻¹X.
Ramalho, G
2012-01-01
We study the $\\gamma^\\ast \\Lambda \\to \\Sigma^0$ transition form factors by applying the covariant spectator quark model. Using the parametrization for the baryon core wave functions as well as for the pion cloud dressing obtained in a previous work, we calculate the dependence of the electromagnetic transition form factors on the momentum transfer squared, $Q^2$. The magnetic form factor is dominated by the valence quark contributions. The final result for the transition magnetic moment, a combination of the quark core and pion cloud effects, turns out to be very close to the data. The pion cloud, although small, shifts the result towards the data. Small but nonzero values for the electric form factor are also predicted in the finite $Q^2$ region, as a consequence of the pion cloud dressing.
Quarkonia and heavy-light mesons in a covariant quark model
Directory of Open Access Journals (Sweden)
Leitão Sofia
2016-01-01
Full Text Available Preliminary calculations using the Covariant Spectator Theory (CST employed a scalar linear confining interaction and an additional constant vector potential to compute the mesonic mass spectra. In this work we generalize the confining interaction to include more general structures, in particular a vector and also a pseudoscalar part, as suggested by a recent study [1]. A one-gluon-exchange kernel is also implemented to describe the short-range part of the interaction. We solve the simplest CST approximation to the complete Bethe-Salpeter equation, the one-channel spectator equation, using a numerical technique that eliminates all singularities from the kernel. The parameters of the model are determined through a fit to the experimental pseudoscalar meson spectra, with a good agreement for both quarkonia and heavy-light states.
2015-12-01
AWARD NUMBER: W81XWH-11-1-0545. TITLE: Building a Better Model: A Personalized Breast Cancer Risk Model Incorporating Breast Density to Stratify Risk and Improve Application of Resources. Women with a diagnosis of breast cancer from 2003 to 2012 who were enrolled in a larger study on MD were evaluated. Operative and pathology reports were...
Schubert, Sebastian; Lucarini, Valerio
2016-04-01
The classical approach for studying atmospheric variability is based on defining a background state and studying the linear stability of the small fluctuations around such a state. Weakly non-linear theories can be constructed using higher order expansion terms. While these methods have undoubtedly great value for elucidating the relevant physical processes, they are unable to follow the dynamics of a turbulent atmosphere. We provide a first example of extension of the classical stability analysis to a non-linearly evolving quasi-geostrophic flow. The so-called covariant Lyapunov vectors (CLVs) provide a covariant basis describing the directions of exponential expansion and decay of perturbations to the non-linear trajectory of the flow. We use such a formalism to re-examine the basic barotropic and baroclinic processes of the atmosphere with a quasi-geostrophic beta-plane two-layer model in a periodic channel driven by a forced meridional temperature gradient ΔT. We explore three settings of ΔT, representative of relatively weak turbulence, well-developed turbulence, and intermediate conditions. We construct the Lorenz energy cycle for each CLV describing the energy exchanges with the background state. A positive baroclinic conversion rate is a necessary but not sufficient condition of instability. Barotropic instability is present only for a few very unstable CLVs for large values of ΔT. Slowly growing and decaying hydrodynamic Lyapunov modes closely mirror the properties of the background flow. Following classical necessary conditions for barotropic/baroclinic instability, we find a clear relationship between the properties of the eddy fluxes of a CLV and its instability. CLVs with positive baroclinic conversion seem to form a set of modes for constructing a reduced model of the atmosphere dynamics.
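Computing the full set of CLVs requires a dedicated algorithm (e.g. Ginelli et al.), but the associated Lyapunov exponents can be sketched with the standard QR (Benettin) method; the Hénon map below stands in for the quasi-geostrophic model as a toy example:

```python
import numpy as np

def lyapunov_qr(f, jac, x0, n_steps=5000, n_skip=100):
    """Lyapunov exponents of a map by propagating an orthonormal tangent
    frame and re-orthonormalizing with QR at every step (Benettin method)."""
    x = np.array(x0, dtype=float)
    Q = np.eye(len(x))
    sums = np.zeros(len(x))
    for i in range(n_steps + n_skip):
        Q = jac(x) @ Q          # evolve the tangent frame
        x = f(x)                # evolve the trajectory
        Q, R = np.linalg.qr(Q)  # re-orthonormalize; growth sits in diag(R)
        if i >= n_skip:         # discard transient before averaging
            sums += np.log(np.abs(np.diag(R)))
    return sums / n_steps

# Henon map as a toy chaotic system (exponents approx. 0.42 and -1.62)
a, b = 1.4, 0.3
f = lambda x: np.array([1.0 - a * x[0] ** 2 + x[1], b * x[0]])
jac = lambda x: np.array([[-2.0 * a * x[0], 1.0], [b, 0.0]])
lams = lyapunov_qr(f, jac, [0.1, 0.1])
```

As a consistency check, the exponents must sum to the average log-determinant of the Jacobian, which for the Hénon map is exactly ln b at every step.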
Do gamblers eat more salt? Testing a latent trait model of covariance in consumption.
Goodwin, Belinda C; Browne, Matthew; Rockloff, Matthew; Donaldson, Phillip
2015-09-01
A diverse class of stimuli, including certain foods, substances, media, and economic behaviours, may be described as 'reward-oriented' in that they provide immediate reinforcement with little initial investment. Neurophysiological and personality concepts, including dopaminergic dysfunction, reward sensitivity and rash impulsivity, each predict the existence of a latent behavioural trait that leads to increased consumption of all stimuli in this class. Whilst bivariate relationships (co-morbidities) are often reported in the literature, to our knowledge, a multivariate investigation of this possible trait has not been done. We surveyed 1,194 participants (550 male) on their typical weekly consumption of 11 types of reward-oriented stimuli, including fast food, salt, caffeine, television, gambling products, and illicit drugs. Confirmatory factor analysis was used to compare models in a 3×3 structure, based on the definition of a single latent factor (none, fixed loadings, or estimated loadings), and assumed residual covariance structure (none, a-priori / literature based, or post-hoc / data-driven). The inclusion of a single latent behavioural 'consumption' factor significantly improved model fit in all cases. Also confirming theoretical predictions, estimated factor loadings on reward-oriented indicators were uniformly positive, regardless of assumptions regarding residual covariances. Additionally, the latent trait was found to be negatively correlated with the non-reward-oriented indicators of fruit and vegetable consumption. The findings support the notion of a single behavioural trait leading to increased consumption of reward-oriented stimuli across multiple modalities. We discuss implications regarding the concentration of negative lifestyle-related health behaviours.
A consistent Hamiltonian treatment of the Thirring-Wess and Schwinger model in the covariant gauge
Martinovič, L'ubomír
2014-06-01
We present a unified Hamiltonian treatment of the massless Schwinger model in the Landau gauge and of its non-gauge counterpart, the Thirring-Wess (TW) model. The operator solution of the Dirac equation has the same structure in both models and identifies free fields as the true dynamical degrees of freedom. The coupled boson field equations (Maxwell and Proca, respectively) can also be solved exactly. The Hamiltonian in the Fock representation is derived for the TW model and its diagonalization via a Bogoliubov transformation is suggested. The axial anomaly is derived in both models directly from the operator solution using a Hermitian version of the point-splitting regularization. A subtlety of the residual gauge freedom in the covariant gauge is shown to modify the usual definition of the "gauge-invariant" currents. The consequence is that the axial anomaly and the boson mass generation are restricted to the zero-mode sector only. Finally, we discuss quantization of the unphysical gauge-field components in terms of ghost modes in an indefinite-metric space and sketch the next steps within the finite-volume treatment necessary to fully reveal the physical content of the model in our Hamiltonian formulation.
Keppenne, Christian L.; Rienecker, Michele; Kovach, Robin M.; Vernieres, Guillaume
2014-01-01
An attractive property of ensemble data assimilation methods is that they provide flow dependent background error covariance estimates which can be used to update fields of observed variables as well as fields of unobserved model variables. Two methods to estimate background error covariances are introduced which share the above property with ensemble data assimilation methods but do not involve the integration of multiple model trajectories. Instead, all the necessary covariance information is obtained from a single model integration. The Space Adaptive Forecast error Estimation (SAFE) algorithm estimates error covariances from the spatial distribution of model variables within a single state vector. The Flow Adaptive error Statistics from a Time series (FAST) method constructs an ensemble sampled from a moving window along a model trajectory. SAFE and FAST are applied to the assimilation of Argo temperature profiles into version 4.1 of the Modular Ocean Model (MOM4.1) coupled to the GEOS-5 atmospheric model and to the CICE sea ice model. The results are validated against unassimilated Argo salinity data. They show that SAFE and FAST are competitive with the ensemble optimal interpolation (EnOI) used by the Global Modeling and Assimilation Office (GMAO) to produce its ocean analysis. Because of their reduced cost, SAFE and FAST hold promise for high-resolution data assimilation applications.
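The FAST idea, sampling an ensemble from a moving window along a single model trajectory, can be sketched as follows (a toy illustration with an invented random-walk "trajectory", not the GMAO implementation):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "model trajectory": T time steps of an n-dimensional state vector.
T, n = 500, 4
traj = np.cumsum(rng.normal(size=(T, n)), axis=0)  # random-walk stand-in

def fast_covariance(trajectory, t, window):
    """Flow-adaptive background error covariance at time t: the sample
    covariance of the states inside a moving window ending at t."""
    ens = trajectory[t - window:t]          # ensemble sampled along the flow
    anomalies = ens - ens.mean(axis=0)      # remove the window mean
    return anomalies.T @ anomalies / (len(ens) - 1)

B = fast_covariance(traj, t=400, window=50)
print(B.shape)  # (4, 4); symmetric by construction
```

A single integration thus yields a time-varying covariance estimate at the cost of assuming the window statistics represent the current background error.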
Bizzotto, Roberto; Zamuner, Stefano; Mezzalana, Enrica; De Nicolao, Giuseppe; Gomeni, Roberto; Hooker, Andrew C; Karlsson, Mats O
2011-09-01
Mixed-effect Markov chain models have been recently proposed to characterize the time course of transition probabilities between sleep stages in insomniac patients. The most recent one, based on multinomial logistic functions, was used as a base to develop a final model combining the strengths of the existing ones. This final model was validated on placebo data applying also new diagnostic methods and then used for the inclusion of potential age, gender, and BMI effects. Internal validation was performed through simplified posterior predictive check (sPPC), visual predictive check (VPC) for categorical data, and new visual methods based on stochastic simulation and estimation and called visual estimation check (VEC). External validation mainly relied on the evaluation of the objective function value and sPPC. Covariate effects were identified through stepwise covariate modeling within NONMEM VI. New model features were introduced in the model, providing significant sPPC improvements. Outcomes from VPC, VEC, and external validation were generally very good. Age, gender, and BMI were found to be statistically significant covariates, but their inclusion did not improve substantially the model's predictive performance. In summary, an improved model for sleep internal architecture has been developed and suitably validated in insomniac patients treated with placebo. Thereafter, covariate effects have been included into the final model.
Computing the transport time scales of a stratified lake on the basis of Tonolli’s model
Directory of Open Access Journals (Sweden)
Marco Pilotti
2014-05-01
This paper deals with a simple model to evaluate the transport time scales in thermally stratified lakes that do not necessarily mix completely on a regular annual basis. The model is based on the formalization of an idea originally proposed in Italian by Tonolli in 1964, who presented a mass balance of the water initially stored within a lake, taking into account the known seasonal evolution of its thermal structure. The numerical solution of this mass balance provides an approximation to the water age distribution for the conceptualised lake, from which an upper bound to the typical time scales widely used in limnology can be obtained. After discussing the original test case considered by Tonolli, we apply the model to Lake Iseo, a deep lake located in the North of Italy, presenting the results obtained on the basis of a 30-year series of data.
A covariant model for the negative parity resonances of the nucleon
Ramalho, G
2015-01-01
We present a model for the $\\gamma^\\ast N \\to N^\\ast$ helicity amplitudes, where $N$ is the nucleon and $N^\\ast$ is a negative parity nucleon excitation, a member of the $SU(6)$-multiplet $[70,1^-]$. The model combines the results from the single quark transition model for the helicity amplitudes with the results of the covariant spectator quark model for the $\\gamma^\\ast N \\to N^\\ast(1535)$ and $\\gamma^\\ast N \\to N^\\ast(1520)$ transitions. With the knowledge of the amplitudes $A_{1/2}$ and $A_{3/2}$ for those transitions we calculate three independent coefficients defined by the single quark transition model and make predictions for the helicity amplitudes associated with the $\\gamma^\\ast N \\to N^\\ast(1650)$, $\\gamma^\\ast N \\to N^\\ast(1700)$, $\\gamma^\\ast N \\to \\Delta(1620)$, and $\\gamma^\\ast N \\to \\Delta(1700)$ transitions. In order to facilitate the comparison with future experimental data at high $Q^2$, we also provide simple parametrizations for the amplitudes, compatible with the expected falloff at high ...
Bayesian analysis of the linear reaction norm model with unknown covariates.
Su, G; Madsen, P; Lund, M S; Sorensen, D; Korsgaard, I R; Jensen, J
2006-07-01
The reaction norm model is becoming a popular approach for the analysis of genotype x environment interactions. In a classical reaction norm model, the expression of a genotype in different environments is described as a linear function (a reaction norm) of an environmental gradient or value. An environmental value is typically defined as the mean performance of all genotypes in the environment, which is usually unknown. One approximation is to estimate the mean phenotypic performance in each environment and then treat these estimates as known covariates in the model. However, a more satisfactory alternative is to infer environmental values simultaneously with the other parameters of the model. This study describes a method and its Bayesian Markov Chain Monte Carlo implementation that makes this possible. Frequentist properties of the proposed method are tested in a simulation study. Estimates of parameters of interest agree well with the true values. Further, inferences about genetic parameters from the proposed method are similar to those derived from a reaction norm model using true environmental values. On the other hand, using phenotypic means as proxies for environmental values results in poor inferences.
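The plug-in approximation criticized here, estimating each environmental value as the mean performance of all genotypes in that environment, can be illustrated with a small simulation (all variance values are invented for illustration):

```python
import numpy as np

rng = np.random.default_rng(1)
n_geno, n_env, n_rep = 30, 20, 5

env_value = rng.normal(0.0, 1.0, n_env)       # true environmental gradient
intercept = rng.normal(0.0, 0.5, n_geno)      # genotype intercepts
slope = 1.0 + rng.normal(0.0, 0.3, n_geno)    # genotype reaction-norm slopes

# Phenotype: linear reaction norm plus residual noise.
y = (intercept[:, None, None]
     + slope[:, None, None] * env_value[None, :, None]
     + rng.normal(0.0, 0.5, (n_geno, n_env, n_rep)))

# Plug-in approximation: estimate each environment's value as the mean
# performance of all genotypes (and replicates) in that environment.
env_hat = y.mean(axis=(0, 2))
corr = np.corrcoef(env_value, env_hat)[0, 1]
print(round(corr, 3))
```

The estimated values track the true gradient closely here, but they still carry estimation error; the Bayesian approach of the paper treats them as unknowns instead of as known covariates.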
Analytic model for the matter power spectrum, its covariance matrix, and baryonic effects
Mohammed, Irshad
2014-01-01
We develop a model for the matter power spectrum as the sum of the quasi-linear Zeldovich approximation and even powers of $k$, i.e., $A_0 - A_2k^2 + A_4k^4 - ...$, compensated at low $k$. The model can predict the true power spectrum to a few percent accuracy up to $k \\sim 0.7\\ h \\rm{Mpc}^{-1}$, over a wide range of redshifts and models, including massive neutrino models. We write the covariance matrix in a simple form, as the sum of a Gaussian part and an $A_0$ variance term, and we find that it reproduces the simulations well. We investigate the super-sample variance effect and show that it induces a relation between the Zeldovich term and $A_0$ that differs from the amplitude change, allowing it to be modeled as an additional parameter that can be determined from the data. The $A_n$ coefficients contain information about cosmology, in particular the amplitude of fluctuations $\\sigma_8$. We explore their information content, showing that $A_0$ contains the bulk of amplitude information, scaling as $\\sigma_8^{3.9}$, which allows ...
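The even-power broadband term of the model can be sketched as follows; the low-k compensation factor and the coefficient values are assumptions for illustration, and the Zeldovich term is omitted:

```python
import numpy as np

def broadband(k, coeffs, k_damp=0.1):
    """Even-power broadband term A0 - A2 k^2 + A4 k^4 - ..., multiplied by
    an ad hoc compensation factor that vanishes at low k (the paper's
    exact compensation form may differ)."""
    poly = sum(((-1) ** i) * a * k ** (2 * i) for i, a in enumerate(coeffs))
    return poly * (1.0 - np.exp(-(k / k_damp) ** 2))

k = np.linspace(1e-3, 0.7, 100)
p = broadband(k, coeffs=[1000.0, 500.0, 100.0])  # made-up A0, A2, A4
print(p[0], p[-1])
```

The alternating signs in the sum implement the $A_0 - A_2k^2 + A_4k^4 - ...$ pattern, while the damping factor enforces the low-$k$ compensation.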
Applying Covariational Reasoning While Modeling Dynamic Events: A Framework and a Study.
Carlson, Marilyn; Jacobs, Sally; Coe, Edward; Larsen, Sean; Hsu, Eric
2002-01-01
Develops covariational reasoning and proposes a framework for describing mental actions when interpreting and representing dynamic function events. Investigates calculus students' ability to reason about covarying quantities in dynamic situations. Suggests that curriculum and instruction should emphasize moving students to a coordinated image of…
Teaching the Verhulst Model: A Teaching Experiment in Covariational Reasoning and Exponential Growth
Castillo-Garsow, Carlos
2010-01-01
Both Thompson and the duo of Confrey and Smith describe how students might be taught to build "ways of thinking" about exponential behavior by coordinating the covariation of two changing quantities, however, these authors build exponential behavior from different meanings of covariation. Confrey and Smith advocate beginning with discrete additive…
Asymptotic Theory for the QMLE in GARCH-X Models with Stationary and Non-Stationary Covariates
DEFF Research Database (Denmark)
Han, Heejoon; Kristensen, Dennis
This paper investigates the asymptotic properties of the Gaussian quasi-maximum-likelihood estimators (QMLEs) of the GARCH model augmented by including an additional explanatory variable - the so-called GARCH-X model. The additional covariate is allowed to exhibit any degree of persistence as ca...
Liu, Junhui
2012-01-01
The current study investigated how between-subject and within-subject variance-covariance structures affected the detection of a finite mixture of unobserved subpopulations and parameter recovery of growth mixture models in the context of linear mixed-effects models. A simulation study was conducted to evaluate the impact of variance-covariance…
A covariate-adjustment regression model approach to noninferiority margin definition.
Nie, Lei; Soon, Guoxing
2010-05-10
To maintain the interpretability of the effect of experimental treatment (EXP) obtained from a noninferiority trial, current statistical approaches often require the constancy assumption. This assumption typically requires that the control treatment effect in the population of the active control trial is the same as its effect presented in the population of the historical trial. To prevent violation of the constancy assumption, clinical trial sponsors have been advised to make the design of the active control trial as close as possible to that of the historical trial. However, these rigorous requirements are rarely fulfilled in practice. The inevitable discrepancies between the historical trial and the active control trial have led to debates on many controversial issues. Without support from a well-developed quantitative method to determine the impact of the discrepancies on the constancy assumption violation, a correct judgment seems difficult. In this paper, we present a covariate-adjustment generalized linear regression model approach to achieve two goals: (1) to quantify the impact of population difference between the historical trial and the active control trial on the degree of constancy assumption violation and (2) to redefine the active control treatment effect in the active control trial population if the quantification suggests an unacceptable violation. Through achieving goal (1), we examine whether or not a population difference leads to an unacceptable violation. Through achieving goal (2), we redefine the noninferiority margin if the violation is unacceptable. This approach allows us to correctly determine the effect of EXP in the noninferiority trial population when the constancy assumption is violated due to the population difference. We illustrate the covariate-adjustment approach through a case study.
National Research Council Canada - National Science Library
Sebnem Elci; Huseyin Burak Ekmekçi
2016-01-01
.... A 3D numerical model is used to investigate the water column hydrodynamics for the duration of measurements and the performance of various turbulence models used in the CFD model are investigated via...
Simulation of longitudinal exposure data with variance-covariance structures based on mixed models.
Song, Peng; Xue, Jianping; Li, Zhilin
2013-03-01
Longitudinal data are important in exposure and risk assessments, especially for pollutants with long half-lives in the human body and where chronic exposures to current levels in the environment raise concerns for human health effects. It is usually difficult and expensive to obtain large longitudinal data sets for human exposure studies. This article reports a new simulation method to generate longitudinal data with flexible numbers of subjects and days. Mixed models are used to describe the variance-covariance structures of input longitudinal data. Based on estimated model parameters, simulation data are generated with statistical characteristics similar to those of the input data. Three criteria are used to determine similarity: the overall mean and standard deviation, the variance component percentages, and the average autocorrelation coefficients. Following the discussion of mixed models, a simulation procedure is presented and numerical results are shown for one human exposure study. Simulations of three sets of exposure data successfully meet the above criteria. In particular, the simulations always retain the correct weights of inter- and intrasubject variances as in the input data. Autocorrelations are also well followed. Compared with other simulation algorithms, this new method stores more information about the overall input distribution so as to satisfy the above multiple criteria for statistical targets. In addition, it generates values from numerous data sources and simulates continuous observed variables better than current data methods. This new method also provides flexible options in both modeling and simulation procedures according to various user requirements.
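The simplest variance-covariance structure such an approach builds on, a one-way random-effects (mixed) model with inter- and intrasubject components, can be sketched as follows (the variance values are invented):

```python
import numpy as np

rng = np.random.default_rng(2)
n_subj, n_day = 50, 10
sigma2_between, sigma2_within = 0.6, 0.4   # made-up variance components

# One-way random-effects model: x_ij = mu + b_i + e_ij.
b = rng.normal(0.0, np.sqrt(sigma2_between), n_subj)   # subject effects
e = rng.normal(0.0, np.sqrt(sigma2_within), (n_subj, n_day))
x = 2.0 + b[:, None] + e

# Check that the simulated data reproduce the target inter/intra split:
# Var(subject means) = sigma2_between + sigma2_within / n_day.
between_hat = x.mean(axis=1).var(ddof=1) - sigma2_within / n_day
within_hat = x.var(axis=1, ddof=1).mean()
print(round(between_hat, 2), round(within_hat, 2))
```

Recovering the inter/intrasubject variance split is exactly the "variance component percentages" criterion the abstract describes; exposure data are often simulated on the log scale, which is omitted here for simplicity.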
A background error covariance model of significant wave height employing Monte Carlo simulation
Institute of Scientific and Technical Information of China (English)
GUO Yanyou; HOU Yijun; ZHANG Chunmei; YANG Jie
2012-01-01
The quality of background error statistics is one of the key components for successful assimilation of observations in a numerical model. The background error covariance (BEC) of ocean waves is generally estimated under an assumption that it is stationary over a period of time and uniform over a domain. However, error statistics are in fact functions of the physical processes governing the meteorological situation and vary with the wave condition. In this paper, we simulated the BEC of the significant wave height (SWH) employing Monte Carlo methods. An interesting result is that the BEC varies consistently with the mean wave direction (MWD). In the model domain, the BEC of the SWH decreases significantly when the MWD changes abruptly. A new BEC model of the SWH based on the correlation between the BEC and MWD was then developed. A case study of regional data assimilation was performed, where the SWH observations of buoy 22001 were used to assess the SWH hindcast. The results show that the new BEC model benefits wave prediction and allows reasonable approximations of anisotropy and inhomogeneous errors.
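A minimal Monte Carlo estimate of a background error covariance, in the spirit described here but on an invented one-dimensional wave-height field, looks like this:

```python
import numpy as np

rng = np.random.default_rng(3)
n_grid, n_member = 20, 200

# Toy ensemble of significant-wave-height fields: a smooth "truth" plus
# spatially correlated perturbations (a stand-in for perturbed model runs).
x = np.linspace(0.0, 1.0, n_grid)
truth = 2.0 + np.sin(2 * np.pi * x)
L = 0.2                                       # assumed correlation length
dist = np.abs(x[:, None] - x[None, :])
C = 0.1 * np.exp(-(dist / L) ** 2)            # Gaussian-shaped covariance
members = rng.multivariate_normal(truth, C, n_member)

# Monte Carlo estimate of the background error covariance.
anom = members - members.mean(axis=0)
B = anom.T @ anom / (n_member - 1)
print(float(B[0, 0]))  # near the prescribed variance 0.1
```

With enough members the sample covariance recovers the prescribed structure; the paper's point is that in a real wave model this estimated structure varies with the mean wave direction rather than being stationary.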
Strong decays of excited 1D charmed(-strange) mesons in the covariant oscillator quark model
Maeda, Tomohito; Yoshida, Kento; Yamada, Kenji; Ishida, Shin; Oda, Masuho
2016-05-01
Recently observed charmed mesons, $D_1^*(2760)$ and $D_3^*(2760)$, and charmed-strange mesons, $D_{s1}^*(2860)$ and $D_{s3}^*(2860)$, by the BaBar and LHCb collaborations are considered to be plausible candidates for $c\bar{q}$ $1^3D_J$ ($q = u, d, s$) states. We calculate the strong decays with one pion (kaon) emission of these states, including well-established 1S and 1P charmed(-strange) mesons, within the framework of the covariant oscillator quark model. The results obtained are compared with the experimental data and the typical nonrelativistic quark-model calculations. Concerning the results for the 1S and 1P states, we find that, thanks to the relativistic effects of the decay form factors, our model parameters take reasonable values, though our relativistic approach and the nonrelativistic quark model give similar decay widths in agreement with experiment. While the results obtained for the $1^3D_{J=1,3}$ states are roughly consistent with the present data, they should be checked by future precise measurements.
Yi, Grace Y; He, Wenqing
2012-05-01
It is well known that ignoring measurement error may result in substantially biased estimates in many contexts including linear and nonlinear regressions. For survival data with measurement error in covariates, there has been extensive discussion in the literature with the focus on proportional hazards (PH) models. Recently, research interest has extended to accelerated failure time (AFT) and additive hazards (AH) models. However, the impact of measurement error on other models, such as the proportional odds model, has received relatively little attention, although these models are important alternatives when PH, AFT, or AH models are not appropriate to fit data. In this paper, we investigate this important problem and study the bias induced by the naive approach of ignoring covariate measurement error. To adjust for the induced bias, we describe the simulation-extrapolation method. The proposed method enjoys a number of appealing features. Its implementation is straightforward and can be accomplished with minor modifications of existing software. More importantly, the proposed method does not require modeling the covariate process, which is quite attractive in practice. As the precise values of error-prone covariates are often not observable, any modeling assumption on such covariates has the risk of model misspecification, hence yielding invalid inferences if this happens. The proposed method is carefully assessed both theoretically and empirically. Theoretically, we establish the asymptotic normality for resulting estimators. Numerically, simulation studies are carried out to evaluate the performance of the estimators as well as the impact of ignoring measurement error, along with an application to a data set arising from the Busselton Health Study. Sensitivity of the proposed method to misspecification of the error model is studied as well.
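The simulation-extrapolation (SIMEX) idea mentioned here can be sketched for the simplest case, a linear regression with classical additive measurement error (all parameter values are invented):

```python
import numpy as np

rng = np.random.default_rng(4)
n, beta, sigma_u = 2000, 1.0, 0.5

x = rng.normal(0.0, 1.0, n)                  # true covariate
w = x + rng.normal(0.0, sigma_u, n)          # error-prone observed version
y = beta * x + rng.normal(0.0, 0.2, n)

def slope(w, y):
    """Naive least-squares slope of y on the observed covariate."""
    return np.cov(w, y)[0, 1] / np.var(w, ddof=1)

# SIMEX: add extra noise at levels lambda, average the naive fits, then
# extrapolate back to lambda = -1 (the no-measurement-error limit).
lambdas = np.array([0.0, 0.5, 1.0, 1.5, 2.0])
naive = [np.mean([slope(w + np.sqrt(lam) * sigma_u * rng.normal(size=n), y)
                  for _ in range(50)]) for lam in lambdas]
coef = np.polyfit(lambdas, naive, 2)         # quadratic extrapolant
simex = np.polyval(coef, -1.0)
print(round(slope(w, y), 3), round(simex, 3))
```

The naive slope is attenuated toward zero by the measurement error, while the extrapolated SIMEX estimate recovers most of the true slope of 1; note this toy version assumes the error variance sigma_u^2 is known.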
Axial form factors of the octet baryons in a covariant quark model
Ramalho, G
2015-01-01
We study the weak interaction axial form factors of the octet baryons within the covariant spectator quark model, focusing on the dependence on the four-momentum transfer squared, Q^2. In our model the axial form factors G_A(Q^2) (axial-vector form factor) and G_P(Q^2) (induced pseudoscalar form factor) are calculated based on the constituent quark axial form factors and the octet baryon wave functions. The quark axial current is parametrized by two constituent quark form factors, the axial-vector form factor g_A^q(Q^2) and the induced pseudoscalar form factor g_P^q(Q^2). The baryon wave functions are composed of a dominant S-state and a P-state mixture for the relative angular momentum of the quarks. First, we study the nucleon case in detail. We assume that the quark axial-vector form factor g_A^q(Q^2) has the same functional form as that of the quark electromagnetic isovector form factor. The remaining parameters of the model, the P-state mixture and the Q^2-dependence of g_P^q(Q^2), are determined by a f...
A k-Model for Stably Stratified Nearly Horizontal Turbulent Flows
Kranenburg, C.
1985-01-01
A k-model is formulated that consists of the turbulent kinetic energy equation and an algebraic expression for the mixing length taking into account the influence of stratification. Applicability of the model is restricted to shallow, nearly horizontal flows. For local-equilibrium flows the model re
Chang, Chih-Hao; Liou, Meng-Sing
2007-07-01
In this paper, we propose a new approach to compute compressible multifluid equations. Firstly, a single-pressure compressible multifluid model based on the stratified flow model is proposed. The stratified flow model, which defines different fluids in separated regions, is shown to be amenable to the finite volume method. We can apply the conservation law to each subregion and obtain a set of balance equations. Secondly, the AUSM+ scheme, which was originally designed for compressible gas flow, is extended to solve compressible liquid flows. By introducing additional dissipation terms into the numerical flux, the new scheme, called AUSM+-up, can be applied to both liquid and gas flows. Thirdly, the contribution to the numerical flux due to interactions between different phases is taken into account and solved by the exact Riemann solver. We will show that the proposed approach yields an accurate and robust method for computing compressible multiphase flows involving discontinuities, such as shock waves and fluid interfaces. Several one-dimensional test problems are used to demonstrate the capability of our method, including Ransom's water faucet problem and the air-water shock tube problem. Finally, several two-dimensional problems will show the capability to capture enormous detail and complicated wave patterns in flows having large disparities in the fluid density and velocities, such as interactions between a water shock wave and an air bubble, between an air shock wave and water column(s), and underwater explosion. However, conservative form is lost in these balance equations when each individual phase is considered; in fact, the interactions that exist simultaneously in both phases manifest themselves as nonconservative terms.
A covariance-adaptive approach for regularized inversion in linear models
Kotsakis, Christopher
2007-11-01
The optimal inversion of a linear model under the presence of additive random noise in the input data is a typical problem in many geodetic and geophysical applications. Various methods have been developed and applied for the solution of this problem, ranging from the classic principle of least-squares (LS) estimation to other more complex inversion techniques such as the Tikhonov-Phillips regularization, truncated singular value decomposition, generalized ridge regression, numerical iterative methods (Landweber, conjugate gradient) and others. In this paper, a new type of optimal parameter estimator for the inversion of a linear model is presented. The proposed methodology is based on a linear transformation of the classic LS estimator and it satisfies two basic criteria. First, it provides a solution for the model parameters that is optimally fitted (in an average quadratic sense) to the classic LS parameter solution. Second, it complies with an external user-dependent constraint that specifies a priori the error covariance (CV) matrix of the estimated model parameters. The formulation of this constrained estimator offers a unified framework for the description of many regularization techniques that are systematically used in geodetic inverse problems, particularly for those methods that correspond to an eigenvalue filtering of the ill-conditioned normal matrix in the underlying linear model. The value of our study lies in the fact that it adds an alternative perspective on the statistical properties and the regularization mechanism of many inversion techniques commonly used in geodesy and geophysics, by interpreting them as a family of `CV-adaptive' parameter estimators that obey a common optimal criterion and differ only on the pre-selected form of their error CV matrix under a fixed model design.
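The "eigenvalue filtering" view of regularization referred to here can be made concrete with Tikhonov regularization, whose filter factors s/(s + alpha) damp the small eigenvalues of the normal matrix (a generic sketch, not the paper's CV-adaptive estimator):

```python
import numpy as np

rng = np.random.default_rng(5)
m, n, alpha = 50, 10, 0.1

A = rng.normal(size=(m, n))
x_true = rng.normal(size=n)
y = A @ x_true + rng.normal(0.0, 0.1, m)

# Tikhonov regularization as an eigenvalue filter on the normal matrix
# N = A^T A: each eigenvalue s is damped by the factor s / (s + alpha).
N = A.T @ A
s, V = np.linalg.eigh(N)
filt = s / (s + alpha)                       # filter factors in (0, 1)
x_reg = V @ (filt / s * (V.T @ (A.T @ y)))   # filtered pseudo-inverse solve

# Equivalent direct form: (N + alpha I)^{-1} A^T y.
x_direct = np.linalg.solve(N + alpha * np.eye(n), A.T @ y)
print(np.allclose(x_reg, x_direct))  # True
```

Since filt / s = 1 / (s + alpha), the filtered solve and the direct regularized normal equations coincide; other regularization methods correspond to other choices of filter factors applied to the same eigendecomposition.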
Bosetti, Hadrien; Posch, Harald A; Dellago, Christoph; Hoover, William G
2010-10-01
Recently, a new algorithm for the computation of covariant Lyapunov vectors and of corresponding local Lyapunov exponents has become available. Here we study the properties of these still unfamiliar quantities for a simple model representing a harmonic oscillator coupled to a thermal gradient with a two-stage thermostat, which leaves the system ergodic and fully time reversible. We explicitly demonstrate how time-reversal invariance affects the perturbation vectors in tangent space and the associated local Lyapunov exponents. We also find that the local covariant exponents vary discontinuously along directions transverse to the phase flow.
Matzelle, A.; Montalto, V.; Sarà, G.; Zippay, M.; Helmuth, B.
2014-11-01
Dynamic Energy Budget (DEB) models serve as a powerful tool for describing the flow of energy through organisms from assimilation of food to utilization for maintenance, growth and reproduction. The DEB theory has been successfully applied to several bivalve species to compare bioenergetic and physiological strategies for the utilization of energy. In particular, mussels within the Mytilus edulis complex (M. edulis, M. galloprovincialis, and M. trossulus) have been the focus of many studies due to their economic and ecological importance, and their worldwide distribution. However, DEB parameter values have never been estimated for Mytilus californianus, a species that is an ecological dominant on rocky intertidal shores on the west coast of North America and which likely varies considerably from mussels in the M. edulis complex in its physiology. We estimated a set of DEB parameters for M. californianus using the covariation method estimation procedure and compared these to parameter values from other bivalve species. Model parameters were used to compare sensitivity to environmental variability among species, as a first examination of how strategies for physiologically contending with environmental change by M. californianus may differ from those of other bivalves. Results suggest that based on the parameter set obtained, M. californianus has favorable energetic strategies enabling it to contend with a range of environmental conditions. For instance, the allocation fraction of reserve to soma (κ) is among the highest of any bivalves, which is consistent with the observation that this species can survive over a wide range of environmental conditions, including prolonged periods of starvation.
Structural propensities of kinase family proteins from a Potts model of residue co-variation.
Haldane, Allan; Flynn, William F; He, Peng; Vijayan, R S K; Levy, Ronald M
2016-08-01
Understanding the conformational propensities of proteins is key to solving many problems in structural biology and biophysics. The co-variation of pairs of mutations contained in multiple sequence alignments of protein families can be used to build a Potts Hamiltonian model of the sequence patterns which accurately predicts structural contacts. This observation paves the way to develop deeper connections between evolutionary fitness landscapes of entire protein families and the corresponding free energy landscapes which determine the conformational propensities of individual proteins. Using statistical energies determined from the Potts model and an alignment of 2896 PDB structures, we predict the propensity for particular kinase family proteins to assume a "DFG-out" conformation implicated in the susceptibility of some kinases to type-II inhibitors, and validate the predictions by comparison with the observed structural propensities of the corresponding proteins and experimental binding affinity data. We decompose the statistical energies to investigate which interactions contribute the most to the conformational preference for particular sequences and the corresponding proteins. We find that interactions involving the activation loop and the C-helix and HRD motif are primarily responsible for stabilizing the DFG-in state. This work illustrates how structural free energy landscapes and fitness landscapes of proteins can be used in an integrated way, and in the context of kinase family proteins, can potentially impact therapeutic design strategies. © 2016 The Protein Society.
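The Potts statistical energy used in this kind of analysis has the standard form E(s) = -sum_i h_i(s_i) - sum_{i<j} J_ij(s_i, s_j); the sketch below evaluates it for random (hypothetical) fields and couplings, not parameters inferred from a real alignment:

```python
import numpy as np

rng = np.random.default_rng(6)
L_pos, q = 8, 21          # sequence length; 20 amino acids plus gap

# Hypothetical Potts parameters: site fields h[i, a] and pairwise
# couplings J[i, j, a, b], symmetrized so J_ij(a, b) = J_ji(b, a).
h = rng.normal(0.0, 0.1, (L_pos, q))
J = rng.normal(0.0, 0.05, (L_pos, L_pos, q, q))
J = (J + J.transpose(1, 0, 3, 2)) / 2

def potts_energy(seq, h, J):
    """Statistical energy E(s) = -sum_i h_i(s_i) - sum_{i<j} J_ij(s_i, s_j)."""
    E = -sum(h[i, a] for i, a in enumerate(seq))
    E -= sum(J[i, j, seq[i], seq[j]]
             for i in range(len(seq)) for j in range(i + 1, len(seq)))
    return E

seq = rng.integers(0, q, L_pos)
print(float(potts_energy(seq, h, J)))
```

Lower statistical energy corresponds to a more probable sequence under the inferred model; the paper's analysis decomposes such energies to find which position pairs drive the DFG-in/DFG-out preference.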
Modeling the angular correlation function and its full covariance in Photometric Galaxy Surveys
Crocce, Martin; Gaztañaga, Enrique
2010-01-01
Near-future cosmology will see the advent of wide-area photometric galaxy surveys, like the Dark Energy Survey (DES), that extend to high redshifts (z ~ 1 - 2) but with poor radial distance resolution. In such cases splitting the data into redshift bins and using the angular correlation function $w(\\theta)$, or the $C_{\\ell}$ power spectrum, will become the standard approach to extract cosmological information or to study the nature of dark energy through the Baryon Acoustic Oscillations (BAO) probe. In this work we present a detailed model for $w(\\theta)$ at large scales as a function of redshift and bin width, including all relevant effects, namely nonlinear gravitational clustering, bias, redshift space distortions and photo-z uncertainties. We also present a model for the full covariance matrix characterizing the angular correlation measurements, that takes into account the same effects as for $w(\\theta)$ and also the possibility of a shot-noise component and partial sky coverage. Provided with a large vo...
Ludtke, Oliver; Marsh, Herbert W.; Robitzsch, Alexander; Trautwein, Ulrich; Asparouhov, Tihomir; Muthen, Bengt
2008-01-01
In multilevel modeling (MLM), group-level (L2) characteristics are often measured by aggregating individual-level (L1) characteristics within each group so as to assess contextual effects (e.g., group-average effects of socioeconomic status, achievement, climate). Most previous applications have used a multilevel manifest covariate (MMC) approach,…
Modeling light use efficiency in a subtropical mangrove forest equipped with CO2 eddy covariance
Barr, J.G.; Engel, V.; Fuentes, J.D.; Fuller, D.O.; Kwon, H.
2013-01-01
Despite the importance of mangrove ecosystems in the global carbon budget, the relationships between environmental drivers and carbon dynamics in these forests remain poorly understood. This limited understanding is partly a result of the challenges associated with in situ flux studies. Tower-based CO2 eddy covariance (EC) systems are installed in only a few mangrove forests worldwide, and the longest EC record from the Florida Everglades contains less than 9 years of observations. A primary goal of the present study was to develop a methodology to estimate canopy-scale photosynthetic light use efficiency in this forest. These tower-based observations represent a basis for associating CO2 fluxes with canopy light use properties, and thus provide the means for utilizing satellite-based reflectance data for larger scale investigations. We present a model for mangrove canopy light use efficiency utilizing the enhanced green vegetation index (EVI) derived from the Moderate Resolution Imaging Spectroradiometer (MODIS) that is capable of predicting changes in mangrove forest CO2 fluxes caused by a hurricane disturbance and changes in regional environmental conditions, including temperature and salinity. Model parameters are solved for in a Bayesian framework. The model structure requires estimates of ecosystem respiration (RE), and we present the first ever tower-based estimates of mangrove forest RE derived from nighttime CO2 fluxes. Our investigation is also the first to show the effects of salinity on mangrove forest CO2 uptake, which declines by 5% for each 10 parts per thousand (ppt) increase in salinity. Light use efficiency in this forest declines with increasing daily photosynthetically active radiation, which is an important departure from the assumption of constant light use efficiency typically applied in satellite-driven models. The model developed here provides a framework for estimating CO2 uptake by these forests from reflectance data and information about
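The reported salinity response (roughly a 5% decline in CO2 uptake for each 10 ppt increase) can be illustrated with a toy light-use-efficiency calculation. The multiplicative form and the parameter values below are assumptions for illustration, not the paper's fitted model:

```python
def gpp_estimate(apar, lue_max, salinity_ppt):
    """Toy LUE model: GPP = LUE * APAR, with uptake declining by 5% for
    each 10 ppt of salinity (the sensitivity reported for this site).
    Units are arbitrary here; the linear-decline factor is an assumed
    functional form, clipped at zero for very high salinity."""
    salinity_factor = max(1.0 - 0.05 * salinity_ppt / 10.0, 0.0)
    return lue_max * apar * salinity_factor

# At 20 ppt the modifier is 0.9, i.e. a 10% reduction in estimated GPP.
print(gpp_estimate(apar=30.0, lue_max=0.02, salinity_ppt=20.0))  # 0.54
```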
Electromagnetic properties of nucleons and hyperons in a Lorentz covariant quark model
Faessler, A; Holstein, B R; Lyubovitskij, V E; Nicmorus, D; Pumsa-ard, K; Faessler, Amand; Gutsche, Thomas; Holstein, Barry R.; Lyubovitskij, Valery E.; Nicmorus, Diana; Pumsa-ard, Kem
2006-01-01
We calculate magnetic moments of nucleons and hyperons and N -> Delta + gamma transition characteristics using a manifestly Lorentz covariant chiral quark approach for the study of baryons as bound states of constituent quarks dressed by a cloud of pseudoscalar mesons.
Directory of Open Access Journals (Sweden)
K. Lee
2002-01-01
This paper reports the application to vegetation canopies of a coherent model for the propagation of electromagnetic radiation through a stratified medium. The resulting multi-layer vegetation model is plausibly realistic in that it recognises the dielectric permittivity of the vegetation matter, the mixing of the dielectric permittivities for vegetation and air within the canopy and, in simplified terms, the overall vertical distribution of dielectric permittivity and temperature through the canopy. Any sharp changes in the dielectric profile of the canopy resulted in interference effects manifested as oscillations in the microwave brightness temperature as a function of canopy height or look angle. However, when Gaussian broadening of the top and bottom of the canopy (reflecting the natural variability between plants) was included within the model, these oscillations were eliminated. The model parameters required to specify the dielectric profile within the canopy, particularly the parameters that quantify the dielectric mixing between vegetation and air in the canopy, are not usually available in typical field experiments. Thus, the feasibility of specifying these parameters using an advanced single-criterion, multiple-parameter optimisation technique was investigated by automatically minimizing the difference between the modelled and measured brightness temperatures. The results imply that the mixing parameters can be so determined but only if other parameters that specify vegetation dry matter and water content are measured independently. The new model was then applied to investigate the sensitivity of microwave emission to specific vegetation parameters. Keywords: passive microwave, soil moisture, vegetation, SMOS, retrieval
Elizabeth A. Freeman; Gretchen G. Moisen; Tracy S. Frescino
2012-01-01
Random Forests is frequently used to model species distributions over large geographic areas. Complications arise when data used to train the models have been collected in stratified designs that involve different sampling intensity per stratum. The modeling process is further complicated if some of the target species are relatively rare on the landscape leading to an...
Vogel, Curtis R; Tyler, Glenn A; Wittich, Donald J
2014-07-01
We introduce a framework for modeling, analysis, and simulation of aero-optics wavefront aberrations that is based on spatial-temporal covariance matrices extracted from wavefront sensor measurements. Within this framework, we present a quasi-homogeneous structure function to analyze nonhomogeneous, mildly anisotropic spatial random processes, and we use this structure function to show that phase aberrations arising in aero-optics are, for an important range of operating parameters, locally Kolmogorov. This strongly suggests that the d^(5/3) power law for adaptive optics (AO) deformable mirror fitting error, where d denotes actuator separation, holds for certain important aero-optics scenarios. This framework also allows us to compute bounds on AO servo lag error and predictive control error. In addition, it provides us with the means to accurately simulate AO systems for the mitigation of aero-effects, and it may provide insight into underlying physical processes associated with turbulent flow. The techniques introduced here are demonstrated using data obtained from the Airborne Aero-Optics Laboratory.
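The d^(5/3) fitting-error power law takes the familiar adaptive-optics form sigma^2 = alpha * (d/r0)^(5/3), where r0 is the Fried parameter. A one-line sketch follows; the coefficient alpha used here is a typical continuous-facesheet deformable-mirror value, an assumption and not a number taken from this work:

```python
def fitting_error_var(d, r0, alpha=0.28):
    """Residual wavefront (fitting) error variance in rad^2 under the
    Kolmogorov d^(5/3) law: sigma^2 = alpha * (d / r0)^(5/3).
    d     : deformable-mirror actuator separation
    r0    : Fried parameter (same units as d)
    alpha : fitting coefficient for the mirror influence functions
            (0.28 is a typical assumed value)."""
    return alpha * (d / r0) ** (5.0 / 3.0)

# Halving actuator spacing cuts the variance by a factor of 2^(5/3) ~ 3.17.
print(fitting_error_var(1.0, 1.0) / fitting_error_var(0.5, 1.0))
```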
Time-dependent modelling of TeV blazars with a stratified jet model
Boutelier, Timothé; Petrucci, Pierre-Olivier
2008-01-01
We present a new time-dependent inhomogeneous jet model of non-thermal blazar emission. Ultra-relativistic leptons are injected at the base of a jet and propagate along it. We assume continuous reacceleration and cooling, producing a relativistic quasi-maxwellian (or "pile-up") particle energy distribution. The synchrotron and synchrotron self-Compton jet emissivities are computed at each altitude. Klein-Nishina effects as well as intrinsic gamma-gamma absorption are included in the computation. Due to the pair production optical depth, considerable particle density enhancement can occur, particularly during flaring states. Time-dependent jet emission can be computed by varying the particle injection, but due to the sensitivity of the pair production process, only small variations of the injected density are required during the flares. The stratification of the jet emission, together with a pile-up distribution, allows significantly lower bulk Lorentz factors, compared to one-zone models. Applying this model to the ...
A self-consistent chemically stratified atmosphere model for the roAp star 10 Aquilae
Nesvacil, Nicole; Ryabchikova, Tanya A; Kochukhov, Oleg; Akberov, Artur; Weiss, Werner W
2012-01-01
Context: Chemically peculiar A type (Ap) stars are a subgroup of the CP2 stars that exhibit anomalous overabundances of numerous elements, e.g. Fe, Cr, Sr and rare earth elements. The pulsating subgroup of the Ap stars, the roAp stars, present ideal laboratories to observe and model pulsational signatures as well as the interplay of the pulsations with strong magnetic fields and vertical abundance gradients. Aims: Based on high resolution spectroscopic observations and observed stellar energy distributions we construct a self-consistent model atmosphere that accounts for modulations of the temperature-pressure structure caused by vertical abundance gradients, for the roAp star 10 Aquilae (HD 176232). We demonstrate that such an analysis can be used to determine precisely the fundamental atmospheric parameters required for pulsation modelling. Methods: Average abundances were derived for 56 species. For Mg, Si, Ca, Cr, Fe, Co, Sr, Pr, and Nd vertical stratification profiles were empirically derived using the...
Su, G; Madsen, P; Lund, M S
2009-05-01
Crossbreeding is currently increasing in dairy cattle production. Several studies have shown an environment-dependent heterosis [i.e., an interaction between heterosis and environment (H x E)]. An H x E interaction is usually estimated from a few discrete environment levels. The present study proposes a reaction norm model to describe H x E interaction, which can deal with a large number of environment levels using few parameters. In the proposed model, total heterosis consists of an environment-independent part, which is described as a function of heterozygosity, and an environment-dependent part, which is described as a function of heterozygosity and environmental value (e.g., herd-year effect). A Bayesian approach is developed to estimate the environmental covariates, the regression coefficients of the reaction norm, and other parameters of the model simultaneously in both linear and nonlinear reaction norms. In the nonlinear reaction norm model, the H x E is approximated using linear splines. The approach was tested using simulated data, which were generated using an animal model with a reaction norm for heterosis. The simulation study includes 4 scenarios (the combinations of moderate vs. low heritability and moderate vs. low herd-year variation) of H x E interaction in a nonlinear form. In all scenarios, the proposed model predicted total heterosis very well. The correlation between true heterosis and predicted heterosis was 0.98 in the scenarios with low herd-year variation and 0.99 in the scenarios with moderate herd-year variation. This suggests that the proposed model and method could be a good approach to analyze H x E interactions and predict breeding values in situations in which heterosis changes gradually and continuously over an environmental gradient. On the other hand, it was found that a model ignoring H x E interaction did not significantly harm the prediction of breeding value under the simulated scenarios in which the variance for environment
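The linear form of the proposed reaction norm can be sketched as follows. The coefficient names are illustrative; in the study they would be estimated jointly with the environmental values (e.g. herd-year effects) in the Bayesian framework, and the nonlinear version replaces the linear environment term with linear splines:

```python
def total_heterosis(hz, env, b0, b1):
    """Linear reaction-norm sketch of the H x E interaction.

    hz  : heterozygosity of the animal (0..1)
    env : environmental value, e.g. a herd-year effect
    b0  : environment-independent heterosis per unit heterozygosity
    b1  : slope of the environment-dependent part

    total heterosis = b0 * hz  (environment-independent)
                    + b1 * hz * env  (environment-dependent)."""
    return hz * (b0 + b1 * env)

# A fully crossbred cow (hz = 1.0) in an above-average environment.
print(total_heterosis(1.0, 0.5, b0=100.0, b1=40.0))  # 120.0
```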
Unstructured grid modelling of offshore wind farm impacts on seasonally stratified shelf seas
Cazenave, Pierre William; Torres, Ricardo; Allen, J. Icarus
2016-06-01
Shelf seas comprise approximately 7% of the world's oceans and host enormous economic activity. Development of energy installations (e.g. Offshore Wind Farms (OWFs), tidal turbines) in response to increased demand for renewable energy requires a careful analysis of potential impacts. Recent remote sensing observations have identified kilometre-scale impacts from OWFs. Existing modelling evaluating monopile impacts has fallen into two camps: small-scale models with individually resolved turbines looking at local effects; and large-scale analyses but with sub-grid scale turbine parameterisations. This work straddles both scales through a 3D unstructured grid model (FVCOM): wind turbine monopiles in the eastern Irish Sea are explicitly described in the grid whilst the overall grid domain covers the south-western UK shelf. Localised regions of decreased velocity extend up to 250 times the monopile diameter away from the monopile. Shelf-wide, the amplitude of the M2 tidal constituent increases by up to 7%. The turbines enhance localised vertical mixing which decreases seasonal stratification. The spatial extent of this extends well beyond the turbines into the surrounding seas. With significant expansion of OWFs on continental shelves, this work highlights the importance of how OWFs may impact coastal (e.g. increased flooding risk) and offshore (e.g. stratification and nutrient cycling) areas.
Directory of Open Access Journals (Sweden)
Sara Schärrer
Demographic composition and dynamics of animal and human populations are important determinants of the transmission dynamics of infectious disease and of the effect of infectious disease or environmental disasters on productivity. In many circumstances, demographic data are not available or of poor quality. Since 1999 Switzerland has been recording cattle movements, births, deaths and slaughter in an animal movement database (AMD). The data present in the AMD offer the opportunity for analysing and understanding the dynamics of the Swiss cattle population. A dynamic population model can serve as a building block for future disease transmission models and help policy makers in developing strategies regarding animal health, animal welfare, livestock management and productivity. The Swiss cattle population was therefore modelled using a system of ordinary differential equations. The model was stratified by production type (dairy or beef), age, and gender (male and female calves: 0-1 year; heifers and young bulls: 1-2 years; cows and bulls: older than 2 years). The simulation of the Swiss cattle population reflects the observed pattern accurately. Parameters were optimized on the basis of the goodness-of-fit (using the Powell algorithm). The fitted rates were compared with calculated rates from the AMD and differed only marginally. This gives confidence in the fitted values of parameters that are not directly deducible from the AMD (e.g. the proportion of calves that are moved from the dairy system to fattening plants).
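A stage-structured ODE model of this kind can be sketched with a simple Euler step. The three female stages and the per-year rates below are illustrative placeholders, not rates fitted to the AMD, and the real model additionally stratifies by production type and gender:

```python
def step_herd(state, dt, birth=0.9, age1=1.0, age2=1.0, cull=0.25):
    """One Euler step (dt in years) of a toy stage-structured herd model
    with three female stages: calves (0-1 y), heifers (1-2 y), cows (>2 y).
    All rates are per year and purely illustrative."""
    calves, heifers, cows = state
    d_calves = birth * cows - age1 * calves     # births minus aging out
    d_heifers = age1 * calves - age2 * heifers  # aging in minus aging out
    d_cows = age2 * heifers - cull * cows       # aging in minus culling
    return (calves + dt * d_calves,
            heifers + dt * d_heifers,
            cows + dt * d_cows)

# Integrate one year with daily steps from an arbitrary initial herd.
state = (100.0, 90.0, 300.0)
for _ in range(365):
    state = step_herd(state, dt=1 / 365)
print(state)
```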
Sainath, Kamalesh
2016-01-01
We propose and investigate an "interface-flattening" transformation, hinging upon Transformation Optics (T.O.) techniques, to facilitate the rigorous analysis of electromagnetic (EM) fields radiated by sources embedded in tilted, cylindrically-layered geophysical media. Our method addresses the major challenge in such problems of appropriately approximating the domain boundaries in the computational model while, in a full-wave manner, predicting the effects of tilting in the layers. When incorporated into standard pseudo-analytical algorithms, moreover, the proposed method is quite robust, as it is not limited by absorption, anisotropy, and/or eccentering profile of the cylindrical geophysical formations, nor is it limited by the radiation frequency. These attributes of the proposed method are in contrast to past analysis methods for tilted-layer media that often place limitations on the source and medium characteristics. Through analytical derivations as well as a preliminary numerical investigation, we anal...
Stable, accurate and efficient computation of normal modes for horizontal stratified models
Wu, Bo; Chen, Xiaofei
2016-08-01
We propose an adaptive root-determining strategy that is very useful when dealing with trapped modes or Stoneley modes whose energies become very insignificant on the free surface in the presence of low-velocity layers or fluid layers in the model. Loss of modes in these cases or inaccuracy in the calculation of these modes may then be easily avoided. Built upon the generalized reflection/transmission coefficients, the concept of `family of secular functions' that we herein call `adaptive mode observers' is thus naturally introduced to implement this strategy, the underlying idea of which has been distinctly noted for the first time and may be generalized to other applications such as free oscillations or applied to other methods in use when these cases are encountered. Additionally, we have made further improvements upon the generalized reflection/transmission coefficient method; mode observers associated with only the free surface and low-velocity layers (and the fluid/solid interface if the model contains fluid layers) are adequate to guarantee no loss and high precision at the same time of any physically existent modes without excessive calculations. Finally, the conventional definition of the fundamental mode is reconsidered, which is entailed in the cases under study. Some computational aspects are remarked on. With the additional help afforded by our superior root-searching scheme and the possibility of speeding up the calculation using a smaller number of layers aided by the concept of `turning point', our algorithm is remarkably efficient as well as stable and accurate and can be used as a powerful tool for widely related applications.
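The generic core of any such normal-mode search is to bracket sign changes of a secular (dispersion) function on a grid and refine each bracket by bisection. The sketch below shows only this naive step; the paper's adaptive mode observers replace the single secular function precisely to avoid the missed-root failures this version is prone to when mode energies vanish at the free surface:

```python
import numpy as np

def find_roots(f, lo, hi, n=1000, tol=1e-10):
    """Bracket sign changes of f on a uniform grid over (lo, hi), then
    refine each bracket by bisection. Roots falling between grid points
    with no sign change (e.g. double roots) are missed -- exactly the
    weakness the adaptive strategy is designed to remove."""
    xs = np.linspace(lo, hi, n)
    roots = []
    for a, b in zip(xs[:-1], xs[1:]):
        if f(a) == 0.0:
            roots.append(a)
        elif f(a) * f(b) < 0:
            while b - a > tol:
                m = 0.5 * (a + b)
                if f(a) * f(m) <= 0:
                    b = m
                else:
                    a = m
            roots.append(0.5 * (a + b))
    return roots

# Stand-in secular function: sin(x) has roots at pi and 2*pi on (0.5, 7).
print(find_roots(np.sin, 0.5, 7.0))
```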
Directory of Open Access Journals (Sweden)
C Elizabeth McCarron
BACKGROUND: Bayesian hierarchical models have been proposed to combine evidence from different types of study designs. However, when combining evidence from randomised and non-randomised controlled studies, imbalances in patient characteristics between study arms may bias the results. The objective of this study was to assess the performance of a proposed Bayesian approach to adjust for imbalances in patient level covariates when combining evidence from both types of study designs. METHODOLOGY/PRINCIPAL FINDINGS: Simulation techniques, in which the truth is known, were used to generate sets of data for randomised and non-randomised studies. Covariate imbalances between study arms were introduced in the non-randomised studies. The performance of the Bayesian hierarchical model adjusted for imbalances was assessed in terms of bias. The data were also modelled using three other Bayesian approaches for synthesising evidence from randomised and non-randomised studies. The simulations considered six scenarios aimed at assessing the sensitivity of the results to changes in the impact of the imbalances and the relative number and size of studies of each type. For all six scenarios considered, the Bayesian hierarchical model adjusted for differences within studies gave results that were unbiased and closest to the true value compared to the other models. CONCLUSIONS/SIGNIFICANCE: Where informed health care decision making requires the synthesis of evidence from randomised and non-randomised study designs, the proposed hierarchical Bayesian method adjusted for differences in patient characteristics between study arms may facilitate the optimal use of all available evidence leading to unbiased results compared to unadjusted analyses.
Zilitinkevich, S. S.; Elperin, T.; Kleeorin, N.; Rogachevskii, I.; Esau, I.
2013-03-01
Here we advance the physical background of the energy- and flux-budget turbulence closures based on the budget equations for the turbulent kinetic and potential energies and turbulent fluxes of momentum and buoyancy, and a new relaxation equation for the turbulent dissipation time scale. The closure is designed for stratified geophysical flows from neutral to very stable and accounts for the Earth's rotation. In accordance with modern experimental evidence, the closure implies the maintaining of turbulence by the velocity shear at any gradient Richardson number Ri, and distinguishes between the two principally different regimes: "strong turbulence" at {Ri ≪ 1} typical of boundary-layer flows and characterized by the practically constant turbulent Prandtl number Pr T; and "weak turbulence" at Ri > 1 typical of the free atmosphere or deep ocean, where Pr T asymptotically linearly increases with increasing Ri (which implies very strong suppression of the heat transfer compared to the momentum transfer). For use in different applications, the closure is formulated at different levels of complexity, from the local algebraic model relevant to the steady-state regime of turbulence to a hierarchy of non-local closures including simpler down-gradient models, presented in terms of the eddy viscosity and eddy conductivity, and a general non-gradient model based on prognostic equations for all the basic parameters of turbulence including turbulent fluxes.
DEFF Research Database (Denmark)
Hounyo, Ulrich
We propose a bootstrap method for estimating the distribution (and functionals of it, such as the variance) of various integrated covariance matrix estimators. In particular, we first adapt the wild blocks of blocks bootstrap method suggested for the pre-averaged realized volatility estimator...-studentized statistics, our results justify using the bootstrap to estimate the covariance matrix of a broad class of covolatility estimators. The bootstrap variance estimator is positive semi-definite by construction, an appealing feature that is not always shared by existing variance estimators of the integrated...
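The generic resample-and-recompute recipe behind such a bootstrap variance estimator can be sketched as follows. Note this naive i.i.d. version ignores the serial dependence and microstructure noise that the wild blocks-of-blocks construction and pre-averaging are designed to handle; it only illustrates the overall shape of the procedure:

```python
import numpy as np

def bootstrap_rcov_var(returns, B=500, seed=0):
    """Naive i.i.d. bootstrap of the variance of a realized covariance
    entry. returns : (n, 2) array of high-frequency returns of 2 assets.
    Resample return rows with replacement B times, recompute the
    realized covariance sum(r1 * r2) each time, and return the variance
    of the B replicates as the bootstrap variance estimate."""
    rng = np.random.default_rng(seed)
    n = returns.shape[0]
    stats = []
    for _ in range(B):
        idx = rng.integers(0, n, size=n)   # resample with replacement
        r = returns[idx]
        stats.append(float((r[:, 0] * r[:, 1]).sum()))
    return float(np.var(stats))
```

A scalar variance replicated this way is trivially nonnegative; the appeal noted in the abstract is that the analogous matrix-valued construction stays positive semi-definite, which plug-in asymptotic variance formulas need not.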
Heavy-to-Light Form Factors in the Final Hadron Large Energy Limit Covariant Quark Model Approach
Charles, J; Oliver, L; Pène, O; Raynal, J C
1999-01-01
We prove the full covariance of the heavy-to-light weak current matrix elements based on the Bakamjian-Thomas construction of relativistic quark models, in the heavy mass limit for the parent hadron and the large energy limit for the daughter one. Moreover, this quark model representation of the heavy-to-light form factors fulfills the general relations that were recently argued to hold in the corresponding limit of QCD, namely that there are only three independent form factors describing the B -> pi (rho) matrix elements, as well as the factorized scaling law sqrt(M)z(E) of the form factors with respect to the heavy mass M and large energy E. These results constitute another good property of the quark models à la Bakamjian-Thomas, which were previously shown to exhibit covariance and Isgur-Wise scaling in the heavy-to-heavy case.
Energy Technology Data Exchange (ETDEWEB)
Picchi, St
1999-07-07
When a hot liquid comes into contact with a colder volatile liquid, under some conditions one can obtain an explosive vaporization, called a vapour explosion, whose consequences for neighbouring structures can be severe. This explosion requires intimate mixing and fine fragmentation of the two liquids. In a stratified vapour explosion, the two liquids are initially superposed and separated by a vapour film. Triggering the explosion can induce a propagation along the film. A review of experimental results and existing models led us to retain the following main points: - the explosion propagation is due to a pressure wave propagating through the medium; - the mixing is due to the development of Kelvin-Helmholtz instabilities induced by the shear velocity between the two liquids behind the pressure wave. The presence of vapour in the volatile liquid explains the experimental propagation velocity and the velocity difference between the two fluids at the passage of the pressure wave. A first model was proposed by Brayer in 1994 to describe the fragmentation and mixing of the two fluids, but its results do not show explosion propagation. We have therefore built a new mixing-fragmentation model based on the atomization phenomenon that develops during the passage of the pressure wave. We have also taken into account the transient nature of the heat transfer between fuel drops and the volatile liquid, and developed a model of transient heat transfer. These two models have been introduced into MC3D, a multi-component thermal-hydraulic code. The calculation results show qualitative and quantitative agreement with experiment and confirm the basic options of the model. (author)
Threat Object Detection using Covariance Matrix Modeling in X-ray Images
Energy Technology Data Exchange (ETDEWEB)
Jeon, Byoun Gil; Kim, Jong Yul; Moon, Myung Kook [KAERI, Daejeon (Korea, Republic of)
2016-05-15
X-ray imaging systems for aviation security are one such application. In airports, all passengers and their property must be inspected and cleared by security machines before boarding an aircraft, so that threat factors are kept off the plane. Those threat factors may be directly connected to terrorist attacks that are hazardous not only to the passengers themselves but also to people in highly populated areas such as major cities or buildings. As system performance grows along with IT technology, information of various types and good quality can be provided for the security check. However, the inspections still depend mainly on human factors: human inspectors must stay proficient as the technology advances, but there is a clear limit to proficiency, because a human being is not a computer. Owing to this limitation, aviation security techniques tend to provide not only rich information but also effective assistance for security inspectors. Many image processing applications have already been developed to provide such assistance. Naturally, the security check procedure should not be handed over entirely to automatic software, because it is not guaranteed that an automatic system will never make a mistake. This paper addresses an application of threat object detection using covariance matrix modeling. The algorithm is implemented in the MATLAB environment, and its performance is evaluated by comparison with other detection algorithms. Because the shape of an object in an image changes with the attitude of the object relative to the imaging machine, the implemented detector is designed to be robust to rotation and scale of an object.
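One common way to build a covariance-based detector (in the spirit of region covariance descriptors; the paper's exact feature set is not specified here, so the features below are assumptions) is to stack per-pixel features of an image region and summarize the region by their covariance matrix:

```python
import numpy as np

def region_covariance(region):
    """Region covariance descriptor of a 2-D grayscale image patch.

    Per-pixel features (an assumed, common choice): intensity, x and y
    coordinates, and absolute gradients along each axis. The 5 x 5
    covariance of these features is compact and fairly robust to
    rotation and scale, which is why such descriptors suit X-ray
    threat-object detection."""
    h, w = region.shape
    ys, xs = np.mgrid[0:h, 0:w]
    gy, gx = np.gradient(region.astype(float))
    feats = np.stack([region.ravel().astype(float),
                      xs.ravel().astype(float),
                      ys.ravel().astype(float),
                      np.abs(gx).ravel(),
                      np.abs(gy).ravel()])
    return np.cov(feats)  # 5 x 5 symmetric descriptor

# Candidate regions would then be compared to template descriptors,
# e.g. with a distance between covariance matrices.
```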
Energy Technology Data Exchange (ETDEWEB)
Brassart, M. [Ecole Nationale Superieure Ingenieurs de Bourges, 18 - Bourges (France); Mounier, C. [CEA Saclay, Dir. de l' Energie Nucleaire DEN, Service d' Etudes des Reacteurs et de Modelisation Avancee, 91 - Gif sur Yvette (France); Dossantos-Uzarralde, P. [CEA Bruyeres le Chatel, 91 (France). Dept. de Physique Theorique et Appliquee
2004-07-01
Nuclear reaction models play an important role in today's nuclear data evaluations. There are, however, difficulties associated with evaluating data uncertainties, both when performing the experimental measurements and when constructing them with nuclear models. In this general context, our interest is particularly directed towards the study of the propagation of uncertainties within nuclear models. In this report we discuss two distinct ways of calculating the nuclear cross section variance-covariance matrices and then show how these can be applied to the nuclear spherical optical model. (authors)
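One standard way to obtain such a variance-covariance matrix is Monte Carlo propagation of parameter uncertainty through the model. The sketch below shows only this generic recipe, with the optical-model evaluation abstracted into a callable; it is not one of the report's two specific methods:

```python
import numpy as np

def mc_covariance(model, p_mean, p_cov, n=2000, seed=0):
    """Monte Carlo propagation of parameter uncertainty.

    model  : callable mapping a parameter vector to an output vector
             (e.g. a cross section evaluated on an energy grid)
    p_mean : mean parameter vector
    p_cov  : parameter covariance matrix

    Sample parameters from N(p_mean, p_cov), evaluate the model for each
    sample, and return the empirical variance-covariance matrix of the
    model outputs."""
    rng = np.random.default_rng(seed)
    samples = rng.multivariate_normal(p_mean, p_cov, size=n)
    outputs = np.array([model(p) for p in samples])
    return np.cov(outputs, rowvar=False)
```

For a linear model the propagated matrix reduces to the familiar sandwich form J Sigma J^T, which makes the sketch easy to sanity-check.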
Switching Principal Component Analysis for Modeling Means and Covariance Changes Over Time
De Roover, Kim; Timmerman, Marieke E.; Van Diest, Ilse; Onghena, Patrick; Ceulemans, Eva
2014-01-01
Many psychological theories predict that cognitions, affect, action tendencies, and other variables change across time in mean level as well as in covariance structure. Often such changes are rather abrupt, because they are caused by sudden events. To capture such changes, one may repeatedly measure
A covariant model of the electromagnetic current for the study of two-body scalar systems
Acero, M A; Sandoval, C E; Sanctis, Maurizio De; Sandoval, Carlos E.
2005-01-01
We present a procedure to derive a covariant electromagnetic current operator for a system made up of two scalar constituents. Using different wave functions, we fitted their parameters to the experimental data for the pion form factor, obtaining a large discrepancy at low momentum transfer. Introducing the vector meson dominance corrective factor, we obtained a better fit to the data.
Migliavacca, M.; Reichstein, M.; Richardson, A.D.; Colombo, R.; Sutton, M.A.; Lasslop, G.; Tomelleri, E.; Wohlfahrt, G.; Carvalhais, N.; Molen, van der M.K.
2011-01-01
In this study we examined ecosystem respiration (RECO) data from 104 sites belonging to FLUXNET, the global network of eddy covariance flux measurements. The goal was to identify the main factors involved in the variability of RECO: temporally and between sites as affected by climate, vegetation str
Schmitt, J Eric; Lenroot, Rhoshel K; Ordaz, Sarah E; Wallace, Gregory L; Lerch, Jason P; Evans, Alan C; Prom, Elizabeth C; Kendler, Kenneth S; Neale, Michael C; Giedd, Jay N
2009-08-01
The role of genetics in driving intracortical relationships is an important question that has rarely been studied in humans. In particular, there are no extant high-resolution imaging studies on genetic covariance. In this article, we describe a novel method that combines classical quantitative genetic methodologies for variance decomposition with recently developed semi-multivariate algorithms for high-resolution measurement of phenotypic covariance. Using these tools, we produced correlational maps of genetic and environmental (i.e. nongenetic) relationships between several regions of interest and the cortical surface in a large pediatric sample of 600 twins, siblings, and singletons. These analyses demonstrated high, fairly uniform, statistically significant genetic correlations between the entire cortex and global mean cortical thickness. In agreement with prior reports on phenotypic covariance using similar methods, we found that mean cortical thickness was most strongly correlated with association cortices. However, the present study suggests that genetics plays a large role in global brain patterning of cortical thickness in this manner. Further, using specific gyri with known high heritabilities as seed regions, we found a consistent pattern of high bilateral genetic correlations between structural homologues, with environmental correlations more restricted to the same hemisphere as the seed region, suggesting that interhemispheric covariance is largely genetically mediated. These findings are consistent with the limited existing knowledge on the genetics of cortical variability as well as our prior multivariate studies on cortical gyri.
Lamont, A.E.; Vermunt, J.K.; Van Horn, M.L.
2016-01-01
Regression mixture models are increasingly used as an exploratory approach to identify heterogeneity in the effects of a predictor on an outcome. In this simulation study, we tested the effects of violating an implicit assumption often made in these models; that is, independent variables in the
Jung, M.; Reichstein, M.; Bondeau, A.
2009-10-01
Global, spatially and temporally explicit estimates of carbon and water fluxes derived from empirical up-scaling eddy covariance measurements would constitute a new and possibly powerful data stream to study the variability of the global terrestrial carbon and water cycle. This paper introduces and validates a machine learning approach dedicated to the upscaling of observations from the current global network of eddy covariance towers (FLUXNET). We present a new model TRee Induction ALgorithm (TRIAL) that performs hierarchical stratification of the data set into units where particular multiple regressions for a target variable hold. We propose an ensemble approach (Evolving tRees with RandOm gRowth, ERROR) where the base learning algorithm is perturbed in order to gain a diverse sequence of different model trees which evolves over time. We evaluate the efficiency of the model tree ensemble (MTE) approach using an artificial data set derived from the Lund-Potsdam-Jena managed Land (LPJmL) biosphere model. We aim at reproducing global monthly gross primary production as simulated by LPJmL from 1998-2005 using only locations and months where high quality FLUXNET data exist for the training of the model trees. The model trees are trained with the LPJmL land cover and meteorological input data, climate data, and the fraction of absorbed photosynthetic active radiation simulated by LPJmL. Given that we know the "true result" in the form of global LPJmL simulations we can effectively study the performance of the MTE upscaling and associated problems of extrapolation capacity. We show that MTE is able to explain 92% of the variability of the global LPJmL GPP simulations. The mean spatial pattern and the seasonal variability of GPP that constitute the largest sources of variance are very well reproduced (96% and 94% of variance explained respectively) while the monthly interannual anomalies which occupy much less variance are less well matched (41% of variance explained
Nonignorable data in IRT models: Polytomous responses and response propensity models with covariates
Glas, C.A.W.; Pimentel, J.L.; Lamers, S.M.A.
2015-01-01
Missing data usually present special problems for statistical analyses, especially when the data are not missing at random, that is, when the ignorability principle defined by Rubin (1976) does not hold. Recently, a substantial number of articles have been published on model-based procedures to hand
Sarmento, J L R; Torres, R A; Sousa, W H; Lôbo, R N B; Albuquerque, L G; Lopes, P S; Santos, N P S; Bignard, A B
2016-06-20
Polynomial functions of different orders were used to model random effects associated with weight of Santa Ines sheep from birth to 196 days. Fixed effects included in the models were contemporary groups, age of ewe at lambing, and fourth-order Legendre polynomials for age to represent the average growth curve. In the random part, functions of different orders were included to model variances associated with direct additive and maternal genetic effects and with permanent environmental effects of the animal and mother. Residual variance was fitted by a sixth-order ordinary polynomial for age. The higher the order of the functions, the better the model fit the data. According to the Akaike information criterion and likelihood ratio test, a continuous function of order five, five, seven, and three for direct additive genetic, maternal genetic, animal permanent environmental, and maternal permanent environmental effects (k = 5573), respectively, was sufficient to model changes in (co)variances with age. However, a more parsimonious model of order three, three, five, and three (k = 3353) was suggested based on Schwarz's Bayesian information criterion for the same effects. Since it was a more flexible model, model k = 5573 provided inconsistent genetic parameter estimates when compared to the biologically expected result. Predicted breeding values obtained with models k = 3353 and k = 5573 differed, especially at young ages. Model k = 3353 adequately fit changes in variances and covariances with time, and may be used to describe changes in variances with age in the Santa Ines sheep studied.
Earth Observing System Covariance Realism
Zaidi, Waqar H.; Hejduk, Matthew D.
2016-01-01
The purpose of covariance realism is to properly size a primary object's covariance in order to add validity to the calculation of the probability of collision. The covariance realism technique in this paper consists of three parts: collection/calculation of definitive state estimates through orbit determination, calculation of covariance realism test statistics at each covariance propagation point, and proper assessment of those test statistics. An empirical cumulative distribution function (ECDF) Goodness-of-Fit (GOF) method is employed to determine if a covariance is properly sized by comparing the empirical distribution of Mahalanobis distance calculations to the hypothesized parent 3-DoF chi-squared distribution. To realistically size a covariance for collision probability calculations, this study uses a state noise compensation algorithm that adds process noise to the definitive epoch covariance to account for uncertainty in the force model. Process noise is added until the GOF tests pass a group significance level threshold. The results of this study indicate that when outliers attributed to persistently high or extreme levels of solar activity are removed, the aforementioned covariance realism compensation method produces a tuned covariance with up to 80 to 90% of the covariance propagation timespan passing (against a 60% minimum passing threshold) the GOF tests, a quite satisfactory and useful result.
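A minimal sketch of the realism check this abstract describes, not the study's code: it assumes a diagonal position covariance (so no matrix inversion is needed) and, instead of a full ECDF goodness-of-fit test, simply checks what fraction of squared Mahalanobis distances fall below the 95% quantile of the 3-DoF chi-squared distribution (about 7.815).

```python
# Hedged sketch: diagonal covariance assumed; q95 is the known chi-squared
# (3 degrees of freedom) 95% quantile, used in place of a full ECDF GOF test.

def mahalanobis_sq(residual, variances):
    """Squared Mahalanobis distance for a diagonal covariance."""
    return sum(r * r / v for r, v in zip(residual, variances))

def fraction_consistent(residuals, variances, q95=7.815):
    """Fraction of propagated states whose position residual lies inside
    the 95% chi-squared bound -- a crude covariance-realism score."""
    hits = sum(1 for r in residuals if mahalanobis_sq(r, variances) <= q95)
    return hits / len(residuals)

# Toy example: position residuals (km) against 1.0 km^2 variances per axis.
residuals = [(0.5, -0.2, 0.1), (1.0, 1.0, 1.0), (3.0, 0.0, 0.0), (0.2, 0.3, -0.4)]
print(fraction_consistent(residuals, (1.0, 1.0, 1.0)))  # → 0.75
```

If the fraction falls well below the nominal 95%, the covariance is undersized and process noise would be inflated until the test passes, as the abstract outlines.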
Boyarinov, V. F.; Grol, A. V.; Fomichenko, P. A.; Ternovykh, M. Yu
2017-01-01
This work is aimed at improvement of HTGR neutron physics design calculations by application of uncertainty analysis with the use of cross-section covariance information. Methodology and codes for preparation of multigroup libraries of covariance information for individual isotopes from the basic 44-group library of SCALE-6 code system were developed. A 69-group library of covariance information in a special format for main isotopes and elements typical for high temperature gas cooled reactors (HTGR) was generated. This library can be used for estimation of uncertainties, associated with nuclear data, in analysis of HTGR neutron physics with design codes. As an example, calculations of one-group cross-section uncertainties for fission and capture reactions for main isotopes of the MHTGR-350 benchmark, as well as uncertainties of the multiplication factor (k∞) for the MHTGR-350 fuel compact cell model and fuel block model were performed. These uncertainties were estimated by the developed technology with the use of WIMS-D code and modules of SCALE-6 code system, namely, by TSUNAMI, KENO-VI and SAMS. Eight most important reactions on isotopes for MHTGR-350 benchmark were identified, namely: 10B(capt), 238U(n,γ), ν5, 235U(n,γ), 238U(el), natC(el), 235U(fiss)-235U(n,γ), 235U(fiss).
Semiparametric approach for non-monotone missing covariates in a parametric regression model
Sinha, Samiran
2014-02-26
Missing covariate data often arise in biomedical studies, and analysis of such data that ignores subjects with incomplete information may lead to inefficient and possibly biased estimates. A great deal of attention has been paid to handling a single missing covariate or a monotone pattern of missing data when the missingness mechanism is missing at random. In this article, we propose a semiparametric method for handling non-monotone patterns of missing data. The proposed method relies on the assumption that the missingness mechanism of a variable does not depend on the missing variable itself but may depend on the other missing variables. This mechanism is somewhat less general than the completely non-ignorable mechanism but is sometimes more flexible than the missing at random mechanism where the missingness mechanism is allowed to depend only on the completely observed variables. The proposed approach is robust to misspecification of the distribution of the missing covariates, and the proposed mechanism helps to nullify (or reduce) the problems due to non-identifiability that result from the non-ignorable missingness mechanism. The asymptotic properties of the proposed estimator are derived. Finite sample performance is assessed through simulation studies. Finally, for the purpose of illustration we analyze an endometrial cancer dataset and a hip fracture dataset.
Semiparametric approach for non-monotone missing covariates in a parametric regression model.
Sinha, Samiran; Saha, Krishna K; Wang, Suojin
2014-06-01
Missing covariate data often arise in biomedical studies, and analysis of such data that ignores subjects with incomplete information may lead to inefficient and possibly biased estimates. A great deal of attention has been paid to handling a single missing covariate or a monotone pattern of missing data when the missingness mechanism is missing at random. In this article, we propose a semiparametric method for handling non-monotone patterns of missing data. The proposed method relies on the assumption that the missingness mechanism of a variable does not depend on the missing variable itself but may depend on the other missing variables. This mechanism is somewhat less general than the completely non-ignorable mechanism but is sometimes more flexible than the missing at random mechanism where the missingness mechanism is allowed to depend only on the completely observed variables. The proposed approach is robust to misspecification of the distribution of the missing covariates, and the proposed mechanism helps to nullify (or reduce) the problems due to non-identifiability that result from the non-ignorable missingness mechanism. The asymptotic properties of the proposed estimator are derived. Finite sample performance is assessed through simulation studies. Finally, for the purpose of illustration we analyze an endometrial cancer dataset and a hip fracture dataset.
Keppenne, Christian L.; Rienecker, Michele M.; Kovach, Robin M.; Vernieres, Guillaume; Koster, Randal D. (Editor)
2014-01-01
An attractive property of ensemble data assimilation methods is that they provide flow dependent background error covariance estimates which can be used to update fields of observed variables as well as fields of unobserved model variables. Two methods to estimate background error covariances are introduced which share the above property with ensemble data assimilation methods but do not involve the integration of multiple model trajectories. Instead, all the necessary covariance information is obtained from a single model integration. The Space Adaptive Forecast error Estimation (SAFE) algorithm estimates error covariances from the spatial distribution of model variables within a single state vector. The Flow Adaptive error Statistics from a Time series (FAST) method constructs an ensemble sampled from a moving window along a model trajectory. SAFE and FAST are applied to the assimilation of Argo temperature profiles into version 4.1 of the Modular Ocean Model (MOM4.1) coupled to the GEOS-5 atmospheric model and to the CICE sea ice model. The results are validated against unassimilated Argo salinity data. They show that SAFE and FAST are competitive with the ensemble optimal interpolation (EnOI) used by the Global Modeling and Assimilation Office (GMAO) to produce its ocean analysis. Because of their reduced cost, SAFE and FAST hold promise for high-resolution data assimilation applications.
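The FAST idea above, building an ensemble by sampling a moving window along a single model trajectory, can be sketched with a plain sample-covariance over a trajectory window. Variable names and the toy series below are illustrative, not from the paper.

```python
# Hedged sketch of the FAST moving-window covariance estimate: treat a window
# of consecutive trajectory states as an ensemble and compute a sample
# covariance between two model variables from it.

def window_covariance(series_a, series_b, start, width):
    """Sample covariance of two model variables over one trajectory window."""
    a = series_a[start:start + width]
    b = series_b[start:start + width]
    ma = sum(a) / width
    mb = sum(b) / width
    return sum((x - ma) * (y - mb) for x, y in zip(a, b)) / (width - 1)

# Toy trajectory: temperature and salinity anomalies over five model steps.
temp = [0.0, 1.0, 2.0, 3.0, 4.0]
salt = [0.0, 2.0, 4.0, 6.0, 8.0]
print(window_covariance(temp, salt, 0, 5))  # → 5.0
```

Such cross-covariances are what let an assimilation scheme update an unobserved variable (here salinity) from an observed one (temperature), which is the property the abstract highlights.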
Liu, Yang; Magnus, Brooke E; Thissen, David
2016-06-01
Differential item functioning (DIF), referring to between-group variation in item characteristics above and beyond the group-level disparity in the latent variable of interest, has long been regarded as an important item-level diagnostic. The presence of DIF impairs the fit of the single-group item response model being used, and calls for either model modification or item deletion in practice, depending on the mode of analysis. Methods for testing DIF with continuous covariates, rather than categorical grouping variables, have been developed; however, they are restrictive in parametric forms, and thus are not sufficiently flexible to describe complex interaction among latent variables and covariates. In the current study, we formulate the probability of endorsing each test item as a general bivariate function of a unidimensional latent trait and a single covariate, which is then approximated by a two-dimensional smoothing spline. The accuracy and precision of the proposed procedure is evaluated via Monte Carlo simulations. If anchor items are available, we proposed an extended model that simultaneously estimates item characteristic functions (ICFs) for anchor items, ICFs conditional on the covariate for non-anchor items, and the latent variable density conditional on the covariate-all using regression splines. A permutation DIF test is developed, and its performance is compared to the conventional parametric approach in a simulation study. We also illustrate the proposed semiparametric DIF testing procedure with an empirical example.
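The permutation-test logic behind the DIF test mentioned above can be illustrated generically. The sketch below is a two-group mean-difference permutation test, not the paper's spline-based DIF statistic; with small samples it enumerates all relabelings exactly.

```python
# Hedged sketch: exact permutation p-value for |mean(a) - mean(b)|.
# The test statistic is a stand-in for the paper's DIF statistic.
import itertools

def perm_pvalue(group_a, group_b):
    """Exact permutation p-value (small samples only: enumerates all splits)."""
    pooled = group_a + group_b
    n_a = len(group_a)
    observed = abs(sum(group_a) / n_a - sum(group_b) / len(group_b))
    count = total = 0
    for idx in itertools.combinations(range(len(pooled)), n_a):
        a = [pooled[i] for i in idx]
        b = [pooled[i] for i in range(len(pooled)) if i not in idx]
        stat = abs(sum(a) / len(a) - sum(b) / len(b))
        if stat >= observed - 1e-12:  # count splits at least as extreme
            count += 1
        total += 1
    return count / total

print(perm_pvalue([1.0, 2.0, 3.0], [4.0, 5.0, 6.0]))  # → 0.1
```

In practice permutation DIF tests sample random permutations rather than enumerating them; the exact version above only shows the counting principle.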
Directory of Open Access Journals (Sweden)
M. Jung
2009-05-01
Global, spatially and temporally explicit estimates of carbon and water fluxes derived from empirical up-scaling eddy covariance measurements would constitute a new and possibly powerful data stream to study the variability of the global terrestrial carbon and water cycle. This paper introduces and validates a machine learning approach dedicated to the upscaling of observations from the current global network of eddy covariance towers (FLUXNET). We present a new model TRee Induction ALgorithm (TRIAL) that performs hierarchical stratification of the data set into units where particular multiple regressions for a target variable hold. We propose an ensemble approach (Evolving tRees with RandOm gRowth, ERROR) where the base learning algorithm is perturbed in order to gain a diverse sequence of different model trees which evolves over time.
We evaluate the efficiency of the model tree ensemble approach using an artificial data set derived from the Lund-Potsdam-Jena managed Land (LPJmL) biosphere model. We aim at reproducing global monthly gross primary production as simulated by LPJmL from 1998–2005 using only locations and months where high quality FLUXNET data exist for the training of the model trees. The model trees are trained with the LPJmL land cover and meteorological input data, climate data, and the fraction of absorbed photosynthetic active radiation simulated by LPJmL. Given that we know the "true result" in the form of global LPJmL simulations we can effectively study the performance of the model tree ensemble upscaling and associated problems of extrapolation capacity.
We show that the model tree ensemble is able to explain 92% of the variability of the global LPJmL GPP simulations. The mean spatial pattern and the seasonal variability of GPP that constitute the largest sources of variance are very well reproduced (96% and 94% of variance explained, respectively), while the monthly interannual anomalies which occupy
An evolutionary-network model reveals stratified interactions in the V3 loop of the HIV-1 envelope.
Directory of Open Access Journals (Sweden)
Art F Y Poon
2007-11-01
The third variable loop (V3) of the human immunodeficiency virus type 1 (HIV-1) envelope is a principal determinant of antibody neutralization and progression to AIDS. Although it is undoubtedly an important target for vaccine research, extensive genetic variation in V3 remains an obstacle to the development of an effective vaccine. Comparative methods that exploit the abundance of sequence data can detect interactions between residues of rapidly evolving proteins such as the HIV-1 envelope, revealing biological constraints on their variability. However, previous studies have relied implicitly on two biologically unrealistic assumptions: (1) that founder effects in the evolutionary history of the sequences can be ignored, and (2) that statistical associations between residues occur exclusively in pairs. We show that comparative methods that neglect the evolutionary history of extant sequences are susceptible to a high rate of false positives (20%-40%). Therefore, we propose a new method to detect interactions that relaxes both of these assumptions. First, we reconstruct the evolutionary history of extant sequences by maximum likelihood, shifting focus from extant sequence variation to the underlying substitution events. Second, we analyze the joint distribution of substitution events among positions in the sequence as a Bayesian graphical model, in which each branch in the phylogeny is a unit of observation. We perform extensive validation of our models using both simulations and a control case of known interactions in HIV-1 protease, and apply this method to detect interactions within V3 from a sample of 1,154 HIV-1 envelope sequences. Our method greatly reduces the number of false positives due to founder effects, while capturing several higher-order interactions among V3 residues. By mapping these interactions to a structural model of the V3 loop, we find that the loop is stratified into distinct evolutionary clusters. We extend our model to
Reboussin, Beth A; Ip, Edward H; Wolfson, Mark
2008-10-01
Under-age drinking is a long-standing public health problem in the USA and the identification of underage drinkers suffering alcohol-related problems has been difficult using diagnostic criteria that were developed in adult populations. For this reason, it is important to characterize patterns of drinking in adolescents that are associated with alcohol-related problems. Latent class analysis is a statistical technique for explaining heterogeneity in individual response patterns in terms of a smaller number of classes. However, the latent class analysis assumption of local independence may not be appropriate when examining behavioural profiles and could have implications for statistical inference. In addition, if covariates are included in the model, non-differential measurement is also assumed. We propose a flexible set of models for local dependence and differential measurement that use easily interpretable odds ratio parameterizations while simultaneously fitting a marginal regression model for the latent class prevalences. Estimation is based on solving a set of second-order estimating equations. This approach requires only specification of the first two moments and allows for the choice of simple 'working' covariance structures. The method is illustrated by using data from a large-scale survey of under-age drinking. This new approach indicates the effectiveness of introducing local dependence and differential measurement into latent class models for selecting substantively interpretable models over more complex models that are deemed empirically superior.
DEFF Research Database (Denmark)
Gillet, N.; Jault, D.; Finlay, Chris
2013-01-01
, which force the expansions in the spatial and time domains to converge but also hinder the calculation of reliable second-order statistics. To tackle this issue, we propose a stochastic approach that integrates, through time covariance functions, some prior information on the time evolution...... of the geomagnetic field. We consider the time series of spherical harmonic coefficients as realizations of a continuous and differentiable stochastic process. Our specific choice of process, such that it is not twice differentiable, mainly relies on two properties of magnetic observatory records (time spectra...
DEFF Research Database (Denmark)
Gillet, Nicolas; Jault, D.; Finlay, Chris
2013-01-01
, which force the expansions in the spatial and time domains to converge, but also hinders the calculation of reliable second order statistics. To tackle this issue, we propose a stochastic approach that integrates, through time covariance functions, some prior information on the time evolution...... of the geomagnetic field. We consider the time series of spherical harmonic coefficients as realizations of a continuous and differentiable stochastic process. Our specific choice of process, such that it is not twice differentiable, mainly relies on two properties of magnetic observatory records (time spectra...
Aimran, Ahmad Nazim; Ahmad, Sabri; Afthanorhan, Asyraf; Awang, Zainudin
2017-05-01
Structural equation modeling (SEM) is the second generation statistical analysis technique developed for analyzing the inter-relationships among multiple variables in a model. Previous studies have shown that there seemed to be at least an implicit agreement about the factors that should drive the choice between covariance-based structural equation modeling (CB-SEM) and partial least square path modeling (PLS-PM). PLS-PM appears to be the preferred method by previous scholars because of its less stringent assumption and the need to avoid the perceived difficulties in CB-SEM. Along with this issue has been the increasing debate among researchers on the use of CB-SEM and PLS-PM in studies. The present study intends to assess the performance of CB-SEM and PLS-PM as a confirmatory study in which the findings will contribute to the body of knowledge of SEM. Maximum likelihood (ML) was chosen as the estimator for CB-SEM and was expected to be more powerful than PLS-PM. Based on the balanced experimental design, the multivariate normal data with specified population parameter and sample sizes were generated using Pro-Active Monte Carlo simulation, and the data were analyzed using AMOS for CB-SEM and SmartPLS for PLS-PM. Comparative Bias Index (CBI), construct relationship, average variance extracted (AVE), composite reliability (CR), and Fornell-Larcker criterion were used to study the consequence of each estimator. The findings conclude that CB-SEM performed notably better than PLS-PM in estimation for large sample size (100 and above), particularly in terms of estimation accuracy and consistency.
Bayes linear covariance matrix adjustment
Wilkinson, Darren J
1995-01-01
In this thesis, a Bayes linear methodology for the adjustment of covariance matrices is presented and discussed. A geometric framework for quantifying uncertainties about covariance matrices is set up, and an inner-product for spaces of random matrices is motivated and constructed. The inner-product on this space captures aspects of our beliefs about the relationship between covariance matrices of interest to us, providing a structure rich enough for us to adjust beliefs about unknown matrices in the light of data such as sample covariance matrices, exploiting second-order exchangeability and related specifications to obtain representations allowing analysis. Adjustment is associated with orthogonal projection, and illustrated with examples of adjustments for some common problems. The problem of adjusting the covariance matrices underlying exchangeable random vectors is tackled and discussed. Learning about the covariance matrices associated with multivariate time series dynamic linear models is shown to be a...
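The thesis above works with matrices and inner-product spaces; the one-dimensional sketch below only illustrates the basic Bayes linear update formulas, E_D(B) = E(B) + Cov(B, D) Var(D)^{-1} (d - E(D)) and the corresponding adjusted variance, under assumed prior moments.

```python
# Hedged scalar sketch of a Bayes linear adjustment (toy moments, not from
# the thesis; the real methodology adjusts beliefs about covariance matrices).

def adjusted_expectation(e_b, e_d, cov_bd, var_d, d_observed):
    """Adjusted expectation of B given an observation of D."""
    return e_b + cov_bd / var_d * (d_observed - e_d)

def adjusted_variance(var_b, cov_bd, var_d):
    """Adjusted (resolved) variance of B after observing D."""
    return var_b - cov_bd * cov_bd / var_d

# Assumed prior moments: E(B)=1, E(D)=2, Cov(B,D)=0.5, Var(D)=1, Var(B)=2,
# and an observation d = 3.0.
print(adjusted_expectation(1.0, 2.0, 0.5, 1.0, 3.0))  # → 1.5
print(adjusted_variance(2.0, 0.5, 1.0))               # → 1.75
```

The adjustment requires only first- and second-order prior specifications, which is the appeal of the Bayes linear approach relative to fully distributional Bayes.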
Dreano, Denis
2015-04-27
A statistical model is proposed to filter satellite-derived chlorophyll concentration from the Red Sea, and to predict future chlorophyll concentrations. The seasonal trend is first estimated after filling missing chlorophyll data using an Empirical Orthogonal Function (EOF)-based algorithm (Data Interpolation EOF). The anomalies are then modeled as a stationary Gaussian process. A method proposed by Gneiting (2002) is used to construct positive-definite space-time covariance models for this process. After choosing an appropriate statistical model and identifying its parameters, Kriging is applied in the space-time domain to make a one step ahead prediction of the anomalies. The latter serves as the prediction model of a reduced-order Kalman filter, which is applied to assimilate and predict future chlorophyll concentrations. The proposed method decreases the root mean square (RMS) prediction error by about 11% compared with the seasonal average.
Yuan, W.; Liu, S.; Zhou, G.; Tieszen, L.L.; Baldocchi, D.; Bernhofer, C.; Gholz, H.; Goldstein, Allen H.; Goulden, M.L.; Hollinger, D.Y.; Hu, Y.; Law, B.E.; Stoy, Paul C.; Vesala, T.; Wofsy, S.C.
2007-01-01
The quantitative simulation of gross primary production (GPP) at various spatial and temporal scales has been a major challenge in quantifying the global carbon cycle. We developed a light use efficiency (LUE) daily GPP model from eddy covariance (EC) measurements. The model, called EC-LUE, is driven by only four variables: normalized difference vegetation index (NDVI), photosynthetically active radiation (PAR), air temperature, and the Bowen ratio of sensible to latent heat flux (used to calculate moisture stress). The EC-LUE model relies on two assumptions: First, that the fraction of absorbed PAR (fPAR) is a linear function of NDVI; Second, that the realized light use efficiency, calculated from a biome-independent invariant potential LUE, is controlled by air temperature or soil moisture, whichever is most limiting. The EC-LUE model was calibrated and validated using 24,349 daily GPP estimates derived from 28 eddy covariance flux towers from the AmeriFlux and EuroFlux networks, covering a variety of forests, grasslands and savannas. The model explained 85% and 77% of the observed variations of daily GPP for all the calibration and validation sites, respectively. A comparison with GPP calculated from the Moderate Resolution Imaging Spectroradiometer (MODIS) indicated that the EC-LUE model predicted GPP that better matched tower data across these sites. The realized LUE was predominantly controlled by moisture conditions throughout the growing season, and controlled by temperature only at the beginning and end of the growing season. The EC-LUE model is an alternative approach that makes it possible to map daily GPP over large areas because (1) the potential LUE is invariant across various land cover types and (2) all driving forces of the model can be derived from remote sensing data or existing climate observation networks.
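The EC-LUE structure described above, GPP as the product of PAR, fPAR (linear in NDVI), a biome-independent potential LUE, and the most limiting of a temperature and a moisture scalar, can be sketched directly. The coefficients and scalar values below are illustrative placeholders, not the paper's calibrated values.

```python
# Hedged sketch of the EC-LUE model form: GPP = PAR x fPAR x eps0 x min(Ts, Ws).
# Coefficients are assumptions for illustration, not calibrated values.

def fpar(ndvi, slope=1.24, intercept=-0.168):
    """Fraction of absorbed PAR, assumed linear in NDVI (coefficients assumed)."""
    return max(0.0, min(1.0, slope * ndvi + intercept))

def ec_lue_gpp(ndvi, par, t_scalar, w_scalar, potential_lue=2.14):
    """Daily GPP: PAR x fPAR x potential LUE x the most limiting stress scalar."""
    return par * fpar(ndvi) * potential_lue * min(t_scalar, w_scalar)

# Example day: NDVI 0.7, PAR 10 MJ m-2, mild temperature stress (0.9) but
# strong moisture limitation (0.6) -- moisture controls the realized LUE.
print(round(ec_lue_gpp(0.7, 10.0, 0.9, 0.6), 3))
```

The `min(t_scalar, w_scalar)` term encodes the abstract's claim that realized LUE is controlled by temperature or soil moisture, "whichever is most limiting."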
A Fixpoint Semantics for Stratified Databases
Institute of Scientific and Technical Information of China (English)
沈一栋
1993-01-01
Przymusinski extended the notion of stratified logic programs, developed by Apt, Blair and Walker, and by van Gelder, to stratified databases that allow both negative premises and disjunctive consequents. However, he did not provide a fixpoint theory for such a class of databases. On the other hand, although a fixpoint semantics has been developed by Minker and Rajasekar for non-Horn logic programs, it is tantamount to traditional minimal model semantics which is not sufficient to capture the intended meaning of negation in the premises of clauses in stratified databases. In this paper, a fixpoint approach to stratified databases is developed, which corresponds with the perfect model semantics. Moreover, algorithms are proposed for computing the set of perfect models of a stratified database.
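The stratum-by-stratum fixpoint computation underlying this line of work can be shown on a toy normal (non-disjunctive) program: each stratum's rules are applied to a fixpoint, and negation may only refer to atoms whose truth was settled in lower strata. This is a simplified illustration, not the paper's algorithm for disjunctive consequents.

```python
# Hedged sketch: fixpoint evaluation of one stratum of a stratified program.
# A rule is (head, positive_body, negative_body); negative atoms belong to
# lower strata, so their truth is already fixed in `known`.

def fixpoint(stratum_rules, known):
    """Apply the rules of one stratum until no new atoms are derivable."""
    derived = set(known)
    changed = True
    while changed:
        changed = False
        for head, pos, neg in stratum_rules:
            if all(p in derived for p in pos) and all(n not in derived for n in neg):
                if head not in derived:
                    derived.add(head)
                    changed = True
    return derived

# Stratum 1 defines reachability; stratum 2 negates over stratum 1.
s1 = [("reach(a)", [], []), ("reach(b)", ["reach(a)"], [])]
s2 = [("unreached(c)", [], ["reach(c)"])]
model = fixpoint(s1, set())
model = fixpoint(s2, model)
print(sorted(model))  # → ['reach(a)', 'reach(b)', 'unreached(c)']
```

Evaluating strata in order is what makes the result the (unique, for normal programs) perfect model rather than an arbitrary minimal model.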
Generalized Linear Covariance Analysis
Carpenter, James R.; Markley, F. Landis
2014-01-01
This talk presents a comprehensive approach to filter modeling for generalized covariance analysis of both batch least-squares and sequential estimators. We review and extend in two directions the results of prior work that allowed for partitioning of the state space into "solve-for" and "consider" parameters, accounted for differences between the formal values and the true values of the measurement noise, process noise, and a priori solve-for and consider covariances, and explicitly partitioned the errors into subspaces containing only the influence of the measurement noise, process noise, and solve-for and consider covariances. In this work, we explicitly add sensitivity analysis to this prior work, and relax an implicit assumption that the batch estimator's epoch time occurs prior to the definitive span. We also apply the method to an integrated orbit and attitude problem, in which gyro and accelerometer errors, though not estimated, influence the orbit determination performance. We illustrate our results using two graphical presentations, which we call the "variance sandpile" and the "sensitivity mosaic," and we compare the linear covariance results to confidence intervals associated with ensemble statistics from a Monte Carlo analysis.
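A minimal numerical companion to the covariance-analysis setting above: one linear propagation step of a 2x2 state covariance, P' = F P F^T + Q, where process noise Q inflates the predicted uncertainty. The matrices are toy values, not from the talk.

```python
# Hedged sketch of linear covariance propagation for a 2x2 state
# (position, velocity) under a constant-velocity transition.

def matmul(a, b):
    return [[sum(a[i][k] * b[k][j] for k in range(len(b)))
             for j in range(len(b[0]))] for i in range(len(a))]

def transpose(a):
    return [list(row) for row in zip(*a)]

def propagate(F, P, Q):
    """One covariance time-update: P' = F P F^T + Q (2x2 case)."""
    FPFt = matmul(matmul(F, P), transpose(F))
    return [[FPFt[i][j] + Q[i][j] for j in range(2)] for i in range(2)]

F = [[1.0, 1.0], [0.0, 1.0]]   # constant-velocity transition over one step
P = [[1.0, 0.0], [0.0, 1.0]]   # a priori covariance
Q = [[0.1, 0.0], [0.0, 0.1]]   # process noise
print(propagate(F, P, Q))      # → [[2.1, 1.0], [1.0, 1.1]]
```

Repeating this step over a span, with and without Q, is the simplest way to see how formal and "true" noise values drive the variance budget the talk partitions.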
Khojandi, Anahita; Shylo, Oleg; Mannini, Lucia; Kopell, Brian H; Ramdhani, Ritesh A
2017-07-01
High frequency stimulation (HFS) of the subthalamic nucleus (STN) is a well-established therapy for Parkinson's disease (PD), particularly the cardinal motor symptoms and levodopa induced motor complications. Recent studies have suggested the possible role of 60 Hz stimulation in STN-deep brain stimulation (DBS) for patients with gait disorder. The objective of this study was to develop a computational model, which stratifies patients a priori based on symptomatology into different frequency settings (i.e., high frequency or 60 Hz). We retrospectively analyzed preoperative MDS-Unified Parkinson's Disease Rating Scale III scores (32 indicators) collected from 20 PD patients implanted with STN-DBS at Mount Sinai Medical Center on either 60 Hz stimulation (ten patients) or HFS (130-185 Hz) (ten patients) for an average of 12 months. Predictive models using the Random Forest classification algorithm were built to associate patient/disease characteristics at surgery with the stimulation frequency. These models were evaluated objectively using a leave-one-out cross-validation approach. The computational models stratified patients into 60 Hz or HFS (130-185 Hz) groups with 95% accuracy. The best models relied on two or three predictors out of the 32 analyzed for classification. Across all predictors, gait and rest tremor of the right hand were consistently the most important. Computational models were developed using preoperative clinical indicators in PD patients treated with STN-DBS. These models were able to accurately stratify PD patients into 60 Hz stimulation or HFS (130-185 Hz) groups a priori, offering a unique potential to enhance the utilization of this therapy based on clinical subtypes. © 2017 International Neuromodulation Society.
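The leave-one-out evaluation protocol used above can be sketched with a simple one-nearest-neighbour classifier standing in for the study's Random Forest. The predictors and labels below are toy values, not patient data.

```python
# Hedged sketch of leave-one-out cross-validation: each sample is held out
# once, the classifier is fit on the rest, and accuracy is averaged.

def predict_1nn(train, label_of, query):
    """Label of the nearest training sample (squared Euclidean distance)."""
    nearest = min(train, key=lambda x: sum((a - b) ** 2 for a, b in zip(x, query)))
    return label_of[nearest]

def loo_accuracy(samples, labels):
    correct = 0
    for i, query in enumerate(samples):
        train = samples[:i] + samples[i + 1:]
        label_of = {s: l for s, l in zip(samples, labels) if s != query}
        if predict_1nn(train, label_of, query) == labels[i]:
            correct += 1
    return correct / len(samples)

# Toy cohort: (gait score, rest tremor score) -> stimulation group.
samples = [(3.0, 0.0), (2.5, 0.5), (0.5, 3.0), (0.0, 2.5)]
labels = ["60Hz", "60Hz", "HFS", "HFS"]
print(loo_accuracy(samples, labels))  # → 1.0
```

With only 20 patients, leave-one-out is a natural choice because every fitted model sees all but one observation, which is exactly the protocol the abstract describes.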
Camargos, Vitor Passos; César, Cibele Comini; Caiaffa, Waleska Teixeira; Xavier, Cesar Coelho; Proietti, Fernando Augusto
2011-12-01
Researchers in the health field often deal with the problem of incomplete databases. Complete Case Analysis (CCA), which restricts the analysis to subjects with complete data, reduces the sample size and may result in biased estimates. Based on statistical grounds, Multiple Imputation (MI) uses all collected data and is recommended as an alternative to CCA. Data from the study Saúde em Beagá, attended by 4,048 adults from two of nine health districts in the city of Belo Horizonte, Minas Gerais State, Brazil, in 2008-2009, were used to evaluate CCA and different MI approaches in the context of logistic models with incomplete covariate data. Peculiarities in some variables in this study allowed analyzing a situation in which the missing covariate data are recovered and thus the results before and after recovery are compared. Based on the analysis, even the more simplistic MI approach performed better than CCA, since it was closer to the post-recovery results.
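The contrast above between complete-case analysis and imputation can be shown on a toy dataset. Real multiple imputation draws several completed datasets and pools the estimates; the single mean imputation below only illustrates the sample-size cost of CCA.

```python
# Hedged sketch: complete-case analysis discards records with a missing
# covariate, while imputation (here a single mean fill) keeps all of them.

data = [(25, 1), (30, None), (35, 1), (40, None), (45, 0), (50, 0)]  # (age, smoker?)

# Complete-case analysis: drop records where the covariate is missing.
complete = [(age, s) for age, s in data if s is not None]
cca_mean_age = sum(age for age, _ in complete) / len(complete)

# Single mean imputation keeps every record (toy stand-in for MI).
observed = [s for _, s in data if s is not None]
fill = sum(observed) / len(observed)
imputed = [(age, s if s is not None else fill) for age, s in data]

print(len(complete), len(imputed))  # CCA discards a third of the sample
print(cca_mean_age)                 # → 38.75
```

If missingness is related to age, the complete-case mean above is also biased, which is the argument the abstract makes for preferring multiple imputation.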
Gong, Maozhen
Selecting an appropriate prior distribution is a fundamental issue in Bayesian Statistics. In this dissertation, under the framework provided by Berger and Bernardo, I derive the reference priors for several models which include: Analysis of Variance (ANOVA)/Analysis of Covariance (ANCOVA) models with a categorical variable under common ordering constraints, the conditionally autoregressive (CAR) models and the simultaneous autoregressive (SAR) models with a spatial autoregression parameter rho considered. The performances of reference priors for ANOVA/ANCOVA models are evaluated by simulation studies with comparisons to Jeffreys' prior and Least Squares Estimation (LSE). The priors are then illustrated in a Bayesian model of the "Risk of Type 2 Diabetes in New Mexico" data, where the relationship between the type 2 diabetes risk (through Hemoglobin A1c) and different smoking levels is investigated. In both simulation studies and real data set modeling, the reference priors that incorporate internal order information show good performances and can be used as default priors. The reference priors for the CAR and SAR models are also illustrated in the "1999 SAT State Average Verbal Scores" data with a comparison to a Uniform prior distribution. Due to the complexity of the reference priors for both CAR and SAR models, only a portion (12 states in the Midwest) of the original data set is considered. The reference priors can give a different marginal posterior distribution compared to a Uniform prior, which provides an alternative for prior specifications for areal data in Spatial statistics.
Zilitinkevich, S S; Kleeorin, N; Rogachevskii, I; Esau, I
2011-01-01
In this paper we advance the physical background of the EFB turbulence closure and present its comprehensive description. It is based on four budget equations for the second moments: turbulent kinetic and potential energies (TKE and TPE) and vertical turbulent fluxes of momentum and buoyancy; a new relaxation equation for the turbulent dissipation time-scale; and an advanced concept of the inter-component exchange of TKE. The EFB closure is designed for stratified, rotating geophysical flows from neutral to very stable. In accordance with modern experimental evidence, it allows turbulence to be maintained by the velocity shear at any gradient Richardson number Ri, and distinguishes between two principally different regimes: "strong turbulence" at Ri ≪ 1, typical of boundary-layer flows, and "weak turbulence" at Ri ≫ 1, typical of the free atmosphere or deep ocean, where Pr_T asymptotically increases linearly with increasing Ri, implying strong suppression of the heat transfer compared to momentum transfer. For use in different applications, the EFB turbulence closure is formulated a...
The model-size effect on traditional and modified tests of covariance structures
Boomsma, Anne; Reinecke, Sven; Herzog, W.
2007-01-01
According to Kenny and McCoach (2003), chi-square tests of structural equation models produce inflated Type I error rates when the degrees of freedom increase. So far, the amount of this bias in large models has not been quantified. In a Monte Carlo study of confirmatory factor models with a range o
Schuurman, N.K.; Grasman, R.P.P.P.; Hamaker, E.L.
2016-01-01
Multilevel autoregressive models are especially suited for modeling between-person differences in within-person processes. Fitting these models with Bayesian techniques requires the specification of prior distributions for all parameters. Often it is desirable to specify prior distributions that
Constrained Maximum Likelihood Estimation for Two-Level Mean and Covariance Structure Models
Bentler, Peter M.; Liang, Jiajuan; Tang, Man-Lai; Yuan, Ke-Hai
2011-01-01
Maximum likelihood is commonly used for the estimation of model parameters in the analysis of two-level structural equation models. Constraints on model parameters could be encountered in some situations such as equal factor loadings for different factors. Linear constraints are the most common ones and they are relatively easy to handle in…
Wang, Chenguang; Daniels, Michael J
2011-09-01
Pattern mixture modeling is a popular approach for handling incomplete longitudinal data. Such models are not identifiable by construction. Identifying restrictions is one approach to mixture model identification (Little, 1995, Journal of the American Statistical Association 90, 1112-1121; Little and Wang, 1996, Biometrics 52, 98-111; Thijs et al., 2002, Biostatistics 3, 245-265; Kenward, Molenberghs, and Thijs, 2003, Biometrika 90, 53-71; Daniels and Hogan, 2008, in Missing Data in Longitudinal Studies: Strategies for Bayesian Modeling and Sensitivity Analysis) and is a natural starting point for missing not at random sensitivity analysis (Thijs et al., 2002, Biostatistics 3, 245-265; Daniels and Hogan, 2008, in Missing Data in Longitudinal Studies: Strategies for Bayesian Modeling and Sensitivity Analysis). However, when the pattern specific models are multivariate normal, identifying restrictions corresponding to missing at random (MAR) may not exist. Furthermore, identification strategies can be problematic in models with covariates (e.g., baseline covariates with time-invariant coefficients). In this article, we explore conditions necessary for identifying restrictions that result in MAR to exist under a multivariate normality assumption and strategies for identifying sensitivity parameters for sensitivity analysis or for a fully Bayesian analysis with informative priors. In addition, we propose alternative modeling and sensitivity analysis strategies under a less restrictive assumption for the distribution of the observed response data. We adopt the deviance information criterion for model comparison and perform a simulation study to evaluate the performances of the different modeling approaches. We also apply the methods to a longitudinal clinical trial. Problems caused by baseline covariates with time-invariant coefficients are investigated and an alternative identifying restriction based on residuals is proposed as a solution.
Dolan, C.V.; Colom, R.; Abad, F.J.; Wicherts, J.M.; Hessen, D.J.; van de Sluis, S.
2006-01-01
We investigated sex effects and the effects of educational attainment (EA) on the covariance structure of the WAIS-III in a subsample of the Spanish standardization data. We fitted both first order common factor models and second order common factor models. The latter include general intelligence (g
de Castro, Marcelo Souza; Rodriguez, Oscar Mauricio Hernandez
2016-06-01
The study of the hydrodynamic stability of flow patterns is important in the design of equipment and pipelines for multiphase flows. The maintenance of a particular flow pattern is important in many applications, e.g., the stratified flow pattern in heavy oil production, which avoids the formation of emulsions through the separation of phases, and the annular flow pattern in heat exchangers, which increases the heat transfer coefficient. Flow maps are drawn to indicate to engineers which flow pattern is present in a pipeline, for example. The ways these flow maps are drawn have evolved from purely experimental work, to phenomenological models, and then to stability analysis theories. In this work an experimental liquid-liquid flow map, with water and viscous oil as working fluids, drawn via a subjective approach with a high speed camera, was used to compare two approaches of the same theory: the interfacial-tension-force model. This theory was used to draw the wavy stratified flow pattern transition boundary. This paper presents a comparison between the two approaches of the interfacial-tension-force model for transition boundaries of liquid-liquid flow patterns: (i) solving the wave equation for the wave speed and using average values for wave number and wave speed; and (ii) solving the same equation for the wave number and then using a correlation for the wave speed. The results show that the second approach presents better results.
Cheung, Mike W. L.; Chan, Wai
2009-01-01
Structural equation modeling (SEM) is widely used as a statistical framework to test complex models in behavioral and social sciences. When the number of publications increases, there is a need to systematically synthesize them. Methodology of synthesizing findings in the context of SEM is known as meta-analytic SEM (MASEM). Although correlation…
Bias Correction in the Dynamic Panel Data Model with a Nonscalar Disturbance Covariance Matrix
Bun, M.J.G.
2003-01-01
Approximation formulae are developed for the bias of ordinary and generalized Least Squares Dummy Variable (LSDV) estimators in dynamic panel data models. Results from Kiviet [Kiviet, J. F. (1995), On bias, inconsistency, and efficiency of various estimators in dynamic panel data models, J.
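The bias that these approximation formulae target (often called Nickell bias) is easy to reproduce by simulation. The sketch below is illustrative and not from the paper: it generates a short dynamic panel with individual effects, applies the within (LSDV) transformation, and shows that the fixed-T bias of the autoregressive coefficient is negative, of order -(1+rho)/(T-1).

```python
import numpy as np

rng = np.random.default_rng(2)

# Illustrative Monte Carlo: y_it = rho*y_{i,t-1} + alpha_i + eps_it.
N, T, rho, reps = 200, 6, 0.5, 200

def lsdv_rho(y):
    """Within estimator: demean per individual, OLS of y_t on y_{t-1}."""
    ylag, ycur = y[:, :-1], y[:, 1:]
    ylag_d = ylag - ylag.mean(axis=1, keepdims=True)
    ycur_d = ycur - ycur.mean(axis=1, keepdims=True)
    return (ylag_d * ycur_d).sum() / (ylag_d ** 2).sum()

est = []
for _ in range(reps):
    alpha = rng.normal(size=N)
    y = np.zeros((N, T + 1))
    y[:, 0] = rng.normal(size=N)
    for t in range(T):
        y[:, t + 1] = rho * y[:, t] + alpha + rng.normal(size=N)
    est.append(lsdv_rho(y[:, 1:]))   # drop the initial condition

mean_bias = np.mean(est) - rho       # negative: LSDV underestimates rho
```

Bias-correction formulae of the kind the abstract describes subtract an analytical approximation of this quantity from the LSDV estimate.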
Maximum Likelihood Analysis of a Two-Level Nonlinear Structural Equation Model with Fixed Covariates
Lee, Sik-Yum; Song, Xin-Yuan
2005-01-01
In this article, a maximum likelihood (ML) approach for analyzing a rather general two-level structural equation model is developed for hierarchically structured data that are very common in educational and/or behavioral research. The proposed two-level model can accommodate nonlinear causal relations among latent variables as well as effects…
A Cautionary Note on the Use of Information Fit Indexes in Covariance Structure Modeling with Means
Wicherts, Jelte M.; Dolan, Conor V.
2004-01-01
Information fit indexes such as Akaike Information Criterion, Consistent Akaike Information Criterion, Bayesian Information Criterion, and the expected cross validation index can be valuable in assessing the relative fit of structural equation models that differ regarding restrictiveness. In cases in which models without mean restrictions (i.e.,…
A Second-Order Conditionally Linear Mixed Effects Model with Observed and Latent Variable Covariates
Harring, Jeffrey R.; Kohli, Nidhi; Silverman, Rebecca D.; Speece, Deborah L.
2012-01-01
A conditionally linear mixed effects model is an appropriate framework for investigating nonlinear change in a continuous latent variable that is repeatedly measured over time. The efficacy of the model is that it allows parameters that enter the specified nonlinear time-response function to be stochastic, whereas those parameters that enter in a…
Directory of Open Access Journals (Sweden)
Tommasi J.
2010-10-01
Full Text Available In the [eV;MeV] energy range, modelling of neutron-induced reactions is based on nuclear reaction models having parameters. Estimation of covariances on cross sections or on nuclear reaction model parameters is a recurrent puzzle in nuclear data evaluation. Major breakthroughs have been requested by nuclear reactor physicists so that proper uncertainties can be assigned for use in applications. In this paper, mathematical methods developed in the CONRAD code [2] will be presented to explain the treatment of all types of uncertainties, including experimental ones (statistical and systematic), and to propagate them to nuclear reaction model parameters or cross sections. The marginalization procedure will thus be exposed, using analytical or Monte-Carlo solutions. Furthermore, one major drawback identified by reactor physicists is the fact that integral or analytical experiments (reactor mock-ups or simple integral experiments, e.g., ICSBEP, …) were not taken into account sufficiently early in the evaluation process to remove discrepancies. In this paper, we will describe a mathematical framework to properly take this kind of information into account.
CSIR Research Space (South Africa)
Kim, S
2008-03-01
Full Text Available Motivated by a large multilevel survey conducted by the US Veterans Health Administration (VHA), we propose a structural equations model which involves a set of latent variables to capture dependence between different responses, a set of facility...
A cautionary note on the use of information fit indexes in covariance structure modeling with means
Wicherts, J.M.; Dolan, C.V.
2004-01-01
Information fit indexes such as Akaike Information Criterion, Consistent Akaike Information Criterion, Bayesian Information Criterion, and the expected cross validation index can be valuable in assessing the relative fit of structural equation models that differ regarding restrictiveness. In cases i
Analysis of Covariance with Linear Regression Error Model on Antenna Control Unit Tracking
2015-10-20
…hypotheses, analyses and perhaps modeling to assess test results objectively, i.e., based on statistical metrics, probability of confidence and logical inference to… less variable than opinion. Logic, statistical inference and belief are the bases of testable, repeatable and refutable hypotheses and analyses. In…
Dynamically constrained uncertainty for the Kalman filter covariance in the presence of model error
Grudzien, Colin; Carrassi, Alberto; Bocquet, Marc
2017-04-01
The forecasting community has long understood the impact of dynamic instability on the uncertainty of predictions in physical systems, and this has led to innovative filter designs that take advantage of knowledge of process models. The advantages of this combined approach to filtering, including both a dynamic and a statistical understanding, have included dimensional reductions and robust feature selection in the observational design of filters. In the context of a perfect model we have shown that the uncertainty in prediction is damped along the directions of stability and that the support of the uncertainty conforms to the dominant system instabilities. Our current work likewise demonstrates this constraint on the uncertainty for systems with model error; specifically: we produce analytical upper bounds on the uncertainty in the stable, backwards orthogonal Lyapunov vectors in terms of the local Lyapunov exponents and the scale of the additive noise; we demonstrate that for systems with model noise, the least upper bound on the uncertainty depends on the inverse relationship of the leading Lyapunov exponent and the observational certainty; and we numerically compute the invariant scaling factor of the model error which determines the asymptotic uncertainty. This dynamic scaling of model error is identifiable independently of the noise and is computable directly in terms of the system's dynamic invariants -- in this way the physical process itself may mollify the growth of modelling errors. For systems with strongly dissipative behaviour, we demonstrate that the growth of the uncertainty can be confined to the unstable-neutral modes independently of the filtering process, and we connect the observational design to take advantage of a dynamic characteristic of the filtering error.
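The qualitative behavior described above, that filter uncertainty is damped along stable directions and bounded by the observations along unstable ones, can be seen already in a scalar Kalman filter. The following is a minimal sketch, not the paper's multivariate analysis; the growth rate a plays the role of the local Lyapunov factor, Q is the model-error variance, and R the observation-error variance, all with invented values.

```python
# Scalar Kalman filter covariance recursion (Riccati iteration) with
# additive model error Q and observation variance R.
def steady_state_variance(a, Q, R, n_iter=500):
    """Iterate P -> analysis(forecast(P)) to its fixed point."""
    P = 1.0
    for _ in range(n_iter):
        P_pred = a * a * P + Q        # forecast: error grows as a^2, plus model error
        K = P_pred / (P_pred + R)     # Kalman gain
        P = (1.0 - K) * P_pred        # analysis update; always below R
    return P

P_stable = steady_state_variance(a=0.5, Q=0.01, R=0.1)    # stable direction
P_unstable = steady_state_variance(a=1.5, Q=0.01, R=0.1)  # unstable direction
```

Along the stable direction (|a| < 1) the asymptotic variance stays near the model-noise floor, while along the unstable direction it is larger but still bounded by the observation variance R, mirroring the upper bounds the abstract describes.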
Facchi, Arianna; Masseroni, Daniele; Gharsallah, Olfa; Gandolfi, Claudio
2014-05-01
Rice is of great importance both from a food supply point of view, since it represents the main food in the diet of over half the world's population, and from a water resources point of view, since it consumes almost 40% of the water used for irrigation. About 90% of global production takes place in Asia, while European production is quantitatively modest (about 3 million tons). However, Italy is Europe's leading producer, with over half of total production, almost totally concentrated in a large traditional paddy rice area between the Lombardy and Piedmont regions, in the north-western part of the country. In this area, irrigation of rice is traditionally carried out by continuous flooding. The high water requirement of this irrigation regime encourages the introduction of water-saving irrigation practices, such as flood irrigation after sowing in dry soil and intermittent irrigation (aerobic rice). In the 2013 agricultural season an intense monitoring activity was conducted on three experimental fields located in the Padana plain (northern Italy) and characterized by different irrigation regimes (traditional flood irrigation, flood irrigation after sowing in dry soil, and intermittent irrigation), with the aim of comparing the water balance terms for the three irrigation treatments. Actual evapotranspiration (ET) is one of these terms, but, unlike other water balance components, its field monitoring requires expensive instrumentation. This work explores the possibility of using only one eddy covariance system and Penman-Monteith (PM) type models for the determination of ET fluxes for the three irrigation regimes. An eddy covariance station was installed on the levee between the traditional flooded and the aerobic rice fields, to contemporaneously monitor the ET fluxes from these two treatments as a function of the wind direction. A detailed footprint analysis was conducted - through the application of three different analytical models - to determine the position
How I got to work with Feynman on the covariant quark model
Ravndal, Finn
2014-01-01
In the period 1968 - 1974 I was a graduate student and then a postdoc at Caltech and was involved in the development of the quark and parton models. Most of this time I worked in close contact with Richard Feynman and thus was present from the time the parton model was proposed until QCD was formulated. A personal account is presented of how the collaboration took place and of what the various stages of this development looked like from the inside, until QCD was established as a theory of the strong interactions with the partons being quarks and gluons.
HIDDEN MARKOV MODELS WITH COVARIATES FOR ANALYSIS OF DEFECTIVE INDUSTRIAL MACHINE PARTS
2014-01-01
Monthly counts of industrial machine part errors are modeled using a two-state Hidden Markov Model (HMM) in order to describe the effect of machine part error correction, and of the amount of time spent on the error correction, on the likelihood of the machine part being in a “defective” or “non-defective” state. The numbers of machine part errors were collected from a thermo plastic injection molding machine in a car bumper auto parts manufacturer in Liberec city, Czech Re...
Directory of Open Access Journals (Sweden)
Damgaard Lars
2006-04-01
Full Text Available Data on doe longevity in a rabbit population were analysed using a semiparametric log-Normal animal frailty model. Longevity was defined as the time from the first positive pregnancy test to death or culling due to pathological problems. Does culled for other reasons had right censored records of longevity. The model included time dependent covariates associated with year by season, the interaction between physiological state and the number of young born alive, and between order of positive pregnancy test and physiological state. The model also included an additive genetic effect and a residual in log frailty. Properties of marginal posterior distributions of specific parameters were inferred from a full Bayesian analysis using Gibbs sampling. All of the fully conditional posterior distributions defining a Gibbs sampler were easy to sample from, either directly or using adaptive rejection sampling. The marginal posterior mean estimates of the additive genetic variance and of the residual variance in log frailty were 0.247 and 0.690.
Sánchez, Juan Pablo; Korsgaard, Inge Riis; Damgaard, Lars Holm; Baselga, Manuel
2006-01-01
Data on doe longevity in a rabbit population were analysed using a semiparametric log-Normal animal frailty model. Longevity was defined as the time from the first positive pregnancy test to death or culling due to pathological problems. Does culled for other reasons had right censored records of longevity. The model included time dependent covariates associated with year by season, the interaction between physiological state and the number of young born alive, and between order of positive pregnancy test and physiological state. The model also included an additive genetic effect and a residual in log frailty. Properties of marginal posterior distributions of specific parameters were inferred from a full Bayesian analysis using Gibbs sampling. All of the fully conditional posterior distributions defining a Gibbs sampler were easy to sample from, either directly or using adaptive rejection sampling. The marginal posterior mean estimates of the additive genetic variance and of the residual variance in log frailty were 0.247 and 0.690.
Groenendijk, M.; Dolman, A.J.; Ammann, C.; Arneth, A.; Cescatti, A.; Molen, van der M.K.; Moors, E.J.
2011-01-01
Global vegetation models require the photosynthetic parameters, maximum carboxylation capacity (Vcm), and quantum yield (a) to parameterize their plant functional types (PFTs). The purpose of this work is to determine how much the scaling of the parameters from leaf to ecosystem level through a seas
HIDDEN MARKOV MODELS WITH COVARIATES FOR ANALYSIS OF DEFECTIVE INDUSTRIAL MACHINE PARTS
Directory of Open Access Journals (Sweden)
Pornpit Sirima
2014-01-01
Full Text Available Monthly counts of industrial machine part errors are modeled using a two-state Hidden Markov Model (HMM) in order to describe the effect of machine part error correction, and of the amount of time spent on the error correction, on the likelihood of the machine part being in a “defective” or “non-defective” state. The numbers of machine part errors were collected from a thermo plastic injection molding machine in a car bumper auto parts manufacturer in Liberec city, Czech Republic, from January 2012 to November 2012. A Bayesian method is used for parameter estimation. The results of this study indicate that the machine part error correction and the amount of time spent on the error correction do not improve the machine part status of the individual part, but that there is a very strong month-to-month dependence of the machine part states. Using the Mean Absolute Error (MAE) criterion, the performance of the proposed model (MAE = 1.62) and of the HMM including machine part error correction only (MAE = 1.68), from our previous study, is not significantly different. However, the proposed model has the advantage that the machine part state can be explained by both the machine part error correction and the amount of time spent on the error correction.
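The core of a two-state HMM for monthly defect counts can be sketched with a forward-algorithm likelihood. This is a simplified stand-in for the paper's model: the covariate effects (error correction and time spent) and the Bayesian estimation are omitted, Poisson emissions are assumed, and all parameter values and counts are invented.

```python
import numpy as np
from math import exp, factorial

# Two hidden states: "non-defective" (low mean count) and "defective"
# (high mean count), with strong month-to-month state dependence.
A = np.array([[0.9, 0.1],
              [0.2, 0.8]])       # transition matrix
pi = np.array([0.5, 0.5])        # initial state probabilities
lam = np.array([1.0, 5.0])       # Poisson means per state

def poisson_pmf(k, mu):
    return exp(-mu) * mu ** k / factorial(k)

def log_likelihood(counts):
    """Forward algorithm with per-step scaling for numerical stability."""
    alpha = pi * np.array([poisson_pmf(counts[0], m) for m in lam])
    ll = np.log(alpha.sum())
    alpha /= alpha.sum()
    for k in counts[1:]:
        alpha = (alpha @ A) * np.array([poisson_pmf(k, m) for m in lam])
        ll += np.log(alpha.sum())
        alpha /= alpha.sum()
    return ll

counts = [0, 1, 0, 6, 7, 5, 1, 0]   # toy monthly error counts
ll = log_likelihood(counts)
```

A covariate-dependent version, as in the paper, would let the transition probabilities in A (or the emission means) depend on the error-correction covariates through, e.g., a logistic link.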
Multilevel Regression Models for Mean and (Co)variance: with Applications in Nursing Research
B. Li (Bayoue)
2014-01-01
In this chapter, a concise overview is provided for the statistical techniques that are applied in this thesis. This includes two classes of statistical modeling approaches which have been commonly applied in plenty of research areas for many decades. Namely, we will de
Okech, David
2012-01-01
Objectives: Using baseline and second wave data, the study evaluated the measurement and structural properties of parenting stress, personal mastery, and economic strain with N = 381 lower income parents who decided to join and those who did not join in a child development savings account program. Methods: Structural equation modeling mean and…
Subsampling intervals in (un)stable autoregressive models with stationary covariates
van Giersbergen, N.P.A.
2002-01-01
This paper considers confidence intervals based on the subsampling approach for the largest root in possibly unstable AR(p) models with stationary exogenous regressors. The subsampling approach proposed by Politis and Romano (Annals of Statistics, 1994), is able to deal with discontinuities in the
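The Politis-Romano subsampling idea behind such intervals can be sketched as follows. This is an illustrative simplification, not the paper's procedure: a plain AR(1) with no exogenous regressors, stationary-case sqrt(n) scaling, overlapping blocks, and invented parameter values.

```python
import numpy as np

rng = np.random.default_rng(3)

# Simulate a stationary AR(1) series.
n, rho_true = 500, 0.7
y = np.zeros(n)
for t in range(1, n):
    y[t] = rho_true * y[t - 1] + rng.normal()

def ar1_ols(x):
    """OLS estimate of the AR(1) coefficient."""
    return (x[:-1] @ x[1:]) / (x[:-1] @ x[:-1])

rho_hat = ar1_ols(y)

# Subsampling: approximate the law of sqrt(n)*(rho_hat - rho) by the
# empirical law of sqrt(b)*(rho_hat_block - rho_hat) over all blocks.
b = 50
stats = [np.sqrt(b) * (ar1_ols(y[s:s + b]) - rho_hat)
         for s in range(n - b + 1)]
lo_q, hi_q = np.quantile(stats, [0.025, 0.975])
ci = (rho_hat - hi_q / np.sqrt(n), rho_hat - lo_q / np.sqrt(n))
```

The appeal of subsampling, which the abstract alludes to, is that it remains valid under discontinuities in the limit distribution (e.g., near the unit root, where the scaling changes), situations where the standard bootstrap can fail.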
Christen, A.; Coops, N. C.; Crawford, B.; Heyman, E.; Kellett, R.; Liss, K.; Oke, T. R.; Olchovski, I.; Tooke, R.; van der Laan, M.; Voogt, J. A.
2010-12-01
It can be expected that integrative greenhouse-gas emission modeling at block or neighborhood scales will become an increasingly relevant part of urban planning processes in the future. A particular challenge is the geographical distribution of emissions and the proper validation of modeled emissions at this fine scale, where consumption statistics are often lacking. Direct flux measurements of GHGs using the eddy-covariance (EC) approach could be a valuable way to validate such fine-scale urban emission inventories. In combination with micrometeorological source-area models, EC measurements provide spatial-temporal information on emissions - at scales where utility or consumption data are not available. A residential neighborhood in Vancouver, BC, Canada is used as a case study to validate modeled carbon-dioxide (CO2) emissions against direct eddy-covariance flux measurements. The model is a combination of top-down inventory and bottom-up modelling of individual objects (buildings, vegetation, traffic counts). Carbon is conceptually tracked through quantifying inputs to and outputs from the modelled urban neighborhood, as well as storage changes within the neighborhood. The spatial modelling of emissions is conceptually separated into four components - buildings, transportation, human respiration and vegetation/soils. Inputs to the emission model include automated urban object classifications (buildings, trees, land cover) based on LiDAR and optical remote sensing data in combination with census data, assessment data, traffic counts and measured radiation and climate data. Based on those inputs, spatial maps of total annual CO2 emissions (or uptake) are modeled at 50 m resolution for 4 km2. The resulting maps (building, transportation, human respiration and vegetation) are then summed to create a map of integral (net-)emissions. The mapped area overlaps with the source-area of a micrometeorological flux tower location in the center of the study area. Continuous CO2
Zöll, Undine; Brümmer, Christian; Schrader, Frederik; Ammann, Christof; Ibrom, Andreas; Flechard, Christophe R.; Nelson, David D.; Zahniser, Mark; Kutsch, Werner L.
2016-09-01
Recent advances in laser spectrometry offer new opportunities to investigate ecosystem-atmosphere exchange of environmentally relevant trace gases. In this study, we demonstrate the applicability of a quantum cascade laser (QCL) absorption spectrometer to continuously measure ammonia concentrations at high time resolution and thus to quantify the net exchange between a seminatural peatland ecosystem and the atmosphere based on the eddy-covariance approach. Changing diurnal patterns of both ammonia concentration and fluxes were found during different periods of the campaign. We observed a clear tipping point in early spring with decreasing ammonia deposition velocities and increasingly bidirectional fluxes that occurred after the switch from dormant vegetation to CO2 uptake but was triggered by a significant weather change. While several biophysical parameters such as temperature, radiation, and surface wetness were identified to partially regulate ammonia exchange at the site, the seasonal concentration pattern was clearly dominated by agricultural practices in the surrounding area. Comparing the results of a compensation point model with our measurement-based flux estimates showed considerable differences in some periods of the campaign due to overestimation of non-stomatal resistances caused by low acid ratios. The total cumulative campaign exchange of ammonia after 9 weeks, however, differed by only 6 %, with 911 and 857 g NH3-N ha-1 of deposition found by measurements and modeling, respectively. Extrapolating our findings to an entire year, ammonia deposition was lower than reported by Hurkuck et al. (2014) for the same site in previous years using denuder systems. This was likely due to a better representation of the emission component in the net signal of eddy-covariance fluxes as well as better adapted site-specific parameters in the model. Our study not only stresses the importance of high-quality measurements for studying and assessing land
Simulation of Longitudinal Exposure Data with Variance-Covariance Structures Based on Mixed Models
2013-01-01
The model distinguishes variability between subjects (intersubject) and that within subjects (intrasubject), so that several types of correlation within each subject can be modeled as necessary. Intersubject and intrasubject variances are discriminated by splitting the error ε_ij into two terms: y_ij = μ + b_i + e_ij, with b_i ~ N(0, σ²_b) and e_ij ~ N(0, σ²_e), where b_i is the subject-specific random effect. A first-order autoregressive intrasubject correlation structure then has a correlation matrix with entries ρ^|j-k|; e.g., for four time points the rows are (1, ρ, ρ², ρ³), (ρ, 1, ρ, ρ²), (ρ², ρ, 1, ρ), (ρ³, ρ², ρ, 1).
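Simulating from a mixed model of this kind takes only a few lines. The sketch below is illustrative (parameter values and dimensions invented): it builds the AR(1) intrasubject correlation matrix, adds the constant intersubject component from the random intercept, and draws correlated longitudinal vectors.

```python
import numpy as np

rng = np.random.default_rng(4)

# y_ij = mu + b_i + e_ij with b_i ~ N(0, sigma_b^2) and AR(1)-correlated
# within-subject errors e_ij.  Illustrative parameter values.
mu, sigma_b, sigma_e, rho = 10.0, 1.0, 0.5, 0.6
n_subjects, n_times = 100, 4

# AR(1) correlation matrix: R[j, k] = rho^|j - k|
idx = np.arange(n_times)
R = rho ** np.abs(idx[:, None] - idx[None, :])

# Marginal covariance of a subject's vector: constant intersubject part
# (from the shared random intercept) plus the AR(1) intrasubject part.
Sigma = sigma_b ** 2 * np.ones((n_times, n_times)) + sigma_e ** 2 * R

y = rng.multivariate_normal(mean=np.full(n_times, mu), cov=Sigma,
                            size=n_subjects)
```

Other intrasubject structures (compound symmetry, unstructured) are obtained simply by swapping out R, which is exactly the flexibility the abstract's mixed-model formulation provides.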
Mäkelä, Jarmo; Susiluoto, Jouni; Markkanen, Tiina; Aurela, Mika; Järvinen, Heikki; Mammarella, Ivan; Hagemann, Stefan; Aalto, Tuula
2016-12-01
We examined parameter optimisation in the JSBACH (Kaminski et al., 2013; Knorr and Kattge, 2005; Reick et al., 2013) ecosystem model, applied to two boreal forest sites (Hyytiälä and Sodankylä) in Finland. We identified and tested key parameters in soil hydrology and forest water and carbon-exchange-related formulations, and optimised them using the adaptive Metropolis (AM) algorithm for Hyytiälä with a 5-year calibration period (2000-2004) followed by a 4-year validation period (2005-2008). Sodankylä acted as an independent validation site, where optimisations were not made. The tuning provided estimates for full distribution of possible parameters, along with information about correlation, sensitivity and identifiability. Some parameters were correlated with each other due to a phenomenological connection between carbon uptake and water stress or other connections due to the set-up of the model formulations. The latter holds especially for vegetation phenology parameters. The least identifiable parameters include phenology parameters, parameters connecting relative humidity and soil dryness, and the field capacity of the skin reservoir. These soil parameters were masked by the large contribution from vegetation transpiration. In addition to leaf area index and the maximum carboxylation rate, the most effective parameters adjusting the gross primary production (GPP) and evapotranspiration (ET) fluxes in seasonal tuning were related to soil wilting point, drainage and moisture stress imposed on vegetation. For daily and half-hourly tunings the most important parameters were the ratio of leaf internal CO2 concentration to external CO2 and the parameter connecting relative humidity and soil dryness. Effectively the seasonal tuning transferred water from soil moisture into ET, and daily and half-hourly tunings reversed this process. The seasonal tuning improved the month-to-month development of GPP and ET, and produced the most stable estimates of water use
Directory of Open Access Journals (Sweden)
Rodolfo Casana
2016-01-01
Full Text Available We have studied the existence of self-dual solitonic solutions in a generalization of the Abelian Chern-Simons-Higgs model. Such a generalization introduces two different nonnegative functions, ω1(|ϕ|) and ω(|ϕ|), which split the kinetic term of the Higgs field, |Dμϕ|² → ω1(|ϕ|)|D0ϕ|² - ω(|ϕ|)|Dkϕ|², explicitly breaking the Lorentz covariance. We have shown that the Bogomolnyi procedure can be cleanly implemented only if ω(|ϕ|) ∝ β|ϕ|^(2β-2) with β ≥ 1. The self-dual or Bogomolnyi equations produce an infinite number of soliton solutions by conveniently choosing the generalizing function ω1(|ϕ|), which must be able to provide a finite magnetic field. Also, we have shown that by properly choosing the generalizing functions it is possible to reproduce the Bogomolnyi equations of the Abelian Maxwell-Higgs and Chern-Simons-Higgs models. Finally, some new self-dual |ϕ|^6-vortex solutions have been analyzed from both the theoretical and the numerical point of view.
Hildebrandt, Tom; Langenbucher, James; Carr, Sasha; Sanjuan, Pilar; Park, Steff
2006-09-01
Long-term use of anabolic-androgenic steroids (AASs) is associated with both positive and negative effects. The authors examined possible mechanisms by which these effects contribute to AAS satisfaction and predict intentions for future AAS use. Five hundred male AAS users completed an interactive Web-based instrument assessing the psychological and physical effects of AAS use. Covariance structure modeling was used to evaluate both direct and indirect effects of AAS consequences on satisfaction with AASs and intentions for future AAS use. Results suggest that gain in muscle mass and psychological benefits from AAS use uniquely contributed to both AAS satisfaction and intentions for future use. Side effects from AAS use also uniquely contributed to AAS satisfaction, but ancillary drug use was found to partially mediate this relationship, suggesting that the satisfaction of experienced AAS users is enhanced by their mastery of side effects through the use of ancillary drugs. The final model explained 29% of the variance in intentions for future AAS use. Mechanisms for sustained AAS use and implications for intervention and prevention strategies are discussed.
A Covariant Master Theory for Novel Galilean Invariant Models and Massive Gravity
Gabadadze, Gregory; Khoury, Justin; Pirtskhalava, David; Trodden, Mark
2012-01-01
Coupling the galileons to a curved background has been a tradeoff between maintaining second order equations of motion, maintaining the galilean shift symmetries, and allowing the background metric to be dynamical. We propose a construction which can achieve all three for a novel class of galilean invariant models, by coupling a scalar with the galilean symmetry to a massive graviton. This generalizes the brane construction for galileons, by adding to the brane a dynamical metric, (non-universally) interacting with the galileon field. Alternatively, it can be thought of as an extension of the ghost-free massive gravity, or as a massive graviton-galileon scalar-tensor theory. In the decoupling limit of these theories, new kinds of galileon invariant interactions arise between the scalar and the longitudinal mode of the graviton. These have higher order equations of motion and infinite powers of the field, yet are ghost-free.
Activities on covariance estimation in Japanese Nuclear Data Committee
Energy Technology Data Exchange (ETDEWEB)
Shibata, Keiichi [Japan Atomic Energy Research Inst., Tokai, Ibaraki (Japan). Tokai Research Establishment
1997-03-01
Described are activities on covariance estimation in the Japanese Nuclear Data Committee. Covariances are obtained from measurements by using the least-squares methods. A simultaneous evaluation was performed to deduce covariances of fission cross sections of U and Pu isotopes. A code system, KALMAN, is used to estimate covariances of nuclear model calculations from uncertainties in model parameters. (author)
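The KALMAN-style propagation of model-parameter uncertainties mentioned above follows the standard linear Bayesian (least-squares) update. A minimal sketch, assuming a linear model y = Sx with a known sensitivity matrix S; all numbers are invented for illustration:

```python
import numpy as np

def bayesian_update(x, P, S, y, V):
    """One linear Bayesian (Kalman-filter-style) update of model parameters.

    x : prior parameter means,  P : prior parameter covariance,
    S : sensitivity matrix of the observables to the parameters,
    y : measured values,        V : measurement covariance.
    Returns posterior means and posterior covariance.
    """
    P_post = np.linalg.inv(np.linalg.inv(P) + S.T @ np.linalg.inv(V) @ S)
    x_post = x + P_post @ S.T @ np.linalg.inv(V) @ (y - S @ x)
    return x_post, P_post

# Toy example: two parameters observed through three measurements
x0 = np.array([1.0, 0.5])
P0 = np.diag([0.2**2, 0.1**2])
S = np.array([[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
y = np.array([1.1, 0.45, 1.6])
V = np.diag([0.05**2] * 3)

x1, P1 = bayesian_update(x0, P0, S, y, V)
```

Because the posterior precision is the prior precision plus the data precision, every posterior parameter variance is strictly smaller than its prior value, which is why sequential evaluation with such updates shrinks the covariance estimate.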
On the use of integrating FLUXNET eddy covariance and remote sensing data for model evaluation
Reichstein, Markus; Jung, Martin; Beer, Christian; Carvalhais, Nuno; Tomelleri, Enrico; Lasslop, Gitta; Baldocchi, Dennis; Papale, Dario
2010-05-01
The current FLUXNET database (www.fluxdata.org) of CO2, water and energy exchange between the terrestrial biosphere and the atmosphere contains almost 1000 site-years of data from more than 250 sites, encompassing all major biomes of the world and processed in a standardized way (1-3). In this presentation we show that the information in the data is sufficient to derive generalized empirical relationships between vegetation/respective remote sensing information, climate and the biosphere-atmosphere exchanges across global biomes. These empirical patterns are used to generate global grids of the respective fluxes and derived properties (e.g. radiation and water-use efficiencies or climate sensitivities in general, Bowen ratio, AET/PET ratio). For example, we revisit global 'text-book' numbers such as global Gross Primary Productivity (GPP), estimated since the 1970s as ca. 120 PgC (4), or global evapotranspiration (ET), estimated at 65 × 10³ km³ yr⁻¹ (5) - for the first time with a more solid and direct empirical basis. Evaluation against independent data at regional to global scale (e.g. atmospheric CO2 inversions, runoff data) lends support to the validity of our almost purely empirical up-scaling approaches. Moreover, climate factors such as radiation, temperature and water balance are identified as driving factors for variations and trends of carbon and water fluxes, with distinctly different sensitivities between different vegetation types. Hence, these global fields of biosphere-atmosphere exchange and the inferred relations between climate, vegetation type and fluxes should be used for evaluation or benchmarking of climate models or their land-surface components, while overcoming scale issues with classical point-to-grid-cell comparisons. 1. M. Reichstein et al., Global Change Biology 11, 1424 (2005). 2. D. Baldocchi, Australian Journal of Botany 56, 1 (2008). 3. D. Papale et al., Biogeosciences 3, 571 (2006). 4. D. E. Alexander, R. W. Fairbridge, Encyclopedia of

Treatment Effects with Many Covariates and Heteroskedasticity
DEFF Research Database (Denmark)
Cattaneo, Matias D.; Jansson, Michael; Newey, Whitney K.
The linear regression model is widely used in empirical work in Economics. Researchers often include many covariates in their linear model specification in an attempt to control for confounders. We give inference methods that allow for many covariates and heteroskedasticity. We then propose a new heteroskedasticity-consistent standard error formula that is fully automatic and robust to both (conditional) heteroskedasticity of unknown form and the inclusion of possibly many covariates. We apply our findings to three settings: (i) parametric linear models with many covariates, (ii) ...
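A heteroskedasticity-consistent (sandwich) standard error of the classic White/HC0 form can be sketched as follows. This is the textbook estimator, not the paper's proposed many-covariate correction, and the data are synthetic:

```python
import numpy as np

def hc0_standard_errors(X, y):
    """OLS coefficients with White (HC0) heteroskedasticity-robust standard errors."""
    XtX_inv = np.linalg.inv(X.T @ X)
    beta = XtX_inv @ X.T @ y
    resid = y - X @ beta
    meat = X.T @ (X * resid[:, None] ** 2)   # X' diag(e^2) X
    cov = XtX_inv @ meat @ XtX_inv           # sandwich estimator
    return beta, np.sqrt(np.diag(cov))

rng = np.random.default_rng(1)
n = 400
X = np.column_stack([np.ones(n), rng.normal(size=(n, 3))])
# Heteroskedastic errors: the noise variance grows with the first regressor
y = X @ np.array([1.0, 2.0, 0.0, -1.0]) + rng.normal(size=n) * (0.5 + np.abs(X[:, 1]))

beta, se = hc0_standard_errors(X, y)
```

The paper's point is precisely that this classical sandwich formula can misbehave when the number of covariates grows with the sample size, motivating their automatic correction.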
Manifestly covariant electromagnetism
Energy Technology Data Exchange (ETDEWEB)
Hillion, P. [Institut Henri Poincare' , Le Vesinet (France)
1999-03-01
The conventional relativistic formulation of electromagnetism is covariant under the full Lorentz group. But relativity requires covariance only under the proper Lorentz group and the authors present here the formalism covariant under the complex rotation group isomorphic to the proper Lorentz group. The authors discuss successively Maxwell's equations, constitutive relations and potential functions. A comparison is made with the usual formulation.
Serial grey-box model of a stratified thermal tank for hierarchical control of a solar plant
Energy Technology Data Exchange (ETDEWEB)
Arahal, Manuel R. [Universidad de Sevilla, Dpto. de Ingenieria de Sistemas y Automatica, Camino de los Descubrimientos s/n, 41092 Sevilla (Spain); Cirre, Cristina M. [Convenio Universidad de Almeria-Plataforma Solar de Almeria, Ctra. Senes s/n, 04200 Tabernas, Almeria (Spain); Berenguel, Manuel [Universidad de Almeria, Dpto. Lenguajes y Computacion, Ctra. Sacramento s/n, 04120, Almeria (Spain)
2008-05-15
The ACUREX collector field together with a thermal storage tank and a power conversion system forms the Small Solar Power Systems plant of the Plataforma Solar de Almeria, a facility that has been used for research for the last 25 years. A simulator of the collector field produced by the last author has been available and used as a test-bed for control strategies. Until now, however, there has been no model of the whole plant. Such a model is needed for the hierarchical control schemes also proposed by the authors. In this paper a model of the thermal storage tank is derived using the Simultaneous Perturbation Stochastic Approximation technique to adjust the parameters of a serial grey-box model structure. The benefits of the proposed approach are discussed in the context of the intended use, which requires a model capable of simulating the behavior of the storage tank with low computational load and low error over medium to large horizons. The model is tested against real data in a variety of situations, showing its performance in terms of simulation error in the temperature profile and in the usable energy stored in the tank. The results obtained demonstrate the viability of the proposed approach. (author)
Carvalhais, Nuno; Thurner, Martin; Forkel, Matthias; Beer, Christian; Reichstein, Markus
2016-04-01
The response of the global terrestrial carbon cycle to climate change and the associated climate-carbon feedback has been shown to be highly uncertain. Ultimately this response depends on how carbon assimilation by vegetation changes relative to the effective mean turnover time of carbon in vegetation and soils. Consequently, these turnover times of carbon are expected to depend on vegetation longevity and relative allocation to woody and non-woody biomass, and on litter and soil organic matter decomposition rates, which depend on climate variables, but also on soil properties, biological activity and the chemical composition of the litter. Data-oriented estimates of whole-ecosystem carbon turnover rates (τ) are based on global datasets of carbon stocks and fluxes and used to diagnose the co-variability of τ with climate. The overall mean global carbon turnover time estimated is 23 years (with 95% confidence intervals between 19 and 30 years), showing a strong spatial variability ranging from 15 years in equatorial regions to 255 years at latitudes north of 75°N. This latitudinal pattern reflects the expected dependencies of metabolic activity and ecosystem dynamics on temperature. However, a strong local correlation of τ with mean annual precipitation patterns is at least as prevalent as the expected effect of temperature on the global patterns of τ. The comparison of observation-based estimates of τ with current state-of-the-art Earth system models shows a consistent latitudinal pattern but a significant underestimation bias of ˜36% globally. Models consistently show a stronger association of τ to temperature and do not reproduce the observed association to mean annual precipitation in different latitudinal bands. A further breakdown of τ focusing on forest background mortality also shows contrasting regional patterns to those of global vegetation models, suggesting that the treatment of plant mortality may be overly simplistic in different model
Chen, Qiushi; Ayer, Turgay; Nastoupil, Loretta J; Koff, Jean L; Staton, Ashley D; Chhatwal, Jagpreet; Flowers, Christopher R
2016-01-01
Diffuse large B-cell lymphoma (DLBCL) demonstrates significant racial differences in age of onset, stage, and survival. To examine whether population-specific models improve prediction of outcomes for African-American (AA) patients with DLBCL, we utilized Surveillance, Epidemiology, and End Results data and compared stratification by the international prognostic index (IPI) in general and AA populations. We also constructed and compared prognostic models for general and AA populations using multivariable logistic regression (LR) and artificial neural network approaches. While the IPI adequately stratified outcomes for the general population, it failed to separate AA DLBCL patients into distinct risk groups. Our AA LR model identified age ≥ 55 (odds ratio 0.45 [95% CI: 0.36, 0.56]), male sex (0.75 [0.60, 0.93]), and stage III/IV disease (0.43 [0.34, 0.54]) as adverse predictors of 5-year survival for AA patients. In addition, general-population prognostic models were poorly calibrated for AAs with DLBCL, indicating a need for validated AA-specific prognostic models.
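A multivariable logistic regression of the kind used above, with coefficients exponentiated into odds ratios, can be sketched on synthetic data. The covariate names mirror the abstract, but the simulated data and the plain Newton-Raphson fitter below are illustrative, not the study's:

```python
import numpy as np

def logistic_fit(X, y, iters=25):
    """Logistic regression by Newton-Raphson; returns coefficients (log odds ratios)."""
    beta = np.zeros(X.shape[1])
    for _ in range(iters):
        p = 1.0 / (1.0 + np.exp(-X @ beta))
        W = p * (1 - p)
        H = X.T @ (X * W[:, None])                 # Hessian of the log-likelihood
        beta += np.linalg.solve(H, X.T @ (y - p))  # Newton step
    return beta

rng = np.random.default_rng(2)
n = 2000
age55, male, stage34 = (rng.integers(0, 2, n) for _ in range(3))
X = np.column_stack([np.ones(n), age55, male, stage34])
true = np.array([1.0, np.log(0.45), np.log(0.75), np.log(0.43)])  # ORs as in the abstract
y = (rng.random(n) < 1.0 / (1.0 + np.exp(-X @ true))).astype(float)

odds_ratios = np.exp(logistic_fit(X, y)[1:])  # OR < 1 means an adverse predictor
```

Exponentiating each fitted coefficient converts it from a log odds ratio to an odds ratio, the scale on which the abstract reports its adverse predictors.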
Fluttering in Stratified Flows
Lam, Try; Vincent, Lionel; Kanso, Eva
2016-11-01
The descent motion of heavy objects under the influence of gravitational and aerodynamic forces is relevant to many branches of engineering and science. Examples range from estimating the behavior of re-entry space vehicles to studying the settlement of marine larvae and its influence on underwater ecology. The behavior of regularly shaped objects freely falling in homogeneous fluids is relatively well understood. For example, the complex interaction of a rigid coin with the surrounding fluid will cause it to either fall steadily, flutter, tumble, or be chaotic. Less is known about the effect of density stratification on the descent behavior. Here, we experimentally investigate the descent of discs in both pure water and in linearly salt-stratified fluids where the density is varied from 1.0 to 1.14 times that of water, the Brunt-Vaisala frequency is 1.7 rad/s, and the Froude number Fr … robots for space exploration and underwater missions.
Directory of Open Access Journals (Sweden)
Georg Jocher
2015-01-01
In this paper we present one year of meteorological and flux measurements obtained near Ny-Ålesund, Spitsbergen. Fluxes are derived by the eddy covariance method and by a hydrodynamic model approach (HMA) as well. Both methods are compared and analyzed with respect to season and mean wind direction. Concerning the wind field, we find a clear distinction between three prevailing regimes (which influence the flux behavior), mainly caused by the topography at the measurement site. Concerning the fluxes, we find a good agreement between the HMA and the eddy covariance method in cases of turbulent mixing in summer, but deviations under stable conditions, when the HMA almost always shows negative fluxes. Part of the deviation is based on a dependence of HMA fluxes on friction velocity and the influence of the molecular boundary layer. Moreover, the flagging system of the eddy covariance software package TK3 is briefly revised. A new quality criterion for the use of fluxes obtained by the eddy covariance method, based on integral turbulence characteristics, is proposed.
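At its core, the eddy covariance method reduces to the covariance of vertical wind and scalar fluctuations over an averaging block. A minimal sketch on synthetic 20 Hz data (real processing chains such as TK3 add coordinate rotation, detrending, spectral corrections, and quality flags):

```python
import numpy as np

def eddy_covariance_flux(w, c):
    """Kinematic eddy flux: covariance of vertical wind w and scalar c
    after removing the block means (Reynolds decomposition)."""
    return np.mean((w - w.mean()) * (c - c.mean()))

# Synthetic 30-min record at 20 Hz: correlated fluctuations carry the flux
rng = np.random.default_rng(3)
n = 30 * 60 * 20
turb = rng.normal(size=n)                               # shared turbulent signal
w = 0.4 * turb + rng.normal(scale=0.3, size=n)          # vertical wind (m/s)
c = 2.0 * turb + rng.normal(scale=1.0, size=n) + 400.0  # scalar, e.g. CO2 (ppm)

flux = eddy_covariance_flux(w, c)   # expected near 0.4 * 2.0 = 0.8 (kinematic units)
```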
Basu, S.; Holtslag, A.A.M.; Wiel, van de B.J.H.; Moene, A.F.; Steeneveld, G.J.
2008-01-01
In single column and large-eddy simulation studies of the atmospheric boundary layer, surface sensible heat flux is often used as a boundary condition. In this paper, we delineate the fundamental shortcomings of such a boundary condition in the context of stable boundary layer modelling and simulation.
Thermals in stratified regions of the ISM
Rodriguez-Gonzalez, Ary
2013-01-01
We present a model of a "thermal" (i.e., a hot bubble) rising within an exponentially stratified region of the ISM. This model includes terms representing the ram pressure braking and the entrainment of environmental gas into the thermal. We then calibrate the free parameters associated with these two terms through a comparison with 3D numerical simulations of a rising bubble. Finally, we apply our "thermal" model to the case of a hot bubble produced by a SN within the stratified ISM of the Galactic disk.
THERMALS IN STRATIFIED REGIONS OF THE ISM
Directory of Open Access Journals (Sweden)
A. Rodríguez-González
2013-01-01
We present a model of a “thermal” (i.e., a hot bubble) rising within an exponentially stratified region of the ISM. This model includes terms representing the ram pressure braking and the entrainment of environmental gas into the thermal. We then calibrate the free parameters associated with these two terms through a comparison with 3D numerical simulations of a rising bubble. Finally, we apply our “thermal” model to the case of a hot bubble produced by a SN within the stratified ISM of the Galactic disk.
Directory of Open Access Journals (Sweden)
Han-Jin Cui
Intracerebral hemorrhage (ICH) is a subtype of stroke associated with high morbidity and mortality rates. No proven treatments are available for this condition. Iron-mediated free radical injury is associated with secondary damage following ICH. Deferoxamine (DFX), a ferric-iron chelator, is a candidate drug for the treatment of ICH. We performed a systematic review of studies involving the administration of DFX following ICH. In total, 20 studies were identified that described the efficacy of DFX in animal models of ICH and assessed changes in the brain water content, neurobehavioral score, or both. DFX reduced the brain water content by 85.7% in animal models of ICH (−0.86, 95% CI: −1.48 to −0.23; P < 0.01; 23 comparisons), and improved the neurobehavioral score by −1.08 (95% CI: −1.23 to −0.92; P < 0.01; 62 comparisons). DFX was most efficacious when administered 2-4 h after ICH at a dose of 10-50 mg/kg depending on species, and this beneficial effect remained for up to 24 h postinjury. The efficacy was higher with phenobarbital anesthesia, intramuscular injection, and lysed erythrocyte infusion, and in Fischer 344 rats or aged animals. Overall, although DFX was found to be effective in experimental ICH, additional confirmation is needed due to possible publication bias, poor study quality, and the limited number of studies conducting clinical trials.
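Pooled estimates like those above are typically obtained by inverse-variance weighting of the per-study effect sizes. A minimal fixed-effect sketch with invented per-study standardized mean differences (the review itself likely used a random-effects model and its own data):

```python
import numpy as np

def pooled_effect(effects, ci_low, ci_high):
    """Fixed-effect inverse-variance pooling of standardized mean differences.
    Standard errors are recovered from the 95% confidence intervals."""
    se = (np.asarray(ci_high) - np.asarray(ci_low)) / (2 * 1.96)
    w = 1.0 / se**2                          # inverse-variance weights
    est = np.sum(w * effects) / np.sum(w)    # weighted mean effect
    pooled_se = 1.0 / np.sqrt(np.sum(w))
    return est, (est - 1.96 * pooled_se, est + 1.96 * pooled_se)

# Hypothetical per-study SMDs with 95% CIs (not the review's raw data)
effects = np.array([-1.2, -0.9, -0.6, -1.0])
lo = np.array([-1.8, -1.4, -1.1, -1.5])
hi = np.array([-0.6, -0.4, -0.1, -0.5])

est, ci = pooled_effect(effects, lo, hi)
```

Because the pooled estimate is a weighted mean, it always lies between the smallest and largest per-study effect, and its confidence interval is narrower than any single study's.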
Covariant diagrams for one-loop matching
Zhang, Zhengkang
2016-01-01
We present a diagrammatic formulation of recently-revived covariant functional approaches to one-loop matching from an ultraviolet (UV) theory to a low-energy effective field theory. Various terms following from a covariant derivative expansion (CDE) are represented by diagrams which, unlike conventional Feynman diagrams, involve gauge-covariant quantities and are thus dubbed "covariant diagrams." The use of covariant diagrams helps organize and simplify one-loop matching calculations, which we illustrate with examples. Of particular interest is the derivation of UV model-independent universal results, which reduce matching calculations of specific UV models to applications of master formulas. We show how such derivation can be done in a more concise manner than the previous literature, and discuss how additional structures that are not directly captured by existing universal results, including mixed heavy-light loops, open covariant derivatives, and mixed statistics, can be easily accounted for.
Náraigh, L Ó; Matar, O; Zaki, T
2009-01-01
We investigate the linear stability of a flat interface that separates a liquid layer from a fully-developed turbulent gas flow. In this context, linear-stability analysis involves the study of the dynamics of a small-amplitude wave on the interface, and we develop a model that describes wave-induced perturbation turbulent stresses (PTS). We demonstrate the effect of the PTS on the stability properties of the system in two cases: for a laminar thin film, and for deep-water waves. In the first case, we find that the PTS have little effect on the growth rate of the waves, although they do affect the structure of the perturbation velocities. In the second case, the PTS enhance the maximum growth rate, although the overall shape of the dispersion curve is unchanged. Again, the PTS modify the structure of the velocity field, especially at longer wavelengths. Finally, we demonstrate a kind of parameter tuning that enables the production of the thin-film (slow) waves in a deep-water setting.
Covariant Hamiltonian field theory
Giachetta, G; Sardanashvily, G
1999-01-01
We study the relationship between the equations of first order Lagrangian field theory on fiber bundles and the covariant Hamilton equations on the finite-dimensional polysymplectic phase space of covariant Hamiltonian field theory. The main peculiarity of these Hamilton equations lies in the fact that, for degenerate systems, they contain additional gauge fixing conditions. We develop the BRST extension of the covariant Hamiltonian formalism, characterized by a Lie superalgebra of BRST and anti-BRST symmetries.
Levy Matrices and Financial Covariances
Burda, Zdzislaw; Jurkiewicz, Jerzy; Nowak, Maciej A.; Papp, Gabor; Zahed, Ismail
2003-10-01
In a given market, financial covariances capture the intra-stock correlations and can be used to address statistically the bulk nature of the market as a complex system. We provide a statistical analysis of three S&P 500 covariances with evidence for raw tail distributions. We study the stability of these tails against reshuffling for the S&P 500 data and show that the covariance with the strongest tails is robust, with a spectral density in remarkable agreement with random Lévy matrix theory. We study the inverse participation ratio for the three covariances. The strong localization observed at both ends of the spectral density is analogous to the localization exhibited in the random Lévy matrix ensemble. We discuss two competitive mechanisms responsible for the occurrence of an extensive and delocalized eigenvalue at the edge of the spectrum: (a) the Lévy character of the entries of the correlation matrix, and (b) a sort of off-diagonal order induced by underlying inter-stock correlations. (b) can be destroyed by reshuffling, while (a) cannot. We show that the stocks with the largest scattering are the least susceptible to correlations, and likely candidates for the localized states. We introduce a simple model for price fluctuations which captures the behavior of the S&P 500 covariances. It may be of importance for asset diversification.
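The inverse participation ratio used above quantifies how many components of an eigenvector are effectively nonzero. A minimal sketch on a random correlation matrix (illustrative only, not the S&P 500 data):

```python
import numpy as np

def inverse_participation_ratios(C):
    """IPR of each eigenvector of a symmetric (covariance/correlation) matrix.
    For a normalized eigenvector v, IPR = sum(v_i^4): it approaches 1 for a
    fully localized state and ~1/N for a fully delocalized one."""
    _, vecs = np.linalg.eigh(C)
    return np.sum(vecs**4, axis=0)

# Correlation matrix of N i.i.d. return series: the bulk eigenvectors are
# delocalized, so all IPRs sit near 3/N rather than near 1
rng = np.random.default_rng(4)
N, T = 50, 500
returns = rng.normal(size=(T, N))
C = np.corrcoef(returns, rowvar=False)
ipr_bulk = inverse_participation_ratios(C)
```

Localized states, as discussed in the abstract, would show up as IPR values far above this delocalized baseline.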
Covariance evaluation work at LANL
Energy Technology Data Exchange (ETDEWEB)
Kawano, Toshihiko [Los Alamos National Laboratory; Talou, Patrick [Los Alamos National Laboratory; Young, Phillip [Los Alamos National Laboratory; Hale, Gerald [Los Alamos National Laboratory; Chadwick, M B [Los Alamos National Laboratory; Little, R C [Los Alamos National Laboratory
2008-01-01
Los Alamos evaluates covariances for the nuclear data library, mainly for actinides above the resonance regions and light elements in the entire energy range. We also develop techniques to evaluate the covariance data, like Bayesian and least-squares fitting methods, which are important to explore the uncertainty information on different types of physical quantities such as elastic scattering angular distributions or prompt neutron fission spectra. This paper summarizes our current activities of the covariance evaluation work at LANL, including the actinide and light element data mainly for the criticality safety study and transmutation technology. The Bayesian method based on the Kalman filter technique, which combines uncertainties in the theoretical model and experimental data, is discussed.
Stably stratified magnetized stars in general relativity
Yoshida, Shijun; Shibata, Masaru
2012-01-01
We construct magnetized stars composed of a fluid stably stratified by entropy gradients in the framework of general relativity, assuming ideal magnetohydrodynamics and employing a barotropic equation of state. We first revisit the basic equations for describing stably-stratified stationary axisymmetric stars containing both poloidal and toroidal magnetic fields. As sample models, the magnetized stars considered by Ioka and Sasaki (2004), inside which the magnetic fields are confined, are modified into stably stratified ones. The magnetized stars newly constructed in this study are believed to be more stable than the existing relativistic models because they have both poloidal and toroidal magnetic fields of comparable strength, and magnetic buoyancy instabilities near the surface of the star, which can be stabilized by the stratification, are suppressed.
Directory of Open Access Journals (Sweden)
Fritjof Luethje
2017-01-01
Very high spatial resolution (VHSR) stereo-imagery-derived digital surface models (DSM) can be used to generate digital elevation models (DEM). Filtering algorithms and triangular irregular network (TIN) densification are the most common approaches. Most filter-based techniques focus on image-smoothing. We propose a new approach which makes use of integrated object-based image analysis (OBIA) techniques. An initial land cover classification is followed by stratified land cover ground point sample detection, using object-specific features to enhance the sampling quality. The detected ground point samples serve as the basis for the interpolation of the DEM. A regional uncertainty index (RUI) is calculated to express the quality of the generated DEM in regard to the DSM, based on the number of samples per land cover object. The results of our approach are compared to a high resolution Light Detection and Ranging (LiDAR) DEM, and a high level of agreement is observed—especially for non-vegetated and scarcely-vegetated areas. Results show that the accuracy of the DEM is highly dependent on the quality of the initial DSM and—in accordance with the RUI—differs between the different land cover classes.
Udina, Mireia; Sun, Jielun; Kosović, Branko; Soler, Maria Rosa
2016-11-01
Following Sun et al. (J Atmos Sci 69(1):338-351, 2012), vertical variations of turbulent mixing in stably stratified and neutral environments as functions of wind speed are investigated using the large-eddy simulation capability in the Weather Research and Forecasting model. The simulations with a surface cooling rate for the stable boundary layer (SBL) and a range of geostrophic winds for both stable and neutral boundary layers are compared with observations from the Cooperative Atmosphere-Surface Exchange Study 1999 (CASES-99). To avoid the uncertainty of the subgrid scheme, the investigation focuses on the vertical domain when the ratio between the subgrid and the resolved turbulence is small. The results qualitatively capture the observed dependence of turbulence intensity on wind speed under neutral conditions; however, its vertical variation is affected by the damping layer used in absorbing undesirable numerical waves at the top of the domain as a result of relatively large neutral turbulent eddies. The simulated SBL fails to capture the observed temperature variance with wind speed and the observed transition from the SBL to the near-neutral atmosphere with increasing wind speed, although the vertical temperature profile of the simulated SBL resembles the observed profile. The study suggests that molecular thermal conduction responsible for the thermal coupling between the surface and atmosphere cannot be parameterized through the Monin-Obukhov bulk relation for turbulent heat transfer by applying the surface radiation temperature, as is common practice when modelling air-surface interactions.
Directory of Open Access Journals (Sweden)
P. D. Williams
2004-01-01
We report on a numerical study of the impact of short, fast inertia-gravity waves on the large-scale, slowly-evolving flow with which they co-exist. A nonlinear quasi-geostrophic numerical model of a stratified shear flow is used to simulate, at reasonably high resolution, the evolution of a large-scale mode which grows due to baroclinic instability and equilibrates at finite amplitude. Ageostrophic inertia-gravity modes are filtered out of the model by construction, but their effects on the balanced flow are incorporated using a simple stochastic parameterization of the potential vorticity anomalies which they induce. The model simulates a rotating, two-layer annulus laboratory experiment, in which we recently observed systematic inertia-gravity wave generation by an evolving, large-scale flow. We find that the impact of the small-amplitude stochastic contribution to the potential vorticity tendency, on the model balanced flow, is generally small, as expected. In certain circumstances, however, the parameterized fast waves can exert a dominant influence. In a flow which is baroclinically-unstable to a range of zonal wavenumbers, and in which there is a close match between the growth rates of the multiple modes, the stochastic waves can strongly affect wavenumber selection. This is illustrated by a flow in which the parameterized fast modes dramatically re-partition the probability-density function for equilibrated large-scale zonal wavenumber. In a second case study, the stochastic perturbations are shown to force spontaneous wavenumber transitions in the large-scale flow, which do not occur in their absence. These phenomena are due to a stochastic resonance effect. They add to the evidence that deterministic parameterizations in general circulation models, of subgrid-scale processes such as gravity wave drag, cannot always adequately capture the full details of the nonlinear interaction.
Yang, Hanyu; Cranford, James A; Li, Runze; Buu, Anne
2015-02-20
This study proposes a generalized time-varying effect model that can be used to characterize a discrete longitudinal covariate process and its time-varying effect on a later outcome that may be discrete. The proposed method can be applied to examine two important research questions for daily process data: measurement reactivity and predictive validity. We demonstrate these applications using health risk behavior data collected from alcoholic couples through an interactive voice response system. The statistical analysis results show that the effect of measurement reactivity may only be evident in the first week of interactive voice response assessment. Moreover, the level of urge to drink before measurement reactivity takes effect may be more predictive of a later depression outcome. Our simulation study shows that the performance of the proposed method improves with larger sample sizes, more time points, and smaller proportions of zeros in the binary longitudinal covariate.
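A time-varying effect model replaces a constant regression coefficient with a smooth function of time, estimated via a basis expansion. A minimal sketch for a continuous outcome with a global polynomial basis (the paper's method uses penalized splines and handles discrete outcomes; everything below is synthetic):

```python
import numpy as np

def fit_time_varying_effect(t, x, y, degree=3):
    """Estimate a smoothly time-varying coefficient beta(t) in
    y = beta(t) * x + noise by expanding beta(t) in a polynomial basis."""
    B = np.vander(t, degree + 1)          # basis functions evaluated at observation times
    design = B * x[:, None]               # each column: basis_k(t) * x
    coef = np.linalg.lstsq(design, y, rcond=None)[0]
    return lambda tt: np.vander(np.atleast_1d(tt), degree + 1) @ coef

rng = np.random.default_rng(5)
n = 3000
t = rng.uniform(0, 1, n)                  # observation times, scaled to [0, 1]
x = rng.normal(size=n)                    # longitudinal covariate
beta_true = np.sin(2 * np.pi * t)         # the covariate's effect waxes and wanes
y = beta_true * x + rng.normal(scale=0.5, size=n)

beta_hat = fit_time_varying_effect(t, x, y)
```

Plotting beta_hat over a time grid recovers the waxing-and-waning effect, which is the kind of pattern (e.g., measurement reactivity fading after the first week) the proposed model is designed to detect.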
Bergshoeff, E.; Pope, C.N.; Stelle, K.S.
1990-01-01
We discuss the notion of higher-spin covariance in w∞ gravity. We show how a recently proposed covariant w∞ gravity action can be obtained from non-chiral w∞ gravity by making field redefinitions that introduce new gauge-field components with corresponding new gauge transformations.
Metzger, S.; Xu, K.; Desai, A. R.; Taylor, J. R.; Kljun, N.; Schneider, D.; Kampe, T. U.; Fox, A. M.
2013-12-01
Process-based models, such as land surface models (LSMs), allow insight into the spatio-temporal distribution of stocks and the exchange of nutrients, trace gases etc. among environmental compartments. More recently, LSMs have also become capable of assimilating time-series of in-situ reference observations. This enables calibrating the underlying functional relationships to site-specific characteristics, or constraining the model results after each time-step in an attempt to minimize drift. The spatial resolution of LSMs is typically on the order of 10^2-10^4 km2, which is suitable for linking regional to continental scales and beyond. However, continuous in-situ observations of relevant stock and exchange variables, such as tower-based eddy-covariance (EC) fluxes, represent orders of magnitude smaller spatial scales (10^-6-10^1 km2). During data assimilation, this significant gap in spatial representativeness is typically either neglected or side-stepped using simple tiling approaches. Moreover, at 'coarse' resolutions, a single LSM evaluation per time-step implies linearity among the underlying functional relationships as well as among the sub-grid land cover fractions. This, however, is not warranted for land-atmosphere exchange processes over more complex terrain. Hence, it is desirable to explicitly consider spatial variability at LSM sub-grid scales. Here we present a procedure that determines from a single EC tower the spatially integrated probability density function (PDF) of the surface-atmosphere exchange for individual land covers. These PDFs allow quantifying the expected value, as well as spatial variability over a target domain, can be assimilated in tiling-capable LSMs, and mitigate linearity assumptions at 'coarse' resolutions. The procedure is based on the extraction and extrapolation of environmental response functions (ERFs), for which a technically oriented companion poster is submitted. In short, the subsequent steps are: (i) Time
Messina, Paula V; Besada-Porto, Jose Miguel; González-Díaz, Humberto; Ruso, Juan M
2015-11-10
Studies of the self-aggregation of binary systems are of both theoretical and practical importance. They provide an opportunity to investigate the influence of the molecular structure of the hydrophobe on the nonideality of mixing. On the other hand, linear free energy relationship (LFER) models, such as Hansch's equations, may be used to predict the properties of chemical compounds such as drugs or surfactants. However, the task becomes more difficult once we want to simultaneously predict the effect of perturbations on multiple output properties of binary systems under multiple input experimental boundary conditions (b(j)). As a consequence, we need computational chemistry or chemoinformatics models that may help us to predict different properties of the autoaggregation process of mixed surfactants under multiple conditions. In this work, we have developed the first model that combines perturbation theory (PT) and LFER ideas. The model uses covariance PT operators (CPTOs) as input. CPTOs are calculated as the difference between covariance ΔCov((i)μ(k)) functions before and after multiple perturbations in the binary system. In turn, covariances are calculated as the product of two Box-Jenkins operators (BJOs). BJOs are used to measure the deviation of the structure of different chemical compounds from a set of molecules measured under a given subset of experimental conditions. The best CPT-LFER model found predicted the effects of 25,000 perturbations on 9 different properties of binary systems. We also report experimental studies of different properties of the binary system formed by sodium glycodeoxycholate and didodecyldimethylammonium bromide (NaGDC-DDAB). Last, we used our CPT-LFER model to carry out a 1000 data point simulation of the properties of the NaGDC-DDAB system under conditions not studied experimentally.
Forecasting Covariance Matrices: A Mixed Frequency Approach
DEFF Research Database (Denmark)
Halbleib, Roxana; Voev, Valeri
This paper proposes a new method for forecasting covariance matrices of financial returns. The model mixes volatility forecasts from a dynamic model of daily realized volatilities estimated with high-frequency data with correlation forecasts based on daily data. This new approach allows for flexible dependence patterns for volatilities and correlations, and can be applied to covariance matrices of large dimensions. The separate modeling of volatility and correlation forecasts considerably reduces the estimation and measurement error implied by the joint estimation and modeling of covariance matrix dynamics. Our empirical results show that the new mixing approach provides superior forecasts compared to multivariate volatility specifications using single sources of information.
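The mixing idea described above can be sketched very simply: a covariance forecast is assembled from separately produced volatility forecasts (e.g. from a realized-volatility model on high-frequency data) and a correlation matrix forecast (e.g. from a daily-data model) as Σ = D R D, where D is the diagonal matrix of volatility forecasts. The sketch below is a minimal illustration of that decomposition, not the authors' estimation procedure; all numbers are hypothetical.

```python
import numpy as np

def combine_forecasts(vol_forecasts, corr_forecast):
    """Combine per-asset volatility forecasts with a correlation matrix
    forecast into a covariance matrix forecast: Sigma = D * R * D,
    where D = diag(vol_forecasts). Illustrative sketch only."""
    D = np.diag(vol_forecasts)
    return D @ corr_forecast @ D

# Hypothetical forecasts for three assets.
vols = np.array([0.012, 0.020, 0.015])   # forecast daily volatilities
R = np.array([[1.0, 0.3, 0.1],
              [0.3, 1.0, 0.4],
              [0.1, 0.4, 1.0]])          # forecast correlation matrix
sigma = combine_forecasts(vols, R)       # covariance matrix forecast
```

As long as R is a valid correlation matrix, the resulting Σ is symmetric and positive definite, with diagonal entries equal to the squared volatility forecasts.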
Energy Technology Data Exchange (ETDEWEB)
Verdiere, N.; Suri, C. [Laboratoire de mecanique appliquee, 25 - Besancon (France)
1996-01-01
Composite materials are used in the manufacture of water transport pipework for use in PWRs. Estimation of their life expectancy relies on long and costly tests (ASTM D2992B standard). It would be extremely advantageous to have another method relying only on short laboratory tests which could be based on a mechanical behaviour and damage model. For several years, the Laboratoire de Mecanique Appliquee de Besancon has been developing a mechanical behaviour model for composite material tubes under different types of multiaxial stresses. However, this model does not take into account fatigue behaviour. We therefore needed to find out how this type of stress could be incorporated into the model. To this end, research was undertaken in the form of a thesis (by E. Joseph) both to perfect the multiaxial fatigue stress testing machines and to take this type of behaviour into account in the mechanical model. This study covered glass fibre/epoxy resin composite material tubes and allowed their behaviour to be modelled. An important part of the work concerned the instrumentation and adaptation of test machines which hitherto did not exist, so that the research could be carried out. For each of the stress axes (traction, internal pressure without vacuum effect (Σ^zz = 0) and internal pressure with vacuum effect (Σ^zz = ½ Σ^θθ)), instantaneous behaviour was studied. Three stress levels and frequency values were used to define the fatigue behaviour. (authors). 23 refs., 41 figs., 5 tabs.
Covariant diagrams for one-loop matching
Energy Technology Data Exchange (ETDEWEB)
Zhang, Zhengkang [Michigan Univ., Ann Arbor, MI (United States). Michigan Center for Theoretical Physics; Deutsches Elektronen-Synchrotron (DESY), Hamburg (Germany)
2016-10-15
We present a diagrammatic formulation of recently revived covariant functional approaches to one-loop matching from an ultraviolet (UV) theory to a low-energy effective field theory. Various terms following from a covariant derivative expansion (CDE) are represented by diagrams which, unlike conventional Feynman diagrams, involve gauge-covariant quantities and are thus dubbed "covariant diagrams." The use of covariant diagrams helps organize and simplify one-loop matching calculations, which we illustrate with examples. Of particular interest is the derivation of UV model-independent universal results, which reduce matching calculations of specific UV models to applications of master formulas. We show how such a derivation can be done in a more concise manner than in the previous literature, and discuss how additional structures that are not directly captured by existing universal results, including mixed heavy-light loops, open covariant derivatives, and mixed statistics, can be easily accounted for.
Institute of Scientific and Technical Information of China (English)
伍长春; 张润楚
2006-01-01
In stratified survey sampling, we sometimes have complete auxiliary information. One of the fundamental questions is how to use the complete auxiliary information effectively at the estimation stage. In this paper, we extend the model-calibration method to obtain estimators of the finite population mean using complete auxiliary information from stratified sampling survey data. We show that the resulting estimators effectively use auxiliary information at the estimation stage and possess a number of attractive features, such as being asymptotically design-unbiased irrespective of the working model and approximately model-unbiased under the model. When a linear working model is used, the resulting estimators reduce to the usual calibration estimator (or GREG).
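For the linear-working-model case mentioned above, the calibration (GREG) estimator of a population total adjusts the design-weighted (Horvitz-Thompson) estimate using the known population total of the auxiliary variable: Ŷ_GREG = Ŷ_HT + (T_x − T̂_x)·B̂. The sketch below illustrates this standard form with a single auxiliary variable; the variable names and numbers are hypothetical and this is not the paper's extended model-calibration estimator.

```python
import numpy as np

def greg_estimator(y, x, weights, x_pop_total):
    """GREG estimator of the population total of y under a linear working
    model y ~ x (no intercept, for brevity). `weights` are design weights
    (e.g. N_h / n_h within strata); `x_pop_total` is the known population
    total of the auxiliary variable x. Illustrative sketch only."""
    y_ht = np.sum(weights * y)                     # Horvitz-Thompson total of y
    x_ht = np.sum(weights * x)                     # Horvitz-Thompson total of x
    b = np.sum(weights * x * y) / np.sum(weights * x * x)  # WLS slope
    return y_ht + (x_pop_total - x_ht) * b         # calibration adjustment

# Hypothetical stratified sample: values, auxiliary variable, design weights.
y = np.array([2.0, 4.1, 7.9])
x = np.array([1.0, 2.0, 4.0])
w = np.array([10.0, 10.0, 5.0])
est = greg_estimator(y, x, w, x_pop_total=100.0)
```

A useful sanity check on the calibration property: if y is exactly proportional to x, the estimator recovers the exact population total regardless of the sample drawn.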
The covariate-adjusted frequency plot.
Holling, Heinz; Böhning, Walailuck; Böhning, Dankmar; Formann, Anton K
2016-04-01
Count data arise in numerous fields of interest. Analysis of these data frequently requires distributional assumptions. Although the graphical display of a fitted model is straightforward in the univariate scenario, this becomes more complex if covariate information needs to be included in the model. Stratification is one way to proceed, but has its limitations if the covariate has many levels or the number of covariates is large. The article suggests a marginal method which works even in the case that all possible covariate combinations are different (i.e. no covariate combination occurs more than once). For each covariate combination the fitted model value is computed and then summed over the entire data set. The technique is quite general and works with all count distributional models as well as with all forms of covariate modelling. The article provides illustrations of the method for various situations and also shows that the proposed estimator as well as the empirical count frequency are consistent with respect to the same parameter.
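The marginalization step described in the abstract, compute the fitted probability of each count value for every covariate combination, then sum over the data set, can be sketched for a Poisson regression. The fitted means below are hypothetical stand-ins for the output of a real fit; the point is the summation over observations that yields an expected frequency comparable to the observed one.

```python
import numpy as np
from scipy.stats import poisson

def covariate_adjusted_frequencies(mu, max_count):
    """Covariate-adjusted expected frequencies for a fitted count model.
    `mu` holds each observation's fitted mean (one per covariate
    combination, possibly all distinct). For each count value k, the
    fitted probability P(Y = k | x_i) is computed per observation and
    summed over the data set."""
    counts = np.arange(max_count + 1)
    # shape (n_obs, max_count + 1): P(Y = k) for each observation i
    probs = poisson.pmf(counts[None, :], mu[:, None])
    return probs.sum(axis=0)

# Hypothetical fitted means from a Poisson regression with covariates.
mu = np.array([0.5, 1.2, 2.3, 0.9, 3.1])
freq = covariate_adjusted_frequencies(mu, max_count=15)
```

The resulting vector can be plotted against the observed count frequencies; by construction its entries are non-negative and sum (up to truncation at `max_count`) to the sample size.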
Thepparat, Mongkol; Boonkum, Wuttigrai; Duangjinda, Monchai; Tumwasorn, Sornthep; Nakavisut, Sansak; Thongchumroon, Thumrong
2015-07-01
The objectives of this study were to compare covariance functions (CF) and estimate the heritability of milk yield from test-day records among exotic (Saanen, Anglo-Nubian, Toggenburg and Alpine) and crossbred goats (Thai native and exotic breeds), using a random regression model. A total of 1472 test-day milk yield records were used, collected from 112 does between 2003 and 2006. The CF of the study were the Wilmink function, second- and third-order Legendre polynomials, and linear splines with 4 knots located at 5, 25, 90 and 155 days in milk (SP25-90) and at 5, 35, 95 and 155 days in milk (SP35-95). Variance components were estimated by the restricted maximum likelihood (REML) method. Goodness of fit, the Akaike information criterion (AIC), percentage of squared bias (PSB), mean square error (MSE), and the empirical correlation (RHO) between observed and predicted values were used to compare models. The results showed that the CF had an impact on (co)variance estimation in random regression models (RRM). The RRM with splines with 4 knots located at 5, 25, 90 and 155 days in milk had the lowest AIC, PSB and MSE, and the highest RHO. The heritability estimated throughout lactation obtained with this model ranged from 0.13 to 0.23. © 2014 Japanese Society of Animal Science.
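In a random regression model, the covariance between test-day records at days in milk t1 and t2 is implied by the basis functions and the coefficient (co)variance matrix: Cov(t1, t2) = φ(t1)' K φ(t2). The sketch below evaluates this for a second-order Legendre polynomial basis; the matrix K is hypothetical (it is not from this study) and the days-in-milk range follows the abstract.

```python
import numpy as np
from numpy.polynomial import legendre

def rr_covariance(t1, t2, K, t_min=5, t_max=155):
    """Covariance of test-day records at days in milk t1, t2 implied by a
    random regression model with Legendre polynomial basis:
    Cov(t1, t2) = phi(t1)' K phi(t2), where K is the (co)variance matrix
    of the random regression coefficients (hypothetical here)."""
    def phi(t):
        # map days in milk to [-1, 1], evaluate basis of order len(K) - 1
        s = 2.0 * (t - t_min) / (t_max - t_min) - 1.0
        return legendre.legvander([s], K.shape[0] - 1)[0]
    return phi(t1) @ K @ phi(t2)

# Hypothetical coefficient (co)variance for a 3-coefficient (2nd-order) model.
K = np.array([[2.0, 0.3, 0.1],
              [0.3, 0.8, 0.0],
              [0.1, 0.0, 0.2]])
v90 = rr_covariance(90, 90, K)   # genetic/permanent variance at day 90
c = rr_covariance(25, 90, K)     # covariance between day 25 and day 90
```

Evaluating this function on a grid of (t1, t2) pairs gives the full (co)variance surface over the lactation, from which day-specific heritabilities are derived once residual variances are added.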
Covariant Formulations of Superstring Theories.
Mikovic, Aleksandar Radomir
1990-01-01
Chapter 1 contains a brief introduction to the subject of string theory, and tries to motivate the study of superstrings and covariant formulations. Chapter 2 describes the Green-Schwarz formulation of the superstrings. The Hamiltonian and BRST structure of the theory is analysed in the case of the superparticle. Implications for the superstring case are discussed. Chapter 3 describes Siegel's formulation of the superstring, which contains only the first class constraints. It is shown that the physical spectrum coincides with that of the Green-Schwarz formulation. In chapter 4 we analyse the BRST structure of Siegel's formulation. We show that the BRST charge has the wrong cohomology, and propose a modification, called first ilk, which gives the right cohomology. We also propose another superparticle model, called second ilk, which has infinitely many coordinates and constraints. We construct the complete BRST charge for it, and show that it gives the correct cohomology. In chapter 5 we analyse the properties of the covariant vertex operators and the corresponding S-matrix elements by using Siegel's formulation. We conclude that knowledge of the ghosts is necessary, even at the tree level, in order to obtain the correct S-matrix. In chapter 6 we attempt to calculate the superstring loops in a covariant gauge. We calculate the vacuum-to-vacuum amplitude, which is also the cosmological constant. We show that it vanishes to all loop orders, under the assumption that the free covariant gauge-fixed action exists. In chapter 7 we present our conclusions, and briefly discuss the random lattice approach to string theory as a possible way of resolving the problem of covariant quantization and the nonperturbative definition of superstrings.
George, Steven Z.
2015-01-01
Background The effectiveness of risk stratification for low back pain (LBP) management has not been demonstrated in outpatient physical therapy settings. Objective The purposes of this study were: (1) to assess implementation of a stratified care approach for LBP management by evaluating short-term treatment effects and (2) to determine feasibility of conducting a larger-scale study. Design This was a 2-phase, preliminary study. Methods In phase 1, clinicians were randomly assigned to receive standard (n=6) or stratified care (n=6) training. Stratified care training included 8 hours of content focusing on psychologically informed practice. Changes in LBP attitudes and beliefs were assessed using the Pain Attitudes and Beliefs Scale for Physiotherapists (PABS-PT) and the Health Care Providers Pain and Impairment Relationship Scale (HC-PAIRS). In phase 2, clinicians receiving the stratified care training were instructed to incorporate those strategies in their practice and 4-week patient outcomes were collected using a numerical pain rating scale (NPRS), and the Oswestry Disability Index (ODI). Study feasibility was assessed to identify potential barriers for completion of a larger-scale study. Results In phase 1, minimal changes were observed for PABS-PT and HC-PAIRS scores for standard care clinicians (Cohen d=0.00–0.28). Decreased biomedical (−4.5±2.5 points, d=1.08) and increased biopsychosocial (+5.5±2.0 points, d=2.86) treatment orientations were observed for stratified care clinicians, with these changes sustained 6 months later on the PABS-PT. In phase 2, patients receiving stratified care (n=67) had greater between-group improvements in NPRS (0.8 points; 95% confidence interval=0.1, 1.5; d=0.40) and ODI (8.9% points; 95% confidence interval=4.1, 13.6; d=0.76) scores compared with patients receiving standard physical therapy care (n=33). Limitations In phase 2, treatment was not randomly assigned, and therapist adherence to treatment recommendations was
Beneciuk, Jason M; George, Steven Z
2015-08-01
The effectiveness of risk stratification for low back pain (LBP) management has not been demonstrated in outpatient physical therapy settings. The purposes of this study were: (1) to assess implementation of a stratified care approach for LBP management by evaluating short-term treatment effects and (2) to determine feasibility of conducting a larger-scale study. This was a 2-phase, preliminary study. In phase 1, clinicians were randomly assigned to receive standard (n=6) or stratified care (n=6) training. Stratified care training included 8 hours of content focusing on psychologically informed practice. Changes in LBP attitudes and beliefs were assessed using the Pain Attitudes and Beliefs Scale for Physiotherapists (PABS-PT) and the Health Care Providers Pain and Impairment Relationship Scale (HC-PAIRS). In phase 2, clinicians receiving the stratified care training were instructed to incorporate those strategies in their practice and 4-week patient outcomes were collected using a numerical pain rating scale (NPRS), and the Oswestry Disability Index (ODI). Study feasibility was assessed to identify potential barriers for completion of a larger-scale study. In phase 1, minimal changes were observed for PABS-PT and HC-PAIRS scores for standard care clinicians (Cohen d=0.00-0.28). Decreased biomedical (-4.5±2.5 points, d=1.08) and increased biopsychosocial (+5.5±2.0 points, d=2.86) treatment orientations were observed for stratified care clinicians, with these changes sustained 6 months later on the PABS-PT. In phase 2, patients receiving stratified care (n=67) had greater between-group improvements in NPRS (0.8 points; 95% confidence interval=0.1, 1.5; d=0.40) and ODI (8.9% points; 95% confidence interval=4.1, 13.6; d=0.76) scores compared with patients receiving standard physical therapy care (n=33). In phase 2, treatment was not randomly assigned, and therapist adherence to treatment recommendations was not monitored. This study was not adequately powered to
Covariant electromagnetic field lines
Hadad, Y.; Cohen, E.; Kaminer, I.; Elitzur, A. C.
2017-08-01
Faraday introduced electric field lines as a powerful tool for understanding the electric force, and these field lines are still used today in classrooms and textbooks teaching the basics of electromagnetism within the electrostatic limit. However, despite attempts at generalizing this concept beyond the electrostatic limit, a fully relativistic field line theory still appears to be missing. In this work, we propose such a theory and define covariant electromagnetic field lines that naturally extend electric field lines to relativistic systems and general electromagnetic fields. We derive a closed-form formula for the curvature of the field lines in the vicinity of a charge, and show that it is related to the world line of the charge. This demonstrates how the kinematics of a charge can be derived from the geometry of the electromagnetic field lines. Such a theory may also provide new tools for modeling and analyzing electromagnetic phenomena, and may entail new insights regarding long-standing problems such as radiation reaction and self-force. In particular, the curvature of the electromagnetic field lines has the attractive property of being non-singular everywhere, thus eliminating all self-field singularities without using renormalization techniques.
Angel, Yoseline
2016-10-25
Hyperspectral remote sensing images are usually affected by atmospheric conditions such as clouds and their shadows, which represent a contamination of reflectance data and complicate the extraction of biophysical variables used to monitor phenological cycles of crops. This paper explores a cloud removal approach based on reflectance prediction using multi-temporal data and spatio-temporal statistical models. In particular, a covariance model that captures the behavior of spatial and temporal components in the data simultaneously (i.e. non-separable) is considered. Eight weekly images collected from the Hyperion hyper-spectrometer instrument over an agricultural region of Saudi Arabia were used to reconstruct a scene with cloud-affected pixels over a center-pivot crop. A subset of reflectance values of cloud-free pixels from 50 bands in the spectral range from 426.82 to 884.7 nm at each date were used as input to fit a parametric family of non-separable and stationary spatio-temporal covariance functions. Applying simple kriging as an interpolator, cloud-affected pixels were replaced by cloud-free predicted values per band, obtaining their respective predicted spectral profiles at the same time. An exercise of reconstructing simulated cloudy pixels in a different swath was conducted to assess the model accuracy, achieving root mean square error (RMSE) values per band less than or equal to 3%. The spatial coherence of the results was also checked through absolute error distribution maps, demonstrating their consistency.
Angel, Yoseline
2016-09-26
Hyperspectral remote sensing images are usually affected by atmospheric conditions such as clouds and their shadows, which represent a contamination of reflectance data and complicate the extraction of biophysical variables used to monitor phenological cycles of crops. This paper explores a cloud removal approach based on reflectance prediction using multi-temporal data and spatio-temporal statistical models. In particular, a covariance model that captures the behavior of spatial and temporal components in the data simultaneously (i.e. non-separable) is considered. Eight weekly images collected from the Hyperion hyper-spectrometer instrument over an agricultural region of Saudi Arabia were used to reconstruct a scene with cloud-affected pixels over a center-pivot crop. A subset of reflectance values of cloud-free pixels from 50 bands in the spectral range from 426.82 to 884.7 nm at each date were used as input to fit a parametric family of non-separable and stationary spatio-temporal covariance functions. Applying simple kriging as an interpolator, cloud-affected pixels were replaced by cloud-free predicted values per band, obtaining their respective predicted spectral profiles at the same time. An exercise of reconstructing simulated cloudy pixels in a different swath was conducted to assess the model accuracy, achieving root mean square error (RMSE) values per band less than or equal to 3%. The spatial coherence of the results was also checked through absolute error distribution maps, demonstrating their consistency.
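The two ingredients of the approach above, a non-separable stationary spatio-temporal covariance and simple kriging, can be sketched compactly. The covariance below is of the Gneiting type, one common parametric family of non-separable models (the paper does not specify this exact family, and all parameter values and reflectance numbers here are hypothetical); simple kriging then predicts a cloud-affected pixel from cloud-free neighbours assuming a known constant mean.

```python
import numpy as np

def gneiting_cov(h, u, sigma2=1.0, a=1.0, c=1.0, alpha=1.0, beta=1.0):
    """Non-separable, stationary spatio-temporal covariance (Gneiting type):
    C(h, u) = sigma2/(a|u|^(2a) + 1) * exp(-c*h/(a|u|^(2a) + 1)^(b/2)).
    h: spatial lag, u: temporal lag. Parameters are hypothetical."""
    psi = a * np.abs(u) ** (2 * alpha) + 1.0
    return sigma2 / psi * np.exp(-c * h / psi ** (beta / 2.0))

def simple_kriging(obs_coords, obs_times, obs_vals, pred_coord, pred_time, mean):
    """Simple kriging of a cloud-affected pixel from cloud-free observations,
    using the covariance above and a known (estimated) constant mean."""
    n = len(obs_vals)
    C = np.empty((n, n))
    for i in range(n):
        for j in range(n):
            h = np.linalg.norm(obs_coords[i] - obs_coords[j])
            C[i, j] = gneiting_cov(h, obs_times[i] - obs_times[j])
    c0 = np.array([gneiting_cov(np.linalg.norm(oc - pred_coord), ot - pred_time)
                   for oc, ot in zip(obs_coords, obs_times)])
    w = np.linalg.solve(C, c0)              # simple kriging weights
    return mean + w @ (obs_vals - mean)

# Hypothetical cloud-free reflectance observations (x, y) at weekly times.
coords = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
times = np.array([0.0, 0.0, 1.0, 1.0])
vals = np.array([0.21, 0.23, 0.22, 0.24])
pred = simple_kriging(coords, times, vals, np.array([0.5, 0.5]), 0.5, mean=0.22)
```

Because the covariance model is positive definite, kriging is an exact interpolator: predicting at an observed space-time location returns the observed value.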
How stratified is mantle convection?
Puster, Peter; Jordan, Thomas H.
1997-04-01
We quantify the flow stratification in the Earth's mid-mantle (600-1500 km) in terms of a stratification index for the vertical mass flux, Sƒ(z) = 1 − ƒ(z)/ƒref(z), in which the reference value ƒref(z) approximates the local flux at depth z expected for unstratified convection (Sƒ = 0). Although this flux stratification index cannot be directly constrained by observations, we show from a series of two-dimensional convection simulations that its value can be related to a thermal stratification index ST(z) defined in terms of the radial correlation length of the temperature-perturbation field δT(z, Ω). ST is a good proxy for Sƒ at low stratifications (SƒUniformitarian Principle. The bound obtained here from global tomography is consistent with local seismological evidence for slab flux into the lower mantle; however, the total material flux has to be significantly greater (by a factor of 2-3) than that due to slabs alone. A stratification index, Sƒ ≲ 0.2, is sufficient to exclude many stratified convection models still under active consideration, including most forms of chemical layering between the upper and lower mantle, as well as the more extreme versions of avalanching convection governed by a strong endothermic phase change.
Kiriushcheva, N; Kuzmin, S V
2011-01-01
We argue that the field-parametrization dependence of Dirac's procedure for Hamiltonians with first-class constraints not only preserves covariance in covariant theories, but also, in non-covariant gauge theories, allows one to find the natural field parametrization in which the Hamiltonian formulation automatically leads to the simplest gauge symmetry.
DEFF Research Database (Denmark)
Collalti, A.; Marconi, S.; Ibrom, Andreas;
2016-01-01
This study evaluates the performance of the new version (v.5.1) of the 3D-CMCC Forest Ecosystem Model (FEM) in simulating gross primary productivity (GPP), against eddy covariance GPP data for 10 FLUXNET forest sites across Europe. A new carbon allocation module, coupled with both new phenological …, with the exception of the two Mediterranean sites. We find that 3D-CMCC FEM tends to better simulate the timing of inter-annual anomalies than their magnitude within the measurements' uncertainty. In six of the eight sites where data are available, the model reproduces the 2003 summer drought event well. Finally, for three … sites we evaluate whether a more accurate representation of forest structural characteristics (i.e. cohorts, forest layers) and species composition can improve model results. In two of the three sites, results reveal that the model slightly increases its performance, although, statistically speaking …
Schep, Daniel G; Zhao, Jianhua; Rubinstein, John L
2016-03-22
Rotary ATPases couple ATP synthesis or hydrolysis to proton translocation across a membrane. However, understanding proton translocation has been hampered by a lack of structural information for the membrane-embedded a subunit. The V/A-ATPase from the eubacterium Thermus thermophilus is similar in structure to the eukaryotic V-ATPase but has a simpler subunit composition and functions in vivo to synthesize ATP rather than pump protons. We determined the T. thermophilus V/A-ATPase structure by cryo-EM at 6.4 Å resolution. Evolutionary covariance analysis allowed tracing of the a subunit sequence within the map, providing a complete model of the rotary ATPase. Comparing the membrane-embedded regions of the T. thermophilus V/A-ATPase and eukaryotic V-ATPase from Saccharomyces cerevisiae allowed identification of the α-helices that belong to the a subunit and revealed the existence of previously unknown subunits in the eukaryotic enzyme. Subsequent evolutionary covariance analysis enabled construction of a model of the a subunit in the S. cerevisiae V-ATPase that explains numerous biochemical studies of that enzyme. Comparing the two a subunit structures determined here with a structure of the distantly related a subunit from the bovine F-type ATP synthase revealed a conserved pattern of residues, suggesting a common mechanism for proton transport in all rotary ATPases.
Frasinski, Leszek J.
2016-08-01
Recent technological advances in the generation of intense femtosecond pulses have made covariance mapping an attractive analytical technique. The laser pulses available are so intense that often thousands of ionisation and Coulomb explosion events will occur within each pulse. To understand the physics of these processes the photoelectrons and photoions need to be correlated, and covariance mapping is well suited for operating at the high counting rates of these laser sources. Partial covariance is particularly useful in experiments with x-ray free electron lasers, because it is capable of suppressing pulse fluctuation effects. A variety of covariance mapping methods is described: simple, partial (single- and multi-parameter), sliced, contingent and multi-dimensional. The relationship to coincidence techniques is discussed. Covariance mapping has been used in many areas of science and technology: inner-shell excitation and Auger decay, multiphoton and multielectron ionisation, time-of-flight and angle-resolved spectrometry, infrared spectroscopy, nuclear magnetic resonance imaging, stimulated Raman scattering, directional gamma ray sensing, welding diagnostics and brain connectivity studies (connectomics). This review gives practical advice for implementing the technique and interpreting the results, including its limitations and instrumental constraints. It also summarises recent theoretical studies, highlights unsolved problems and outlines a personal view on the most promising research directions.
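The core of covariance mapping is to correlate shot-by-shot signals, C(x, y) = ⟨XY⟩ − ⟨X⟩⟨Y⟩, and partial covariance additionally removes the spurious correlation induced by a fluctuating pulse parameter I via pcov(X, Y; I) = cov(X, Y) − cov(X, I) cov(I, Y) / var(I). The simulation below is a minimal illustration of that formula on synthetic data (all signal models and numbers are hypothetical, not taken from the review):

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulate shot-by-shot data in which a fluctuating pulse intensity I drives
# two ion signals, creating a spurious correlation between them.
n_shots = 20000
I = rng.gamma(5.0, 1.0, n_shots)            # fluctuating pulse parameter
X = 2.0 * I + rng.normal(0, 1, n_shots)     # ion yield at channel x
Y = 3.0 * I + rng.normal(0, 1, n_shots)     # ion yield at channel y

def cov(a, b):
    """Sample covariance <ab> - <a><b> over shots."""
    return np.mean(a * b) - np.mean(a) * np.mean(b)

# Simple covariance map element: large, because both channels track I.
c_xy = cov(X, Y)

# Partial covariance: subtract the correlation induced by the fluctuating
# parameter, leaving only genuinely correlated events (here, none).
pcov_xy = c_xy - cov(X, I) * cov(I, Y) / cov(I, I)
```

In a real experiment X and Y would be whole spectra rather than scalars, so `c_xy` and `pcov_xy` become two-dimensional maps; the suppression of fluctuation-induced features is exactly the effect exploited at x-ray free electron lasers.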
Covariant Bardeen perturbation formalism
Vitenti, S. D. P.; Falciano, F. T.; Pinto-Neto, N.
2014-05-01
In a previous work we obtained a set of necessary conditions for the linear approximation in cosmology. Here we discuss the relations of this approach with the so-called covariant perturbations. It is often argued in the literature that one of the main advantages of the covariant approach to describe cosmological perturbations is that the Bardeen formalism is coordinate dependent. In this paper we will reformulate the Bardeen approach in a completely covariant manner. For that, we introduce the notion of pure and mixed tensors, which yields an adequate language to treat both perturbative approaches in a common framework. We then stress that in the referred covariant approach, one necessarily introduces an additional hypersurface choice to the problem. Using our mixed and pure tensors approach, we are able to construct a one-to-one map relating the usual gauge dependence of the Bardeen formalism with the hypersurface dependence inherent to the covariant approach. Finally, through the use of this map, we define full nonlinear tensors that at first order correspond to the three known gauge invariant variables Φ, Ψ and Ξ, which are simultaneously foliation and gauge invariant. We then stress that the use of the proposed mixed tensors allows one to construct simultaneously gauge and hypersurface invariant variables at any order.
Bergstra, J A; van Vlijmen, S F M
2011-01-01
The terminology of sourcing, outsourcing and insourcing is developed in detail on the basis of the preliminary definitions of outsourcing and insourcing and related activities and competences as given in our three previous papers on business mereology, on the concept of a sourcement, and on outsourcing competence, respectively. Besides providing a more detailed semantic analysis, we will introduce, explain, and illustrate a number of additional concepts including: principal unit of a sourcement, theme of a sourcement, current sourcement, (un)stable sourcement, and sourcement transformation. A three-level terminology is designed: (i) factual level: operational facts that hold for sourcements including histories thereof, (ii) business level: roles and objectives of various parts of the factual level description, thus explaining each partner's business process and business objectives, (iii) contract level: specification of intended facts and intended business models as found at the business level. Orthogonal to th...
Covariant canonical quantization
Energy Technology Data Exchange (ETDEWEB)
Hippel, G.M. von [University of Regina, Department of Physics, Regina, Saskatchewan (Canada); Wohlfarth, M.N.R. [Universitaet Hamburg, Institut fuer Theoretische Physik, Hamburg (Germany)
2006-09-15
We present a manifestly covariant quantization procedure based on the de Donder-Weyl Hamiltonian formulation of classical field theory. This procedure agrees with conventional canonical quantization only if the parameter space is d=1 dimensional time. In d>1 quantization requires a fundamental length scale, and any bosonic field generates a spinorial wave function, leading to the purely quantum-theoretical emergence of spinors as a byproduct. We provide a probabilistic interpretation of the wave functions for the fields, and we apply the formalism to a number of simple examples. These show that covariant canonical quantization produces both the Klein-Gordon and the Dirac equation, while also predicting the existence of discrete towers of identically charged fermions with different masses. Covariant canonical quantization can thus be understood as a "first" or pre-quantization within the framework of conventional QFT. (orig.)
Covariant canonical quantization
Von Hippel, G M; Hippel, Georg M. von; Wohlfarth, Mattias N.R.
2006-01-01
We present a manifestly covariant quantization procedure based on the de Donder-Weyl Hamiltonian formulation of classical field theory. Covariant canonical quantization agrees with conventional canonical quantization only if the parameter space is d=1 dimensional time. In d>1 quantization requires a fundamental length scale, and any bosonic field generates a spinorial wave function, leading to the purely quantum-theoretical emergence of spinors as a byproduct. We provide a probabilistic interpretation of the wave functions for the fields, and apply the formalism to a number of simple examples. These show that covariant canonical quantization produces both the Klein-Gordon and the Dirac equation, while also predicting the existence of discrete towers of identically charged fermions with different masses.
Kuntoro, Hadiyan Yusuf; Indarto,
2015-01-01
In the chemical, petroleum and nuclear industries, pipelines are often used to transport fluids from one process site to another. Understanding the behaviour of the fluids inside the pipelines is the most important consideration for engineers and scientists. From previous studies, there are several two-phase flow patterns in horizontal pipes. One of them is the stratified flow pattern, which is characterized by the liquid flowing along the bottom of the pipe and the gas moving above it cocurrently. Other flow patterns are the slug and plug flow patterns. This kind of flow triggers damage in pipelines, such as corrosion, abrasion, and pipe bursting. Therefore, slug and plug flow patterns are undesirable in pipelines, and the flow is maintained at the stratified flow condition for safety reasons. In this paper, an analytical-based study on the experiment of the stratified flow pattern in a 26 mm i.d. horizontal pipe is presented. The experiment is performed to develop a high quality database of the stra...
Covariance Applications with Kiwi
Mattoon, C. M.; Brown, D.; Elliott, J. B.
2012-05-01
The Computational Nuclear Physics group at Lawrence Livermore National Laboratory (LLNL) is developing a new tool, named 'Kiwi', that is intended as an interface between the covariance data increasingly available in major nuclear reaction libraries (including ENDF and ENDL) and large-scale Uncertainty Quantification (UQ) studies. Kiwi is designed to integrate smoothly into large UQ studies, using the covariance matrix to generate multiple variations of nuclear data. The code has been tested using critical assemblies as a test case, and is being integrated into LLNL's quality assurance and benchmarking for nuclear data.
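Using a covariance matrix to generate multiple variations of nuclear data, as described above, amounts to sampling from a multivariate normal distribution around the evaluated values, commonly done via a Cholesky factor of the covariance matrix. The sketch below shows this generic sampling step; it is not the actual Kiwi implementation, and the 3-group cross section and covariance values are hypothetical.

```python
import numpy as np

def sample_variations(mean, cov, n_samples, seed=0):
    """Generate random variations of a nuclear data vector (e.g. cross
    sections on an energy grid) consistent with its covariance matrix,
    via a Cholesky factor. Generic covariance-driven sampling for UQ."""
    rng = np.random.default_rng(seed)
    L = np.linalg.cholesky(cov)                 # cov = L @ L.T
    z = rng.standard_normal((n_samples, len(mean)))
    return mean + z @ L.T

# Hypothetical 3-group cross section with a correlated covariance matrix.
mean = np.array([1.0, 0.8, 0.5])
cov = np.array([[0.010, 0.004, 0.001],
                [0.004, 0.008, 0.002],
                [0.001, 0.002, 0.005]])
draws = sample_variations(mean, cov, n_samples=50000)
```

Each row of `draws` is one perturbed data set that can be fed to a transport calculation; across many rows the sample mean and sample covariance reproduce the inputs, which is the consistency property a UQ pipeline relies on.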
Covariance Applications with Kiwi
Directory of Open Access Journals (Sweden)
Elliott J.B.
2012-05-01
The Computational Nuclear Physics group at Lawrence Livermore National Laboratory (LLNL) is developing a new tool, named 'Kiwi', that is intended as an interface between the covariance data increasingly available in major nuclear reaction libraries (including ENDF and ENDL) and large-scale Uncertainty Quantification (UQ) studies. Kiwi is designed to integrate smoothly into large UQ studies, using the covariance matrix to generate multiple variations of nuclear data. The code has been tested using critical assemblies as a test case, and is being integrated into LLNL's quality assurance and benchmarking for nuclear data.
Covariant Kaon Dynamics and Properties of Quasi-particle Models
Institute of Scientific and Technical Information of China (English)
王艳艳; 朱玉兰; 邢永忠; 郑玉明
2011-01-01
In the present paper, we briefly review the progress in the study of kaon production in heavy-ion collisions at intermediate and high energies and introduce the covariant kaon dynamics model. The collective flows of positively charged kaons and of the Λ hyperons produced in association with them are studied in the framework of this dynamics. It is shown that the differential directed flow of the K+ meson and the Λ hyperon can be reasonably reproduced in the covariant kaon dynamics model. The results calculated with a soft nuclear matter equation of state are in better agreement with the experimental data. Meanwhile, through a detailed comparison of the properties of the kaon quasi-particle models used in various transport models, the density dependence of the mass and energy of the kaon quasi-particle in the covariant kaon dynamics model is clarified, as is the influence of the motion of the surrounding nucleons on the basic properties of the kaon.
Electromagnetic waves in stratified media
Wait, James R; Fock, V A; Wait, J R
2013-01-01
International Series of Monographs in Electromagnetic Waves, Volume 3: Electromagnetic Waves in Stratified Media provides information pertinent to the electromagnetic waves in media whose properties differ in one particular direction. This book discusses the important feature of the waves that enables communications at global distances. Organized into 13 chapters, this volume begins with an overview of the general analysis for the electromagnetic response of a plane stratified medium comprising of any number of parallel homogeneous layers. This text then explains the reflection of electromagne
Multi Dimensional CTL and Stratified Datalog
Directory of Open Access Journals (Sweden)
Theodore Andronikos
2010-02-01
Full Text Available In this work we define Multi Dimensional CTL (MD-CTL in short) by extending CTL, which is the dominant temporal specification language in practice. The need for Multi Dimensional CTL is mainly due to the advent of semi-structured data. The common path nature of CTL and XPath, which provides a suitable model for semi-structured data, has caused the emergence of work on specifying a relation among them aiming at exploiting the nice properties of CTL. Although the advantages of such an approach have already been noticed [36, 26, 5], no formal definition of MD-CTL has been given. The goal of this work is twofold: (a) we define MD-CTL and prove that the “nice” properties of CTL (linear model checking and bounded model property) transfer also to MD-CTL; (b) we establish new results on stratified Datalog. In particular, we define a fragment of stratified Datalog called Multi Branching Temporal (MBT in short) programs that has the same expressive power as MD-CTL. We prove that by devising a linear translation between MBT and MD-CTL. We actually give the exact translation rules for both directions. We further build on this relation to prove that query evaluation is linear and checking satisfiability, containment and equivalence are EXPTIME-complete for MBT programs. The class MBT is the largest fragment of stratified Datalog for which such results exist in the literature.
Connan, O; Maro, D; Hébert, D; Solier, L; Caldeira Ideas, P; Laguionie, P; St-Amant, N
2015-10-01
The behaviour of tritium in the environment is linked to the water cycle. We compare three methods of calculating the tritium evapotranspiration flux from grassland cover. The gradient and eddy covariance methods, together with a method based on the theoretical Penman-Monteith model, were tested in a study carried out in 2013 in an environment characterised by high levels of tritium activity. The results show that each of the three methods gave similar results. The various constraints applying to each method are discussed. The results show a tritium evapotranspiration flux of around 15 mBq m−2 s−1 in this environment. These results will be used to improve the entry parameters for the general models of tritium transfers in the environment. Copyright © 2015 Elsevier Ltd. All rights reserved.
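At its core, the eddy covariance method used in the study above computes the covariance of vertical wind speed and scalar concentration fluctuations. A minimal sketch of that calculation (illustrative only; real processing adds coordinate rotation, despiking and density corrections):

```python
import numpy as np

def eddy_covariance_flux(w, c):
    """Flux = mean of w'c': the product of fluctuations of vertical wind
    speed w and scalar concentration c about their averaging-period means."""
    return np.mean((w - np.mean(w)) * (c - np.mean(c)))

# toy series: perfectly correlated fluctuations give a positive flux
print(eddy_covariance_flux(np.array([1.0, -1.0]), np.array([2.0, 0.0])))  # 1.0
```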
Robust Asymptotic Analysis for Mean and Covariance Structure Models
Institute of Scientific and Technical Information of China (English)
夏业茂; 刘应安
2011-01-01
Mean and covariance structure models are widely applied in behavioral, educational, medical, social and psychological research. The classic maximum likelihood estimate is vulnerable to outliers and distributional deviation. In this paper, a robust estimate based on minimizing an objective function is proposed, and an M-ratio test based on the robust deviance is suggested to assess model fit. Empirical results are illustrated by a real example.
Covariate analysis of bivariate survival data
Energy Technology Data Exchange (ETDEWEB)
Bennett, L.E.
1992-01-01
The methods developed are used to analyze the effects of covariates on bivariate survival data when censoring and ties are present. The proposed method provides models for bivariate survival data that include differential covariate effects and censored observations. The proposed models are based on an extension of the univariate Buckley-James estimators which replace censored data points by their expected values, conditional on the censoring time and the covariates. For the bivariate situation, it is necessary to determine the expectation of the failure times for one component conditional on the failure or censoring time of the other component. Two different methods have been developed to estimate these expectations. In the semiparametric approach these expectations are determined from a modification of Burke's estimate of the bivariate empirical survival function. In the parametric approach censored data points are also replaced by their conditional expected values where the expected values are determined from a specified parametric distribution. The model estimation will be based on the revised data set, comprised of uncensored components and expected values for the censored components. The variance-covariance matrix for the estimated covariate parameters has also been derived for both the semiparametric and parametric methods. Data from the Demographic and Health Survey was analyzed by these methods. The two outcome variables are post-partum amenorrhea and breastfeeding; education and parity were used as the covariates. Both the covariate parameter estimates and the variance-covariance estimates for the semiparametric and parametric models will be compared. In addition, a multivariate test statistic was used in the semiparametric model to examine contrasts. The significance of the statistic was determined from a bootstrap distribution of the test statistic.
Fernández, E N; Legarra, A; Martínez, R; Sánchez, J P; Baselga, M
2017-06-01
Inbreeding generates covariances between additive and dominance effects (breeding values and dominance deviations). In this work, we developed and applied models for estimation of dominance and additive genetic variances and their covariance, a model that we call "full dominance," from pedigree and phenotypic data. Estimates with this model such as presented here are very scarce both in livestock and in wild genetics. First, we estimated pedigree-based condensed probabilities of identity using recursion. Second, we developed an equivalent linear model in which variance components can be estimated using closed-form algorithms such as REML or Gibbs sampling and existing software. Third, we present a new method to refer the estimated variance components to meaningful parameters in a particular population, i.e., final partially inbred generations as opposed to outbred base populations. We applied these developments to three closed rabbit lines (A, V and H) selected for number of weaned at the Polytechnic University of Valencia. Pedigree and phenotypes are complete and span 43, 39 and 14 generations, respectively. Estimates of broad-sense heritability are 0.07, 0.07 and 0.05 at the base versus 0.07, 0.07 and 0.09 in the final generations. Narrow-sense heritability estimates are 0.06, 0.06 and 0.02 at the base versus 0.04, 0.04 and 0.01 at the final generations. There is also a reduction in the genotypic variance due to the negative additive-dominance correlation. Thus, the contribution of dominance variation is fairly large and increases with inbreeding and (over)compensates for the loss in additive variation. In addition, estimates of the additive-dominance correlation are -0.37, -0.31 and 0.00, in agreement with the few published estimates and theoretical considerations. © 2017 Blackwell Verlag GmbH.
Spatiotemporal noise covariance estimation from limited empirical magnetoencephalographic data
Energy Technology Data Exchange (ETDEWEB)
Jun, Sung C [MS-D454, Applied Modern Physics Group, Los Alamos National Laboratory, Los Alamos, NM 87545 (United States); Plis, Sergey M [MS-D454, Applied Modern Physics Group, Los Alamos National Laboratory, Los Alamos, NM 87545 (United States); Ranken, Doug M [MS-D454, Applied Modern Physics Group, Los Alamos National Laboratory, Los Alamos, NM 87545 (United States); Schmidt, David M [MS-D454, Applied Modern Physics Group, Los Alamos National Laboratory, Los Alamos, NM 87545 (United States)
2006-11-07
The performance of parametric magnetoencephalography (MEG) and electroencephalography (EEG) source localization approaches can be degraded by the use of poor background noise covariance estimates. In general, estimation of the noise covariance for spatiotemporal analysis is difficult mainly due to the limited noise information available. Furthermore, its estimation requires a large amount of storage and a one-time but very large (and sometimes intractable) calculation or its inverse. To overcome these difficulties, noise covariance models consisting of one pair or a sum of multi-pairs of Kronecker products of spatial covariance and temporal covariance have been proposed. However, these approaches cannot be applied when the noise information is very limited, i.e., the amount of noise information is less than the degrees of freedom of the noise covariance models. A common example of this is when only averaged noise data are available for a limited prestimulus region (typically at most a few hundred milliseconds duration). For such cases, a diagonal spatiotemporal noise covariance model consisting of sensor variances with no spatial or temporal correlation has been the common choice for spatiotemporal analysis. In this work, we propose a different noise covariance model which consists of diagonal spatial noise covariance and Toeplitz temporal noise covariance. It can easily be estimated from limited noise information, and no time-consuming optimization and data-processing are required. Thus, it can be used as an alternative choice when one-pair or multi-pair noise covariance models cannot be estimated due to lack of noise information. To verify its capability we used Bayesian inference dipole analysis and a number of simulated and empirical datasets. We compared this covariance model with other existing covariance models such as conventional diagonal covariance, one-pair and multi-pair noise covariance models, when noise information is sufficient to estimate them. We
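The proposed structure, a diagonal spatial covariance combined with a Toeplitz temporal covariance, can be sketched in numpy, assuming the full spatiotemporal covariance is the Kronecker product of the two factors. This is an illustrative reading of the model, not the authors' code; the function name is ours:

```python
import numpy as np
from scipy.linalg import toeplitz

def diag_toeplitz_cov(noise):
    """Estimate a spatiotemporal noise covariance from limited (e.g. averaged
    prestimulus) noise data, as diagonal-spatial x Toeplitz-temporal.
    noise: array of shape (n_sensors, n_times)."""
    n_sensors, n_times = noise.shape
    centered = noise - noise.mean(axis=1, keepdims=True)
    spatial_var = centered.var(axis=1)                  # diagonal spatial part
    standardized = centered / np.sqrt(spatial_var)[:, None]
    # one autocovariance value per lag, pooled over sensors -> Toeplitz structure
    acov = np.array([(standardized[:, :n_times - k] * standardized[:, k:]).mean()
                     for k in range(n_times)])
    temporal = toeplitz(acov)                           # symmetric Toeplitz temporal part
    return np.kron(np.diag(spatial_var), temporal)

rng = np.random.default_rng(0)
C = diag_toeplitz_cov(rng.standard_normal((5, 20)))
print(C.shape)  # (100, 100): 5 sensors x 20 time points
```

Because both factors are estimated from simple averages, no iterative optimization is needed, which mirrors the motivation stated in the abstract.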
Stratified medicine and reimbursement issues
Fugel, Hans-Joerg; Nuijten, Mark; Postma, Maarten
2012-01-01
Stratified Medicine (SM) has the potential to target patient populations who will most benefit from a therapy while reducing unnecessary health interventions associated with side effects. The link between clinical biomarkers/diagnostics and therapies provides new opportunities for value creation to
Directory of Open Access Journals (Sweden)
Sheng Zhong
2016-10-01
Full Text Available We consider the problem of genetic association testing of a binary trait in a sample that contains related individuals, where we adjust for relevant covariates and allow for missing data. We propose CERAMIC, an estimating equation approach that can be viewed as a hybrid of logistic regression and linear mixed-effects model (LMM approaches. CERAMIC extends the recently proposed CARAT method to allow samples with related individuals and to incorporate partially missing data. In simulations, we show that CERAMIC outperforms existing LMM and generalized LMM approaches, maintaining high power and correct type 1 error across a wider range of scenarios. CERAMIC results in a particularly large power increase over existing methods when the sample includes related individuals with some missing data (e.g., when some individuals with phenotype and covariate information have missing genotype, because CERAMIC is able to make use of the relationship information to incorporate partially missing data in the analysis while correcting for dependence. Because CERAMIC is based on a retrospective analysis, it is robust to misspecification of the phenotype model, resulting in better control of type 1 error and higher power than that of prospective methods, such as GMMAT, when the phenotype model is misspecified. CERAMIC is computationally efficient for genomewide analysis in samples of related individuals of almost any configuration, including small families, unrelated individuals and even large, complex pedigrees. We apply CERAMIC to data on type 2 diabetes (T2D from the Framingham Heart Study. In a genome scan, 9 of the 10 smallest CERAMIC p-values occur in or near either known T2D susceptibility loci or plausible candidates, verifying that CERAMIC is able to home in on the important loci in a genome scan.
Energy Technology Data Exchange (ETDEWEB)
Gullberg, Grant T.; Huesman, Ronald H.; Reutter, Bryan W.; Qi,Jinyi; Ghosh Roy, Dilip N.
2004-01-01
In dynamic cardiac SPECT estimates of kinetic parameters ofa one-compartment perfusion model are usually obtained in a two stepprocess: 1) first a MAP iterative algorithm, which properly models thePoisson statistics and the physics of the data acquisition, reconstructsa sequence of dynamic reconstructions, 2) then kinetic parameters areestimated from time activity curves generated from the dynamicreconstructions. This paper provides a method for calculating thecovariance matrix of the kinetic parameters, which are determined usingweighted least squares fitting that incorporates the estimated varianceand covariance of the dynamic reconstructions. For each transaxial slicesets of sequential tomographic projections are reconstructed into asequence of transaxial reconstructions usingfor each reconstruction inthe time sequence an iterative MAP reconstruction to calculate themaximum a priori reconstructed estimate. Time-activity curves for a sumof activity in a blood region inside the left ventricle and a sum in acardiac tissue region are generated. Also, curves for the variance of thetwo estimates of the sum and for the covariance between the two ROIestimates are generated as a function of time at convergence using anexpression obtained from the fixed-point solution of the statisticalerror of the reconstruction. A one-compartment model is fit to the tissueactivity curves assuming a noisy blood input function to give weightedleast squares estimates of blood volume fraction, wash-in and wash-outrate constants specifying the kinetics of 99mTc-teboroxime for theleftventricular myocardium. Numerical methods are used to calculate thesecond derivative of the chi-square criterion to obtain estimates of thecovariance matrix for the weighted least square parameter estimates. Eventhough the method requires one matrix inverse for each time interval oftomographic acquisition, efficient estimates of the tissue kineticparameters in a dynamic cardiac SPECT study can be obtained with
Charvat, Hadrien; Remontet, Laurent; Bossard, Nadine; Roche, Laurent; Dejardin, Olivier; Rachet, Bernard; Launoy, Guy; Belot, Aurélien
2016-08-15
The excess hazard regression model is an approach developed for the analysis of cancer registry data to estimate net survival, that is, the survival of cancer patients that would be observed if cancer was the only cause of death. Cancer registry data typically possess a hierarchical structure: individuals from the same geographical unit share common characteristics such as proximity to a large hospital that may influence access to and quality of health care, so that their survival times might be correlated. As a consequence, correct statistical inference regarding the estimation of net survival and the effect of covariates should take this hierarchical structure into account. It becomes particularly important as many studies in cancer epidemiology aim at studying the effect on the excess mortality hazard of variables, such as deprivation indexes, often available only at the ecological level rather than at the individual level. We developed here an approach to fit a flexible excess hazard model including a random effect to describe the unobserved heterogeneity existing between different clusters of individuals, and with the possibility to estimate non-linear and time-dependent effects of covariates. We demonstrated the overall good performance of the proposed approach in a simulation study that assessed the impact on parameter estimates of the number of clusters, their size and their level of unbalance. We then used this multilevel model to describe the effect of a deprivation index defined at the geographical level on the excess mortality hazard of patients diagnosed with cancer of the oral cavity. Copyright © 2016 John Wiley & Sons, Ltd.
Cross-covariance functions for multivariate geostatistics
Genton, Marc G.
2015-05-01
Continuously indexed datasets with multiple variables have become ubiquitous in the geophysical, ecological, environmental and climate sciences, and pose substantial analysis challenges to scientists and statisticians. For many years, scientists developed models that aimed at capturing the spatial behavior for an individual process; only within the last few decades has it become commonplace to model multiple processes jointly. The key difficulty is in specifying the cross-covariance function, that is, the function responsible for the relationship between distinct variables. Indeed, these cross-covariance functions must be chosen to be consistent with marginal covariance functions in such a way that the second-order structure always yields a nonnegative definite covariance matrix. We review the main approaches to building cross-covariance models, including the linear model of coregionalization, convolution methods, the multivariate Matérn and nonstationary and space-time extensions of these among others. We additionally cover specialized constructions, including those designed for asymmetry, compact support and spherical domains, with a review of physics-constrained models. We illustrate select models on a bivariate regional climate model output example for temperature and pressure, along with a bivariate minimum and maximum temperature observational dataset; we compare models by likelihood value as well as via cross-validation co-kriging studies. The article closes with a discussion of unsolved problems. © Institute of Mathematical Statistics, 2015.
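The linear model of coregionalization mentioned in the review can be made concrete: each latent correlation function is weighted by a rank-one positive semidefinite matrix, so the sum is automatically a valid (nonnegative definite) matrix-valued covariance. A small sketch under an exponential correlation; the names and parameter values are ours:

```python
import numpy as np

def lmc_cross_cov(h, A, ranges):
    """Linear model of coregionalization: C(h) = sum_k a_k a_k^T rho_k(h),
    with rho_k(h) = exp(-|h| / range_k). Each term is psd, so C(h) is a
    valid cross-covariance for a p-variate random field."""
    p, K = A.shape
    C = np.zeros((p, p))
    for k in range(K):
        rho = np.exp(-abs(h) / ranges[k])
        C += np.outer(A[:, k], A[:, k]) * rho
    return C

A = np.array([[1.0, 0.5],
              [0.3, 1.0]])          # illustrative coregionalization matrix
C0 = lmc_cross_cov(0.0, A, ranges=[1.0, 3.0])
print(C0)  # at lag 0 this equals A @ A.T
```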
Energy Technology Data Exchange (ETDEWEB)
Casana, Rodolfo; Ferreira, Manoel M.; Pinheiro, Paulo R.D. [Universidade Federal do Maranhao (UFMA), Departamento de Fisica, Sao Luis, MA (Brazil); Gomes, A.R. [Centro Federal de Educacao Tecnologica do Maranhao, Departamento de Ciencias Exatas, Sao Luis, Maranhao (Brazil)
2009-08-15
In this work, we focus on some properties of the parity-even sector of the CPT-even electrodynamics of the standard model extension. We analyze how the six non-birefringent terms belonging to this sector modify the static and stationary classical solutions of the usual Maxwell theory. We observe that the parity-even terms do not couple the electric and magnetic sectors (at least in the stationary regime). The Green's method is used to obtain solutions for the field strengths E and B at first order in the Lorentz-covariance-violating parameters. Explicit solutions are attained for point-like and spatially extended sources, for which a dipolar expansion is achieved. Finally, an Earth-based experiment is presented that can lead (in principle) to an upper bound on the anisotropic coefficients as stringent as (κ̃_{e−})^{ij} < 2.9 × 10^{−20}. (orig.)
Casana, Rodolfo; Ferreira, Manoel M.; Gomes, A. R.; Pinheiro, Paulo R. D.
2009-08-01
In this work, we focus on some properties of the parity-even sector of the CPT-even electrodynamics of the standard model extension. We analyze how the six non-birefringent terms belonging to this sector modify the static and stationary classical solutions of the usual Maxwell theory. We observe that the parity-even terms do not couple the electric and magnetic sectors (at least in the stationary regime). The Green’s method is used to obtain solutions for the field strengths E and B at first order in the Lorentz-covariance-violating parameters. Explicit solutions are attained for point-like and spatially extended sources, for which a dipolar expansion is achieved. Finally, an Earth-based experiment is presented that can lead (in principle) to an upper bound on the anisotropic coefficients as stringent as (κ̃_{e−})^{ij} < 2.9 × 10^{−20}.
Reyes, J. J.; Tague, N.; Kruger, C. E.; Johnson, K.; Adam, J. C.
2015-12-01
Grasses in rangeland ecosystems cover a large portion of the contiguous United States and are used to support the production of livestock. These grasslands experience a wide range of precipitation and temperature regimes, as well as management activities like grazing. Assessing the coupled response of biomass to both climatic change and human activities is important to decision makers to ensure the sustainable management of their lands. The objective of this study is to examine the sensitivity of biomass under co-varying conditions of climate and grazing management. For this, we used the Regional Hydro-ecologic Simulation System (RHESSys), a physically-based model that simulates coupled water and biogeochemical processes. We selected representative grassland sites using the Köppen-Geiger climate classification system and information on major grass species. Historical data on precipitation, temperature, and grazing patterns (intensity, frequency, duration) were incrementally perturbed to simulate climatic change and possible changes in management. To visualize this multi-dimensional parameter space, we created surface response plots of varying climate and grazing factors for the mean and variance of both aboveground and belowground biomass, as well as the ratio between the two. Mean biomass generally increased with warmer temperatures and decreased with more intense grazing. The sensitivity of biomass (i.e. variance) increased with more extreme perturbations in climate and intense types of grazing management. However, co-varying climate conditions with either grazing intensity, frequency, or duration revealed different biomass responses and tradeoffs. For example, some changes in grazing duration could be reversed by changes in climate. Effects of high intensity grazing could be buffered depending on the timing of grazing (i.e. start/end date). Using simple perturbations with process-based modeling provides useful information for land managers for future planning.
Saltas, Ippocratis D
2016-01-01
We derive the 1-loop effective action of the cubic Galileon coupled to quantum-gravitational fluctuations in a background and gauge-independent manner, employing the covariant framework of DeWitt and Vilkovisky. Although the bare action respects shift symmetry, the coupling to gravity induces an effective mass to the scalar, of the order of the cosmological constant, as a direct result of the non-flat field-space metric, the latter ensuring the field-reparametrization invariance of the formalism. Within a gauge-invariant regularization scheme, we discover novel, gravitationally induced non-Galileon higher-derivative interactions in the effective action. These terms, previously unnoticed within standard, non-covariant frameworks, are not Planck suppressed. Unless tuned to be sub-dominant, their presence could have important implications for the classical and quantum phenomenology of the theory.
Covariant approximation averaging
Shintani, Eigo; Blum, Thomas; Izubuchi, Taku; Jung, Chulwoo; Lehner, Christoph
2014-01-01
We present a new class of statistical error reduction techniques for Monte-Carlo simulations. Using covariant symmetries, we show that correlation functions can be constructed from inexpensive approximations without introducing any systematic bias in the final result. We introduce a new class of covariant approximation averaging techniques, known as all-mode averaging (AMA), in which the approximation takes account of contributions of all eigenmodes through the inverse of the Dirac operator computed from the conjugate gradient method with a relaxed stopping condition. In this paper we compare the performance and computational cost of our new method with traditional methods using correlation functions and masses of the pion, nucleon, and vector meson in $N_f=2+1$ lattice QCD using domain-wall fermions. This comparison indicates that AMA significantly reduces statistical errors in Monte-Carlo calculations over conventional methods for the same cost.
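The averaging idea can be illustrated with a toy estimator: average the cheap approximation over many samples, then correct its bias with a few paired exact measurements. This is a schematic of the unbiasedness argument, not the lattice QCD implementation:

```python
import numpy as np

def ama_estimate(exact_few, approx_few, approx_many):
    """Improved estimator <O_approx>_many + <O - O_approx>_few.
    The correction term removes the approximation bias, so the combined
    estimator is unbiased while most of the averaging uses cheap samples."""
    correction = np.mean(np.asarray(exact_few) - np.asarray(approx_few))
    return np.mean(approx_many) + correction

# an approximation with a constant bias of -0.5 is exactly corrected
print(ama_estimate([1.0, 2.0, 3.0], [0.5, 1.5, 2.5], [0.5, 1.5, 2.5]))  # 2.0
```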
Using Analysis of Covariance (ANCOVA) with Fallible Covariates
Culpepper, Steven Andrew; Aguinis, Herman
2011-01-01
Analysis of covariance (ANCOVA) is used widely in psychological research implementing nonexperimental designs. However, when covariates are fallible (i.e., measured with error), which is the norm, researchers must choose from among 3 inadequate courses of action: (a) know that the assumption that covariates are perfectly reliable is violated but…
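The attenuation problem behind this abstract can be sketched numerically: under the classical measurement-error model, the observed regression slope is shrunk by the covariate's reliability, so dividing by the reliability disattenuates it. An illustrative sketch using a simple-regression slope rather than a full ANCOVA; the function name is ours:

```python
import numpy as np

def eiv_adjusted_slope(x_obs, y, reliability):
    """Classical error model: x_obs = x_true + e, so OLS on x_obs estimates
    reliability * b_true. Dividing by the reliability disattenuates."""
    x_c = x_obs - x_obs.mean()
    y_c = y - y.mean()
    b_obs = (x_c @ y_c) / (x_c @ x_c)
    return b_obs / reliability

rng = np.random.default_rng(1)
x = rng.standard_normal(50_000)
x_obs = x + rng.standard_normal(50_000)   # error variance 1 -> reliability 0.5
y = 2.0 * x
print(eiv_adjusted_slope(x_obs, y, reliability=0.5))  # close to the true slope 2.0
```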
Roedig, Edna; Cuntz, Matthias; Huth, Andreas
2015-04-01
The effects of climatic inter-annual fluctuations and human activities on the global carbon cycle are uncertain and currently a major issue in global vegetation models. Individual-based forest gap models, on the other hand, model vegetation structure and dynamics on small spatial scales and over long temporal scales (1000 years). They are well-established tools to reproduce successions of highly diverse forest ecosystems and investigate disturbances such as logging or fire events. However, the parameterizations of the relationships between short-term climate variability and forest model processes are often uncertain in these models (e.g., daily variable temperature and gross primary production (GPP)) and cannot be constrained from forest inventories. We addressed this uncertainty and linked high-resolution eddy-covariance (EC) data with an individual-based forest gap model. The forest model FORMIND was applied to three diverse tropical forest sites in the Amazonian rainforest. Species diversity was categorized into three plant functional types. The parameterizations for the steady state of biomass and forest structure were calibrated and validated with different forest inventories. The parameterizations of relationships between short-term climate variability and forest model processes were evaluated with EC data on a daily time step. The validations of the steady state showed that the forest model could reproduce biomass and forest structures from forest inventories. The daily estimations of carbon fluxes showed that the forest model reproduces GPP as observed by the EC method. Daily fluctuations of GPP were clearly reflected as a response to daily climate variability. Ecosystem respiration remains a challenge on a daily time step due to a simplified soil respiration approach. In the long term, however, the dynamic forest model is expected to estimate carbon budgets for highly diverse tropical forests where EC measurements are rare.
Bizzotto, Roberto; Zamuner, Stefano; Mezzalana, Enrica; De Nicolao, Giuseppe; Gomeni, Roberto; Hooker, Andrew C; Karlsson, Mats O.
2011-01-01
Mixed-effect Markov chain models have been recently proposed to characterize the time course of transition probabilities between sleep stages in insomniac patients. The most recent one, based on multinomial logistic functions, was used as a base to develop a final model combining the strengths of the existing ones. This final model was validated on placebo data applying also new diagnostic methods and then used for the inclusion of potential age, gender, and BMI effects. Internal validation w...
Tay, Louis; Huang, Qiming; Vermunt, Jeroen K.
2016-01-01
In large-scale testing, the use of multigroup approaches is limited for assessing differential item functioning (DIF) across multiple variables as DIF is examined for each variable separately. In contrast, the item response theory with covariate (IRT-C) procedure can be used to examine DIF across multiple variables (covariates) simultaneously. To…
Indian Academy of Sciences (India)
Om Prakash; Devendra Kumar; Y K Dwivedi
2012-12-01
The paper investigates the effects of heat transfer in MHD flow of viscoelastic stratified fluid in porous medium on a parallel plate channel inclined at an angle . A laminar convection flow for incompressible conducting fluid is considered. It is assumed that the plates are kept at different temperatures which decay with time. The partial differential equations governing the flow are solved by perturbation technique. Expressions for the velocity of fluid and particle phases, temperature field, Nusselt number, skin friction and flow flux are obtained within the channel. The effects of various parameters like stratification factor, magnetic field parameter, Prandtl number on temperature field, heat transfer, skin friction, flow flux, velocity for both the fluid and particle phases are displayed through graphs and discussed numerically.
Directory of Open Access Journals (Sweden)
J. Ingwersen
2014-12-01
Full Text Available The energy balance of eddy covariance (EC) flux data is normally not closed. Therefore, at least if used for modeling, EC flux data are usually post-closed, i.e. the measured turbulent fluxes are adjusted so as to close the energy balance. At the current state of knowledge, however, it is not clear how to partition the missing energy in the right way. Eddy flux data therefore contain some uncertainty due to the unknown nature of the energy balance gap, which should be considered in model evaluation and the interpretation of simulation results. We propose to construct the post-closure method uncertainty band (PUB), which essentially designates the differences between non-adjusted flux data and flux data adjusted with the three post-closure methods (the Bowen ratio, latent heat flux (LE) and sensible heat flux (H) methods). To demonstrate this approach, simulations with the NOAH-MP land surface model were evaluated based on EC measurements conducted at a winter wheat stand in Southwest Germany in 2011, and the performance of the Jarvis and Ball–Berry stomatal resistance schemes was compared. The width of the PUB of the LE was up to 110 W m−2 (21% of net radiation). Our study shows that it is crucial to account for the uncertainty of EC flux data originating from the lacking energy balance closure. Working with only a single post-closure method might result in severe misinterpretations in model–data comparisons.
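The three post-closure methods can be made concrete: the energy-balance residual Rn − G − H − LE is assigned either proportionally to H and LE (Bowen ratio method), entirely to LE, or entirely to H, and the spread of the resulting LE values is the PUB. A schematic sketch with illustrative variable names:

```python
def post_closure_le(rn, g, h, le):
    """Candidate latent heat fluxes (W m-2) after assigning the
    energy-balance residual Rn - G - H - LE by the three common methods."""
    residual = rn - g - h - le
    le_bowen = le + residual * le / (le + h)   # Bowen ratio: preserve H/LE
    le_all = le + residual                     # LE method: residual all to LE
    le_none = le                               # H method: residual all to H
    return {"raw": le, "bowen": le_bowen, "le": le_all, "h": le_none}

fluxes = post_closure_le(rn=500.0, g=50.0, h=100.0, le=250.0)  # residual = 100
pub_width = max(fluxes.values()) - min(fluxes.values())
print(pub_width)  # 100.0: the PUB width for this half-hour
```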
Institute of Scientific and Technical Information of China (English)
Colin J Ferster; J A Tony Trofymow; Nicholas C Coops; Baozhang Chen; Thomas Andrew Black
2015-01-01
Background: The global network of eddy-covariance (EC) flux towers has improved the understanding of the terrestrial carbon (C) cycle; however, the network has a relatively limited spatial extent compared to forest inventory data and plots. Developing methods to use inventory-based and EC flux measurements together with modeling approaches is necessary to evaluate forest C dynamics across broad spatial extents. Methods: Changes in C stock (ΔC) were computed based on repeated measurements of forest inventory plots and compared with separate measurements of cumulative net ecosystem productivity (ΣNEP) over four years (2003–2006) for Douglas-fir (Pseudotsuga menziesii var. menziesii) dominated regeneration (HDF00), juvenile (HDF88 and HDF90) and near-rotation (DF49) aged stands (6, 18, 20 and 57 years old in 2006, respectively) in coastal British Columbia. ΔC was determined from forest inventory plot data alone, and in a hybrid approach using inventory data along with litter fall data and published decay equations to determine the change in detrital pools. These ΔC-based estimates were then compared with ΣNEP measured at an eddy-covariance flux tower (EC flux) and modelled by the Carbon Budget Model–Canadian Forest Sector (CBM-CFS3) using historic forest inventory and forest disturbance data. Footprint analysis was used with remote sensing, soils and topography data to evaluate how well the inventory plots represented the range of stand conditions within the area of the flux-tower footprint and to spatially scale the plot data to the area of the EC-flux and model based estimates. Results: The closest convergence among methods was for the juvenile stands, while the largest divergences were for the regenerating clearcut, followed by the near-rotation stand. At the regenerating clearcut, footprint weighting of CBM-CFS3 ΣNEP increased convergence with EC-flux ΣNEP, but not for ΔC. While spatial scaling and footprint weighting did not increase convergence for ΔC, they did
Bartolucci, Francesco; Farcomeni, Alessio
2015-03-01
Mixed latent Markov (MLM) models represent an important tool for the analysis of longitudinal data when response variables are affected by time-fixed and time-varying unobserved heterogeneity, in which the latter is accounted for by a hidden Markov chain. In order to avoid bias when using a model of this type in the presence of informative drop-out, we propose an event-history (EH) extension of the latent Markov approach that may be used with multivariate longitudinal data, in which one or more outcomes of a different nature are observed at each time occasion. The EH component of the resulting model refers to the interval-censored drop-out, and bias in MLM modeling is avoided by correlated random effects, included in the different model components, which follow common latent distributions. In order to perform maximum likelihood estimation of the proposed model by the expectation-maximization algorithm, we extend the usual forward-backward recursions of Baum and Welch. The algorithm has the same complexity as the one adopted in cases of non-informative drop-out. We illustrate the proposed approach through simulations and an application based on data coming from a medical study about primary biliary cirrhosis in which there are two outcomes of interest, one continuous and the other binary. © 2014, The International Biometric Society.
Korner-Nievergelt, Fränzi; Brinkmann, Robert; Niermann, Ivo; Behr, Oliver
2013-01-01
Environmental impacts of wind energy facilities increasingly cause concern, a central issue being bats and birds killed by rotor blades. Two approaches have been employed to assess collision rates: carcass searches and surveys of animals prone to collisions. Carcass searches can provide an estimate for the actual number of animals being killed but they offer little information on the relation between collision rates and, for example, weather parameters due to the time of death not being precisely known. In contrast, a density index of animals exposed to collision is sufficient to analyse the parameters influencing the collision rate. However, quantification of the collision rate from animal density indices (e.g. acoustic bat activity or bird migration traffic rates) remains difficult. We combine carcass search data with animal density indices in a mixture model to investigate collision rates. In a simulation study we show that the collision rates estimated by our model were at least as precise as conventional estimates based solely on carcass search data. Furthermore, if certain conditions are met, the model can be used to predict the collision rate from density indices alone, without data from carcass searches. This can reduce the time and effort required to estimate collision rates. We applied the model to bat carcass search data obtained at 30 wind turbines in 15 wind facilities in Germany. We used acoustic bat activity and wind speed as predictors for the collision rate. The model estimates correlated well with conventional estimators. Our model can be used to predict the average collision rate. It enables an analysis of the effect of parameters such as rotor diameter or turbine type on the collision rate. The model can also be used in turbine-specific curtailment algorithms that predict the collision rate and reduce this rate with a minimal loss of energy production.
Vannitsem, Stephane
2015-01-01
We study a simplified coupled atmosphere-ocean model using the formalism of covariant Lyapunov vectors (CLVs), which link physically-based directions of perturbations to growth/decay rates. The model is obtained via a severe truncation of quasi-geostrophic equations for the two fluids, and includes a simple yet physically meaningful representation of their dynamical/thermodynamical coupling. The model has 36 degrees of freedom, and the parameters are chosen so that a chaotic behaviour is observed. One finds two positive Lyapunov exponents (LEs), sixteen negative LEs, and eighteen near-zero LEs. The presence of many near-zero LEs results from the vast time-scale separation between the characteristic time scales of the two fluids, and leads to nontrivial error growth properties in the tangent space spanned by the corresponding CLVs, which are geometrically very degenerate. Such CLVs correspond to two different classes of ocean/atmosphere coupled modes. The tangent space spanned by the CLVs corresponding to the ...
Choi, Youn-Jeng; Alexeev, Natalia; Cohen, Allan S.
2015-01-01
The purpose of this study was to explore what may be contributing to differences in performance in mathematics on the Trends in International Mathematics and Science Study 2007. This was done by using a mixture item response theory modeling approach to first detect latent classes in the data and then to examine differences in performance on items…
Dai, Yunyun
2013-01-01
Mixtures of item response theory (IRT) models have been proposed as a technique to explore response patterns in test data related to cognitive strategies, instructional sensitivity, and differential item functioning (DIF). Estimation proves challenging due to difficulties in identification and questions of effect size needed to recover underlying…
Andersson, C David; Hillgren, J Mikael; Lindgren, Cecilia; Qian, Weixing; Akfur, Christine; Berg, Lotta; Ekström, Fredrik; Linusson, Anna
2015-03-01
Scientific disciplines such as medicinal and environmental chemistry, pharmacology, and toxicology deal with questions related to the effects small organic compounds exert on biological targets and the compounds' physicochemical properties responsible for these effects. A common strategy in this endeavor is to establish structure-activity relationships (SARs). The aim of this work was to illustrate the benefits of performing a statistical molecular design (SMD) and proper statistical analysis of the molecules' properties before SAR and quantitative structure-activity relationship (QSAR) analysis. Our SMD followed by synthesis yielded a set of inhibitors of the enzyme acetylcholinesterase (AChE) that had very few inherent dependencies between the substructures in the molecules. If such dependencies exist, they cause severe errors in SAR interpretation and predictions by QSAR models, and leave a set of molecules less suitable for future decision-making. In our study, SAR and QSAR models could show which molecular substructures and physicochemical features were advantageous for AChE inhibition. Finally, the QSAR model was used for the prediction of the inhibition of AChE by an external prediction set of molecules. The accuracy of these predictions was assessed by statistical significance tests and by comparisons to simple but relevant reference models.
Covariant Magnetic Connection Hypersurfaces
Pegoraro, F
2016-01-01
In the single fluid, nonrelativistic, ideal-Magnetohydrodynamic (MHD) plasma description magnetic field lines play a fundamental role by defining dynamically preserved "magnetic connections" between plasma elements. Here we show how the concept of magnetic connection needs to be generalized in the case of a relativistic MHD description where we require covariance under arbitrary Lorentz transformations. This is performed by defining 2-D {\\it magnetic connection hypersurfaces} in the 4-D Minkowski space. This generalization accounts for the loss of simultaneity between spatially separated events in different frames and is expected to provide a powerful insight into the 4-D geometry of electromagnetic fields when ${\\bf E} \\cdot {\\bf B} = 0$.
Universality of Covariance Matrices
Pillai, Natesh S
2011-01-01
We prove the universality of covariance matrices of the form $H_{N \\times N} = {1 \\over N} \\tp{X}X$ where $[X]_{M \\times N}$ is a rectangular matrix with independent real valued entries $[x_{ij}]$ satisfying $\\E \\,x_{ij} = 0$ and $\\E \\,x^2_{ij} = {1 \\over M}$, $N, M\\to \\infty$. Furthermore it is assumed that these entries have sub-exponential tails. We will study the asymptotics in the regime $N/M = d_N \\in (0,\\infty), \\lim_{N\\to \\infty}d_N \
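The flavour of this universality result can be checked numerically: for a rectangular matrix with iid standardized entries, the eigenvalues of the sample covariance matrix fill the Marchenko-Pastur support [(1-√d)², (1+√d)²] regardless of the entry distribution. Below is a minimal sketch using a standard normalization (entry variance 1, with the 1/M factor pulled into the covariance); the paper's scaling differs only in bookkeeping.

```python
import numpy as np

rng = np.random.default_rng(0)
M, N = 4000, 1000            # aspect ratio d = N/M = 0.25
d = N / M

# iid entries with mean 0, variance 1; universality says Bernoulli +/-1
# entries would give the same bulk spectrum as Gaussian ones.
X = rng.standard_normal((M, N))
H = X.T @ X / M              # N x N sample covariance matrix

eig = np.linalg.eigvalsh(H)
# Marchenko-Pastur support: [(1 - sqrt(d))^2, (1 + sqrt(d))^2] = [0.25, 2.25] here
edges = ((1 - np.sqrt(d)) ** 2, (1 + np.sqrt(d)) ** 2)
```

The extreme eigenvalues land near the support edges, with fluctuations at the Tracy-Widom scale M^(-2/3) that the universality literature makes precise.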
Covariant Projective Extensions
Institute of Scientific and Technical Information of China (English)
许天周; 梁洁
2003-01-01
The theory of crossed products of C*-algebras by groups of automorphisms is a well-developed area of the theory of operator algebras. Given the importance and the success of that theory, it is natural to attempt to extend it to a more general situation by, for example, developing a theory of crossed products of C*-algebras by semigroups of automorphisms, or even of endomorphisms. Indeed, in recent years a number of papers have appeared that are concerned with such non-classical theories of covariance algebras; see, for instance, [1-3].
Mullah, Muhammad Abu Shadeque; Benedetti, Andrea
2016-11-01
Besides being mainly used for analyzing clustered or longitudinal data, generalized linear mixed models can also be used for smoothing via restricting changes in the fit at the knots in regression splines. The resulting models are usually called semiparametric mixed models (SPMMs). We investigate the effect of smoothing using SPMMs on the correlation and variance parameter estimates for serially correlated longitudinal normal, Poisson and binary data. Through simulations, we compare the performance of SPMMs to other simpler methods for estimating the nonlinear association such as fractional polynomials, and using a parametric nonlinear function. Simulation results suggest that, in general, the SPMMs recover the true curves very well and yield reasonable estimates of the correlation and variance parameters. However, for binary outcomes, SPMMs produce biased estimates of the variance parameters for high serially correlated data. We apply these methods to a dataset investigating the association between CD4 cell count and time since seroconversion for HIV infected men enrolled in the Multicenter AIDS Cohort Study.
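The smoothing device described above, a regression spline whose knot coefficients are treated as random effects, can be illustrated by constructing the SPMM design matrices. This is a minimal sketch with an illustrative truncated-line basis; real analyses would typically use a richer basis and fit the variance components by REML in a mixed-model package.

```python
import numpy as np

def spmm_design(x, knots):
    """Design matrices for a penalized spline written as a mixed model."""
    # Fixed-effects part: intercept and linear trend.
    X = np.column_stack([np.ones_like(x), x])
    # Random-effects part: truncated lines (x - kappa)_+, one per knot.
    # Treating their coefficients as iid random effects shrinks them toward
    # zero, which plays the role of the roughness penalty of a penalized spline.
    Z = np.maximum(x[:, None] - knots[None, :], 0.0)
    return X, Z

x = np.array([0.0, 1.0, 2.0])
knots = np.array([0.5, 1.5])
X, Z = spmm_design(x, knots)
```

Fitting the model with random coefficients on the columns of Z then yields the smooth fit, with the amount of smoothing governed by the ratio of the random-effect variance to the residual variance.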
Energy Technology Data Exchange (ETDEWEB)
Morenas, Vincent [Ecole Doctorale des Sciences Fondamentales, Universite Blaise Pascal, U.F.R. de Recherche Scientifique et Technique, F-63177 Aubiere (France)
1997-12-19
The study of semileptonic decays is of crucial importance for the physics of beauty. It was usually believed that the rates of these reactions were saturated by the channels leading to the production of ground state D and D{sup *} mesons only. Yet, experimental results have recently shown that the contribution of orbitally excited mesons is not that small. This thesis presents a study of the semileptonic decays of B mesons into the first orbitally excited charmed states D{sup **}: by using the formalism of Bakamjian-Thomas to construct the mesonic states, together with the hypothesis of the infinite mass limit of the heavy quark, we provide a covariant description of the hadronic transition amplitude; moreover, all the 'good' properties of the heavy quark symmetries are naturally fulfilled. We then fixed the dynamics of the bound states of quarks by introducing four spectroscopic models and made numerical predictions, which are discussed and compared to other theoretical and experimental data when available. Finally, we also applied this formalism to the study of annihilation processes: the transition amplitudes are then also written in a covariant way and the properties of heavy quark symmetries fulfilled. Numerical predictions of decay constants were made with the same four spectroscopic models. (author) 87 refs., 20 figs., 13 tabs.
Dillon, Joshua S; Hewitt, Jacqueline N; Tegmark, Max; Barry, N; Beardsley, A P; Bowman, J D; Briggs, F; Carroll, P; de Oliveira-Costa, A; Ewall-Wice, A; Feng, L; Greenhill, L J; Hazelton, B J; Hernquist, L; Hurley-Walker, N; Jacobs, D C; Kim, H S; Kittiwisit, P; Lenc, E; Line, J; Loeb, A; McKinley, B; Mitchell, D A; Morales, M F; Offringa, A R; Paul, S; Pindor, B; Pober, J C; Procopio, P; Riding, J; Sethi, S; Shankar, N Udaya; Subrahmanyan, R; Sullivan, I; Thyagarajan, Nithyanandan; Tingay, S J; Trott, C; Wayth, R B; Webster, R L; Wyithe, S; Bernardi, G; Cappallo, R J; Deshpande, A A; Johnston-Hollitt, M; Kaplan, D L; Lonsdale, C J; McWhirter, S R; Morgan, E; Oberoi, D; Ord, S M; Prabu, T; Srivani, K S; Williams, A; Williams, C L
2015-01-01
The separation of the faint cosmological background signal from bright astrophysical foregrounds remains one of the most daunting challenges of mapping the high-redshift intergalactic medium with the redshifted 21 cm line of neutral hydrogen. Advances in mapping and modeling of diffuse and point source foregrounds have improved subtraction accuracy, but no subtraction scheme is perfect. Precisely quantifying the errors and error correlations due to missubtracted foregrounds allows for both the rigorous analysis of the 21 cm power spectrum and for the maximal isolation of the "EoR window" from foreground contamination. We present a method to infer the covariance of foreground residuals from the data itself in contrast to previous attempts at a priori modeling. We demonstrate our method by setting limits on the power spectrum using a 3 h integration from the 128-tile Murchison Widefield Array. Observing between 167 and 198 MHz, we find at 95% confidence a best limit of Delta^2(k) < 3.7 x 10^4 mK^2 at comovin...
Land, M C
2001-01-01
This paper examines the Stark effect, as a first order perturbation of manifestly covariant hydrogen-like bound states. These bound states are solutions to a relativistic Schr\"odinger equation with invariant evolution parameter, and represent mass eigenstates whose eigenvalues correspond to the well-known energy spectrum of the non-relativistic theory. In analogy to the nonrelativistic case, the off-diagonal perturbation leads to a lifting of the degeneracy in the mass spectrum. In the covariant case, not only do the spectral lines split, but they acquire an imaginary part which is linear in the applied electric field, thus revealing induced bound state decay in first order perturbation theory. This imaginary part results from the coupling of the external field to the non-compact boost generator. In order to recover the conventional first order Stark splitting, we must include a scalar potential term. This term may be understood as a fifth gauge potential, which compensates for dependence of gauge transformat...
Directory of Open Access Journals (Sweden)
Xanthe L Strudwick
Full Text Available Human keratinocytes are difficult to isolate and have a limited lifespan. Traditionally, immortalised keratinocyte cell lines are used in vitro due to their ability to bypass senescence and survive indefinitely. However these cells do not fully retain their ability to differentiate in vitro and they are unable to form a normal stratum corneum in organotypic culture. Here we aimed to generate a pool of phenotypically similar keratinocytes from human donors that could be used in monolayer culture, without a fibroblast feeder layer, and in 3D human skin equivalent models. Primary human neonatal epidermal keratinocytes (HEKn) were cultured in low calcium (0.07 mM) media, +/- 10 μM Y-27632 ROCK inhibitor (HEKn-CaY). mRNA and protein were extracted and expression of the differentiation markers Keratin 14 (K14), Keratin 10 (K10) and Involucrin (Inv) assessed by qRT-PCR and Western blotting. The differentiation potential of the HEKn-CaY cultures was assessed by increasing calcium levels and removing the Y-27632 for 72 hrs prior to assessment of K14, K10 and Inv. The ability of the HEKn-CaY to form a stratified epithelium was assessed using a human skin equivalent (HSE) model in the absence of Y-27632. Increased proliferative capacity, expansion potential and lifespan of HEKn were observed with the combination of low calcium and 10 μM ROCK inhibitor Y-27632. The removal of Y-27632 and the addition of high calcium to induce differentiation allowed the cells to behave as primary keratinocytes even after extended serial passaging. Prolonged-lifespan HEKn-CaYs were capable of forming an organised stratified epidermis in 3D HSE cultures, demonstrating their ability to fully stratify and retain their original, primary characteristics. In conclusion, the use of 0.07 mM calcium and 10 μM Y-27632 in HEKn monocultures provides the opportunity to culture primary human keratinocytes without a cell feeder layer for extended periods of culture whilst retaining their ability to
Estimating the power spectrum covariance matrix with fewer mock samples
Pearson, David W
2015-01-01
The covariance matrices of power-spectrum (P(k)) measurements from galaxy surveys are difficult to compute theoretically. The current best practice is to estimate covariance matrices by computing a sample covariance of a large number of mock catalogues. The next generation of galaxy surveys will require thousands of large volume mocks to determine the covariance matrices to desired accuracy. The errors in the inverse covariance matrix are larger and scale with the number of P(k) bins, making the problem even more acute. We develop a method of estimating covariance matrices using a theoretically justified, few-parameter model, calibrated with mock catalogues. Using a set of 600 BOSS DR11 mock catalogues, we show that a seven parameter model is sufficient to fit the covariance matrix of BOSS DR11 P(k) measurements. The covariance computed with this method is better than the sample covariance at any number of mocks and only ~100 mocks are required for it to fully converge and the inverse covariance matrix conver...
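The baseline the abstract improves on, a sample covariance over mock catalogues with a debiasing factor applied to its inverse, can be sketched directly. Variable names below are illustrative; the factor (n_mocks − n_bins − 2)/(n_mocks − 1) is the standard Hartlap correction for the bias of an inverted sample covariance, not something introduced by this paper.

```python
import numpy as np

def mock_covariance(mocks):
    """Sample covariance of P(k) over mocks; rows are mocks, columns are k bins."""
    return np.cov(mocks, rowvar=False, ddof=1)

def debiased_inverse(cov, n_mocks):
    """Inverse covariance with the Hartlap correction for inversion bias."""
    n_bins = cov.shape[0]
    factor = (n_mocks - n_bins - 2) / (n_mocks - 1)
    return factor * np.linalg.inv(cov)

# Synthetic stand-in for mock P(k) measurements with known per-bin scatter.
rng = np.random.default_rng(42)
n_mocks, n_bins = 600, 20
true_sigma = np.linspace(1.0, 3.0, n_bins)   # hypothetical per-bin scatter
mocks = rng.standard_normal((n_mocks, n_bins)) * true_sigma
cov = mock_covariance(mocks)
inv_cov = debiased_inverse(cov, n_mocks)
```

The noise in such a sample covariance grows with the number of bins relative to the number of mocks, which is exactly why a few-parameter fitted model, as proposed above, can converge with far fewer mocks.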
Stratified Medicine and Reimbursement Issues
Directory of Open Access Journals (Sweden)
Hans-Joerg eFugel
2012-10-01
Full Text Available Stratified Medicine (SM) has the potential to target patient populations who will most benefit from a therapy while reducing unnecessary health interventions associated with side effects. The link between clinical biomarkers/diagnostics and therapies provides new opportunities for value creation to strengthen the value proposition to pricing and reimbursement (P&R) authorities. However, the introduction of SM challenges current reimbursement schemes in many EU countries and the US, as different P&R policies have been adopted for drugs and diagnostics. Also, there is a lack of a consistent process for value assessment of more complex diagnostics in these markets. New, innovative approaches and more flexible P&R systems are needed to reflect the added value of diagnostic tests and to stimulate investments in new technologies. Yet, the framework for access of diagnostic-based therapies still requires further development, setting the right incentives and appropriately aligning stakeholders' interests when realizing long-term patient benefits. This article addresses the reimbursement challenges of SM approaches in several EU countries and the US, outlining some options to overcome existing reimbursement barriers for stratified medicine.
Institute of Scientific and Technical Information of China (English)
吴怀琴
2015-01-01
As the ninth-grade students' writing ability differs greatly, the author tries to apply the stratified teaching model in the English writing class in response to the demands of the English Curriculum Standards. The application of the stratified teaching model is based on Mastery Learning theory and ZPD theory, which helps to stimulate the students' learning interest, develop their confidence and ultimately improve the writing skills of students at different levels.
Chi, Zhiyi
2010-01-01
Two extensions of generalized linear models are considered. In the first one, response variables depend on multiple linear combinations of covariates. In the second one, only response variables are observed while the linear covariates are missing. We derive stochastic Lipschitz continuity results for the loss functions involved in the regression problems and apply them to get bounds on estimation error for Lasso. Multivariate comparison results on Rademacher complexity are obtained as tools to establish the stochastic Lipschitz continuity results.
Chi, J.; Maureira, F.; Waldo, S.; O'Keeffe, P.; Pressley, S. N.; Stockle, C. O.; Lamb, B. K.
2014-12-01
Local meteorology, crop management practices and site characteristics have important impacts on carbon and water cycling in agricultural ecosystems. This study focuses on carbon and water fluxes measured using eddy covariance (EC) methods and crop simulation models in the Inland Pacific Northwest (IPNW), in association with the Regional Approaches to Climate Change (REACCH) program. The agricultural ecosystem is currently challenged by higher pressure on water resources as a consequence of population growth and increasing exposure to impacts associated with different types of crop management. In addition, future climate projections for this region show a likely increase in temperature and significant reductions in precipitation that will affect carbon and water dynamics. This new scenario requires an understanding of crop management by assessing efficient ways to face the impacts of climate change at the micrometeorological level, especially with regard to carbon and water fluxes. We focus on three different crop management sites. One site (LIND) under crop-fallow is situated in a low-rainfall area. The other two sites, one no-till site (CAF-NT) and one conventional tillage site (CAF-CT), are located in an area of high rainfall with continuous cropping. In this study, we used the CropSyst micro-basin model to simulate the responses in carbon and water budgets at each site. Based on the EC processed results for net ecosystem exchange (NEE) of CO2, the CAF-NT site was a carbon sink during 2013 when spring garbanzo was planted, while the paired CAF-CT site, under similar crop rotation and meteorological conditions, was a carbon source during the same period. The LIND site was also a carbon sink where winter wheat was growing during 2013. Model results for CAF-NT showed good agreement with the EC carbon and water flux measurements during 2013. Through comparisons between measurements and modeling results, both short and long term processes that influence carbon and water
Hubeny, Veronika E
2014-01-01
A recently explored interesting quantity in AdS/CFT, dubbed 'residual entropy', characterizes the amount of collective ignorance associated with either boundary observers restricted to finite time duration, or bulk observers who lack access to a certain spacetime region. However, the previously-proposed expression for this quantity involving variation of boundary entanglement entropy (subsequently renamed to 'differential entropy') works only in a severely restrictive context. We explain the key limitations, arguing that in general, differential entropy does not correspond to residual entropy. Given that the concept of residual entropy as collective ignorance transcends these limitations, we identify two correspondingly robust, covariantly-defined constructs: a 'strip wedge' associated with boundary observers and a 'rim wedge' associated with bulk observers. These causal sets are well-defined in arbitrary time-dependent asymptotically AdS spacetimes in any number of dimensions. We discuss their relation, spec...
Deriving covariant holographic entanglement
Dong, Xi; Lewkowycz, Aitor; Rangamani, Mukund
2016-11-01
We provide a gravitational argument in favour of the covariant holographic entanglement entropy proposal. In general time-dependent states, the proposal asserts that the entanglement entropy of a region in the boundary field theory is given by a quarter of the area of a bulk extremal surface in Planck units. The main element of our discussion is an implementation of an appropriate Schwinger-Keldysh contour to obtain the reduced density matrix (and its powers) of a given region, as is relevant for the replica construction. We map this contour into the bulk gravitational theory, and argue that the saddle point solutions of these replica geometries lead to a consistent prescription for computing the field theory Rényi entropies. In the limiting case where the replica index is taken to unity, a local analysis suffices to show that these saddles lead to the extremal surfaces of interest. We also comment on various properties of holographic entanglement that follow from this construction.
Covariant Macroscopic Quantum Geometry
Hogan, Craig J
2012-01-01
A covariant noncommutative algebra of position operators is presented, and interpreted as the macroscopic limit of a geometry that describes a collective quantum behavior of the positions of massive bodies in a flat emergent space-time. The commutator defines a quantum-geometrical relationship between world lines that depends on their separation and relative velocity, but on no other property of the bodies, and leads to a transverse uncertainty of the geometrical wave function that increases with separation. The number of geometrical degrees of freedom in a space-time volume scales holographically, as the surface area in Planck units. Ongoing branching of the wave function causes fluctuations in transverse position, shared coherently among bodies with similar trajectories. The theory can be tested using appropriately configured Michelson interferometers.
Saltas, Ippocratis D.; Vitagliano, Vincenzo
2017-05-01
We derive the 1-loop effective action of the cubic Galileon coupled to quantum-gravitational fluctuations in a background and gauge-independent manner, employing the covariant framework of DeWitt and Vilkovisky. Although the bare action respects shift symmetry, the coupling to gravity induces an effective mass to the scalar, of the order of the cosmological constant, as a direct result of the nonflat field-space metric, the latter ensuring the field-reparametrization invariance of the formalism. Within a gauge-invariant regularization scheme, we discover novel, gravitationally induced non-Galileon higher-derivative interactions in the effective action. These terms, previously unnoticed within standard, noncovariant frameworks, are not Planck suppressed. Unless tuned to be subdominant, their presence could have important implications for the classical and quantum phenomenology of the theory.
Covariant holographic entanglement negativity
Chaturvedi, Pankaj; Sengupta, Gautam
2016-01-01
We conjecture a holographic prescription for the covariant entanglement negativity of $d$-dimensional conformal field theories dual to non-static bulk $AdS_{d+1}$ gravitational configurations in the framework of the $AdS/CFT$ correspondence. Application of our conjecture to an $AdS_3/CFT_2$ scenario involving bulk rotating BTZ black holes exactly reproduces the entanglement negativity of the corresponding $(1+1)$ dimensional conformal field theories and precisely captures the distillable quantum entanglement. Interestingly, our conjecture for the scenario involving dual bulk extremal rotating BTZ black holes also accurately leads to the entanglement negativity for the chiral half of the corresponding $(1+1)$ dimensional conformal field theory at zero temperature.
Energy Technology Data Exchange (ETDEWEB)
Moriyoshi, Y.; Muroki, T.; Song, Y. [Chiba University, Chiba (Japan). Faculty of Engineering
1995-10-25
The ignition mechanism of a pilot flame in a stratified charge mixture was examined using a model combustion chamber of a Wankel-type rotary engine. Experimental studies such as LDV measurement, pressure data analysis, high-speed photography and image analysis provide detailed knowledge concerning the stratified charge combustion, which is complemented by theoretical study of the mixture formation process inside the combustion chamber. Characteristics of the pilot flame as an ignition source and the mixture formation inside the model chamber required for enhanced combustion are determined in this study. 6 refs., 11 figs., 2 tabs.
Optimal Covariate Designs: Theory and Applications
Das, Premadhis; Mandal, Nripes Kumar; Sinha, Bikas Kumar
2015-01-01
This book primarily addresses the optimality aspects of covariate designs. A covariate model is a combination of ANOVA and regression models. Optimal estimation of the parameters of the model using a suitable choice of designs is of great importance; as such choices allow experimenters to extract maximum information for the unknown model parameters. The main emphasis of this monograph is to start with an assumed covariate model in combination with some standard ANOVA set-ups such as CRD, RBD, BIBD, GDD, BTIBD, BPEBD, cross-over, multi-factor, split-plot and strip-plot designs, treatment control designs, etc. and discuss the nature and availability of optimal covariate designs. In some situations, optimal estimations of both ANOVA and the regression parameters are provided. Global optimality and D-optimality criteria are mainly used in selecting the design. The standard optimality results of both discrete and continuous set-ups have been adapted, and several novel combinatorial techniques have been applied for...
Suppression of stratified explosive interactions
Energy Technology Data Exchange (ETDEWEB)
Meeks, M.K.; Shamoun, B.I.; Bonazza, R.; Corradini, M.L. [Wisconsin Univ., Madison, WI (United States). Dept. of Nuclear Engineering and Engineering Physics
1998-01-01
Stratified Fuel-Coolant Interaction (FCI) experiments with Refrigerant-134a and water were performed in a large-scale system. Air was uniformly injected into the coolant pool to establish a pre-existing void that could suppress the explosion. Two competing effects of varying the air flow rate appear to influence the intensity of the explosion in this geometrical configuration. At low flow rates, although the injected air increases the void fraction, the concurrent agitation and mixing increase the intensity of the interaction. At higher flow rates, the increase in void fraction tends to attenuate the propagated pressure wave generated by the explosion. Experimental results show complete suppression of the vapor explosion at high rates of air injection, corresponding to an average void fraction larger than 30%. (author)
Institute of Scientific and Technical Information of China (English)
孟朝霞
2012-01-01
Based on the stratified project-based teaching methodology, this paper emphasizes the selection and design of project content, learning content and teaching practice at different levels for students of different levels within the same course content and classroom. The methodology aims at establishing a stratified teaching model for the C programming course that takes application ability as its core, improving the teaching quality of public computer courses, and strengthening the training of computer application skills for application-oriented undergraduates at local colleges.
Directory of Open Access Journals (Sweden)
Severino Cavalcante de Sousa Júnior
2010-05-01
Full Text Available In order to estimate covariance functions using random regression models on Legendre polynomials, 35,732 weight records from birth to 660 days of age of 8,458 Tabapuã cattle were used. The models included, as random effects, the direct additive genetic, maternal genetic, and animal and maternal permanent environmental effects; as fixed effects, contemporary groups; and, as covariates, the age of the animal at weighing and the age of the dam at calving (linear and quadratic). Over age at weighing, an orthogonal Legendre polynomial (cubic regression) was used to model the mean curve of the population. The residual was modeled with seven variance classes, and the models were compared by the Schwarz Bayesian and Akaike information criteria. The best model had orders 4, 3, 6 and 3 for the direct additive genetic, maternal genetic, animal permanent environmental and maternal permanent environmental effects, respectively. Covariance and heritability estimates obtained with a two-trait model and with random regression were similar. Heritability estimates for the direct additive genetic effect obtained with the random regression model increased from birth (0.15) to 660 days of age (0.45). Higher maternal heritability estimates were obtained for weights measured shortly after birth. Genetic correlations ranged from moderate to high and decreased as the interval between weighings increased. Selection for heavier weights at any age promotes greater weight gain from birth to 660 days of age.
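To make the random-regression idea concrete: once the covariance matrix K of the Legendre regression coefficients is estimated, the covariance between weights at any two ages is G(a1, a2) = φ(a1)ᵀ K φ(a2), where φ(a) stacks the Legendre polynomials evaluated at the standardized age. The sketch below uses a purely hypothetical K (the paper estimates it from the 35,732 records); only the mechanics are illustrated.

```python
# Sketch: recovering a covariance function from a random regression model
# on Legendre polynomials. K_dir is a HYPOTHETICAL coefficient covariance
# matrix for the direct additive genetic effect (order 4, as in the paper);
# its values are illustrative only.

def legendre_basis(age, a_min=0.0, a_max=660.0, order=4):
    """Evaluate the first `order` Legendre polynomials at a standardized age."""
    x = 2.0 * (age - a_min) / (a_max - a_min) - 1.0  # map age to [-1, 1]
    p = [1.0, x]
    for n in range(1, order - 1):
        # Bonnet's recurrence: (n+1) P_{n+1} = (2n+1) x P_n - n P_{n-1}
        p.append(((2 * n + 1) * x * p[n] - n * p[n - 1]) / (n + 1))
    return p[:order]

def covariance_function(age1, age2, K):
    """G(a1, a2) = phi(a1)' K phi(a2): covariance between weights at two ages."""
    p1 = legendre_basis(age1, order=len(K))
    p2 = legendre_basis(age2, order=len(K))
    return sum(p1[i] * K[i][j] * p2[j]
               for i in range(len(K)) for j in range(len(K)))

K_dir = [[120.0, 40.0, 10.0, 2.0],
         [ 40.0, 60.0,  8.0, 1.0],
         [ 10.0,  8.0, 20.0, 0.5],
         [  2.0,  1.0,  0.5, 5.0]]

var_birth = covariance_function(0.0, 0.0, K_dir)    # genetic variance at birth
var_660 = covariance_function(660.0, 660.0, K_dir)  # genetic variance at 660 d
cov_0_660 = covariance_function(0.0, 660.0, K_dir)  # covariance between the two
```

Dividing such genetic (co)variances by the corresponding phenotypic ones yields the age-specific heritabilities and genetic correlations reported in the abstract.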
Covariant formulation of pion-nucleon scattering
Lahiff, A. D.; Afnan, I. R.
A covariant model of elastic pion-nucleon scattering based on the Bethe-Salpeter equation is presented. The kernel consists of s- and u-channel nucleon and delta poles, along with rho and sigma exchange in the t-channel. A good fit is obtained to the s- and p-wave phase shifts up to the two-pion production threshold.
On turbulence in a stratified environment
Sarkar, Sutanu
2015-11-01
John Lumley, motivated by atmospheric observations, made seminal contributions to the statistical theory (Lumley and Panofsky 1964, Lumley 1964) and second-order modeling (Zeman and Lumley 1976) of turbulence in the environment. Turbulent processes in the ocean share many features with the atmosphere, e.g., shear, stratification, rotation and rough topography. Results from direct and large eddy simulations of two model problems will be used to illustrate some of the features of turbulence in a stratified environment. The first problem concerns a shear layer in nonuniform stratification, a situation typical of both the atmosphere and the ocean. The second problem, considered to be responsible for much of the turbulent mixing that occurs in the ocean interior, concerns topographically generated internal gravity waves. Connections will be made to data taken during observational campaigns in the ocean.
Relativistic Covariance and Quark-Diquark Wave Functions
Dillig, M
2006-01-01
We derive covariant wave functions for hadrons composed of two constituents for arbitrary Lorentz boosts. Focussing explicitly on baryons as quark-diquark systems, we reduce their manifestly covariant Bethe-Salpeter equation to covariant 3-dimensional forms by projecting on the relative quark-diquark energy. Guided by a phenomenological multi-gluon exchange representation of covariant confining kernels, we derive explicit solutions for harmonic confinement and for the MIT Bag Model. We briefly sketch implications of breaking the spherical symmetry of the ground state and the transition from the instant form to the light cone via the infinite momentum frame.
Stratified wake of an accelerating hydrofoil
Ben-Gida, Hadar; Gurka, Roi
2015-01-01
Wakes of towed and self-propelled bodies in stratified fluids are significantly different from non-stratified wakes. The long-time effects of stratification on the development of the wakes of bluff bodies moving at constant speed are well known. In this experimental study we demonstrate how buoyancy affects the initial growth of vortices developing in the wake of a hydrofoil accelerating from rest. Particle image velocimetry measurements were used to characterize the wake evolution behind a NACA 0015 hydrofoil accelerating in water at low Reynolds number in a relatively strong, stably stratified fluid (Re=5,000, Fr~O(1)). The analysis of velocity and vorticity fields, following vortex identification and an estimate of the circulation, reveals that the vortices in the stratified fluid case are stretched along the streamwise direction in the near wake. The momentum thickness profiles show lower momentum thickness values for the stratified late wake compared to the non-stratified wake, implying that the dra...
Information content of household-stratified epidemics
Directory of Open Access Journals (Sweden)
T.M. Kinyanjui
2016-09-01
Full Text Available Household structure is a key driver of many infectious diseases, as well as a natural target for interventions such as vaccination programs. Many theoretical and conceptual advances on household-stratified epidemic models are relatively recent, but have successfully managed to increase the applicability of such models to practical problems. To be of maximum realism and hence benefit, they require parameterisation from epidemiological data, and while household-stratified final-size data have been the traditional source, time-series infection data from households are increasingly becoming available. This paper is concerned with the design of studies aimed at collecting time-series epidemic data in order to maximize the amount of information available to calibrate household models. A design decision involves a trade-off between the number of households to enrol and the sampling frequency. Two commonly used epidemiological study designs are considered: cross-sectional, where different households are sampled at every time point, and cohort, where the same households are followed over the course of the study period. The search for an optimal design uses Bayesian computationally intensive methods to explore the joint parameter-design space combined with the Shannon entropy of the posteriors to estimate the amount of information in each design. For the cross-sectional design, the amount of information increases with the sampling intensity, i.e., the designs with the highest number of time points have the most information. On the other hand, the cohort design often exhibits a trade-off between the number of households sampled and the intensity of follow-up. Our results broadly support the choices made in existing epidemiological data collection studies. Prospective problem-specific use of our computational methods can bring significant benefits in guiding future study designs.
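The information measure used above can be sketched very simply: draw samples from the posterior obtained under each candidate design, estimate the Shannon entropy of each posterior, and prefer the design with lower posterior entropy (more information about the parameter). The posteriors below are mock Gaussians standing in for MCMC output; the bin width and parameter values are illustrative assumptions, not the paper's.

```python
# Plug-in Shannon entropy of posterior samples via histogram binning.
# Lower posterior entropy = more informative study design.
import math
import random
from collections import Counter

def entropy_from_samples(samples, bin_width=0.1):
    """Histogram plug-in estimate of Shannon entropy (in nats)."""
    bins = Counter(math.floor(s / bin_width) for s in samples)
    n = len(samples)
    return -sum((c / n) * math.log(c / n) for c in bins.values())

# Mock posteriors for a transmission-rate parameter under two designs:
# sparse sampling leaves a diffuse posterior; frequent follow-up concentrates it.
rng = random.Random(1)
diffuse = [rng.gauss(0.5, 0.30) for _ in range(5000)]       # few time points
concentrated = [rng.gauss(0.5, 0.05) for _ in range(5000)]  # many time points

# Comparing entropy_from_samples(diffuse) with entropy_from_samples(concentrated)
# quantifies the information gained by the more intensive design.
```

In the paper this comparison is embedded in a joint exploration of the parameter-design space; the sketch only shows the entropy criterion itself.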
Linear Inviscid Damping for Couette Flow in Stratified Fluid
Yang, Jincheng
2016-01-01
We study the inviscid damping of Couette flow with an exponentially stratified density. The optimal decay rates of the velocity field and density are obtained for general perturbations with minimal regularity. For the Boussinesq approximation model, the decay rates we get are consistent with previous results in the literature. We also study the decay rates for the full equations of stratified fluids, which were not studied before. For both models, the decay rates depend on the Richardson number in a very similar way. In addition, we study the inviscid damping of perturbations due to the exponential stratification when there is no shear.
Gaussian covariance matrices for anisotropic galaxy clustering measurements
Grieb, Jan Niklas; Salazar-Albornoz, Salvador; Vecchia, Claudio dalla
2015-01-01
Measurements of the redshift-space galaxy clustering have been a prolific source of cosmological information in recent years. In the era of precision cosmology, accurate covariance estimates are an essential step for the validation of galaxy clustering models of the redshift-space two-point statistics. For cases where only a limited set of simulations is available, assessing the data covariance is not possible or only leads to a noisy estimate. Also, relying on simulated realisations of the survey data means that tests of the cosmology dependence of the covariance are expensive. With these two points in mind, this work aims at presenting a simple theoretical model for the linear covariance of anisotropic galaxy clustering observations with synthetic catalogues. Considering the Legendre moments (`multipoles') of the two-point statistics and projections into wide bins of the line-of-sight parameter (`clustering wedges'), we describe the modelling of the covariance for these anisotropic clustering measurements f...
Numerical Simulation on Stratified Flow over an Isolated Mountain Ridge
Institute of Scientific and Technical Information of China (English)
LI Ling; Shigeo Kimura
2007-01-01
The characteristics of stratified flow over an isolated mountain ridge have been investigated numerically. The two-dimensional model equations, based on the time-dependent Reynolds-averaged Navier-Stokes equations, are solved numerically using implicit time integration in a body-fitted grid arrangement to simulate stratified flow over an isolated, ideally bell-shaped mountain. The simulation results are in good agreement with the existing corresponding analytical and approximate solutions. It is shown that for atmospheric conditions where non-hydrostatic effects become dominant, the model is able to reproduce typical flow features. The dispersion characteristics of gaseous pollutants in the stratified flow have also been studied. The dispersion patterns for two typical atmospheric conditions are compared. The results show that the presence of a gravity wave causes vertical stratification of the pollutant concentration and affects the diffusive characteristics of the pollutants.
Shestakova, Tatiana A; Aguilera, Mònica; Ferrio, Juan Pedro; Gutiérrez, Emilia; Voltas, Jordi
2014-08-01
Identifying how physiological responses are structured across environmental gradients is critical to understanding in what manner ecological factors determine tree performance. Here, we investigated the spatiotemporal patterns of signal strength of carbon isotope discrimination (Δ(13)C) and oxygen isotope composition (δ(18)O) for three deciduous oaks (Quercus faginea (Lam.), Q. humilis Mill. and Q. petraea (Matt.) Liebl.) and one evergreen oak (Q. ilex L.) co-occurring in Mediterranean forests along an aridity gradient. We hypothesized that contrasting strategies in response to drought would lead to differential climate sensitivities between functional groups. Such differential sensitivities could result in a contrasting imprint on stable isotopes, depending on whether the spatial or temporal organization of tree-ring signals was analysed. To test these hypotheses, we proposed a mixed modelling framework to group isotopic records into potentially homogeneous subsets according to taxonomic or geographical criteria. To this end, carbon and oxygen isotopes were modelled through different variance-covariance structures for the variability among years (at the temporal level) or sites (at the spatial level). Signal-strength parameters were estimated from the outcome of selected models. We found striking differences between deciduous and evergreen oaks in the organization of their temporal and spatial signals. Therefore, the relationships with climate were examined independently for each functional group. While Q. ilex exhibited a large spatial dependence of isotopic signals on the temperature regime, deciduous oaks showed a greater dependence on precipitation, confirming their higher susceptibility to drought. Such contrasting responses to drought among oak types were also observed at the temporal level (interannual variability), with stronger associations with growing-season water availability in deciduous oaks. Thus, our results indicate that Mediterranean deciduous
Clustering of floating particles in stratified turbulence
Boffetta, Guido; de Lillo, Filippo; Musacchio, Stefano; Sozza, Alessandro
2016-11-01
We study the dynamics of small floating particles transported by stratified turbulence in presence of a mean linear density profile as a simple model for the confinement and the accumulation of plankton in the ocean. By means of extensive direct numerical simulations we investigate the statistical distribution of floaters as a function of the two dimensionless parameters of the problem. We find that vertical confinement of particles is mainly ruled by the degree of stratification, with a weak dependency on the particle properties. Conversely, small scale fractal clustering, typical of non-neutral particles in turbulence, depends on the particle relaxation time and is only weakly dependent on the flow stratification. The implications of our findings for the formation of thin phytoplankton layers are discussed.
Unbiased risk estimation method for covariance estimation
Lescornel, Hélène; Chabriac, Claudie
2011-01-01
We consider a model selection estimator of the covariance of a random process. Using the Unbiased Risk Estimation (URE) method, we build an estimator of the risk which allows selection of an estimator from a collection of models. We then present an oracle inequality which ensures that the risk of the selected estimator is close to the risk of the oracle. Simulations show the efficiency of this methodology.
Covariant representations of subproduct systems
Viselter, Ami
2010-01-01
A celebrated theorem of Pimsner states that a covariant representation $T$ of a $C^*$-correspondence $E$ extends to a $C^*$-representation of the Toeplitz algebra of $E$ if and only if $T$ is isometric. This paper is mainly concerned with finding conditions for a covariant representation of a \emph{subproduct system} to extend to a $C^*$-representation of the Toeplitz algebra. This framework is much more general than the former. We are able to find sufficient conditions, and show that in important special cases, they are also necessary. Further results include the universality of the tensor algebra, dilations of completely contractive covariant representations, Wold decompositions and von Neumann inequalities.
Core science: Stratified by a sunken impactor
Nakajima, Miki
2016-10-01
There is potential evidence for a stratified layer at the top of the Earth's core, but its origin is not well understood. Laboratory experiments suggest that the stratified layer could be a sunken remnant of the giant impact that formed the Moon.
General covariance in computational electrodynamics
DEFF Research Database (Denmark)
Shyroki, Dzmitry; Lægsgaard, Jesper; Bang, Ole;
2007-01-01
We advocate the generally covariant formulation of Maxwell equations as underpinning some recent advances in computational electrodynamics—in the dimensionality reduction for separable structures; in mesh truncation for finite-difference computations; and in adaptive coordinate mapping as opposed...
Zhao, Wenle; Hill, Michael D; Palesch, Yuko
2015-12-01
In many clinical trials, baseline covariates could affect the primary outcome. Commonly used strategies to balance baseline covariates include stratified constrained randomization and minimization. Stratification is limited to a few categorical covariates, minimization lacks randomness in treatment allocation, and both apply only to categorical covariates. As a result, serious imbalances could occur in important baseline covariates not included in the randomization algorithm. Furthermore, randomness of treatment allocation could be significantly compromised because of the high proportion of deterministic assignments associated with stratified block randomization and minimization, potentially resulting in selection bias. Serious baseline covariate imbalances and selection biases often contribute to controversial interpretation of trial results. The National Institute of Neurological Disorders and Stroke recombinant tissue plasminogen activator Stroke Trial and the Captopril Prevention Project are two examples. In this article, we propose a new randomization strategy, termed minimal sufficient balance randomization, which will dually prevent serious imbalances in all important baseline covariates, including both categorical and continuous types, and preserve the randomness of treatment allocation. Computer simulations are conducted using the data from the National Institute of Neurological Disorders and Stroke recombinant tissue plasminogen activator Stroke Trial. Serious imbalances in four continuous and one categorical covariate are prevented with a small cost in treatment allocation randomness. A scenario of simultaneously balancing 11 baseline covariates is explored with similar promising results. The proposed minimal sufficient balance randomization algorithm can be easily implemented in computerized central randomization systems for large multicenter trials.
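The core of the strategy is an intervene-only-when-needed biased coin: assign by plain 1:1 randomization unless a covariate would become seriously imbalanced, in which case bias the coin toward the arm that reduces the imbalance. The sketch below handles a single continuous covariate with an illustrative imbalance threshold and coin bias; it is a simplified reading of the idea, not the authors' published algorithm.

```python
# Minimal-sufficient-balance-style biased-coin assignment (simplified sketch:
# one continuous covariate; threshold and bias values are illustrative).
import random

def standardized_difference(arm_a, arm_b):
    """Absolute standardized difference in covariate means between arms."""
    def mean(v): return sum(v) / len(v)
    def var(v):
        m = mean(v)
        return sum((x - m) ** 2 for x in v) / (len(v) - 1)
    pooled = ((var(arm_a) + var(arm_b)) / 2) ** 0.5
    return abs(mean(arm_a) - mean(arm_b)) / pooled if pooled > 0 else 0.0

def assign(new_cov, arm_a, arm_b, threshold=0.3, biased_p=0.7, rng=random):
    """Bias the coin only when assignment would leave a serious imbalance."""
    d_if_a = standardized_difference(arm_a + [new_cov], arm_b)
    d_if_b = standardized_difference(arm_a, arm_b + [new_cov])
    if max(d_if_a, d_if_b) > threshold and d_if_a != d_if_b:
        favored = 'A' if d_if_a < d_if_b else 'B'
        p_a = biased_p if favored == 'A' else 1 - biased_p
    else:
        p_a = 0.5  # no serious imbalance: plain 1:1 randomization
    return 'A' if rng.random() < p_a else 'B'
```

Because the coin stays fair whenever balance is acceptable, most assignments remain fully random, which is the "small cost in treatment allocation randomness" the abstract reports. The real method monitors many covariates at once and tests imbalance more formally.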
Quantification of Covariance in Tropical Cyclone Activity across Teleconnected Basins
Tolwinski-Ward, S. E.; Wang, D.
2015-12-01
Rigorous statistical quantification of natural hazard covariance across regions has important implications for risk management, and is also of fundamental scientific interest. We present a multivariate Bayesian Poisson regression model for inferring the covariance in tropical cyclone (TC) counts across multiple ocean basins and across Saffir-Simpson intensity categories. Such covariability results from the influence of large-scale modes of climate variability on local environments that can alternately suppress or enhance TC genesis and intensification, and our model also simultaneously quantifies the covariance of TC counts with various climatic modes in order to deduce the source of inter-basin TC covariability. The model explicitly treats the time-dependent uncertainty in observed maximum sustained wind data, and hence the nominal intensity category of each TC. Differences in annual TC counts as measured by different agencies are also formally addressed. The probabilistic output of the model can be probed for probabilistic answers to such questions as:
- Does the relationship between different categories of TCs differ statistically by basin?
- Which climatic predictors have significant relationships with TC activity in each basin?
- Are the relationships between counts in different basins conditionally independent given the climatic predictors, or are there other factors at play affecting inter-basin covariability?
- How can a portfolio of insured property be optimized across space to minimize risk?
Although we present results of our model applied to TCs, the framework is generalizable to covariance estimation between multivariate counts of natural hazards across regions and/or across peril types.
Population dynamics of sinking phytoplankton in stratified waters
Huisman, J.; Sommeijer, B.P.
2002-01-01
We analyze the predictions of a reaction-advection-diffusion model to pinpoint the necessary conditions for bloom development of sinking phytoplankton species in stratified waters. This reveals that there are two parameter windows that can sustain sinking phytoplankton, a turbulence window and a therm
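A reaction-advection-diffusion model of this kind balances sinking (advection), turbulent mixing (diffusion), and light-limited growth minus losses (reaction) along a vertical water column. The explicit finite-difference sketch below illustrates the structure of such a model; all parameter values and the exponential light-decay form are illustrative assumptions, not the paper's.

```python
# Minimal explicit scheme for dP/dt = -v dP/dz + D d2P/dz2 + (mu(z) - m) P,
# a sketch of the sinking-phytoplankton model class (illustrative parameters).
import math

def step(P, dz, dt, v=1.0, D=5.0, m=0.1, mu0=0.5, kz=0.05):
    """One explicit time step on a vertical grid (index 0 = surface)."""
    n = len(P)
    new = P[:]
    for i in range(1, n - 1):
        adv = -v * (P[i] - P[i - 1]) / dz            # upwind (sinking, v > 0)
        dif = D * (P[i + 1] - 2 * P[i] + P[i - 1]) / dz**2
        mu = mu0 * math.exp(-kz * i * dz)            # growth fades with depth
        new[i] = P[i] + dt * (adv + dif + (mu - m) * P[i])
    new[0], new[-1] = new[1], new[-2]                # no-flux boundaries
    return new

# 100 m water column, initial phytoplankton patch near the surface
dz, dt = 1.0, 0.01
P = [math.exp(-((i - 10) ** 2) / 20.0) for i in range(100)]
for _ in range(1000):
    P = step(P, dz, dt)
```

Whether the population persists depends on the competition between sinking out of the lit surface layer and turbulent return, which is exactly the parameter-window question the abstract addresses.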
Optimal stratification of item pools in α-stratified computerized adaptive testing
Chang, Hua-Hua; Linden, van der Wim J.
2003-01-01
A method based on 0-1 linear programming (LP) is presented to stratify an item pool optimally for use in α-stratified adaptive testing. Because the 0-1 LP model belongs to the subclass of models with a network flow structure, efficient solutions are possible. The method is applied to a previous item
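The structure alpha-stratification seeks is easy to picture: partition the pool into strata of increasing item discrimination (a-parameter), so early stages of the adaptive test draw low-a items and later stages high-a items. The sketch below is a deliberately simple greedy partition by sorted a-values; the paper instead formulates the task as a 0-1 linear program with network-flow structure, which can balance additional content constraints that this sketch ignores.

```python
# Greedy sketch of alpha-stratification (NOT the paper's 0-1 LP method):
# split the item pool into k strata of ascending discrimination.

def alpha_stratify(items, k):
    """items: list of (item_id, a_parameter); returns k equal-size strata."""
    ranked = sorted(items, key=lambda it: it[1])  # ascending discrimination
    size = len(ranked) // k
    return [ranked[j * size:(j + 1) * size] for j in range(k)]

# hypothetical pool of 8 items with their discrimination parameters
pool = list(enumerate([0.6, 1.4, 0.9, 2.0, 0.4, 1.1, 1.7, 0.8]))
strata = alpha_stratify(pool, 2)
# strata[0] holds the low-discrimination items, strata[1] the high ones
```

The LP formulation matters when strata must simultaneously balance difficulty distributions, content areas, or enemy-item constraints, which a one-dimensional sort cannot guarantee.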
Development of covariance capabilities in EMPIRE code
Energy Technology Data Exchange (ETDEWEB)
Herman,M.; Pigni, M.T.; Oblozinsky, P.; Mughabghab, S.F.; Mattoon, C.M.; Capote, R.; Cho, Young-Sik; Trkov, A.
2008-06-24
The nuclear reaction code EMPIRE has been extended to provide evaluation capabilities for neutron cross section covariances in the thermal, resolved resonance, unresolved resonance and fast neutron regions. The Atlas of Neutron Resonances by Mughabghab is used as a primary source of information on uncertainties at low energies. Care is taken to ensure consistency among the resonance parameter uncertainties and those for thermal cross sections. The resulting resonance parameter covariances are formatted in the ENDF-6 File 32. In the fast neutron range our methodology is based on model calculations with the code EMPIRE combined with experimental data through several available approaches. The model-based covariances can be obtained using deterministic (Kalman) or stochastic (Monte Carlo) propagation of model parameter uncertainties. We show that these two procedures yield comparable results. The Kalman filter and/or the generalized least square fitting procedures are employed to incorporate experimental information. We compare the two approaches analyzing results for the major reaction channels on {sup 89}Y. We also discuss a long-standing issue of unreasonably low uncertainties and link it to the rigidity of the model.
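The deterministic ("Kalman-style") route mentioned above propagates model-parameter uncertainties to cross-section covariances through the sandwich rule C = S Σ Sᵀ, where S holds the sensitivities of each cross-section point to each model parameter and Σ is the parameter covariance. The numbers below are illustrative stand-ins; EMPIRE computes the sensitivities from nuclear-model calculations.

```python
# Sandwich-rule propagation of parameter covariance to cross-section
# covariance: C[i][j] = sum_pq S[i][p] * Sigma[p][q] * S[j][q].
# S and Sigma here are HYPOTHETICAL illustrative values.

def propagate(S, Sigma):
    n, m = len(S), len(Sigma)
    return [[sum(S[i][p] * Sigma[p][q] * S[j][q]
                 for p in range(m) for q in range(m))
             for j in range(n)] for i in range(n)]

S = [[0.8, 0.1],     # sensitivities of 3 energy points to 2 model parameters
     [0.5, 0.4],
     [0.2, 0.7]]
Sigma = [[0.04, 0.01],     # parameter covariance (relative units)
         [0.01, 0.0225]]
C = propagate(S, Sigma)    # resulting 3x3 cross-section covariance
```

The Monte Carlo alternative draws parameter sets from Σ, reruns the model, and computes the sample covariance of the outputs; the abstract notes the two procedures yield comparable results.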
Covariant Quantum Gravity with Continuous Quantum Geometry I: Covariant Hamiltonian Framework
Pilc, Marián
2016-01-01
The first part of the series is devoted to the formulation of the Einstein-Cartan theory within the covariant Hamiltonian framework. In the first section the general multisymplectic approach is reviewed and the notion of d-jet bundles is introduced. Since the whole Standard Model Lagrangian (including gravity) can be written as a functional of forms, the structure of d-jet bundles is more appropriate for covariant Hamiltonian analysis than the standard jet bundle approach. The definition of the local covariant Poisson bracket on the space of covariant observables is recalled. The main goal of the work is to show that the gauge group of the Einstein-Cartan theory is given by the semidirect product of the local Lorentz group and the group of spacetime diffeomorphisms. Vanishing of the integral generators of the gauge group is equivalent to the equations of motion of the Einstein-Cartan theory, and the local covariant algebra generated by Noether's currents forms a closed Lie algebra.
Institute of Scientific and Technical Information of China (English)
王建宏; 朱永红; 肖绚
2012-01-01
When the observed input-output data in an aircraft flutter stochastic model are corrupted by measurement noise, accurate model parameters are still required. We combine the instrumental variable identification method with the covariance matching method to develop a new method: the instrumental variable covariance method. In the stochastic model of aircraft flutter, we introduce instrumental variables to construct a covariance function, present the procedure for minimizing the criterion function, and give the corresponding partial derivative expressions in detail. Based on asymptotic analysis theory, we derive the asymptotic covariance matrix of the parameter estimates; this matrix can be used both to assess the effectiveness of the identification method and to design an optimal external excitation signal. The method is applied to identify the transfer function of the current-loop plant of a flight simulator turntable and the flutter modal parameters of an aircraft, and simulations verify its effectiveness.
Covariant holography of a tachyonic accelerating universe
Rozas-Fernández, Alberto
2014-01-01
We apply the holographic principle to a flat dark energy dominated Friedmann-Robertson-Walker spacetime filled with a tachyon scalar field with constant equation of state $w=p/\rho$, both for $w>-1$ and $w<-1$. By using a geometrical covariant procedure, which allows the construction of holographic hypersurfaces, we have obtained for each case the position of the preferred screen and have then compared these with those obtained by using the holographic dark energy model with the future event horizon as the infrared cutoff. In the phantom scenario, one of the two obtained holographic screens is placed on the big rip hypersurface, both for the covariant holographic formalism and the holographic phantom model. It is also analysed whether the existence of these preferred screens allows a mathematically consistent formulation of fundamental theories based on the existence of an S-matrix at infinite distances.
Covariant holography of a tachyonic accelerating universe
Energy Technology Data Exchange (ETDEWEB)
Rozas-Fernandez, Alberto [Consejo Superior de Investigaciones Cientificas, Instituto de Fisica Fundamental, Madrid (Spain); University of Portsmouth, Institute of Cosmology and Gravitation, Portsmouth (United Kingdom)
2014-08-15
We apply the holographic principle to a flat dark energy dominated Friedmann-Robertson-Walker spacetime filled with a tachyon scalar field with constant equation of state w = p/ρ, both for w > -1 and w < -1. By using a geometrical covariant procedure, which allows the construction of holographic hypersurfaces, we have obtained for each case the position of the preferred screen and have then compared these with those obtained by using the holographic dark energy model with the future event horizon as the infrared cutoff. In the phantom scenario, one of the two obtained holographic screens is placed on the big rip hypersurface, both for the covariant holographic formalism and the holographic phantom model. It is also analyzed whether the existence of these preferred screens allows a mathematically consistent formulation of fundamental theories based on the existence of an S-matrix at infinite distances. (orig.)
Boer, Martin P; Wright, Deanne; Feng, Lizhi; Podlich, Dean W; Luo, Lang; Cooper, Mark; van Eeuwijk, Fred A
2007-11-01
Complex quantitative traits of plants as measured on collections of genotypes across multiple environments are the outcome of processes that depend in intricate ways on genotype and environment simultaneously. For a better understanding of the genetic architecture of such traits as observed across environments, genotype-by-environment interaction should be modeled with statistical models that use explicit information on genotypes and environments. The modeling approach we propose explains genotype-by-environment interaction by differential quantitative trait locus (QTL) expression in relation to environmental variables. We analyzed grain yield and grain moisture for an experimental data set composed of 976 F(5) maize testcross progenies evaluated across 12 environments in the U.S. corn belt during 1994 and 1995. The strategy we used was based on mixed models and started with a phenotypic analysis of multi-environment data, modeling genotype-by-environment interactions and associated genetic correlations between environments, while taking into account intraenvironmental error structures. The phenotypic mixed models were then extended to QTL models via the incorporation of marker information as genotypic covariables. A majority of the detected QTL showed significant QTL-by-environment interactions (QEI). The QEI were further analyzed by including environmental covariates into the mixed model. Most QEI could be understood as differential QTL expression conditional on longitude or year, both consequences of temperature differences during critical stages of the growth.
Covariate-adjusted measures of discrimination for survival data
DEFF Research Database (Denmark)
White, Ian R; Rapsomaniki, Eleni; Frikke-Schmidt, Ruth
2015-01-01
MOTIVATION: Discrimination statistics describe the ability of a survival model to assign higher risks to individuals who experience earlier events: examples are Harrell's C-index and Royston and Sauerbrei's D, which we call the D-index. Prognostic covariates whose distributions are controlled by the study design (e.g. age and sex) influence discrimination and can make it difficult to compare model discrimination between studies. Although covariate adjustment is a standard procedure for quantifying disease-risk factor associations, there are no covariate adjustment methods for discrimination...
Torsion and geometrostasis in covariant superstrings
Energy Technology Data Exchange (ETDEWEB)
Zachos, C.
1985-01-01
The covariant action for freely propagating heterotic superstrings consists of a metric and a torsion term with a special relative strength. It is shown that the strength for which torsion flattens the underlying 10-dimensional superspace geometry is precisely that which yields free oscillators on the light cone. This is in complete analogy with the geometrostasis of two-dimensional sigma-models with Wess-Zumino interactions. 13 refs.
Calcul Stochastique Covariant à Sauts & Calcul Stochastique à Sauts Covariants (Covariant Stochastic Calculus with Jumps and Stochastic Calculus with Covariant Jumps)
Maillard-Teyssier, Laurence
2003-01-01
We propose a stochastic covariant calculus for càdlàg semimartingales in the tangent bundle $TM$ over a manifold $M$. A connection on $M$ allows us to define an intrinsic derivative of a $C^1$ curve $(Y_t)$ in $TM$, the covariant derivative. More precisely, it is the derivative of $(Y_t)$ seen in a frame moving parallelly along its projection curve $(x_t)$ on $M$. With the transfer principle, Norris defined the stochastic covariant integration along a continuous semimartingale in $TM$. We describe t...
Covariate-free and Covariate-dependent Reliability.
Bentler, Peter M
2016-12-01
Classical test theory reliability coefficients are said to be population specific. Reliability generalization, a meta-analysis method, is the main procedure for evaluating the stability of reliability coefficients across populations. A new approach is developed to evaluate the degree of invariance of reliability coefficients to population characteristics. Factor or common variance of a reliability measure is partitioned into parts that are, and are not, influenced by control variables, resulting in a partition of reliability into a covariate-dependent and a covariate-free part. The approach can be implemented in a single sample and can be applied to a variety of reliability coefficients.
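The partition described above can be sketched numerically: regress the common (true-score) component of the measure on a control variable, and use the resulting R² to split reliability into a covariate-dependent and a covariate-free part. This one-covariate sketch illustrates the idea only; Bentler's procedure works within a full factor model and handles multiple control variables.

```python
# Sketch: splitting reliability by the R^2 of a covariate regression
# (simplified one-covariate illustration, not the published factor-model method).

def mean(v): return sum(v) / len(v)

def variance(v):
    m = mean(v)
    return sum((x - m) ** 2 for x in v) / len(v)

def r_squared(y, x):
    """R^2 of a simple linear regression of y on x."""
    mx, my = mean(x), mean(y)
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sxx = sum((a - mx) ** 2 for a in x)
    return (sxy ** 2 / sxx) / (variance(y) * len(y))

def partition_reliability(common_var, error_var, r2):
    """Reliability = common/(common+error), split by covariate R^2."""
    rel = common_var / (common_var + error_var)
    return {'reliability': rel,
            'covariate_dependent': rel * r2,
            'covariate_free': rel * (1 - r2)}
```

For example, a measure with common variance 0.8 and error variance 0.2, of whose common variance 25% is explained by the covariate, has reliability 0.8 partitioned into a covariate-dependent part of 0.2 and a covariate-free part of 0.6; the covariate-free part is the piece expected to generalize across populations differing on that covariate.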
Computation of mixing in large stably stratified enclosures
Zhao, Haihua
This dissertation presents a set of new numerical models for the mixing and heat transfer problems in large stably stratified enclosures. Based on these models, a new computer code, BMIX++ (Berkeley mechanistic MIXing code in C++), was developed by Christensen (2001) and the author. Traditional lumped control volume methods and zone models cannot model the detailed information about the distributions of temperature, density, and pressure in enclosures and therefore can have significant errors. 2-D and 3-D CFD methods require very fine grid resolution to resolve thin substructures such as jets and wall boundaries, yet such fine grid resolution is difficult or impossible to provide due to computational expense. Peterson's scaling (1994) showed that stratified mixing processes in large stably stratified enclosures can be described using one-dimensional differential equations, with the vertical transport by free and wall jets modeled using standard integral techniques. This allows very large reductions in computational effort compared to three-dimensional numerical modeling of turbulent mixing in large enclosures. The BMIX++ code was developed to implement the above ideas. The code uses a Lagrangian approach to solve 1-D transient governing equations for the ambient fluid and uses analytical models or 1-D integral models to compute substructures. A 1-D transient conduction model for the solid boundaries, pressure computation and opening models are also included to make the code more versatile. The BMIX++ code was implemented in C++ and the Object-Oriented-Programming (OOP) technique was used intensively. The BMIX++ code was successfully applied to different types of mixing problems such as stratification in a water tank due to a heater inside, water tank exchange flow experiment simulation, early stage building fire analysis, stratification produced by multiple plumes, and simulations for the UCB large enclosure experiments. Most of these simulations gave satisfying
The Universal Aspect Ratio of Vortices in Rotating Stratified Flows: Experiments and Observations
Aubert, Oriane; Le Gal, Patrice; Marcus, Philip S
2012-01-01
We validate a new law for the aspect ratio $\\alpha = H/L$ of vortices in a rotating, stratified flow, where $H$ and $L$ are the vertical half-height and horizontal length scale of the vortices. The aspect ratio depends not only on the Coriolis parameter f and buoyancy (or Brunt-Vaisala) frequency $\\bar{N}$ of the background flow, but also on the buoyancy frequency $N_c$ within the vortex and on the Rossby number $Ro$ of the vortex such that $\\alpha = f \\sqrt{[Ro (1 + Ro)/(N_c^2- \\bar{N}^2)]}$. This law for $\\alpha$ is obeyed precisely by the exact equilibrium solution of the inviscid Boussinesq equations that we show to be a useful model of our laboratory vortices. The law is valid for both cyclones and anticyclones. Our anticyclones are generated by injecting fluid into a rotating tank filled with linearly-stratified salt water. The vortices are far from the top and bottom boundaries of the tank, so there is no Ekman circulation. In one set of experiments, the vortices viscously decay, but as they do, they c...
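The aspect-ratio law above is straightforward to evaluate numerically. The sketch below computes $\alpha$ for illustrative parameter values chosen here for demonstration, not taken from the experiments; the function name is ours.

```python
import math

def vortex_aspect_ratio(f, Ro, N_c, N_bar):
    """Aspect ratio H/L from alpha = f * sqrt(Ro*(1+Ro) / (N_c^2 - N_bar^2)).

    Ro*(1+Ro) and (N_c^2 - N_bar^2) must share a sign so that the
    argument of the square root is positive.
    """
    arg = Ro * (1.0 + Ro) / (N_c**2 - N_bar**2)
    if arg <= 0:
        raise ValueError("Ro*(1+Ro)/(N_c^2 - N_bar^2) must be positive")
    return f * math.sqrt(arg)

# Illustrative (hypothetical) values: mid-latitude f, a weak cyclone.
alpha = vortex_aspect_ratio(f=1e-4, Ro=0.2, N_c=2e-3, N_bar=1e-3)
```

For a cyclone (Ro > 0) the law requires $N_c > \bar{N}$; for an anticyclone (−1 < Ro < 0) it requires $N_c < \bar{N}$, consistent with the sign check above.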
Comparison of Methods for Handling Missing Covariate Data
Johansson, Åsa M.; Karlsson, Mats O
2013-01-01
Missing covariate data is a common problem in nonlinear mixed effects modelling of clinical data. The aim of this study was to implement and compare methods for handling missing covariate data in nonlinear mixed effects modelling under different missing data mechanisms. Simulations generated data for 200 individuals with a 50% difference in clearance between males and females. Three different types of missing data mechanisms were simulated and information about sex was missing for 50% of the ...
Improving on the empirical covariance matrix using truncated PCA with white noise residuals
Jewson, S
2005-01-01
The empirical covariance matrix is not necessarily the best estimator for the population covariance matrix: we describe a simple method which gives better estimates in two examples. The method models the covariance matrix using truncated PCA with white noise residuals. Jack-knife cross-validation is used to find the truncation that maximises the out-of-sample likelihood score.
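A minimal numpy sketch of this estimator, assuming the truncation order k has already been chosen (the paper selects it by jack-knife cross-validation of the out-of-sample likelihood, which is omitted here):

```python
import numpy as np

def truncated_pca_covariance(X, k):
    """Covariance estimate keeping the leading k principal components of the
    empirical covariance and replacing the residual spectrum by its average
    (a white-noise floor); the total variance (trace) is preserved."""
    S = np.cov(X, rowvar=False)              # empirical covariance, p x p
    vals, vecs = np.linalg.eigh(S)           # ascending eigenvalues
    vals, vecs = vals[::-1], vecs[:, ::-1]   # sort descending
    p = S.shape[0]
    noise = vals[k:].mean() if k < p else 0.0
    new_vals = np.concatenate([vals[:k], np.full(p - k, noise)])
    return (vecs * new_vals) @ vecs.T        # V diag(new_vals) V^T
```

Because the discarded eigenvalues are replaced by their mean rather than zero, the estimate stays full-rank and invertible, which the plain truncated-PCA reconstruction is not.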
Covariant Thermodynamics and Relativity
Lopez-Monsalvo, C S
2011-01-01
This thesis deals with the dynamics of irreversible processes within the context of the general theory of relativity. In particular, we address the problem of the 'infinite' speed of propagation of thermal disturbances in a dissipative fluid. The present work builds on the multi-fluid variational approach to relativistic dissipation, pioneered by Carter, and provides a dynamical theory of heat conduction. The novel property of such approach is the thermodynamic interpretation associated with a two-fluid system whose constituents are matter and entropy. The dynamics of this model leads to a relativistic generalisation of the Cattaneo equation; the constitutive relation for causal heat transport. A comparison with the Israel and Stewart model is presented and its equivalence is shown. This discussion provides new insights into the not-well understood definition of a non-equilibrium temperature. The variational approach to heat conduction presented in this thesis constitutes a mathematically promising formalism ...
Covariant Quantization of CPT-violating Photons
Colladay, D; Noordmans, J P; Potting, R
2016-01-01
We perform the covariant canonical quantization of the CPT- and Lorentz-symmetry-violating photon sector of the minimal Standard-Model Extension, which contains a general (timelike, lightlike, or spacelike) fixed background tensor $k_{AF}^\\mu$. Well-known stability issues, arising from complex-valued energy states, are solved by introducing a small photon mass, orders of magnitude below current experimental bounds. We explicitly construct a covariant basis of polarization vectors, in which the photon field can be expanded. We proceed to derive the Feynman propagator and show that the theory is microcausal. Despite the occurrence of negative energies and vacuum-Cherenkov radiation, we do not find any runaway stability issues, because the energy remains bounded from below. An important observation is that the ordering of the roots of the dispersion relations is the same in any observer frame, which allows for a frame-independent condition that selects the correct branch of the dispersion relation. This turns ou...
Chiral Four-Dimensional Heterotic Covariant Lattices
Beye, Florian
2014-01-01
In the covariant lattice formalism, chiral four-dimensional heterotic string vacua are obtained from certain even self-dual lattices which completely decompose into a left-mover and a right-mover lattice. The main purpose of this work is to classify all right-mover lattices that can appear in such a chiral model, and to study the corresponding left-mover lattices using the theory of lattice genera. In particular, the Smith-Minkowski-Siegel mass formula is employed to calculate a lower bound on the number of left-mover lattices. Also, the known relationship between asymmetric orbifolds and covariant lattices is considered in the context of our classification.
Directory of Open Access Journals (Sweden)
V. G. Krishna
2016-01-01
Full Text Available Vertical component record sections of local earthquake seismograms from a state-of-the-art Koyna-Warna digital seismograph network are assembled in the reduced time versus epicentral distance frame, similar to those obtained in seismic refraction profiling. The record sections obtained for an average source depth display the processed seismograms from nearly equal source depths with similar source mechanisms and recorded in a narrow azimuth range, illuminating the upper crustal P and S velocity structure in the region. Further, the seismogram characteristics of the local earthquake sources are found to vary significantly for different source mechanisms and the amplitude variations exceed those due to velocity model stratification. In the present study a large number of reflectivity synthetic seismograms are obtained in near offset ranges for a stratified upper crustal model having sharp discontinuities with 7%-10% velocity contrasts. The synthetics are obtained for different source regimes (e.g., strike-slip, normal, reverse and different sets of source parameters (strike, dip, and rake within each regime. Seismogram sections with dominantly strike-slip mechanism are found to be clearly favorable in revealing the velocity stratification for both P and S waves. In contrast the seismogram sections for earthquakes of other source mechanisms seem to display the upper crustal P phases poorly with low amplitudes even in presence of sharp discontinuities of high velocity contrasts. The observed seismogram sections illustrated here for the earthquake sources with strike-slip and normal mechanisms from the Koyna-Warna seismic region substantiate these findings. Travel times and reflectivity synthetic seismograms are used for 1-D modeling of the observed virtual source local earthquake seismogram sections and inferring the upper crustal velocity structure in the Koyna-Warna region. Significantly, the inferred upper crustal velocity model in the region
Inferring Meta-covariates in Classification
Harris, Keith; McMillan, Lisa; Girolami, Mark
This paper develops an alternative method for gene selection that combines model based clustering and binary classification. By averaging the covariates within the clusters obtained from model based clustering, we define “meta-covariates” and use them to build a probit regression model, thereby selecting clusters of similarly behaving genes, aiding interpretation. This simultaneous learning task is accomplished by an EM algorithm that optimises a single likelihood function which rewards good performance at both classification and clustering. We explore the performance of our methodology on a well known leukaemia dataset and use the Gene Ontology to interpret our results.
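A simplified two-stage stand-in for the meta-covariate idea can be sketched with scikit-learn: k-means replaces model-based clustering, logistic regression replaces the probit model, and the two stages are fit separately rather than jointly by EM as in the paper. Everything below is therefore an approximation of the method, not its implementation.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.linear_model import LogisticRegression

def meta_covariate_classifier(X, y, n_clusters=5, seed=0):
    """Two-stage simplification of meta-covariate classification:
    1) cluster the covariates (columns of X) by their profiles across samples,
    2) average covariates within each cluster to form meta-covariates,
    3) fit a classifier on the meta-covariates.
    (The paper instead learns the clustering and a probit regression jointly
    by optimising a single likelihood with an EM algorithm.)"""
    km = KMeans(n_clusters=n_clusters, n_init=10, random_state=seed).fit(X.T)
    M = np.column_stack([X[:, km.labels_ == c].mean(axis=1)
                         for c in range(n_clusters)])
    clf = LogisticRegression(max_iter=1000).fit(M, y)
    return km, clf, M
```

Interpretation works as in the paper: a large coefficient in `clf` points at a whole cluster of similarly behaving genes rather than a single covariate.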
Cosmic Censorship Conjecture revisited: Covariantly
Hamid, Aymen I M; Maharaj, Sunil D
2014-01-01
In this paper we study the dynamics of the trapped region using a frame independent semi-tetrad covariant formalism for general Locally Rotationally Symmetric (LRS) class II spacetimes. We covariantly prove some important geometrical results for the apparent horizon, and state the necessary and sufficient conditions for a singularity to be locally naked. These conditions bring out, for the first time in a quantitative and transparent manner, the importance of the Weyl curvature in deforming and delaying the trapped region during continual gravitational collapse, making the central singularity locally visible.
Directory of Open Access Journals (Sweden)
C.R. Marcondes
2002-02-01
The objectives of this study were to estimate genetic parameters for non-standardized weights at nursing (PR120), at weaning (PR240), at yearling (PR365) and at post yearling (PR550), and to predict EPDs (expected progeny differences) for these traits using records from 29,769 Nellores. Covariance components and genetic parameters were estimated by mixed-model methodology, REML, using an animal model. Models for PR120, PR240, PR365 and PR455 included the random direct and maternal animal effects, the dam permanent environmental effect and the error. Fixed effects were contemporary group (CG) and age of cow at parturition (CIVP), with the age of the calf at measuring as a covariate. Two additional models for the PR365, PR455 and PR550 analyses were used: the first included CG and CIVP, animal direct and maternal effects, residual and age of the calf (as covariate), and the second included CG and CIVP (as fixed effects), animal direct effect, residual and age of calf at measuring. Observed means±standard deviations were: 127±25 kg (PR120); 191±34 kg (PR240); 225±42 kg (PR365); 266±51 kg (PR455) and 310±56 kg (PR550). From single-trait analyses, direct and maternal heritabilities for PR120, PR240, PR365 and PR455 were, respectively, .23 and .08; .19 and .10; .24 and .04; .30 and .04. Direct heritabilities were .39, .44 and .43, respectively, for PR365, PR455 and PR550. In the model without the permanent environmental effect, direct and maternal heritabilities for PR365, PR455 and PR550 were .25 and .08; .32 and .07; .38 and .03, respectively. When the estimates for standardized traits at the same period were compared, no differences in magnitude were found. Rank order changed substantially when standardized and non-standardized traits were compared, mainly for maternal EPDs.
Distribution of vaccine/antivirals and the 'least spread line' in a stratified population
Goldstein, E.; Apolloni, A.; Lewis, B.; Miller, J. C.; Macauley, M.; Eubank, S.; Lipsitch, M.; Wallinga, J.
2010-01-01
We describe a prioritization scheme for an allocation of a sizeable quantity of vaccine or antivirals in a stratified population. The scheme builds on an optimal strategy for reducing the epidemic's initial growth rate in a stratified mass-action model. The strategy is tested on the EpiSims network
Magnetic flux concentrations from turbulent stratified convection
Käpylä, P J; Kleeorin, N; Käpylä, M J; Rogachevskii, I
2015-01-01
(abridged) Context: The mechanisms that cause the formation of sunspots are still unclear. Aims: We study the self-organisation of initially uniform sub-equipartition magnetic fields by highly stratified turbulent convection. Methods: We perform simulations of magnetoconvection in Cartesian domains that are $8.5$-$24$ Mm deep and $34$-$96$ Mm wide. We impose either a vertical or a horizontal uniform magnetic field in a convection-driven turbulent flow. Results: We find that super-equipartition magnetic flux concentrations are formed near the surface with domain depths of $12.5$ and $24$ Mm. The size of the concentrations increases as the box size increases and the largest structures ($20$ Mm horizontally) are obtained in the 24 Mm deep models. The field strength in the concentrations is in the range of $3$-$5$ kG. The concentrations grow approximately linearly in time. The effective magnetic pressure measured in the simulations is positive near the surface and negative in the bulk of the convection zone. Its ...
Gravity-induced stresses in stratified rock masses
Amadei, B.; Swolfs, H.S.; Savage, W.Z.
1988-01-01
This paper presents closed-form solutions for the stress field induced by gravity in anisotropic and stratified rock masses. These rocks are assumed to be laterally restrained. The rock mass consists of finite mechanical units, each unit being modeled as a homogeneous, transversely isotropic or isotropic linearly elastic material. The following results are found. The nature of the gravity induced stress field in a stratified rock mass depends on the elastic properties of each rock unit and how these properties vary with depth. It is thermodynamically admissible for the induced horizontal stress component in a given stratified rock mass to exceed the vertical stress component in certain units and to be smaller in other units; this is not possible for the classical unstratified isotropic solution. Examples are presented to explore the nature of the gravity induced stress field in stratified rock masses. It is found that a decrease in rock mass anisotropy and a stiffening of rock masses with depth can generate stress distributions comparable to empirical hyperbolic distributions previously proposed in the literature. © 1988 Springer-Verlag.
Gaussian covariance matrices for anisotropic galaxy clustering measurements
Grieb, Jan Niklas; Sánchez, Ariel G.; Salazar-Albornoz, Salvador; Dalla Vecchia, Claudio
2016-04-01
Measurements of the redshift-space galaxy clustering have been a prolific source of cosmological information in recent years. Accurate covariance estimates are an essential step for the validation of galaxy clustering models of the redshift-space two-point statistics. Usually, only a limited set of accurate N-body simulations is available. Thus, assessing the data covariance is not possible or only leads to a noisy estimate. Further, relying on simulated realizations of the survey data means that tests of the cosmology dependence of the covariance are expensive. With these points in mind, this work presents a simple theoretical model for the linear covariance of anisotropic galaxy clustering observations with synthetic catalogues. Considering the Legendre moments (`multipoles') of the two-point statistics and projections into wide bins of the line-of-sight parameter (`clustering wedges'), we describe the modelling of the covariance for these anisotropic clustering measurements for galaxy samples with a trivial geometry in the case of a Gaussian approximation of the clustering likelihood. As main result of this paper, we give the explicit formulae for Fourier and configuration space covariance matrices. To validate our model, we create synthetic halo occupation distribution galaxy catalogues by populating the haloes of an ensemble of large-volume N-body simulations. Using linear and non-linear input power spectra, we find very good agreement between the model predictions and the measurements on the synthetic catalogues in the quasi-linear regime.
Energy Technology Data Exchange (ETDEWEB)
Pang, Yang [Columbia Univ., New York, NY (United States); Brookhaven National Labs., Upton, NY (United States)]
1997-09-22
Many phenomenological models for relativistic heavy ion collisions share a common framework: the relativistic Boltzmann equations. Within this framework, a nucleus-nucleus collision is described by the evolution of phase-space distributions of several species of particles. The equations can be effectively solved with the cascade algorithm by sampling each phase-space distribution with points, i.e. δ-functions, and by treating the interaction terms as collisions of these points. In between collisions, each point travels on a straight line trajectory. In most implementations of the cascade algorithm, each physical particle, e.g. a hadron or a quark, is often represented by one point. Thus, the cross-section for a collision of two points is just the cross-section of the physical particles, which can be quite large compared to the local density of the system. For an ultra-relativistic nucleus-nucleus collision, this could lead to a large violation of the Lorentz invariance. By using the invariance property of the Boltzmann equation under a scale transformation, a Lorentz invariant cascade algorithm can be obtained. The General Cascade Program, GCP, is a tool for solving the relativistic Boltzmann equation with any number of particle species and very general interactions with the cascade algorithm.
How covariant is the galaxy luminosity function?
Smith, Robert E
2012-01-01
We investigate the error properties of certain galaxy luminosity function (GLF) estimators. Using a cluster expansion of the density field, we show how, for both volume and flux limited samples, the GLF estimates are covariant. The covariance matrix can be decomposed into three pieces: a diagonal term arising from Poisson noise; a sample variance term arising from large-scale structure in the survey volume; an occupancy covariance term arising due to galaxies of different luminosities inhabiting the same cluster. To evaluate the theory one needs: the mass function and bias of clusters, and the conditional luminosity function (CLF). We use a semi-analytic model (SAM) galaxy catalogue from the Millennium run N-body simulation and the CLF of Yang et al. (2003) to explore these effects. The GLF estimates from the SAM and the CLF qualitatively reproduce results from the 2dFGRS. We also measure the luminosity dependence of clustering in the SAM and find reasonable agreement with 2dFGRS results for bright galaxies. ...
Covariant description of isothermic surfaces
Tafel, Jacek
2014-01-01
We present a covariant formulation of the Gauss-Weingarten equations and the Gauss-Mainardi-Codazzi equations for surfaces in 3-dimensional curved spaces. We derive a coordinate invariant condition on the first and second fundamental form which is necessary and sufficient for the surface to be isothermic.
Condition Number Regularized Covariance Estimation.
Won, Joong-Ho; Lim, Johan; Kim, Seung-Jean; Rajaratnam, Bala
2013-06-01
Estimation of high-dimensional covariance matrices is known to be a difficult problem, has many applications, and is of current interest to the larger statistics community. In many applications, including the so-called "large p small n" setting, the estimate of the covariance matrix is required to be not only invertible, but also well-conditioned. Although many regularization schemes attempt to do this, none of them address the ill-conditioning problem directly. In this paper, we propose a maximum likelihood approach, with the direct goal of obtaining a well-conditioned estimator. No sparsity assumption on either the covariance matrix or its inverse is imposed, thus making our procedure more widely applicable. We demonstrate that the proposed regularization scheme is computationally efficient, yields a type of Steinian shrinkage estimator, and has a natural Bayesian interpretation. We investigate the theoretical properties of the regularized covariance estimator comprehensively, including its regularization path, and proceed to develop an approach that adaptively determines the level of regularization that is required. Finally, we demonstrate the performance of the regularized estimator in decision-theoretic comparisons and in the financial portfolio optimization setting. The proposed approach has desirable properties, and can serve as a competitive procedure, especially when the sample size is small and when a well-conditioned estimator is required.
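A rough numpy sketch of the underlying idea: repair ill-conditioning by operating directly on the sample eigenvalues. Note this is a crude stand-in, not the paper's estimator; Won et al. derive the exact maximum-likelihood truncation interval [τ, κτ], whereas here the floor is simply set to λ_max/κ.

```python
import numpy as np

def condreg_covariance(S, kappa):
    """Crude sketch of condition-number regularization: clip the sample
    eigenvalues from below so that lambda_max / lambda_min <= kappa.
    (The maximum-likelihood solution of Won et al. instead truncates the
    spectrum to an optimally chosen interval [tau, kappa*tau].)"""
    vals, vecs = np.linalg.eigh(S)          # S symmetric PSD
    floor = vals.max() / kappa
    clipped = np.clip(vals, floor, None)
    return (vecs * clipped) @ vecs.T
```

The returned matrix is invertible and its condition number is at most κ by construction, which is what "large p small n" applications such as portfolio optimization need.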
Condition Number Regularized Covariance Estimation*
Won, Joong-Ho; Lim, Johan; Kim, Seung-Jean; Rajaratnam, Bala
2012-01-01
Estimation of high-dimensional covariance matrices is known to be a difficult problem, has many applications, and is of current interest to the larger statistics community. In many applications, including the so-called “large p small n” setting, the estimate of the covariance matrix is required to be not only invertible, but also well-conditioned. Although many regularization schemes attempt to do this, none of them address the ill-conditioning problem directly. In this paper, we propose a maximum likelihood approach, with the direct goal of obtaining a well-conditioned estimator. No sparsity assumption on either the covariance matrix or its inverse is imposed, thus making our procedure more widely applicable. We demonstrate that the proposed regularization scheme is computationally efficient, yields a type of Steinian shrinkage estimator, and has a natural Bayesian interpretation. We investigate the theoretical properties of the regularized covariance estimator comprehensively, including its regularization path, and proceed to develop an approach that adaptively determines the level of regularization that is required. Finally, we demonstrate the performance of the regularized estimator in decision-theoretic comparisons and in the financial portfolio optimization setting. The proposed approach has desirable properties, and can serve as a competitive procedure, especially when the sample size is small and when a well-conditioned estimator is required. PMID:23730197
Covariation Neglect among Novice Investors
Hedesstrom, Ted Martin; Svedsater, Henrik; Garling, Tommy
2006-01-01
In 4 experiments, undergraduates made hypothetical investment choices. In Experiment 1, participants paid more attention to the volatility of individual assets than to the volatility of aggregated portfolios. The results of Experiment 2 show that most participants diversified even when this increased risk because of covariation between the returns…
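The finding of Experiment 2 can be made concrete with the two-asset portfolio variance formula; the numbers below are hypothetical and chosen only to illustrate how positive covariation can make "diversification" riskier than holding a single asset.

```python
import math

def portfolio_sd(w1, s1, s2, rho):
    """Standard deviation of a two-asset portfolio with weights (w1, 1 - w1),
    asset volatilities s1, s2 and return correlation rho."""
    w2 = 1.0 - w1
    var = (w1 * s1) ** 2 + (w2 * s2) ** 2 + 2.0 * w1 * w2 * rho * s1 * s2
    return math.sqrt(var)

# Hypothetical numbers: holding only the calmer asset gives sd = 0.20, but
# splitting 50/50 into a volatile, highly correlated asset raises risk.
sd_mix = portfolio_sd(0.5, 0.20, 0.40, rho=0.9)   # about 0.293 > 0.20
```

Attending only to the individual volatilities (0.20 and 0.40) hides the covariance term that drives the aggregated result, which is exactly the neglect the experiments document.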
[Clinical research XIX. From clinical judgment to analysis of covariance].
Pérez-Rodríguez, Marcela; Palacios-Cruz, Lino; Moreno, Jorge; Rivas-Ruiz, Rodolfo; Talavera, Juan O
2014-01-01
The analysis of covariance (ANCOVA) is based on general linear models. This technique involves a regression model, often multiple, in which the outcome is a continuous variable, the independent variables are qualitative or are introduced into the model as dummy or dichotomous variables, and the factors for which adjustment is required (covariates) can be at any measurement level (i.e. nominal, ordinal or continuous). The maneuvers can be entered into the model as 1) fixed effects, or 2) random effects; the difference between the two depends on the type of information we want from the analysis of the effects. ANCOVA separates the effect of the independent variables from that of the covariates, i.e., it corrects the dependent variable by eliminating the influence of the covariates, given that these variables change in conjunction with the maneuvers or treatments and affect the outcome variable. ANCOVA should be done only if three assumptions are met: 1) the relationship between the covariate and the outcome is linear, 2) there is homogeneity of slopes, and 3) the covariate and the independent variable are independent of each other.
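A minimal numerical sketch of the ANCOVA idea on simulated data (numpy only, all parameter values invented for illustration): the maneuver enters as a dummy variable and the baseline covariate's influence is removed by including it in the same linear model.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 120
treat = np.repeat([0, 1], n // 2)            # maneuver as a dummy variable
baseline = rng.normal(50.0, 10.0, n)         # continuous covariate
# True model (known because simulated): treatment effect 5, covariate slope 0.8
outcome = 5.0 * treat + 0.8 * baseline + rng.normal(0.0, 3.0, n)

# ANCOVA as a general linear model: design = [intercept, treatment, covariate]
X = np.column_stack([np.ones(n), treat, baseline])
beta, *_ = np.linalg.lstsq(X, outcome, rcond=None)
adjusted_effect = beta[1]   # treatment effect corrected for the covariate
```

The homogeneity-of-slopes assumption would be checked by adding a treatment-by-covariate interaction column and verifying its coefficient is negligible.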
Properties of Endogenous Post-Stratified Estimation using remote sensing data
John Tipton; Jean Opsomer; Gretchen. Moisen
2013-01-01
Post-stratification is commonly used to improve the precision of survey estimates. In traditional poststratification methods, the stratification variable must be known at the population level. When suitable covariates are available at the population level, an alternative approach consists of fitting a model on the covariates, making predictions for the population and...
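The basic post-stratified estimator, in which the stratum shares are known at the population level, can be sketched in a few lines; the endogenous variant discussed in the abstract instead predicts strata from a model fitted on covariates, which is not shown here.

```python
import numpy as np

def post_stratified_mean(y, strata, pop_shares):
    """Post-stratified estimator of a population mean: weight each stratum's
    sample mean by its known population share W_h = N_h / N."""
    return sum(w * y[strata == h].mean() for h, w in pop_shares.items())

# Toy data (hypothetical): two strata with known population shares 0.7 and 0.3
y = np.array([1.0, 3.0, 10.0, 14.0])
strata = np.array(["a", "a", "b", "b"])
est = post_stratified_mean(y, strata, {"a": 0.7, "b": 0.3})  # 0.7*2 + 0.3*12
```

Replacing the sample shares with the known population shares is what removes the between-stratum component of the sampling variance and improves precision.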
On Stratified Vortex Motions under Gravity.
2014-09-26
On Stratified Vortex Motions under Gravity. Y. T. Fung, Fluid Dynamics Branch, Marine Technology Division, Naval Research Laboratory, Washington, DC. Report AD-A156 930, NRL-MR-5564, unclassified, June 20, 1985.
Mixing by microorganisms in stratified fluids
Wagner, Gregory L; Lauga, Eric
2014-01-01
We examine the vertical mixing induced by the swimming of microorganisms at low Reynolds and Péclet numbers in a stably stratified ocean, and show that the global contribution of oceanic microswimmers to vertical mixing is negligible. We propose two approaches to estimating the mixing efficiency, $\\eta$, or the ratio of the rate of potential energy creation to the total rate-of-working on the ocean by microswimmers. The first is based on scaling arguments and estimates $\\eta$ in terms of the ratio between the typical organism size, $a$, and an intrinsic length scale for the stratified flow, $\\ell$ ...
Institute of Scientific and Technical Information of China (English)
2015-01-01
The premarital sexual behaviour of senior students in some universities of Anhui province is investigated. To protect the privacy of respondents, the randomized response technique (RRT) is applied together with stratified three-stage sampling, and the proportion of senior students who have had premarital sex is studied using Warner's model for a sensitive attribute. Using the total probability formula and the basic properties of variance from probability and mathematical statistics, together with Cochran's classical sampling theory, the proportion of senior college students who have had premarital sex, and its variance, are derived for each stratum and stage. The survey reveals that this proportion is high. Therefore, undergraduates should be actively guided to treat the issue of premarital sex properly and rationally.
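Warner's model underlying this survey has a closed-form estimator. A plain-Python sketch with illustrative numbers (the stratified three-stage structure of the actual survey is not reproduced here):

```python
def warner_estimate(n_yes, n, p):
    """Warner randomized-response estimator. Each respondent is directed, with
    probability p (p != 0.5), to answer the sensitive statement and otherwise
    its negation, so the interviewer never learns which was answered.
    Returns the estimated sensitive proportion and its estimated variance."""
    lam = n_yes / n                                  # observed 'yes' rate
    pi_hat = (lam + p - 1.0) / (2.0 * p - 1.0)
    var_hat = lam * (1.0 - lam) / (n * (2.0 * p - 1.0) ** 2)
    return pi_hat, var_hat

# Illustrative numbers: 460 'yes' answers out of 1000 with design prob. p = 0.7
pi_hat, var_hat = warner_estimate(460, 1000, 0.7)    # pi_hat = 0.4
```

The privacy protection is bought with variance: the factor 1/(2p−1)² inflates the variance relative to a direct question, more severely as p approaches 0.5.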
Dolan, C V; Boomsma, D I; Neale, M C
1999-01-01
Sib pair-selection strategies, designed to identify the most informative sib pairs in order to detect a quantitative-trait locus (QTL), give rise to a missing-data problem in genetic covariance-structure modeling of QTL effects. After selection, phenotypic data are available for all sibs, but marker data, and consequently the identity-by-descent (IBD) probabilities, are available only in selected sib pairs. One possible solution to this missing-data problem is to assign prior IBD probabilities (i.e., expected values) to the unselected sib pairs. The effect of this assignment in genetic covariance-structure modeling is investigated in the present paper. Two maximum-likelihood approaches to estimation are considered, the pi-hat approach and the IBD-mixture approach. In the simulations, sample size, selection criteria, QTL-increaser allele frequency, and gene action are manipulated. The results indicate that the assignment of prior IBD probabilities results in serious estimation bias in the pi-hat approach. Bias is also present in the IBD-mixture approach, although there the bias is generally much smaller. The null distribution of the log-likelihood ratio (i.e., in the absence of any QTL effect) does not follow the expected null distribution in the pi-hat approach after selection. In the IBD-mixture approach, the null distribution does agree with expectation.
Stratified source-sampling techniques for Monte Carlo eigenvalue analysis.
Energy Technology Data Exchange (ETDEWEB)
Mohamed, A.
1998-07-10
In 1995, at a conference on criticality safety, a special session was devoted to the Monte Carlo "Eigenvalue of the World" problem. Argonne presented a paper, at that session, in which the anomalies originally observed in that problem were reproduced in a much simplified model-problem configuration, and removed by a version of stratified source-sampling. In this paper, stratified source-sampling techniques are generalized and applied to three different Eigenvalue of the World configurations which take into account real-world statistical noise sources not included in the model problem, but which differ in the amount of neutronic coupling among the constituents of each configuration. It is concluded that, in Monte Carlo eigenvalue analysis of loosely-coupled arrays, the use of stratified source-sampling reduces the probability of encountering an anomalous result below that of conventional source-sampling methods. However, this gain in reliability is substantially less than that observed in the model-problem results.
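Stratified source-sampling applies the classical stratified-sampling guarantee, every stratum receives its allotted share of source points, inside the eigenvalue iteration. A generic one-dimensional illustration of that guarantee (not the Argonne eigenvalue implementation, which stratifies fission source sites across loosely coupled units):

```python
import numpy as np

def stratified_mc_mean(f, n_strata, n_per_stratum, rng):
    """Stratified Monte Carlo estimate of E[f(U)], U ~ Uniform(0, 1):
    the unit interval is split into equal strata and each stratum is
    guaranteed exactly n_per_stratum sample points, so no stratum can be
    starved of samples by chance."""
    edges = np.linspace(0.0, 1.0, n_strata + 1)
    est = 0.0
    for a, b in zip(edges[:-1], edges[1:]):
        u = rng.uniform(a, b, n_per_stratum)
        est += (b - a) * f(u).mean()     # stratum weight times stratum mean
    return est

rng = np.random.default_rng(0)
est = stratified_mc_mean(lambda u: u**2, n_strata=10, n_per_stratum=1000,
                         rng=rng)       # close to the exact integral 1/3
```

In the eigenvalue setting the analogous guarantee prevents a weakly coupled unit from losing its entire source population in one generation, which is the mechanism behind the anomalous results.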
Group Lasso estimation of high-dimensional covariance matrices
Bigot, Jérémie; Loubes, Jean-Michel; Alvarez, Lilian Muniz
2010-01-01
In this paper, we consider the Group Lasso estimator of the covariance matrix of a stochastic process corrupted by an additive noise. We propose to estimate the covariance matrix in a high-dimensional setting under the assumption that the process has a sparse representation in a large dictionary of basis functions. Using a matrix regression model, we propose a new methodology for high-dimensional covariance matrix estimation based on empirical contrast regularization by a group Lasso penalty. Using such a penalty, the method selects a sparse set of basis functions in the dictionary used to approximate the process, leading to an approximation of the covariance matrix into a low dimensional space. Consistency of the estimator is studied in Frobenius and operator norms and an application to sparse PCA is proposed.
Agilan, V.; Umamahesh, N. V.
2017-03-01
Present infrastructure design is primarily based on rainfall Intensity-Duration-Frequency (IDF) curves with so-called stationary assumption. However, in recent years, the extreme precipitation events are increasing due to global climate change and creating non-stationarity in the series. Based on recent theoretical developments in the Extreme Value Theory (EVT), recent studies proposed a methodology for developing non-stationary rainfall IDF curve by incorporating trend in the parameters of the Generalized Extreme Value (GEV) distribution using Time covariate. But, the covariate Time may not be the best covariate and it is important to analyze all possible covariates and find the best covariate to model non-stationarity. In this study, five physical processes, namely, urbanization, local temperature changes, global warming, El Niño-Southern Oscillation (ENSO) cycle and Indian Ocean Dipole (IOD) are used as covariates. Based on these five covariates and their possible combinations, sixty-two non-stationary GEV models are constructed. In addition, two non-stationary GEV models based on Time covariate and one stationary GEV model are also constructed. The best model for each duration rainfall series is chosen based on the corrected Akaike Information Criterion (AICc). From the findings of this study, it is observed that the local processes (i.e., Urbanization, local temperature changes) are the best covariate for short duration rainfall and global processes (i.e., Global warming, ENSO cycle and IOD) are the best covariate for the long duration rainfall of the Hyderabad city, India. Furthermore, the covariate Time is never qualified as the best covariate. In addition, the identified best covariates are further used to develop non-stationary rainfall IDF curves of the Hyderabad city. The proposed methodology can be applied to other situations to develop the non-stationary IDF curves based on the best covariate.
Bayesian adjustment for covariate measurement errors: a flexible parametric approach.
Hossain, Shahadut; Gustafson, Paul
2009-05-15
In most epidemiological investigations, the study units are people, the outcome variable (or response) is a health-related event, and the explanatory variables are usually environmental and/or socio-demographic factors. The fundamental task in such investigations is to quantify the association between the explanatory variables (covariates/exposures) and the outcome variable through a suitable regression model. The accuracy of such quantification depends on how precisely the relevant covariates are measured. In many instances, we cannot measure some of the covariates accurately; rather, we can measure only noisy (mismeasured) versions of them. In statistical terminology, mismeasurement in continuous covariates is known as measurement error or errors-in-variables. Regression analyses based on mismeasured covariates lead to biased inference about the true underlying response-covariate associations. In this paper, we suggest a flexible parametric approach for avoiding this bias when estimating the response-covariate relationship through a logistic regression model. More specifically, we consider the flexible generalized skew-normal and the flexible generalized skew-t distributions for modeling the unobserved true exposure. For inference and computational purposes, we use Bayesian Markov chain Monte Carlo techniques. We investigate the performance of the proposed flexible parametric approach in comparison with a common flexible parametric approach through extensive simulation studies, and we also compare the two methods on a real-life data set. Though emphasis is put on the logistic regression model, the proposed method is unified and applicable to other generalized linear models, as well as to other types of non-linear regression models. (c) 2009 John Wiley & Sons, Ltd.
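A toy simulation (not the paper's Bayesian correction) illustrating the attenuation bias that covariate measurement error induces in a naive logistic fit; all names and parameter values here are assumptions for illustration:

```python
import numpy as np
from scipy.optimize import minimize

def fit_logistic(x, y):
    """Naive logistic MLE for logit P(y=1) = b0 + b1*x (no error correction)."""
    def nll(b):
        eta = b[0] + b[1] * x
        return np.sum(np.logaddexp(0.0, eta) - y * eta)  # stable -log-likelihood
    return minimize(nll, [0.0, 0.0], method="BFGS").x

rng = np.random.default_rng(0)
n = 5000
x = rng.normal(size=n)              # true (unobserved) exposure
w = x + rng.normal(size=n)          # mismeasured surrogate, error sd = 1
y = rng.binomial(1, 1.0 / (1.0 + np.exp(-(0.5 + 1.0 * x))))

b_true = fit_logistic(x, y)         # slope near the true value 1.0
b_naive = fit_logistic(w, y)        # slope attenuated toward zero
```

The gap between `b_naive[1]` and `b_true[1]` is the bias that the paper's exposure model (skew-normal or skew-t prior on the true exposure, fit by MCMC) is designed to remove.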
Discrete Symmetries in Covariant LQG
Rovelli, Carlo
2012-01-01
We study time-reversal and parity, both on the physical manifold and in internal space, in covariant loop gravity. We consider a minor modification of the Holst action which makes it transform coherently under these transformations. The classical theory is not affected, but the quantum theory is slightly different. In particular, the simplicity constraints are slightly modified, and this restricts orientation flips in a spinfoam to occur only across degenerate regions, thus reducing the sources of potential divergences.
Phenotypic covariance at species’ borders
2013-01-01
Background Understanding the evolution of species limits is important in ecology, evolution, and conservation biology. Despite its likely importance in the evolution of these limits, little is known about phenotypic covariance in geographically marginal populations, and the degree to which it constrains, or facilitates, responses to selection. We investigated phenotypic covariance in morphological traits at species' borders by comparing phenotypic covariance matrices (P), including the degree of shared structure, the distribution of strengths of pair-wise correlations between traits, the degree of morphological integration of traits, and the ranks of matrices, between central and marginal populations of three species-pairs of coral reef fishes. Results Greater structural differences in P were observed between populations close to range margins and conspecific populations toward range centres than between pairs of conspecific populations that were both more centrally located within their ranges. Approximately 80% of all pair-wise trait correlations within populations were greater in the north, but these differences were unrelated to the position of the sampled population with respect to the geographic range of the species. Conclusions Neither the degree of morphological integration nor the ranks of P indicated greater evolutionary constraint at range edges. The characteristics of P observed here provide no support for constraint contributing to the formation of these species' borders, but may instead reflect structural change in P caused by selection or drift, and their potential to evolve in the future. PMID:23714580
Turbulent Mixing in Stably Stratified Flows
2008-03-01
Nitrogen transformations in stratified aquatic microbial ecosystems
DEFF Research Database (Denmark)
Revsbech, Niels Peter; Risgaard-Petersen, N.; Schramm, Andreas
2006-01-01
New analytical methods such as advanced molecular techniques and microsensors have resulted in new insights about how nitrogen transformations in stratified microbial systems such as sediments and biofilms are regulated at a µm-mm scale. A large and ever-expanding knowledge base about n...
Energy Technology Data Exchange (ETDEWEB)
Alfred Stadler, Franz Gross
2010-10-01
We provide a short overview of the Covariant Spectator Theory and its applications. The basic ideas are introduced through the example of a φ⁴-type theory. High-precision models of the two-nucleon interaction are presented, and the results of their use in calculations of properties of the two- and three-nucleon systems are discussed. A short summary of applications of this framework to other few-body systems is also presented.
Survival analysis of cervical cancer using stratified Cox regression
Purnami, S. W.; Inayati, K. D.; Sari, N. W. Wulan; Chosuvivatwong, V.; Sriplung, H.
2016-04-01
Cervical cancer is one of the most common causes of cancer death among women worldwide, including in Indonesia. Most cervical cancer patients arrive at the hospital already at an advanced stage. As a result, treatment becomes more difficult and the risk of death can increase. One parameter that can be used to assess the success of treatment is the probability of survival. This study examines the survival of cervical cancer patients at Dr. Soetomo Hospital using stratified Cox regression with six factors: age, stage, treatment initiation, comorbidity, complications, and anemia. A stratified Cox model is used because one independent variable, stage, does not satisfy the proportional hazards assumption. The results of the stratified Cox model show that the complication variable significantly influences the survival probability of cervical cancer patients. The estimated hazard ratio is 7.35, meaning that a cervical cancer patient with complications is at 7.35 times greater risk of dying than a patient without complications. The adjusted survival curves show that stage IV has the lowest survival probability.
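A minimal numpy sketch of the stratified (Breslow) partial likelihood that such an analysis maximizes, on synthetic data with two strata standing in for the stratifying stage variable (an assumed setup, not the study's data):

```python
import numpy as np
from scipy.optimize import minimize_scalar

def neg_stratified_pl(beta, strata):
    """Negative Breslow partial log-likelihood summed over strata
    (single covariate, no censoring, for illustration only)."""
    total = 0.0
    for time, x in strata:
        eta = beta * x
        for i in range(len(time)):
            risk = eta[time >= time[i]]          # risk set at this event time
            total += eta[i] - np.log(np.sum(np.exp(risk)))
    return -total

rng = np.random.default_rng(1)
strata = []
for base in (0.5, 2.0):                          # two strata, different baseline hazards
    x = rng.binomial(1, 0.5, 300).astype(float)  # e.g. complication yes/no
    t = rng.exponential(1.0 / (base * np.exp(1.0 * x)))   # true log-HR = 1
    strata.append((t, x))

res = minimize_scalar(neg_stratified_pl, args=(strata,),
                      bounds=(-3, 3), method="bounded")
beta_hat = res.x                                 # exp(beta_hat) estimates the HR
```

Each stratum keeps its own (unspecified) baseline hazard while sharing the coefficient, which is how the stratified model sidesteps the violated proportional hazards assumption for the stratifying variable.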
Institute of Scientific and Technical Information of China (English)
靖增群
2012-01-01
One of the prime reasons that tourism management undergraduates fall short of the needs of the tourism industry lies in the divorce of their training from the reality of the industry. While the tourism industry needs professionals who are not only strong in skills and expertise but also versed in tourism management, the training model for tourism management majors in most undergraduate institutions of higher learning still follows the beaten track, leaving students unskilled in techniques and inadequate in management capacity, and ultimately placing them in the embarrassing position of underachievement. This paper proposes a stratified, major-oriented training model that aims both to give students a fairly solid grounding in basic theory and to develop proficient technical skills along with a measure of management capacity, so as to synchronize teaching with the tourism industry.
Rae, Gordon
2008-11-01
Several authors have suggested that prior to conducting a confirmatory factor analysis it may be useful to group items into a smaller number of item 'parcels' or 'testlets'. The present paper shows mathematically that coefficient alpha based on these parcel scores will exceed alpha based on the entire set of items only if W, the ratio of the average covariance of items between parcels to the average covariance of items within parcels, is greater than unity. If W is less than unity, however, and errors of measurement are uncorrelated, then stratified alpha will be a better lower bound to the reliability of a measure than the other two coefficients. Stratified alpha is also equal to the true reliability of a test when items within parcels are essentially tau-equivalent, provided that errors of measurement are uncorrelated.
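Under the usual textbook definitions (assumed here, not taken from the paper), coefficient alpha and stratified alpha can be computed directly:

```python
import numpy as np

def cronbach_alpha(items):
    """Coefficient alpha for an (n_subjects, k_items) score matrix."""
    k = items.shape[1]
    return k / (k - 1) * (1 - items.var(axis=0, ddof=1).sum()
                          / items.sum(axis=1).var(ddof=1))

def stratified_alpha(parcels):
    """Stratified alpha: 1 - sum_j var(T_j)*(1 - alpha_j) / var(total),
    where T_j is the total score of parcel j."""
    total_var = np.hstack(parcels).sum(axis=1).var(ddof=1)
    num = sum(p.sum(axis=1).var(ddof=1) * (1 - cronbach_alpha(p))
              for p in parcels)
    return 1 - num / total_var

# toy data: one common factor, six noisy items split into two parcels
rng = np.random.default_rng(0)
factor = rng.normal(size=(500, 1))
items = factor + rng.normal(size=(500, 6))
parcels = [items[:, :3], items[:, 3:]]
a_full = cronbach_alpha(items)
sa = stratified_alpha(parcels)
```

With a single common factor and equal item quality (so W is about unity), the two coefficients nearly coincide; their ordering diverges exactly as the paper's W ratio moves away from one.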
Competing risks and time-dependent covariates
DEFF Research Database (Denmark)
Cortese, Giuliana; Andersen, Per K
2010-01-01
Time-dependent covariates are frequently encountered in regression analysis for event history data and competing risks. They are often essential predictors, which cannot be substituted by time-fixed covariates. This study briefly recalls the different types of time-dependent covariates...
Adaptive Covariance Inflation in a Multi-Resolution Assimilation Scheme
Hickmann, K. S.; Godinez, H. C.
2015-12-01
When forecasts are performed using modern data assimilation methods, observation and model error can be scale-dependent. During data assimilation, the blending of error across scales can cause model divergence, since large errors at one scale can propagate across scales during the analysis step. Wavelet-based multi-resolution analysis can be used to separate scales in the model and observations during the application of an ensemble Kalman filter. However, this separation comes at the cost of implementing an ensemble Kalman filter at each scale, which presents problems when tuning the covariance inflation parameter at each scale. We present a method to adaptively tune a scale-dependent covariance inflation vector based on balancing the covariance of the innovations against the covariance of the ensemble in observation space. Our methods are demonstrated on a one-dimensional Kuramoto-Sivashinsky (K-S) model known to exhibit non-linear interactions between scales.
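A scalar version of innovation-based moment matching (one entry of a scale-dependent inflation vector); the Wang-Bishop-style estimator shown is an assumed standard form, not necessarily the authors' exact scheme:

```python
import numpy as np

def inflation_factor(innov, HPHt, R):
    """Moment-matching inflation: pick lam so that
    innov' innov ≈ trace(lam * HPHt + R)   (assumed standard form)."""
    lam = (innov @ innov - np.trace(R)) / np.trace(HPHt)
    return max(lam, 1.0)               # inflate only, never deflate below 1

# toy check: ensemble spread underestimates the truth by a factor of 2
HPHt = np.eye(4)                       # ensemble-predicted observation covariance
R = 0.5 * np.eye(4)                    # observation-error covariance
innov = np.full(4, np.sqrt(2.5))       # innovations consistent with 2*HPHt + R
lam = inflation_factor(innov, HPHt, R)
```

In the multi-resolution setting described, this estimate would be computed per wavelet scale from the innovations projected onto that scale.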
Oikawa, P. Y.; Baldocchi, D. D.; Knox, S. H.; Sturtevant, C. S.; Verfaillie, J. G.; Dronova, I.; Jenerette, D.; Poindexter, C.; Huang, Y. W.
2015-12-01
We use multiple data streams in a model-data fusion approach to reduce uncertainty in predicting CO2 and CH4 exchange in drained and flooded peatlands. Drained peatlands in the Sacramento-San Joaquin River Delta, California are a strong source of CO2 to the atmosphere and flooded peatlands or wetlands are a strong CO2 sink. However, wetlands are also large sources of CH4 that can offset the greenhouse gas mitigation potential of wetland restoration. Reducing uncertainty in model predictions of annual CO2 and CH4 budgets is critical for including wetland restoration in Cap-and-Trade programs. We have developed and parameterized the Peatland Ecosystem Photosynthesis, Respiration, and Methane Transport model (PEPRMT) in a drained agricultural peatland and a restored wetland. Both ecosystem respiration (Reco) and CH4 production are a function of 2 soil carbon (C) pools (i.e. recently-fixed C and soil organic C), temperature, and water table height. Photosynthesis is predicted using a light use efficiency model. To estimate parameters we use a Markov Chain Monte Carlo approach with an adaptive Metropolis-Hastings algorithm. Multiple data streams are used to constrain model parameters including eddy covariance of CO2, 13CO2 and CH4, continuous soil respiration measurements and digital photography. Digital photography is used to estimate leaf area index, an important input variable for the photosynthesis model. Soil respiration and 13CO2 fluxes allow partitioning of eddy covariance data between Reco and photosynthesis. Partitioned fluxes of CO2 with associated uncertainty are used to parametrize the Reco and photosynthesis models within PEPRMT. Overall, PEPRMT model performance is high. For example, we observe high data-model agreement between modeled and observed partitioned Reco (r2 = 0.68; slope = 1; RMSE = 0.59 g C-CO2 m-2 d-1). Model validation demonstrated the model's ability to accurately predict annual budgets of CO2 and CH4 in a wetland system (within 14% and 1
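A bare-bones random-walk Metropolis sampler in the spirit of the parameter-estimation step (the study uses an adaptive Metropolis-Hastings variant, which is not reproduced here; the toy data and flat prior are assumptions):

```python
import numpy as np

def metropolis(logpost, theta0, n_iter=5000, step=0.1, seed=0):
    """Fixed-step random-walk Metropolis; the adaptive step-size tuning
    of adaptive Metropolis-Hastings is omitted for brevity."""
    rng = np.random.default_rng(seed)
    theta, lp = theta0, logpost(theta0)
    chain = np.empty(n_iter)
    for i in range(n_iter):
        prop = theta + step * rng.normal()
        lp_prop = logpost(prop)
        if np.log(rng.uniform()) < lp_prop - lp:   # accept/reject
            theta, lp = prop, lp_prop
        chain[i] = theta
    return chain

# toy "flux" observations with known noise sd = 0.5; flat prior on the mean
rng = np.random.default_rng(1)
obs = rng.normal(2.0, 0.5, size=200)
chain = metropolis(lambda m: -0.5 * np.sum((obs - m) ** 2) / 0.5**2, theta0=0.0)
```

In the model-data fusion described, the log-posterior would combine eddy covariance, soil respiration, and isotope data streams, each with its own error model, rather than a single Gaussian likelihood.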
Computational protein design quantifies structural constraints on amino acid covariation.
Directory of Open Access Journals (Sweden)
Noah Ollikainen
Amino acid covariation, where the identities of amino acids at different sequence positions are correlated, is a hallmark of naturally occurring proteins. This covariation can arise from multiple factors, including selective pressures for maintaining protein structure, requirements imposed by a specific function, or phylogenetic sampling bias. Here we employed flexible backbone computational protein design to quantify the extent to which protein structure has constrained amino acid covariation for 40 diverse protein domains. We find significant similarities between the amino acid covariation in alignments of natural protein sequences and sequences optimized for their structures by computational protein design methods. These results indicate that the structural constraints imposed by protein architecture play a dominant role in shaping amino acid covariation and that computational protein design methods can capture these effects. We also find that the similarity between natural and designed covariation is sensitive to the magnitude and mechanism of backbone flexibility used in computational protein design. Our results thus highlight the necessity of including backbone flexibility to correctly model precise details of correlated amino acid changes and give insights into the pressures underlying these correlations.
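Covariation between alignment columns is commonly scored with mutual information; a minimal sketch on a toy alignment (MI is a common choice of covariation statistic, not necessarily the paper's exact measure):

```python
import numpy as np
from collections import Counter

def column_mi(col_a, col_b):
    """Mutual information (nats) between two alignment columns."""
    n = len(col_a)
    pa, pb = Counter(col_a), Counter(col_b)
    pab = Counter(zip(col_a, col_b))
    return sum(c / n * np.log((c / n) / (pa[a] / n * pb[b] / n))
               for (a, b), c in pab.items())

# toy alignment: the first column pair covaries perfectly,
# the second pair is nearly independent
mi_linked = column_mi(list("AALLAL"), list("RRKKRK"))
mi_bg = column_mi(list("AALLAL"), list("GGGVVV"))
```

Ranking column pairs by such a score across natural and design-generated alignments is one simple way to compare the covariation patterns the abstract describes.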