On the a priori estimation of collocation error covariance functions: a feasibility study
DEFF Research Database (Denmark)
Arabelos, D.N.; Forsberg, René; Tscherning, C.C.
2007-01-01
…and the associated error covariance functions were conducted in the Arctic region north of 64 degrees latitude. The correlation between the known features of the data and the parameters (variance and correlation length) of the computed error covariance functions was estimated using multiple regression analysis. … Error covariance estimates are necessary information for the combination of solutions resulting from different kinds of data or methods, or for the assimilation of new results into already existing solutions. Such a combination or assimilation process demands proper weighting of the data, in order for the combination to be optimal and the error estimates of the results realistic. One flexible method for gravity field approximation is least-squares collocation, leading to optimal solutions for the predicted quantities and their error covariance estimates. The drawback of this method is related to the current…
Agbemava, Sylvester; Afanasjev, Anatoli
2017-09-01
Because of the complexity of the nuclear many-body problem, modern theoretical tools rely on approximations in its solution. As a result, it becomes necessary to estimate theoretical uncertainties in the description of physical observables. This is especially important when one deals with extrapolations beyond the known regions. There are two types of such uncertainties: systematic and statistical. Systematic theoretical uncertainties in the description of physical observables within covariant density functional theory have been evaluated previously. The present work is focused on the evaluation of statistical uncertainties for major classes of covariant energy density functionals (CEDFs) and their propagation with particle number (towards the extremes of the nuclear landscape) and deformation. These uncertainties are evaluated for different classes of physical observables (ground-state and single-particle properties, fission barriers) and compared with systematic ones. Moreover, the correlations between the parameters of the CEDFs are evaluated with the goal of seeing to what degree they are independent. This material is based upon work supported by the U.S. Department of Energy, Office of Science, Office of Nuclear Physics under Award No. DE-SC0013037.
Proportional Hazards Model with Covariate Measurement Error and Instrumental Variables.
Song, Xiao; Wang, Ching-Yun
2014-12-01
In biomedical studies, covariates with measurement error may occur in survival data. Existing approaches mostly require certain replications on the error-contaminated covariates, which may not be available in the data. In this paper, we develop a simple nonparametric correction approach for estimation of the regression parameters in the proportional hazards model using a subset of the sample where instrumental variables are observed. The instrumental variables are related to the covariates through a general nonparametric model, and no distributional assumptions are placed on the error and the underlying true covariates. We further propose a novel generalized method of moments nonparametric correction estimator to improve the efficiency over the simple correction approach. The efficiency gain can be substantial when the calibration subsample is small compared to the whole sample. The estimators are shown to be consistent and asymptotically normal. Performance of the estimators is evaluated via simulation studies and by an application to data from an HIV clinical trial. Estimation of the baseline hazard function is not addressed.
Scale-dependent background-error covariance localisation
Directory of Open Access Journals (Sweden)
Mark Buehner
2015-12-01
A new approach is presented and evaluated for efficiently applying scale-dependent spatial localisation to ensemble background-error covariances within an ensemble-variational data assimilation system. The approach is primarily motivated by the requirements of future data assimilation systems for global numerical weather prediction that will be capable of resolving the convective scale. Such systems must estimate the global and synoptic scales at least as well as current global systems while also effectively making use of information from frequent and spatially dense observation networks to constrain convective-scale features. Scale-dependent covariance localisation allows a wider range of scales to be efficiently estimated while simultaneously assimilating all available observations. In the context of an idealised numerical experiment, it is shown that using scale-dependent localisation produces an improved ensemble-based estimate of spatially varying covariances as compared with standard spatial localisation. When applied to an ensemble of Arctic sea-ice concentration, it is demonstrated that strong spatial gradients in the relative contribution of different spatial scales in the ensemble covariances result in strong spatial variations in the overall amount of spatial localisation. This feature is qualitatively similar to what might be expected when applying an adaptive localisation approach that estimates a spatially varying localisation function from the ensemble itself. When compared with standard spatial localisation, scale-dependent localisation also results in a lower analysis error for sea-ice concentration over all spatial scales.
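The scale-dependent localisation idea above can be sketched in a few lines of Python. This is a toy illustration under stated assumptions (a 1-D periodic grid, a running-mean filter to separate scales, and Gaussian tapers standing in for the localisation functions actually used in such systems); all names and parameter values are hypothetical, not taken from the paper.

```python
import math
import random

random.seed(1)
n, ne = 32, 20                     # gridpoints, ensemble members

# Synthetic ensemble: a smooth large-scale wave plus gridpoint noise
ens = []
for _ in range(ne):
    phase = random.uniform(0.0, 2.0 * math.pi)
    amp = random.gauss(1.0, 0.3)
    member = [amp * math.sin(2.0 * math.pi * i / n + phase)
              + 0.3 * random.gauss(0.0, 1.0) for i in range(n)]
    ens.append(member)

def mean(v):
    return sum(v) / len(v)

def smooth(field, width=3):
    # crude periodic running mean acting as a large-scale filter
    return [mean([field[(i + k) % n] for k in range(-width, width + 1)])
            for i in range(n)]

def sample_cov(perts, i, j):
    return sum(p[i] * p[j] for p in perts) / (len(perts) - 1)

def taper(i, j, length):
    d = min(abs(i - j), n - abs(i - j))     # periodic distance
    return math.exp(-0.5 * (d / length) ** 2)

xbar = [mean([e[i] for e in ens]) for i in range(n)]
large = [[v - b for v, b in zip(smooth(e), smooth(xbar))] for e in ens]
small = [[(e[i] - xbar[i]) - l[i] for i in range(n)]
         for e, l in zip(ens, large)]

# Scale-dependent localisation: broad taper for the smooth part,
# tight taper for the noisy small-scale part
def loc_cov(i, j):
    return (sample_cov(large, i, j) * taper(i, j, 10.0)
            + sample_cov(small, i, j) * taper(i, j, 2.0))
```

The broad taper retains ensemble covariance at large separations only for the smooth part of the perturbations, while the tight taper suppresses spurious long-range sample covariances of the noisy part, which is the effect the abstract describes.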
Empirical State Error Covariance Matrix for Batch Estimation
Frisbee, Joe
2015-01-01
State estimation techniques effectively provide mean state estimates. However, the theoretical state error covariance matrices provided as part of these techniques often suffer from a lack of confidence in their ability to describe the uncertainty in the estimated states. By a reinterpretation of the equations involved in the weighted batch least squares algorithm, it is possible to directly arrive at an empirical state error covariance matrix. The proposed empirical state error covariance matrix will contain the effect of all error sources, known or not. This empirical error covariance matrix may be calculated as a side computation for each unique batch solution. Results based on the proposed technique will be presented for a simple, two-observer, measurement-error-only problem.
An Empirical State Error Covariance Matrix Orbit Determination Example
Frisbee, Joseph H., Jr.
2015-01-01
State estimation techniques serve effectively to provide mean state estimates. However, the state error covariance matrices provided as part of these techniques suffer from some degree of lack of confidence in their ability to adequately describe the uncertainty in the estimated states. A specific problem with the traditional form of state error covariance matrices is that they represent only a mapping of the assumed observation error characteristics into the state space. Any errors that arise from other sources (environment modeling, precision, etc.) are not directly represented in a traditional, theoretical state error covariance matrix. First, consider that an actual observation contains only measurement error and that an estimated observation contains all other errors, known and unknown. Then it follows that a measurement residual (the difference between expected and observed measurements) contains all errors for that measurement. Therefore, a direct and appropriate inclusion of the actual measurement residuals in the state error covariance matrix of the estimate will result in an empirical state error covariance matrix. This empirical state error covariance matrix will fully include all of the errors in the state estimate. The empirical error covariance matrix is determined from a literal reinterpretation of the equations involved in the weighted least squares estimation algorithm. It is a formally correct, empirical state error covariance matrix obtained through use of the average form of the weighted measurement residual variance performance index rather than the usual total weighted residual form. Based on its formulation, this matrix will contain the total uncertainty in the state estimate, regardless of the source of the uncertainty and whether the source is anticipated or not. It is expected that the empirical error covariance matrix will give a better, statistical representation of the state error in poorly modeled systems or when sensor performance…
An Empirical State Error Covariance Matrix for Batch State Estimation
Frisbee, Joseph H., Jr.
2011-01-01
State estimation techniques serve effectively to provide mean state estimates. However, the state error covariance matrices provided as part of these techniques suffer from some degree of lack of confidence in their ability to adequately describe the uncertainty in the estimated states. A specific problem with the traditional form of state error covariance matrices is that they represent only a mapping of the assumed observation error characteristics into the state space. Any errors that arise from other sources (environment modeling, precision, etc.) are not directly represented in a traditional, theoretical state error covariance matrix. Consider that an actual observation contains only measurement error and that an estimated observation contains all other errors, known and unknown. It then follows that a measurement residual (the difference between expected and observed measurements) contains all errors for that measurement. Therefore, a direct and appropriate inclusion of the actual measurement residuals in the state error covariance matrix will result in an empirical state error covariance matrix. This empirical state error covariance matrix will fully account for the error in the state estimate. By way of a literal reinterpretation of the equations involved in the weighted least squares estimation algorithm, it is possible to arrive at an appropriate, and formally correct, empirical state error covariance matrix. The first specific step of the method is to use the average form of the weighted measurement residual variance performance index rather than its usual total weighted residual form. Next it is helpful to interpret the solution to the normal equations as the average of a collection of sample vectors drawn from a hypothetical parent population. From here, using a standard statistical analysis approach, it directly follows as to how to determine the standard empirical state error covariance matrix. This matrix will contain the total uncertainty in the
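The reinterpretation described in these abstracts can be illustrated with a minimal weighted-least-squares sketch in Python: scaling the theoretical covariance by the average weighted residual variance yields an empirical covariance that inflates when the assumed measurement noise understates the actual errors. This is a toy one-parameter version, not the author's orbit-determination implementation; all noise levels are illustrative.

```python
import random

random.seed(0)
truth = 5.0          # scalar "state" to estimate
sigma_assumed = 1.0  # measurement sigma the estimator believes
sigma_actual = 3.0   # actual noise, larger than assumed (unmodeled errors)
m = 2000

y = [truth + random.gauss(0.0, sigma_actual) for _ in range(m)]
w = 1.0 / sigma_assumed ** 2          # weight from the assumed noise

# Weighted least squares estimate of a constant state (H = column of ones)
x_hat = sum(w * yi for yi in y) / (w * m)

# Theoretical covariance (H^T W H)^-1 reflects only the assumed noise
p_theory = 1.0 / (w * m)

# Empirical covariance: rescale by the *average* weighted measurement
# residual variance performance index, which carries all actual errors
j_avg = sum(w * (yi - x_hat) ** 2 for yi in y) / m
p_emp = j_avg * p_theory
```

Here `p_emp` comes out roughly nine times larger than `p_theory`, matching the factor by which the actual noise variance exceeds the assumed one.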
Bayesian adjustment for covariate measurement errors: a flexible parametric approach.
Hossain, Shahadut; Gustafson, Paul
2009-05-15
In most epidemiological investigations, the study units are people, the outcome variable (or the response) is a health-related event, and the explanatory variables are usually environmental and/or socio-demographic factors. The fundamental task in such investigations is to quantify the association between the explanatory variables (covariates/exposures) and the outcome variable through a suitable regression model. The accuracy of such quantification depends on how precisely the relevant covariates are measured. In many instances, we cannot measure some of the covariates accurately; rather, we can measure noisy (mismeasured) versions of them. In statistical terminology, mismeasurement in continuous covariates is known as measurement error or errors-in-variables. Regression analyses based on mismeasured covariates lead to biased inference about the true underlying response-covariate associations. In this paper, we suggest a flexible parametric approach for avoiding this bias when estimating the response-covariate relationship through a logistic regression model. More specifically, we consider the flexible generalized skew-normal and the flexible generalized skew-t distributions for modeling the unobserved true exposure. For inference and computational purposes, we use Bayesian Markov chain Monte Carlo techniques. We investigate the performance of the proposed flexible parametric approach in comparison with a common flexible parametric approach through extensive simulation studies. We also compare the proposed method with the competing flexible parametric method on a real-life data set. Though emphasis is put on the logistic regression model, the proposed method is unified and is applicable to the other generalized linear models, and to other types of non-linear regression models as well. (c) 2009 John Wiley & Sons, Ltd.
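The bias that such correction methods target is easy to demonstrate numerically. The sketch below (Python, with linear rather than logistic regression for transparency; all parameter values are hypothetical) shows the classical attenuation of a naive slope estimate toward zero by the factor lambda = var_x / (var_x + var_u):

```python
import random

random.seed(1)
n = 20000
beta = 2.0                  # true response-covariate slope
sd_x, sd_u = 1.0, 1.0       # sd of true covariate and of measurement error

x = [random.gauss(0.0, sd_x) for _ in range(n)]
w = [xi + random.gauss(0.0, sd_u) for xi in x]     # mismeasured covariate
y = [beta * xi + random.gauss(0.0, 0.5) for xi in x]

def slope(u, v):
    # ordinary least squares slope of v on u
    mu, mv = sum(u) / n, sum(v) / n
    sxy = sum((a - mu) * (b - mv) for a, b in zip(u, v))
    sxx = sum((a - mu) ** 2 for a in u)
    return sxy / sxx

naive = slope(w, y)         # regression on the noisy covariate: biased
lam = sd_x ** 2 / (sd_x ** 2 + sd_u ** 2)   # attenuation factor (here 0.5)
```

With equal covariate and error variances, the naive slope converges to about half the true value, which is the bias a measurement-error model (Bayesian or otherwise) must undo.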
Cross-covariance functions for multivariate geostatistics
Genton, Marc G.
2015-05-01
Continuously indexed datasets with multiple variables have become ubiquitous in the geophysical, ecological, environmental and climate sciences, and pose substantial analysis challenges to scientists and statisticians. For many years, scientists developed models that aimed at capturing the spatial behavior for an individual process; only within the last few decades has it become commonplace to model multiple processes jointly. The key difficulty is in specifying the cross-covariance function, that is, the function responsible for the relationship between distinct variables. Indeed, these cross-covariance functions must be chosen to be consistent with marginal covariance functions in such a way that the second-order structure always yields a nonnegative definite covariance matrix. We review the main approaches to building cross-covariance models, including the linear model of coregionalization, convolution methods, the multivariate Matérn and nonstationary and space-time extensions of these among others. We additionally cover specialized constructions, including those designed for asymmetry, compact support and spherical domains, with a review of physics-constrained models. We illustrate select models on a bivariate regional climate model output example for temperature and pressure, along with a bivariate minimum and maximum temperature observational dataset; we compare models by likelihood value as well as via cross-validation co-kriging studies. The article closes with a discussion of unsolved problems. © Institute of Mathematical Statistics, 2015.
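The consistency requirement described above — cross-covariance functions that always yield a nonnegative definite joint covariance matrix — is guaranteed by construction in the linear model of coregionalization, one of the approaches reviewed. A minimal Python sketch (hypothetical coregionalization coefficients and exponential correlations; a Cholesky factorization doubles as a positive-definiteness check):

```python
import math

def rho(h, length):                 # valid (exponential) correlation
    return math.exp(-abs(h) / length)

# Linear model of coregionalization: C_kl(h) = sum_r A[k][r] A[l][r] rho_r(h)
A = [[1.0, 0.4],
     [0.3, 0.9]]                    # hypothetical coregionalization matrix
lengths = [2.0, 0.5]                # one correlation range per latent field

def cross_cov(h, k, l):
    return sum(A[k][r] * A[l][r] * rho(h, lengths[r]) for r in range(2))

sites = [0.0, 1.0, 2.5, 4.0]        # 1-D locations, two variables per site
p = 2
n = len(sites) * p
C = [[cross_cov(sites[i // p] - sites[j // p], i % p, j % p)
      for j in range(n)] for i in range(n)]

def cholesky(M):
    # plain Cholesky factorization; fails if M is not positive definite
    m = len(M)
    L = [[0.0] * m for _ in range(m)]
    for i in range(m):
        for j in range(i + 1):
            s = sum(L[i][k] * L[j][k] for k in range(j))
            if i == j:
                d = M[i][i] - s
                if d <= 0.0:
                    raise ValueError("not positive definite")
                L[i][i] = math.sqrt(d)
            else:
                L[i][j] = (M[i][j] - s) / L[j][j]
    return L

L = cholesky(C)    # succeeds: the LMC yields a valid joint covariance
```

Because each latent component contributes a rank-one coefficient matrix times a valid correlation, the summed joint covariance is automatically nonnegative definite, which is exactly the constraint the review emphasizes.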
An Empirical State Error Covariance Matrix for the Weighted Least Squares Estimation Method
Frisbee, Joseph H., Jr.
2011-01-01
State estimation techniques effectively provide mean state estimates. However, the theoretical state error covariance matrices provided as part of these techniques often suffer from a lack of confidence in their ability to describe the uncertainty in the estimated states. By a reinterpretation of the equations involved in the weighted least squares algorithm, it is possible to directly arrive at an empirical state error covariance matrix. This proposed empirical state error covariance matrix will contain the effect of all error sources, known or not. Results based on the proposed technique will be presented for a simple, two-observer, measurement-error-only problem.
Optimal solution error covariance in highly nonlinear problems of variational data assimilation
Directory of Open Access Journals (Sweden)
V. Shutyaev
2012-03-01
The problem of variational data assimilation (DA) for a nonlinear evolution model is formulated as an optimal control problem to find the initial condition, boundary conditions and/or model parameters. The input data contain observation and background errors, hence there is an error in the optimal solution. For mildly nonlinear dynamics, the covariance matrix of the optimal solution error can be approximated by the inverse Hessian of the cost function. For problems with strongly nonlinear dynamics, a new statistical method based on the computation of a sample of inverse Hessians is suggested. This method relies on the efficient computation of the inverse Hessian by means of iterative methods (Lanczos and quasi-Newton BFGS with preconditioning. Numerical examples are presented for the model governed by the Burgers equation with a nonlinear viscous term.
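The inverse-Hessian approximation mentioned in the abstract can be checked in a scalar toy problem: for mildly nonlinear dynamics, the inverse Hessian of the cost function at the optimum closely matches the Monte-Carlo variance of the optimal solution error. The sketch below is a Python illustration with a hypothetical observation operator h(x) = x + 0.1x² and finite-difference Newton iterations standing in for the Lanczos/BFGS machinery of the paper:

```python
import random

random.seed(2)

def h(x):                     # hypothetical mildly nonlinear model output
    return x + 0.1 * x * x

def fit(y, sigma):
    """Minimise J(x) = sum (y_i - h(x))^2 / (2 sigma^2) by Newton steps;
    return the optimum and the inverse Hessian of J there."""
    def J(x):
        return sum((yi - h(x)) ** 2 for yi in y) / (2.0 * sigma ** 2)
    x, eps = 0.0, 1e-4
    for _ in range(50):
        g = (J(x + eps) - J(x - eps)) / (2.0 * eps)
        H = (J(x + eps) - 2.0 * J(x) + J(x - eps)) / eps ** 2
        x -= g / H
    H = (J(x + eps) - 2.0 * J(x) + J(x - eps)) / eps ** 2
    return x, 1.0 / H

x_true, sigma, m = 1.0, 0.2, 25
estimates, ih_vars = [], []
for _ in range(400):                      # repeat the "experiment"
    y = [h(x_true) + random.gauss(0.0, sigma) for _ in range(m)]
    x_hat, v = fit(y, sigma)
    estimates.append(x_hat)
    ih_vars.append(v)

mu = sum(estimates) / len(estimates)
sample_var = sum((e - mu) ** 2 for e in estimates) / (len(estimates) - 1)
ih_var = sum(ih_vars) / len(ih_vars)      # mean inverse-Hessian variance
```

For this mildly nonlinear operator the two variance estimates agree closely; the paper's contribution concerns the strongly nonlinear regime, where a single inverse Hessian no longer suffices and a sample of them is needed.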
Lu, Dan; Ye, Ming; Meyer, Philip D.; Curtis, Gary P.; Shi, Xiaoqing; Niu, Xu-Feng; Yabusaki, Steve B.
2013-01-01
When conducting model averaging for assessing groundwater conceptual model uncertainty, the averaging weights are often evaluated using model selection criteria such as AIC, AICc, BIC, and KIC (Akaike Information Criterion, Corrected Akaike Information Criterion, Bayesian Information Criterion, and Kashyap Information Criterion, respectively). However, this method often leads to an unrealistic situation in which the best model receives overwhelmingly large averaging weight (close to 100%), which cannot be justified by available data and knowledge. It was found in this study that this problem was caused by using the covariance matrix, CE, of measurement errors for estimating the negative log likelihood function common to all the model selection criteria. This problem can be resolved by using the covariance matrix, Cek, of total errors (including model errors and measurement errors) to account for the correlation between the total errors. An iterative two-stage method was developed in the context of maximum likelihood inverse modeling to iteratively infer the unknown Cek from the residuals during model calibration. The inferred Cek was then used in the evaluation of model selection criteria and model averaging weights. While this method was limited to serial data using time series techniques in this study, it can be extended to spatial data using geostatistical techniques. The method was first evaluated in a synthetic study and then applied to an experimental study, in which alternative surface complexation models were developed to simulate column experiments of uranium reactive transport. It was found that the total errors of the alternative models were temporally correlated due to the model errors. The iterative two-stage method using Cek resolved the problem that the best model receives 100% model averaging weight, and the resulting model averaging weights were supported by the calibration results and physical understanding of the alternative models. Using Cek…
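The weight-collapse problem described above follows directly from how information-criterion weights are computed: weights are proportional to exp(-delta/2), so even modest criterion differences (which an understated error covariance inflates) concentrate essentially all weight on one model. A minimal Python sketch with hypothetical AIC values:

```python
import math

# Hypothetical AIC values for four rival models (smaller is better);
# gaps of this size arise easily when the likelihood is evaluated with
# a measurement-error-only covariance matrix
aic = {"M1": 130.2, "M2": 138.9, "M3": 145.1, "M4": 152.4}

best = min(aic.values())
rel = {m: math.exp(-0.5 * (a - best)) for m, a in aic.items()}
total = sum(rel.values())
weights = {m: r / total for m, r in rel.items()}   # model averaging weights
```

With these numbers the best model takes nearly 99% of the weight; accounting for correlated total errors, as the paper proposes, shrinks the criterion differences and spreads the weights more realistically.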
Estimating the background covariance error for the Global Data Assimilation System of CPTEC/INPE
Bastarz, C. F.; Goncalves, L.
2013-05-01
The global data assimilation system at CPTEC/INPE, named G3Dvar, is based on the Gridpoint Statistical Interpolation (GSI/NCEP/GMAO) and on the general circulation model from that same center (GCM/CPTEC/INPE). G3Dvar is a three-dimensional variational data assimilation system that uses a fixed Background Error Covariance Matrix (BE); in its current implementation, it uses the matrix from the Global Forecast System (GFS/NCEP). The goal of this work is to present preliminary results of the calculation of a new BE based on the GCM/CPTEC/INPE, using a methodology similar to the one used for GSI/WRFDA, called gen_be. The calculation is done in 5 distinct steps in the analysis increment space: (a) stream function and velocity potential are determined from the wind fields; (b) the means of the stream function and velocity potential are calculated in order to obtain the perturbation fields for the remaining variables (streamfunction, velocity potential, temperature, relative humidity and surface pressure); (c) the covariances of the perturbation fields, regression coefficients and balance between streamfunction, temperature and surface pressure are estimated. For this particular system, i.e. GCM/CPTEC/INPE, the necessity of constraints towards the statistical balance between streamfunction and velocity potential, temperature and surface pressure will be evaluated, as well as how it affects the BE matrix calculation. Hence, this work investigates the procedures necessary for calculating BE, shows how that differs from the standard calculation, and how it is calibrated/adjusted based on the GCM/CPTEC/INPE. The main differences between the GFS BE and the newly calculated GCM/CPTEC/INPE BE are discussed, in addition to an impact study using the different background error covariance matrices.
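The balance-regression step of such a procedure (step (c) above) can be sketched as follows. This is a toy Python illustration with synthetic fields, not the gen_be code; the 0.6 balance coefficient and all dimensions are invented:

```python
import random

random.seed(3)
ne, n = 30, 8          # number of forecast samples, number of gridpoints

# Synthetic streamfunction samples and a temperature field that is
# partly in balance with them (hypothetical balance coefficient 0.6)
psi = [[random.gauss(0.0, 1.0) for _ in range(n)] for _ in range(ne)]
t = [[0.6 * p + random.gauss(0.0, 0.5) for p in member] for member in psi]

def mean_field(fields):
    return [sum(f[i] for f in fields) / len(fields) for i in range(n)]

psi_bar, t_bar = mean_field(psi), mean_field(t)
psi_p = [[f[i] - psi_bar[i] for i in range(n)] for f in psi]
t_p = [[f[i] - t_bar[i] for i in range(n)] for f in t]

# Regression coefficient of temperature on streamfunction (the balance)
num = sum(pp[i] * tp[i] for pp, tp in zip(psi_p, t_p) for i in range(n))
den = sum(pp[i] ** 2 for pp in psi_p for i in range(n))
coef = num / den

# The unbalanced residual feeds the variance part of the B matrix
t_u = [[tp[i] - coef * pp[i] for i in range(n)]
       for pp, tp in zip(psi_p, t_p)]
```

The regression recovers the imposed balance coefficient, and by construction the unbalanced residual is uncorrelated with the streamfunction perturbations, which is what lets the covariance model treat the two parts separately.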
DEFF Research Database (Denmark)
Yang, Yukay
I consider multivariate (vector) time series models in which the error covariance matrix may be time-varying. I derive a test of constancy of the error covariance matrix against the alternative that the covariance matrix changes over time. I design a new family of Lagrange-multiplier tests against...
Ocean Spectral Data Assimilation Without Background Error Covariance Matrix
2016-01-01
Chu PC, Wang GH, Chen YC (2002) Japan/East Sea (JES) circulation and thermohaline structure, Part 3, Autocorrelation functions. J Phys Oceanogr, 32, 3596-3615.
Chu PC, Wang GH (2003) Seasonal variability of thermohaline front in the central South China Sea. J Oceanogr, 59…
A Study on Mutil-Scale Background Error Covariances in 3D-Var Data Assimilation
Zhang, Xubin; Tan, Zhe-Min
2017-04-01
The construction of background error covariances is a key component of three-dimensional variational data assimilation. In numerical weather prediction there are background errors at different scales, and interactions among them, but the influence of these errors and their interactions cannot be represented in background error covariance statistics estimated by the leading methods. It is therefore necessary to construct background error covariances influenced by multi-scale interactions among errors. Using the NMC method, this article first estimates the background error covariances at given model-resolution scales. The information of errors whose scales are larger and smaller than the given ones is then introduced, using different nesting techniques, to estimate the corresponding covariances. Comparison of the three background error covariance statistics influenced by information of errors at different scales reveals that the background error variances are enhanced, particularly at large scales and higher levels, when the information of larger-scale errors is introduced through the lateral boundary condition provided by a lower-resolution model. On the other hand, the variances are reduced at medium scales at higher levels, while they show slight improvement at lower levels in the nested domain, especially at medium and small scales, when the information of smaller-scale errors is introduced by nesting a higher-resolution model. In addition, the introduction of information of larger- (smaller-) scale errors leads to larger (smaller) horizontal and vertical correlation scales of background errors. Considering the multivariate correlations, the Ekman coupling increases (decreases) with the information of larger- (smaller-) scale errors included, whereas the geostrophic coupling in the free atmosphere weakens in both situations. The three covariances obtained in the above work are used in a data assimilation and model forecast system respectively, and then the…
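The NMC method used above estimates B from differences between pairs of forecasts of different lengths valid at the same time, on the premise that their spread mimics the structure of background error. A toy Python sketch with synthetic paired forecasts (all noise scales are hypothetical):

```python
import random

random.seed(4)
n, ncase = 6, 300      # gridpoints, number of forecast pairs

def correlated_noise(scale):
    # one large-scale component shared across gridpoints plus local noise
    base = random.gauss(0.0, 1.0)
    return [scale * (base + 0.5 * random.gauss(0.0, 1.0)) for _ in range(n)]

diffs = []
for _ in range(ncase):
    truth = [random.gauss(0.0, 1.0) for _ in range(n)]
    f24 = [t + e for t, e in zip(truth, correlated_noise(0.3))]
    f48 = [t + e for t, e in zip(truth, correlated_noise(0.5))]
    diffs.append([a - b for a, b in zip(f48, f24)])

# NMC proxy for B: sample covariance of (48 h - 24 h) forecast differences
mean_d = [sum(d[i] for d in diffs) / ncase for i in range(n)]
B = [[sum((d[i] - mean_d[i]) * (d[j] - mean_d[j]) for d in diffs)
      / (ncase - 1) for j in range(n)] for i in range(n)]
```

Because the shared large-scale error component survives in the forecast differences, the estimated B carries positive off-diagonal covariances, the spatial correlations that the nesting experiments in the abstract then modify scale by scale.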
Linear mixed models for replication data to efficiently allow for covariate measurement error.
Bartlett, Jonathan W; De Stavola, Bianca L; Frost, Chris
2009-11-10
It is well known that measurement error in the covariates of regression models generally causes bias in parameter estimates. Correction for such biases requires information concerning the measurement error, which is often in the form of internal validation or replication data. Regression calibration (RC) is a popular approach to correct for covariate measurement error, which involves predicting the true covariate using error-prone measurements. Likelihood methods have previously been proposed as an alternative approach to estimate the parameters in models affected by measurement error, but have been relatively infrequently employed in medical statistics and epidemiology, partly because of computational complexity and concerns regarding robustness to distributional assumptions. We show how a standard random-intercepts model can be used to obtain maximum likelihood (ML) estimates when the outcome model is linear or logistic regression under certain normality assumptions, when internal error-prone replicate measurements are available. Through simulations we show that for linear regression, ML gives more efficient estimates than RC, although the gain is typically small. Furthermore, we show that RC and ML estimates remain consistent even when the normality assumptions are violated. For logistic regression, our implementation of ML is consistent if the true covariate is conditionally normal given the outcome, in contrast to RC. In simulations, this ML estimator showed less bias in situations where RC gives non-negligible biases. Our proposal makes the ML approach to dealing with covariate measurement error more accessible to researchers, which we hope will improve its viability as a useful alternative to methods such as RC.
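The regression-calibration (RC) approach that the paper compares against can be sketched directly when two replicates are available: estimate the measurement-error variance from within-pair differences, then rescale the naive slope. A Python sketch under normality, with hypothetical parameter values:

```python
import random

random.seed(5)
n = 5000
beta, sd_x, sd_u = 1.5, 1.0, 0.8     # hypothetical true values

x = [random.gauss(0.0, sd_x) for _ in range(n)]
w1 = [xi + random.gauss(0.0, sd_u) for xi in x]   # error-prone replicate 1
w2 = [xi + random.gauss(0.0, sd_u) for xi in x]   # error-prone replicate 2
y = [beta * xi + random.gauss(0.0, 1.0) for xi in x]

wbar = [(a + b) / 2.0 for a, b in zip(w1, w2)]

def var(v):
    m = sum(v) / len(v)
    return sum((a - m) ** 2 for a in v) / (len(v) - 1)

def slope(u, v):
    mu, mv = sum(u) / n, sum(v) / n
    return (sum((a - mu) * (b - mv) for a, b in zip(u, v))
            / sum((a - mu) ** 2 for a in u))

# Measurement-error variance from within-pair replicate differences
s_u2 = var([a - b for a, b in zip(w1, w2)]) / 2.0
s_w2 = var(wbar)                     # variance of the averaged replicates
s_x2 = s_w2 - s_u2 / 2.0             # implied variance of the true covariate

naive = slope(wbar, y)               # attenuated slope
rc = naive * s_w2 / s_x2             # regression-calibration correction
```

The corrected slope recovers the true coefficient up to sampling error; the paper's point is that a random-intercepts ML fit of the same replicate data can be somewhat more efficient, and more robust for logistic outcomes.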
Shen, Chung-Wei; Chen, Yi-Hau
2015-10-01
Missing observations and covariate measurement error commonly arise in longitudinal data. However, existing methods for model selection in marginal regression analysis of longitudinal data fail to address the potential bias resulting from these issues. To tackle this problem, we propose a new model selection criterion, the Generalized Longitudinal Information Criterion, which is based on an approximately unbiased estimator for the expected quadratic error of a considered marginal model accounting for both data missingness and covariate measurement error. The simulation results reveal that the proposed method performs quite well in the presence of missing data and covariate measurement error. On the contrary, the naive procedures without taking care of such complexity in data may perform quite poorly. The proposed method is applied to data from the Taiwan Longitudinal Study on Aging to assess the relationship of depression with health and social status in the elderly, accommodating measurement error in the covariate as well as missing observations. © The Author 2015. Published by Oxford University Press. All rights reserved. For permissions, please e-mail: journals.permissions@oup.com.
Fratini, G.; McDermitt, D. K.; Papale, D.
2013-08-01
Errors in gas concentration measurements by infrared gas analysers can occur during eddy-covariance campaigns, associated with actual or apparent instrumental drifts or with biases due to thermal expansion, dirt contamination, aging of components or errors in field operations. If occurring on long time scales (hours to days), these errors are normally ignored during flux computation, under the assumption that errors in mean gas concentrations do not affect the estimation of turbulent fluctuations and, hence, of covariances. By analysing instrument theory of operation, and using numerical simulations and field data, we show that this is not the case for instruments with curvilinear calibrations; we further show that if not appropriately accounted for, concentration biases can lead to roughly proportional systematic flux errors, where the fractional errors in fluxes are about 30-40% of the fractional errors in concentrations. We quantify these errors and characterize their dependency on main determinants. We then propose a correction procedure that largely - potentially completely - eliminates these errors. The correction, to be applied during flux computation, is based on knowledge of instrument calibration curves and on field or laboratory calibration data. Finally, we demonstrate the occurrence of such errors and validate the correction procedure by means of a field experiment, and accordingly provide recommendations for in situ operations. The correction described in this paper will soon be available in the EddyPro software (www.licor.com/eddypro).
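The mechanism — a mean bias shifting the operating point on a curvilinear calibration and thereby rescaling the retrieved fluctuations and their covariance with vertical wind — can be reproduced with a toy quadratic calibration. The coefficients and signal scales below are hypothetical, not an actual instrument curve:

```python
import random

random.seed(6)
m = 20000

def calib(a):
    # toy curvilinear calibration: absorptance -> concentration
    # (invented coefficients for illustration only)
    return 400.0 * a + 900.0 * a * a

w, a = [], []                        # vertical wind and absorptance series
for _ in range(m):
    common = random.gauss(0.0, 1.0)  # shared turbulent component
    w.append(0.4 * common + random.gauss(0.0, 0.3))
    a.append(0.3 + 0.01 * (0.7 * common + random.gauss(0.0, 0.7)))

def cov(u, v):
    mu, mv = sum(u) / m, sum(v) / m
    return sum((x - mu) * (y - mv) for x, y in zip(u, v)) / (m - 1)

flux_true = cov(w, [calib(ai) for ai in a])

# An apparent drift shifts the mean absorptance; on a curved calibration
# this changes the local slope and hence the computed covariance
delta = 0.02
flux_biased = cov(w, [calib(ai + delta) for ai in a])
```

Even though the drift only offsets the mean, the computed flux changes by a few percent, in proportion to the change in local calibration slope; a mean-preserving correction based on the known curve removes it.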
Electron localization functions and local measures of the covariance
Indian Academy of Sciences (India)
The electron localization measure proposed by Becke and Edgecombe is shown to be related to the covariance of the electron pair distribution. Just as with the electron localization function, the local covariance does not seem to be, in and of itself, a useful quantity for elucidating shell structure. A function of the local ...
Sang, Huiyan
2011-12-01
This paper investigates the cross-correlations across multiple climate model errors. We build a Bayesian hierarchical model that accounts for the spatial dependence of individual models as well as cross-covariances across different climate models. Our method allows for a nonseparable and nonstationary cross-covariance structure. We also present a covariance approximation approach to facilitate the computation in the modeling and analysis of very large multivariate spatial data sets. The covariance approximation consists of two parts: a reduced-rank part to capture the large-scale spatial dependence, and a sparse covariance matrix to correct the small-scale dependence error induced by the reduced rank approximation. We pay special attention to the case that the second part of the approximation has a block-diagonal structure. Simulation results of model fitting and prediction show substantial improvement of the proposed approximation over the predictive process approximation and the independent blocks analysis. We then apply our computational approach to the joint statistical modeling of multiple climate model errors. © 2012 Institute of Mathematical Statistics.
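The two-part covariance approximation described — a reduced-rank term for large-scale dependence plus a sparse correction for the small-scale error it induces — can be sketched with a predictive-process-style construction in Python (exponential kernel, three knots, and a diagonal rather than block-diagonal correction; all values illustrative):

```python
import math

def k_exp(a, b):                     # exponential covariance kernel
    return math.exp(-abs(a - b) / 2.0)

pts = [0.5 * i for i in range(12)]   # observation locations
knots = [0.0, 2.5, 5.0]              # knots carrying the reduced-rank part

C = [[k_exp(a, b) for b in pts] for a in pts]
Cnk = [[k_exp(a, u) for u in knots] for a in pts]
Ckk = [[k_exp(u, v) for v in knots] for u in knots]

def solve(M, rhs):
    # Gauss-Jordan elimination with partial pivoting for a small system
    aug = [row[:] + [r] for row, r in zip(M, rhs)]
    k = len(M)
    for i in range(k):
        p = max(range(i, k), key=lambda r: abs(aug[r][i]))
        aug[i], aug[p] = aug[p], aug[i]
        for r in range(k):
            if r != i:
                f = aug[r][i] / aug[i][i]
                aug[r] = [x - f * z for x, z in zip(aug[r], aug[i])]
    return [aug[i][k] / aug[i][i] for i in range(k)]

# Reduced-rank part Cnk Ckk^{-1} Ckn captures the large-scale dependence
wts = [solve(Ckk, list(Cnk[i])) for i in range(len(pts))]
C_lr = [[sum(wts[i][r] * Cnk[j][r] for r in range(len(knots)))
         for j in range(len(pts))] for i in range(len(pts))]

# Sparse (here: diagonal) correction restores the small-scale variance
# lost by the reduced-rank approximation
C_approx = [[C_lr[i][j] + (C[i][i] - C_lr[i][i] if i == j else 0.0)
             for j in range(len(pts))] for i in range(len(pts))]
```

The reduced-rank part is exact at the knots and under-represents variance elsewhere; the sparse correction makes the diagonal exact, which is the role the block-diagonal term plays in the paper's full construction.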
Dagne, Getachew A.; Huang, Yangxin
2013-01-01
Common problems to many longitudinal HIV/AIDS, cancer, vaccine and environmental exposure studies are the presence of a lower limit of quantification of an outcome with skewness and time-varying covariates with measurement errors. There has been relatively little work published simultaneously dealing with these features of longitudinal data. In particular, left-censored data falling below a limit of detection (LOD) may sometimes have a proportion larger than expected under a usually assumed log-normal distribution. In such cases, alternative models which can account for a high proportion of censored data should be considered. In this article, we present an extension of the Tobit model that incorporates a mixture of true undetectable observations and those values from a skew-normal distribution for an outcome with possible left-censoring and skewness, and covariates with substantial measurement error. To quantify the covariate process, we offer a flexible nonparametric mixed-effects model within the Tobit framework. A Bayesian modeling approach is used to assess the simultaneous impact of left-censoring, skewness and measurement error in covariates on inference. The proposed methods are illustrated using real data from an AIDS clinical study. PMID:23553914
Menga, G.
1975-01-01
An approach is proposed for the design of approximate, fixed-order, discrete-time realizations of stochastic processes from the output covariance over a finite time interval. No restrictive assumptions are imposed on the process; it can be nonstationary and lead to a high-dimension realization. Classes of fixed-order models are defined, having the joint covariance matrix of the combined vector of the outputs in the interval of definition greater than or equal to the process covariance (the difference matrix is nonnegative definite). The design is achieved by minimizing, within one of those classes, a measure of the approximation between the model and the process, evaluated by the trace of the difference of the respective covariance matrices. Models belonging to these classes have the notable property that, under the same measurement system and estimator structure, the output estimation error covariance matrix computed on the model is an upper bound of the corresponding covariance on the real process. An application of the approach is illustrated by the modeling of random meteorological wind profiles from the statistical analysis of historical data.
Tian, Wei; Cai, Li; Thissen, David; Xin, Tao
2013-01-01
In item response theory (IRT) modeling, the item parameter error covariance matrix plays a critical role in statistical inference procedures. When item parameters are estimated using the EM algorithm, the parameter error covariance matrix is not an automatic by-product of item calibration. Cai proposed the use of the Supplemented EM algorithm for…
Comparative test on several forms of background error covariance in 3DVar
Shao, Aimei
2013-04-01
The background error covariance matrix (hereinafter referred to as the B matrix) plays an important role in the three-dimensional variational (3DVar) data assimilation method. However, it is difficult to obtain the B matrix accurately because the true atmospheric state is unknown. Therefore, several methods have been developed to estimate the B matrix (e.g. the NMC method, the innovation analysis method, recursive filters, and ensemble methods such as the EnKF). Prior to further development and application of these methods, the behavior in 3DVar of the B matrices estimated by these methods is worth studying and evaluating. For this reason, NCEP reanalysis data and forecast data are used to test the effectiveness of several B matrices with the VAF method (Huang, 1999). Here the NCEP analysis is treated as the truth, in which case the forecast error is known. The data from 2006 to 2007 are used as the samples to estimate the B matrix, and the data in 2008 are used to verify the assimilation effects. The 48-h and 24-h forecasts valid at the same time are used to estimate the B matrix with the NMC method. The B matrix can be represented by a correlation part (a non-diagonal matrix) and a variance part (a diagonal matrix of variances). A Gaussian filter function is used as an approximation to represent the variation of correlation coefficients with distance in numerous 3DVar systems. On the basis of this assumption, the following forms of the B matrix are designed and tested with VAF in the comparative experiments: (1) the error variance and the characteristic lengths are fixed and set to their mean values averaged over the analysis domain; (2) as in (1), but the mean characteristic lengths are reduced to 50 percent of the original for height and 60 percent for temperature; (3) as in (2), but the error variance, calculated directly from the historical data, is space-dependent; (4) the error variance and characteristic lengths are all calculated directly from the historical data; (5) the B matrix is estimated directly by the
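The Gaussian-correlation construction of B described above can be sketched in a few lines of numpy. The grid, variances and length scales below are illustrative placeholders, not the study's values:

```python
import numpy as np

def gaussian_b_matrix(grid, variances, length_scale):
    """B = D^{1/2} C D^{1/2}: a Gaussian correlation part
    exp(-0.5 (d/L)^2) combined with a diagonal variance part."""
    d = np.abs(grid[:, None] - grid[None, :])        # pairwise distances
    corr = np.exp(-0.5 * (d / length_scale) ** 2)    # correlation part
    sig = np.sqrt(np.asarray(variances))
    return sig[:, None] * corr * sig[None, :]

grid = np.linspace(0.0, 1000.0, 21)                  # hypothetical 1-D grid (km)

# Experiment (1): variance and characteristic length fixed to mean values.
B1 = gaussian_b_matrix(grid, variances=np.full(21, 4.0), length_scale=300.0)

# Experiment (2): same, but the mean characteristic length halved.
B2 = gaussian_b_matrix(grid, variances=np.full(21, 4.0), length_scale=150.0)
```

The Gaussian correlation kernel is positive definite, so each constructed B is a valid covariance matrix by design.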
Covariance Analysis Tool (G-CAT) for Computing Ascent, Descent, and Landing Errors
Boussalis, Dhemetrios; Bayard, David S.
2013-01-01
G-CAT is a covariance analysis tool that enables fast and accurate computation of error ellipses for descent, landing, ascent, and rendezvous scenarios, and quantifies knowledge error contributions needed for error budgeting purposes. Because G-CAT supports hardware/system trade studies in spacecraft and mission design, it is useful in both early and late mission/proposal phases where Monte Carlo simulation capability is not mature, Monte Carlo simulation takes too long to run, and/or there is a need to perform multiple parametric system design trades that would require an unwieldy number of Monte Carlo runs. G-CAT is formulated as a variable-order square-root linearized Kalman filter (LKF), typically using over 120 filter states. An important property of G-CAT is that it is based on a 6-DOF (degrees of freedom) formulation that completely captures the combined effects of both attitude and translation errors on the propagated trajectories. This ensures its accuracy for guidance, navigation, and control (GN&C) analysis. G-CAT provides the desired fast turnaround analysis needed for error budgeting in support of mission concept formulations, design trade studies, and proposal development efforts. The main usefulness of a covariance analysis tool such as G-CAT is its ability to calculate the performance envelope directly from a single run. This is in sharp contrast to running thousands of simulations to obtain similar information using Monte Carlo methods. It does this by propagating the "statistics" of the overall design, rather than simulating individual trajectories. G-CAT supports applications to lunar, planetary, and small body missions. It characterizes onboard knowledge propagation errors associated with inertial measurement unit (IMU) errors (gyro and accelerometer), gravity errors/dispersions (spherical harmonics, masscons), and radar errors (multiple altimeter beams, multiple Doppler velocimeter beams). G-CAT is a standalone MATLAB-based tool intended to
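The core advantage described above, obtaining the error statistics from a single covariance propagation rather than thousands of sampled trajectories, can be illustrated with a toy linear system. The 6-DOF, 120-state G-CAT filter itself is far richer; everything below (matrices, step count, sample size) is an illustrative sketch:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy 2-state linear system: x_{k+1} = F x_k + w_k, w_k ~ N(0, Q).
F = np.array([[1.0, 0.1],
              [0.0, 1.0]])
Q = np.diag([1e-4, 1e-3])
P0 = np.diag([1.0, 0.25])      # initial error covariance

# Covariance analysis: one pass, propagating the statistics directly.
P_lin = P0.copy()
for _ in range(50):
    P_lin = F @ P_lin @ F.T + Q

# Monte Carlo: many sampled trajectories estimating the same covariance.
n = 20000
x = rng.multivariate_normal([0.0, 0.0], P0, size=n)
for _ in range(50):
    x = x @ F.T + rng.multivariate_normal([0.0, 0.0], Q, size=n)
P_mc = np.cov(x.T)
# P_lin and P_mc agree to within Monte Carlo sampling error,
# but the covariance analysis needed only a single propagation.
```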
On the impact of covariate measurement error on spatial regression modelling.
Huque, Md Hamidul; Bondell, Howard; Ryan, Louise
2014-12-01
Spatial regression models have grown in popularity in response to rapid advances in GIS (Geographic Information Systems) technology that allows epidemiologists to incorporate geographically indexed data into their studies. However, there are some subtle pitfalls in the use of these models. We show that the presence of covariate measurement error can lead to significant sensitivity of parameter estimation to the choice of spatial correlation structure. We quantify the effect of measurement error on parameter estimates, and then suggest two different ways to produce consistent estimates. We evaluate the methods through a simulation study. These methods are then applied to data on Ischemic Heart Disease (IHD).
ESTIMATION OF FUNCTIONALS OF SPARSE COVARIANCE MATRICES.
Fan, Jianqing; Rigollet, Philippe; Wang, Weichen
High-dimensional statistical tests often ignore correlations to gain simplicity and stability, leading to null distributions that depend on functionals of correlation matrices such as their Frobenius norm and other ℓr norms. Motivated by the computation of critical values of such tests, we investigate the difficulty of estimating functionals of sparse correlation matrices. Specifically, we show that simple plug-in procedures based on thresholded estimators of correlation matrices are sparsity-adaptive and minimax optimal over a large class of correlation matrices. Akin to previous results on functional estimation, the minimax rates exhibit an elbow phenomenon. Our results are further illustrated on simulated data as well as an empirical study of data arising in financial econometrics.
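A minimal sketch of the plug-in idea, thresholding small sample correlations at roughly sqrt(log p / n) before evaluating the functional, might look like this. The constant in the threshold and the test case are illustrative, not the paper's choices:

```python
import numpy as np

def thresholded_frobenius(X, tau=None):
    """Plug-in estimate of ||R||_F^2 for a sparse correlation matrix R,
    from an n x p data matrix X: zero out sample correlations below a
    threshold of order sqrt(log p / n), then evaluate the functional."""
    n, p = X.shape
    R = np.corrcoef(X, rowvar=False)
    if tau is None:
        tau = 2.0 * np.sqrt(np.log(p) / n)   # illustrative constant
    R_thr = np.where(np.abs(R) >= tau, R, 0.0)
    np.fill_diagonal(R_thr, 1.0)
    return np.sum(R_thr ** 2)

rng = np.random.default_rng(1)
p, n = 50, 400
X = rng.standard_normal((n, p))              # truth: identity correlation
est = thresholded_frobenius(X)
# For independent columns the spurious off-diagonal sample correlations
# are thresholded away, so est is close to ||I||_F^2 = p.
```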
Local Covariance Functions and Density Distributions.
1984-06-01
[OCR-garbled excerpt] The covariance function K(ψ, R) will be virtually zero for large distances; the passage derives an expression (Eq. 2.30) for the coefficients of the Legendre expansion of the local covariance function from an integral of K(ψ, R) against Legendre polynomials P(cos ψ). References: Moritz, H.: Advanced Physical Geodesy. Herbert Wichmann Verlag, Karlsruhe, 1980. Nash, R.A. and S.K. Jordan: Statistical geodesy - an engineering
Sensitivity of Lower Stratospheric Assimilated Ozone on Error Covariance Modeling and Data Selection
Stajner, Ivanka; Rood, Richard B.; Winslow, Nathan; Wargan, Krzysztof; Pawson, Steven
2002-01-01
Assimilated ozone is produced at the NASA/Goddard Data Assimilation Office by blending ozone retrieved from the Solar Backscatter UltraViolet/2 (SBUV/2) instrument and the Earth Probe Total Ozone Mapping Spectrometer (EP TOMS) measurements into an off-line transport model. The current system tends to overestimate the amount of lower stratospheric ozone. This is a region where ozone plays a key role in the forcing of climate. A biased ozone field in this region will adversely impact calculations of the stratosphere-troposphere exchange and, when used as a first guess in retrievals, the values determined from satellite observations. Since these are all important applications of assimilated ozone products, effort is being directed towards reducing this bias. The SBUV ozone data have a coarse vertical resolution with increased uncertainty below the ozone maximum, and TOMS provides only total ozone columns. Thus, the assimilated ozone in the lower stratosphere, and its vertical distribution in particular, are only weakly constrained by the incoming SBUV and TOMS data. Consequently, the assimilated ozone distribution should be sensitive to changes in inputs to the statistical analysis scheme. Accordingly, the sensitivity of the assimilated lower stratospheric ozone fields to changes in the TOMS error-covariance modeling and the SBUV data selection has been investigated. The use of a spatially correlated TOMS error covariance model led to improvements in the product. However, withholding the SBUV/2 data for the layer between 63 and 126 hPa typically degraded the product, a result which vindicates the use of this layer ozone product, despite its known errors. These efforts to improve the lower stratospheric distribution will be extended to include a more advanced forecast error covariance model, and by assimilating ozone products from new instruments on Envisat and EOS Aura.
Covariant density functional theory for nuclear matter
Energy Technology Data Exchange (ETDEWEB)
Badarch, U.
2007-07-01
The present thesis is organized as follows. In Chapter 2 we study the Nucleon-Nucleon (NN) interaction in the Dirac-Brueckner (DB) approach. We start by considering the NN interaction in free space in terms of the Bethe-Salpeter (BS) equation with a meson-exchange potential model. Then we present the DB approach for nuclear matter by extending the BS equation to the in-medium NN interaction. From the solution of the three-dimensional in-medium BS equation, we derive the DB self-energies and total binding energy, which are the main results of the DB approach and which we later incorporate in the field theoretical calculation of the nuclear equation of state. In Chapter 3, we introduce the basic concepts of density functional theory in the context of Quantum Hadrodynamics (QHD-I). We reach the main point of this work in Chapter 4, where we introduce the DDRH approach. In the DDRH theory, the medium dependence of the meson-nucleon vertices is expressed through functionals of the baryon field operators. Because of the complexity of the operator-valued functionals we use the mean-field approximation. In Chapter 5, we contrast microscopic and phenomenological approaches to extracting density dependent meson-baryon vertices. Chapter 6 gives the results of our studies of the EOS of infinite nuclear matter in detail. Using formulas derived in Chapters 4 and 5 we calculate the properties of symmetric and asymmetric nuclear matter and pure neutron matter. (orig.)
Weir, Kent A.; Wells, Eugene M.
1990-01-01
The design and operation of a Strapdown Navigation Analysis Program (SNAP) developed to perform covariance analysis on spacecraft inertial-measurement-unit (IMU) navigation errors are described and demonstrated. Consideration is given to the IMU modeling subroutine (with user-specified sensor characteristics), the data input procedures, state updates and the simulation of instrument failures, the determination of the nominal trajectory, the mapping-matrix and Monte Carlo covariance-matrix propagation methods, and aided-navigation simulation. Numerical results are presented in tables for sample applications involving (1) the Galileo/IUS spacecraft from its deployment from the Space Shuttle to a point 10 to the 8th ft from the center of the earth and (2) the TDRS-C/IUS spacecraft from Space Shuttle liftoff to a point about 2 h before IUS deployment. SNAP is shown to give reliable results for both cases, with good general agreement between the mapping-matrix and Monte Carlo predictions.
Dreano, Denis
2017-04-05
Specification and tuning of errors from dynamical models are important issues in data assimilation. In this work, we propose an iterative expectation-maximisation (EM) algorithm to estimate the model error covariances using classical extended and ensemble versions of the Kalman smoother. We show that, for additive model errors, the estimate of the error covariance converges. We also investigate other forms of model error, such as parametric or multiplicative errors. We show that additive Gaussian model error is able to compensate for non-additive sources of error in the algorithms we propose. We also demonstrate the limitations of the extended version of the algorithm and recommend the use of the more robust and flexible ensemble version. This article is a proof of concept of the methodology with the Lorenz-63 attractor. We developed an open-source Python library to enable future users to apply the algorithm to their own nonlinear dynamical models.
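For a linear-Gaussian state-space model, the EM update of an additive model error variance from Kalman smoother output can be written compactly. The scalar sketch below shows only that core ingredient (the paper works with extended/ensemble smoothers on Lorenz-63); the model parameters are illustrative, and the transition coefficient and observation noise are assumed known:

```python
import numpy as np

rng = np.random.default_rng(2)

# Scalar state-space model: x_k = a*x_{k-1} + w_k,  y_k = x_k + v_k.
a, q_true, r = 0.9, 0.5, 0.1
T = 2000
x = np.zeros(T)
for k in range(1, T):
    x[k] = a * x[k-1] + rng.normal(0, np.sqrt(q_true))
y = x + rng.normal(0, np.sqrt(r), T)

def em_q(y, a, r, q0, iters=30):
    """EM for the additive model error variance q; E-step via Kalman
    filter + RTS smoother, M-step in closed form. a and r are known."""
    q, T = q0, len(y)
    for _ in range(iters):
        # --- forward Kalman filter ---
        xf = np.zeros(T); Pf = np.zeros(T)
        xp = np.zeros(T); Pp = np.zeros(T)
        xf[0], Pf[0] = y[0], r
        for k in range(1, T):
            xp[k], Pp[k] = a * xf[k-1], a * a * Pf[k-1] + q
            K = Pp[k] / (Pp[k] + r)
            xf[k] = xp[k] + K * (y[k] - xp[k])
            Pf[k] = (1 - K) * Pp[k]
        # --- RTS smoother: means, variances, lag-one covariances ---
        xs = xf.copy(); Ps = Pf.copy()
        Pcross = np.zeros(T)                 # Cov(x_k, x_{k-1} | all y)
        for k in range(T - 2, -1, -1):
            J = Pf[k] * a / Pp[k+1]
            xs[k] = xf[k] + J * (xs[k+1] - xp[k+1])
            Ps[k] = Pf[k] + J * J * (Ps[k+1] - Pp[k+1])
            Pcross[k+1] = J * Ps[k+1]
        # --- M-step: q = mean of E[(x_k - a x_{k-1})^2 | all y] ---
        q = np.mean(Ps[1:] + xs[1:]**2
                    - 2 * a * (Pcross[1:] + xs[1:] * xs[:-1])
                    + a * a * (Ps[:-1] + xs[:-1]**2))
    return q

q_hat = em_q(y, a, r, q0=0.05)   # converges towards the true value 0.5
```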
Improving the error backpropagation algorithm with a modified error function.
Oh, S H
1997-01-01
This letter proposes a modified error function to improve the error backpropagation (EBP) algorithm for multilayer perceptrons (MLPs), which suffers from slow learning speed. To accelerate the learning speed of the EBP algorithm, the proposed method reduces the probability that output nodes are near the wrong extreme value of the sigmoid activation function. This is achieved through a strong error signal for incorrectly saturated output nodes and a weak error signal for correctly saturated output nodes. The weak error signal for correctly saturated output nodes also prevents overspecialization of learning to the training patterns. The effectiveness of the proposed method is demonstrated in a handwritten digit recognition task.
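The abstract does not reproduce the letter's exact error function, but the saturation problem it targets is easy to demonstrate: with a mean-squared error, the backpropagated output signal carries a sigmoid-derivative factor o(1-o) that vanishes when a node saturates at the wrong extreme, while a cross-entropy-style signal does not. The latter is shown purely as a common illustration of a "strong error signal", not as the paper's proposal:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

t = 1.0                      # target
o = sigmoid(-6.0)            # output node incorrectly saturated near 0

delta_mse = (t - o) * o * (1 - o)   # standard EBP signal: nearly zero, stalls
delta_ce  = (t - o)                 # cross-entropy-style signal: stays strong
```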
Resting-state brain organization revealed by functional covariance networks.
Directory of Open Access Journals (Sweden)
Zhiqiang Zhang
BACKGROUND: Brain network studies using techniques of intrinsic connectivity network based on fMRI time series (TS-ICN) and structural covariance network (SCN) have mapped out the functional and structural organization of the human brain at their respective time scales. However, a meso-time-scale network is lacking to bridge the ICN and SCN and gain insights into brain functional organization. METHODOLOGY AND PRINCIPAL FINDINGS: We proposed a functional covariance network (FCN) method measuring the covariance of amplitude of low-frequency fluctuations (ALFF) in BOLD signals across subjects, and compared the patterns of ALFF-FCNs with the TS-ICNs and SCNs by mapping the brain networks of the default network, task-positive network and sensory networks. We demonstrated large overlap among FCNs, ICNs and SCNs and a modular nature in FCNs and ICNs by using conjunctional analysis. Most interestingly, FCN analysis showed a network dichotomy consisting of an anti-correlated high-level cognitive system and a low-level perceptive system, which is a novel finding different from the ICN dichotomy consisting of the default-mode network and the task-positive network. CONCLUSION: The current study proposed an ALFF-FCN approach to measure the interregional correlation of brain activity responding to short periods of state, and revealed novel organization patterns of resting-state brain activity at an intermediate time scale.
Partially linear varying coefficient models stratified by a functional covariate
Maity, Arnab
2012-10-01
We consider the problem of estimation in semiparametric varying coefficient models where the covariate modifying the varying coefficients is functional and is modeled nonparametrically. We develop a kernel-based estimator of the nonparametric component and a profiling estimator of the parametric component of the model and derive their asymptotic properties. Specifically, we show the consistency of the nonparametric functional estimates and derive the asymptotic expansion of the estimates of the parametric component. We illustrate the performance of our methodology using a simulation study and a real data application.
Covariate measurement error correction methods in mediation analysis with failure time data.
Zhao, Shanshan; Prentice, Ross L
2014-12-01
Mediation analysis is important for understanding the mechanisms whereby one variable causes changes in another. Measurement error could obscure the ability of the potential mediator to explain such changes. This article focuses on developing correction methods for measurement error in the mediator with failure time outcomes. We consider a broad definition of measurement error, including technical error and error associated with temporal variation. The underlying model with the "true" mediator is assumed to be of the Cox proportional hazards form. The induced hazard ratio for the observed mediator no longer has a simple form independent of the baseline hazard function, due to the conditioning event. We propose a mean-variance regression calibration approach and a follow-up time regression calibration approach to approximate the partial likelihood for the induced hazard function. Both methods demonstrate value in assessing mediation effects in simulation studies. These methods are generalized to multiple biomarkers and to both case-cohort and nested case-control sampling designs. We apply these correction methods to the Women's Health Initiative hormone therapy trials to understand the mediation effect of several serum sex hormone measures on the relationship between postmenopausal hormone therapy and breast cancer risk. © 2014, The International Biometric Society.
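The Cox-model calibration above is involved, but the underlying regression-calibration idea, estimate the measurement error variance from replicate measurements and regress on E[X|W] instead of the error-prone W, can be sketched with a linear outcome model. All parameter values here are illustrative:

```python
import numpy as np

rng = np.random.default_rng(3)
n = 5000
x = rng.normal(0, 1, n)                 # true mediator (unobserved)
u1, u2 = rng.normal(0, 0.8, (2, n))     # measurement errors
w1, w2 = x + u1, x + u2                 # two observed replicates
beta_true = 0.5
y = beta_true * x + rng.normal(0, 1, n) # outcome

# Naive regression on one error-prone measurement: attenuated slope.
beta_naive = np.cov(y, w1)[0, 1] / np.var(w1)

# Regression calibration: estimate Var(U) from replicate differences,
# then regress on E[X | Wbar] = mu + lam * (Wbar - mu).
wbar = (w1 + w2) / 2
var_u = np.var(w1 - w2) / 2             # Var(U)
lam = (np.var(wbar) - var_u / 2) / np.var(wbar)
xhat = wbar.mean() + lam * (wbar - wbar.mean())
beta_rc = np.cov(y, xhat)[0, 1] / np.var(xhat)
# beta_rc recovers the true slope; beta_naive is biased towards zero.
```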
Keppenne, Christian L.; Rienecker, Michele; Kovach, Robin M.; Vernieres, Guillaume
2014-01-01
An attractive property of ensemble data assimilation methods is that they provide flow dependent background error covariance estimates which can be used to update fields of observed variables as well as fields of unobserved model variables. Two methods to estimate background error covariances are introduced which share the above property with ensemble data assimilation methods but do not involve the integration of multiple model trajectories. Instead, all the necessary covariance information is obtained from a single model integration. The Space Adaptive Forecast error Estimation (SAFE) algorithm estimates error covariances from the spatial distribution of model variables within a single state vector. The Flow Adaptive error Statistics from a Time series (FAST) method constructs an ensemble sampled from a moving window along a model trajectory.SAFE and FAST are applied to the assimilation of Argo temperature profiles into version 4.1 of the Modular Ocean Model (MOM4.1) coupled to the GEOS-5 atmospheric model and to the CICE sea ice model. The results are validated against unassimilated Argo salinity data. They show that SAFE and FAST are competitive with the ensemble optimal interpolation (EnOI) used by the Global Modeling and Assimilation Office (GMAO) to produce its ocean analysis. Because of their reduced cost, SAFE and FAST hold promise for high-resolution data assimilation applications.
DEFF Research Database (Denmark)
Bingham, Rory J.; Tscherning, Christian; Knudsen, Per
2011-01-01
The availability of the full error variance-covariance matrices for the GOCE gravity field models is an important feature of the GOCE mission. Potentially, it will allow users to evaluate the accuracy of a geoid or mean dynamic topography (MDT) derived from the gravity field model at any particular...... location, design optimal filters to remove errors from the surfaces, and rigorously assimilate a geoid/MDT into ocean models, or otherwise combine the GOCE gravity field with other data. Here we present an initial investigation into the error characteristics of the GOCE gravity field models...... assimilation is provided. Finally, we consider some of the practical issues relating to the handling of the huge files containing the error variance-covariance information....
Radial Covariance Functions Motivated by Spatial Random Field Models with Local Interactions
Hristopulos, Dionissios T.
2014-01-01
We derive explicit expressions for a family of radially symmetric, non-differentiable, Spartan covariance functions in $\mathbb{R}^2$ that involve the modified Bessel function of the second kind. In addition to the characteristic length and the amplitude coefficient, the Spartan covariance parameters include the rigidity coefficient $\eta_{1}$, which determines the shape of the covariance function. If $\eta_{1} \gg 1$, Spartan covariance functions exhibit multiscaling. We also derive a family o...
Error reduction technique using covariant approximation and application to nucleon form factor
Blum, Thomas; Shintani, Eigo
2012-01-01
We demonstrate a new class of variance reduction techniques for the hadron propagator and nucleon isovector form factor on realistic lattices of $N_f=2+1$ domain-wall fermions. All-mode averaging (AMA) is a powerful tool that effectively reduces the statistical noise for a wider variety of observables than existing techniques such as low-mode averaging (LMA). We apply this technique to hadron two-point functions and three-point functions, and compare with LMA and the traditional source-shift method on the same ensembles. We observe that AMA is much more cost-effective in reducing the statistical error for these observables.
Energy Technology Data Exchange (ETDEWEB)
Kawano, Toshihiko [Kyushu Univ., Fukuoka (Japan); Shibata, Keiichi
1997-09-01
A covariance evaluation system for the evaluated nuclear data library was established. The parameter estimation method and the least squares method with a spline function are used to generate the covariance data. Uncertainties of nuclear reaction model parameters are estimated from experimental data uncertainties, then the covariance of the evaluated cross sections is calculated by means of error propagation. Computer programs ELIESE-3, EGNASH4, ECIS, and CASTHY are used. Covariances of {sup 238}U reaction cross sections were calculated with this system. (author)
Asymptotic behavior of the likelihood function of covariance matrices of spatial Gaussian processes
DEFF Research Database (Denmark)
Zimmermann, Ralf
2010-01-01
The covariance structure of spatial Gaussian predictors (aka Kriging predictors) is generally modeled by parameterized covariance functions; the associated hyperparameters in turn are estimated via the method of maximum likelihood. In this work, the asymptotic behavior of the maximum likelihood...... of spatial Gaussian predictor models as a function of its hyperparameters is investigated theoretically. Asymptotic sandwich bounds for the maximum likelihood function in terms of the condition number of the associated covariance matrix are established. As a consequence, the main result is obtained...
Cross-covariance functions for multivariate random fields based on latent dimensions
Apanasovich, T. V.
2010-02-16
The problem of constructing valid parametric cross-covariance functions is challenging. We propose a simple methodology, based on latent dimensions and existing covariance models for univariate random fields, to develop flexible, interpretable and computationally feasible classes of cross-covariance functions in closed form. We focus on spatio-temporal cross-covariance functions that can be nonseparable, asymmetric and can have different covariance structures, for instance different smoothness parameters, in each component. We discuss estimation of these models and perform a small simulation study to demonstrate our approach. We illustrate our methodology on a trivariate spatio-temporal pollution dataset from California and demonstrate that our cross-covariance performs better than other competing models. © 2010 Biometrika Trust.
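The latent-dimension construction can be sketched directly: give each variable a latent coordinate, then evaluate a valid univariate covariance on the augmented lag vector, so positive definiteness is inherited automatically. The exponential model, coordinates and sizes below are illustrative:

```python
import numpy as np

def latent_cross_cov(sites, xi, length=1.0):
    """Cross-covariance via latent dimensions: variable i gets a latent
    coordinate xi[i]; C_ij(h) = rho(||(h, xi_i - xi_j)||) with rho a valid
    univariate covariance (here exponential) on the augmented space."""
    p, n = len(xi), len(sites)
    C = np.zeros((n * p, n * p))
    for i in range(p):
        for j in range(p):
            d_lat = xi[i] - xi[j]
            for s in range(n):
                for t in range(n):
                    h = sites[s] - sites[t]
                    C[i*n + s, j*n + t] = np.exp(-np.sqrt(h**2 + d_lat**2) / length)
    return C

sites = np.linspace(0, 5, 10)
C = latent_cross_cov(sites, xi=np.array([0.0, 0.7]))   # two variables
eig = np.linalg.eigvalsh(C)                            # all positive
```

The distance between latent coordinates controls the cross-correlation between variables: xi values far apart give weakly correlated components.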
Hydrodynamic Covariant Symplectic Structure from Bilinear Hamiltonian Functions
Directory of Open Access Journals (Sweden)
Capozziello S.
2005-07-01
Starting from generic bilinear Hamiltonians, constructed from covariant vector, bivector or tensor fields, it is possible to derive a general symplectic structure which leads to holonomic and anholonomic formulations of the Hamilton equations of motion directly related to a hydrodynamic picture. This feature is gauge-free and seems to be a deep link common to all interactions, electromagnetism and gravity included. This scheme could lead toward a full canonical quantization.
Preconditioning of the background error covariance matrix in data assimilation for the Caspian Sea
Arcucci, Rossella; D'Amore, Luisa; Toumi, Ralf
2017-06-01
Data Assimilation (DA) is an uncertainty quantification technique used to improve numerically forecasted results by incorporating observed data into prediction models. Since a crucial issue in DA models is the ill-conditioning of the covariance matrices involved, it is essential to introduce preconditioning methods into DA software. Here we present first studies concerning the introduction of two different preconditioning methods in a DA software we are developing (named S3DVAR), which implements a Scalable Three-Dimensional Variational Data Assimilation model for assimilating sea surface temperature (SST) values collected in the Caspian Sea, using the Regional Ocean Modeling System (ROMS) with observations provided by the Group for High Resolution Sea Surface Temperature (GHRSST). We also present the algorithmic strategies we employ.
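A standard cure for this ill-conditioning, possibly one of the preconditioners meant here, is the control-variable transform x = x_b + Lv with B = LL^T, which turns the 3DVar cost-function Hessian from B^{-1} + H^T R^{-1} H into the far better conditioned I + L^T H^T R^{-1} H L. A toy sketch with hypothetical sizes and values:

```python
import numpy as np

n = 40
grid = np.linspace(0, 1, n)

# Ill-conditioned B: Gaussian correlations on a 1-D grid.
B = np.exp(-0.5 * ((grid[:, None] - grid[None, :]) / 0.2) ** 2)
B += 1e-8 * np.eye(n)                  # numerical jitter
R_inv = np.eye(n) / 0.1                # direct observations everywhere
H = np.eye(n)

# Hessian of the raw 3DVar cost: B^{-1} + H^T R^{-1} H.
hess_raw = np.linalg.inv(B) + H.T @ R_inv @ H

# Control-variable transform x = xb + L v with B = L L^T (Cholesky):
# the Hessian becomes I + L^T H^T R^{-1} H L.
L = np.linalg.cholesky(B)
hess_pre = np.eye(n) + L.T @ H.T @ R_inv @ H @ L

cond_raw = np.linalg.cond(hess_raw)
cond_pre = np.linalg.cond(hess_pre)    # orders of magnitude smaller
```

The transform removes B^{-1} from the Hessian entirely, so the conditioning is governed by the observation term alone.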
Dynamically constrained uncertainty for the Kalman filter covariance in the presence of model error
Grudzien, Colin; Carrassi, Alberto; Bocquet, Marc
2017-04-01
The forecasting community has long understood the impact of dynamic instability on the uncertainty of predictions in physical systems, and this has led to innovative filter designs that take advantage of knowledge of the process models. The advantages of this combined approach to filtering, including both a dynamic and a statistical understanding, have included dimensional reductions and robust feature selection in the observational design of filters. In the context of a perfect model, we have shown that the uncertainty in prediction is damped along the directions of stability and the support of the uncertainty conforms to the dominant system instabilities. Our current work likewise demonstrates this constraint on the uncertainty for systems with model error. Specifically: (i) we produce analytical upper bounds on the uncertainty in the stable, backwards orthogonal Lyapunov vectors in terms of the local Lyapunov exponents and the scale of the additive noise; (ii) we demonstrate that, for systems with model noise, the least upper bound on the uncertainty depends on the inverse relationship of the leading Lyapunov exponent and the observational certainty; (iii) we numerically compute the invariant scaling factor of the model error which determines the asymptotic uncertainty. This dynamic scaling of model error is identifiable independently of the noise and is computable directly in terms of the system's dynamic invariants; in this way the physical process itself may mollify the growth of modelling errors. For systems with strongly dissipative behaviour, we demonstrate that the growth of the uncertainty can be confined to the unstable-neutral modes independently of the filtering process, and we connect the observational design to take advantage of a dynamic characteristic of the filtering error.
Parkinson's Disease—Related Spatial Covariance Pattern Identified with Resting-State Functional MRI
National Research Council Canada - National Science Library
Wu, Tao; Ma, Yilong; Zheng, Zheng; Peng, Shichun; Wu, Xiaoli; Eidelberg, David; Chan, Piu
2015-01-01
In this study, we sought to identify a disease-related spatial covariance pattern of spontaneous neural activity in Parkinson's disease using resting-state functional magnetic resonance imaging (MRI...
Space-Time Modelling of Groundwater Level Using Spartan Covariance Function
Varouchakis, Emmanouil; Hristopulos, Dionissios
2014-05-01
groundwater level increase during the wet period of 2003-2004 and a considerable drop during the dry period of 2005-2006. Both periods are associated with significant annual changes in the precipitation compared to the basin average, i.e., a 40% increase and 65% decrease, respectively. We use STRK to 'predict' the groundwater level for the two selected hydrological periods (wet period of 2003-2004 and dry period of 2005-2006) at each sampling station. The predictions are validated using the respective measured values. The novel Spartan spatiotemporal covariance function gives a mean absolute relative prediction error of 12%. This is 45% lower than the respective value obtained with the commonly used product-sum covariance function, and 31% lower than the respective value obtained with a non-separable function based on the diffusion equation (Kolovos et al. 2010). The advantage of the Spartan space-time covariance model is confirmed with statistical measures such as the root mean square standardized error (RMSSE), the modified coefficient of model efficiency, E' (Legates and McCabe, 1999) and the modified Index of Agreement, IoA'(Janssen and Heuberger, 1995). Hristopulos, D. T. and Elogne, S. N. 2007. Analytic properties and covariance functions for a new class of generalized Gibbs random fields. IEEE Transactions on Information Theory, 53, 4667-4467. Janssen, P.H.M. and Heuberger P.S.C. 1995. Calibration of process-oriented models. Ecological Modelling, 83, 55-66. Kolovos, A., Christakos, G., Hristopulos, D. T. and Serre, M. L. 2004. Methods for generating non-separable spatiotemporal covariance models with potential environmental applications. Advances in Water Resources, 27 (8), 815-830. Legates, D.R. and McCabe Jr., G.J. 1999. Evaluating the use of 'goodness-of-fit' measures in hydrologic and hydro climatic model validation. Water Resources Research, 35, 233-241. Varouchakis, E. A. and Hristopulos, D. T. 2013. 
Improvement of groundwater level prediction in sparsely gauged
Prediction Error During Functional and Non-Functional Action Sequences
DEFF Research Database (Denmark)
Nielbo, Kristoffer Laigaard; Sørensen, Jesper
2013-01-01
By means of the computational approach the present study investigates the difference between observation of functional behavior (i.e. actions involving necessary integration of subparts) and non-functional behavior (i.e. actions lacking necessary integration of subparts) in terms of prediction...... recurrent networks were made and the results are presented in this article. The simulations show that non-functional action sequences do indeed increase prediction error, but that context representations, such as abstract goal information, can modulate the error signal considerably. It is also shown...... that the networks are sensitive to boundaries between sequences in both functional and non-functional actions....
Su, Li; Daniels, Michael J
2015-05-30
In long-term follow-up studies, irregular longitudinal data are observed when individuals are assessed repeatedly over time but at uncommon and irregularly spaced time points. Modeling the covariance structure for this type of data is challenging, as it requires specification of a covariance function that is positive definite. Moreover, in certain settings, careful modeling of the covariance structure for irregular longitudinal data can be crucial in order to ensure no bias arises in the mean structure. Two common settings where this occurs are studies with 'outcome-dependent follow-up' and studies with 'ignorable missing data'. 'Outcome-dependent follow-up' occurs when individuals with a history of poor health outcomes had more follow-up measurements, and the intervals between the repeated measurements were shorter. When the follow-up time process only depends on previous outcomes, likelihood-based methods can still provide consistent estimates of the regression parameters, given that both the mean and covariance structures of the irregular longitudinal data are correctly specified and no model for the follow-up time process is required. For 'ignorable missing data', the missing data mechanism does not need to be specified, but valid likelihood-based inference requires correct specification of the covariance structure. In both cases, flexible modeling approaches for the covariance structure are essential. In this paper, we develop a flexible approach to modeling the covariance structure for irregular continuous longitudinal data using the partial autocorrelation function and the variance function. In particular, we propose semiparametric non-stationary partial autocorrelation function models, which do not suffer from complex positive definiteness restrictions like the autocorrelation function. We describe a Bayesian approach, discuss computational issues, and apply the proposed methods to CD4 count data from a pediatric AIDS clinical trial. © 2015 The Authors
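The advantage of the partial autocorrelation parameterization, that each partial can vary freely in (-1, 1) while positive definiteness holds automatically, can be sketched for three time points (a hypothetical closed-form illustration, not the authors' semiparametric Bayesian model):

```python
def corr_from_partials(p12, p23, p13_2):
    # Build a 3x3 correlation matrix from the two lag-1 correlations and
    # the lag-2 partial autocorrelation p13_2; any values in (-1, 1) yield
    # a positive definite matrix, unlike modeling autocorrelations directly.
    r12, r23 = p12, p23
    r13 = p13_2 * ((1 - r12 ** 2) * (1 - r23 ** 2)) ** 0.5 + r12 * r23
    return [[1.0, r12, r13], [r12, 1.0, r23], [r13, r23, 1.0]]

def det3(m):
    # determinant of a 3x3 matrix, used here to check positive definiteness
    (a, b, c), (d, e, f), (g, h, i) = m
    return a * (e * i - f * h) - b * (d * i - f * g) + c * (d * h - e * g)

R = corr_from_partials(0.9, 0.9, -0.5)  # strong lag-1, negative lag-2 partial
```

For a 3x3 correlation matrix this construction gives det R = (1 - p12^2)(1 - p23^2)(1 - p13_2^2), which is positive whenever every partial lies strictly inside (-1, 1).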
A class of Matérn-like covariance functions for smooth processes on a sphere
Jeong, Jaehong
2015-02-01
© 2014 Elsevier Ltd. There have been noticeable advancements in developing parametric covariance models for spatial and spatio-temporal data, with various applications to environmental problems. However, the literature on covariance models for processes defined on the surface of a sphere with great circle distance as the distance metric is still sparse, owing to the mathematical difficulties involved. It is known that the popular Matérn covariance function, with smoothness parameter greater than 0.5, is not valid for processes on the surface of a sphere with great circle distance. We introduce an approach to produce Matérn-like covariance functions for smooth processes on the surface of a sphere that are valid with great circle distance. The resulting model is isotropic and positive definite on the surface of a sphere with great circle distance, with a natural extension to the nonstationary case. We present extensive numerical comparisons of our model with the Matérn covariance model using great circle distance as well as chordal distance. We apply our new covariance model class to sea level pressure data, known to be smooth compared with other climate variables, from the CMIP5 climate model outputs.
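For context, the validity problem can also be sidestepped by evaluating the Matérn on chordal rather than great circle distance; the sketch below (illustrative only, not the Matérn-like class proposed in the paper) uses the closed-form smoothness nu = 3/2 case:

```python
import math

def matern_32(d, rho, sigma2=1.0):
    # Matern covariance with smoothness nu = 3/2 (closed form, no Bessel function)
    a = math.sqrt(3.0) * d / rho
    return sigma2 * (1.0 + a) * math.exp(-a)

def great_circle(p, q, radius=6371.0):
    # haversine great-circle distance in km; p, q are (lat, lon) in degrees
    la1, lo1 = math.radians(p[0]), math.radians(p[1])
    la2, lo2 = math.radians(q[0]), math.radians(q[1])
    h = (math.sin((la2 - la1) / 2) ** 2
         + math.cos(la1) * math.cos(la2) * math.sin((lo2 - lo1) / 2) ** 2)
    return 2 * radius * math.asin(math.sqrt(h))

def chordal(p, q, radius=6371.0):
    # straight-line distance through the sphere; the Matern is valid with
    # this metric for any smoothness, unlike with great-circle distance
    return 2 * radius * math.sin(great_circle(p, q, radius) / (2 * radius))

paris, sydney = (48.85, 2.35), (-33.87, 151.21)
c_gc = matern_32(great_circle(paris, sydney), rho=5000.0)
c_ch = matern_32(chordal(paris, sydney), rho=5000.0)
```

Chordal distance is always smaller than great circle distance, so covariances computed on it decay more slowly; the paper's contribution is precisely a class that remains valid with the more natural great circle metric.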
Directory of Open Access Journals (Sweden)
Liu Xiaogang
2013-01-01
When the computational point approaches the poles, the variance and covariance formulae of the disturbing gravity gradient tensors tend to infinity, which is a singular problem. To solve this problem, the authors derived practical non-singular computational formulae for the first- and second-order derivatives of the Legendre functions and two kinds of spherical harmonic functions, and then constructed non-singular formulae for the variance and covariance functions of the disturbing gravity gradient tensors.
A full scale approximation of covariance functions for large spatial data sets
Sang, Huiyan
2011-10-10
Gaussian process models have been widely used in spatial statistics but face tremendous computational challenges for very large data sets. The model fitting and spatial prediction of such models typically require O(n³) operations for a data set of size n. Various approximations of the covariance functions have been introduced to reduce the computational cost. However, most existing approximations cannot simultaneously capture both the large- and the small-scale spatial dependence. A new approximation scheme is developed to provide a high quality approximation to the covariance function at both the large and the small spatial scales. The new approximation is the summation of two parts: a reduced rank covariance and a compactly supported covariance obtained by tapering the covariance of the residual of the reduced rank approximation. Whereas the former part mainly captures the large-scale spatial variation, the latter part captures the small-scale, local variation that is unexplained by the former part. By combining the reduced rank representation and sparse matrix techniques, our approach allows for efficient computation for maximum likelihood estimation, spatial prediction and Bayesian inference. We illustrate the new approach with simulated and real data sets. © 2011 Royal Statistical Society.
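The full scale decomposition described above, a reduced rank part plus a tapered residual, can be sketched in one dimension (a minimal illustration assuming an exponential covariance, a two-knot predictive-process style low-rank part, and a Wendland-type taper; the paper's actual construction is more elaborate):

```python
import math

def k_exp(s, t, rho=0.3):
    # "true" exponential covariance on [0, 1]
    return math.exp(-abs(s - t) / rho)

KNOTS = (0.25, 0.75)

def inv2(m):
    # closed-form inverse of a 2x2 matrix
    (a, b), (c, d) = m
    det = a * d - b * c
    return ((d / det, -b / det), (-c / det, a / det))

def c_lowrank(s, t):
    # reduced-rank part via two knots: k(s, u) K_uu^{-1} k(u, t)
    Kuu_inv = inv2(tuple(tuple(k_exp(u, v) for v in KNOTS) for u in KNOTS))
    ks = [k_exp(s, u) for u in KNOTS]
    kt = [k_exp(t, u) for u in KNOTS]
    return sum(ks[i] * Kuu_inv[i][j] * kt[j] for i in range(2) for j in range(2))

def taper(d, gamma=0.2):
    # Wendland-type compactly supported correlation (zero beyond gamma)
    x = min(abs(d) / gamma, 1.0)
    return (1.0 - x) ** 4 * (1.0 + 4.0 * x)

def c_fullscale(s, t):
    # low-rank part captures long-range dependence; the tapered residual
    # restores the short-range detail that the low-rank part misses
    return c_lowrank(s, t) + taper(s - t) * (k_exp(s, t) - c_lowrank(s, t))
```

At any point the residual is fully restored (the taper equals one at distance zero), so the variance is exact; beyond the taper range only the low-rank part remains, which keeps the residual matrix sparse.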
Wang, Fei; Song, Peter X-K; Wang, Lu
2015-12-01
Merging multiple datasets collected from studies with identical or similar scientific objectives is often undertaken in practice to increase statistical power. This article concerns the development of an effective statistical method that enables merging of multiple longitudinal datasets subject to various heterogeneous characteristics, such as different follow-up schedules and study-specific missing covariates (e.g., covariates observed in some studies but missing in other studies). The presence of study-specific missing covariates presents a great methodological challenge in data merging and analysis. We propose a joint estimating function approach to address this challenge, in which a novel nonparametric estimating function constructed via splines-based sieve approximation is utilized to bridge estimating equations from studies with missing covariates to those with fully observed covariates. Under mild regularity conditions, we show that the proposed estimator is consistent and asymptotically normal. We evaluate finite-sample performances of the proposed method through simulation studies. In comparison to the conventional multiple imputation approach, our method exhibits smaller estimation bias. We provide an illustrative data analysis using longitudinal cohorts collected in Mexico City to assess the effect of lead exposures on children's somatic growth. © 2015, The International Biometric Society.
Indefinite theta series and generalized error functions
Alexandrov, Sergei; Manschot, Jan; Pioline, Boris
2016-01-01
Theta series for lattices with indefinite signature $(n_+,n_-)$ arise in many areas of mathematics including representation theory and enumerative algebraic geometry. Their modular properties are well understood in the Lorentzian case ($n_+=1$), but have remained obscure when $n_+\geq 2$. Using a higher-dimensional generalization of the usual (complementary) error function, discovered in an independent physics project, we construct the modular completion of a class of `conformal' holomorphic theta series ($n_+=2$). As an application, we determine the modular properties of a generalized Appell-Lerch sum attached to the lattice $\operatorname{A}_2$, which arose in the study of rank 3 vector bundles on $\mathbb{P}^2$. The extension of our method to $n_+>2$ is outlined.
Tay, L.; Vermunt, J.K.; Wang, C.
2013-01-01
We evaluate the item response theory with covariates (IRT-C) procedure for assessing differential item functioning (DIF) without preknowledge of anchor items (Tay, Newman, & Vermunt, 2011). This procedure begins with a fully constrained baseline model, and candidate items are tested for uniform
DEFF Research Database (Denmark)
Strathe, Anders B; Mark, Thomas; Nielsen, Bjarne
Random regression models were used to estimate covariance functions between cumulated feed intake (CFI) and body weight (BW) in 8424 Danish Duroc pigs. Random regressions on second order Legendre polynomials of age were used to describe genetic and permanent environmental curves in BW and CFI. Ba...
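The second-order Legendre regression basis of age used above, and the covariance function it implies given a (co)variance matrix of the random regression coefficients, can be sketched as follows (a minimal illustration with unnormalized polynomials; normalization conventions vary across implementations):

```python
def legendre_basis(age, age_min, age_max):
    # unnormalized Legendre polynomials up to second order, with age
    # standardized to [-1, 1]
    x = 2.0 * (age - age_min) / (age_max - age_min) - 1.0
    return [1.0, x, 0.5 * (3.0 * x * x - 1.0)]

def rr_covariance(age1, age2, K, age_min, age_max):
    # covariance between records at two ages implied by a (co)variance
    # matrix K of the random regression coefficients: phi1' K phi2
    p1 = legendre_basis(age1, age_min, age_max)
    p2 = legendre_basis(age2, age_min, age_max)
    return sum(p1[i] * K[i][j] * p2[j] for i in range(3) for j in range(3))

I3 = [[1.0, 0.0, 0.0], [0.0, 1.0, 0.0], [0.0, 0.0, 1.0]]
```

With an identity coefficient matrix the implied variance at the ends of the age range is the squared norm of the basis, which illustrates the well-known edge inflation of polynomial random regression models.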
Covariant approximation averaging
Shintani, Eigo; Blum, Thomas; Izubuchi, Taku; Jung, Chulwoo; Lehner, Christoph
2014-01-01
We present a new class of statistical error reduction techniques for Monte-Carlo simulations. Using covariant symmetries, we show that correlation functions can be constructed from inexpensive approximations without introducing any systematic bias in the final result. We introduce a new class of covariant approximation averaging techniques, known as all-mode averaging (AMA), in which the approximation takes account of contributions of all eigenmodes through the inverse of the Dirac operator computed from the conjugate gradient method with a relaxed stopping condition. In this paper we compare the performance and computational cost of our new method with traditional methods using correlation functions and masses of the pion, nucleon, and vector meson in $N_f=2+1$ lattice QCD using domain-wall fermions. This comparison indicates that AMA significantly reduces statistical errors in Monte-Carlo calculations over conventional methods for the same cost.
Covariant approximation averaging
Shintani, Eigo; Arthur, Rudy; Blum, Thomas; Izubuchi, Taku; Jung, Chulwoo; Lehner, Christoph
2015-06-01
We present a new class of statistical error reduction techniques for Monte Carlo simulations. Using covariant symmetries, we show that correlation functions can be constructed from inexpensive approximations without introducing any systematic bias in the final result. We introduce a new class of covariant approximation averaging techniques, known as all-mode averaging (AMA), in which the approximation takes account of contributions of all eigenmodes through the inverse of the Dirac operator computed from the conjugate gradient method with a relaxed stopping condition. In this paper we compare the performance and computational cost of our new method with traditional methods using correlation functions and masses of the pion, nucleon, and vector meson in Nf=2 +1 lattice QCD using domain-wall fermions. This comparison indicates that AMA significantly reduces statistical errors in Monte Carlo calculations over conventional methods for the same cost.
A general model for allometric covariation in botanical form and function.
Price, Charles A; Enquist, Brian J; Savage, Van M
2007-08-07
The West, Brown, and Enquist (WBE) theory for the origin of allometric scaling laws is centered on the idea that the geometry of the vascular network governs how a suite of organismal traits covary with each other and, ultimately, how they scale with organism size. This core assumption has been combined with other secondary assumptions based on physiological constraints, such as minimizing the scaling of transport and biomechanical costs while maximally filling a volume. Together, these assumptions give predictions for specific "quarter-power" scaling exponents in biology. Here we provide a strong test of the core assumption of WBE by examining how well it holds when the secondary assumptions have been relaxed. Our relaxed version of WBE predicts that allometric exponents are highly constrained and covary according to specific quantitative functions. To test this core prediction, we assembled several botanical data sets with measures of the allometry of morphological traits. A wide variety of plant taxa appear to obey the predictions of the model. Our results (i) underscore the importance of network geometry in governing the variability and central tendency of biological exponents, (ii) support the hypothesis that selection has primarily acted to minimize the scaling of hydrodynamic resistance, and (iii) suggest that additional selection pressures for alternative branching geometries govern much of the observed covariation in biological scaling exponents. Understanding how selection shapes hierarchical branching networks provides a general framework for understanding the origin and covariation of many allometric traits within a complex integrated phenotype.
Determination of Local Empirical Covariance Functions from Residual Terrain Reduced Altimeter Data
1988-11-01
quantities are estimated from a set of observations. The method of least-squares collocation (Moritz, 1980) is widely used for this purpose. ... residual observations and the local empirical covariance function. This procedure corresponds to stepwise collocation (Moritz, 1980), where the ... Keywords: geodesy; gravity; least-squares collocation.
A Dynamic Time Warping based covariance function for Gaussian Processes signature identification
Silversides, Katherine L.; Melkumyan, Arman
2016-11-01
Modelling stratiform deposits requires a detailed knowledge of the stratigraphic boundaries. In Banded Iron Formation (BIF) hosted ores of the Hamersley Group in Western Australia these boundaries are often identified using marker shales. Both Gaussian Processes (GP) and Dynamic Time Warping (DTW) have been previously proposed as methods to automatically identify marker shales in natural gamma logs. However, each method has different advantages and disadvantages. We propose a DTW based covariance function for the GP that combines the flexibility of the DTW with the probabilistic framework of the GP. The three methods are tested and compared on their ability to identify two natural gamma signatures from a Marra Mamba type iron ore deposit. These tests show that while all three methods can identify boundaries, the GP with the DTW covariance function combines and balances the strengths and weaknesses of the individual methods. This method identifies more positive signatures than the GP with the standard covariance function, and has a higher accuracy for identified signatures than the DTW. The combined method can handle larger variations in the signature without requiring multiple libraries, has a probabilistic output and does not require manual cut-off selections.
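As background, the standard DTW distance and a naive way to embed it in a covariance function can be sketched as follows (illustrative only; the paper's DTW-based covariance construction is not reproduced here, and positive definiteness of DTW-based kernels must be engineered with care):

```python
import math

def dtw(a, b):
    # classic O(n*m) dynamic-programming Dynamic Time Warping distance
    n, m = len(a), len(b)
    INF = float("inf")
    D = [[INF] * (m + 1) for _ in range(n + 1)]
    D[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            D[i][j] = cost + min(D[i - 1][j], D[i][j - 1], D[i - 1][j - 1])
    return D[n][m]

def dtw_cov(a, b, sigma2=1.0, length=5.0):
    # naive exponential covariance on the DTW distance; positive
    # definiteness is NOT guaranteed for arbitrary DTW-based kernels,
    # which is why a careful construction is needed in practice
    return sigma2 * math.exp(-dtw(a, b) / length)

sig = [0.0, 1.0, 3.0, 1.0, 0.0]       # toy natural-gamma signature
shifted = [0.0, 0.0, 1.0, 3.0, 1.0]   # same shape, shifted in depth
```

DTW sees the shifted signature as nearly identical (distance 1), whereas a pointwise L1 comparison would give 6; this alignment flexibility is what a DTW-based covariance inherits.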
Covariant nucleon wave function with S, D, and P-state components
Energy Technology Data Exchange (ETDEWEB)
Franz Gross, G. Ramalho, M. T. Pena
2012-05-01
Expressions for the nucleon wave functions in the covariant spectator theory (CST) are derived. The nucleon is described as a system with an off-mass-shell constituent quark, free to interact with an external probe, and two spectator constituent quarks on their mass shell. Integrating over the internal momentum of the on-mass-shell quark pair allows us to derive an effective nucleon wave function that can be written solely in terms of the quark and diquark (quark-pair) variables. The derived nucleon wave function includes contributions from S-, P-, and D-waves.
Energy Technology Data Exchange (ETDEWEB)
A.V. Efremov, P. Schweitzer, O.V. Teryaev, P. Zavada
2011-03-01
We derive relations between transverse momentum dependent distribution functions (TMDs) and the usual parton distribution functions (PDFs) in the 3D covariant parton model, which follow from Lorentz invariance and the assumption of a rotationally symmetric distribution of parton momenta in the nucleon rest frame. Using the known PDFs f_1(x) and g_1(x) as input we predict the x- and pT-dependence of all twist-2 T-even TMDs.
Post-error adaptation in adults with high functioning autism
Bogte, Hans; Flamma, Bert; van der Meere, Jaap; van Engeland, Herman
2007-01-01
Deficits in executive function (EF), i.e., functions of the prefrontal cortex, may be central in the etiology of autism. One of the various aspects of EF is error detection and adjusting behavior after an error. In cognitive tests, adults normally slow down their responding on the next trial after an error.
Vervatis, Vassilios; Testut, Charles-Emmanuel; De Mey, Pierre; Ayoub, Nadia; Chanut, Jerome; Bricaud, Clement
2014-05-01
An important factor in any Data Assimilation (DA) scheme is the estimation of the background error covariances. This study is linked to the design of a DA system based on the ensemble Kalman Filter and the ocean model NEMO. The work is part of the Research and Development activities of the LEGOS/CNRS and Mercator-Ocean French teams within the European MyOcean2 project. As a key step towards DA, we perform sensitivity experiments devoted to the evaluation of the model errors and their dynamics, primarily due to wind forcing uncertainties, in a free-surface coastal configuration of the Bay of Biscay. In more detail, a stochastic twin experiment is carried out by applying spatiotemporal Gaussian perturbations to the wind forcing, as an inherent uncertainty of the system. A rank histogram is used to select a member inside the ensemble spread to serve as an SSH/SST observational array. The forecast trajectories of 100 members are used to assess the background error statistics of the ocean model. An ensemble normal probability analysis depicts the linear dependence of ocean surface variables on wind perturbations in the abyssal plain; in the coastal areas and on the shelf, non-Gaussian behavior is revealed. Initially, the ensemble variance is characterized by a moderate increase at the periphery of eddies and in the river mouths. A rapid increase follows, observed mainly in the SSH, due to a chaotic evolution of the eddy trajectories across members. The convergence of the covariances shows that a few members are sufficient to depict the spatial pattern of variance, whereas a large ensemble is needed to represent errors and correlations over the domain. Artificial experiments are used to increase the ensemble spread, mainly in the coastal areas and on the shelf, by applying temporal filters. In the same direction, stochastic processes are investigated to increase the ensemble spread by perturbing variables other than the wind. Ensemble forecasts, driven by
Roy, Surupa; Banerjee, Tathagata
2009-06-01
A multivariate probit model for correlated binary responses given the predictors of interest is considered. Some of the responses are subject to classification errors and hence are not directly observable. Also, measurements on some of the predictors are not available; instead, measurements on their surrogates are available. However, the conditional distribution of the unobservable predictors given the surrogates is completely specified. Models are proposed that take into account either or both of these sources of error. Likelihood-based methodologies are proposed to fit these models. To ascertain the effect of ignoring classification errors and/or measurement error on the estimates of the regression and correlation parameters, a sensitivity study is carried out through simulation. Finally, the proposed methodology is illustrated through an example.
Parkinson's disease–related spatial covariance pattern identified with resting-state functional MRI
Wu, Tao; Ma, Yilong; Zheng, Zheng; Peng, Shichun; Wu, Xiaoli; Eidelberg, David; Chan, Piu
2015-01-01
In this study, we sought to identify a disease-related spatial covariance pattern of spontaneous neural activity in Parkinson's disease using resting-state functional magnetic resonance imaging (MRI). Time-series data were acquired in 58 patients with early to moderate stage Parkinson's disease and 54 healthy controls, and analyzed by Scaled Subprofile Model Principal Component Analysis toolbox. A split-sample analysis was also performed in a derivation sample of 28 patients and 28 control subjects and validated in a prospective testing sample of 30 patients and 26 control subjects. The topographic pattern of neural activity in Parkinson's disease was characterized by decreased activity in the striatum, supplementary motor area, middle frontal gyrus, and occipital cortex, and increased activity in the thalamus, cerebellum, precuneus, superior parietal lobule, and temporal cortex. Pattern expression was elevated in the patients compared with the controls, with a high accuracy (90%) to discriminate the patients from the controls. The split-sample analysis produced a similar pattern but with a lower accuracy for group discrimination in both the derivation (80%) and the validation (73%) samples. Our results showed that resting-state functional MRI can be potentially useful for identification of Parkinson's disease–related spatial covariance patterns, and for differentiation of Parkinson's disease patients from healthy controls at an individual level. PMID:26036935
Directory of Open Access Journals (Sweden)
Eliana de Souza
Soil bulk density (ρb) data are needed for a wide range of environmental studies. However, ρb is rarely reported in soil surveys. An alternative for obtaining ρb in data-scarce regions, such as the Rio Doce basin in southeastern Brazil, is indirect estimation from less costly covariates using pedotransfer functions (PTF). This study primarily aims to develop region-specific PTFs for ρb using multiple linear regressions (MLR) and random forests (RF). Secondly, it assesses the accuracy of PTFs for data grouped into soil horizons and soil classes. For that purpose, we compared the performance of PTFs compiled from the literature with those developed here. Two groups of data were evaluated as covariates: (1) readily available soil properties and (2) maps derived from a digital elevation model and MODIS satellite imagery, jointly with lithological and pedological maps. The MLR model was applied stepwise to select significant predictors, and its accuracy was assessed by means of cross-validation. The PTFs developed using all data estimated ρb from soil properties by MLR and RF with R² of 0.41 and 0.51, respectively. Alternatively, using environmental covariates, RF predicted ρb with R² of 0.41. Grouping criteria did not lead to a significant improvement in the estimates of ρb. The accuracy of the 'regional' PTFs developed in this study was greater than that found with the 'compiled' PTFs. The best PTF will first be used to assess soil carbon stocks and changes in the Rio Doce basin.
Parameter inference with estimated covariance matrices
Sellentin, Elena; Heavens, Alan F.
2016-02-01
When inferring parameters from a Gaussian-distributed data set by computing a likelihood, a covariance matrix is needed that describes the data errors and their correlations. If the covariance matrix is not known a priori, it may be estimated and thereby becomes a random object with some intrinsic uncertainty itself. We show how to infer parameters in the presence of such an estimated covariance matrix, by marginalizing over the true covariance matrix, conditioned on its estimated value. This leads to a likelihood function that is no longer Gaussian, but rather an adapted version of a multivariate t-distribution, which has the same numerical complexity as the multivariate Gaussian. As expected, marginalization over the true covariance matrix improves inference when compared with Hartlap et al.'s method, which uses an unbiased estimate of the inverse covariance matrix but still assumes that the likelihood is Gaussian.
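The resulting likelihood can be sketched in terms of the usual chi-squared statistic (a minimal illustration of the t-like form, up to chi2-independent normalization constants; n_sims denotes the number of simulations used to estimate the covariance matrix):

```python
import math

def log_like_gauss(chi2):
    # standard Gaussian log-likelihood, up to chi2-independent constants
    return -0.5 * chi2

def log_like_sellentin_heavens(chi2, n_sims):
    # Gaussian likelihood marginalized over the true covariance matrix,
    # conditioned on an estimate built from n_sims simulations: a
    # multivariate-t-like density (again up to chi2-independent constants)
    return -0.5 * n_sims * math.log(1.0 + chi2 / (n_sims - 1.0))
```

As n_sims grows the t-like form tends to the Gaussian, while for finite n_sims its heavier tails penalize large chi2 values less severely, which is what propagates the covariance uncertainty into the inference.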
Bayesian error estimation in density-functional theory
DEFF Research Database (Denmark)
Mortensen, Jens Jørgen; Kaasbjerg, Kristen; Frederiksen, Søren Lund
2005-01-01
We present a practical scheme for performing error estimates for density-functional theory calculations. The approach, which is based on ideas from Bayesian statistics, involves creating an ensemble of exchange-correlation functionals by comparing with an experimental database of binding energies for molecules and solids. Fluctuations within the ensemble can then be used to estimate errors relative to experiment on calculated quantities such as binding energies, bond lengths, and vibrational frequencies. It is demonstrated that the error bars on energy differences may vary by orders of magnitude.
Directory of Open Access Journals (Sweden)
Meyer Karin
2001-11-01
A random regression model for the analysis of "repeated" records in animal breeding is described which combines a random regression approach for additive genetic and other random effects with the assumption of a parametric correlation structure for within-animal covariances. Both stationary and non-stationary correlation models involving a small number of parameters are considered. Heterogeneity in within-animal variances is modelled through polynomial variance functions. Estimation of the parameters describing the dispersion structure of such a model by restricted maximum likelihood, via an "average information" algorithm, is outlined. An application to mature weight records of beef cows is given, and results are contrasted with those from analyses fitting sets of random regression coefficients for permanent environmental effects.
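The combination of a parametric within-animal correlation with a polynomial variance function can be sketched as follows (a minimal illustration assuming an AR(1)-type stationary correlation; the correlation and variance functions in the paper are fitted by restricted maximum likelihood, not fixed as here):

```python
def within_animal_cov(t1, t2, var_coef, lam):
    # within-animal covariance built from a polynomial variance function
    # v(t) and a stationary AR(1)-type correlation lam**|t1 - t2|,
    # with 0 < lam < 1; cov(t1, t2) = sqrt(v(t1) v(t2)) * rho(|t1 - t2|)
    def v(t):
        return sum(c * t ** k for k, c in enumerate(var_coef))
    return (v(t1) * v(t2)) ** 0.5 * lam ** abs(t1 - t2)
```

Only the polynomial coefficients and one correlation parameter are needed, which is the parsimony argument the abstract makes relative to fitting full sets of random regression coefficients.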
Sadat Musavi, Talie; Kattge, Jens; Mahecha, Miguel; Reichstein, Markus; Van de Weg, Marjan; Van Bodegom, Peter; Bahn, Michael
2013-04-01
In this study we analyze the correlation structure among plant traits, ecosystem functional properties, and characteristics of climate, soil, and vegetation at 253 FLUXNET sites. This correlation structure may provide a basis for assessing vegetation functioning and its vulnerability under climate change. Until now, analyses of the FLUXNET dataset have shown that much of the observed spatial and temporal variation of ecosystem fluxes can be explained and scaled by information on soil, climate, and vegetation structure, without considering the variation in the functional characteristics of the vegetation occurring at the FLUXNET sites. Instead, these studies have used plant functional types (PFT) as a parameter representing the vegetation influence on fluxes. However, given the variability in traits that exists within an individual PFT across sites, we analyze in this study how traits additionally influence ecosystem functional properties. We use community mean trait values to understand how vegetation characteristics relate to ecosystem functional properties, such as maximum GPP at light saturation or photosynthetic water use efficiency. These functional properties are derived from the combination of ecosystem-level flux observations with spatial meteorology and vegetation remote sensing covariates. In addition, we investigate whether vegetation characteristics have an influence on ecosystem fluxes when combined with climate and soil information. Until recently, analyses of this kind were impossible due to a lack of plant trait information, but the plant trait dataset TRY has been growing for years, and in combination with novel methods in machine learning we now have the opportunity to predict plant trait values for individual sites. We will present first results focusing on the relationship of ecosystem functional properties to leaf traits such as specific leaf area and leaf carbon, nitrogen, and phosphorus concentration scaled to canopy level.
Automated laser trimming for ultralow error function GFF
Bernard, Pierre; Gregoire, Nathalie; Lafrance, Ghislain
2003-04-01
Gain flatness of optical amplifiers over the communication bandwidth is a key requirement of high performance optical wavelength division multiplexing (WDM) communication systems. Most often, a gain flattening filter (GFF) with a spectral response matching the inverse gain profile is incorporated within the amplifier. The chirped fiber Bragg grating (CFBG) is an attractive technology to produce GFFs, especially in cases where very low error functions are required. Error functions smaller than or equal to +/-0.1 dB for the full operating temperature range are now possible. Moreover, the systematic errors from cascaded filters are much smaller than for thin-film GFF, a factor of importance in a long chain of amplifiers. To achieve this performance level, the high-frequency ripples normally associated with CFBG-GFF have been reduced by combining state-of-the-art holographic phase masks and advanced UV-writing techniques. Lastly, to eliminate the residual low-frequency ripples and localized errors, we developed a laser annealing-trimming station. This fully automated station combines both the aging process and final trimming of the GFF refractive index profile to exactly match the required transmission spectra. The use of self-adjusting algorithms assures quick convergence of the error function within a very tight error band. The capital expenditure necessary to implement this new tool is small in relation to the gain in precision, reliability and manufacturing cycle time.
Royston, Patrick
2014-01-01
We consider how to represent sigmoid-type regression relationships in a practical and parsimonious way. A pure sigmoid relationship has an asymptote at both ends of the range of a continuous covariate. Curves with a single asymptote are also important in practice. Many smoothers, such as fractional polynomials and restricted cubic regression splines, cannot accurately represent doubly asymptotic curves. Such smoothers may struggle even with singly asymptotic curves. Our approach to modeling sigmoid relationships involves applying a preliminary scaled rank transformation to compress the tails of the observed distribution of a continuous covariate. We include a step that provides a smooth approximation to the empirical cumulative distribution function of the covariate via the scaled ranks. The procedure defines the approximate cumulative distribution transformation of the covariate. To fit the substantive model, we apply fractional polynomial regression to the outcome with the smoothed, scaled ranks as the covariate. When the resulting fractional polynomial function is monotone, we have a sigmoid function. We demonstrate several practical applications of the approximate cumulative distribution transformation while also illustrating its ability to model some unusual functional forms. We describe a command, acd, that implements it.
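The scaled rank step of the transformation can be sketched as follows (a minimal illustration without ties handling; the acd command additionally smooths the empirical cumulative distribution function before the fractional polynomial fit):

```python
def scaled_ranks(x):
    # scaled rank transformation: map a continuous covariate to (0, 1),
    # compressing the tails of its observed distribution
    n = len(x)
    order = sorted(range(n), key=lambda i: x[i])
    r = [0.0] * n
    for rank, i in enumerate(order, start=1):
        r[i] = (rank - 0.5) / n   # (rank - 1/2) / n keeps values off 0 and 1
    return r

r = scaled_ranks([10.0, 30.0, 20.0])
```

Because extreme covariate values map to ranks near, but never at, 0 and 1, the subsequent fractional polynomial fit can produce asymptotes at both ends of the range, which is what makes sigmoid shapes representable.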
Yan, Yuan
2017-07-13
Gaussian likelihood inference has been studied and used extensively in both statistical theory and applications due to its simplicity. However, in practice, the assumption of Gaussianity is rarely met in the analysis of spatial data. In this paper, we study the effect of non-Gaussianity on Gaussian likelihood inference for the parameters of the Matérn covariance model. By using Monte Carlo simulations, we generate spatial data from a Tukey g-and-h random field, a flexible trans-Gaussian random field, with the Matérn covariance function, where g controls skewness and h controls tail heaviness. We use maximum likelihood based on the multivariate Gaussian distribution to estimate the parameters of the Matérn covariance function. We illustrate the effects of non-Gaussianity of the data on the estimated covariance function by means of functional boxplots. Thanks to our tailored simulation design, a comparison of the maximum likelihood estimator under both the increasing and fixed domain asymptotics for spatial data is performed. We find that the maximum likelihood estimator based on Gaussian likelihood is overall satisfactory and preferable to the non-distribution-based weighted least squares estimator for data from the Tukey g-and-h random field. We also present results for Gaussian kriging based on Matérn covariance estimates with data from the Tukey g-and-h random field and observe an overall satisfactory performance.
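The simulation design can be sketched as follows: draw a Gaussian field with a Matérn covariance (smoothness fixed at ν = 3/2, for which a closed form exists) and push it through the Tukey g-and-h transform. Grid size and parameter values are illustrative, not taken from the paper.

```python
import numpy as np
from scipy.spatial.distance import cdist

rng = np.random.default_rng(1)
n = 100
sites = rng.uniform(0, 1, size=(n, 2))

# Matern covariance with smoothness nu = 3/2 (closed form), range rho
rho, sigma2 = 0.2, 1.0
d = cdist(sites, sites)
C = sigma2 * (1 + np.sqrt(3) * d / rho) * np.exp(-np.sqrt(3) * d / rho)

# Gaussian field via Cholesky (small jitter for numerical stability)
L = np.linalg.cholesky(C + 1e-10 * np.eye(n))
z = L @ rng.standard_normal(n)

# Tukey g-and-h transform: g controls skewness, h tail heaviness
g, h = 0.5, 0.1
x = (np.exp(g * z) - 1) / g * np.exp(h * z**2 / 2)
```

Fitting a Gaussian likelihood with a Matérn model to `x` then quantifies how the non-Gaussian transform distorts the covariance estimates; the transform is strictly monotone, so `x` preserves the spatial ordering of the latent Gaussian field.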
Effects of Refractive Errors on Visual Functional Magnetic Resonance Imaging
Directory of Open Access Journals (Sweden)
Ahmet Akça
2016-03-01
INTRODUCTION: The purpose of our study is to evaluate the effects of refractive errors on functional magnetic resonance imaging (fMRI) of the visual cortex. METHODS: We performed a prospective study. The study included 13 patients with refractive errors (group 1) and 30 emmetropic volunteers (group 2). Group 2 was further subgrouped into 20-32 years old (young) and over 45 years old (old) to analyse the effect of accommodation. fMRI data were acquired with a block-design paradigm on a 3 Tesla MR system. In both groups, images were initially acquired in the normal refractive state. fMRI was then repeated in both groups under refractive error. Activation areas on the visual cortex were measured in square centimetres. The total activated area on the visual cortex was compared between the normal refractive state and the induced/uncorrected refractive error. RESULTS: In group 1, activation areas of the visual cortex during uncorrected refractive error were significantly decreased compared with activation areas during corrected refractive error (p=0.001). In group 2, induced myopia resulted in a significant decrease in activation areas compared with the normal refractive state. The decrease in activation areas was significant for both 2 and 4 diopters (D) of myopia (p=0.003 and p<0.001, respectively). In both the young and the old subgroups, activation areas were significantly decreased during induced myopia; we found no difference between the young and old subgroups. DISCUSSION AND CONCLUSION: Refractive errors have a clear effect on fMRI of the visual cortex.
Attenuation caused by infrequently updated covariates in survival analysis
DEFF Research Database (Denmark)
Andersen, Per Kragh; Liestøl, Knut
2003-01-01
Attenuation; Cox regression model; Measurement errors; Survival analysis; Time-dependent covariates
Parabolic cylinder functions: examples of error bounds for asymptotic expansions
R. Vidunas; N.M. Temme (Nico)
2002-01-01
Several asymptotic expansions of parabolic cylinder functions are discussed and error bounds for remainders in the expansions are presented. In particular, Poincaré-type expansions for large values of the argument $z$ and uniform expansions for large values of the parameter are considered.
Vannitsem, Stéphane; Lucarini, Valerio
2016-06-01
We study a simplified coupled atmosphere-ocean model using the formalism of covariant Lyapunov vectors (CLVs), which link physically-based directions of perturbations to growth/decay rates. The model is obtained via a severe truncation of quasi-geostrophic equations for the two fluids, and includes a simple yet physically meaningful representation of their dynamical/thermodynamical coupling. The model has 36 degrees of freedom, and the parameters are chosen so that a chaotic behaviour is observed. There are two positive Lyapunov exponents (LEs), sixteen negative LEs, and eighteen near-zero LEs. The presence of many near-zero LEs results from the vast time-scale separation between the characteristic time scales of the two fluids, and leads to nontrivial error growth properties in the tangent space spanned by the corresponding CLVs, which are geometrically very degenerate. Such CLVs correspond to two different classes of ocean/atmosphere coupled modes. The tangent space spanned by the CLVs corresponding to the positive and negative LEs has, instead, a non-pathological behaviour, and one can construct robust large deviations laws for the finite time LEs, thus providing a universal model for assessing predictability on long to ultra-long scales along such directions. Interestingly, the tangent space of the unstable manifold has substantial projection on both atmospheric and oceanic components. The results show the difficulties in using hyperbolicity as a conceptual framework for multiscale chaotic dynamical systems, whereas the framework of partial hyperbolicity seems better suited, possibly indicating an alternative definition for the chaotic hypothesis. They also suggest the need for an accurate analysis of error dynamics on different time scales and domains and for a careful set-up of assimilation schemes when looking at coupled atmosphere-ocean models.
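The 36-variable coupled model is beyond a snippet, but the first ingredient of a CLV analysis, the Lyapunov exponents themselves, is computed by the standard Benettin/QR method. A minimal sketch for the Hénon map (a stand-in system, not the paper's model):

```python
import numpy as np

def henon(x, a=1.4, b=0.3):
    return np.array([1 - a * x[0]**2 + b * x[1], x[0]])

def jacobian(x, a=1.4, b=0.3):
    return np.array([[-2 * a * x[0], b], [1.0, 0.0]])

# Benettin/QR method: push an orthonormal frame through the tangent
# dynamics and accumulate the logs of the stretching factors (diag R)
x = np.array([0.1, 0.1])
Q = np.eye(2)
logs = np.zeros(2)
n_transient, n_iter = 1000, 20000
for i in range(n_transient + n_iter):
    Q, R = np.linalg.qr(jacobian(x) @ Q)
    if i >= n_transient:
        logs += np.log(np.abs(np.diag(R)))
    x = henon(x)

lyap = logs / n_iter
# For the Henon map the exponents sum to log|det J| = log(b) = log 0.3
```

The same QR sweep, combined with a backward sweep in the Ginelli-type algorithms, yields the covariant vectors themselves; the degeneracies discussed above show up as near-zero entries in `lyap` for the coupled model.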
Topographic correction and covariance function modelling over the non-homogenous topography
Abulaitijiang, Adili; Barzaghi, Riccardo; Baltazar Andersen, Ole; Knudsen, Per
2017-04-01
The 6 years of CryoSat-2 satellite altimetry data can potentially be used to extract the high-frequency components of the Earth's gravity field beyond the Global Geopotential Models (GGMs), which correspond to a resolution of 9.2 km at degree 2160. With the conventional remove-compute-restore technique (considering only the GGMs), the theoretical assumption of homogeneity and isotropy in the Least-Squares Collocation (LSC) algorithm is not always satisfied in coastal and mountainous regions. High-resolution bathymetry data (e.g., SRTM30, with a spatial resolution of around 1 km) are used to account for the strong correlation of short-wavelength (1-10 km) gravity features with topography and bathymetry. Hence, the Topographic Correction (TC) is a critical step in the reduction of the gravity functionals (e.g., height anomaly and gravity anomaly) to comply with the theoretical assumptions of LSC. Previous studies show that the terrain correction performs slightly differently for residual gravity anomalies than for residual height anomalies over shallow regions close to the coast (or regions including islands). Unexpectedly, terrain corrections using residual terrain models (RTM) do not reduce the signal but add signal when computed for the height anomalies. This should be examined when the (sea level) height anomalies are to be reduced by TC and the marine gravity field is further derived using LSC. In this work, the TC computation (for both the height anomalies and gravity) will be conducted in several regions (patches) around the Mediterranean, Chile, and the islands of Indonesia, where true gravity data are available for validation. Since the variance (magnitude) of the residual height anomalies is much smaller than that of gravity anomalies, and the noise variance is significant in the altimetry products, a further (modified) covariance fitting/modelling approach dedicated to the height anomalies will be
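For readers unfamiliar with remove-compute-restore, the reduction and restoration steps take the following generic form (notation schematic, not the paper's):

```latex
\Delta g_{\mathrm{res}} = \Delta g_{\mathrm{obs}}
  - \Delta g_{\mathrm{GGM}} - \Delta g_{\mathrm{TC}},
\qquad
\hat{\zeta} = \mathrm{LSC}\!\left(\Delta g_{\mathrm{res}}\right)
  + \zeta_{\mathrm{GGM}} + \zeta_{\mathrm{TC}}
```

where the residual field should ideally be homogeneous and isotropic enough for the LSC covariance modelling; the topographic correction is what removes the short-wavelength, topography-correlated part before collocation.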
Shape evolution of 72,74Kr with temperature in covariant density functional theory
Zhang, Wei; Niu, Yi-Fei
2017-09-01
The rich phenomena of deformation in neutron-deficient krypton isotopes, such as shape evolution with neutron number and shape coexistence, have attracted the interest of nuclear physicists for decades. It is interesting to study such shape phenomena in a novel way, e.g. by thermally exciting the nucleus. In this work, we develop the finite-temperature covariant density functional theory for axially deformed nuclei with pairing correlations treated in the BCS approach, and apply it to the study of shape evolution in 72,74Kr with increasing temperature. For 72Kr, as temperature increases, the nucleus first experiences a relatively quick weakening of the oblate deformation at temperature T ∼ 0.9 MeV, and then changes from oblate to spherical at T ∼ 2.1 MeV. For 74Kr, the global minimum is at quadrupole deformation β2 ∼ −0.14 and abruptly changes to spherical at T ∼ 1.7 MeV. The proton pairing transition occurs at the critical temperature 0.6 MeV, following the rule Tc = 0.6Δp(0), where Δp(0) is the proton pairing gap at zero temperature. Signatures of the above pairing transition and shape changes can be found in the specific-heat curve. The single-particle level evolution with temperature is presented. Supported by the National Natural Science Foundation of China (11105042, 11305161, 11505157), the Open Fund of the Key Laboratory of Time and Frequency Primary Standards, CAS, and support from the Henan Administration of Foreign Experts Affairs
Sraj, Ihab
2015-10-22
This paper addresses model dimensionality reduction for Bayesian inference based on prior Gaussian fields with uncertainty in the covariance function hyper-parameters. The dimensionality reduction is traditionally achieved using the Karhunen-Loève expansion of a prior Gaussian process assuming covariance function with fixed hyper-parameters, despite the fact that these are uncertain in nature. The posterior distribution of the Karhunen-Loève coordinates is then inferred using available observations. The resulting inferred field is therefore dependent on the assumed hyper-parameters. Here, we seek to efficiently estimate both the field and covariance hyper-parameters using Bayesian inference. To this end, a generalized Karhunen-Loève expansion is derived using a coordinate transformation to account for the dependence with respect to the covariance hyper-parameters. Polynomial Chaos expansions are employed for the acceleration of the Bayesian inference using similar coordinate transformations, enabling us to avoid expanding explicitly the solution dependence on the uncertain hyper-parameters. We demonstrate the feasibility of the proposed method on a transient diffusion equation by inferring spatially-varying log-diffusivity fields from noisy data. The inferred profiles were found closer to the true profiles when including the hyper-parameters’ uncertainty in the inference formulation.
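The baseline that this paper generalizes, a Karhunen-Loève expansion of a prior Gaussian field with fixed covariance hyper-parameters, can be sketched on a discrete grid; the kernel, grid, and truncation level below are illustrative assumptions:

```python
import numpy as np

# Squared-exponential prior covariance on a 1-D grid, with fixed
# hyper-parameters sigma2 (variance) and ell (correlation length) --
# exactly the case the generalized KL expansion relaxes
x = np.linspace(0, 1, 200)
sigma2, ell = 1.0, 0.2
C = sigma2 * np.exp(-0.5 * (x[:, None] - x[None, :])**2 / ell**2)

# Karhunen-Loeve expansion: eigendecomposition of the covariance
vals, vecs = np.linalg.eigh(C)
vals, vecs = vals[::-1], vecs[:, ::-1]        # descending order

# Truncate where 99.9% of the prior variance is captured
k = np.searchsorted(np.cumsum(vals) / vals.sum(), 0.999) + 1

# One field realization from k standard-normal KL coordinates
rng = np.random.default_rng(2)
xi = rng.standard_normal(k)
field = vecs[:, :k] @ (np.sqrt(np.maximum(vals[:k], 0)) * xi)
```

Bayesian inference then targets the coordinates `xi` (and, in the generalized expansion of the paper, the hyper-parameters `sigma2` and `ell` as well, via the coordinate transformation described above).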
Functional multiple indicators, multiple causes measurement error models.
Tekwe, Carmen D; Zoh, Roger S; Bazer, Fuller W; Wu, Guoyao; Carroll, Raymond J
2017-05-08
Objective measures of oxygen consumption and carbon dioxide production by mammals are used to predict their energy expenditure. Since energy expenditure is not directly observable, it can be viewed as a latent construct with multiple physical indirect measures such as respiratory quotient, volumetric oxygen consumption, and volumetric carbon dioxide production. Metabolic rate is defined as the rate at which metabolism occurs in the body. Metabolic rate is also not directly observable. However, heat is produced as a result of metabolic processes within the body. Therefore, metabolic rate can be approximated by heat production plus some errors. While energy expenditure and metabolic rates are correlated, they are not equivalent. Energy expenditure results from physical function, while metabolism can occur within the body without the occurrence of physical activities. In this manuscript, we present a novel approach for studying the relationship between metabolic rate and indicators of energy expenditure. We do so by extending our previous work on MIMIC ME models to allow responses that are sparsely observed functional data, defining the sparse functional multiple indicators, multiple cause measurement error (FMIMIC ME) models. The mean curves in our proposed methodology are modeled using basis splines. A novel approach for estimating the variance of the classical measurement error based on functional principal components is presented. The model parameters are estimated using the EM algorithm and a discussion of the model's identifiability is provided. We show that the defined model is not a trivial extension of longitudinal or functional data methods, due to the presence of the latent construct. Results from its application to data collected on Zucker diabetic fatty rats are provided. Simulation results investigating the properties of our approach are also presented. © 2017, The International Biometric Society.
Covariant and background independent functional RG flow for the effective average action
Energy Technology Data Exchange (ETDEWEB)
Safari, Mahmoud; Vacca, Gian Paolo [Dipartimento di Fisica and INFN - Sezione di Bologna,via Irnerio 46, 40126 Bologna (Italy)
2016-11-23
We extend our prescription for the construction of a covariant and background-independent effective action for scalar quantum field theories to the case where momentum modes below a certain scale are suppressed by the presence of an infrared regulator. The key step is an appropriate choice of the infrared cutoff for which the Ward identity, capturing the information from single-field dependence of the ultraviolet action, continues to be exactly solvable, and therefore, in addition to covariance, manifest background independence of the effective action is guaranteed at any scale. A practical consequence is that in this framework one can adopt truncations dependent on the single total field. Furthermore we discuss the necessary and sufficient conditions for the preservation of symmetries along the renormalization group flow.
Error function attack of chaos synchronization based encryption schemes.
Wang, Xingang; Zhan, Meng; Lai, C-H; Gang, Hu
2004-03-01
Different chaos synchronization based encryption schemes are reviewed and compared from the practical point of view. As an efficient cryptanalysis tool for chaos encryption, a proposal based on the error function attack is presented systematically and used to evaluate system security. We define a quantitative measure (quality factor) of the effective applicability of a chaos encryption scheme, which takes into account the security, the encryption speed, and the robustness against channel noise. A comparison is made of several encryption schemes and it is found that a scheme based on one-way coupled chaotic map lattices performs outstandingly well, as judged by the quality factor. Copyright 2004 American Institute of Physics.
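A toy version of the error function attack can be demonstrated on a logistic-map masking cipher. This is an illustrative stand-in for the coupled chaotic systems in the paper: the map, parameters, and grid are invented for the demo, and because exact keystream regeneration is required here, the true key is placed on the scan grid.

```python
import numpy as np

def keystream(r, n, x0=0.3, skip=100):
    # Logistic-map keystream; (r, x0) act as the secret key
    x = x0
    for _ in range(skip):
        x = r * x * (1 - x)
    out = np.empty(n)
    for i in range(n):
        x = r * x * (1 - x)
        out[i] = x
    return out

rng = np.random.default_rng(3)
plaintext = rng.uniform(-0.01, 0.01, 500)       # small-amplitude signal

r_grid = np.linspace(3.85, 3.99, 281)           # candidate keys
r_true = r_grid[120]                            # = 3.91, on the grid
cipher = plaintext + keystream(r_true, plaintext.size)  # chaotic masking

# Error function attack: scan candidate keys and measure the average
# decryption residual; the minimum exposes the true key
efa = [np.mean(np.abs(cipher - keystream(r, cipher.size))) for r in r_grid]
r_hat = r_grid[np.argmin(efa)]
```

At the true key the residual collapses to the mean amplitude of the plaintext, while any wrong key leaves a large chaotic mismatch, which is the sharp minimum the attack exploits.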
Galaxy-galaxy lensing estimators and their covariance properties
Singh, Sukhdeep; Mandelbaum, Rachel; Seljak, Uroš; Slosar, Anže; Vazquez Gonzalez, Jose
2017-11-01
We study the covariance properties of real space correlation function estimators - primarily galaxy-shear correlations, or galaxy-galaxy lensing - using SDSS data for both shear catalogues and lenses (specifically the BOSS LOWZ sample). Using mock catalogues of lenses and sources, we disentangle the various contributions to the covariance matrix and compare them with a simple analytical model. We show that not subtracting the lensing measurement around random points from the measurement around the lens sample is equivalent to performing the measurement using the lens density field instead of the lens overdensity field. While the measurement using the lens density field is unbiased (in the absence of systematics), its error is significantly larger due to an additional term in the covariance. Therefore, this subtraction should be performed regardless of its beneficial effects on systematics. Comparing the error estimates from data and mocks for estimators that involve the overdensity, we find that the errors are dominated by shape noise and lens clustering, that empirically estimated covariances (jackknife and standard deviation across mocks) are consistent with theoretical estimates, and that both the connected parts of the four-point function and the supersample covariance can be neglected for the current levels of noise. While the trade-off between different terms in the covariance depends on the survey configuration (area, source number density), the diagnostics that we use in this work should be useful for future works to test their empirically determined covariances.
DEFF Research Database (Denmark)
Rohde, Palle Duun; Demontis, Ditte; Castro Dias Cuyabano, Beatriz
2016-01-01
Schizophrenia is a psychiatric disorder with large personal and social costs, and understanding its genetic etiology is important. Such knowledge can be obtained by testing the association between a disease phenotype and individual genetic markers; however, such single-marker methods have limited power to detect genetic markers with small effects. Instead, aggregating genetic markers based on biological information might increase the power to identify sets of genetic markers of etiological significance. Several set test methods have been proposed; here we propose a new set test derived from genomic best linear unbiased prediction (GBLUP), the covariance association test (CVAT). We compared the performance of CVAT to other commonly used set tests. The comparison was conducted using a simulated study population having the same genetic parameters as for schizophrenia. We found that CVAT
Wang, Y. K.
2017-11-01
A separable form of the Gogny pairing force is implemented in tilted axis cranking covariant density functional theory for the description of rotational bands in open shell nuclei. The developed method is used to investigate the yrast sequence of 109Ag as an example. The experimental energy spectrum, angular momenta, and electromagnetic transition probabilities are well reproduced by taking into account pairing correlations with the separable pairing force. An abrupt transition of the rotational axis from the long-intermediate plane to the long-short one is obtained and discussed in detail.
Parsimonious covariate selection for a multicategory ordered response.
Hsu, Wan-Hsiang; DiRienzo, A Gregory
2017-12-01
We propose a flexible continuation ratio (CR) model for an ordinal categorical response with potentially ultrahigh dimensional data that characterizes the unique covariate effects at each response level. The CR model is the logit of the conditional discrete hazard function for each response level given covariates. We propose two modeling strategies, one that keeps the same covariate set for each hazard function but allows regression coefficients to arbitrarily change with response level, and one that allows both the set of covariates and their regression coefficients to arbitrarily change with response. Evaluating a covariate set is accomplished by using the nonparametric bootstrap to estimate prediction error and their robust standard errors that do not rely on proper model specification. To help with interpretation of the selected covariate set, we flexibly estimate the conditional cumulative distribution function given the covariates using the separate hazard function models. The goodness-of-fit of our flexible CR model is assessed with graphical and numerical methods based on the cumulative sum of residuals. Simulation results indicate the methods perform well in finite samples. An application to B-cell acute lymphocytic leukemia data is provided.
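The CR model's building block, the logit of the discrete hazard at each response level, amounts to a sequence of binary logistic fits on the "still at risk" subsets. A self-contained sketch with simulated data (the paper's bootstrap-based covariate selection is not reproduced here, and the data-generating model is invented for the example):

```python
import numpy as np

def fit_logistic(X, y, n_iter=25):
    # Newton-Raphson (IRLS) for binary logistic regression
    beta = np.zeros(X.shape[1])
    for _ in range(n_iter):
        p = 1 / (1 + np.exp(-X @ beta))
        W = p * (1 - p)
        beta += np.linalg.solve(X.T @ (W[:, None] * X), X.T @ (y - p))
    return beta

rng = np.random.default_rng(4)
n = 2000
x = rng.standard_normal(n)
# Ordinal response with 3 levels generated from a latent variable
latent = 1.0 * x + rng.logistic(size=n)
y = np.digitize(latent, [-1.0, 1.0])           # levels 0, 1, 2

X = np.column_stack([np.ones(n), x])
# Continuation-ratio fits: level-j hazard among subjects with Y >= j
betas = []
for j in range(2):                             # last level needs no fit
    at_risk = y >= j
    betas.append(fit_logistic(X[at_risk], (y[at_risk] == j).astype(float)))
```

Allowing a different covariate set in each `fit_logistic` call corresponds to the paper's second, fully level-specific modeling strategy.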
Genetic co-variance functions for live weight, feed intake, and efficiency measures in growing pigs.
Coyne, J M; Berry, D P; Matilainen, K; Sevon-Aimonen, M-L; Mantysaari, E A; Juga, J; Serenius, T; McHugh, N
2017-09-01
The objective of the present study was to estimate genetic co-variance parameters pertaining to live weight, feed intake, and 2 efficiency traits (i.e., residual feed intake and residual daily gain) in a population of pigs over a defined growing phase using Legendre polynomial equations. The data set used consisted of 51,893 live weight records and 903,436 feed intake, residual feed intake (defined as the difference between an animal's actual feed intake and its expected feed intake), and residual daily gain (defined as the difference between an animal's actual growth rate and its expected growth rate) records from 10,201 growing pigs. Genetic co-variance parameters for all traits were estimated using random regression Legendre polynomials. Daily heritability estimates for live weight ranged from 0.25 ± 0.04 (d 73) to 0.50 ± 0.03 (d 122). Low to moderate heritability estimates were evident for feed intake, ranging from 0.07 ± 0.03 (d 66) to 0.25 ± 0.02 (d 170). The estimated heritability for residual feed intake was generally lower than those of both live weight and feed intake and ranged from 0.04 ± 0.01 (d 96) to 0.17 ± 0.02 (d 159). The heritability for feed intake and residual feed intake increased in the early stages of the test period and subsequently sharply declined, coinciding with older ages. Heritability estimates for residual daily gain ranged from 0.26 ± 0.03 (d 188) to 0.42 ± 0.03 (d 101). Genetic correlations within trait were strongest between adjacent ages but weakened as the interval between ages increased; however, the genetic correlations within all traits tended to strengthen between the extremes of the trajectory. Moderate to strong genetic correlations were evident among live weight, feed intake, and the efficiency traits, particularly in the early stage of the trial period (d 66 to 86), but weakened with age. Results from this study could be implemented into the national genetic evaluation for pigs, providing comprehensive
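The Legendre basis used in such random-regression models is straightforward to construct: rescale age to [−1, 1], where the polynomials are orthogonal, and evaluate them there. The day range is taken loosely from the abstract and the quadratic order is illustrative; this sketches the design matrix only, not the full variance-component model.

```python
import numpy as np
from numpy.polynomial import legendre

# Ages (days on test) rescaled to [-1, 1]
age = np.arange(66, 171)
t = 2 * (age - age.min()) / (age.max() - age.min()) - 1

order = 2                                      # quadratic Legendre basis
Phi = np.column_stack([legendre.legval(t, np.eye(order + 1)[k])
                       for k in range(order + 1)])

# Columns: P0 = 1, P1 = t, P2 = (3 t^2 - 1) / 2
```

Random regression then fits animal-specific coefficients on these columns, and the genetic covariance between any two ages follows from the coefficient covariance matrix pre- and post-multiplied by the corresponding rows of `Phi`.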
Energy Technology Data Exchange (ETDEWEB)
Franz Gross, Alfred Stadler
2010-09-01
We present the effective range expansions for the 1S0 and 3S1 scattering phase shifts, and the relativistic deuteron wave functions that accompany our recent high-precision fits (with $\chi^2/N_{\mathrm{data}} \simeq 1$) to the 2007 world np data below 350 MeV. The wave functions are expanded in a series of analytical functions (with the correct asymptotic behavior at both large and small arguments) that can be Fourier-transformed from momentum to coordinate space and are convenient to use in any application. A Fortran subroutine to compute these wave functions can be obtained from the authors.
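For reference, the effective range expansion quoted above has the standard low-energy form (shown here for the 1S0 channel):

```latex
k \cot \delta_0 = -\frac{1}{a} + \frac{1}{2}\, r_0\, k^2 + \mathcal{O}(k^4)
```

where $a$ is the scattering length and $r_0$ the effective range; the fitted phase shifts determine these two leading coefficients.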
Context-Dependent Type Error Diagnosis for Functional Languages
Serrano Mena, A.; Hage, J.
2016-01-01
Customizable type error diagnosis has been proposed as a solution to achieve domain-specific type error diagnosis for embedded domain specific languages. A proven approach is to phrase type inferencing as a constraint-solving problem, so that we can manipulate the order in which constraints are
Using Lambert W function and error function to model phase change on microfluidics
Bermudez Garcia, Anderson
2014-05-01
Solidification and melting on microfluidic devices are modeled using the Lambert W function and the error function. The models are formulated from the heat diffusion equation. The generic case posed is the melting of a slab with time-dependent surface temperature, with a micro- or nanofluid liquid phase. Initially, the solid slab is at the melting temperature. One face of the slab is brought to, and maintained at, a temperature above the melting point that varies in time. The Lambert W function and the error function are applied via Maple to obtain the analytic evolution of the microfluid-solid interface front; it is computed analytically, and the slab's corresponding melting time is determined. The analytical results are expected to be useful for food engineering, cooking engineering, pharmaceutical engineering, nano-engineering, and biomedical engineering.
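The abstract does not spell out the Lambert W construction, but the error function enters melting-front problems through the classical one-phase Stefan (Neumann) solution, sketched below with constant wall temperature. The material parameters are illustrative assumptions, not the paper's.

```python
import numpy as np
from scipy.special import erf
from scipy.optimize import brentq

# One-phase Stefan problem: the melt front is s(t) = 2 lam sqrt(alpha t),
# with lam solving  lam * exp(lam^2) * erf(lam) = Ste / sqrt(pi)
alpha = 1.4e-7        # thermal diffusivity of the liquid, m^2/s (assumed)
Ste = 0.1             # Stefan number c_p (T_wall - T_melt) / L (assumed)

lam = brentq(lambda l: l * np.exp(l**2) * erf(l) - Ste / np.sqrt(np.pi),
             1e-8, 5.0)

t = 60.0                                     # seconds
front = 2 * lam * np.sqrt(alpha * t)         # melt-front position, m
```

For small Stefan numbers, `lam` is close to `sqrt(Ste / 2)`, a useful sanity check; the time-dependent wall temperature treated in the paper modifies the transcendental equation but not the square-root-in-time structure of the front.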
Directory of Open Access Journals (Sweden)
Zbigniew Staroszczyk
2014-12-01
Abstract. In the paper, a calibration method for error correction in transfer function determination with the use of DSP is proposed. The correction limits or eliminates the influence of the transfer function input/output signal conditioners on the transfer functions estimated for the investigated object. The method exploits frequency-domain conditioning-path descriptors found during training observations made on a known reference object. Keywords: transfer function, band extension, error correction, phase errors.
Yuan, Mengting M; Zhang, Jin; Xue, Kai; Wu, Liyou; Deng, Ye; Deng, Jie; Hale, Lauren; Zhou, Xishu; He, Zhili; Yang, Yunfeng; Van Nostrand, Joy D; Schuur, Edward A G; Konstantinidis, Konstantinos T; Penton, Christopher R; Cole, James R; Tiedje, James M; Luo, Yiqi; Zhou, Jizhong
2018-01-01
Permafrost soil in high latitude tundra is one of the largest terrestrial carbon (C) stocks and is highly sensitive to climate warming. Understanding microbial responses to warming-induced environmental changes is critical to evaluating their influences on soil biogeochemical cycles. In this study, a functional gene array (i.e., geochip 4.2) was used to analyze the functional capacities of soil microbial communities collected from a naturally degrading permafrost region in Central Alaska. Varied thaw history was reported to be the main driver of soil and plant differences across a gradient of minimally, moderately, and extensively thawed sites. Compared with the minimally thawed site, the number of detected functional gene probes across the 15-65 cm depth profile at the moderately and extensively thawed sites decreased by 25% and 5%, while the community functional gene β-diversity increased by 34% and 45%, respectively, revealing decreased functional gene richness but increased community heterogeneity along the thaw progression. Particularly, the moderately thawed site contained microbial communities with the highest abundances of many genes involved in prokaryotic C degradation, ammonification, and nitrification processes, but lower abundances of fungal C decomposition and anaerobic-related genes. Significant correlations were observed between functional gene abundance and vascular plant primary productivity, suggesting that plant growth and species composition could be co-evolving traits together with microbial community composition. Altogether, this study reveals the complex responses of microbial functional potentials to thaw-related soil and plant changes and provides information on potential microbially mediated biogeochemical cycles in tundra ecosystems. © 2017 John Wiley & Sons Ltd.
Sparse reduced-rank regression with covariance estimation
Chen, Lisha
2014-12-08
Improving the predicting performance of the multiple response regression compared with separate linear regressions is a challenging question. On the one hand, it is desirable to seek model parsimony when facing a large number of parameters. On the other hand, for certain applications it is necessary to take into account the general covariance structure for the errors of the regression model. We assume a reduced-rank regression model and work with the likelihood function with general error covariance to achieve both objectives. In addition we propose to select relevant variables for reduced-rank regression by using a sparsity-inducing penalty, and to estimate the error covariance matrix simultaneously by using a similar penalty on the precision matrix. We develop a numerical algorithm to solve the penalized regression problem. In a simulation study and real data analysis, the new method is compared with two recent methods for multivariate regression and exhibits competitive performance in prediction and variable selection.
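The reduced-rank backbone of the method (before the sparsity and precision-matrix penalties are added) has a closed form: project the OLS fit onto the leading right singular vectors of the fitted values. A sketch with simulated data; dimensions and noise level are invented for the example.

```python
import numpy as np

rng = np.random.default_rng(5)
n, p, q, r = 200, 10, 8, 2
X = rng.standard_normal((n, p))
# True coefficient matrix of rank r, plus small noise
B_true = rng.standard_normal((p, r)) @ rng.standard_normal((r, q))
Y = X @ B_true + rng.standard_normal((n, q)) * 0.1

# Reduced-rank regression: project the OLS fit onto the top-r right
# singular vectors of the fitted values (identity error covariance here;
# the paper generalizes this with a general, penalized error covariance)
B_ols, *_ = np.linalg.lstsq(X, Y, rcond=None)
_, _, Vt = np.linalg.svd(X @ B_ols, full_matrices=False)
P = Vt[:r].T @ Vt[:r]                        # rank-r projector in Y-space
B_rrr = B_ols @ P
```

With a non-identity error covariance, the projection is taken in the metric of the precision matrix, which is why the paper estimates the two jointly.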
Covariant and infrared-free graviton two-point function in de Sitter spacetime II
Pejhan, Hamed
2016-01-01
The solution to the linearized Einstein equation in de Sitter (dS) spacetime and the corresponding two-point function are explicitly written down in a gauge with two parameters `$a$' and `$b$'. The quantization procedure, independent of the choice of the coordinate system, is based on a rigorous group theoretical approach. Our result takes the form of a universal spin-two (transverse-traceless) sector and a gauge-dependent spin-zero (pure-trace) sector. Scalar equations are derived for the structure functions of each part. We show that the spin-two sector can be written as the resulting action of a second-order differential operator (the spin-two projector) on a massless minimally coupled scalar field (the spin-two structure function). The operator plays the role of a symmetric rank-$2$ polarization tensor and has a spacetime dependence. The calculated spin-two projector grows logarithmically with distance, and no dS-invariant solution exists for either structure function. We show that the logarithmically...
Huber, David E
2006-11-01
This article provides important mathematical descriptions and computer algorithms in relation to the responding optimally with unknown sources of evidence (ROUSE) model of Huber, Shiffrin, Lyle, and Ruys (2001), which has been applied to short-term priming phenomena. In the first section, techniques for obtaining parameter confidence intervals and parameter correlations are described, which are generally applicable to any mathematical model. In the second section, a technique for producing analytic ROUSE predictions is described. Huber et al. (2001) averaged many stochastic trials to obtain stable behavior. By appropriately weighting all possible combinations of feature states, an alternative analytic version is developed, yielding asymptotic model behavior with fewer computations. The third section ties together these separate techniques, obtaining parameter confidence and correlations for the analytic version of the ROUSE model. In doing so, previously unreported behaviors of the model are revealed. In particular, complications due to local minima are discussed, in terms of both variance-covariance analyses and bootstrap sampling analyses.
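The bootstrap route to parameter confidence intervals mentioned in the article is generic; a minimal sketch of a percentile bootstrap on a simple linear-model slope (not the ROUSE model itself, whose likelihood is far more involved):

```python
import numpy as np

rng = np.random.default_rng(6)
n = 300
x = rng.standard_normal(n)
y = 2.0 + 1.5 * x + rng.standard_normal(n)

def slope(x, y):
    # OLS slope via moment estimators
    return np.cov(x, y, bias=True)[0, 1] / np.var(x)

# Percentile bootstrap: resample cases, refit, take empirical quantiles
boots = []
for _ in range(2000):
    idx = rng.integers(0, n, n)
    boots.append(slope(x[idx], y[idx]))
lo, hi = np.percentile(boots, [2.5, 97.5])    # 95% confidence interval
```

The same resample-and-refit loop applied to a full model fit yields the joint bootstrap distribution of all parameters, from which the parameter correlations discussed in the article can be read off.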
A strategy for minimizing common mode human error in executing critical functions and tasks
Energy Technology Data Exchange (ETDEWEB)
Beltracchi, L. (Nuclear Regulatory Commission, Washington, DC (United States)); Lindsay, R.W. (Argonne National Lab., IL (United States))
1992-01-01
Human error in execution of critical functions and tasks can be costly. The Three Mile Island and the Chernobyl Accidents are examples of results from human error in the nuclear industry. There are similar errors that could no doubt be cited from other industries. This paper discusses a strategy to minimize common mode human error in the execution of critical functions and tasks. The strategy consists of the use of human redundancy, and also diversity in human cognitive behavior: skill-, rule-, and knowledge-based behavior. The authors contend that the use of diversity in human cognitive behavior is possible, and it minimizes common mode error.
2011-01-01
regions such as the Campeche Bank in both the truth and the assimilative runs. These features are non-deterministic and are linked to instabilities ... eddy shedding in both the truth and the assimilative runs are preceded by the presence of cyclonic frontal eddies near the Campeche Bank and the ... vicinity of the Campeche Bank are shown in Fig. 21. The initial errors in the West Florida Shelf region are small and are reduced steadily in the non
Earth Observing System Covariance Realism
Zaidi, Waqar H.; Hejduk, Matthew D.
2016-01-01
The purpose of covariance realism is to properly size a primary object's covariance in order to add validity to the calculation of the probability of collision. The covariance realism technique in this paper consists of three parts: collection/calculation of definitive state estimates through orbit determination, calculation of covariance realism test statistics at each covariance propagation point, and proper assessment of those test statistics. An empirical cumulative distribution function (ECDF) goodness-of-fit (GOF) method is employed to determine whether a covariance is properly sized by comparing the empirical distribution of Mahalanobis distance calculations to the hypothesized parent 3-DoF chi-squared distribution. To realistically size a covariance for collision probability calculations, this study uses a state noise compensation algorithm that adds process noise to the definitive epoch covariance to account for uncertainty in the force model. Process noise is added until the GOF tests pass a group significance level threshold. The results of this study indicate that when outliers attributed to persistently high or extreme levels of solar activity are removed, the aforementioned covariance realism compensation method produces a tuned covariance for which up to 80 to 90% of the covariance propagation timespan passes the GOF tests (against a 60% minimum passing threshold), a quite satisfactory and useful result.
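A numerical sketch of the ECDF GOF idea described above (toy covariance and simulated residuals, not the study's data): squared Mahalanobis distances of well-modeled residuals follow a 3-DoF chi-squared distribution, while an undersized covariance fails the same test.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)

# Toy 3x3 "propagated covariance" and simulated position residuals.
C = np.array([[4.0, 1.0, 0.5],
              [1.0, 3.0, 0.2],
              [0.5, 0.2, 2.0]])
r = rng.multivariate_normal(np.zeros(3), C, size=500)

def mahalanobis_sq(res, cov):
    # Squared Mahalanobis distance of each residual under cov.
    return np.einsum('ij,jk,ik->i', res, np.linalg.inv(cov), res)

# A realistic covariance gives d^2 ~ chi-squared with 3 DoF ...
p_good = stats.kstest(mahalanobis_sq(r, C), 'chi2', args=(3,)).pvalue
# ... while an undersized covariance (here shrunk 4x) fails the test.
p_bad = stats.kstest(mahalanobis_sq(r, C / 4.0), 'chi2', args=(3,)).pvalue
print(p_good, p_bad)
```

A Kolmogorov-Smirnov test stands in here for whatever specific ECDF GOF statistic the study used; the pass/fail logic against a significance threshold is the same.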
Directory of Open Access Journals (Sweden)
Frieder Kleefeld
2013-01-01
Full Text Available According to some generalized correspondence principle the classical limit of a non-Hermitian quantum theory describing quantum degrees of freedom is expected to be the well known classical mechanics of classical degrees of freedom in the complex phase space, i.e., some phase space spanned by complex-valued space and momentum coordinates. As special relativity was developed by Einstein merely for real-valued space-time and four-momentum, we will try to understand how special relativity and covariance can be extended to complex-valued space-time and four-momentum. Our considerations will lead us not only to some unconventional derivation of Lorentz transformations for complex-valued velocities, but also to the non-Hermitian Klein-Gordon and Dirac equations, which are to lay the foundations of a non-Hermitian quantum theory.
Team function in obstetrics to reduce errors and improve outcomes.
Nielsen, Peter; Mann, Susan
2008-03-01
Crew resource management (CRM), adapted from aviation for the practice of medicine, offers the potential of reducing medical errors, increasing employee retention, and improving patient satisfaction. CRM, however, requires a culture that promotes teamwork and acceptance of new concepts. Leadership is needed to transform the culture, as well as to train, coach, and sustain the behavior CRM demands. Culture change can be fostered through teamwork activities that, when made part of a daily routine, provide the basis for modeling teamwork skills and set the stage for sustained culture change. New tools are available to measure processes as well as patient and staff satisfaction.
Directory of Open Access Journals (Sweden)
A. B. Levina
2016-03-01
Full Text Available Error detection codes are mechanisms that enable robust delivery of data over unreliable communication channels and devices. Unreliable channels and devices are error-prone objects, and error detection codes allow such errors to be detected. There are two classes of error detecting codes: classical codes and security-oriented codes. The classical codes detect a high percentage of errors; however, they have a high probability of missing an error caused by algebraic manipulation. In contrast, security-oriented codes are codes with a small Hamming distance and high protection against algebraic manipulation. The probability of error masking is a fundamental parameter of security-oriented codes. A detailed study of this parameter allows analyzing the behavior of the error-correcting code in the case of error injection in the encoding device. In addition, the complexity of the encoding function plays an important role in security-oriented codes. Encoding functions with lower computational complexity and a low probability of masking provide the best protection of the encoding device against malicious acts. This paper investigates the influence of encoding function complexity on the error masking probability distribution. It will be shown that a more complex encoding function reduces the maximum of the error masking probability. It is also shown that increasing the function complexity changes the error masking probability distribution; in particular, increasing the computational complexity decreases the difference between the maximum and average values of the error masking probability. Our results have shown that functions with greater complexity have smoothed maximums of error masking probability, which significantly complicates the analysis of the error-correcting code by an attacker. As a result, in the case of a complex encoding function, the probability of algebraic manipulation is reduced. The paper discusses an approach to measuring the error masking
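A toy illustration of error masking (using a small binary linear code chosen for this sketch, not one from the paper): under additive errors, a nonzero error is masked exactly when it is itself a codeword, since it then maps every codeword onto another codeword.

```python
import itertools
import numpy as np

# Toy binary linear [n=6, k=3] code (hypothetical generator matrix).
G = np.array([[1, 0, 0, 1, 1, 0],
              [0, 1, 0, 1, 0, 1],
              [0, 0, 1, 0, 1, 1]])
k, n = G.shape

# Enumerate all 2^k codewords.
codewords = {tuple(np.mod(np.array(m) @ G, 2))
             for m in itertools.product([0, 1], repeat=k)}

# For a linear code, a nonzero error e is masked iff e is a codeword:
# then every codeword c is mapped to the valid codeword c + e.
masked = sum(1 for e in itertools.product([0, 1], repeat=n)
             if any(e) and e in codewords)
Q = masked / (2**n - 1)  # masking probability under uniform nonzero errors
print(masked, Q)
```

For linear codes this probability is fixed at (2^k - 1)/(2^n - 1) regardless of the transmitted word; security-oriented (nonlinear) codes are designed precisely so that the masking probability depends on the data and stays uniformly low.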
Tay, L.; Huang, Q.; Vermunt, J.K.
2016-01-01
In large-scale testing, the use of multigroup approaches is limited for assessing differential item functioning (DIF) across multiple variables as DIF is examined for each variable separately. In contrast, the item response theory with covariate (IRT-C) procedure can be used to examine DIF across
Hatala, J.; Sonnentag, O.; Detto, M.; Runkle, B.; Vargas, R.; Kelly, M.; Baldocchi, D. D.
2009-12-01
Ground-based, visible light imagery has been used for different purposes in agricultural and ecological research. A series of recent studies explored the utilization of networked digital cameras to continuously monitor vegetation by taking oblique canopy images at fixed view angles and time intervals. In our contribution we combine high temporal resolution digital camera imagery, eddy-covariance, and meteorological measurements with weekly field-based hyperspectral and LAI measurements to gain new insights on temporal changes in canopy structure and functioning of two managed ecosystems in California’s Sacramento-San Joaquin River Delta: a pasture infested by the invasive perennial pepperweed (Lepidium latifolium) and a rice plantation (Oryza sativa). Specific questions we address are: a) how does year-round grazing affect pepperweed canopy development, b) is it possible to identify phenological key events of managed ecosystems (pepperweed: flowering; rice: heading) from the limited spectral information of digital camera imagery, c) is a simple greenness index derived from digital camera imagery sufficient to track leaf area index and canopy development of managed ecosystems, and d) what are the scales of temporal correlation between digital camera signals and carbon and water fluxes of managed ecosystems? Preliminary results for the pasture-pepperweed ecosystem show that year-round grazing inhibits the accumulation of dead stalks causing earlier green-up and that digital camera imagery is well suited to capture the onset of flowering and the associated decrease in photosynthetic CO2 uptake. Results from our analyses are of great relevance from both a global environmental change and land management perspective.
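The "simple greenness index" in question (c) is commonly the green chromatic coordinate; a sketch with invented pixel values (not the study's imagery):

```python
import numpy as np

def gcc(img):
    # Green chromatic coordinate G/(R+G+B), averaged over a region.
    # img: H x W x 3 array of RGB digital numbers (any positive scale).
    img = img.astype(float)
    s = img.sum(axis=2)
    return float((img[..., 1] / np.where(s == 0, 1, s)).mean())

# Toy canopy patches (hypothetical values, not real camera data):
green_canopy = np.full((4, 4, 3), [60, 140, 50])  # green-dominated
senescent = np.full((4, 4, 3), [120, 100, 60])    # brown/senescent
print(gcc(green_canopy), gcc(senescent))
```

Tracking this single ratio through time is what lets fixed cameras flag green-up, flowering, and senescence despite their limited spectral information.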
Dissociating error-based and reinforcement-based loss functions during sensorimotor learning.
Cashaback, Joshua G A; McGregor, Heather R; Mohatarem, Ayman; Gribble, Paul L
2017-07-01
It has been proposed that the sensorimotor system uses a loss (cost) function to evaluate potential movements in the presence of random noise. Here we test this idea in the context of both error-based and reinforcement-based learning. In a reaching task, we laterally shifted a cursor relative to true hand position using a skewed probability distribution. This skewed probability distribution had its mean and mode separated, allowing us to dissociate the optimal predictions of an error-based loss function (corresponding to the mean of the lateral shifts) and a reinforcement-based loss function (corresponding to the mode). We then examined how the sensorimotor system uses error feedback and reinforcement feedback, in isolation and combination, when deciding where to aim the hand during a reach. We found that participants compensated differently to the same skewed lateral shift distribution depending on the form of feedback they received. When provided with error feedback, participants compensated based on the mean of the skewed noise. When provided with reinforcement feedback, participants compensated based on the mode. Participants receiving both error and reinforcement feedback continued to compensate based on the mean while repeatedly missing the target, despite receiving auditory, visual and monetary reinforcement feedback that rewarded hitting the target. Our work shows that reinforcement-based and error-based learning are separable and can occur independently. Further, when error and reinforcement feedback are in conflict, the sensorimotor system heavily weights error feedback over reinforcement feedback.
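The mean/mode dissociation above can be reproduced numerically. A sketch under assumed parameters (a gamma-distributed shift with mean 2 and mode 1 standing in for the study's skewed distribution):

```python
import numpy as np

rng = np.random.default_rng(2)

# Skewed cursor shifts (toy gamma distribution): mean = 2, mode = 1.
shift = rng.gamma(shape=2.0, scale=1.0, size=100_000)
target = 0.0
aims = np.linspace(-4.0, 0.0, 401)

# Error-based (squared-error) loss: best aim compensates for the MEAN.
sq_err = [np.mean((a + shift - target) ** 2) for a in aims]
aim_error = aims[int(np.argmin(sq_err))]   # expected near -mean = -2

# Reinforcement-based (hit/miss) loss: best aim compensates for the MODE.
hits = [np.mean(np.abs(a + shift - target) < 0.25) for a in aims]
aim_reward = aims[int(np.argmax(hits))]    # expected near -mode = -1
print(aim_error, aim_reward)
```

Minimizing expected squared error drives the aim toward minus the mean of the shifts, whereas maximizing hit probability inside a narrow target zone drives it toward minus the mode, which is exactly the separation the experiment exploits.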
Covariance Models for Hydrological Applications
Hristopulos, Dionissios
2014-05-01
This methodological contribution aims to present some new covariance models with applications in the stochastic analysis of hydrological processes. More specifically, we present explicit expressions for radially symmetric, non-differentiable, Spartan covariance functions in one, two, and three dimensions. The Spartan covariance parameters include a characteristic length, an amplitude coefficient, and a rigidity coefficient which determines the shape of the covariance function. Different expressions are obtained depending on the value of the rigidity coefficient and the dimensionality. If the value of the rigidity coefficient is much larger than one, the Spartan covariance function exhibits multiscaling. Spartan covariance models are more flexible than the classical geostatistical models (e.g., spherical, exponential). Their non-differentiability makes them suitable for modelling the properties of geological media. We also present a family of radially symmetric, infinitely differentiable Bessel-Lommel covariance functions which are valid in any dimension. These models involve combinations of Bessel and Lommel functions. They provide a generalization of the J-Bessel covariance function, and they can be used to model smooth processes with an oscillatory decay of correlations. We discuss the dependence of the integral range of the Spartan and Bessel-Lommel covariance functions on the parameters. We point out that the dependence is not uniquely specified by the characteristic length, unlike the classical geostatistical models. Finally, we define and discuss the use of the generalized spectrum for characterizing different correlation length scales; the spectrum is defined in terms of an exponent α. We show that the spectrum values obtained for exponent values less than one can be used to discriminate between mean-square continuous but non-differentiable random fields. References [1] D. T. Hristopulos and S. Elogne, 2007. Analytic properties and covariance functions of
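For the classical baseline models named above (exponential, spherical), a short sketch computing the 1-D integral range numerically; the model formulas are the standard geostatistical ones, while the discretization choices are mine:

```python
import numpy as np

def exponential_cov(r, var=1.0, b=1.0):
    # Classical exponential model: C(r) = var * exp(-r / b).
    return var * np.exp(-np.asarray(r, dtype=float) / b)

def spherical_cov(r, var=1.0, a=1.0):
    # Classical spherical model: exactly zero beyond the range a.
    r = np.asarray(r, dtype=float)
    c = var * (1.0 - 1.5 * r / a + 0.5 * (r / a) ** 3)
    return np.where(r < a, c, 0.0)

# 1-D integral range l = (1/C(0)) * integral of C(r) dr, via a plain
# Riemann sum over a truncated domain.
r = np.linspace(0.0, 5.0, 50_001)
dr = r[1] - r[0]
l_exp = exponential_cov(r).sum() * dr  # analytic value: b = 1
l_sph = spherical_cov(r).sum() * dr    # analytic value: 3a/8 = 0.375
print(l_exp, l_sph)
```

For these classical models the integral range is pinned down by the characteristic length alone; the paper's point is that Spartan and Bessel-Lommel models break this one-to-one dependence through their extra shape parameters.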
Error Patterns Analysis of Hearing Aid and Cochlear Implant Users as a Function of Noise.
Chun, Hyungi; Ma, Sunmi; Han, Woojae; Chun, Youngmyoung
2015-12-01
Hearing-impaired listeners with similar pure-tone thresholds and audiometric configurations may nevertheless differ in speech perception ability. For this reason, the present study analyzes error patterns in hearing-impaired listeners compared to normal-hearing (NH) listeners as a function of signal-to-noise ratio (SNR). Forty-four adults participated: 10 listeners with NH, 20 hearing aid (HA) users and 14 cochlear implant (CI) users. Korean standardized monosyllables were presented as stimuli in quiet and at three different SNRs. Total error patterns were classified into substitution, omission, addition, fail, and no-response types, using stacked bar plots. Total error percentages for the three groups significantly increased as the SNR decreased. In the error pattern analysis, the NH group showed predominantly substitution errors regardless of SNR, compared to the other groups. In both the HA and CI groups, substitution errors declined while no-response errors emerged as the SNR worsened. The CI group was characterized by fewer substitution and more fail errors than the HA group. Substitutions of initial and final phonemes in the HA and CI groups were dominated by place-of-articulation errors. However, the HA group tended to miss consonant place cues, such as formant transitions and stop consonant bursts, whereas the CI group mostly showed confusions of nasal consonants with low-frequency characteristics. Interestingly, all three groups showed /k/ addition in the final phoneme, a trend that magnified as noise increased. The HA and CI groups had their own distinctive error patterns even though the aided thresholds of the two groups were similar. We expect these results to help focus auditory training for hearing-impaired listeners on the most frequent error patterns, thereby reducing those errors and improving speech perception ability.
Fractional charge and spin errors in self-consistent Green's function theory.
Phillips, Jordan J; Kananenka, Alexei A; Zgid, Dominika
2015-05-21
We examine fractional charge and spin errors in self-consistent Green's function theory within a second-order approximation (GF2). For GF2, it is known that the summation of diagrams resulting from the self-consistent solution of the Dyson equation removes the divergences pathological to second-order Møller-Plesset (MP2) theory for strong correlations. In the language often used in density functional theory contexts, this means GF2 has a greatly reduced fractional spin error relative to MP2. The natural question then is what effect, if any, does the Dyson summation have on the fractional charge error in GF2? To this end, we generalize our previous implementation of GF2 to open-shell systems and analyze its fractional spin and charge errors. We find that like MP2, GF2 possesses only a very small fractional charge error, and consequently minimal many electron self-interaction error. This shows that GF2 improves on the critical failings of MP2, but without altering the positive features that make it desirable. Furthermore, we find that GF2 has both less fractional charge and fractional spin errors than typical hybrid density functionals as well as random phase approximation with exchange.
Royston, Patrick
2014-01-01
We consider how to represent sigmoid-type regression relationships in a practical and parsimonious way. A pure sigmoid relationship has an asymptote at both ends of the range of a continuous covariate. Curves with a single asymptote are also important in practice. Many smoothers, such as fractional polynomials and restricted cubic regression splines, cannot accurately represent doubly asymptotic curves. Such smoothers may struggle even with singly asymptotic curves. Our approach to modeling s...
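A sketch of direct parametric fitting of a doubly asymptotic curve, the case the abstract says defeats fractional polynomials and restricted cubic splines (a 4-parameter logistic chosen for illustration; synthetic data, not the authors' method):

```python
import numpy as np
from scipy.optimize import curve_fit

rng = np.random.default_rng(3)

def logistic4(x, lo, hi, x0, s):
    # Doubly asymptotic sigmoid: lo and hi are the two asymptotes.
    return lo + (hi - lo) / (1 + np.exp(-(x - x0) / s))

x = np.linspace(-5.0, 5.0, 200)
y = logistic4(x, 1.0, 3.0, 0.5, 0.8) + rng.normal(0.0, 0.05, x.size)

# Nonlinear least squares with data-driven starting values.
p, _ = curve_fit(logistic4, x, y, p0=[y.min(), y.max(), 0.0, 1.0])
print(p)
```

Because the asymptotes lo and hi are explicit parameters, this family represents sigmoid relationships parsimoniously where global polynomial smoothers oscillate or diverge in the tails.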
Cheng, Jin; Yu, Kuang; Libisch, Florian; Dieterich, Johannes M; Carter, Emily A
2017-03-14
Quantum mechanical embedding theories partition a complex system into multiple spatial regions that can use different electronic structure methods within each, to optimize trade-offs between accuracy and cost. The present work incorporates accurate but expensive correlated wave function (CW) methods for a subsystem containing the phenomenon or feature of greatest interest, while self-consistently capturing quantum effects of the surroundings using fast but less accurate density functional theory (DFT) approximations. We recently proposed two embedding methods [for a review, see: Acc. Chem. Res. 2014 , 47 , 2768 ]: density functional embedding theory (DFET) and potential functional embedding theory (PFET). DFET provides a fast but non-self-consistent density-based embedding scheme, whereas PFET offers a more rigorous theoretical framework to perform fully self-consistent, variational CW/DFT calculations [as defined in part 1, CW/DFT means subsystem 1(2) is treated with CW(DFT) methods]. When originally presented, PFET was only tested at the DFT/DFT level of theory as a proof of principle within a planewave (PW) basis. Part 1 of this two-part series demonstrated that PFET can be made to work well with mixed Gaussian type orbital (GTO)/PW bases, as long as optimized GTO bases and consistent electron-ion potentials are employed throughout. Here in part 2 we conduct the first PFET calculations at the CW/DFT level and compare them to DFET and full CW benchmarks. We test the performance of PFET at the CW/DFT level for a variety of types of interactions (hydrogen bonding, metallic, and ionic). By introducing an intermediate CW/DFT embedding scheme denoted DFET/PFET, we show how PFET remedies different types of errors in DFET, serving as a more robust type of embedding theory.
Directory of Open Access Journals (Sweden)
Salih Yalcinbas
2016-01-01
Full Text Available In this paper, a new collocation method based on the Fibonacci polynomials is introduced to solve high-order linear Volterra integro-differential equations under given conditions. Numerical examples are included to demonstrate the applicability and validity of the proposed method, and comparisons are made with existing results. In addition, an error estimation based on the residual functions is presented for this method. The approximate solutions are improved by using this error estimation.
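A minimal sketch of the collocation-plus-residual-error idea, using a monomial basis in place of Fibonacci polynomials and a first-order Volterra equation with known solution y = e^x (my own toy problem, not an example from the paper):

```python
import numpy as np

# Collocation for y(x) = 1 + \int_0^x y(t) dt  (exact solution: e^x).
N = 8                                # polynomial degree
xc = np.linspace(0.0, 1.0, N + 1)    # collocation points

# Row i enforces  sum_j c_j * (x_i^j - x_i^(j+1)/(j+1)) = 1.
A = np.array([[x**j - x**(j + 1) / (j + 1) for j in range(N + 1)]
              for x in xc])
c = np.linalg.solve(A, np.ones(N + 1))

def y(x):
    return sum(cj * x**j for j, cj in enumerate(c))

def residual(x):
    # Residual R(x) = y_N(x) - 1 - \int_0^x y_N(t) dt, usable as an
    # a posteriori error estimate; it vanishes at collocation points.
    integral = sum(cj * x**(j + 1) / (j + 1) for j, cj in enumerate(c))
    return y(x) - 1.0 - integral

xt = np.linspace(0.0, 1.0, 101)
err = np.max(np.abs(y(xt) - np.exp(xt)))   # true error (known here)
res = np.max(np.abs(residual(xt)))         # residual-based estimate
print(err, res)
```

The residual function is computable without knowing the exact solution, which is what makes it useful for the error estimation and correction step the abstract describes.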
Globally covering a-priori regional gravity covariance models
Directory of Open Access Journals (Sweden)
D. Arabelos
2003-01-01
Full Text Available Gravity anomaly data generated using Wenzel's GPM98A model complete to degree 1800, from which OSU91A has been subtracted, have been used to estimate covariance functions for a set of globally covering equal-area blocks of size 22.5° × 22.5° at the Equator, having a 2.5° overlap. For each block an analytic covariance function model was determined. The models are based on 4 parameters: the depth to the Bjerhammar sphere (which determines the correlation), the free-air gravity anomaly variance, a scale factor of the OSU91A error degree-variances and a maximal summation index, N, of the error degree-variances. The depth of the Bjerhammar sphere varies from -134 km to nearly zero, N varies from 360 to 40, the scale factor from 0.03 to 38.0 and the gravity variance from 1081 to 24 (10 µm s⁻²)². The parameters are interpreted in terms of the quality of the data used to construct OSU91A and GPM98A and general conditions such as the occurrence of mountain chains. The variation of the parameters shows that it is necessary to use regional covariance models in order to obtain a realistic signal-to-noise ratio in global applications. Key words. GOCE mission, Covariance function, Spacewise approach
Two soft-error mitigation techniques for functional units of DSP processors
Rohani, A.; Kerkhoff, Hans G.
This paper presents two soft-error mitigation methods for DSP processors. Considering that a DSP processor is composed of several functional units and each functional unit constitutes of a control unit, some registers and combinational logic, a unique characteristic of DSP workloads has been
Comparing Smoothing Techniques for Fitting the Nonlinear Effect of Covariate in Cox Models.
Roshani, Daem; Ghaderi, Ebrahim
2016-02-01
The Cox model is a popular model in survival analysis which assumes that a covariate acts linearly on the log hazard function. However, continuous covariates can affect the hazard through more complicated nonlinear functional forms, so Cox models with continuous covariates are prone to misspecification when the correct functional form is not fitted. In this study, a smooth nonlinear covariate effect was approximated by different spline functions. We applied three flexible nonparametric smoothing techniques for nonlinear covariate effects in Cox models: penalized splines, restricted cubic splines and natural splines. The Akaike information criterion (AIC) and degrees of freedom were used for smoothing parameter selection in the penalized splines model. The ability of the nonparametric methods to recover the true functional form of linear, quadratic and nonlinear functions was evaluated using different simulated sample sizes. Data analysis was carried out using R 2.11.0 software and significance levels were set at 0.05. Based on AIC for smoothing parameter selection, the penalized spline method had consistently lower mean square error than the others. The same result was obtained with real data. Penalized spline smoothing, with AIC for smoothing parameter selection, was more accurate in evaluating the relation between a covariate and the log hazard function than the other methods.
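A sketch of the restricted (natural) cubic spline basis used for such covariate smoothing. The basis construction is the standard one (cubic between knots, linear beyond the boundary knots); the data, knot placement, and the plain least-squares response standing in for a Cox partial likelihood are all illustrative assumptions:

```python
import numpy as np

def rcs_basis(x, knots):
    # Restricted (natural) cubic spline basis: cubic between knots,
    # constrained to be linear beyond the boundary knots.
    x = np.asarray(x, dtype=float)
    k = np.sort(np.asarray(knots, dtype=float))
    def d(i):
        plus = lambda u: np.maximum(u, 0.0) ** 3
        return (plus(x - k[i]) - plus(x - k[-1])) / (k[-1] - k[i])
    cols = [np.ones_like(x), x]
    for i in range(len(k) - 2):
        cols.append(d(i) - d(len(k) - 2))
    return np.column_stack(cols)

rng = np.random.default_rng(4)
x = np.sort(rng.uniform(0.0, 3.0, 300))
f = np.sin(x)                         # "true" nonlinear covariate effect
y = f + rng.normal(0.0, 0.1, x.size)  # noisy log-hazard-like response

B = rcs_basis(x, knots=np.quantile(x, [0.05, 0.35, 0.65, 0.95]))
beta = np.linalg.lstsq(B, y, rcond=None)[0]
fit_err = np.max(np.abs(B @ beta - f))
print(B.shape, fit_err)
```

In an actual Cox analysis the columns of B would enter the model as covariates and beta would be estimated by maximizing the partial likelihood rather than by least squares; penalized splines add a roughness penalty on beta, tuned here by AIC.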
Treatment Effects with Many Covariates and Heteroskedasticity
DEFF Research Database (Denmark)
Cattaneo, Matias D.; Jansson, Michael; Newey, Whitney K.
The linear regression model is widely used in empirical work in Economics. Researchers often include many covariates in their linear model specification in an attempt to control for confounders. We give inference methods that allow for many covariates and heteroskedasticity. Our results...... then propose a new heteroskedasticity consistent standard error formula that is fully automatic and robust to both (conditional) heteroskedasticity of unknown form and the inclusion of possibly many covariates. We apply our findings to three settings: (i) parametric linear models with many covariates, (ii......) semiparametric semi-linear models with many technical regressors, and (iii) linear panel models with many fixed effects...
The effects of a nighttime nap on the error-monitoring functions during extended wakefulness.
Asaoka, Shoichi; Fukuda, Kazuhiko; Murphy, Timothy I; Abe, Takashi; Inoue, Yuichi
2012-06-01
To examine the effects of a 1-hr nighttime nap, and the associated sleep inertia, on the error-monitoring functions during extended wakefulness using the 2 event-related potential components thought to reflect error detection and emotional or motivational evaluation of the error, i.e., the error-related negativity/error-negativity (ERN/Ne) and error-positivity (Pe), respectively. Participants awakened at 07:00 the morning of the experimental day, and performed a stimulus-response compatibility (arrow-orientation) task at 21:00, 02:00, and 03:00. A cognitive task with EEG data recording was performed in a laboratory setting. Twenty young adults (mean age 21.3 ± 1.0 yr, 14 males) participated. Half of the participants took a 1-hr nap, and the others had a 1-hr awake-rest period from 01:00-02:00. Behavioral performance and amplitude of the Pe declined after midnight (i.e., 02:00 and 03:00) compared with the 21:00 task period in both groups. During the task period starting at 03:00, the participants in the awake-rest condition reported less alertness and showed fewer correct responses than those who napped. However, there were no effects of a nap on the amplitude of the ERN/Ne or Pe. Our results suggest that a 1-hr nap can alleviate the decline in subjective alertness and response accuracy during nighttime; however, error-monitoring functions, especially emotional or motivational evaluation of the error, might remain impaired by extended wakefulness even after the nap. This phenomenon could imply that night-shift workers experiencing extended wakefulness should not overestimate the positive effects of a nighttime 1-hr nap during extended wakefulness.
Leitão, Sofia; Stadler, Alfred; Peña, M. T.; Biernat, Elmar P.
2017-10-01
We use the covariant spectator theory with an effective quark-antiquark interaction, containing Lorentz scalar, pseudoscalar, and vector contributions, to calculate the masses and vertex functions of heavy and heavy-light mesons simultaneously. We perform least-squares fits of the model parameters, including the quark masses, to the meson spectrum and systematically study the sensitivity of the parameters with respect to different sets of fitted data. We investigate the influence of the vector confining interaction by using a continuous parameter controlling its weight. We find that vector contributions to the confining interaction between 0% and about 30% lead to essentially the same agreement with the data. Similarly, the light quark masses are not very tightly constrained. In all cases, the meson mass spectra calculated with our fitted models agree very well with the experimental data. We also calculate the meson wave functions in a partial wave representation and show how they are related to the meson vertex functions in covariant form.
Construction of secure and fast hash functions using nonbinary error-correcting codes
DEFF Research Database (Denmark)
Knudsen, Lars Ramkilde; Preneel, Bart
2002-01-01
This paper considers iterated hash functions. It proposes new constructions of fast and secure compression functions with nl-bit outputs for integers n>1 based on error-correcting codes and secure compression functions with l-bit outputs. This leads to simple and practical hash function...... constructions based on block ciphers such as the Data Encryption Standard (DES), where the key size is slightly smaller than the block size; IDEA, where the key size is twice the block size; Advanced Encryption Standard (AES), with a variable key size; and to MD4-like hash functions. Under reasonable...... assumptions about the underlying compression function and/or block cipher, it is proved that the new hash functions are collision resistant. More precisely, a lower bound is shown on the number of operations to find a collision as a function of the strength of the underlying compression function. Moreover...
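The iterated-hash framework itself is easy to sketch. Below is a toy Merkle-Damgård iteration with a placeholder compression function (built from truncated SHA-256 purely for illustration; it is NOT the paper's code-based construction, whose point is to build the compression function from error-correcting codes and block ciphers):

```python
import hashlib

L = 16  # output bytes of the toy compression function

def compress(state: bytes, block: bytes) -> bytes:
    # Placeholder l-bit compression function (truncated SHA-256).
    return hashlib.sha256(state + block).digest()[:L]

def iterated_hash(msg: bytes, block_size: int = 32) -> bytes:
    # Merkle-Damgard strengthening: pad, then append the message length,
    # so collision resistance reduces to that of the compression function.
    msg = (msg + b"\x80" + b"\x00" * (-(len(msg) + 9) % block_size)
           + len(msg).to_bytes(8, "big"))
    state = b"\x00" * L  # fixed initial value (IV)
    for i in range(0, len(msg), block_size):
        state = compress(state, msg[i:i + block_size])
    return state

print(iterated_hash(b"abc").hex())
```

The paper's contribution lives inside `compress`: nonbinary error-correcting codes spread the input across several parallel block-cipher calls so that a collision forces many simultaneous inner collisions, which is what yields the proven lower bound.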
Density-functional errors in ionization potential with increasing system size
Energy Technology Data Exchange (ETDEWEB)
Whittleton, Sarah R.; Sosa Vazquez, Xochitl A.; Isborn, Christine M., E-mail: cisborn@ucmerced.edu [Chemistry and Chemical Biology, School of Natural Sciences, University of California, Merced, 5200 North Lake Road, Merced, California 95343 (United States); Johnson, Erin R., E-mail: erin.johnson@dal.ca [Chemistry and Chemical Biology, School of Natural Sciences, University of California, Merced, 5200 North Lake Road, Merced, California 95343 (United States); Department of Chemistry, Dalhousie University, 6274 Coburg Road, Halifax, Nova Scotia B3H 4R2 (Canada)
2015-05-14
This work investigates the effects of molecular size on the accuracy of density-functional ionization potentials for a set of 28 hydrocarbons, including series of alkanes, alkenes, and oligoacenes. As the system size increases, delocalization error introduces a systematic underestimation of the ionization potential, which is rationalized by considering the fractional-charge behavior of the electronic energies. The computation of the ionization potential with many density-functional approximations is not size-extensive due to excessive delocalization of the incipient positive charge. While inclusion of exact exchange reduces the observed errors, system-specific tuning of long-range corrected functionals does not generally improve accuracy. These results emphasize that good performance of a functional for small molecules is not necessarily transferable to larger systems.
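The fractional-charge rationalization can be illustrated with a toy energy curve (all numbers invented): the exact E(N) is piecewise linear between integer electron numbers, and convex curvature, the signature of delocalization error, makes the slope approaching the neutral system too shallow, underestimating the ionization potential.

```python
# Toy fractional-charge picture; energies and curvature are invented.
E_int = {9: -100.0, 10: -110.0}       # toy integer-N energies
IP_exact = E_int[9] - E_int[10]       # exact IP = 10.0

def E_exact(N):
    # Exact E(N): linear interpolation between integer points.
    f = N - 9
    return (1 - f) * E_int[9] + f * E_int[10]

def E_approx(N, curvature=2.0):
    # Approximate functional: convex deviation -c*f*(1-f) models
    # delocalization error.
    f = N - 9
    return E_exact(N) - curvature * f * (1 - f)

# The slope as N -> 10 from below approximates -IP; convexity makes
# it too shallow, so the functional underestimates the IP.
h = 1e-6
IP_approx = -(E_approx(10) - E_approx(10 - h)) / h
print(IP_exact, IP_approx)
```

As molecules grow, the effective curvature per unit charge does not vanish, which is one way to picture why the underestimation in the abstract is systematic rather than size-extensive.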
A functional type a posteriori error analysis for the Ramberg-Osgood model
Bildhauer, Michael; Fuchs, Martin; Repin, Sergey
2007-01-01
We discuss the weak form of the Ramberg-Osgood equations (also known as the Norton-Hoff model) for nonlinear elastic materials and prove functional type a posteriori error estimates for the difference of the exact stress tensor and any tensor from the admissible function space. These equations are of great importance since they can be used as an approximation for elastic-perfectly plastic Hencky materials.
Wellendorff, Jess; Lundgaard, Keld T.; Møgelhøj, Andreas; Petzold, Vivien; Landis, David D.; Nørskov, Jens K.; Bligaard, Thomas; Jacobsen, Karsten W.
2012-06-01
A methodology for semiempirical density functional optimization, using regularization and cross-validation methods from machine learning, is developed. We demonstrate that such methods enable well-behaved exchange-correlation approximations in very flexible model spaces, thus avoiding the overfitting found when standard least-squares methods are applied to high-order polynomial expansions. A general-purpose density functional for surface science and catalysis studies should accurately describe bond breaking and formation in chemistry, solid state physics, and surface chemistry, and should preferably also include van der Waals dispersion interactions. Such a functional necessarily compromises between describing fundamentally different types of interactions, making transferability of the density functional approximation a key issue. We investigate this trade-off between describing the energetics of intramolecular and intermolecular, bulk solid, and surface chemical bonding, and the developed optimization method explicitly handles making the compromise based on the directions in model space favored by different materials properties. The approach is applied to designing the Bayesian error estimation functional with van der Waals correlation (BEEF-vdW), a semilocal approximation with an additional nonlocal correlation term. Furthermore, an ensemble of functionals around BEEF-vdW comes out naturally, offering an estimate of the computational error. An extensive assessment on a range of data sets validates the applicability of BEEF-vdW to studies in chemistry and condensed matter physics. Applications of the approximation and its Bayesian ensemble error estimate to two intricate surface science problems support this.
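The ensemble error estimate can be mimicked in a few lines (a schematic sketch with made-up numbers, not the actual BEEF-vdW expansion): each ensemble member perturbs the exchange-correlation expansion coefficients according to a fitted covariance, the quantity of interest is re-evaluated for each member, and the spread of the ensemble predictions serves as the error bar.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical: a reaction energy linear in the expansion coefficients a,
# E(a) = g . a, with g the fixed per-coefficient contributions (made up).
g = np.array([1.20, -0.35, 0.10, 0.02])      # eV per unit coefficient
a_best = np.array([1.0, 0.5, -0.2, 0.05])    # best-fit coefficients

# Ensemble: coefficient perturbations drawn from a covariance matched to the
# training-data spread (here a diagonal toy covariance).
cov = np.diag([0.02, 0.01, 0.005, 0.001])
ensemble = rng.multivariate_normal(a_best, cov, size=2000)

E_best = g @ a_best
E_ens = ensemble @ g
error_bar = E_ens.std()                      # Bayesian ensemble error estimate
print(f"E = {E_best:.3f} +/- {error_bar:.3f} eV")
```

Because the model is linear, the ensemble standard deviation converges to sqrt(gᵀ C g); in a real calculation the non-self-consistent re-evaluation of the ensemble energies is cheap compared with the original self-consistent run.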
Covariant quantum Markovian evolutions
Holevo, A. S.
1996-04-01
Quantum Markovian master equations with generally unbounded generators, having physically relevant symmetries, such as Weyl, Galilean or boost covariance, are characterized. It is proven in particular that a fully Galilean covariant zero spin Markovian evolution reduces to the free motion perturbed by a covariant stochastic process with independent stationary increments in the classical phase space. A general form of the boost covariant Markovian master equation is discussed and a formal dilation to the Langevin equation driven by quantum Boson noises is described.
Apanasovich, Tatiyana V.
2012-03-01
We introduce a valid parametric family of cross-covariance functions for multivariate spatial random fields where each component has a covariance function from the well-known Matérn class. Unlike previous attempts, our model allows for different smoothnesses and rates of correlation decay for any number of vector components. We present the conditions on the parameter space that result in valid models with varying degrees of complexity. We discuss practical implementations, including reparameterizations to reflect the conditions on the parameter space and an iterative algorithm to increase the computational efficiency. We perform various Monte Carlo simulation experiments to explore the performance of our approach in terms of estimation and cokriging. The application of the proposed multivariate Matérn model is illustrated on two meteorological datasets: temperature/pressure over the Pacific Northwest (bivariate) and wind/temperature/pressure in Oklahoma (trivariate). In the latter case, our flexible trivariate Matérn model is valid and yields better predictive scores compared with a parsimonious model with common scale parameters. © 2012 American Statistical Association.
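A minimal sketch of the Matérn covariance used for each component (assuming the common parameterization with scale a and smoothness ν; the paper's contribution is the validity conditions for the cross-covariances built on top of this):

```python
import numpy as np
from scipy.special import gamma, kv

def matern(h, sigma2=1.0, nu=0.5, a=1.0):
    """Matern covariance C(h) = sigma2 * 2^(1-nu)/Gamma(nu) * (a h)^nu * K_nu(a h)."""
    h = np.atleast_1d(np.asarray(h, dtype=float))
    out = np.full(h.shape, sigma2)          # C(0) = sigma2
    nz = h > 0
    x = a * h[nz]
    out[nz] = sigma2 * (2.0 ** (1.0 - nu) / gamma(nu)) * x ** nu * kv(nu, x)
    return out

# Sanity check: nu = 1/2 recovers the exponential covariance exp(-a h).
h = np.linspace(0.0, 3.0, 7)
print(matern(h, nu=0.5, a=2.0))
print(np.exp(-2.0 * h))
```

The smoothness ν controls the mean-square differentiability of the field; letting each component (and each cross-covariance) carry its own ν is exactly the flexibility the abstract claims over earlier multivariate constructions.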
Energy Technology Data Exchange (ETDEWEB)
Jimenez D, H.; Cabral P, A
1991-08-15
In this work it is demonstrated that the complex magnetic susceptibility of a spin system can be written in terms of the complex error function. It is also noted that this function with α = 0 satisfies the Kramers-Kronig relations. (Author)
Bias Errors due to Leakage Effects When Estimating Frequency Response Functions
Directory of Open Access Journals (Sweden)
Andreas Josefsson
2012-01-01
Frequency response functions are often utilized to characterize a system's dynamic response. For a wide range of engineering applications, it is desirable to determine frequency response functions for a system under stochastic excitation. In practice, the measurement data are contaminated by noise and some form of averaging is needed in order to obtain a consistent estimator. With Welch's method, the discrete Fourier transform is used and the data are segmented into smaller blocks so that averaging can be performed when estimating the spectrum. However, this segmentation introduces leakage effects. As a result, the estimated frequency response function suffers from both systematic (bias) and random errors due to leakage. In this paper the bias error in the H1 and H2 estimates is studied and a new method is proposed to derive an approximate expression for the relative bias error at the resonance frequency with different window functions. The method is based on using a sum of real exponentials to describe the window's deterministic autocorrelation function. Simple expressions are derived for a rectangular window and a Hanning window. The theoretical expressions are verified with numerical simulations and very good agreement is found between the results from the proposed bias expressions and the empirical results.
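The H1 estimator discussed above can be sketched with SciPy (a generic illustration, not the authors' code): a known single-resonance system is driven by noise, auto- and cross-spectra are averaged over Welch segments, and H1 = Gxy/Gxx; the segmentation and windowing are precisely where the leakage bias enters.

```python
import numpy as np
from scipy import signal

rng = np.random.default_rng(1)
fs = 1024.0

# Single resonance at 100 Hz as a second-order IIR peak filter (illustrative).
b, a = signal.iirpeak(w0=100.0, Q=30.0, fs=fs)

x = rng.standard_normal(2 ** 16)          # stochastic excitation
y = signal.lfilter(b, a, x)               # measured response

# Welch-averaged spectra; the block segmentation causes the leakage bias.
nperseg = 1024
f, Gxx = signal.welch(x, fs=fs, nperseg=nperseg)
_, Gxy = signal.csd(x, y, fs=fs, nperseg=nperseg)

H1 = Gxy / Gxx                            # H1 estimate of the FRF

# True FRF of the simulated system, for comparison at the resonance.
_, H_true = signal.freqz(b, a, worN=f, fs=fs)
print(np.max(np.abs(H1)), np.max(np.abs(H_true)))
```

Comparing |H1| against |H_true| at the resonance peak for different `nperseg` and window choices reproduces the bias behavior the paper analyzes analytically.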
Ranasinghe, Duminda S.; Margraf, Johannes T.; Jin, Yifan; Bartlett, Rodney J.
2017-01-01
Though contrary to conventional wisdom, the interpretation of all occupied Kohn-Sham eigenvalues as vertical ionization potentials is justified by several formal and numerical arguments. Similarly, the performance of density functional approximations (DFAs) for fractionally charged systems has been extensively studied as a measure of one- and many-electron self-interaction errors (MSIEs). These complementary perspectives (initially recognized in ab initio DFT) are shown to lead to the unifying concept that satisfying Bartlett's IP theorem in DFAs mitigates self-interaction errors. In this contribution, we show that the IP-optimized QTP functionals (reparameterizations of CAM-B3LYP in which all eigenvalues are approximately equal to vertical IPs) display reduced self-interaction errors in a variety of tests, including the He2+ potential curve. Conversely, the MSIE-optimized rCAM-B3LYP functional also displays accurate orbital eigenvalues. It is shown that the CAM-QTP and rCAM-B3LYP functionals show improved dissociation limits, fundamental gaps and thermochemical accuracy compared to their parent functional CAM-B3LYP.
Phonetic and phonological errors in children with high functioning autism and Asperger syndrome.
Cleland, Joanne; Gibbon, Fiona E; Peppé, Sue J E; O'Hare, Anne; Rutherford, Marion
2010-02-01
This study involved a qualitative analysis of speech errors in children with autism spectrum disorders (ASDs). Participants were 69 children aged 5-13 years; 30 had high functioning autism and 39 had Asperger syndrome. On a standardized test of articulation, a minority (12%) of participants presented with standard scores below the normal range, indicating a speech delay/disorder. Although all the other children had standard scores within the normal range, a sizeable proportion (33% of those with normal standard scores) presented with a small number of errors. Overall, 41% of the group produced at least some speech errors. The speech of children with ASD was characterized mainly by developmental phonological processes (gliding, cluster reduction and final consonant deletion most frequently), but non-developmental error types (such as phoneme-specific nasal emission and initial consonant deletion) were found both in children identified as performing below the normal range on the standardized speech test and in those who performed within the normal range. Non-developmental distortions occurred relatively frequently in the children with ASD, and previous studies of adolescents and adults with ASDs show similar errors, suggesting that they do not resolve over time. Whether or not speech disorders are related specifically to ASD, their presence adds an additional communication and social barrier and should be diagnosed and treated as early as possible in individual children.
A functional approach to movement analysis and error identification in sports and physical education
Directory of Open Access Journals (Sweden)
Ernst-Joachim Hossner
2015-09-01
In a hypothesis-and-theory paper, a functional approach to movement analysis in sports is introduced. In this approach, contrary to classical concepts, it is no longer the ideal movement of elite athletes that is taken as a template for the movements produced by learners. Instead, movements are understood as the means to solve given tasks that, in turn, are defined by to-be-achieved task goals. A functional analysis comprises the steps of (1) recognising constraints that define the functional structure, (2) identifying sub-actions that subserve the achievement of structure-dependent goals, (3) explicating modalities as specifics of the movement execution, and (4) assigning functions to actions, sub-actions and modalities. Regarding motor-control theory, a functional approach can be linked to a dynamical-system framework of behavioural shaping, to cognitive models of modular effect-related motor control as well as to explicit concepts of goal setting and goal achievement. Finally, it is shown that a functional approach is of particular help for sports practice in the context of structuring part practice, recognising functionally equivalent task solutions, finding innovative technique alternatives, distinguishing errors from style, and identifying root causes of movement errors.
Gorban, A N; Mirkes, E M; Zinovyev, A
2016-12-01
Most machine learning approaches have stemmed from applying the principle of minimizing the mean squared distance, based on computationally efficient quadratic optimization methods. However, when faced with high-dimensional and noisy data, quadratic error functionals demonstrate many weaknesses, including high sensitivity to contaminating factors and the curse of dimensionality. Therefore, many recent applications in machine learning have exploited properties of non-quadratic error functionals based on the L1 norm or even sub-linear potentials corresponding to quasinorms Lp (0 < p < 1). Piece-wise quadratic potentials of subquadratic growth (PQSQ potentials) make such error functionals tractable within standard machine learning methods, including methods of data approximation and regularized and sparse regression, leading to an improvement in the computational cost/accuracy trade-off. We demonstrate that on synthetic and real-life datasets PQSQ-based machine learning methods achieve orders of magnitude faster computational performance than the corresponding state-of-the-art methods, with similar or better approximation accuracy. Copyright © 2016 Elsevier Ltd. All rights reserved.
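The sensitivity of quadratic functionals to contamination is easy to demonstrate (a generic illustration of the motivation, not the PQSQ algorithm itself): for location estimation, the L2 minimizer is the mean and the L1 minimizer is the median, and only the latter resists gross outliers.

```python
import numpy as np

rng = np.random.default_rng(2)

# Clean data centered at 0, plus a few gross outliers.
data = np.concatenate([rng.normal(0.0, 1.0, 100), [50.0, 60.0, 70.0]])

l2_fit = data.mean()      # argmin of sum (x - m)^2  -> dragged by outliers
l1_fit = np.median(data)  # argmin of sum |x - m|    -> robust

print(f"L2 (mean):   {l2_fit:.3f}")
print(f"L1 (median): {l1_fit:.3f}")
```

The price of the non-quadratic functional is a harder optimization problem; the PQSQ idea is to approximate such potentials piece-wise quadratically so that fast quadratic solvers can still be used.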
Errors associated with IOLMaster biometry as a function of internal ocular dimensions.
Faria-Ribeiro, Miguel; Lopes-Ferreira, Daniela; López-Gil, Norberto; Jorge, Jorge; González-Méijome, José Manuel
2014-01-01
To evaluate the error in the estimation of axial length (AL) with the IOLMaster partial coherence interferometry (PCI) biometer and obtain a correction factor that varies as a function of AL and crystalline lens thickness (LT). Optical simulations were produced for theoretical eyes using Zemax-EE software. Thirty-three combinations of eleven different ALs (from 20mm to 30mm in 1mm steps) and three different LTs (3.6mm, 4.2mm and 4.8mm) were used. Errors were obtained by comparing the AL measured for a constant equivalent refractive index of 1.3549 with that for the actual combinations of indices and intra-ocular dimensions of LT and AL in each model eye. In the range from 20mm to 30mm AL and 3.6-4.8mm LT, the instrument measurements yielded an error between -0.043mm and +0.089mm. Regression analyses for the three LT conditions were combined in order to derive a correction factor as a function of the instrument-measured AL for each combination of AL and LT in the theoretical eye. The assumption of a single "average" refractive index in the estimation of AL by the IOLMaster PCI biometer induces only very small errors over a wide range of combinations of ocular dimensions. Even so, accurate estimation of those errors may help to improve the accuracy of intra-ocular lens calculations through exact ray tracing, particularly in longer eyes and eyes with thicker or thinner crystalline lenses. Copyright © 2013 Spanish General Council of Optometry. Published by Elsevier España. All rights reserved.
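Deriving a correction factor over the AL × LT grid amounts to a small regression problem; a sketch with invented coefficients standing in for the ray-tracing errors (the paper's actual regression coefficients are not reproduced here):

```python
import numpy as np

# Hypothetical instrument errors on the 11 x 3 grid of AL (mm) and LT (mm):
# error = b0 + b1*AL + b2*LT + noise (coefficients invented for illustration).
AL, LT = np.meshgrid(np.arange(20.0, 31.0), [3.6, 4.2, 4.8])
b_true = np.array([-0.30, 0.012, 0.015])
rng = np.random.default_rng(3)
err = b_true[0] + b_true[1] * AL + b_true[2] * LT + rng.normal(0, 1e-3, AL.shape)

# Least-squares fit of the correction factor over the full grid.
X = np.column_stack([np.ones(AL.size), AL.ravel(), LT.ravel()])
b_hat, *_ = np.linalg.lstsq(X, err.ravel(), rcond=None)
print(b_hat)   # close to b_true

def corrected_AL(al_measured, lt):
    """Subtract the predicted instrument error from the measured AL."""
    return al_measured - (b_hat[0] + b_hat[1] * al_measured + b_hat[2] * lt)
```

In practice LT itself would come from the biometer's lens-thickness measurement, and the corrected AL would then feed an exact ray-tracing IOL power calculation.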
Error-related functional connectivity of the thalamus in cocaine dependence
Directory of Open Access Journals (Sweden)
Sheng Zhang
2014-01-01
Error processing is a critical component of cognitive control, an executive function that has been widely implicated in substance misuse. In previous studies we showed that error-related activations of the thalamus predicted relapse to drug use in cocaine-addicted individuals (Luo et al., 2013). Here, we investigated whether the error-related functional connectivity of the thalamus is altered in cocaine-dependent patients (PCD, n = 54) as compared to demographically matched healthy individuals (HC, n = 54). The results of a generalized psychophysiological interaction analysis showed negative thalamic connectivity with the ventral medial prefrontal cortex (vmPFC), in the area of perigenual and subgenual anterior cingulate cortex, in HC but not PCD (p < 0.05, corrected, two-sample t test). This difference in functional connectivity was not observed for task-residual signals, suggesting that it is specific to task-related processes during cognitive control. Further, the thalamic-vmPFC connectivity was positively correlated with the amount of cocaine use in the prior month for female but not male PCD. These findings add to recent literature and provide additional evidence for circuit-level biomarkers of cocaine dependence.
A Bayesian sequential design using alpha spending function to control type I error.
Zhu, Han; Yu, Qingzhao
2017-10-01
We propose in this article a Bayesian sequential design using alpha spending functions to control the overall type I error in phase III clinical trials. We provide algorithms to calculate critical values, power, and sample sizes for the proposed design. Sensitivity analysis is implemented to check the effects of different prior distributions, and conservative priors are recommended. We compare the power and actual sample sizes of the proposed Bayesian sequential design with different alpha spending functions through simulations. We also compare the power of the proposed method with a frequentist sequential design using the same alpha spending function. Simulations show that, at the same sample size, the proposed method provides larger power than the corresponding frequentist sequential design. It also has larger power than a traditional Bayesian sequential design which sets equal critical values for all interim analyses. When compared with other alpha spending functions, the O'Brien-Fleming alpha spending function has the largest power and is the most conservative, in the sense that, at the same sample size, the null hypothesis is least likely to be rejected at an early stage of the trial. Finally, we show that adding a stop-for-futility step in the Bayesian sequential design can reduce the overall type I error and the actual sample sizes.
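The spending functions being compared have standard closed forms; a sketch (using SciPy's normal distribution) of the cumulative type I error α(t) allocated at information fraction t, showing how conservative O'Brien-Fleming-type spending is at early looks:

```python
from math import e, log

from scipy.stats import norm

ALPHA = 0.05

def obrien_fleming(t, alpha=ALPHA):
    """O'Brien-Fleming-type spending: alpha(t) = 2 - 2*Phi(z_{alpha/2}/sqrt(t))."""
    return 2.0 - 2.0 * norm.cdf(norm.ppf(1.0 - alpha / 2.0) / t ** 0.5)

def pocock(t, alpha=ALPHA):
    """Pocock-type spending: alpha(t) = alpha * ln(1 + (e - 1) t)."""
    return alpha * log(1.0 + (e - 1.0) * t)

for t in (0.25, 0.5, 0.75, 1.0):
    print(f"t={t:.2f}  OBF={obrien_fleming(t):.5f}  Pocock={pocock(t):.5f}")
```

Both functions spend the full α = 0.05 by t = 1, but the O'Brien-Fleming form allocates almost nothing at t = 0.25, which is why early rejection is least likely under it.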
Groenendijk, M.; Dolman, A.J.; Molen, van der M.K.; Leuning, R.; Arneth, A.; Delpierre, N.; Gash, J.H.C.; Lindroth, A.; Richardson, A.D.; Verbeeck, H.; Wohlfahrt, G.
2011-01-01
The vegetation component in climate models has advanced since the late 1960s from a uniform prescription of surface parameters to plant functional types (PFTs). PFTs are used in global land-surface models to provide parameter values for every model grid cell. With a simple photosynthesis model we …
Reducing Systematic Errors in Oxide Species with Density Functional Theory Calculations
DEFF Research Database (Denmark)
Christensen, Rune; Hummelshøj, Jens S.; Hansen, Heine Anton
2015-01-01
Density functional theory calculations can be used to gain valuable insight into the fundamental reaction processes in metal−oxygen systems, e.g., metal−oxygen batteries. Here, the ability of a range of different exchange-correlation functionals to reproduce experimental enthalpies of formation for different types of alkali and alkaline earth metal oxide species has been examined. Most examined functionals result in significant overestimation of the stability of superoxide species compared to peroxides and monoxides, which can result in erroneous prediction of reaction pathways. We show that if metal chlorides are used as reference structures instead of metals, the systematic errors are significantly reduced and functional variations decreased. Using a metal chloride reference, where the metal atoms are in the same oxidation state as in the oxide species, will provide a computationally inexpensive …
Directory of Open Access Journals (Sweden)
Md. Moyazzem Hossain
2015-02-01
In developing countries, the efficiency of economic development is determined by the analysis of industrial production. An examination of the characteristics of the industrial sector is an essential aspect of growth studies. Most developed countries are highly industrialized, in line with the maxim "the more industrialization, the more development". For proper industrialization and industrial development we have to study the industrial input-output relationship, which leads to production analysis. For a number of reasons econometricians believe that industrial production is the most important component of economic development: if domestic industrial production increases, GDP will increase; if the elasticity of labor is higher, employment rates will increase; and investment will increase if the elasticity of capital is higher. In this regard, this paper should be helpful in suggesting the most suitable Cobb-Douglas production function to forecast the production process for some selected manufacturing industries of developing countries like Bangladesh. This paper chooses the appropriate Cobb-Douglas function which gives the optimal combination of inputs, that is, the combination that enables the desired level of output to be produced with minimum cost and hence with maximum profitability, for some selected manufacturing industries of Bangladesh over the period 1978-79 to 2011-2012. The estimated results show that the estimates of both capital and labor elasticity of the Cobb-Douglas production function with additive errors are more efficient than those of the Cobb-Douglas production function with multiplicative errors.
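The distinction between the two error specifications determines the estimator (a generic sketch on synthetic data, not the paper's industry series): multiplicative errors make ln Q linear in ln K and ln L, so ordinary least squares applies, while additive errors call for nonlinear least squares on the original scale.

```python
import numpy as np
from scipy.optimize import curve_fit

rng = np.random.default_rng(4)
n = 200
K = rng.uniform(1.0, 10.0, n)       # capital input (synthetic)
L = rng.uniform(1.0, 10.0, n)       # labor input (synthetic)
A, alpha, beta = 2.0, 0.4, 0.6
Q = A * K**alpha * L**beta + rng.normal(0.0, 0.05, n)   # additive errors

# Multiplicative-error estimator: OLS on the log-linearized model.
X = np.column_stack([np.ones(n), np.log(K), np.log(L)])
coef, *_ = np.linalg.lstsq(X, np.log(Q), rcond=None)
A_log, alpha_log, beta_log = np.exp(coef[0]), coef[1], coef[2]

# Additive-error estimator: nonlinear least squares on the original scale.
def cobb_douglas(X, A, a, b):
    K, L = X
    return A * K**a * L**b

popt, _ = curve_fit(cobb_douglas, (K, L), Q, p0=[1.0, 0.5, 0.5])
print("log-OLS:", A_log, alpha_log, beta_log)
print("NLS:    ", popt)
```

When the true disturbance is additive, as in this simulation, the nonlinear fit matches the data-generating process, which is the sense in which the paper finds the additive-error estimates more efficient.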
Leskinen, Riitta; Laatikainen, Tiina; Peltonen, Markku; Levälahti, Esko; Antikainen, Riitta
2013-07-01
The functional status is one of the most important health measurements in the elderly. This study aimed to investigate the prevalence of self-reported physical and mental conditions among Finnish Second World War veterans during 1992-2004. We also aimed to study the ability of these conditions in 1992 to predict functional status impairment in 2004 and to determine whether the worsening of symptoms or the onset of new diseases during 1992-2004 was associated with impaired basic activities of daily living (BADL) and instrumental activities of daily living (IADL) in 2004. The study population was 4,999 veterans living in Finland participating in both the Veteran Project 1992 and 2004. Logistic regression models were employed to identify predictors for impaired BADL and IADL. Analyses were conducted separately for men with and without disability and for women. The highest risk estimate for impaired BADL in 2004 was in men without disability who had a neurological disease in 1992 [odds ratio (OR): 5.78, 95% CI: 2.49-13.43], in men with disability with walking difficulties in 1992 (OR: 2.41, 95% CI: 1.79-3.25) and in women with a musculoskeletal disease in 1992 (OR: 2.39, 95% CI: 1.58-3.62). For impaired IADL, walking difficulties had the highest risk estimate in all veteran groups. Mental and physical conditions, especially walking difficulties, can predict veterans' future functional impairment even 12 years in advance, and worsening of these conditions is associated with impaired ADL.
On the co-variation between form and function of adnominal possessive modifiers in Dutch and English
DEFF Research Database (Denmark)
Rijkhoff, Jan
2009-01-01
This contribution is concerned with Dutch and to a lesser extent English possessive modifiers introduced by the preposition of (Dutch van), as in a woman OF INFLUENCE or (Dutch) de auto VAN MIJN BROER (the car OF MY BROTHER) ‘my brother’s car’. The main goal of this paper is to demonstrate … generalizations can be made about members of different form classes (e.g. adjectives and possessives) if modifiers are characterized in functional rather than formal terms. This paper is restricted to possessive modifiers of nouns that denote concrete objects.
Bergshoeff, E.; Pope, C.N.; Stelle, K.S.
1990-01-01
We discuss the notion of higher-spin covariance in w∞ gravity. We show how a recently proposed covariant w∞ gravity action can be obtained from non-chiral w∞ gravity by making field redefinitions that introduce new gauge-field components with corresponding new gauge transformations.
Karakatsanis, Konstantinos; Lalazissis, G. A.; Ring, Peter; Litvinova, Elena
2017-03-01
Spin-orbit splitting is an essential ingredient for our understanding of the shell structure in nuclei. One of the most important advantages of relativistic mean-field (RMF) models in nuclear physics is the fact that the large spin-orbit (SO) potential emerges automatically from the inclusion of Lorentz-scalar and -vector potentials in the Dirac equation. It is therefore of great importance to compare the results of such models with experimental data. We investigate the size of the 2p and 1f splittings for the isotone chain 40Ca, 38Ar, 36S, and 34Si in the framework of various relativistic density functionals. The results are compared with those of nonrelativistic models and with recent experimental data.
ERP components on reaction errors and their functional significance: a tutorial.
Falkenstein, M; Hoormann, J; Christ, S; Hohnsbein, J
2000-01-01
Some years ago we described a negative (Ne) and a later positive (Pe) deflection in the event-related brain potentials (ERPs) of incorrect choice reactions [Falkenstein, M., Hohnsbein, J., Hoormann, J., Blanke, L., 1990. In: Brunia, C.H.M., Gaillard, A.W.K., Kok, A. (Eds.), Psychophysiological Brain Research. Tilburg University Press, Tilburg, pp. 192-195. Falkenstein, M., Hohnsbein, J., Hoormann, J., 1991. Electroencephalography and Clinical Neurophysiology, 78, 447-455]. Originally we assumed the Ne to represent a correlate of error detection in the sense of a mismatch signal when representations of the actual response and the required response are compared. This hypothesis was supported by the results of a variety of experiments from our own laboratory and that of Coles [Gehring, W.J., Goss, B., Coles, M.G.H., Meyer, D.E., Donchin, E., 1993. Psychological Science 4, 385-390. Bernstein, P.S., Scheffers, M.K., Coles, M.G.H., 1995. Journal of Experimental Psychology: Human Perception and Performance 21, 1312-1322. Scheffers, M.K., Coles, M.G.H., Bernstein, P., Gehring, W.J., Donchin, E., 1996. Psychophysiology 33, 42-54]. However, new data from our laboratory and that of Vidal et al. [Vidal, F., Hasbroucq, T., Bonnet, M., 1999. Biological Psychology, 2000] revealed a small negativity similar to the Ne also after correct responses. Since the above-mentioned comparison process is also required after correct responses, it is conceivable that the Ne reflects this comparison process itself rather than its outcome. As to the Pe, our results suggest that this is a further error-specific component, which is independent of the Ne, and hence associated with a later aspect of error processing or post-error processing. Our new results with different age groups argue against the hypotheses that the Pe reflects conscious error processing or the post-error adjustment of response strategies. Further research is necessary to specify the functional significance of the Pe.
Fröb, Markus B; Lima, William C C
2016-01-01
We construct the graviton two-point function for a two-parameter family of linear covariant gauges in n-dimensional de Sitter space. The construction is performed via the mode-sum method in the Bunch-Davies vacuum in the Poincaré patch, and a Fierz-Pauli mass term is introduced to regularize the infrared (IR) divergences. The resulting two-point function is de Sitter-invariant and free of IR divergences in the massless limit (for a certain range of parameters), though analytic continuation with respect to the mass for the pure-gauge sector of the two-point function is necessary for this result. This general result agrees with the propagator obtained by analytic continuation from the sphere [Phys. Rev. D 34, 3670 (1986); Class. Quant. Grav. 18, 4317 (2001)]. However, if one starts with a strictly zero-mass theory, the IR divergences are absent only for a specific value of one of the two parameters, with the other parameter left generic. These findings agree with recent calculations in the Landau (exact) gauge ...
Yan, Ying; Yi, Grace Y
2016-07-01
Covariate measurement error occurs commonly in survival analysis. Under the proportional hazards model, measurement error effects have been well studied, and various inference methods have been developed to correct for error effects under such a model. In contrast, error-contaminated survival data under the additive hazards model have received relatively less attention. In this paper, we investigate this problem by exploring measurement error effects on parameter estimation and the change of the hazard function. New insights of measurement error effects are revealed, as opposed to well-documented results for the Cox proportional hazards model. We propose a class of bias correction estimators that embraces certain existing estimators as special cases. In addition, we exploit the regression calibration method to reduce measurement error effects. Theoretical results for the developed methods are established, and numerical assessments are conducted to illustrate the finite sample performance of our methods.
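The attenuation caused by covariate measurement error, and its correction by regression calibration, can be sketched for a simple linear outcome model (a generic illustration with known error variance, not the hazards-model estimators developed in the paper):

```python
import numpy as np

rng = np.random.default_rng(5)
n = 50_000
beta = 1.0
sigma_x2, sigma_u2 = 1.0, 1.0                   # variances of true X and error U

X = rng.normal(0.0, np.sqrt(sigma_x2), n)       # true covariate (unobserved)
W = X + rng.normal(0.0, np.sqrt(sigma_u2), n)   # error-prone surrogate
Y = beta * X + rng.normal(0.0, 0.5, n)

# Naive slope regressing Y on W is attenuated by lambda = sx2 / (sx2 + su2).
naive = np.cov(W, Y)[0, 1] / np.var(W)

# Regression calibration: replace W by E[X | W] = lambda * W (zero means),
# which for this model reduces to rescaling the naive slope by 1/lambda.
lam = sigma_x2 / (sigma_x2 + sigma_u2)
calibrated = naive / lam

print(f"naive      = {naive:.3f}")
print(f"calibrated = {calibrated:.3f}")
```

With equal signal and error variances the naive slope is attenuated to about half the true effect; the calibrated estimate recovers it. In survival settings the correction enters through the calibrated covariate in the hazard model rather than a simple rescaling.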
Gibbons, Laura E; Crane, Paul K; Mehta, Kala M; Pedraza, Otto; Tang, Yuxiao; Manly, Jennifer J; Narasimhalu, Kaavya; Teresi, Jeanne; Jones, Richard N; Mungas, Dan
2011-04-28
Differential item functioning (DIF) occurs when a test item has different statistical properties in subgroups, controlling for the underlying ability measured by the test. DIF assessment is necessary when evaluating measurement bias in tests used across different language groups. However, other factors such as educational attainment can differ across language groups, and DIF due to these other factors may also exist. How to conduct DIF analyses in the presence of multiple, correlated factors remains largely unexplored. This study assessed DIF related to Spanish versus English language in a 44-item object naming test. Data come from a community-based sample of 1,755 Spanish- and English-speaking older adults. We compared simultaneous accounting, a new strategy for handling differences in educational attainment across language groups, with existing methods. Compared to other methods, simultaneously accounting for language- and education-related DIF yielded salient differences in some object naming scores, particularly for Spanish speakers with at least 9 years of education. Accounting for factors that vary across language groups can be important when assessing language DIF. The use of simultaneous accounting will be relevant to other cross-cultural studies in cognition and in other fields, including health-related quality of life.
LÉVY-BASED ERROR PREDICTION IN CIRCULAR SYSTEMATIC SAMPLING
Directory of Open Access Journals (Sweden)
Kristjana Ýr Jónsdóttir
2013-06-01
In the present paper, Lévy-based error prediction in circular systematic sampling is developed. A model-based statistical setting as in Hobolth and Jensen (2002) is used, but the assumption that the measurement function is Gaussian is relaxed. The measurement function is represented as a periodic stationary stochastic process X obtained by a kernel smoothing of a Lévy basis. The process X may have an arbitrary covariance function. The distribution of the error predictor, based on measurements in n systematic directions, is derived. Statistical inference is developed for the model parameters in the case where the covariance function follows the celebrated p-order covariance model.
Multivariate covariance generalized linear models
DEFF Research Database (Denmark)
Bonat, W. H.; Jørgensen, Bent
2016-01-01
We propose a general framework for non-normal multivariate data analysis called multivariate covariance generalized linear models, designed to handle multivariate response variables, along with a wide range of temporal and spatial correlation structures defined in terms of a covariance link function combined with a matrix linear predictor involving known matrices. The method is motivated by three data examples that are not easily handled by existing methods. The first example concerns multivariate count data, the second involves response variables of mixed types, combined with repeated measures and longitudinal structures, and the third involves a spatiotemporal analysis of rainfall data. The models take non-normality into account in the conventional way by means of a variance function, and the mean structure is modelled by means of a link function and a linear predictor. The models …
Covariance differences of linearly representable sequences in Hilbert ...
African Journals Online (AJOL)
The paper introduces the concept of covariance differences of a sequence and establishes its relationship with the covariance function. One of the main results of the paper is a criterion for the linear representability of sequences in Hilbert spaces.
Covariant quantizations in plane and curved spaces
Energy Technology Data Exchange (ETDEWEB)
Assirati, J.L.M. [University of Sao Paulo, Institute of Physics, Sao Paulo (Brazil); Gitman, D.M. [Tomsk State University, Department of Physics, Tomsk (Russian Federation); P.N. Lebedev Physical Institute, Moscow (Russian Federation); University of Sao Paulo, Institute of Physics, Sao Paulo (Brazil)
2017-07-15
We present covariant quantization rules for nonsingular finite-dimensional classical theories with flat and curved configuration spaces. First, we construct a family of covariant quantizations in flat spaces and Cartesian coordinates. This family is parametrized by a function ω(θ), θ ∈ (0, 1), which describes an ambiguity of the quantization. We generalize this construction, presenting covariant quantizations of theories with flat configuration spaces but with arbitrary curvilinear coordinates. Then we construct a so-called minimal family of covariant quantizations for theories with curved configuration spaces. This family of quantizations is parametrized by the same function ω(θ). Finally, we describe a wider family of covariant quantizations in curved spaces. This family is parametrized by two functions, the previous ω(θ) and an additional function Θ(x,ξ). The above-mentioned minimal family is the Θ = 1 part of the wider family of quantizations. We study the constructed quantizations in detail, proving their consistency and covariance. As a physical application, we consider the quantization of a non-relativistic particle moving in a curved space, discussing the problem of a quantum potential. Applying the covariant quantizations in flat spaces to the old problem of constructing the quantum Hamiltonian in polar coordinates, we directly obtain the correct result. (orig.)
Moore, Christopher; Boerner, Jeremy; Moore, Stan; Cartwright, Keith; Pointon, Timothy
2014-10-01
Many PIC simulations span several orders of magnitude in plasma density, so a constant particle weight yields too few particles in regions (or time periods) of low density and too many where the density is high. The standard solution is a reweighting scheme in which low-weight particles are merged in order to keep the number of particles per cell roughly constant while conserving mass and momentum. Unfortunately, merging schemes distort a general velocity distribution function (VDF) of the particles (one can conserve arbitrarily high moments, such as energy flux, by merging N particles into M for N > M > 1), and merge routines often act like artificial collisions that thermalize the distribution and lead to simulation error. We will compare the accuracy of the unique reweighting scheme used in our PIC-DSMC code against common reweighting schemes (e.g., redrawing from a constructed VDF or rouletting) through two benchmarks. The first compares the time-varying VDF from various merge routines to an analytic solution for the relaxation of a bimodal VDF to a Maxwellian through elastic collisions. The second benchmark compares the error introduced into the VDF by merging electrons during a breakdown simulation. Sandia National Laboratories is a multiprogram laboratory managed and operated by Sandia Corporation, a wholly owned subsidiary of Lockheed Martin Corporation, for the U.S. Department of Energy's NNSA under Contract DE-AC04-94AL85000.
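The conservation constraints behind such N-to-M merges can be illustrated with a minimal sketch (hypothetical, not the code's actual routine): merging N weighted particles into two, placed symmetrically about the mean velocity so that mass, momentum, and kinetic energy are all conserved exactly.

```python
import numpy as np

def merge_to_two(weights, velocities):
    """Merge N weighted particles into 2, conserving mass, momentum and energy.

    The two merged particles sit at vbar +/- dv*n with equal weights, where
    dv is chosen so the thermal (internal) energy is restored exactly.
    """
    W = weights.sum()                                        # total weight (mass)
    vbar = (weights[:, None] * velocities).sum(axis=0) / W   # mean velocity
    E = 0.5 * (weights * (velocities**2).sum(axis=1)).sum()  # total kinetic energy
    E_th = max(E - 0.5 * W * (vbar**2).sum(), 0.0)           # thermal part
    d = velocities[0] - vbar                                 # split direction
    nrm = np.linalg.norm(d)
    n = d / nrm if nrm > 0 else np.array([1.0, 0.0, 0.0])
    dv = np.sqrt(2.0 * E_th / W)                             # restores E exactly
    w_new = np.array([W / 2, W / 2])
    v_new = np.array([vbar + dv * n, vbar - dv * n])
    return w_new, v_new
```

Note that only three moments (mass, momentum, energy) survive; higher moments of the original VDF are discarded, which is precisely the thermalizing distortion the abstract describes.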
Linear covariance analysis for gimbaled pointing systems
Christensen, Randall S.
Linear covariance analysis has been utilized in a wide variety of applications. Historically, the theory has made significant contributions to navigation system design and analysis. More recently, the theory has been extended to capture the combined effect of navigation errors and closed-loop control on the performance of the system. These advancements have made possible rapid analysis and comprehensive trade studies of complicated systems ranging from autonomous rendezvous to vehicle ascent trajectory analysis. Comprehensive trade studies are also needed in the area of gimbaled pointing systems, where the information needs differ from previous applications. It is therefore the objective of this research to extend the capabilities of linear covariance theory to analyze the closed-loop navigation and control of a gimbaled pointing system. The extensions developed in this research include modifying the linear covariance equations to accommodate a wider variety of controllers. This enables the analysis of controllers common to gimbaled pointing systems, with internal states and associated dynamics as well as actuator command filtering and auxiliary controller measurements. The second extension is the extraction of power spectral density estimates from information available in linear covariance analysis. This information is especially important to gimbaled pointing systems, where not just the variance but also the spectrum of the pointing error impacts the performance. The extended theory is applied to a model of a gimbaled pointing system which includes both flexible and rigid body elements as well as input disturbances, sensor errors, and actuator errors. The results of the analysis are validated by direct comparison to a Monte Carlo-based analysis approach. Once the developed linear covariance theory is validated, analysis techniques that are often prohibitive with Monte Carlo analysis are used to gain further insight into the system. These include the creation
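The core operation of linear covariance analysis, propagating a state covariance through linear dynamics with process noise in a single pass rather than via many Monte Carlo runs, can be sketched as follows (a generic discrete-time position/velocity model, not the gimbal model of the study):

```python
import numpy as np

# Discrete-time linear covariance propagation: P_{k+1} = F P F^T + Q.
F = np.array([[1.0, 0.1],
              [0.0, 1.0]])          # simple position/velocity model, dt = 0.1
Q = np.diag([1e-4, 1e-3])           # process-noise covariance (illustrative)

def propagate_covariance(P0, steps):
    """Propagate the state covariance through `steps` time steps."""
    P = P0.copy()
    for _ in range(steps):
        P = F @ P @ F.T + Q
    return P
```

A single covariance propagation replaces an ensemble of sampled trajectories, which is why trade studies that are prohibitive with Monte Carlo become cheap; the Monte Carlo comparison below is only a consistency check.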
Directory of Open Access Journals (Sweden)
Cai Ligang
2017-01-01
Full Text Available Rather than improving machine tool accuracy by blindly increasing the precision of key components in the production process, a method combining the SNR quality loss function with machine tool geometric error correlation analysis is adopted to optimize the geometric errors of a five-axis machine tool. Firstly, the homogeneous transformation matrix method is used to build the five-axis machine tool geometric error model. Secondly, the SNR quality loss function is used for cost modeling. Then, the machine tool accuracy optimization objective function is established based on the correlation analysis. Finally, ISIGHT combined with MATLAB is applied to optimize each error. The results show that this method is reasonable and makes it possible to relax tolerance ranges appropriately, thereby reducing the manufacturing cost of machine tools.
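The homogeneous-transformation-matrix step can be sketched minimally: each axis contributes a 4x4 transform whose small positional and angular errors are chained to give the tool-tip error (the two-axis chain and the error values below are illustrative, not the paper's actual model).

```python
import numpy as np

def htm_error(dx, dy, dz, ea, eb, ec):
    """First-order homogeneous transformation matrix for small geometric
    errors: offsets dx, dy, dz and angular errors ea, eb, ec about x, y, z."""
    return np.array([[1.0, -ec,  eb,  dx],
                     [ ec, 1.0, -ea,  dy],
                     [-eb,  ea, 1.0,  dz],
                     [0.0, 0.0, 0.0, 1.0]])

# Chain the error HTMs of two (hypothetical) axes and compare to the ideal chain.
T_ideal = np.eye(4)
T_real = (htm_error(5e-6, 2e-6, 0.0, 1e-6, 0.0, 0.0)
          @ htm_error(0.0, 0.0, 1e-6, 0.0, 2e-6, 0.0))
p = np.array([0.0, 0.0, 0.1, 1.0])      # a point 100 mm along z (metres)
err = (T_real - T_ideal) @ p            # tool-tip position error vector
```

The Abbe effect is visible directly: the angular error of each axis is amplified by the 0.1 m lever arm before adding to the pure offsets.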
DEFF Research Database (Denmark)
Hansen, Peter Reinhard; Lunde, Asger
2014-01-01
An economic time series can often be viewed as a noisy proxy for an underlying economic variable. Measurement errors will influence the dynamic properties of the observed process and may conceal the persistence of the underlying time series. In this paper we develop instrumental variable (IV......) methods for extracting information about the latent process. Our framework can be used to estimate the autocorrelation function of the latent volatility process and a key persistence parameter. Our analysis is motivated by the recent literature on realized volatility measures that are imperfect estimates...... of actual volatility. In an empirical analysis using realized measures for the Dow Jones industrial average stocks, we find the underlying volatility to be near unit root in all cases. Although standard unit root tests are asymptotically justified, we find them to be misleading in our application despite...
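The IV idea can be illustrated on a simulated latent AR(1) process observed with noise (a toy model, not the authors' realized-measure framework): OLS of the proxy on its own lag is attenuated by the measurement error, while instrumenting the lag with the second lag recovers the persistence parameter.

```python
import numpy as np

# Latent AR(1) process observed through a noisy proxy.
rng = np.random.default_rng(42)
T, rho = 200_000, 0.95
eps = rng.normal(size=T)
x = np.zeros(T)
for t in range(1, T):
    x[t] = rho * x[t - 1] + eps[t]          # latent persistent series
y = x + rng.normal(scale=2.0, size=T)       # observed proxy with measurement error

# OLS of y_t on y_{t-1} is biased toward zero by the noise ...
ols = np.cov(y[1:], y[:-1])[0, 1] / np.var(y[:-1])
# ... but using y_{t-2} as an instrument for y_{t-1} recovers rho, since
# Cov(y_t, y_{t-2}) / Cov(y_{t-1}, y_{t-2}) = rho^2 g0 / (rho g0) = rho.
iv = np.cov(y[2:], y[:-2])[0, 1] / np.cov(y[1:-1], y[:-2])[0, 1]
```

The lag-2 instrument works because the measurement error is serially uncorrelated, so it cancels from both autocovariances in the ratio.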
DEFF Research Database (Denmark)
Hansen, Peter Reinhard; Lunde, Asger
An economic time series can often be viewed as a noisy proxy for an underlying economic variable. Measurement errors will influence the dynamic properties of the observed process and may conceal the persistence of the underlying time series. In this paper we develop instrumental variable (IV......) methods for extracting information about the latent process. Our framework can be used to estimate the autocorrelation function of the latent volatility process and a key persistence parameter. Our analysis is motivated by the recent literature on realized (volatility) measures, such as the realized...... variance, that are imperfect estimates of actual volatility. In an empirical analysis using realized measures for the DJIA stocks we find the underlying volatility to be near unit root in all cases. Although standard unit root tests are asymptotically justified, we find them to be misleading in our...
Watanabe, Takashi; Kurosawa, Kenji; Yoshizawa, Makoto
A Feedback Error Learning (FEL) scheme was found to be applicable to joint angle control by Functional Electrical Stimulation (FES) in our previous study. However, the FEL-FES controller had problems learning the inverse dynamics model (IDM) in some cases. In this paper, methods of applying FEL to FES control were examined in controlling 1-DOF movement of the wrist joint by stimulating two muscles, through computer simulation under several control conditions and with several subject models. The problems in applying FEL to the FES controller were traced to the restriction of stimulation intensity to positive values between the minimum and maximum intensities, and to cases of very small IDM output values. Learning of the IDM was greatly improved by taking the IDM output range into account and setting a minimum ANN output value when calculating the ANN connection weight changes.
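A toy sketch of the FEL arrangement under an intensity constraint like the one described above (a linear learner stands in for the ANN inverse model, a first-order plant replaces the wrist dynamics, and all parameter values are illustrative): the feedback command serves as the teaching signal for the feedforward inverse model, and the total command is clamped to a stimulation-like range.

```python
import numpy as np

# FEL loop: plant y' = -a*y + u; inverse model u* = yd' + a*yd, i.e. w* = [1, 2].
a_true, dt, u_max = 2.0, 0.01, 5.0
Kp, eta = 5.0, 0.01
w = np.zeros(2)                          # learned inverse-model weights
y = 0.0
steps = 20000
err = np.zeros(steps)
for k in range(steps):
    t = k * dt
    yd = 1.0 + 0.5 * np.sin(t)           # desired trajectory
    yd_dot = 0.5 * np.cos(t)
    f = np.array([yd_dot, yd])           # inverse-model features
    u_ff = w @ f                         # feedforward command (the "IDM")
    e = yd - y
    u_fb = Kp * e                        # feedback command = teaching signal
    u = min(max(u_ff + u_fb, 0.0), u_max)  # clamp like FES intensity limits
    w += eta * u_fb * f                  # FEL update: feedback error trains IDM
    y += dt * (-a_true * y + u)          # plant step (forward Euler)
    err[k] = abs(e)
```

As the inverse model converges, the feedback term (and hence the tracking error) shrinks; the clamp stays mostly inactive here because the desired command lies inside the allowed range, which is exactly the benign case; persistent clamping is what breaks the learning in the abstract's problem cases.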
Covariant diagrams for one-loop matching
Energy Technology Data Exchange (ETDEWEB)
Zhang, Zhengkang [Michigan Univ., Ann Arbor, MI (United States). Michigan Center for Theoretical Physics; Deutsches Elektronen-Synchrotron (DESY), Hamburg (Germany)
2016-10-15
We present a diagrammatic formulation of recently revived covariant functional approaches to one-loop matching from an ultraviolet (UV) theory to a low-energy effective field theory. Various terms following from a covariant derivative expansion (CDE) are represented by diagrams which, unlike conventional Feynman diagrams, involve gauge-covariant quantities and are thus dubbed ''covariant diagrams.'' The use of covariant diagrams helps organize and simplify one-loop matching calculations, which we illustrate with examples. Of particular interest is the derivation of UV model-independent universal results, which reduce matching calculations of specific UV models to applications of master formulas. We show how such derivations can be done in a more concise manner than in the previous literature, and discuss how additional structures that are not directly captured by existing universal results, including mixed heavy-light loops, open covariant derivatives, and mixed statistics, can be easily accounted for.
Covariance Applications with Kiwi
Mattoon, C. M.; Brown, D.; Elliott, J. B.
2012-05-01
The Computational Nuclear Physics group at Lawrence Livermore National Laboratory (LLNL) is developing a new tool, named `Kiwi', that is intended as an interface between the covariance data increasingly available in major nuclear reaction libraries (including ENDF and ENDL) and large-scale Uncertainty Quantification (UQ) studies. Kiwi is designed to integrate smoothly into large UQ studies, using the covariance matrix to generate multiple variations of nuclear data. The code has been tested using critical assemblies as a test case, and is being integrated into LLNL's quality assurance and benchmarking for nuclear data.
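The covariance-to-variations step can be sketched generically (mock three-group cross-section data, not Kiwi's actual interface): a Cholesky factor of the covariance matrix turns independent standard normals into correlated perturbations of the nominal nuclear data.

```python
import numpy as np

# Mock "cross-section" covariance: 3 energy groups with 10%, 5%, 8% relative
# uncertainties and positive cross-group correlations (illustrative values).
sigma = np.array([1.2, 0.8, 0.5])                 # nominal values (mock barns)
rel_unc = np.array([0.10, 0.05, 0.08])
corr = np.array([[1.0, 0.6, 0.2],
                 [0.6, 1.0, 0.5],
                 [0.2, 0.5, 1.0]])
std = sigma * rel_unc
cov = corr * np.outer(std, std)

L = np.linalg.cholesky(cov)                       # cov = L @ L.T
rng = np.random.default_rng(7)
samples = sigma + rng.normal(size=(100_000, 3)) @ L.T   # correlated variations
```

Each row of `samples` is one perturbed data set; feeding such variations into repeated transport calculations is the basic pattern of a covariance-driven UQ study.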
Covariance Applications with Kiwi
Directory of Open Access Journals (Sweden)
Elliott J.B.
2012-05-01
Full Text Available The Computational Nuclear Physics group at Lawrence Livermore National Laboratory (LLNL) is developing a new tool, named ‘Kiwi’, that is intended as an interface between the covariance data increasingly available in major nuclear reaction libraries (including ENDF and ENDL) and large-scale Uncertainty Quantification (UQ) studies. Kiwi is designed to integrate smoothly into large UQ studies, using the covariance matrix to generate multiple variations of nuclear data. The code has been tested using critical assemblies as a test case, and is being integrated into LLNL's quality assurance and benchmarking for nuclear data.
FUNCTIONAL AND EFFECTIVE CONNECTIVITY OF VISUAL WORD RECOGNITION AND HOMOPHONE ORTHOGRAPHIC ERRORS.
Directory of Open Access Journals (Sweden)
JOAN eGUÀRDIA-OLMOS
2015-05-01
Full Text Available The study of orthographic errors in a transparent language like Spanish is an important topic in relation to writing acquisition. The development of neuroimaging techniques, particularly functional Magnetic Resonance Imaging (fMRI), has enabled the study of such relationships between brain areas. The main objective of the present study was to explore the patterns of effective connectivity involved in processing pseudohomophone orthographic errors among subjects with high and low spelling skills. Two groups of 12 Mexican subjects each, matched by age, were formed based on their results in a series of ad-hoc spelling-related out-of-scanner tests: a High Spelling Skills group (HSS) and a Low Spelling Skills group (LSS). During the fMRI session, two experimental tasks were applied (a spelling recognition task and a visuoperceptual recognition task). Regions of Interest (ROIs) and their signal values were obtained for both tasks. Based on these values, Structural Equation Models (SEMs) were obtained for each spelling-competence group (HSS and LSS) and task through Maximum Likelihood (ML) estimation, and the model with the best fit was chosen in each case. Likewise, Dynamic Causal Models (DCMs) were estimated for all the conditions across tasks and groups. The HSS group’s SEM results suggest that, in the spelling recognition task, the right middle temporal gyrus and, to a lesser extent, the left parahippocampal gyrus receive most of the significant effects, whereas the DCM results in the visuoperceptual recognition task show less complex effects, still congruent with the previous results, with an important role for several areas. In general, these results are consistent with the major findings of partial studies on linguistic activities, but they are the first analyses of statistical effective brain connectivity in transparent languages.
Correction of an input function for errors introduced with automated blood sampling
Energy Technology Data Exchange (ETDEWEB)
Schlyer, D.J.; Dewey, S.L. [Brookhaven National Lab., Upton, NY (United States)
1994-05-01
Accurate kinetic modeling of PET data requires a precise arterial plasma input function. The use of automated blood sampling machines has greatly improved the accuracy, but errors can be introduced by the dispersion of the radiotracer in the sampling tubing. This dispersion results from three effects. The first is the spreading of the radiotracer in the tube due to mass transfer. The second is due to the mechanical action of the peristaltic pump and can be determined experimentally from the width of a step function. The third is the adsorption of the radiotracer on the walls of the tubing during transport through the tube. This is a more insidious effect, since the amount recovered from the end of the tube can be significantly different from that introduced into the tubing. We have measured the simple mass transport using [{sup 18}F]fluoride in water, which we have shown to be quantitatively recovered with no interaction with the tubing walls. We have also carried out experiments with several radiotracers including [{sup 18}F]haloperidol, [{sup 11}C]L-deprenyl, [{sup 18}F]N-methylspiroperidol ([{sup 18}F]NMS) and [{sup 11}C]buprenorphine. In all cases there was some retention of the radiotracer by untreated silicone tubing. The amount retained in the tubing ranged from 6% for L-deprenyl to 30% for NMS. The retention of the radiotracer was essentially eliminated after pretreatment with the relevant unlabeled compound. For example, less than 2% of the [{sup 18}F]NMS was retained in tubing treated with unlabeled NMS. Similar results were obtained with baboon plasma, although the amount retained in the untreated tubing was less in all cases. From these results it is possible to apply a mathematical correction to the measured input function to account for mechanical dispersion, and to apply a chemical passivation to the tubing to reduce the dispersion due to adsorption of the radiotracer on the tubing walls.
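For the mechanical-dispersion part, a common model (an assumption here, not necessarily the paper's) is convolution of the true input function with an exponential kernel (1/tau) exp(-t/tau), whose inversion is analytic: c_true(t) = c_meas(t) + tau * dc_meas/dt. A sketch with synthetic data and an assumed time constant:

```python
import numpy as np

tau, dt = 5.0, 0.1                       # dispersion constant and sample step (s), assumed
t = np.arange(0.0, 120.0, dt)
c_true = t**2 * np.exp(-t / 8.0)         # synthetic arterial input curve

kernel = np.exp(-t / tau)                # discretised exponential dispersion kernel
kernel /= kernel.sum()                   # unit mass so total activity is preserved
c_meas = np.convolve(c_true, kernel)[:t.size]     # dispersed (measured) curve

c_corr = c_meas + tau * np.gradient(c_meas, dt)   # analytic dispersion correction
```

The measured curve shows the familiar lowered, delayed peak, and the derivative correction recovers the true curve up to discretisation error; the adsorption losses discussed above are a separate, chemical effect and are not captured by this model.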
Bao, Zhenkun; Li, Xiaolong; Luo, Xiangyang
2017-01-01
Extracting informative statistical features is the most essential technical issue of steganalysis. Among various steganalysis methods, probability density function (PDF) and characteristic function (CF) moments are two important types of features, owing to their excellent ability to distinguish cover images from stego ones. The two types of features are quite similar in definition. The only difference is that the PDF moments are computed in the spatial domain, while the CF moments are computed in the Fourier-transformed domain. The comparison between PDF and CF moments is therefore an interesting question in steganalysis. Several theoretical results have been derived, and CF moments are proved better than PDF moments in some cases. However, in the log prediction error wavelet subband of a wavelet decomposition, some experiments show the opposite result, which has lacked a rigorous explanation. To solve this problem, a comparison result based on a rigorous proof is presented: the first-order PDF moment is proved better than the CF moment, while the second-order CF moment is better than the PDF moment. This result opens a theoretical discussion in steganalysis on the question of finding suitable statistical features.
Hart, Heledd; Lim, Lena; Mehta, Mitul A.; Curtis, Charles; Xu, Xiaohui; Breen, Gerome; Simmons, Andrew; Mirza, Kah; Rubia, Katya
2018-01-01
Childhood maltreatment is associated with error hypersensitivity. We examined the effect of childhood abuse and abuse-by-gene (5-HTTLPR, MAOA) interaction on functional brain connectivity during error processing in medication/drug-free adolescents. Functional connectivity was compared, using generalized psychophysiological interaction (gPPI) analysis of functional magnetic resonance imaging (fMRI) data, between 22 age- and gender-matched medication-naïve and substance abuse-free adolescents exposed to severe childhood abuse and 27 healthy controls, while they performed an individually adjusted tracking stop-signal task, designed to elicit 50% inhibition failures. During inhibition failures, abused participants relative to healthy controls exhibited reduced connectivity between right and left putamen, bilateral caudate and anterior cingulate cortex (ACC), and between right supplementary motor area (SMA) and right inferior and dorsolateral prefrontal cortex. Abuse-related connectivity abnormalities were associated with longer abuse duration. No group differences in connectivity were observed for successful inhibition. The findings suggest that childhood abuse is associated with decreased functional connectivity in fronto-cingulo-striatal networks during error processing, and that the severity of the connectivity abnormalities increases with abuse duration. Reduced connectivity of error detection networks in maltreated individuals may be linked to constant monitoring of errors in order to avoid mistakes which, in abusive contexts, are often associated with harsh punishment. PMID:29434543
Hierarchical matrix approximation of large covariance matrices
Litvinenko, Alexander
2015-01-05
We approximate large non-structured covariance matrices in the H-matrix format with a log-linear computational cost and storage O(n log n). We compute the inverse, Cholesky decomposition, and determinant in H-format. As an example we consider the class of Matern covariance functions, which are very popular in spatial statistics, geostatistics, machine learning, and image analysis. Applications include kriging and optimal design.
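For scale, a dense-algebra sketch of the same quantities on a small Matern (nu = 3/2) problem; the H-matrix format replaces these O(n^3) dense factorizations with log-linear ones, which is the whole point for large n:

```python
import numpy as np

def matern32(x, y, ell=0.5, var=1.0):
    """Matern covariance with smoothness nu = 3/2."""
    r = np.abs(x[:, None] - y[None, :]) / ell
    return var * (1.0 + np.sqrt(3.0) * r) * np.exp(-np.sqrt(3.0) * r)

n = 500
x = np.sort(np.random.default_rng(3).uniform(0.0, 1.0, n))
C = matern32(x, x) + 1e-10 * np.eye(n)   # small jitter keeps C numerically SPD
L = np.linalg.cholesky(C)                # dense O(n^3); H-matrix: ~O(n log n)
logdet = 2.0 * np.log(np.diag(L)).sum()  # log-determinant via the Cholesky factor
```

Both the Cholesky factor (for kriging solves and sampling) and the log-determinant (for likelihood evaluation in parameter fitting) are exactly the operations the abstract lists as available in H-format.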
Hierarchical matrix approximation of large covariance matrices
Litvinenko, Alexander
2015-01-07
We approximate large non-structured covariance matrices in the H-matrix format with a log-linear computational cost and storage O(n log n). We compute inverse, Cholesky decomposition and determinant in H-format. As an example we consider the class of Matern covariance functions, which are very popular in spatial statistics, geostatistics, machine learning and image analysis. Applications are: kriging and optimal design.
Item Discrimination and Type I Error in the Detection of Differential Item Functioning
Li, Yanju; Brooks, Gordon P.; Johanson, George A.
2012-01-01
In 2009, DeMars stated that when impact exists there will be Type I error inflation, especially with larger sample sizes and larger discrimination parameters for items. One purpose of this study is to present the patterns of Type I error rates using Mantel-Haenszel (MH) and logistic regression (LR) procedures when the mean ability between the…
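The logistic-regression DIF test under impact can be sketched as follows (latent ability is used directly as the matching variable for brevity; operational MH and LR procedures use the observed total score, which is precisely where impact-driven Type I error inflation enters):

```python
import numpy as np

def logistic_fit(X, y, iters=30):
    """Logistic regression by iteratively reweighted least squares;
    returns coefficients and the maximized log-likelihood."""
    beta = np.zeros(X.shape[1])
    for _ in range(iters):
        p = 1.0 / (1.0 + np.exp(-X @ beta))
        W = p * (1.0 - p)
        H = X.T @ (W[:, None] * X) + 1e-9 * np.eye(X.shape[1])
        beta = beta + np.linalg.solve(H, X.T @ (y - p))
    p = 1.0 / (1.0 + np.exp(-X @ beta))
    ll = float(np.sum(y * np.log(p + 1e-12) + (1 - y) * np.log(1 - p + 1e-12)))
    return beta, ll

# "Impact" without DIF: the groups differ in ability, item parameters do not.
rng = np.random.default_rng(0)
n = 2000
group = np.repeat([0.0, 1.0], n // 2)
theta = rng.normal(size=n) + np.where(group == 1, -0.5, 0.0)
p_item = 1.0 / (1.0 + np.exp(-1.5 * theta))      # 2PL item, a = 1.5, b = 0
y = (rng.uniform(size=n) < p_item).astype(float)

X0 = np.column_stack([np.ones(n), theta])        # ability-only model
X1 = np.column_stack([X0, group])                # + group term (the DIF test)
b0, ll0 = logistic_fit(X0, y)
b1, ll1 = logistic_fit(X1, y)
lr_stat = 2.0 * (ll1 - ll0)                      # ~ chi2(1) when there is no DIF
```

Repeating this simulation over many replications, with total score substituted for theta, is the standard way to tabulate the Type I error rates the abstract refers to.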
Learning from your mistakes: The functional value of spontaneous error monitoring in aphasia
Directory of Open Access Journals (Sweden)
Erica L. Middleton
2014-04-01
Ex. 4. (Target = umbrella, response “umbelella, umbrella”: phonological error; DetCorr.) We used mixed-effects logistic regression to assess whether the log odds of changing from error to correct were predicted by the monitoring status of the error (DetCorr vs. NoDet; DetNoCorr vs. NoDet); whether the monitoring benefit interacted with the direction of change (forward, backward); and whether the effects varied by error type. Figure 1 (top) shows that the proportion of accuracy change was higher for DetCorr relative to NoDet, consistent with a monitoring benefit. The difference in log odds was significant for semantic errors in both directions (forward: coeff. = -1.73; z = -7.78; p < .001; backward: coeff. = -0.92; z = -3.60; p < .001) and for phonological errors in both directions (forward: coeff. = -0.74; z = -2.73; p = .006; backward: coeff. = -0.76; z = -2.73; p = .006). The difference between DetNoCorr and NoDet was not significant in any condition. Figure 1 (bottom) shows that for semantic errors there was a directional asymmetry favoring the forward condition (interaction: coeff. = .79; z = 2.32; p = .02). Phonological errors, in contrast, produced comparable effects in the forward and backward directions. The results demonstrated a benefit for errors that were detected and corrected. This monitoring benefit was present in both the forward and backward directions, supporting the Strength hypothesis. Of greatest interest, the monitoring benefit for semantic errors was greater in the forward than the backward direction, indicating a role for learning.
Directory of Open Access Journals (Sweden)
Eduardo Shiguero Sakaguti
2003-08-01
Full Text Available Restricted maximum likelihood (REML) estimates of the additive genetic and residual variances and covariances of birth weight and of weights adjusted to 120, 205, 240, 365, 420, and 550 days of age were used to determine covariance functions (CFs) for the growth of 41,415 Tabapuã beef cattle born between 1975 and 1997 and raised on pasture. Estimating the CFs proved very useful: besides allowing covariances to be evaluated between any pair of ages, analysis of the eigenfunctions associated with the eigenvalues of the CF coefficient matrices revealed that the animals' growth curves can be changed rapidly by selection. Factors such as weaning stress, compensatory gain, and the selection of animals in the later periods caused several changes in the trajectory of the genetic (co)variances, so that only CFs with more complex orders of fit produced estimates close to the REML estimates. In these high-order fits, however, the Legendre polynomials tended to describe oscillations in the variance trajectories at the extremes of the period, which does not appear to have a coherent biological explanation.
Directory of Open Access Journals (Sweden)
Zhenhe eZhou
2013-09-01
Full Text Available Internet addiction disorder (IAD) is an impulse disorder, or at least related to impulse control disorder. Deficits in executive functioning, including response monitoring, have been proposed as a hallmark feature of impulse control disorders. The error-related negativity (ERN) reflects an individual's ability to monitor behavior. Since IAD belongs to a compulsive-impulsive spectrum disorder, it should, theoretically, present the response-monitoring functional deficit characteristics of disorders such as substance dependence, ADHD, or alcohol abuse when tested with an Eriksen flanker task. Up to now, no studies of response-monitoring functional deficits in IAD have been reported. The purpose of the present study was to examine whether IAD displays response-monitoring functional deficit characteristics in a modified Eriksen flanker task. 23 subjects were recruited as the IAD group; 23 healthy persons matched for age, gender, and education were recruited as the control group. All participants completed the modified Eriksen flanker task while event-related potentials (ERPs) were recorded. The IAD group made more total errors than controls (P < 0.01), and reaction times for total error responses in the IAD group were shorter than those of controls (P < 0.01). The mean ERN amplitudes of total error responses at frontal and central electrode sites in the IAD group were reduced compared with the control group (all P < 0.01). These results reveal that IAD displays response-monitoring functional deficit characteristics and shares the ERN characteristics of compulsive-impulsive spectrum disorders.
Fusi, F.; Congedo, P. M.
2016-03-01
In this work, a strategy is developed to deal with the error affecting the objective functions in uncertainty-based optimization. We refer to the problems where the objective functions are the statistics of a quantity of interest computed by an uncertainty quantification technique that propagates some uncertainties of the input variables through the system under consideration. In real problems, the statistics are computed by a numerical method and therefore they are affected by a certain level of error, depending on the chosen accuracy. The errors on the objective function can be interpreted with the abstraction of a bounding box around the nominal estimation in the objective functions space. In addition, in some cases the uncertainty quantification methods providing the objective functions also supply the possibility of adaptive refinement to reduce the error bounding box. The novel method relies on the exchange of information between the outer loop based on the optimization algorithm and the inner uncertainty quantification loop. In particular, in the inner uncertainty quantification loop, a control is performed to decide whether a refinement of the bounding box for the current design is appropriate or not. In single-objective problems, the current bounding box is compared to the current optimal design. In multi-objective problems, the decision is based on the comparison of the error bounding box of the current design and the current Pareto front. With this strategy, fewer computations are made for clearly dominated solutions and an accurate estimate of the objective function is provided for the interesting, non-dominated solutions. The results presented in this work prove that the proposed method improves the efficiency of the global loop, while preserving the accuracy of the final Pareto front.
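The refinement decision described above can be sketched in a few lines (hypothetical helper names; minimisation assumed in all objectives): a design's error bounding box is compared against the current Pareto front, and the inner UQ loop is refined only when the design might still be non-dominated.

```python
def clearly_dominated(box, front):
    """True if some Pareto-front point dominates even the most optimistic
    corner of the design's error bounding box (all objectives minimised)."""
    best_corner = [lo for lo, hi in box]     # box: list of (lo, hi) per objective
    return any(all(p[i] <= best_corner[i] for i in range(len(best_corner)))
               for p in front)

def needs_refinement(box, front, tol):
    """Refine the inner UQ estimate only for possibly non-dominated designs
    whose bounding box is still wider than the target tolerance."""
    too_wide = any(hi - lo > tol for lo, hi in box)
    return too_wide and not clearly_dominated(box, front)
```

This is the economy the abstract describes: clearly dominated designs keep their cheap, coarse estimates, while computational effort is concentrated on candidates near the front.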
Griffin, Brian M.; Larson, Vincent E.
2016-11-01
Microphysical processes, such as the formation, growth, and evaporation of precipitation, interact with variability and covariances (e.g., fluxes) in moisture and heat content. For instance, evaporation of rain may produce cold pools, which in turn may trigger fresh convection and precipitation. These effects are usually omitted or else crudely parameterized at subgrid scales in weather and climate models. A more formal approach is pursued here, based on predictive, horizontally averaged equations for the variances, covariances, and fluxes of moisture and heat content. These higher-order moment equations contain microphysical source terms. The microphysics terms can be integrated analytically, given a suitably simple warm-rain microphysics scheme and an approximate assumption about the multivariate distribution of cloud-related and precipitation-related variables. Performing the integrations provides exact expressions within an idealized context. A large-eddy simulation (LES) of a shallow precipitating cumulus case is performed here, and it indicates that the microphysical effects on (co)variances and fluxes can be large. In some budgets and altitude ranges, they are dominant terms. The analytic expressions for the integrals are implemented in a single-column, higher-order closure model. Interactive single-column simulations agree qualitatively with the LES. The analytic integrations form a parameterization of microphysical effects in their own right, and they also serve as benchmark solutions that can be compared to non-analytic integration methods.
Schiefer, Ulrich; Kraus, Christina; Baumbach, Peter; Ungewiß, Judith; Michels, Ralf
2016-10-14
All over the world, refractive errors are among the most frequently occurring treatable disturbances of visual function. Ametropias have a prevalence of nearly 70% among adults in Germany and are thus of great epidemiologic and socio-economic relevance. In the light of their own clinical experience, the authors review pertinent articles retrieved by a selective literature search employing the terms "ametropia," "anisometropia," "refraction," "visual acuity," and "epidemiology." In 2011, only 31% of persons over age 16 in Germany did not use any kind of visual aid; 63.4% wore eyeglasses and 5.3% wore contact lenses. Refractive errors were the most common reason for consulting an ophthalmologist, accounting for 21.1% of all outpatient visits. A pinhole aperture (stenopeic slit) is a suitable instrument for the basic diagnostic evaluation of impaired visual function due to optical factors. Spherical refractive errors (myopia and hyperopia), cylindrical refractive errors (astigmatism), unequal refractive errors in the two eyes (anisometropia), and the typical optical disturbance of old age (presbyopia) cause specific functional limitations and can be detected by a physician who does not need to be an ophthalmologist. Simple functional tests can be used in everyday clinical practice to determine quickly, easily, and safely whether the patient is suffering from a benign and easily correctable type of visual impairment, or whether there are other, more serious underlying causes.
Li, Will X. Y.; Cui, Ke; Zhang, Wei
2017-04-01
Cognitive neural prosthesis is a manmade device which can be used to restore or compensate for lost human cognitive modalities. The generalized Laguerre-Volterra (GLV) network serves as a robust mathematical underpinning for the development of such prosthetic instruments. In this paper, a hardware implementation scheme of the Gauss error function for the GLV network targeting reconfigurable platforms is reported. Numerical approximations are formulated which transform the computation of the nonelementary function into combinational operations of elementary functions, and memory-intensive look-up table (LUT) based approaches can therefore be circumvented. The computational precision can be made adjustable with the utilization of an error compensation scheme, which is proposed based on the experimental observation of the mathematical characteristics of the error trajectory. The precision can be further customized by exploiting the run-time characteristics of the reconfigurable system. Compared to the polynomial expansion based implementation scheme, the utilization of slice LUTs, occupied slices, and DSP48E1s on a Xilinx XC6VLX240T field-programmable gate array has decreased by 94.2%, 94.1%, and 90.0%, respectively. Compared to the look-up table based scheme, 1.0 × 10^17 bits of storage can be spared under the maximum allowable error of 1.0 × 10^-3. The proposed implementation scheme can be employed in the study of large-scale neural ensemble activity and in the design and development of neural prosthetic devices.
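One classical elementary-function approximation of erf in this spirit is the Abramowitz and Stegun rational-plus-exponential formula (7.1.26), whose maximum absolute error of about 1.5 × 10^-7 sits well inside the 1.0 × 10^-3 budget quoted above. It is shown here as a software sketch, not as the paper's hardware scheme:

```python
import math

def erf_approx(x):
    """Approximate the Gauss error function with a degree-5 polynomial in
    t = 1/(1 + p*x) times exp(-x^2) (Abramowitz & Stegun 7.1.26).
    Max absolute error ~1.5e-7; only elementary operations, no large LUT."""
    sign = 1.0 if x >= 0.0 else -1.0
    x = abs(x)
    t = 1.0 / (1.0 + 0.3275911 * x)
    poly = t * (0.254829592 + t * (-0.284496736 + t * (1.421413741
             + t * (-1.453152027 + t * 1.061405429))))
    return sign * (1.0 - poly * math.exp(-x * x))
```

A fixed polynomial like this maps naturally onto a handful of multiply-accumulate units, which is why such compositions are attractive on FPGAs compared with storing a dense table of erf values.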
Covariant Magnetic Connection Hypersurfaces
Pegoraro, F
2016-01-01
In the single-fluid, nonrelativistic, ideal-magnetohydrodynamic (MHD) plasma description, magnetic field lines play a fundamental role by defining dynamically preserved "magnetic connections" between plasma elements. Here we show how the concept of magnetic connection needs to be generalized to the case of a relativistic MHD description, where we require covariance under arbitrary Lorentz transformations. This is performed by defining 2-D {\it magnetic connection hypersurfaces} in the 4-D Minkowski space. This generalization accounts for the loss of simultaneity between spatially separated events in different frames and is expected to provide powerful insight into the 4-D geometry of electromagnetic fields when ${\bf E} \cdot {\bf B} = 0$.
Taha, Haitham; Ibrahim, Raphiq; Khateb, Asaid
2014-01-01
The dominant error types were investigated as a function of phonological processing (PP) deficit severity in four groups of impaired readers. For this aim, an error analysis paradigm distinguishing between four error types was used. The findings revealed that the different types of impaired readers were characterized by differing predominant error…
Covariate analysis of bivariate survival data
Energy Technology Data Exchange (ETDEWEB)
Bennett, L.E.
1992-01-01
The methods developed are used to analyze the effects of covariates on bivariate survival data when censoring and ties are present. The proposed method provides models for bivariate survival data that include differential covariate effects and censored observations. The proposed models are based on an extension of the univariate Buckley-James estimators, which replace censored data points by their expected values, conditional on the censoring time and the covariates. For the bivariate situation, it is necessary to determine the expectation of the failure times for one component conditional on the failure or censoring time of the other component. Two different methods have been developed to estimate these expectations. In the semiparametric approach these expectations are determined from a modification of Burke's estimate of the bivariate empirical survival function. In the parametric approach censored data points are also replaced by their conditional expected values, where the expected values are determined from a specified parametric distribution. The model estimation is based on the revised data set, comprised of uncensored components and expected values for the censored components. The variance-covariance matrix for the estimated covariate parameters has also been derived for both the semiparametric and parametric methods. Data from the Demographic and Health Survey were analyzed by these methods. The two outcome variables are post-partum amenorrhea and breastfeeding; education and parity were used as the covariates. Both the covariate parameter estimates and the variance-covariance estimates for the semiparametric and parametric models were compared. In addition, a multivariate test statistic was used in the semiparametric model to examine contrasts. The significance of the statistic was determined from a bootstrap distribution of the test statistic.
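The core parametric imputation step can be sketched for a single censored component under an assumed exponential model; the one-dimensional setting and all variable names are illustrative, and the article's bivariate version additionally conditions on the other component and the covariates:

```python
import numpy as np

# Buckley-James-style step: each right-censored time c is replaced by its
# conditional expectation E[T | T > c]. For an exponential model this is
# c + 1/rate by memorylessness.
rng = np.random.default_rng(0)
t = rng.exponential(2.0, size=1000)        # latent failure times, mean 2
c = rng.exponential(4.0, size=1000)        # independent censoring times
obs = np.minimum(t, c)                     # observed time
cens = t > c                               # True where observation is censored
rate_hat = (~cens).sum() / obs.sum()       # exponential-rate MLE under censoring
revised = obs.copy()
revised[cens] = obs[cens] + 1.0 / rate_hat # impute conditional expectations
```

The revised data set (uncensored values plus imputed expectations) is what the downstream regression is fitted to.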
Chang, Alfred T. C.; Chiu, Long S.; Wilheit, Thomas T.
1993-01-01
Global averages and random errors associated with the monthly oceanic rain rates derived from the Special Sensor Microwave/Imager (SSM/I) data using the technique developed by Wilheit et al. (1991) are computed. Accounting for the beam-filling bias, a global annual average rain rate of 1.26 m is computed. The error estimation scheme is based on the existence of independent (morning and afternoon) estimates of the monthly mean. Calculations show overall random errors of about 50-60 percent for each 5 deg x 5 deg box. The results are insensitive to different sampling strategies (odd and even days of the month). Comparison of the SSM/I estimates with raingage data collected at the Pacific atoll stations showed a low bias of about 8 percent, a correlation of 0.7, and an rms difference of 55 percent.
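The morning/afternoon error-separation idea can be sketched with synthetic numbers (all values illustrative): two independent estimates of the same monthly mean share the signal but have independent random errors, so the per-estimate error variance is recoverable from their difference alone.

```python
import numpy as np

# If A = truth + e_a and B = truth + e_b with independent errors,
# then var(A - B) = 2 * var(e), so var(e) = var(A - B) / 2.
rng = np.random.default_rng(42)
truth = 5.0                                     # "true" monthly mean (arbitrary units)
noise_sd = 1.5
a = truth + rng.normal(0.0, noise_sd, 20_000)   # morning estimates
b = truth + rng.normal(0.0, noise_sd, 20_000)   # afternoon estimates
noise_sd_hat = np.sqrt(0.5 * np.var(a - b))     # recovered per-estimate error
```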
Covariance analysis for evaluating head trackers
Kang, Donghoon
2017-10-01
Existing methods for evaluating the performance of head trackers usually rely on publicly available face databases, which contain facial images and the ground truths of their corresponding head orientations. However, most of the existing publicly available face databases are constructed by assuming that a frontal head orientation can be determined by compelling the person under examination to look straight ahead at the camera on the first video frame. Since nobody can perfectly direct their head toward the camera, this assumption may be unrealistic. Rather than obtaining estimation errors, we present a method for computing the covariance of estimation error rotations to evaluate the reliability of head trackers. As an uncertainty measure of estimators, the Schatten 2-norm of a square root of error covariance (or the algebraic average of relative error angles) can be used. The merit of the proposed method is that it does not disturb the person under examination by asking them to direct their head in certain directions. Experimental results using real data validate the usefulness of our method.
Error correction, co-integration and import demand function for Nigeria
African Journals Online (AJOL)
The objective of this study is to empirically estimate an import demand equation for Nigeria using error correction and cointegration techniques. All the variables employed in this study were found to be stationary at first difference using the Augmented Dickey-Fuller (ADF) and Phillips-Perron (PP) unit root tests. Empirical evidence from ...
Warner, James E.; Diaz, Manuel I.; Aquino, Wilkins; Bonnet, Marc
2014-09-01
This work focuses on the identification of heterogeneous linear elastic moduli in the context of frequency-domain, coupled acoustic-structure interaction (ASI), using either solid displacement or fluid pressure measurement data. The approach postulates the inverse problem as an optimization problem where the solution is obtained by minimizing a modified error in constitutive equation (MECE) functional. The latter measures the discrepancy in the constitutive equations that connect kinematically admissible strains and dynamically admissible stresses, while incorporating the measurement data as additional quadratic error terms. We demonstrate two strategies for selecting the MECE weighting coefficient to produce regularized solutions to the ill-posed identification problem: 1) the discrepancy principle of Morozov, and 2) an error-balance approach that selects the weight parameter as the minimizer of another functional involving the ECE and the data misfit. Numerical results demonstrate that the proposed methodology can successfully recover elastic parameters in 2D and 3D ASI systems from response measurements taken in either the solid or fluid subdomains. Furthermore, both regularization strategies are shown to produce accurate reconstructions when the measurement data is polluted with noise. The discrepancy principle is shown to produce nearly optimal solutions, while the error-balance approach, although not optimal, remains effective and does not need a priori information on the noise level.
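Morozov's discrepancy principle can be illustrated on a generic linear Tikhonov-regularized problem. This sketch is only a stand-in for the MECE weight selection described above; the operator, sizes, and assumed-known noise level are all illustrative:

```python
import numpy as np

# Choose the regularization weight alpha so that the residual norm of the
# regularized solution matches the known noise level delta (Morozov).
rng = np.random.default_rng(1)
n = 50
A = rng.normal(size=(n, n)) / np.sqrt(n)       # generic forward operator
x_true = rng.normal(size=n)
noise = rng.normal(size=n)
delta = 0.1 * np.linalg.norm(A @ x_true)       # assumed-known noise level
b = A @ x_true + delta * noise / np.linalg.norm(noise)

def solve(alpha):
    """Tikhonov solution for weight alpha."""
    return np.linalg.solve(A.T @ A + alpha * np.eye(n), A.T @ b)

def discrepancy(alpha):
    return np.linalg.norm(A @ solve(alpha) - b)

# The discrepancy grows monotonically with alpha, so bisect in log space
# until the residual matches delta.
lo, hi = 1e-12, 1e3
for _ in range(200):
    mid = np.sqrt(lo * hi)
    lo, hi = (mid, hi) if discrepancy(mid) < delta else (lo, mid)
alpha_star = np.sqrt(lo * hi)
```

The error-balance strategy mentioned in the abstract would instead minimize a combined functional of the two terms rather than match the residual to delta.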
Directory of Open Access Journals (Sweden)
Severino Cavalcante de Sousa Júnior
2010-05-01
A total of 35,732 weight records from birth to 660 days of age on 8,458 animals of the Tabapuã breed were used to estimate covariance functions with random regression models on Legendre polynomials. The models included, as random effects, the direct additive genetic, maternal genetic, animal permanent environmental, and maternal permanent environmental effects; as fixed effects, contemporary groups; and, as covariates, age of the animal at weighing and age of the dam at calving (linear and quadratic). A cubic orthogonal Legendre polynomial of age at weighing was used to model the mean growth curve of the population. The residual variance was modeled with seven classes, and the models were compared by the Schwarz Bayesian and Akaike information criteria. The best model had orders 4, 3, 6, and 3 for the direct additive genetic, maternal genetic, animal permanent environmental, and maternal permanent environmental effects, respectively. Covariance and heritability estimates obtained with the random regression model were similar to those from a two-trait model. Direct heritability estimates obtained with the random regression model increased from birth (0.15) to 660 days of age (0.45). The largest maternal heritability estimates were obtained for weights measured shortly after birth. Genetic correlations ranged from moderate to high and decreased as the interval between weighings increased. Selection for heavier weights at any age should promote greater weight gain from birth to 660 days of age.
Essex, Marilyn J.; Shirtcliff, Elizabeth A.; Burk, Linnea R.; Ruttle, Paula L.; Klein, Marjorie H.; Slattery, Marcia J.; Kalin, Ned H.; Armstrong, Jeffrey M.
2012-01-01
The hypothalamic-pituitary-adrenal (HPA) axis is a primary mechanism in the allostatic process through which early life stress (ELS) contributes to disease. Studies of the influence of ELS on children’s HPA axis functioning have yielded inconsistent findings. To address this issue, the present study considers multiple types of ELS (maternal depression, paternal depression, and family expressed anger), mental health symptoms, and two components of HPA functioning (trait-like and epoch-specific activity) in a long-term prospective community study of 357 children. ELS was assessed during the infancy and preschool periods; mental health symptoms and cortisol were assessed at child ages 9, 11, 13, and 15 years. A 3-level hierarchical linear model addressed questions regarding the influences of ELS on HPA functioning and its co-variation with mental health symptoms. ELS influenced trait-like cortisol level and slope, with both hyper- and hypo-arousal evident depending on type of ELS. Further, type(s) of ELS influenced co-variation of epoch-specific HPA functioning and mental health symptoms, with a tighter coupling of HPA alterations with symptom severity among children exposed previously to ELS. Results highlight the importance of examining multiple types of ELS and dynamic HPA functioning in order to capture the allostatic process unfolding across the transition into adolescence. PMID:22018080
Eric H. Wharton; Tiberius Cunia
1987-01-01
Proceedings of a workshop co-sponsored by the USDA Forest Service, the State University of New York, and the Society of American Foresters. Presented were papers on the methodology of sample tree selection, tree biomass measurement, construction of biomass tables and estimation of their error, and combining the error of biomass tables with that of the sample plots or...
Deriving covariant holographic entanglement
Energy Technology Data Exchange (ETDEWEB)
Dong, Xi [School of Natural Sciences, Institute for Advanced Study, Princeton, NJ 08540 (United States); Lewkowycz, Aitor [Jadwin Hall, Princeton University, Princeton, NJ 08544 (United States); Rangamani, Mukund [Center for Quantum Mathematics and Physics (QMAP), Department of Physics, University of California, Davis, CA 95616 (United States)
2016-11-07
We provide a gravitational argument in favour of the covariant holographic entanglement entropy proposal. In general time-dependent states, the proposal asserts that the entanglement entropy of a region in the boundary field theory is given by a quarter of the area of a bulk extremal surface in Planck units. The main element of our discussion is an implementation of an appropriate Schwinger-Keldysh contour to obtain the reduced density matrix (and its powers) of a given region, as is relevant for the replica construction. We map this contour into the bulk gravitational theory, and argue that the saddle point solutions of these replica geometries lead to a consistent prescription for computing the field theory Rényi entropies. In the limiting case where the replica index is taken to unity, a local analysis suffices to show that these saddles lead to the extremal surfaces of interest. We also comment on various properties of holographic entanglement that follow from this construction.
Hansen, M; Haugland, M K
2001-01-01
Adaptive restriction rules based on fuzzy logic have been developed to eliminate errors and to increase stimulation safety in the foot-drop correction application, specifically when using adaptive logic networks to provide a stimulation control signal based on neural activity recorded from peripheral sensory nerve branches. The fuzzy rules were designed to increase flexibility and offer easier customization, compared to earlier versions of restriction rules. The rules developed quantified the duration of swing and stance phases into states of accepting or rejecting new transitions, based on the cyclic nature of gait and statistics on the current gait patterns. The rules were easy to custom design for a specific application, using linguistic terms to model the actions of the rules. The rules were tested using pre-recorded gait data processed through a gait event detector and proved to reduce detection delay and the number of errors, compared to conventional rules.
Directory of Open Access Journals (Sweden)
Fábio Luiz Buranelo Toral
2009-11-01
This study evaluated the use of different residual variance structures for the estimation of covariance functions for the weight of Canchim beef cattle. The covariance functions were estimated by restricted maximum likelihood in an animal model with fixed effects of contemporary group (year and month of birth, and sex), age of dam at calving as a covariate (linear and quadratic effects), and the mean growth trajectory; the random effects were the direct additive genetic, maternal genetic, animal permanent environmental, maternal permanent environmental, and residual effects. Several structures were used for the residual variance: variance functions of linear through quintic order, and 1, 5, 10, 15, or 20 age classes. A homogeneous residual variance was not adequate. A quartic residual variance function and a division of the residual variance into 20 classes provided the best fits, and the division into classes was more efficient than the use of functions. Direct heritability estimates were between 0.16 and 0.25 at most ages, with the highest estimates obtained near 360 days of age and at the end of the period studied. In general, direct heritability estimates were similar for the models with homogeneous residual variance, a quartic residual variance function, or 20 age classes. The best description of the residual variances for weight at various ages of Canchim cattle was the one that considered 20 heterogeneous classes; however, since some classes have similar variances, it is possible to group some of them and reduce the number of estimated parameters.
General Galilei Covariant Gaussian Maps
Gasbarri, Giulio; Toroš, Marko; Bassi, Angelo
2017-09-01
We characterize general non-Markovian Gaussian maps which are covariant under Galilean transformations. In particular, we consider translational and Galilean covariant maps and show that they reduce to the known Holevo result in the Markovian limit. We apply the results to discuss measures of macroscopicity based on classicalization maps, specifically addressing dissipation, Galilean covariance and non-Markovianity. We further suggest a possible generalization of the macroscopicity measure defined by Nimmrichter and Hornberger [Phys. Rev. Lett. 110, 16 (2013)].
Directory of Open Access Journals (Sweden)
Kleber Régis Santoro
2005-12-01
This study evaluated different random regression models composed of Legendre polynomials, used to describe genetic and environmental effects on weight-age records, and identified the most adequate one. Weight-age data of Nelore cattle born and raised in the state of Pernambuco, northeastern Brazil, were analyzed, with weighings at birth and at intervals of approximately 90 days up to 720 days of age. Six random regression models were evaluated, with Legendre polynomials of degrees 3, 4, and 5 for the direct additive genetic and permanent environmental effects, and two residual structures (one homogeneous and one heterogeneous with three classes). Akaike's information criterion was used to select the best model, which was the degree-5 polynomial with homogeneous errors. Predicted genetic and phenotypic correlations were low between early and late ages, and high and approximately constant between later ages. The additive genetic covariance increased with age. Heritability was low to moderate up to approximately 60 days of age and high at the remaining ages, staying between 0.50 and 0.60.
Directory of Open Access Journals (Sweden)
Jan Valdman
2009-01-01
We consider a Poisson boundary value problem and its functional a posteriori error estimate derived by S. Repin in 1999. The estimate majorizes the H1 seminorm of the error of the discrete solution computed by the finite element method and contains a free flux variable from the H(div) space. In order to keep the estimate sharp, a procedure for the minimization of the majorant term with respect to the flux variable is introduced, computing the free flux variable from a global linear system of equations. Since the linear system is symmetric and positive definite, a few iterations of a conjugate gradient method with a geometric multigrid preconditioner are applied. The numerical techniques are demonstrated on a benchmark example with a smooth solution on a unit square domain, including the computation of the approximate value of the constant in Friedrichs' inequality.
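The conjugate gradient step can be sketched on a generic symmetric positive definite system; the multigrid preconditioner and the actual majorant assembly are omitted, and the 1D Poisson stiffness matrix below is only an illustrative stand-in for the global flux system:

```python
import numpy as np

# Bare (unpreconditioned) conjugate gradient for a symmetric positive
# definite system A x = b, of the kind arising when minimizing the
# majorant over the flux variable.
def cg(A, b, tol=1e-10, max_iter=1000):
    x = np.zeros_like(b)
    r = b - A @ x                      # initial residual
    p = r.copy()                       # initial search direction
    rs = r @ r
    for _ in range(max_iter):
        Ap = A @ p
        alpha = rs / (p @ Ap)          # exact line search along p
        x += alpha * p
        r -= alpha * Ap
        rs_new = r @ r
        if np.sqrt(rs_new) < tol:
            break
        p = r + (rs_new / rs) * p      # A-conjugate direction update
        rs = rs_new
    return x

# Illustrative SPD system: 1D Poisson stiffness matrix.
n = 100
A = 2.0 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
b = np.ones(n)
x = cg(A, b)
```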
Quaid, Patrick; Simpson, Trefford
2013-01-01
Approximately one in ten students aged 6 to 16 in Ontario (Canada) school boards have an individual education plan (IEP) in place due to various learning disabilities, many of which are specific to reading difficulties. The relationship between reading (specifically objectively determined reading speed and eye movement data), refractive error, and binocular vision related clinical measurements remains elusive. One hundred patients were examined in this study (50 IEP and 50 controls, age range 6 to 16 years). IEP patients were referred by three local school boards, with controls being recruited from the routine clinic population (non-IEP patients in the same age group). A comprehensive eye examination was performed on all subjects, in addition to a full binocular vision work-up and cycloplegic refraction. In addition to the cycloplegic refractive error, the following binocular vision related data were also acquired: vergence facility, vergence amplitudes, accommodative facility, accommodative amplitudes, near point of convergence, stereopsis, and a standardized symptom scoring scale. Both the IEP and control groups were also examined using the Visagraph III system, which permits recording of the following reading parameters objectively: (i) reading speed, both raw values and values compared to grade normative data, and (ii) the number of eye movements made per 100 words read. Comprehension was assessed via a questionnaire administered at the end of the reading task, with each subject requiring 80% or greater comprehension. The IEP group had significantly greater hyperopia compared to the control group on cycloplegic examination. Vergence facility was significantly correlated to (i) reading speed, (ii) number of eye movements made when reading, and (iii) a standardized symptom scoring system. Vergence facility was also significantly reduced in the IEP group versus controls. Significant differences in several other binocular vision related scores were also found.
DEFF Research Database (Denmark)
Tscherning, Carl Christian
2015-01-01
The method of Least-Squares Collocation (LSC) may be used for the modeling of the anomalous gravity potential (T) and for the computation (prediction) of quantities related to T by a linear functional. Errors may also be estimated. However, when using an isotropic covariance function or equivalen...
Ryu, Duchwan
2010-09-28
We consider nonparametric regression analysis in a generalized linear model (GLM) framework for data with covariates that are the subject-specific random effects of longitudinal measurements. The usual assumption that the effects of the longitudinal covariate processes are linear in the GLM may be unrealistic and if this happens it can cast doubt on the inference of observed covariate effects. Allowing the regression functions to be unknown, we propose to apply Bayesian nonparametric methods including cubic smoothing splines or P-splines for the possible nonlinearity and use an additive model in this complex setting. To improve computational efficiency, we propose the use of data-augmentation schemes. The approach allows flexible covariance structures for the random effects and within-subject measurement errors of the longitudinal processes. The posterior model space is explored through a Markov chain Monte Carlo (MCMC) sampler. The proposed methods are illustrated and compared to other approaches, the "naive" approach and the regression calibration, via simulations and by an application that investigates the relationship between obesity in adulthood and childhood growth curves. © 2010, The International Biometric Society.
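The nonparametric part can be sketched with a simple penalized-spline fit solved by penalized least squares rather than MCMC; the basis, knots, and smoothing parameter below are illustrative assumptions, not the article's specification:

```python
import numpy as np

# P-spline-style sketch: regression on a truncated-power cubic basis with a
# ridge penalty on the knot coefficients, recovering an unknown nonlinear
# covariate effect instead of assuming linearity.
rng = np.random.default_rng(7)
x = np.sort(rng.uniform(0.0, 1.0, 200))
y = np.sin(2 * np.pi * x) + rng.normal(0.0, 0.2, x.size)   # nonlinear truth + noise

knots = np.linspace(0.05, 0.95, 12)
B = np.column_stack([np.ones_like(x), x, x**2, x**3] +
                    [np.clip(x - k, 0.0, None) ** 3 for k in knots])
lam = 1e-4                                   # smoothing parameter (assumed)
P_mat = np.eye(B.shape[1])
P_mat[:4, :4] = 0.0                          # penalize only the knot terms
coef = np.linalg.solve(B.T @ B + lam * P_mat, B.T @ y)
fit = B @ coef                               # estimated regression function
```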
DEFF Research Database (Denmark)
Wellendorff, Jess; Lundgård, Keld Troen; Møgelhøj, Andreas
2012-01-01
A methodology for semiempirical density functional optimization, using regularization and cross-validation methods from machine learning, is developed. We demonstrate that such methods enable well-behaved exchange-correlation approximations in very flexible model spaces, thus avoiding the overfit...
Error-based analysis of optimal tuning functions explains phenomena observed in sensory neurons
Directory of Open Access Journals (Sweden)
Steve Yaeli
2010-10-01
Biological systems display impressive capabilities in effectively responding to environmental signals in real time. There is increasing evidence that organisms may indeed be employing near optimal Bayesian calculations in their decision-making. An intriguing question relates to the properties of optimal encoding methods, namely determining the properties of neural populations in sensory layers that optimize performance, subject to physiological constraints. Within an ecological theory of neural encoding/decoding, we show that optimal Bayesian performance requires neural adaptation which reflects environmental changes. Specifically, we predict that neuronal tuning functions possess an optimal width, which increases with prior uncertainty and environmental noise, and decreases with the decoding time window. Furthermore, even for static stimuli, we demonstrate that dynamic sensory tuning functions, acting at relatively short time scales, lead to improved performance. Interestingly, the narrowing of tuning functions as a function of time was recently observed in several biological systems. Such results set the stage for a functional theory which may explain the high reliability of sensory systems, and the utility of neuronal adaptation occurring at multiple time scales.
Energy Technology Data Exchange (ETDEWEB)
Reeve, Samuel Temple; Strachan, Alejandro, E-mail: strachan@purdue.edu
2017-04-01
We use functional, Fréchet, derivatives to quantify how thermodynamic outputs of a molecular dynamics (MD) simulation depend on the potential used to compute atomic interactions. Our approach quantifies the sensitivity of the quantities of interest with respect to the input functions as opposed to its parameters as is done in typical uncertainty quantification methods. We show that the functional sensitivity of the average potential energy and pressure in isothermal, isochoric MD simulations using Lennard–Jones two-body interactions can be used to accurately predict those properties for other interatomic potentials (with different functional forms) without re-running the simulations. This is demonstrated under three different thermodynamic conditions, namely a crystal at room temperature, a liquid at ambient pressure, and a high pressure liquid. The method provides accurate predictions as long as the change in potential can be reasonably described to first order and does not significantly affect the region in phase space explored by the simulation. The functional uncertainty quantification approach can be used to estimate the uncertainties associated with constitutive models used in the simulation and to correct predictions if a more accurate representation becomes available.
Modeling coherent errors in quantum error correction
Greenbaum, Daniel; Dutton, Zachary
2018-01-01
Analysis of quantum error correcting codes is typically done using a stochastic, Pauli channel error model for describing the noise on physical qubits. However, it was recently found that coherent errors (systematic rotations) on physical data qubits result in both physical and logical error rates that differ significantly from those predicted by a Pauli model. Here we examine the accuracy of the Pauli approximation for noise containing coherent errors (characterized by a rotation angle ε) under the repetition code. We derive an analytic expression for the logical error channel as a function of arbitrary code distance d and concatenation level n, in the small error limit. We find that coherent physical errors result in logical errors that are partially coherent and therefore non-Pauli. However, the coherent part of the logical error is negligible at fewer than ε^−(d^n−1) error correction cycles when the decoder is optimized for independent Pauli errors, thus providing a regime of validity for the Pauli approximation. Above this number of correction cycles, the persistent coherent logical error will cause logical failure more quickly than the Pauli model would predict, and this may need to be combated with coherent suppression methods at the physical level or larger codes.
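The qualitative gap between coherent and Pauli (stochastic) error accumulation can be seen in a single-qubit toy calculation, which is not the repetition-code analysis itself but shows why coherent errors eventually outpace the Pauli prediction:

```python
import numpy as np

# n identical small rotations by angle eps compose coherently into one
# rotation by n*eps, so the infidelity grows like sin^2(n*eps/2) ~ (n*eps)^2.
# The Pauli-twirled model instead accumulates flip probability linearly.
eps = 0.01                                       # per-cycle rotation angle
n = 50                                           # number of cycles
coherent_err = np.sin(n * eps / 2.0) ** 2        # amplitude-level accumulation
p = np.sin(eps / 2.0) ** 2                       # per-cycle Pauli flip probability
stochastic_err = 0.5 * (1.0 - (1.0 - 2.0 * p) ** n)   # incoherent accumulation
```

For these numbers the coherent error exceeds the stochastic prediction by roughly a factor of n, which is the mechanism behind the breakdown of the Pauli approximation after enough correction cycles.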
Analysis of error in TOMS total ozone as a function of orbit and attitude parameters
Gregg, W. W.; Ardanuy, P. E.; Braun, W. C.; Vallette, B. J.; Bhartia, P. K.; Ray, S. N.
1991-01-01
Computer simulations of orbital scenarios were performed to examine the effects of orbital altitude, equator crossing time, attitude uncertainty, and orbital eccentricity on ozone observations by future satellites. These effects were assessed by determining changes in solar and viewing geometry and earth daytime coverage loss. The importance of these changes on ozone retrieval was determined by simulating uncertainties in the TOMS ozone retrieval algorithm. The major findings are as follows: (1) Drift of equator crossing time from local noon would have the largest effect on the quality of ozone derived from TOMS. The most significant effect of this drift is the loss of earth daytime coverage in the winter hemisphere. The loss in coverage increases from 1 degree latitude for + or - 1 hour from noon, 6 degrees for + or - 3 hours from noon, to 53 degrees for + or - 6 hours from noon. An additional effect is the increase in ozone retrieval errors due to high solar zenith angles. (2) To maintain contiguous earth coverage, the maximum scan angle of the sensor must be increased with decreasing orbital altitude. The maximum scan angle required for full coverage at the equator varies from 60 degrees at 600 km altitude to 45 degrees at 1200 km. This produces an increase in spacecraft zenith angle, theta, which decreases the ozone retrieval accuracy. The range in theta was approximately 72 degrees for 600 km to approximately 57 degrees at 1200 km. (3) The effect of elliptical orbits is to create gaps in coverage along the subsatellite track. An elliptical orbit with a 200 km perigee and 1200 km apogee produced a maximum earth coverage gap of about 45 km at the perigee at nadir. (4) An attitude uncertainty of 0.1 degree in each axis (pitch, roll, and yaw) produced errors in the maximum scan angle required to view the pole and in the maximum solar zenith angle.
mBEEF-vdW: Robust fitting of error estimation density functionals
DEFF Research Database (Denmark)
Lundgård, Keld Troen; Wellendorff, Jess; Voss, Johannes
2016-01-01
We propose a general-purpose semilocal/nonlocal exchange-correlation functional approximation, named mBEEF-vdW. The exchange is a meta generalized gradient approximation, and the correlation is a semilocal and nonlocal mixture, with the Rutgers-Chalmers approximation for van der Waals (vdW) forces...
Parametric Covariance Model for Horizon-Based Optical Navigation
Hikes, Jacob; Liounis, Andrew J.; Christian, John A.
2016-01-01
This Note presents an entirely parametric version of the covariance for horizon-based optical navigation measurements. The covariance can be written as a function of only the spacecraft position, two sensor design parameters, the illumination direction, the size of the observed planet, the size of the lit arc to be used, and the total number of observed horizon points. As a result, one may now more clearly understand the sensitivity of horizon-based optical navigation performance as a function of these key design parameters, which is insight that was obscured in previous (and nonparametric) versions of the covariance. Finally, the new parametric covariance is shown to agree with both the nonparametric analytic covariance and results from a Monte Carlo analysis.
Type-Safe Compilation of Covariant Specialization: A Practical Case
1995-11-01
...modify the semantics of languages that use covariant specialization in order to improve their type safety. We demonstrate our technique using O2, a... ...not affect the semantics of those computations without type errors. Furthermore, the new semantics of the previously ill-typed computations is defined...
Analysis of Covariance and Randomized Block Design with Heterogeneous Slopes.
Klockars, Alan J.; Beretvas, S. Natasha
2001-01-01
Compared the Type I error rate and the power to detect differences in slopes and additive treatment effects of analysis of covariance (ANCOVA) and randomized block designs through a Monte Carlo simulation. Results show that ANCOVA was the more powerful option for tests of both slopes and means in almost all simulations. (SLD)
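A minimal version of such a Monte Carlo check can estimate the ANCOVA Type I error rate under a true null; the sample sizes, effect sizes, and the normal approximation to the t reference distribution below are illustrative assumptions:

```python
import numpy as np

# Monte Carlo Type I error of the ANCOVA treatment test: simulate data with a
# real covariate effect but NO group effect, fit y ~ 1 + group + x by OLS,
# and count how often the group coefficient is (wrongly) declared significant.
rng = np.random.default_rng(11)

def one_sim(n=100):
    x = rng.normal(size=2 * n)                   # covariate
    group = np.repeat([0.0, 1.0], n)             # treatment indicator
    y = 1.0 + 0.5 * x + rng.normal(size=2 * n)   # null: no group effect
    X = np.column_stack([np.ones_like(y), group, x])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    s2 = resid @ resid / (y.size - 3)            # residual variance estimate
    cov = s2 * np.linalg.inv(X.T @ X)
    t = beta[1] / np.sqrt(cov[1, 1])             # t statistic for group effect
    return abs(t) > 1.96                         # ~5% nominal level (normal approx.)

rate = np.mean([one_sim() for _ in range(2000)])
```

An empirical rejection rate close to the nominal 5% indicates the test holds its Type I error under these conditions.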
Skylab water balance error analysis
Leonard, J. I.
1977-01-01
Estimates of the precision of the net water balance were obtained for the entire Skylab preflight and inflight phases as well as for the first two weeks of flight. Quantitative estimates of both total sampling errors and instrumentation errors were obtained. It was shown that measurement error is minimal in comparison to biological variability and little can be gained from improvement in analytical accuracy. In addition, a propagation of error analysis demonstrated that total water balance error could be accounted for almost entirely by the errors associated with body mass changes. Errors due to interaction between terms in the water balance equation (covariances) represented less than 10% of the total error. Overall, the analysis provides evidence that daily measurements of body water changes obtained from the indirect balance technique are reasonable, precise, and reliable. The method is not biased toward net retention or loss.
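The covariance ("interaction") term in a propagation-of-error analysis can be sketched for a generic balance equation; the numbers below are illustrative, not Skylab data:

```python
import numpy as np

# For a balance B = intake - output, the variance decomposes exactly as
# var(B) = var(I) + var(O) - 2*cov(I, O); the covariance piece is the
# interaction contribution the analysis found to be under 10% of the total.
rng = np.random.default_rng(3)
intake = rng.normal(30.0, 1.0, 100_000)
output = 0.2 * intake + rng.normal(24.0, 2.0, 100_000)   # weakly coupled terms
balance = intake - output
cov_io = ((intake - intake.mean()) * (output - output.mean())).mean()
var_direct = balance.var()                                # measured directly
var_decomposed = intake.var() + output.var() - 2.0 * cov_io
```

Note the population (ddof=0) normalization is used throughout so the decomposition holds as an exact sample identity.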
Dummy covariates in CUB models
Directory of Open Access Journals (Sweden)
Maria Iannario
2013-05-01
Full Text Available In this paper we discuss the use of dummy variables as sensible covariates in a class of statistical models which aim at explaining the subjects’ preferences with respect to several items. After a brief introduction to CUB models, the work considers statistical interpretations of dummy covariates. Then, a simulation study is performed to evaluate the power discrimination of an asymptotic test among sub-populations. Some empirical evidences and concluding remarks end the paper.
Herbst, Michael; Bornemann, Ludger; Graf, Alexander; Welp, Gerd; Vereecken, Harry; Amelung, Wulf
2010-05-01
Soil heterotrophic respiration fluxes at field scale exhibit substantial spatial variability. Chamber-based measurements of respiration fluxes were carried out within a 40x180 m bare soil plot. Soil temperatures were measured simultaneously to the flux measurements. Further, we used measurements of total soil organic carbon content, apparent electrical conductivity as well as mid-infrared spectroscopy-based carbon fractions as co-variates. Furthermore, basic soil properties such as texture were determined as co-variates. After computing correlation coefficients, a stepwise multiple linear regression procedure was used to spatially predict bare soil respiration from the co-variates. In particular the soil carbon fractions and the apparent electrical conductivity show a certain, even though limited, predictive potential. In a first step we applied external drift kriging to determine the improvement of using co-variates in an estimation procedure in comparison to ordinary kriging. The relative improvement using the co-variates in terms of the root mean square error was moderate, at about 12%. In a second step we applied simulated annealing to perform stochastic simulations conditioned with external drift kriging to generate more realistic spatial patterns of heterotrophic respiration at plot scale. The conditional stochastic simulations revealed a significantly improved reproduction of the probability density function and the semivariogram of the original point data.
Directory of Open Access Journals (Sweden)
Bárcena José A
2008-12-01
Full Text Available Abstract Background Annotation of protein-coding genes is a key step in sequencing projects. Protein functions are mainly assigned on the basis of the amino acid sequence alone by searching for homologous proteins. However, fully automated annotation processes often lead to wrong prediction of protein functions, and therefore time-intensive manual curation is often essential. Here we describe a fast and reliable way to correct function annotation in sequencing projects, focusing on surface proteomes. We use a proteomics approach, previously proven to be very powerful for identifying new vaccine candidates against Gram-positive pathogens. It consists of shaving the surface of intact cells with two proteases, the specific cleavage-site trypsin and the unspecific proteinase K, followed by LC/MS/MS analysis of the resulting peptides. The identified proteins are contrasted by computational analysis and their sequences are inspected to correct possible errors in function prediction. Results When applied to the zoonotic pathogen Streptococcus suis, of which two strains have been recently sequenced and annotated, we identified a set of surface proteins without cytoplasmic contamination: all the proteins identified had exporting or retention signals towards the outside and/or the cell surface, and viability of protease-treated cells was not affected. The combination of both experimental evidences and computational methods allowed us to determine that two of these proteins are putative extracellular new adhesins that had been previously attributed a wrong cytoplasmic function. One of them is a putative component of the pilus of this bacterium. Conclusion We illustrate the complementary nature of laboratory-based and computational methods to examine in concert the localization of a set of proteins in the cell, and demonstrate the utility of this proteomics-based strategy to experimentally correct function annotation errors in sequencing projects. This
Visualization and assessment of spatio-temporal covariance properties
Huang, Huang
2017-11-23
Spatio-temporal covariances are important for describing the spatio-temporal variability of underlying random fields in geostatistical data. For second-order stationary random fields, there exist subclasses of covariance functions that assume a simpler spatio-temporal dependence structure with separability and full symmetry. However, it is challenging to visualize and assess separability and full symmetry from spatio-temporal observations. In this work, we propose a functional data analysis approach that constructs test functions using the cross-covariances from time series observed at each pair of spatial locations. These test functions of temporal lags summarize the properties of separability or symmetry for the given spatial pairs. We use functional boxplots to visualize the functional median and the variability of the test functions, where the extent of departure from zero at all temporal lags indicates the degree of non-separability or asymmetry. We also develop a rank-based nonparametric testing procedure for assessing the significance of the non-separability or asymmetry. Essentially, the proposed methods only require the analysis of temporal covariance functions. Thus, a major advantage over existing approaches is that there is no need to estimate any covariance matrix for selected spatio-temporal lags. The performances of the proposed methods are examined by simulations with various commonly used spatio-temporal covariance models. To illustrate our methods in practical applications, we apply them to real datasets, including weather station data and climate model outputs.
Khanesar, Mojtaba Ahmadieh; Kayacan, Erdal; Reyhanoglu, Mahmut; Kaynak, Okyay
2015-04-01
A novel type-2 fuzzy membership function (MF) in the form of an ellipse has recently been proposed in the literature, whose uncertainty parameters are decoupled from the parameters that determine the center and the support. This property has enabled the proposers to make an analytical comparison of the noise rejection capabilities of type-1 fuzzy logic systems with their type-2 counterparts. In this paper, a sliding mode control theory-based learning algorithm is proposed for an interval type-2 fuzzy logic system that benefits from elliptic type-2 fuzzy MFs. The learning is based on the feedback error learning method; not only is the stability of the learning proved, but the stability of the overall system is also shown by adding an additional component to the control scheme to ensure robustness. In order to test the efficiency and efficacy of the proposed learning and control algorithm, the trajectory tracking problem of a magnetic rigid spacecraft is studied. The simulation results show that the proposed control algorithm gives better performance in terms of a smaller steady-state error and a faster transient response as compared to conventional control algorithms.
Quantile Regression With Measurement Error
Wei, Ying
2009-08-27
Regression quantiles can be substantially biased when the covariates are measured with error. In this paper we propose a new method that produces consistent linear quantile estimation in the presence of covariate measurement error. The method corrects the measurement error induced bias by constructing joint estimating equations that simultaneously hold for all the quantile levels. An iterative EM-type estimation algorithm to obtain the solutions to such joint estimation equations is provided. The finite sample performance of the proposed method is investigated in a simulation study, and compared to the standard regression calibration approach. Finally, we apply our methodology to part of the National Collaborative Perinatal Project growth data, a longitudinal study with an unusual measurement error structure. © 2009 American Statistical Association.
Directory of Open Access Journals (Sweden)
Francisco Resquín
2016-07-01
Full Text Available Hybrid robotic systems represent a novel research field, where functional electrical stimulation (FES is combined with a robotic device for rehabilitation of motor impairment. Under this approach, the design of robust FES controllers still remains an open challenge. In this work, we aimed at developing a learning FES controller to assist in the performance of reaching movements in a simple hybrid robotic system setting. We implemented a Feedback Error Learning (FEL control strategy consisting of a feedback PID controller and a feedforward controller based on a neural network. A passive exoskeleton complemented the FES controller by compensating the effects of gravity. We carried out experiments with healthy subjects to validate the performance of the system. Results show that the FEL control strategy is able to adjust the FES intensity to track the desired trajectory accurately without the need of a previous mathematical model.
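The feedback-error-learning idea above, a feedback controller whose output doubles as the teaching signal for a learned feedforward term, can be sketched on a toy first-order plant. All constants, the single-weight feedforward, and the P-only feedback are illustrative assumptions, not the paper's PID-plus-neural-network setup:

```python
import math

# Toy plant: y' = -a*y + b*u, integrated with Euler steps (constants hypothetical).
a, b, dt = 2.0, 1.0, 0.01
kp = 8.0      # proportional feedback gain (PID reduced to P for brevity)
w = 0.0       # single feedforward weight: u_ff = w * y_ref
lr = 0.5      # feedforward learning rate

y = 0.0
errors = []
for step in range(3000):
    y_ref = math.sin(0.01 * step)      # desired trajectory
    u_fb = kp * (y_ref - y)            # feedback controller output
    u_ff = w * y_ref                   # learned feedforward term
    y += dt * (-a * y + b * (u_fb + u_ff))
    # FEL rule: the feedback output itself is the error signal for the feedforward.
    w += lr * u_fb * y_ref * dt
    errors.append(abs(y_ref - y))

early = sum(errors[:500]) / 500        # mean tracking error before learning
late = sum(errors[-500:]) / 500        # mean tracking error after learning
```

As the feedforward weight approaches its inverse-model value, the feedback output shrinks and tracking improves, which is the mechanism the FEL controller exploits.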
Everard, Eoin M; Harrison, Andrew J; Lyons, Mark
2017-05-01
Everard, EM, Harrison, AJ, and Lyons, M. Examining the relationship between the functional movement screen and the landing error scoring system in an active, male collegiate population. J Strength Cond Res 31(5): 1265-1272, 2017-In recent years, there has been an increasing focus on movement screening as the principal aspect of preparticipation testing. Two of the most common movement screening tools are the Functional Movement Screen (FMS) and the Landing Error Scoring System (LESS). Several studies have examined the reliability and validity of these tools, but so far, there have been no studies comparing the results of these 2 screening tools against each other. Therefore, the purpose of this study was to determine the relationship between FMS scores and LESS scores. Ninety-eight male college athletes actively competing in sport (Gaelic games, soccer, athletics, boxing/mixed martial arts, Olympic weightlifting) participated in the study and performed the FMS and LESS screens. Both the 21-point and 100-point scoring systems were used to score the FMS. Spearman's correlation coefficients were used to determine the relationship between the 2 screening scores. The results showed a significant moderate correlation between FMS and LESS scores (rho 100 and 21 point = -0.528; -0.487; p < 0.001). In addition, r values of 0.26 and 0.23 indicate a poor shared variance between the 2 screens. The results indicate that performing well in one of the screens does not necessarily equate to performing well in the other. This has practical implications as it highlights that both screens may assess different movement patterns and should not be used as a substitute for each other.
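The Spearman coefficient used above is simply the Pearson correlation computed on ranks; a minimal tie-free sketch (proper tie handling would use mid-ranks):

```python
def spearman_rho(x, y):
    """Spearman's rank correlation: Pearson correlation of the ranks.
    Sketch for tie-free data only."""
    def ranks(v):
        order = sorted(range(len(v)), key=lambda i: v[i])
        r = [0.0] * len(v)
        for rank, i in enumerate(order):
            r[i] = float(rank + 1)
        return r

    rx, ry = ranks(x), ranks(y)
    n = len(x)
    mx, my = sum(rx) / n, sum(ry) / n
    num = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    den = (sum((a - mx) ** 2 for a in rx) * sum((b - my) ** 2 for b in ry)) ** 0.5
    return num / den

# Monotone toy scores: perfect agreement and perfect disagreement.
rho_up = spearman_rho([1, 2, 3, 4], [10, 20, 30, 40])
rho_down = spearman_rho([1, 2, 3, 4], [40, 30, 20, 10])
```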
Energy Technology Data Exchange (ETDEWEB)
Smith, D.L.
1988-01-01
The last decade has been a period of rapid development in the implementation of covariance-matrix methodology in nuclear data research. This paper offers some perspective on the progress which has been made, on some of the unresolved problems, and on the potential yet to be realized. These discussions address a variety of issues related to the development of nuclear data. Topics examined are: the importance of designing and conducting experiments so that error information is conveniently generated; the procedures for identifying error sources and quantifying their magnitudes and correlations; the combination of errors; the importance of consistent and well-characterized measurement standards; the role of covariances in data parameterization (fitting); the estimation of covariances for values calculated from mathematical models; the identification of abnormalities in covariance matrices and the analysis of their consequences; the problems encountered in representing covariance information in evaluated files; the role of covariances in the weighting of diverse data sets; the comparison of various evaluations; the influence of primary-data covariance in the analysis of covariances for derived quantities (sensitivity); and the role of covariances in the merging of the diverse nuclear data information. 226 refs., 2 tabs.
Energy Technology Data Exchange (ETDEWEB)
Pang, Yang [Columbia Univ., New York, NY (United States)]|[Brookhaven National Labs., Upton, NY (United States)
1997-09-22
Many phenomenological models for relativistic heavy ion collisions share a common framework - the relativistic Boltzmann equations. Within this framework, a nucleus-nucleus collision is described by the evolution of phase-space distributions of several species of particles. The equations can be effectively solved with the cascade algorithm by sampling each phase-space distribution with points, i.e. {delta}-functions, and by treating the interaction terms as collisions of these points. In between collisions, each point travels on a straight line trajectory. In most implementations of the cascade algorithm, each physical particle, e.g. a hadron or a quark, is often represented by one point. Thus, the cross-section for a collision of two points is just the cross-section of the physical particles, which can be quite large compared to the local density of the system. For an ultra-relativistic nucleus-nucleus collision, this could lead to a large violation of the Lorentz invariance. By using the invariance property of the Boltzmann equation under a scale transformation, a Lorentz invariant cascade algorithm can be obtained. The General Cascade Program - GCP - is a tool for solving the relativistic Boltzmann equation with any number of particle species and very general interactions with the cascade algorithm.
Govindan, Siva Shangari; Agamuthu, P
2014-10-01
Waste management can be regarded as a cross-cutting environmental 'mega-issue'. Sound waste management practices support the provision of basic needs for general health, such as clean air, clean water and a safe supply of food. In addition, climate change mitigation efforts can be achieved through reduction of greenhouse gas emissions from waste management operations, such as landfills. Landfills generate landfill gas, especially methane, as a result of anaerobic degradation of the degradable components of municipal solid waste. Evaluating the mode of generation and collection of landfill gas has posed a challenge over time. Scientifically, landfill gas generation rates are presently estimated using numerical models. In this study the Intergovernmental Panel on Climate Change's Waste Model is used to estimate the methane generated from a Malaysian sanitary landfill. Key parameters of the model, the decay rate and degradable organic carbon, are analysed using two different approaches: the bulk waste approach and the waste composition approach. The model is then validated using error function analysis, and the optimum decay rate and degradable organic carbon for both approaches are also obtained. The best-fitting values for the bulk waste approach are a decay rate of 0.08 y(-1) and a degradable organic carbon value of 0.12; for the waste composition approach the decay rate was found to be 0.09 y(-1) and the degradable organic carbon value 0.08. From this validation exercise, the estimated error was reduced by 81% and 69% for the bulk waste and waste composition approaches, respectively. In conclusion, this type of modelling could constitute a sensible starting point for introducing careful planning for efficient gas recovery in individual landfills. © The Author(s) 2014.
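The IPCC-style model referenced above rests on first-order decay of the degradable carbon in each year's deposited waste. A highly simplified sketch follows, with uniform hypothetical deposits, k and DOC set to the bulk-waste values found above, output in arbitrary carbon-mass units, and the model's unit conversions and correction factors ignored:

```python
import math

def fod_methane(deposits, k, doc, years):
    """First-order-decay sketch: the carbon in each year's deposit degrades as
    exp(-k*t), so generation in year t sums the decaying contributions of all
    deposits made up to and including year t."""
    out = []
    for t in range(years):
        gen = sum(w * doc * k * math.exp(-k * (t - y))
                  for y, w in enumerate(deposits[:t + 1]))
        out.append(gen)
    return out

# Hypothetical constant deposits (tonnes/year); k and DOC from the bulk-waste fit.
deposits = [100.0] * 10
q = fod_methane(deposits, k=0.08, doc=0.12, years=10)
```

With constant annual deposits the yearly generation rises monotonically, since each new deposit adds to the still-decaying stock of earlier ones.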
Covariant Description of Isothermic Surfaces
Tafel, J.
2016-12-01
We present a covariant formulation of the Gauss-Weingarten equations and the Gauss-Mainardi-Codazzi equations for surfaces in 3-dimensional curved spaces. We derive a coordinate invariant condition on the first and second fundamental form which is locally necessary and sufficient for the surface to be isothermic. We show how to construct isothermic coordinates.
Noisy covariance matrices and portfolio optimization II
Pafka, Szilárd; Kondor, Imre
2003-03-01
Recent studies inspired by results from random matrix theory (Galluccio et al.: Physica A 259 (1998) 449; Laloux et al.: Phys. Rev. Lett. 83 (1999) 1467; Risk 12 (3) (1999) 69; Plerou et al.: Phys. Rev. Lett. 83 (1999) 1471) found that covariance matrices determined from empirical financial time series appear to contain such a high amount of noise that their structure can essentially be regarded as random. This seems, however, to be in contradiction with the fundamental role played by covariance matrices in finance, which constitute the pillars of modern investment theory and have also gained industry-wide applications in risk management. Our paper is an attempt to resolve this embarrassing paradox. The key observation is that the effect of noise strongly depends on the ratio r = n/T, where n is the size of the portfolio and T the length of the available time series. On the basis of numerical experiments and analytic results for some toy portfolio models we show that for relatively large values of r (e.g. 0.6) noise does, indeed, have the pronounced effect suggested by Galluccio et al. (1998), Laloux et al. (1999) and Plerou et al. (1999) and illustrated later by Laloux et al. (Int. J. Theor. Appl. Finance 3 (2000) 391), Plerou et al. (Phys. Rev. E, e-print cond-mat/0108023) and Rosenow et al. (Europhys. Lett., e-print cond-mat/0111537) in a portfolio optimization context, while for smaller r (around 0.2 or below), the error due to noise drops to acceptable levels. Since the length of available time series is for obvious reasons limited in any practical application, any bound imposed on the noise-induced error translates into a bound on the size of the portfolio. In a related set of experiments we find that the effect of noise depends also on whether the problem arises in asset allocation or in a risk measurement context: if covariance matrices are used simply for measuring the risk of portfolios with a fixed composition rather than as inputs to optimization, the
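The dependence on r = n/T can be illustrated with a toy experiment: estimate the covariance of n uncorrelated unit-variance series from T observations and measure the estimation error (the sizes below are arbitrary illustrative choices):

```python
import random

def sample_cov_error(n, T, seed=0):
    """Frobenius-norm error of the sample covariance of n independent
    unit-variance Gaussian series estimated from T observations
    (the true covariance is the identity matrix)."""
    rng = random.Random(seed)
    data = [[rng.gauss(0.0, 1.0) for _ in range(n)] for _ in range(T)]
    means = [sum(col) / T for col in zip(*data)]
    err = 0.0
    for i in range(n):
        for j in range(n):
            cij = sum((row[i] - means[i]) * (row[j] - means[j])
                      for row in data) / (T - 1)
            true = 1.0 if i == j else 0.0
            err += (cij - true) ** 2
    return err ** 0.5

noisy = sample_cov_error(n=30, T=50)    # r = n/T = 0.6: noise-dominated
clean = sample_cov_error(n=30, T=300)   # r = 0.1: much closer to the truth
```

Shrinking r by lengthening the series (or shrinking the portfolio) reduces the noise in the estimated matrix, which is the trade-off the paper quantifies.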
Graphing survival curve estimates for time-dependent covariates.
Schultz, Lonni R; Peterson, Edward L; Breslau, Naomi
2002-01-01
Graphical representation of statistical results is often used to assist readers in the interpretation of the findings. This is especially true for survival analysis, where there is an interest in explaining the patterns of survival over time for specific covariates. For fixed categorical covariates, such as a group membership indicator, Kaplan-Meier estimates (1958) can be used to display the curves. For time-dependent covariates this method may not be adequate. Simon and Makuch (1984) proposed a technique that evaluates the covariate status of the individuals remaining at risk at each event time. The method takes into account the change in an individual's covariate status over time. The survival computations are the same as in the Kaplan-Meier method, in that the conditional survival estimates are a function of the ratio of the number of events to the number at risk at each event time. The difference between the two methods is that the individuals at risk within each level defined by the covariate are not fixed at time 0 in the Simon and Makuch method, as they are with the Kaplan-Meier method. Examples of how the two methods can differ for time-dependent covariates in Cox proportional hazards regression analysis are presented.
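For reference, the shared conditional-survival computation is the Kaplan-Meier product over event times; a minimal sketch on toy data (the Simon-Makuch variant would additionally re-evaluate each subject's covariate level at every event time):

```python
def kaplan_meier(times, events):
    """Kaplan-Meier estimate: product over event times of (1 - deaths/at_risk)."""
    event_times = sorted(set(t for t, e in zip(times, events) if e))
    surv, curve = 1.0, []
    for t in event_times:
        at_risk = sum(1 for ti in times if ti >= t)
        deaths = sum(1 for ti, ei in zip(times, events) if ti == t and ei)
        surv *= 1.0 - deaths / at_risk
        curve.append((t, surv))
    return curve

# Toy data: follow-up times with event indicator (1 = event, 0 = censored).
times = [1, 2, 2, 3, 4, 5]
events = [1, 1, 0, 1, 0, 1]
curve = kaplan_meier(times, events)
```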
Directory of Open Access Journals (Sweden)
Takashi Watanabe
2010-01-01
Full Text Available A feedback error-learning (FEL) controller consisting of a proportional-integral-derivative (PID) controller and an artificial neural network (ANN) has been shown to be applicable to functional electrical stimulation (FES). Because of integral (reset) windup, however, delay or overshoot sometimes occurred in feedback FES control, which was considered to cause inappropriate ANN learning and to limit the feasibility of the FEL controller for FES to controlling 1-DOF movements stimulating 2 muscles. In this paper, an FEL-FES controller was developed applying an antireset windup (ARW) scheme that worked based on total controller output. The FEL-FES controller with the ARW was examined in controlling 2-DOF movements of the wrist joint stimulating 4 muscles through computer simulation. The developed FEL-FES controller was found to realize the inverse dynamics model appropriately and to have the possibility of being used as an open-loop controller. The developed controller would be effective in multiple-DOF movement control stimulating several muscles.
Watanabe, Takashi; Fukushima, Keisuke
2011-03-01
The Feedback Error Learning controller was found to be applicable to functional electrical stimulation control of wrist joint movements in control tests with subjects and in computer simulation in our previous studies. However, only sinusoidal trajectories were used for the target joint angles, and the artificial neural network (ANN) was trained for each trajectory. In this study, focusing on two-point reaching movements, target trajectories were generated by the minimum jerk model. In computer simulation tests, ANNs trained with different numbers of target trajectories under the same total number of control iterations (50 control trials) were compared. The inverse dynamics model (IDM) of the controlled limb realized by the trained ANN decreased the output power of the feedback controller and improved tracking performance on unlearned target trajectories. The IDM performed most effectively when the target trajectory was changed every control trial during ANN training. © 2011, Copyright the Authors. Artificial Organs © 2011, International Center for Artificial Organs and Transplantation and Wiley Periodicals, Inc.
Geise, Robert
2017-07-01
Any measurement of an electrical quantity, e.g. in network or spectrum analysis, is influenced by noise inducing a measurement uncertainty, the statistical quantification of which is rarely discussed in the literature. A measurement uncertainty in such a context means a measurement error that is associated with a given probability, e.g. one standard deviation. The measurement uncertainty mainly depends on the signal-to-noise ratio (SNR), but additionally can be influenced by the acquisition stage of the measurement setup. The analytical treatment of noise is hardly feasible, as the physical nature of a noise vector needs to account for a certain magnitude and phase in a combined probability function. However, in a previous work a closed-form analytical solution for the uncertainties of amplitude and phase measurements depending on the SNR was derived and validated. The derived formula turned out to be a good representation of the measured reality, though several approximations had to be made for the sake of an analytical expression. This contribution gives a physical interpretation of the approximations made and discusses the results in the context of the acquisition of measurement data.
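The SNR dependence of such uncertainties is easy to reproduce by Monte Carlo: add complex Gaussian noise to a unit signal and take the spread of the measured phase. This is a sketch, not the paper's derivation; SNR is defined here as signal power over total noise power:

```python
import cmath
import math
import random

def phase_std(snr_db, trials=20000, seed=1):
    """Monte Carlo sketch: standard deviation (radians) of the measured phase
    of a unit signal with additive complex Gaussian noise at the given SNR."""
    rng = random.Random(seed)
    # Split the total noise power equally between the two quadratures.
    sigma = 10.0 ** (-snr_db / 20.0) / math.sqrt(2.0)
    phases = []
    for _ in range(trials):
        z = 1.0 + complex(rng.gauss(0.0, sigma), rng.gauss(0.0, sigma))
        phases.append(cmath.phase(z))
    m = sum(phases) / trials
    return (sum((p - m) ** 2 for p in phases) / trials) ** 0.5

# At high SNR the phase std approaches 1/sqrt(2 * SNR_linear) radians.
std_20db = phase_std(20.0)
```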
Hoede, C.; Li, Z.
2001-01-01
In coding theory the problem of decoding focuses on error vectors. In the simplest situation code words are $(0,1)$-vectors, as are the received messages and the error vectors. Comparison of a received word with the code words yields a set of error vectors. In deciding on the original code word,
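Minimum-distance decoding makes the role of the error vector concrete: the chosen codeword's XOR with the received word is the implied error vector (the code below is an arbitrary toy example):

```python
def nearest_codeword(received, codewords):
    """Minimum-Hamming-distance decoding: pick the closest codeword; XOR-ing it
    with the received word recovers the implied error vector."""
    def hamming(a, b):
        return sum(x != y for x, y in zip(a, b))

    best = min(codewords, key=lambda c: hamming(c, received))
    error = tuple(x ^ y for x, y in zip(best, received))
    return best, error

# Arbitrary toy (0,1)-code; the received word differs from one codeword in one bit.
codes = [(0, 0, 0, 0, 0), (1, 1, 1, 0, 0), (0, 0, 1, 1, 1), (1, 1, 0, 1, 1)]
best, err = nearest_codeword((1, 1, 1, 1, 0), codes)
```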
Harshbarger, Nicole D; Anderson, Barton E; Lam, Kenneth C
2017-07-21
To evaluate associations between the Functional Movement Screen (FMS), Star Excursion Balance Test (SEBT), and Balance Error Scoring System (BESS) scores. Correlational. College athletic training facilities. Fifty-two intercollegiate athletes (men = 36 and women = 16) representing 8 sports and cleared for unrestricted sport participation. Participants completed the FMS, SEBT, and BESS, in random order, during 1 testing session. Testing order was randomized to control for fatigue and learning effects. Composite and item scores for the FMS, SEBT, and BESS. A fair, negative correlation was found between FMS asymmetry and SEBT composite (r = -0.31, P = 0.03) scores. Fair, positive correlations were reported for FMS rotary stability task and SEBT anterior (r = 0.37-0.41, P ≤ 0.007) and posteromedial (r = 0.31, P = 0.03) reaches. Fair, negative correlations were reported for FMS deep squat and BESS single-leg firm (r = -0.33, P = 0.02), double-leg foam (r = -0.34, P = 0.02) and tandem foam (r = -0.40, P = 0.003), FMS inline lunge and BESS single-leg firm (r = -0.39, P = 0.004), FMS trunk stability pushup and tandem foam (r = -0.31, P = 0.025), and FMS composite and BESS single-leg firm (r = -0.37, P = 0.007). Little-to-no correlations were reported for remaining comparisons. Results indicate that each instrument provides distinct information about function, with only small areas of overlap. Associations between the FMS asymmetry score and SEBT composite score may indicate a relationship between movement asymmetry and postural stability. Associations between the FMS deep squat and BESS foam tasks may be related to underlying neuromuscular control factors.
Inferring Meta-covariates in Classification
Harris, Keith; McMillan, Lisa; Girolami, Mark
This paper develops an alternative method for gene selection that combines model based clustering and binary classification. By averaging the covariates within the clusters obtained from model based clustering, we define “meta-covariates” and use them to build a probit regression model, thereby selecting clusters of similarly behaving genes, aiding interpretation. This simultaneous learning task is accomplished by an EM algorithm that optimises a single likelihood function which rewards good performance at both classification and clustering. We explore the performance of our methodology on a well known leukaemia dataset and use the Gene Ontology to interpret our results.
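The core construction above, averaging the covariates within each cluster to obtain one "meta-covariate" per cluster, can be sketched as follows (cluster labels are assigned by hand here rather than by model-based clustering, and the downstream probit step is omitted):

```python
from statistics import mean

def meta_covariates(X, clusters):
    """Average the covariates (columns of X) within each cluster, giving one
    'meta-covariate' per cluster for every sample (row)."""
    k = max(clusters) + 1
    return [[mean(row[j] for j, c in enumerate(clusters) if c == g)
             for g in range(k)]
            for row in X]

# Toy expression matrix: 3 samples x 4 genes; genes 0-1 and 2-3 form two clusters.
X = [[1.0, 3.0, 10.0, 12.0],
     [2.0, 4.0, 9.0, 11.0],
     [0.0, 2.0, 8.0, 10.0]]
Z = meta_covariates(X, clusters=[0, 0, 1, 1])  # 3 samples x 2 meta-covariates
```

The reduced matrix Z would then feed the probit regression, so that a selected coefficient points at a whole cluster of similarly behaving genes rather than a single one.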
Structural and Maturational Covariance in Early Childhood Brain Development.
Geng, Xiujuan; Li, Gang; Lu, Zhaohua; Gao, Wei; Wang, Li; Shen, Dinggang; Zhu, Hongtu; Gilmore, John H
2017-03-01
Brain structural covariance networks (SCNs) composed of regions with correlated variation are altered in neuropsychiatric disease and change with age. Little is known about the development of SCNs in early childhood, a period of rapid cortical growth. We investigated the development of structural and maturational covariance networks, including default, dorsal attention, primary visual and sensorimotor networks, in a longitudinal population of 118 children from birth to 2 years of age and compared them with intrinsic functional connectivity networks. We found that the structural covariance of all networks exhibits strong correlations mostly limited to their seed regions. By Age 2, default and dorsal attention structural networks are much less distributed compared with their functional maps. The maturational covariance maps, however, revealed significant couplings in rates of change between distributed regions, which partially recapitulate their functional networks. The structural and maturational covariance of the primary visual and sensorimotor networks shows similar patterns to the corresponding functional networks. Results indicate that functional networks are in place prior to structural networks, that correlated structural patterns in adults may arise in part from coordinated cortical maturation, and that regional co-activation in functional networks may guide and refine the maturation of SCNs over childhood development. © The Author 2016. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com.
Empirical Likelihood in Nonignorable Covariate-Missing Data Problems.
Xie, Yanmei; Zhang, Biao
2017-04-20
Missing covariate data occurs often in regression analysis, which frequently arises in the health and social sciences as well as in survey sampling. We study methods for the analysis of a nonignorable covariate-missing data problem in an assumed conditional mean function when some covariates are completely observed but other covariates are missing for some subjects. We adopt the semiparametric perspective of Bartlett et al. (Improving upon the efficiency of complete case analysis when covariates are MNAR. Biostatistics 2014;15:719-30) on regression analyses with nonignorable missing covariates, in which they have introduced the use of two working models, the working probability model of missingness and the working conditional score model. In this paper, we study an empirical likelihood approach to nonignorable covariate-missing data problems with the objective of effectively utilizing the two working models in the analysis of covariate-missing data. We propose a unified approach to constructing a system of unbiased estimating equations, where there are more equations than unknown parameters of interest. One useful feature of these unbiased estimating equations is that they naturally incorporate the incomplete data into the data analysis, making it possible to seek efficient estimation of the parameter of interest even when the working regression function is not specified to be the optimal regression function. We apply the general methodology of empirical likelihood to optimally combine these unbiased estimating equations. We propose three maximum empirical likelihood estimators of the underlying regression parameters and compare their efficiencies with other existing competitors. We present a simulation study to compare the finite-sample performance of various methods with respect to bias, efficiency, and robustness to model misspecification. The proposed empirical likelihood method is also illustrated by an analysis of a data set from the US National Health and
Brady, Christopher J; Villanti, Andrea C; Gandhi, Monica; Friedman, David S; Keay, Lisa
2012-10-01
To evaluate patient-reported outcome measures with the use of ready-made spectacles (RMS) and custom spectacles (CS) in an adult population in India with uncorrected refractive error (URE). Prospective, double-masked, randomized trial with 1-month follow-up. A total of 363 adults aged 18 to 45 years with ≥1 diopter (D) of URE (RMS, n = 183; CS, n = 180). All participants received complete refraction and were randomized to receive CS (full sphero-cylindrical correction) or RMS based on the spherical equivalent for the eye with lower refractive error but limited to the powers in the RMS inventory. Visual function and quality of life (VFQoL) instrument and participant satisfaction. Rasch scores for VFQoL increased from 1.14 to 4.37 logits in the RMS group and from 1.11 to 4.72 logits in the CS group: respective mean changes of 3.23 (95% confidence interval [CI], 2.90-3.56) vs. 3.61 (95% CI, 3.34-3.88), respectively. Mean patient satisfaction also increased by 1.83 points (95% CI, 1.60-2.06) on a 5-point Likert scale in the RMS group and by 2.04 points (95% CI, 1.83-2.24) in the CS group. In bivariate analyses, CS was not associated with increased VFQoL or patient satisfaction compared with the RMS group. In the full multivariable linear regression, the CS group had greater improvement when compared with those receiving RMS (+0.45 logits; 95% CI, 0.02-0.88), and subjects with astigmatism >2.00 D had significantly less improvement (-0.99 logits; 95% CI, -1.68 to -0.30) after controlling for demographic and vision-related characteristics. In multivariable analysis, increased change in patient satisfaction was related to demographic and optical characteristics, but not spectacle group. Ready-made spectacles produce large but slightly smaller improvements in VFQoL and similar satisfaction with vision at 1-month follow-up when compared with CS. Ready-made spectacles are suitable for the majority of individuals with URE in our study population, although those with high
A cautionary note on generalized linear models for covariance of unbalanced longitudinal data
Huang, Jianhua Z.
2012-03-01
Missing data in longitudinal studies can create enormous challenges in data analysis when coupled with the positive-definiteness constraint on a covariance matrix. For complete balanced data, the Cholesky decomposition of a covariance matrix makes it possible to remove the positive-definiteness constraint and use a generalized linear model setup to jointly model the mean and covariance using covariates (Pourahmadi, 2000). However, this approach may not be directly applicable when the longitudinal data are unbalanced, as coherent regression models for the dependence across all times and subjects may not exist. Within the existing generalized linear model framework, we show how to overcome this and other challenges by embedding the covariance matrix of the observed data for each subject in a larger covariance matrix and employing the familiar EM algorithm to compute the maximum likelihood estimates of the parameters and their standard errors. We illustrate and assess the methodology using real data sets and simulations. © 2011 Elsevier B.V.
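The modified Cholesky idea behind this generalized linear model setup can be sketched in a few lines of NumPy (an illustrative reconstruction, not the authors' code): the below-diagonal entries of a unit lower-triangular matrix T and the log innovation variances are completely unconstrained, yet always map back to a valid covariance matrix.

```python
import numpy as np

def cov_from_cholesky_glm(phi, log_innov_var):
    """Pourahmadi-style modified Cholesky parameterization:
    T Sigma T' = D, with T unit lower-triangular (below-diagonal
    entries -phi) and D = diag(exp(log_innov_var)). Any real phi and
    log-variances yield a positive-definite Sigma, so the
    positive-definiteness constraint disappears."""
    p = len(log_innov_var)
    T = np.eye(p)
    rows, cols = np.tril_indices(p, k=-1)
    T[rows, cols] = -np.asarray(phi)
    D = np.diag(np.exp(log_innov_var))
    Tinv = np.linalg.inv(T)
    return Tinv @ D @ Tinv.T

rng = np.random.default_rng(0)
p = 4
phi = rng.normal(size=p * (p - 1) // 2)   # unconstrained "autoregressive" terms
log_var = rng.normal(size=p)              # unconstrained log innovation variances
Sigma = cov_from_cholesky_glm(phi, log_var)
print(np.all(np.linalg.eigvalsh(Sigma) > 0))  # always positive definite
```

Because the map is unconstrained, both T and D can be modelled with covariates as in an ordinary generalized linear model, which is exactly what makes the unbalanced case delicate: each subject observes a different submatrix of Sigma.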
Pietrogrande, Maria Chiara; Dondi, Francesco; Ciogli, Alessia; Gasparrini, Francesco; Piccin, Antonella; Serafini, Mauro
2010-06-25
In this study, HPLC Ascentis columns (2.7 µm particles) based on fused-core particle technology and Acquity columns (1.7 µm particles) requiring UPLC instruments were comparatively investigated against Chromolith RP-18e columns. The study was carried out on mother and vegetal tinctures of Passiflora incarnata L. on one single or two coupled columns. The fundamental attributes of the chromatographic profiles are evaluated using a chemometric procedure based on the AutoCovariance Function (ACVF). Different chromatographic systems are compared in terms of their separation parameters, i.e., number of total chemical components (m(tot)), separation efficiency (sigma), peak capacity (n(c)), degree of peak overlap and peak purity. The obtained results show the improvements achieved by HPLC columns with narrow-size particles in terms of total analysis time and chromatographic efficiency: comparable performance is achieved by the Ascentis (2.7 µm particle) column and the Acquity (1.7 µm particle) column requiring UPLC instruments. The ACVF plot is proposed as a simplified tool describing the chromatographic fingerprint, to be used for evaluating and comparing the chemical composition of plant extracts via the parameters D% (relative abundance of the deterministic component) and c(EACF) (similarity index computed on the ACVF). Copyright 2010 Elsevier B.V. All rights reserved.
Directory of Open Access Journals (Sweden)
Gunter Spöck
2015-05-01
Recently, Spöck and Pilz [38] demonstrated that the spatial sampling design problem for the Bayesian linear kriging predictor can be transformed to an equivalent experimental design problem for a linear regression model with stochastic regression coefficients and uncorrelated errors. The stochastic regression coefficients derive from the polar spectral approximation of the residual process. Thus, standard optimal convex experimental design theory can be used to calculate optimal spatial sampling designs. The design functionals considered in Spöck and Pilz [38] did not take into account the fact that kriging is actually a plug-in predictor which uses the estimated covariance function. The resulting optimal designs were close to space-filling configurations, because the design criterion did not consider the uncertainty of the covariance function. In this paper we also assume that the covariance function is estimated, e.g., by restricted maximum likelihood (REML). We then develop a design criterion that fully takes account of the covariance uncertainty. The resulting designs are less regular and space-filling compared to those ignoring covariance uncertainty. The new designs, however, also require some closely spaced samples in order to improve the estimate of the covariance function. We also relax the assumption of Gaussian observations and assume that the data is transformed to Gaussianity by means of the Box-Cox transformation. The resulting prediction method is known as trans-Gaussian kriging. We apply the Smith and Zhu [37] approach to this kriging method and show that the resulting optimal designs also depend on the available data. We illustrate our results with a data set of monthly rainfall measurements from Upper Austria.
Frame covariant nonminimal multifield inflation
Karamitsos, Sotirios; Pilaftsis, Apostolos
2018-02-01
We introduce a frame-covariant formalism for inflation of scalar-curvature theories by adopting a differential geometric approach which treats the scalar fields as coordinates living on a field-space manifold. This ensures that our description of inflation is both conformally and reparameterization covariant. Our formulation gives rise to extensions of the usual Hubble and potential slow-roll parameters to generalized fully frame-covariant forms, which allow us to provide manifestly frame-invariant predictions for cosmological observables, such as the tensor-to-scalar ratio r, the spectral indices nR and nT, their runnings αR and αT, the non-Gaussianity parameter fNL, and the isocurvature fraction βiso. We examine the role of the field space curvature in the generation and transfer of isocurvature modes, and we investigate the effect of boundary conditions for the scalar fields at the end of inflation on the observable inflationary quantities. We explore the stability of the trajectories with respect to the boundary conditions by using a suitable sensitivity parameter. To illustrate our approach, we first analyze a simple minimal two-field scenario before studying a more realistic nonminimal model inspired by Higgs inflation. We find that isocurvature effects are greatly enhanced in the latter scenario and must be taken into account for certain values in the parameter space such that the model is properly normalized to the observed scalar power spectrum PR. Finally, we outline how our frame-covariant approach may be extended beyond the tree-level approximation through the Vilkovisky-De Witt formalism, which we generalize to take into account conformal transformations, thereby leading to a fully frame-invariant effective action at the one-loop level.
Szekeres models: a covariant approach
Apostolopoulos, Pantelis S.
2017-05-01
We exploit the 1 + 1 + 2 formalism to covariantly describe the inhomogeneous and anisotropic Szekeres models. It is shown that an average scale length can be defined covariantly which satisfies a 2d equation of motion driven by the effective gravitational mass (EGM) contained in the dust cloud. The contributions to the EGM are encoded in the energy density of the dust fluid and the free gravitational field E_ab. We show that the quasi-symmetric property of the Szekeres models is justified through the existence of 3 independent intrinsic Killing vector fields (IKVFs). In addition the notions of the apparent and absolute apparent horizons are briefly discussed and we give an alternative gauge-invariant form to define them in terms of the kinematical variables of the spacelike congruences. We argue that the proposed program can be used to express Sachs' optical equations in a covariant form and analyze the confrontation of a spatially inhomogeneous irrotational overdense fluid model with the observational data.
Network-level structural covariance in the developing brain.
Zielinski, Brandon A; Gennatas, Efstathios D; Zhou, Juan; Seeley, William W
2010-10-19
Intrinsic or resting state functional connectivity MRI and structural covariance MRI have begun to reveal the adult human brain's multiple network architectures. How and when these networks emerge during development remains unclear, but understanding ontogeny could shed light on network function and dysfunction. In this study, we applied structural covariance MRI techniques to 300 children in four age categories (early childhood, 5-8 y; late childhood, 8.5-11 y; early adolescence, 12-14 y; late adolescence, 16-18 y) to characterize gray matter structural relationships between cortical nodes that make up large-scale functional networks. Network nodes identified from eight widely replicated functional intrinsic connectivity networks served as seed regions to map whole-brain structural covariance patterns in each age group. In general, structural covariance in the youngest age group was limited to seed and contralateral homologous regions. Networks derived using primary sensory and motor cortex seeds were already well-developed in early childhood but expanded in early adolescence before pruning to a more restricted topology resembling adult intrinsic connectivity network patterns. In contrast, language, social-emotional, and other cognitive networks were relatively undeveloped in younger age groups and showed increasingly distributed topology in older children. The so-called default-mode network provided a notable exception, following a developmental trajectory more similar to the primary sensorimotor systems. Relationships between functional maturation and structural covariance network topology warrant future exploration.
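The seed-based mapping step described above can be illustrated with synthetic data (a toy sketch, not the study's pipeline): correlate a seed region's gray-matter measure with every other region across subjects, so regions that load on a common factor light up as a "network".

```python
import numpy as np

rng = np.random.default_rng(6)
n_subj, n_regions = 300, 50
# synthetic gray-matter measures: regions 0-4 share a common factor (a "network")
factor = rng.standard_normal((n_subj, 1))
gm = rng.standard_normal((n_subj, n_regions))
gm[:, :5] += 2.0 * factor
# seed-based structural covariance map: correlate the seed region's measure
# with every region across subjects
seed = gm[:, 0]
covmap = np.array([np.corrcoef(seed, gm[:, j])[0, 1] for j in range(n_regions)])
print(covmap[1:5].min() > 0.5)          # network members covary with the seed
print(np.abs(covmap[5:]).max() < 0.3)   # non-members show little covariance
```

Thresholding such a map in each age group is essentially how the developmental comparison above is made.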
Banerjee, Biswanath; Walsh, Timothy F.; Aquino, Wilkins; Bonnet, Marc
2012-01-01
This paper presents the formulation and implementation of an Error in Constitutive Equations (ECE) method suitable for large-scale inverse identification of linear elastic material properties in the context of steady-state elastodynamics. In ECE-based methods, the inverse problem is postulated as an optimization problem in which the cost functional measures the discrepancy in the constitutive equations that connect kinematically admissible strains and dynamically admissible stresses. Furthermore, in a more recent modality of this methodology introduced by Feissel and Allix (2007), referred to as the Modified ECE (MECE), the measured data is incorporated into the formulation as a quadratic penalty term. We show that a simple and efficient continuation scheme for the penalty term, suggested by the theory of quadratic penalty methods, can significantly accelerate the convergence of the MECE algorithm. Furthermore, a (block) successive over-relaxation (SOR) technique is introduced, enabling the use of existing parallel finite element codes with minimal modification to solve the coupled system of equations that arises from the optimality conditions in MECE methods. Our numerical results demonstrate that the proposed methodology can successfully reconstruct the spatial distribution of elastic material parameters from partial and noisy measurements in as few as ten iterations in a 2D example and fifty in a 3D example. We show (through numerical experiments) that the proposed continuation scheme can improve the rate of convergence of MECE methods by at least an order of magnitude versus the alternative of using a fixed penalty parameter. Furthermore, the proposed block SOR strategy coupled with existing parallel solvers produces a computationally efficient MECE method that can be used for large scale materials identification problems, as demonstrated on a 3D example involving about 400,000 unknown moduli. Finally, our numerical results suggest that the proposed MECE
Eggink, Hendriekje; Kuiper, Anouk; Peall, Kathryn J.; Contarino, Maria Fiorella; Bosch, Annet M.; Post, Bart; Sival, Deborah A.; Tijssen, Marina A. J.; de Koning, Tom J.
2014-01-01
Inborn errors of metabolism (IEM) form an important cause of movement disorders in children. The impact of metabolic diseases and concordant movement disorders upon children's health-related quality of life (HRQOL) and its physical and psychosocial domains of functioning has never been investigated.
Sachse, Karoline A.; Haag, Nicole
2017-01-01
Standard errors computed according to the operational practices of international large-scale assessment studies such as the Programme for International Student Assessment's (PISA) or the Trends in International Mathematics and Science Study (TIMSS) may be biased when cross-national differential item functioning (DIF) and item parameter drift are…
McMahon, Camilla M.; Henderson, Heather A.
2014-01-01
Error-monitoring, or the ability to recognize one's mistakes and implement behavioral changes to prevent further mistakes, may be impaired in individuals with Autism Spectrum Disorder (ASD). Children and adolescents (ages 9-19) with ASD (n = 42) and typical development (n = 42) completed two face processing tasks that required discrimination of either the gender or affect of standardized face stimuli. Post-error slowing and the difference in Error-Related Negativity amplitude between correct and incorrect responses (ERNdiff) were used to index error-monitoring ability. Overall, ERNdiff increased with age. On the Gender Task, individuals with ASD had a smaller ERNdiff than individuals with typical development; however, on the Affect Task, there were no significant diagnostic group differences on ERNdiff. Individuals with ASD may have ERN amplitudes similar to those observed in individuals with typical development in more social contexts compared to less social contexts due to greater consequences for errors, more effortful processing, and/or reduced processing efficiency in these contexts. Across all participants, more post-error slowing on the Affect Task was associated with better social cognitive skills. PMID:25066088
Bartz, Daniel; Hatrick, Kerr; Hesse, Christian W.; Müller, Klaus-Robert; Lemm, Steven
2013-01-01
Robust and reliable covariance estimates play a decisive role in financial and many other applications. An important class of estimators is based on factor models. Here, we show by extensive Monte Carlo simulations that covariance matrices derived from the statistical Factor Analysis model exhibit a systematic error, which is similar to the well-known systematic error of the spectrum of the sample covariance matrix. Moreover, we introduce the Directional Variance Adjustment (DVA) algorithm, which diminishes the systematic error. In a thorough empirical study for the US, European, and Hong Kong stock market we show that our proposed method leads to improved portfolio allocation. PMID:23844016
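The systematic spectral error mentioned here is easy to reproduce in a Monte Carlo sketch (illustrative only): even when the true covariance is the identity, the eigenvalues of the sample covariance matrix spread out, overestimating the largest and underestimating the smallest.

```python
import numpy as np

rng = np.random.default_rng(1)
p, n, trials = 50, 100, 200
largest, smallest = [], []
for _ in range(trials):
    X = rng.standard_normal((n, p))            # true covariance: identity
    ev = np.linalg.eigvalsh(np.cov(X, rowvar=False))
    largest.append(ev[-1])
    smallest.append(ev[0])
# all true eigenvalues equal 1, yet the sample spectrum is systematically spread
print(np.mean(largest) > 2.0)   # top eigenvalue strongly overestimated
print(np.mean(smallest) < 0.2)  # bottom eigenvalue strongly underestimated
```

For a factor-model estimator the analogous bias appears in the estimated factor variances, which is what the DVA algorithm is designed to correct.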
A note on covariant dynamical semigroups
Holevo, A. S.
1993-04-01
It is shown that in the standard representation of the generator of a norm continuous dynamical semigroup, which is covariant with respect to a unitary representation of an amenable group, the completely positive part can always be chosen covariant and the Hamiltonian commuting with the representation. The structure of the generator of a translation covariant dynamical semigroup is described.
Covariant gauges at finite temperature
Landshoff, P V; Rebhan, A
1992-01-01
A prescription is presented for real-time finite-temperature perturbation theory in covariant gauges, in which only the two physical degrees of freedom of the gauge-field propagator acquire thermal parts. The propagators for the unphysical degrees of freedom of the gauge field, and for the Faddeev-Popov ghost field, are independent of temperature. This prescription is applied to the calculation of the one-loop gluon self-energy and the two-loop interaction pressure, and is found to be simpler...
Computational protein design quantifies structural constraints on amino acid covariation.
Directory of Open Access Journals (Sweden)
Noah Ollikainen
Amino acid covariation, where the identities of amino acids at different sequence positions are correlated, is a hallmark of naturally occurring proteins. This covariation can arise from multiple factors, including selective pressures for maintaining protein structure, requirements imposed by a specific function, or from phylogenetic sampling bias. Here we employed flexible backbone computational protein design to quantify the extent to which protein structure has constrained amino acid covariation for 40 diverse protein domains. We find significant similarities between the amino acid covariation in alignments of natural protein sequences and sequences optimized for their structures by computational protein design methods. These results indicate that the structural constraints imposed by protein architecture play a dominant role in shaping amino acid covariation and that computational protein design methods can capture these effects. We also find that the similarity between natural and designed covariation is sensitive to the magnitude and mechanism of backbone flexibility used in computational protein design. Our results thus highlight the necessity of including backbone flexibility to correctly model precise details of correlated amino acid changes and give insights into the pressures underlying these correlations.
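As a toy illustration of measuring covariation between alignment columns (mutual information stands in here for the paper's covariation metric, which this abstract does not specify):

```python
import numpy as np
from collections import Counter

def mutual_information(col_i, col_j):
    """Covariation between two alignment columns, measured as the mutual
    information (in bits) of the observed residue pairs."""
    n = len(col_i)
    pi, pj = Counter(col_i), Counter(col_j)
    pij = Counter(zip(col_i, col_j))
    mi = 0.0
    for (a, b), c in pij.items():
        p_ab = c / n
        mi += p_ab * np.log2(p_ab / ((pi[a] / n) * (pj[b] / n)))
    return mi

# toy alignment: columns 0 and 1 covary perfectly, column 2 is independent
aln = ["AKV", "AKL", "GEV", "GEL"]
cols = list(zip(*aln))
print(round(mutual_information(cols[0], cols[1]), 3))  # 1.0: coupled pair
print(round(mutual_information(cols[0], cols[2]), 3))  # 0.0: independent pair
```

Comparing such column-pair scores between natural alignments and design-generated sequences is the kind of quantitative comparison the study performs at scale.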
African Journals Online (AJOL)
Studies in the USA have shown that medical error is the 8th most common cause of death [2,3]. The most common causes of medical error are: administration of the wrong medication or wrong dose of the correct medication, using the wrong route of administration, giving a treatment to the wrong patient or at the wrong time [4]. ...
Vogel, Curtis R; Tyler, Glenn A; Wittich, Donald J
2014-07-01
We introduce a framework for modeling, analysis, and simulation of aero-optics wavefront aberrations that is based on spatial-temporal covariance matrices extracted from wavefront sensor measurements. Within this framework, we present a quasi-homogeneous structure function to analyze nonhomogeneous, mildly anisotropic spatial random processes, and we use this structure function to show that phase aberrations arising in aero-optics are, for an important range of operating parameters, locally Kolmogorov. This strongly suggests that the d^{5/3} power law for adaptive optics (AO) deformable mirror fitting error, where d denotes actuator separation, holds for certain important aero-optics scenarios. This framework also allows us to compute bounds on AO servo lag error and predictive control error. In addition, it provides us with the means to accurately simulate AO systems for the mitigation of aero-effects, and it may provide insight into underlying physical processes associated with turbulent flow. The techniques introduced here are demonstrated using data obtained from the Airborne Aero-Optics Laboratory.
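The "locally Kolmogorov" claim refers to the phase structure function D(r) = <(φ(x+r) − φ(x))²> scaling as r^{5/3}. A minimal 1-D sketch with synthetic data (not the Airborne Aero-Optics Laboratory measurements) shows how the exponent is recovered empirically:

```python
import numpy as np

def structure_function(phase, seps):
    """Empirical structure function D(r) = <(phi(x+r) - phi(x))^2>."""
    return np.array([np.mean((phase[s:] - phase[:-s]) ** 2) for s in seps])

# synthesize a 1-D phase screen with power spectrum ~ k^(-8/3), for which
# theory gives D(r) ~ r^(5/3)
rng = np.random.default_rng(2)
n = 2 ** 16
k = np.fft.rfftfreq(n)
amp = np.zeros(len(k))
amp[1:] = k[1:] ** (-4.0 / 3.0)               # amplitude = sqrt of the PSD
spec = amp * (rng.standard_normal(len(k)) + 1j * rng.standard_normal(len(k)))
phase = np.fft.irfft(spec, n)

seps = np.array([2, 4, 8, 16, 32])
D = structure_function(phase, seps)
slope = np.polyfit(np.log(seps), np.log(D), 1)[0]
print(abs(slope - 5.0 / 3.0) < 0.5)           # log-log slope near 5/3
```

Testing whether measured aberrations follow this slope over local patches is, in essence, how the locally Kolmogorov behavior is diagnosed.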
Local and omnibus goodness-of-fit tests in classical measurement error models
Ma, Yanyuan
2010-09-14
We consider functional measurement error models, i.e. models where covariates are measured with error and yet no distributional assumptions are made about the mismeasured variable. We propose and study a score-type local test and an orthogonal series-based, omnibus goodness-of-fit test in this context, where no likelihood function is available or calculated; i.e., all the tests are proposed in the semiparametric model framework. We demonstrate that our tests have optimality properties and computational advantages that are similar to those of the classical score tests in the parametric model framework. The test procedures are applicable to several semiparametric extensions of measurement error models, including when the measurement error distribution is estimated non-parametrically as well as for generalized partially linear models. The performance of the local score-type and omnibus goodness-of-fit tests is demonstrated through simulation studies and analysis of a nutrition data set.
Covariance Evaluation Methodology for Neutron Cross Sections
Energy Technology Data Exchange (ETDEWEB)
Herman, M.; Arcilla, R.; Mattoon, C.M.; Mughabghab, S.F.; Oblozinsky, P.; Pigni, M.; Pritychenko, B.; Sonzogni, A.A.
2008-09-01
We present the NNDC-BNL methodology for estimating neutron cross section covariances in thermal, resolved resonance, unresolved resonance and fast neutron regions. The three key elements of the methodology are Atlas of Neutron Resonances, nuclear reaction code EMPIRE, and the Bayesian code implementing Kalman filter concept. The covariance data processing, visualization and distribution capabilities are integral components of the NNDC methodology. We illustrate its application on examples including relatively detailed evaluation of covariances for two individual nuclei and massive production of simple covariance estimates for 307 materials. Certain peculiarities regarding evaluation of covariances for resolved resonances and the consistency between resonance parameter uncertainties and thermal cross section uncertainties are also discussed.
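The Kalman-filter step underlying such covariance evaluation can be sketched for a two-channel toy problem (illustrative numbers, not evaluated nuclear data): a measurement of one quantity reduces the uncertainty of a correlated, unmeasured quantity through the prior covariance.

```python
import numpy as np

# Kalman update combining a prior model estimate with one measurement:
#   K = P H' (H P H' + R)^(-1),  x' = x + K (y - H x),  P' = (I - K H) P
x = np.array([1.0, 2.0])                     # prior values (illustrative)
P = np.array([[0.04, 0.01],
              [0.01, 0.09]])                 # prior covariance (correlated)
H = np.array([[1.0, 0.0]])                   # the experiment measures x[0] only
R = np.array([[0.01]])                       # measurement error variance
y = np.array([1.1])                          # measured value

K = P @ H.T @ np.linalg.inv(H @ P @ H.T + R)
x_post = x + (K @ (y - H @ x)).ravel()
P_post = (np.eye(2) - K @ H) @ P
print(P_post[0, 0] < P[0, 0])  # uncertainty shrinks for the measured quantity
print(P_post[1, 1] < P[1, 1])  # and, via correlation, for the unmeasured one
```

In the NNDC methodology the state vector holds model parameters or cross sections and H comes from EMPIRE sensitivities, but the update has this same structure.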
Shimazu, Chisato; Hoshino, Satoshi; Furukawa, Taiji
2013-08-01
We constructed an integrated personal identification workflow chart using both bar code reading and an all in-one laboratory information system. The information system not only handles test data but also the information needed for patient guidance in the laboratory department. The reception terminals at the entrance, displays for patient guidance and patient identification tools at blood-sampling booths are all controlled by the information system. The number of patient identification errors was greatly reduced by the system. However, identification errors have not been abolished in the ultrasound department. After re-evaluation of the patient identification process in this department, we recognized that the major reason for the errors came from excessive identification workflow. Ordinarily, an ultrasound test requires patient identification 3 times, because 3 different systems are required during the entire test process, i.e. ultrasound modality system, laboratory information system and a system for producing reports. We are trying to connect the 3 different systems to develop a one-time identification workflow, but it is not a simple task and has not been completed yet. Utilization of the laboratory information system is effective, but is not yet perfect for patient identification. The most fundamental procedure for patient identification is to ask a person's name even today. Everyday checks in the ordinary workflow and everyone's participation in safety-management activity are important for the prevention of patient identification errors.
Pagirsky, Matthew S.; Koriakin, Taylor A.; Avitia, Maria; Costa, Michael; Marchis, Lavinia; Maykel, Cheryl; Sassu, Kari; Bray, Melissa A.; Pan, Xingyu
2017-01-01
A large body of research has documented the relationship between attention-deficit hyperactivity disorder (ADHD) and reading difficulties in children; however, there have been no studies to date that have examined errors made by students with ADHD and reading difficulties. The present study sought to determine whether the kinds of achievement…
Adjoint-Based Forecast Error Sensitivity Diagnostics in Data Assimilation
Langland, R.; Daescu, D.
2016-12-01
We present an up-to-date review of the adjoint-data assimilation system (DAS) approach to evaluate the forecast sensitivity to error covariance parameters and provide guidance to flow-dependent adaptive covariance tuning (ACT) procedures. New applications of the forecast sensitivity to observation error covariance (FSR) are investigated including the sensitivity to observation error correlations and a priori first-order assessment to the error correlation impact on the forecasts. Issues related to ambiguities in the a posteriori estimation to the observation error covariance (R) and background error covariance (B) are discussed. A synergistic framework to adaptive covariance tuning is considered that combines R-estimates derived from a posteriori covariance diagnosis and FSR derivative information. The evaluation of the forecast sensitivity to the innovation-weight coefficients is introduced as a computationally-feasible approach to account for the characteristics of both R- and B-parameters and perform direct tuning of the DAS gain operator (K). Theoretical aspects are discussed and recent results are provided with the adjoint versions of the Naval Research Laboratory Atmospheric Variational Data Assimilation System-Accelerated Representer (NAVDAS-AR).
Congdon, Nathan; Wang, Yunfei; Song, Yue; Choi, Kai; Zhang, Mingzhi; Zhou, Zhongxia; Xie, Zhenling; Li, Liping; Liu, Xueyu; Sharma, Abhishek; Wu, Bin; Lam, Dennis S C
2008-07-01
To evaluate visual acuity, visual function, and prevalence of refractive error among Chinese secondary-school children in a cross-sectional school-based study. Uncorrected, presenting, and best corrected visual acuity, cycloplegic autorefraction with refinement, and self-reported visual function were assessed in a random, cluster sample of rural secondary school students in Xichang, China. Among the 1892 subjects (97.3% of the consenting children, 84.7% of the total sample), mean age was 14.7 +/- 0.8 years, 51.2% were female, and 26.4% were wearing glasses. The proportion of children with uncorrected, presenting, and corrected visual disability (visual disability when tested without correction, 98.7% was due to refractive error, while only 53.8% (414/770) of these children had appropriate correction. The girls had significantly (P visual disability and myopia visual function (ANOVA trend test, P Visual disability in this population was common, highly correctable, and frequently uncorrected. The impact of refractive error on self-reported visual function was significant. Strategies and studies to understand and remove barriers to spectacle wear are needed.
A comparison of predictors of the error of weather forecasts
Directory of Open Access Journals (Sweden)
M. S. Roulston
2005-01-01
Three different potential predictors of forecast error - ensemble spread, mean errors of recent forecasts and the local gradient of the predicted field - were compared. The comparison was performed using the forecasts of 500 hPa geopotential and 2-m temperature of the ECMWF ensemble prediction system at lead times of 96, 168 and 240 h, over North America for each day in 2004. Ensemble spread was found to be the best overall predictor of absolute forecast error. The mean absolute error of recent forecasts (past 30 days) was found to contain some information, however, and the local gradient of the geopotential also provided some information about the error in the prediction of this variable. Ensemble spatial error covariance and the mean spatial error covariance of recent forecasts (past 30 days) were also compared as predictors of actual spatial error covariance. Both were found to provide some predictive information, although the ensemble error covariance was found to provide substantially more information for both variables tested at all three lead times. The results of the study suggest that past errors and local field gradients should not be ignored as predictors of forecast error as they can be computed cheaply from single forecasts when an ensemble is not available. Alternatively, in some cases, they could be used to supplement the information about forecast error provided by an ensemble to provide a better prediction of forecast skill.
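The spread-error relationship assessed in the paper can be demonstrated on synthetic forecasts (a toy sketch of the statistical relation, not the ECMWF system): when each forecast's error is drawn with a standard deviation equal to its ensemble spread, spread and absolute error correlate, but far from perfectly.

```python
import numpy as np

rng = np.random.default_rng(3)
n = 5000
spread = rng.uniform(0.5, 3.0, n)        # per-forecast ensemble spread
# idealized case: the forecast error has std exactly equal to the spread
abs_err = np.abs(rng.standard_normal(n) * spread)
r = np.corrcoef(spread, abs_err)[0, 1]
print(r > 0.3)   # spread carries real, but limited, information about |error|
```

The modest correlation even in this ideal case is why the paper treats spread as the best, rather than a perfect, predictor of absolute error.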
Directory of Open Access Journals (Sweden)
I PUTU EKA IRAWAN
2014-01-01
Principal Component Regression is a method to overcome multicollinearity by combining principal component analysis with regression analysis. Classical principal component analysis is based on the regular covariance matrix. The covariance matrix is optimal if the data originate from a multivariate normal distribution, but it is very sensitive to the presence of outliers. To overcome this problem, the Least Median of Squares-Minimum Covariance Determinant (LMS-MCD) method is used as an alternative. The purpose of this research is to compare Principal Component Regression (RKU) and the LMS-MCD method in dealing with outliers. In this study, the LMS-MCD method has smaller parameter bias and mean square error (MSE) than RKU. A test of the differences between parameter estimators still shows a larger difference for the LMS-MCD method than for the RKU method.
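A crude NumPy-only sketch of the MCD idea (random elemental starts plus one concentration step, far simpler than production FastMCD implementations) shows why the robust estimate resists outliers that distort the classical mean and covariance:

```python
import numpy as np

def mcd_sketch(X, h, trials=200, seed=0):
    """Very crude MCD: fit mean/covariance on random elemental subsets,
    apply one concentration step (keep the h points with smallest
    Mahalanobis distance), and retain the fit whose covariance has the
    minimal determinant."""
    rng = np.random.default_rng(seed)
    n, p = X.shape
    best_det, best_mu, best_S = np.inf, None, None
    for _ in range(trials):
        idx = rng.choice(n, p + 1, replace=False)
        mu = X[idx].mean(axis=0)
        S = np.cov(X[idx], rowvar=False) + 1e-6 * np.eye(p)
        d = np.einsum('ij,jk,ik->i', X - mu, np.linalg.inv(S), X - mu)
        keep = np.argsort(d)[:h]
        mu_h, S_h = X[keep].mean(axis=0), np.cov(X[keep], rowvar=False)
        det = np.linalg.det(S_h)
        if det < best_det:
            best_det, best_mu, best_S = det, mu_h, S_h
    return best_mu, best_S

rng = np.random.default_rng(4)
X = np.vstack([rng.standard_normal((90, 2)),          # clean data near 0
               rng.standard_normal((10, 2)) + 8.0])   # 10% gross outliers
mu_mcd, S_mcd = mcd_sketch(X, h=60)
print(np.linalg.norm(X.mean(axis=0)) > 0.5)   # classical mean is pulled away
print(np.linalg.norm(mu_mcd) < 0.5)           # robust location stays near 0
```

Running PCA on the robust covariance instead of the classical one is what turns ordinary principal component regression into its LMS-MCD counterpart.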
Covariance-enhanced discriminant analysis.
Xu, Peirong; Zhu, Ji; Zhu, Lixing; Li, Yi
Linear discriminant analysis has been widely used to characterize or separate multiple classes via linear combinations of features. However, the high dimensionality of features from modern biological experiments defies traditional discriminant analysis techniques. Possible interfeature correlations present additional challenges and are often underused in modelling. In this paper, by incorporating possible interfeature correlations, we propose a covariance-enhanced discriminant analysis method that simultaneously and consistently selects informative features and identifies the corresponding discriminable classes. Under mild regularity conditions, we show that the method can achieve consistent parameter estimation and model selection, and can attain an asymptotically optimal misclassification rate. Extensive simulations have verified the utility of the method, which we apply to a renal transplantation trial.
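For contrast with the proposed method, plain Fisher linear discriminant analysis with a pooled interfeature covariance can be sketched in a few lines. This is a hedged toy example (invented names and data), not the covariance-enhanced selection procedure of the paper:

```python
import numpy as np

def lda_fit(X, y):
    """Two-class linear discriminant analysis with a pooled
    within-class (interfeature) covariance estimate."""
    classes = np.unique(y)
    means = {c: X[y == c].mean(axis=0) for c in classes}
    # pooled within-class covariance: weighted sum of class scatters
    S = sum(np.cov(X[y == c], rowvar=False) * (np.sum(y == c) - 1)
            for c in classes) / (len(y) - len(classes))
    w = np.linalg.solve(S, means[classes[1]] - means[classes[0]])
    b = -0.5 * w @ (means[classes[1]] + means[classes[0]])
    return w, b  # predict class 1 when X @ w + b > 0 (equal priors)

rng = np.random.default_rng(1)
X0 = rng.normal(loc=[0, 0], size=(100, 2))
X1 = rng.normal(loc=[2, 2], size=(100, 2))
X = np.vstack([X0, X1])
y = np.array([0] * 100 + [1] * 100)
w, b = lda_fit(X, y)
acc = np.mean((X @ w + b > 0).astype(int) == y)
```

The high-dimensional setting the paper addresses is exactly where the `np.linalg.solve` step breaks down: the pooled covariance becomes singular, motivating regularized, selection-aware estimates.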
... halos around bright lights, squinting, headaches, or eye strain. Glasses or contact lenses can usually correct refractive errors. Laser eye surgery may also be a possibility. NIH: National Eye ...
Energy Technology Data Exchange (ETDEWEB)
Shirasaki, Masato; Yoshida, Naoki, E-mail: masato.shirasaki@utap.phys.s.u-tokyo.ac.jp [Department of Physics, University of Tokyo, Tokyo 113-0033 (Japan)
2014-05-01
The measurement of cosmic shear using weak gravitational lensing is a challenging task that involves a number of complicated procedures. We study in detail the systematic errors in the measurement of weak-lensing Minkowski Functionals (MFs). Specifically, we focus on systematics associated with galaxy shape measurements, photometric redshift errors, and shear calibration correction. We first generate mock weak-lensing catalogs that directly incorporate the actual observational characteristics of the Canada-France-Hawaii Lensing Survey (CFHTLenS). We then perform a Fisher analysis using the large set of mock catalogs for various cosmological models. We find that the statistical error associated with the observational effects degrades the cosmological parameter constraints by a factor of a few. The Subaru Hyper Suprime-Cam (HSC) survey, with a sky coverage of ∼1400 deg², will constrain the dark energy equation-of-state parameter with an error of Δw₀ ∼ 0.25 by the lensing MFs alone, but biases induced by the systematics can be comparable to the 1σ error. We conclude that the lensing MFs are powerful statistics beyond the two-point statistics only if well-calibrated measurement of both the redshifts and the shapes of source galaxies is performed. Finally, we analyze the CFHTLenS data to explore the ability of the MFs to break degeneracies between a few cosmological parameters. Using a combined analysis of the MFs and the shear correlation function, we derive the matter density Ω_m0 = 0.256 (+0.054/−0.046).
Covariant non-commutative space–time
Directory of Open Access Journals (Sweden)
Jonathan J. Heckman
2015-05-01
Full Text Available We introduce a covariant non-commutative deformation of 3+1-dimensional conformal field theory. The deformation introduces a short-distance scale ℓp, and thus breaks scale invariance, but preserves all space–time isometries. The non-commutative algebra is defined on space–times with non-zero constant curvature, i.e. dS4 or AdS4. The construction makes essential use of the representation of CFT tensor operators as polynomials in an auxiliary polarization tensor. The polarization tensor takes an active part in the non-commutative algebra, which for dS4 takes the form of so(5,1), while for AdS4 it assembles into so(4,2). The structure of the non-commutative correlation functions hints that the deformed theory contains gravitational interactions and a Regge-like trajectory of higher spin excitations.
Directory of Open Access Journals (Sweden)
Sivananaithaperumal Sudalaiandi
2014-06-01
Full Text Available This paper presents automatic tuning of multivariable Fractional-Order Proportional, Integral and Derivative (FO-PID) controller parameters using the Covariance Matrix Adaptation Evolution Strategy (CMAES) algorithm. Decoupled multivariable FO-PI and FO-PID controller structures are considered. The Oustaloup integer-order approximation is used for the fractional integrals and derivatives. For validation, two Multi-Input Multi-Output (MIMO) distillation columns, described by Wood and Berry and by Ogunnaike and Ray, are considered for the design of the multivariable FO-PID controller. The optimal FO-PID controller is designed by minimizing the Integral Absolute Error (IAE) as the objective function. Results of previously reported PI/PID controllers are considered for comparison. Simulation results reveal that the performance of the FO-PI and FO-PID controllers is better than that of integer-order PI/PID controllers in terms of IAE. Also, the CMAES algorithm is suitable for the design of FO-PI/FO-PID controllers.
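CMAES adapts a full covariance matrix of its search distribution; as a much-reduced illustration of the same evolution-strategy idea, the sketch below tunes three gains with a (1+1)-ES using the classical 1/5-success rule on a stand-in quadratic cost. All names here are hypothetical, and a real application would evaluate the closed-loop IAE of the plant instead of this surrogate.

```python
import numpy as np

def one_plus_one_es(f, x0, sigma=0.5, iters=2000, seed=0):
    """(1+1)-Evolution Strategy with 1/5-success-rule step-size
    adaptation; a stand-in for the full covariance-adapting CMAES."""
    rng = np.random.default_rng(seed)
    x = np.asarray(x0, float)
    fx = f(x)
    for _ in range(iters):
        cand = x + sigma * rng.standard_normal(x.shape)
        fc = f(cand)
        if fc <= fx:
            x, fx = cand, fc
            sigma *= 1.1    # success: widen the search
        else:
            sigma *= 0.98   # failure: contract (balances near 1/5 success)
    return x, fx

# toy quadratic standing in for an IAE cost of three controller gains
target = np.array([1.2, 0.4, 0.05])          # assumed "good" gains
f = lambda g: float(np.sum((g - target) ** 2))
best, cost = one_plus_one_es(f, x0=np.zeros(3))
```

CMAES improves on this by additionally learning correlations between parameters, which matters when controller gains interact strongly.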
Sozda, Christopher N; Larson, Michael J; Kaufman, David A S; Schmalfuss, Ilona M; Perlstein, William M
2011-10-01
Continuous monitoring of one's performance is invaluable for guiding behavior towards successful goal attainment by identifying deficits and strategically adjusting responses when performance is inadequate. In the present study, we exploited the advantages of event-related functional magnetic resonance imaging (fMRI) to examine brain activity associated with error-related processing after severe traumatic brain injury (sTBI). fMRI and behavioral data were acquired while 10 sTBI participants and 12 neurologically-healthy controls performed a task-switching cued-Stroop task. fMRI data were analyzed using a random-effects whole-brain voxel-wise general linear model and planned linear contrasts. Behaviorally, sTBI patients showed greater error-rate interference than neurologically-normal controls. fMRI data revealed that, compared to controls, sTBI patients showed greater magnitude error-related activation in the anterior cingulate cortex (ACC) and an increase in the overall spatial extent of error-related activation across cortical and subcortical regions. Implications for future research and potential limitations in conducting fMRI research in neurologically-impaired populations are discussed, as well as some potential benefits of employing multimodal imaging (e.g., fMRI and event-related potentials) of cognitive control processes in TBI. Copyright © 2011 Elsevier B.V. All rights reserved.
Pilotti, Maura; Chodorow, Martin; Agpawa, Ian; Krajniak, Marta; Mahamane, Salif
2012-04-01
Proofreading (i.e., reading text for the purpose of detecting and correcting typographical errors) is viewed as a component of the activity of revising text and thus is a necessary (albeit not sufficient) procedural step for enhancing the quality of a written product. The purpose of the present research was to test competing accounts of word-error detection which predict factors that may influence reading and proofreading differently. Word errors, which change a word into another word (e.g., from --> form), were selected for examination because they are unlikely to be detected by automatic spell-checking functions. Consequently, their detection still rests mostly in the hands of the human proofreader. Findings highlighted the weaknesses of existing accounts of proofreading and identified factors, such as length and frequency of the error in the English language relative to frequency of the correct word, which might play a key role in detection of word errors.
Different Approaches to Covariate Inclusion in the Mixture Rasch Model
Li, Tongyun; Jiao, Hong; Macready, George B.
2016-01-01
The present study investigates different approaches to adding covariates and the impact in fitting mixture item response theory models. Mixture item response theory models serve as an important methodology for tackling several psychometric issues in test development, including the detection of latent differential item functioning. A Monte Carlo…
Proton-proton virtual bremsstrahlung in a relativistic covariant model
Martinus, GH; Scholten, O; Tjon, J
1999-01-01
Lepton-pair production (virtual bremsstrahlung) in proton-proton scattering is investigated using a relativistic covariant model. The effects of negative-energy states and two-body currents are studied. These are shown to have large effects in some particular structure functions, even at the
DEFF Research Database (Denmark)
He, Peng; Eriksson, Frank; Scheike, Thomas H.
2016-01-01
function by fitting the Cox model for the censoring distribution and using the predictive probability for each individual. Our simulation study shows that the covariate-adjusted weight estimator is basically unbiased when the censoring time depends on the covariates, and the covariate-adjusted weight...
Extreme eigenvalues of sample covariance and correlation matrices
DEFF Research Database (Denmark)
Heiny, Johannes
of the problem at hand. We develop a theory for the point process of the normalized eigenvalues of the sample covariance matrix in the case where rows and columns of the data are linearly dependent. Based on the weak convergence of this point process we derive the limit laws of various functionals......This thesis is concerned with asymptotic properties of the eigenvalues of high-dimensional sample covariance and correlation matrices under an infinite fourth moment of the entries. In the first part, we study the joint distributional convergence of the largest eigenvalues of the sample covariance...... matrix of a p-dimensional heavy-tailed time series when p converges to infinity together with the sample size n. We generalize the growth rates of p existing in the literature. Assuming a regular variation condition with tail index
Extreme eigenvalues of sample covariance and correlation matrices
DEFF Research Database (Denmark)
Heiny, Johannes
2017-01-01
dimension of the problem at hand. We develop a theory for the point process of the normalized eigenvalues of the sample covariance matrix in the case where rows and columns of the data are linearly dependent. Based on the weak convergence of this point process we derive the limit laws of various functionals......This thesis is concerned with asymptotic properties of the eigenvalues of high-dimensional sample covariance and correlation matrices under an infinite fourth moment of the entries. In the first part, we study the joint distributional convergence of the largest eigenvalues of the sample covariance...... matrix of a $p$-dimensional heavy-tailed time series when $p$ converges to infinity together with the sample size $n$. We generalize the growth rates of $p$ existing in the literature. Assuming a regular variation condition with tail index $\alpha$
Piecewise exponential survival trees with time-dependent covariates.
Huang, X; Chen, S; Soong, S J
1998-12-01
Survival trees methods are nonparametric alternatives to the semiparametric Cox regression in survival analysis. In this paper, a tree-based method for censored survival data with time-dependent covariates is proposed. The proposed method assumes a very general model for the hazard function and is fully nonparametric. The recursive partitioning algorithm uses the likelihood estimation procedure to grow trees under a piecewise exponential structure that handles time-dependent covariates in a parallel way to time-independent covariates. In general, the estimated hazard at a node gives the risk for a group of individuals during a specific time period. Both cross-validation and bootstrap resampling techniques are implemented in the tree selection procedure. The performance of the proposed survival trees method is shown to be good through simulation and application to real data.
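The node-level estimate in a piecewise exponential structure is simply the number of events divided by the person-time at risk within each interval. A minimal numpy sketch (illustrative names and data, not the paper's recursive partitioning code):

```python
import numpy as np

def piecewise_exp_hazard(times, events, breakpoints):
    """Piecewise exponential hazard: events / person-time within each
    interval defined by the breakpoints (the node-level estimate used
    in piecewise exponential survival trees)."""
    edges = np.concatenate([[0.0], breakpoints, [np.inf]])
    hazards = []
    for lo, hi in zip(edges[:-1], edges[1:]):
        # person-time each subject contributes to [lo, hi)
        exposure = np.clip(np.minimum(times, hi) - lo, 0.0, None)
        # events observed inside [lo, hi)
        d = np.sum(events & (times >= lo) & (times < hi))
        hazards.append(d / exposure.sum() if exposure.sum() > 0 else 0.0)
    return np.array(hazards)

times = np.array([0.5, 1.2, 2.0, 3.5, 4.0])          # follow-up times
events = np.array([True, True, False, True, False])  # True = event, False = censored
h = piecewise_exp_hazard(times, events, breakpoints=np.array([2.0]))
```

In a tree, the same computation is repeated within each candidate daughter node, and the resulting piecewise exponential likelihood drives the split selection.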
Robust Ensemble Filtering and Its Relation to Covariance Inflation in the Ensemble Kalman Filter
Luo, Xiaodong
2011-12-01
A robust ensemble filtering scheme based on the H∞ filtering theory is proposed. The optimal H∞ filter is derived by minimizing the supremum (or maximum) of a predefined cost function, a criterion different from the minimum variance used in the Kalman filter. By design, the H∞ filter is more robust than the Kalman filter, in the sense that the estimation error in the H∞ filter in general has a finite growth rate with respect to the uncertainties in assimilation, except for a special case that corresponds to the Kalman filter. The original form of the H∞ filter contains global constraints in time, which may be inconvenient for sequential data assimilation problems. Therefore a variant is introduced that solves some time-local constraints instead, and hence it is called the time-local H∞ filter (TLHF). By analogy to the ensemble Kalman filter (EnKF), the concept of ensemble time-local H∞ filter (EnTLHF) is also proposed. The general form of the EnTLHF is outlined, and some of its special cases are discussed. In particular, it is shown that an EnKF with certain covariance inflation is essentially an EnTLHF. In this sense, the EnTLHF provides a general framework for conducting covariance inflation in the EnKF-based methods. Some numerical examples are used to assess the relative robustness of the TLHF–EnTLHF in comparison with the corresponding KF–EnKF method.
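Multiplicative covariance inflation, the EnKF ingredient that the EnTLHF framework is shown to generalize, can be sketched in a few lines (illustrative code, not from the paper):

```python
import numpy as np

def inflate_ensemble(ensemble, rho):
    """Multiplicative covariance inflation: spread each member about the
    ensemble mean by a factor rho, which scales the sample covariance
    by rho**2 while leaving the ensemble mean unchanged."""
    mean = ensemble.mean(axis=0)
    return mean + rho * (ensemble - mean)

rng = np.random.default_rng(0)
ens = rng.normal(size=(50, 3))           # 50 members, 3 state variables
infl = inflate_ensemble(ens, rho=1.1)    # rho > 1 counteracts underdispersion
```

In the paper's reading, choosing such a rho is equivalent to choosing the robustness level of a time-local H∞ filter, which gives the otherwise ad hoc inflation factor an interpretation.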
Modeling and Forecasting (Un)Reliable Realized Covariances for More Reliable Financial Decisions
DEFF Research Database (Denmark)
Bollerslev, Tim; Patton, Andrew J.; Quaedvlieg, Rogier
We propose a new framework for modeling and forecasting common financial risks based on (un)reliable realized covariance measures constructed from high-frequency intraday data. Our new approach explicitly incorporates the effect of measurement errors and time-varying attenuation biases into the c...... of forecasting models....
Energy Technology Data Exchange (ETDEWEB)
Leitão, Sofia, E-mail: sofia.leitao@tecnico.ulisboa.pt [CFTP, Instituto Superior Técnico, Universidade de Lisboa, Av. Rovisco Pais 1, 1049-001 Lisboa (Portugal); Stadler, Alfred, E-mail: stadler@uevora.pt [Departamento de Física, Universidade de Évora, 7000-671 Évora (Portugal); CFTP, Instituto Superior Técnico, Universidade de Lisboa, Av. Rovisco Pais 1, 1049-001 Lisboa (Portugal); Peña, M.T., E-mail: teresa.pena@tecnico.ulisboa.pt [Departamento de Física, Instituto Superior Técnico, Universidade de Lisboa, Av. Rovisco Pais 1, 1049-001 Lisboa (Portugal); CFTP, Instituto Superior Técnico, Universidade de Lisboa, Av. Rovisco Pais 1, 1049-001 Lisboa (Portugal); Biernat, Elmar P., E-mail: elmar.biernat@tecnico.ulisboa.pt [CFTP, Instituto Superior Técnico, Universidade de Lisboa, Av. Rovisco Pais 1, 1049-001 Lisboa (Portugal)
2017-01-10
The Covariant Spectator Theory (CST) is used to calculate the mass spectrum and vertex functions of heavy–light and heavy mesons in Minkowski space. The covariant kernel contains Lorentz scalar, pseudoscalar, and vector contributions. The numerical calculations are performed in momentum space, where special care is taken to treat the strong singularities present in the confining kernel. The observed meson spectrum is very well reproduced after fitting a small number of model parameters. Remarkably, a fit to a few pseudoscalar meson states only, which are insensitive to spin–orbit and tensor forces and do not allow to separate the spin–spin from the central interaction, leads to essentially the same model parameters as a more general fit. This demonstrates that the covariance of the chosen interaction kernel is responsible for the very accurate prediction of the spin-dependent quark–antiquark interactions.
Haem, Elham; Harling, Kajsa; Ayatollahi, Seyyed Mohammad Taghi; Zare, Najaf; Karlsson, Mats O
2017-02-01
One important aim in population pharmacokinetics (PK) and pharmacodynamics is identification and quantification of the relationships between the parameters and covariates. Lasso has been suggested as a technique for simultaneous estimation and covariate selection. In linear regression, it has been shown that Lasso does not possess the oracle property, that is, it does not asymptotically perform as though the true underlying model had been given in advance. Adaptive Lasso (ALasso) with appropriate initial weights is claimed to possess the oracle property; however, it can lead to poor predictive performance when there is multicollinearity between covariates. This simulation study implemented a new version of ALasso, called adjusted ALasso (AALasso), which takes the ratio of the standard error of the maximum likelihood (ML) estimator to the ML coefficient as the initial weight in ALasso, to deal with multicollinearity in non-linear mixed-effect models. The performance of AALasso was compared with that of ALasso and Lasso. PK data were simulated in four set-ups from a one-compartment bolus input model. Covariates were created by sampling from a multivariate standard normal distribution with no, low (0.2), moderate (0.5) or high (0.7) correlation. The true covariates influenced only clearance at different magnitudes. AALasso, ALasso and Lasso were compared in terms of mean absolute prediction error and error of the estimated covariate coefficient. The results show that AALasso performed better in small data sets, even in those in which a high correlation existed between covariates. This makes AALasso a promising method for covariate selection in non-linear mixed-effect models.
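The reweighting idea behind adaptive Lasso can be sketched with a plain proximal-gradient lasso solver in a linear setting. The snippet uses the classical 1/|OLS coefficient| initial weights (AALasso would instead use the ratio of the ML standard error to the ML coefficient); everything here is an illustrative assumption, not the study's nonlinear mixed-effects implementation.

```python
import numpy as np

def lasso_ista(X, y, lam, steps=5000):
    """Plain lasso via ISTA (proximal gradient descent)."""
    n, p = X.shape
    L = np.linalg.norm(X, 2) ** 2 / n   # Lipschitz constant of the gradient
    beta = np.zeros(p)
    for _ in range(steps):
        grad = X.T @ (X @ beta - y) / n
        z = beta - grad / L
        beta = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)  # soft-threshold
    return beta

def adaptive_lasso(X, y, lam, steps=5000):
    """Adaptive lasso via column rescaling: penalize coefficient j with
    weight 1/|beta_OLS_j|, so strong signals are penalized less."""
    beta_ols, *_ = np.linalg.lstsq(X, y, rcond=None)
    w = 1.0 / np.maximum(np.abs(beta_ols), 1e-8)
    b = lasso_ista(X / w, y, lam, steps)   # solve in rescaled coordinates
    return b / w                           # map back to original scale

rng = np.random.default_rng(2)
X = rng.normal(size=(200, 5))
y = X @ np.array([2.0, 0.0, 0.0, 1.0, 0.0]) + 0.1 * rng.normal(size=200)
beta = adaptive_lasso(X, y, lam=0.1)
```

The rescaling trick works because substituting beta_j = b_j / w_j turns the weighted penalty sum of w_j·|beta_j| into an ordinary lasso penalty on b.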
Directory of Open Access Journals (Sweden)
Md. Sayedur Rahman
2015-01-01
Full Text Available Biosorption process is a promising technology for the removal of heavy metals from industrial wastes and effluents using low-cost and effective biosorbents. In the present study, adsorption of Pb2+, Cu2+, Fe2+, and Zn2+ onto dried biomass of red seaweed Kappaphycus sp. was investigated as a function of pH, contact time, initial metal ion concentration, and temperature. The experimental data were evaluated by four isotherm models (Langmuir, Freundlich, Temkin, and Dubinin-Radushkevich) and four kinetic models (pseudo-first-order, pseudo-second-order, Elovich, and intraparticle diffusion models). The adsorption process was feasible, spontaneous, and endothermic in nature. Functional groups in the biomass involved in metal adsorption process were revealed as carboxylic and sulfonic acids and sulfonate by Fourier transform infrared analysis. A total of nine error functions were applied to validate the models. We strongly suggest the analysis of error functions for validating adsorption isotherm and kinetic models using linear methods. The present work shows that the red seaweed Kappaphycus sp. can be used as a potentially low-cost biosorbent for the removal of heavy metal ions from aqueous solutions. Further study is warranted to evaluate its feasibility for the removal of heavy metals from the real environment.
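As a small illustration of isotherm fitting and error-function validation, the sketch below fits the linearized Langmuir model to synthetic equilibrium data (parameter values assumed purely for the example) and evaluates two of the error functions; the study applies nine such functions to measured data.

```python
import numpy as np

# Synthetic equilibrium data from a Langmuir isotherm
# q = qmax * KL * Ce / (1 + KL * Ce), with assumed qmax and KL.
qmax_true, KL_true = 50.0, 0.2
Ce = np.array([1.0, 5.0, 10.0, 20.0, 50.0, 100.0])   # equilibrium conc.
qe = qmax_true * KL_true * Ce / (1 + KL_true * Ce)   # adsorbed amount

# Linearized Langmuir: Ce/qe = Ce/qmax + 1/(KL*qmax)
slope, intercept = np.polyfit(Ce, Ce / qe, 1)
qmax_fit, KL_fit = 1 / slope, slope / intercept

# Two of the error functions used to validate isotherm models
q_pred = qmax_fit * KL_fit * Ce / (1 + KL_fit * Ce)
sse = np.sum((qe - q_pred) ** 2)                 # sum of squared errors
are = np.mean(np.abs(qe - q_pred) / qe) * 100.0  # average relative error, %
```

With noisy real data the different error functions weight low- and high-concentration points differently, which is why the authors recommend comparing several of them rather than relying on a single fit statistic.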
Gini covariance matrix and its affine equivariant version
Weatherall, Lauren Anne
Gini's mean difference (GMD) and its derivatives such as the Gini index have been widely used as alternative measures of variability for over a century in many research fields, especially in finance, economics and social welfare. In this dissertation, we generalize the univariate GMD to the multivariate case and propose a new covariance matrix, called the Gini covariance matrix (GCM). The extension is natural, being based on the covariance representation of GMD with the notion of the multivariate spatial rank function. In order to gain the affine equivariance property for GCM, we utilize the transformation-retransformation (TR) technique and obtain the TR version of GCM, which turns out to be a symmetrized M-functional. Indeed, both GCMs are symmetrized approaches based on the difference of two independent variables without reference to a location, hence avoiding an arbitrary definition of location for non-symmetric distributions. We study the properties of both GCMs. They possess the so-called independence property, which is highly important, for example, in independent component analysis. Influence functions of the two GCMs are derived to assess their robustness. They are found to be more robust than the regular covariance matrix but less robust than the Tyler and Dümbgen M-functionals. Under elliptical distributions, the relationship between the scatter parameter and the two GCMs is obtained. With this relationship, principal component analysis (PCA) based on GCM is possible. Estimation of the two GCMs is presented. We study the asymptotic behavior of the estimators: √n-consistency and asymptotic normality are established. The asymptotic relative efficiency (ARE) of the TR-GCM estimator with respect to the sample covariance matrix is compared to that of the Tyler and Dümbgen M-estimators. With little loss of efficiency, and relying on some graphical and numerical summaries, Gini-based PCA demonstrates competitive performance on data sets from the UCI machine learning repository.
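Gini's mean difference and one plausible spatial-rank form of the Gini covariance matrix can be sketched as follows. The GCM formula below, an average of (xi − xj) r(xi − xj)ᵀ over pairs with r(u) = u/‖u‖ the spatial sign, is our reading of the covariance representation described in the abstract and should be treated as an assumption, not the dissertation's exact definition.

```python
import numpy as np

def gini_mean_difference(x):
    """Gini's mean difference: E|X1 - X2| for independent copies,
    estimated by averaging |xi - xj| over all ordered pairs."""
    x = np.asarray(x, float)
    diffs = np.abs(x[:, None] - x[None, :])
    n = len(x)
    return diffs.sum() / (n * (n - 1))

def spatial_rank_gini_cov(X):
    """Sketch of a Gini covariance matrix built from pairwise
    differences and their spatial signs (assumed form)."""
    n, p = X.shape
    G = np.zeros((p, p))
    for i in range(n):
        d = X[i] - X                        # differences to all other rows
        norms = np.linalg.norm(d, axis=1)
        mask = norms > 0                    # skip the zero self-difference
        G += d[mask].T @ (d[mask] / norms[mask, None])
    return G / (n * (n - 1))

x = np.array([1.0, 2.0, 4.0])
gmd = gini_mean_difference(x)   # pairs |1-2|, |1-4|, |2-4| average to 2.0
X2 = np.random.default_rng(4).normal(size=(20, 2))
G = spatial_rank_gini_cov(X2)
```

Note that no location estimate appears anywhere: both statistics are built entirely from differences of pairs, which is the symmetrization property the abstract emphasizes.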
Measurement error in longitudinal film badge data
Energy Technology Data Exchange (ETDEWEB)
Marsh, J.L
2002-04-01
The classical measurement error model is that of a simple linear regression with unobservable variables. Information about the covariates is available only through error-prone measurements, usually with an additive structure. Ignoring errors has been shown to result in biased regression coefficients, reduced power of hypothesis tests and increased variability of parameter estimates. Radiation is known to be a causal factor for certain types of leukaemia. This link is mainly substantiated by the Atomic Bomb Survivor study, the Ankylosing Spondylitis Patients study, and studies of various other patients irradiated for therapeutic purposes. The carcinogenic relationship is believed to be a linear or quadratic function of dose but the risk estimates differ widely for the different studies. Previous cohort studies of the Sellafield workforce have used the cumulative annual exposure data for their risk estimates. The current 1:4 matched case-control study also uses the individual worker's film badge data, the majority of which has been unavailable in computerised form. The results from the 1:4 matched (on dates of birth and employment, sex and industrial status) case-control study are compared and contrasted with those for a 1:4 nested (within the worker cohort and matched on the same factors) case-control study using annual doses. The data consist of 186 cases and 744 controls from the work forces of four BNFL sites: Springfields, Sellafield, Capenhurst and Chapelcross. Initial logistic regressions turned up some surprising contradictory results which led to a re-sampling of Sellafield mortality controls without the date-of-employment matching factor. It is suggested that overmatching is the cause of the contradictory results. Comparisons of the two measurements of radiation exposure suggest a strongly linear relationship with non-Normal errors. A method has been developed using the technique of regression calibration to deal with these errors in a case-control study.
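The regression calibration technique mentioned at the end can be illustrated with a simulated additive measurement error model. All numbers below are assumed for the sketch, and the reliability ratio is taken as known, whereas in practice it must be estimated (e.g., from replicate measurements).

```python
import numpy as np

rng = np.random.default_rng(3)
n = 5000
x = rng.normal(0.0, 1.0, n)            # true (unobservable) exposure
w = x + rng.normal(0.0, 0.5, n)        # error-prone measurement, W = X + U
y = 1.0 + 2.0 * x + rng.normal(0.0, 0.2, n)

# Naive regression of y on w attenuates the slope by the reliability
# ratio lambda = var(X) / (var(X) + var(U)) = 1 / 1.25 = 0.8 here.
naive_slope = np.cov(w, y)[0, 1] / np.var(w, ddof=1)

# Regression calibration: replace w by E[X | W] = mu + lambda*(w - mu),
# which undoes the attenuation (lambda assumed known in this sketch).
lam = 1.0 / (1.0 + 0.5 ** 2)
x_hat = w.mean() + lam * (w - w.mean())
calib_slope = np.cov(x_hat, y)[0, 1] / np.var(x_hat, ddof=1)
```

The naive slope lands near 2 × 0.8 = 1.6, while the calibrated slope recovers the true value of 2, which is exactly the bias mechanism the abstract describes for ignored dose errors.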
Competing risks and time-dependent covariates
DEFF Research Database (Denmark)
Cortese, Giuliana; Andersen, Per K
2010-01-01
Time-dependent covariates are frequently encountered in regression analysis for event history data and competing risks. They are often essential predictors, which cannot be substituted by time-fixed covariates. This study briefly recalls the different types of time-dependent covariates, as classi......Time-dependent covariates are frequently encountered in regression analysis for event history data and competing risks. They are often essential predictors, which cannot be substituted by time-fixed covariates. This study briefly recalls the different types of time-dependent covariates......, as classified by Kalbfleisch and Prentice [The Statistical Analysis of Failure Time Data, Wiley, New York, 2002] with the intent of clarifying their role and emphasizing the limitations in standard survival models and in the competing risks setting. If random (internal) time-dependent covariates....... In a multi-state framework, a first approach uses internal covariates to define additional (intermediate) transient states in the competing risks model. Another approach is to apply the landmark analysis as described by van Houwelingen [Scandinavian Journal of Statistics 2007, 34, 70-85] in order to study...
General Covariance from the Quantum Renormalization Group
Shyam, Vasudev
2016-01-01
The Quantum renormalization group (QRG) is a realisation of holography through a coarse graining prescription that maps the beta functions of a quantum field theory thought to live on the `boundary' of some space to holographic actions in the `bulk' of this space. A consistency condition will be proposed that translates into general covariance of the gravitational theory in the $D + 1$ dimensional bulk. This emerges from the application of the QRG on a planar matrix field theory living on the $D$ dimensional boundary. This will be a particular form of the Wess--Zumino consistency condition that the generating functional of the boundary theory needs to satisfy. In the bulk, this condition forces the Poisson bracket algebra of the scalar and vector constraints of the dual gravitational theory to close in a very specific manner, namely, the manner in which the corresponding constraints of general relativity do. A number of features of the gravitational theory will be fixed as a consequence of this form of the Po...
Matérn-based nonstationary cross-covariance models for global processes
Jun, Mikyoung
2014-07-01
Many spatial processes in environmental applications, such as climate variables and climate model errors on a global scale, exhibit complex nonstationary dependence structure, in not only their marginal covariance but also their cross-covariance. Flexible cross-covariance models for processes on a global scale are critical for an accurate description of each spatial process as well as the cross-dependences between them and also for improved predictions. We propose various ways to produce cross-covariance models, based on the Matérn covariance model class, that are suitable for describing prominent nonstationary characteristics of the global processes. In particular, we seek nonstationary versions of Matérn covariance models whose smoothness parameters vary over space, coupled with a differential operators approach for modeling large-scale nonstationarity. We compare their performance to the performance of some existing models in terms of the aic and spatial predictions in two applications: joint modeling of surface temperature and precipitation, and joint modeling of errors in climate model ensembles. © 2014 Elsevier Inc.
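For reference, the stationary Matérn covariance that these models build on has simple closed forms at half-integer smoothness values; the nonstationary versions proposed in the paper let the variance, range, and smoothness parameters vary over space. A minimal sketch (generic parameter names, not the paper's notation):

```python
import numpy as np

def matern(h, sigma2=1.0, rho=1.0, nu=1.5):
    """Matérn covariance at distance h, using the closed forms for the
    common half-integer smoothness values nu = 0.5 and nu = 1.5."""
    h = np.abs(np.asarray(h, float))
    if nu == 0.5:                        # exponential covariance
        return sigma2 * np.exp(-h / rho)
    if nu == 1.5:
        a = np.sqrt(3.0) * h / rho
        return sigma2 * (1.0 + a) * np.exp(-a)
    raise NotImplementedError("only nu in {0.5, 1.5} in this sketch")

c0 = matern(0.0)   # variance at zero distance: sigma2
c1 = matern(1.0)   # decays with distance
```

Cross-covariance construction then amounts to choosing compatible parameter functions for each pair of processes so that the resulting joint covariance matrix remains positive definite.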
Sheng, Ke; Cai, Jing; Brookeman, James; Molloy, Janelle; Christopher, John; Read, Paul
2006-09-01
Lung tumor motion trajectories measured by four-dimensional CT or dynamic MRI can be converted to a probability density function (PDF), which describes the probability of the tumor being at a certain position, for PDF-based treatment planning. Using this method in simulated sequential tomotherapy, we study the dose reduction of normal tissues and, more importantly, the effect of PDF reproducibility on the accuracy of dosimetry. For these purposes, realistic PDFs were obtained from two dynamic MRI scans of a healthy volunteer within a 2 week interval. The first PDF was accumulated from a 300 s scan and the second PDF was calculated from variable scan times from 5 s (one breathing cycle) to 300 s. Optimized beam fluences based on the second PDF were delivered to the hypothetical gross target volume (GTV) of a lung phantom that moved following the first PDF. The reproducibility between the two PDFs varied from low (78%) to high (94.8%) when the second scan time increased from 5 s to 300 s. When a highly reproducible PDF was used in optimization, the dose coverage of the GTV was maintained; the phantom lung receiving 10%-20% of the prescription dose was reduced by 40%-50% and the mean phantom lung dose was reduced by 9.6%. However, optimization based on a PDF with low reproducibility resulted in a 50% underdosed GTV. The dosimetric error increased nearly exponentially as the PDF error increased. Therefore, although the dose to the tissue surrounding the tumor can theoretically be reduced by PDF-based treatment planning, the reliability and applicability of this method highly depend on whether a reproducible PDF exists and is measurable. By correlating the dosimetric error and the PDF error together, a useful guideline for PDF data acquisition and patient qualification for PDF-based planning can be derived.
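The conversion from a motion trajectory to a PDF, and one plausible way to score reproducibility between a long and a short scan (histogram intersection; the abstract does not specify its exact metric, so this is an assumption), can be sketched as:

```python
import numpy as np

def motion_pdf(trajectory, bins, range_):
    """Histogram-based probability distribution of tumor position along
    one axis, normalized so the bin probabilities sum to 1."""
    counts, _ = np.histogram(trajectory, bins=bins, range=range_)
    return counts / counts.sum()

def pdf_overlap(p, q):
    """Reproducibility of two PDFs as their histogram intersection,
    sum(min(p, q)) in [0, 1]; an assumed reading of the 78-94.8%
    figures in the text."""
    return np.sum(np.minimum(p, q))

# idealized sinusoidal breathing motion, amplitude 10 mm, ~4 s period
t = np.linspace(0.0, 300.0, 30000)               # 300 s sampled at 100 Hz
long_scan = 10.0 * np.sin(2 * np.pi * t / 4.0)
short_scan = long_scan[t < 5.0]                  # one-cycle (5 s) scan
p_long = motion_pdf(long_scan, bins=20, range_=(-10, 10))
p_short = motion_pdf(short_scan, bins=20, range_=(-10, 10))
overlap = pdf_overlap(p_long, p_short)
```

Real breathing traces are far less regular than this sinusoid, which is precisely why short scans can produce poorly reproducible PDFs and, per the abstract, severely underdosed targets.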
Directory of Open Access Journals (Sweden)
Louveaux J
2006-01-01
Full Text Available With increasing bandwidths and decreasing loop lengths, crosstalk becomes the main impairment in VDSL systems. For downstream communication, crosstalk precompensation techniques have been designed to cope with this issue by exploiting the collocation of the transmitters. These techniques naturally require an accurate estimate of the crosstalk channel impulse responses, and we investigate the issue of tracking these channels. Because the receivers are not coordinated, and because the amplitude of the residual crosstalk interference after precompensation is very low, blind estimation schemes are inefficient in this case, so some part of the upstream or downstream bit rate needs to be used to assist the estimation. In this paper, we design a new algorithm that limits the bandwidth used for estimation by exploiting the collocation at the transmitter side. The principle is to use feedback from the receiver to the transmitter instead of pilots in the downstream signal. This choice is justified by computing the Cramér-Rao lower bound on the estimation error variance and showing that, for the power levels under consideration and for a given bit rate used to assist the estimation, this bound is effectively lower for the proposed scheme. A simple algorithm based on maximum likelihood is proposed. Its performance is analyzed in detail and compared to a classical scheme using pilot symbols. Finally, an improved but more complex version is proposed to approach the performance bound.
Dirac oscillator in a Galilean covariant non-commutative space
Energy Technology Data Exchange (ETDEWEB)
Melo, G.R. de [Universidade Federal do Reconcavo da Bahia, BA (Brazil); Montigny, M. [University of Alberta (Canada); Pompeia, P.J. [Instituto de Fomento e Coordecacao Industrial, Sao Jose dos Campos, SP (Brazil); Santos, Esdras S. [Universidade Federal da Bahia, Salvador (Brazil)
2013-07-01
Full text: Even though Galilean kinematics is only an approximation of the relativistic kinematics, the structure of Galilean kinematics is more intricate than relativistic kinematics. For instance, the Galilean algebra admits a nontrivial central extension and projective representations, whereas the Poincare algebra does not. It is possible to construct representations of the Galilei algebra with three possible methods: (1) directly from the Galilei algebra, (2) from contractions of the Poincare algebra with the same space-time dimension, or (3) from the Poincare algebra in a space-time with one additional dimension. In this paper, we follow the third approach, which we refer to as 'Galilean covariance' because the equations are Lorentz covariant in the extended manifold. These equations become Galilean invariant after projection to the lower dimension. Our motivation is that this covariant approach provides one more unifying feature of field theory models. Indeed, particle physics (with Poincare kinematics) and condensed matter physics (with Galilean kinematics) share many tools of quantum field theory (e.g. gauge invariance, spontaneous symmetry breaking, Goldstone bosons), but the Galilean kinematics does not admit a metric structure. However, since the Galilean Lie algebra is a subalgebra of the Poincare Lie algebra if one more space-like dimension is added, we can achieve 'Galilean covariance' with a metric in an extended manifold; that makes non-relativistic models look similar to Lorentz-covariant relativistic models. In this context we study the Galilei covariant five-dimensional formulation applied to the Galilean Dirac oscillator in a non-commutative situation, with space-space and momentum-momentum non-commutativity. The wave equation is obtained via a 'Galilean covariant' approach, which consists in projecting the covariant motion equations from a (4, 1)-dimensional manifold with light-cone coordinates, to a (3, 1
Directory of Open Access Journals (Sweden)
Tania Dehesh
2015-01-01
Full Text Available Background. The univariate meta-analysis (UM) procedure, as a technique that provides a single overall result, has become increasingly popular. Neglecting the existence of other concomitant covariates in the models leads to loss of treatment efficiency. Our aim was to propose four new approximation approaches for the covariance matrix of the coefficients, which is not readily available for the multivariate generalized least squares (MGLS) method as a multivariate meta-analysis approach. Methods. We evaluated the efficiency of four new approaches, including zero correlation (ZC), common correlation (CC), estimated correlation (EC), and multivariate multilevel correlation (MMC), on the estimation bias, mean square error (MSE), and 95% probability coverage of the confidence interval (CI) in the synthesis of Cox proportional hazards model coefficients in a simulation study. Result. Comparing the results of the simulation study on the MSE, bias, and CI of the estimated coefficients indicated that the MMC approach was the most accurate procedure compared to the EC, CC, and ZC procedures. The precision ranking of the four approaches under all of the above settings was MMC ≥ EC ≥ CC ≥ ZC. Conclusion. This study highlights the advantages of MGLS meta-analysis over the UM approach. The results suggest the use of the MMC procedure to overcome the lack of information when a complete covariance matrix of the coefficients is needed.
An alternative covariance estimator to investigate genetic heterogeneity in populations.
Heslot, Nicolas; Jannink, Jean-Luc
2015-11-26
For genomic prediction and genome-wide association studies (GWAS) using mixed models, covariance between individuals is estimated using molecular markers. Based on the properties of mixed models, using available molecular data for prediction is optimal if this covariance is known. Under this assumption, adding individuals to the analysis should never be detrimental. However, some empirical studies showed that increasing training population size decreased prediction accuracy. Recently, results from theoretical models indicated that even if marker density is high and the genetic architecture of traits is controlled by many loci with small additive effects, the covariance between individuals, which depends on relationships at causal loci, is not always well estimated by the whole-genome kinship. We propose an alternative covariance estimator named K-kernel to account for potential genetic heterogeneity between populations that is characterized by a lack of genetic correlation, and to limit the information flow between a priori unknown populations in a trait-specific manner. This is similar to a multi-trait model; parameters are estimated by REML, and in extreme cases the model can allow for an independent genetic architecture between populations. As such, K-kernel is useful to study the problem of the design of training populations. K-kernel was compared to other covariance estimators or kernels to examine its fit to the data, cross-validated accuracy and suitability for GWAS on several datasets. It provides a significantly better fit to the data than the genomic best linear unbiased prediction model and, in some cases, it performs better than other kernels such as the Gaussian kernel, as shown by an empirical null distribution. In GWAS simulations, alternative kernels control type I errors as well as or better than the classical whole-genome kinship and increase statistical power. No or small gains were observed in cross-validated prediction accuracy. This alternative
Covariate Balancing through Naturally Occurring Strata.
Alemi, Farrokh; ElRafey, Amr; Avramovic, Ivan
2016-12-14
To provide an alternative to propensity scoring (PS) for the common situation where there are interacting covariates. We used 1.3 million assessments of residents of the United States Veterans Affairs nursing homes, collected from January 1, 2000, through October 9, 2012. In stratified covariate balancing (SCB), data are divided into naturally occurring strata, where each stratum is an observed combination of the covariates. Within each stratum, cases with, and controls without, the target event are counted; controls are weighted to be as frequent as cases. This weighting procedure guarantees that covariates, or combinations of covariates, are balanced, meaning they occur at the same rate among cases and controls. Finally, the impact of the target event is calculated in the weighted data. We compare the performance of SCB, logistic regression (LR), and PS in simulated and real data. We examined the calibration of SCB and PS in predicting 6-month mortality from inability to eat, controlling for age, gender, and nine other disabilities for 296,051 residents in Veterans Affairs nursing homes. We also performed a simulation study, where outcomes were randomly generated from treatment, 10 covariates, and an increasing number of covariate interactions. The accuracy of SCB, PS, and LR in recovering the simulated treatment effect was reported. In the simulated environment, as the number of interactions among the covariates increased, SCB and properly specified LR remained accurate, but pairwise LR and pairwise PS, the most common applications of these tools, performed poorly. In real data, application of SCB was practical. SCB was better calibrated than linear PS, the most common method of PS. In environments where covariates interact, SCB is practical and more accurate than common methods of applying LR and PS. © Health Research and Educational Trust.
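The stratify-and-weight step described above can be sketched in a few lines. This is an illustrative reconstruction of the weighting idea only; the function name and data layout are ours, not from the paper.

```python
from collections import defaultdict

def scb_weights(records):
    """Stratified covariate balancing weights (illustrative sketch).

    records: list of (covariates_tuple, is_case) pairs, where the tuple
    is one observed combination of covariates (a natural stratum).
    Cases get weight 1; controls in each stratum are reweighted so that
    weighted controls are as frequent as cases within that stratum.
    """
    counts = defaultdict(lambda: [0, 0])  # stratum -> [n_cases, n_controls]
    for cov, is_case in records:
        counts[cov][0 if is_case else 1] += 1
    weights = []
    for cov, is_case in records:
        n_case, n_ctrl = counts[cov]
        if is_case:
            weights.append(1.0)
        else:
            weights.append(n_case / n_ctrl if n_ctrl else 0.0)
    return weights
```

By construction, within every stratum the summed control weights equal the case count, so each covariate combination occurs at the same weighted rate among cases and controls.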
Bayesian source term determination with unknown covariance of measurements
Belal, Alkomiet; Tichý, Ondřej; Šmídl, Václav
2017-04-01
Determination of a source term of a release of hazardous material into the atmosphere is a very important task for emergency response. We are concerned with the problem of estimation of the source term in the conventional linear inverse problem y = Mx, where the relationship between the vector of observations y and the unknown source term x is described by the source-receptor-sensitivity (SRS) matrix M. Since the system is typically ill-conditioned, the problem is recast as the minimization of (y - Mx)^T R^{-1} (y - Mx) + x^T B^{-1} x. The first term minimizes the error of the measurements with covariance matrix R, and the second term is a regularization of the source term. Different types of regularization arise for different choices of the matrices R and B; for example, Tikhonov regularization assumes the covariance matrix B to be the identity matrix multiplied by a scalar parameter. In this contribution, we adopt a Bayesian approach to make inference on the unknown source term x as well as the unknown R and B. We assume the prior on x to be Gaussian with zero mean and unknown diagonal covariance matrix B. The covariance matrix of the likelihood, R, is also unknown. We consider two potential choices for the structure of the matrix R: the first is a diagonal matrix, and the second is a locally correlated structure using information on the topology of the measuring network. Since inference in the model is intractable, an iterative variational Bayes algorithm is used for simultaneous estimation of all model parameters. The practical usefulness of our contribution is demonstrated by applying the resulting algorithm to real data from the European Tracer Experiment (ETEX). This research is supported by the EEA/Norwegian Financial Mechanism under project MSMT-28477/2014, Source-Term Determination of Radionuclide Releases by Inverse Atmospheric Dispersion Modelling (STRADI).
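When R and B are held fixed, the quadratic objective above has the standard closed-form minimizer in x. The sketch below shows that fixed-covariance case only; the paper's actual contribution is inferring R and B as well, which this snippet does not attempt (the function name is ours).

```python
import numpy as np

def map_source_term(M, y, R, B):
    """Minimizer of (y - Mx)^T R^{-1} (y - Mx) + x^T B^{-1} x over x.

    M: SRS matrix, y: observations, R: measurement error covariance,
    B: prior covariance of the source term. Setting the gradient to
    zero gives x = (M^T R^{-1} M + B^{-1})^{-1} M^T R^{-1} y.
    """
    Ri = np.linalg.inv(R)
    Bi = np.linalg.inv(B)
    return np.linalg.solve(M.T @ Ri @ M + Bi, M.T @ Ri @ y)
```

With B equal to a scalar multiple of the identity this reduces to Tikhonov (ridge) regularization, matching the example given in the abstract.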
Isaac, Lisa M.; And Others
1993-01-01
Assessed multiple aspects of cognitive performance, medication planning ability, and medication compliance in 20 elderly outpatients. Findings suggest that aspects of attention/concentration, visual and verbal memory, and motor function which are untapped by simple mental status assessment are related to medication access, planning, and compliance…
Effects of Correlated Errors on the Analysis of Space Geodetic Data
Romero-Wolf, Andres; Jacobs, C. S.
2011-01-01
As thermal errors are reduced, instrumental and troposphere-correlated errors will become increasingly important. Work in progress shows that troposphere covariance error models improve data analysis results. We expect to see stronger effects with higher data rates. Temperature modeling of delay errors may further reduce temporal correlations in the data.
Dobson, F; Hinman, R S; Hall, M; Marshall, C J; Sayer, T; Anderson, C; Newcomb, N; Stratford, P W; Bennell, K L
2017-11-01
To estimate the reliability and measurement error of performance-based tests of physical function recommended by the Osteoarthritis Research Society International (OARSI) in people with hip and/or knee osteoarthritis (OA). Prospective repeated measures between independent raters within a session and within rater over a one-week interval. Relative reliability was estimated for 51 people with hip and/or knee OA (mean age 64.5 years, standard deviation (SD) 6.21 years; 47% female; 36 (70%) primary knee OA) on the 30s Chair Stand Test (30sCST), 40m Fast-Paced Walk Test (40mFPWT), 11-step Stair Climb Test (11-step SCT), Timed Up and Go (TUG), Six-Minute Walk Test (6MWT), 10m Fast-Paced Walk Test (10mFPWT) and 20s Stair Climb Test (20sSCT) using intra-class correlation coefficients (ICC). Absolute reliability was calculated using the standard error of measurement (SEM) and minimal detectable change (MDC). Measurement error (SEM) was acceptable across tests. Between-rater reliability was: optimal (ICC > 0.9, lower 1-sided 95% CI > 0.7) for the 40mFPWT, 6MWT and 10mFPWT; sufficient (ICC > 0.8, lower 1-sided 95% CI > 0.7) for the 30sCST and 20sSCT; and unacceptable (lower 1-sided 95% CI < 0.7) for the remaining tests. Within-rater reliability was optimal for the 40mFPWT and 6MWT, sufficient for the 30sCST and 10mFPWT, and unacceptable for the 11-step SCT, TUG and 20sSCT. The 30sCST, 40mFPWT, 6MWT and 10mFPWT demonstrated, at minimum, acceptable levels of both between- and within-rater reliability and measurement error. All tests demonstrated sufficiently small measurement error, indicating they are adequate for measuring change over time in individuals with knee/hip OA. Copyright © 2017 Osteoarthritis Research Society International. All rights reserved.
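The absolute-reliability quantities used above follow standard formulas: SEM = SD·sqrt(1 − ICC), and the minimal detectable change at 95% confidence for a test-retest design is MDC95 = 1.96·sqrt(2)·SEM. A minimal sketch (the SD and ICC values used in the test are illustrative, not taken from the study):

```python
import math

def sem(sd, icc):
    """Standard error of measurement from the between-subject SD
    and the reliability coefficient (ICC)."""
    return sd * math.sqrt(1.0 - icc)

def mdc95(sd, icc):
    """Minimal detectable change at 95% confidence; the sqrt(2)
    accounts for error in both of the two measurements compared."""
    return 1.96 * math.sqrt(2.0) * sem(sd, icc)
```

A change score smaller than MDC95 cannot be distinguished from measurement error for an individual, which is why the abstract ties small SEM values to adequacy for measuring change over time.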
Earth Observing System Covariance Realism Updates
Ojeda Romero, Juan A.; Miguel, Fred
2017-01-01
This presentation will be given at the International Earth Science Constellation Mission Operations Working Group meetings June 13-15, 2017 to discuss the Earth Observing System Covariance Realism updates.
Cortisol covariation within parents of young children: Moderation by relationship aggression.
Saxbe, Darby E; Adam, Emma K; Schetter, Christine Dunkel; Guardino, Christine M; Simon, Clarissa; McKinney, Chelsea O; Shalowitz, Madeleine U
2015-12-01
Covariation in diurnal cortisol has been observed in several studies of cohabiting couples. In two such studies (Liu et al., 2013; Saxbe and Repetti, 2010), relationship distress was associated with stronger within-couple correlations, suggesting that couples' physiological linkage with each other may indicate problematic dyadic functioning. Although intimate partner aggression has been associated with dysregulation in women's diurnal cortisol, it has not yet been tested as a moderator of within-couple covariation. This study reports on a diverse sample of 122 parents who sampled salivary cortisol on matched days for two years following the birth of an infant. Partners showed strong positive cortisol covariation. In couples with higher levels of partner-perpetrated aggression reported by women at one year postpartum, both women and men had a flatter diurnal decrease in cortisol and stronger correlations with partners' cortisol sampled at the same timepoints. In other words, relationship aggression was linked both with indices of suboptimal cortisol rhythms in both members of the couples and with stronger within-couple covariation coefficients. These results persisted when relationship satisfaction and demographic covariates were included in the model. During some of the sampling days, some women were pregnant with a subsequent child, but pregnancy did not significantly moderate cortisol levels or within-couple covariation. The findings suggest that couples experiencing relationship aggression have both suboptimal neuroendocrine profiles and stronger covariation. Cortisol covariation is an understudied phenomenon with potential implications for couples' relationship functioning and physical health. Copyright © 2015 Elsevier Ltd. All rights reserved.
Analysis of covariance with incomplete data via semiparametric model transformations.
Grigoletto, M; Akritas, M G
1999-12-01
We propose a method for fitting semiparametric models such as the proportional hazards (PH), additive risks (AR), and proportional odds (PO) models. Each of these semiparametric models implies that some transformation of the conditional cumulative hazard function (at each t) depends linearly on the covariates. The proposed method is based on nonparametric estimation of the conditional cumulative hazard function, forming a weighted average over a range of t-values, and subsequent use of least squares to estimate the parameters suggested by each model. An approximation to the optimal weight function is given. This allows semiparametric models to be fitted even in incomplete data cases where the partial likelihood fails (e.g., left censoring, right truncation). However, the main advantage of this method rests in the fact that neither the interpretation of the parameters nor the validity of the analysis depend on the appropriateness of the PH or any of the other semiparametric models. In fact, we propose an integrated method for data analysis where the role of the various semiparametric models is to suggest the best fitting transformation. A single continuous covariate and several categorical covariates (factors) are allowed. Simulation studies indicate that the test statistics and confidence intervals have good small-sample performance. A real data set is analyzed.
Neutron Multiplicity: LANL W Covariance Matrix for Curve Fitting
Energy Technology Data Exchange (ETDEWEB)
Wendelberger, James G. [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)
2016-12-08
In neutron multiplicity counting one may fit a curve by minimizing an objective function, χ²_n. The objective function includes the inverse of an n × n matrix of covariances, W. The inverse of the W matrix has a closed-form solution. In addition, W⁻¹ is a tridiagonal matrix. The closed form and tridiagonal nature allow for a simpler expression of the objective function χ²_n. Minimization of this simpler expression will provide the optimal parameters for the fitted curve.
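Because W⁻¹ is tridiagonal, the quadratic form χ²_n = rᵀW⁻¹r can be evaluated in O(n) from the diagonal and first off-diagonal alone, without forming the full matrix. A sketch of that evaluation (the function name and data layout are ours, not from the report):

```python
def chi2_tridiagonal(resid, diag, off):
    """Evaluate chi2 = r^T Winv r for a symmetric tridiagonal Winv.

    resid: residuals r_i
    diag:  Winv[i, i], length n
    off:   Winv[i, i+1], length n-1
    The symmetric off-diagonal terms contribute twice, hence the factor 2.
    """
    n = len(resid)
    total = sum(diag[i] * resid[i] ** 2 for i in range(n))
    total += 2.0 * sum(off[i] * resid[i] * resid[i + 1] for i in range(n - 1))
    return total
```

This is the kind of simplification the abstract refers to: the closed-form, tridiagonal W⁻¹ turns an O(n²) quadratic form into a linear-time sum.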
Field intercomparison of four methane gas analyzers suitable for eddy covariance flux measurements
Directory of Open Access Journals (Sweden)
O. Peltola
2013-06-01
Full Text Available Performances of four methane gas analyzers suitable for eddy covariance measurements are assessed. The assessment and comparison was performed by analyzing eddy covariance data obtained during summer 2010 (1 April to 26 October) at a pristine fen, Siikaneva, Southern Finland. High methane fluxes with pronounced seasonality have been measured at this fen. The four participating methane gas analyzers are commercially available closed-path units TGA-100A (Campbell Scientific Inc., USA), RMT-200 (Los Gatos Research, USA), G1301-f (Picarro Inc., USA) and an early prototype open-path unit Prototype-7700 (LI-COR Biosciences, USA). The RMT-200 functioned most reliably throughout the measurement campaign, during low and high flux periods. Methane fluxes from RMT-200 and G1301-f had the smallest random errors and the fluxes agree remarkably well throughout the measurement campaign. Cospectra and power spectra calculated from RMT-200 and G1301-f data agree well with corresponding temperature spectra during a high flux period. None of the gas analyzers showed statistically significant diurnal variation for methane flux. Prototype-7700 functioned only for a short period of time, over one month, in the beginning of the measurement campaign during a low flux period, and thus, its overall accuracy and season-long performance were not assessed. The open-path gas analyzer is a practical choice for measurement sites in remote locations due to its low power demand, whereas for G1301-f methane measurements interference from water vapor is straightforward to correct since the instrument measures both gases simultaneously. In any case, if only the performance in this intercomparison is considered, RMT-200 performed the best and is the recommended choice if a new fast response methane gas analyzer is needed.
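The flux quantity these analyzers feed into is the eddy covariance itself: the mean product of the fluctuations of vertical wind speed and gas concentration over an averaging period. A minimal sketch of that core computation, ignoring the detrending, despiking, and spectral corrections a real processing chain applies:

```python
def ec_flux(w, c):
    """Eddy covariance flux: mean of the fluctuation product w'c'.

    w: vertical wind speed samples, c: gas concentration samples,
    taken over one averaging period (e.g. 30 min at 10 Hz).
    Fluctuations are deviations from the period means (Reynolds
    decomposition with block averaging).
    """
    n = len(w)
    wbar = sum(w) / n
    cbar = sum(c) / n
    return sum((wi - wbar) * (ci - cbar) for wi, ci in zip(w, c)) / n
```

The random errors and cospectra compared in the abstract are diagnostics of exactly this covariance estimate under instrument noise and limited frequency response.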
Field intercomparison of four methane gas analysers suitable for eddy covariance flux measurements
Peltola, O.; Mammarella, I.; Haapanala, S.; Burba, G.; Vesala, T.
2012-12-01
Performances of four methane gas analyzers suitable for eddy covariance measurements are assessed. The assessment and comparison was performed by analyzing eddy covariance data obtained during summer 2010 (1 April to 26 October) at a pristine fen, Siikaneva, Southern Finland. High methane fluxes with pronounced seasonality have been measured at this fen. The four participating methane gas analyzers are commercially available closed-path units TGA-100A (Campbell Scientific Inc., USA), RMT-200 (Los Gatos Research, USA), G1301-f (Picarro Inc., USA) and an early prototype open-path unit Prototype-7700 (LI-COR Biosciences, USA). The RMT-200 functioned most reliably throughout the measurement campaign, during low and high flux periods. Methane fluxes from RMT-200 and G1301-f had the smallest random errors and the fluxes agree remarkably well throughout the measurement campaign. Cospectra and power spectra calculated from RMT-200 and G1301-f data agree well with corresponding temperature spectra during a high flux period. None of the gas analysers showed statistically significant diurnal variation for methane flux. Prototype-7700 functioned only for a short period of time, over one month, in the beginning of the measurement campaign during low flux period, and thus, its overall accuracy and long-term performance were not assessed. Prototype-7700 is a practical choice for measurement sites in remote locations due to its low power demand, however if only the performance in this intercomparison is considered, RMT-200 performed the best and is the recommended choice if a new fast response methane gas analyser is needed.
Directory of Open Access Journals (Sweden)
Cheng-Hong Yang
Full Text Available BACKGROUND: Determining the complex relationship between diseases, polymorphisms in human genes and environmental factors is challenging. Multifactor dimensionality reduction (MDR) has proven capable of effectively detecting statistical patterns of epistasis. However, MDR has a weakness in accurately assigning multi-locus genotypes to either high-risk or low-risk groups, and generally does not provide accurate error rates when the case and control data sets are imbalanced. Consequently, results for classification error rates and odds ratios (OR) may provide surprising values in that the true positive (TP) value is often small. METHODOLOGY/PRINCIPAL FINDINGS: To address this problem, we introduce a classifier function based on the ratio between the percentage of cases in the case data and the percentage of controls in the control data to improve MDR (MDR-ER), so that multi-locus genotypes can be classified correctly into high-risk and low-risk groups. In this study, a real data set with different ratios of cases to controls (1:4) was obtained from the mitochondrial D-loop of chronic dialysis patients in order to test MDR-ER. The TP and TN values were collected from all tests to analyze to what degree MDR-ER performed better than MDR. CONCLUSIONS/SIGNIFICANCE: Results showed that MDR-ER can be successfully used to detect the complex associations in imbalanced data sets.
Yang, Cheng-Hong; Lin, Yu-Da; Chuang, Li-Yeh; Chen, Jin-Bor; Chang, Hsueh-Wei
2013-01-01
Determining the complex relationship between diseases, polymorphisms in human genes and environmental factors is challenging. Multifactor dimensionality reduction (MDR) has proven capable of effectively detecting statistical patterns of epistasis. However, MDR has a weakness in accurately assigning multi-locus genotypes to either high-risk or low-risk groups, and generally does not provide accurate error rates when the case and control data sets are imbalanced. Consequently, results for classification error rates and odds ratios (OR) may provide surprising values in that the true positive (TP) value is often small. To address this problem, we introduce a classifier function based on the ratio between the percentage of cases in the case data and the percentage of controls in the control data to improve MDR (MDR-ER), so that multi-locus genotypes can be classified correctly into high-risk and low-risk groups. In this study, a real data set with different ratios of cases to controls (1:4) was obtained from the mitochondrial D-loop of chronic dialysis patients in order to test MDR-ER. The TP and TN values were collected from all tests to analyze to what degree MDR-ER performed better than MDR. Results showed that MDR-ER can be successfully used to detect the complex associations in imbalanced data sets.
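The MDR-ER classifier idea described above compares each genotype cell's share of all cases against its share of all controls, which is what keeps a 1:4 imbalanced design comparable. An illustrative sketch of that labeling rule (the function name and threshold convention are ours, not from the paper):

```python
def mdr_er_label(cell_cases, total_cases, cell_controls, total_controls):
    """Label one multi-locus genotype cell, MDR-ER style.

    The cell is high-risk if its fraction of all cases exceeds its
    fraction of all controls; using within-group percentages rather
    than raw counts removes the effect of case/control imbalance.
    """
    case_ratio = cell_cases / total_cases
    control_ratio = cell_controls / total_controls
    return "high-risk" if case_ratio > control_ratio else "low-risk"
```

With raw counts, a 1:4 case:control data set would push nearly every cell toward "low-risk"; the ratio form avoids that bias.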
Castrillon, Julio
2015-11-10
We develop a multi-level restricted Gaussian maximum likelihood method for estimating the covariance function parameters and computing the best unbiased predictor. Our approach produces a new set of multi-level contrasts where the deterministic parameters of the model are filtered out, thus enabling the estimation of the covariance parameters to be decoupled from the deterministic component. Moreover, the multi-level covariance matrix of the contrasts exhibits fast decay that depends on the smoothness of the covariance function. Due to the fast decay of the multi-level covariance matrix coefficients, only a small set is computed with a level-dependent criterion. We demonstrate our approach on problems of up to 512,000 observations with a Matérn covariance function and highly irregular placements of the observations. In addition, these problems are numerically unstable and hard to solve with traditional methods.
The valuation error in the compound values
Directory of Open Access Journals (Sweden)
Marina Ciuna
2013-08-01
Full Text Available In appraising the “valore di trasformazione” the valuation error is composed of the error on market value and the error on construction cost. In appraising the “valore complementare” the valuation error is composed of the error on the market value of the complex real property and the error on the market value of the residual part. The final error is a function of the partial errors and can be studied using estimative and market ratios. The application of compound values to real estate appraisal can produce unacceptable errors unless carried out with expertise.
Energy Technology Data Exchange (ETDEWEB)
Vinyard, Natalia Sergeevna [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Perry, Theodore Sonne [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Usov, Igor Olegovich [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)
2017-10-04
We calculate opacity from k(hν) = -ln[T(hν)]/(ρL), where T(hν) is the transmission for photon energy hν, ρ is the sample density, and L is the path length through the sample. The density and path length are measured together by Rutherford backscatter. The error propagates as Δk = (∂k/∂T)ΔT + (∂k/∂(ρL))Δ(ρL). We can rewrite this in terms of fractional error as Δk/k = Δln(T)/ln(T) + Δ(ρL)/(ρL). Transmission itself is calculated from T = (U - E)/(V - E) = B/B_0, where B is the transmitted backlighter (BL) signal and B_0 is the unattenuated backlighter signal. Then ΔT/T = Δln(T) = ΔB/B + ΔB_0/B_0, and consequently Δk/k = (1/ln T)(ΔB/B + ΔB_0/B_0) + Δ(ρL)/(ρL). Transmission is measured in the range of 0.2
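The propagation above can be checked numerically. The sketch below follows the stated relations, taking the transmission T, the fractional backlighter errors, and the fractional ρL error as inputs; magnitudes are added, as in a worst-case estimate, and the function names are ours.

```python
import math

def opacity(T, rhoL):
    """Opacity k = -ln(T) / (rho * L) for transmission T and areal density rhoL."""
    return -math.log(T) / rhoL

def opacity_frac_error(T, dB_over_B, dB0_over_B0, d_rhoL_over_rhoL):
    """Fractional opacity error |Δk/k|.

    Follows Δk/k = (1/ln T)(ΔB/B + ΔB0/B0) + Δ(ρL)/(ρL),
    with magnitudes summed (worst-case propagation).
    """
    return abs((dB_over_B + dB0_over_B0) / math.log(T)) + abs(d_rhoL_over_rhoL)
```

Note how the 1/ln(T) factor inflates the backlighter contribution as T approaches 1, which is one reason transmission is kept well below unity in such measurements.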
A sparse Ising model with covariates.
Cheng, Jie; Levina, Elizaveta; Wang, Pei; Zhu, Ji
2014-12-01
There has been a lot of work fitting Ising models to multivariate binary data in order to understand the conditional dependency relationships between the variables. However, additional covariates are frequently recorded together with the binary data, and may influence the dependence relationships. Motivated by such a dataset on genomic instability collected from tumor samples of several types, we propose a sparse covariate dependent Ising model to study both the conditional dependency within the binary data and its relationship with the additional covariates. This results in subject-specific Ising models, where the subject's covariates influence the strength of association between the genes. As in all exploratory data analysis, interpretability of results is important, and we use ℓ1 penalties to induce sparsity in the fitted graphs and in the number of selected covariates. Two algorithms to fit the model are proposed and compared on a set of simulated data, and asymptotic results are established. The results on the tumor dataset and their biological significance are discussed in detail. © 2014, The International Biometric Society.
Estimation of Fuzzy Measures Using Covariance Matrices in Gaussian Mixtures
Directory of Open Access Journals (Sweden)
Nishchal K. Verma
2012-01-01
Full Text Available This paper presents a novel computational approach for estimating fuzzy measures directly from a Gaussian mixture model (GMM). The mixture components of the GMM provide the membership functions for the input-output fuzzy sets. By treating the consequent part as a function of fuzzy measures, we derive its coefficients from the covariance matrices found directly in the GMM, and the defuzzified output is constructed from both the premise and consequent parts of the nonadditive fuzzy rules in the form of a Choquet integral. The computational burden involved with the solution of the λ-measure is minimized using the Q-measure. The fuzzy model whose fuzzy measures were computed using covariance matrices found in the GMM has been successfully applied to two benchmark problems and one real-time electric load data set of an Indian utility. The performance of the resulting model in many experimental studies, including the above-mentioned application, is found to be better than, or comparable to, recently available fuzzy models. The main contribution of this paper is the efficient estimation of fuzzy measures directly from the covariance matrices found in the GMM, which greatly avoids the computational burden of learning them iteratively and solving polynomial equations of order equal to the number of input-output variables.
Visual Representations Of Non-Separable Spatiotemporal Covariance Models
Kolovos, A.; Christakos, G.; Hristopulos, D. T.; Serre, M. L.
2003-12-01
Natural processes that relate to climatic variability (such as air circulation, air-water and air-soil energy exchanges) contain inherently stochastic components. Spatiotemporal random fields are frequently employed to model such processes and deal with the uncertainty involved. Covariance functions are statistical tools used to express correlations between process values across space and time. This work reviews and visually represents a series of useful covariance models introduced in the Modern Spatiotemporal Geostatistics literature. Some of their important features are examined; their application can significantly improve the interpretation of space/time correlations that affect long-term climatic evolution on both local and global scales.
Scale-covariant theory of gravitation and astrophysical applications
Canuto, V.; Adams, P. J.; Hsieh, S.-H.; Tsiang, E.
1977-01-01
A scale-covariant theory of gravitation is presented which is characterized by a set of equations that are complete only after a choice of the scale function is made. Special attention is given to gauge conditions and units which allow gravitational phenomena to be described in atomic units. The generalized gravitational-field equations are derived by performing a direct scale transformation, by extending Riemannian geometry to Weyl geometry through the introduction of the notion of cotensors, and from a variation principle. Modified conservation laws are provided, a set of dynamical equations is obtained, and astrophysical consequences are considered. The theory is applied to examine certain homogeneous cosmological solutions, perihelion shifts, light deflections, secular variations of planetary orbital elements, stellar structure equations for a star in quasi-static equilibrium, and the past thermal history of the Earth. The possible relation of the scale-covariant theory to gauge field theories and their predictions of cosmological constants is discussed.
Multiple Imputation of a Randomly Censored Covariate Improves Logistic Regression Analysis.
Atem, Folefac D; Qian, Jing; Maye, Jacqueline E; Johnson, Keith A; Betensky, Rebecca A
2016-01-01
Randomly censored covariates arise frequently in epidemiologic studies. The most commonly used methods, including complete-case analysis and single imputation or substitution, suffer from inefficiency and bias: they make strong parametric assumptions or consider only limit-of-detection censoring. We employ multiple imputation, in conjunction with semi-parametric modeling of the censored covariate, to overcome these shortcomings and to facilitate robust estimation. We develop a multiple imputation approach for randomly censored covariates within the framework of a logistic regression model. We use the non-parametric estimate of the covariate distribution or, in the presence of additional covariates in the model, the semiparametric Cox model estimate. We evaluate this procedure in simulations and compare its operating characteristics to those of the complete-case analysis and a survival regression approach. We apply the procedures to an Alzheimer's study of the association between amyloid positivity and maternal age of onset of dementia. Multiple imputation achieves lower standard errors and higher power than the complete-case approach under heavy and moderate censoring and is comparable under light censoring. The survival regression approach achieves the highest power among all procedures, but does not produce interpretable estimates of association. Multiple imputation offers a favorable alternative to complete-case analysis and ad hoc substitution methods in the presence of randomly censored covariates within the framework of logistic regression.
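A minimal sketch of the imputation step, under assumptions of our own: censored values are replaced by donor draws from fully observed values exceeding them, a crude stand-in for sampling from the Kaplan-Meier conditional survival distribution (the paper's semiparametric version additionally conditions on covariates via a Cox model).

```python
import numpy as np

def impute_censored(values, censored, m=10, rng=None):
    """Multiple imputation sketch for a randomly right-censored covariate:
    each censored value is replaced by a draw from the empirical
    distribution of fully observed values exceeding it -- a crude stand-in
    for sampling from the Kaplan-Meier conditional survival distribution.
    Returns m imputed copies of the covariate vector."""
    rng = np.random.default_rng(rng)
    values = np.asarray(values, dtype=float)
    censored = np.asarray(censored, dtype=bool)
    observed = values[~censored]
    imputations = []
    for _ in range(m):
        imp = values.copy()
        for i in np.where(censored)[0]:
            donors = observed[observed > values[i]]
            if donors.size:               # no donor above -> keep censoring value
                imp[i] = rng.choice(donors)
        imputations.append(imp)
    return imputations
```

Each imputed copy would then be used to fit the logistic regression, with estimates pooled across the m fits by Rubin's rules.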
Error analysis for mesospheric temperature profiling by absorptive occultation sensors
Directory of Open Access Journals (Sweden)
M. J. Rieder
Full Text Available An error analysis for mesospheric profiles retrieved from absorptive occultation data has been performed, starting with realistic error assumptions as would apply to intensity data collected by available high-precision UV photodiode sensors. Propagation of statistical errors was investigated through the complete retrieval chain from measured intensity profiles to atmospheric density, pressure, and temperature profiles. We assumed unbiased errors, as the occultation method is essentially self-calibrating, and straight-line propagation of occulted signals, as we focus on heights of 50–100 km where refractive bending of the sensed radiation is negligible. Throughout the analysis the errors were characterized at each retrieval step by their mean profile, their covariance matrix and their probability density function (pdf). This furnishes, compared to a variance-only estimation, a much improved insight into the error propagation mechanism. We applied the procedure to a baseline analysis of the performance of a recently proposed solar UV occultation sensor (SMAS – Sun Monitor and Atmospheric Sounder) and provide, using a reasonable exponential atmospheric model as background, results on error standard deviations and error correlation functions of density, pressure, and temperature profiles. Two different sensor photodiode assumptions are discussed: diamond diodes (DD) with 0.03% and silicon diodes (SD) with 0.1% (of unattenuated intensity) measurement noise at a 10 Hz sampling rate. A factor-of-2 margin was applied to these noise values in order to roughly account for unmodeled cross-section uncertainties. Within the entire height domain (50–100 km) we find temperature to be retrieved to better than 0.3 K (DD) / 1 K (SD) accuracy, respectively, at 2 km height resolution. The results indicate that absorptive occultations acquired by a SMAS-type sensor could provide mesospheric profiles of fundamental variables such as temperature with
ℋ-matrix techniques for approximating large covariance matrices and estimating its parameters
Litvinenko, Alexander
2016-10-25
In this work the task is to use the available measurements to estimate the unknown hyper-parameters (variance, smoothness parameter and covariance length) of the covariance function. We do this by maximizing the joint log-likelihood function, a non-convex and non-linear problem. To overcome the cubic complexity of the linear algebra, we approximate the discretised covariance function in the hierarchical (ℋ-) matrix format. The ℋ-matrix format has a log-linear computational cost and storage O(kn log n), where the rank k is a small integer. On each iteration step of the optimization procedure the covariance matrix itself, its determinant and its Cholesky decomposition are recomputed within the ℋ-matrix format. (© 2016 Wiley-VCH Verlag GmbH & Co. KGaA, Weinheim)
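The objective being optimised can be illustrated with a dense O(n³) version; the ℋ-matrix format replaces the Cholesky and log-determinant steps below with log-linear approximations. The exponential kernel, jitter value and optimiser choice here are illustrative assumptions, not the paper's setup.

```python
import numpy as np
from scipy.optimize import minimize

def neg_log_likelihood(params, pts, z):
    """Gaussian negative log-likelihood with an exponential covariance
    C(r) = sigma2 * exp(-r / ell); the Cholesky factor and log-determinant
    are recomputed for every proposed (sigma2, ell) -- exactly the step
    the H-matrix format accelerates for large n."""
    sigma2, ell = np.exp(params[0]), np.exp(params[1])
    r = np.abs(pts[:, None] - pts[None, :])
    K = sigma2 * np.exp(-r / ell) + 1e-6 * np.eye(len(pts))
    try:
        L = np.linalg.cholesky(K)
    except np.linalg.LinAlgError:
        return 1e12                    # reject numerically non-PD proposals
    alpha = np.linalg.solve(L, z)
    logdet = 2.0 * np.sum(np.log(np.diag(L)))
    return 0.5 * (logdet + alpha @ alpha + len(z) * np.log(2 * np.pi))

# simulate data from sigma2 = 2, ell = 1.5 and re-estimate by MLE
rng = np.random.default_rng(1)
pts = np.sort(rng.uniform(0.0, 10.0, 60))
r = np.abs(pts[:, None] - pts[None, :])
z = np.linalg.cholesky(2.0 * np.exp(-r / 1.5) + 1e-6 * np.eye(60)) @ rng.standard_normal(60)
res = minimize(neg_log_likelihood, x0=[0.0, 0.0], args=(pts, z), method="Nelder-Mead")
sigma2_hat, ell_hat = np.exp(res.x)
```

Optimising in log-parameters keeps the variance and length positive without explicit constraints.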
Progress on Nuclear Data Covariances: AFCI-1.2 Covariance Library
Energy Technology Data Exchange (ETDEWEB)
Oblozinsky,P.; Oblozinsky,P.; Mattoon,C.M.; Herman,M.; Mughabghab,S.F.; Pigni,M.T.; Talou,P.; Hale,G.M.; Kahler,A.C.; Kawano,T.; Little,R.C.; Young,P.G
2009-09-28
Improved neutron cross section covariances were produced for 110 materials including 12 light nuclei (coolants and moderators), 78 structural materials and fission products, and 20 actinides. The improved covariances were organized into the AFCI-1.2 covariance library in 33 energy groups, from 10⁻⁵ eV to 19.6 MeV. BNL contributed improved covariance data for the following materials: ²³Na and ⁵⁵Mn, where more detailed evaluations were done; improvements in the major structural materials ⁵²Cr, ⁵⁶Fe and ⁵⁸Ni; improved estimates for the remaining structural materials and fission products; improved covariances for 14 minor actinides; and estimates of mubar covariances for ²³Na and ⁵⁶Fe. LANL contributed improved covariance data for ²³⁵U and ²³⁹Pu, including prompt neutron fission spectra, and a completely new evaluation for ²⁴⁰Pu. A new R-matrix evaluation for ¹⁶O, including mubar covariances, is nearing completion. BNL assembled the library and performed basic testing using improved procedures, including inspection of uncertainty and correlation plots for each material. The AFCI-1.2 library was released to ANL and INL in August 2009.
Dreano, Denis
2015-04-27
A statistical model is proposed to filter satellite-derived chlorophyll concentration data from the Red Sea and to predict future chlorophyll concentrations. The seasonal trend is first estimated after filling missing chlorophyll data using an Empirical Orthogonal Function (EOF)-based algorithm (Data Interpolation EOF). The anomalies are then modeled as a stationary Gaussian process. A method proposed by Gneiting (2002) is used to construct positive-definite space-time covariance models for this process. After choosing an appropriate statistical model and identifying its parameters, kriging is applied in the space-time domain to make a one-step-ahead prediction of the anomalies. The latter serves as the prediction model of a reduced-order Kalman filter, which is applied to assimilate and predict future chlorophyll concentrations. The proposed method decreases the root mean square (RMS) prediction error by about 11% compared with the seasonal average.
Forecasting Covariance Matrices: A Mixed Frequency Approach
DEFF Research Database (Denmark)
Halbleib, Roxana; Voev, Valeri
This paper proposes a new method for forecasting covariance matrices of financial returns. The model mixes volatility forecasts from a dynamic model of daily realized volatilities estimated with high-frequency data with correlation forecasts based on daily data. This new approach allows for flexible … matrix dynamics. Our empirical results show that the new mixing approach provides superior forecasts compared to multivariate volatility specifications using single sources of information.
Supergauge Field Theory of Covariant Heterotic Strings
Michio, KAKU; Physics Department, Osaka University : Physics Department, City College of the City University of New York
1986-01-01
We present the gauge covariant second quantized field theory for free heterotic strings, which is a leading candidate for a unified theory of all known particles. Our action is invariant under the semi-direct product of the super Virasoro and the Kac-Moody E_8×E_8 or Spin(32)/Z_2 group. We derive the covariant action by path integrals in the same way that Feynman originally derived the Schrödinger equation. By adding an infinite number of auxiliary fields, we can also make the action explicitly...
Construction and use of gene expression covariation matrix
Directory of Open Access Journals (Sweden)
Bellis Michel
2009-07-01
strings of symbols. Conclusion This new method, applied to four different large data sets, has allowed us to construct distinct covariation matrices with similar properties. We have also developed a technique to translate these covariation networks into graphical 3D representations, and found that the local assignation of the probe sets was conserved across the four chip set models used, which encompass three different species (humans, mice, and rats). The application of adapted clustering methods succeeded in delineating six conserved functional regions that we characterized using Gene Ontology information.
Activities on covariance estimation in Japanese Nuclear Data Committee
Energy Technology Data Exchange (ETDEWEB)
Shibata, Keiichi [Japan Atomic Energy Research Inst., Tokai, Ibaraki (Japan). Tokai Research Establishment
1997-03-01
Described are activities on covariance estimation in the Japanese Nuclear Data Committee. Covariances are obtained from measurements by using the least-squares methods. A simultaneous evaluation was performed to deduce covariances of fission cross sections of U and Pu isotopes. A code system, KALMAN, is used to estimate covariances of nuclear model calculations from uncertainties in model parameters. (author)
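The KALMAN-style propagation of model-parameter uncertainties to calculated quantities rests on the first-order sandwich rule; here is a generic sketch (the sensitivities and parameter covariance below are toy values, not actual nuclear data).

```python
import numpy as np

def propagate_covariance(S, V):
    """First-order (sandwich) propagation of parameter uncertainties to
    computed observables: for sensitivities S = d(observable)/d(parameter)
    and parameter covariance V, the observable covariance is S V S^T.
    Systems such as KALMAN build cross-section covariances on this rule."""
    S = np.atleast_2d(np.asarray(S, dtype=float))
    return S @ V @ S.T

# toy example: two cross-section points depending on two model parameters
S = np.array([[1.0, 0.5],
              [0.2, 1.0]])
V = np.diag([0.04, 0.01])          # illustrative parameter variances
C = propagate_covariance(S, V)
corr = C / np.sqrt(np.outer(np.diag(C), np.diag(C)))
```

The off-diagonal entries of C show how shared parameter uncertainties induce correlations between observables even when the parameters themselves are uncorrelated.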
Carroll, Raymond J.
2010-05-01
This paper considers identification and estimation of a general nonlinear Errors-in-Variables (EIV) model using two samples. Both samples consist of a dependent variable, some error-free covariates, and an error-prone covariate, for which the measurement error has unknown distribution and could be arbitrarily correlated with the latent true values; and neither sample contains an accurate measurement of the corresponding true variable. We assume that the regression model of interest - the conditional distribution of the dependent variable given the latent true covariate and the error-free covariates - is the same in both samples, but the distributions of the latent true covariates vary with observed error-free discrete covariates. We first show that the general latent nonlinear model is nonparametrically identified using the two samples when both could have nonclassical errors, without either instrumental variables or independence between the two samples. When the two samples are independent and the nonlinear regression model is parameterized, we propose sieve Quasi Maximum Likelihood Estimation (Q-MLE) for the parameter of interest, and establish its root-n consistency and asymptotic normality under possible misspecification, and its semiparametric efficiency under correct specification, with easily estimated standard errors. A Monte Carlo simulation and a data application are presented to show the power of the approach.
Covariant Deformation Quantization of Free Fields
Harrivel, Dikanaina
2006-01-01
We covariantly define a deformation of a given algebra and show how it can be related to a deformation quantization of a class of observables in quantum field theory. We then investigate the operator ordering associated with this deformation quantization.
Observed Score Linear Equating with Covariates
Branberg, Kenny; Wiberg, Marie
2011-01-01
This paper examined observed score linear equating in two different data collection designs, the equivalent groups design and the nonequivalent groups design, when information from covariates (i.e., background variables correlated with the test scores) was included. The main purpose of the study was to examine the effect (i.e., bias, variance, and…
Hierarchical matrix approximation of large covariance matrices
Litvinenko, Alexander
2015-11-30
We approximate large non-structured Matérn covariance matrices of size n×n in the H-matrix format with a log-linear computational cost and storage O(kn log n), where rank k ≪ n is a small integer. Applications are: spatial statistics, machine learning and image analysis, kriging and optimal design.
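For concreteness, the exact dense matrix that such a hierarchical format approximates can be built directly for small n; the Matérn smoothness ν = 3/2 closed form below is one common choice, and the parameter values are illustrative.

```python
import numpy as np

def matern32(points, sigma2=1.0, ell=1.0):
    """Dense Matérn (nu = 3/2) covariance matrix for 2-D locations:
    C(d) = sigma2 * (1 + sqrt(3) d / ell) * exp(-sqrt(3) d / ell).
    This is the exact n x n matrix that an H-matrix approximates with
    O(k n log n) storage when n is large."""
    d = np.linalg.norm(points[:, None, :] - points[None, :, :], axis=-1)
    s = np.sqrt(3.0) * d / ell
    return sigma2 * (1.0 + s) * np.exp(-s)

rng = np.random.default_rng(0)
pts = rng.uniform(0.0, 1.0, size=(100, 2))   # illustrative random locations
K = matern32(pts, sigma2=2.0, ell=0.3)
```

The dense construction already costs O(n²) memory, which is why kriging and optimal design at large n motivate the hierarchical approximation.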
Optimal covariate designs theory and applications
Das, Premadhis; Mandal, Nripes Kumar; Sinha, Bikas Kumar
2015-01-01
This book primarily addresses the optimality aspects of covariate designs. A covariate model is a combination of ANOVA and regression models. Optimal estimation of the parameters of the model using a suitable choice of designs is of great importance; as such choices allow experimenters to extract maximum information for the unknown model parameters. The main emphasis of this monograph is to start with an assumed covariate model in combination with some standard ANOVA set-ups such as CRD, RBD, BIBD, GDD, BTIBD, BPEBD, cross-over, multi-factor, split-plot and strip-plot designs, treatment control designs, etc. and discuss the nature and availability of optimal covariate designs. In some situations, optimal estimations of both ANOVA and the regression parameters are provided. Global optimality and D-optimality criteria are mainly used in selecting the design. The standard optimality results of both discrete and continuous set-ups have been adapted, and several novel combinatorial techniques have been applied for...
(co)variances for growth and efficiency
African Journals Online (AJOL)
42, 295.
CANTET, R.J.C., KRESS, D.D., ANDERSON, D.C., DOORNBOS, D.E., BURFENING, P.J. & BLACKWELL, R.L., 1988. Direct and maternal variances and covariances and maternal phenotypic effects on preweaning growth of beef cattle. J. Anim. Sci. 66, 648.
CUNNINGHAM, E.P., MOON, R.A. & GJEDREN, T., 1970.
Covariant perturbation theory and chiral superpropagators
Ecker, G
1972-01-01
The authors use a covariant formulation of perturbation theory for the non-linear chiral invariant pion model to define chiral superpropagators leading to S-matrix elements which are independent of the choice of the pion field coordinates. The relation to the standard definition of chiral superpropagators is discussed. (11 refs).
Galilean Covariance and the Gravitational Field
Ulhoa, S. C.; Khanna, F. C.; Santana, A.E.
2009-01-01
The paper is concerned with the development of a gravitational field theory having locally a covariant version of the Galilei group. We show that this Galilean gravity can be used to study the advance of the perihelion of a planet, paralleling the post-Newtonian result of the (relativistic) theory of general relativity.
On translation-covariant quantum Markov equations
Holevo, A. S.
1995-04-01
The structure of quantum Markov control equations with unbounded generators, covariant with respect to (1) an irreducible representation of the Weyl CCR on R^d and (2) a representation of the group R^d, is completely described via non-commutative Levy-Khinchin-type formulae. The existence and uniqueness of solutions for such equations is briefly discussed.
Xu, Xu Steven; Yuan, Min; Yang, Haitao; Feng, Yan; Xu, Jinfeng; Pinheiro, Jose
2017-01-01
Covariate analysis based on population pharmacokinetics (PPK) is used to identify clinically relevant factors. The likelihood ratio test (LRT) based on nonlinear mixed effect model fits is currently recommended for covariate identification, whereas individual empirical Bayesian estimates (EBEs) are considered unreliable due to the presence of shrinkage. The objectives of this research were to investigate the type I error for LRT and EBE approaches, to confirm the similarity of power between the LRT and EBE approaches from a previous report and to explore the influence of shrinkage on LRT and EBE inferences. Using an oral one-compartment PK model with a single covariate impacting on clearance, we conducted a wide range of simulations according to a two-way factorial design. The results revealed that the EBE-based regression not only provided almost identical power for detecting a covariate effect, but also controlled the false positive rate better than the LRT approach. Shrinkage of EBEs is likely not the root cause for decrease in power or inflated false positive rate although the size of the covariate effect tends to be underestimated at high shrinkage. In summary, contrary to the current recommendations, EBEs may be a better choice for statistical tests in PPK covariate analysis compared to LRT. We proposed a three-step covariate modeling approach for population PK analysis to utilize the advantages of EBEs while overcoming their shortcomings, which allows not only markedly reducing the run time for population PK analysis, but also providing more accurate covariate tests.
Zeltner, Nina A; Huemer, Martina; Baumgartner, Matthias R; Landolt, Markus A
2014-10-25
In recent decades, considerable progress in diagnosis and treatment of patients with intoxication-type inborn errors of metabolism (IT-IEM) such as urea cycle disorders (UCD), organic acidurias (OA), maple syrup urine disease (MSUD), or tyrosinemia type 1 (TYR 1) has resulted in a growing group of long-term survivors. However, IT-IEM still require intense patient and caregiver effort in terms of strict dietetic and pharmacological treatment, and the threat of metabolic crises is always present. Furthermore, crises can affect the central nervous system (CNS), leading to cognitive, behavioural and psychiatric sequelae. Consequently, the well-being of the patients warrants consideration from both a medical and a psychosocial viewpoint by assessing health-related quality of life (HrQoL), psychological adjustment, and adaptive functioning. To date, an overview of findings on these topics for IT-IEM is lacking. We therefore aimed to systematically review the research on HrQoL, psychological adjustment, and adaptive functioning in patients with IT-IEM. Relevant databases were searched with predefined keywords. Study selection was conducted in two steps based on predefined criteria. Two independent reviewers completed the selection and data extraction. Eleven articles met the inclusion criteria. Studies were of varying methodological quality and used different assessment measures. Findings on HrQoL were inconsistent, with some showing lower and others showing higher or equal HrQoL for IT-IEM patients compared to norms. Findings on psychological adjustment and adaptive functioning were more consistent, showing mostly either no difference or worse adjustment of IT-IEM patients compared to norms. Single medical risk factors for HrQoL, psychological adjustment, or adaptive functioning have been addressed, while psychosocial risk factors have not been addressed. Data on HrQoL, psychological adjustment, and adaptive functioning for IT-IEM are sparse. Studies are inconsistent in
Unravelling Lorentz Covariance and the Spacetime Formalism
Directory of Open Access Journals (Sweden)
Cahill R. T.
2008-10-01
Full Text Available We report the discovery of an exact mapping from Galilean time and space coordinates to Minkowski spacetime coordinates, showing that Lorentz covariance and the spacetime construct are consistent with the existence of a dynamical 3-space and absolute motion. We illustrate this mapping first with the standard theory of sound, as vibrations of a medium which itself may be undergoing fluid motion, and which is covariant under Galilean coordinate transformations. By introducing a different, non-physical class of space and time coordinates it may be cast into a form that is covariant under Lorentz transformations, wherein the speed of sound is now the invariant speed. If this latter formalism were taken as fundamental and complete, we would be led to the introduction of a pseudo-Riemannian spacetime description of sound, with a metric characterised by an invariant speed of sound. This analysis is an allegory for the development of 20th century physics, in which the Lorentz covariant Maxwell equations were constructed first, and the Galilean form was later constructed by Hertz, but ignored. It is shown that the Lorentz covariance of the Maxwell equations only occurs because of the use of non-physical space and time coordinates. The use of this class of coordinates has confounded 20th century physics, and resulted in the existence of a flowing dynamical 3-space being overlooked. The discovery of the dynamics of this 3-space has led to the derivation of an extended gravity theory as a quantum effect, confirmed by numerous experiments and observations.
DEFF Research Database (Denmark)
Barndorff-Nielsen, Ole Eiler; Shephard, N.
2004-01-01
This paper analyses multivariate high frequency financial data using realized covariation. We provide a new asymptotic distribution theory for standard methods such as regression, correlation analysis, and covariance. It is based on a fixed interval of time (e.g., a day or week), allowing the number of high frequency returns during this period to go to infinity. Our analysis allows us to study how high frequency correlations, regressions, and covariances change through time. In particular we provide confidence intervals for each of these quantities.
Error Propagation in a System Model
Schloegel, Kirk (Inventor); Bhatt, Devesh (Inventor); Oglesby, David V. (Inventor); Madl, Gabor (Inventor)
2015-01-01
Embodiments of the present subject matter can enable the analysis of signal value errors for system models. In an example, signal value errors can be propagated through the functional blocks of a system model to analyze possible effects as the signal value errors impact incident functional blocks. This propagation of the errors can be applicable to many models of computation including avionics models, synchronous data flow, and Kahn process networks.
Measurement error in longitudinal film badge data
Marsh, J L
2002-01-01
Initial logistic regressions turned up some surprising contradictory results, which led to a re-sampling of Sellafield mortality controls without the date-of-employment matching factor. It is suggested that over-matching is the cause of the contradictory results. Comparisons of the two measurements of radiation exposure suggest a strongly linear relationship with non-Normal errors. A method has been developed, using the technique of regression calibration, to deal with these in a case-control study context, and applied to this Sellafield study. The classical measurement error model is that of a simple linear regression with unobservable variables. Information about the covariates is available only through error-prone measurements, usually with an additive structure. Ignoring errors has been shown to result in biased regression coefficients, reduced power of hypothesis tests and increased variability of parameter estimates. Radiation is known to be a causal factor for certain types of leukaemia. This link is main...
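The regression-calibration idea mentioned here can be sketched with replicate measurements under the classical additive error model. The function below is a generic textbook version, not the study's actual procedure: the replicate difference estimates the error variance, and the averaged measurement is shrunk toward its mean by the estimated reliability.

```python
import numpy as np

def regression_calibration(W1, W2):
    """Textbook regression calibration with two replicates W1, W2 of a
    true exposure X under the classical model W = X + U. The replicate
    difference estimates the error variance; the averaged measurement is
    then shrunk toward its mean by the estimated reliability, giving an
    approximation of E[X | W] to use in the outcome regression."""
    W1, W2 = np.asarray(W1, float), np.asarray(W2, float)
    Wbar = 0.5 * (W1 + W2)
    var_u = np.var(W1 - W2, ddof=1) / 2.0        # per-measurement error variance
    var_wbar = np.var(Wbar, ddof=1)              # = var(X) + var_u / 2
    lam = max(var_wbar - var_u / 2.0, 0.0) / var_wbar   # reliability of Wbar
    return Wbar.mean() + lam * (Wbar - Wbar.mean())
```

The calibrated values would then replace the error-prone exposure in the logistic regression, counteracting the attenuation that ignoring the errors produces.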
A Comparison of Methods for Estimating the Determinant of High-Dimensional Covariance Matrix.
Hu, Zongliang; Dong, Kai; Dai, Wenlin; Tong, Tiejun
2017-09-21
The determinant of the covariance matrix for high-dimensional data plays an important role in statistical inference and decision. It has many real applications including statistical tests and information theory. Due to the statistical and computational challenges with high dimensionality, little work has been proposed in the literature for estimating the determinant of high-dimensional covariance matrix. In this paper, we estimate the determinant of the covariance matrix using some recent proposals for estimating high-dimensional covariance matrix. Specifically, we consider a total of eight covariance matrix estimation methods for comparison. Through extensive simulation studies, we explore and summarize some interesting comparison results among all compared methods. We also provide practical guidelines based on the sample size, the dimension, and the correlation of the data set for estimating the determinant of high-dimensional covariance matrix. Finally, from a perspective of the loss function, the comparison study in this paper may also serve as a proxy to assess the performance of the covariance matrix estimation.
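Whatever covariance estimator is chosen, the determinant itself is best computed on the log scale; a minimal sketch via Cholesky (the 2×2 example is ours).

```python
import numpy as np

def logdet_cholesky(C):
    """Numerically stable log-determinant of a positive-definite covariance
    matrix via Cholesky: log det C = 2 * sum(log diag(L)). In high
    dimensions the determinant itself under- or overflows, so estimators
    are compared on the log scale; when p is close to or exceeds n, C must
    be a regularised estimate, since the sample covariance is singular."""
    L = np.linalg.cholesky(C)
    return 2.0 * np.sum(np.log(np.diag(L)))

C = np.array([[2.0, 0.5],
              [0.5, 1.0]])
ld = logdet_cholesky(C)        # det C = 2*1 - 0.5*0.5 = 1.75
```

NumPy's `slogdet` gives the same quantity; the explicit Cholesky form makes the stability mechanism visible.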
Kolovos, A.; Christakos, G.; Hristopulos, D. T.; Serre, M. L.
2004-08-01
Environmental processes (e.g., groundwater contaminants, air pollution patterns, air-water and air-soil energy exchanges) are characterized by variability and uncertainty. Spatiotemporal random fields are used to represent correlations between fluctuations in the composite space-time domain. Modelling the effects of fluctuations with suitable covariance functions can improve our ability to characterize and predict space-time variations in various natural systems (e.g., environmental media, long-term climatic evolutions on local/global scales, and human exposure to pollutants). The goal of this work is to present the reader with various methods for constructing space-time covariance models. In this context, we provide a mathematical exposition and visual representations of several theoretical covariance models. These include non-separable (in space and time) covariance models derived from physical laws (i.e., differential equations and dynamic rules), spectral functions, and generalized random fields. It is also shown that non-separability is often a direct result of the physical laws that govern the process. The proposed methods can generate covariance models for homogeneous/stationary as well as for non-homogeneous/non-stationary environmental processes across space and time. We investigate several properties (short-range and asymptotic behavior, shape of the covariance function etc.) of these models and present plots of the space-time dependence for various parameter values.
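One widely used family of such non-separable models is Gneiting's (2002) class, in which a temporal function ψ rescales the spatial decay. The sketch below uses our own parameter names and illustrative defaults, and implements one common member of the class rather than the full construction.

```python
import numpy as np

def gneiting_cov(h, u, sigma2=1.0, a=1.0, c=1.0,
                 alpha=1.0, gamma=0.5, beta=1.0, d=2):
    """One member of Gneiting's (2002) class of non-separable space-time
    covariances (parameter names here are ours, defaults illustrative):
        psi(u)  = a * |u|^(2*alpha) + 1
        C(h, u) = sigma2 / psi^(d/2) * exp(-c * |h|^(2*gamma) / psi^(beta*gamma))
    h is the spatial lag, u the temporal lag. beta in [0, 1] controls the
    space-time interaction; beta = 0 gives a separable (product) model."""
    psi = a * np.abs(u) ** (2.0 * alpha) + 1.0
    return sigma2 / psi ** (d / 2.0) * np.exp(
        -c * np.abs(h) ** (2.0 * gamma) / psi ** (beta * gamma))
```

With beta > 0 the spatial correlation range effectively grows with the temporal lag, which is the non-separable behaviour the text attributes to the underlying physical laws.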
The impact of imprecisely measured covariates on estimating gene-environment interactions
Directory of Open Access Journals (Sweden)
Gilthorpe Mark S
2006-05-01
Full Text Available Abstract Background The effects of measurement error in epidemiological exposures and confounders on estimated effects of exposure are well described, but the effects on estimates of gene-environment interactions have received rather less attention. In particular, the effects of confounder measurement error on gene-environment interactions are unknown. Methods We investigate these effects using simulated data and illustrate our results with a practical example in nutrition epidemiology. Results We show that the interaction regression coefficient is unchanged by confounder measurement error under certain conditions, but biased by exposure measurement error. We also confirm that confounder measurement error can lead to estimated effects of exposure biased either towards or away from the null, depending on the correlation structure, with associated effects on type II errors. Conclusion Whilst measurement error in confounders does not lead to bias in interaction coefficients, it may still lead to bias in the estimated effects of exposure. There may still be cost implications for epidemiological studies that need to calibrate all error-prone covariates against a valid reference, in addition to the exposure, to reduce the effects of confounder measurement error.
Weighted mean method for eddy covariance flux measurement
Kim, W.; Cho, J.; Seo, H.; Oki, T.
2013-12-01
Studies monitoring the exchange of energy, water vapor, and carbon dioxide between the atmosphere and terrestrial ecosystems have been carried out with the eddy covariance method throughout the world. The monitored exchange quantity, the flux F, is conventionally determined as a mean over a 1 hr or 30 min interval, because no technique exists to directly measure an instantaneous F. Posterior analyses involving spatial or temporal averaging and summation of such samples must therefore account for the sampling uncertainty. In particular, averaging by the arithmetic mean Fa may be inappropriate, because the sampled F values have nonidentical inherent quality according to differing micrometeorological and ecophysiological conditions, even when observed with the same instruments. To overcome this issue, we propose the weighted mean Fw, using a relative sampling error estimated from a sampled F and its error, and present the performance of Fw tested with eddy covariance measurements over 3 years at a tangerine orchard.
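The weighting idea can be sketched as follows. The abstract does not give the exact weighting formula, so the inverse-square relative-error weights and the example numbers below are illustrative assumptions:

```python
import numpy as np

def weighted_mean_flux(F, rel_err):
    """Weighted mean of flux samples F, down-weighting samples with
    large relative sampling error (assumed weights ~ 1 / rel_err**2)."""
    w = 1.0 / np.asarray(rel_err, dtype=float) ** 2
    return float(np.sum(w * np.asarray(F)) / np.sum(w))

# Three half-hourly CO2 flux samples (umol m-2 s-1) with differing
# relative sampling errors; the poorly sampled -2.5 value is down-weighted.
F = np.array([-2.0, -2.5, -1.8])
rel_err = np.array([0.10, 0.50, 0.15])
Fw = weighted_mean_flux(F, rel_err)   # weighted mean
Fa = float(F.mean())                  # ordinary arithmetic mean, -2.1
```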
A covariance NMR toolbox for MATLAB and OCTAVE.
Short, Timothy; Alzapiedi, Leigh; Brüschweiler, Rafael; Snyder, David
2011-03-01
The Covariance NMR Toolbox is a new software suite that provides a streamlined implementation of covariance-based analysis of multi-dimensional NMR data. The Covariance NMR Toolbox uses the MATLAB or, alternatively, the freely available GNU OCTAVE computer language, providing a user-friendly environment in which to apply and explore covariance techniques. Covariance methods implemented in the toolbox described here include direct and indirect covariance processing, 4D covariance, generalized indirect covariance (GIC), and Z-matrix transform. In order to provide compatibility with a wide variety of spectrometer and spectral analysis platforms, the Covariance NMR Toolbox uses the NMRPipe format for both input and output files. Additionally, datasets small enough to fit in memory are stored as arrays that can be displayed and further manipulated in a versatile manner within MATLAB or OCTAVE. Copyright © 2010 Elsevier Inc. All rights reserved.
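Direct covariance processing, as implemented in such toolboxes, computes the covariance spectrum C = (Fᵀ F)^{1/2} from a real 2D spectrum F. A minimal sketch of that core operation (in Python rather than the toolbox's MATLAB/OCTAVE, and without the NMRPipe I/O) is:

```python
import numpy as np

def direct_covariance(F):
    """Direct covariance processing: C = (F^T F)^{1/2} for a real 2D
    spectrum F (rows: indirect dimension, columns: direct dimension).
    The matrix square root is taken via eigendecomposition."""
    C2 = F.T @ F                        # symmetric positive semidefinite
    w, V = np.linalg.eigh(C2)
    w = np.clip(w, 0.0, None)           # guard tiny negative eigenvalues
    return V @ np.diag(np.sqrt(w)) @ V.T

# Toy 4x3 "spectrum" standing in for real NMRPipe-formatted data
F = np.array([[1., 0., 2.],
              [0., 3., 1.],
              [2., 1., 0.],
              [1., 1., 1.]])
C = direct_covariance(F)                # 3x3 covariance spectrum
```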
Covariant holography of a tachyonic accelerating universe
Energy Technology Data Exchange (ETDEWEB)
Rozas-Fernandez, Alberto [Consejo Superior de Investigaciones Cientificas, Instituto de Fisica Fundamental, Madrid (Spain); University of Portsmouth, Institute of Cosmology and Gravitation, Portsmouth (United Kingdom)
2014-08-15
We apply the holographic principle to a flat dark energy dominated Friedmann-Robertson-Walker spacetime filled with a tachyon scalar field with constant equation of state w = p/ρ, both for w > -1 and w < -1. By using a geometrical covariant procedure, which allows the construction of holographic hypersurfaces, we have obtained for each case the position of the preferred screen and have then compared these with those obtained by using the holographic dark energy model with the future event horizon as the infrared cutoff. In the phantom scenario, one of the two obtained holographic screens is placed on the big rip hypersurface, both for the covariant holographic formalism and the holographic phantom model. It is also analyzed whether the existence of these preferred screens allows a mathematically consistent formulation of fundamental theories based on the existence of an S-matrix at infinite distances. (orig.)
Covariance and the hierarchy of frame bundles
Estabrook, Frank B.
1987-01-01
This is an essay on the general concept of covariance, and its connection with the structure of the nested set of higher frame bundles over a differentiable manifold. Examples of covariant geometric objects include not only linear tensor fields, densities and forms, but affinity fields, sectors and sector forms, higher order frame fields, etc., often having nonlinear transformation rules and Lie derivatives. The intrinsic, or invariant, sets of forms that arise on frame bundles satisfy the graded Cartan-Maurer structure equations of an infinite Lie algebra. Reduction of these gives invariant structure equations for Lie pseudogroups, and for G-structures of various orders. Some new results are introduced for prolongation of structure equations, and for treatment of Riemannian geometry with higher-order moving frames. The use of invariant form equations for nonlinear field physics is implicitly advocated.
Angel, Yoseline
2016-10-25
Hyperspectral remote sensing images are usually affected by atmospheric conditions such as clouds and their shadows, which represent a contamination of reflectance data and complicate the extraction of biophysical variables to monitor phenological cycles of crops. This paper explores a cloud removal approach based on reflectance prediction using multitemporal data and spatio-temporal statistical models. In particular, a covariance model that captures the behavior of spatial and temporal components in data simultaneously (i.e. non-separable) is considered. Eight weekly images collected from the Hyperion hyper-spectrometer instrument over an agricultural region of Saudi Arabia were used to reconstruct a scene with the presence of cloud-affected pixels over a center-pivot crop. A subset of reflectance values of cloud-free pixels from 50 bands in the spectral range from 426.82 to 884.7 nm at each date were used as input to fit a parametric family of non-separable and stationary spatio-temporal covariance functions. Applying simple kriging as an interpolator, cloud-affected pixels were replaced by cloud-free predicted values per band, obtaining their respective predicted spectral profiles at the same time. An exercise of reconstructing simulated cloudy pixels in a different swath was conducted to assess the model accuracy, achieving root mean square error (RMSE) values per band less than or equal to 3%. The spatial coherence of the results was also checked through absolute error distribution maps, demonstrating their consistency.
Rast, Philippe; Hofer, Scott M
2014-03-01
We investigated the power to detect variances and covariances in rates of change in the context of existing longitudinal studies using linear bivariate growth curve models. Power was estimated by means of Monte Carlo simulations. Our findings show that typical longitudinal study designs have substantial power to detect both variances and covariances among rates of change in a variety of cognitive, physical functioning, and mental health outcomes. We performed simulations to investigate the interplay among number and spacing of occasions, total duration of the study, effect size, and error variance on power and required sample size. The relation of growth rate reliability (GRR) and effect size to the sample size required to achieve power of at least .80 was nonlinear, with the required sample size decreasing rapidly as GRR increases. The results presented here stand in contrast to previous simulation results and recommendations (Hertzog, Lindenberger, Ghisletta, & von Oertzen, 2006; Hertzog, von Oertzen, Ghisletta, & Lindenberger, 2008; von Oertzen, Ghisletta, & Lindenberger, 2010), which are limited due to confounds between study length and number of waves, between error variance and growth curve reliability, and due to parameter values that are largely out of bounds of actual study values. Power to detect change is generally low in the early phases (i.e., first years) of longitudinal studies but can substantially increase if the design is optimized. We recommend additional assessments, including embedded intensive measurement designs, to improve power in the early phases of long-term longitudinal studies. (c) 2014 APA, all rights reserved.
Angel, Yoseline; Houborg, Rasmus; McCabe, Matthew F.
2016-10-01
Hyperspectral remote sensing images are usually affected by atmospheric conditions such as clouds and their shadows, which represent a contamination of reflectance data and complicate the extraction of biophysical variables to monitor phenological cycles of crops. This paper explores a cloud removal approach based on reflectance prediction using multitemporal data and spatio-temporal statistical models. In particular, a covariance model that captures the behavior of spatial and temporal components in data simultaneously (i.e. non-separable) is considered. Eight weekly images collected from the Hyperion hyper-spectrometer instrument over an agricultural region of Saudi Arabia were used to reconstruct a scene with the presence of cloud-affected pixels over a center-pivot crop. A subset of reflectance values of cloud-free pixels from 50 bands in the spectral range from 426.82 to 884.7 nm at each date were used as input to fit a parametric family of non-separable and stationary spatio-temporal covariance functions. Applying simple kriging as an interpolator, cloud-affected pixels were replaced by cloud-free predicted values per band, obtaining their respective predicted spectral profiles at the same time. An exercise of reconstructing simulated cloudy pixels in a different swath was conducted to assess the model accuracy, achieving root mean square error (RMSE) values per band less than or equal to 3%. The spatial coherence of the results was also checked through absolute error distribution maps, demonstrating their consistency.
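The prediction step described above, simple kriging of a cloud-affected pixel from surrounding cloud-free observations with a fitted space-time covariance, can be sketched as follows. The metric exponential covariance, the coordinates, and the reflectance values below are illustrative assumptions; the paper's fitted parametric family is not reproduced here:

```python
import numpy as np

def st_cov(h, u, sigma2=1.0, a=1.0, b=2.0):
    """Illustrative stationary, non-separable space-time covariance
    (metric exponential model)."""
    return sigma2 * np.exp(-np.sqrt((h / a)**2 + (u / b)**2))

def simple_kriging(coords, times, z, c0, t0, mu):
    """Simple kriging of reflectance at pixel/time (c0, t0) from
    cloud-free observations z at (coords, times), known mean mu."""
    n = len(z)
    K = np.array([[st_cov(np.linalg.norm(coords[i] - coords[j]),
                          abs(times[i] - times[j]))
                   for j in range(n)] for i in range(n)])
    k = np.array([st_cov(np.linalg.norm(c - c0), abs(t - t0))
                  for c, t in zip(coords, times)])
    lam = np.linalg.solve(K + 1e-10 * np.eye(n), k)   # kriging weights
    return mu + lam @ (np.asarray(z) - mu)

coords = np.array([[0., 0.], [1., 0.], [0., 1.], [1., 1.]])
times = np.array([0., 0., 1., 1.])
z = [0.30, 0.28, 0.35, 0.33]      # cloud-free reflectance in one band
pred = simple_kriging(coords, times, z, np.array([0.5, 0.5]), 0.5, mu=0.315)
```

Kriging interpolates exactly at the observation points and shrinks toward the mean far from them, which is why it is a natural gap-filler for masked cloudy pixels.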
Torsion and geometrostasis in covariant superstrings
Energy Technology Data Exchange (ETDEWEB)
Zachos, C.
1985-01-01
The covariant action for freely propagating heterotic superstrings consists of a metric and a torsion term with a special relative strength. It is shown that the strength for which torsion flattens the underlying 10-dimensional superspace geometry is precisely that which yields free oscillators on the light cone. This is in complete analogy with the geometrostasis of two-dimensional sigma-models with Wess-Zumino interactions. 13 refs.
Batch Covariance Relaxation (BCR) Adaptive Processing.
1981-08-01
techniques dictates the need for processing flexibility which may be met most easily by a digital mechanization. The effort conducted addresses the...essential aspects of Batch Covariance Relaxation (BCR) adaptive processing applied to digital adaptive array processing. In contrast to dynamic... library, RADAR:LIB. An extensive explanation of how to use these programs is given. It is shown how the output of each is used as part of the input for
Covariance Kernels from Bayesian Generative Models
Seeger, Matthias
2002-01-01
We propose the framework of mutual information kernels for learning covariance kernels, as used in Support Vector machines and Gaussian process classifiers, from unlabeled task data using Bayesian techniques. We describe an implementation of this framework which uses variational Bayesian mixtures of factor analyzers in order to attack classification problems in high-dimensional spaces where labeled data is sparse, but unlabeled data is abundant.
ANL Critical Assembly Covariance Matrix Generation
Energy Technology Data Exchange (ETDEWEB)
McKnight, Richard D. [Argonne National Lab. (ANL), Argonne, IL (United States); Grimm, Karl N. [Argonne National Lab. (ANL), Argonne, IL (United States)
2014-01-15
This report discusses the generation of a covariance matrix for selected critical assembly experiments carried out by Argonne National Laboratory (ANL) using four critical facilities, all of which are now decommissioned. The four ANL critical facilities are: ZPR-3, located at ANL-West (now Idaho National Laboratory, INL); ZPR-6 and ZPR-9, located at ANL-East (Illinois); and ZPPR, located at ANL-West.
Linear Covariance Analysis for a Lunar Lander
Jang, Jiann-Woei; Bhatt, Sagar; Fritz, Matthew; Woffinden, David; May, Darryl; Braden, Ellen; Hannan, Michael
2017-01-01
A next-generation lunar lander Guidance, Navigation, and Control (GNC) system, which includes a state-of-the-art optical sensor suite, is proposed in a concept design cycle. The design goal is to allow the lander to softly land within the prescribed landing precision. The achievement of this precision landing requirement depends on proper selection of the sensor suite. In this paper, a robust sensor selection procedure is demonstrated using a Linear Covariance (LinCov) analysis tool developed by Draper.
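The idea of a linear covariance analysis, propagating the state error covariance directly through the dynamics and measurement updates instead of running many Monte Carlo trajectories, can be sketched generically. The dynamics, sensor model, and noise values below are illustrative assumptions, not Draper's LinCov tool:

```python
import numpy as np

# Linear covariance (LinCov) sketch: compare landing-position dispersion
# for a "good" vs "poor" candidate sensor by propagating the covariance.
dt = 1.0
F = np.array([[1.0, dt], [0.0, 1.0]])    # position/velocity dynamics
Q = np.diag([1e-4, 1e-3])                # process noise covariance
H = np.array([[1.0, 0.0]])               # altimeter-like position sensor

def propagate_and_update(P0, R, steps=50):
    """Alternate covariance time updates with Kalman measurement updates."""
    P = P0.copy()
    for _ in range(steps):
        P = F @ P @ F.T + Q                   # time update
        S = H @ P @ H.T + R                   # innovation covariance
        K = P @ H.T @ np.linalg.inv(S)        # Kalman gain
        P = (np.eye(2) - K @ H) @ P           # measurement update
    return P

P0 = np.diag([100.0, 1.0])                    # initial dispersion
P_good = propagate_and_update(P0, R=np.array([[0.01]]))   # precise sensor
P_poor = propagate_and_update(P0, R=np.array([[10.0]]))   # noisy sensor
```

The final position variance directly quantifies whether a candidate sensor suite meets a precision-landing requirement, which is the kind of trade such an analysis supports.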
On conservativity of covariant dynamical semigroups
Holevo, A. S.
1993-10-01
The notion of form-generator of a dynamical semigroup is introduced and used to give a criterion for the conservativity (preservation of the identity) of covariant dynamical semigroups. It allows one to reduce the problem of constructing conservative dynamical semigroups to the familiar problems of non-explosion for Markov processes and the construction of a contraction semigroup in a Hilbert space. Some new classes of unbounded generators, related to the Levy-Khinchin formula, are described.
Covariance tracking: architecture optimizations for embedded systems
Romero, Andrés; Lacassagne, Lionel; Gouiffès, Michèle; Zahraee, Ali Hassan
2014-12-01
Covariance matching techniques have recently grown in interest due to their good performances for object retrieval, detection, and tracking. By mixing color and texture information in a compact representation, they can be applied to various kinds of objects (textured or not, rigid or not). Unfortunately, the original version requires heavy computations and is difficult to execute in real time on embedded systems. This article presents a review of different versions of the algorithm and its various applications; our aim is to describe the most crucial challenges and particularities that appeared when implementing and optimizing the covariance matching algorithm on a variety of desktop processors and on low-power processors suitable for embedded systems. An application of texture classification is used to compare different versions of the region descriptor. Then a comprehensive study is made to reach a higher level of performance on multi-core CPU architectures by comparing different ways to structure the information, using single instruction, multiple data (SIMD) instructions and advanced loop transformations. The execution time is reduced significantly on four dual-core CPU architectures for embedded computing: ARM Cortex-A9 and Cortex-A15, and Intel Penryn-M U9300 and Haswell-M 4650U. According to our experiments on covariance tracking, it is possible to reach a speedup greater than ×2 on both ARM and Intel architectures, when compared to the original algorithm, leading to real-time execution.
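The region covariance descriptor at the core of these techniques (Tuzel-style) summarizes a region by the covariance matrix of per-pixel feature vectors. The sketch below uses a common feature set (position, intensity, gradient magnitudes), which is an assumption; the article may benchmark other variants:

```python
import numpy as np

def region_covariance(img, y0, y1, x0, x1):
    """Region covariance descriptor: covariance of per-pixel feature
    vectors (x, y, intensity, |dI/dx|, |dI/dy|) over a rectangle."""
    Iy, Ix = np.gradient(img.astype(float))        # image gradients
    ys, xs = np.mgrid[y0:y1, x0:x1]
    feats = np.stack([xs.ravel().astype(float),
                      ys.ravel().astype(float),
                      img[y0:y1, x0:x1].ravel().astype(float),
                      np.abs(Ix[y0:y1, x0:x1]).ravel(),
                      np.abs(Iy[y0:y1, x0:x1]).ravel()], axis=0)
    return np.cov(feats)                            # 5x5 descriptor

rng = np.random.default_rng(0)
img = rng.random((32, 32))                          # toy grayscale image
C = region_covariance(img, 4, 20, 4, 20)
```

The appeal for embedded tracking is compactness: whatever the region size, the descriptor is a fixed small symmetric matrix, so matching cost is independent of region area.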
Development of covariance capabilities in EMPIRE code
Energy Technology Data Exchange (ETDEWEB)
Herman,M.; Pigni, M.T.; Oblozinsky, P.; Mughabghab, S.F.; Mattoon, C.M.; Capote, R.; Cho, Young-Sik; Trkov, A.
2008-06-24
The nuclear reaction code EMPIRE has been extended to provide evaluation capabilities for neutron cross section covariances in the thermal, resolved resonance, unresolved resonance and fast neutron regions. The Atlas of Neutron Resonances by Mughabghab is used as a primary source of information on uncertainties at low energies. Care is taken to ensure consistency among the resonance parameter uncertainties and those for thermal cross sections. The resulting resonance parameter covariances are formatted in the ENDF-6 File 32. In the fast neutron range our methodology is based on model calculations with the code EMPIRE combined with experimental data through several available approaches. The model-based covariances can be obtained using deterministic (Kalman) or stochastic (Monte Carlo) propagation of model parameter uncertainties. We show that these two procedures yield comparable results. The Kalman filter and/or the generalized least square fitting procedures are employed to incorporate experimental information. We compare the two approaches analyzing results for the major reaction channels on {sup 89}Y. We also discuss a long-standing issue of unreasonably low uncertainties and link it to the rigidity of the model.
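The two propagation routes mentioned, deterministic (sandwich-rule/Kalman-style) and stochastic (Monte Carlo), can be contrasted on a toy model. The linear "cross section" model and all parameter values below are illustrative assumptions, not EMPIRE's physics; for a linear model the two routes agree exactly, which mirrors the comparability reported in the abstract:

```python
import numpy as np

# Propagate model-parameter uncertainties to a cross-section covariance:
# deterministically via Cov(sigma) = S Cov(p) S^T, and by Monte Carlo.
rng = np.random.default_rng(5)

E = np.array([1.0, 2.0, 5.0, 10.0])           # energies (arbitrary units)
S = np.column_stack([np.ones_like(E), E])     # sensitivities d sigma / d p
Cp = np.array([[0.04, 0.01],                  # parameter covariance
               [0.01, 0.02]])

C_det = S @ Cp @ S.T                          # deterministic "sandwich rule"

p_samples = rng.multivariate_normal([1.0, 0.1], Cp, size=100000)
sig = p_samples @ S.T                         # sampled cross sections
C_mc = np.cov(sig, rowvar=False)              # Monte Carlo covariance
```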
Everard, Eoin; Lyons, Mark; Harrison, Andrew J
2017-06-15
To examine the association of injury with the Functional Movement Screen (FMS) and Landing Error Scoring System (LESS) in military recruits undergoing an intensive 16-week training block. Prospective cohort study. One hundred and thirty-two entry-level male soldiers (18-25 years) were tested using the FMS and LESS. The participants underwent an intensive 16-week training program with injury data recorded daily. Chi-squared statistics were used to examine associations between injury risk and (1) poor LESS scores, (2) any score of 1 on the FMS and (3) a composite FMS score of ≤14. A composite FMS score of ≤14 was not a significant predictor of injury. LESS scores of >5 and having a score of 1 on any FMS test were significantly associated with injury. LESS scores had greater relative risk, sensitivity and specificity (relative risk=2.2 (95% CI=1.48-3.34); sensitivity=71%; specificity=87%) than scores of 1 on the FMS (relative risk=1.32 (95% CI=1.0-1.7); sensitivity=50%; specificity=76%). There was no association between composite FMS score and injury, but LESS scores and scores of 1 on the FMS test were significantly associated with injury in varying degrees. LESS scores had a much better association with injury than both any scores of 1 on the FMS and a combination of LESS scores and scores of 1 on the FMS. Furthermore, the LESS provides information related to injury risk comparable to other well-established markers associated with injury, such as age, muscular strength and previous injury. Copyright © 2017. Published by Elsevier Ltd.
Performance, postmodernity and errors
DEFF Research Database (Denmark)
Harder, Peter
2013-01-01
with the prestige variety, and conflate non-standard variation with parole/performance and class both as erroneous. Nowadays the anti-structural sentiment of present-day linguistics makes it tempting to confuse the rejection of ideal abstract structure with a rejection of any distinction between grammatical...... as deviant from the perspective of function-based structure and discuss to what extent the recognition of a community langue as a source of adaptive pressure may throw light on different types of deviation, including language handicaps and learner errors....
Error tracking in a clinical biochemistry laboratory
DEFF Research Database (Denmark)
Szecsi, Pal Bela; Ødum, Lars
2009-01-01
BACKGROUND: We report our results for the systematic recording of all errors in a standard clinical laboratory over a 1-year period. METHODS: Recording was performed using a commercial database program. All individuals in the laboratory were allowed to report errors. The testing processes were...... classified according to function, and errors were classified as pre-analytical, analytical, post-analytical, or service-related, and then further divided into descriptive subgroups. Samples were taken from hospital wards (38.6%), outpatient clinics (25.7%), general practitioners (29.4%), and other hospitals....... RESULTS: A total of 1189 errors were reported in 1151 reports during the first year, corresponding to an error rate of 1 error for every 142 patients, or 1 per 1223 tests. The majority of events were due to human errors (82.6%), and only a few (4.3%) were the result of technical errors. Most of the errors...
Source Coding in Networks with Covariance Distortion Constraints
DEFF Research Database (Denmark)
Zahedi, Adel; Østergaard, Jan; Jensen, Søren Holdt
2016-01-01
-distortion function (RDF). We then study the special cases and applications of this result. We show that two well-studied source coding problems, i.e. remote vector Gaussian Wyner-Ziv problems with mean-squared error and mutual information constraints are in fact special cases of our results. Finally, we apply our...
Bauer, Susanne N; Nowak, Heike; Keller, Frank; Kallarackal, Jose; Hajirezaei, Mohamad-Reza; Komor, Ewald
2014-09-01
Sieve tube sap was obtained from Tanacetum by aphid stylectomy and from Ricinus after apical bud decapitation. The amino acids in sieve tube sap were analyzed and compared with those from leaves. Arginine and lysine accumulated in the sieve tube sap of Tanacetum more than 10-fold compared to the leaf extracts and they were, together with asparagine and serine, preferably selected into the sieve tube sap, whereas glycine, methionine/tryptophan and γ-amino butyric acid were partially or completely excluded. The two basic amino acids also showed a close covariation in sieve tube sap. The acidic amino acids also grouped together, but antagonistic to the other amino acids. The accumulation ratios between sieve tube sap and leaf extracts were smaller in Ricinus than in Tanacetum. Arginine, histidine, lysine and glutamine were enriched and preferentially loaded into the phloem, together with isoleucine and valine. In contrast, glycine and methionine/tryptophan were partially and γ-amino butyric acid almost completely excluded from sieve tube sap. The covariation analysis grouped arginine together with several neutral amino acids. The acidic amino acids were loaded under competition with neutral amino acids. It is concluded from comparison with the substrate specificities of already characterized plant amino acid transporters, that an AtCAT1-like transporter functions in phloem loading of basic amino acids, whereas a transporter like AtGAT1 is absent in phloem. Although Tanacetum and Ricinus have different minor vein architecture, their phloem loading specificities for amino acids are relatively similar. © 2014 Scandinavian Plant Physiology Society.
High-dimensional covariance estimation with high-dimensional data
Pourahmadi, Mohsen
2013-01-01
Methods for estimating sparse and large covariance matrices Covariance and correlation matrices play fundamental roles in every aspect of the analysis of multivariate data collected from a variety of fields including business and economics, health care, engineering, and environmental and physical sciences. High-Dimensional Covariance Estimation provides accessible and comprehensive coverage of the classical and modern approaches for estimating covariance matrices as well as their applications to the rapidly developing areas lying at the intersection of statistics and machine learning.
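One concrete instance of the sparse-estimation methods such a text covers is hard thresholding of the sample covariance (Bickel-Levina style), chosen here as a representative example rather than as the book's specific recommendation:

```python
import numpy as np

def threshold_covariance(X, tau):
    """Hard-thresholding estimator for a sparse covariance matrix:
    zero out off-diagonal entries of the sample covariance whose
    magnitude falls below tau; keep the diagonal intact."""
    S = np.cov(X, rowvar=False)
    T = S * (np.abs(S) >= tau)
    np.fill_diagonal(T, np.diag(S))
    return T

rng = np.random.default_rng(1)
# True covariance: identity plus one strong off-diagonal pair (0, 1)
Sigma = np.eye(5)
Sigma[0, 1] = Sigma[1, 0] = 0.6
L = np.linalg.cholesky(Sigma)
X = rng.standard_normal((500, 5)) @ L.T       # 500 samples, dimension 5
T = threshold_covariance(X, tau=0.3)
# T keeps the genuine (0, 1) entry and zeroes the noise-level entries.
```

Thresholding trades a small bias on the retained entries for a large variance reduction on the (many) entries that are truly zero, which is what makes it consistent when the dimension grows with the sample size.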
Performance Analysis of Tyler's Covariance Estimator
Soloveychik, Ilya; Wiesel, Ami
2015-01-01
This paper analyzes the performance of Tyler's M-estimator of the scatter matrix in elliptical populations. We focus on the non-asymptotic setting and derive the estimation error bounds depending on the number of samples n and the dimension p. We show that under quite mild conditions the squared Frobenius norm of the error of the inverse estimator decays like p^2/n with high probability.
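Tyler's M-estimator itself is defined by a fixed-point equation, and the standard iteration (with trace normalization, a common convention; the paper's exact normalization may differ) can be sketched directly:

```python
import numpy as np

def tyler_estimator(X, iters=100, tol=1e-8):
    """Tyler's M-estimator of the scatter matrix (shape up to scale),
    via the standard fixed-point iteration, normalized to trace p."""
    n, p = X.shape
    S = np.eye(p)
    for _ in range(iters):
        Si = np.linalg.inv(S)
        w = np.einsum('ij,jk,ik->i', X, Si, X)    # x_i^T S^{-1} x_i
        S_new = (p / n) * (X.T * (1.0 / w)) @ X   # reweighted scatter
        S_new *= p / np.trace(S_new)              # fix the scale
        if np.linalg.norm(S_new - S, 'fro') < tol:
            return S_new
        S = S_new
    return S

rng = np.random.default_rng(2)
Sigma = np.array([[2.0, 0.5],
                  [0.5, 1.0]])
X = rng.standard_normal((2000, 2)) @ np.linalg.cholesky(Sigma).T
S = tyler_estimator(X)    # should recover Sigma's shape, trace-normalized
```

The estimator depends on the data only through the directions x_i/||x_i||, which is the source of its robustness across the whole elliptical family analyzed in the paper.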
A Monte Carlo error analysis program for near-Mars, finite-burn, orbital transfer maneuvers
Green, R. N.; Hoffman, L. H.; Young, G. R.
1972-01-01
A computer program was developed which performs an error analysis of a minimum-fuel, finite-thrust, transfer maneuver between two Keplerian orbits in the vicinity of Mars. The method of analysis is the Monte Carlo approach where each off-nominal initial orbit is targeted to the desired final orbit. The errors in the initial orbit are described by two covariance matrices of state deviations and tracking errors. The function of the program is to relate these errors to the resulting errors in the final orbit. The equations of motion for the transfer trajectory are those of a spacecraft maneuvering with constant thrust and mass-flow rate in the neighborhood of a single body. The thrust vector is allowed to rotate in a plane with a constant pitch rate. The transfer trajectory is characterized by six control parameters and the final orbit is defined, or partially defined, by the desired target parameters. The program is applicable to the deboost maneuver (hyperbola to ellipse), orbital trim maneuver (ellipse to ellipse), fly-by maneuver (hyperbola to hyperbola), escape maneuvers (ellipse to hyperbola), and deorbit maneuver.
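The Monte Carlo approach described, sampling off-nominal initial states from the covariance and collecting the resulting final-orbit errors, can be sketched with a placeholder map. The function g() and the covariance values below are illustrative assumptions standing in for "target the off-nominal initial orbit to the desired final orbit and record the residual errors"; they are not the program's targeting equations:

```python
import numpy as np

rng = np.random.default_rng(3)

P0 = np.diag([1.0, 0.5, 0.1])         # initial state-error covariance
L = np.linalg.cholesky(P0)            # for sampling correlated errors

def g(dx):
    """Placeholder nonlinear map from initial-orbit error to
    final-orbit error (hypothetical, for illustration only)."""
    return np.array([dx[0] + 0.1 * dx[1]**2,
                     0.8 * dx[1] - 0.05 * dx[0] * dx[2],
                     dx[2] + 0.2 * dx[0]])

# Monte Carlo: sample initial errors, map each to a final error,
# and summarize the dispersion as a final-orbit covariance matrix.
samples = np.array([g(L @ rng.standard_normal(3)) for _ in range(20000)])
P_final = np.cov(samples, rowvar=False)
```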
Moderating the covariance between family member's substance use behavior.
Verhulst, Brad; Eaves, Lindon J; Neale, Michael C
2014-07-01
Twin and family studies implicitly assume that the covariation between family members remains constant across differences in age between the members of the family. However, age-specificity in gene expression for shared environmental factors could generate higher correlations between family members who are more similar in age. Cohort effects (cohort × genotype or cohort × common environment) could have the same effects, and both potentially reduce effect sizes estimated in genome-wide association studies where the subjects are heterogeneous in age. In this paper we describe a model in which the covariance between twins and non-twin siblings is moderated as a function of age difference. We describe the details of the model and simulate data using a variety of different parameter values to demonstrate that model fitting returns unbiased parameter estimates. Power analyses are then conducted to estimate the sample sizes required to detect the effects of moderation in a design of twins and siblings. Finally, the model is applied to data on cigarette smoking. We find that (1) the model effectively recovers the simulated parameters, (2) the power is relatively low and therefore requires large sample sizes before small to moderate effect sizes can be found reliably, and (3) the genetic covariance between siblings for smoking behavior decays very rapidly. Result 3 implies that, e.g., genome-wide studies of smoking behavior that use individuals assessed at different ages, or belonging to different birth-year cohorts may have had substantially reduced power to detect effects of genotype on cigarette use. It also implies that significant special twin environmental effects can be explained by age-moderation in some cases. This effect likely contributes to the missing heritability paradox.
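The core idea, a shared covariance between family members that decays with their age difference, can be illustrated by simulation. The exponential-decay form and all parameter values below are assumptions for illustration, not the paper's exact parameterization:

```python
import numpy as np

rng = np.random.default_rng(4)

def simulate_pairs(n, c=0.6, b=0.5, age_diff=0.0):
    """Simulate n sibling pairs whose shared covariance decays with
    age difference as c * exp(-b * |age1 - age2|)."""
    cov = c * np.exp(-b * age_diff)
    Sigma = np.array([[1.0, cov],
                      [cov, 1.0]])
    return rng.multivariate_normal([0.0, 0.0], Sigma, size=n)

close = simulate_pairs(50000, age_diff=0.0)   # twins: same age
far = simulate_pairs(50000, age_diff=4.0)     # siblings 4 years apart

r_close = np.corrcoef(close.T)[0, 1]          # near c = 0.6
r_far = np.corrcoef(far.T)[0, 1]              # near 0.6 * exp(-2)
```

The gap between r_close and r_far is exactly the moderation effect the model fits; ignoring it (pooling all pairs) underestimates the covariance for age-matched pairs, mirroring the power loss described for age-heterogeneous association studies.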
Eddy covariance based methane flux in Sundarbans mangroves, India
Indian Academy of Sciences (India)
Keywords: eddy covariance; mangrove forests; methane flux; Sundarbans. In order to quantify the methane flux in mangroves, an eddy covariance flux tower was recently erected in the largest unpolluted and undisturbed mangrove ecosystem in Sundarbans ...
Earth Observation System Flight Dynamics System Covariance Realism
Zaidi, Waqar H.; Tracewell, David
2016-01-01
This presentation applies a covariance realism technique to the National Aeronautics and Space Administration (NASA) Earth Observation System (EOS) Aqua and Aura spacecraft based on inferential statistics. The technique consists of three parts: calculation of definitive state estimates through orbit determination, calculation of covariance realism test statistics at each covariance propagation point, and proper assessment of those test statistics.
Cosmology of a covariant Galilean field.
De Felice, Antonio; Tsujikawa, Shinji
2010-09-10
We study the cosmology of a covariant scalar field respecting a Galilean symmetry in flat space-time. We show the existence of a tracker solution that finally approaches a de Sitter fixed point responsible for cosmic acceleration today. The viable region of model parameters is clarified by deriving conditions under which ghosts and Laplacian instabilities of scalar and tensor perturbations are absent. The field equation of state exhibits a peculiar phantomlike behavior along the tracker, which allows a possibility to observationally distinguish the Galileon gravity from the cold dark matter model with a cosmological constant.
Anomalies in covariant W-gravity
Ceresole, Anna T.; Frau, Marialuisa; McCarthy, Jim; Lerda, Alberto
1991-08-01
We consider free scalar matter covariantly coupled to background W-gravity. Expanding to second order in the W-gravity fields, we study the appropriate anomalous Ward-Takahashi identities and find the counterterms which maintain diffeomorphism invariance and its W-analogue. We see that a redefinition of the vielbein transformation rule under W-diffeomorphism is required in order to cancel nonlocal contributions to the anomaly. Moreover, we explicitly write all gauge invariances at this order. Some consequences of these results for the chiral gauge quantization are discussed. On leave of absence from Dipartimento di Fisica Teorica, Università di Torino, Turin, Italy.
Galilei covariant quantum mechanics in electromagnetic fields
Directory of Open Access Journals (Sweden)
H. E. Wilhelm
1985-01-01
Full Text Available A formulation of the quantum mechanics of charged particles in time-dependent electromagnetic fields is presented, in which both the Schroedinger equation and the wave equations for the electromagnetic potentials are Galilei covariant. It is shown that the Galilean relativity principle leads to the introduction of the electromagnetic substratum in which the matter and electromagnetic waves propagate. The electromagnetic substratum effects are quantitatively significant for quantum mechanics in reference frames in which the substratum velocity w is comparable in magnitude with the velocity of light c. The electromagnetic substratum velocity w occurs explicitly in the wave equations for the electromagnetic potentials but not in the Schroedinger equation.
Minimal covariant observables identifying all pure states
Energy Technology Data Exchange (ETDEWEB)
Carmeli, Claudio, E-mail: claudio.carmeli@gmail.com [D.I.M.E., Università di Genova, Via Cadorna 2, I-17100 Savona (Italy); I.N.F.N., Sezione di Genova, Via Dodecaneso 33, I-16146 Genova (Italy); Heinosaari, Teiko, E-mail: teiko.heinosaari@utu.fi [Turku Centre for Quantum Physics, Department of Physics and Astronomy, University of Turku (Finland); Toigo, Alessandro, E-mail: alessandro.toigo@polimi.it [Dipartimento di Matematica, Politecnico di Milano, Piazza Leonardo da Vinci 32, I-20133 Milano (Italy); I.N.F.N., Sezione di Milano, Via Celoria 16, I-20133 Milano (Italy)
2013-09-02
It has been recently shown by Heinosaari, Mazzarella and Wolf (2013) [1] that an observable that identifies all pure states of a d-dimensional quantum system has a minimum of 4d−4 outcomes, or slightly fewer (the exact number depending on d). However, no simple construction of this type of minimal observable is known. We investigate covariant observables that identify all pure states and have the minimal number of outcomes. It is shown that the existence of this kind of observable depends on the dimension of the Hilbert space.
MIMO-radar Waveform Covariance Matrices for High SINR and Low Side-lobe Levels
Ahmed, Sajid
2012-12-29
MIMO-radar has better parametric identifiability than phased-array radar, but it shows a loss in signal-to-noise ratio due to non-coherent processing. To exploit the benefits of both MIMO-radar and phased-array radar, two transmit covariance matrices are found. Both covariance matrices yield a gain in signal-to-interference-plus-noise ratio (SINR) compared to MIMO-radar and have lower side-lobe levels (SLLs) compared to phased-array and MIMO-radar. Moreover, in contrast to the recently introduced phased-MIMO scheme, where each antenna transmits a different power, our proposed schemes allow the same power transmission from each antenna. The SLLs of the first proposed covariance matrix are higher than those of the phased-MIMO scheme, while the SLLs of the second proposed covariance matrix are lower. The first covariance matrix is generated using an auto-regressive process, which allows us to change the SINR and side-lobe levels by changing the auto-regressive parameter, while to generate the second covariance matrix the values of the sine function between 0 and $\pi$ with a step size of $\pi/n_T$ are used to form a positive-semidefinite Toeplitz matrix, where $n_T$ is the number of transmit antennas. Simulation results validate our analytical results.
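The second (Toeplitz) construction is concrete enough to sketch. The indexing of the sine samples and the unit diagonal power below are assumptions about the intended convention, not details taken from the abstract:

```python
import numpy as np

n_T = 8  # number of transmit antennas (illustrative)

# Sample the sine function on [0, pi] with step pi/n_T; using sin(k*pi/n_T)
# for lag k is an assumed indexing convention.
lags = np.arange(n_T)
r = np.sin(lags * np.pi / n_T)
r[0] = 1.0  # unit transmit power from each antenna on the diagonal (assumption)

# Build the symmetric Toeplitz matrix R[i, j] = r[|i - j|].
idx = np.abs(lags[:, None] - lags[None, :])
R = r[idx]

# Structural sanity checks: symmetric, with equal power on the diagonal.
is_symmetric = bool(np.allclose(R, R.T))
equal_power = bool(np.allclose(np.diag(R), R[0, 0]))
```

Whether the resulting matrix is positive semidefinite depends on the exact sampling convention in the paper, so that property is not asserted here.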
Litvinenko, Alexander
2017-09-26
The main goal of this article is to introduce the parallel hierarchical matrix library HLIBpro to the statistical community. We describe the HLIBCov package, which is an extension of the HLIBpro library for approximating large covariance matrices and maximizing likelihood functions. We show that an approximate Cholesky factorization of a dense matrix of size $2M \times 2M$ can be computed on a modern multi-core desktop in a few minutes. Further, HLIBCov is used for estimating unknown parameters such as the covariance length, variance, and smoothness parameter of a Matérn covariance function by maximizing the joint Gaussian log-likelihood function. The computational bottleneck here is the expensive linear algebra arising from large, dense covariance matrices. Therefore, covariance matrices are approximated in the hierarchical ($\mathcal{H}$-) matrix format with computational cost $\mathcal{O}(k^2 n \log^2 n / p)$ and storage $\mathcal{O}(k n \log n)$, where the rank $k$ is a small integer (typically $k < 25$), $p$ is the number of cores, and $n$ is the number of locations on a fairly general mesh. We demonstrate a synthetic example where the true parameter values are known. For reproducibility we provide the C++ code, the documentation, and the synthetic data.
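The objective that HLIBCov maximizes can be illustrated with plain dense linear algebra (no H-matrix compression). The sketch below uses the closed-form Matérn kernel for smoothness ν = 1/2 to avoid Bessel-function evaluations; all parameter values and locations are illustrative:

```python
import numpy as np

def matern_half(d, sigma2, ell):
    """Matern covariance with smoothness nu = 1/2 (exponential kernel)."""
    return sigma2 * np.exp(-np.asarray(d) / ell)

def gaussian_loglik(z, C):
    """Joint Gaussian log-likelihood of data z with dense covariance C."""
    n = len(z)
    L = np.linalg.cholesky(C)             # O(n^3) here; HLIBCov replaces this
    alpha = np.linalg.solve(L, z)         # step with an H-matrix Cholesky
    logdet = 2.0 * np.sum(np.log(np.diag(L)))
    return -0.5 * (n * np.log(2 * np.pi) + logdet + alpha @ alpha)

rng = np.random.default_rng(0)
x = np.sort(rng.uniform(0, 10, 50))       # 1-D locations (illustrative)
D = np.abs(x[:, None] - x[None, :])       # pairwise distances
C = matern_half(D, sigma2=1.0, ell=2.0) + 1e-10 * np.eye(50)  # tiny nugget
z = np.linalg.cholesky(C) @ rng.normal(size=50)  # sample from N(0, C)
ll = gaussian_loglik(z, C)
```

Parameter estimation then amounts to maximizing `gaussian_loglik` over (σ², ℓ, ν); the H-matrix machinery only changes how the Cholesky factorization and solves are carried out.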
Covariance-based maneuver optimization for NEO threats
Peterson, G.
The Near Earth Object (NEO) conjunction analysis and mitigation problem is fundamentally the same as Earth-centered space traffic control, albeit on a larger scale and in different temporal and spatial frames. The Aerospace Corporation has been conducting conjunction detection and collision avoidance analysis for a variety of satellite systems in the Earth environment for over 3 years. As part of this process, techniques have been developed that are applicable to analyzing the NEO threat. In space traffic control operations in the Earth orbiting environment, dangerous conjunctions between satellites are determined using collision probability models, realistic covariances, and accurate trajectories in the software suite Collision Vision. Once a potentially dangerous conjunction (or series of conjunctions) is found, a maneuver solution is developed through the program DVOPT (DeltaV OPTimization) that will reduce the risk to a pre-defined acceptable level. DVOPT works by taking the primary's state vector at conjunction, back-propagating it to the time of the proposed burn, then applying the burn to the state vector, and forward-propagating back to the time of the original conjunction. The probability of collision is then re-computed based upon the new state vector and the original covariances. This backwards-forwards propagation is coupled with a search algorithm to find the optimal burn solution as a function of time. Since the burns are small (typically cm/s for Earth-centered space traffic control), Kepler's equation was assumed for the backwards-forwards propagation with little loss in accuracy. The covariance-based DVOPT process can be easily expanded to cover heliocentric orbits and conjunctions between the Earth and an approaching object. It is shown that minimizing the burn to increase the miss distance between the conjuncting objects does not correspond to a burn solution that minimizes the probability of impact between the same two objects. Since a
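The closing point, that a fixed miss distance does not fix the collision probability when the position-error covariance is anisotropic, can be illustrated with a toy two-dimensional encounter. The covariance, hard-body radius, and miss vectors below are invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)

# Anisotropic position-error covariance in the encounter plane (illustrative).
C = np.array([[100.0, 0.0],    # large along-track uncertainty (sigma = 10)
              [0.0,   1.0]])   # small cross-track uncertainty (sigma = 1)
R_combined = 1.0               # combined hard-body radius

def collision_probability(miss, n=200_000):
    """Monte Carlo estimate of P(|X - miss| < R_combined) for X ~ N(0, C)."""
    x = rng.multivariate_normal([0.0, 0.0], C, size=n)
    return float(np.mean(np.sum((x - miss) ** 2, axis=1) < R_combined ** 2))

# Two candidate maneuvers producing the SAME 5-unit miss distance, in
# different directions; the probabilities differ by orders of magnitude.
p_along = collision_probability(np.array([5.0, 0.0]))   # along-track offset
p_cross = collision_probability(np.array([0.0, 5.0]))   # cross-track offset
```

Pushing the miss vector into the direction of small covariance (here, cross-track) reduces the probability far more than an equal-magnitude push along the direction of large uncertainty, which is why the minimum-probability burn and the minimum-burn-for-miss-distance solution differ.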
Testing the prediction error difference between two predictors
van de Wiel, M.A.; Berkhof, J.; van Wieringen, W.N.
2009-01-01
We develop an inference framework for the difference in errors between 2 prediction procedures. The 2 procedures may differ in any aspect and possibly utilize different sets of covariates. We apply training and testing on the same data set, which is accommodated by sample splitting. For each split,
Stochastic precipitation generator with hidden state covariates
Kim, Yongku; Lee, GyuWon
2017-08-01
Time series of daily weather such as precipitation, minimum temperature and maximum temperature are commonly required for various fields. Stochastic weather generators constitute one of the techniques to produce synthetic daily weather. The recently introduced approach for stochastic weather generators is based on generalized linear modeling (GLM) with covariates to account for seasonality and teleconnections (e.g., with the El Niño). In general, stochastic weather generators tend to underestimate the observed interannual variance of seasonally aggregated variables. To reduce this overdispersion, we incorporated time series of seasonal dry/wet indicators in the GLM weather generator as covariates. These seasonal time series were local (or global) decodings obtained by a hidden Markov model of seasonal total precipitation and implemented in the weather generator. The proposed method is applied to time series of daily weather from Seoul, Korea and Pergamino, Argentina. This method provides a straightforward translation of the uncertainty of the seasonal forecast to the corresponding conditional daily weather statistics.
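A minimal sketch of such a generator, assuming illustrative coefficients and a pre-decoded wet/dry seasonal state (standing in for the local or global decoding of the hidden Markov model):

```python
import numpy as np

rng = np.random.default_rng(42)
days = np.arange(365)

# Seasonal harmonics (standard GLM weather-generator covariates) plus a
# hidden seasonal wet/dry state; all coefficients are illustrative.
wet_season = ((days > 150) & (days < 260)).astype(float)  # decoded HMM state
eta = (-1.5                                   # intercept
       + 0.8 * np.sin(2 * np.pi * days / 365)
       + 0.4 * np.cos(2 * np.pi * days / 365)
       + 1.2 * wet_season)                    # dry/wet indicator covariate
p_rain = 1.0 / (1.0 + np.exp(-eta))           # logistic link for occurrence
occurrence = rng.random(365) < p_rain

# Rain amounts on wet days from a gamma distribution (a common GLM choice).
amounts = np.where(occurrence, rng.gamma(shape=0.8, scale=8.0, size=365), 0.0)
```

Conditioning the occurrence probability on the decoded seasonal state is what inflates the interannual variance of seasonal totals relative to a harmonics-only GLM.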
Covariates of alcohol consumption among career firefighters.
Piazza-Gardner, A K; Barry, A E; Chaney, E; Dodd, V; Weiler, R; Delisle, A
2014-12-01
Little is known about rates of alcohol consumption in career firefighters. To assess the quantity and frequency of alcohol consumption among career firefighters and the covariates that influence consumption levels. A convenience sample of career firefighters completed an online, self-administered health assessment survey. Hierarchical binary logistic regression assessed the ability of several covariates to predict binge drinking status. The majority of the sample (n = 160) consumed alcohol (89%), with approximately one-third (34%) having a drinking binge in the past 30 days. The regression model explained 13-18% of the variance in binge drinking status and correctly classified 71% of cases. Race (P firefighters were 1.08 times less likely to binge drink (95% CI: 0.87-0.97). Drinking levels observed in this study exceed those of the general adult population, including college students. Thus, it appears that firefighters represent an at-risk drinking group. Further investigations addressing reasons for alcohol use and abuse among firefighters are warranted. This study and subsequent research will provide information necessary for the development and testing of tailored interventions aimed at reducing firefighter alcohol consumption.
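As a reminder of how such results are reported, the conversion from a logistic-regression coefficient to an odds ratio with a 95% confidence interval can be sketched as follows. The coefficient and standard error are illustrative values chosen only to roughly reproduce an interval of this shape, not the study's estimates:

```python
import math

# Illustrative logistic-regression coefficient and standard error.
beta, se = -0.077, 0.028

# Odds ratio and Wald 95% confidence interval.
odds_ratio = math.exp(beta)
ci_low = math.exp(beta - 1.96 * se)
ci_high = math.exp(beta + 1.96 * se)
```

An odds ratio below 1 with a confidence interval that excludes 1 (as in the reported 0.87-0.97) is what supports a statement like "1.08 times less likely", since 1/0.93 ≈ 1.08.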
Energy Technology Data Exchange (ETDEWEB)
Konecke, Brian A.; Fiege, Adrian; Simon, Adam C.; Parat, Fleurice; Stechern, André
2017-03-01
In this study, we use micro-X-ray absorption near-edge structure (μ-XANES) spectroscopy at the S K-edge to investigate the oxidation state of S in natural magmatic-hydrothermal apatite (Durango, Mexico, and Mina Carmen, Chile) and experimental apatites crystallized from volatile-saturated lamproitic melts at 1000 °C and 300 MPa over a broad range of oxygen fugacities (fO2 = FMQ, FMQ+1.2, FMQ+3; FMQ = fayalite-magnetite-quartz solid buffer). The data are used to test the hypothesis that S oxidation states other than S6+ may substitute into the apatite structure. Peak energies corresponding to sulfate S6+ (~2482 eV), sulfite S4+ (~2478 eV), and sulfide S2- (~2470 eV) were observed in apatite, and the integrated areas of the different sulfur peaks correspond to changes in fO2 and bulk S content. Here, multiple tests confirmed that the S oxidation state in apatite remains constant when exposed to the synchrotron beam, at least for up to 1 h of exposure (i.e., no irradiation damage). To our knowledge, this observation makes apatite the first mineral to incorporate reduced (S2-), intermediate (S4+), and oxidized (S6+) S in variable proportions as a function of the prevailing fO2 of the system. Apatites crystallized under oxidizing conditions (FMQ+1.2 and FMQ+3), where the S6+/STotal peak area ratio in the coexisting glass (i.e., quenched melt) is ~1, are dominated by S6+ with a small contribution of S4+, whereas apatites crystallizing at reduced conditions (FMQ) contain predominantly S2-, lesser amounts of S6+, and possibly traces of S4+. A sulfur oxidation state vs. S concentration analytical line transect across hydrothermally altered apatite from the Mina Carmen iron oxide-apatite (IOA) deposit (Chile) demonstrates that apatite can become enriched in S4+ relative to S6+, indicating metasomatic overprinting via a SO2-bearing fluid or vapor phase. This XANES study demonstrates that as the fO2 increases from FMQ to FMQ+1.2 to FMQ
Medical errors in neurosurgery
Rolston, John D.; Zygourakis, Corinna C.; Han, Seunggu J.; Lau, Catherine Y.; Berger, Mitchel S.; Parsa, Andrew T
2014-01-01
Background: Medical errors cause nearly 100,000 deaths per year and cost billions of dollars annually. In order to rationally develop and institute programs to mitigate errors, the relative frequency and costs of different errors must be documented. This analysis will permit the judicious allocation of scarce healthcare resources to address the most costly errors as they are identified. Methods: Here, we provide a systematic review of the neurosurgical literature describing medical errors...
Begashaw, I. G.; Kathilankal, J. C.; Li, J.; Beaty, K.; Ediger, K.; Forgione, A.; Fratini, G.; Johnson, D.; Velgersdyk, M.; Hupp, J. R.; Xu, L.; Burba, G. G.
2014-12-01
The eddy covariance method is widely used for direct measurements of turbulent exchange of gases and energy between the surface and atmosphere. In the past, raw data were collected first in the field and then processed back in the laboratory to achieve fully corrected, publication-ready flux results. This post-processing consumed a significant amount of time and resources, and precluded researchers from accessing near real-time final flux results. A new automated measurement system with novel hardware and software designs was developed, tested, and deployed starting late 2013. The major advancements with this automated flux system include: 1) simultaneous logging of high-frequency three-dimensional wind speeds and multiple gas densities (CO2, H2O and CH4), low-frequency meteorological data, and site metadata through a specially designed file format; 2) fully corrected, real-time on-site flux computations using conventional as well as user-specified methods, implemented by running EddyPro Software on a small low-power microprocessor; 3) precision clock control and coordinate information for data synchronization and inter-site data comparison, provided by incorporating a GPS and the Precision Time Protocol. Along with these innovations, a data management server application was also developed to chart fully corrected real-time fluxes to assist remote system monitoring, to send e-mail alerts, and to automate data QA/QC, transfer, and archiving at individual stations or on a network level. The combination of all of these functions was designed to save a substantial amount of time and cost associated with managing a research site by eliminating post-field data processing, reducing user errors, and facilitating real-time access to fully corrected flux results. The design, functionality, and test results from this new eddy covariance measurement tool will be presented.
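The core computation the system performs in real time, the eddy covariance itself, reduces to the covariance of the fluctuations of vertical wind speed and gas density over an averaging block. A minimal sketch on synthetic data (despiking, coordinate rotation, and density corrections such as WPL are omitted, and all numbers are illustrative):

```python
import numpy as np

rng = np.random.default_rng(7)
n = 18000                      # 30 minutes of 10 Hz samples

# Synthetic high-frequency series: vertical wind w (m/s) and CO2 density c,
# constructed so that updrafts carry CO2-depleted air (net uptake).
updraft = rng.normal(0.0, 0.3, n)
w = updraft
c = 15.0 - 0.5 * updraft + rng.normal(0.0, 0.1, n)

# Reynolds decomposition over the 30-min block: flux = mean(w' * c').
wp = w - w.mean()
cp = c - c.mean()
flux = np.mean(wp * cp)        # negative here, i.e. uptake, by construction
```

Everything else in the pipeline (calibration, spectral corrections, QA/QC flags) wraps around this single covariance per averaging interval.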
Implementing phase-covariant cloning in circuit quantum electrodynamics
Energy Technology Data Exchange (ETDEWEB)
Zhu, Meng-Zheng [School of Physics and Material Science, Anhui University, Hefei 230039 (China); School of Physics and Electronic Information, Huaibei Normal University, Huaibei 235000 (China); Ye, Liu, E-mail: yeliu@ahu.edu.cn [School of Physics and Material Science, Anhui University, Hefei 230039 (China)
2016-10-15
An efficient scheme is proposed to implement phase-covariant quantum cloning by using a superconducting transmon qubit coupled to a microwave cavity resonator in the strong dispersive limit of circuit quantum electrodynamics (QED). By solving the master equation numerically, we plot the Wigner function and Poisson distribution of the cavity mode after each operation in the cloning transformation sequence according to the two logic circuits proposed. The visualizations of the quasi-probability distribution in phase space for the cavity mode and the occupation probability distribution in the Fock basis enable us to follow the evolution of the cavity mode during the phase-covariant cloning (PCC) transformation. With the help of numerical simulation, we find that the present cloning machine is not an isotropic model, because its output fidelity depends on the polar angle and the azimuthal angle of the initial input state on the Bloch sphere. The fidelity for the actual output clone of the present scheme is slightly smaller than the theoretical value. The simulation results are consistent with the theoretical ones. This further corroborates that our circuit-QED-based scheme can efficiently implement the PCC transformation.
Medication errors: prescribing faults and prescription errors.
Velo, Giampaolo P; Minuz, Pietro
2009-06-01
1. Medication errors are common in general practice and in hospitals. Both errors in the act of writing (prescription errors) and prescribing faults due to erroneous medical decisions can result in harm to patients. 2. Any step in the prescribing process can generate errors. Slips, lapses, or mistakes are sources of errors, as in unintended omissions in the transcription of drugs. Faults in dose selection, omitted transcription, and poor handwriting are common. 3. Inadequate knowledge or competence and incomplete information about clinical characteristics and previous treatment of individual patients can result in prescribing faults, including the use of potentially inappropriate medications. 4. An unsafe working environment, complex or undefined procedures, and inadequate communication among health-care personnel, particularly between doctors and nurses, have been identified as important underlying factors that contribute to prescription errors and prescribing faults. 5. Active interventions aimed at reducing prescription errors and prescribing faults are strongly recommended. These should be focused on the education and training of prescribers and the use of on-line aids. The complexity of the prescribing procedure should be reduced by introducing automated systems or uniform prescribing charts, in order to avoid transcription and omission errors. Feedback control systems and immediate review of prescriptions, which can be performed with the assistance of a hospital pharmacist, are also helpful. Audits should be performed periodically.
Learner Corpora without Error Tagging
Directory of Open Access Journals (Sweden)
Rastelli, Stefano
2009-01-01
Full Text Available The article explores the possibility of adopting a form-to-function perspective when annotating learner corpora in order to get deeper insights about systematic features of interlanguage. A split between forms and functions (or categories) is desirable in order to avoid the "comparative fallacy" and because – especially in basic varieties – forms may precede functions (e.g., what resembles a "noun" might have a different function, or a function may show up in unexpected forms). In the computer-aided error analysis tradition, all items produced by learners are traced to a grid of error tags which is based on the categories of the target language. In contrast, we believe it is possible to record and make retrievable both words and sequences of characters independently of their functional-grammatical label in the target language. For this purpose, at the University of Pavia we adapted a probabilistic POS tagger designed for L1 to L2 data. Despite the criticism that this operation can raise, we found that it is better to work with "virtual categories" rather than with errors. The article outlines the theoretical background of the project and shows some examples in which some of the potential of SLA-oriented (non-error-based) tagging is made clearer.
A fast method for testing covariates in population PK/PD Models.
Khandelwal, Akash; Harling, Kajsa; Jonsson, E Niclas; Hooker, Andrew C; Karlsson, Mats O
2011-09-01
The development of covariate models within a population modeling program like NONMEM is generally a time-consuming and non-trivial task. In this study, a fast procedure to approximate the change in objective function values of covariate-parameter models is presented and evaluated. The proposed method is a first-order conditional estimation (FOCE)-based linear approximation of the influence of covariates on the model predictions. Simulated and real datasets were used to compare this method with the conventional nonlinear mixed effects model using both first-order (FO) and FOCE approximations. The methods were mainly assessed in terms of the difference in objective function values (ΔOFV) between base and covariate models. The FOCE linearization was superior to the FO linearization and showed a high degree of concordance with the corresponding nonlinear models in ΔOFV. The linear and nonlinear FOCE models provided similar coefficient estimates and identified the same covariate-parameter relations as statistically significant or non-significant for the real and simulated datasets. The time required to fit the tesaglitazar and docetaxel datasets with 4 and 15 parameter-covariate relations using the linearization method was 5.1 and 0.5 min, compared with 152 and 34 h, respectively, with the nonlinear models. The FOCE linearization method allows for fast estimation of covariate-parameter relation models with good concordance with the nonlinear models. This allows more efficient model building and may allow the utilization of model building techniques that would otherwise be too time-consuming.
Horowitz-Kraus, Tzipi; Holland, Scott K.
2015-01-01
The Reading Acceleration Program is a computerized program that improves reading and the activation of the error-detection mechanism in individuals with reading difficulty (RD) and typical readers (TRs). The current study aims to find the neural correlates for this effect in English-speaking 8-12-year-old children with RD and TRs using a…
Directory of Open Access Journals (Sweden)
Y. Zhu
2017-04-01
Full Text Available High Frequency (HF) radio waves propagating in the ionospheric random inhomogeneous media exhibit a spatially nonlinear wavefront, which may limit the performance of conventional high-resolution methods for HF sky-wave radar systems. In this paper, the spatial correlation function of the wavefront is theoretically derived under the condition that the radio waves propagate through an ionospheric structure containing irregularities. With this function, the influence of wavefront distortions on the array covariance matrix can be quantitatively described by the spatial coherence matrix, which is characterized by the coherence loss parameter. Therefore, the problem of wavefront correction is recast as the determination of the coherence loss parameter, and this is solved by the covariance matching (CM) technique. The effectiveness of the proposed method is evaluated with both simulated and real radar data. It is shown numerically that an improved direction-of-arrival (DOA) estimation performance can be achieved with the corrected array covariance matrix.
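The correction step can be sketched under a Gaussian coherence-loss model, in which the distorted covariance is the elementwise (Hadamard) product of the ideal covariance with a coherence matrix B(γ). This toy example assumes the ideal covariance is known, which it is not in practice; it only illustrates the covariance-matching search for the coherence loss parameter:

```python
import numpy as np

n = 8                                    # array elements (illustrative ULA)
d = np.abs(np.arange(n)[:, None] - np.arange(n)[None, :])

# Ideal single-source covariance for a plane wave on a half-wavelength ULA.
theta = np.deg2rad(20.0)
a = np.exp(1j * np.pi * np.arange(n) * np.sin(theta))
R0 = np.outer(a, a.conj()) + 0.01 * np.eye(n)

# Wavefront distortion: Hadamard product with a coherence matrix B(gamma);
# the Gaussian form of B is an assumed, commonly used model.
gamma_true = 0.02
R_obs = R0 * np.exp(-gamma_true * d ** 2)

# Covariance matching: grid search for the gamma whose model best fits R_obs.
grid = np.linspace(0.0, 0.1, 101)
errs = [np.linalg.norm(R_obs - R0 * np.exp(-g * d ** 2)) for g in grid]
gamma_hat = grid[int(np.argmin(errs))]

# Undo the coherence loss before high-resolution DOA processing.
R_corr = R_obs / np.exp(-gamma_hat * d ** 2)
```

Once γ is estimated, dividing out B(γ̂) restores the covariance structure that subspace DOA estimators such as MUSIC rely on.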
Immediate error correction process following sleep deprivation.
Hsieh, Shulan; Cheng, I-Chen; Tsai, Ling-Ling
2007-06-01
Previous studies have suggested that one night of sleep deprivation decreases frontal lobe metabolic activity, particularly in the anterior cingulate cortex (ACC), resulting in decreased performance in various executive function tasks. This study thus attempted to address whether sleep deprivation impaired the executive function of error detection and error correction. Sixteen young healthy college students (seven women, nine men, with ages ranging from 18 to 23 years) participated in this study. Participants performed a modified letter flanker task and were instructed to make immediate error corrections on detecting performance errors. Event-related potentials (ERPs) during the flanker task were obtained using a within-subject, repeated-measure design. The error negativity or error-related negativity (Ne/ERN) and the error positivity (Pe) seen immediately after errors were analyzed. The results show that the amplitude of the Ne/ERN was reduced significantly following sleep deprivation. Reduction also occurred for error trials with subsequent correction, indicating that sleep deprivation influenced error correction ability. This study further demonstrated that the impairment in immediate error correction following sleep deprivation was confined to specific stimulus types, with both Ne/ERN and behavioral correction rates being reduced only for trials in which flanker stimuli were incongruent with the target stimulus, while the response to the target was compatible with that of the flanker stimuli following sleep deprivation. The results thus warrant future systematic investigation of the interaction between stimulus type and error correction following sleep deprivation.
An Adaptive Approach to Mitigate Background Covariance Limitations in the Ensemble Kalman Filter
Song, Hajoon
2010-07-01
A new approach is proposed to address the background covariance limitations arising from undersampled ensembles and unaccounted model errors in the ensemble Kalman filter (EnKF). The method enhances the representativeness of the EnKF ensemble by augmenting it with new members chosen adaptively to add missing information that prevents the EnKF from fully fitting the data to the ensemble. The vectors to be added are obtained by back projecting the residuals of the observation misfits from the EnKF analysis step onto the state space. The back projection is done using an optimal interpolation (OI) scheme based on an estimated covariance of the subspace missing from the ensemble. In the experiments reported here, the OI uses a preselected stationary background covariance matrix, as in the hybrid EnKF–three-dimensional variational data assimilation (3DVAR) approach, but the resulting correction is included as a new ensemble member instead of being added to all existing ensemble members. The adaptive approach is tested with the Lorenz-96 model. The hybrid EnKF–3DVAR is used as a benchmark to evaluate the performance of the adaptive approach. Assimilation experiments suggest that the new adaptive scheme significantly improves the EnKF behavior when it suffers from small size ensembles and neglected model errors. It was further found to be competitive with the hybrid EnKF–3DVAR approach, depending on ensemble size and data coverage.
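The back-projection step, mapping the leftover observation misfit into a new ensemble member through an OI step with a preselected stationary background covariance, can be sketched as follows (toy dimensions and a Gaussian-correlation B, all illustrative):

```python
import numpy as np

rng = np.random.default_rng(1)
nx, nobs, ne = 40, 10, 5                 # state size, obs count, ensemble size

H = np.zeros((nobs, nx))
H[np.arange(nobs), np.arange(0, nx, 4)] = 1.0   # observe every 4th variable
R = 0.1 * np.eye(nobs)                   # observation-error covariance

ensemble = rng.normal(size=(nx, ne))     # undersampled ensemble (ne << nx)
y = rng.normal(size=nobs)                # observations

# Residual of the observation misfit left after fitting y to the ensemble
# span (a stand-in for the post-analysis misfit of the EnKF update).
A = H @ ensemble
coef, *_ = np.linalg.lstsq(A, y, rcond=None)
resid = y - A @ coef

# Back-project the residual onto the state space with an OI step using a
# preselected stationary background covariance B (Gaussian correlations).
dist = np.abs(np.subtract.outer(np.arange(nx), np.arange(nx)))
B = np.exp(-(dist / 3.0) ** 2)
new_member = B @ H.T @ np.linalg.solve(H @ B @ H.T + R, resid)

# Augment the ensemble with the new member instead of correcting all members.
augmented = np.column_stack([ensemble, new_member])
```

By construction the new member has a nonzero projection onto the unfitted residual, so the augmented ensemble can fit the data strictly better than the original one.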
Medical errors in neurosurgery.
Rolston, John D; Zygourakis, Corinna C; Han, Seunggu J; Lau, Catherine Y; Berger, Mitchel S; Parsa, Andrew T
2014-01-01
Medical errors cause nearly 100,000 deaths per year and cost billions of dollars annually. In order to rationally develop and institute programs to mitigate errors, the relative frequency and costs of different errors must be documented. This analysis will permit the judicious allocation of scarce healthcare resources to address the most costly errors as they are identified. Here, we provide a systematic review of the neurosurgical literature describing medical errors at the departmental level. Eligible articles were identified from the PubMed database, and restricted to reports of recognizable errors across neurosurgical practices. We limited this analysis to cross-sectional studies of errors in order to better match systems-level concerns, rather than reviewing the literature for individually selected errors like wrong-sided or wrong-level surgery. Only a small number of articles met these criteria, highlighting the paucity of data on this topic. From these studies, errors were documented in anywhere from 12% to 88.7% of cases. These errors had many sources, of which only 23.7-27.8% were technical, related to the execution of the surgery itself, highlighting the importance of systems-level approaches to protecting patients and reducing errors. Overall, the magnitude of medical errors in neurosurgery and the lack of focused research emphasize the need for prospective categorization of morbidity with judicious attribution. Ultimately, we must raise awareness of the impact of medical errors in neurosurgery, reduce the occurrence of medical errors, and mitigate their detrimental effects.
Madugundu, Rangaswamy; Al-Gaadi, Khalid A; Tola, ElKamil; Kayad, Ahmed G; Jha, Chandra Sekhar
2017-02-01
A study was conducted to understand the potential of Landsat-8 in the estimation of gross primary production (GPP) and to quantify the productivity of maize crop cultivated under the hyper-arid conditions of Saudi Arabia. The GPP of the maize crop was estimated by using the Vegetation Photosynthesis Model (VPM) utilizing remote sensing data from Landsat-8 reflectance (GPPVPM) as well as the meteorological data provided by an Eddy Covariance (EC) system (GPPEC), for the period from August to November 2015. Results revealed that the cumulative GPPEC for the entire growth period of the maize crop was 1871 g C m−2. However, the cumulative GPP determined as a function of the enhanced vegetation index – EVI (GPPEVI) was 1979 g C m−2, and that determined as a function of the normalized difference vegetation index – NDVI (GPPNDVI) was 1754 g C m−2. These results indicated that the GPPEVI was significantly higher than the GPPEC (R² = 0.96, P = 0.0241 and RMSE = 12.6%), while the GPPNDVI was significantly lower than the GPPEC (R² = 0.93, P = 0.0384 and RMSE = 19.7%). The recorded relative errors between the GPPEC and the GPPEVI and GPPNDVI were −6.22% and 5.76%, respectively. These results demonstrated the potential of the Landsat-8-driven VPM model for the estimation of GPP, which is relevant to the productivity and carbon fluxes.
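The vegetation indices and the VPM combination used here can be sketched in a few lines. The reflectances, PAR, and stress-scalar values below are illustrative, and the linear fPAR ≈ EVI mapping is the common VPM assumption rather than a detail from this study:

```python
# Landsat-8 surface reflectances (illustrative values for a maize canopy).
blue, red, nir = 0.04, 0.05, 0.45

# Standard vegetation-index formulas.
ndvi = (nir - red) / (nir + red)
evi = 2.5 * (nir - red) / (nir + 6.0 * red - 7.5 * blue + 1.0)

# VPM: GPP = light-use efficiency x absorbed PAR.
par = 10.0                       # mol photons m-2 day-1 (illustrative)
eps0 = 0.05                      # maximal light-use efficiency (assumed)
t_scalar, w_scalar = 0.9, 0.8    # temperature and water stress scalars
fpar = evi                       # VPM takes fPAR_chl proportional to EVI

gpp = eps0 * t_scalar * w_scalar * fpar * par   # mol CO2 m-2 day-1
```

Because EVI uses the blue band to damp soil and atmosphere effects, the EVI-driven GPP and NDVI-driven GPP bracket the eddy covariance estimate differently, as the abstract reports.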
Covariant Hyperbolization of Force-free Electrodynamics
Carrasco, Federico
2016-01-01
Force-Free Electrodynamics (FFE) is a nonlinear system of equations modeling the evolution of the electromagnetic field in the presence of a magnetically dominated relativistic plasma. This configuration arises in several astrophysical scenarios, which represent exciting laboratories to understand physics in extreme regimes. We show that this system, when restricted to the correct constraint submanifold, is symmetric hyperbolic. In numerical applications it is not feasible to keep the system in that submanifold, and so it is necessary to analyze its structure first in the tangent space of that submanifold and then in a whole neighborhood of it. As already shown by Pfeiffer, a direct (or naive) formulation of this system (in the whole tangent space) results in a weakly hyperbolic system of evolution equations for which well-posedness of the initial value formulation does not follow. Using the generalized symmetric hyperbolic formalism due to Geroch, we introduce here a covariant hyperbolization for the FFE s...
Covariant perturbations in the gonihedric string model
Rojas, Efraín
2017-11-01
We provide a covariant framework to study classically the stability of small perturbations of the so-called gonihedric string model by making precise use of variational techniques. The local action depends on the square root of the quadratic mean extrinsic curvature of the worldsheet swept out by the string, and is reparametrization invariant. A general expression for the worldsheet perturbations, guided by the Jacobi equations without any early gauge fixing, is obtained. This is manifested through a set of highly coupled nonlinear partial differential equations where the perturbations are described by scalar fields, Φi, living in the worldsheet. This model contains, as a special limit, the linear model in the mean extrinsic curvature. In such a case the Jacobi equations specialize to a single wave-like equation for Φ.
EMPIRE ULTIMATE EXPANSION: RESONANCES AND COVARIANCES.
Energy Technology Data Exchange (ETDEWEB)
HERMAN, M.; MUGHABGHAB, S.F.; OBLOZINSKY, P.; ROCHMAN, D.; PIGNI, M.T.; KAWANO, T.; CAPOTE, R.; ZERKIN, V.; TRKOV, A.; SIN, M.; CARSON, B.V.; WIENKE, H.; CHO, Y.-S.
2007-04-22
The EMPIRE code system is being extended to cover the resolved and unresolved resonance region, employing the proven methodology used for the production of new evaluations in the recent Atlas of Neutron Resonances. Other directions of EMPIRE expansion are uncertainties and the correlations among them. These include covariances for cross sections as well as for model parameters. In this presentation we concentrate on the KALMAN method that has been applied in EMPIRE to the fast neutron range as well as to the resonance region. We also summarize the role of the EMPIRE code in the ENDF/B-VII.0 development. Finally, large-scale calculations and their impact on nuclear model parameters are discussed, along with the exciting perspectives offered by parallel supercomputing.
A covariant approach to entropic dynamics
Ipek, Selman; Abedi, Mohammad; Caticha, Ariel
2017-06-01
Entropic Dynamics (ED) is a framework for constructing dynamical theories of inference using the tools of inductive reasoning. A central feature of the ED framework is the special focus placed on time. In [2] a global entropic time was used to derive a quantum theory of relativistic scalar fields. This theory, however, suffered from a lack of explicit or manifest Lorentz symmetry. In this paper we explore an alternative formulation in which the relativistic aspects of the theory are manifest. The approach we pursue here is inspired by the methods of Dirac, Kuchař, and Teitelboim in their development of covariant Hamiltonian approaches. The key ingredient here is the adoption of a local notion of entropic time, which allows compatibility with an arbitrary notion of simultaneity. However, in order to ensure that the evolution does not depend on the particular sequence of hypersurfaces, we must impose a set of constraints that guarantee a consistent evolution.
Least Squared Simulated Errors
Directory of Open Access Journals (Sweden)
Peter J. Veazie
2015-03-01
Full Text Available Estimation by minimizing the sum of squared residuals is a common method for estimating the parameters of regression functions; however, regression functions are not always known or of interest. Maximizing the likelihood function is an alternative if a distribution can be properly specified. However, cases can arise in which a regression function is not known, no additional moment conditions are indicated, and we have a distribution for the random quantities, but maximum likelihood estimation is difficult to implement. In this article, we present the least squared simulated errors (LSSE) estimator for such cases. The conditions for consistency and asymptotic normality are given. Finite sample properties are investigated via Monte Carlo experiments on two examples. Results suggest LSSE can perform well in finite samples. We discuss the estimator’s limitations and conclude that the estimator is a viable option. We recommend Monte Carlo investigation of any given model to judge bias for a particular finite sample size of interest and discern whether asymptotic approximations or resampling techniques are preferable for the construction of tests or confidence intervals.
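As an illustration of the general idea (not the article's implementation; the data-generating process, names, and grid search below are invented for the example), one can estimate a parameter by minimizing the squared difference between observed outcomes and outcomes simulated from the assumed distribution:

```python
import numpy as np

rng = np.random.default_rng(42)

# Toy data-generating process with no regression function of direct
# interest: y = exp(theta * x) * eps, with multiplicative lognormal noise.
theta_true = 0.5
x = rng.uniform(0.0, 2.0, 500)
y = np.exp(theta_true * x) * rng.lognormal(0.0, 0.1, 500)

def simulated_errors(theta, n_sim=200, seed=1):
    """Mean squared difference between the observed outcomes and the
    Monte Carlo average of outcomes simulated at parameter value theta."""
    r = np.random.default_rng(seed)  # common random numbers across theta
    sims = np.exp(theta * x) * r.lognormal(0.0, 0.1, (n_sim, x.size))
    return float(np.mean((y - sims.mean(axis=0)) ** 2))

# Minimize the simulated-error criterion over a parameter grid.
grid = np.linspace(0.0, 1.0, 101)
theta_hat = grid[np.argmin([simulated_errors(t) for t in grid])]
```

Using common random numbers across candidate values of theta keeps the criterion smooth in theta, which matters when a numerical optimizer replaces the grid search.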
Covariance Functions and Random Regression Models in the ...
African Journals Online (AJOL)
ARC-IRENE
Since its inception the application of genetic principles to selective breeding of farm animals has led ... animal increases in size or weight continuously over time until reaching a plateau at maturity. Such a process .... where A and I are the numerator relationship matrix and an identity matrix, respectively; KG and KC are the.
Covariance Functions and Random Regression Models in the ...
African Journals Online (AJOL)
ARC-IRENE
many, highly correlated measures (Meyer, 1998a). Several approaches have been proposed to deal with such data, from the simplest repeatability models (SRM) to complex multivariate models (MTM). The SRM considers different measurements at different stages (ages) as realizations of the same genetic trait with constant.
Using the Kalman Algorithm to Correct Data Errors of a 24-Bit Visible Spectrometer.
Pham, Son; Dinh, Anh
2017-12-18
To reduce cost, increase resolution, and reduce errors due to changing light intensity of the VIS SPEC, a new technique is proposed which applies the Kalman algorithm along with a simple hardware setup and implementation. In real time, the SPEC automatically corrects spectral data errors resulting from an unstable light source by adding a photodiode sensor to monitor the changes in light source intensity. The Kalman algorithm is applied on the data to correct the errors. The light intensity instability is one of the sources of error considered in this work. The change in light intensity is due to the remaining lifetime, working time and physical mechanism of the halogen lamp, and/or battery and regulator stability. Coefficients and parameters for the processing are determined from MATLAB simulations based on two real types of datasets, which are mono-changing and multi-changing datasets, collected from the prototype SPEC. From the saved datasets, and based on the Kalman algorithm and other computer algorithms such as divide-and-conquer algorithm and greedy technique, the simulation program implements the search for process noise covariance, the correction function and its correction coefficients. These components, which will be implemented in the processor of the SPEC, Kalman algorithm and the light-source-monitoring sensor are essential to build the Kalman corrector. Through experimental results, the corrector can reduce the total error in the spectra on the order of 10 times; for certain typical local spectral data, it can reduce the error by up to 60 times. The experimental results prove that accuracy of the SPEC increases considerably by using the proposed Kalman corrector in the case of changes in light source intensity. The proposed Kalman technique can be applied to other applications to correct the errors due to slow changes in certain system components.
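The kind of scalar Kalman correction described can be sketched as follows (an illustrative toy, not the authors' implementation; the process and measurement noise covariances q and r, and the drift model, are assumed values):

```python
import numpy as np

def kalman_1d(measurements, q=1e-4, r=1e-2):
    """Scalar Kalman filter tracking a slowly drifting signal level.

    q: process noise covariance (how fast the level may drift)
    r: measurement noise covariance (photodiode reading noise)
    """
    x, p = measurements[0], 1.0   # initial state estimate and covariance
    estimates = []
    for z in measurements:
        p = p + q                 # predict: constant-level model, covariance grows
        k = p / (p + r)           # update: Kalman gain
        x = x + k * (z - x)       # blend prediction with measurement z
        p = (1.0 - k) * p
        estimates.append(x)
    return np.array(estimates)

rng = np.random.default_rng(0)
true_level = 1.0 + 0.001 * np.arange(200)        # slow lamp-intensity drift
noisy = true_level + rng.normal(0.0, 0.05, 200)  # monitoring-sensor readings
smoothed = kalman_1d(noisy)
```

The filtered track follows the slow drift while averaging out the reading noise; in the actual instrument the corrected intensity would then rescale the spectral data.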
James Elliott, C.; McVey, Brian D.; Quimby, David C.
1991-07-01
The level of field errors in a free electron laser (FEL) is an important determinant of its performance. We have computed the 3D performance of a large laser subsystem subjected to field errors of various types. These calculations have been guided by simple models such as SWOOP. The technique of choice is use of the FELEX free electron laser code, which now possesses extensive engineering capabilities. Modeling includes the ability to establish tolerances of various types: fast and slow scale field bowing, field error level, beam position monitor error level, gap errors, defocusing errors, energy slew, displacement and pointing errors. Many effects of these errors on relative gain and relative power extraction are displayed and are the essential elements of determining an error budget. The random errors also depend on the particular random number seed used in the calculation. The simultaneous display of the performance versus error level of cases with multiple seeds illustrates the variations attributable to the stochasticity of this model. All these errors are evaluated numerically for comprehensive engineering of the system. In particular, gap errors are found to place requirements beyond convenient mechanical tolerances of ± 25 μm; these may be ameliorated by a procedure using direct measurement of the magnetic fields at assembly time.
A general field-covariant formulation of quantum field theory
Energy Technology Data Exchange (ETDEWEB)
Anselmi, Damiano [Universita di Pisa, Dipartimento di Fisica 'Enrico Fermi', Pisa (Italy)]
2013-03-15
In all nontrivial cases renormalization, as it is usually formulated, is not a change of integration variables in the functional integral, plus parameter redefinitions, but a set of replacements, of actions and/or field variables and parameters. Because of this, we cannot write simple identities relating bare and renormalized generating functionals, or generating functionals before and after nonlinear changes of field variables. In this paper we investigate this issue and work out a general field-covariant approach to quantum field theory, which allows us to treat all perturbative changes of field variables, including the relation between bare and renormalized fields, as true changes of variables in the functional integral, under which the functionals Z and W=lnZ behave as scalars. We investigate the relation between composite fields and changes of field variables, and we show that, if J are the sources coupled to the elementary fields, all changes of field variables can be expressed as J-dependent redefinitions of the sources L coupled to the composite fields. We also work out the relation between the renormalization of variable-changes and the renormalization of composite fields. Using our transformation rules it is possible to derive the renormalization of a theory in a new variable frame from the renormalization in the old variable frame, without having to calculate it anew. We define several approaches, useful for different purposes, in particular a linear approach where all variable changes are described as linear source redefinitions. We include a number of explicit examples. (orig.)
Lagrangian analysis, data covariance, and the impulse time integral
Energy Technology Data Exchange (ETDEWEB)
Forest, C.A.
1991-01-01
Lagrangian analysis is mathematical analysis of data derived from flow experiments in which embedded gauges move with the material motion (constant Lagrangian mass-point coordinate). With sufficient data, the conservation laws of mass, momentum, and energy are applied to the data in order to construct flow-variable fields, of particle velocity, stress, density, et cetera. Toward this end, a new Lagrangian analysis method has been constructed, that is centered upon a function, α, that incorporates conservation of mass and momentum into its definition. Further, the existence of α allows simultaneous, consistent, least-squares fitting of surfaces to all of the flow data. The method also incorporates a novel treatment of the data covariance effects resulting from gauge-to-gauge calibration uncertainty. Analysis of a synthetic data set illustrates the method. 8 refs., 8 figs.
Accounting for observation errors in image data assimilation
Directory of Open Access Journals (Sweden)
Vincent Chabot
2015-02-01
Full Text Available This paper deals with the assimilation of image-type data. Such data, for example satellite images, have good properties (dense coverage in space and time), but also one crucial problem for data assimilation: they are affected by spatially correlated errors. Classical approaches in data assimilation assume uncorrelated noise, because the proper description and numerical manipulation of non-diagonal error covariance matrices is complex. This paper proposes a simple way to provide observation error covariance matrices adapted to spatially correlated errors. This is done using various image transformations: multiscale transforms (wavelets, Fourier, curvelets), gradients, and gradient orientations. These transformations are described and compared to classical approaches, such as pixel-to-pixel comparison and observation thinning. We provide simple yet effective covariance matrices for each of these transformations, which take into account the observation error correlations and improve the results. The effectiveness of the proposed approach is demonstrated on twin experiments performed on a 2-D shallow-water model.
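The gradient transformation mentioned above can be illustrated with a toy observation term (the function name and variance parameter are assumptions, not the paper's notation); a constant, i.e. perfectly correlated, observation error shows why working in a transformed space helps:

```python
import numpy as np

def obs_cost_gradient_space(obs, model, sigma2=1.0):
    """Observation cost J_o evaluated on image gradients rather than on
    raw pixels, with a diagonal error covariance (variance sigma2) in
    gradient space. Large-scale correlated errors are damped by the
    gradient operator, so the diagonal assumption is less harmful here."""
    gx_o, gy_o = np.gradient(obs)
    gx_m, gy_m = np.gradient(model)
    d = np.concatenate([(gx_o - gx_m).ravel(), (gy_o - gy_m).ravel()])
    return 0.5 * float(d @ d) / sigma2

# A smooth synthetic 'truth' and an observation with a constant bias,
# i.e. a perfectly spatially correlated additive error.
x = np.linspace(0.0, 1.0, 64)
truth = np.sin(2 * np.pi * x)[:, None] * np.cos(2 * np.pi * x)[None, :]
obs = truth + 0.3
cost = obs_cost_gradient_space(obs, truth)   # bias is invisible to gradients
```

A pixel-space diagonal covariance would heavily penalize the bias as if it were independent noise; in gradient space the same error contributes essentially nothing to the cost.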
Gravity Induced Position Errors in Airborne Inertial Navigation,
1981-12-01
Squares Collocation. Report No. 240 of the Department of Geodetic Science, The Ohio State University, Columbus, Ohio. Moritz, H. (1977): On the Computation... Markov processes by making use of the essential parameters of a covariance function proposed by Moritz. The expressions for the gravity induced position... their associated covariance functions. The justification of such an approach is given in Moritz (1980) and, within the limits indicated there, it is
Desmet, Charlotte; Deschrijver, Eliane; Brass, Marcel
2014-04-01
Recently, it has been shown that the medial prefrontal cortex (MPFC) is involved in error execution as well as error observation. Based on this finding, it has been argued that recognizing each other's mistakes might rely on motor simulation. In the current functional magnetic resonance imaging (fMRI) study, we directly tested this hypothesis by investigating whether medial prefrontal activity in error observation is restricted to situations that enable simulation. To this aim, we compared brain activity related to the observation of errors that can be simulated (human errors) with brain activity related to errors that cannot be simulated (machine errors). We show that medial prefrontal activity is not only restricted to the observation of human errors but also occurs when observing errors of a machine. In addition, our data indicate that the MPFC reflects a domain general mechanism of monitoring violations of expectancies.
ATC operational error analysis.
1972-01-01
The primary causes of operational errors are discussed and the effects of these errors on an ATC system's performance are described. No attempt is made to specify possible error models for the spectrum of blunders that can occur although previous res...
Drug Errors in Anaesthesiology
Directory of Open Access Journals (Sweden)
Rajnish Kumar Jain
2009-01-01
Full Text Available Medication errors are a leading cause of morbidity and mortality in hospitalized patients. The incidence of these drug errors during anaesthesia is not certain. They impose a considerable financial burden to health care systems apart from the patient losses. Common causes of these errors and their prevention is discussed.
Hoede, C.; Li, Z.
2002-01-01
In coding theory the problem of decoding focuses on error vectors. In the simplest situation code words are $(0,1)$-vectors, as are the received messages and the error vectors. Comparison of a received word with the code words yields a set of error vectors. In deciding on the original code word,
Terrestrial gross carbon dioxide uptake : Global distribution and covariation with climate
Beer, Christian; Reichstein, Markus; Tomelleri, Enrico; Ciais, Philippe; Jung, Martin; Carvalhais, Nuno; Rödenbeck, Christian; Arain, M. Altaf; Baldocchi, Dennis D.; Bonan, Gordon B.; Bondeau, Alberte; Cescatti, Alessandro; Lasslop, Gitta; Lindroth, Anders; Lomas, Mark; Luyssaert, Sebastiaan; Margolis, Hank; Oleson, Keith W.; Roupsard, Olivier; Veenendaal, Elmar; Viovy, Nicolas; Williams, Christopher M.; Woodward, F. Ian; Papale, Dario
2010-01-01
Terrestrial gross primary production (GPP) is the largest global CO2 flux driving several ecosystem functions. We provide an observation-based estimate of this flux at 123 ± 8 petagrams of carbon per year (Pg C year-1) using eddy covariance flux data and various diagnostic models. Tropical forests
Terrestrial gross carbon dioxide uptake: Global distribution and covariation with climate
Beer, C.; Veenendaal, E.M.
2010-01-01
Terrestrial gross primary production (GPP) is the largest global CO2 flux driving several ecosystem functions. We provide an observation-based estimate of this flux at 123 ± 8 petagrams of carbon per year (Pg C year-1) using eddy covariance flux data and various diagnostic models. Tropical forests
Cox regression with missing covariate data using a modified partial likelihood method
DEFF Research Database (Denmark)
Martinussen, Torben; Holst, Klaus K.; Scheike, Thomas H.
2016-01-01
Missing covariate values is a common problem in survival analysis. In this paper we propose a novel method for the Cox regression model that is close to maximum likelihood but avoids the use of the EM-algorithm. It exploits that the observed hazard function is multiplicative in the baseline hazard...
Power To Detect Additive Treatment Effects with Randomized Block and Analysis of Covariance Designs.
Klockars, Alan J.; Potter, Nina Salcedo; Beretvas, S. Natasha
1999-01-01
Compared the power of analysis of covariance (ANCOVA) and two types of randomized block designs as a function of the correlation between the concomitant variable and the outcome measure, the number of groups, the number of participants, and nominal power. Discusses advantages of ANCOVA. (Author/SLD)
AFCI-2.0 Library of Neutron Cross Section Covariances
Energy Technology Data Exchange (ETDEWEB)
Herman, M.; Oblozinsky, P.; Mattoon, C.; Pigni, M.; Hoblit, S.; Mughabghab, S.F.; Sonzogni, A.; Talou, P.; Chadwick, M.B.; Hale, G.M.; Kahler, A.C.; Kawano, T.; Little, R.C.; Young, P.G.
2011-06-26
A neutron cross section covariance library has been under development through a BNL-LANL collaborative effort over the last three years. The primary purpose of the library is to provide covariances for the Advanced Fuel Cycle Initiative (AFCI) data adjustment project, which is focusing on the needs of fast advanced burner reactors. The covariances refer to central values given in the 2006 release of the U.S. neutron evaluated library ENDF/B-VII. The preliminary version (AFCI-2.0beta) was completed in October 2010 and made available to the users for comments. In the final 2.0 release, covariances for a few materials were updated; in particular, new LANL evaluations for ²³⁸,²⁴⁰Pu and ²⁴¹Am were adopted. BNL was responsible for covariances for structural materials and fission products, management of the library, and coordination of the work, while LANL was in charge of covariances for light nuclei and for actinides.
Energy Technology Data Exchange (ETDEWEB)
Alfred Stadler, Franz Gross
2010-10-01
We provide a short overview of the Covariant Spectator Theory and its applications. The basic ideas are introduced through the example of a φ⁴-type theory. High-precision models of the two-nucleon interaction are presented and the results of their use in calculations of properties of the two- and three-nucleon systems are discussed. A short summary of applications of this framework to other few-body systems is also presented.
Data Selection for Within-Class Covariance Estimation
2016-09-08
covariance matrix training data collection in real-world applications. Index Terms — channel compensation, i-vectors, within-class covariance... normalized to have zero mean and unit variance. Finally, the utterance feature vectors were converted to i-vectors using a 2048-order Universal Background Model (UBM) and a rank-600 total variability (T) matrix. The estimated within-class covariance matrix was computed via [1].
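The within-class covariance matrix referred to above can be sketched as follows (illustrative only: the labels play the role of speaker identities and the synthetic data are not real i-vectors):

```python
import numpy as np

def within_class_covariance(X, labels):
    """Average covariance of the rows of X about their own class mean,
    the 'within-class' scatter used for channel compensation (e.g. WCCN)."""
    dim = X.shape[1]
    W = np.zeros((dim, dim))
    for c in np.unique(labels):
        Xc = X[labels == c]
        d = Xc - Xc.mean(axis=0)       # deviations from the class mean
        W += d.T @ d
    return W / X.shape[0]

# Two 'speakers' with well-separated means but unit within-class spread;
# W should recover the (identity) within-class covariance, not the
# between-class separation.
rng = np.random.default_rng(5)
X = np.vstack([rng.normal(0.0, 1.0, (2000, 3)),
               rng.normal(5.0, 1.0, (2000, 3))])
labels = np.repeat([0, 1], 2000)
W = within_class_covariance(X, labels)
```

Because each class is centered on its own mean before accumulating the scatter, the large between-class offset does not inflate W.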
On covariant Poisson brackets in classical field theory
Energy Technology Data Exchange (ETDEWEB)
Forger, Michael [Instituto de Matemática e Estatística, Universidade de São Paulo, Caixa Postal 66281, BR–05315-970 São Paulo, SP (Brazil); Salles, Mário O. [Instituto de Matemática e Estatística, Universidade de São Paulo, Caixa Postal 66281, BR–05315-970 São Paulo, SP (Brazil); Centro de Ciências Exatas e da Terra, Universidade Federal do Rio Grande do Norte, Campus Universitário – Lagoa Nova, BR–59078-970 Natal, RN (Brazil)
2015-10-15
How to give a natural geometric definition of a covariant Poisson bracket in classical field theory has for a long time been an open problem—as testified by the extensive literature on “multisymplectic Poisson brackets,” together with the fact that all these proposals suffer from serious defects. On the other hand, the functional approach does provide a good candidate which has come to be known as the Peierls–De Witt bracket and whose construction in a geometrical setting is now well understood. Here, we show how the basic “multisymplectic Poisson bracket” already proposed in the 1970s can be derived from the Peierls–De Witt bracket, applied to a special class of functionals. This relation allows us to trace back most (if not all) of the problems encountered in the past to ambiguities (the relation between differential forms on multiphase space and the functionals they define is not one-to-one) and also to the fact that this class of functionals does not form a Poisson subalgebra.
Fully Bayesian inference under ignorable missingness in the presence of auxiliary covariates.
Daniels, M J; Wang, C; Marcus, B H
2014-03-01
In order to make a missing at random (MAR) or ignorability assumption realistic, auxiliary covariates are often required. However, the auxiliary covariates are not desired in the model for inference. Typical multiple imputation approaches do not assume that the imputation model marginalizes to the inference model. This has been termed "uncongenial" [Meng (1994, Statistical Science 9, 538-558)]. In order to make the two models congenial (or compatible), we would rather not assume a parametric model for the marginal distribution of the auxiliary covariates, but we typically do not have enough data to estimate the joint distribution well non-parametrically. In addition, when the imputation model uses a non-linear link function (e.g., the logistic link for a binary response), the marginalization over the auxiliary covariates to derive the inference model typically results in a difficult to interpret form for the effect of covariates. In this article, we propose a fully Bayesian approach to ensure that the models are compatible for incomplete longitudinal data by embedding an interpretable inference model within an imputation model and that also addresses the two complications described above. We evaluate the approach via simulations and implement it on a recent clinical trial. © 2013, The International Biometric Society.
Junker, Robert R; Kuppler, Jonas; Amo, Luisa; Blande, James D; Borges, Renee M; van Dam, Nicole M; Dicke, Marcel; Dötterl, Stefan; Ehlers, Bodil K; Etl, Florian; Gershenzon, Jonathan; Glinwood, Robert; Gols, Rieta; Groot, Astrid T; Heil, Martin; Hoffmeister, Mathias; Holopainen, Jarmo K; Jarau, Stefan; John, Lena; Kessler, Andre; Knudsen, Jette T; Kost, Christian; Larue-Kontic, Anne-Amélie C; Leonhardt, Sara Diana; Lucas-Barbosa, Dani; Majetic, Cassie J; Menzel, Florian; Parachnowitsch, Amy L; Pasquet, Rémy S; Poelman, Erik H; Raguso, Robert A; Ruther, Joachim; Schiestl, Florian P; Schmitt, Thomas; Tholl, Dorothea; Unsicker, Sybille B; Verhulst, Niels; Visser, Marcel E; Weldegergis, Berhane T; Köllner, Tobias G
2017-03-03
Chemical communication is ubiquitous. The identification of conserved structural elements in visual and acoustic communication is well established, but comparable information on chemical communication displays (CCDs) is lacking. We assessed the phenotypic integration of CCDs in a meta-analysis to characterize patterns of covariation in CCDs and identified functional or biosynthetically constrained modules. Poorly integrated plant CCDs (i.e. low covariation between scent compounds) support the notion that plants often utilize one or few key compounds to repel antagonists or to attract pollinators and enemies of herbivores. Animal CCDs (mostly insect pheromones) were usually more integrated than those of plants (i.e. stronger covariation), suggesting that animals communicate via fixed proportions among compounds. Both plant and animal CCDs were composed of modules, which are groups of strongly covarying compounds. Biosynthetic similarity of compounds revealed biosynthetic constraints in the covariation patterns of plant CCDs. We provide a novel perspective on chemical communication and a basis for future investigations on structural properties of CCDs. This will facilitate identifying modules and biosynthetic constraints that may affect the outcome of selection and thus provide a predictive framework for evolutionary trajectories of CCDs in plants and animals. © 2017 The Authors. New Phytologist © 2017 New Phytologist Trust.
Disruption of structural covariance networks for language in autism is modulated by verbal ability.
Sharda, Megha; Khundrakpam, Budhachandra S; Evans, Alan C; Singh, Nandini C
2016-03-01
The presence of widespread speech and language deficits is a core feature of autism spectrum disorders (ASD). These impairments have often been attributed to altered connections between brain regions. Recent developments in anatomical correlation-based approaches to map structural covariance offer an effective way of studying such connections in vivo. In this study, we employed such a structural covariance network (SCN)-based approach to investigate the integrity of anatomical networks in fronto-temporal brain regions of twenty children with ASD compared to an age- and gender-matched control group of twenty-two children. Our findings reflected large-scale disruption of inter- and intrahemispheric covariance in left frontal SCNs in the ASD group compared to controls, but no differences in right fronto-temporal SCNs. Interhemispheric covariance in left-seeded networks was further found to be modulated by the verbal ability of the participants irrespective of autism diagnosis, suggesting that language function might be related to the strength of interhemispheric structural covariance between frontal regions. Additionally, regional cortical thickening was observed in right frontal and left posterior regions, which was predicted by decreasing symptom severity and increasing verbal ability in ASD. These findings unify reports of regional differences in cortical morphology in ASD. They also suggest that reduced left hemisphere asymmetry and increased frontal growth may not only reflect neurodevelopmental aberrations but also compensatory mechanisms.
Performance of growth mixture models in the presence of time-varying covariates.
Diallo, Thierno M O; Morin, Alexandre J S; Lu, HuiZhong
2017-10-01
Growth mixture modeling is often used to identify unobserved heterogeneity in populations. Despite the usefulness of growth mixture modeling in practice, little is known about the performance of this data analysis technique in the presence of time-varying covariates. In the present simulation study, we examined the impacts of five design factors: the proportion of the total variance of the outcome explained by the time-varying covariates, the number of time points, the error structure, the sample size, and the mixing ratio. More precisely, we examined the impact of these factors on the accuracy of parameter and standard error estimates, as well as on the class enumeration accuracy. Our results showed that the consistent Akaike information criterion (CAIC), the sample-size-adjusted CAIC (SCAIC), the Bayesian information criterion (BIC), and the integrated completed likelihood criterion (ICL-BIC) proved to be highly reliable indicators of the true number of latent classes in the data, across design conditions, and that the sample-size-adjusted BIC (SBIC) also proved quite accurate, especially in larger samples. In contrast, the Akaike information criterion (AIC), the entropy, the normalized entropy criterion (NEC), and the classification likelihood criterion (CLC) proved to be unreliable indicators of the true number of latent classes in the data. Our results also showed that substantial biases in the parameter and standard error estimates tended to be associated with growth mixture models that included only four time points.
Testing for causality in covarying traits: genes and latitude in a molecular world.
O'Brien, Conor; Bradshaw, William E; Holzapfel, Christina M
2011-06-01
Many traits are assumed to have a causal (necessary) relationship with one another because of their common covariation with a physiological, ecological or geographical factor. Herein, we demonstrate a straightforward test for inferring causality using residuals from regression of the traits with the common factor. We illustrate this test using the covariation with latitude of a proxy for the circadian clock and a proxy for the photoperiodic timer in Drosophila and salmon. A negative result of this test means that further discussion of the adaptive significance of a causal connection between the covarying traits is unwarranted. A positive result of this test provides a point of departure that can then be used as a platform from which to determine experimentally the underlying functional connections and only then to discuss their adaptive significance.
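The residual-regression test described can be sketched on synthetic data (the trait names and coefficients below are invented; both traits covary with latitude without any direct link between them):

```python
import numpy as np

rng = np.random.default_rng(7)
n = 300
latitude = rng.uniform(30.0, 60.0, n)

# Two traits that both track latitude but are NOT causally connected.
trait_clock = 0.1 * latitude + rng.normal(0.0, 1.0, n)
trait_timer = 0.2 * latitude + rng.normal(0.0, 1.0, n)

def residuals(y, x):
    """Residuals from an ordinary least-squares fit of y on x."""
    slope, intercept = np.polyfit(x, y, 1)
    return y - (slope * x + intercept)

raw_corr = np.corrcoef(trait_clock, trait_timer)[0, 1]
res_corr = np.corrcoef(residuals(trait_clock, latitude),
                       residuals(trait_timer, latitude))[0, 1]
# The raw correlation is substantial, but the correlation of the
# residuals (latitude removed) collapses toward zero: a negative
# result, so no causal link between the traits need be invoked.
```

A genuinely causal trait pair would retain correlation in the residuals even after the common factor is regressed out.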
Asymptotic theory for the sample covariance matrix of a heavy-tailed multivariate time series
DEFF Research Database (Denmark)
Davis, Richard A.; Mikosch, Thomas Valentin; Pfaffel, Olivier
2016-01-01
In this paper we give an asymptotic theory for the eigenvalues of the sample covariance matrix of a multivariate time series. The time series constitutes a linear process across time and between components. The input noise of the linear process has regularly varying tails with index α∈(0,4); in particular, the time series has infinite fourth moment. We derive the limiting behavior for the largest eigenvalues of the sample covariance matrix and show point process convergence of the normalized eigenvalues. The limiting process has an explicit form involving points of a Poisson process and eigenvalues of a non-negative definite matrix. Based on this convergence we derive limit theory for a host of other continuous functionals of the eigenvalues, including the joint convergence of the largest eigenvalues, the joint convergence of the largest eigenvalue and the trace of the sample covariance matrix...
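A quick numerical illustration of the phenomenon (not the paper's derivation; the sample sizes and tail index are arbitrary choices within the stated range α∈(0,4)):

```python
import numpy as np

rng = np.random.default_rng(3)
n, p, alpha = 2000, 20, 1.5

# iid noise with Pareto tails of index alpha = 1.5 (infinite variance,
# hence certainly infinite fourth moment), with random signs attached.
Z = rng.pareto(alpha, (n, p)) * rng.choice([-1.0, 1.0], size=(n, p))

S = Z.T @ Z / n                          # sample covariance matrix
eig = np.sort(np.linalg.eigvalsh(S))[::-1]

# With such heavy tails the spectrum is typically dominated by a few
# huge eigenvalues driven by the largest individual entries of Z,
# unlike the light-tailed (Marchenko-Pastur) picture.
ratio = eig[0] / eig.sum()
```

Repeating the experiment across seeds shows `ratio` fluctuating rather than concentrating, which is the qualitative signature of the Poisson-point limits derived in the paper.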
Carroll, Raymond J.
2011-03-01
In many applications we can expect that, or are interested to know if, a density function or a regression curve satisfies some specific shape constraints. For example, when the explanatory variable, X, represents the value taken by a treatment or dosage, the conditional mean of the response, Y , is often anticipated to be a monotone function of X. Indeed, if this regression mean is not monotone (in the appropriate direction) then the medical or commercial value of the treatment is likely to be significantly curtailed, at least for values of X that lie beyond the point at which monotonicity fails. In the case of a density, common shape constraints include log-concavity and unimodality. If we can correctly guess the shape of a curve, then nonparametric estimators can be improved by taking this information into account. Addressing such problems requires a method for testing the hypothesis that the curve of interest satisfies a shape constraint, and, if the conclusion of the test is positive, a technique for estimating the curve subject to the constraint. Nonparametric methodology for solving these problems already exists, but only in cases where the covariates are observed precisely. However in many problems, data can only be observed with measurement errors, and the methods employed in the error-free case typically do not carry over to this error context. In this paper we develop a novel approach to hypothesis testing and function estimation under shape constraints, which is valid in the context of measurement errors. Our method is based on tilting an estimator of the density or the regression mean until it satisfies the shape constraint, and we take as our test statistic the distance through which it is tilted. Bootstrap methods are used to calibrate the test. The constrained curve estimators that we develop are also based on tilting, and in that context our work has points of contact with methodology in the error-free case.
Parallel evolution controlled by adaptation and covariation in ammonoid cephalopods
Directory of Open Access Journals (Sweden)
Klug Christian
2011-04-01
Full Text Available Abstract Background A major goal in evolutionary biology is to understand the processes that shape the evolutionary trajectory of clades. The repeated and similar large-scale morphological evolutionary trends of distinct lineages suggest that adaptation by means of natural selection (functional constraints) is the major cause of parallel evolution, a very common phenomenon in extinct and extant lineages. However, parallel evolution can result from other processes, which are usually ignored or difficult to identify, such as developmental constraints. Hence, understanding the underlying processes of parallel evolution still requires further research. Results Herein, we present a possible case of parallel evolution between two ammonoid lineages (Auguritidae and Pinacitidae) of Early-Middle Devonian age (405-395 Ma), which are extinct cephalopods with an external, chambered shell. In time and through phylogenetic order of appearance, both lineages display a morphological shift toward more involute coiling (i.e. more tightly coiled whorls), larger adult body size, more complex suture line (the folded walls separating the gas-filled buoyancy-chambers), and the development of an umbilical lid (a very peculiar extension of the lateral shell wall covering the umbilicus) in the most derived taxa. Increased involution toward shells with closed umbilicus has been demonstrated to reflect improved hydrodynamic properties of the shell and thus likely results from similar natural selection pressures. The peculiar umbilical lid might have also added to the improvement of the hydrodynamic properties of the shell. Finally, increasing complexity of suture lines likely results from covariation induced by trends of increasing adult size and whorl overlap given the morphogenetic properties of the suture. Conclusions The morphological evolution of these two Devonian ammonoid lineages follows a near parallel evolutionary path for some important shell characters during several
Parallel evolution controlled by adaptation and covariation in ammonoid cephalopods.
Monnet, Claude; De Baets, Kenneth; Klug, Christian
2011-04-29
A major goal in evolutionary biology is to understand the processes that shape the evolutionary trajectory of clades. The repeated and similar large-scale morphological evolutionary trends of distinct lineages suggest that adaptation by means of natural selection (functional constraints) is the major cause of parallel evolution, a very common phenomenon in extinct and extant lineages. However, parallel evolution can result from other processes, which are usually ignored or difficult to identify, such as developmental constraints. Hence, understanding the underlying processes of parallel evolution still requires further research. Herein, we present a possible case of parallel evolution between two ammonoid lineages (Auguritidae and Pinacitidae) of Early-Middle Devonian age (405-395 Ma), which are extinct cephalopods with an external, chambered shell. In time and through phylogenetic order of appearance, both lineages display a morphological shift toward more involute coiling (i.e. more tightly coiled whorls), larger adult body size, more complex suture line (the folded walls separating the gas-filled buoyancy-chambers), and the development of an umbilical lid (a very peculiar extension of the lateral shell wall covering the umbilicus) in the most derived taxa. Increased involution toward shells with closed umbilicus has been demonstrated to reflect improved hydrodynamic properties of the shell and thus likely results from similar natural selection pressures. The peculiar umbilical lid might have also added to the improvement of the hydrodynamic properties of the shell. Finally, increasing complexity of suture lines likely results from covariation induced by trends of increasing adult size and whorl overlap given the morphogenetic properties of the suture. The morphological evolution of these two Devonian ammonoid lineages follows a near parallel evolutionary path for some important shell characters during several million years and through their phylogeny. Evolution
Huang, Yangxin; Yan, Chunning; Xing, Dongyuan; Zhang, Nanhua; Chen, Henian
2015-01-01
In longitudinal studies it is often of interest to investigate how a repeatedly measured marker in time is associated with the time to an event of interest. This type of research question has given rise to a rapidly developing field of biostatistics research that deals with the joint modeling of longitudinal and time-to-event data. Normality of model errors in the longitudinal model is a routine assumption, but it may unrealistically obscure important features of subject variations. Covariates are usually introduced in the models to partially explain between- and within-subject variations, but some covariates, such as CD4 cell count, may often be measured with substantial error. Moreover, the responses may be subject to nonignorable missingness. Statistical analysis may become dramatically complicated in longitudinal-survival joint models where longitudinal data with skewness, missing values, and measurement errors are observed. In this article, we relax the distributional assumptions for the longitudinal models using a skewed (parametric) distribution and an unspecified (nonparametric) distribution given by a Dirichlet process prior, and address the simultaneous influence of skewness, missingness, covariate measurement error, and the time-to-event process by jointly modeling three components (the response process with missing values, the covariate process with measurement errors, and the time-to-event process), linked through the random effects that characterize the underlying individual-specific longitudinal processes, in a Bayesian analysis. The method is illustrated with an AIDS study by jointly modeling HIV/CD4 dynamics and time to viral rebound, in comparison with potential models under various scenarios and different distributional specifications.
Interpretive Error in Radiology.
Waite, Stephen; Scott, Jinel; Gale, Brian; Fuchs, Travis; Kolla, Srinivas; Reede, Deborah
2017-04-01
Although imaging technology has advanced significantly since the work of Garland in 1949, interpretive error rates remain unchanged. In addition to patient harm, interpretive errors are a major cause of litigation and distress to radiologists. In this article, we discuss the mechanics involved in searching an image, categorize omission errors, and discuss factors influencing diagnostic accuracy. Potential individual- and system-based solutions to mitigate or eliminate errors are also discussed. Radiologists use visual detection, pattern recognition, memory, and cognitive reasoning to synthesize final interpretations of radiologic studies. This synthesis is performed in an environment in which there are numerous extrinsic distractors, increasing workloads and fatigue. Given the ultimately human task of perception, some degree of error is likely inevitable even with experienced observers. However, an understanding of the causes of interpretive errors can help in the development of tools to mitigate errors and improve patient safety.
Generalized Functional Linear Models With Semiparametric Single-Index Interactions
Li, Yehua
2010-06-01
We introduce a new class of functional generalized linear models, where the response is a scalar and some of the covariates are functional. We assume that the response depends on multiple covariates, a finite number of latent features in the functional predictor, and interaction between the two. To achieve parsimony, the interaction between the multiple covariates and the functional predictor is modeled semiparametrically with a single-index structure. We propose a two-step estimation procedure based on local estimating equations and investigate two situations: (a) when the basis functions are pre-determined, e.g., Fourier or wavelet basis functions, and the functional features of interest are known; and (b) when the basis functions are data driven, such as with functional principal components. Asymptotic properties are developed. Notably, we show that when the functional features are data driven, the parameter estimates have an increased asymptotic variance, due to the estimation error of the basis functions. Our methods are illustrated with a simulation study and applied to an empirical data set, where a previously unknown interaction is detected. Technical proofs of our theoretical results are provided in the online supplemental materials.
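The data-driven case (b) can be illustrated with a minimal simulation sketch. Everything below is hypothetical (dimensions, two-mode signal, noise levels), and an SVD-based functional principal component step stands in for the latent features; this is a sketch of the idea, not the authors' local estimating-equation procedure:

```python
import numpy as np

rng = np.random.default_rng(0)
n, m = 200, 50                      # subjects, grid points per curve
t = np.linspace(0, 1, m)

# simulate functional covariates X_i(t) built from two smooth latent modes
phi1, phi2 = np.sin(2*np.pi*t), np.cos(2*np.pi*t)
scores = rng.normal(size=(n, 2)) * [2.0, 1.0]
X = scores @ np.vstack([phi1, phi2]) + 0.1*rng.normal(size=(n, m))

# scalar response driven by the latent scores (linear link for simplicity)
y = 1.5*scores[:, 0] - 0.8*scores[:, 1] + 0.2*rng.normal(size=n)

# data-driven basis: functional principal components via SVD of centered curves
Xc = X - X.mean(axis=0)
U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
est_scores = Xc @ Vt[:2].T          # estimated FPC scores; the basis is estimated

# regress y on the *estimated* scores; the estimation error in the basis
# is what inflates the asymptotic variance of the parameter estimates
design = np.column_stack([np.ones(n), est_scores])
beta, *_ = np.linalg.lstsq(design, y, rcond=None)
```

The fitted residual variance approaches the noise variance, but the coefficients carry extra sampling variability inherited from the estimated basis.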
Selection and evolution of causally covarying traits.
Morrissey, Michael B
2014-06-01
When traits cause variation in fitness, the distribution of phenotype, weighted by fitness, necessarily changes. The degree to which traits cause fitness variation is therefore of central importance to evolutionary biology. Multivariate selection gradients are the main quantity used to describe components of trait-fitness covariation, but they quantify the direct effects of traits on (relative) fitness, which are not necessarily the total effects of traits on fitness. Despite considerable use in evolutionary ecology, path analytic characterizations of the total effects of traits on fitness have not been formally incorporated into quantitative genetic theory. By formally defining "extended" selection gradients, which are the total effects of traits on fitness, as opposed to the existing definition of selection gradients, a more intuitive scheme for characterizing selection is obtained. Extended selection gradients are distinct quantities, differing from the standard definition of selection gradients not only in the statistical means by which they may be assessed and the assumptions required for their estimation from observational data, but also in their fundamental biological meaning. Like direct selection gradients, extended selection gradients can be combined with genetic inference of multivariate phenotypic variation to provide quantitative prediction of microevolutionary trajectories. © 2014 The Author(s). Evolution © 2014 The Society for the Study of Evolution.
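Under standard path-analysis rules, the total-effect matrix among traits is $(I-B)^{-1}$ for a (strictly lower-triangular) direct-effect matrix $B$, so total effects of traits on fitness can be obtained from direct selection gradients by a matrix product. A hypothetical two-trait sketch, with all coefficients invented for illustration:

```python
import numpy as np

# direct effects among traits: B[i, j] = direct effect of trait j on trait i;
# here trait 0 affects trait 1 with (invented) path coefficient 0.5
B = np.array([[0.0, 0.0],
              [0.5, 0.0]])

beta = np.array([0.2, 0.4])          # direct selection gradients (traits -> fitness)

# total effects of traits on each other, including the identity path
Phi = np.linalg.inv(np.eye(2) - B)

# "extended" selection gradients: total effects of traits on fitness,
# e.g. trait 0 acts directly (0.2) and through trait 1 (0.5 * 0.4)
eta = Phi.T @ beta
```

Trait 0's extended gradient (0.4) exceeds its direct gradient (0.2) because part of its effect on fitness is routed through trait 1.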
Cai, Gaigai; Chen, Xuefeng; Li, Bing; Chen, Baojia; He, Zhengjia
2012-09-25
The reliability of cutting tools is critical to machining precision and production efficiency. The conventional statistic-based reliability assessment method aims at providing a general and overall estimation of reliability for a large population of identical units under given and fixed conditions. However, it has limited effectiveness in depicting the operational characteristics of a cutting tool. To overcome this limitation, this paper proposes an approach to assess the operation reliability of cutting tools. A proportional covariate model is introduced to construct the relationship between operation reliability and condition monitoring information. The wavelet packet transform and an improved distance evaluation technique are used to extract sensitive features from vibration signals, and a covariate function is constructed based on the proportional covariate model. Ultimately, the failure rate function of the cutting tool being assessed is calculated using the baseline covariate function obtained from a small sample of historical data. Experimental results and a comparative study show that the proposed method is effective for assessing the operation reliability of cutting tools.
McDermitt, D. K.; Fratini, G.; Papale, D.
2013-12-01
Apparent instrumental biases can cause errors in gas concentration measurements by infrared gas analysers (IRGAs) used in eddy-covariance flux measurements. Such biases most often result from deposition of atmospheric pollutants, such as aerosols, pollen, or particulate matter, on surfaces in the optical path, which in some cases can cause differential signal attenuation in the sample and reference channels of the gas analyser. We refer to such biases as apparent to stress that they are not the result of an intrinsic loss of instrumental performance, but rather of incidental and avoidable deployment artefacts. Nonetheless, due to the curvilinear nature of IRGA calibration curves, they can cause errors in eddy-covariance fluxes, resulting from reduced accuracy of the gas concentration measurement. In this work we describe the phenomenological and mathematical foundations of these concentration biases, also showing how measurements from different IRGA designs are affected as a result. By means of numerical simulations, we find that concentration biases can lead to roughly proportional systematic flux errors, where the fractional errors in fluxes are roughly 30-40% of the fractional errors in concentrations. We also propose a correction procedure and provide recommendations for field deployment and operation to minimize or completely eliminate such errors. The correction procedure will soon be available in the EddyPro software (www.licor.com/eddypro).
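The reported proportionality lends itself to a quick back-of-the-envelope estimate. The helper below is purely illustrative, taking the 35% midpoint of the reported 30-40% range as an assumed factor:

```python
def flux_error_fraction(conc_error_fraction, k=0.35):
    """Approximate fractional systematic flux error from a fractional
    concentration bias, using the reported proportionality factor k
    (0.30-0.40 per the simulations; 0.35 is the assumed midpoint)."""
    return k * conc_error_fraction

# e.g. a 2% concentration bias from contaminated optics
# translates to roughly a 0.7% systematic flux error
flux_bias = flux_error_fraction(0.02)
```

With k between 0.30 and 0.40 the same 2% bias would give flux errors between 0.6% and 0.8%, which brackets the single-number estimate above.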
Covariances of nuclear matrix elements for 0νββ decay
Energy Technology Data Exchange (ETDEWEB)
Fogli, G L; Rotunno, A M [Dipartimento Interateneo di Fisica 'Michelangelo Merlin', Via Orabona 4, 70126 Bari (Italy); Lisi, E, E-mail: annamaria.rotunno@ba.infn.i [Istituto Nazionale di Fisica Nucleare, Sezione di Bari, Via Orabona 4, 70126 Bari (Italy)
2010-01-01
Estimates of nuclear matrix elements (NME) for neutrinoless double beta decay (0νββ) based on the quasiparticle random phase approximation (QRPA) are affected by theoretical uncertainties, which may play a dominant role in comparison with projected experimental errors of future 0νββ experiments. We discuss the estimated variances and covariances of the NME of several candidate nuclei within the QRPA, focusing on the following aspects: (1) the comparison of 0νββ signals, or limits, in different nuclei; and (2) the prospects for testing nonstandard 0ν2β mechanisms in future experiments.
Electron scattering disintegration processes on light nuclei in covariant approach
Directory of Open Access Journals (Sweden)
Kuznietsov P.E.
2016-01-01
We provide a general analysis of the electro-breakup process of a compound scalar system. We use a covariant approach with a conserved EM current, which makes it possible to include the strong interaction into QED. We thus obtain the ability to describe disintegration processes on nonlocal matter fields by applying the standard Feynman rules of QED. The inclusion of a phase exponent into the wave function acquires a physical sense when the strong interaction dominates the process. We apply the Green's function (GF) formalism to describe disintegration processes. A generalized gauge-invariant electro-breakup amplitude is considered; it is a sum of the traditional pole series and a regular part. We explore the contributions of the regular part of the amplitude and its physical sense. The transition from virtual to real photons is considered in the photon-point limit. A general analysis of the electro-breakup process of a compound scalar system is given. Exactly conserved nuclear electromagnetic currents at arbitrary squared momentum transfer are obtained. The only undefined quantity in the theory is the vertex function. Therefore, we can describe electron scattering processes taking into account a minimal necessary set of parameters.
Electron scattering disintegration processes on light nuclei in covariant approach
Kuznietsov, P. E.; Kasatkin, Yu. A.; Klepikov, V. F.
2016-07-01
We provide a general analysis of the electro-breakup process of a compound scalar system. We use a covariant approach with a conserved EM current, which makes it possible to include the strong interaction into QED. We thus obtain the ability to describe disintegration processes on nonlocal matter fields by applying the standard Feynman rules of QED. The inclusion of a phase exponent into the wave function acquires a physical sense when the strong interaction dominates the process. We apply the Green's function (GF) formalism to describe disintegration processes. A generalized gauge-invariant electro-breakup amplitude is considered; it is a sum of the traditional pole series and a regular part. We explore the contributions of the regular part of the amplitude and its physical sense. The transition from virtual to real photons is considered in the photon-point limit. A general analysis of the electro-breakup process of a compound scalar system is given. Exactly conserved nuclear electromagnetic currents at arbitrary squared momentum transfer are obtained. The only undefined quantity in the theory is the vertex function. Therefore, we can describe electron scattering processes taking into account a minimal necessary set of parameters.
Covariant Spectator Theory of np scattering: Isoscalar interaction currents
Energy Technology Data Exchange (ETDEWEB)
Gross, Franz L. [JLAB
2014-06-01
Using the Covariant Spectator Theory (CST), one-boson-exchange (OBE) models have been found that give precision fits to low-energy $np$ scattering and the deuteron binding energy. The boson-nucleon vertices used in these models contain a momentum dependence that requires a new class of interaction currents for use with electromagnetic interactions. Current conservation requires that these new interaction currents satisfy a two-body Ward-Takahashi (WT) identity, and using principles of \textit{simplicity} and \textit{picture independence}, these currents can be uniquely determined. The results lead to general formulae for a two-body current that can be expressed in terms of relativistic $np$ wave functions, $\Psi$, and two convenient truncated wave functions, $\Psi^{(2)}$ and $\widehat{\Psi}$, which contain all of the information needed for the explicit evaluation of the contributions from the interaction current. These three wave functions can be calculated from the CST bound- or scattering-state equations (and their off-shell extrapolations). A companion paper uses this formalism to evaluate the deuteron magnetic moment.
Error Propagation in Equations for Geochemical Modeling of ...
Indian Academy of Sciences (India)
This paper presents error propagation equations for modeling of radiogenic isotopes during mixing of two components or end-members. These equations can be used to estimate errors on an isotopic ratio in the mixture of two components, as a function of the analytical errors or the total errors of geological field sampling ...
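As a hedged illustration of the kind of calculation involved, the sketch below propagates uncorrelated Gaussian errors through the standard two-component isotope-mixing equation by numerical differentiation. The end-member values are invented, and the paper's own closed-form propagation equations are not reproduced here:

```python
import numpy as np

def mixture_ratio(f, Ca, Ra, Cb, Rb):
    """Isotopic ratio of a two-component mixture.
    f: mixing fraction of component A; Ca, Cb: element concentrations;
    Ra, Rb: isotopic ratios of the two end-members."""
    return (f*Ca*Ra + (1 - f)*Cb*Rb) / (f*Ca + (1 - f)*Cb)

def propagated_error(f, params, sigmas, h=1e-6):
    """First-order Gaussian error propagation, sigma_R^2 = sum (dR/dp_i)^2 s_i^2,
    with central-difference partial derivatives. Assumes uncorrelated errors."""
    var = 0.0
    for i, (p, s) in enumerate(zip(params, sigmas)):
        hi, lo = list(params), list(params)
        hi[i] += h*p
        lo[i] -= h*p
        dRdp = (mixture_ratio(f, *hi) - mixture_ratio(f, *lo)) / (2*h*p)
        var += (dRdp * s)**2
    return np.sqrt(var)

# invented Sr-isotope-like example: crust-like vs mantle-like end-members
# params = (Ca, Ra, Cb, Rb); sigmas are their 1-sigma analytical errors
sig = propagated_error(0.5, (400.0, 0.7100, 200.0, 0.7035),
                       (4.0, 0.0001, 2.0, 0.0001))
```

For these numbers the ratio errors dominate, so the mixture-ratio uncertainty comes out close to a concentration-weighted combination of the two analytical ratio errors.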
Sampling of systematic errors to estimate likelihood weights in nuclear data uncertainty propagation
Helgesson, P.; Sjöstrand, H.; Koning, A. J.; Rydén, J.; Rochman, D.; Alhassan, E.; Pomp, S.
2016-01-01
In methodologies for nuclear data (ND) uncertainty assessment and propagation based on random sampling, likelihood weights can be used to infer experimental information into the distributions for the ND. As the included number of correlated experimental points grows large, the computational time for the matrix inversion involved in obtaining the likelihood can become a practical problem. There are also other problems related to the conventional computation of the likelihood, e.g., the assumption that all experimental uncertainties are Gaussian. In this study, a way to estimate the likelihood which avoids matrix inversion is investigated; instead, the experimental correlations are included by sampling of systematic errors. It is shown that the model underlying the sampling methodology (using univariate normal distributions for random and systematic errors) implies a multivariate Gaussian for the experimental points (i.e., the conventional model). It is also shown that the likelihood estimates obtained through sampling of systematic errors approach the likelihood obtained with matrix inversion as the sample size for the systematic errors grows large. In studied practical cases, it is seen that the estimates for the likelihood weights converge impractically slowly with the sample size, compared to matrix inversion. The computational time is estimated to be greater than for matrix inversion in cases with more experimental points, too. Hence, the sampling of systematic errors has little potential to compete with matrix inversion in cases where the latter is applicable. Nevertheless, the underlying model and the likelihood estimates can be easier to intuitively interpret than the conventional model and the likelihood function involving the inverted covariance matrix. Therefore, this work can both have pedagogical value and be used to help motivating the conventional assumption of a multivariate Gaussian for experimental data. The sampling of systematic errors could also
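The equivalence between the two routes to the likelihood can be sketched for a toy case with one systematic error source: the exact multivariate-Gaussian likelihood (which needs the inverted covariance matrix) is compared with a Monte Carlo average over sampled systematic errors. All numbers are illustrative, not nuclear data:

```python
import numpy as np

rng = np.random.default_rng(1)
t = np.array([1.0, 2.0, 3.0])          # model predictions
y = np.array([1.1, 2.1, 2.9])          # "experimental" points
sig_rand = np.array([0.1, 0.1, 0.2])   # independent (random) uncertainties
c = np.array([0.05, 0.10, 0.15])       # sensitivities to one systematic error

# conventional route: multivariate Gaussian with full covariance, via inversion
cov = np.diag(sig_rand**2) + np.outer(c, c)
r = y - t
L_exact = (np.exp(-0.5 * r @ np.linalg.solve(cov, r))
           / np.sqrt((2*np.pi)**3 * np.linalg.det(cov)))

# sampling route: marginalize the systematic error by Monte Carlo; conditional
# on each sampled shift, the points are independent univariate Gaussians
eps = rng.normal(size=200_000)                       # unit systematic errors
res = y[None, :] - (t[None, :] + eps[:, None]*c[None, :])
per_point = np.exp(-0.5*(res/sig_rand)**2) / (np.sqrt(2*np.pi)*sig_rand)
L_mc = per_point.prod(axis=1).mean()
```

As the abstract notes, the sampled estimate converges to the matrix-inversion value as the number of systematic-error samples grows, but the convergence can be slow relative to the cost of a single inversion.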
Sampling of systematic errors to estimate likelihood weights in nuclear data uncertainty propagation
Energy Technology Data Exchange (ETDEWEB)
Helgesson, P., E-mail: petter.helgesson@physics.uu.se [Department of Physics and Astronomy, Uppsala University, Box 516, 751 20 Uppsala (Sweden); Nuclear Research and Consultancy Group NRG, Petten (Netherlands); Sjöstrand, H. [Department of Physics and Astronomy, Uppsala University, Box 516, 751 20 Uppsala (Sweden); Koning, A.J. [Nuclear Research and Consultancy Group NRG, Petten (Netherlands); Department of Physics and Astronomy, Uppsala University, Box 516, 751 20 Uppsala (Sweden); Rydén, J. [Department of Mathematics, Uppsala University, Uppsala (Sweden); Rochman, D. [Paul Scherrer Institute PSI, Villigen (Switzerland); Alhassan, E.; Pomp, S. [Department of Physics and Astronomy, Uppsala University, Box 516, 751 20 Uppsala (Sweden)
2016-01-21
In methodologies for nuclear data (ND) uncertainty assessment and propagation based on random sampling, likelihood weights can be used to infer experimental information into the distributions for the ND. As the included number of correlated experimental points grows large, the computational time for the matrix inversion involved in obtaining the likelihood can become a practical problem. There are also other problems related to the conventional computation of the likelihood, e.g., the assumption that all experimental uncertainties are Gaussian. In this study, a way to estimate the likelihood which avoids matrix inversion is investigated; instead, the experimental correlations are included by sampling of systematic errors. It is shown that the model underlying the sampling methodology (using univariate normal distributions for random and systematic errors) implies a multivariate Gaussian for the experimental points (i.e., the conventional model). It is also shown that the likelihood estimates obtained through sampling of systematic errors approach the likelihood obtained with matrix inversion as the sample size for the systematic errors grows large. In studied practical cases, it is seen that the estimates for the likelihood weights converge impractically slowly with the sample size, compared to matrix inversion. The computational time is estimated to be greater than for matrix inversion in cases with more experimental points, too. Hence, the sampling of systematic errors has little potential to compete with matrix inversion in cases where the latter is applicable. Nevertheless, the underlying model and the likelihood estimates can be easier to intuitively interpret than the conventional model and the likelihood function involving the inverted covariance matrix. Therefore, this work can both have pedagogical value and be used to help motivating the conventional assumption of a multivariate Gaussian for experimental data. The sampling of systematic errors could also
Recurrence Analysis of Eddy Covariance Fluxes
Lange, Holger; Flach, Milan; Foken, Thomas; Hauhs, Michael
2015-04-01
The eddy covariance (EC) method is one key method to quantify fluxes in biogeochemical cycles in general, and carbon and energy transport across the vegetation-atmosphere boundary layer in particular. EC data from the worldwide net of flux towers (Fluxnet) have also been used to validate biogeochemical models. The high-resolution data are usually obtained at a 20 Hz sampling rate but are affected by missing values and other restrictions. In this contribution, we investigate the nonlinear dynamics of EC fluxes using Recurrence Analysis (RA). High-resolution data from the site DE-Bay (Waldstein-Weidenbrunnen) and fluxes calculated at half-hourly resolution from eight locations (part of the La Thuile dataset) provide a set of very long time series to analyze. After careful quality assessment and Fluxnet-standard gapfilling pretreatment, we calculate properties and indicators of the recurrent structure based both on Recurrence Plots and on Recurrence Networks. Time series of RA measures obtained from windows moving along the time axis are presented. Their interpretation is guided by five questions: (1) Is RA able to discern periods where the (atmospheric) conditions are particularly suitable to obtain reliable EC fluxes? (2) Is RA capable of detecting dynamical transitions (different behavior) beyond those obvious from visual inspection? (3) Does RA contribute to an understanding of the nonlinear synchronization between EC fluxes and atmospheric parameters, which is crucial both for improving carbon flux models and for reliable interpolation of gaps? (4) Is RA able to recommend an optimal time resolution for measuring EC data and for analyzing EC fluxes? (5) Is it possible to detect non-trivial periodicities with a global RA? We will demonstrate that the answers to all five questions are affirmative, and that RA provides insights into EC dynamics not easily obtained otherwise.
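A minimal sketch of the recurrence-plot machinery underlying RA: the recurrence rate of a scalar series is the fraction of pairwise distances below a threshold. The series and threshold below are toy choices, not EC data:

```python
import numpy as np

def recurrence_rate(x, eps):
    """Simplest RA measure: fraction of point pairs closer than eps.
    The boolean matrix R is the (unthinned) recurrence plot itself."""
    d = np.abs(x[:, None] - x[None, :])   # pairwise distance matrix (scalar series)
    R = d < eps                           # recurrence plot as a boolean matrix
    return R.mean()

t = np.linspace(0, 20*np.pi, 2000)
periodic = np.sin(t)                                   # strongly recurrent signal
noise = np.random.default_rng(2).normal(size=2000)     # weakly recurrent signal

rr_periodic = recurrence_rate(periodic, 0.1)
rr_noise = recurrence_rate(noise, 0.1)
```

A periodic series revisits its own neighborhoods far more often than white noise at the same threshold, which is exactly the kind of contrast a windowed RA measure exploits when scanning for dynamical transitions.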
Imposed quasi-normality in covariance structure analysis
Koning, Ruud H.; Neudecker, H.; Wansbeek, T.
1993-01-01
In the analysis of covariance structures, the distance between an observed covariance matrix S of order k x k and C(θ) = E(S) is minimized by searching over the θ-space. The criterion leading to a best asymptotically normal (BAN) estimator of θ is found by minimizing the difference between vec S and
Empirical Performance of Covariates in Education Observational Studies
Wong, Vivian C.; Valentine, Jeffrey C.; Miller-Bains, Kate
2017-01-01
This article summarizes results from 12 empirical evaluations of observational methods in education contexts. We look at the performance of three common covariate types in observational studies where the outcome is a standardized reading or math test. They are: pretest measures, local geographic matching, and rich covariate sets with a strong…
A three domain covariance framework for EEG/MEG data
Ros, B.P.; Bijma, F.; de Gunst, M.C.M.; de Munck, J.C.
2015-01-01
In this paper we introduce a covariance framework for the analysis of single subject EEG and MEG data that takes into account observed temporal stationarity on small time scales and trial-to-trial variations. We formulate a model for the covariance matrix, which is a Kronecker product of three
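A Kronecker-structured covariance of this kind can be sketched as follows. The dimensions and factor matrices are invented; the point is that the factorization keeps the full covariance, and its inverse, cheap to form:

```python
import numpy as np

# toy dimensions: sensors (space), time samples, trials
p_space, p_time, p_trial = 4, 3, 2

def random_spd(n, seed):
    """A well-conditioned random symmetric positive-definite factor."""
    A = np.random.default_rng(seed).normal(size=(n, n))
    return A @ A.T + n*np.eye(n)

S_space = random_spd(p_space, 0)
S_time = random_spd(p_time, 1)
S_trial = random_spd(p_trial, 2)

# covariance of the vectorized data: Kronecker product of the three factors
C = np.kron(S_trial, np.kron(S_time, S_space))

# inv(A (x) B) = inv(A) (x) inv(B): invert three small factors
# instead of one 24 x 24 matrix, and store far fewer parameters
C_inv = np.kron(np.linalg.inv(S_trial),
                np.kron(np.linalg.inv(S_time), np.linalg.inv(S_space)))
```

For EEG/MEG-sized problems the savings are substantial: three small factor matrices replace one covariance whose side length is the product of the three dimensions.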
Using transformation algorithms to estimate (co)variance ...
African Journals Online (AJOL)
... to multiple traits by the use of canonical transformations. A computing strategy is developed for use on large data sets employing two different REML algorithms for the estimation of (co)variance components. Results from a simulation study indicate that (co)variance components can be estimated efficiently at a low cost on ...
Validity of covariance models for the analysis of geographical variation
DEFF Research Database (Denmark)
Guillot, Gilles; Schilling, Rene L.; Porcu, Emilio
2014-01-01
1. Due to the availability of large molecular data-sets, covariance models are increasingly used to describe the structure of genetic variation as an alternative to more heavily parametrised biological models. 2. We focus here on a class of parametric covariance models that received sustained...
Propensity score matching and unmeasured covariate imbalance: A simulation study
Ali, M. Sanni|info:eu-repo/dai/nl/345709497; Groenwold, Rolf H.H.; Belitser, Svetlana V.; Hoes, Arno W.; De Boer, A.|info:eu-repo/dai/nl/075097346; Klungel, Olaf H.|info:eu-repo/dai/nl/181447649
2014-01-01
Background: Selecting covariates for adjustment or inclusion in propensity score (PS) analysis is a trade-off between reducing confounding bias and a risk of amplifying residual bias by unmeasured confounders. Objectives: To assess the covariate balancing properties of PS matching with respect to
Intraoral radiographic errors.
Patel, J R
1979-11-01
The purpose of this investigation was to examine intraoral radiography with regard to the frequency of errors, the types of errors necessitating retakes, and the relationship of error frequency to the tooth area examined and the type of x-ray cone used. The study examined 283 complete-mouth radiographic surveys, in which 890 radiographs were found to be clinically unacceptable for one or more errors in technique, a rate of 13.1 errors per one hundred radiographs. The three major radiographic errors in this study were incorrect film placement (49.9 percent), cone-cutting (20.8 percent), and incorrect vertical angulation (12.5 percent).
Newton law in covariant unimodular $F(R)$ gravity
Nojiri, S; Oikonomou, V K
2016-01-01
We propose a covariant ghost-free unimodular $F(R)$ gravity theory, which contains a three-form field, and study its structure using the analogy of the proposed theory with a quantum system describing a charged particle in a uniform magnetic field. Newton's law in non-covariant unimodular $F(R)$ gravity, as well as in unimodular Einstein gravity, is derived and shown to be just the same as in General Relativity. The derivation of Newton's law in covariant unimodular $F(R)$ gravity shows that it is modified precisely in the same way as in the ordinary $F(R)$ theory. We also demonstrate that the cosmology of a Friedmann-Robertson-Walker background is equivalent in the non-covariant and covariant formulations of unimodular $F(R)$ theory.
Noble, Viveca K.
1993-11-01
There are various elements, such as radio frequency interference (RFI), which may induce errors in data being transmitted via a satellite communication link. When a transmission is affected by interference or other error-causing elements, the transmitted data become indecipherable, and it becomes necessary to implement techniques to recover from these disturbances. The objective of this research is to develop software which simulates error control circuits and to evaluate the performance of these modules in various bit error rate environments. The results of the evaluation provide the engineer with information which helps determine the optimal error control scheme. The Consultative Committee for Space Data Systems (CCSDS) recommends the use of Reed-Solomon (RS) and convolutional encoders and Viterbi and RS decoders for error correction. The use of forward error correction techniques greatly reduces the received signal-to-noise ratio needed for a certain desired bit error rate. The use of concatenated coding, e.g. an inner convolutional code and an outer RS code, provides even greater coding gain. The 16-bit cyclic redundancy check (CRC) code is recommended by CCSDS for error detection.
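As a hedged sketch of the error-detection side, the following implements a bitwise CRC-16 with the CCITT polynomial and all-ones preset, the combination commonly associated with CCSDS frame error control (the exact parameters for a given mission should be checked against the applicable CCSDS book; the frame contents here are invented):

```python
def crc16_ccitt(data: bytes, poly=0x1021, crc=0xFFFF) -> int:
    """Bitwise CRC-16 over the CCITT polynomial x^16 + x^12 + x^5 + 1,
    all-ones preset, no reflection, no final XOR."""
    for byte in data:
        crc ^= byte << 8
        for _ in range(8):
            crc = ((crc << 1) ^ poly) if crc & 0x8000 else crc << 1
            crc &= 0xFFFF
    return crc

frame = b"hypothetical telemetry frame"
check = crc16_ccitt(frame)

# a receiver recomputes the CRC over frame + appended check word; for this
# unreflected, zero-xorout variant the residual is 0 if no errors occurred
residual = crc16_ccitt(frame + check.to_bytes(2, "big"))
```

The zero-residual check is what makes the CRC cheap to verify in hardware: the receiver runs the same shift-register circuit over the whole frame, check word included, and tests for zero.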
Identifying systematic DFT errors in catalytic reactions
DEFF Research Database (Denmark)
Christensen, Rune; Hansen, Heine Anton; Vegge, Tejs
2015-01-01
Using CO2 reduction reactions as examples, we present a widely applicable method for identifying the main source of errors in density functional theory (DFT) calculations. The method has broad applications for error correction in DFT calculations in general, as it relies on the dependence of the ...
Structural Covariance of Sensory Networks, the Cerebellum, and Amygdala in Autism Spectrum Disorder
Directory of Open Access Journals (Sweden)
Garrett J. Cardon
2017-11-01
Sensory dysfunction is a core symptom of autism spectrum disorder (ASD), and abnormalities with sensory responsivity and processing can be extremely debilitating to ASD patients and their families. However, relatively little is known about the underlying neuroanatomical and neurophysiological factors that lead to sensory abnormalities in ASD. Investigation into these aspects of ASD could lead to significant advancements in our general knowledge about ASD, as well as provide targets for treatment and inform diagnostic procedures. Thus, the current study aimed to measure the covariation of volumes of brain structures (i.e., structural magnetic resonance imaging) that may be involved in abnormal sensory processing, in order to infer connectivity of these brain regions. Specifically, we quantified the structural covariation of sensory-related cerebral cortical structures, in addition to the cerebellum and amygdala, by computing partial correlations between the structural volumes of these structures. These analyses were performed in participants with ASD (n = 36), as well as typically developing peers (n = 32). Results showed decreased structural covariation between sensory-related cortical structures, especially between the left and right cerebral hemispheres, in participants with ASD. In contrast, these same participants presented with increased structural covariation of structures in the right cerebral hemisphere. Additionally, sensory-related cerebral structures exhibited decreased structural covariation with functionally identified cerebellar networks. Also, the left amygdala showed significantly increased structural covariation with cerebral structures related to visual processing. Taken together, these results may suggest several patterns of altered connectivity both within and between cerebral cortices and other brain structures that may be related to sensory processing.
Language Ability Predicts Cortical Structure and Covariance in Boys with Autism Spectrum Disorder.
Sharda, Megha; Foster, Nicholas E V; Tryfon, Ana; Doyle-Thomas, Krissy A R; Ouimet, Tia; Anagnostou, Evdokia; Evans, Alan C; Zwaigenbaum, Lonnie; Lerch, Jason P; Lewis, John D; Hyde, Krista L
2017-03-01
There is significant clinical heterogeneity in language and communication abilities of individuals with Autism Spectrum Disorders (ASD). However, no consistent pathology regarding the relationship of these abilities to brain structure has emerged. Recent developments in anatomical correlation-based approaches to map structural covariance networks (SCNs), combined with detailed behavioral characterization, offer an alternative for studying these relationships. In this study, such an approach was used to study the integrity of SCNs of cortical thickness and surface area associated with language and communication, in 46 high-functioning, school-age children with ASD compared with 50 matched, typically developing controls (all males) with IQ > 75. Findings showed that there was alteration of cortical structure and disruption of fronto-temporal cortical covariance in ASD compared with controls. Furthermore, in an analysis of a subset of ASD participants, alterations in both cortical structure and covariance were modulated by structural language ability of the participants, but not communicative function. These findings indicate that structural language abilities are related to altered fronto-temporal cortical covariance in ASD, much more than symptom severity or cognitive ability. They also support the importance of better characterizing ASD samples while studying brain structure and for better understanding individual differences in language and communication abilities in ASD. © The Author 2016. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com.
Multisensor Parallel Largest Ellipsoid Distributed Data Fusion with Unknown Cross-Covariances.
Liu, Baoyu; Zhan, Xingqun; Zhu, Zheng H
2017-06-29
As the largest ellipsoid (LE) data fusion algorithm can only be applied to a two-sensor system, in this contribution a parallel fusion structure is proposed to introduce the LE algorithm into a multisensor system with unknown cross-covariances, and three parallel fusion structures based on different estimate pairing methods are presented and analyzed. In order to assess the influence of fusion structure on fusion performance, two assessment parameters are defined: Fusion Distance and Fusion Index. Moreover, the formula for calculating the upper bounds of the actual fused error covariances of the presented multisensor LE fusers is also provided. As demonstrated with simulation examples, the Fusion Index indicates a fuser's actual fused accuracy, its sensitivity to the sensor order, and its robustness to the accuracy of newly added sensors. Compared to the LE fuser with a sequential structure, the LE fusers with the proposed parallel structures not only significantly improve these properties but also offer better consistency and computational efficiency. The presented multisensor LE fusers are generally more accurate than the covariance intersection (CI) fusion algorithm and are consistent when the local estimates are weakly correlated.
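The covariance intersection baseline against which the LE fusers are compared can be sketched as follows. This is the generic two-estimate CI fuser with a grid search over the mixing weight, not the paper's LE algorithm; the helper name and example numbers are illustrative:

```python
import numpy as np

def covariance_intersection(x1, P1, x2, P2, n_grid=101):
    """Fuse two estimates whose cross-covariance is unknown.

    CI forms Pf^{-1} = w*P1^{-1} + (1-w)*P2^{-1} and picks the weight w
    minimizing trace(Pf), which keeps the fused covariance consistent
    (non-optimistic) for any actual cross-covariance.
    """
    P1i, P2i = np.linalg.inv(P1), np.linalg.inv(P2)
    best_x, best_P = x1, P1                       # w = 1 as starting point
    for w in np.linspace(0.0, 1.0, n_grid):
        Pf = np.linalg.inv(w * P1i + (1.0 - w) * P2i)
        if np.trace(Pf) < np.trace(best_P):
            best_x = Pf @ (w * P1i @ x1 + (1.0 - w) * P2i @ x2)
            best_P = Pf
    return best_x, best_P

# Two local estimates of the same 2-D state, each accurate in a different axis.
x1, P1 = np.array([1.0, 0.0]), np.diag([1.0, 4.0])
x2, P2 = np.array([1.2, 0.1]), np.diag([4.0, 1.0])
xf, Pf = covariance_intersection(x1, P1, x2, P2)
```

Because the optimal weight is never worse than using either estimate alone, the fused covariance trace is at most the smaller of the two input traces.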
Quantum-Wave Equation and Heisenberg Inequalities of Covariant Quantum Gravity
Directory of Open Access Journals (Sweden)
Claudio Cremaschini
2017-07-01
Full Text Available Key aspects of the manifestly-covariant theory of quantum gravity (Cremaschini and Tessarotto 2015–2017) are investigated. These refer, first, to the establishment of the four-scalar, manifestly-covariant evolution quantum wave equation, denoted as the covariant quantum gravity (CQG) wave equation, which advances the quantum state ψ associated with a prescribed background space-time. In this paper, the CQG wave equation is proved to follow at once by means of a Hamilton–Jacobi quantization of the classical variational tensor field g ≡ g_{μν} and its conjugate momentum, referred to as (canonical) g-quantization. The same equation is also shown to be variational and to follow from a synchronous variational principle identified here with the quantum Hamilton variational principle. The corresponding quantum hydrodynamic equations are then obtained upon introducing the Madelung representation for ψ, which provides an equivalent statistical interpretation of the CQG wave equation. Finally, the quantum state ψ is proven to fulfill generalized Heisenberg inequalities, relating the statistical measurement errors of quantum observables. These are shown to be represented in terms of the standard deviations of the metric tensor g ≡ g_{μν} and its quantum conjugate momentum operator.
The Variance-covariance Method using IOWGA Operator for Tourism Forecast Combination
Directory of Open Access Journals (Sweden)
Liangping Wu
2014-08-01
Full Text Available Three combination methods commonly used in tourism forecasting are the simple average method, the variance-covariance method and the discounted MSFE method. These methods assign to each individual forecasting model weights that cannot change over time. In this study, we introduce into tourism forecasting the IOWGA operator combination method, which overcomes this defect of the three previous combination methods. Moreover, we further investigate the performance of the four combination methods through a theoretical evaluation and a forecasting evaluation. The results of the theoretical evaluation show that the IOWGA operator combination method performs extremely well and outperforms the other forecast combination methods. Furthermore, in the forecasting evaluation the IOWGA operator combination method also performs well, almost matching the variance-covariance combination method. The IOWGA operator combination method mainly reflects the maximization of forecasting accuracy, whereas the variance-covariance combination method mainly reflects the minimization of forecast error. For future research, it may be worthwhile to introduce and examine other new combination methods that may improve forecasting accuracy, or to employ other techniques to control the timing of weight updates in combined forecasts.
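For two forecasts, the classical variance-covariance combination the abstract refers to has a closed-form weight derived from the historical forecast errors. A minimal sketch on synthetic errors (the data are made up for illustration):

```python
import numpy as np

def var_cov_weight(e1, e2):
    """Weight on forecast 1 that minimizes the combined error variance.

    For errors with variances s11, s22 and covariance s12:
    w = (s22 - s12) / (s11 + s22 - 2*s12).
    """
    s11, s22 = np.var(e1), np.var(e2)
    s12 = np.cov(e1, e2, bias=True)[0, 1]
    return (s22 - s12) / (s11 + s22 - 2.0 * s12)

rng = np.random.default_rng(0)
e1 = rng.normal(0.0, 1.0, 500)               # errors of model 1
e2 = 0.3 * e1 + rng.normal(0.0, 0.8, 500)    # correlated errors of model 2
w = var_cov_weight(e1, e2)
e_comb = w * e1 + (1.0 - w) * e2             # combined forecast error
```

This is exactly the fixed-weight behavior the IOWGA operator method is designed to relax: the weight depends only on the error moments, not on the time point.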
Metagenomic covariation along densely sampled environmental gradients in the Red Sea
Thompson, Luke R
2016-07-15
Oceanic microbial diversity covaries with physicochemical parameters. Temperature, for example, explains approximately half of global variation in surface taxonomic abundance. It is unknown, however, whether covariation patterns hold over narrower parameter gradients and spatial scales, or extend to mesopelagic depths. We collected and sequenced 45 epipelagic and mesopelagic microbial metagenomes on a meridional transect through the eastern Red Sea. We asked which environmental parameters explain the most variation in relative abundances of taxonomic groups, gene ortholog groups, and pathways, at a spatial scale of <2000 km, along narrow but well-defined latitudinal and depth-dependent gradients. We also asked how microbes are adapted to gradients and extremes in irradiance, temperature, salinity, and nutrients, examining the responses of individual gene ortholog groups to these parameters. Functional and taxonomic metrics were equally well explained (75–79%) by environmental parameters. However, only functional and not taxonomic covariation patterns were conserved when comparing with an intruding water mass with different physicochemical properties. Temperature explained the most variation in each metric, followed by nitrate, chlorophyll, phosphate, and salinity. That nitrate explained more variation than phosphate suggested nitrogen limitation, consistent with low surface N:P ratios. Covariation of gene ortholog groups with environmental parameters revealed patterns of functional adaptation to the challenging Red Sea environment: high irradiance, temperature, salinity, and low nutrients. Nutrient-acquisition gene ortholog groups were anti-correlated with concentrations of their respective nutrient species, recapturing trends previously observed across much larger distances and environmental gradients. This dataset of metagenomic covariation along densely sampled environmental gradients includes online data exploration supplements, serving as a community
Covariance of dynamic strain responses for structural damage detection
Li, X. Y.; Wang, L. X.; Law, S. S.; Nie, Z. H.
2017-10-01
A new approach to address practical problems with condition evaluation/damage detection of structures is proposed, based on the distinct features of a new damage index. The covariance of strain response function (CoS) is a function of the modal parameters of the structure. A local stiffness reduction in the structure causes a monotonic increase in the CoS. Its sensitivity matrix with respect to local damage is negative and narrow-banded. The damage extent can be estimated with an approximation to the sensitivity matrix that decouples the identification equations. The CoS sensitivity can be calibrated in practice from two previous states of measurements to estimate approximately the damage extent of a structure. A seven-storey plane frame structure is studied numerically to illustrate the features of the CoS index and the proposed method, and a steel circular arch is tested in the laboratory. Natural frequencies changed due to damage in the arch, so the occurrence of damage could be judged from them. The proposed CoS method, however, identifies not only the occurrence of damage but also its location, and even its extent, without the need for an analytical model. It is promising for structural condition evaluation of selected components.
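As a rough illustration of the index itself (not the paper's sensitivity-based identification), the covariance matrix of multi-channel strain responses and its change between a reference state and a current state can be computed as below; the synthetic data and the 1.5x response scaling are assumptions for illustration:

```python
import numpy as np

def cos_matrix(strains):
    """Covariance of strain responses; strains has shape (n_samples, n_channels)."""
    return np.cov(strains, rowvar=False)

def cos_change(ref, cur):
    """Change in the CoS index between two measured states; a consistent
    increase at some channels hints at a local stiffness reduction there."""
    return cos_matrix(cur) - cos_matrix(ref)

rng = np.random.default_rng(1)
ref = rng.normal(0.0, 1.0, (2000, 4))   # strains in the reference state
cur = rng.normal(0.0, 1.0, (2000, 4))   # strains in the current state
cur[:, 2] *= 1.5                        # channel 2 now responds more strongly
delta = cos_change(ref, cur)
```

In this toy example the diagonal entry for channel 2 grows markedly while the others stay near zero, mimicking the localized increase the CoS index exploits.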
ErrorCheck: A New Method for Controlling the Accuracy of Pose Estimates
DEFF Research Database (Denmark)
Holm, Preben Hagh Strunge; Petersen, Henrik Gordon
2011-01-01
of a validated pose refinement method. ErrorCheck uses a theoretical estimate of the pose error covariance both for validating robustness and controlling the accuracy.We illustrate the first usage of ErrorCheck by applying it to state-of-the-art methods for pose refinement and some variations of these methods......In this paper, we present ErrorCheck, which is a new method for controlling the accuracy of a computer vision based pose refinement method. ErrorCheck consists of a way for validating robustness of a pose refinement method towards false correspondences and a way of controlling the accuracy...
Ding, Aidong Adam; Hsieh, Jin-Jian; Wang, Weijing
2015-01-01
Bivariate survival analysis has wide applications. In the presence of covariates, most literature focuses on studying their effects on the marginal distributions. However, covariates can also affect the association between the two variables. In this article we consider the latter issue by proposing a nonstandard local linear estimator for the concordance probability as a function of covariates. Under the Clayton copula, the conditional concordance probability has a simple one-to-one correspondence with the copula parameter for different data structures including those subject to independent or dependent censoring and dependent truncation. The proposed method can be used to study how covariates affect the Clayton association parameter without specifying marginal regression models. Asymptotic properties of the proposed estimators are derived and their finite-sample performances are examined via simulations. Finally, for illustration, we apply the proposed method to analyze a bone marrow transplant data set.
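The one-to-one correspondence the abstract relies on can be made explicit: under a Clayton copula with parameter θ > 0, Kendall's τ = θ/(θ+2), so the concordance probability is c = (1+τ)/2 = (θ+1)/(θ+2). A minimal conversion sketch (the function names are illustrative):

```python
def concordance_from_theta(theta):
    """Concordance probability under a Clayton copula, theta > 0."""
    return (theta + 1.0) / (theta + 2.0)

def theta_from_concordance(c):
    """Inverse map, valid for 1/2 < c < 1."""
    return (2.0 * c - 1.0) / (1.0 - c)
```

Because the map is monotone and invertible, estimating the conditional concordance probability as a function of covariates is equivalent to estimating the conditional Clayton parameter.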
Error handling strategies in multiphase inverse modeling
Energy Technology Data Exchange (ETDEWEB)
Finsterle, S.; Zhang, Y.
2010-12-01
Parameter estimation by inverse modeling involves the repeated evaluation of a function of residuals. These residuals represent both errors in the model and errors in the data. In practical applications of inverse modeling of multiphase flow and transport, the error structure of the final residuals often significantly deviates from the statistical assumptions that underlie standard maximum likelihood estimation using the least-squares method. Large random or systematic errors are likely to lead to convergence problems, biased parameter estimates, misleading uncertainty measures, or poor predictive capabilities of the calibrated model. The multiphase inverse modeling code iTOUGH2 supports strategies that identify and mitigate the impact of systematic or non-normal error structures. We discuss these approaches and provide an overview of the error handling features implemented in iTOUGH2.
Action errors, error management, and learning in organizations.
Frese, Michael; Keith, Nina
2015-01-03
Every organization is confronted with errors. Most errors are corrected easily, but some may lead to negative consequences. Organizations often focus on error prevention as a single strategy for dealing with errors. Our review suggests that error prevention needs to be supplemented by error management--an approach directed at effectively dealing with errors after they have occurred, with the goal of minimizing negative and maximizing positive error consequences (examples of the latter are learning and innovations). After defining errors and related concepts, we review research on error-related processes affected by error management (error detection, damage control). Empirical evidence on positive effects of error management in individuals and organizations is then discussed, along with emotional, motivational, cognitive, and behavioral pathways of these effects. Learning from errors is central, but like other positive consequences, learning occurs under certain circumstances--one being the development of a mind-set of acceptance of human error.
Merceret, Francis; Lane, John; Immer, Christopher; Case, Jonathan; Manobianco, John
2005-01-01
The contour error map (CEM) algorithm and the software that implements the algorithm are means of quantifying correlations between sets of time-varying data that are binarized and registered on spatial grids. The present version of the software is intended for use in evaluating numerical weather forecasts against observational sea-breeze data. In cases in which observational data come from off-grid stations, it is necessary to preprocess the observational data to transform them into gridded data. First, the wind direction is gridded and binarized so that D(i,j;n) is the input to CEM based on forecast data and d(i,j;n) is the input to CEM based on gridded observational data. Here, i and j are spatial indices representing 1.25-km intervals along the west-to-east and south-to-north directions, respectively; and n is a time index representing 5-minute intervals. A binary value of D or d = 0 corresponds to an offshore wind, whereas a value of D or d = 1 corresponds to an onshore wind. CEM includes two notable subalgorithms: one identifies and verifies sea-breeze boundaries; the other, which can be invoked optionally, performs an image-erosion function to attempt to eliminate river-breeze contributions in the wind fields.
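A minimal sketch of the binarization step described above, assuming the onshore direction is defined relative to a local coast-normal angle (the angle convention is an assumption for illustration, not the CEM specification):

```python
import numpy as np

def binarize_wind(direction_deg, coast_normal_deg):
    """Return 1 where the wind blows onshore, 0 where offshore.

    direction_deg is taken as the direction the wind blows toward, on the
    grid; a positive component along the coast normal counts as onshore.
    Both conventions are illustrative assumptions.
    """
    rel = np.deg2rad(np.asarray(direction_deg) - coast_normal_deg)
    return (np.cos(rel) > 0.0).astype(int)

# A 2x2 grid of wind directions, coast normal pointing due east (90 deg).
d = binarize_wind([[270.0, 90.0], [100.0, 260.0]], coast_normal_deg=90.0)
```

Applying this to the forecast and observed direction grids yields the binary fields D(i,j;n) and d(i,j;n) that CEM compares.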
Correction for quadrature errors
DEFF Research Database (Denmark)
Netterstrøm, A.; Christensen, Erik Lintz
1994-01-01
In high bandwidth radar systems it is necessary to use quadrature devices to convert the signal to/from baseband. Practical problems make it difficult to implement a perfect quadrature system. Channel imbalance and quadrature phase errors in the transmitter and the receiver result in error signals...
White, Andrew A; Gallagher, Thomas H
2013-01-01
Errors occur commonly in healthcare and can cause significant harm to patients. Most errors arise from a combination of individual, system, and communication failures. Neurologists may be involved in harmful errors in any practice setting and should familiarize themselves with tools to prevent, report, and examine errors. Although physicians, patients, and ethicists endorse candid disclosure of harmful medical errors to patients, many physicians express uncertainty about how to approach these conversations. A growing body of research indicates physicians often fail to meet patient expectations for timely and open disclosure. Patients desire information about the error, an apology, and a plan for preventing recurrence of the error. To meet these expectations, physicians should participate in event investigations and plan thoroughly for each disclosure conversation, preferably with a disclosure coach. Physicians should also anticipate and attend to the ongoing medical and emotional needs of the patient. A cultural change towards greater transparency following medical errors is in motion. Substantial progress is still required, but neurologists can further this movement by promoting policies and environments conducive to open reporting, respectful disclosure to patients, and support for the healthcare workers involved. © 2013 Elsevier B.V. All rights reserved.
Graphical representation of covariant-contravariant modal formulae
Directory of Open Access Journals (Sweden)
Miguel Palomino
2011-08-01
Full Text Available Covariant-contravariant simulation is a combination of standard (covariant) simulation, its contravariant counterpart and bisimulation. We have previously studied its logical characterization by means of the covariant-contravariant modal logic. Moreover, we have investigated the relationships between this model and that of modal transition systems, where two kinds of transitions (the so-called may and must transitions) were combined in order to obtain a simple framework to express a notion of refinement over state-transition models. In a classic paper, Boudol and Larsen established a precise connection between the graphical approach, by means of modal transition systems, and the logical approach, based on Hennessy-Milner logic without negation, to system specification. They obtained a (graphical) representation theorem proving that a formula can be represented by a term if, and only if, it is consistent and prime. We show in this paper that the formulae from the covariant-contravariant modal logic that admit a "graphical" representation by means of processes, modulo the covariant-contravariant simulation preorder, are also the consistent and prime ones. In order to obtain the desired graphical representation result, we first restrict ourselves to the case of covariant-contravariant systems without bivariant actions. Bivariant actions can be incorporated later by means of an encoding that splits each bivariant action into its covariant and its contravariant parts.
Covariance of maximum likelihood evolutionary distances between sequences aligned pairwise
Directory of Open Access Journals (Sweden)
Dessimoz Christophe
2008-06-01
Full Text Available Abstract Background The estimation of a distance between two biological sequences is a fundamental process in molecular evolution. It is usually performed by maximum likelihood (ML) on characters aligned either pairwise or jointly in a multiple sequence alignment (MSA). Estimators for the covariance of pairs from an MSA are known, but we are not aware of any solution for cases of pairs aligned independently. In large-scale analyses, it may be too costly to compute MSAs every time distances must be compared, and therefore a covariance estimator for distances estimated from pairs aligned independently is desirable. Knowledge of covariances improves any process that compares or combines distances, such as in generalized least-squares phylogenetic tree building, orthology inference, or lateral gene transfer detection. Results In this paper, we introduce an estimator for the covariance of distances from sequences aligned pairwise. Its performance is analyzed through extensive Monte Carlo simulations, and compared to the well-known variance estimator of ML distances. Our covariance estimator can be used together with the ML variance estimator to form covariance matrices. Conclusion The estimator performs similarly to the ML variance estimator. In particular, it shows no sign of bias when sequence divergence is below 150 PAM units (i.e. above ~29% expected sequence identity). Above that distance, the covariances tend to be underestimated, but then ML variances are also underestimated.
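For context, the single-pair case the paper builds on is textbook material: under the Jukes-Cantor model, the ML distance from a proportion p of differing sites among n compared sites, and its delta-method variance, have closed forms. This sketch shows that case only, not the paper's pairwise-alignment covariance estimator:

```python
import math

def jc_distance(p):
    """Jukes-Cantor ML distance (substitutions/site) from the
    proportion p of differing sites, valid for p < 3/4."""
    return -0.75 * math.log(1.0 - 4.0 * p / 3.0)

def jc_variance(p, n):
    """Delta-method variance of the JC distance over n compared sites:
    Var(d) = p*(1-p) / (n * (1 - 4p/3)**2)."""
    return p * (1.0 - p) / (n * (1.0 - 4.0 * p / 3.0) ** 2)

d = jc_distance(0.10)        # slightly above 0.10 due to multiple hits
v = jc_variance(0.10, 1000)
```

The correction pushes the distance above the raw difference proportion, and the variance grows sharply as p approaches 3/4, which is why covariances at large divergences are hard to estimate.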
Berg, P.; Reimers, C. E.; Rosman, J. H.; Huettel, M.; Delgard, M. L.; Reidenbach, M. A.; Özkan-Haller, H. T.
2015-11-01
Extracting benthic oxygen fluxes from eddy covariance time series measured in the presence of surface gravity waves requires careful consideration of the temporal alignment of the vertical velocity and the oxygen concentration. Using a model based on linear wave theory and measured eddy covariance data, we show that a substantial error in flux can arise if these two variables are not aligned correctly in time. We refer to this error in flux as the time lag bias. In one example, produced with the wave model, we found that an offset of 0.25 s between the oxygen and the velocity data produced a 2-fold overestimation of the flux. In another example, relying on nighttime data measured over a seagrass meadow, a similar offset reversed the flux from an uptake of -50 mmol m⁻² d⁻¹ to a release of 40 mmol m⁻² d⁻¹. The bias is most acute for data measured at shallow-water sites with short-period waves and low current velocities. At moderate or higher current velocities (> 5-10 cm s⁻¹), the bias is usually insignificant. The widely used traditional time shift correction for data measured in unidirectional flows, where the maximum numerical flux is sought, should not be applied in the presence of waves because it tends to maximize the time lag bias or give unrealistic flux estimates. Based on wave model predictions and measured data, we propose a new time lag correction that minimizes the time lag bias. The correction requires that the time series of both vertical velocity and oxygen concentration contain a clear periodic wave signal. Because wave motions are often evident in eddy covariance data measured at shallow-water sites, we encourage more work on identifying new time lag corrections.
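The traditional correction the authors warn against scans the eddy flux over candidate time shifts and keeps the shift with the maximum numerical flux. A minimal sketch of that scan on synthetic data (variable names are illustrative, and np.roll's wraparound is a simplification of real windowed processing):

```python
import numpy as np

def flux_vs_lag(w, c, max_lag):
    """Eddy flux <w'c'> as a function of the shift (in samples) applied to c."""
    wp = w - w.mean()
    fluxes = {}
    for k in range(-max_lag, max_lag + 1):
        cs = np.roll(c, k)
        fluxes[k] = float(np.mean(wp * (cs - cs.mean())))
    return fluxes

rng = np.random.default_rng(2)
w = rng.normal(0.0, 1.0, 4096)   # vertical velocity fluctuations
c = np.roll(w, 3)                # oxygen trace lagging w by 3 samples
f = flux_vs_lag(w, c, max_lag=10)
k_best = max(f, key=f.get)       # shift with maximum numerical flux
```

In turbulence-dominated data this recovers the true sensor lag, but under wave-dominated velocities the same maximization locks onto the wave phase and inflates the flux, which is the time lag bias the paper quantifies.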
Covariate-adjusted measures of discrimination for survival data
DEFF Research Database (Denmark)
White, Ian R; Rapsomaniki, Eleni; Frikke-Schmidt, Ruth
2015-01-01
statistics in censored survival data. OBJECTIVE: To develop extensions of the C-index and D-index that describe the prognostic ability of a model adjusted for one or more covariate(s). METHOD: We define a covariate-adjusted C-index and D-index for censored survival data, propose several estimators......, and investigate their performance in simulation studies and in data from a large individual participant data meta-analysis, the Emerging Risk Factors Collaboration. RESULTS: The proposed methods perform well in simulations. In the Emerging Risk Factors Collaboration data, the age-adjusted C-index and D-index were...
Theory of Covariance Equivalent ARMAV Models of Civil Engineering Structures
DEFF Research Database (Denmark)
Andersen, P.; Brincker, Rune; Kirkegaard, Poul Henning
1996-01-01
In this paper the theoretical background for using covariance equivalent ARMAV models in modal analysis is discussed. It is shown how to obtain a covariance equivalent ARMA model for a univariate linear second order continuous-time system excited by Gaussian white noise. This result is generalized...... for multi-variate systems to an ARMAV model. The covariance equivalent model structure is also considered when the number of channels are different from the number of degrees of freedom to be modelled. Finally, it is reviewed how to estimate an ARMAV model from sampled data....
A spatial error model with continuous random effects and an application to growth convergence
Laurini, Márcio Poletti
2017-10-01
We propose a spatial error model with continuous random effects based on Matérn covariance functions and apply this model for the analysis of income convergence processes (β-convergence). The use of a model with continuous random effects permits a clearer visualization and interpretation of the spatial dependency patterns, avoids the problems of defining neighborhoods in spatial econometrics models, and allows projecting the spatial effects for every possible location in the continuous space, circumventing the existing aggregations in discrete lattice representations. We apply this model approach to analyze the economic growth of Brazilian municipalities between 1991 and 2010 using unconditional and conditional formulations and a spatiotemporal model of convergence. The results indicate that the estimated spatial random effects are consistent with the existence of income convergence clubs for Brazilian municipalities in this period.
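The Matérn family underlying the continuous random effects has simple closed forms at half-integer smoothness. For example, with smoothness ν = 3/2 (a common default, assumed here purely for illustration):

```python
import numpy as np

def matern32(d, sigma2=1.0, rho=1.0):
    """Matern covariance with smoothness nu = 3/2:
    C(d) = sigma2 * (1 + sqrt(3)*d/rho) * exp(-sqrt(3)*d/rho),
    where d is distance, sigma2 the variance, rho the range."""
    a = np.sqrt(3.0) * np.asarray(d, dtype=float) / rho
    return sigma2 * (1.0 + a) * np.exp(-a)

d = np.array([0.0, 0.5, 1.0, 2.0])
cov = matern32(d, sigma2=2.0, rho=1.0)
```

Because the covariance is defined for every distance d, spatial effects can be predicted at any location in continuous space, which is the advantage over discrete-lattice neighborhood specifications noted in the abstract.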
Directory of Open Access Journals (Sweden)
Kovin S Naidoo
2012-01-01
Full Text Available Global estimates indicate that more than 2.3 billion people in the world suffer from poor vision due to refractive error; of which 670 million people are considered visually impaired because they do not have access to corrective treatment. Refractive errors, if uncorrected, result in an impaired quality of life for millions of people worldwide, irrespective of their age, sex and ethnicity. Over the past decade, a series of studies using a survey methodology, referred to as Refractive Error Study in Children (RESC), were performed in populations with different ethnic origins and cultural settings. These studies confirmed that the prevalence of uncorrected refractive errors is considerably high for children in low-and-middle-income countries. Furthermore, uncorrected refractive error has been noted to have extensive social and economic impacts, such as limiting educational and employment opportunities of economically active persons, healthy individuals and communities. The key public health challenges presented by uncorrected refractive errors, the leading cause of vision impairment across the world, require urgent attention. To address these issues, it is critical to focus on the development of human resources and sustainable methods of service delivery. This paper discusses three core pillars to addressing the challenges posed by uncorrected refractive errors: Human Resource (HR) Development, Service Development and Social Entrepreneurship.
Uncorrected refractive errors.
Naidoo, Kovin S; Jaggernath, Jyoti
2012-01-01
Global estimates indicate that more than 2.3 billion people in the world suffer from poor vision due to refractive error; of which 670 million people are considered visually impaired because they do not have access to corrective treatment. Refractive errors, if uncorrected, result in an impaired quality of life for millions of people worldwide, irrespective of their age, sex and ethnicity. Over the past decade, a series of studies using a survey methodology, referred to as Refractive Error Study in Children (RESC), were performed in populations with different ethnic origins and cultural settings. These studies confirmed that the prevalence of uncorrected refractive errors is considerably high for children in low-and-middle-income countries. Furthermore, uncorrected refractive error has been noted to have extensive social and economic impacts, such as limiting educational and employment opportunities of economically active persons, healthy individuals and communities. The key public health challenges presented by uncorrected refractive errors, the leading cause of vision impairment across the world, require urgent attention. To address these issues, it is critical to focus on the development of human resources and sustainable methods of service delivery. This paper discusses three core pillars to addressing the challenges posed by uncorrected refractive errors: Human Resource (HR) Development, Service Development and Social Entrepreneurship.
Covariation between eumelanic pigmentation and body mass only under specific conditions
Roulin, Alexandre
2009-03-01
Identifying the factors that mediate covariation between an ornament and other phenotypic attributes is important to determine the signaling function of ornaments. Sign and magnitude of a covariation may vary across environments if the expression of the ornament or of its linked genes regulating correlated phenotypes is condition-dependent. I investigated in the barn owl Tyto alba whether sign and magnitude of covariation between body mass and two heritable melanin-based plumage ornaments change with food supply, along the reproductive cycle and from the morning to the evening. Using a dataset of 1,848 measurements of body mass in 336 breeding females, I found that females displaying large black spots were heavier than conspecifics with smaller spots in the afternoon (i.e., a long time after the last feeding) but not in the morning (i.e., a short time after the last feeding). This is consistent with the recently proposed hypothesis that eumelanin-based ornaments are associated with the ability to maintain energy balance between food intake and energy expenditure. Thus, covariation between melanin-based coloration and body mass can be detected only under specific conditions potentially explaining why it has been reported in only ten out of 28 vertebrate species. The proposition that ornamented individuals achieve a higher fitness than drab conspecifics only in specific environments should be tested for other ornaments.
Matlow, Anne; Stevens, Polly; Harrison, Christine; Laxer, Ronald M
2006-12-01
The 1999 release of the Institute of Medicine's document To Err is Human was akin to removing the lid of Pandora's box. Not only were the magnitude and impact of medical errors now apparent to those working in the health care industry, but consumers of health care were alerted to the occurrence of medical events causing harm. One specific solution advocated was the disclosure to patients and their families of adverse events resulting from medical error. Knowledge of the historical perspective, ethical underpinnings, and medico-legal implications gives us a better appreciation of current recommendations for disclosing adverse events resulting from medical error to those affected.
Empirical likelihood for cumulative hazard ratio estimation with covariate adjustment.
Dong, Bin; Matthews, David E
2012-06-01
In medical studies, it is often of scientific interest to evaluate the treatment effect via the ratio of cumulative hazards, especially when those hazards may be nonproportional. To deal with nonproportionality in the Cox regression model, investigators usually assume that the treatment effect has some functional form. However, to do so may create a model misspecification problem because it is generally difficult to justify the specific parametric form chosen for the treatment effect. In this article, we employ empirical likelihood (EL) to develop a nonparametric estimator of the cumulative hazard ratio with covariate adjustment under two nonproportional hazard models, one that is stratified, as well as a less restrictive framework involving group-specific treatment adjustment. The asymptotic properties of the EL ratio statistic are derived in each situation and the finite-sample properties of EL-based estimators are assessed via simulation studies. Simultaneous confidence bands for all values of the adjusted cumulative hazard ratio in a fixed interval of interest are also developed. The proposed methods are illustrated using two different datasets concerning the survival experience of patients with non-Hodgkin's lymphoma or ovarian cancer. © 2011, The International Biometric Society.
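The ratio-of-cumulative-hazards idea underlying this abstract can be illustrated with its basic nonparametric building block, the Nelson-Aalen estimator. The sketch below is a minimal numpy illustration of that building block only, not the authors' empirical likelihood estimator, and it omits covariate adjustment; the tie handling (events incremented one at a time) is a simplifying assumption.

```python
import numpy as np

def nelson_aalen(times, events):
    """Nelson-Aalen cumulative hazard evaluated at each event time.

    times: follow-up times; events: 1 = event observed, 0 = censored.
    Ties are handled one event at a time (a simplification).
    """
    times = np.asarray(times, dtype=float)
    events = np.asarray(events, dtype=int)
    order = np.argsort(times)
    times, events = times[order], events[order]
    n = len(times)
    grid, hazard = [], []
    h = 0.0
    for i in range(n):
        if events[i]:
            at_risk = n - i        # subjects still under observation
            h += 1.0 / at_risk
            grid.append(times[i])
            hazard.append(h)
    return np.array(grid), np.array(hazard)

def cumhaz_at(t, grid, hazard):
    """Step-function value of the cumulative hazard at time t."""
    idx = np.searchsorted(grid, t, side="right") - 1
    return 0.0 if idx < 0 else hazard[idx]
```

A hazard ratio between two groups can then be formed by evaluating each group's `cumhaz_at` on a common time point; the EL method of the abstract replaces this crude plug-in ratio with a likelihood-based, covariate-adjusted estimator.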
Covariant gaussian approximation in Ginzburg-Landau model
Wang, J. F.; Li, D. P.; Kao, H. C.; Rosenstein, B.
2017-05-01
Condensed matter systems undergoing a second order transition away from the critical fluctuation region are usually described sufficiently well by the mean field approximation. The critical fluctuation region, determined by the Ginzburg criterion, |T/Tc - 1| ≪ Gi, is narrow even in high-Tc superconductors and has universal features well captured by the renormalization group method. However, recent experiments on magnetization, conductivity and the Nernst effect suggest that fluctuation effects are large in a wider region both above and below Tc. In particular, some "pseudogap" phenomena and the strong renormalization of the mean field critical temperature Tmf can be interpreted as strong fluctuation effects that are nonperturbative (they cannot be accounted for by "Gaussian fluctuations"). The physics in this broader region therefore requires a more accurate approach. Self-consistent methods are generally "non-conserving" in the sense that the Ward identities are not obeyed. This is especially detrimental in the symmetry-broken phase where, for example, Goldstone bosons become massive. The covariant Gaussian approximation remedies these problems: the Green's functions obey all the Ward identities and describe the fluctuations much better. The results for the order parameter correlator and magnetic penetration depth of the Ginzburg-Landau model of superconductivity are compared with both Monte Carlo simulations and experiments in high-Tc cuprates.
Adaptive Error Estimation in Linearized Ocean General Circulation Models
Chechelnitsky, Michael Y.
1999-01-01
Data assimilation methods are routinely used in oceanography, and the statistics of the model and measurement errors need to be specified a priori. This study addresses the problem of estimating model and measurement error statistics from observations. We start by applying innovation-based methods of adaptive error estimation, with low-dimensional models, to TOPEX/POSEIDON (T/P) sea level anomaly data in the North Pacific (5-60 deg N, 132-252 deg E), acoustic tomography data from the ATOC project, and the MIT General Circulation Model (GCM). A reduced-state linear model that describes large-scale internal (baroclinic) error dynamics is used. The methods are shown to be sensitive to the initial guess for the error statistics and to the type of observations. A new off-line approach is developed, the covariance matching approach (CMA), in which covariance matrices of model-data residuals are "matched" to their theoretical expectations using familiar least squares methods. This method uses observations directly instead of the innovations sequence and is shown to be related to the MT method and to the method of Fu et al. (1993). Twin experiments using the same linearized MIT GCM suggest that altimetric data are ill-suited to the estimation of internal GCM errors, but that such estimates can in theory be obtained using acoustic data. The CMA is then applied to T/P sea level anomaly data and a linearization of a global GFDL GCM which uses two vertical modes. We show that the CMA can be used with a global model and a global data set, and that the estimates of the error statistics are robust. We show that the fraction of the GCM-T/P residual variance explained by the model error is larger than that derived in Fukumori et al. (1999) with the method of Fu et al. (1993). Most of the model error is explained by the barotropic mode. However, we find that the impact of the change in the error statistics on the data assimilation estimates is very small. This is explained by the large
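The core of the covariance matching idea, fitting sample covariances of residuals to their theoretical expectation by least squares, can be sketched in a few lines. The toy version below uses a synthetic two-parameter error model (an assumed pattern matrix plus white noise), not the study's GCM setup; all matrices and parameter values here are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)
n, m = 8, 5000   # state dimension, number of residual samples

# Assumed structure: residual covariance = a * P + b * I, where P is a known
# (here synthetic) model-error covariance pattern and b*I is measurement noise.
A = rng.standard_normal((n, n))
P = A @ A.T / n
true_a, true_b = 2.0, 0.5

# Draw residuals with the assumed covariance and form the sample covariance.
C_true = true_a * P + true_b * np.eye(n)
L = np.linalg.cholesky(C_true)
r = L @ rng.standard_normal((n, m))
C_hat = r @ r.T / m

# "Match" C_hat to a*P + b*I by ordinary least squares on vectorized matrices.
X = np.column_stack([P.ravel(), np.eye(n).ravel()])
coef, *_ = np.linalg.lstsq(X, C_hat.ravel(), rcond=None)
a_est, b_est = coef
```

The recovered `a_est`, `b_est` approximate the true error variances; in the study the same matching principle is applied to model-data residuals of a linearized GCM rather than to synthetic draws.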
Nonrelativistic fluids on scale covariant Newton-Cartan backgrounds
Mitra, Arpita
2017-12-01
The nonrelativistic covariant framework for fields is extended to investigate fields and fluids on scale covariant curved backgrounds. The scale covariant Newton-Cartan background is constructed using the localization of space-time symmetries of nonrelativistic fields in flat space. Following this, we provide a Weyl covariant formalism which can be used to study scale invariant fluids. By considering ideal fluids as an example, we describe its thermodynamic and hydrodynamic properties and explicitly demonstrate that it satisfies the local second law of thermodynamics. As a further application, we consider the low energy description of Hall fluids. Specifically, we find that the gauge fields for scale transformations lead to corrections of the Wen-Zee and Berry phase terms contained in the effective action.
AFCI-2.0 Neutron Cross Section Covariance Library
Energy Technology Data Exchange (ETDEWEB)
Herman, M.; Herman, M; Oblozinsky, P.; Mattoon, C.M.; Pigni, M.; Hoblit, S.; Mughabghab, S.F.; Sonzogni, A.; Talou, P.; Chadwick, M.B.; Hale, G.M.; Kahler, A.C.; Kawano, T.; Little, R.C.; Yount, P.G.
2011-03-01
The cross section covariance library has been under development by a BNL-LANL collaborative effort over the last three years. The project builds on two covariance libraries developed earlier, with considerable input from BNL and LANL. In 2006, an international effort under WPEC Subgroup 26 produced the BOLNA covariance library by putting together data, often preliminary, from various sources for the most important materials for nuclear reactor technology. This was followed in 2007 by a collaborative effort of four US national laboratories to produce covariances, often of modest quality (hence the name "low-fidelity"), for a virtually complete set of materials included in ENDF/B-VII.0. The present project focuses on covariances of 4-5 major reaction channels for 110 materials of importance for power reactors. The work started under the Global Nuclear Energy Partnership (GNEP) in 2008, which changed to the Advanced Fuel Cycle Initiative (AFCI) in 2009. With the 2011 release the name has changed to the Covariance Multigroup Matrix for Advanced Reactor Applications (COMMARA), version 2.0. The primary purpose of the library is to provide covariances for the AFCI data adjustment project, which focuses on the needs of fast advanced burner reactors. BNL's responsibility was defined as developing covariances for structural materials and fission products, managing the library, and coordinating the work; LANL's responsibility was defined as covariances for light nuclei and actinides. The COMMARA-2.0 covariance library has been developed by the BNL-LANL collaboration for Advanced Fuel Cycle Initiative applications over a period of three years, 2008-2010. It contains covariances for 110 materials relevant to fast reactor R&D. The library is to be used together with the ENDF/B-VII.0 central values of the latest official release of US files of evaluated neutron cross sections. The COMMARA-2.0 library contains neutron cross section covariances for 12 light nuclei (coolants and moderators), 78 structural
Schwerg, N
2006-01-01
A new fit function for the critical current density of superconducting NbTi cables for the LHC main dipoles is presented. Existing fit functions usually show a good match in the very low field range, but produce a current density which is significantly too small in the intermediate and high field range. Consequently, the multipole range measured at cold is only partially reproduced and loops from current cycling do not match. The presented function is used as input for the field quality calculation of a complete magnet cross-section, including arbitrary current cycling and all hysteresis effects. This approach allows one to trace a so-called finger-print of the cable combination used in the LHC main bending magnets. The finger-print pattern is a consequence of the differences in the measured superconductor magnetization of cables from different manufacturers. The simulation results have been compared with measurements at cold obtained from LHC main dipoles, with very good agreement for low and intermediate field val...
Gauge covariance of the fermion Schwinger–Dyson equation in QED
Energy Technology Data Exchange (ETDEWEB)
Jia, Shaoyang, E-mail: sjia@email.wm.edu [Physics Department, College of William & Mary, Williamsburg, VA 23187 (United States); Pennington, M.R., E-mail: michaelp@jlab.org [Physics Department, College of William & Mary, Williamsburg, VA 23187 (United States); Theory Center, Thomas Jefferson National Accelerator Facility, Newport News, VA 23606 (United States)
2017-06-10
Any practical application of the Schwinger–Dyson equations to the study of n-point Green's functions in a strong coupling field theory requires truncations. In the case of QED, the gauge covariance, governed by the Landau–Khalatnikov–Fradkin transformations (LKFT), provides a unique constraint on such truncation. By using a spectral representation for the massive fermion propagator in QED, we are able to show that the constraints imposed by the LKFT are linear operations on the spectral densities. We formally define these group operations and show with a couple of examples how in practice they provide a straightforward way to test the gauge covariance of any viable truncation of the Schwinger–Dyson equation for the fermion 2-point function.
Gauge covariance of the fermion Schwinger–Dyson equation in QED
Directory of Open Access Journals (Sweden)
Shaoyang Jia
2017-06-01
Any practical application of the Schwinger–Dyson equations to the study of n-point Green's functions in a strong coupling field theory requires truncations. In the case of QED, the gauge covariance, governed by the Landau–Khalatnikov–Fradkin transformations (LKFT), provides a unique constraint on such truncation. By using a spectral representation for the massive fermion propagator in QED, we are able to show that the constraints imposed by the LKFT are linear operations on the spectral densities. We formally define these group operations and show with a couple of examples how in practice they provide a straightforward way to test the gauge covariance of any viable truncation of the Schwinger–Dyson equation for the fermion 2-point function.
Inborn errors of metabolism include, among others: fructose intolerance, galactosemia, maple syrup urine disease (MSUD), and phenylketonuria (PKU). Newborn screening can identify some of these disorders. Alternative names: metabolism, inborn errors of. Reference: Bodamer OA, Approach to inborn errors of metabolism.
Medical Errors Reduction Initiative
National Research Council Canada - National Science Library
Mutter, Michael L
2005-01-01
The Valley Hospital of Ridgewood, New Jersey, is proposing to extend a limited but highly successful specimen management and medication administration medical errors reduction initiative on a hospital-wide basis...
Data Selection for Within-Class Covariance Estimation
2016-09-08
Data Selection for Within-Class Covariance Estimation. Elliot Singer (1), Tyler Campbell (2), and Douglas Reynolds (1). (1) Massachusetts Institute of Technology Lincoln Laboratory; (2) Rensselaer Polytechnic Institute. es@ll.mit.edu, tylercampbell@mac.com, dar@ll.mit.edu. Abstract: Methods for performing ... NIST evaluations to train the within-class and across-class covariance matrices required by these techniques, little attention has been paid to the
Covariant Noether charge for higher dimensional Chern-Simons terms
Energy Technology Data Exchange (ETDEWEB)
Azeyanagi, Tatsuo [Département de Physique, Ecole Normale Supérieure, CNRS,24 rue Lhomond, 75005 Paris (France); Loganayagam, R. [School of Natural Sciences, Institute for Advanced Study,1 Einstein Drive, Princeton, NJ 08540 (United States); Ng, Gim Seng [Center for the Fundamental Laws of Nature, Harvard University,17 Oxford St, Cambridge, MA 02138 (United States); Rodriguez, Maria J. [Center for the Fundamental Laws of Nature, Harvard University,17 Oxford St, Cambridge, MA 02138 (United States); Institut de Physique Théorique, Orme des Merisiers batiment 774,Point courrier 136, CEA/DSM/IPhT, CEA/Saclay,F-91191 Gif-sur-Yvette Cedex (France)
2015-05-07
We construct a manifestly covariant differential Noether charge for theories with Chern-Simons terms in higher dimensional spacetimes. This is in contrast to Tachikawa’s extension of the standard Lee-Iyer-Wald formalism which results in a non-covariant differential Noether charge for Chern-Simons terms. On a bifurcation surface, our differential Noether charge integrates to the Wald-like entropy formula proposed by Tachikawa in (arXiv:hep-th/0611141v2).
Extreme Covariant Observables for Type I Symmetry Groups
Holevo, Alexander S.; Pellonpää, Juha-Pekka
2009-06-01
The structure of covariant observables—normalized positive operator measures (POMs)—is studied in the case of a type I symmetry group. Such measures are completely determined by kernels which are measurable fields of positive semidefinite sesquilinear forms. We produce the minimal Kolmogorov decompositions for the kernels and determine those which correspond to the extreme covariant observables. Illustrative examples of the extremals in the case of the Abelian symmetry group are given.
Modeling Portfolio Defaults using Hidden Markov Models with Covariates
Banachewicz, Konrad; van der Vaart, Aad; Lucas, André
2006-01-01
We extend the Hidden Markov Model for defaults of Crowder, Davis, and Giampieri (2005) to include covariates. The covariates enhance the prediction of transition probabilities from high to low default regimes. To estimate the model, we extend the EM estimating equations to account for the time varying nature of the conditional likelihoods due to sample attrition and extension. Using empirical U.S. default data, we find that GDP growth, the term structure of interest rates and stock market ret...
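The covariate-dependent transition idea can be sketched as a logistic link on a two-regime chain. The parameterization below is hypothetical (not the authors' exact specification): the probability of leaving each default regime is driven by a scalar covariate, such as GDP growth, through a logistic function.

```python
import numpy as np

def transition_matrix(x, beta):
    """2-state (high/low default regime) transition probabilities.

    The probability of leaving state s is logistic in the covariate x,
    with beta[s] = (intercept, slope) for state s (hypothetical link).
    """
    P = np.empty((2, 2))
    for s in range(2):
        leave = 1.0 / (1.0 + np.exp(-(beta[s][0] + beta[s][1] * x)))
        P[s, 1 - s] = leave
        P[s, s] = 1.0 - leave
    return P
```

Each row sums to one by construction, so the matrix remains a valid Markov kernel for any covariate value; in the paper's setting such covariate-driven matrices enter the EM recursions in place of constant transition probabilities.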
DEFF Research Database (Denmark)
Rasmussen, Jens
1983-01-01
An important aspect of the optimal design of computer-based operator support systems is the sensitivity of such systems to operator errors. The author discusses how a system might allow for human variability with the use of reversibility and observability.
Interruption Practice Reduces Errors
2014-01-01
dangers of errors at the PCS. Electronic health record systems are used to reduce certain errors related to poor handwriting and dosage...
Error estimation and adaptivity for incompressible hyperelasticity
Whiteley, J.P.
2014-04-30
A Galerkin FEM is developed for nonlinear, incompressible (hyper)elasticity that takes account of nonlinearities in both the strain tensor and the relationship between the strain tensor and the stress tensor. By using suitably defined linearised dual problems with appropriate boundary conditions, a posteriori error estimates are then derived for both linear functionals of the solution and linear functionals of the stress on a boundary, where Dirichlet boundary conditions are applied. A second, higher order method for calculating a linear functional of the stress on a Dirichlet boundary is also presented together with an a posteriori error estimator for this approach. An implementation for a 2D model problem with known solution, where the entries of the strain tensor exhibit large, rapid variations, demonstrates the accuracy and sharpness of the error estimators. Finally, using a selection of model problems, the a posteriori error estimate is shown to provide a basis for effective mesh adaptivity. © 2014 John Wiley & Sons, Ltd.
Energy Technology Data Exchange (ETDEWEB)
Pierozzi, Marco [Istituto Geografico Militare, Florence (Italy). Direzione Geodetica
1997-01-01
An expression is given for the error committed when GPS differential positioning is used directly within a local datum (i.e., without applying transformation formulas between the geodetic systems involved). That error is considered both in cartesian and in ellipsoidal coordinates, and in the latter case it is shown, using the covariant derivative, that it also includes a non-negligible term due to the curvature induced in space by this kind of coordinates.
A three domain covariance framework for EEG/MEG data.
Roś, Beata P; Bijma, Fetsje; de Gunst, Mathisca C M; de Munck, Jan C
2015-10-01
In this paper we introduce a covariance framework for the analysis of single subject EEG and MEG data that takes into account observed temporal stationarity on small time scales and trial-to-trial variations. We formulate a model for the covariance matrix, which is a Kronecker product of three components that correspond to space, time and epochs/trials, and consider maximum likelihood estimation of the unknown parameter values. An iterative algorithm that finds approximations of the maximum likelihood estimates is proposed. Our covariance model is applicable in a variety of cases where spontaneous EEG or MEG acts as source of noise and realistic noise covariance estimates are needed, such as in evoked activity studies, or where the properties of spontaneous EEG or MEG are themselves the topic of interest, like in combined EEG-fMRI experiments in which the correlation between EEG and fMRI signals is investigated. We use a simulation study to assess the performance of the estimator and investigate the influence of different assumptions about the covariance factors on the estimated covariance matrix and on its components. We apply our method to real EEG and MEG data sets. Copyright © 2015 Elsevier Inc. All rights reserved.
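The Kronecker-product structure of such a covariance model is easy to demonstrate. In the sketch below, arbitrary AR(1)-style correlation matrices stand in for the space, time, and trial components (the sizes and correlation values are made up); it shows the parameter saving and a determinant identity that Kronecker-structured estimators exploit.

```python
import numpy as np

def ar1(n, rho):
    """Simple AR(1)-style correlation matrix used as a stand-in factor."""
    idx = np.arange(n)
    return rho ** np.abs(idx[:, None] - idx[None, :])

# Hypothetical small factors: 4 sensors (space), 3 samples (time), 2 epochs.
S, T, E = ar1(4, 0.5), ar1(3, 0.8), ar1(2, 0.3)

# Full covariance of the vectorized data: a 24 x 24 matrix determined by
# only 4x4 + 3x3 + 2x2 factor entries instead of 24x24 free parameters.
C = np.kron(E, np.kron(T, S))
```

Maximum likelihood estimation of the three factors (as in the paper's iterative algorithm) never needs to form or invert the full `C`, since determinants and inverses of Kronecker products factor into those of the small components.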
The Performance Analysis Based on SAR Sample Covariance Matrix
Directory of Open Access Journals (Sweden)
Esra Erten
2012-03-01
Multi-channel systems appear in several fields of application in science. In the Synthetic Aperture Radar (SAR) context, multi-channel systems may refer to different domains, such as multi-polarization, multi-interferometric or multi-temporal data, or even a combination of them. Due to the inherent speckle phenomenon present in SAR images, a statistical description of the data is almost mandatory for its utilization. The complex images acquired over natural media present in general zero-mean circular Gaussian characteristics. In this case, second order statistics such as the multi-channel covariance matrix fully describe the data. For practical situations, however, the covariance matrix has to be estimated using a limited number of samples, and this sample covariance matrix follows the complex Wishart distribution. In this context, the eigendecomposition of the multi-channel covariance matrix has been shown to be of high relevance in different areas regarding the physical properties of the imaged scene. Specifically, the maximum eigenvalue of the covariance matrix has been frequently used in applications such as target or change detection, estimation of the dominant scattering mechanism in polarimetric data, moving target indication, etc. In this paper, the statistical behavior of the maximum eigenvalue derived from the eigendecomposition of the sample multi-channel covariance matrix of multi-channel SAR images is presented in simplified form for the SAR community. Validation is performed against simulated data, and examples of estimation and detection problems using the analytical expressions are given as well.
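The quantity studied here, the dominant eigenvalue of a sample covariance matrix formed from multi-look complex data, can be computed in a few lines. The sketch below simulates zero-mean circular Gaussian pixel vectors with a hypothetical true covariance (the matrix values and look count are assumptions, not from the paper) and extracts the statistic.

```python
import numpy as np

rng = np.random.default_rng(1)
channels, looks = 3, 64   # e.g. polarimetric channels and averaged looks

# Hypothetical true covariance of a zero-mean circular Gaussian pixel vector.
C_true = np.array([[2.0, 0.5, 0.0],
                   [0.5, 1.0, 0.2],
                   [0.0, 0.2, 0.5]])
L = np.linalg.cholesky(C_true)

# Simulated complex SAR samples z ~ CN(0, C_true).
z = L @ (rng.standard_normal((channels, looks))
         + 1j * rng.standard_normal((channels, looks))) / np.sqrt(2)

# Sample covariance (Wishart-distributed up to scaling) and its dominant
# eigenvalue, used e.g. as a detection statistic or dominant-scattering power.
C_hat = z @ z.conj().T / looks
lam_max = np.linalg.eigvalsh(C_hat)[-1]
```

Repeating this over many speckle realizations yields the empirical distribution of `lam_max`, which is what the paper's simplified analytical expressions describe.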