WorldWideScience

Sample records for handling missing covariate

  1. A New Approach to Handle Missing Covariate Data in Twin Research: With an Application to Educational Achievement Data.

    Science.gov (United States)

    Schwabe, Inga; Boomsma, Dorret I; Zeeuw, Eveline L de; Berg, Stéphanie M van den

    2016-07-01

    The often-used ACE model, which decomposes phenotypic variance into additive genetic (A), common-environmental (C) and unique-environmental (E) parts, can be extended to include covariates. Collection of these variables, however, often leads to a large amount of missing data, for example when self-reports (e.g. questionnaires) are not fully completed. The usual approach to handling missing covariate data in twin research results in reduced power to detect statistical effects, as only the phenotypic and covariate data of individual twins with complete data can be used. Here we present a full information approach to handling missing covariate data that makes it possible to use all available data. A simulation study shows that, independent of the missingness scenario, the number of covariates or the amount of missingness, the full information approach is more powerful than the usual approach. To illustrate the new method, we applied it to the test scores of 990 twin pairs on a Dutch national school achievement test (Eindtoets Basisonderwijs) taken in the final grade of primary school. The effects of school-aggregated measures (e.g. school denomination, pedagogical philosophy, school size) and of the sex of a twin on these test scores were tested. None of the covariates had a significant effect on individual differences in test scores.
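
    A sketch of the model this record describes, in assumed notation (the abstract gives no formulas): for twin j of pair i with covariate vector x_ij,

    ```latex
    y_{ij} = \mu + \beta^{\top} x_{ij} + A_{ij} + C_{i} + E_{ij},
    \qquad
    \operatorname{Var}(y_{ij}) = \sigma_A^2 + \sigma_C^2 + \sigma_E^2,
    ```

    where the additive genetic effects A_ij correlate 1 within monozygotic and 1/2 within dizygotic pairs, C_i is shared by both twins of a pair, and E_ij is unique to each twin. The full information approach then evaluates each pair's likelihood contribution on whatever data are observed, rather than discarding pairs with incomplete covariates.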

  2. A comparison of multiple imputation methods for handling missing values in longitudinal data in the presence of a time-varying covariate with a non-linear association with time: a simulation study.

    Science.gov (United States)

    De Silva, Anurika Priyanjali; Moreno-Betancur, Margarita; De Livera, Alysha Madhu; Lee, Katherine Jane; Simpson, Julie Anne

    2017-07-25

    Missing data is a common problem in epidemiological studies, and is particularly prominent in longitudinal data, which involve multiple waves of data collection. Traditional multiple imputation (MI) methods (fully conditional specification (FCS) and multivariate normal imputation (MVNI)) treat repeated measurements of the same time-dependent variable as just another 'distinct' variable for imputation and therefore do not make the most of the longitudinal structure of the data. Only a few studies have explored extensions to the standard approaches to account for the temporal structure of longitudinal data. One suggestion is the two-fold fully conditional specification (two-fold FCS) algorithm, which restricts the imputation of a time-dependent variable to time blocks where the imputation model includes measurements taken at the specified and adjacent times. To date, no study has investigated the performance of two-fold FCS and standard MI methods for handling missing data in a time-varying covariate with a non-linear trajectory over time, a commonly encountered scenario in epidemiological studies. We simulated 1000 datasets of 5000 individuals based on the Longitudinal Study of Australian Children (LSAC). Three missing data mechanisms (missing completely at random (MCAR), and weak and strong missing at random (MAR) scenarios) were used to impose missingness on body mass index (BMI)-for-age z-scores, a continuous time-varying exposure variable with a non-linear trajectory over time. We evaluated the performance of FCS, MVNI, and two-fold FCS for handling up to 50% of missing data when assessing the association between childhood obesity and sleep problems. The standard two-fold FCS produced slightly more biased and less precise estimates than FCS and MVNI. We observed slight improvements in bias and precision when using a time window width of two for the two-fold FCS algorithm compared to the standard width of one. We recommend the use of FCS or MVNI in a similar
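
    For orientation, a minimal sketch of standard FCS-style multiple imputation in Python using scikit-learn's IterativeImputer. This illustrates the conventional MI baseline discussed above, not the two-fold FCS algorithm, and the variable layout is invented for the example:

    ```python
    import numpy as np
    from sklearn.experimental import enable_iterative_imputer  # noqa: F401
    from sklearn.impute import IterativeImputer

    rng = np.random.default_rng(0)

    # Assumed layout: columns are repeated BMI z-scores at waves 1..4 plus a
    # baseline covariate; FCS imputes each incomplete column conditional on
    # all the others, treating each wave as a distinct variable.
    X = rng.normal(size=(500, 5))
    X[rng.random(X.shape) < 0.3] = np.nan  # roughly 30% of values missing

    # m completed datasets, as in multiple imputation
    m = 5
    completed = [
        IterativeImputer(sample_posterior=True, random_state=k).fit_transform(X)
        for k in range(m)
    ]
    # Each analysis model is then fitted to every completed dataset and the
    # results are pooled with Rubin's rules.
    ```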

  3. Randomization-based adjustment of multiple treatment hazard ratios for covariates with missing data.

    Science.gov (United States)

    Lam, Diana; Koch, Gary G; Preisser, John S; Saville, Benjamin R; Hussey, Michael A

    2017-01-01

    Clinical trials are designed to compare treatment effects when applied to samples from the same population. Randomization is used so that the samples are not biased with respect to baseline covariates that may influence the efficacy of the treatment. We develop randomization-based covariance adjustment methodology to estimate the log hazard ratios and their confidence intervals of multiple treatments in a randomized clinical trial with time-to-event outcomes and missingness among the baseline covariates. The randomization-based covariance adjustment method is a computationally straightforward method for handling missing baseline covariate values.

  4. Semiparametric approach for non-monotone missing covariates in a parametric regression model

    KAUST Repository

    Sinha, Samiran

    2014-02-26

    Missing covariate data often arise in biomedical studies, and analysis of such data that ignores subjects with incomplete information may lead to inefficient and possibly biased estimates. A great deal of attention has been paid to handling a single missing covariate or a monotone pattern of missing data when the missingness mechanism is missing at random. In this article, we propose a semiparametric method for handling non-monotone patterns of missing data. The proposed method relies on the assumption that the missingness mechanism of a variable does not depend on the missing variable itself but may depend on the other missing variables. This mechanism is somewhat less general than the completely non-ignorable mechanism but is sometimes more flexible than the missing at random mechanism where the missingness mechanism is allowed to depend only on the completely observed variables. The proposed approach is robust to misspecification of the distribution of the missing covariates, and the proposed mechanism helps to nullify (or reduce) the problems due to non-identifiability that result from the non-ignorable missingness mechanism. The asymptotic properties of the proposed estimator are derived. Finite sample performance is assessed through simulation studies. Finally, for the purpose of illustration we analyze an endometrial cancer dataset and a hip fracture dataset.
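
    A schematic statement of the assumed mechanism, in our notation rather than the paper's: with covariates X = (X_1, ..., X_p), outcome Y, and missingness indicators R = (R_1, ..., R_p),

    ```latex
    \Pr(R_j = 0 \mid X, Y) = \Pr(R_j = 0 \mid X_{-j}, Y), \qquad j = 1, \dots, p,
    ```

    where X_{-j} denotes all covariates except X_j: the probability that X_j is missing may depend on the other covariates, observed or missing, but not on X_j itself. This sits between missing at random (dependence only on fully observed quantities) and a completely non-ignorable mechanism.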

  5. Empirical Likelihood in Nonignorable Covariate-Missing Data Problems.

    Science.gov (United States)

    Xie, Yanmei; Zhang, Biao

    2017-04-20

    Missing covariate data occurs often in regression analysis, which frequently arises in the health and social sciences as well as in survey sampling. We study methods for the analysis of a nonignorable covariate-missing data problem in an assumed conditional mean function when some covariates are completely observed but other covariates are missing for some subjects. We adopt the semiparametric perspective of Bartlett et al. (Improving upon the efficiency of complete case analysis when covariates are MNAR. Biostatistics 2014;15:719-30) on regression analyses with nonignorable missing covariates, in which they have introduced the use of two working models, the working probability model of missingness and the working conditional score model. In this paper, we study an empirical likelihood approach to nonignorable covariate-missing data problems with the objective of effectively utilizing the two working models in the analysis of covariate-missing data. We propose a unified approach to constructing a system of unbiased estimating equations, where there are more equations than unknown parameters of interest. One useful feature of these unbiased estimating equations is that they naturally incorporate the incomplete data into the data analysis, making it possible to seek efficient estimation of the parameter of interest even when the working regression function is not specified to be the optimal regression function. We apply the general methodology of empirical likelihood to optimally combine these unbiased estimating equations. We propose three maximum empirical likelihood estimators of the underlying regression parameters and compare their efficiencies with other existing competitors. We present a simulation study to compare the finite-sample performance of various methods with respect to bias, efficiency, and robustness to model misspecification. The proposed empirical likelihood method is also illustrated by an analysis of a data set from the US National Health and

  6. A Review of Missing Data Handling Methods in Education Research

    Science.gov (United States)

    Cheema, Jehanzeb R.

    2014-01-01

    Missing data are a common occurrence in survey-based research studies in education, and the way missing values are handled can significantly affect the results of analyses based on such data. Despite known problems with performance of some missing data handling methods, such as mean imputation, many researchers in education continue to use those…

  7. A semiparametric missing-data-induced intensity method for missing covariate data in individually matched case-control studies.

    Science.gov (United States)

    Gebregziabher, Mulugeta; Langholz, Bryan

    2010-09-01

    In individually matched case-control studies, when some covariates are incomplete, an analysis based on the complete data may result in a large loss of information both in the missing and completely observed variables. This usually results in a bias and loss of efficiency. In this article, we propose a new method for handling the problem of missing covariate data based on a missing-data-induced intensity approach when the missingness mechanism does not depend on case-control status and show that this leads to a generalization of the missing indicator method. We derive the asymptotic properties of the estimates from the proposed method and, using an extensive simulation study, assess the finite sample performance in terms of bias, efficiency, and 95% confidence coverage under several missing data scenarios. We also make comparisons with complete-case analysis (CCA) and some missing data methods that have been proposed previously. Our results indicate that, under the assumption of predictable missingness, the suggested method provides valid estimation of parameters, is more efficient than CCA, and is competitive with other, more complex methods of analysis. A case-control study of multiple myeloma risk and a polymorphism in the receptor Inter-Leukin-6 (IL-6-α) is used to illustrate our findings. © 2009, The International Biometric Society.

  8. Evaluation of Alzheimer's disease progression based on clinical dementia rating scale with missing responses and covariates.

    Science.gov (United States)

    Das, Kalyan; Rana, Subrata; Roy, Surupa

    2017-11-27

    In clinical trials, a patient's disease severity is usually assessed on a Likert-type scale. Patients, however, may miss one or more follow-up visits (non-monotone missingness). The statistical analysis of non-Gaussian longitudinal data with non-monotone missingness is difficult, particularly when both the response and time-dependent covariates are subject to such missingness. Even when the number of patients with intermittent missing data is small, excluding those patients from the analysis seems unsatisfactory. The focus of the current investigation is to study the progression of Alzheimer's disease by incorporating a non-ignorable missing data mechanism for both response and covariates in a longitudinal setup. Combining the cumulative logit longitudinal model for Alzheimer's disease progression with the bivariate binary model for the missing pattern, we develop a joint likelihood. The parameters are then estimated using the Monte Carlo Newton-Raphson Expectation Maximization (MCNREM) method. This approach is quite easy to implement, and convergence of the estimates is attained in a reasonable amount of time. The study reveals that apolipoprotein plays a significant role in assessing a patient's disease severity. A detailed simulation has also been carried out to justify the performance of our approach.

  9. Handling missing data in ranked set sampling

    CERN Document Server

    Bouza-Herrera, Carlos N

    2013-01-01

    The existence of missing observations is a very important aspect to be considered in the application of survey sampling. In human populations they may be caused by a refusal of some interviewees to give the true value for the variable of interest. Traditionally, simple random sampling is used to select samples, and most statistical models are supported by the use of samples selected by means of this design. In recent decades, an alternative design has started being used which, in many cases, shows an improvement in terms of accuracy compared with traditional sampling. It is called Ranked Set Sampling (RSS).
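
    A minimal sketch of how a balanced ranked set sample is drawn (perfect judgment ranking assumed; names and numbers are illustrative only):

    ```python
    import numpy as np

    def ranked_set_sample(population, m, cycles, rng):
        """Draw a balanced ranked set sample of size m * cycles.

        For each rank i = 1..m, a fresh simple random set of m units is drawn,
        the set is ranked (here by the value itself, i.e. perfect ranking),
        and only the unit holding rank i is actually measured.
        """
        sample = []
        for _ in range(cycles):
            for i in range(m):
                judgment_set = rng.choice(population, size=m, replace=False)
                sample.append(np.sort(judgment_set)[i])
        return np.array(sample)

    rng = np.random.default_rng(42)
    pop = rng.normal(loc=10.0, scale=2.0, size=10_000)
    rss = ranked_set_sample(pop, m=4, cycles=25)  # n = 100 measured units
    # The RSS mean is typically a more precise estimator of the population
    # mean than a simple random sample of the same size.
    print(rss.mean(), pop.mean())
    ```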

  10. Bayesian structural equations modeling for ordinal response data with missing responses and missing covariates

    CSIR Research Space (South Africa)

    Kim, S

    2009-01-01

    Full Text Available (authors: Sungduk Kim, Sonali Das, Ming-Hui Chen, and Nicholas Warren). Such response variables usually have a number of categories, often on a Likert-like scale. Ordinal response data arising from self-reported survey questionnaires are also common in assessment studies (Eaton and Bohrnstedt, 1989; Meredith and Millsap, 1992)...

  11. Handling missing values in the MDS-UPDRS.

    Science.gov (United States)

    Goetz, Christopher G; Luo, Sheng; Wang, Lu; Tilley, Barbara C; LaPelle, Nancy R; Stebbins, Glenn T

    2015-10-01

    This study was undertaken to define the number of missing values permissible to render valid total scores for each Movement Disorder Society Unified Parkinson's Disease Rating Scale (MDS-UPDRS) part. To handle missing values, imputation strategies serve as guidelines to reject an incomplete rating or create a surrogate score. We tested a rigorous, scale-specific, data-based approach to handling missing values for the MDS-UPDRS. From two large MDS-UPDRS datasets, we sequentially deleted item scores, either consistently (same items) or randomly (different items) across all subjects. Lin's Concordance Correlation Coefficient (CCC) compared scores calculated without missing values with prorated scores based on sequentially increasing missing values. The maximal number of missing values retaining a CCC greater than 0.95 determined the threshold for rendering a valid prorated score. A second confirmatory sample was selected from the MDS-UPDRS international translation program. To provide valid part scores applicable across all Hoehn and Yahr (H&Y) stages when the same items are consistently missing, one missing item from Part I, one from Part II, three from Part III, but none from Part IV can be allowed. To provide valid part scores applicable across all H&Y stages when random item entries are missing, one missing item from Part I, two from Part II, seven from Part III, but none from Part IV can be allowed. All cutoff values were confirmed in the validation sample. These analyses are useful for constructing valid surrogate part scores for MDS-UPDRS when missing items fall within the identified threshold and give scientific justification for rejecting partially completed ratings that fall below the threshold. © 2015 International Parkinson and Movement Disorder Society.
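
    A sketch of how such prorated part scores could be computed. The random-missingness thresholds are the ones quoted in the abstract; the per-part item counts are the standard MDS-UPDRS structure, and the proration formula shown is one common choice, not necessarily the authors' exact rule:

    ```python
    # Item counts per MDS-UPDRS part (standard scale structure; stated here
    # as an assumption, not taken from the abstract).
    N_ITEMS = {"I": 13, "II": 13, "III": 33, "IV": 6}
    # Maximum allowed missing items when *random* items are missing, per the
    # abstract: Part I: 1, Part II: 2, Part III: 7, Part IV: 0.
    MAX_MISSING_RANDOM = {"I": 1, "II": 2, "III": 7, "IV": 0}

    def prorated_part_score(item_scores, part):
        """Return a prorated part score, or None if too many items are missing.

        `item_scores` is a list with None for missing items. The surrogate
        score scales the observed total up to the full item count.
        """
        n_total = N_ITEMS[part]
        assert len(item_scores) == n_total
        observed = [s for s in item_scores if s is not None]
        n_missing = n_total - len(observed)
        if n_missing > MAX_MISSING_RANDOM[part]:
            return None  # reject the rating as invalid
        if n_missing == 0:
            return float(sum(observed))
        return sum(observed) * n_total / len(observed)

    scores = [2, 1, 0, 3, None, 1, 2, 0, 1, 1, 2, 0, 1]  # Part I, 1 item missing
    print(prorated_part_score(scores, "I"))
    ```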

  12. Multiple imputation of missing blood pressure covariates in survival analysis

    NARCIS (Netherlands)

    Buuren, S. van; Boshuizen, H.C.; Knook, D.L.

    1999-01-01

    This paper studies a non-response problem in survival analysis where the occurrence of missing data in the risk factor is related to mortality. In a study to determine the influence of blood pressure on survival in the very old (85+ years), blood pressure measurements are missing in about 12.5 per cent

  13. A comparison of model-based imputation methods for handling missing predictor values in a linear regression model: A simulation study

    Science.gov (United States)

    Hasan, Haliza; Ahmad, Sanizah; Osman, Balkish Mohd; Sapri, Shamsiah; Othman, Nadirah

    2017-08-01

    In regression analysis, missing covariate data is a common problem. Many researchers use ad hoc methods to overcome this problem due to the ease of implementation. However, these methods require assumptions about the data that rarely hold in practice. Model-based methods such as Maximum Likelihood (ML) using the expectation maximization (EM) algorithm and Multiple Imputation (MI) are more promising when dealing with difficulties caused by missing data. Then again, inappropriate methods of missing value imputation can lead to serious bias that severely affects the parameter estimates. The main objective of this study is to provide a better understanding of the missing data concept that can assist the researcher in selecting the appropriate missing data imputation method. A simulation study was performed to assess the effects of different missing data techniques on the performance of a regression model. The covariate data were generated using an underlying multivariate normal distribution and the dependent variable was generated as a combination of the explanatory variables. Missing values in a covariate were simulated using a missing at random (MAR) mechanism at four levels of missingness (10%, 20%, 30% and 40%). The ML and MI techniques available within SAS software were investigated. A linear regression model was fitted and the model performance measures, MSE and R-squared, were obtained. Results showed that MI is superior in handling missing data, with the highest R-squared and lowest MSE, when the percentage of missingness is less than 30%. Both methods were unable to handle levels of missingness greater than 30%.
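
    A minimal sketch of imposing the MAR mechanism described above in a simulation, with missingness in one covariate driven by another, fully observed covariate (all parameter values illustrative):

    ```python
    import numpy as np

    rng = np.random.default_rng(1)
    n = 1_000

    # Covariates from an underlying multivariate normal, outcome from a
    # linear combination of the explanatory variables plus noise.
    x1, x2 = rng.multivariate_normal([0, 0], [[1, 0.5], [0.5, 1]], size=n).T
    y = 1.0 + 0.8 * x1 + 0.5 * x2 + rng.normal(scale=1.0, size=n)

    def impose_mar(values, driver, target_rate, rng, slope=1.0):
        """Set entries of `values` to NaN with probability depending on
        `driver` (which stays fully observed), i.e. missing at random."""
        intercept = np.log(target_rate / (1 - target_rate))
        p_miss = 1 / (1 + np.exp(-(intercept + slope * (driver - driver.mean()))))
        out = values.copy()
        out[rng.random(len(values)) < p_miss] = np.nan
        return out

    x2_mar = impose_mar(x2, driver=x1, target_rate=0.30, rng=rng)
    print(np.isnan(x2_mar).mean())  # roughly 30% missing, more often when x1 is high
    ```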

  14. Merging multiple longitudinal studies with study-specific missing covariates: A joint estimating function approach.

    Science.gov (United States)

    Wang, Fei; Song, Peter X-K; Wang, Lu

    2015-12-01

    Merging multiple datasets collected from studies with identical or similar scientific objectives is often undertaken in practice to increase statistical power. This article concerns the development of an effective statistical method that makes it possible to merge multiple longitudinal datasets subject to various heterogeneous characteristics, such as different follow-up schedules and study-specific missing covariates (e.g., covariates observed in some studies but missing in others). The presence of study-specific missing covariates presents a great methodological challenge in data merging and analysis. We propose a joint estimating function approach to addressing this challenge, in which a novel nonparametric estimating function constructed via spline-based sieve approximation is utilized to bridge estimating equations from studies with missing covariates to those with fully observed covariates. Under mild regularity conditions, we show that the proposed estimator is consistent and asymptotically normal. We evaluate the finite-sample performance of the proposed method through simulation studies. In comparison to the conventional multiple imputation approach, our method exhibits smaller estimation bias. We provide an illustrative data analysis using longitudinal cohorts collected in Mexico City to assess the effect of lead exposure on children's somatic growth. © 2015, The International Biometric Society.

  15. Some Simple Procedures for Handling Missing Data in Multivariate Analysis

    Science.gov (United States)

    Frane, James W.

    1976-01-01

    Several procedures are outlined for replacing missing values in multivariate analyses by regression values obtained in various ways, and for adjusting coefficients (such as factor score coefficients) when data are missing. None of the procedures are complex or expensive. (Author)

  16. Cox regression with missing covariate data using a modified partial likelihood method

    DEFF Research Database (Denmark)

    Martinussen, Torben; Holst, Klaus K.; Scheike, Thomas H.

    2016-01-01

    Missing covariate values is a common problem in survival analysis. In this paper we propose a novel method for the Cox regression model that is close to maximum likelihood but avoids the use of the EM-algorithm. It exploits that the observed hazard function is multiplicative in the baseline hazard...

  17. Model selection for marginal regression analysis of longitudinal data with missing observations and covariate measurement error.

    Science.gov (United States)

    Shen, Chung-Wei; Chen, Yi-Hau

    2015-10-01

    Missing observations and covariate measurement error commonly arise in longitudinal data. However, existing methods for model selection in marginal regression analysis of longitudinal data fail to address the potential bias resulting from these issues. To tackle this problem, we propose a new model selection criterion, the Generalized Longitudinal Information Criterion, which is based on an approximately unbiased estimator for the expected quadratic error of a considered marginal model accounting for both data missingness and covariate measurement error. The simulation results reveal that the proposed method performs quite well in the presence of missing data and covariate measurement error. On the contrary, the naive procedures without taking care of such complexity in data may perform quite poorly. The proposed method is applied to data from the Taiwan Longitudinal Study on Aging to assess the relationship of depression with health and social status in the elderly, accommodating measurement error in the covariate as well as missing observations. © The Author 2015. Published by Oxford University Press. All rights reserved. For permissions, please e-mail: journals.permissions@oup.com.

  18. Bayesian structural equations model for multilevel data with missing responses and missing covariates

    CSIR Research Space (South Africa)

    Kim, S

    2008-03-01

    Full Text Available – with increasing applications where self-assessment questionnaires are the means to collect data. A very comprehensive application of latent variables in psychology and social sciences is available in Bollen (2002). The data we investigate are from the Veterans... (1976) and Little and Rubin (2002)). Letting $y_{ij} = (y_{ij1}, y_{ij2}, \ldots, y_{ijK})'$, we thus partition the response vector $y'_{ij}$ into $(y'_{ij,\mathrm{obs}}, y'_{ij,\mathrm{mis}})$, and the vector of corresponding covariates $Z'_{ij}$ as $(Z'_{ij,\mathrm{obs}}, Z'_{ij,\mathrm{mis}})$, where the suffix obs...

  19. TRANSPOSABLE REGULARIZED COVARIANCE MODELS WITH AN APPLICATION TO MISSING DATA IMPUTATION.

    Science.gov (United States)

    Allen, Genevera I; Tibshirani, Robert

    2010-06-01

    Missing data estimation is an important challenge with high-dimensional data arranged in the form of a matrix. Typically this data matrix is transposable , meaning that either the rows, columns or both can be treated as features. To model transposable data, we present a modification of the matrix-variate normal, the mean-restricted matrix-variate normal , in which the rows and columns each have a separate mean vector and covariance matrix. By placing additive penalties on the inverse covariance matrices of the rows and columns, these so called transposable regularized covariance models allow for maximum likelihood estimation of the mean and non-singular covariance matrices. Using these models, we formulate EM-type algorithms for missing data imputation in both the multivariate and transposable frameworks. We present theoretical results exploiting the structure of our transposable models that allow these models and imputation methods to be applied to high-dimensional data. Simulations and results on microarray data and the Netflix data show that these imputation techniques often outperform existing methods and offer a greater degree of flexibility.
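
    For reference, a compact sketch of the classical EM algorithm for a plain multivariate normal with missing values, the building block that the transposable EM-type algorithms above extend (this is the textbook multivariate case, not the authors' transposable version):

    ```python
    import numpy as np

    def em_mvn_impute(X, n_iter=100, tol=1e-6):
        """EM for a multivariate normal with values missing at random.

        Returns the completed matrix (missing entries replaced by their
        conditional means) and ML estimates of the mean and covariance.
        """
        X = np.asarray(X, dtype=float)
        n, p = X.shape
        miss = np.isnan(X)
        mu = np.nanmean(X, axis=0)
        Sigma = np.cov(np.where(miss, mu, X), rowvar=False)
        Xf = X.copy()
        for _ in range(n_iter):
            S_extra = np.zeros((p, p))
            for i in range(n):
                m, o = miss[i], ~miss[i]
                if not m.any():
                    continue
                Soo = Sigma[np.ix_(o, o)]
                Smo = Sigma[np.ix_(m, o)]
                w = np.linalg.solve(Soo, X[i, o] - mu[o])
                Xf[i, m] = mu[m] + Smo @ w               # E-step: conditional mean
                S_extra[np.ix_(m, m)] += (               # conditional covariance
                    Sigma[np.ix_(m, m)] - Smo @ np.linalg.solve(Soo, Smo.T)
                )
            mu_new = Xf.mean(axis=0)
            Xc = Xf - mu_new
            Sigma_new = (Xc.T @ Xc + S_extra) / n        # M-step
            converged = np.max(np.abs(Sigma_new - Sigma)) < tol
            mu, Sigma = mu_new, Sigma_new
            if converged:
                break
        return Xf, mu, Sigma

    rng = np.random.default_rng(0)
    Z = rng.multivariate_normal([0, 0, 0],
                                [[1, .6, .3], [.6, 1, .2], [.3, .2, 1]], size=500)
    Z[rng.random(Z.shape) < 0.2] = np.nan
    Z_completed, mu_hat, Sigma_hat = em_mvn_impute(Z)
    ```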

  1. Spatio-Temporal Regression Based Clustering of Precipitation Extremes in the Presence of Systematically Missing Covariates

    Science.gov (United States)

    Kaiser, Olga; Martius, Olivia; Horenko, Illia

    2017-04-01

    Regression-based Generalized Pareto Distribution (GPD) models are often used to describe the dynamics of hydrological threshold excesses, relying on the explicit availability of all of the relevant covariates. But in real applications the complete set of relevant covariates might not be available. In this context, it was shown that under weak assumptions the influence coming from systematically missing covariates can be reflected by nonstationary and nonhomogeneous dynamics. We present a data-driven, semiparametric and adaptive approach for spatio-temporal regression-based clustering of threshold excesses in the presence of systematically missing covariates. The nonstationary and nonhomogeneous behavior of threshold excesses is described by a set of local stationary GPD models, where the parameters are expressed as regression models, and a non-parametric spatio-temporal hidden switching process. Exploiting the nonparametric Finite Element time-series analysis Methodology (FEM) with Bounded Variation of the model parameters (BV) for resolving the spatio-temporal switching process, the approach goes beyond the strong a priori assumptions made in standard latent class models such as Mixture Models and Hidden Markov Models. Additionally, the presented FEM-BV-GPD approach provides a pragmatic description of the corresponding spatial dependence structure by grouping together all locations that exhibit similar behavior of the switching process. The performance of the framework is demonstrated on daily accumulated precipitation series over 17 different locations in Switzerland from 1981 to 2013, showing that the introduced approach allows for a better description of the historical data.

  2. The handling of missing binary data in language research

    Directory of Open Access Journals (Sweden)

    François Pichette

    2015-01-01

    Full Text Available Researchers are frequently confronted with unanswered questions or items on their questionnaires and tests, due to factors such as item difficulty, lack of testing time, or participant distraction. This paper first presents results from a poll confirming previous claims (Rietveld & van Hout, 2006; Schafer & Graham, 2002) that data replacement and deletion methods are common in research. Language researchers declared that when faced with missing answers of the yes/no type (that translate into zero or one in data tables), the three most common solutions they adopt are to exclude the participant’s data from the analyses, to leave the square empty, or to fill in with zero, as for an incorrect answer. This study then examines the impact on Cronbach’s α of five types of data insertion, using simulated and actual data with various numbers of participants and missing percentages. Our analyses indicate that the three most common methods we identified among language researchers are the ones with the greatest impact on Cronbach's α coefficients; in other words, they are the least desirable solutions to the missing data problem. On the basis of our results, we make recommendations for language researchers concerning the best way to deal with missing data. Given that none of the most common simple methods works properly, we suggest that the missing data be replaced either by the item’s mean or by the participants’ overall mean to provide a better, more accurate image of the instrument’s internal consistency.
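
    A small sketch reproducing the kind of comparison the study reports: Cronbach's α computed after several simple replacement strategies (the data-generating details are invented for the example):

    ```python
    import numpy as np

    def cronbach_alpha(items):
        """items: (n_persons, k_items) array with no missing values."""
        k = items.shape[1]
        item_vars = items.var(axis=0, ddof=1).sum()
        total_var = items.sum(axis=1).var(ddof=1)
        return k / (k - 1) * (1 - item_vars / total_var)

    rng = np.random.default_rng(7)
    n, k = 200, 20
    ability = rng.normal(size=(n, 1))
    # Correlated 0/1 items driven by a common ability factor
    data = (rng.random((n, k)) < 1 / (1 + np.exp(-ability))).astype(float)
    full_alpha = cronbach_alpha(data)

    missing = data.copy()
    missing[rng.random((n, k)) < 0.10] = np.nan  # 10% missing answers

    def fill(x, how):
        out = x.copy()
        r, c = np.where(np.isnan(out))
        if how == "zero":                 # treat as incorrect answer
            out[r, c] = 0.0
        elif how == "item_mean":          # replace by the item's mean
            out[r, c] = np.nanmean(x, axis=0)[c]
        elif how == "person_mean":        # replace by the person's own mean
            out[r, c] = np.nanmean(x, axis=1)[r]
        return out

    for how in ("zero", "item_mean", "person_mean"):
        print(how, round(cronbach_alpha(fill(missing, how)), 3),
              "vs complete:", round(full_alpha, 3))
    ```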

  3. A review of the handling of missing longitudinal outcome data in clinical trials.

    Science.gov (United States)

    Powney, Matthew; Williamson, Paula; Kirkham, Jamie; Kolamunnage-Dona, Ruwanthi

    2014-06-19

    The aim of this review was to establish the frequency with which trials take into account missingness, and to discover what methods trialists use for adjustment in randomised controlled trials with longitudinal measurements. Failing to address the problems that can arise from missing outcome data can result in misleading conclusions. Missing data should be addressed through a sensitivity analysis of the complete-case analysis results. One hundred publications of randomised controlled trials with longitudinal measurements were selected randomly from trial publications from the years 2005 to 2012. Information was extracted from these trials, including whether reasons for dropout were reported, what methods were used for handling the missing data, whether there was any explanation of the methods for missing data handling, and whether a statistician was involved in the analysis. The main focus of the review was on missing data post dropout rather than missing interim data. Of all the papers in the study, 9 (9%) had no missing data. More than half of the papers included in the study failed to make any attempt to explain the reasons for their choice of missing data handling method. Of the papers with clear missing data handling methods, 44 papers (50%) used adequate methods of missing data handling, whereas 30 (34%) of the papers used missing data methods which may not have been appropriate. In the remaining 17 papers (19%), it was difficult to assess the validity of the methods used. An imputation method was used in 18 papers (20%). Multiple imputation methods were introduced in 1987 and are an efficient way of accounting for missing data in general, and yet only 4 papers used these methods. Out of the 18 papers which used imputation, only 7 displayed the results as a sensitivity analysis of the complete case analysis results. 61% of the papers that used an imputation explained the reasons for their chosen method. Just under a third of the papers made no reference

  4. missMDA: A Package for Handling Missing Values in Multivariate Data Analysis

    Directory of Open Access Journals (Sweden)

    Julie Josse

    2016-04-01

    Full Text Available We present the R package missMDA which performs principal component methods on incomplete data sets, aiming to obtain scores, loadings and graphical representations despite missing values. Package methods include principal component analysis for continuous variables, multiple correspondence analysis for categorical variables, factorial analysis on mixed data for both continuous and categorical variables, and multiple factor analysis for multi-table data. Furthermore, missMDA can be used to perform single imputation to complete data involving continuous, categorical and mixed variables. A multiple imputation method is also available. In the principal component analysis framework, variability across different imputations is represented by confidence areas around the row and column positions on the graphical outputs. This allows assessment of the credibility of results obtained from incomplete data sets.
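
    missMDA is an R package; as a language-neutral illustration of the core idea behind its imputePCA routine, here is an iterative low-rank (SVD) imputation sketch in Python, without the regularization that missMDA applies:

    ```python
    import numpy as np

    def iterative_pca_impute(X, n_components=2, n_iter=200, tol=1e-8):
        """Impute missing entries by alternating a rank-k SVD reconstruction
        with re-insertion of the observed values."""
        X = np.asarray(X, dtype=float)
        miss = np.isnan(X)
        filled = np.where(miss, np.nanmean(X, axis=0), X)  # start from column means
        prev = np.inf
        for _ in range(n_iter):
            mean = filled.mean(axis=0)
            U, s, Vt = np.linalg.svd(filled - mean, full_matrices=False)
            low_rank = (U[:, :n_components] * s[:n_components]) @ Vt[:n_components] + mean
            filled[miss] = low_rank[miss]  # only missing cells are updated
            if abs(s[:n_components].sum() - prev) < tol:
                break
            prev = s[:n_components].sum()
        return filled

    rng = np.random.default_rng(0)
    true = rng.normal(size=(100, 2)) @ rng.normal(size=(2, 6))  # exactly rank-2 data
    X = true.copy()
    X[rng.random(X.shape) < 0.2] = np.nan
    completed = iterative_pca_impute(X, n_components=2)
    print(np.max(np.abs(completed - true)))  # reconstruction error, typically small
    ```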

  5. Imputation and variable selection in linear regression models with missing covariates.

    Science.gov (United States)

    Yang, Xiaowei; Belin, Thomas R; Boscardin, W John

    2005-06-01

    Across multiply imputed data sets, variable selection methods such as stepwise regression and other criterion-based strategies that include or exclude particular variables typically result in models with different selected predictors, thus presenting a problem for combining the results from separate complete-data analyses. Here, drawing on a Bayesian framework, we propose two alternative strategies to address the problem of choosing among linear regression models when there are missing covariates. One approach, which we call "impute, then select" (ITS) involves initially performing multiple imputation and then applying Bayesian variable selection to the multiply imputed data sets. A second strategy is to conduct Bayesian variable selection and missing data imputation simultaneously within one Gibbs sampling process, which we call "simultaneously impute and select" (SIAS). The methods are implemented and evaluated using the Bayesian procedure known as stochastic search variable selection for multivariate normal data sets, but both strategies offer general frameworks within which different Bayesian variable selection algorithms could be used for other types of data sets. A study of mental health services utilization among children in foster care programs is used to illustrate the techniques. Simulation studies show that both ITS and SIAS outperform complete-case analysis with stepwise variable selection and that SIAS slightly outperforms ITS.

  6. The competing risks Cox model with auxiliary case covariates under weaker missing-at-random cause of failure.

    Science.gov (United States)

    Nevo, Daniel; Nishihara, Reiko; Ogino, Shuji; Wang, Molin

    2017-08-04

    In the analysis of time-to-event data with multiple causes using a competing risks Cox model, often the cause of failure is unknown for some of the cases. The probability of a missing cause is typically assumed to be independent of the cause given the time of the event and covariates measured before the event occurred. In practice, however, the underlying missing-at-random assumption does not necessarily hold. Motivated by colorectal cancer molecular pathological epidemiology analysis, we develop a method to conduct valid analysis when additional auxiliary variables are available for cases only. We consider a weaker missing-at-random assumption, with missing pattern depending on the observed quantities, which include the auxiliary covariates. We use an informative likelihood approach that will yield consistent estimates even when the underlying model for missing cause of failure is misspecified. The superiority of our method over naive methods in finite samples is demonstrated by simulation study results. We illustrate the use of our method in an analysis of colorectal cancer data from the Nurses' Health Study cohort, where, apparently, the traditional missing-at-random assumption fails to hold.

  7. Jointly Modeling Event Time and Skewed-Longitudinal Data with Missing Response and Mismeasured Covariate for AIDS Studies.

    Science.gov (United States)

    Huang, Yangxin; Yan, Chunning; Xing, Dongyuan; Zhang, Nanhua; Chen, Henian

    2015-01-01

    In longitudinal studies it is often of interest to investigate how a repeatedly measured marker is associated with the time to an event of interest. This type of research question has given rise to a rapidly developing field of biostatistics research that deals with the joint modeling of longitudinal and time-to-event data. Normality of model errors in the longitudinal model is a routine assumption, but it may unrealistically obscure important features of subject variations. Covariates are usually introduced in the models to partially explain between- and within-subject variations, but some covariates, such as CD4 cell count, may often be measured with substantial error. Moreover, the responses may be subject to nonignorable missingness. Statistical analysis becomes dramatically more complicated in longitudinal-survival joint models where the longitudinal data exhibit skewness, missing values, and measurement errors. In this article, we relax the distributional assumptions for the longitudinal models using a skewed (parametric) distribution and an unspecified (nonparametric) distribution given by a Dirichlet process prior, and address the simultaneous influence of skewness, missingness, covariate measurement error, and the time-to-event process by jointly modeling three components (the response process with missing values, the covariate process with measurement errors, and the time-to-event process) linked through the random effects that characterize the underlying individual-specific longitudinal processes in a Bayesian analysis. The method is illustrated with an AIDS study by jointly modeling HIV/CD4 dynamics and time to viral rebound, in comparison with potential models under various scenarios and different distributional specifications.

  8. Multiple imputation of missing covariates with non-linear effects and interactions: an evaluation of statistical methods.

    Science.gov (United States)

    Seaman, Shaun R; Bartlett, Jonathan W; White, Ian R

    2012-04-10

    Multiple imputation is often used for missing data. When a model contains as covariates more than one function of a variable, it is not obvious how best to impute missing values in these covariates. Consider a regression with outcome Y and covariates X and X2. In 'passive imputation' a value X* is imputed for X and then X2 is imputed as (X*)2. A recent proposal is to treat X2 as 'just another variable' (JAV) and impute X and X2 under multivariate normality. We use simulation to investigate the performance of three methods that can easily be implemented in standard software: 1) linear regression of X on Y to impute X then passive imputation of X2; 2) the same regression but with predictive mean matching (PMM); and 3) JAV. We also investigate the performance of analogous methods when the analysis involves an interaction, and study the theoretical properties of JAV. The application of the methods when complete or incomplete confounders are also present is illustrated using data from the EPIC Study. JAV gives consistent estimation when the analysis is linear regression with a quadratic or interaction term and X is missing completely at random. When X is missing at random, JAV may be biased, but this bias is generally less than for passive imputation and PMM. Coverage for JAV was usually good when bias was small. However, in some scenarios with a more pronounced quadratic effect, bias was large and coverage poor. When the analysis was logistic regression, JAV's performance was sometimes very poor. PMM generally improved on passive imputation, in terms of bias and coverage, but did not eliminate the bias. Given the current state of available software, JAV is the best of a set of imperfect imputation methods for linear regression with a quadratic or interaction effect, but should not be used for logistic regression.
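
    A deliberately simplified, single-imputation illustration of the two constructions contrasted above (passive X² versus X² as 'just another variable'); the paper's actual comparison uses proper multiple imputation under multivariate normality:

    ```python
    import numpy as np

    rng = np.random.default_rng(3)
    n = 2_000
    x = rng.normal(size=n)
    y = 1.0 + 1.0 * x + 0.5 * x**2 + rng.normal(size=n)
    miss = rng.random(n) < 0.4  # X missing completely at random

    # Passive imputation: impute X from a regression of X on Y, then square it.
    b = np.polyfit(y[~miss], x[~miss], deg=1)
    x_pass = x.copy()
    x_pass[miss] = np.polyval(b, y[miss])
    X_pass = np.column_stack([np.ones(n), x_pass, x_pass**2])

    # JAV: treat X^2 as a distinct variable and impute it directly from Y
    # (a crude single-imputation stand-in for joint multivariate-normal MI).
    x2 = x**2
    b2 = np.polyfit(y[~miss], x2[~miss], deg=1)
    x2_jav = x2.copy()
    x2_jav[miss] = np.polyval(b2, y[miss])
    X_jav = np.column_stack([np.ones(n), x_pass, x2_jav])

    for name, X_design in (("passive", X_pass), ("JAV", X_jav)):
        beta = np.linalg.lstsq(X_design, y, rcond=None)[0]
        print(name, beta.round(2))  # true coefficients: [1.0, 1.0, 0.5]
    ```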

  9. A review of the reporting and handling of missing data in cohort studies with repeated assessment of exposure measures.

    Science.gov (United States)

    Karahalios, Amalia; Baglietto, Laura; Carlin, John B; English, Dallas R; Simpson, Julie A

    2012-07-11

    Retaining participants in cohort studies with multiple follow-up waves is difficult. Commonly, researchers are faced with the problem of missing data, which may introduce biased results as well as a loss of statistical power and precision. The STROBE guidelines von Elm et al. (Lancet, 370:1453-1457, 2007); Vandenbroucke et al. (PLoS Med, 4:e297, 2007) and the guidelines proposed by Sterne et al. (BMJ, 338:b2393, 2009) recommend that cohort studies report on the amount of missing data, the reasons for non-participation and non-response, and the method used to handle missing data in the analyses. We have conducted a review of publications from cohort studies in order to document the reporting of missing data for exposure measures and to describe the statistical methods used to account for the missing data. A systematic search of English language papers published from January 2000 to December 2009 was carried out in PubMed. Prospective cohort studies with a sample size greater than 1,000 that analysed data using repeated measures of exposure were included. Among the 82 papers meeting the inclusion criteria, only 35 (43%) reported the amount of missing data according to the suggested guidelines. Sixty-eight papers (83%) described how they dealt with missing data in the analysis. Most of the papers excluded participants with missing data and performed a complete-case analysis (n=54, 66%). Other papers used more sophisticated methods including multiple imputation (n=5) or fully Bayesian modeling (n=1). Methods known to produce biased results were also used, for example, Last Observation Carried Forward (n=7), the missing indicator method (n=1), and mean value substitution (n=3). For the remaining 14 papers, the method used to handle missing data in the analysis was not stated. This review highlights the inconsistent reporting of missing data in cohort studies and the continuing use of inappropriate methods to handle missing data in the analysis. Epidemiological journals

  10. When and how should multiple imputation be used for handling missing data in randomised clinical trials – a practical guide with flowcharts

    Directory of Open Access Journals (Sweden)

    Janus Christian Jakobsen

    2017-12-01

    Full Text Available Abstract Background Missing data may seriously compromise inferences from randomised clinical trials, especially if missing data are not handled appropriately. The potential bias due to missing data depends on the mechanism causing the data to be missing, and the analytical methods applied to amend the missingness. Therefore, the analysis of trial data with missing values requires careful planning and attention. Methods The authors had several meetings and discussions considering optimal ways of handling missing data to minimise the bias potential. We also searched PubMed (key words: missing data; randomi*; statistical analysis) and the reference lists of known studies for papers (theoretical papers, empirical studies, simulation studies, etc.) on how to deal with missing data when analysing randomised clinical trials. Results Handling missing data is an important, yet difficult and complex task when analysing results of randomised clinical trials. We consider how to optimise the handling of missing data during the planning stage of a randomised clinical trial and recommend analytical approaches which may prevent bias caused by unavoidable missing data. We consider the strengths and limitations of using best-worst and worst-best sensitivity analyses, multiple imputation, and full information maximum likelihood. We also present practical flowcharts on how to deal with missing data and an overview of the steps that always need to be considered during the analysis stage of a trial. Conclusions We present a practical guide and flowcharts describing when and how multiple imputation should be used to handle missing data in randomised clinical trials.
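
    After multiple imputation, per-imputation results are combined with Rubin's rules; a minimal sketch (the formulas are the standard ones, the variable names ours):

    ```python
    import numpy as np

    def pool_rubins_rules(estimates, variances):
        """Pool m completed-data estimates with Rubin's rules.

        estimates, variances: length-m sequences of the per-imputation point
        estimate and its squared standard error.
        """
        q = np.asarray(estimates, dtype=float)
        u = np.asarray(variances, dtype=float)
        m = len(q)
        q_bar = q.mean()                      # pooled point estimate
        w = u.mean()                          # within-imputation variance
        b = q.var(ddof=1)                     # between-imputation variance
        t = w + (1 + 1 / m) * b               # total variance
        df = (m - 1) * (1 + w / ((1 + 1 / m) * b)) ** 2  # Rubin's degrees of freedom
        return q_bar, np.sqrt(t), df

    est, se, df = pool_rubins_rules([0.42, 0.47, 0.40, 0.45, 0.44],
                                    [0.020, 0.021, 0.019, 0.020, 0.022])
    print(round(est, 3), round(se, 3), round(df, 1))
    ```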

  11. Criteria of GenCall score to edit marker data and methods to handle missing markers have an influence on accuracy of genomic predictions

    DEFF Research Database (Denmark)

    Edriss, Vahid; Guldbrandtsen, Bernt; Lund, Mogens Sandø

    2013-01-01

    ...contained 1071 Jersey bulls that were genotyped with the Illumina Bovine 50K chip. After preliminary editing, 39227 SNP remained in the dataset. Four methods to handle missing genotypes were: 1) BEAGLE: missing markers were imputed using Beagle 3.3 software; 2) COMMON: missing genotypes at a locus were... In this study, imputation with Beagle was the best approach to handle missing genotypes. Treating missing markers as a pseudo-allele, replacing missing markers with a population average, or substituting the most common alleles each reduced the accuracy of genomic predictions. The results from this study suggest...

  12. Missing data in a multi-item instrument were best handled by multiple imputation at the item score level

    NARCIS (Netherlands)

    Eekhout, Iris; de Vet, Henrica C. W.; Twisk, Jos W. R.; Brand, Jaap P. L.; de Boer, Michiel R.; Heymans, Martijn W.

    Objectives: Regardless of the proportion of missing values, complete-case analysis is most frequently applied, although advanced techniques such as multiple imputation (MI) are available. The objective of this study was to explore the performance of simple and more advanced methods for handling missing data.

  13. A systematic review identifies a lack of standardization in methods for handling missing variance data.

    Science.gov (United States)

    Wiebe, Natasha; Vandermeer, Ben; Platt, Robert W; Klassen, Terry P; Moher, David; Barrowman, Nicholas J

    2006-04-01

    To describe and critically appraise available methods for handling missing variance data in meta-analysis (MA). Systematic review. MEDLINE, EMBASE, Web of Science, MathSciNet, Current Index to Statistics, BMJ SearchAll, The Cochrane Library and Cochrane Colloquium proceedings, MA texts and references were searched. Any form of text was included: MA, method chapter, or otherwise. Descriptions of how to implement each method, the theoretic basis and/or ad hoc motivation(s), and the input and output variable(s) were extracted and assessed. Methods may be: true imputations, methods that obviate the need for a standard deviation (SD), or methods that recalculate the SD. Eight classes of methods were identified: algebraic recalculations, approximate algebraic recalculations, imputed study-level SDs, imputed study-level SDs from nonparametric summaries, imputed study-level correlations (e.g., for change-from-baseline SD), imputed MA-level effect sizes, MA-level tests, and no-impute methods. This work aggregates the ideas of many investigators. The abundance of methods suggests a lack of consistency within the systematic review community. Appropriate use of methods is sometimes suspect; consulting a statistician, early in the review process, is recommended. Further work is required to optimize method choice to alleviate any potential for bias and improve accuracy. Improved reporting is also encouraged.

  14. Urban eddy covariance measurements reveal significant missing NOx emissions in Central Europe.

    Science.gov (United States)

    Karl, T; Graus, M; Striednig, M; Lamprecht, C; Hammerle, A; Wohlfahrt, G; Held, A; von der Heyden, L; Deventer, M J; Krismer, A; Haun, C; Feichter, R; Lee, J

    2017-05-30

    Nitrogen oxide (NOx) pollution is emerging as a primary environmental concern across Europe. While some large European metropolitan areas are already in breach of EU safety limits for NO2, this phenomenon no longer seems to be restricted to large industrialized areas. Many smaller populated agglomerations, including their surrounding rural areas, are seeing frequent NO2 concentration violations. A quantitative understanding of the different NOx emission sources is therefore of immediate relevance for climate and air chemistry models as well as for air pollution management and health. Here we report simultaneous eddy covariance flux measurements of NOx, CO2, CO and non-methane volatile organic compound tracers in a city that might be considered representative of Central Europe and the greater Alpine region. Our data show that NOx fluxes are largely at variance with modelled emission projections, suggesting an appreciable underestimation of the traffic-related atmospheric NOx input in Europe, comparable to the weekend-weekday effect, which locally changes ozone production rates by 40%.

  15. A pattern-mixture model approach for handling missing continuous outcome data in longitudinal cluster randomized trials.

    Science.gov (United States)

    Fiero, Mallorie H; Hsu, Chiu-Hsieh; Bell, Melanie L

    2017-11-20

    We extend the pattern-mixture approach to handle missing continuous outcome data in longitudinal cluster randomized trials, which randomize groups of individuals to treatment arms, rather than the individuals themselves. Individuals who drop out at the same time point are grouped into the same dropout pattern. We approach extrapolation of the pattern-mixture model by applying multilevel multiple imputation, which imputes missing values while appropriately accounting for the hierarchical data structure found in cluster randomized trials. To assess parameters of interest under various missing data assumptions, imputed values are multiplied by a sensitivity parameter, k, which increases or decreases imputed values. Using simulated data, we show that estimates of parameters of interest can vary widely under differing missing data assumptions. We conduct a sensitivity analysis using real data from a cluster randomized trial by increasing k until the treatment effect inference changes. By performing a sensitivity analysis for missing data, researchers can assess whether certain missing data assumptions are reasonable for their cluster randomized trial. Copyright © 2017 John Wiley & Sons, Ltd.
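
    A schematic of the k-adjustment described above, using a single-level, single imputation for brevity (the paper uses multilevel multiple imputation; all data and names are illustrative):

    ```python
    import numpy as np
    from sklearn.experimental import enable_iterative_imputer  # noqa: F401
    from sklearn.impute import IterativeImputer

    rng = np.random.default_rng(11)
    n = 400
    arm = rng.integers(0, 2, size=n)                  # treatment indicator
    y = 2.0 + 0.5 * arm + rng.normal(size=n)          # continuous outcome
    y_obs = y.copy()
    y_obs[rng.random(n) < 0.25] = np.nan              # dropout

    X = np.column_stack([arm, y_obs])
    was_missing = np.isnan(y_obs)
    for k in (0.8, 0.9, 1.0, 1.1, 1.2):               # sensitivity parameter
        filled = IterativeImputer(random_state=0).fit_transform(X)
        y_k = filled[:, 1].copy()
        y_k[was_missing] *= k                         # scale imputed values only
        effect = y_k[arm == 1].mean() - y_k[arm == 0].mean()
        print(k, round(effect, 3))  # inspect where the inference would change
    ```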

  16. CE: Original Research: Exploring How Nursing Schools Handle Student Errors and Near Misses.

    Science.gov (United States)

    Disch, Joanne; Barnsteiner, Jane; Connor, Susan; Brogren, Fabiana

    2017-10-01

    Background: Little attention has been paid to how nursing students learn about quality and safety, and to the tools and policies that guide nursing schools in helping students respond to errors and near misses. This study sought to determine whether prelicensure nursing programs have a policy for reporting and following up on student clinical errors and near misses, a tool for such reporting, a tool or process (or both) for identifying trends, strategies for follow-up with students after errors and near misses, and strategies for follow-up with clinical agencies and individual faculty members. A national electronic survey of 1,667 schools of nursing with a prelicensure registered nursing program was conducted. Data from 494 responding schools (30%) were analyzed. Of the responding schools, 245 (50%) reported having no policy for managing students following a clinical error or near miss, and 272 (55%) reported having no tool for reporting student errors or near misses. Significant work is needed if the principles of a fair and just culture are to shape the response to nursing student errors and near misses. For nursing schools, some essential first steps are to understand the tools and policies a school has in place; the school's philosophy regarding errors and near misses; the resources needed to establish a fair and just culture; and how faculty can work together to create learning environments that eliminate or minimize the negative consequences of errors and near misses for patients, students, and faculty.

  17. Bias in regression coefficient estimates when assumptions for handling missing data are violated: a simulation study

    Directory of Open Access Journals (Sweden)

    Sander MJ van Kuijk

    2016-03-01

    Full Text Available Background: The purpose of this simulation study is to assess the performance of multiple imputation compared to complete case analysis when assumptions of missing data mechanisms are violated. Methods: The authors performed a stochastic simulation study to assess the performance of Complete Case (CC) analysis and Multiple Imputation (MI) with different missing data mechanisms (missing completely at random (MCAR), missing at random (MAR), and missing not at random (MNAR)). The study focused on the point estimation of regression coefficients and standard errors. Results: When data were MAR conditional on Y, CC analysis resulted in biased regression coefficients; they were all underestimated in our scenarios. In these scenarios, analysis after MI gave correct estimates. Yet, in case of MNAR, MI yielded biased regression coefficients, while CC analysis performed well. Conclusion: The authors demonstrated that MI was only superior to CC analysis in case of MCAR or MAR. In some scenarios CC may be superior to MI. Often it is not feasible to identify the reason why data in a given dataset are missing. Therefore, emphasis should be put on reporting the extent of missing values, the method used to address them, and the assumptions that were made about the mechanism that caused missing data.

  18. Handling missing rows in multi-omics data integration: multiple imputation in multiple factor analysis framework.

    Science.gov (United States)

    Voillet, Valentin; Besse, Philippe; Liaubet, Laurence; San Cristobal, Magali; González, Ignacio

    2016-10-03

    In omics data integration studies, it is common, for a variety of reasons, for some individuals to not be present in all data tables. Missing row values are challenging to deal with because most statistical methods cannot be directly applied to incomplete datasets. To overcome this issue, we propose a multiple imputation (MI) approach in a multivariate framework. In this study, we focus on multiple factor analysis (MFA) as a tool to compare and integrate multiple layers of information. MI involves filling the missing rows with plausible values, resulting in M completed datasets. MFA is then applied to each completed dataset to produce M different configurations (the matrices of coordinates of individuals). Finally, the M configurations are combined to yield a single consensus solution. We assessed the performance of our method, named MI-MFA, on two real omics datasets. Incomplete artificial datasets with different patterns of missingness were created from these data. The MI-MFA results were compared with two other approaches, i.e., regularized iterative MFA (RI-MFA) and mean variable imputation (MVI-MFA). For each configuration resulting from these three strategies, the suitability of the solution was determined against the true MFA configuration obtained from the original data, and a comprehensive graphical comparison showing how the MI-, RI- or MVI-MFA configurations diverge from the true configuration was produced. Two approaches, i.e., confidence ellipses and convex hulls, to visualize and assess the uncertainty due to missing values were also described. We showed how the areas of ellipses and convex hulls increased with the number of missing individuals. A free and easy-to-use code was proposed to implement the MI-MFA method in the R statistical environment. We believe that MI-MFA provides a useful and attractive method for estimating the coordinates of individuals on the first MFA components despite missing rows. MI-MFA configurations were close to the true

  19. Scalable Data Quality for Big Data: The Pythia Framework for Handling Missing Values.

    Science.gov (United States)

    Cahsai, Atoshum; Anagnostopoulos, Christos; Triantafillou, Peter

    2015-09-01

    Solving the missing-value (MV) problem with small estimation errors in large-scale data environments is a notoriously resource-demanding task. The most widely used MV imputation approaches are computationally expensive because they explicitly depend on the volume and the dimension of the data. Moreover, as datasets and their user community continuously grow, the problem can only be exacerbated. In an attempt to deal with such a problem, in our previous work, we introduced a novel framework coined Pythia, which employs a number of distributed data nodes (cohorts), each of which contains a partition of the original dataset. To perform MV imputation, Pythia, based on specific machine and statistical learning structures (signatures), selects the most appropriate subset of cohorts to perform locally a missing value substitution algorithm (MVA). This selection relies on the principle that a particular subset of cohorts maintains the most relevant partition of the dataset. In addition to this, as Pythia uses only part of the dataset for imputation and accesses different cohorts in parallel, it improves efficiency, scalability, and accuracy compared to a single machine (coined Godzilla), which uses the entire massive dataset to compute imputation requests. Although this article is an extension of our previous work, we particularly investigate the robustness of the Pythia framework and show that Pythia is independent of any MVA and signature construction algorithms. In order to facilitate our research, we considered two well-known MVAs (namely k-nearest neighbor and expectation-maximization imputation algorithms), as well as two machine and neural computational learning signature construction algorithms based on adaptive vector quantization and competitive learning. We provide comprehensive experiments to assess the performance of Pythia against Godzilla and showcase the benefits stemming from this framework.
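
    One of the two MVAs named above, k-nearest-neighbour imputation, is available off the shelf in scikit-learn; a minimal single-machine example (this sketches the MVA only, not Pythia's cohort-selection layer):

    ```python
    import numpy as np
    from sklearn.impute import KNNImputer

    rng = np.random.default_rng(5)
    X = rng.normal(size=(1_000, 8))
    X[rng.random(X.shape) < 0.15] = np.nan       # 15% missing values

    # Each missing entry is filled from the 5 nearest complete-feature
    # neighbours, weighted by distance.
    imputer = KNNImputer(n_neighbors=5, weights="distance")
    X_completed = imputer.fit_transform(X)
    print(np.isnan(X_completed).any())           # False: all values filled
    ```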

  20. Handling of incidents, near-misses; Hantering av haendelser, naera misstag

    Energy Technology Data Exchange (ETDEWEB)

    Renborg, Bo; Jonsson, Klas; Broqvist, Kristoffer; Keski-Seppaelae, Sven [Professor Sten Luthander Ingenjoersbyraa AB, Gaevlegatan 22, SE-113 30 Stockholm (Sweden)

    2006-12-15

    This work has primarily been done as a study of the available literature on reporting systems. The following items have also been considered: the participants' experience of safety work in general and reporting systems in particular, as well as correspondence with researchers and organisations that have experience of reporting systems in safety-critical applications. A number of definitions of the English term 'near-miss' have been found in the documentation about safety-critical systems. An important conclusion is that creating a precise definition is not in itself critical. The main objective is to persuade individuals to report perceived risks as well as actual events or conditions. In this report, we have chosen to use the following definition of what should be reported: a condition or an incident with potential for more serious consequences. The reporting systems that have been evaluated all keep their data in a single system; they do not divide data into separate systems for incidents and 'near-misses'. The term incident is not used consistently in the literature, especially when both Swedish and English texts are considered. In a large portion of the documentation where reporting systems are mentioned, the focus lies more on analysis than on the problem of the willingness to report. Even when the focus is on reporting, it often concerns the design of the actual report form so as to enable the subsequent treatment of data. In some cases this has led to unnecessarily complicated report forms. The cornerstone of a high willingness to report is the creation of a 'no-blame' culture. Experience shows that the question of whether a report could lead to personal reprisals is crucial. Even a system that explicitly gives the reporter immunity is still brittle: the bare suspicion in the mind of the reporter that immunity may vanish dramatically reduces the willingness to report.

  1. Research Note: The consequences of different methods for handling missing network data in Stochastic Actor Based Models.

    Science.gov (United States)

    Hipp, John R; Wang, Cheng; Butts, Carter T; Jose, Rupa; Lakon, Cynthia M

    2015-05-01

    Although stochastic actor-based models (e.g., as implemented in the SIENA software program) are growing in popularity as a technique for analyzing longitudinal network data, a relatively understudied issue is the consequence of missing network data for such analyses. We explore this issue in this research note by utilizing data from four schools in an existing dataset (the AddHealth dataset) over three time points, assessing the substantive consequences of four different strategies for addressing missing network data. The results indicate that whereas some measures in such models are estimated relatively robustly regardless of the strategy chosen for addressing missing network data, some substantive conclusions will differ based on the missing data strategy chosen. These results have important implications for this burgeoning applied research area, implying that researchers should more carefully consider how they address missing data when estimating such models.

  2. Reporting and Handling Missing Outcome Data in Mental Health: A Systematic Review of Cochrane Systematic Reviews and Meta-Analyses

    Science.gov (United States)

    Spineli, Loukia M.; Pandis, Nikolaos; Salanti, Georgia

    2015-01-01

    Objectives: The purpose of the study was to provide empirical evidence about the reporting of methodology to address missing outcome data and the acknowledgement of their impact in Cochrane systematic reviews in the mental health field. Methods: Systematic reviews published in the Cochrane Database of Systematic Reviews after January 1, 2009 by…

  3. Dealing with missing data in the MICROSCOPE space mission: An adaptation of inpainting to handle colored-noise data

    Science.gov (United States)

    Pires, Sandrine; Bergé, Joel; Baghi, Quentin; Touboul, Pierre; Métris, Gilles

    2016-12-01

    The MICROSCOPE space mission, launched on April 25, 2016, aims to test the weak equivalence principle (WEP) with a precision of 10⁻¹⁵. Reaching this performance requires an accurate and robust data analysis method, especially since the possible WEP violation signal will be dominated by a strongly colored noise. An important complication is brought by the fact that some values will be missing—therefore, the measured time series will not be strictly regularly sampled. Those missing values induce a spectral leakage that significantly increases the noise in Fourier space, where the WEP violation signal is looked for, thereby degrading the scientific return. Recently, we developed an inpainting algorithm to correct the MICROSCOPE data for missing values. This code has been integrated in the official MICROSCOPE data processing and analysis pipeline because it enables us to significantly measure an equivalence principle violation (EPV) signal in a model-independent way, in the inertial satellite configuration. In this work, we present several improvements to the method that may allow us to reach the MICROSCOPE requirements for both the inertial and spin satellite configurations. The main improvement has been obtained using a prior on the power spectrum of the colored noise that can be directly derived from the incomplete data. We show that after reconstructing missing values with this new algorithm, a least-squares fit may allow us to significantly measure an EPV signal with a 0.96 × 10⁻¹⁵ precision in the inertial mode and a 1.20 × 10⁻¹⁵ precision in the spin mode. Although the inpainting method presented in this paper has been optimized for the MICROSCOPE data, it remains sufficiently general to be used in the general context of missing data in time series dominated by an unknown colored noise. The improved inpainting software, called inpainting for colored-noise dominated signals, is freely available at http://www.cosmostat.org/software/icon.
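
    The leakage problem described above can be illustrated generically in R (a toy sketch, unrelated to the MICROSCOPE pipeline): deleting samples from a pure tone spreads its power across Fourier frequencies, raising the apparent noise floor exactly where a weak signal would be sought.

      n <- 1024
      t <- seq_len(n)
      x <- sin(2 * pi * 0.05 * t)            # a pure tone at frequency 0.05
      gaps <- sample(n, size = 0.2 * n)      # drop 20% of the samples
      xg <- x
      xg[gaps] <- 0                          # crude zero-fill of the gaps
      p_full <- abs(fft(x))^2 / n            # periodogram of the complete series
      p_gap  <- abs(fft(xg))^2 / n           # periodogram of the gapped series
      # Away from the tone, p_gap shows a raised floor (spectral leakage),
      # which is what inpainting of the missing values is designed to suppress.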

  4. Methods for significance testing of categorical covariates in logistic regression models after multiple imputation: power and applicability analysis

    NARCIS (Netherlands)

    Eekhout, I.; Wiel, M.A. van de; Heymans, M.W.

    2017-01-01

    Background. Multiple imputation is a recommended method to handle missing data. For significance testing after multiple imputation, Rubin's Rules (RR) are easily applied to pool parameter estimates. In a logistic regression model, to consider whether a categorical covariate with more than two levels significantly contributes to the model, different methods are available.

  5. Flexible Imputation of Missing Data

    CERN Document Server

    van Buuren, Stef

    2012-01-01

    Missing data form a problem in every scientific discipline, yet the techniques required to handle them are complicated and often lacking. One of the great ideas in statistical science--multiple imputation--fills gaps in the data with plausible values, the uncertainty of which is coded in the data itself. It also solves other problems, many of which are missing data problems in disguise. Flexible Imputation of Missing Data is supported by many examples using real data taken from the author's vast experience of collaborative research, and presents a practical guide for handling missing data.
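
    The book's central technique, multiple imputation, has a canonical R implementation in the author's mice package; a minimal sketch (using the nhanes example data shipped with mice) imputes M = 5 completed datasets, fits a model to each, and pools the results with Rubin's rules.

      library(mice)

      imp  <- mice(nhanes, m = 5, printFlag = FALSE)  # 5 completed datasets
      fits <- with(imp, lm(chl ~ bmi + age))          # fit the model to each
      summary(pool(fits))                             # pool with Rubin's rules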

  6. Complex Covariance

    Directory of Open Access Journals (Sweden)

    Frieder Kleefeld

    2013-01-01

    Full Text Available According to a generalized correspondence principle, the classical limit of a non-Hermitian quantum theory describing quantum degrees of freedom is expected to be the well-known classical mechanics of classical degrees of freedom in complex phase space, i.e., a phase space spanned by complex-valued space and momentum coordinates. As special relativity was developed by Einstein merely for real-valued space-time and four-momentum, we will try to understand how special relativity and covariance can be extended to complex-valued space-time and four-momentum. Our considerations will lead us not only to an unconventional derivation of Lorentz transformations for complex-valued velocities, but also to the non-Hermitian Klein-Gordon and Dirac equations, which are to lay the foundations of a non-Hermitian quantum theory.

  7. Multivariate covariance generalized linear models

    DEFF Research Database (Denmark)

    Bonat, W. H.; Jørgensen, Bent

    2016-01-01

    We propose a general framework for non-normal multivariate data analysis called multivariate covariance generalized linear models, designed to handle multivariate response variables, along with a wide range of temporal and spatial correlation structures defined in terms of a covariance link function combined with a matrix linear predictor involving known matrices. The method is motivated by three data examples that are not easily handled by existing methods. The first example concerns multivariate count data, the second involves response variables of mixed types, combined with repeated measures and longitudinal structures, and the third involves a spatiotemporal analysis of rainfall data. The models take non-normality into account in the conventional way by means of a variance function, and the mean structure is modelled by means of a link function and a linear predictor.

  8. Covariant quantum Markovian evolutions

    Science.gov (United States)

    Holevo, A. S.

    1996-04-01

    Quantum Markovian master equations with generally unbounded generators, having physically relevant symmetries, such as Weyl, Galilean or boost covariance, are characterized. It is proven in particular that a fully Galilean covariant zero spin Markovian evolution reduces to the free motion perturbed by a covariant stochastic process with independent stationary increments in the classical phase space. A general form of the boost covariant Markovian master equation is discussed and a formal dilation to the Langevin equation driven by quantum Boson noises is described.

  9. Exploratory factor analysis and reliability analysis with missing data: A simple method for SPSS users

    Directory of Open Access Journals (Sweden)

    Bruce Weaver

    2014-09-01

    Full Text Available Missing data is a frequent problem for researchers conducting exploratory factor analysis (EFA) or reliability analysis. The SPSS FACTOR procedure allows users to select listwise deletion, pairwise deletion or mean substitution as a method for dealing with missing data. The shortcomings of these methods are well-known. Graham (2009) argues that a much better way to deal with missing data in this context is to use a matrix of expectation maximization (EM) covariances (or correlations) as input for the analysis. SPSS users who have the Missing Values Analysis add-on module can obtain vectors of EM means and standard deviations plus EM correlation and covariance matrices via the MVA procedure. But unfortunately, MVA has no /MATRIX subcommand, and therefore cannot write the EM correlations directly to a matrix dataset of the type needed as input to the FACTOR and RELIABILITY procedures. We describe two macros that (in conjunction with an intervening MVA command) carry out the data management steps needed to create two matrix datasets, one containing EM correlations and the other EM covariances. Either of those matrix datasets can then be used as input to the FACTOR procedure, and the EM correlations can also be used as input to RELIABILITY. We provide an example that illustrates the use of the two macros to generate the matrix datasets and how to use those datasets as input to the FACTOR and RELIABILITY procedures. We hope that this simple method for handling missing data will prove useful to both students and researchers who are conducting EFA or reliability analysis.
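
    The same trick can be sketched in R (an illustrative sketch, not the authors' SPSS macros): estimate an EM covariance matrix with Schafer's norm package (assumed installed), then feed it to factanal() in place of the raw incomplete data.

      library(norm)  # EM estimation for incomplete multivariate normal data

      x <- as.matrix(airquality[, 1:4])       # Ozone and Solar.R contain NAs
      s <- prelim.norm(x)                     # preliminary bookkeeping pass
      theta <- em.norm(s, showits = FALSE)    # EM estimates of mean and covariance
      pars  <- getparam.norm(s, theta)        # list with components mu and sigma
      dimnames(pars$sigma) <- list(colnames(x), colnames(x))
      fa <- factanal(covmat = pars$sigma, factors = 1, n.obs = nrow(x))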

  10. Covariance evaluation system

    Energy Technology Data Exchange (ETDEWEB)

    Kawano, Toshihiko [Kyushu Univ., Fukuoka (Japan); Shibata, Keiichi

    1997-09-01

    A covariance evaluation system for the evaluated nuclear data library was established. The parameter estimation method and the least squares method with a spline function are used to generate the covariance data. Uncertainties of nuclear reaction model parameters are estimated from experimental data uncertainties, then the covariance of the evaluated cross sections is calculated by means of error propagation. Computer programs ELIESE-3, EGNASH4, ECIS, and CASTHY are used. Covariances of {sup 238}U reaction cross sections were calculated with this system. (author)
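
    The error-propagation step mentioned above is, in generic form, the usual sandwich rule: if the model parameters p have covariance Cp and S is the sensitivity matrix of the calculated cross sections with respect to p, the cross-section covariance is S Cp Sᵀ. A toy R sketch (illustrative numbers only, not evaluated nuclear data):

      Cp <- matrix(c(0.04, 0.01,
                     0.01, 0.09), 2, 2)          # toy parameter covariance
      S  <- matrix(c(1.2, 0.3, 0.5,
                     2.0, 0.1, 0.7), nrow = 3)   # sensitivities: 3 energies x 2 parameters
      Csig <- S %*% Cp %*% t(S)                  # propagated cross-section covariance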

  11. Missing Letters

    Directory of Open Access Journals (Sweden)

    Christopher Lee

    2011-11-01

    Full Text Available ABSTRACT: This paper tracks the symptom of 'missing letters' in order to connect the anxiety that permeates August Strindberg's life and works to his destiny as the carrier of his dead sister's crypt. In his 1887 essay "'Soul Murder' (A Propos Rosmersholm)", Strindberg reveals that the bottom line of his anxiety is not interpersonal conflict, but the potential for loss that always accompanies transmitted messages along their itineraries. By couching this threat of loss in the image of missing letters, Strindberg establishes the interchangeability between letters that go missing in transit and those missing letters that enter the corpus uninvited through the apertures of communication. Following the trajectory of these missing letters in his oeuvre (most notably in The Father, Miss Julie, and The Dance of Death I), the paper eventually locates at the (missing) dead center of Strindberg's literary corpus the phantom transmission from mother to son of the author's younger sister Eleonora.

  12. Coal Handling.

    Science.gov (United States)

    1981-04-01

    Directory excerpt of coal-handling equipment vendors: Engineering Co. (324 Barnhart St., Marion, OH 43302); FMC Corp., Material Handling Equip. Div. (Homer City, PA 15748); General Kenematics Corp. (777 Lake Zurich Rd., …); … (NJ 07055); Midwest Conveyor Co. (450-B E. Donovan Rd., Kansas City, KS 44711); Pennsylvania Crusher Corp. (P.O. Box 100, Broomau, PA 19008); Ramsey Engineering Co.

  13. Covariant w∞ gravity

    NARCIS (Netherlands)

    Bergshoeff, E.; Pope, C.N.; Stelle, K.S.

    1990-01-01

    We discuss the notion of higher-spin covariance in w∞ gravity. We show how a recently proposed covariant w∞ gravity action can be obtained from non-chiral w∞ gravity by making field redefinitions that introduce new gauge-field components with corresponding new gauge transformations.

  14. Comparison of Imputation Methods for Handling Missing Categorical Data with Univariate Pattern // Una comparación de métodos de imputación de variables categóricas con patrón univariado

    OpenAIRE

    Torres Munguía, Juan Armando

    2014-01-01

    This paper examines the sample proportions estimates in the presence of univariate missing categorical data. A database about smoking habits (2011 National Addiction Survey of Mexico) was used to create simulated yet realistic datasets at rates of 5% and 15% of missingness, each for the MCAR, MAR and MNAR mechanisms. Then the performance of six methods for addressing missingness is evaluated: listwise, mode imputation, random imputation, hot-deck, imputation by polytomous regression and random forests.

  15. Comparison of Imputation Methods for Handling Missing Categorical Data with Univariate Pattern|| Una comparación de métodos de imputación de variables categóricas con patrón univariado

    Directory of Open Access Journals (Sweden)

    Torres Munguía, Juan Armando

    2014-06-01

    Full Text Available This paper examines the sample proportions estimates in the presence of univariate missing categorical data. A database about smoking habits (2011 National Addiction Survey of Mexico) was used to create simulated yet realistic datasets at rates of 5% and 15% of missingness, each for the MCAR, MAR and MNAR mechanisms. Then the performance of six methods for addressing missingness is evaluated: listwise, mode imputation, random imputation, hot-deck, imputation by polytomous regression and random forests. Results showed that the most effective methods for dealing with missing categorical data in most of the scenarios assessed in this paper were the hot-deck and polytomous regression approaches.
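
    Hot-deck imputation, one of the two methods that performed best above, is available off the shelf in R's VIM package; a minimal sketch on VIM's bundled sleep data (a stand-in for the survey data, which is not reproduced here):

      library(VIM)

      data(sleep, package = "VIM")          # example data with missing values
      completed <- hotdeck(sleep, variable = "NonD", imp_var = FALSE)  # donor-based fill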

  16. Handling Metalloproteinases.

    Science.gov (United States)

    Fridrich, Sven; Karmilin, Konstantin; Stöcker, Walter

    2016-02-02

    Substrate cleavage by metalloproteinases involves nucleophilic attack on the scissile peptide bond by a water molecule that is polarized by a catalytic metal, usually a zinc ion, and a general base, usually the carboxyl group of a glutamic acid side chain. The zinc ion is most often complexed by imidazole nitrogens of histidine side chains. This arrangement suggests that the physiological pH optimum of most metalloproteinases is in the neutral range. In addition to their catalytic metal ion, many metalloproteinases contain additional transition metal or alkaline earth ions, which are structurally important or modulate the catalytic activity. As a consequence, these enzymes are generally sensitive to metal chelators. Moreover, the catalytic metal can be displaced by adventitious metal ions from buffers or biological fluids, which may fundamentally alter the catalytic function. Therefore, handling, purification, and assaying of metalloproteinases require specific precautions to ensure their stability. Copyright © 2016 John Wiley & Sons, Inc.

  17. Covariance Applications with Kiwi

    Science.gov (United States)

    Mattoon, C. M.; Brown, D.; Elliott, J. B.

    2012-05-01

    The Computational Nuclear Physics group at Lawrence Livermore National Laboratory (LLNL) is developing a new tool, named `Kiwi', that is intended as an interface between the covariance data increasingly available in major nuclear reaction libraries (including ENDF and ENDL) and large-scale Uncertainty Quantification (UQ) studies. Kiwi is designed to integrate smoothly into large UQ studies, using the covariance matrix to generate multiple variations of nuclear data. The code has been tested using critical assemblies as a test case, and is being integrated into LLNL's quality assurance and benchmarking for nuclear data.
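
    Kiwi's core operation, drawing correlated variations of nuclear data from a covariance matrix, reduces to standard multivariate-normal sampling; a generic R sketch (toy numbers, not Kiwi itself) uses the Cholesky factor L of the covariance, so that draws x = mu + Lz with z standard normal have the desired covariance.

      mu <- c(1.0, 2.0, 1.5)                      # toy mean cross sections
      C  <- matrix(c(0.010, 0.004, 0.002,
                     0.004, 0.020, 0.006,
                     0.002, 0.006, 0.015), 3, 3)  # toy covariance (symmetric, positive definite)
      L  <- t(chol(C))                            # lower-triangular Cholesky factor
      draws <- replicate(1000, mu + as.vector(L %*% rnorm(3)))  # correlated variations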

  18. Covariance Applications with Kiwi

    Directory of Open Access Journals (Sweden)

    Elliott J.B.

    2012-05-01

    Full Text Available The Computational Nuclear Physics group at Lawrence Livermore National Laboratory (LLNL) is developing a new tool, named 'Kiwi', that is intended as an interface between the covariance data increasingly available in major nuclear reaction libraries (including ENDF and ENDL) and large-scale Uncertainty Quantification (UQ) studies. Kiwi is designed to integrate smoothly into large UQ studies, using the covariance matrix to generate multiple variations of nuclear data. The code has been tested using critical assemblies as a test case, and is being integrated into LLNL's quality assurance and benchmarking for nuclear data.

  19. Métodos de imputación para el tratamiento de datos faltantes: aplicación mediante R/Splus = Imputation methods to handle the problem of missing data: an application using R/Splus

    Directory of Open Access Journals (Sweden)

    Muñoz Rosas, Juan Francisco

    2009-01-01

    Full Text Available Missing values are a common problem in many sampling surveys, and imputation is usually employed to compensate for non-response. Most imputation methods are based upon the problem of the mean estimation and its variance, and they also assume simple sampling designs such as simple random sampling without replacement. In this paper we describe some imputation methods and define them under a general sampling design. Different response mechanisms are also discussed. Assuming some populations based upon real data extracted from the context of the economy and business, Monte Carlo simulations are carried out to analyze the properties of the various imputation methods in the estimation of parameters such as distribution functions and quantiles. So that the imputation methods described in this paper can be implemented and used more easily, their code is provided in the R and Splus programming languages.

  20. Covariant approximation averaging

    CERN Document Server

    Shintani, Eigo; Blum, Thomas; Izubuchi, Taku; Jung, Chulwoo; Lehner, Christoph

    2014-01-01

    We present a new class of statistical error reduction techniques for Monte-Carlo simulations. Using covariant symmetries, we show that correlation functions can be constructed from inexpensive approximations without introducing any systematic bias in the final result. We introduce a new class of covariant approximation averaging techniques, known as all-mode averaging (AMA), in which the approximation takes account of contributions of all eigenmodes through the inverse of the Dirac operator computed from the conjugate gradient method with a relaxed stopping condition. In this paper we compare the performance and computational cost of our new method with traditional methods using correlation functions and masses of the pion, nucleon, and vector meson in $N_f=2+1$ lattice QCD using domain-wall fermions. This comparison indicates that AMA significantly reduces statistical errors in Monte-Carlo calculations over conventional methods for the same cost.

  1. Covariant approximation averaging

    Science.gov (United States)

    Shintani, Eigo; Arthur, Rudy; Blum, Thomas; Izubuchi, Taku; Jung, Chulwoo; Lehner, Christoph

    2015-06-01

    We present a new class of statistical error reduction techniques for Monte Carlo simulations. Using covariant symmetries, we show that correlation functions can be constructed from inexpensive approximations without introducing any systematic bias in the final result. We introduce a new class of covariant approximation averaging techniques, known as all-mode averaging (AMA), in which the approximation takes account of contributions of all eigenmodes through the inverse of the Dirac operator computed from the conjugate gradient method with a relaxed stopping condition. In this paper we compare the performance and computational cost of our new method with traditional methods using correlation functions and masses of the pion, nucleon, and vector meson in Nf=2+1 lattice QCD using domain-wall fermions. This comparison indicates that AMA significantly reduces statistical errors in Monte Carlo calculations over conventional methods for the same cost.

  2. Some observations on interpolating gauges and non-covariant gauges

    Indian Academy of Sciences (India)

    tion that are not normally taken into account in the BRST formalism that ignores the ε-term, and that they are characteristic of the way the singularities in propagators are handled. We argue that a prescription, in general, will require renormalization; if at all it is to be viable. Keywords. Non-covariant gauges; interpolating ...

  3. Covariant Magnetic Connection Hypersurfaces

    CERN Document Server

    Pegoraro, F

    2016-01-01

    In the single fluid, nonrelativistic, ideal-Magnetohydrodynamic (MHD) plasma description magnetic field lines play a fundamental role by defining dynamically preserved "magnetic connections" between plasma elements. Here we show how the concept of magnetic connection needs to be generalized in the case of a relativistic MHD description where we require covariance under arbitrary Lorentz transformations. This is performed by defining 2-D {\\it magnetic connection hypersurfaces} in the 4-D Minkowski space. This generalization accounts for the loss of simultaneity between spatially separated events in different frames and is expected to provide a powerful insight into the 4-D geometry of electromagnetic fields when ${\\bf E} \\cdot {\\bf B} = 0$.

  4. Earth Observing System Covariance Realism

    Science.gov (United States)

    Zaidi, Waqar H.; Hejduk, Matthew D.

    2016-01-01

    The purpose of covariance realism is to properly size a primary object's covariance in order to add validity to the calculation of the probability of collision. The covariance realism technique in this paper consists of three parts: collection/calculation of definitive state estimates through orbit determination, calculation of covariance realism test statistics at each covariance propagation point, and proper assessment of those test statistics. An empirical cumulative distribution function (ECDF) Goodness-of-Fit (GOF) method is employed to determine if a covariance is properly sized by comparing the empirical distribution of Mahalanobis distance calculations to the hypothesized parent 3-DoF chi-squared distribution. To realistically size a covariance for collision probability calculations, this study uses a state noise compensation algorithm that adds process noise to the definitive epoch covariance to account for uncertainty in the force model. Process noise is added until the GOF tests pass a group significance level threshold. The results of this study indicate that when outliers attributed to persistently high or extreme levels of solar activity are removed, the aforementioned covariance realism compensation method produces a tuned covariance with up to 80 to 90% of the covariance propagation timespan passing the GOF tests (against a 60% minimum passing threshold), a quite satisfactory and useful result.
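
    The test statistic at the heart of this approach can be sketched generically in R (toy values, not EOS data): if the covariance P is realistically sized, the squared Mahalanobis distances of the state residuals should follow a 3-DoF chi-squared distribution, which an ECDF-based test can check.

      set.seed(1)
      P <- diag(c(1.0, 2.0, 0.5))                             # toy 3x3 position covariance
      resid <- MASS::mvrnorm(200, mu = rep(0, 3), Sigma = P)  # simulated residuals
      d2 <- mahalanobis(resid, center = rep(0, 3), cov = P)   # squared Mahalanobis distances
      ks.test(d2, "pchisq", df = 3)                           # goodness-of-fit against chi-squared(3)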

  5. Deriving covariant holographic entanglement

    Energy Technology Data Exchange (ETDEWEB)

    Dong, Xi [School of Natural Sciences, Institute for Advanced Study, Princeton, NJ 08540 (United States); Lewkowycz, Aitor [Jadwin Hall, Princeton University, Princeton, NJ 08544 (United States); Rangamani, Mukund [Center for Quantum Mathematics and Physics (QMAP), Department of Physics, University of California, Davis, CA 95616 (United States)

    2016-11-07

    We provide a gravitational argument in favour of the covariant holographic entanglement entropy proposal. In general time-dependent states, the proposal asserts that the entanglement entropy of a region in the boundary field theory is given by a quarter of the area of a bulk extremal surface in Planck units. The main element of our discussion is an implementation of an appropriate Schwinger-Keldysh contour to obtain the reduced density matrix (and its powers) of a given region, as is relevant for the replica construction. We map this contour into the bulk gravitational theory, and argue that the saddle point solutions of these replica geometries lead to a consistent prescription for computing the field theory Rényi entropies. In the limiting case where the replica index is taken to unity, a local analysis suffices to show that these saddles lead to the extremal surfaces of interest. We also comment on various properties of holographic entanglement that follow from this construction.

  6. The handling of missing binary data in language research

    NARCIS (Netherlands)

    Pichette, François; Béland, Sébastien; Jolani, Shahab|info:eu-repo/dai/nl/352791330; Leśniewska, Justyna

    2015-01-01

    Researchers are frequently confronted with unanswered questions or items on their questionnaires and tests, due to factors such as item difficulty, lack of testing time, or participant distraction. This paper first presents results from a poll confirming previous claims (Rietveld & van Hout, 2006;

  7. General Galilei Covariant Gaussian Maps

    Science.gov (United States)

    Gasbarri, Giulio; Toroš, Marko; Bassi, Angelo

    2017-09-01

    We characterize general non-Markovian Gaussian maps which are covariant under Galilean transformations. In particular, we consider translational and Galilean covariant maps and show that they reduce to the known Holevo result in the Markovian limit. We apply the results to discuss measures of macroscopicity based on classicalization maps, specifically addressing dissipation, Galilean covariance and non-Markovianity. We further suggest a possible generalization of the macroscopicity measure defined by Nimmrichter and Hornberger [Phys. Rev. Lett. 110, 16 (2013)].

  8. Breast Cancer and Modifiable Lifestyle Factors in Argentinean Women: Addressing Missing Data in a Case-Control Study

    Science.gov (United States)

    Coquet, Julia Becaria; Tumas, Natalia; Osella, Alberto Ruben; Tanzi, Matteo; Franco, Isabella; Diaz, Maria Del Pilar

    2016-01-01

    A number of studies have evidenced the effect of modifiable lifestyle factors such as diet, breastfeeding and nutritional status on breast cancer risk. However, none have addressed the missing data problem in nutritional epidemiologic research in South America. Missing data are a frequent problem in breast cancer studies and epidemiological settings in general, and estimates of effect obtained from these studies may be biased if no appropriate method for handling missing data is applied. We performed multiple imputation for missing values on covariates in a breast cancer case-control study of Córdoba (Argentina) to optimize risk estimates. Data were obtained from a breast cancer case-control study conducted from 2008 to 2015 (318 cases, 526 controls). Complete case analysis and multiple imputation using chained equations were the methods applied to estimate the effects of a Traditional dietary pattern and other recognized factors associated with breast cancer. Physical activity and socioeconomic status were imputed. Logistic regression models were performed. When complete case analysis was performed, only 31% of women were considered. Although a positive association of the Traditional dietary pattern with breast cancer was observed with both approaches (complete case analysis OR=1.3, 95%CI=1.0-1.7; multiple imputation OR=1.4, 95%CI=1.2-1.7), effects of other covariates, like BMI and breastfeeding, were only identified when multiple imputation was considered. A Traditional dietary pattern, BMI and breastfeeding are associated with the occurrence of breast cancer in this Argentinean population when multiple imputation is appropriately performed. Multiple imputation is suggested for epidemiologic studies in Latin America to optimize effect estimates in the future. PMID:27892664

  9. Miss Lora juveelikauplus = Miss Lora jewellery store

    Index Scriptorium Estoniae

    2009-01-01

    On the interior design of the Miss Lora jewellery store in the Fama shopping centre in Narva (Tallinna mnt. 19c). Interior architects: Annes Arro and Hanna Karits. The shop's fittings (display cabinets, carpet, light fixtures) were made to special order. A list of the interior architects' most important works is included.

  10. Comparison of population-averaged and cluster-specific models for the analysis of cluster randomized trials with missing binary outcomes: a simulation study

    Directory of Open Access Journals (Sweden)

    Ma Jinhui

    2013-01-01

    Full Text Available Abstract. Background: The objective of this simulation study is to compare the accuracy and efficiency of population-averaged (i.e., generalized estimating equations (GEE)) and cluster-specific (i.e., random-effects logistic regression (RELR)) models for analyzing data from cluster randomized trials (CRTs) with missing binary responses. Methods: In this simulation study, clustered responses were generated from a beta-binomial distribution. The number of clusters per trial arm, the number of subjects per cluster, the intra-cluster correlation coefficient, and the percentage of missing data were allowed to vary. Under the assumption of covariate-dependent missingness, missing outcomes were handled by complete case analysis, standard multiple imputation (MI) and within-cluster MI strategies. Data were analyzed using GEE and RELR. Performance of the methods was assessed using standardized bias, empirical standard error, root mean squared error (RMSE), and coverage probability. Results: GEE performs well on all four measures — provided the downward bias of the standard error (when the number of clusters per arm is small) is adjusted appropriately — under the following scenarios: complete case analysis for CRTs with a small amount of missing data; standard MI for CRTs with variance inflation factor (VIF 50. RELR performs well only when a small amount of data was missing and complete case analysis was applied. Conclusion: GEE performs well as long as appropriate missing data strategies are adopted based on the design of CRTs and the percentage of missing data. In contrast, RELR does not perform well when either the standard or the within-cluster MI strategy is applied prior to the analysis.

  11. Covariance Models for Hydrological Applications

    Science.gov (United States)

    Hristopulos, Dionissios

    2014-05-01

    This methodological contribution aims to present some new covariance models with applications in the stochastic analysis of hydrological processes. More specifically, we present explicit expressions for radially symmetric, non-differentiable, Spartan covariance functions in one, two, and three dimensions. The Spartan covariance parameters include a characteristic length, an amplitude coefficient, and a rigidity coefficient which determines the shape of the covariance function. Different expressions are obtained depending on the value of the rigidity coefficient and the dimensionality. If the value of the rigidity coefficient is much larger than one, the Spartan covariance function exhibits multiscaling. Spartan covariance models are more flexible than the classical geostatistical models (e.g., spherical, exponential). Their non-differentiability makes them suitable for modelling the properties of geological media. We also present a family of radially symmetric, infinitely differentiable Bessel-Lommel covariance functions which are valid in any dimension. These models involve combinations of Bessel and Lommel functions. They provide a generalization of the J-Bessel covariance function, and they can be used to model smooth processes with an oscillatory decay of correlations. We discuss the dependence of the integral range of the Spartan and Bessel-Lommel covariance functions on the parameters. We point out that the dependence is not uniquely specified by the characteristic length, unlike the classical geostatistical models. Finally, we define and discuss the use of the generalized spectrum for characterizing different correlation length scales; the spectrum is defined in terms of an exponent α. We show that the spectrum values obtained for exponent values less than one can be used to discriminate between mean-square continuous but non-differentiable random fields. References [1] D. T. Hristopulos and S. Elogne, 2007. Analytic properties and covariance functions of

  12. A cautionary note on generalized linear models for covariance of unbalanced longitudinal data

    KAUST Repository

    Huang, Jianhua Z.

    2012-03-01

    Missing data in longitudinal studies can create enormous challenges in data analysis when coupled with the positive-definiteness constraint on a covariance matrix. For complete balanced data, the Cholesky decomposition of a covariance matrix makes it possible to remove the positive-definiteness constraint and use a generalized linear model setup to jointly model the mean and covariance using covariates (Pourahmadi, 2000). However, this approach may not be directly applicable when the longitudinal data are unbalanced, as coherent regression models for the dependence across all times and subjects may not exist. Within the existing generalized linear model framework, we show how to overcome this and other challenges by embedding the covariance matrix of the observed data for each subject in a larger covariance matrix and employing the familiar EM algorithm to compute the maximum likelihood estimates of the parameters and their standard errors. We illustrate and assess the methodology using real data sets and simulations. © 2011 Elsevier B.V.
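
    The Pourahmadi (2000) device cited above can be shown in a few lines of R (a generic sketch): the modified Cholesky decomposition T Σ Tᵀ = D re-expresses a covariance matrix as unconstrained autoregressive coefficients (the sub-diagonal entries of T) and innovation variances (the diagonal of D), which is what lets a generalized linear model be placed on the covariance.

      Sigma <- toeplitz(c(1, 0.5, 0.25))   # toy longitudinal covariance
      L  <- t(chol(Sigma))                 # Sigma = L L'
      Tm <- diag(diag(L)) %*% solve(L)     # unit lower-triangular T
      D  <- Tm %*% Sigma %*% t(Tm)         # diagonal innovation variances (T Sigma T' = D)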

  13. Treatment of Nuclear Data Covariance Information in Sample Generation

    Energy Technology Data Exchange (ETDEWEB)

    Swiler, Laura Painton [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Adams, Brian M. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Wieselquist, William [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States). Reactor and Nuclear Systems Division

    2017-10-01

    This report summarizes a NEAMS (Nuclear Energy Advanced Modeling and Simulation) project focused on developing a sampling capability that can handle the challenges of generating samples from nuclear cross-section data. The covariance information between energy groups tends to be very ill-conditioned and thus poses a problem for traditional methods of generating correlated samples. This report outlines a method that addresses sample generation from cross-section matrices.
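
    One common workaround for such ill-conditioning (a generic sketch, not necessarily the report's method) is to symmetrize the matrix and clip its tiny or negative eigenvalues before attempting a factorization:

      # Replace eigenvalues below a floor so the matrix can be factored safely.
      nearest_pd <- function(C, eps = 1e-10) {
        e <- eigen((C + t(C)) / 2, symmetric = TRUE)   # symmetrize, then decompose
        vals <- pmax(e$values, eps * max(e$values))    # clip tiny/negative eigenvalues
        e$vectors %*% diag(vals) %*% t(e$vectors)
      }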

  14. Piecewise exponential survival trees with time-dependent covariates.

    Science.gov (United States)

    Huang, X; Chen, S; Soong, S J

    1998-12-01

    Survival tree methods are nonparametric alternatives to semiparametric Cox regression in survival analysis. In this paper, a tree-based method for censored survival data with time-dependent covariates is proposed. The proposed method assumes a very general model for the hazard function and is fully nonparametric. The recursive partitioning algorithm uses the likelihood estimation procedure to grow trees under a piecewise exponential structure that handles time-dependent covariates in a parallel way to time-independent covariates. In general, the estimated hazard at a node gives the risk for a group of individuals during a specific time period. Both cross-validation and bootstrap resampling techniques are implemented in the tree selection procedure. The performance of the proposed survival trees method is shown to be good through simulation and application to real data.
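
    The piecewise exponential structure can be sketched generically in R (not the authors' tree software): split follow-up time into intervals with survival::survSplit, then fit a Poisson GLM with a log-exposure offset, so each interval term gives a piecewise-constant hazard and time-dependent covariates are handled by the same row-splitting.

      library(survival)

      lung2 <- transform(lung, status = status - 1)   # recode to a 0/1 event indicator
      ped <- survSplit(Surv(time, status) ~ age + sex, data = lung2,
                       cut = c(180, 360), episode = "interval")  # split follow-up
      fit <- glm(status ~ factor(interval) + age + sex + offset(log(time - tstart)),
                 family = poisson, data = ped)        # piecewise exponential hazards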

  15. Missing Boxes in Central Europe

    DEFF Research Database (Denmark)

    Prockl, Günter; Weibrecht Kristensen, Kirsten

    2015-01-01

    The Chinese New Year is an event that obviously happens every year. Every year, however, it also causes severe problems for the companies involved in the industry, in the form of missing containers throughout the chain, and in particular in the European hinterland. Illustrated by the symptoms of the Chinese New Year event, the case reveals underlying key challenges for responsible management in a network of different actors that are dependent on each other but also have responsibilities for their own business domains. Shipping companies, for instance, order fewer containers for new … of their supply chains, e.g. handling containers again close to the ports. On an overall level this also impacts the viability and sustainability of the transportation network. The case deals with a fictional company built on a blend of elements that have been taken from different companies.

  16. Dummy covariates in CUB models

    Directory of Open Access Journals (Sweden)

    Maria Iannario

    2013-05-01

    Full Text Available In this paper we discuss the use of dummy variables as sensible covariates in a class of statistical models which aim at explaining subjects' preferences with respect to several items. After a brief introduction to CUB models, the work considers statistical interpretations of dummy covariates. Then, a simulation study is performed to evaluate the discriminatory power of an asymptotic test among sub-populations. Some empirical evidence and concluding remarks end the paper.

  17. Methods for significance testing of categorical covariates in logistic regression models after multiple imputation: power and applicability analysis.

    Science.gov (United States)

    Eekhout, Iris; van de Wiel, Mark A; Heymans, Martijn W

    2017-08-22

    Multiple imputation is a recommended method to handle missing data. For significance testing after multiple imputation, Rubin's Rules (RR) are easily applied to pool parameter estimates. In a logistic regression model, to consider whether a categorical covariate with more than two levels significantly contributes to the model, different methods are available. For example, pooling chi-square tests with multiple degrees of freedom, pooling likelihood ratio test statistics, and pooling based on the covariance matrix of the regression model. These methods are more complex than RR and are not available in all mainstream statistical software packages. In addition, they do not always obtain optimal power levels. We argue that the median of the p-values from the overall significance tests from the analyses on the imputed datasets can be used as an alternative pooling rule for categorical variables. The aim of the current study is to compare different methods to test a categorical variable for significance after multiple imputation in terms of applicability and power. In a large simulation study, we demonstrated the control of the type I error and the power levels of different pooling methods for categorical variables. This simulation study showed that for non-significant categorical covariates the type I error is controlled, and that the statistical power of the median pooling rule was at least equal to that of current multiple-parameter tests. An empirical data example showed similar results. It can therefore be concluded that using the median of the p-values from the imputed data analyses is an attractive and easy-to-use alternative method for significance testing of categorical variables.
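
    The median-p pooling rule described above is easy to sketch with the mice package in R (using the nhanes2 example data shipped with mice; the rule itself is just the median of the per-imputation overall test p-values for the multi-level factor):

      library(mice)

      imp <- mice(nhanes2, m = 10, printFlag = FALSE)    # 10 completed datasets
      pvals <- sapply(seq_len(10), function(m) {
        fit <- glm(hyp ~ age + bmi, family = binomial, data = complete(imp, m))
        anova(fit, test = "LRT")["age", "Pr(>Chi)"]      # overall test of the 3-level factor age
      })
      median(pvals)                                      # the median-p pooling rule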

  18. Missing data in alcohol clinical trials: a comparison of methods.

    Science.gov (United States)

    Hallgren, Kevin A; Witkiewitz, Katie

    2013-12-01

    The rate of participant attrition in alcohol clinical trials is often substantial and can cause significant issues with regard to the handling of missing data in statistical analyses of treatment effects. It is common for researchers to assume that missing data is indicative of participant relapse, and under that assumption, many researchers have relied on setting all missing values to the worst-case scenario for the outcome (e.g., missing = heavy drinking). This sort of single-imputation method has been criticized for producing biased results in other areas of clinical research, but has not been evaluated within the context of alcohol clinical trials, and many alcohol researchers continue to use the missing = heavy drinking assumption. Data from the COMBINE study, a multisite randomized clinical trial, were used to generate simulated situations of missing data under a variety of conditions and assumptions. We manipulated the sample size (n = 200, 500, and 1,000) and dropout rate (5, 10, 25, 30%) under 3 missing data assumptions (missing completely at random, missing at random, and missing not at random). We then examined the association between receiving naltrexone and heavy drinking during the first 10 weeks following treatment using 5 methods for treating missing data (complete case analysis [CCA], last observation carried forward [LOCF], missing = heavy drinking, multiple imputation [MI], and full information maximum likelihood [FIML]). CCA, LOCF, and missing = heavy drinking produced the most biased naltrexone effect estimates and standard errors under conditions that are likely to exist in randomized clinical trials. MI and FIML produced the least biased naltrexone effect estimates and standard errors. Assuming that missing = heavy drinking produces biased estimates of the treatment effect and should not be used to evaluate treatment effects in alcohol clinical trials. Copyright © 2013 by the Research Society on Alcoholism.
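
    Two of the single-value strategies compared above can be written as one-liners in R (toy vector; na.locf is from the zoo package), which makes the contrast with multiple imputation concrete: each produces a single filled-in dataset with no accounting for uncertainty.

      library(zoo)

      x <- c(3, NA, 5, NA, NA, 2)                   # toy drinks-per-week series
      na.locf(x)                                    # LOCF: 3 3 5 5 5 2
      replace(x, is.na(x), max(x, na.rm = TRUE))    # worst-case fill (missing = heavy drinking)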

  19. Handling in adults physiotherapy

    OpenAIRE

    Smutný, Michal

    2015-01-01

    The thesis Handling In Adults Physiotherapy summarizes the knowledge of respiratory handling as applied to adult patients. Part of the thesis also covers the relationship between body position and respiratory motor control. The experimental part consists of a clinical study with 10 COPD patients. The patients were treated in 3 positions with respiratory handling therapy. The results demonstrate a significant change in blood saturation after therapy in the side-lying position. It also proves appr…

  20. ERROR HANDLING IN INTEGRATION WORKFLOWS

    Directory of Open Access Journals (Sweden)

    Alexey M. Nazarenko

    2017-01-01

    Full Text Available Simulation experiments performed while solving multidisciplinary engineering and scientific problems require joint usage of multiple software tools. Further, when following a preset plan of experiment or searching for optimum solutions, the same sequence of calculations is run multiple times with various simulation parameters, input data, or conditions, while the overall workflow does not change. Automation of simulations like these requires implementing a workflow where tool execution and data exchange are usually controlled by a special type of software, an integration environment or platform. The result is an integration workflow (a platform-dependent implementation of some computing workflow) which, in the context of automation, is a composition of weakly coupled (in terms of communication intensity) typical subtasks. These compositions can then be decomposed back into a few workflow patterns (types of subtask interaction). The patterns, in their turn, can be interpreted as higher-level subtasks. This paper considers execution control and data exchange rules that should be imposed by the integration environment in the case of an error encountered by some integrated software tool. An error is defined as any abnormal behavior of a tool that invalidates its result data, thus disrupting the data flow within the integration workflow. The main requirement for the error handling mechanism implemented by the integration environment is to prevent abnormal termination of the entire workflow in case of missing intermediate results data. Error handling rules are formulated on the basic pattern level and on the level of a composite task that can combine several basic patterns as next-level subtasks. The cases where workflow behavior may differ depending on the user's purposes when an error takes place, and the error handling options that can be specified by the user, are also noted in the work.
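
    The main requirement above, that one tool's failure must not abort the whole workflow, looks like this in a minimal R sketch (generic, not the paper's platform): the step wrapper traps the error and propagates an explicit missing result instead of terminating.

      run_step <- function(step, input) {
        tryCatch(step(input),
                 error = function(e) {
                   message("step failed: ", conditionMessage(e))  # log the error
                   NULL                                           # explicit missing result
                 })
      }

      # Example: a failing tool does not kill the sequence of steps.
      bad_tool <- function(x) stop("solver diverged")
      result <- run_step(bad_tool, 42)   # returns NULL, workflow continues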

  1. Covariant Description of Isothermic Surfaces

    Science.gov (United States)

    Tafel, J.

    2016-12-01

    We present a covariant formulation of the Gauss-Weingarten equations and the Gauss-Mainardi-Codazzi equations for surfaces in 3-dimensional curved spaces. We derive a coordinate invariant condition on the first and second fundamental form which is locally necessary and sufficient for the surface to be isothermic. We show how to construct isothermic coordinates.

  2. Practices of Handling

    DEFF Research Database (Denmark)

    Ræbild, Ulla

    meanings seem to merge in the fashion design process, thus opening up for an embodied engagement with matter that entails direction giving, organizational management and negotiation. By seeing processes of handling as a key fashion methodological practice, it is possible to divert the discourse away from a dichotomized idea of design as combined, alternating or parallel processes of thinking and doing. In other words, the notion of handling is not about reflection in or on action, as brought to the fore by Schön (1984), but about reflection as action. Below the methodological macro level of handling, the paper

  3. Handling Pyrophoric Reagents

    Energy Technology Data Exchange (ETDEWEB)

    Alnajjar, Mikhail S.; Haynie, Todd O.

    2009-08-14

    Pyrophoric reagents are extremely hazardous. Special handling techniques are required to prevent contact with air and the resulting fire. This document provides several methods for working with pyrophoric reagents outside of an inert atmosphere.

  4. Flexible Bayesian Dynamic Modeling of Covariance and Correlation Matrices

    KAUST Repository

    Lan, Shiwei

    2017-11-08

    Modeling covariance (and correlation) matrices is a challenging problem due to the large dimensionality and positive-definiteness constraint. In this paper, we propose a novel Bayesian framework based on decomposing the covariance matrix into variance and correlation matrices. The highlight is that the correlations are represented as products of vectors on unit spheres. We propose a variety of distributions on spheres (e.g. the squared-Dirichlet distribution) to induce flexible prior distributions for covariance matrices that go beyond the commonly used inverse-Wishart prior. To handle the intractability of the resulting posterior, we introduce the adaptive Δ-Spherical Hamiltonian Monte Carlo. We also extend our structured framework to dynamic cases and introduce unit-vector Gaussian process priors for modeling the evolution of correlation among multiple time series. Using a Normal-Inverse-Wishart example, a simulated periodic process, and an analysis of local field potential data (collected from the hippocampus of rats performing a complex sequence memory task), we demonstrated the validity and effectiveness of our proposed framework for (dynamic) modeling of covariance and correlation matrices.
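
    The decomposition at the heart of this framework can be shown in base R (a trivial sketch): split a covariance matrix Σ into a diagonal matrix of standard deviations D and a correlation matrix R, so that Σ = D R D, with the priors then placed on the two pieces separately.

      Sigma <- matrix(c(4.0, 1.2,
                        1.2, 1.0), 2, 2)  # toy covariance matrix
      D <- diag(sqrt(diag(Sigma)))        # standard deviations
      R <- cov2cor(Sigma)                 # correlation matrix
      all.equal(D %*% R %*% D, Sigma)     # TRUE: Sigma = D R D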

  5. Frame covariant nonminimal multifield inflation

    Science.gov (United States)

    Karamitsos, Sotirios; Pilaftsis, Apostolos

    2018-02-01

    We introduce a frame-covariant formalism for inflation of scalar-curvature theories by adopting a differential geometric approach which treats the scalar fields as coordinates living on a field-space manifold. This ensures that our description of inflation is both conformally and reparameterization covariant. Our formulation gives rise to extensions of the usual Hubble and potential slow-roll parameters to generalized fully frame-covariant forms, which allow us to provide manifestly frame-invariant predictions for cosmological observables, such as the tensor-to-scalar ratio r, the spectral indices nR and nT, their runnings αR and αT, the non-Gaussianity parameter fNL, and the isocurvature fraction βiso. We examine the role of the field space curvature in the generation and transfer of isocurvature modes, and we investigate the effect of boundary conditions for the scalar fields at the end of inflation on the observable inflationary quantities. We explore the stability of the trajectories with respect to the boundary conditions by using a suitable sensitivity parameter. To illustrate our approach, we first analyze a simple minimal two-field scenario before studying a more realistic nonminimal model inspired by Higgs inflation. We find that isocurvature effects are greatly enhanced in the latter scenario and must be taken into account for certain values in the parameter space such that the model is properly normalized to the observed scalar power spectrum PR. Finally, we outline how our frame-covariant approach may be extended beyond the tree-level approximation through the Vilkovisky-De Witt formalism, which we generalize to take into account conformal transformations, thereby leading to a fully frame-invariant effective action at the one-loop level.

  6. Frame covariant nonminimal multifield inflation

    Directory of Open Access Journals (Sweden)

    Sotirios Karamitsos

    2018-02-01

    Full Text Available We introduce a frame-covariant formalism for inflation of scalar-curvature theories by adopting a differential geometric approach which treats the scalar fields as coordinates living on a field-space manifold. This ensures that our description of inflation is both conformally and reparameterization covariant. Our formulation gives rise to extensions of the usual Hubble and potential slow-roll parameters to generalized fully frame-covariant forms, which allow us to provide manifestly frame-invariant predictions for cosmological observables, such as the tensor-to-scalar ratio r, the spectral indices nR and nT, their runnings αR and αT, the non-Gaussianity parameter fNL, and the isocurvature fraction βiso. We examine the role of the field space curvature in the generation and transfer of isocurvature modes, and we investigate the effect of boundary conditions for the scalar fields at the end of inflation on the observable inflationary quantities. We explore the stability of the trajectories with respect to the boundary conditions by using a suitable sensitivity parameter. To illustrate our approach, we first analyze a simple minimal two-field scenario before studying a more realistic nonminimal model inspired by Higgs inflation. We find that isocurvature effects are greatly enhanced in the latter scenario and must be taken into account for certain values in the parameter space such that the model is properly normalized to the observed scalar power spectrum PR. Finally, we outline how our frame-covariant approach may be extended beyond the tree-level approximation through the Vilkovisky–De Witt formalism, which we generalize to take into account conformal transformations, thereby leading to a fully frame-invariant effective action at the one-loop level.

  7. Szekeres models: a covariant approach

    Science.gov (United States)

    Apostolopoulos, Pantelis S.

    2017-05-01

    We exploit the 1+1+2 formalism to covariantly describe the inhomogeneous and anisotropic Szekeres models. It is shown that an average scale length can be defined covariantly which satisfies a 2d equation of motion driven by the effective gravitational mass (EGM) contained in the dust cloud. The contributions to the EGM are encoded in the energy density of the dust fluid and the free gravitational field E_ab. We show that the quasi-symmetric property of the Szekeres models is justified through the existence of 3 independent intrinsic Killing vector fields (IKVFs). In addition the notions of the apparent and absolute apparent horizons are briefly discussed and we give an alternative gauge-invariant form to define them in terms of the kinematical variables of the spacelike congruences. We argue that the proposed program can be used in order to express Sachs' optical equations in a covariant form and analyze the confrontation of a spatially inhomogeneous irrotational overdense fluid model with the observational data.

  8. Netbooks The Missing Manual

    CERN Document Server

    Biersdorfer, J

    2009-01-01

    Netbooks are the hot new thing in PCs -- small, inexpensive laptops designed for web browsing, email, and working with web-based programs. But chances are you don't know how to choose a netbook, let alone use one. Not to worry: with this Missing Manual, you'll learn which netbook is right for you and how to set it up and use it for everything from spreadsheets for work to hobbies like gaming and photo sharing. Netbooks: The Missing Manual provides easy-to-follow instructions and lots of advice to help you: Learn the basics for using a Windows- or Linux-based netbookConnect speakers, printe

  9. PCs The Missing Manual

    CERN Document Server

    Karp, David

    2005-01-01

    Your vacuum comes with one. Even your blender comes with one. But your PC--something that costs a whole lot more and is likely to be used daily and for tasks of far greater importance and complexity--doesn't come with a printed manual. Thankfully, that's not a problem any longer: PCs: The Missing Manual explains everything you need to know about PCs, both inside and out, and how to keep them running smoothly and working the way you want them to work. A complete PC manual for both beginners and power users, PCs: The Missing Manual has something for everyone. PC novices will appreciate the una

  10. Results of Database Studies in Spine Surgery Can Be Influenced by Missing Data.

    Science.gov (United States)

    Basques, Bryce A; McLynn, Ryan P; Fice, Michael P; Samuel, Andre M; Lukasiewicz, Adam M; Bohl, Daniel D; Ahn, Junyoung; Singh, Kern; Grauer, Jonathan N

    2017-12-01

    National databases are increasingly being used for research in spine surgery; however, one limitation of such databases that has received sparse mention is the frequency of missing data. Studies using these databases often do not emphasize the percentage of missing data for each variable used and do not specify how patients with missing data are incorporated into analyses. This study uses the American College of Surgeons National Surgical Quality Improvement Program (ACS-NSQIP) database to examine whether different treatments of missing data can influence the results of spine studies. (1) What is the frequency of missing data fields for demographics, medical comorbidities, preoperative laboratory values, operating room times, and length of stay recorded in ACS-NSQIP? (2) Using three common approaches to handling missing data, how frequently do those approaches agree in terms of finding particular variables to be associated with adverse events? (3) Do different approaches to handling missing data influence the outcomes and effect sizes of an analysis testing for an association with these variables with occurrence of adverse events? Patients who underwent spine surgery between 2005 and 2013 were identified from the ACS-NSQIP database. A total of 88,471 patients undergoing spine surgery were identified. The most common procedures were anterior cervical discectomy and fusion, lumbar decompression, and lumbar fusion. Demographics, comorbidities, and perioperative laboratory values were tabulated for each patient, and the percent of missing data was noted for each variable. These variables were tested for an association with "any adverse event" using three separate multivariate regressions that used the most common treatments for missing data. In the first regression, patients with any missing data were excluded. In the second regression, missing data were treated as a negative or "reference" value; for continuous variables, the mean of each variable's reference range
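
    A minimal Python sketch of the three treatments compared in such studies, applied to simulated data (the variables, effect sizes, and 30% missingness are illustrative, not ACS-NSQIP values):

    ```python
    # Compare complete-case analysis, reference-value filling, and a crude
    # stochastic imputation for a partially missing lab value.
    import numpy as np
    import pandas as pd
    import statsmodels.api as sm

    rng = np.random.default_rng(0)
    n = 5000
    age = rng.normal(60, 10, n)
    albumin = rng.normal(4.0, 0.5, n)
    logit = -1 + 0.08 * (age - 60) - 0.9 * (albumin - 4.0)
    y = rng.binomial(1, 1 / (1 + np.exp(-logit)))
    df = pd.DataFrame({"age": age, "albumin": albumin, "adverse": y})
    df.loc[rng.random(n) < 0.30, "albumin"] = np.nan   # missing (here: MCAR)

    def albumin_coef(d):
        X = sm.add_constant(d[["age", "albumin"]])
        return sm.Logit(d["adverse"], X).fit(disp=0).params["albumin"]

    b_cc = albumin_coef(df.dropna())                      # 1) exclude patients
    b_ref = albumin_coef(df.fillna({"albumin": 4.0}))     # 2) fill "normal" value
    imp = df.copy()                                       # 3) stochastic fill
    miss = imp["albumin"].isna()
    imp.loc[miss, "albumin"] = rng.normal(df["albumin"].mean(),
                                          df["albumin"].std(), miss.sum())
    print(b_cc, b_ref, albumin_coef(imp))
    ```

    When missingness depends on outcome or comorbidity rather than being completely at random, the three estimates can diverge far more sharply, which is the point the study makes.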

  11. Graphical Models for Recovering Probabilistic and Causal Queries from Missing Data

    Science.gov (United States)

    2014-11-01

    …recommender systems which develop probabilistic models that explicitly incorporate the missing data mechanism (Marlin and Zemel [2009], Marlin et al. [2007, 2011]). Other methods for handling missing data can be classified into two groups: (a) inverse probability weighted methods and (b) imputation methods…

  12. Examining solutions to missing data in longitudinal nursing research.

    Science.gov (United States)

    Roberts, Mary B; Sullivan, Mary C; Winchester, Suzy B

    2017-04-01

    Longitudinal studies are highly valuable in pediatrics because they provide useful data about developmental patterns of child health and behavior over time. When data are missing, the value of the research is impacted. The study's purpose was to (1) introduce a three-step approach to assess and address missing data and (2) illustrate this approach using categorical and continuous-level variables from a longitudinal study of premature infants. A three-step approach with simulations was followed to assess the amount and pattern of missing data and to determine the most appropriate imputation method for the missing data. Patterns of missingness were Missing Completely at Random, Missing at Random, and Not Missing at Random. Missing continuous-level data were imputed using mean replacement, stochastic regression, multiple imputation, and fully conditional specification (FCS). Missing categorical-level data were imputed using last value carried forward, hot-decking, stochastic regression, and FCS. Simulations were used to evaluate these imputation methods under different patterns of missingness at different levels of missing data. The rate of missingness was 16-23% for continuous variables and 1-28% for categorical variables. FCS imputation provided the least difference in mean and standard deviation estimates for continuous measures. FCS imputation was acceptable for categorical measures. Results obtained through simulation reinforced and confirmed these findings. Significant investments are made in the collection of longitudinal data. The prudent handling of missing data can protect these investments and potentially improve the scientific information contained in pediatric longitudinal studies. © 2017 Wiley Periodicals, Inc.

  13. Bayesian modeling of the covariance structure for irregular longitudinal data using the partial autocorrelation function.

    Science.gov (United States)

    Su, Li; Daniels, Michael J

    2015-05-30

    In long-term follow-up studies, irregular longitudinal data are observed when individuals are assessed repeatedly over time but at uncommon and irregularly spaced time points. Modeling the covariance structure for this type of data is challenging, as it requires specification of a covariance function that is positive definite. Moreover, in certain settings, careful modeling of the covariance structure for irregular longitudinal data can be crucial in order to ensure no bias arises in the mean structure. Two common settings where this occurs are studies with 'outcome-dependent follow-up' and studies with 'ignorable missing data'. 'Outcome-dependent follow-up' occurs when individuals with a history of poor health outcomes had more follow-up measurements, and the intervals between the repeated measurements were shorter. When the follow-up time process only depends on previous outcomes, likelihood-based methods can still provide consistent estimates of the regression parameters, given that both the mean and covariance structures of the irregular longitudinal data are correctly specified and no model for the follow-up time process is required. For 'ignorable missing data', the missing data mechanism does not need to be specified, but valid likelihood-based inference requires correct specification of the covariance structure. In both cases, flexible modeling approaches for the covariance structure are essential. In this paper, we develop a flexible approach to modeling the covariance structure for irregular continuous longitudinal data using the partial autocorrelation function and the variance function. In particular, we propose semiparametric non-stationary partial autocorrelation function models, which do not suffer from complex positive definiteness restrictions like the autocorrelation function. We describe a Bayesian approach, discuss computational issues, and apply the proposed methods to CD4 count data from a pediatric AIDS clinical trial. © 2015 The Authors
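
    A Python sketch of the core idea in the stationary special case: by the Durbin-Levinson recursion, any sequence of partial autocorrelations in (-1, 1) maps to a valid autocorrelation function, so positive definiteness is automatic. The paper's semiparametric, non-stationary models generalize well beyond this toy version.

    ```python
    import numpy as np
    from scipy.linalg import toeplitz

    def pacf_to_acf(pacf):
        """Map partial autocorrelations (each in (-1, 1)) to autocorrelations."""
        m = len(pacf)
        rho = np.zeros(m + 1)
        rho[0] = 1.0
        phi = np.zeros((m + 1, m + 1))
        for k in range(1, m + 1):
            a = pacf[k - 1]
            num = sum(phi[k - 1, j] * rho[k - j] for j in range(1, k))
            den = 1.0 - sum(phi[k - 1, j] * rho[j] for j in range(1, k))
            rho[k] = a * den + num          # inverse Durbin-Levinson step
            phi[k, k] = a
            for j in range(1, k):
                phi[k, j] = phi[k - 1, j] - a * phi[k - 1, k - j]
        return rho

    pacf = [0.6, -0.3, 0.2]                 # unconstrained apart from (-1, 1)
    Sigma = 2.0 ** 2 * toeplitz(pacf_to_acf(pacf))   # variance 4, Toeplitz
    print(np.linalg.eigvalsh(Sigma).min() > 0)       # True: positive definite
    ```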

  14. TRANSPORT/HANDLING REQUESTS

    CERN Multimedia

    Groupe ST/HM

    2002-01-01

    A new EDH document entitled 'Transport/Handling Request' will be in operation as of Monday, 11th February 2002, when the corresponding icon will be accessible from the EDH desktop, together with the application instructions. This EDH form will replace the paper-format transport/handling request form for all activities involving the transport of equipment and materials. However, the paper form will still be used for all vehicle-hire requests. The introduction of the EDH transport/handling request form is accompanied by the establishment of the following time limits for the various services concerned: 24 hours for the removal of office items, 48 hours for the transport of heavy items (of up to 6 metric tons and of standard road width), 5 working days for a crane operation, extra-heavy transport operation or complete removal, 5 working days for all transport operations relating to LHC installation. ST/HM Group, Logistics Section Tel: 72672 - 72202

  15. SLUG HANDLING DEVICES

    Science.gov (United States)

    Gentry, J.R.

    1958-09-16

    A device is described for handling fuel elements of a neutronic reactor. The device consists of two concentric telescoped containers that may fit about the fuel element. A number of ratchet members, equally spaced about the entrance to the containers, are pivoted on the inner container and spring biased to the outer container so that they are forced to bear against and hold the fuel element, the weight of which tends to force the ratchets tighter against the fuel element. The ratchets are released from their hold by raising the inner container relative to the outer member. This device reduces the radiation hazard to the personnel handling the fuel elements.

  16. Missing persons genetic identification

    Directory of Open Access Journals (Sweden)

    Matija Bajželj

    2017-09-01

    Full Text Available This article presents the identification of missing persons from badly preserved post-mortem remains using molecular genetics methods. Extremely polymorphic and individually specific genetic markers that enable the identification of missing persons are microsatellites on autosomal chromosomes, microsatellites on the Y chromosome and the control region of mitochondrial DNA. For genetic profile comparison, biological material from post-mortem remains and reference samples have to be collected. If post-mortem remains are found shortly after the presumed death of the missing person, their personal items are used for comparison. If these are not available, the missing person's relatives can be used as reference samples, or archived tissues stored in medical institutions if a biopsy for medical diagnostics was performed earlier in life. When reference samples are not available, genetic identification is not possible. The type of biological material sampled from the deceased depends on the condition of the human remains. Blood, soft tissues, nails, teeth or bones are most commonly used for genetic identification, and the time required for DNA extraction depends on the type of biological material. The most demanding and time-consuming is extraction of DNA from teeth and bones; therefore we use it in cases when only a skeleton is available or we cannot get a sufficient amount of DNA for genetic identification from other tissues. If the genetic profile of the post-mortem remains and a reference sample of the missing person match, the strength of genetic evidence has to be statistically evaluated and the probability of identification reported.

  17. A note on covariant dynamical semigroups

    Science.gov (United States)

    Holevo, A. S.

    1993-04-01

    It is shown that in the standard representation of the generator of a norm continuous dynamical semigroup, which is covariant with respect to a unitary representation of an amenable group, the completely positive part can always be chosen covariant and the Hamiltonian commuting with the representation. The structure of the generator of a translation covariant dynamical semigroup is described.

  18. Covariant gauges at finite temperature

    OpenAIRE

    Landshoff, P V; Rebhan, A

    1992-01-01

    A prescription is presented for real-time finite-temperature perturbation theory in covariant gauges, in which only the two physical degrees of freedom of the gauge-field propagator acquire thermal parts. The propagators for the unphysical degrees of freedom of the gauge field, and for the Faddeev-Popov ghost field, are independent of temperature. This prescription is applied to the calculation of the one-loop gluon self-energy and the two-loop interaction pressure, and is found to be simpler...

  19. Improving coal handling effectiveness

    Energy Technology Data Exchange (ETDEWEB)

    Walker, S.

    2003-10-01

    Appropriate coal handling systems are essential for successful coal utilisation. The paper looks at some of the options available, including crushers and hammer mills, wear-resistant liners for chutes and wagons, and dewatering systems. These are individual components within larger systems such as stockyard stacking and reclaiming installations. 5 photos.

  20. Content Based Text Handling.

    Science.gov (United States)

    Schwarz, Christoph

    1990-01-01

    Gives an overview of various linguistic software tools in the field of intelligent text handling that are being developed in Germany utilizing artificial intelligence techniques in the field of natural language processing. Syntactical analysis of documents is described and application areas are discussed. (10 references) (LRW)

  1. Colonic potassium handling

    DEFF Research Database (Denmark)

    Sørensen, Mads Vaarby; Matos, Joana E.; Prætorius, Helle

    2010-01-01

    Homeostatic control of plasma K+ is a necessary physiological function. The daily dietary K+ intake of approximately 100 mmol is excreted predominantly by the distal tubules of the kidney. About 10% of the ingested K+ is excreted via the intestine. K+ handling in both organs is specifically regul...

  2. Recurrence Analysis of Eddy Covariance Fluxes

    Science.gov (United States)

    Lange, Holger; Flach, Milan; Foken, Thomas; Hauhs, Michael

    2015-04-01

    The eddy covariance (EC) method is one key method to quantify fluxes in biogeochemical cycles in general, and carbon and energy transport across the vegetation-atmosphere boundary layer in particular. EC data from the worldwide net of flux towers (Fluxnet) have also been used to validate biogeochemical models. The high resolution data are usually obtained at 20 Hz sampling rate but are affected by missing values and other restrictions. In this contribution, we investigate the nonlinear dynamics of EC fluxes using Recurrence Analysis (RA). High resolution data from the site DE-Bay (Waldstein-Weidenbrunnen) and fluxes calculated at half-hourly resolution from eight locations (part of the La Thuile dataset) provide a set of very long time series to analyze. After careful quality assessment and Fluxnet standard gapfilling pretreatment, we calculate properties and indicators of the recurrent structure based both on Recurrence Plots as well as Recurrence Networks. Time series of RA measures obtained from windows moving along the time axis are presented. Their interpretation is guided by five questions: (1) Is RA able to discern periods where the (atmospheric) conditions are particularly suitable to obtain reliable EC fluxes? (2) Is RA capable of detecting dynamical transitions (different behavior) beyond those obvious from visual inspection? (3) Does RA contribute to an understanding of the nonlinear synchronization between EC fluxes and atmospheric parameters, which is crucial both for improving carbon flux models and for reliable interpolation of gaps? (4) Is RA able to recommend an optimal time resolution for measuring EC data and for analyzing EC fluxes? (5) Is it possible to detect non-trivial periodicities with a global RA? We will demonstrate that the answers to all five questions are affirmative, and that RA provides insights into EC dynamics not easily obtained otherwise.
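
    A toy Python sketch of the basic recurrence-plot object on a synthetic periodic "flux"; the window length, threshold, and one-dimensional embedding are arbitrary choices for illustration:

    ```python
    import numpy as np

    def recurrence_matrix(x, eps):
        # Pairwise distances in a 1-d embedding; 1 where states recur.
        d = np.abs(x[:, None] - x[None, :])
        return (d < eps).astype(int)

    rng = np.random.default_rng(1)
    t = np.arange(2000)
    flux = np.sin(2 * np.pi * t / 48) + 0.3 * rng.normal(size=t.size)

    # Recurrence rate in moving windows: one simple indicator of changing
    # dynamics along the time axis.
    win = 200
    for start in range(0, len(flux) - win + 1, 400):
        R = recurrence_matrix(flux[start:start + win], eps=0.2)
        rr = (R.sum() - win) / (win * (win - 1))   # exclude the diagonal
        print(start, round(rr, 3))
    ```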

  3. Misperception, misfearing, missed treatment, missed opportunities

    Directory of Open Access Journals (Sweden)

    Marcelo Katz

    2014-11-01

    Full Text Available Cardiovascular disease still represents the leading cause of death worldwide. Preventive measures are essential to avoid the burden of disease, saving lives and costs. However, the current prevalence of optimal management of cardiovascular risk factors is far from ideal. In the real world, physicians are not succeeding in convincing their patients to adopt healthy behaviors. According to the health belief model theory, if patients feel vulnerable to a condition, they are more likely to follow medical recommendations to avoid it. However, several articles evaluating individual risk perception show that subjects, both children and adults, usually misperceive their risk, with an optimistic bias. Physicians are missing the opportunity to really prevent the burden of cardiovascular disease, and it is time to explore patients' behavior more deeply. In the office, physicians should dedicate time to apply in practice the components of the health belief model. If a patient is unaware of his/her actual risk, doctors should detail these risks, grading their severity and anticipating possible implications for the patient's cardiovascular health. Physicians should appraise the accuracy of the patient's perception of his/her own risk and, in case of underestimation, work on calibrating this perception. Physicians should then go further, trying to empower and engage the patient and his/her family in the treatment plan.

  4. Covariance Evaluation Methodology for Neutron Cross Sections

    Energy Technology Data Exchange (ETDEWEB)

    Herman, M.; Arcilla, R.; Mattoon, C.M.; Mughabghab, S.F.; Oblozinsky, P.; Pigni, M.; Pritychenko, B.; Sonzogni, A.A.

    2008-09-01

    We present the NNDC-BNL methodology for estimating neutron cross section covariances in the thermal, resolved resonance, unresolved resonance and fast neutron regions. The three key elements of the methodology are the Atlas of Neutron Resonances, the nuclear reaction code EMPIRE, and a Bayesian code implementing the Kalman filter concept. The covariance data processing, visualization and distribution capabilities are integral components of the NNDC methodology. We illustrate its application with examples, including a relatively detailed evaluation of covariances for two individual nuclei and the massive production of simple covariance estimates for 307 materials. Certain peculiarities regarding the evaluation of covariances for resolved resonances and the consistency between resonance parameter uncertainties and thermal cross section uncertainties are also discussed.
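
    A schematic Python sketch of the Kalman-filter (generalized least-squares) update at the heart of such evaluations: prior model parameters with covariance P are constrained by measured cross sections through a sensitivity matrix G. All matrices and numbers are illustrative placeholders, not EMPIRE or Atlas output.

    ```python
    import numpy as np

    p = np.array([1.0, 0.5])            # prior model parameters
    P = np.diag([0.04, 0.01])           # prior parameter covariance
    G = np.array([[1.2, 0.3],           # sensitivities d(sigma)/d(p)
                  [0.4, 1.1],
                  [0.9, 0.8]])
    y = np.array([1.45, 1.02, 1.40])    # measured cross sections
    V = np.diag([0.02, 0.02, 0.03])     # experimental covariance

    S = G @ P @ G.T + V                 # innovation covariance
    K = P @ G.T @ np.linalg.inv(S)      # Kalman gain
    p_post = p + K @ (y - G @ p)        # updated parameters
    P_post = (np.eye(len(p)) - K @ G) @ P
    sigma_cov = G @ P_post @ G.T        # implied cross-section covariance
    print(p_post, np.sqrt(np.diag(sigma_cov)))
    ```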

  5. Missing Data Problems

    OpenAIRE

    Pouliot, Guillaume

    2016-01-01

    Missing data problems are often best tackled by taking into consideration specificities of the data structure and data generating process. In this doctoral dissertation, I present a thorough study of two specific problems. The first problem is one of regression analysis with misaligned data; that is, when the geographic location of the dependent variable and that of some independent variable do not coincide. The misaligned independent variable is rainfall, and it can be successfully modeled a...

  6. Principal Component Analysis of Process Datasets with Missing Values

    Directory of Open Access Journals (Sweden)

    Kristen A. Severson

    2017-07-01

    Full Text Available Datasets with missing values arising from causes such as sensor failure, inconsistent sampling rates, and merging data from different systems are common in the process industry. Methods for handling missing data typically operate during data pre-processing, but can also occur during model building. This article considers missing data within the context of principal component analysis (PCA), which is a method originally developed for complete data that has widespread industrial application in multivariate statistical process control. Due to the prevalence of missing data and the success of PCA for handling complete data, several PCA algorithms that can act on incomplete data have been proposed. Here, algorithms for applying PCA to datasets with missing values are reviewed. A case study is presented to demonstrate the performance of the algorithms and suggestions are made with respect to choosing which algorithm is most appropriate for particular settings. An alternating algorithm based on the singular value decomposition achieved the best results in the majority of test cases involving process datasets.
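
    A compact Python sketch of an alternating SVD algorithm in this spirit: fill the missing cells, fit a rank-k model, refill only the missing cells from the reconstruction, and iterate. The rank, tolerance, and data are illustrative.

    ```python
    import numpy as np

    def pca_impute(X, k, n_iter=200, tol=1e-9):
        X = X.copy()
        miss = np.isnan(X)
        col_means = np.nanmean(X, axis=0)
        X[miss] = np.take(col_means, np.nonzero(miss)[1])  # initial fill
        prev = np.inf
        for _ in range(n_iter):
            mu = X.mean(axis=0)
            U, s, Vt = np.linalg.svd(X - mu, full_matrices=False)
            Xhat = mu + (U[:, :k] * s[:k]) @ Vt[:k]        # rank-k model
            delta = ((X[miss] - Xhat[miss]) ** 2).sum()
            X[miss] = Xhat[miss]                           # refill missing only
            if abs(prev - delta) < tol:
                break
            prev = delta
        return X

    rng = np.random.default_rng(2)
    data = rng.normal(size=(200, 2)) @ rng.normal(size=(2, 6))
    data += 0.1 * rng.normal(size=data.shape)
    data[rng.random(data.shape) < 0.15] = np.nan           # 15% missing
    print(np.isnan(pca_impute(data, k=2)).sum())           # 0: completed
    ```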

  7. Multiple imputation of covariates by fully conditional specification: Accommodating the substantive model.

    Science.gov (United States)

    Bartlett, Jonathan W; Seaman, Shaun R; White, Ian R; Carpenter, James R

    2015-08-01

    Missing covariate data commonly occur in epidemiological and clinical research, and are often dealt with using multiple imputation. Imputation of partially observed covariates is complicated if the substantive model is non-linear (e.g. Cox proportional hazards model), or contains non-linear (e.g. squared) or interaction terms, and standard software implementations of multiple imputation may impute covariates from models that are incompatible with such substantive models. We show how imputation by fully conditional specification, a popular approach for performing multiple imputation, can be modified so that covariates are imputed from models which are compatible with the substantive model. We investigate through simulation the performance of this proposal, and compare it with existing approaches. Simulation results suggest our proposal gives consistent estimates for a range of common substantive models, including models which contain non-linear covariate effects or interactions, provided data are missing at random and the assumed imputation models are correctly specified and mutually compatible. Stata software implementing the approach is freely available. © The Author(s) 2014.
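
    A Python sketch of the compatibility idea for a logistic substantive model: the missing covariate is drawn by rejection sampling so that its imputation distribution is proportional to f(y | x) g(x), and hence cannot contradict the outcome model. This shows only the spirit of the core step; the paper's algorithm embeds it within fully conditional specification with proper multiple imputation.

    ```python
    import numpy as np

    rng = np.random.default_rng(3)

    def impute_compatible(y_i, beta0, beta1, mu_x, sd_x):
        """Draw x from p(x | y) proportional to f(y | x) g(x)."""
        while True:
            x = rng.normal(mu_x, sd_x)                    # proposal from g(x)
            p = 1 / (1 + np.exp(-(beta0 + beta1 * x)))    # substantive model
            lik = p if y_i == 1 else 1 - p                # f(y | x) <= 1
            if rng.random() < lik:                        # rejection step
                return x

    # With beta1 > 0 and y = 1, accepted draws shift upward relative to the
    # plain g(x) proposal, as a compatible imputation should.
    draws = [impute_compatible(1, -0.5, 1.2, 0.0, 1.0) for _ in range(2000)]
    print(np.mean(draws))   # > 0, unlike the proposal mean of 0
    ```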

  8. Covariance-enhanced discriminant analysis.

    Science.gov (United States)

    Xu, Peirong; Zhu, Ji; Zhu, Lixing; Li, Yi

    Linear discriminant analysis has been widely used to characterize or separate multiple classes via linear combinations of features. However, the high dimensionality of features from modern biological experiments defies traditional discriminant analysis techniques. Possible interfeature correlations present additional challenges and are often underused in modelling. In this paper, by incorporating possible interfeature correlations, we propose a covariance-enhanced discriminant analysis method that simultaneously and consistently selects informative features and identifies the corresponding discriminable classes. Under mild regularity conditions, we show that the method can achieve consistent parameter estimation and model selection, and can attain an asymptotically optimal misclassification rate. Extensive simulations have verified the utility of the method, which we apply to a renal transplantation trial.

  9. Coal handling for IPPs

    Energy Technology Data Exchange (ETDEWEB)

    Phillips, K.

    2000-02-01

    Demand for seaborne steam coal in Asia is expected to increase. By 2010, for example, Japan alone is expected to double its coal-fired power generating capacity. At end-FY 1999 an extra 13 IPPs should come on line. Demand for new materials handling equipment at ports will increase. In terms of scraper reclaimers for stockyard storing and homogenising systems for coal handling and blending, Gustav Schade of Germany is a world leader. Schade introduced the first cantilever scraper reclaimer system at Scholven power station of VKR in Germany in 1968. Later designs have incorporated portal scraper reclaimers. Systems are available for longitudinal and circular coal storage systems, both open and enclosed. 3 photos.

  10. Renal phosphate handling: Physiology

    Directory of Open Access Journals (Sweden)

    Narayan Prasad

    2013-01-01

    Full Text Available Phosphorus is a common anion. It plays an important role in energy generation. Renal phosphate handling is regulated by three organs: the parathyroid, kidney and bone, through feedback loops. These counter-regulatory loops also regulate intestinal absorption and thus maintain the serum phosphorus concentration in the physiologic range. Parathyroid hormone, vitamin D, fibroblast growth factor 23 (FGF23) and the klotho co-receptor are the key regulators of phosphorus balance in the body.

  11. Solid waste handling

    Energy Technology Data Exchange (ETDEWEB)

    Parazin, R.J.

    1995-05-31

    This study presents estimates of the solid radioactive waste quantities that will be generated in the Separations, Low-Level Waste Vitrification and High-Level Waste Vitrification facilities, collectively called the Tank Waste Remediation System Treatment Complex, over the life of these facilities. This study then considers previous estimates from other 200 Area generators and compares alternative methods of handling (segregation, packaging, assaying, shipping, etc.).

  12. Uranium hexafluoride handling. Proceedings

    Energy Technology Data Exchange (ETDEWEB)

    1991-12-31

    The United States Department of Energy, Oak Ridge Field Office, and Martin Marietta Energy Systems, Inc., are co-sponsoring this Second International Conference on Uranium Hexafluoride Handling. The conference is offered as a forum for the exchange of information and concepts regarding the technical and regulatory issues and the safety aspects which relate to the handling of uranium hexafluoride. Through the papers presented here, we attempt not only to share technological advances and lessons learned, but also to demonstrate that we are concerned about the health and safety of our workers and the public, and are good stewards of the environment in which we all work and live. These proceedings are a compilation of the work of many experts in that phase of world-wide industry which comprises the nuclear fuel cycle. Their experience spans the entire range over which uranium hexafluoride is involved in the fuel cycle, from the production of UF6 from the naturally-occurring oxide to its re-conversion to oxide for reactor fuels. The papers furnish insights into the chemical, physical, and nuclear properties of uranium hexafluoride as they influence its transport, storage, and the design and operation of plant-scale facilities for production, processing, and conversion to oxide. The papers demonstrate, in an industry often cited for its excellent safety record, continuing efforts to further improve safety in all areas of handling uranium hexafluoride. Selected papers were processed separately for inclusion in the Energy Science and Technology Database.

  13. A simple method for analyzing data from a randomized trial with a missing binary outcome

    Directory of Open Access Journals (Sweden)

    Freedman Laurence S

    2003-05-01

    Full Text Available Abstract Background Many randomized trials involve missing binary outcomes. Although many previous adjustments for missing binary outcomes have been proposed, none of these makes explicit use of randomization to bound the bias when the data are not missing at random. Methods We propose a novel approach that uses the randomization distribution to compute the anticipated maximum bias when missing at random does not hold due to an unobserved binary covariate (implying that missingness depends on outcome and treatment group). The anticipated maximum bias equals the product of two factors: (a) the anticipated maximum bias if there were complete confounding of the unobserved covariate with treatment group among subjects with an observed outcome and (b) an upper bound factor that depends only on the fraction missing in each randomization group. If less than 15% of subjects are missing in each group, the upper bound factor is less than .18. Results We illustrated the methodology using data from the Polyp Prevention Trial. We anticipated a maximum bias under complete confounding of .25. With only 7% and 9% missing in each arm, the upper bound factor, after adjusting for age and sex, was .10. The anticipated maximum bias of .25 × .10 = .025 would not have affected the conclusion of no treatment effect. Conclusion This approach is easy to implement and is particularly informative when less than 15% of subjects are missing in each arm.
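
    The abstract's arithmetic, spelled out in a few lines of Python; the upper bound factor is taken from the reported values (its formula in terms of the per-arm missingness fractions is given in the paper and not reproduced here):

    ```python
    # Anticipated maximum bias = (bias under complete confounding of the
    # unobserved covariate) x (upper bound factor from per-arm missingness).
    bias_complete_confounding = 0.25
    upper_bound_factor = 0.10    # reported value after adjusting for age, sex
    print(bias_complete_confounding * upper_bound_factor)   # 0.025
    ```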

  14. Parameter inference with estimated covariance matrices

    Science.gov (United States)

    Sellentin, Elena; Heavens, Alan F.

    2016-02-01

    When inferring parameters from a Gaussian-distributed data set by computing a likelihood, a covariance matrix is needed that describes the data errors and their correlations. If the covariance matrix is not known a priori, it may be estimated and thereby becomes a random object with some intrinsic uncertainty itself. We show how to infer parameters in the presence of such an estimated covariance matrix, by marginalizing over the true covariance matrix, conditioned on its estimated value. This leads to a likelihood function that is no longer Gaussian, but rather an adapted version of a multivariate t-distribution, which has the same numerical complexity as the multivariate Gaussian. As expected, marginalization over the true covariance matrix improves inference when compared with Hartlap et al.'s method, which uses an unbiased estimate of the inverse covariance matrix but still assumes that the likelihood is Gaussian.
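
    A small Python sketch of the replacement likelihood, written up to normalization: r is data minus model and S the covariance estimated from N simulations. The Gaussian log-likelihood is swapped for the heavier-tailed marginalized form at essentially identical numerical cost.

    ```python
    import numpy as np

    def loglike_gaussian(r, Sinv):
        return -0.5 * r @ Sinv @ r

    def loglike_marginalized(r, Sinv, N):
        # Multivariate-t-like form from marginalizing the true covariance:
        # L proportional to (1 + chi2 / (N - 1)) ** (-N / 2)
        chi2 = r @ Sinv @ r
        return -0.5 * N * np.log1p(chi2 / (N - 1))

    r = np.array([0.5, -0.2, 0.1])
    Sinv = np.linalg.inv(np.diag([0.3, 0.2, 0.4]))
    print(loglike_gaussian(r, Sinv), loglike_marginalized(r, Sinv, N=50))
    ```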

  15. Facebook The Missing Manual

    CERN Document Server

    Veer, E

    2011-01-01

    Facebook's spreading about as far and fast as the Web itself: 500 million members and counting. But there's a world of fun packed into the site that most folks miss. With this bestselling guide, learn how to unlock Facebook's talents as personal website creator, souped-up address book, and bustling community forum. It's an eye-opening, timesaving tour, guaranteed to help you get the most out of your Facebook experience. Coverage includes: Get started, get connected. Signing up is easy, but the real payoff comes when you tap into networks of coworkers, classmates, and friends. Pick and choose

  16. Missing Data in Alcohol Clinical Trials with Binary Outcomes.

    Science.gov (United States)

    Hallgren, Kevin A; Witkiewitz, Katie; Kranzler, Henry R; Falk, Daniel E; Litten, Raye Z; O'Malley, Stephanie S; Anton, Raymond F

    2016-07-01

    Missing data are common in alcohol clinical trials for both continuous and binary end points. Approaches to handle missing data have been explored for continuous outcomes, yet no studies have compared missing data approaches for binary outcomes (e.g., abstinence, no heavy drinking days). This study compares approaches to modeling binary outcomes with missing data in the COMBINE study. We included participants in the COMBINE study who had complete drinking data during treatment and who were assigned to active medication or placebo conditions (N = 1,146). Using simulation methods, missing data were introduced under common scenarios with varying sample sizes and amounts of missing data. Logistic regression was used to estimate the effect of naltrexone (vs. placebo) in predicting any drinking and any heavy drinking outcomes at the end of treatment using 4 analytic approaches: complete case analysis (CCA), last observation carried forward (LOCF), the worst case scenario (WCS) of missing equals any drinking or heavy drinking, and multiple imputation (MI). In separate analyses, these approaches were compared when drinking data were manually deleted for those participants who discontinued treatment but continued to provide drinking data. WCS produced the greatest amount of bias in treatment effect estimates. MI usually yielded less biased estimates than WCS and CCA in the simulated data and performed considerably better than LOCF when estimating treatment effects among individuals who discontinued treatment. Missing data can introduce bias in treatment effect estimates in alcohol clinical trials. Researchers should utilize modern missing data methods, including MI, and avoid WCS and CCA when analyzing binary alcohol clinical trial outcomes. Copyright © 2016 by the Research Society on Alcoholism.
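
    A minimal sketch of the binary fill rules being compared, for one hypothetical subject's weekly "any heavy drinking" indicator (1 = yes, NaN = missing after dropout):

    ```python
    import numpy as np
    import pandas as pd

    obs = pd.Series([0, 0, 1, np.nan, np.nan, np.nan], dtype="float")

    wcs = obs.fillna(1)     # worst case: missing counts as heavy drinking
    locf = obs.ffill()      # last observation carried forward
    cca = obs.dropna()      # complete-case: only observed weeks analyzed
    # Multiple imputation would instead draw each NaN from a model fitted to
    # covariates and observed outcomes, repeatedly, and pool the analyses.
    print(list(wcs), list(locf), len(cca))
    ```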

  17. Covariance-based maneuver optimization for NEO threats

    Science.gov (United States)

    Peterson, G.

    The Near Earth Object (NEO) conjunction analysis and mitigation problem is fundamentally the same as Earth-centered space traffic control, albeit on a larger scale and in different temporal and spatial frames. The Aerospace Corporation has been conducting conjunction detection and collision avoidance analysis for a variety of satellite systems in the Earth environment for over 3 years. As part of this process, techniques have been developed that are applicable to analyzing the NEO threat. In space traffic control operations in the Earth orbiting environment, dangerous conjunctions between satellites are determined using collision probability models, realistic covariances, and accurate trajectories in the software suite Collision Vision. Once a potentially dangerous conjunction (or series of conjunctions) is found, a maneuver solution is developed through the program DVOPT (DeltaV OPTimization) that will reduce the risk to a pre -defined acceptable level. DVOPT works by taking the primary's state vector at conjunction, back- propagating it to the time of the proposed burn, then applying the burn to the state vector, and forward-propagating back to the time of the original conjunction. The probability of collision is then re-computed based upon the new state vector and original covariances. This backwards-forwards propagation is coupled with a search algorithm to find the optimal burn solution as a function of time. Since the burns are small (typically cm/sec for Earth-centered space traffic control), Kepler's Equation was assumed for the backwards-forwards propagation with little loss in accuracy. The covariance-based DVOPT process can be easily expanded to cover heliocentric orbits and conjunctions between the Earth and an approaching object. It is shown that minimizing the burn to increase the miss distance between the conjuncting objects does not correspond to a burn solution that minimizes the probability of impact between the same two objects. Since a
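
    A toy Python sketch of the closing observation under a linearized geometry (the miss vector, covariance, sensitivity matrix, and burn size are all invented): with an anisotropic covariance, the burn direction that maximizes miss distance is generally not the one that minimizes collision probability.

    ```python
    import numpy as np

    Sigma = np.diag([100.0 ** 2, 10.0 ** 2])   # combined covariance, km^2
    Sinv = np.linalg.inv(Sigma)
    r0 = np.array([5.0, 5.0])                  # nominal miss vector, km
    J = 300.0 * np.eye(2)                      # km displaced per m/s of burn
    dv_mag = 0.01                              # 1 cm/s burn

    best_dist = best_prob = None
    for th in np.linspace(0, 2 * np.pi, 721):
        r = r0 + J @ (dv_mag * np.array([np.cos(th), np.sin(th)]))
        d = np.linalg.norm(r)                  # miss distance
        m2 = r @ Sinv @ r                      # larger m2 => lower probability
        if best_dist is None or d > best_dist[0]:
            best_dist = (d, th)
        if best_prob is None or m2 > best_prob[0]:
            best_prob = (m2, th)
    print(best_dist[1], best_prob[1])          # two different burn directions
    ```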

  18. Preference Handling for Artificial Intelligence

    OpenAIRE

    Goldsmith, Judy; University of Kentucky; Junker, Ulrich; ILOG

    2009-01-01

    This article explains the benefits of preferences for AI systems and draws a picture of current AI research on preference handling. It thus provides an introduction to the topics covered by this special issue on preference handling.

  19. Operational semantics for signal handling

    Directory of Open Access Journals (Sweden)

    Maxim Strygin

    2012-08-01

    Full Text Available Signals are a lightweight form of interprocess communication in Unix. When a process receives a signal, the control flow is interrupted and a previously installed signal handler is run. Signal handling is reminiscent both of exception handling and concurrent interleaving of processes. In this paper, we investigate different approaches to formalizing signal handling in operational semantics, and compare them in a series of examples. We find the big-step style of operational semantics to be well suited to modelling signal handling. We integrate exception handling with our big-step semantics of signal handling, by adopting the exception convention as defined in the Definition of Standard ML. The semantics needs to capture the complex interactions between signal handling and exception handling.

  20. Treatment Effects with Many Covariates and Heteroskedasticity

    DEFF Research Database (Denmark)

    Cattaneo, Matias D.; Jansson, Michael; Newey, Whitney K.

    The linear regression model is widely used in empirical work in Economics. Researchers often include many covariates in their linear model specification in an attempt to control for confounders. We give inference methods that allow for many covariates and heteroskedasticity. Our results … We then propose a new heteroskedasticity-consistent standard error formula that is fully automatic and robust to both (conditional) heteroskedasticity of unknown form and the inclusion of possibly many covariates. We apply our findings to three settings: (i) parametric linear models with many covariates, (ii) semiparametric semi-linear models with many technical regressors, and (iii) linear panel models with many fixed effects…

  1. Alternative analyses for handling incomplete follow-up in the intention-to-treat analysis: the randomized controlled trial of balloon kyphoplasty versus non-surgical care for vertebral compression fracture (FREE)

    Directory of Open Access Journals (Sweden)

    Ranstam Jonas

    2012-03-01

    Full Text Available Abstract Background Clinical trial participants may be temporarily absent or withdraw from trials, leading to missing data. In intention-to-treat (ITT) analyses, several approaches are used for handling the missing information - complete case (CC) analysis, mixed-effects model (MM) analysis, last observation carried forward (LOCF) and multiple imputation (MI). This report discusses the consequences of applying the CC, LOCF and MI for the ITT analysis of published data (analysed using the MM method) from the Fracture Reduction Evaluation (FREE) trial. Methods The FREE trial was a randomised, non-blinded study comparing balloon kyphoplasty with non-surgical care for the treatment of patients with acute painful vertebral fractures. Patients were randomised to treatment (1:1 ratio), and stratified for gender, fracture aetiology, use of bisphosphonates and use of systemic steroids at the time of enrolment. Six outcome measures - Short-form 36 physical component summary (SF-36 PCS) scale, EuroQol 5-Dimension Questionnaire (EQ-5D), Roland-Morris Disability (RMD) score, back pain, number of days with restricted activity in last 2 weeks, and number of days in bed in last 2 weeks - were analysed using four methods for dealing with missing data: CC, LOCF, MM and MI analyses. Results There were no missing data in baseline covariate values, and only a few missing baseline values in outcome variables. The overall missing-response level increased during follow-up (1 month: 14.5%; 24 months: 28%), corresponding to a mean of 19% missing data during the entire period. Overall patterns of missing response across time were similar for each treatment group. Almost half of all randomised patients were not available for a CC analysis, a maximum of 4% were not included in the LOCF analysis, and all randomised patients were included in the MM and MI analyses. Improved estimates of treatment effect were observed with LOCF, MM and MI compared with CC; only MM provided improved

  2. Competing risks and time-dependent covariates

    DEFF Research Database (Denmark)

    Cortese, Giuliana; Andersen, Per K

    2010-01-01

    Time-dependent covariates are frequently encountered in regression analysis for event history data and competing risks. They are often essential predictors, which cannot be substituted by time-fixed covariates. This study briefly recalls the different types of time-dependent covariates, as classified by Kalbfleisch and Prentice [The Statistical Analysis of Failure Time Data, Wiley, New York, 2002], with the intent of clarifying their role and emphasizing the limitations in standard survival models and in the competing risks setting. If random (internal) time-dependent covariates … In a multi-state framework, a first approach uses internal covariates to define additional (intermediate) transient states in the competing risks model. Another approach is to apply the landmark analysis as described by van Houwelingen [Scandinavian Journal of Statistics 2007, 34, 70-85] in order to study…

  3. Wikipedia the missing manual

    CERN Document Server

    Broughton, John

    2008-01-01

    Want to be part of the largest group-writing project in human history? Learn how to contribute to Wikipedia, the user-generated online reference for the 21st century. Considered more popular than eBay, Microsoft.com, and Amazon.com, Wikipedia servers respond to approximately 30,000 requests per second, or about 2.5 billion per day. It's become the first point of reference for people the world over who need a fact fast.If you want to jump on board and add to the content, Wikipedia: The Missing Manual is your first-class ticket. Wikipedia has more than 9 million entries in 250 languages, over 2

  4. CERN Sells its Electronic Document Handling System

    CERN Multimedia

    2001-01-01

    The EDH team. Left to right: Derek Mathieson, Rotislav Titov, Per Gunnar Jonsson, Ivica Dobrovicova, James Purvis. Missing from the photo is Jurgen De Jonghe. In a 1 MCHF deal announced this week, the British company Transacsys bought the rights to CERN's Electronic Document Handling (EDH) system, which has revolutionised the Laboratory's administrative procedures over the last decade. Under the deal, CERN and Transacsys will collaborate on developing EDH over the coming 12 months. CERN will provide manpower and expertise and will retain the rights to use EDH, which will also be available freely to other particle physics laboratories. This development is an excellent example of the active technology transfer policy CERN is currently pursuing. The negotiations were carried out through a fruitful collaboration between AS and ETT Divisions, following the recommendations of the Technology Advisory Board, and with the help of SPL Division. EDH was born in 1991 when John Ferguson and Achille Petrilli of AS Divisi...

  5. A hierarchical nest survival model integrating incomplete temporally varying covariates

    Science.gov (United States)

    Converse, Sarah J.; Royle, J. Andrew; Adler, Peter H.; Urbanek, Richard P.; Barzen, Jeb A.

    2013-01-01

    Nest success is a critical determinant of the dynamics of avian populations, and nest survival modeling has played a key role in advancing avian ecology and management. Beginning with the development of daily nest survival models, and proceeding through subsequent extensions, the capacity for modeling the effects of hypothesized factors on nest survival has expanded greatly. We extend nest survival models further by introducing an approach to deal with incompletely observed, temporally varying covariates using a hierarchical model. Hierarchical modeling offers a way to separate process and observational components of demographic models to obtain estimates of the parameters of primary interest, and to evaluate structural effects of ecological and management interest. We built a hierarchical model for daily nest survival to analyze nest data from reintroduced whooping cranes (Grus americana) in the Eastern Migratory Population. This reintroduction effort has been beset by poor reproduction, apparently due primarily to nest abandonment by breeding birds. We used the model to assess support for the hypothesis that nest abandonment is caused by harassment from biting insects. We obtained indices of blood-feeding insect populations based on the spatially interpolated counts of insects captured in carbon dioxide traps. However, insect trapping was not conducted daily, and so we had incomplete information on a temporally variable covariate of interest. We therefore supplemented our nest survival model with a parallel model for estimating the values of the missing insect covariates. We used Bayesian model selection to identify the best predictors of daily nest survival. Our results suggest that the black fly Simulium annulus may be negatively affecting nest survival of reintroduced whooping cranes, with decreasing nest survival as abundance of S. annulus increases. The modeling framework we have developed will be applied in the future to a larger data set to evaluate the

  6. Covariant diagrams for one-loop matching

    Energy Technology Data Exchange (ETDEWEB)

    Zhang, Zhengkang [Michigan Univ., Ann Arbor, MI (United States). Michigan Center for Theoretical Physics; Deutsches Elektronen-Synchrotron (DESY), Hamburg (Germany)

    2016-10-15

    We present a diagrammatic formulation of recently-revived covariant functional approaches to one-loop matching from an ultraviolet (UV) theory to a low-energy effective field theory. Various terms following from a covariant derivative expansion (CDE) are represented by diagrams which, unlike conventional Feynman diagrams, involve gauge-covariant quantities and are thus dubbed "covariant diagrams." The use of covariant diagrams helps organize and simplify one-loop matching calculations, which we illustrate with examples. Of particular interest is the derivation of UV model-independent universal results, which reduce matching calculations of specific UV models to applications of master formulas. We show how such a derivation can be done in a more concise manner than the previous literature, and discuss how additional structures that are not directly captured by existing universal results, including mixed heavy-light loops, open covariant derivatives, and mixed statistics, can be easily accounted for.

  7. An augmented probit model for missing predictable covariates in quantal bioassay with small sample size.

    Science.gov (United States)

    Follmann, Dean; Nason, Martha

    2011-09-01

    Quantal bioassay experiments relate the amount or potency of some compound (for example, poison, antibody, or drug) to a binary outcome such as death or infection in animals. For infectious diseases, probit regression is commonly used for inference and a key measure of potency is given by the ID(P), the amount that results in P% of the animals being infected. In some experiments, a validation set may be used where both direct and proxy measures of the dose are available on a subset of animals with the proxy being available on all. The proxy variable can be viewed as a messy reflection of the direct variable, leading to an errors-in-variables problem. We develop a model for the validation set and use a constrained seemingly unrelated regression (SUR) model to obtain the distribution of the direct measure conditional on the proxy. We use the conditional distribution to derive a pseudo-likelihood based on probit regression and use the parametric bootstrap for statistical inference. We re-evaluate an old experiment in 21 monkeys where neutralizing antibodies (nAbs) to HIV were measured using an old (proxy) assay in all monkeys and with a new (direct) assay in a validation set of 11 who had sufficient stored plasma. Using our methods, we obtain an estimate of the ID(1) for the new assay, an important target for HIV vaccine candidates. In simulations, we compare the pseudo-likelihood estimates with regression calibration and a full joint likelihood approach. © 2011, The International Biometric Society No claim to original US Federal works.

  8. MissMech: An R Package for Testing Homoscedasticity, Multivariate Normality, and Missing Completely at Random (MCAR

    Directory of Open Access Journals (Sweden)

    Mortaza Jamshidian

    2014-01-01

    Full Text Available Researchers are often faced with analyzing data sets that are not complete. To properly analyze such data sets requires the knowledge of the missing data mechanism. If data are missing completely at random (MCAR), then many missing data analysis techniques lead to valid inference. Thus, tests of MCAR are desirable. The package MissMech implements two tests developed by Jamshidian and Jalal (2010) for this purpose. These tests can be run using a function called TestMCARNormality. One of the tests is valid if data are normally distributed, and another test does not require any distributional assumptions for the data. In addition to testing MCAR, in some special cases, the function TestMCARNormality is also able to test whether data have a multivariate normal distribution. As a bonus, the functions in MissMech can also be used for the following additional tasks: (i) test of homoscedasticity for several groups when data are completely observed, (ii) perform the k-sample test of Anderson-Darling to determine whether k groups of univariate data come from the same distribution, (iii) impute incomplete data sets using two methods, one where normality is assumed and one where no specific distributional assumptions are made, (iv) obtain normal-theory maximum likelihood estimates for mean and covariance matrix when data are incomplete, along with their standard errors, and finally (v) perform Neyman's test of uniformity. All of these features are explained in the paper, including examples.

  9. Missing value imputation for epistatic MAPs

    LENUS (Irish Health Repository)

    Ryan, Colm

    2010-04-20

    Abstract Background Epistatic miniarray profiling (E-MAP) is a high-throughput approach capable of quantifying aggravating or alleviating genetic interactions between gene pairs. The datasets resulting from E-MAP experiments typically take the form of a symmetric pairwise matrix of interaction scores. These datasets have a significant number of missing values - up to 35% - that can reduce the effectiveness of some data analysis techniques and prevent the use of others. An effective method for imputing interactions would therefore increase the types of possible analysis, as well as increase the potential to identify novel functional interactions between gene pairs. Several methods have been developed to handle missing values in microarray data, but it is unclear how applicable these methods are to E-MAP data because of their pairwise nature and the significantly larger number of missing values. Here we evaluate four alternative imputation strategies, three local (nearest neighbor-based) and one global (PCA-based), that have been modified to work with symmetric pairwise data. Results We identify different categories for the missing data based on their underlying cause, and show that values from the largest category can be imputed effectively. We compare local and global imputation approaches across a variety of distinct E-MAP datasets, showing that both are competitive and preferable to filling in with zeros. In addition we show that these methods are effective in an E-MAP from a different species, suggesting that pairwise imputation techniques will be increasingly useful as analogous epistasis mapping techniques are developed in different species. We show that strongly alleviating interactions are significantly more difficult to predict than strongly aggravating interactions. Finally we show that imputed interactions, generated using nearest neighbor methods, are enriched for annotations in the same manner as measured interactions. Therefore our method potentially

  10. Moderating the covariance between family member's substance use behavior.

    Science.gov (United States)

    Verhulst, Brad; Eaves, Lindon J; Neale, Michael C

    2014-07-01

    Twin and family studies implicitly assume that the covariation between family members remains constant across differences in age between the members of the family. However, age-specificity in gene expression for shared environmental factors could generate higher correlations between family members who are more similar in age. Cohort effects (cohort × genotype or cohort × common environment) could have the same effects, and both potentially reduce effect sizes estimated in genome-wide association studies where the subjects are heterogeneous in age. In this paper we describe a model in which the covariance between twins and non-twin siblings is moderated as a function of age difference. We describe the details of the model and simulate data using a variety of different parameter values to demonstrate that model fitting returns unbiased parameter estimates. Power analyses are then conducted to estimate the sample sizes required to detect the effects of moderation in a design of twins and siblings. Finally, the model is applied to data on cigarette smoking. We find that (1) the model effectively recovers the simulated parameters, (2) the power is relatively low and therefore requires large sample sizes before small to moderate effect sizes can be found reliably, and (3) the genetic covariance between siblings for smoking behavior decays very rapidly. Result 3 implies that, e.g., genome-wide studies of smoking behavior that use individuals assessed at different ages, or belonging to different birth-year cohorts may have had substantially reduced power to detect effects of genotype on cigarette use. It also implies that significant special twin environmental effects can be explained by age-moderation in some cases. This effect likely contributes to the missing heritability paradox.

  11. Compensating for Missing Data from Longitudinal Studies Using WinBUGS

    Directory of Open Access Journals (Sweden)

    Gretchen Carrigan

    2007-06-01

    Full Text Available Missing data is a common problem in survey-based research. There are many packages that compensate for missing data but few can easily compensate for missing longitudinal data. WinBUGS compensates for missing data using multiple imputation, and is able to incorporate longitudinal structure using random effects. We demonstrate the superiority of longitudinal imputation over cross-sectional imputation using WinBUGS. We use example data from the Australian Longitudinal Study on Women’s Health. We give a SAS macro that uses WinBUGS to analyze longitudinal models with missing covariate data, and demonstrate its use in a longitudinal study of terminal cancer patients and their carers.
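
    A Python sketch of why longitudinal imputation can beat cross-sectional imputation when subjects differ systematically: filling a missing wave from the subject's own trajectory uses the longitudinal structure that a grand-mean fill ignores. (WinBUGS does this properly with random effects and multiple imputation; the deterministic fills below only illustrate the contrast.)

    ```python
    import numpy as np
    import pandas as pd

    rng = np.random.default_rng(4)
    long = pd.DataFrame({"id": np.repeat(np.arange(100), 4),
                         "wave": np.tile(np.arange(4), 100)})
    person = rng.normal(50, 10, 100)           # stable person-level means
    long["score"] = person[long["id"]] + rng.normal(0, 2, len(long))
    truth = long["score"].copy()
    long.loc[rng.random(len(long)) < 0.2, "score"] = np.nan

    cross = long["score"].fillna(long["score"].mean())
    within = long.groupby("id")["score"].transform(lambda s: s.fillna(s.mean()))
    within = within.fillna(long["score"].mean())   # guard: all waves missing

    miss = long["score"].isna()
    rmse = lambda est: np.sqrt(((est[miss] - truth[miss]) ** 2).mean())
    print(rmse(cross), rmse(within))   # longitudinal fill has much lower RMSE
    ```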

  12. Handling Stability of Tractor Semitrailer Based on Handling Diagram

    Directory of Open Access Journals (Sweden)

    Ren Yuan-yuan

    2012-01-01

    Full Text Available Handling instability is a serious threat to driving safety. In order to analyze the handling stability of a tractor semi-trailer, a handling diagram can be used. In our research, considering the impact of multiple nonsteering rear axles and the nonlinear characteristics of tires on vehicle handling stability, handling equations are developed to describe the stability of a tractor semi-trailer. Then we obtain handling diagrams so as to study the influence of driving speed, loaded mass, and fifth wheel lead on vehicle handling stability. The analysis results show that the handling stability of a tractor semi-trailer when the tractor has two nonsteering rear axles is better than that when the tractor has only one nonsteering rear axle. While the stability in the former case is slightly influenced by driving speed and loaded mass, the latter is strongly influenced by both. The fifth wheel lead is found to only slightly influence handling stability for both tractor semi-trailers. Therefore, to ensure the driving safety of tractor semi-trailers when the tractor has only one nonsteering rear axle, much stricter restraints should be imposed on driving speed, and the loaded mass must not exceed the rated load of the trailer.
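
    A small Python sketch of how a handling curve is computed from a steady-state single-track (bicycle) model with a saturating tire; all vehicle and tire parameters are invented for illustration. For each lateral acceleration, the axle loads fix the required side forces; inverting the tire curve gives the slip angles, and the sign of alpha_f - alpha_r indicates understeer or oversteer.

    ```python
    import numpy as np
    from scipy.optimize import brentq

    m, a, b = 12000.0, 2.0, 3.0       # mass (kg), CG-to-front/rear axle (m)
    L = a + b

    def tire(alpha, C, Fmax):
        """Simple saturating lateral-force curve (illustrative)."""
        return Fmax * np.tanh(C * alpha / Fmax)

    def slip_for_force(F, C, Fmax):
        return brentq(lambda al: tire(al, C, Fmax) - F, 0.0, 1.0)

    for ay in np.linspace(0.5, 4.5, 9):                # m/s^2
        Fyf, Fyr = m * ay * b / L, m * ay * a / L      # axle side forces
        af = slip_for_force(Fyf, C=2.0e5, Fmax=4.0e4)  # front slip angle
        ar = slip_for_force(Fyr, C=3.0e5, Fmax=6.0e4)  # rear slip angle
        print(round(ay, 2), round(af - ar, 5))         # > 0: understeer here
    ```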

  13. Covariant quantizations in plane and curved spaces

    Energy Technology Data Exchange (ETDEWEB)

    Assirati, J.L.M. [University of Sao Paulo, Institute of Physics, Sao Paulo (Brazil); Gitman, D.M. [Tomsk State University, Department of Physics, Tomsk (Russian Federation); P.N. Lebedev Physical Institute, Moscow (Russian Federation); University of Sao Paulo, Institute of Physics, Sao Paulo (Brazil)

    2017-07-15

    We present covariant quantization rules for nonsingular finite-dimensional classical theories with flat and curved configuration spaces. In the beginning, we construct a family of covariant quantizations in flat spaces and Cartesian coordinates. This family is parametrized by a function ω(θ), θ ∈ (0,1), which describes an ambiguity of the quantization. We generalize this construction, presenting covariant quantizations of theories with flat configuration spaces but already with arbitrary curvilinear coordinates. Then we construct a so-called minimal family of covariant quantizations for theories with curved configuration spaces. This family of quantizations is parametrized by the same function ω(θ). Finally, we describe a wider family of covariant quantizations in curved spaces. This family is parametrized by two functions, the previous one ω(θ) and an additional function Θ(x,ξ). The minimal family mentioned above is the special case Θ = 1 of this wider family. We study the constructed quantizations in detail, proving their consistency and covariance. As a physical application, we consider the quantization of a non-relativistic particle moving in a curved space, discussing the problem of a quantum potential. Applying the covariant quantizations in flat spaces to the old problem of constructing the quantum Hamiltonian in polar coordinates, we directly obtain a correct result. (orig.)

  14. New or ν missing energy

    DEFF Research Database (Denmark)

    Franzosi, Diogo Buarque; Frandsen, Mads T.; Shoemaker, Ian M.

    2016-01-01

    flavor structures. Monojet data alone can be used to infer the mass of the "missing particle" from the shape of the missing energy distribution. In particular, 13 TeV LHC data will have sensitivity to DM masses greater than $\\sim$ 1 TeV. In addition to the monojet channel, NSI can be probed in multi...

  15. Factor analysis and missing data

    NARCIS (Netherlands)

    Kamakura, WA; Wedel, M

    2000-01-01

    The authors study the estimation of factor models and the imputation of missing data and propose an approach that provides direct estimates of factor weights without the replacement of missing data with imputed values. First, the approach is useful in applications of factor analysis in the presence

  16. Missed Opportunities for Hepatitis A Vaccination, National Immunization Survey-Child, 2013.

    Science.gov (United States)

    Casillas, Shannon M; Bednarczyk, Robert A

    2017-08-01

    To quantify the number of missed opportunities for vaccination with hepatitis A vaccine in children and to assess the association of missed opportunities for hepatitis A vaccination with covariates of interest. Weighted data from the 2013 National Immunization Survey of US children aged 19-35 months were used. Analysis was restricted to children with provider-verified vaccination history (n = 13 460). Missed opportunities for vaccination were quantified by determining the number of medical visits a child made at which another vaccine was administered while the child was eligible for hepatitis A vaccine, but hepatitis A vaccine was not administered. Cross-sectional bivariate and multivariate polytomous logistic regression were used to assess the association of missed opportunities for vaccination with child and maternal demographic, socioeconomic, and geographic covariates. In 2013, 85% of children in our study population had initiated the hepatitis A vaccine series, and 60% had received 2 or more doses. Children who received zero doses of hepatitis A vaccine had an average of 1.77 missed opportunities for vaccination, compared with 0.43 missed opportunities among those receiving 2 doses. Children with 2 or more missed opportunities for vaccination initiated the vaccine series 6 months later than children without missed opportunities. In the fully adjusted multivariate model, children who were younger, had ever received WIC benefits, or lived in a state with childcare entry mandates had reduced odds of 2 or more missed opportunities for vaccination; children living in the Northeast census region had increased odds. Missed opportunities for vaccination likely contribute to the poor coverage for hepatitis A vaccination in children; it is important to understand why children are not receiving the vaccine when eligible. Copyright © 2017 Elsevier Inc. All rights reserved.

  17. Covariate Balancing through Naturally Occurring Strata.

    Science.gov (United States)

    Alemi, Farrokh; ElRafey, Amr; Avramovic, Ivan

    2016-12-14

    To provide an alternative to propensity scoring (PS) for the common situation where there are interacting covariates. We used 1.3 million assessments of residents of the United States Veterans Affairs nursing homes, collected from January 1, 2000, through October 9, 2012. In stratified covariate balancing (SCB), data are divided into naturally occurring strata, where each stratum is an observed combination of the covariates. Within each stratum, cases with, and controls without, the target event are counted; controls are weighted to be as frequent as cases. This weighting procedure guarantees that covariates, or combinations of covariates, are balanced, meaning they occur at the same rate among cases and controls. Finally, the impact of the target event is calculated in the weighted data. We compare the performance of SCB, logistic regression (LR), and propensity scoring (PS) in simulated and real data. We examined the calibration of SCB and PS in predicting 6-month mortality from inability to eat, controlling for age, gender, and nine other disabilities for 296,051 residents in Veterans Affairs nursing homes. We also performed a simulation study, where outcomes were randomly generated from treatment, 10 covariates, and an increasing number of covariate interactions. The accuracy of SCB, PS, and LR in recovering the simulated treatment effect was reported. In the simulated environment, as the number of interactions among the covariates increased, SCB and properly specified LR remained accurate, but pairwise LR and pairwise PS, the most common applications of these tools, performed poorly. In real data, application of SCB was practical. SCB was better calibrated than linear PS, the most common method of PS. In environments where covariates interact, SCB is practical and more accurate than common methods of applying LR and PS. © Health Research and Educational Trust.
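
    To make the weighting procedure concrete, here is a minimal Python sketch of stratified covariate balancing as described in the record above; the DataFrame and its column names (binary event indicator, outcome, covariate list) are hypothetical, and the authors' own implementation may differ.

    import numpy as np

    def scb_effect(df, covariates, event, outcome):
        """df: pandas DataFrame. Within each observed covariate stratum,
        weight controls to be as frequent as cases, then contrast the
        weighted mean outcomes of cases and controls."""
        df = df.copy()
        df["w"] = 1.0
        for _, s in df.groupby(covariates):
            n_case = int((s[event] == 1).sum())
            n_ctrl = int((s[event] == 0).sum())
            if n_case and n_ctrl:
                df.loc[s.index[s[event] == 0], "w"] = n_case / n_ctrl
            else:
                df.loc[s.index, "w"] = 0.0  # stratum lacks overlap; drop it
        cases, ctrls = df[df[event] == 1], df[df[event] == 0]
        return (np.average(cases[outcome], weights=cases["w"])
                - np.average(ctrls[outcome], weights=ctrls["w"]))

    # hypothetical usage:
    # effect = scb_effect(df, ["age_band", "sex", "disability"], "cannot_eat", "died_6mo")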

  18. New transport and handling contract

    CERN Multimedia

    SC Department

    2008-01-01

    A new transport and handling contract entered into force on 1.10.2008. As with the previous contract, the user interface is the internal transport/handling request form on EDH: https://edh.cern.ch/Document/TransportRequest/ To ensure that you receive the best possible service, we invite you to complete the various fields as accurately as possible and to include a mobile telephone number on which we can reach you. You can follow the progress of your request (schedule, completion) in the EDH request routing information. We remind you that the following deadlines apply: 48 hours for the transport of heavy goods (up to 8 tonnes) or simple handling operations; 5 working days for crane operations, transport of extra-heavy goods, complex handling operations and combined transport and handling operations in the tunnel. For all enquiries, the number to contact remains unchanged: 72202. Heavy Handling Section TS-HE-HH 72672 - 160319

  19. Trends in Modern Exception Handling

    Directory of Open Access Journals (Sweden)

    Marcin Kuta

    2003-01-01

    Full Text Available Exception handling is nowadays a necessary component of error-proof information systems. The paper presents an overview of techniques and models of exception handling, the problems connected with them, and potential solutions. The implementation of propagation mechanisms and exception handling, and their effect on semantics and general program efficiency, are also taken into account. The mechanisms presented have been adopted in modern programming languages. In the areas of design, formal methods, and formal verification of program properties, exception handling mechanisms are only weakly represented, which leaves a field open for future research.

  20. Principled sure independence screening for Cox models with ultra-high-dimensional covariates

    OpenAIRE

    Zhao, Sihai Dave; Li, Yi

    2012-01-01

    It is rather challenging for current variable selectors to handle situations where the number of covariates under consideration is ultra-high. Consider a motivating clinical trial of the drug bortezomib for the treatment of multiple myeloma, where overall survival and expression levels of 44760 probesets were measured for each of 80 patients with the goal of identifying genes that predict survival after treatment. This dataset defies analysis even with regularized regression. Some remedies ha...
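
    As a rough illustration of marginal (sure independence) screening for ultra-high-dimensional survival data, the sketch below fits one univariate Cox model per covariate and keeps the highest-ranked ones; it assumes the lifelines package and hypothetical column names, and is not the authors' principled procedure.

    from lifelines import CoxPHFitter

    def sis_cox(df, covariate_cols, duration_col="time", event_col="event", top_k=50):
        """df: pandas DataFrame with survival columns. Rank covariates by the
        |z| statistic of univariate Cox fits and keep the top_k."""
        scores = {}
        for c in covariate_cols:
            cph = CoxPHFitter()
            cph.fit(df[[c, duration_col, event_col]],
                    duration_col=duration_col, event_col=event_col)
            scores[c] = float(abs(cph.summary.loc[c, "z"]))
        return sorted(scores, key=scores.get, reverse=True)[:top_k]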

  1. Earth Observing System Covariance Realism Updates

    Science.gov (United States)

    Ojeda Romero, Juan A.; Miguel, Fred

    2017-01-01

    This presentation will be given at the International Earth Science Constellation Mission Operations Working Group meetings June 13-15, 2017 to discuss the Earth Observing System Covariance Realism updates.

  2. Hierarchical matrix approximation of large covariance matrices

    KAUST Repository

    Litvinenko, Alexander

    2015-01-05

    We approximate large non-structured covariance matrices in the H-matrix format with a log-linear computational cost and storage O(n log n). We compute inverse, Cholesky decomposition and determinant in H-format. As an example we consider the class of Matern covariance functions, which are very popular in spatial statistics, geostatistics, machine learning and image analysis. Applications are: kriging and optimal design.

  3. Hierarchical matrix approximation of large covariance matrices

    KAUST Repository

    Litvinenko, Alexander

    2015-01-07

    We approximate large non-structured covariance matrices in the H-matrix format with a log-linear computational cost and storage O(n log n). We compute inverse, Cholesky decomposition and determinant in H-format. As an example we consider the class of Matern covariance functions, which are very popular in spatial statistics, geostatistics, machine learning and image analysis. Applications are: kriging and optimal design

  4. Covariate analysis of bivariate survival data

    Energy Technology Data Exchange (ETDEWEB)

    Bennett, L.E.

    1992-01-01

    The methods developed are used to analyze the effects of covariates on bivariate survival data when censoring and ties are present. The proposed method provides models for bivariate survival data that include differential covariate effects and censored observations. The proposed models are based on an extension of the univariate Buckley-James estimators which replace censored data points by their expected values, conditional on the censoring time and the covariates. For the bivariate situation, it is necessary to determine the expectation of the failure times for one component conditional on the failure or censoring time of the other component. Two different methods have been developed to estimate these expectations. In the semiparametric approach these expectations are determined from a modification of Burke's estimate of the bivariate empirical survival function. In the parametric approach censored data points are also replaced by their conditional expected values where the expected values are determined from a specified parametric distribution. The model estimation will be based on the revised data set, comprised of uncensored components and expected values for the censored components. The variance-covariance matrix for the estimated covariate parameters has also been derived for both the semiparametric and parametric methods. Data from the Demographic and Health Survey was analyzed by these methods. The two outcome variables are post-partum amenorrhea and breastfeeding; education and parity were used as the covariates. Both the covariate parameter estimates and the variance-covariance estimates for the semiparametric and parametric models will be compared. In addition, a multivariate test statistic was used in the semiparametric model to examine contrasts. The significance of the statistic was determined from a bootstrap distribution of the test statistic.

  5. Conversations with Miss Jane

    Directory of Open Access Journals (Sweden)

    Geneviève Fabre

    2006-05-01

    Full Text Available Considering the wide range of conversations in the autobiography, this essay will attempt to appraise the importance of these verbal exchanges in relation to the overall narrative structure of the book and to the prevalent oral tradition in Louisiana culture, as both an individual and communal expression. The variety of circumstances, the setting and staging, the interlocutors, and the complex intersection of time and place, of stories and History, will be examined; in these conversations with Miss Jane many actors participate, from the interviewer-narrator to most characters; even the reader becomes involved. Speaking, hearing, listening, keeping silent is an elaborate ritual that performs many functions; besides conveying news or rumors, it imparts information on the times and on the life of a “representative” woman whose existence, spanning a whole century, is both singular and emblematic. Most importantly, this essay will analyse the resonance of an eventful and often dramatic era on her sensibility and conversely show how her evolving sensibility informs that history and draws attention to aspects that might have passed unnoticed or been forever silenced. Jane’s desire for liberty and justice is often challenged as she faces the possibilities of life or death. Conversations build up a complex, often contradictory, but compelling portrait: torn between silence and vehemence, between memories and the urge to meet the future, Jane summons body and mind to find her way through the maze of a fast-changing world; self-willed and obstinate, she claims her right to speak, to express with wit and wisdom her firm belief in the word, in the ability to express deep-seated convictions and faith and a whole array of feelings and emotions.

  6. Missed Nursing Care in Pediatrics.

    Science.gov (United States)

    Lake, Eileen T; de Cordova, Pamela B; Barton, Sharon; Singh, Shweta; Agosto, Paula D; Ely, Beth; Roberts, Kathryn E; Aiken, Linda H

    2017-07-01

    A growing literature suggests that missed nursing care is common in hospitals and may contribute to poor patient outcomes. There has been scant empirical evidence in pediatric populations. Our objectives were to describe the frequency and patterns of missed nursing care in inpatient pediatric settings and to determine whether missed nursing care is associated with unfavorable work environments and high nurse workloads. A cross-sectional study using registered nurse survey data from 2006 to 2008 was conducted. Data from 2187 NICU, PICU, and general pediatric nurses in 223 hospitals in 4 US states were analyzed. For 12 nursing activities, nurses reported which necessary activities were not done on their last shift because of time constraints. Nurses reported their patient assignment and rated their work environment. More than half of pediatric nurses had missed care on their previous shift. On average, pediatric nurses missed 1.5 necessary care activities. Missed care was more common in poor versus better work environments (1.9 vs 1.2; P < .01). For 9 of 12 nursing activities, the prevalence of missed care was significantly higher in the poor environments (P < .05). In regression models that controlled for nurse, nursing unit, and hospital characteristics, the odds that a nurse missed care were 40% lower in better environments and increased by 70% for each additional patient. Nurses in inpatient pediatric care settings who care for fewer patients each and practice in a professionally supportive work environment miss care less often, increasing the quality of patient care. Copyright © 2017 by the American Academy of Pediatrics.

  7. Fully Bayesian inference under ignorable missingness in the presence of auxiliary covariates.

    Science.gov (United States)

    Daniels, M J; Wang, C; Marcus, B H

    2014-03-01

    In order to make a missing at random (MAR) or ignorability assumption realistic, auxiliary covariates are often required. However, the auxiliary covariates are not desired in the model for inference. Typical multiple imputation approaches do not assume that the imputation model marginalizes to the inference model. This has been termed "uncongenial" [Meng (1994, Statistical Science 9, 538-558)]. In order to make the two models congenial (or compatible), we would rather not assume a parametric model for the marginal distribution of the auxiliary covariates, but we typically do not have enough data to estimate the joint distribution well non-parametrically. In addition, when the imputation model uses a non-linear link function (e.g., the logistic link for a binary response), the marginalization over the auxiliary covariates to derive the inference model typically results in a difficult to interpret form for the effect of covariates. In this article, we propose a fully Bayesian approach to ensure that the models are compatible for incomplete longitudinal data by embedding an interpretable inference model within an imputation model and that also addresses the two complications described above. We evaluate the approach via simulations and implement it on a recent clinical trial. © 2013, The International Biometric Society.

  8. Assessment of Issue Handling Efficiency

    NARCIS (Netherlands)

    Luijten, B.; Visser, J.; Zaidman, A.

    2013-01-01

    We mined the issue database of GNOME to assess how issues are handled. How many issues are submitted and resolved? Does the backlog grow or decrease? How fast are issues resolved? Does issue resolution speed increase or decrease over time? In which subproject are issues handled most efficiently? To

  9. Basics for Handling Food Safely

    Science.gov (United States)

    Safe steps in food handling, cooking, and storage are essential to prevent foodborne illness. You can’t see, smell, or taste harmful bacteria that may cause illness. In every step of food preparation, follow the four Fight BAC!® guidelines to ...

  10. How Retailers Handle Complaint Management

    DEFF Research Database (Denmark)

    Hansen, Torben; Wilke, Ricky; Zaichkowsky, Judy

    2009-01-01

    This article fills a gap in the literature by providing insight about the handling of complaint management (CM) across a large cross section of retailers in the grocery, furniture, electronic and auto sectors. Determinants of retailers’ CM handling are investigated and insight is gained...

  11. Missing needle during episiotomy repair

    Directory of Open Access Journals (Sweden)

    Joydeb Roychowdhury

    2008-01-01

    Full Text Available Breakage and loss of the episiotomy needle is not an uncommon occurrence in the hands of junior doctors. Retrieving the needle from deeper tissue planes following its migration can be a challenging task.

  12. Attenuation caused by infrequently updated covariates in survival analysis

    DEFF Research Database (Denmark)

    Andersen, Per Kragh; Liestøl, Knut

    2003-01-01

    Attenuation; Cox regression model; Measurement errors; Survival analysis; Time-dependent covariates

  13. A sparse Ising model with covariates.

    Science.gov (United States)

    Cheng, Jie; Levina, Elizaveta; Wang, Pei; Zhu, Ji

    2014-12-01

    There has been a lot of work fitting Ising models to multivariate binary data in order to understand the conditional dependency relationships between the variables. However, additional covariates are frequently recorded together with the binary data, and may influence the dependence relationships. Motivated by such a dataset on genomic instability collected from tumor samples of several types, we propose a sparse covariate dependent Ising model to study both the conditional dependency within the binary data and its relationship with the additional covariates. This results in subject-specific Ising models, where the subject's covariates influence the strength of association between the genes. As in all exploratory data analysis, interpretability of results is important, and we use ℓ1 penalties to induce sparsity in the fitted graphs and in the number of selected covariates. Two algorithms to fit the model are proposed and compared on a set of simulated data, and asymptotic results are established. The results on the tumor dataset and their biological significance are discussed in detail. © 2014, The International Biometric Society.

  14. Cross-covariance functions for multivariate geostatistics

    KAUST Repository

    Genton, Marc G.

    2015-05-01

    Continuously indexed datasets with multiple variables have become ubiquitous in the geophysical, ecological, environmental and climate sciences, and pose substantial analysis challenges to scientists and statisticians. For many years, scientists developed models that aimed at capturing the spatial behavior for an individual process; only within the last few decades has it become commonplace to model multiple processes jointly. The key difficulty is in specifying the cross-covariance function, that is, the function responsible for the relationship between distinct variables. Indeed, these cross-covariance functions must be chosen to be consistent with marginal covariance functions in such a way that the second-order structure always yields a nonnegative definite covariance matrix. We review the main approaches to building cross-covariance models, including the linear model of coregionalization, convolution methods, the multivariate Matérn and nonstationary and space-time extensions of these among others. We additionally cover specialized constructions, including those designed for asymmetry, compact support and spherical domains, with a review of physics-constrained models. We illustrate select models on a bivariate regional climate model output example for temperature and pressure, along with a bivariate minimum and maximum temperature observational dataset; we compare models by likelihood value as well as via cross-validation co-kriging studies. The article closes with a discussion of unsolved problems. © Institute of Mathematical Statistics, 2015.
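
    Among the constructions reviewed, the linear model of coregionalization (LMC) is simple enough to sketch: mixing valid univariate correlations with outer products A_k A_k' guarantees a nonnegative definite cross-covariance at every lag. The coregionalization vectors and exponential correlations below are made-up choices for illustration.

    import numpy as np

    def lmc_cross_covariance(h, A_list, corr_fns):
        """h: array of lags; A_list: list of p x 1 coregionalization vectors;
        corr_fns: matching list of valid correlation functions rho_k(h)."""
        p = A_list[0].shape[0]
        C = np.zeros((len(h), p, p))
        for A, rho in zip(A_list, corr_fns):
            C += rho(np.abs(h))[:, None, None] * (A @ A.T)
        return C  # C[i] is the p x p cross-covariance matrix at lag h[i]

    # example: bivariate field mixing two exponential correlations
    h = np.linspace(0.0, 3.0, 50)
    A1, A2 = np.array([[1.0], [0.5]]), np.array([[0.2], [0.9]])
    C = lmc_cross_covariance(h, [A1, A2],
                             [lambda d: np.exp(-d), lambda d: np.exp(-d / 0.5)])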

  15. Voest-Alpine materials handling

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    2000-10-01

    The continuous excavation, mining and handling of materials and minerals is the focus of activities of Voest-Alpine Materials Handling (VAMH) in the area of open pit mining. The article gives several examples of use of VAMH's products in continuous open pit mining systems and also in stockyard systems, and at port and terminals. Materials handled include coal, iron ore and sulfur pellets. Details are given of customers (worldwide), the project background and scope of work and project duration. 34 photos.

  16. An Adaptive Approach to Mitigate Background Covariance Limitations in the Ensemble Kalman Filter

    KAUST Repository

    Song, Hajoon

    2010-07-01

    A new approach is proposed to address the background covariance limitations arising from undersampled ensembles and unaccounted model errors in the ensemble Kalman filter (EnKF). The method enhances the representativeness of the EnKF ensemble by augmenting it with new members chosen adaptively to add missing information that prevents the EnKF from fully fitting the data to the ensemble. The vectors to be added are obtained by back projecting the residuals of the observation misfits from the EnKF analysis step onto the state space. The back projection is done using an optimal interpolation (OI) scheme based on an estimated covariance of the subspace missing from the ensemble. In the experiments reported here, the OI uses a preselected stationary background covariance matrix, as in the hybrid EnKF–three-dimensional variational data assimilation (3DVAR) approach, but the resulting correction is included as a new ensemble member instead of being added to all existing ensemble members. The adaptive approach is tested with the Lorenz-96 model. The hybrid EnKF–3DVAR is used as a benchmark to evaluate the performance of the adaptive approach. Assimilation experiments suggest that the new adaptive scheme significantly improves the EnKF behavior when it suffers from small size ensembles and neglected model errors. It was further found to be competitive with the hybrid EnKF–3DVAR approach, depending on ensemble size and data coverage.
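
    A schematic NumPy sketch of the augmentation idea described above: after the EnKF analysis, the residual observation misfit is back-projected onto the state with an optimal-interpolation step built on a preselected static covariance B, and the correction enters as a new ensemble member. All symbols (H, R, B) are placeholders for illustration, not the paper's code.

    import numpy as np

    def enkf_adaptive_augment(Xa, y, H, R, B):
        """Xa: n x N analysis ensemble; y: observations; H: observation
        operator; R: observation-error covariance; B: static background
        covariance for the subspace missing from the ensemble."""
        xbar = Xa.mean(axis=1)
        d = y - H @ xbar                                       # residual misfit after the analysis
        incr = B @ H.T @ np.linalg.solve(H @ B @ H.T + R, d)   # OI back-projection onto the state
        return np.column_stack([Xa, xbar + incr])              # correction added as a new member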

  17. Progress on Nuclear Data Covariances: AFCI-1.2 Covariance Library

    Energy Technology Data Exchange (ETDEWEB)

    Oblozinsky,P.; Oblozinsky,P.; Mattoon,C.M.; Herman,M.; Mughabghab,S.F.; Pigni,M.T.; Talou,P.; Hale,G.M.; Kahler,A.C.; Kawano,T.; Little,R.C.; Young,P.G

    2009-09-28

    Improved neutron cross section covariances were produced for 110 materials including 12 light nuclei (coolants and moderators), 78 structural materials and fission products, and 20 actinides. The improved covariances were organized into the AFCI-1.2 covariance library in 33 energy groups, from 10⁻⁵ eV to 19.6 MeV. BNL contributed improved covariance data for the following materials: ²³Na and ⁵⁵Mn, where a more detailed evaluation was done; improvements in the major structural materials ⁵²Cr, ⁵⁶Fe and ⁵⁸Ni; improved estimates for the remaining structural materials and fission products; improved covariances for 14 minor actinides; and estimates of mubar covariances for ²³Na and ⁵⁶Fe. LANL contributed improved covariance data for ²³⁵U and ²³⁹Pu, including prompt neutron fission spectra, and a completely new evaluation for ²⁴⁰Pu. A new R-matrix evaluation for ¹⁶O including mubar covariances is nearing completion. BNL assembled the library and performed basic testing using improved procedures, including inspection of uncertainty and correlation plots for each material. The AFCI-1.2 library was released to ANL and INL in August 2009.

  18. Lunar Materials Handling System Project

    Data.gov (United States)

    National Aeronautics and Space Administration — The Lunar Materials Handling System (LMHS) is a method for transfer of bulk materials and products into and out of process equipment in support of lunar and Mars in...

  19. Lunar Materials Handling System Project

    Data.gov (United States)

    National Aeronautics and Space Administration — The Lunar Materials Handling System (LMHS) is a method for transfer of lunar soil into and out of process equipment in support of in situ resource utilization...

  20. Engineering solutions in materials handling

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    1997-07-01

    Materials handling and earthmoving equipment produced by Krupp Engineering for the opencast coal industry, and coal stockyard management is described in relationship to use of Krupp equipment in South Africa, with particular reference to the Twistaraai plant. 4 photos.

  1. Tritium handling in vacuum systems

    Energy Technology Data Exchange (ETDEWEB)

    Gill, J.T. [Monsanto Research Corp., Miamisburg, OH (United States). Mound Facility; Coffin, D.O. [Los Alamos National Lab., NM (United States)

    1986-10-01

    This report provides a course in Tritium handling in vacuum systems. Topics presented are: Properties of Tritium; Tritium compatibility of materials; Tritium-compatible vacuum equipment; and Tritium waste treatment.

  2. Ergonomic material-handling device

    Science.gov (United States)

    Barsnick, Lance E.; Zalk, David M.; Perry, Catherine M.; Biggs, Terry; Tageson, Robert E.

    2004-08-24

    A hand-held ergonomic material-handling device capable of moving heavy objects, such as large waste containers and other large objects requiring mechanical assistance. The ergonomic material-handling device can be used with neutral postures of the back, shoulders, wrists and knees, thereby reducing potential injury to the user. The device has two key features: 1) the height of the handles can be adjusted to ergonomically fit the needs of the user's back, wrists and shoulders; and 2) the rounded handlebar shape, together with the size and configuration of the handles, keeps the user's wrists in a neutral posture during manipulation of the device.

  3. Air handling units for hospitals.

    Science.gov (United States)

    Amoroso, V; Gjestvang, R

    1989-10-01

    Air handling units should provide proper quality and conditioned air to various hospital areas. Unit capacity should be able to meet limited space functionality or load changes as well as any smoke control requirements. System components should be readily accessible and appropriate for the spaces served. In summary, engineers should consider the following: environmental design criteria for the area being served; components desired; unit type required; and economic issues affecting design. Using this approach, design engineers can design hospital air handling units methodically and logically.

  4. Stockyard machines for coal handling

    Energy Technology Data Exchange (ETDEWEB)

    Anon.

    2008-07-01

    A little unexpectedly, coal has become something like the fuel of the future. This can also be seen from the large number of projects for coal handling facilities worldwide. Most of the time, stacking and reclaiming equipment forms a major, and often quite impressive, part of these facilities. The contribution under consideration provides examples of such equipment for the handling of coal from Sandvik Mining and Construction.

  5. Civilsamfundets ABC: H for Handling

    DEFF Research Database (Denmark)

    Lund, Anker Brink; Meyer, Gitte

    2015-01-01

    What is civil society? Anker Brink Lund and Gitte Meyer from the CBS Center for Civil Society Studies go through civil society letter by letter. We have reached H for Handling (action).

  6. How to Organise Return Handling

    OpenAIRE

    Koster, René; van de Vendel, M.; Brito, Marisa

    2001-01-01

    Retailers have been taking back products for a long time. In this paper we explore the factors contributing to the decision of combining vs. separating inbound and outbound flows during the return handling process. We do so through a comparative analysis of the operations in nine retailer warehouses, which can be divided into three groups: food retailers, department stores and mail order companies. We identify both aggravating factors and facilitating actions for return handling. Furthe...

  7. Forecasting Covariance Matrices: A Mixed Frequency Approach

    DEFF Research Database (Denmark)

    Halbleib, Roxana; Voev, Valeri

    This paper proposes a new method for forecasting covariance matrices of financial returns. The model mixes volatility forecasts from a dynamic model of daily realized volatilities estimated with high-frequency data with correlation forecasts based on daily data. This new approach allows for flexible covariance matrix dynamics. Our empirical results show that the new mixing approach provides superior forecasts compared to multivariate volatility specifications using single sources of information.

  8. Supergauge Field Theory of Covariant Heterotic Strings

    OpenAIRE

    Michio, KAKU; Physics Department, Osaka University : Physics Department, City College of the City University of New York

    1986-01-01

    We present the gauge covariant second quantized field theory for free heterotic strings, which is a leading candidate for a unified theory of all known particles. Our action is invariant under the semi-direct product of the super Virasoro and the Kac-Moody E_8×E_8 or Spin(32)/Z_2 group. We derive the covariant action by path integrals in the same way that Feynman originally derived the Schrödinger equation. By adding an infinite number of auxiliary fields, we can also make the action explicitly...

  9. BAGEL: A non-ignorable missing value estimation method for mixed ...

    Indian Academy of Sciences (India)

    However, a large gap remains in handling non-ignorable missing values in datasets with mixed attributes. With the intent of addressing this gap, this paper proposes a methodology called Bayesian Genetic Algorithm (BAGEL), which hybridizes Bayesian and Genetic Algorithm principles. In BAGEL, the initial population is ...

  10. Missing data in forest ecology and management: advances in quantitative methods [Preface

    Science.gov (United States)

    Tara Barrett; Matti Maltamo

    2012-01-01

    In recent years, substantial progress has been made in handling missing data issues for applications in forest ecology and management, particularly in the area of imputation techniques. A session on this topic was held at the XXIII IUFRO World Congress in Seoul, South Korea, on August 23-28, 2010, resulting in this special issue of six papers that address recent...

  11. Application of Defence of Insanity in Nigerian Courts: The Missing Link

    African Journals Online (AJOL)

    The research technique of content analysis of insanity defences in Nigeria shows that there is a missing link. Results: That broken link is the application of forensic psychology using a battery of standardized instruments, validated and culture free, handled by unbiased and incorruptible forensic experts. Conclusion: it was ...

  12. Activities on covariance estimation in Japanese Nuclear Data Committee

    Energy Technology Data Exchange (ETDEWEB)

    Shibata, Keiichi [Japan Atomic Energy Research Inst., Tokai, Ibaraki (Japan). Tokai Research Establishment

    1997-03-01

    Described are activities on covariance estimation in the Japanese Nuclear Data Committee. Covariances are obtained from measurements by using the least-squares method. A simultaneous evaluation was performed to deduce covariances of the fission cross sections of U and Pu isotopes. A code system, KALMAN, is used to estimate covariances of nuclear model calculations from uncertainties in model parameters. (author)
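
    To first order, propagating model-parameter uncertainties to calculated cross sections reduces to the "sandwich" rule C = S M S'; the sketch below, with a finite-difference sensitivity matrix, illustrates the principle only and is not the KALMAN code itself.

    import numpy as np

    def sensitivities(model, params, eps=1e-4):
        """Finite-difference sensitivity matrix S[i, j] = d sigma_i / d p_j,
        where model(params) returns cross sections on an energy grid."""
        base = model(params)
        S = np.empty((base.size, params.size))
        for j in range(params.size):
            p = params.copy()
            p[j] += eps
            S[:, j] = (model(p) - base) / eps
        return S

    def model_covariance(S, M):
        """First-order propagated cross-section covariance: C = S M S',
        with M the covariance of the model parameters."""
        return S @ M @ S.T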

  13. Covariant Deformation Quantization of Free Fields

    OpenAIRE

    Harrivel, Dikanaina

    2006-01-01

    We define covariantly a deformation of a given algebra, then we will see how it can be related to a deformation quantization of a class of observables in Quantum Field Theory. Then we will investigate the operator order related to this deformation quantization.

  14. Observed Score Linear Equating with Covariates

    Science.gov (United States)

    Branberg, Kenny; Wiberg, Marie

    2011-01-01

    This paper examined observed score linear equating in two different data collection designs, the equivalent groups design and the nonequivalent groups design, when information from covariates (i.e., background variables correlated with the test scores) was included. The main purpose of the study was to examine the effect (i.e., bias, variance, and…

  15. Hierarchical matrix approximation of large covariance matrices

    KAUST Repository

    Litvinenko, Alexander

    2015-11-30

    We approximate large non-structured Matérn covariance matrices of size n×n in the H-matrix format with a log-linear computational cost and storage O(kn log n), where rank k ≪ n is a small integer. Applications are: spatial statistics, machine learning and image analysis, kriging and optimal design.
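
    The full H-matrix machinery is beyond a short example, but its key ingredient, low-rank compression of off-diagonal covariance blocks, can be shown directly; the Matérn ν = 3/2 kernel and the 1-D point set below are arbitrary choices for illustration.

    import numpy as np

    rng = np.random.default_rng(0)
    x = np.sort(rng.uniform(0.0, 1.0, 1000))
    d = np.abs(x[:, None] - x[None, :]) / 0.2
    C = (1.0 + np.sqrt(3.0) * d) * np.exp(-np.sqrt(3.0) * d)  # Matern nu = 3/2 kernel

    block = C[:500, 500:]                     # off-diagonal block between two point clusters
    U, s, Vt = np.linalg.svd(block, full_matrices=False)
    k = max(1, int((s / s[0] > 1e-8).sum()))  # numerical rank at tolerance 1e-8, k << 500
    approx = (U[:, :k] * s[:k]) @ Vt[:k]
    print(k, np.linalg.norm(block - approx) / np.linalg.norm(block))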

  16. Optimal covariate designs theory and applications

    CERN Document Server

    Das, Premadhis; Mandal, Nripes Kumar; Sinha, Bikas Kumar

    2015-01-01

    This book primarily addresses the optimality aspects of covariate designs. A covariate model is a combination of ANOVA and regression models. Optimal estimation of the parameters of the model using a suitable choice of designs is of great importance; as such choices allow experimenters to extract maximum information for the unknown model parameters. The main emphasis of this monograph is to start with an assumed covariate model in combination with some standard ANOVA set-ups such as CRD, RBD, BIBD, GDD, BTIBD, BPEBD, cross-over, multi-factor, split-plot and strip-plot designs, treatment control designs, etc. and discuss the nature and availability of optimal covariate designs. In some situations, optimal estimations of both ANOVA and the regression parameters are provided. Global optimality and D-optimality criteria are mainly used in selecting the design. The standard optimality results of both discrete and continuous set-ups have been adapted, and several novel combinatorial techniques have been applied for...

  17. (co)variances for growth and efficiency

    African Journals Online (AJOL)

    CANTET, R.J.C., KRESS, D.D., ANDERSON, D.C., DOORNBOS, D.E., BURFENING, P.J. & BLACKWELL, R.L., 1988. Direct and maternal variances and covariances and maternal phenotypic effects on preweaning growth of beef cattle. J. Anim. Sci. 66, 648. CUNNINGHAM, E.P., MOON, R.A. & GJEDREN, T., 1970.

  18. Covariant perturbation theory and chiral superpropagators

    CERN Document Server

    Ecker, G

    1972-01-01

    The authors use a covariant formulation of perturbation theory for the non-linear chiral invariant pion model to define chiral superpropagators leading to S-matrix elements which are independent of the choice of the pion field coordinates. The relation to the standard definition of chiral superpropagators is discussed. (11 refs).

  19. Galilean Covariance and the Gravitational Field

    OpenAIRE

    Ulhoa, S. C.; Khanna, F. C.; Santana, A.E.

    2009-01-01

    The paper is concerned with the development of a gravitational field theory having locally a covariant version of the Galilei group. We show that this Galilean gravity can be used to study the advance of the perihelion of a planet, paralleling the result of the (relativistic) theory of general relativity in the post-Newtonian approximation.

  20. On translation-covariant quantum Markov equations

    Science.gov (United States)

    Holevo, A. S.

    1995-04-01

    The structure of quantum Markov control equations with unbounded generators, covariant with respect to (1) an irreducible representation of the Weyl CCR on R^d and (2) a representation of the group R^d, is completely described via non-commutative Levy-Khinchin-type formulae. The existence and uniqueness of solutions for such equations are briefly discussed.

  1. Droid X The Missing Manual

    CERN Document Server

    Gralla, Preston

    2011-01-01

    Get the most from your Droid X right away with this entertaining Missing Manual. Veteran tech author Preston Gralla offers a guided tour of every feature, with lots of expert tips and tricks along the way. You'll learn how to use calling and texting features, take and share photos, enjoy streaming music and video, and much more. Packed with full-color illustrations, this engaging book covers everything from getting started to advanced features and troubleshooting. Unleash the power of Motorola's hot new device with Droid X: The Missing Manual. Get organized. Import your contacts and sync wit

  2. Motorola Xoom The Missing Manual

    CERN Document Server

    Gralla, Preston

    2011-01-01

    Motorola Xoom is the first tablet to rival the iPad, and no wonder with all of the great features packed into this device. But learning how to use everything can be tricky, and Xoom doesn't come with a printed guide. That's where this Missing Manual comes in. Gadget expert Preston Gralla helps you master your Xoom with step-by-step instructions and clear explanations. As with all Missing Manuals, this book offers refreshing, jargon-free prose and informative illustrations. Use your Xoom as an e-book reader, music player, camcorder, and phone. Keep in touch with email, video and text chat, and so...

  3. Variable Selection in the Presence of Missing Data: Imputation-based Methods.

    Science.gov (United States)

    Zhao, Yize; Long, Qi

    2017-01-01

    Variable selection plays an essential role in regression analysis, as it identifies important variables that are associated with outcomes and is known to improve the predictive accuracy of resulting models. Variable selection methods have been widely investigated for fully observed data. However, in the presence of missing data, methods for variable selection need to be carefully designed to account for missing data mechanisms and for the statistical techniques used to handle missing data. Since imputation is arguably the most popular method for handling missing data due to its ease of use, statistical methods for variable selection that are combined with imputation are of particular interest. These methods, valid under the assumptions of missing at random (MAR) and missing completely at random (MCAR), largely fall into three general strategies. The first strategy applies existing variable selection methods to each imputed dataset and then combines the variable selection results across all imputed datasets. The second strategy applies existing variable selection methods to stacked imputed datasets. The third strategy combines resampling techniques such as the bootstrap with imputation. Despite recent advances, this area remains under-developed and offers fertile ground for further research.
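
    A hedged sketch of the first strategy (selection per imputed dataset, combined by selection frequency) using scikit-learn's IterativeImputer and LassoCV; the 50% inclusion threshold is an arbitrary illustrative choice, not a recommendation from the paper.

    import numpy as np
    from sklearn.experimental import enable_iterative_imputer  # noqa: F401
    from sklearn.impute import IterativeImputer
    from sklearn.linear_model import LassoCV

    def mi_select(X, y, n_imputations=5, threshold=0.5):
        """X: array with NaNs for missing entries; y: fully observed outcome.
        Returns indices of variables selected in at least `threshold` of
        the imputed datasets."""
        counts = np.zeros(X.shape[1])
        for seed in range(n_imputations):
            imputer = IterativeImputer(random_state=seed, sample_posterior=True)
            X_imp = imputer.fit_transform(X)          # one stochastic imputation
            lasso = LassoCV(cv=5).fit(X_imp, y)       # selector on the completed data
            counts += (lasso.coef_ != 0)              # tally selections
        return np.where(counts / n_imputations >= threshold)[0]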

  4. Unravelling Lorentz Covariance and the Spacetime Formalism

    Directory of Open Access Journals (Sweden)

    Cahill R. T.

    2008-10-01

    Full Text Available We report the discovery of an exact mapping from Galilean time and space coordinates to Minkowski spacetime coordinates, showing that Lorentz covariance and the spacetime construct are consistent with the existence of a dynamical 3-space, and absolute motion. We illustrate this mapping first with the standard theory of sound, as vibrations of a medium, which itself may be undergoing fluid motion, and which is covariant under Galilean coordinate transformations. By introducing a different non-physical class of space and time coordinates it may be cast into a form that is covariant under Lorentz transformations, wherein the speed of sound is now the invariant speed. If this latter formalism were taken as fundamental and complete we would be led to the introduction of a pseudo-Riemannian spacetime description of sound, with a metric characterised by an invariant speed of sound. This analysis is an allegory for the development of 20th century physics, where the Lorentz covariant Maxwell equations were constructed first, and the Galilean form was later constructed by Hertz, but ignored. It is shown that the Lorentz covariance of the Maxwell equations only occurs because of the use of non-physical space and time coordinates. The use of this class of coordinates has confounded 20th century physics, and resulted in the existence of a “flowing” dynamical 3-space being overlooked. The discovery of the dynamics of this 3-space has led to the derivation of an extended gravity theory as a quantum effect, confirmed by numerous experiments and observations

  6. Econometric analysis of realized covariation: high frequency based covariance, regression, and correlation in financial economics

    DEFF Research Database (Denmark)

    Barndorff-Nielsen, Ole Eiler; Shephard, N.

    2004-01-01

    This paper analyses multivariate high frequency financial data using realized covariation. We provide a new asymptotic distribution theory for standard methods such as regression, correlation analysis, and covariance. It will be based on a fixed interval of time (e.g., a day or week), allowing...... the number of high frequency returns during this period to go to infinity. Our analysis allows us to study how high frequency correlations, regressions, and covariances change through time. In particular we provide confidence intervals for each of these quantities....
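
    The basic estimator underlying this analysis is easy to state: over a fixed interval, the realized covariance is the sum of outer products of the high-frequency return vectors, from which realized correlations (and regressions) follow by normalising. A minimal sketch, with the return matrix as a placeholder input:

    import numpy as np

    def realized_covariance(returns):
        """returns: m x d array of intraday returns for d assets over one
        interval (e.g. a day). Result: d x d realized covariance matrix,
        i.e. the sum of outer products r_i r_i' (no demeaning)."""
        r = np.asarray(returns)
        return r.T @ r

    def realized_correlation(returns):
        """Normalise the realized covariance to realized correlations."""
        rc = realized_covariance(returns)
        sd = np.sqrt(np.diag(rc))
        return rc / np.outer(sd, sd)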

  7. Managing distance and covariate information with point-based clustering

    Directory of Open Access Journals (Sweden)

    Peter A. Whigham

    2016-09-01

    Full Text Available Background: Geographic perspectives of disease and the human condition often involve point-based observations and questions of clustering or dispersion within a spatial context. These problems involve a finite set of point observations and are constrained by a larger, but finite, set of locations where the observations could occur. Developing a rigorous method for pattern analysis in this context requires handling spatial covariates, a method for constrained finite spatial clustering, and addressing bias in geographic distance measures. An approach based on Ripley’s K and applied to the problem of clustering with deliberate self-harm (DSH) is presented. Methods: Point-based Monte-Carlo simulation of Ripley’s K, accounting for socio-economic deprivation and sources of distance measurement bias, was developed to estimate clustering of DSH at a range of spatial scales. A rotated Minkowski L1 distance metric allowed variation in physical distance and clustering to be assessed. Self-harm data were derived from an audit of 2 years’ emergency hospital presentations (n = 136) in a New Zealand town (population ~50,000). The study area was defined by residential (housing) land parcels representing a finite set of possible point addresses. Results: Area-based deprivation was spatially correlated. Accounting for deprivation and distance bias showed evidence for clustering of DSH for spatial scales up to 500 m with a one-sided 95% CI, suggesting that social contagion may be present for this urban cohort. Conclusions: Many problems involve finite locations in geographic space that require estimates of distance-based clustering at many scales. A Monte-Carlo approach to Ripley’s K, incorporating covariates and models for distance bias, is crucial when assessing health-related clustering. The case study showed that social network structure defined at the neighbourhood level may account for aspects of neighbourhood clustering of DSH. Accounting for
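
    A simplified sketch of the Monte-Carlo treatment of Ripley's K over a finite candidate set (omitting the deprivation covariate and the rotated Minkowski distance correction used in the study):

    import numpy as np

    def ripley_k(points, area, radii):
        """points: m x 2 coordinates; returns K-hat(r) for each r in radii."""
        m = len(points)
        d = np.linalg.norm(points[:, None, :] - points[None, :, :], axis=-1)
        np.fill_diagonal(d, np.inf)                    # exclude self-pairs
        return np.array([area * (d < r).sum() / (m * (m - 1)) for r in radii])

    def mc_envelope(candidates, m, area, radii, n_sims=99, seed=0):
        """Simulation envelope: draw m random addresses from the finite
        candidate set n_sims times, record pointwise min/max of K-hat."""
        rng = np.random.default_rng(seed)
        sims = np.vstack([ripley_k(candidates[rng.choice(len(candidates), m,
                                                         replace=False)],
                                   area, radii) for _ in range(n_sims)])
        return sims.min(axis=0), sims.max(axis=0)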

  8. Gastric cancer missed at endoscopy

    African Journals Online (AJOL)

    Ahmed Gado

    2012-09-21

    ... before endoscopy, taking into account risk factors for cancer and the clinical presentation. Careful examination of the stomach during endoscopy should be performed in order not to miss any lesion. All gastric ulcers must be biopsied and a repeat endoscopy performed following a course of acid suppression.

  9. Did you miss the eclipse?

    CERN Multimedia

    CERN Bulletin

    2015-01-01

    Not wanting to miss a moment of the beautiful celestial dance that played out on Friday, 20 March, Jens Roder of CERN’s PH group took to the Jura mountains, where he got several shots of the event. Here is a selection of his photos, which he was kind enough to share with the Bulletin and its readers.

  10. Linear covariance analysis for gimbaled pointing systems

    Science.gov (United States)

    Christensen, Randall S.

    Linear covariance analysis has been utilized in a wide variety of applications. Historically, the theory has made significant contributions to navigation system design and analysis. More recently, the theory has been extended to capture the combined effect of navigation errors and closed-loop control on the performance of the system. These advancements have made possible rapid analysis and comprehensive trade studies of complicated systems ranging from autonomous rendezvous to vehicle ascent trajectory analysis. Comprehensive trade studies are also needed in the area of gimbaled pointing systems where the information needs are different from previous applications. It is therefore the objective of this research to extend the capabilities of linear covariance theory to analyze the closed-loop navigation and control of a gimbaled pointing system. The extensions developed in this research include modifying the linear covariance equations to accommodate a wider variety of controllers. This enables the analysis of controllers common to gimbaled pointing systems, with internal states and associated dynamics as well as actuator command filtering and auxiliary controller measurements. The second extension is the extraction of power spectral density estimates from information available in linear covariance analysis. This information is especially important to gimbaled pointing systems where not just the variance but also the spectrum of the pointing error impacts the performance. The extended theory is applied to a model of a gimbaled pointing system which includes both flexible and rigid body elements as well as input disturbances, sensor errors, and actuator errors. The results of the analysis are validated by direct comparison to a Monte Carlo-based analysis approach. Once the developed linear covariance theory is validated, analysis techniques that are often prohibitory with Monte Carlo analysis are used to gain further insight into the system. These include the creation
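
    The core recursion behind linear covariance analysis can be sketched in a few lines: the error covariance is propagated through the (closed-loop) dynamics and updated at each measurement, so dispersions are obtained in a single deterministic pass rather than from many Monte Carlo runs. The matrices below are placeholders, not the gimbal model of the paper.

    import numpy as np

    def lincov_step(P, F, Q, H=None, R=None):
        """One LinCov step: propagate P -> F P F' + Q through the
        closed-loop dynamics, then apply a Kalman measurement update
        if a measurement model (H, R) is supplied."""
        P = F @ P @ F.T + Q
        if H is not None:
            K = P @ H.T @ np.linalg.inv(H @ P @ H.T + R)
            P = (np.eye(P.shape[0]) - K @ H) @ P
        return P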

  11. The handling of chemical data

    CERN Document Server

    Lark, P D; Bosworth, R C L

    1968-01-01

    The Handling of Chemical Data deals with how measurements, such as those arrived at from chemical experimentation, are handled. The book discusses the different kinds of measurements and their specific dimensional characteristics by starting with the origin and presentation of chemical data. The text explains the units, fixed points, and relationships found between scales, the concept of dimensions, the presentation of quantitative data (whether in a tabular or graphical form), and some uses of empirical equations. The book also explains the relationship between two variables, and how equatio

  12. A covariance NMR toolbox for MATLAB and OCTAVE.

    Science.gov (United States)

    Short, Timothy; Alzapiedi, Leigh; Brüschweiler, Rafael; Snyder, David

    2011-03-01

    The Covariance NMR Toolbox is a new software suite that provides a streamlined implementation of covariance-based analysis of multi-dimensional NMR data. The Covariance NMR Toolbox uses the MATLAB or, alternatively, the freely available GNU OCTAVE computer language, providing a user-friendly environment in which to apply and explore covariance techniques. Covariance methods implemented in the toolbox described here include direct and indirect covariance processing, 4D covariance, generalized indirect covariance (GIC), and Z-matrix transform. In order to provide compatibility with a wide variety of spectrometer and spectral analysis platforms, the Covariance NMR Toolbox uses the NMRPipe format for both input and output files. Additionally, datasets small enough to fit in memory are stored as arrays that can be displayed and further manipulated in a versatile manner within MATLAB or OCTAVE. Copyright © 2010 Elsevier Inc. All rights reserved.
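
    For orientation, direct covariance processing amounts to taking the matrix square root of F'F, where F holds the real spectra after Fourier transform along the direct dimension; the toolbox implements this (and the indirect, 4D, GIC and Z-matrix variants) in MATLAB/OCTAVE, while the sketch below is a bare NumPy/SciPy analogue with placeholder data.

    import numpy as np
    from scipy.linalg import sqrtm

    def direct_covariance(F):
        """F: n1 x n2 real matrix (rows: t1 increments after FT along the
        direct dimension). Returns the n2 x n2 covariance spectrum (F'F)^(1/2)."""
        return np.real(sqrtm(F.T @ F))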

  13. Covariant holography of a tachyonic accelerating universe

    Energy Technology Data Exchange (ETDEWEB)

    Rozas-Fernandez, Alberto [Consejo Superior de Investigaciones Cientificas, Instituto de Fisica Fundamental, Madrid (Spain); University of Portsmouth, Institute of Cosmology and Gravitation, Portsmouth (United Kingdom)

    2014-08-15

    We apply the holographic principle to a flat dark energy dominated Friedmann-Robertson-Walker spacetime filled with a tachyon scalar field with constant equation of state w = p/ρ, both for w > -1 and w < -1. By using a geometrical covariant procedure, which allows the construction of holographic hypersurfaces, we have obtained for each case the position of the preferred screen and have then compared these with those obtained by using the holographic dark energy model with the future event horizon as the infrared cutoff. In the phantom scenario, one of the two obtained holographic screens is placed on the big rip hypersurface, both for the covariant holographic formalism and the holographic phantom model. It is also analyzed whether the existence of these preferred screens allows a mathematically consistent formulation of fundamental theories based on the existence of an S-matrix at infinite distances. (orig.)

  14. Covariance and the hierarchy of frame bundles

    Science.gov (United States)

    Estabrook, Frank B.

    1987-01-01

    This is an essay on the general concept of covariance, and its connection with the structure of the nested set of higher frame bundles over a differentiable manifold. Examples of covariant geometric objects include not only linear tensor fields, densities and forms, but affinity fields, sectors and sector forms, higher order frame fields, etc., often having nonlinear transformation rules and Lie derivatives. The intrinsic, or invariant, sets of forms that arise on frame bundles satisfy the graded Cartan-Maurer structure equations of an infinite Lie algebra. Reduction of these gives invariant structure equations for Lie pseudogroups, and for G-structures of various orders. Some new results are introduced for prolongation of structure equations, and for treatment of Riemannian geometry with higher-order moving frames. The use of invariant form equations for nonlinear field physics is implicitly advocated.

  15. Torsion and geometrostasis in covariant superstrings

    Energy Technology Data Exchange (ETDEWEB)

    Zachos, C.

    1985-01-01

    The covariant action for freely propagating heterotic superstrings consists of a metric and a torsion term with a special relative strength. It is shown that the strength for which torsion flattens the underlying 10-dimensional superspace geometry is precisely that which yields free oscillators on the light cone. This is in complete analogy with the geometrostasis of two-dimensional sigma-models with Wess-Zumino interactions. 13 refs.

  16. Batch Covariance Relaxation (BCR) Adaptive Processing.

    Science.gov (United States)

    1981-08-01

    techniques dictates the need for processing flexibility, which may be met most easily by a digital mechanization. The effort conducted addresses the...essential aspects of Batch Covariance Relaxation (BCR) adaptive processing applied to digital adaptive array processing. In contrast to dynamic... library, RADAR:LIB. An extensive explanation as to how to use these programs is given. It is shown how the output of each is used as part of the input for

  17. Covariance Kernels from Bayesian Generative Models

    OpenAIRE

    Seeger, Matthias

    2002-01-01

    We propose the framework of mutual information kernels for learning covariance kernels, as used in Support Vector machines and Gaussian process classifiers, from unlabeled task data using Bayesian techniques. We describe an implementation of this framework which uses variational Bayesian mixtures of factor analyzers in order to attack classification problems in high-dimensional spaces where labeled data is sparse, but unlabeled data is abundant.

  18. ANL Critical Assembly Covariance Matrix Generation

    Energy Technology Data Exchange (ETDEWEB)

    McKnight, Richard D. [Argonne National Lab. (ANL), Argonne, IL (United States); Grimm, Karl N. [Argonne National Lab. (ANL), Argonne, IL (United States)

    2014-01-15

    This report discusses the generation of a covariance matrix for selected critical assemblies that were carried out by Argonne National Laboratory (ANL) using four critical facilities, all of which are now decommissioned. The four ANL critical facilities are: ZPR-3, located at ANL-West (now Idaho National Laboratory, INL); ZPR-6 and ZPR-9, located at ANL-East (Illinois); and ZPPR, located at ANL-West.

  19. Linear Covariance Analysis for a Lunar Lander

    Science.gov (United States)

    Jang, Jiann-Woei; Bhatt, Sagar; Fritz, Matthew; Woffinden, David; May, Darryl; Braden, Ellen; Hannan, Michael

    2017-01-01

    A next-generation lunar lander Guidance, Navigation, and Control (GNC) system, which includes a state-of-the-art optical sensor suite, is proposed in a concept design cycle. The design goal is to allow the lander to softly land within the prescribed landing precision. The achievement of this precision landing requirement depends on proper selection of the sensor suite. In this paper, a robust sensor selection procedure is demonstrated using a Linear Covariance (LinCov) analysis tool developed by Draper.

  20. On conservativity of covariant dynamical semigroups

    Science.gov (United States)

    Holevo, A. S.

    1993-10-01

The notion of form-generator of a dynamical semigroup is introduced and used to give a criterion for the conservativity (preservation of the identity) of covariant dynamical semigroups. It allows the problem of constructing conservative dynamical semigroups to be reduced to familiar problems of non-explosion for Markov processes and the construction of a contraction semigroup in a Hilbert space. Some new classes of unbounded generators, related to the Lévy-Khinchin formula, are described.

  1. Covariance tracking: architecture optimizations for embedded systems

    Science.gov (United States)

    Romero, Andrés; Lacassagne, Lionel; Gouiffès, Michèle; Zahraee, Ali Hassan

    2014-12-01

Covariance matching techniques have recently grown in interest due to their good performances for object retrieval, detection, and tracking. By mixing color and texture information in a compact representation, the descriptor can be applied to various kinds of objects (textured or not, rigid or not). Unfortunately, the original version requires heavy computations and is difficult to execute in real time on embedded systems. This article presents a review on different versions of the algorithm and its various applications; our aim is to describe the most crucial challenges and particularities that appeared when implementing and optimizing the covariance matching algorithm on a variety of desktop processors and on low-power processors suitable for embedded systems. An application of texture classification is used to compare different versions of the region descriptor. Then a comprehensive study is made to reach a higher level of performance on multi-core CPU architectures by comparing different ways to structure the information, using single instruction, multiple data (SIMD) instructions and advanced loop transformations. The execution time is reduced significantly on dual-core CPU architectures for embedded computing: ARM Cortex-A9 and Cortex-A15, and Intel Penryn-M U9300 and Haswell-M 4650U. According to our experiments on covariance tracking, it is possible to reach a speedup greater than ×2 on both ARM and Intel architectures, when compared to the original algorithm, leading to real-time execution.
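
As a hedged illustration of the region covariance descriptor this record describes, the NumPy sketch below builds a 7x7 covariance of per-pixel position, color and gradient features and compares two descriptors with the generalized-eigenvalue (Förstner) metric. The feature set and function names are illustrative, not the authors' exact pipeline, and none of the SIMD optimizations are shown.

```python
# A minimal sketch of a covariance region descriptor (illustrative feature set).
import numpy as np
from scipy.linalg import eigh

def region_covariance(image, top, left, h, w):
    """Covariance descriptor of a rectangular region of an RGB image.

    Each pixel contributes a feature vector mixing position, color and
    texture (gradient magnitudes), so several cues are fused in one
    small symmetric matrix.
    """
    patch = image[top:top + h, left:left + w].astype(float)
    ys, xs = np.mgrid[0:h, 0:w]
    intensity = patch.mean(axis=2)
    gy, gx = np.gradient(intensity)          # simple finite differences
    feats = np.stack([xs.ravel(), ys.ravel(),
                      patch[..., 0].ravel(), patch[..., 1].ravel(),
                      patch[..., 2].ravel(),
                      np.abs(gx).ravel(), np.abs(gy).ravel()])
    return np.cov(feats)                     # 7x7 descriptor

def covariance_distance(c1, c2):
    """Dissimilarity via generalized eigenvalues (Foerstner metric)."""
    lam = eigh(c1, c2, eigvals_only=True)
    return np.sqrt(np.sum(np.log(lam) ** 2))
```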

  2. Development of covariance capabilities in EMPIRE code

    Energy Technology Data Exchange (ETDEWEB)

    Herman,M.; Pigni, M.T.; Oblozinsky, P.; Mughabghab, S.F.; Mattoon, C.M.; Capote, R.; Cho, Young-Sik; Trkov, A.

    2008-06-24

    The nuclear reaction code EMPIRE has been extended to provide evaluation capabilities for neutron cross section covariances in the thermal, resolved resonance, unresolved resonance and fast neutron regions. The Atlas of Neutron Resonances by Mughabghab is used as a primary source of information on uncertainties at low energies. Care is taken to ensure consistency among the resonance parameter uncertainties and those for thermal cross sections. The resulting resonance parameter covariances are formatted in the ENDF-6 File 32. In the fast neutron range our methodology is based on model calculations with the code EMPIRE combined with experimental data through several available approaches. The model-based covariances can be obtained using deterministic (Kalman) or stochastic (Monte Carlo) propagation of model parameter uncertainties. We show that these two procedures yield comparable results. The Kalman filter and/or the generalized least square fitting procedures are employed to incorporate experimental information. We compare the two approaches analyzing results for the major reaction channels on {sup 89}Y. We also discuss a long-standing issue of unreasonably low uncertainties and link it to the rigidity of the model.
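
The stochastic route mentioned above (Monte Carlo propagation of model parameter uncertainties) can be sketched in a few lines. The "model" below is a toy exponential stand-in, not EMPIRE, and the energy grid and parameter covariance are assumed for illustration only.

```python
# Toy Monte Carlo propagation of parameter uncertainties to a
# cross-section covariance matrix.
import numpy as np

def toy_cross_section(energy, params):
    a, b = params
    return a * np.exp(-energy / b)          # stand-in for a model calculation

rng = np.random.default_rng(0)
energies = np.linspace(0.1, 20.0, 50)           # MeV grid (illustrative)
p_mean = np.array([10.0, 5.0])                  # central parameter values
p_cov = np.array([[0.25, 0.02], [0.02, 0.09]])  # assumed parameter covariance

samples = rng.multivariate_normal(p_mean, p_cov, size=5000)
xs = np.array([toy_cross_section(energies, p) for p in samples])

xs_cov = np.cov(xs, rowvar=False)     # energy-energy covariance matrix
rel_unc = np.sqrt(np.diag(xs_cov)) / xs.mean(axis=0)  # relative uncertainties
```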

  3. APPLICATION OF RESTART COVARIANCE MATRIX ADAPTATION EVOLUTION STRATEGY (RCMA-ES) TO GENERATION EXPANSION PLANNING PROBLEM

    Directory of Open Access Journals (Sweden)

    K. Karthikeyan

    2012-10-01

This paper describes the application of an evolutionary algorithm, Restart Covariance Matrix Adaptation Evolution Strategy (RCMA-ES), to the Generation Expansion Planning (GEP) problem. RCMA-ES is a class of continuous Evolutionary Algorithm (EA) derived from the concept of self-adaptation in evolution strategies, which adapts the covariance matrix of a multivariate normal search distribution. The original GEP problem is modified by incorporating the Virtual Mapping Procedure (VMP). The GEP problem of synthetic test systems for 6-year, 14-year and 24-year planning horizons having five types of candidate units is considered. Two different constraint-handling methods are incorporated and the impact of each method has been compared. In addition, comparison and validation have also been made with the dynamic programming method.
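
A minimal restart loop around CMA-ES can be sketched as follows, assuming the third-party Python `cma` package (pip install cma). The cost function is a placeholder: the GEP objective, the constraint-handling methods and the Virtual Mapping Procedure are not reproduced here.

```python
# Restart-CMA-ES sketch on a stand-in cost function.
import cma

def gep_cost(x):
    # Placeholder objective; a real GEP cost would price candidate units
    # against demand and reliability constraints.
    return sum((xi - 1.0) ** 2 for xi in x)

best = None
for restart in range(5):                       # simple restart scheme
    es = cma.CMAEvolutionStrategy(5 * [0.0], 0.5, {'verbose': -9})
    es.optimize(gep_cost)
    if best is None or es.result.fbest < best[0]:
        best = (es.result.fbest, es.result.xbest)

print("best cost:", best[0])
```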

  4. 7 CFR 982.7 - To handle.

    Science.gov (United States)

    2010-01-01

    ... 7 Agriculture 8 2010-01-01 2010-01-01 false To handle. 982.7 Section 982.7 Agriculture Regulations of the Department of Agriculture (Continued) AGRICULTURAL MARKETING SERVICE (Marketing Agreements and... Order Regulating Handling Definitions § 982.7 To handle. To handle means to sell, consign, transport or...

  5. 7 CFR 984.13 - To handle.

    Science.gov (United States)

    2010-01-01

    ... 7 Agriculture 8 2010-01-01 2010-01-01 false To handle. 984.13 Section 984.13 Agriculture Regulations of the Department of Agriculture (Continued) AGRICULTURAL MARKETING SERVICE (Marketing Agreements... Regulating Handling Definitions § 984.13 To handle. To handle means to pack, sell, consign, transport, or...

  6. 7 CFR 993.13 - Handle.

    Science.gov (United States)

    2010-01-01

    ... 7 Agriculture 8 2010-01-01 2010-01-01 false Handle. 993.13 Section 993.13 Agriculture Regulations of the Department of Agriculture (Continued) AGRICULTURAL MARKETING SERVICE (Marketing Agreements and... Regulating Handling Definitions § 993.13 Handle. Handle means to receive, package, sell, consign, transport...

  7. Exception handling for sensor fusion

    Science.gov (United States)

    Chavez, G. T.; Murphy, Robin R.

    1993-08-01

    This paper presents a control scheme for handling sensing failures (sensor malfunctions, significant degradations in performance due to changes in the environment, and errant expectations) in sensor fusion for autonomous mobile robots. The advantages of the exception handling mechanism are that it emphasizes a fast response to sensing failures, is able to use only a partial causal model of sensing failure, and leads to a graceful degradation of sensing if the sensing failure cannot be compensated for. The exception handling mechanism consists of two modules: error classification and error recovery. The error classification module in the exception handler attempts to classify the type and source(s) of the error using a modified generate-and-test procedure. If the source of the error is isolated, the error recovery module examines its cache of recovery schemes, which either repair or replace the current sensing configuration. If the failure is due to an error in expectation or cannot be identified, the planner is alerted. Experiments using actual sensor data collected by the CSM Mobile Robotics/Machine Perception Laboratory's Denning mobile robot demonstrate the operation of the exception handling mechanism.
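
The two-module structure described above (error classification by generate-and-test, then recovery from a cache of schemes) can be summarized in a schematic sketch; all class and method names are illustrative, not the authors' implementation.

```python
# Schematic exception handler for sensor fusion.
class SensingException(Exception):
    pass

class ExceptionHandler:
    def __init__(self, recovery_schemes):
        # Map from error source to a repair/replace action.
        self.recovery_schemes = recovery_schemes

    def classify(self, observation, expectation):
        """Modified generate-and-test: propose candidate error sources
        and keep the first one consistent with the evidence."""
        for candidate in ("sensor_malfunction", "environment_change",
                          "errant_expectation"):
            if self._consistent(candidate, observation, expectation):
                return candidate
        return None

    def handle(self, observation, expectation):
        source = self.classify(observation, expectation)
        if source is None or source == "errant_expectation":
            raise SensingException("alert planner")  # cannot recover locally
        return self.recovery_schemes[source]()       # repair or replace

    def _consistent(self, candidate, observation, expectation):
        ...  # domain-specific consistency tests go here
```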

  8. TNO reticle handling test platform

    NARCIS (Netherlands)

    Crowcombe, W.E.; Hollemans, C.L.; Fritz, E.C.; Donck, J.C.J. van der; Koster, N.B.

    2014-01-01

    Particle free handling of EUV reticles is a major concern in industry. For reaching economically feasible yield levels, it is reported that Particle-per-Reticle-Pass (PRP) levels should be better than 0.0001 for particles larger than 18 nm. Such cleanliness levels are yet to be reported for current

  9. Liberalisation of municipal waste handling

    DEFF Research Database (Denmark)

    Busck, Ole Gunni

    2006-01-01

    Liberalisation of municipal waste handling: How are sustainable practices pursued? In the process of liberalization of public services in Europe contracting out the collection of municipal waste has surged. Research in Denmark has shown that municipalities in general have pursued a narrow policy ...

  10. How to Organise Return Handling

    NARCIS (Netherlands)

    M.B.M. de Koster (René); M. van de Vendel; M.P. de Brito (Marisa)

    2001-01-01

Retailers have been taking back products for a long time. In this paper we explore the factors contributing to the decision of combining vs. separating inbound and outbound flows during the return handling process. We do so through a comparative analysis of the operations in nine retailer

  11. Estimation of Covariances on Prompt Fission Neutron Spectra and Impact of the PFNS Model on the Vessel Fluence

    Directory of Open Access Journals (Sweden)

    Berge Léonie

    2016-01-01

As the need for precise handling of nuclear data covariances grows ever stronger, no information about covariances of prompt fission neutron spectra (PFNS) is available in the evaluated library JEFF-3.2, although present in the ENDF/B-VII.1 and JENDL-4.0 libraries for the main fissile isotopes. The aim of this work is to provide an estimation of covariance matrices related to PFNS, in the frame of some commonly used models for the evaluated files, such as the Maxwellian spectrum, the Watt spectrum, or the Madland-Nix spectrum. The evaluation of PFNS through these models involves an adjustment of model parameters to available experimental data, and the calculation of the spectrum variance-covariance matrix arising from experimental uncertainties. We present the results for thermal neutron induced fission of 235U. The systematic experimental uncertainties are propagated via the marginalization technique available in the CONRAD code. They are of great influence on the final covariance matrix, and therefore on the width of the spectrum uncertainty band. In addition to this covariance estimation work, we have also investigated the importance of the fission spectrum model choice for a reactor calculation. A study of the vessel fluence depending on the PFNS model is presented. This is done through the propagation of neutrons emitted from a fission source in a simplified PWR using the TRIPOLI-4® code. This last study includes thermal fission spectra from the FIFRELIN Monte-Carlo code dedicated to the simulation of prompt particle emission during fission.
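
The model-based part of this covariance estimation can be illustrated with a toy fit: adjust Watt-spectrum parameters to synthetic data and propagate the fitted parameter covariance to a spectrum covariance with the Jacobian sandwich rule. The marginalization of systematic uncertainties performed in CONRAD is omitted, and all numbers (grid, noise level, starting values) are illustrative.

```python
# Toy Watt-spectrum fit and parameter-to-spectrum covariance propagation.
import numpy as np
from scipy.optimize import curve_fit

def watt(E, a, b):
    # Unnormalized Watt spectrum shape.
    return np.exp(-E / a) * np.sinh(np.sqrt(b * E))

E = np.linspace(0.1, 10.0, 40)                 # MeV (illustrative grid)
rng = np.random.default_rng(1)
data = watt(E, 0.988, 2.249) * (1 + 0.03 * rng.standard_normal(E.size))

popt, pcov = curve_fit(watt, E, data, p0=[1.0, 2.0],
                       sigma=0.03 * data, absolute_sigma=True)

# Jacobian of the spectrum w.r.t. parameters, by central differences.
eps = 1e-6
J = np.empty((E.size, 2))
for j in range(2):
    dp = np.zeros(2); dp[j] = eps
    J[:, j] = (watt(E, *(popt + dp)) - watt(E, *(popt - dp))) / (2 * eps)

spectrum_cov = J @ pcov @ J.T   # spectrum variance-covariance matrix
```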

  12. Tomato handling practices in restaurants.

    Science.gov (United States)

    Kirkland, Elizabeth; Green, Laura R; Stone, Carmily; Reimann, Dave; Nicholas, Dave; Mason, Ryan; Frick, Roberta; Coleman, Sandra; Bushnell, Lisa; Blade, Henry; Radke, Vincent; Selman, Carol

    2009-08-01

    In recent years, multiple outbreaks of Salmonella infection have been associated with fresh tomatoes. Investigations have indicated that tomato contamination likely occurred early in the farm-to-consumer chain, although tomato consumption occurred mostly in restaurants. Researchers have hypothesized that tomato handling practices in restaurants may contribute to these outbreaks. However, few empirical data exist on how restaurant workers handle tomatoes. This study was conducted to examine tomato handling practices in restaurants. Members of the Environmental Health Specialists Network (EHS-Net) observed tomato handling practices in 449 restaurants. The data indicated that handling tomatoes appropriately posed a challenge to many restaurants. Produce-only cutting boards were not used on 49% of tomato cutting observations, and gloves were not worn in 36% of tomato cutting observations. Although tomatoes were washed under running water as recommended in most (82%) of the washing observations, tomatoes were soaked in standing water, a practice not recommended by the U.S. Food and Drug Administration (FDA), in 18% of observations, and the temperature differential between the wash water and tomatoes did not meet FDA guidelines in 21% of observations. About half of all batches of cut tomatoes in holding areas were above 41 degrees F (5 degrees C), the temperature recommended by the FDA. The maximum holding time for most (73%) of the cut tomatoes held above 41 degrees F exceeded the FDA recommended holding time of 4 h for unrefrigerated tomatoes (i.e., tomatoes held above 41 degrees F). The information provided by this study can be used to inform efforts to develop interventions and thus prevent tomato-associated illness outbreaks.

  13. Regime switching state-space models applied to psychological processes: handling missing data and making inferences

    NARCIS (Netherlands)

    Hamaker, E.L.; Grasman, R.P.P.P.

    2012-01-01

    Many psychological processes are characterized by recurrent shifts between distinct regimes or states. Examples that are considered in this paper are the switches between different states associated with premenstrual syndrome, hourly fluctuations in affect during a major depressive episode, and

  14. Decoding of human hand actions to handle missing limbs in Neuroprosthetics

    Directory of Open Access Journals (Sweden)

    Jovana eBelic

    2015-02-01

The only way we can interact with the world is through movements, and our primary interactions are via the hands, thus any loss of hand function has immediate impact on our quality of life. However, to date it has not been systematically assessed how coordination in the hand's joints affects everyday actions. This is important for two fundamental reasons. Firstly, to understand the representations and computations underlying motor control in-the-wild situations, and secondly to develop smarter controllers for prosthetic hands that have the same functionality as natural limbs. In this work we exploit the correlation structure of our hand and finger movements in daily life. The novelty of our idea is that instead of averaging variability out, we take the view that the structure of variability may contain valuable information about the task being performed. We asked seven subjects to interact in 17 daily-life situations, and quantified behaviour in a principled manner using CyberGlove body sensor networks that, after accurate calibration, track all major joints of the hand. Our key findings are: 1. We confirmed that hand control in daily-life tasks is very low-dimensional, with four to five dimensions being sufficient to explain 80-90% of the variability in the natural movement data. 2. We established a universally applicable measure of manipulative complexity that allowed us to measure and compare limb movements across tasks. We used Bayesian latent variable models to model the low-dimensional structure of finger joint angles in natural actions. 3. This allowed us to build a naïve classifier that within the first 1000 ms of action initiation (from a flat hand start configuration) predicted which of the 17 actions was going to be executed, enabling us to reliably predict the action intention from very short-time-scale initial data, further revealing the foreseeable nature of hand movements for control of neuroprosthetics and teleoperation purposes. 4. Using the Exp

  15. Decoding of human hand actions to handle missing limbs in neuroprosthetics.

    Science.gov (United States)

    Belić, Jovana J; Faisal, A Aldo

    2015-01-01

The only way we can interact with the world is through movements, and our primary interactions are via the hands, thus any loss of hand function has immediate impact on our quality of life. However, to date it has not been systematically assessed how coordination in the hand's joints affects everyday actions. This is important for two fundamental reasons. Firstly, to understand the representations and computations underlying motor control "in-the-wild" situations, and secondly to develop smarter controllers for prosthetic hands that have the same functionality as natural limbs. In this work we exploit the correlation structure of our hand and finger movements in daily life. The novelty of our idea is that instead of averaging variability out, we take the view that the structure of variability may contain valuable information about the task being performed. We asked seven subjects to interact in 17 daily-life situations, and quantified behavior in a principled manner using CyberGlove body sensor networks that, after accurate calibration, track all major joints of the hand. Our key findings are: (1) We confirmed that hand control in daily-life tasks is very low-dimensional, with four to five dimensions being sufficient to explain 80-90% of the variability in the natural movement data. (2) We established a universally applicable measure of manipulative complexity that allowed us to measure and compare limb movements across tasks. We used Bayesian latent variable models to model the low-dimensional structure of finger joint angles in natural actions. (3) This allowed us to build a naïve classifier that within the first 1000 ms of action initiation (from a flat hand start configuration) predicted which of the 17 actions was going to be executed, enabling us to reliably predict the action intention from very short-time-scale initial data, further revealing the foreseeable nature of hand movements for control of neuroprosthetics and teleoperation purposes. (4) Using the Expectation-Maximization algorithm on our latent variable model permitted us to reconstruct with high accuracy (neuroprosthetics.
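
The low-dimensionality claim in finding (1) is easy to probe with ordinary PCA on a samples-by-joints matrix. The sketch below uses synthetic data in place of CyberGlove recordings and checks how many components reach 80% explained variance.

```python
# PCA dimensionality check on (synthetic) hand joint-angle data.
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(2)
# Synthetic low-dimensional joint data: 4 latent synergies driving 22 joints.
latents = rng.standard_normal((5000, 4))
mixing = rng.standard_normal((4, 22))
angles = latents @ mixing + 0.3 * rng.standard_normal((5000, 22))

pca = PCA().fit(angles)
cum = np.cumsum(pca.explained_variance_ratio_)
n80 = int(np.searchsorted(cum, 0.80)) + 1
print(f"components for 80% variance: {n80}")
```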

  16. Bayes reconstruction of missing teeth

    DEFF Research Database (Denmark)

    Sporring, Jon; Jensen, Katrine Hommelhoff

    2008-01-01

We propose a method for restoring the surface of tooth crowns in a 3D model of a human denture, so that the pose and anatomical features of the tooth will work well for chewing. This is achieved by including information about the position and anatomy of the other teeth in the mouth. Our system ... contains two major parts: a statistical model of a selection of tooth shapes and a reconstruction of missing data. We use a training set consisting of 3D scans of dental cast models obtained with a laser scanner, and we have built a model of the shape variability of the teeth, their neighbors ... regularization of the log-likelihood estimate based on differential geometrical properties of teeth surfaces, and we show general conditions under which this may be considered a Bayes prior. Finally we use Bayes' method to propose the reconstruction of missing data, e.g. for finding the most probable shape ...

  17. HOSPITAL VARIATION IN MISSED NURSING CARE

    OpenAIRE

    Kalisch, Beatrice J.; Tschannen, Dana; Lee, Hyunhwa; Friese, Christopher R.

    2011-01-01

    Quality of nursing care across hospitals is variable, and this variation can result in poor patient outcomes. One aspect of quality nursing care is the amount of necessary care omitted. This paper reports on the extent and type of nursing care missed and the reasons for missed care. The MISSCARE Survey was administered to nursing staff (n = 4086) who provide direct patient care in ten acute care hospitals. Missed nursing care patterns, as well as reasons for missing care (labor resources, mat...

  18. Missed Immunization Opportunities among Children in Enugn

    African Journals Online (AJOL)

Vaccine-Specific Missed Opportunities:

Vaccine   Number who Missed   % of Total
BCG                       3          5.0
OPV0                      6         10.0
OPV1                     10         16.7
OPV2                     19         31.7
OPV3                     28         46.7
DPT1                     13         21.7
DPT2                     21         35.0
DPT3                     29         48.3
Measles                  42         70.0

... missed OPV0. At the other extreme, 42 (70 percent) missed measles immunization, followed by DPT3.

  19. Covariance differences of linearly representable sequences in Hilbert ...

    African Journals Online (AJOL)

The paper introduces the concepts of covariance differences of a sequence and establishes their relationship with the covariance function. One of the main results of this paper is a criterion for the linear representability of sequences in Hilbert spaces.

  20. Methods for Mediation Analysis with Missing Data

    Science.gov (United States)

    Zhang, Zhiyong; Wang, Lijuan

    2013-01-01

    Despite wide applications of both mediation models and missing data techniques, formal discussion of mediation analysis with missing data is still rare. We introduce and compare four approaches to dealing with missing data in mediation analysis including list wise deletion, pairwise deletion, multiple imputation (MI), and a two-stage maximum…

  1. Development of software for handling ship's pharmacy.

    Science.gov (United States)

    Nittari, Giulio; Peretti, Alessandro; Sibilio, Fabio; Ioannidis, Nicholas; Amenta, Francesco

    2016-01-01

Ships are required to carry a given amount of medicinal products and medications depending on the flag and the type of vessel. These medicines are stored in the so-called ship's "medicine chest" or, more properly, a ship pharmacy. Owing to the progress of medical sciences and to the increase in the mean age of seafarers employed on board ships, the number of pharmaceutical products and medical devices required by regulations to be carried on board ships is increasing. This may make handling of the ship's medicine chest a problem, primarily on large ships sailing on intercontinental routes, due to the difficulty in identifying the correspondence between medicines obtained abroad and those available on the national market. To minimise these problems a tool named Pharmacy Ship (acronym: PARSI) has been developed. The application PARSI is based on a database containing the information about medicines and medical devices required by different countries' regulations. In the first application the system was standardised to comply with the Italian regulations issued on 1 October 2015, which entered into force on 18 January 2016. Thanks to PARSI it was possible to standardise the inventory procedures, facilitate the work of maritime health authorities and make it easier for the crew, who are not professionals in the field, to handle the 'medicine chest' correctly by automating the procedures for medicines management. As far as we know there are no other similar tools available at the moment. The application of the software, as well as the automation of different activities currently carried out manually, will help manage (qualitatively and quantitatively) the ship's pharmacy. The system developed in this study has proved to be an effective tool which serves to guarantee the compliance of the ship pharmacy with regulations of the flag state in terms of medicinal products and medications. Sharing the system with the Telemedical Maritime Assistance Service may result in

  2. High-dimensional covariance estimation with high-dimensional data

    CERN Document Server

    Pourahmadi, Mohsen

    2013-01-01

Methods for estimating sparse and large covariance matrices. Covariance and correlation matrices play fundamental roles in every aspect of the analysis of multivariate data collected from a variety of fields including business and economics, health care, engineering, and environmental and physical sciences. High-Dimensional Covariance Estimation provides accessible and comprehensive coverage of the classical and modern approaches for estimating covariance matrices as well as their applications to the rapidly developing areas lying at the intersection of statistics and machine learning.

  3. Eddy covariance based methane flux in Sundarbans mangroves, India

    Indian Academy of Sciences (India)

Keywords: eddy covariance; mangrove forests; methane flux; Sundarbans. In order to quantify the methane flux in mangroves, an eddy covariance flux tower was recently erected in the largest unpolluted and undisturbed mangrove ecosystem in Sundarbans ...

  4. Earth Observation System Flight Dynamics System Covariance Realism

    Science.gov (United States)

    Zaidi, Waqar H.; Tracewell, David

    2016-01-01

This presentation applies a covariance realism technique to the National Aeronautics and Space Administration (NASA) Earth Observation System (EOS) Aqua and Aura spacecraft based on inferential statistics. The technique consists of three parts: calculation of definitive state estimates through orbit determination, calculation of covariance realism test statistics at each covariance propagation point, and proper assessment of those test statistics.
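
A common realism test statistic of this kind is the squared Mahalanobis distance of the estimation error under the propagated covariance, which is chi-square distributed when the covariance is realistic. The sketch below uses synthetic errors and an assumed 6x6 covariance, not EOS data.

```python
# Covariance realism check via Mahalanobis statistics.
import numpy as np
from scipy import stats

def realism_statistic(error, cov):
    """Squared Mahalanobis distance of the error under the covariance."""
    return error @ np.linalg.solve(cov, error)

rng = np.random.default_rng(3)
dim = 6                                  # position + velocity components
cov = np.diag([1e-2] * 3 + [1e-6] * 3)   # assumed propagated covariance
errors = rng.multivariate_normal(np.zeros(dim), cov, size=200)

vals = [realism_statistic(e, cov) for e in errors]
# If the covariance is realistic, vals follow chi-square(dim).
print(stats.kstest(vals, 'chi2', args=(dim,)))
```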

  5. Inferring Meta-covariates in Classification

    Science.gov (United States)

    Harris, Keith; McMillan, Lisa; Girolami, Mark

    This paper develops an alternative method for gene selection that combines model based clustering and binary classification. By averaging the covariates within the clusters obtained from model based clustering, we define “meta-covariates” and use them to build a probit regression model, thereby selecting clusters of similarly behaving genes, aiding interpretation. This simultaneous learning task is accomplished by an EM algorithm that optimises a single likelihood function which rewards good performance at both classification and clustering. We explore the performance of our methodology on a well known leukaemia dataset and use the Gene Ontology to interpret our results.
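
A simplified, two-step stand-in for the meta-covariate construction (the paper couples both steps in a single EM-optimized likelihood) clusters the genes, averages expression within each cluster, and fits a probit model on the averages; data and dimensions are synthetic.

```python
# Two-step meta-covariate sketch: cluster genes, average, fit probit.
import numpy as np
import statsmodels.api as sm
from sklearn.cluster import KMeans

rng = np.random.default_rng(4)
X = rng.standard_normal((100, 500))            # samples x genes (synthetic)
# Outcome driven by a block of co-behaving genes, plus noise.
y = (X[:, :5].mean(axis=1) + 0.5 * rng.standard_normal(100) > 0).astype(int)

k = 10
labels = KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(X.T)
# Meta-covariate j = average of the genes assigned to cluster j.
meta = np.column_stack([X[:, labels == j].mean(axis=1) for j in range(k)])

model = sm.Probit(y, sm.add_constant(meta)).fit(disp=0)
print(model.params)
```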

  6. Cosmology of a covariant Galilean field.

    Science.gov (United States)

    De Felice, Antonio; Tsujikawa, Shinji

    2010-09-10

    We study the cosmology of a covariant scalar field respecting a Galilean symmetry in flat space-time. We show the existence of a tracker solution that finally approaches a de Sitter fixed point responsible for cosmic acceleration today. The viable region of model parameters is clarified by deriving conditions under which ghosts and Laplacian instabilities of scalar and tensor perturbations are absent. The field equation of state exhibits a peculiar phantomlike behavior along the tracker, which allows a possibility to observationally distinguish the Galileon gravity from the cold dark matter model with a cosmological constant.

  7. Anomalies in covariant W-gravity

    Science.gov (United States)

    Ceresole, Anna T.; Frau, Marialuisa; McCarthy, Jim; Lerda, Alberto

    1991-08-01

We consider free scalar matter covariantly coupled to background W-gravity. Expanding to second order in the W-gravity fields, we study the appropriate anomalous Ward-Takahashi identities and find the counterterms which maintain diffeomorphism invariance and its W-analogue. We see that a redefinition of the vielbein transformation rule under W-diffeomorphism is required in order to cancel nonlocal contributions to the anomaly. Moreover, we explicitly write all gauge invariances at this order. Some consequences of these results for the chiral gauge quantization are discussed.

  8. Gallilei covariant quantum mechanics in electromagnetic fields

    Directory of Open Access Journals (Sweden)

    H. E. Wilhelm

    1985-01-01

A formulation of the quantum mechanics of charged particles in time-dependent electromagnetic fields is presented, in which both the Schroedinger equation and the wave equations for the electromagnetic potentials are Galilei covariant. It is shown that the Galilean relativity principle leads to the introduction of the electromagnetic substratum in which the matter and electromagnetic waves propagate. The electromagnetic substratum effects are quantitatively significant for quantum mechanics in reference frames in which the substratum velocity w is comparable in magnitude with the velocity of light c. The electromagnetic substratum velocity w occurs explicitly in the wave equations for the electromagnetic potentials but not in the Schroedinger equation.

  9. Minimal covariant observables identifying all pure states

    Energy Technology Data Exchange (ETDEWEB)

    Carmeli, Claudio, E-mail: claudio.carmeli@gmail.com [D.I.M.E., Università di Genova, Via Cadorna 2, I-17100 Savona (Italy); I.N.F.N., Sezione di Genova, Via Dodecaneso 33, I-16146 Genova (Italy); Heinosaari, Teiko, E-mail: teiko.heinosaari@utu.fi [Turku Centre for Quantum Physics, Department of Physics and Astronomy, University of Turku (Finland); Toigo, Alessandro, E-mail: alessandro.toigo@polimi.it [Dipartimento di Matematica, Politecnico di Milano, Piazza Leonardo da Vinci 32, I-20133 Milano (Italy); I.N.F.N., Sezione di Milano, Via Celoria 16, I-20133 Milano (Italy)

    2013-09-02

It has been recently shown by Heinosaari, Mazzarella and Wolf (2013) [1] that an observable that identifies all pure states of a d-dimensional quantum system has minimally 4d−4 outcomes or slightly less (the exact number depending on d). However, no simple construction of this type of minimal observable is known. We investigate covariant observables that identify all pure states and have the minimal number of outcomes. It is shown that the existence of this kind of observable depends on the dimension of the Hilbert space.

  10. Handling Protest Responses in Contingent Valuation Surveys.

    Science.gov (United States)

    Pennington, Mark; Gomes, Manuel; Donaldson, Cam

    2017-08-01

    Protest responses, whereby respondents refuse to state the value they place on the health gain, are commonly encountered in contingent valuation (CV) studies, and they tend to be excluded from analyses. Such an approach will be biased if protesters differ from non-protesters on characteristics that predict their responses. The Heckman selection model has been commonly used to adjust for protesters, but its underlying assumptions may be implausible in this context. We present a multiple imputation (MI) approach to appropriately address protest responses in CV studies, and compare it with the Heckman selection model. This study exploits data from the multinational EuroVaQ study, which surveyed respondents' willingness-to-pay (WTP) for a Quality Adjusted Life Year (QALY). Here, our simulation study assesses the relative performance of MI and Heckman selection models across different realistic settings grounded in the EuroVaQ study, including scenarios with different proportions of missing data and non-response mechanisms. We then illustrate the methods in the EuroVaQ study for estimating mean WTP for a QALY gain. We find that MI provides lower bias and mean squared error compared with the Heckman approach across all considered scenarios. The simulations suggest that the Heckman approach can lead to considerable underestimation or overestimation of mean WTP due to violations in the normality assumption, even after log-transforming the WTP responses. The case study illustrates that protesters are associated with a lower mean WTP for a QALY gain compared with non-protesters, but that the results differ according to method for handling protesters. MI is an appropriate method for addressing protest responses in CV studies.
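
A generic sketch of the MI side of this comparison: treat protest WTP responses as missing, impute repeatedly conditional on observed covariates, and pool the point estimate across imputations. sklearn's IterativeImputer stands in for the imputation model, and the data are simulated; this is not the EuroVaQ analysis.

```python
# Multiple imputation sketch for protest WTP responses.
import numpy as np
from sklearn.experimental import enable_iterative_imputer  # noqa: F401
from sklearn.impute import IterativeImputer

rng = np.random.default_rng(5)
n = 1000
income = rng.normal(30, 8, n)
health = rng.normal(0, 1, n)
wtp = 50 + 2.0 * income + 10 * health + rng.normal(0, 20, n)

# Protests are more likely among lower-income respondents (a MAR mechanism).
protest = rng.random(n) < 1 / (1 + np.exp((income - 28) / 2))
wtp_obs = np.where(protest, np.nan, wtp)

data = np.column_stack([income, health, wtp_obs])
estimates = []
for m in range(20):                      # 20 imputed data sets
    imp = IterativeImputer(sample_posterior=True, random_state=m)
    completed = imp.fit_transform(data)
    estimates.append(completed[:, 2].mean())

print("pooled mean WTP:", np.mean(estimates))
```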

  11. Covariance analysis for evaluating head trackers

    Science.gov (United States)

    Kang, Donghoon

    2017-10-01

    Existing methods for evaluating the performance of head trackers usually rely on publicly available face databases, which contain facial images and the ground truths of their corresponding head orientations. However, most of the existing publicly available face databases are constructed by assuming that a frontal head orientation can be determined by compelling the person under examination to look straight ahead at the camera on the first video frame. Since nobody can accurately direct one's head toward the camera, this assumption may be unrealistic. Rather than obtaining estimation errors, we present a method for computing the covariance of estimation error rotations to evaluate the reliability of head trackers. As an uncertainty measure of estimators, the Schatten 2-norm of a square root of error covariance (or the algebraic average of relative error angles) can be used. The merit of the proposed method is that it does not disturb the person under examination by asking him to direct his head toward certain directions. Experimental results using real data validate the usefulness of our method.
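
The proposed uncertainty measure can be sketched directly: collect error rotations, estimate their covariance in rotation-vector coordinates, and take the Schatten 2-norm of the covariance square root, which equals sqrt(trace C). The sketch uses scipy's Rotation on synthetic errors; the paper's exact estimator pipeline is not reproduced.

```python
# Covariance of error rotations and a scalar uncertainty measure.
import numpy as np
from scipy.spatial.transform import Rotation as R

rng = np.random.default_rng(6)
# Synthetic ground-truth head orientations and noisy estimates of them.
true = R.random(500, random_state=7)
noise = R.from_rotvec(0.05 * rng.standard_normal((500, 3)))  # ~3 deg errors
est = noise * true

# Error rotations expressed as rotation vectors (axis * angle, radians).
err = (est * true.inv()).as_rotvec()
C = np.cov(err, rowvar=False)            # 3x3 covariance of error rotations

# Schatten 2-norm of the covariance square root equals sqrt(trace(C)).
uncertainty = np.sqrt(np.trace(C))
print("uncertainty measure (rad):", uncertainty)
```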

  12. Stochastic precipitation generator with hidden state covariates

    Science.gov (United States)

    Kim, Yongku; Lee, GyuWon

    2017-08-01

    Time series of daily weather such as precipitation, minimum temperature and maximum temperature are commonly required for various fields. Stochastic weather generators constitute one of the techniques to produce synthetic daily weather. The recently introduced approach for stochastic weather generators is based on generalized linear modeling (GLM) with covariates to account for seasonality and teleconnections (e.g., with the El Niño). In general, stochastic weather generators tend to underestimate the observed interannual variance of seasonally aggregated variables. To reduce this overdispersion, we incorporated time series of seasonal dry/wet indicators in the GLM weather generator as covariates. These seasonal time series were local (or global) decodings obtained by a hidden Markov model of seasonal total precipitation and implemented in the weather generator. The proposed method is applied to time series of daily weather from Seoul, Korea and Pergamino, Argentina. This method provides a straightforward translation of the uncertainty of the seasonal forecast to the corresponding conditional daily weather statistics.
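
A bare-bones version of a GLM occurrence generator with a hidden-state covariate: logistic regression of the wet/dry indicator on seasonal harmonics plus a given 0/1 seasonal state series standing in for the HMM decoding. All data below are synthetic and the covariate set is illustrative.

```python
# GLM precipitation-occurrence generator with a seasonal state covariate.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(8)
days = np.arange(3650)
doy = days % 365
season_state = (rng.random(3650) < 0.5).astype(float)  # stand-in HMM decoding

X = np.column_stack([np.sin(2 * np.pi * doy / 365),
                     np.cos(2 * np.pi * doy / 365),
                     season_state])
# Synthetic occurrence series consistent with the covariates.
logit_p = -0.5 + 0.8 * X[:, 0] + 0.9 * X[:, 2]
wet = (rng.random(3650) < 1 / (1 + np.exp(-logit_p))).astype(float)

fit = sm.GLM(wet, sm.add_constant(X), family=sm.families.Binomial()).fit()
# Simulate synthetic wet/dry sequences from the fitted occurrence model.
p_hat = fit.predict(sm.add_constant(X))
synthetic = rng.random(3650) < p_hat
```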

  13. Covariates of alcohol consumption among career firefighters.

    Science.gov (United States)

    Piazza-Gardner, A K; Barry, A E; Chaney, E; Dodd, V; Weiler, R; Delisle, A

    2014-12-01

Little is known about rates of alcohol consumption in career firefighters. To assess the quantity and frequency of alcohol consumption among career firefighters and the covariates that influence consumption levels. A convenience sample of career firefighters completed an online, self-administered, health assessment survey. Hierarchical binary logistic regression assessed the ability of several covariates to predict binge drinking status. The majority of the sample (n = 160) consumed alcohol (89%), with approximately one-third (34%) having a drinking binge in the past 30 days. The regression model explained 13-18% of the variance in binge drinking status and correctly classified 71% of cases. Race (P firefighters were 1.08 times less likely to binge drink (95% CI: 0.87-0.97). Drinking levels observed in this study exceed those of the general adult population, including college students. Thus, it appears that firefighters represent an at-risk drinking group. Further investigations addressing reasons for alcohol use and abuse among firefighters are warranted. This study and subsequent research will provide information necessary for the development and testing of tailored interventions aimed at reducing firefighter alcohol consumption.

  14. MISS- Mice on International Space Station

    Science.gov (United States)

    Falcetti, G. C.; Schiller, P.

    2005-08-01

The use of rodents for scientific research to bridge the gap between cellular biology and human physiology is a new challenge within the history of successful developments of biological facilities. The ESA funded MISS Phase A/B study is aimed at developing a design concept for an animal holding facility able to support experimentation with mice on board the International Space Station (ISS). The MISS facility is composed of two main parts: 1. the MISS Rack, to perform scientific experiments on board the ISS; 2. the MISS Animals Transport Container (ATC), to transport animals from ground to orbit and vice versa. The MISS facility design takes into account guidelines and recommendations used for mice well-being in ground laboratories. A summary of the MISS Rack and MISS ATC design concept is hereafter provided.

  15. Windows 7 The Missing Manual

    CERN Document Server

    Pogue, David

    2010-01-01

    In early reviews, geeks raved about Windows 7. But if you're an ordinary mortal, learning what this new system is all about will be challenging. Fear not: David Pogue's Windows 7: The Missing Manual comes to the rescue. Like its predecessors, this book illuminates its subject with reader-friendly insight, plenty of wit, and hardnosed objectivity for beginners as well as veteran PC users. Windows 7 fixes many of Vista's most painful shortcomings. It's speedier, has fewer intrusive and nagging screens, and is more compatible with peripherals. Plus, Windows 7 introduces a slew of new features,

  16. What We’re Missing

    OpenAIRE

    Camille R. Whitney; Jing Liu

    2017-01-01

    For schools and teachers to help students develop knowledge and skills, students need to show up to class. Yet absenteeism is prevalent, especially in secondary schools. This study uses a rich data set tracking class attendance by day for over 50,000 middle and high school students from an urban district in academic years 2007–2008 through 2012–2013. Our results extend and modify the extant findings on absenteeism that have been based almost exclusively on full-day absenteeism, missing class-...

  17. Easily missed thoracolumbar spine fractures

    Energy Technology Data Exchange (ETDEWEB)

    Bernstein, Mark [NYU Langone Medical Center/Bellevue Hospital, 550 1st Avenue, IRM-234, New York, NY 10016 (United States)], E-mail: mark.bernstein@nyumc.org

    2010-04-15

    Thoracolumbar spine fractures are common and can be difficult to diagnose. Many of these fractures are associated with extraspinal injuries and are subtle on imaging further contributing to diagnostic delay or misdiagnosis. Missed fractures are associated with increased neurologic injury and resulting morbidity. Careful and thorough workup of the multitrauma patient with dedicated spinal imaging is necessary to identify these injuries. This article reviews the major thoracolumbar spine fractures and imaging findings with attention drawn to subtle and easily overlooked features of these injuries.

  18. HMSRP Hawaiian Monk Seal Handling Data

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — This data set contains records for all handling and measurement of Hawaiian monk seals since 1981. Live seals are handled and measured during a variety of events...

  19. Noisy covariance matrices and portfolio optimization II

    Science.gov (United States)

    Pafka, Szilárd; Kondor, Imre

    2003-03-01

    Recent studies inspired by results from random matrix theory (Galluccio et al.: Physica A 259 (1998) 449; Laloux et al.: Phys. Rev. Lett. 83 (1999) 1467; Risk 12 (3) (1999) 69; Plerou et al.: Phys. Rev. Lett. 83 (1999) 1471) found that covariance matrices determined from empirical financial time series appear to contain such a high amount of noise that their structure can essentially be regarded as random. This seems, however, to be in contradiction with the fundamental role played by covariance matrices in finance, which constitute the pillars of modern investment theory and have also gained industry-wide applications in risk management. Our paper is an attempt to resolve this embarrassing paradox. The key observation is that the effect of noise strongly depends on the ratio r= n/ T, where n is the size of the portfolio and T the length of the available time series. On the basis of numerical experiments and analytic results for some toy portfolio models we show that for relatively large values of r (e.g. 0.6) noise does, indeed, have the pronounced effect suggested by Galluccio et al. (1998), Laloux et al. (1999) and Plerou et al. (1999) and illustrated later by Laloux et al. (Int. J. Theor. Appl. Finance 3 (2000) 391), Plerou et al. (Phys. Rev. E, e-print cond-mat/0108023) and Rosenow et al. (Europhys. Lett., e-print cond-mat/0111537) in a portfolio optimization context, while for smaller r (around 0.2 or below), the error due to noise drops to acceptable levels. Since the length of available time series is for obvious reasons limited in any practical application, any bound imposed on the noise-induced error translates into a bound on the size of the portfolio. In a related set of experiments we find that the effect of noise depends also on whether the problem arises in asset allocation or in a risk measurement context: if covariance matrices are used simply for measuring the risk of portfolios with a fixed composition rather than as inputs to optimization, the
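
The r = n/T effect is easy to reproduce numerically: estimate a covariance from T observations of n uncorrelated assets, build the minimum-variance portfolio from the estimate, and compare its true risk with the optimum. The sketch below is a toy illustration; the ratio grows roughly like 1/sqrt(1-r).

```python
# Noise-induced error in covariance-based portfolio optimization vs r = n/T.
import numpy as np

rng = np.random.default_rng(9)

def risk_ratio(n, T):
    X = rng.standard_normal((T, n))          # true covariance = identity
    C = np.cov(X, rowvar=False)
    ones = np.ones(n)
    w = np.linalg.solve(C, ones)
    w /= w.sum()                             # estimated min-variance weights
    true_risk = np.sqrt(w @ w)               # true covariance is the identity
    optimal = np.sqrt(1.0 / n)               # equal weights are optimal
    return true_risk / optimal

for r in (0.1, 0.2, 0.6):
    n = 100
    print(f"r={r}: risk ratio ~ {risk_ratio(n, int(n / r)):.2f}")
```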

  20. What We’re Missing

    Directory of Open Access Journals (Sweden)

    Camille R. Whitney

    2017-04-01

For schools and teachers to help students develop knowledge and skills, students need to show up to class. Yet absenteeism is prevalent, especially in secondary schools. This study uses a rich data set tracking class attendance by day for over 50,000 middle and high school students from an urban district in academic years 2007–2008 through 2012–2013. Our results extend and modify the extant findings on absenteeism that have been based almost exclusively on full-day absenteeism, missing class-by-class absences. Notably, part-day absenteeism is responsible for as many classes missed as full-day absenteeism, raising chronic absenteeism from 9% to 24% of secondary-grades students. Incorporating part-day absences sharply increases the chronic absenteeism gap between underrepresented minority students and their peers. Both full- and part-day absenteeism show a discrete jump at the point of transition from middle school to high school, but full-day absenteeism then declines whereas part-day absenteeism remains high in Grades 10 and 11 and increases again in Grade 12. Whereas 55% of full-day absences are unexcused, 92% of part-day absences are unexcused. Absenteeism from individual classes varies considerably by time of day but less by class subject matter.

  1. Taking stock of coal handling

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    2009-03-15

Turnkey contracts have the potential to allow blue chip equipment manufacturers opportunities to improve the overall efficiency of stockyard management. Taim Weser has secured a contract to supply two stockyard machinery systems for a 3800 million euro expansion of the Repsol YPF Cartagena Refinery in Spain's Murcia region. ThyssenKrupp Fordertechnik (TKF) has handed over the coal handling plant at the 3 x 700 MW Jimah power station in Malaysia, following a similar turnkey contract undertaken for the Tanjung-Bin power plant project. 3 figs.

  2. Coal handling: a complex process

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    2002-02-01

    The market for coal handling equipment is large and varied. A great many companies compete for market share, offering myriad types of equipment, including stackers and reclaimers, stockyard systems, blending beds, storage systems, conveyors, shiploaders and unloaders and much more. Getting coal from the ground to the end user is a long and complicated process, and provides seemingly endless possibilities for companies with an eye for innovation. In this article, just a few of the many companies involved in this vast market report on recent developments and the current situation. 1 tab., 10 photos

  3. Feedstock storage, handling and processing

    Energy Technology Data Exchange (ETDEWEB)

    Egg, R.P.; Coble, C.G.; Engler, C.R. (Texas A and M Univ., College Station, TX (United States). Dept. of Agricultural Engineering); Lewis, D.H. (Texas A and M Univ., College Station, TX (United States). Dept. of Veterinary Microbiology and Parasitology)

    1993-01-01

This paper is a review of the technology and research covering components of a methane from biomass system between the field and the digester. It deals primarily with sorghum as a feedstock and focuses on research conducted by the Texas Agricultural Experiment Station. Subjects included in this paper are harvesting, hay storage, ensiling, materials handling, pumping and hydraulic characteristics, hydraulic conductivity, pressure/density relationship, and biological pretreatment. This paper is not a comprehensive design manual; however, design equations and coefficients for sorghum are presented, where available, along with references describing the development and application of design models. (author)

  4. 21 CFR 820.140 - Handling.

    Science.gov (United States)

    2010-04-01

    ... 21 Food and Drugs 8 2010-04-01 2010-04-01 false Handling. 820.140 Section 820.140 Food and Drugs FOOD AND DRUG ADMINISTRATION, DEPARTMENT OF HEALTH AND HUMAN SERVICES (CONTINUED) MEDICAL DEVICES QUALITY SYSTEM REGULATION Handling, Storage, Distribution, and Installation § 820.140 Handling. Each...

  5. Covariant Hyperbolization of Force-free Electrodynamics

    CERN Document Server

    Carrasco, Federico

    2016-01-01

Force-Free Electrodynamics (FFE) is a non-linear system of equations modeling the evolution of the electromagnetic field in the presence of a magnetically dominated relativistic plasma. This configuration arises in several astrophysical scenarios, which represent exciting laboratories to understand physics in extreme regimes. We show that this system, when restricted to the correct constraint submanifold, is symmetric hyperbolic. In numerical applications it is not feasible to keep the system in that submanifold, and so it is necessary to analyze its structure first in the tangent space of that submanifold and then in a whole neighborhood of it. As already shown by Pfeiffer, a direct (or naive) formulation of this system (in the whole tangent space) results in a weakly hyperbolic system of evolution equations for which well-posedness of the initial value formulation does not follow. Using the generalized symmetric hyperbolic formalism due to Geroch, we introduce here a covariant hyperbolization for the FFE system.

  6. Covariant perturbations in the gonihedric string model

    Science.gov (United States)

    Rojas, Efraín

    2017-11-01

We provide a covariant framework to study classically the stability of small perturbations on the so-called gonihedric string model by making precise use of variational techniques. The local action depends on the square root of the quadratic mean extrinsic curvature of the worldsheet swept out by the string, and is reparametrization invariant. A general expression for the worldsheet perturbations, guided by Jacobi equations without any early gauge fixing, is obtained. This is manifested through a set of highly coupled nonlinear partial differential equations where the perturbations are described by scalar fields, Φi, living in the worldsheet. This model contains, as a special limit, the linear model in the mean extrinsic curvature. In such a case the Jacobi equations specialize to a single wave-like equation for Φ.

  7. EMPIRE ULTIMATE EXPANSION: RESONANCES AND COVARIANCES.

    Energy Technology Data Exchange (ETDEWEB)

    HERMAN,M.; MUGHABGHAB, S.F.; OBLOZINSKY, P.; ROCHMAN, D.; PIGNI, M.T.; KAWANO, T.; CAPOTE, R.; ZERKIN, V.; TRKOV, A.; SIN, M.; CARSON, B.V.; WIENKE, H. CHO, Y.-S.

    2007-04-22

The EMPIRE code system is being extended to cover the resolved and unresolved resonance region employing proven methodology used for the production of new evaluations in the recent Atlas of Neutron Resonances. Other directions of EMPIRE expansion are uncertainties and correlations among them. These include covariances for cross sections as well as for model parameters. In this presentation we concentrate on the KALMAN method that has been applied in EMPIRE to the fast neutron range as well as to the resonance region. We also summarize the role of the EMPIRE code in the ENDF/B-VII.0 development. Finally, large-scale calculations and their impact on nuclear model parameters are discussed along with the exciting perspectives offered by parallel supercomputing.

  8. Covariant non-commutative space–time

    Directory of Open Access Journals (Sweden)

    Jonathan J. Heckman

    2015-05-01

We introduce a covariant non-commutative deformation of 3+1-dimensional conformal field theory. The deformation introduces a short-distance scale ℓp, and thus breaks scale invariance, but preserves all space–time isometries. The non-commutative algebra is defined on space–times with non-zero constant curvature, i.e. dS4 or AdS4. The construction makes essential use of the representation of CFT tensor operators as polynomials in an auxiliary polarization tensor. The polarization tensor takes an active part in the non-commutative algebra, which for dS4 takes the form of so(5,1), while for AdS4 it assembles into so(4,2). The structure of the non-commutative correlation functions hints that the deformed theory contains gravitational interactions and a Regge-like trajectory of higher spin excitations.

  9. A covariant approach to entropic dynamics

    Science.gov (United States)

    Ipek, Selman; Abedi, Mohammad; Caticha, Ariel

    2017-06-01

    Entropic Dynamics (ED) is a framework for constructing dynamical theories of inference using the tools of inductive reasoning. A central feature of the ED framework is the special focus placed on time. In [2] a global entropic time was used to derive a quantum theory of relativistic scalar fields. This theory, however, suffered from a lack of explicit or manifest Lorentz symmetry. In this paper we explore an alternative formulation in which the relativistic aspects of the theory are manifest. The approach we pursue here is inspired by the methods of Dirac, Kuchař, and Teitelboim in their development of covariant Hamiltonian approaches. The key ingredient here is the adoption of a local notion of entropic time, which allows compatibility with an arbitrary notion of simultaneity. However, in order to ensure that the evolution does not depend on the particular sequence of hypersurfaces, we must impose a set of constraints that guarantee a consistent evolution.

  10. A Dynamic Time Warping based covariance function for Gaussian Processes signature identification

    Science.gov (United States)

    Silversides, Katherine L.; Melkumyan, Arman

    2016-11-01

    Modelling stratiform deposits requires a detailed knowledge of the stratigraphic boundaries. In Banded Iron Formation (BIF) hosted ores of the Hamersley Group in Western Australia these boundaries are often identified using marker shales. Both Gaussian Processes (GP) and Dynamic Time Warping (DTW) have been previously proposed as methods to automatically identify marker shales in natural gamma logs. However, each method has different advantages and disadvantages. We propose a DTW based covariance function for the GP that combines the flexibility of the DTW with the probabilistic framework of the GP. The three methods are tested and compared on their ability to identify two natural gamma signatures from a Marra Mamba type iron ore deposit. These tests show that while all three methods can identify boundaries, the GP with the DTW covariance function combines and balances the strengths and weaknesses of the individual methods. This method identifies more positive signatures than the GP with the standard covariance function, and has a higher accuracy for identified signatures than the DTW. The combined method can handle larger variations in the signature without requiring multiple libraries, has a probabilistic output and does not require manual cut-off selections.
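
A minimal version of a DTW-based covariance function is k(a, b) = exp(-DTW(a, b)/l) with a plain dynamic-programming DTW, as sketched below. Note that exponentiated DTW kernels are not guaranteed positive semi-definite in general, so a jitter term is added before use in a GP; the paper's exact construction may differ.

```python
# DTW-based covariance function sketch for GP signature identification.
import numpy as np

def dtw(a, b):
    """Classic dynamic-programming DTW distance between two 1-D signals."""
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[n, m]

def dtw_kernel(signals, length_scale=10.0, jitter=1e-6):
    """Gram matrix k(a, b) = exp(-DTW(a, b) / length_scale) + jitter * I."""
    k = len(signals)
    K = np.empty((k, k))
    for i in range(k):
        for j in range(i, k):
            K[i, j] = K[j, i] = np.exp(-dtw(signals[i], signals[j])
                                       / length_scale)
    return K + jitter * np.eye(k)
```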

  11. HTML5 The Missing Manual

    CERN Document Server

    MacDonald, Matthew

    2011-01-01

    HTML5 is more than a markup language-it's a dozen independent web standards all rolled into one. Until now, all it's been missing is a manual. With this thorough, jargon-free guide, you'll learn how to build web apps that include video tools, dynamic drawings, geolocation, offline web apps, drag-and-drop, and many other features. HTML5 is the future of the Web, and with this book you'll reach it quickly. The important stuff you need to know: Structure web pages in a new way. Learn how HTML5 helps make web design tools and search engines work smarter.Add audio and video without plugins. Build

  12. Safety of Cargo Aircraft Handling Procedure

    Directory of Open Access Journals (Sweden)

    Daniel Hlavatý

    2017-07-01

The aim of this paper is to present ways to improve the safety management system during cargo aircraft handling. The first chapter is dedicated to general information about air cargo transportation. This includes the history and types of cargo aircraft handling, but also the means of handling. The second part is focused on a detailed description of cargo aircraft handling, including a description of activities that are performed before and after handling. The following part of this paper covers a theoretical interpretation of safety, safety indicators and legislative provisions related to the safety of cargo aircraft handling. The fourth part of this paper analyzes the fault trees of events which might occur during handling. The factors found by this analysis are compared with safety reports of FedEx. Based on the comparison, there is a proposal on how to improve the safety management in this transportation company.

  13. Transfer Area Mechanical Handling Calculation

    Energy Technology Data Exchange (ETDEWEB)

    B. Dianda

    2004-06-23

This calculation is intended to support the License Application (LA) submittal of December 2004, in accordance with the directive given by DOE correspondence received on the 27th of January 2004 entitled: ''Authorization for Bechtel SAIC Company L.L.C. to Include a Bare Fuel Handling Facility and Increased Aging Capacity in the License Application, Contract Number DE-AC28-01RW12101'' (Arthur, W.J., III 2004). This correspondence was appended by further correspondence received on the 19th of February 2004 entitled: ''Technical Direction to Bechtel SAIC Company L.L.C. for Surface Facility Improvements, Contract Number DE-AC28-01RW12101; TDL No. 04-024'' (BSC 2004a). These documents give the authorization for a Fuel Handling Facility to be included in the baseline. The purpose of this calculation is to establish preliminary bounding equipment envelopes and weights for the Fuel Handling Facility (FHF) transfer areas equipment. This calculation provides preliminary information only to support development of facility layouts and preliminary load calculations. The limitations of this preliminary calculation lie within the assumptions of section 5, as this calculation is part of an evolutionary design process. It is intended that this calculation be superseded as the design advances to reflect information necessary to support the License Application. The design choices outlined within this calculation represent a demonstration of feasibility and may or may not be included in the completed design. This calculation provides preliminary weight, dimensional envelope, and equipment position in building for the purposes of defining interface variables. This calculation identifies and sizes major equipment and assemblies that dictate overall equipment dimensions and facility interfaces. Sizing of components is based on the selection of commercially available products, where applicable. This is not a specific recommendation for the future use

  14. Missing binary data extraction challenges from Cochrane reviews in mental health and Campbell reviews with implications for empirical research.

    Science.gov (United States)

    Spineli, Loukia M

    2017-09-29

To report challenges encountered during the extraction process from Cochrane reviews in mental health and Campbell reviews and to indicate their implications for the empirical performance of different methods to handle missingness. We used a collection of meta-analyses on binary outcomes collated from a previous work on missing outcome data. To evaluate the accuracy of their extraction, we developed specific criteria pertaining to the reporting of missing outcome data in systematic reviews. Using the most popular methods to handle missing binary outcome data, we investigated the implications of the accuracy of the extracted meta-analysis on the random-effects meta-analysis results. Of 113 meta-analyses from Cochrane reviews, 60 (53%) were judged as "unclearly" extracted (ie, no information on the outcome of completers but available information on how missing participants were handled) and 42 (37%) as "unacceptably" extracted (ie, no information on the outcome of completers as well as no information on how missing participants were handled). For the remaining meta-analyses, it was judged that data were "acceptably" extracted (ie, information on the completers' outcome was provided for all trials). Overall, "unclear" extraction overestimated the magnitude of the summary odds ratio and the between-study variance and additionally inflated the uncertainty of both meta-analytical parameters. The only eligible Campbell review was judged as "unclear." Depending on the extent of missingness, the reporting quality of the systematic reviews can greatly affect the accuracy of the extracted meta-analyses and, by extension, the empirical performance of different methods to handle missingness.

  15. AFCI-2.0 Library of Neutron Cross Section Covariances

    Energy Technology Data Exchange (ETDEWEB)

    Herman, M.; Oblozinsky, P.; Mattoon, C.; Pigni, M.; Hoblit, S.; Mughabghab, S.F.; Sonzogni, A.; Talou, P.; Chadwick, M.B.; Hale, G.M.; Kahler, A.C.; Kawano, T.; Little, R.C.; Young, P.G.

    2011-06-26

    A neutron cross section covariance library has been under development through a BNL-LANL collaborative effort over the last three years. The primary purpose of the library is to provide covariances for the Advanced Fuel Cycle Initiative (AFCI) data adjustment project, which focuses on the needs of fast advanced burner reactors. The covariances refer to central values given in the 2006 release of the U.S. neutron evaluated library ENDF/B-VII. The preliminary version (AFCI-2.0beta) was completed in October 2010 and made available to users for comments. In the final 2.0 release, covariances for a few materials were updated; in particular, new LANL evaluations for {sup 238,240}Pu and {sup 241}Am were adopted. BNL was responsible for covariances for structural materials and fission products, management of the library, and coordination of the work, while LANL was in charge of covariances for light nuclei and for actinides.

  16. Evaluation of Multiple Imputation in Missing Data Analysis: An Application on Repeated Measurement Data in Animal Science

    Directory of Open Access Journals (Sweden)

    Gazel Ser

    2015-12-01

    Full Text Available The purpose of this study was to evaluate the performance of the multiple imputation method, from the perspective of a general linear mixed model, when the missing observation structure is at random or completely at random. The study data consisted of 77 Norduz ram lambs at 7 months of age. After slaughter, pH values measured at five different time points were taken as the dependent variable. In addition, hot carcass weight, muscle glycogen level and fasting duration were included as independent variables in the model. Starting from the dependent variable without missing observations, two missing observation structures, Missing Completely at Random (MCAR) and Missing at Random (MAR), were created by deleting observations at certain ratios (10% and 25%). Complete data sets were then obtained from the data sets with missing observations using multiple imputation (MI). The results obtained by applying the general linear mixed model to the data sets completed with MI were compared to the results for the complete data. In the mixed models applied to the complete data and the MI data sets, the same covariance structures were selected, and the parameter estimates and their standard errors were close to those from the complete data. In conclusion, this study showed that reliable results are obtained from the mixed model when MI is chosen as the imputation method, for both missing observation structures and both missingness rates.
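
    A minimal MI sketch in the spirit of the study: impose MCAR missingness, build several imputed datasets, fit a model to each, and pool with Rubin's rules. The data are synthetic stand-ins for the lamb variables, and plain OLS stands in for the paper's linear mixed model.

    ```python
    import numpy as np
    from sklearn.experimental import enable_iterative_imputer  # noqa: F401
    from sklearn.impute import IterativeImputer
    import statsmodels.api as sm

    rng = np.random.default_rng(1)
    n = 200
    x1 = rng.normal(size=n)                # e.g. hot carcass weight (standardized)
    x2 = 0.5 * x1 + rng.normal(size=n)     # e.g. glycogen level, correlated with x1
    y = 1.0 + 0.8 * x1 - 0.3 * x2 + rng.normal(scale=0.5, size=n)

    X = np.column_stack([x1, x2, y])
    X_miss = X.copy()
    X_miss[rng.random(n) < 0.25, 1] = np.nan   # 25% MCAR missingness in x2

    m = 10
    betas, variances = [], []
    for k in range(m):
        imp = IterativeImputer(sample_posterior=True, random_state=k)
        Xk = imp.fit_transform(X_miss)
        fit = sm.OLS(Xk[:, 2], sm.add_constant(Xk[:, :2])).fit()
        betas.append(fit.params)
        variances.append(fit.bse ** 2)

    betas, variances = np.array(betas), np.array(variances)
    pooled = betas.mean(axis=0)
    within = variances.mean(axis=0)
    between = betas.var(axis=0, ddof=1)
    total_se = np.sqrt(within + (1 + 1/m) * between)   # Rubin's rules
    print("pooled coefficients:", pooled, "pooled SEs:", total_se)
    ```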

  17. Intelligent packaging and material handling

    Science.gov (United States)

    Hall, Ernest L.; Shell, Richard; Slutzky, Gale D.

    1991-02-01

    The problem of palletizing (stacking on a pallet) randomly arriving parcels of mixed size and content is an important task in most distribution warehouses. Today this task requires human interaction for a solution; however, several attempts have recently been made to automate the solution. The purpose of this paper is to present an overview of the problem, an expert system approach, and the key subproblems which have been identified as necessary for a solution. The concepts of space filling and emptying as encountered in warehousing are briefly described, along with brief descriptions of two generations of a robotic system for mixed parcel palletizing. The results with these test systems indicate that automatic parcel handling at speeds comparable to humans is feasible; however, further work is required to obtain a robust solution.

  18. Covariant Spectator Theory: Foundations and Applications A Mini-Review of the Covariant Spectator Theory

    Energy Technology Data Exchange (ETDEWEB)

    Alfred Stadler, Franz Gross

    2010-10-01

    We provide a short overview of the Covariant Spectator Theory and its applications. The basic ideas are introduced through the example of a {phi}{sup 4}-type theory. High-precision models of the two-nucleon interaction are presented and the results of their use in calculations of properties of the two- and three-nucleon systems are discussed. A short summary of applications of this framework to other few-body systems is also presented.

  19. Data Selection for Within-Class Covariance Estimation

    Science.gov (United States)

    2016-09-08

    Training data selection for within-class covariance matrix estimation in real-world applications. Index terms: channel compensation, i-vectors, within-class covariance. Utterance features were normalized to have zero mean and unit variance, and the utterance feature vectors were converted to i-vectors using a 2048-order Universal Background Model (UBM) and a rank-600 total variability (T) matrix. The estimated within-class covariance matrix was then computed via the estimator of [1].
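
    The estimator's equation did not survive extraction, but the usual within-class covariance computation it references can be sketched as follows; the sizes (600-dimensional i-vectors, 20 speakers, 8 utterances each) are illustrative assumptions.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    dim, n_speakers, utts_per_spk = 600, 20, 8
    ivectors = rng.normal(size=(n_speakers, utts_per_spk, dim))

    # W = (1/N) * sum over speakers s and utterances i of
    #     (x_si - mu_s)(x_si - mu_s)^T, with mu_s the speaker mean
    means = ivectors.mean(axis=1, keepdims=True)
    centered = (ivectors - means).reshape(-1, dim)
    W = centered.T @ centered / centered.shape[0]
    print(W.shape)  # (600, 600)
    ```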

  20. Spatial Pyramid Covariance based Compact Video Code for Robust Face Retrieval in TV-series.

    Science.gov (United States)

    Li, Yan; Wang, Ruiping; Cui, Zhen; Shan, Shiguang; Chen, Xilin

    2016-10-10

    We address the problem of face video retrieval in TV-series, which searches video clips based on the presence of a specific character, given one face track of that character. This is tremendously challenging because, on one hand, faces in TV-series are captured in largely uncontrolled conditions with complex appearance variations, and on the other hand, the retrieval task typically needs an efficient representation with low time and space complexity. To handle this problem, we propose a compact and discriminative representation for the huge body of video data, named Compact Video Code (CVC). Our method first models the face track by its sample (i.e., frame) covariance matrix to capture the video data variations in a statistical manner. To incorporate discriminative information and obtain a more compact video signature suitable for retrieval, the high-dimensional covariance representation is further encoded as a much lower-dimensional binary vector, which finally yields the proposed CVC. Specifically, each bit of the code, i.e., each dimension of the binary vector, is produced via supervised learning in a max margin framework, which aims to strike a balance between the discriminability and stability of the code. Besides, we further extend the descriptive granularity of the covariance matrix from the traditional pixel level to the more general patch level, and proceed to propose a novel hierarchical video representation named Spatial Pyramid Covariance (SPC) along with a fast calculation method. Face retrieval experiments on two challenging TV-series video databases, i.e., the Big Bang Theory and Prison Break, demonstrate the competitiveness of the proposed CVC over state-of-the-art retrieval methods. In addition, as a general video matching algorithm, CVC is also evaluated in the traditional video face recognition task on a standard Internet database, i.e., YouTube Celebrities, showing quite promising performance with an extremely compact code of only 128 bits.
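
    A simplified sketch of the covariance-plus-binarization pipeline described above; random projections stand in for the paper's max-margin learned bits, and all sizes are assumptions for illustration.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    track = rng.normal(size=(120, 64))            # 120 frames, 64-dim features

    C = np.cov(track, rowvar=False)               # sample covariance of the track
    iu = np.triu_indices_from(C)
    signature = C[iu]                             # vectorized upper triangle

    n_bits = 128
    P = rng.normal(size=(n_bits, signature.size)) # stand-in for learned hyperplanes
    code = (P @ signature > 0).astype(np.uint8)   # 128-bit binary video code

    def hamming(a, b):
        return int(np.sum(a != b))                # retrieval compares codes cheaply
    ```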

  1. “The Missing Link”

    Directory of Open Access Journals (Sweden)

    Kamesha Spates

    2012-07-01

    Full Text Available Critical examinations of epistemology argue that White men have established the guidelines for scientific knowledge. Because other groups were never allotted the opportunity to contribute to the immense knowledge base, the Western scientific knowledge base remains deficient. The author calls for a more inclusive knowledge base that includes the voices of Black women in the field of psychology. This inclusion is critical to better equip mental health clinicians to handle the unique needs of this population. This article offers a historical analysis of the intricate relationship between race and scientific knowledge. The author examines how the close-knit relationship between race and science has directly influenced the existing scientific knowledge gaps surrounding Black women in the field of psychology and calls for literature to offer a more comprehensive view of Black women’s experiences.

  2. Selection and evolution of causally covarying traits.

    Science.gov (United States)

    Morrissey, Michael B

    2014-06-01

    When traits cause variation in fitness, the distribution of phenotype, weighted by fitness, necessarily changes. The degree to which traits cause fitness variation is therefore of central importance to evolutionary biology. Multivariate selection gradients are the main quantity used to describe components of trait-fitness covariation, but they quantify the direct effects of traits on (relative) fitness, which are not necessarily the total effects of traits on fitness. Despite considerable use in evolutionary ecology, path analytic characterizations of the total effects of traits on fitness have not been formally incorporated into quantitative genetic theory. By formally defining "extended" selection gradients, which are the total effects of traits on fitness, as opposed to the existing definition of selection gradients, a more intuitive scheme for characterizing selection is obtained. Extended selection gradients are distinct quantities, differing from the standard definition of selection gradients not only in the statistical means by which they may be assessed and the assumptions required for their estimation from observational data, but also in their fundamental biological meaning. Like direct selection gradients, extended selection gradients can be combined with genetic inference of multivariate phenotypic variation to provide quantitative prediction of microevolutionary trajectories. © 2014 The Author(s). Evolution © 2014 The Society for the Study of Evolution.

  3. General Covariance from the Quantum Renormalization Group

    CERN Document Server

    Shyam, Vasudev

    2016-01-01

    The Quantum renormalization group (QRG) is a realisation of holography through a coarse graining prescription that maps the beta functions of a quantum field theory thought to live on the `boundary' of some space to holographic actions in the `bulk' of this space. A consistency condition will be proposed that translates into general covariance of the gravitational theory in the $D + 1$ dimensional bulk. This emerges from the application of the QRG on a planar matrix field theory living on the $D$ dimensional boundary. This will be a particular form of the Wess--Zumino consistency condition that the generating functional of the boundary theory needs to satisfy. In the bulk, this condition forces the Poisson bracket algebra of the scalar and vector constraints of the dual gravitational theory to close in a very specific manner, namely, the manner in which the corresponding constraints of general relativity do. A number of features of the gravitational theory will be fixed as a consequence of this form of the Po...

  4. Bayesian Inference for Multivariate Meta-regression with a Partially Observed Within-Study Sample Covariance Matrix

    Science.gov (United States)

    Yao, Hui; Kim, Sungduk; Chen, Ming-Hui; Ibrahim, Joseph G.; Shah, Arvind K.; Lin, Jianxin

    2015-01-01

    Multivariate meta-regression models are commonly used in settings where the response variable is naturally multi-dimensional. Such settings are common in cardiovascular and diabetes studies where the goal is to study cholesterol levels once a certain medication is given. In this setting, the natural multivariate endpoint is Low Density Lipoprotein Cholesterol (LDL-C), High Density Lipoprotein Cholesterol (HDL-C), and Triglycerides (TG). In this paper, we examine study level (aggregate) multivariate meta-data from 26 Merck sponsored double-blind, randomized, active or placebo-controlled clinical trials on adult patients with primary hypercholesterolemia. Our goal is to develop a methodology for carrying out Bayesian inference for multivariate meta-regression models with study level data when the within-study sample covariance matrix S for the multivariate response data is partially observed. Specifically, the proposed methodology is based on postulating a multivariate random effects regression model with an unknown within-study covariance matrix Σ in which we treat the within-study sample correlations as missing data, the standard deviations of the within-study sample covariance matrix S are assumed observed, and given Σ, S follows a Wishart distribution. Thus, we treat the off-diagonal elements of S as missing data, and these missing elements are sampled from the appropriate full conditional distribution in a Markov chain Monte Carlo (MCMC) sampling scheme via a novel transformation based on partial correlations. We further propose several structures (models) for Σ, which allow for borrowing strength across different treatment arms and trials. The proposed methodology is assessed using simulated as well as real data, and the results are shown to be quite promising. PMID:26257452

  5. PhyloPars: estimation of missing parameter values using phylogeny.

    Science.gov (United States)

    Bruggeman, Jorn; Heringa, Jaap; Brandt, Bernd W

    2009-07-01

    A wealth of information on metabolic parameters of a species can be inferred from observations on species that are phylogenetically related. Phylogeny-based information can complement direct empirical evidence, and is particularly valuable if experiments on the species of interest are not feasible. The PhyloPars web server provides a statistically consistent method that combines an incomplete set of empirical observations with the species phylogeny to produce a complete set of parameter estimates for all species. It builds upon a state-of-the-art evolutionary model, extended with the ability to handle missing data. The resulting approach makes optimal use of all available information to produce estimates that can be an order of magnitude more accurate than ad-hoc alternatives. Uploading a phylogeny and incomplete feature matrix suffices to obtain estimates of all missing values, along with a measure of certainty. Real-time cross-validation provides further insight in the accuracy and bias expected for estimated values. The server allows for easy, efficient estimation of metabolic parameters, which can benefit a wide range of fields including systems biology and ecology. PhyloPars is available at: http://www.ibi.vu.nl/programs/phylopars/.

  6. Investigating the treatment of missing data in an Olympiad-type test – the case of the selection validity in the South African Mathematics Olympiad

    Directory of Open Access Journals (Sweden)

    Caroline Long

    2016-05-01

    Full Text Available The purpose of the South African Mathematics Olympiad is to generate interest in mathematics and to identify the most talented mathematical minds. Our focus is on how the handling of missing data affects the selection of the ‘best’ contestants. Two approaches to handling missing data, applying the Rasch model, are described. The issue of guessing is investigated through a tailored analysis. We present two microanalyses to illustrate how missing data may impact selection; the first investigates groups of contestants that may miss selection under particular conditions; the second focuses on two contestants, each of whom answered 14 items correctly. This comparison raises questions about the proportion of correct to incorrect answers. Recommendations are made for future scoring of the test, which include reconsideration of negative marking and weighting, as well as considering the inclusion of 150 or 200 contestants, as opposed to 100, for participation in the final round.

  7. Methods to Minimize Zero-Missing Phenomenon

    DEFF Research Database (Denmark)

    da Silva, Filipe Miguel Faria; Bak, Claus Leth; Gudmundsdottir, Unnur Stella

    2010-01-01

    With the increasing use of high-voltage AC cables at transmission levels, phenomena such as current zero-missing start to appear more often in transmission systems. Zero-missing phenomenon can occur when energizing cable lines with shunt reactors. This may considerably delay the opening of the ci...

  8. Determinants Of Missed Opportunities For Immunization Among ...

    African Journals Online (AJOL)

    Missed opportunities in immunization had been a global public health obstacle to the attainment of the Millennium Development Goal of reducing child mortality by two-thirds by 2015. Studies have also shown that missed opportunities in both routine and supplementary immunization contribute significantly to the low ...

  9. How efficient is estimation with missing data?

    DEFF Research Database (Denmark)

    Karadogan, Seliz; Marchegiani, Letizia; Hansen, Lars Kai

    2011-01-01

    In this paper, we present a new evaluation approach for missing data techniques (MDTs) in which their efficiency is investigated using the listwise deletion method as a reference. We experiment on classification problems and calculate misclassification rates (MR) for different missing data percent...
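
    A sketch of the evaluation idea: compare the misclassification rate of a missing data technique (here, mean imputation) against listwise deletion as the reference. The dataset, missingness rate and classifier are illustrative choices, not the paper's setup.

    ```python
    import numpy as np
    from sklearn.datasets import make_classification
    from sklearn.impute import SimpleImputer
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import cross_val_score

    rng = np.random.default_rng(0)
    X, y = make_classification(n_samples=500, n_features=10, random_state=0)
    X_miss = X.copy()
    X_miss[rng.random(X.shape) < 0.15] = np.nan     # 15% MCAR missingness

    # listwise deletion: keep only complete rows
    complete = ~np.isnan(X_miss).any(axis=1)
    mr_listwise = 1 - cross_val_score(LogisticRegression(max_iter=1000),
                                      X_miss[complete], y[complete], cv=5).mean()

    # mean imputation: keep all rows
    X_imp = SimpleImputer(strategy="mean").fit_transform(X_miss)
    mr_imputed = 1 - cross_val_score(LogisticRegression(max_iter=1000),
                                     X_imp, y, cv=5).mean()
    print(f"misclassification: listwise {mr_listwise:.3f}, imputed {mr_imputed:.3f}")
    ```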

  10. National Center for Missing and Exploited Children

    Science.gov (United States)

    Have you experienced the tragedy of a missing child? We're here to help.

  11. A transactional model for automatic exception handling

    OpenAIRE

    Cabral, Bruno Miguel Brás

    2009-01-01

    Doctoral thesis in Informatics Engineering presented to the Faculty of Sciences and Technology of the University of Coimbra. Exception handling mechanisms have been around for more than 30 years. Although modern exception systems are not very different from the early models, the large majority of modern programming languages rely on exception handling constructs for dealing with errors and abnormal situations. Exceptions have several advantages over other error handling mechanisms, such as the return o...

  12. Enclosure for handling high activity materials

    Energy Technology Data Exchange (ETDEWEB)

    Jimeno de Osso, F.

    1977-07-01

    One of the most important problems encountered at laboratories producing and handling radioisotopes is that of designing, building and operating enclosures suitable for the safe handling of active substances. With this purpose in mind, an enclosure has been designed and built for handling moderately high activities behind shielding made of 150 mm thick lead. In this report a description is given of those aspects that may be of interest to people working in this field. (Author)

  13. Scheduling of outbound luggage handling at airports

    DEFF Research Database (Denmark)

    Barth, Torben C.; Pisinger, David

    2012-01-01

    This article considers the outbound luggage handling problem at airports. The problem is to assign handling facilities to outbound flights and to decide on the handling start time. This dynamic, near real-time assignment problem is part of the daily airport operations. Quality, efficiency… Another solution method is a decomposition approach: the problem is divided into different subproblems and solved in iterative steps. The different solution approaches are tested on real-world data from Frankfurt Airport.
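
    Purely as an illustration of the assignment flavor of this problem, a greedy heuristic might look as follows; the facility capacities, fixed handling duration and dispatch rule are all assumptions here, not the article's optimization models.

    ```python
    from dataclasses import dataclass

    @dataclass
    class Flight:
        ident: str
        departure_min: int   # minutes from now
        bags: int

    HANDLING_MINUTES = 45                 # assumed fixed handling duration
    facilities = {"F1": 300, "F2": 200}   # assumed remaining bag capacity

    def assign(flights):
        plan = []
        for f in sorted(flights, key=lambda f: f.departure_min):
            start = f.departure_min - HANDLING_MINUTES
            # pick the feasible facility with the most spare capacity
            best = max((fac for fac, cap in facilities.items() if cap >= f.bags),
                       key=lambda fac: facilities[fac], default=None)
            if best is None or start < 0:
                plan.append((f.ident, "unassigned", None))
                continue
            facilities[best] -= f.bags
            plan.append((f.ident, best, start))
        return plan

    print(assign([Flight("LH400", 120, 180), Flight("BA901", 90, 150)]))
    ```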

  14. The ATLAS Missing ET trigger

    CERN Document Server

    Beauchemin, P; The ATLAS collaboration

    2010-01-01

    Over the last few months, the ATLAS detector collected 900 GeV LHC collision events which allowed for the study of the performance of the ATLAS Trigger and Data Acquisition system (TDAQ). With the 7 TeV collision data collected recently, the performance studies of the trigger system are critical for a successful physics program. In particular, a large spectrum of physics results will rely on the capacity of the ATLAS TDAQ system to collect events based on the estimate of the missing transverse energy (MET) contained in each event. The MET trigger would be, for example, the primary trigger to be used in new physics searches for processes involving new weakly interacting particles, which could account for the astronomically observed dark matter. In addition to discovery prospects, the MET trigger can also be used in combination with other triggers to control the rate of signatures involving low energy objects. For example, the MET trigger is necessary in order to measure non-boosted W in the tau channel. Finally...

  15. Handling in the stockyard: what price responsibility?

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    1997-08-01

    Stockyard owners are trying to minimise manpower and maximise equipment capability with any purchase of handling equipment, whilst maintaining a safe and dependable system. In recent years, stockyards have gained importance, due to the demand for greater tonnages. The article looks at new ways by which stockyard owners can hand over responsibility for the facility's handling operation. It goes on to describe Phase II of an impressive coal handling and storage system at the Portsines terminal in Portugal, and other large schemes to enlarge operations for handling and unloading coal, iron ore, lead, zinc and limestone from rail cars. 5 photos.

  16. Cell handling using microstructured membranes

    Science.gov (United States)

    Irimia, Daniel

    2013-01-01

    Gentle and precise handling of cell suspensions is essential for scientific research and clinical diagnostic applications. Although different techniques for cell analysis at the micro-scale have been proposed, many still require that preliminary sample preparation steps be performed off the chip. Here we present a microstructured membrane as a new microfluidic design concept, enabling the implementation of common sample preparation procedures for suspensions of eukaryotic cells in lab-on-a-chip devices. We demonstrate the novel capabilities for sample preparation procedures by the implementation of metered sampling of nanoliter volumes of whole blood, concentration increase up to three orders of magnitude of sparse cell suspension, and circumferentially uniform, sequential exposure of cells to reagents. We implemented these functions by using microstructured membranes that are pneumatically actuated and allowed to reversibly decouple the flow of fluids and the displacement of eukaryotic cells in suspensions. Furthermore, by integrating multiple structures on the same membrane, complex sequential procedures are possible using a limited number of control steps. PMID:16511616

  17. Imposed quasi-normality in covariance structure analysis

    NARCIS (Netherlands)

    Koning, Ruud H.; Neudecker, H.; Wansbeek, T.

    1993-01-01

    In the analysis of covariance structures, the distance between an observed covariance matrix S of order k × k and C(θ) ≡ E(S) is minimized by searching over the θ-space. The criterion leading to a best asymptotically normal (BAN) estimator of θ is found by minimizing the difference between vec S and

  18. Empirical Performance of Covariates in Education Observational Studies

    Science.gov (United States)

    Wong, Vivian C.; Valentine, Jeffrey C.; Miller-Bains, Kate

    2017-01-01

    This article summarizes results from 12 empirical evaluations of observational methods in education contexts. We look at the performance of three common covariate-types in observational studies where the outcome is a standardized reading or math test. They are: pretest measures, local geographic matching, and rich covariate sets with a strong…

  19. A three domain covariance framework for EEG/MEG data

    NARCIS (Netherlands)

    Ros, B.P.; Bijma, F.; de Gunst, M.C.M.; de Munck, J.C.

    2015-01-01

    In this paper we introduce a covariance framework for the analysis of single subject EEG and MEG data that takes into account observed temporal stationarity on small time scales and trial-to-trial variations. We formulate a model for the covariance matrix, which is a Kronecker product of three
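
    The truncated sentence above refers to a covariance built as a Kronecker product of three factors; a toy sketch of that structure, with assumed block sizes standing in for the spatial, temporal and trial dimensions, is:

    ```python
    import numpy as np

    def random_spd(k, rng):
        a = rng.normal(size=(k, k))
        return a @ a.T + k * np.eye(k)      # symmetric positive definite

    rng = np.random.default_rng(0)
    S_space, S_time, S_trial = random_spd(5, rng), random_spd(4, rng), random_spd(3, rng)

    # full covariance over 5*4*3 = 60 dimensions, but parametrized by
    # 5x5 + 4x4 + 3x3 blocks instead of one dense 60x60 matrix
    Sigma = np.kron(S_space, np.kron(S_time, S_trial))
    print(Sigma.shape)   # (60, 60)
    ```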

  20. Electron localization functions and local measures of the covariance

    Indian Academy of Sciences (India)

    The electron localization measure proposed by Becke and Edgecombe is shown to be related to the covariance of the electron pair distribution. Just as with the electron localization function, the local covariance does not seem to be, in and of itself, a useful quantity for elucidating shell structure. A function of the local ...

  1. Using transformation algorithms to estimate (co)variance ...

    African Journals Online (AJOL)

    ... to multiple traits by the use of canonical transformations. A computing strategy is developed for use on large data sets employing two different REML algorithms for the estimation of (co)variance components. Results from a simulation study indicate that (co)variance components can be estimated efficiently at a low cost on ...

  2. Validity of covariance models for the analysis of geographical variation

    DEFF Research Database (Denmark)

    Guillot, Gilles; Schilling, Rene L.; Porcu, Emilio

    2014-01-01

    1. Due to the availability of large molecular data-sets, covariance models are increasingly used to describe the structure of genetic variation as an alternative to more heavily parametrised biological models. 2. We focus here on a class of parametric covariance models that received sustained...

  3. Propensity score matching and unmeasured covariate imbalance: A simulation study

    NARCIS (Netherlands)

    Ali, M. Sanni|info:eu-repo/dai/nl/345709497; Groenwold, Rolf H.H.; Belitser, Svetlana V.; Hoes, Arno W.; De Boer, A.|info:eu-repo/dai/nl/075097346; Klungel, Olaf H.|info:eu-repo/dai/nl/181447649

    2014-01-01

    Background: Selecting covariates for adjustment or inclusion in propensity score (PS) analysis is a trade-off between reducing confounding bias and a risk of amplifying residual bias by unmeasured confounders. Objectives: To assess the covariate balancing properties of PS matching with respect to
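
    A compact sketch of the PS workflow the study examines: fit a propensity score, match 1:1 on it, and compare standardized mean differences (SMDs) before and after matching. The data-generating process is synthetic and the greedy matcher is a simplification.

    ```python
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(0)
    n = 2000
    X = rng.normal(size=(n, 3))                  # measured covariates
    p_treat = 1 / (1 + np.exp(-(X @ np.array([0.8, 0.5, 0.0]))))
    treat = rng.random(n) < p_treat
    ps = LogisticRegression().fit(X, treat).predict_proba(X)[:, 1]

    # greedy 1:1 nearest-neighbor matching on the propensity score
    controls = np.where(~treat)[0].tolist()
    pairs = []
    for t in np.where(treat)[0]:
        if not controls:
            break
        j = min(controls, key=lambda c: abs(ps[c] - ps[t]))
        controls.remove(j)
        pairs.append((t, j))
    ti, ci = map(np.array, zip(*pairs))

    def smd(a, b):
        return (a.mean(0) - b.mean(0)) / np.sqrt((a.var(0) + b.var(0)) / 2)

    print("SMD before:", smd(X[treat], X[~treat]))
    print("SMD after: ", smd(X[ti], X[ci]))
    ```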

  4. Sparse subspace clustering for data with missing entries and high-rank matrix completion.

    Science.gov (United States)

    Fan, Jicong; Chow, Tommy W S

    2017-09-01

    Many methods have recently been proposed for subspace clustering, but they are often unable to handle incomplete data because of missing entries. Using matrix completion methods to recover missing entries is a common way to solve the problem. Conventional matrix completion methods require that the matrix should be of low-rank intrinsically, but most matrices are of high-rank or even full-rank in practice, especially when the number of subspaces is large. In this paper, a new method called Sparse Representation with Missing Entries and Matrix Completion is proposed to solve the problems of incomplete-data subspace clustering and high-rank matrix completion. The proposed algorithm alternately computes the matrix of sparse representation coefficients and recovers the missing entries of a data matrix. The proposed algorithm recovers missing entries through minimizing the representation coefficients, representation errors, and matrix rank. Thorough experimental study and comparative analysis based on synthetic data and natural images were conducted. The presented results demonstrate that the proposed algorithm is more effective in subspace clustering and matrix completion compared with other existing methods. Copyright © 2017 Elsevier Ltd. All rights reserved.
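
    A toy sketch of the alternation described above: compute sparse self-expressive coefficients on the current data estimate, then refill the missing entries from the reconstruction. Lasso stands in for the paper's sparse step, and the rank penalty is omitted, so this is an illustration of the idea rather than the proposed algorithm.

    ```python
    import numpy as np
    from sklearn.linear_model import Lasso

    rng = np.random.default_rng(0)
    # two 2-dimensional subspaces inside R^10, 30 points each
    B1, B2 = rng.normal(size=(10, 2)), rng.normal(size=(10, 2))
    X = np.hstack([B1 @ rng.normal(size=(2, 30)), B2 @ rng.normal(size=(2, 30))])

    mask = rng.random(X.shape) < 0.2           # 20% missing entries
    X_est = np.where(mask, 0.0, X)             # initialize missing entries at 0

    for _ in range(10):
        C = np.zeros((60, 60))
        for j in range(60):                    # sparse code for each column
            others = np.delete(np.arange(60), j)
            lasso = Lasso(alpha=0.01, max_iter=5000).fit(X_est[:, others], X_est[:, j])
            C[others, j] = lasso.coef_
        R = X_est @ C                          # self-expressive reconstruction
        X_est[mask] = R[mask]                  # refill only the missing entries

    print("fill-in error:", np.linalg.norm((X_est - X)[mask]) / np.linalg.norm(X[mask]))
    ```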

  5. Evaluation of digital soil mapping approaches with large sets of environmental covariates

    Directory of Open Access Journals (Sweden)

    M. Nussbaum

    2018-01-01

    Full Text Available The spatial assessment of soil functions requires maps of basic soil properties. Unfortunately, these are either missing for many regions or are not available at the desired spatial resolution or down to the required soil depth. The field-based generation of large soil datasets and conventional soil maps remains costly. Meanwhile, legacy soil data and comprehensive sets of spatial environmental data are available for many regions. Digital soil mapping (DSM) approaches relating soil data (responses) to environmental data (covariates) face the challenge of building statistical models from large sets of covariates originating, for example, from airborne imaging spectroscopy or multi-scale terrain analysis. We evaluated six approaches for DSM in three study regions in Switzerland (Berne, Greifensee, ZH forest) by mapping the effective soil depth available to plants (SD), pH, soil organic matter (SOM), effective cation exchange capacity (ECEC), clay, silt, gravel content and fine fraction bulk density for four soil depths (totalling 48 responses). Models were built from 300–500 environmental covariates by selecting linear models through (1) grouped lasso and (2) an ad hoc stepwise procedure for robust external-drift kriging (georob). For (3) geoadditive models we selected penalized smoothing spline terms by component-wise gradient boosting (geoGAM). We further used two tree-based methods: (4) boosted regression trees (BRTs) and (5) random forest (RF). Lastly, we computed (6) weighted model averages (MAs) from the predictions obtained from methods 1–5. Lasso, georob and geoGAM successfully selected strongly reduced sets of covariates (subsets of 3–6 % of all covariates). Differences in predictive performance, tested on independent validation data, were mostly small and did not reveal a single best method for 48 responses. Nevertheless, RF was often the best among methods 1–5 (28 of 48 responses), but was outcompeted by MA for 14 of these 28

  6. Evaluation of digital soil mapping approaches with large sets of environmental covariates

    Science.gov (United States)

    Nussbaum, Madlene; Spiess, Kay; Baltensweiler, Andri; Grob, Urs; Keller, Armin; Greiner, Lucie; Schaepman, Michael E.; Papritz, Andreas

    2018-01-01

    The spatial assessment of soil functions requires maps of basic soil properties. Unfortunately, these are either missing for many regions or are not available at the desired spatial resolution or down to the required soil depth. The field-based generation of large soil datasets and conventional soil maps remains costly. Meanwhile, legacy soil data and comprehensive sets of spatial environmental data are available for many regions. Digital soil mapping (DSM) approaches relating soil data (responses) to environmental data (covariates) face the challenge of building statistical models from large sets of covariates originating, for example, from airborne imaging spectroscopy or multi-scale terrain analysis. We evaluated six approaches for DSM in three study regions in Switzerland (Berne, Greifensee, ZH forest) by mapping the effective soil depth available to plants (SD), pH, soil organic matter (SOM), effective cation exchange capacity (ECEC), clay, silt, gravel content and fine fraction bulk density for four soil depths (totalling 48 responses). Models were built from 300-500 environmental covariates by selecting linear models through (1) grouped lasso and (2) an ad hoc stepwise procedure for robust external-drift kriging (georob). For (3) geoadditive models we selected penalized smoothing spline terms by component-wise gradient boosting (geoGAM). We further used two tree-based methods: (4) boosted regression trees (BRTs) and (5) random forest (RF). Lastly, we computed (6) weighted model averages (MAs) from the predictions obtained from methods 1-5. Lasso, georob and geoGAM successfully selected strongly reduced sets of covariates (subsets of 3-6 % of all covariates). Differences in predictive performance, tested on independent validation data, were mostly small and did not reveal a single best method for 48 responses. Nevertheless, RF was often the best among methods 1-5 (28 of 48 responses), but was outcompeted by MA for 14 of these 28 responses. RF tended to over
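
    A condensed sketch of this workflow on synthetic data: lasso-based covariate selection, a random forest, and a weighted model average. The grouped lasso, georob, geoGAM and BRT variants of the paper are omitted, and all sizes are assumptions.

    ```python
    import numpy as np
    from sklearn.datasets import make_regression
    from sklearn.ensemble import RandomForestRegressor
    from sklearn.linear_model import LassoCV
    from sklearn.model_selection import train_test_split

    X, y = make_regression(n_samples=600, n_features=300, n_informative=12,
                           noise=5.0, random_state=0)
    X_tr, X_va, y_tr, y_va = train_test_split(X, y, random_state=0)

    lasso = LassoCV(cv=5).fit(X_tr, y_tr)
    print("covariates kept by lasso:", np.sum(lasso.coef_ != 0), "of", X.shape[1])

    rf = RandomForestRegressor(n_estimators=200, random_state=0).fit(X_tr, y_tr)

    # weight each model by 1/MSE on held-out data, then average predictions
    # (a fair error estimate would use a separate fold for the weights)
    preds = {"lasso": lasso.predict(X_va), "rf": rf.predict(X_va)}
    w = {k: 1 / np.mean((p - y_va) ** 2) for k, p in preds.items()}
    total = sum(w.values())
    ma = sum(w[k] / total * preds[k] for k in preds)
    print("MA validation MSE:", np.mean((ma - y_va) ** 2))
    ```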

  7. Newton law in covariant unimodular $F(R)$ gravity

    CERN Document Server

    Nojiri, S; Oikonomou, V K

    2016-01-01

    We propose a covariant ghost-free unimodular $F(R)$ gravity theory, which contains a three-form field, and study its structure using the analogy of the proposed theory with a quantum system which describes a charged particle in a uniform magnetic field. Newton's law in non-covariant unimodular $F(R)$ gravity, as well as in unimodular Einstein gravity, is derived and shown to be just the same as in General Relativity. The derivation of Newton's law in covariant unimodular $F(R)$ gravity shows that it is modified precisely in the same way as in the ordinary $F(R)$ theory. We also demonstrate that the cosmology of a Friedmann-Robertson-Walker background is equivalent in the non-covariant and covariant formulations of unimodular $F(R)$ theory.

  8. Parametric Covariance Model for Horizon-Based Optical Navigation

    Science.gov (United States)

    Hikes, Jacob; Liounis, Andrew J.; Christian, John A.

    2016-01-01

    This Note presents an entirely parametric version of the covariance for horizon-based optical navigation measurements. The covariance can be written as a function of only the spacecraft position, two sensor design parameters, the illumination direction, the size of the observed planet, the size of the lit arc to be used, and the total number of observed horizon points. As a result, one may now more clearly understand the sensitivity of horizon-based optical navigation performance as a function of these key design parameters, which is insight that was obscured in previous (and nonparametric) versions of the covariance. Finally, the new parametric covariance is shown to agree with both the nonparametric analytic covariance and results from a Monte Carlo analysis.

  9. Developing a framework to review near-miss maternal morbidity in India: a structured review and key stakeholder analysis.

    Science.gov (United States)

    Bhattacharyya, Sanghita; Srivastava, Aradhana; Knight, Marian

    2014-11-13

    In India there is a thrust towards promoting institutional delivery, resulting in problems of overcrowding and compromise to quality of care. Review of near-miss obstetric events has been suggested to be useful to investigate health system functioning, complementing maternal death reviews. The aim of this project was to identify the key elements required for a near-miss review programme for India. A structured review was conducted to identify methods used in assessing near-miss cases. The findings of the structured review were used to develop a suggested framework for conducting near-miss reviews in India. A pool of experts in near-miss review methods in low and middle income countries (LMICs) was identified for vetting the framework developed. Opinions were sought about the feasibility of implementing near-miss reviews in India, the processes to be followed, factors that made implementation successful and the associated challenges. A draft of the framework was revised based on the experts' opinions. Five broad methods of near-miss case review/audit were identified: Facility-based near-miss case review, confidential enquiries, criterion-based clinical audit, structured case review (South African Model) and home-based interviews. The opinion of the 11 stakeholders highlighted that the methods that a facility adopts should depend on the type and number of cases the facility handles, availability and maintenance of a good documentation system, and local leadership and commitment of staff. A proposed framework for conducting near-miss reviews was developed that included a combination of criterion-based clinical audit and near-miss review methods. The approach allowed for development of a framework for researchers and planners seeking to improve quality of maternal care not only at the facility level but also beyond, encompassing community health workers and referral. Further work is needed to evaluate the implementation of this framework to determine its efficacy in

  10. Missed opportunities in child healthcare

    Directory of Open Access Journals (Sweden)

    Linda Jonker

    2014-01-01

    Full Text Available Background: Various policies in health, such as Integrated Management of Childhood Illnesses, were introduced to enhance integrated service delivery in child healthcare. During clinical practice the researcher observed that integrated services may not be rendered.Objectives: This article describes the experiences of mothers that utilised comprehensive child health services in the Cape Metropolitan area of South Africa. Services included treatment for diseases; preventative interventions such as immunisation; and promotive interventions, such as improvement in nutrition and promotion of breastfeeding.Method: A qualitative, descriptive phenomenological approach was applied to explore the experiences and perceptions of mothers and/or carers utilising child healthcare services. Thirty percent of the clinics were selected purposively from the total population. A convenience purposive non-probability sampling method was applied to select 17 mothers who met the criteria and gave written consent. Interviews were conducted and recorded digitally using an interview guide. The data analysis was done using Tesch’s eight step model.Results: Findings of the study indicated varied experiences. Not all mothers received information about the Road to Health book or card. According to the mothers, integrated child healthcare services were not practised. The consequences were missed opportunities in immunisation, provision of vitamin A, absence of growth monitoring, feeding assessment and provision of nutritional advice.Conclusion: There is a need for simple interventions such as oral rehydration, early recognition and treatment of diseases, immunisation, growth monitoring and appropriate nutrition advice. These services were not offered diligently. Such interventions could contribute to reducing the incidence of child morbidity and mortality.

  11. Moderating the Covariance Between Family Member’s Substance Use Behavior

    Science.gov (United States)

    Eaves, Lindon J.; Neale, Michael C.

    2014-01-01

    Twin and family studies implicitly assume that the covariation between family members remains constant across differences in age between the members of the family. However, age-specificity in gene expression for shared environmental factors could generate higher correlations between family members who are more similar in age. Cohort effects (cohort × genotype or cohort × common environment) could have the same effects, and both potentially reduce effect sizes estimated in genome-wide association studies where the subjects are heterogeneous in age. In this paper we describe a model in which the covariance between twins and non-twin siblings is moderated as a function of age difference. We describe the details of the model and simulate data using a variety of different parameter values to demonstrate that model fitting returns unbiased parameter estimates. Power analyses are then conducted to estimate the sample sizes required to detect the effects of moderation in a design of twins and siblings. Finally, the model is applied to data on cigarette smoking. We find that (1) the model effectively recovers the simulated parameters, (2) the power is relatively low and therefore requires large sample sizes before small to moderate effect sizes can be found reliably, and (3) the genetic covariance between siblings for smoking behavior decays very rapidly. Result 3 implies that, e.g., genome-wide studies of smoking behavior that use individuals assessed at different ages, or belonging to different birth-year cohorts may have had substantially reduced power to detect effects of genotype on cigarette use. It also implies that significant special twin environmental effects can be explained by age-moderation in some cases. This effect likely contributes to the missing heritability paradox. PMID:24647834
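
    A small simulation sketch of the paper's key ingredient, sibling covariance decaying with age difference; the decay form r(d) = r0 * exp(-lambda * d) and its parameter values are assumptions for illustration, not the paper's fitted model.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    r0, lam = 0.6, 0.3

    def simulate_pair(age_diff):
        r = r0 * np.exp(-lam * age_diff)
        cov = np.array([[1.0, r], [r, 1.0]])
        return rng.multivariate_normal([0, 0], cov)

    for d in [0, 2, 5, 10]:
        pairs = np.array([simulate_pair(d) for _ in range(5000)])
        print(f"age diff {d:2d}: empirical r = {np.corrcoef(pairs.T)[0, 1]:+.3f}")
    ```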

  12. An initial investigation of the GOCE error variance-covariance matrices in the context of the GOCE user toolbox project

    DEFF Research Database (Denmark)

    Bingham, Rory J.; Tscherning, Christian; Knudsen, Per

    2011-01-01

    The availability of the full error variance-covariance matrices for the GOCE gravity field models is an important feature of the GOCE mission. Potentially, it will allow users to evaluate the accuracy of a geoid or mean dynamic topography (MDT) derived from the gravity field model at any particular...... location, design optimal filters to remove errors from the surfaces, and rigorously assimilate a geoid/MDT into ocean models, or otherwise combine the GOCE gravity field with other data. Here we present an initial investigation into the error characteristics of the GOCE gravity field models...... assimilation is provided. Finally, we consider some of the practical issues relating to the handling of the huge files containing the error variance-covariance information....

  13. Creating Web Sites The Missing Manual

    CERN Document Server

    MacDonald, Matthew

    2006-01-01

    Think you have to be a technical wizard to build a great web site? Think again. For anyone who wants to create an engaging web site--for either personal or business purposes--Creating Web Sites: The Missing Manual demystifies the process and provides tools, techniques, and expert guidance for developing a professional and reliable web presence. Like every Missing Manual, you can count on Creating Web Sites: The Missing Manual to be entertaining and insightful and complete with all the vital information, clear-headed advice, and detailed instructions you need to master the task at hand. Autho

  14. 7 CFR 917.6 - Handle.

    Science.gov (United States)

    2010-01-01

    Section 917.6, Agriculture Regulations of the Department of Agriculture (Continued), AGRICULTURAL MARKETING SERVICE (Marketing Agreements and …): … That the term handle shall not include the sale of fruit on the tree, the transportation within the...

  15. 7 CFR 926.9 - Handle.

    Science.gov (United States)

    2010-01-01

    Regulations of the Department of Agriculture (Continued), AGRICULTURAL MARKETING SERVICE (Marketing Agreements and …), REQUIREMENTS APPLICABLE TO CRANBERRIES NOT SUBJECT TO THE CRANBERRY MARKETING ORDER, § 926.9 Handle. Handle means to can, freeze, dehydrate, acquire, sell, consign, deliver, or transport (except as a common or...

  16. Flexible point handles metaphor for character deformation

    NARCIS (Netherlands)

    Luo, Z.; Veltkamp, R.C.; Egges, J.

    2015-01-01

    Skinning using point handles has experimentally shown its effectiveness for stretching, twisting, and supple deformations [Jacobson et al. 2011] which are difficult to achieve using rigid bones. However, point handles are much less effective for limbs bending since their influence weights vary over

  17. Handling Kids in Crisis with Care

    Science.gov (United States)

    Bushinski, Cari

    2018-01-01

    The Handle with Care program helps schools help students who experience trauma. While at the scene of an event like a domestic violence call, drug raid, or car accident, law enforcement personnel determine the names and school of any children present. They notify that child's school to "handle ___ with care" the next day, and the school…

  18. Survey of postharvest handling, preservation and processing ...

    African Journals Online (AJOL)

    Despite the important contribution of camel milk to food security for pastoralists in Kenya, little is known about the postharvest handling, preservation and processing practices. In this study, existing postharvest handling, preservation and processing practices for camel milk by pastoralists in Isiolo, Kenya were assessed ...

  19. DDOS ATTACK DETECTION SIMULATION AND HANDLING MECHANISM

    Directory of Open Access Journals (Sweden)

    Ahmad Sanmorino

    2013-11-01

    Full Text Available In this study we discuss how to handle a DDoS attack coming from an attacker, using a detection method and a handling mechanism. Detection is performed by comparing the number of packets with the number of flows, whereas the handling mechanism limits or drops the packets detected as a DDoS attack. The study begins with a simulation on a real network, which aims to capture real traffic data. The dump traffic data obtained from the simulation are then used by the detection method in our prototype system called DASHM (DDoS Attack Simulation and Handling Mechanism). From the experiment that has been conducted, the proposed method successfully detects the DDoS attack and handles the incoming packets sent by the attacker.
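
    A minimal sketch of the detection rule as described (packet volume compared against flow count), with an assumed threshold PACKETS_PER_FLOW_LIMIT; this illustrates the idea only and is not the DASHM implementation.

    ```python
    from collections import defaultdict

    PACKETS_PER_FLOW_LIMIT = 100     # assumed detection threshold
    packets = defaultdict(int)       # per-source packet counts in current window
    flows = defaultdict(set)         # per-source distinct flows (5-tuples)

    def observe(src, flow_id):
        packets[src] += 1
        flows[src].add(flow_id)

    def suspects():
        return [src for src in packets
                if packets[src] / max(1, len(flows[src])) > PACKETS_PER_FLOW_LIMIT]

    # handling mechanism: rate-limit or drop traffic from flagged sources
    for s in suspects():
        print(f"rate-limiting {s}")
    ```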

  20. A frailty-contagion model for multi-site hourly precipitation driven by atmospheric covariates

    Science.gov (United States)

    Koch, Erwan; Naveau, Philippe

    2015-04-01

    Accurate stochastic simulations of hourly precipitation are needed for impact studies at local spatial scales. Statistically, hourly precipitation data represent a difficult challenge. They are non-negative, skewed, heavy tailed, contain a lot of zeros (dry hours) and they have complex temporal structures (e.g., long persistence of dry episodes). Inspired by frailty-contagion approaches used in finance and insurance, we propose a multi-site precipitation simulator that, given appropriate regional atmospheric variables, can simultaneously handle dry events and heavy rainfall periods. One advantage of our model is its conceptual simplicity in its dynamical structure. In particular, the temporal variability is represented by a common factor based on a few classical atmospheric covariates like temperatures, pressures and others. Our inference approach is tested on simulated data and applied on measurements made in the northern part of French Brittany.

  1. Graphing survival curve estimates for time-dependent covariates.

    Science.gov (United States)

    Schultz, Lonni R; Peterson, Edward L; Breslau, Naomi

    2002-01-01

    Graphical representation of statistical results is often used to assist readers in the interpretation of the findings. This is especially true for survival analysis, where there is an interest in explaining the patterns of survival over time for specific covariates. For fixed categorical covariates, such as a group membership indicator, Kaplan-Meier estimates (1958) can be used to display the curves. For time-dependent covariates this method may not be adequate. Simon and Makuch (1984) proposed a technique that evaluates the covariate status of the individuals remaining at risk at each event time. The method takes into account the change in an individual's covariate status over time. The survival computations are the same as in the Kaplan-Meier method, in that the conditional survival estimates are a function of the ratio of the number of events to the number at risk at each event time. The difference between the two methods is that the individuals at risk within each level defined by the covariate are not fixed at time 0 in the Simon and Makuch method, as they are with the Kaplan-Meier method. Examples of how the two methods can differ for time-dependent covariates in Cox proportional hazards regression analysis are presented.
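
    A minimal sketch of the Simon and Makuch idea: as with Kaplan-Meier, the survival estimate multiplies (1 - d/n) over event times, but membership in each covariate level is re-evaluated at every event time instead of being fixed at time 0. The synthetic records below (including the switch times from level 0 to level 1) are assumptions for illustration.

    ```python
    import numpy as np

    # each subject: event/censoring time, event indicator, and the time at which
    # the covariate switches from level 0 to level 1 (np.inf if it never switches)
    time   = np.array([2., 3., 4., 5., 6., 7., 8., 9.])
    event  = np.array([1,  0,  1,  1,  0,  1,  1,  0])
    switch = np.array([np.inf, 1., np.inf, 2., 3., np.inf, 4., np.inf])

    def simon_makuch(level):
        surv, s = [], 1.0
        for t in np.unique(time[event == 1]):
            current = (switch <= t) == (level == 1)   # current covariate level
            at_risk = current & (time >= t)
            d = np.sum(at_risk & (time == t) & (event == 1))
            n = np.sum(at_risk)
            if n > 0:
                s *= 1 - d / n
            surv.append((t, s))
        return surv

    print(simon_makuch(0))
    print(simon_makuch(1))
    ```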

  2. Graphical representation of covariant-contravariant modal formulae

    Directory of Open Access Journals (Sweden)

    Miguel Palomino

    2011-08-01

    Full Text Available Covariant-contravariant simulation is a combination of standard (covariant) simulation, its contravariant counterpart and bisimulation. We have previously studied its logical characterization by means of the covariant-contravariant modal logic. Moreover, we have investigated the relationships between this model and that of modal transition systems, where two kinds of transitions (the so-called may and must transitions) were combined in order to obtain a simple framework to express a notion of refinement over state-transition models. In a classic paper, Boudol and Larsen established a precise connection between the graphical approach, by means of modal transition systems, and the logical approach, based on Hennessy-Milner logic without negation, to system specification. They obtained a (graphical) representation theorem proving that a formula can be represented by a term if, and only if, it is consistent and prime. We show in this paper that the formulae from the covariant-contravariant modal logic that admit a "graphical" representation by means of processes, modulo the covariant-contravariant simulation preorder, are also the consistent and prime ones. In order to obtain the desired graphical representation result, we first restrict ourselves to the case of covariant-contravariant systems without bivariant actions. Bivariant actions can be incorporated later by means of an encoding that splits each bivariant action into its covariant and its contravariant parts.

  3. Covariance of maximum likelihood evolutionary distances between sequences aligned pairwise

    Directory of Open Access Journals (Sweden)

    Dessimoz Christophe

    2008-06-01

    Full Text Available Abstract Background The estimation of a distance between two biological sequences is a fundamental process in molecular evolution. It is usually performed by maximum likelihood (ML) on characters aligned either pairwise or jointly in a multiple sequence alignment (MSA). Estimators for the covariance of pairs from an MSA are known, but we are not aware of any solution for cases of pairs aligned independently. In large-scale analyses, it may be too costly to compute MSAs every time distances must be compared, and therefore a covariance estimator for distances estimated from pairs aligned independently is desirable. Knowledge of covariances improves any process that compares or combines distances, such as in generalized least-squares phylogenetic tree building, orthology inference, or lateral gene transfer detection. Results In this paper, we introduce an estimator for the covariance of distances from sequences aligned pairwise. Its performance is analyzed through extensive Monte Carlo simulations, and compared to the well-known variance estimator of ML distances. Our covariance estimator can be used together with the ML variance estimator to form covariance matrices. Conclusion The estimator performs similarly to the ML variance estimator. In particular, it shows no sign of bias when sequence divergence is below 150 PAM units (i.e. above ~29% expected sequence identity). Above that distance, the covariances tend to be underestimated, but then ML variances are also underestimated.

  4. Galaxy-galaxy lensing estimators and their covariance properties

    Science.gov (United States)

    Singh, Sukhdeep; Mandelbaum, Rachel; Seljak, Uroš; Slosar, Anže; Vazquez Gonzalez, Jose

    2017-11-01

    We study the covariance properties of real space correlation function estimators - primarily galaxy-shear correlations, or galaxy-galaxy lensing - using SDSS data for both shear catalogues and lenses (specifically the BOSS LOWZ sample). Using mock catalogues of lenses and sources, we disentangle the various contributions to the covariance matrix and compare them with a simple analytical model. We show that not subtracting the lensing measurement around random points from the measurement around the lens sample is equivalent to performing the measurement using the lens density field instead of the lens overdensity field. While the measurement using the lens density field is unbiased (in the absence of systematics), its error is significantly larger due to an additional term in the covariance. Therefore, this subtraction should be performed regardless of its beneficial effects on systematics. Comparing the error estimates from data and mocks for estimators that involve the overdensity, we find that the errors are dominated by the shape noise and lens clustering; the empirically estimated covariances (jackknife and standard deviation across mocks) are consistent with theoretical estimates, and both the connected parts of the four-point function and the supersample covariance can be neglected for the current levels of noise. While the trade-off between different terms in the covariance depends on the survey configuration (area, source number density), the diagnostics that we use in this work should be useful for future works to test their empirically determined covariances.
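
    As a concrete illustration of the empirically estimated covariances mentioned above, a delete-one jackknife covariance can be sketched as follows; the measurement array is a synthetic stand-in for a signal remeasured with each sky region removed in turn.

    ```python
    import numpy as np

    def jackknife_covariance(measurements):
        """measurements: (n_regions, n_bins) array, row k = signal with region k removed."""
        n = measurements.shape[0]
        diff = measurements - measurements.mean(axis=0)
        # C = (n-1)/n * sum_k (x_k - xbar)(x_k - xbar)^T
        return (n - 1) / n * (diff.T @ diff)

    rng = np.random.default_rng(0)
    fake = rng.normal(size=(100, 12))     # 100 jackknife regions, 12 radial bins
    C = jackknife_covariance(fake)
    corr = C / np.sqrt(np.outer(np.diag(C), np.diag(C)))
    print(C.shape, corr.max())
    ```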

  5. Parsimonious covariate selection for a multicategory ordered response.

    Science.gov (United States)

    Hsu, Wan-Hsiang; DiRienzo, A Gregory

    2017-12-01

    We propose a flexible continuation ratio (CR) model for an ordinal categorical response with potentially ultrahigh dimensional data that characterizes the unique covariate effects at each response level. The CR model is the logit of the conditional discrete hazard function for each response level given covariates. We propose two modeling strategies, one that keeps the same covariate set for each hazard function but allows regression coefficients to arbitrarily change with response level, and one that allows both the set of covariates and their regression coefficients to arbitrarily change with response. Evaluating a covariate set is accomplished by using the nonparametric bootstrap to estimate prediction error and their robust standard errors that do not rely on proper model specification. To help with interpretation of the selected covariate set, we flexibly estimate the conditional cumulative distribution function given the covariates using the separate hazard function models. The goodness-of-fit of our flexible CR model is assessed with graphical and numerical methods based on the cumulative sum of residuals. Simulation results indicate the methods perform well in finite samples. An application to B-cell acute lymphocytic leukemia data is provided.
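
    A sketch of how a CR model of this kind can be fit in practice, using the common reduction to one logistic regression per response level (the logit of the discrete hazard at level j is fit among subjects with response >= j). The four-level response and its data-generating mechanism are assumptions for illustration.

    ```python
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(0)
    n = 1000
    X = rng.normal(size=(n, 4))
    # ordinal response with 4 levels (0..3), synthetic mechanism
    latent = X @ np.array([1.0, -0.5, 0.3, 0.0]) + rng.logistic(size=n)
    y = np.digitize(latent, [-1.0, 0.5, 2.0])

    hazard_models = {}
    for j in range(3):                      # last level needs no model
        at_risk = y >= j                    # conditional on reaching level j
        stop = (y[at_risk] == j).astype(int)
        hazard_models[j] = LogisticRegression().fit(X[at_risk], stop)

    # conditional distribution over levels via continuation probabilities
    def predict_level_probs(x):
        probs, survive = [], 1.0
        for j in range(3):
            h = hazard_models[j].predict_proba(x.reshape(1, -1))[0, 1]
            probs.append(survive * h)
            survive *= 1 - h
        probs.append(survive)
        return np.array(probs)

    print(predict_level_probs(X[0]), predict_level_probs(X[0]).sum())
    ```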

  6. Many Primary Care Docs May Miss Prediabetes

    Science.gov (United States)

    MONDAY, July 24, 2017 (HealthDay News) -- Most primary care doctors can't identify all 11 risk factors ...

  7. Clustering with Missing Values: No Imputation Required

    Science.gov (United States)

    Wagstaff, Kiri

    2004-01-01

    Clustering algorithms can identify groups in large data sets, such as star catalogs and hyperspectral images. In general, clustering methods cannot analyze items that have missing data values. Common solutions either fill in the missing values (imputation) or ignore the missing data (marginalization). Imputed values are treated as just as reliable as the truly observed data, but they are only as good as the assumptions used to create them. In contrast, we present a method for encoding partially observed features as a set of supplemental soft constraints and introduce the KSC algorithm, which incorporates constraints into the clustering process. In experiments on artificial data and data from the Sloan Digital Sky Survey, we show that soft constraints are an effective way to enable clustering with missing values.
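
    A small sketch of the marginalization route contrasted above: ignore missing values by computing distances over the observed dimensions only (a "partial distance"), rather than imputing. This illustrates the general idea, not the paper's KSC algorithm, whose soft constraints are not reproduced here.

    ```python
    import numpy as np

    def partial_distance(x, center):
        obs = ~np.isnan(x)
        if not obs.any():
            return np.inf
        # rescale so points with many missing values stay comparable
        return np.sum((x[obs] - center[obs]) ** 2) * x.size / obs.sum()

    def assign(X, centers):
        return np.array([np.argmin([partial_distance(x, c) for c in centers])
                         for x in X])

    rng = np.random.default_rng(0)
    X = np.vstack([rng.normal(0, 1, (50, 3)), rng.normal(4, 1, (50, 3))])
    X[rng.random(X.shape) < 0.2] = np.nan
    centers = np.array([[0., 0., 0.], [4., 4., 4.]])
    print(np.bincount(assign(X, centers)))
    ```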

  8. Missed Radiation Therapy and Cancer Recurrence

    Science.gov (United States)

    Patients who miss radiation therapy sessions during cancer treatment have an increased risk of their disease returning, even if they eventually complete their course of radiation treatment, according to a new study.

  9. Covariate-adjusted measures of discrimination for survival data

    DEFF Research Database (Denmark)

    White, Ian R; Rapsomaniki, Eleni; Frikke-Schmidt, Ruth

    2015-01-01

    statistics in censored survival data. OBJECTIVE: To develop extensions of the C-index and D-index that describe the prognostic ability of a model adjusted for one or more covariate(s). METHOD: We define a covariate-adjusted C-index and D-index for censored survival data, propose several estimators......, and investigate their performance in simulation studies and in data from a large individual participant data meta-analysis, the Emerging Risk Factors Collaboration. RESULTS: The proposed methods perform well in simulations. In the Emerging Risk Factors Collaboration data, the age-adjusted C-index and D-index were...

  10. Theory of Covariance Equivalent ARMAV Models of Civil Engineering Structures

    DEFF Research Database (Denmark)

    Andersen, P.; Brincker, Rune; Kirkegaard, Poul Henning

    1996-01-01

    In this paper the theoretical background for using covariance equivalent ARMAV models in modal analysis is discussed. It is shown how to obtain a covariance equivalent ARMA model for a univariate linear second order continuous-time system excited by Gaussian white noise. This result is generalized for multivariate systems to an ARMAV model. The covariance equivalent model structure is also considered when the number of channels is different from the number of degrees of freedom to be modelled. Finally, it is reviewed how to estimate an ARMAV model from sampled data.

  12. AppleScript The Missing Manual

    CERN Document Server

    Goldstein, Adam

    2009-01-01

    AppleScript: The Missing Manual is every beginner's guide to learning the Macintosh's ultimate scripting tool: AppleScript. Through dozens of hands-on scripting examples, this comprehensive guide ensures that anyone, including novices, can learn how to control Mac applications in timesaving and innovative ways. Thanks to AppleScript: The Missing Manual, the path from regular Mac fan to seasoned scripter has never been easier.

  13. Analysis of missing data with random forests

    OpenAIRE

    Hapfelmeier, Alexander

    2012-01-01

    Random Forests are widely used for data prediction and interpretation purposes. They show many appealing characteristics, such as the ability to deal with high dimensional data, complex interactions and correlations. Furthermore, missing values can easily be processed by the built-in procedure of surrogate splits. However, there is only little knowledge about the properties of recursive partitioning in missing data situations. Therefore, extensive simulation studies and empir...

  14. Wikipedia Reader's Guide: The Missing Manual

    CERN Document Server

    Broughton, John

    2008-01-01

    Wikipedia Reader's Guide: The Missing Manual gives you the essential tools for getting the most out of Wikipedia. As a supplement to Wikipedia: The Missing Manual, this handbook provides a basic road map to the largest online collaborative encyclopedia. You'll learn the best ways to search Wikipedia for the information you need, how to navigate the encyclopedia by category, and what to do if you spot an error in an article.

  15. MISSE 1 and 2 Tray Temperature Measurements

    Science.gov (United States)

    Harvey, Gale A.; Kinard, William H.

    2006-01-01

    The Materials International Space Station Experiment (MISSE 1 & 2) was deployed August 10, 2001 and retrieved July 30, 2005. This experiment is a co-operative endeavor by NASA-LaRC, NASA-GRC, NASA-MSFC, NASA-JSC, the Materials Laboratory at the Air Force Research Laboratory, and the Boeing Phantom Works. The objective of the experiment is to evaluate the performance, stability, and long-term survivability of materials and components planned for use by NASA and DOD on future LEO, synchronous orbit, and interplanetary space missions. Temperature is an important parameter in the evaluation of space environmental effects on materials. MISSE 1 & 2 had autonomous temperature data loggers to measure the temperature of each of the four experiment trays. The MISSE tray-temperature data loggers have one external thermistor data channel and a 12-bit digital converter. The MISSE experiment trays were exposed to the ISS space environment for nearly four times the nominal design lifetime for this experiment. Nevertheless, all of the data loggers provided useful temperature measurements of MISSE. The temperature measurement system has been discussed in a previous paper. This paper presents temperature measurements of MISSE payload experiment carriers (PECs) 1 and 2 experiment trays.

  16. A Comparison of Three Gap Filling Techniques for Eddy Covariance Net Carbon Fluxes in Short Vegetation Ecosystems

    Directory of Open Access Journals (Sweden)

    Xiaosong Zhao

    2015-01-01

    Full Text Available Missing data is an inevitable problem when measuring CO2, water, and energy fluxes between the biosphere and atmosphere with eddy covariance systems. To find the optimum gap-filling method for short vegetation, we review three methods, mean diurnal variation (MDV), look-up tables (LUT), and nonlinear regression (NLR), for estimating missing values of net ecosystem CO2 exchange (NEE) in eddy covariance time series and evaluate their performance for different artificial gap scenarios based on benchmark datasets from marsh and cropland sites in China. The cumulative errors for the three methods have no consistent bias trends, and ranged between −30 and +30 mgCO2 m−2 from May to October at the three sites. To minimize the summed bias, combined gap-filling methods were selected for short vegetation: the NLR or LUT method was selected for the period after rapid plant growth in spring and before the end of the growing season, and the MDV method was used for the other stages. The summed relative error (SRE) of the optimum method ranged between −2 and +4% for the four gap levels at the three sites, except for 55% gaps at the soybean site; the combined approach also markedly reduced the standard deviation of the error.
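
    As a concrete example of the simplest of the three approaches, the sketch below fills a gap in a half-hourly NEE series with the mean of observed values at the same time of day inside a window of adjacent days. The window width, series layout, and use of pandas are illustrative assumptions; operational MDV implementations differ in detail.

```python
# Sketch of a mean-diurnal-variation (MDV) gap filler, assuming a
# half-hourly pandas Series with a DatetimeIndex and NaN marking gaps.
import pandas as pd

def mdv_fill(nee: pd.Series, window_days: int = 7) -> pd.Series:
    filled = nee.copy()
    by_tod = nee.groupby([nee.index.hour, nee.index.minute])
    for ts in nee.index[nee.isna()]:
        same_tod = by_tod.get_group((ts.hour, ts.minute))   # same time of day
        window = same_tod[abs(same_tod.index - ts) <= pd.Timedelta(days=window_days)]
        if window.notna().any():
            filled[ts] = window.mean()                      # mean skips NaN by default
    return filled
```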

  17. Anchor handling tug operations: a practical guide to the operation of modern anchor handling tugs engaged in anchor handling and towing operations

    National Research Council Canada - National Science Library

    Clark, I.C; Hancox, M

    2012-01-01

    ... --Turning and manoeuvring modern anchor handling vessels --The AHTS design and towing operations --The dangers of very high speed loads during deep water anchor handling operations --The dangers...

  18. Treatments of Missing Values in Large National Data Affect Conclusions: The Impact of Multiple Imputation on Arthroplasty Research.

    Science.gov (United States)

    Ondeck, Nathaniel T; Fu, Michael C; Skrip, Laura A; McLynn, Ryan P; Su, Edwin P; Grauer, Jonathan N

    2018-03-01

    Despite the advantages of large, national datasets, one continuing concern is missing data values. Complete case analysis, in which only cases with complete data are analyzed, is commonly used rather than more statistically rigorous approaches such as multiple imputation. This study characterizes the potential selection bias introduced by complete case analysis and compares the results of common regressions using both techniques following unicompartmental knee arthroplasty. Patients undergoing unicompartmental knee arthroplasty were extracted from the 2005 to 2015 National Surgical Quality Improvement Program. As examples, the demographics of patients with and without missing preoperative albumin and hematocrit values were compared. Missing data were then treated with both complete case analysis and multiple imputation (an approach that reproduces the variation and associations that would have been present in a full dataset), and the conclusions of common regressions for adverse outcomes were compared. A total of 6117 patients were included, of whom 56.7% were missing at least one value. Younger, female, and healthier patients were more likely to have missing preoperative albumin and hematocrit values. The use of complete case analysis removed 3467 patients from the study, whereas multiple imputation included all 6117 patients. The 2 methods of handling missing values led to differing associations of low preoperative laboratory values with commonly studied adverse outcomes. The use of complete case analysis can introduce selection bias and may lead to different conclusions in comparison with the statistically rigorous multiple imputation approach. Joint surgeons should consider the methods of handling missing values when interpreting arthroplasty research. Copyright © 2017 Elsevier Inc. All rights reserved.
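
    To make the contrast concrete, here is a minimal sketch of the two strategies on a generic numeric table with missing labs. The column names and the use of scikit-learn's IterativeImputer with posterior sampling are assumptions for illustration; the study's actual imputation model is not reproduced here.

```python
import pandas as pd
from sklearn.experimental import enable_iterative_imputer  # noqa: F401
from sklearn.impute import IterativeImputer

def analyses(df: pd.DataFrame, m: int = 5):
    # Complete-case analysis: rows with any missing lab are silently dropped.
    complete = df.dropna(subset=["albumin", "hematocrit"])

    # Multiple imputation: m stochastic completions of the same data, each
    # analyzed separately and then pooled (Rubin's rules, not shown).
    imputed = []
    for seed in range(m):
        imp = IterativeImputer(sample_posterior=True, random_state=seed)
        imputed.append(pd.DataFrame(imp.fit_transform(df), columns=df.columns))
    return complete, imputed
```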

  19. How the NWC handles software as product

    Energy Technology Data Exchange (ETDEWEB)

    Vinson, D.

    1997-11-01

    This tutorial provides a hands-on view of how the Nuclear Weapons Complex project should be handling (or planning to handle) software as a product in response to Engineering Procedure 401099. The SQAS has published the document SQAS96-002, Guidelines for NWC Processes for Handling Software Product, that will be the basis for the tutorial. The primary scope of the tutorial is on software products that result from weapons and weapons-related projects, although the information presented is applicable to many software projects. Processes that involve the exchange, review, or evaluation of software product between or among NWC sites, DOE, and external customers will be described.

  20. Ergonomics: safe patient handling and mobility.

    Science.gov (United States)

    Hallmark, Beth; Mechan, Patricia; Shores, Lynne

    2015-03-01

    This article reviews and investigates the issues surrounding ergonomics, with a specific focus on safe patient handling and mobility. The health care worker of today faces many challenges, one of which is related to the safety of patients. Safe patient handling and mobility is on the forefront of the movement to improve patient safety. This article reviews the risks associated with patient handling and mobility, and informs the reader of current evidence-based practice relevant to this area of care. Copyright © 2015 Elsevier Inc. All rights reserved.

  1. Handling of bulk solids theory and practice

    CERN Document Server

    Shamlou, P A

    1990-01-01

    Handling of Bulk Solids provides a comprehensive discussion of the field of solids flow and handling in the process industries. Presentation of the subject follows classical lines of separate discussions for each topic, so each chapter is self-contained and can be read on its own. Topics discussed include bulk solids flow and handling properties; pressure profiles in bulk solids storage vessels; the design of storage silos for reliable discharge of bulk materials; gravity flow of particulate materials from storage vessels; pneumatic transportation of bulk solids; and the hazards of solid-mater

  2. Nonrelativistic fluids on scale covariant Newton-Cartan backgrounds

    Science.gov (United States)

    Mitra, Arpita

    2017-12-01

    The nonrelativistic covariant framework for fields is extended to investigate fields and fluids on scale covariant curved backgrounds. The scale covariant Newton-Cartan background is constructed using the localization of space-time symmetries of nonrelativistic fields in flat space. Following this, we provide a Weyl covariant formalism which can be used to study scale invariant fluids. By considering ideal fluids as an example, we describe its thermodynamic and hydrodynamic properties and explicitly demonstrate that it satisfies the local second law of thermodynamics. As a further application, we consider the low energy description of Hall fluids. Specifically, we find that the gauge fields for scale transformations lead to corrections of the Wen-Zee and Berry phase terms contained in the effective action.

  3. AFCI-2.0 Neutron Cross Section Covariance Library

    Energy Technology Data Exchange (ETDEWEB)

    Herman, M.; Oblozinsky, P.; Mattoon, C.M.; Pigni, M.; Hoblit, S.; Mughabghab, S.F.; Sonzogni, A.; Talou, P.; Chadwick, M.B.; Hale, G.M.; Kahler, A.C.; Kawano, T.; Little, R.C.; Yount, P.G.

    2011-03-01

    The cross section covariance library has been under development through a BNL-LANL collaborative effort over the last three years. The project builds on two covariance libraries developed earlier, with considerable input from BNL and LANL. In 2006, an international effort under WPEC Subgroup 26 produced the BOLNA covariance library by putting together data, often preliminary, from various sources for the most important materials for nuclear reactor technology. This was followed in 2007 by a collaborative effort of four US national laboratories to produce covariances, often of modest quality (hence the name 'low-fidelity'), for a virtually complete set of materials included in ENDF/B-VII.0. The present project focuses on covariances of 4-5 major reaction channels for 110 materials of importance for power reactors. The work started under the Global Nuclear Energy Partnership (GNEP) in 2008, which changed to the Advanced Fuel Cycle Initiative (AFCI) in 2009. With the 2011 release, the name has changed to the Covariance Multigroup Matrix for Advanced Reactor Applications (COMMARA), version 2.0. The primary purpose of the library is to provide covariances for the AFCI data adjustment project, which focuses on the needs of fast advanced burner reactors. BNL's responsibility was defined as developing covariances for structural materials and fission products, management of the library, and coordination of the work; LANL's responsibility was defined as covariances for light nuclei and actinides. The COMMARA-2.0 covariance library has been developed by the BNL-LANL collaboration for Advanced Fuel Cycle Initiative applications over a period of three years, 2008-2010. It contains covariances for 110 materials relevant to fast reactor R&D. The library is to be used together with the central values of ENDF/B-VII.0, the latest official release of US files of evaluated neutron cross sections. The COMMARA-2.0 library contains neutron cross section covariances for 12 light nuclei (coolants and moderators), 78 structural

  4. Empirical State Error Covariance Matrix for Batch Estimation

    Science.gov (United States)

    Frisbee, Joe

    2015-01-01

    State estimation techniques effectively provide mean state estimates. However, the theoretical state error covariance matrices provided as part of these techniques often suffer from a lack of confidence in their ability to describe the uncertainty in the estimated states. By a reinterpretation of the equations involved in the weighted batch least squares algorithm, it is possible to directly arrive at an empirical state error covariance matrix. The proposed empirical state error covariance matrix will contain the effect of all error sources, known or not. This empirical error covariance matrix may be calculated as a side computation for each unique batch solution. Results based on the proposed technique will be presented for a simple, two-observer, measurement-error-only problem.

  5. Reducing mouse anxiety during handling: effect of experience with handling tunnels.

    Science.gov (United States)

    Gouveia, Kelly; Hurst, Jane L

    2013-01-01

    Handling stress is a well-recognised source of variation in animal studies that can also compromise the welfare of research animals. To reduce background variation and maximise welfare, methods that minimise handling stress should be developed and used wherever possible. Recent evidence has shown that handling mice by a familiar tunnel that is present in their home cage can minimise anxiety compared with standard tail handling. As yet, it is unclear whether a tunnel is required in each home cage to improve response to handling. We investigated the influence of prior experience with home tunnels among two common strains of laboratory mice: ICR(CD-1) and C57BL/6. We compared willingness to approach the handler and anxiety in an elevated plus maze test among mice picked up by the tail, by a home cage tunnel or by an external tunnel shared between cages. Willingness to interact with the handler was much greater for mice handled by a tunnel, even when this was unfamiliar, compared to mice picked up by the tail. Once habituated to handling, C57BL/6 mice were most interactive towards a familiar home tunnel, whereas the ICR strain showed strong interaction with all tunnel handling regardless of any experience of a home cage tunnel. Mice handled by a home cage or external tunnel showed less anxiety in an elevated plus maze than those picked up by the tail. This study shows that using a tunnel for routine handling reduces anxiety among mice compared to tail handling regardless of prior familiarity with tunnels. However, as home cage tunnels can further improve response to handling in some mice, we recommend that mice are handled with a tunnel provided in their home cage where possible as a simple practical method to minimise handling stress.

  6. Data Selection for Within-Class Covariance Estimation

    Science.gov (United States)

    2016-09-08

    Data Selection for Within-Class Covariance Estimation. Elliot Singer (Massachusetts Institute of Technology Lincoln Laboratory), Tyler Campbell (Rensselaer Polytechnic Institute), and Douglas Reynolds (Massachusetts Institute of Technology Lincoln Laboratory); es@ll.mit.edu, tylercampbell@mac.com, dar@ll.mit.edu. Abstract: Methods for performing ... NIST evaluations to train the within-class and across-class covariance matrices required by these techniques, little attention has been paid to the

  7. Covariant Noether charge for higher dimensional Chern-Simons terms

    Energy Technology Data Exchange (ETDEWEB)

    Azeyanagi, Tatsuo [Département de Physique, Ecole Normale Supérieure, CNRS,24 rue Lhomond, 75005 Paris (France); Loganayagam, R. [School of Natural Sciences, Institute for Advanced Study,1 Einstein Drive, Princeton, NJ 08540 (United States); Ng, Gim Seng [Center for the Fundamental Laws of Nature, Harvard University,17 Oxford St, Cambridge, MA 02138 (United States); Rodriguez, Maria J. [Center for the Fundamental Laws of Nature, Harvard University,17 Oxford St, Cambridge, MA 02138 (United States); Institut de Physique Théorique, Orme des Merisiers batiment 774,Point courrier 136, CEA/DSM/IPhT, CEA/Saclay,F-91191 Gif-sur-Yvette Cedex (France)

    2015-05-07

    We construct a manifestly covariant differential Noether charge for theories with Chern-Simons terms in higher dimensional spacetimes. This is in contrast to Tachikawa’s extension of the standard Lee-Iyer-Wald formalism which results in a non-covariant differential Noether charge for Chern-Simons terms. On a bifurcation surface, our differential Noether charge integrates to the Wald-like entropy formula proposed by Tachikawa in (arXiv:hep-th/0611141v2).

  8. Extreme Covariant Observables for Type I Symmetry Groups

    Science.gov (United States)

    Holevo, Alexander S.; Pellonpää, Juha-Pekka

    2009-06-01

    The structure of covariant observables—normalized positive operator measures (POMs)—is studied in the case of a type I symmetry group. Such measures are completely determined by kernels which are measurable fields of positive semidefinite sesquilinear forms. We produce the minimal Kolmogorov decompositions for the kernels and determine those which correspond to the extreme covariant observables. Illustrative examples of the extremals in the case of the Abelian symmetry group are given.

  9. Modeling Portfolio Defaults using Hidden Markov Models with Covariates

    OpenAIRE

    Banachewicz, Konrad; van der Vaart, Aad; Lucas, André

    2006-01-01

    We extend the Hidden Markov Model for defaults of Crowder, Davis, and Giampieri (2005) to include covariates. The covariates enhance the prediction of transition probabilities from high to low default regimes. To estimate the model, we extend the EM estimating equations to account for the time-varying nature of the conditional likelihoods due to sample attrition and extension. Using empirical U.S. default data, we find that GDP growth, the term structure of interest rates and stock market ret...

  10. Visualization and assessment of spatio-temporal covariance properties

    KAUST Repository

    Huang, Huang

    2017-11-23

    Spatio-temporal covariances are important for describing the spatio-temporal variability of underlying random fields in geostatistical data. For second-order stationary random fields, there exist subclasses of covariance functions that assume a simpler spatio-temporal dependence structure with separability and full symmetry. However, it is challenging to visualize and assess separability and full symmetry from spatio-temporal observations. In this work, we propose a functional data analysis approach that constructs test functions using the cross-covariances from time series observed at each pair of spatial locations. These test functions of temporal lags summarize the properties of separability or symmetry for the given spatial pairs. We use functional boxplots to visualize the functional median and the variability of the test functions, where the extent of departure from zero at all temporal lags indicates the degree of non-separability or asymmetry. We also develop a rank-based nonparametric testing procedure for assessing the significance of the non-separability or asymmetry. Essentially, the proposed methods only require the analysis of temporal covariance functions. Thus, a major advantage over existing approaches is that there is no need to estimate any covariance matrix for selected spatio-temporal lags. The performances of the proposed methods are examined by simulations with various commonly used spatio-temporal covariance models. To illustrate our methods in practical applications, we apply them to real datasets, including weather station data and climate model outputs.
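
    The following sketch shows the flavor of such a test function for one spatial pair: under separability, the temporal lag profile of the cross-covariance matches the purely temporal autocovariance profile, so their normalized difference should hover near zero at every lag. This is an illustrative simplification; the paper's formal definition of the test functions differs.

```python
import numpy as np

def cross_cov(x, y, max_lag):
    """Empirical cross-covariance c(u) = Cov(x_t, y_{t+u}) for u = 0..max_lag."""
    x, y = x - x.mean(), y - y.mean()
    n = len(x)
    return np.array([np.dot(x[:n - u], y[u:]) / n for u in range(max_lag + 1)])

def separability_test_function(x_i, x_j, max_lag=20):
    c_ij = cross_cov(x_i, x_j, max_lag)       # cross: site i vs site j
    c_ii = cross_cov(x_i, x_i, max_lag)       # purely temporal profile at site i
    # Under separability both normalized lag profiles coincide, so this
    # difference stays near zero; systematic departures at some lags
    # suggest non-separability for the pair (i, j).
    return c_ij / c_ij[0] - c_ii / c_ii[0]
```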

  11. A three domain covariance framework for EEG/MEG data.

    Science.gov (United States)

    Roś, Beata P; Bijma, Fetsje; de Gunst, Mathisca C M; de Munck, Jan C

    2015-10-01

    In this paper we introduce a covariance framework for the analysis of single subject EEG and MEG data that takes into account observed temporal stationarity on small time scales and trial-to-trial variations. We formulate a model for the covariance matrix, which is a Kronecker product of three components that correspond to space, time and epochs/trials, and consider maximum likelihood estimation of the unknown parameter values. An iterative algorithm that finds approximations of the maximum likelihood estimates is proposed. Our covariance model is applicable in a variety of cases where spontaneous EEG or MEG acts as source of noise and realistic noise covariance estimates are needed, such as in evoked activity studies, or where the properties of spontaneous EEG or MEG are themselves the topic of interest, like in combined EEG-fMRI experiments in which the correlation between EEG and fMRI signals is investigated. We use a simulation study to assess the performance of the estimator and investigate the influence of different assumptions about the covariance factors on the estimated covariance matrix and on its components. We apply our method to real EEG and MEG data sets. Copyright © 2015 Elsevier Inc. All rights reserved.
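
    A minimal sketch of the covariance structure itself: the full covariance of the vectorized trials x time x sensors data is the Kronecker product of three factor matrices. The plug-in construction below only assembles the model; the paper's iterative maximum likelihood estimation of the factors is omitted, and the dimensions are illustrative.

```python
import numpy as np

def three_domain_covariance(S_trial, S_time, S_space):
    """Covariance of data vectorized as (trial, time, sensor) under the
    Kronecker model: Sigma = S_trial (x) S_time (x) S_space."""
    return np.kron(S_trial, np.kron(S_time, S_space))

# Example: 4 trials, 20 time samples, 8 sensors gives a 640 x 640
# covariance assembled from small 4x4, 20x20 and 8x8 factors.
rng = np.random.default_rng(0)
def random_spd(n):                       # random symmetric positive definite factor
    A = rng.standard_normal((n, n))
    return A @ A.T / n + np.eye(n)

Sigma = three_domain_covariance(random_spd(4), random_spd(20), random_spd(8))
```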

  12. The Performance Analysis Based on SAR Sample Covariance Matrix

    Directory of Open Access Journals (Sweden)

    Esra Erten

    2012-03-01

    Full Text Available Multi-channel systems appear in several fields of application in science. In the Synthetic Aperture Radar (SAR) context, multi-channel systems may refer to different domains, such as multi-polarization, multi-interferometric or multi-temporal data, or even a combination of them. Due to the inherent speckle phenomenon present in SAR images, a statistical description of the data is almost mandatory for its utilization. The complex images acquired over natural media present in general zero-mean circular Gaussian characteristics. In this case, second-order statistics such as the multi-channel covariance matrix fully describe the data. For practical situations, however, the covariance matrix has to be estimated using a limited number of samples, and this sample covariance matrix follows the complex Wishart distribution. In this context, the eigendecomposition of the multi-channel covariance matrix has been shown to be of high relevance in different areas regarding the physical properties of the imaged scene. Specifically, the maximum eigenvalue of the covariance matrix has been frequently used in different applications, such as target or change detection, estimation of the dominant scattering mechanism in polarimetric data, moving target indication, etc. In this paper, the statistical behavior of the maximum eigenvalue derived from the eigendecomposition of the sample multi-channel covariance matrix of multi-channel SAR images is presented in simplified form for the SAR community. Validation is performed against simulated data, and examples of estimation and detection problems using the analytical expressions are given as well.
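
    As a small illustration of the statistic in question, the sketch below forms the per-pixel sample covariance from a stack of multi-channel looks and extracts its largest eigenvalue. The array layout and number of looks are assumptions; the paper's contribution is the analytical distribution of this quantity, which is not reproduced here.

```python
import numpy as np

def max_eigenvalue_map(stack):
    """stack: (n_looks, n_channels, H, W) complex SAR observations."""
    n, c, h, w = stack.shape
    z = stack.reshape(n, c, -1)                        # flatten pixels
    # per-pixel sample covariance: (1/n) * sum_k z_k z_k^H
    C = np.einsum('nip,njp->pij', z, z.conj()) / n
    return np.linalg.eigvalsh(C)[:, -1].reshape(h, w)  # largest eigenvalue per pixel
```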

  13. Computational protein design quantifies structural constraints on amino acid covariation.

    Directory of Open Access Journals (Sweden)

    Noah Ollikainen

    Full Text Available Amino acid covariation, where the identities of amino acids at different sequence positions are correlated, is a hallmark of naturally occurring proteins. This covariation can arise from multiple factors, including selective pressures for maintaining protein structure, requirements imposed by a specific function, or from phylogenetic sampling bias. Here we employed flexible backbone computational protein design to quantify the extent to which protein structure has constrained amino acid covariation for 40 diverse protein domains. We find significant similarities between the amino acid covariation in alignments of natural protein sequences and sequences optimized for their structures by computational protein design methods. These results indicate that the structural constraints imposed by protein architecture play a dominant role in shaping amino acid covariation and that computational protein design methods can capture these effects. We also find that the similarity between natural and designed covariation is sensitive to the magnitude and mechanism of backbone flexibility used in computational protein design. Our results thus highlight the necessity of including backbone flexibility to correctly model precise details of correlated amino acid changes and give insights into the pressures underlying these correlations.

  14. Random sampling and validation of covariance matrices of resonance parameters

    Science.gov (United States)

    Plevnik, Lucijan; Zerovnik, Gašper

    2017-09-01

    Analytically exact methods for random sampling of arbitrarily correlated parameters are presented. Emphasis is given, on the one hand, to possible inconsistencies in the covariance data, concentrating on positive semi-definiteness and on consistent sampling of correlated, inherently positive parameters, and, on the other hand, to optimization of the implementation of the methods themselves. The methods have been applied in the program ENDSAM, written in Fortran, which, from a file of a chosen isotope in ENDF-6 format from a nuclear data library, produces an arbitrary number of new files in ENDF-6 format containing randomly sampled values of the resonance parameters (in accordance with the corresponding covariance matrices) in place of the original values. The source code for the program ENDSAM is available from the OECD/NEA Data Bank. The program works in the following steps: it reads resonance parameters and their covariance data from the nuclear data library, checks whether the covariance data are consistent, and produces random samples of the resonance parameters. The code has been validated with both realistic and artificial data to show that the produced samples are statistically consistent. Additionally, the code was used to validate covariance data in existing nuclear data libraries. A list of inconsistencies observed in the covariance data of resonance parameters in ENDF/B-VII.1, JEFF-3.2 and JENDL-4.0 is presented. For now, the work has been limited to resonance parameters; however, the methods presented are general and can in principle be extended to sampling and validation of any nuclear data.
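
    A minimal sketch of the sampling step, assuming a multivariate Gaussian model: repair small negative eigenvalues that would make the covariance indefinite, then draw correlated samples through a Cholesky factor. This mirrors the read-check-sample flow described above but is not the ENDSAM code; for inherently positive parameters such codes typically sample in a transformed (e.g. lognormal) space instead.

```python
import numpy as np

def sample_parameters(mean, cov, n_samples, seed=0):
    rng = np.random.default_rng(seed)
    # Consistency check: clip small negative eigenvalues (a common defect
    # in evaluated covariance files) to restore positive semi-definiteness.
    w, v = np.linalg.eigh(cov)
    if w.min() < 0:
        w = np.clip(w, 0.0, None)
        cov = (v * w) @ v.T                  # nearest-PSD style reconstruction
    L = np.linalg.cholesky(cov + 1e-12 * np.eye(len(mean)))
    z = rng.standard_normal((n_samples, len(mean)))
    return np.asarray(mean) + z @ L.T        # rows are correlated samples
```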

  15. Aerobot Sampling and Handling System Project

    Data.gov (United States)

    National Aeronautics and Space Administration — Honeybee Robotics proposes to: derive and document the functional and technical requirements for Aerobot surface sampling and sample handling across a range of...

  16. GeoLab Sample Handling System Project

    Data.gov (United States)

    National Aeronautics and Space Administration — Develop a robotic sample handling/manipulator system for the GeoLab glovebox. This work builds on earlier GeoLab work and a 2012 collaboration with a...

  17. Handling knowledge on osteoporosis - a qualitative study

    DEFF Research Database (Denmark)

    Nielsen, Dorthe; Huniche, Lotte; Brixen, Kim

    2013-01-01

    Scand J Caring Sci, 2012. The aim of this qualitative study was to increase understanding of the importance of osteoporosis information and knowledge for patients' ways of handling osteoporosis in their everyday lives. Interviews were performed with 14 patients recruited from two English university hospitals and 12 patients from a Danish university hospital. Critical psychology was used as a theoretical framework for the data analysis, which aimed at shedding light on patients' ways of conducting everyday life with osteoporosis. The themes that emerged from the analysis showed that life conditions influenced the way in which risk, pain and osteoporosis were handled. Everyday life was also influenced by patients' attitude to treatment. The patients who were experiencing emotional difficulties in handling osteoporosis were not those...

  18. Management of transport and handling contracts

    CERN Document Server

    Rühl, I

    2004-01-01

    This paper shall outline the content, application and management strategies for the various contracts related to transport and handling activities. In total, the two sections, Logistics and Handling Maintenance, are in charge of 27 (!) contracts ranging from small supply contracts to big industrial support contracts. The activities, as well as the contracts, can generally be divided into four main topics: "Vehicle Fleet Management"; "Supply, Installation and Commissioning of Lifting and Hoisting Equipment"; "Equipment Maintenance"; and "Industrial Support for Transport and Handling". Each activity and contract requires a different approach and permanent adaptation to CERN's often-changing requirements. In particular, the management of, and the difficulties experienced with, the contracts E072 "Maintenance of lifting and hoisting equipment", F420 "Supply of seven overhead traveling cranes for LHC" and S090/S103 "Industrial support for transport and handling" will be explained in detail.

  19. Live-trapping and handling brown bear

    Data.gov (United States)

    US Fish and Wildlife Service, Department of the Interior — This paper reports techniques developed to live trap and handle brown bears on the Kodiak National Wildlife Refuge. The brown bears (Ursus middendorffi) on the...

  20. Safety Training: "Manual Handling" course in September

    CERN Multimedia

    Safety Training, HSE Unit

    2016-01-01

    The next "Manual Handling" course will be given, in French, on 26 September 2016. This course is designed for anyone required to carry out manual handling of loads in the course of their work.   The main objective of this course is to adopt and apply the basic principles of physical safety and economy of effort. There are places available. If you are interested in following this course, please fill an EDH training request via our catalogue. 

  1. Analysis of longitudinal data from animals with missing values using SPSS.

    Science.gov (United States)

    Duricki, Denise A; Soleman, Sara; Moon, Lawrence D F

    2016-06-01

    Testing of therapies for disease or injury often involves the analysis of longitudinal data from animals. Modern analytical methods have advantages over conventional methods (particularly when some data are missing), yet they are not used widely by preclinical researchers. Here we provide an easy-to-use protocol for the analysis of longitudinal data from animals, and we present a click-by-click guide for performing suitable analyses using the statistical package IBM SPSS Statistics software (SPSS). We guide readers through the analysis of a real-life data set obtained when testing a therapy for brain injury (stroke) in elderly rats. If a few data points are missing, as in this example data set (for example, because of animal dropout), repeated-measures analysis of covariance may fail to detect a treatment effect. An alternative analysis method, such as the use of linear models (with various covariance structures), and analysis using restricted maximum likelihood estimation (to include all available data) can be used to better detect treatment effects. This protocol takes 2 h to carry out.

  2. Analysis of longitudinal data from animals where some data are missing in SPSS

    Science.gov (United States)

    Duricki, DA; Soleman, S; Moon, LDF

    2017-01-01

    Testing of therapies for disease or injury often involves analysis of longitudinal data from animals. Modern analytical methods have advantages over conventional methods (particularly where some data are missing) yet are not used widely by pre-clinical researchers. We provide here an easy-to-use protocol for analysing longitudinal data from animals and present a click-by-click guide for performing suitable analyses using the statistical package SPSS. We guide readers through analysis of a real-life data set obtained when testing a therapy for brain injury (stroke) in elderly rats. We show that repeated-measures analysis of covariance failed to detect a treatment effect when a few data points were missing (due to animal drop-out), whereas analysis using an alternative method detected a beneficial effect of treatment; specifically, we demonstrate the superiority of linear models (with various covariance structures) analysed using restricted maximum likelihood estimation (to include all available data). This protocol takes two hours to follow. PMID:27196723
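
    Both records describe the protocol in SPSS; as a rough open-source parallel (not part of the protocol itself), a linear mixed model fitted by REML retains animals with partially missing timepoints in the analysis. The column names below are assumptions.

```python
import statsmodels.formula.api as smf

def fit_longitudinal(df):
    """df: long-format data with columns score, week, treatment, animal.
    Rows with a missing score are dropped, but an animal's remaining
    timepoints still contribute, unlike complete-case repeated-measures ANOVA."""
    data = df.dropna(subset=["score"])
    model = smf.mixedlm("score ~ week * treatment", data=data, groups="animal")
    return model.fit(reml=True)          # REML estimation over all available rows
```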

  3. A Modified General Location Model for Noncompliance with Missing Data: Revisiting the New York City School Choice Scholarship Program Using Principal Stratification

    Science.gov (United States)

    Jin, Hui; Barnard, John; Rubin, Donald B.

    2010-01-01

    Missing data, especially when coupled with noncompliance, are a challenge even in the setting of randomized experiments. Although some existing methods can address each complication, it can be difficult to handle both of them simultaneously. This is true in the example of the New York City School Choice Scholarship Program, where both the…

  4. Specialization and Flexibility in Port Cargo Handling

    Directory of Open Access Journals (Sweden)

    Hakkı KİŞİ

    2016-11-01

    Full Text Available Cargo handling appears to be the fundamental function of ports. In this context, the question of the type of equipment and the capacity rate needs to be tackled with respect to cargo handling principles. The purpose of this study is to discuss the types of equipment to be used in ports, relating the matter to costs and capacity. The question is studied with a basic economic theoretical approach. Various conditions, such as port location, size, resources, cargo traffic, ships, etc., are given parameters that dictate the type and specification of the cargo handling equipment. Besides, a simple approach in the context of the cost-capacity relation can be useful in deciding whether to use specialized or flexible equipment. Port equipment is sometimes expected to be flexible enough to handle as many types of cargo as possible, and sometimes to be specialized to handle one specific type of cargo. The cases that might be suitable for those alternatives are discussed from an economic point of view in this article. Consequently, effectiveness and efficiency criteria play important roles in determining the handling equipment in ports.

  5. Religious Serpent Handling and Community Relations.

    Science.gov (United States)

    Williamson, W Paul; Hood, Ralph W

    2015-01-01

    Christian serpent handling sects of Appalachia comprise a community that has long been mischaracterized and marginalized by the larger communities surrounding them. To explore this dynamic, this article traces the emergence of serpent handling in Appalachia and the emergence of anti-serpent-handling state laws, which eventually failed to curb the practice, as local communities gave serpent handling groups support. We present two studies to consider for improving community relations with serpent handling sects. In study 1, we present data relating the incidence of reported serpent-bite deaths with the rise of anti-serpent-handling laws and their eventual abatement, based on increasing acceptance of serpent handlers by the larger community. Study 2 presents interview data on serpent bites and death that provide explanations for these events from the cultural and religious perspective. We conclude that first-hand knowledge about serpent handlers, and other marginalized groups, helps to lessen suspicion and allows them to be seen as not much different, which are tendencies that are important for promoting inter-community harmony.

  6. ML-MG: Multi-label Learning with Missing Labels Using a Mixed Graph

    KAUST Repository

    Wu, Baoyuan

    2015-12-07

    This work focuses on the problem of multi-label learning with missing labels (MLML), which aims to label each test instance with multiple class labels given training instances that have an incomplete/partial set of these labels (i.e. some of their labels are missing). To handle missing labels, we propose a unified model of label dependencies by constructing a mixed graph, which jointly incorporates (i) instance-level similarity and class co-occurrence as undirected edges and (ii) semantic label hierarchy as directed edges. Unlike most MLML methods, we formulate this learning problem transductively as a convex quadratic matrix optimization problem that encourages training label consistency and encodes both types of label dependencies (i.e. undirected and directed edges) using quadratic terms and hard linear constraints. The alternating direction method of multipliers (ADMM) can be used to exactly and efficiently solve this problem. To evaluate our proposed method, we consider two popular applications (image and video annotation), where the label hierarchy can be derived from WordNet. Experimental results show that our method achieves a significant improvement over state-of-the-art methods in performance and robustness to missing labels.

  7. Handling of damaged spent fuel at Ignalina NPP

    Energy Technology Data Exchange (ETDEWEB)

    Ziehm, Ronny [NUKEM Technologies GmbH (Germany); Bechtel, Sascha [Hoefer und Bechtel GmbH (Germany)

    2012-11-01

    The Ignalina Nuclear Power Plant (INPP) is situated in the north-eastern part of Lithuania, close to the borders with Latvia and Belarus and on the shore of Lake Druksiai, approximately 120 km from the capital city Vilnius. The power plant has two RBMK-type water-cooled, graphite-moderated pressure tube reactors, each with a design capacity of 1500 MW(e). Unit 1 started operation in 1983 and Unit 2 in 1987. In the period 1987-1991 (i.e. the Soviet period), a small proportion of the existing spent nuclear fuel suffered minor to major damage. Within the framework of the decommissioning of INPP, it is necessary that this damaged fuel is retrieved from the storage pools and stored in an interim spent fuel store. NUKEM Technologies GmbH (Germany), as part of a consortium with GNS mbH (Germany), was awarded the contract for an Interim Spent Fuel Storage Facility (B1-ISFSF). This contract includes the design, procurement, manufacturing, supply and installation of a damaged fuel handling system (DFHS). The objective of this DFHS is the safe handling of spent nuclear fuel with major damage resulting in rupture of the cladding and potential loss of fuel pellets from within the cladding. Typical damage includes bent fuel bundle skeletons, broken fuel rods, missing or damaged end plugs, very small gaps between fuel bundles, and bent central rods between fuel bundles. The presented concept is designed for Ignalina NPP. However, the design is developed more generally, so that these proven techniques can solve similar problems with damaged fuel at other nuclear power plants. (orig.)

  8. Substituting missing data in compositional analysis

    Energy Technology Data Exchange (ETDEWEB)

    Real, Carlos, E-mail: carlos.real@usc.es [Area de Ecologia, Departamento de Biologia Celular y Ecologia, Escuela Politecnica Superior, Universidad de Santiago de Compostela, 27002 Lugo (Spain); Angel Fernandez, J.; Aboal, Jesus R.; Carballeira, Alejo [Area de Ecologia, Departamento de Biologia Celular y Ecologia, Facultad de Biologia, Universidad de Santiago de Compostela, 15782 Santiago de Compostela (Spain)

    2011-10-15

    Multivariate analysis of environmental data sets requires the absence of missing values or their substitution by small values. However, if the data is transformed logarithmically prior to the analysis, this solution cannot be applied because the logarithm of a small value might become an outlier. Several methods for substituting the missing values can be found in the literature, although none of them guarantees that no distortion of the structure of the data set is produced. We propose a method for the assessment of these distortions which can be used for deciding whether or not to retain the samples or variables containing missing values and for the investigation of the performance of different substitution techniques. The method analyzes the structure of the distances among samples using Mantel tests. We present an application of the method to PCDD/F data measured in samples of terrestrial moss as part of a biomonitoring study. Highlights: Missing values in multivariate data sets must be substituted prior to analysis, and the substituted values can modify the structure of the data set. We developed a method, simple and based on the Mantel test, to estimate the magnitude of these alterations; it allowed the identification of problematic variables in a sample data set. In summary, a method is presented for the assessment of the possible distortions in multivariate analysis caused by the substitution of missing values.
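
    A minimal sketch of the core comparison, assuming two complete data matrices over the same samples (for instance, the data set with substituted values versus the data set with the affected variables removed): correlate their inter-sample distance matrices and assess significance by permutation. Details such as the distance metric and permutation scheme differ from the paper.

```python
import numpy as np
from scipy.spatial.distance import pdist, squareform

def mantel(X_a, X_b, n_perm=999, seed=0):
    """Permutation Mantel test between the distance structures of X_a and X_b."""
    rng = np.random.default_rng(seed)
    d_a, d_b = pdist(X_a), pdist(X_b)            # condensed distance vectors
    r_obs = np.corrcoef(d_a, d_b)[0, 1]
    D_b = squareform(d_b)
    n, count = X_a.shape[0], 0
    for _ in range(n_perm):
        p = rng.permutation(n)                   # relabel samples in one matrix
        count += np.corrcoef(d_a, squareform(D_b[np.ix_(p, p)]))[0, 1] >= r_obs
    return r_obs, (count + 1) / (n_perm + 1)     # one-sided permutation p-value
```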

  9. [Imputing missing data in public health: general concepts and application to dichotomous variables].

    Science.gov (United States)

    Hernández, Gilma; Moriña, David; Navarro, Albert

    The presence of missing data in collected variables is common in health surveys, but the subsequent imputation thereof at the time of analysis is not. Working with imputed data may have certain benefits regarding the precision of the estimators and the unbiased identification of associations between variables. The imputation process is probably still little understood by many non-statisticians, who view this process as highly complex and with an uncertain goal. To clarify these questions, this note aims to provide a straightforward, non-exhaustive overview of the imputation process to enable public health researchers to ascertain its strengths. All of this is set in the context of dichotomous variables, which are commonplace in public health. To illustrate these concepts, an example in which missing data are handled by means of simple and multiple imputation is introduced. Copyright © 2017 SESPAS. Published by Elsevier España, S.L.U. All rights reserved.
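
    In the spirit of the note's example, here is a toy contrast between single (majority-class) imputation and multiple imputation for a dichotomous variable. The logistic imputation model, the variable names, and the pooling step left to the reader are all illustrative assumptions.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def impute_binary(y, X, m=5, seed=0):
    rng = np.random.default_rng(seed)
    obs = ~np.isnan(y)
    # Single imputation: every gap gets the majority class, which
    # understates uncertainty and can distort associations.
    y_mode = np.where(obs, y, np.round(y[obs].mean()))
    # Multiple imputation: draw each gap from a fitted predictive model,
    # producing m completed datasets whose analyses are later pooled.
    clf = LogisticRegression().fit(X[obs], y[obs].astype(int))
    p = clf.predict_proba(X[~obs])[:, 1]
    y_mi = []
    for _ in range(m):
        y_k = y.copy()
        y_k[~obs] = rng.random(p.size) < p     # stochastic draw per gap
        y_mi.append(y_k)
    return y_mode, y_mi
```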

  10. Planned Missing Data Designs in Educational Psychology Research

    Science.gov (United States)

    Rhemtulla, Mijke; Hancock, Gregory R.

    2016-01-01

    Although missing data are often viewed as a challenge for applied researchers, in fact missing data can be highly beneficial. Specifically, when the amount of missing data on specific variables is carefully controlled, a balance can be struck between statistical power and research costs. This article presents the issue of planned missing data by…

  11. Bayesian adjustment for covariate measurement errors: a flexible parametric approach.

    Science.gov (United States)

    Hossain, Shahadut; Gustafson, Paul

    2009-05-15

    In most epidemiological investigations, the study units are people, the outcome variable (or the response) is a health-related event, and the explanatory variables are usually environmental and/or socio-demographic factors. The fundamental task in such investigations is to quantify the association between the explanatory variables (covariates/exposures) and the outcome variable through a suitable regression model. The accuracy of such quantification depends on how precisely the relevant covariates are measured. In many instances, we cannot measure some of the covariates accurately. Rather, we can measure noisy (mismeasured) versions of them. In statistical terminology, mismeasurement in continuous covariates is known as measurement errors or errors-in-variables. Regression analyses based on mismeasured covariates lead to biased inference about the true underlying response-covariate associations. In this paper, we suggest a flexible parametric approach for avoiding this bias when estimating the response-covariate relationship through a logistic regression model. More specifically, we consider the flexible generalized skew-normal and the flexible generalized skew-t distributions for modeling the unobserved true exposure. For inference and computational purposes, we use Bayesian Markov chain Monte Carlo techniques. We investigate the performance of the proposed flexible parametric approach in comparison with a common flexible parametric approach through extensive simulation studies. We also compare the proposed method with the competing flexible parametric method on a real-life data set. Though emphasis is put on the logistic regression model, the proposed method is unified and is applicable to the other generalized linear models, and to other types of non-linear regression models as well. (c) 2009 John Wiley & Sons, Ltd.

  12. Miss World in the Study of the Philosophy of Science

    Directory of Open Access Journals (Sweden)

    Suyahmo Suyahmo

    2013-06-01

    Full Text Available Ontologically, Miss World substantially emphasizes the value of beauty, and beauty is a universal value that every person, especially women, craves. When the value of beauty is actualized within the geographic region of Indonesia, it cannot by itself be separated from the prevailing standards of values and norms, or from the prevailing philosophy and ideology. The existence of this philosophical and ideological standard gave birth to a difference in perception between those who agreed and those who disagreed with Miss World being held in Indonesia. Despite this distinction, and although the contest was philosophically and ideologically considered not to reflect the culture of the nation, government policy meant that Miss World was still held. These considerations were based on more practical values and pragmatic interests, because the contest was deemed able to bring benefits to the nation, especially in terms of the economy, tourism, and other areas.

  13. Missed injury and the tertiary trauma survey.

    Science.gov (United States)

    Thomson, Charles B; Greaves, Ian

    2008-01-01

    Missed injury in the context of major trauma remains a persistent problem, from both a clinical and a medico-legal point of view. Estimates of the incidence vary widely, depending on the precise parameters of the studied population, the definition of missed injury and the extent of follow-up, but may be as high as 38%. The tertiary survey, in which formal repeated examination of the patient is undertaken after initial resuscitation and treatment have taken place, has been suggested as a way of identifying injuries not found at presentation. This paper appraises the concept of the tertiary survey, and also reviews the literature on missed injury in order to identify the risk factors, the types of injury and the reasons for error.

  14. Management of missing maxillary lateral incisors.

    Science.gov (United States)

    Sabri, R

    1999-01-01

    Missing maxillary lateral incisors create an esthetic problem with specific orthodontic and prosthetic considerations. The purpose of this article is to describe treatment protocols and problems encountered in the management of this disorder. The two common treatment options are orthodontic space opening for future restorations or orthodontic space closure using canines to replace the missing maxillary lateral incisors. The required amount of space opening and the various prosthetic options are discussed. The methods for reshaping canines in orthodontic space closure and building them up to simulate lateral incisors also are described. The indications, advantages and disadvantages of both treatment modalities are outlined to help clinicians make decisions in borderline situations. Teamwork between the orthodontist, general practitioner and restorative dentist is important when analyzing factors related to individual patients and establishing overall treatment plans. This also will allow treatment modalities and the various options for replacing missing maxillary lateral incisors in space opening to be discussed between team members and the patient.

  15. Near-misses and future disaster preparedness.

    Science.gov (United States)

    Dillon, Robin L; Tinsley, Catherine H; Burns, William J

    2014-10-01

    Disasters garner attention when they occur, and organizations commonly extract valuable lessons from visible failures, adopting new behaviors in response. For example, the United States saw numerous security policy changes following the September 11 terrorist attacks and emergency management and shelter policy changes following Hurricane Katrina. But what about those events that occur that fall short of disaster? Research that examines prior hazard experience shows that this experience can be a mixed blessing. Prior experience can stimulate protective measures, but sometimes prior experience can deceive people into feeling an unwarranted sense of safety. This research focuses on how people interpret near-miss experiences. We demonstrate that when near-misses are interpreted as disasters that did not occur and thus provide the perception that the system is resilient to the hazard, people illegitimately underestimate the danger of subsequent hazardous situations and make riskier decisions. On the other hand, if near-misses can be recognized and interpreted as disasters that almost happened and thus provide the perception that the system is vulnerable to the hazard, this will counter the basic "near-miss" effect and encourage mitigation. In this article, we use these distinctions between resilient and vulnerable near-misses to examine how people come to define an event as either a resilient or vulnerable near-miss, as well as how this interpretation influences their perceptions of risk and their future preparedness behavior. Our contribution is in highlighting the critical role that people's interpretation of the prior experience has on their subsequent behavior and in measuring what shapes this interpretation. © 2014 Society for Risk Analysis.

  16. Missed opportunities to diagnose child physical abuse.

    Science.gov (United States)

    Thorpe, Elizabeth L; Zuckerbraun, Noel S; Wolford, Jennifer E; Berger, Rachel P

    2014-11-01

    This study aimed to determine the incidence of missed opportunities to diagnose abuse in a cohort of children with healing abusive fractures and to identify patterns present during previous medical visits that could lead to an earlier diagnosis of abuse. This is a retrospective descriptive study of a 7-year consecutive sample of children diagnosed with child abuse at a single children's hospital. Children who had a healing fracture diagnosed on skeletal survey and a diagnosis of child abuse were included. We further collected data for the medical visit that led to the diagnosis of child abuse and for any previous medical visits that the subjects had during the 6 months preceding the diagnosis of abuse. All previous visits were classified either as a potential missed opportunity to diagnose abuse or as an unrelated previous visit, and the differences were analyzed. Median age at time of abuse diagnosis was 3.9 months. Forty-eight percent (37/77) of the subjects had at least 1 previous visit, and 33% (25/77) of those had at least 1 missed previous visit. Multiple missed previous visits for the same symptoms were recorded in 7 (25%) of these patients. The most common reason for presentation at a missed previous visit was a physical examination sign suggestive of trauma (i.e., bruising, swelling). Missed previous visits occurred across all care settings. One-third of young children with healing abusive fractures had previous medical visits where the diagnosis of abuse was not recognized. These children most commonly had signs of trauma on physical examination at the previous visits.

  17. An Empirical State Error Covariance Matrix for Batch State Estimation

    Science.gov (United States)

    Frisbee, Joseph H., Jr.

    2011-01-01

    State estimation techniques serve effectively to provide mean state estimates. However, the state error covariance matrices provided as part of these techniques suffer from some degree of lack of confidence in their ability to adequately describe the uncertainty in the estimated states. A specific problem with the traditional form of state error covariance matrices is that they represent only a mapping of the assumed observation error characteristics into the state space. Any errors that arise from other sources (environment modeling, precision, etc.) are not directly represented in a traditional, theoretical state error covariance matrix. Consider that an actual observation contains only measurement error and that an estimated observation contains all other errors, known and unknown. It then follows that a measurement residual (the difference between expected and observed measurements) contains all errors for that measurement. Therefore, a direct and appropriate inclusion of the actual measurement residuals in the state error covariance matrix will result in an empirical state error covariance matrix. This empirical state error covariance matrix will fully account for the error in the state estimate. By way of a literal reinterpretation of the equations involved in the weighted least squares estimation algorithm, it is possible to arrive at an appropriate, and formally correct, empirical state error covariance matrix. The first specific step of the method is to use the average form of the weighted measurement residual variance performance index rather than its usual total weighted residual form. Next it is helpful to interpret the solution to the normal equations as the average of a collection of sample vectors drawn from a hypothetical parent population. From here, using a standard statistical analysis approach, it follows directly how to determine the standard empirical state error covariance matrix. This matrix will contain the total uncertainty in the
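
    A rough numerical reading of this idea, offered as a simplified interpretation rather than the paper's derivation: propagate the actual residual scatter, instead of the assumed measurement noise covariance, through the batch weighted-least-squares normal equations in sandwich form.

```python
import numpy as np

def empirical_state_covariance(H, W, residuals):
    """H: measurement Jacobian, W: weight matrix, residuals: observed minus computed."""
    N = H.T @ W @ H                          # normal-equation information matrix
    N_inv = np.linalg.inv(N)
    # Push the sample residual scatter (in place of the assumed noise
    # covariance) through the usual sandwich form.
    S = H.T @ W @ np.outer(residuals, residuals) @ W @ H
    return N_inv @ S @ N_inv
```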

  18. On testing the missing at random assumption

    DEFF Research Database (Denmark)

    Jaeger, Manfred

    2006-01-01

    Most approaches to learning from incomplete data are based on the assumption that unobserved values are missing at random (mar). While the mar assumption, as such, is not testable, it can become testable in the context of other distributional assumptions, e.g. the naive Bayes assumption. In this ...

  19. Beamforming using subspace estimation from a diagonally averaged sample covariance.

    Science.gov (United States)

    Quijano, Jorge E; Zurk, Lisa M

    2017-08-01

    The potential benefit of a large-aperture sonar array for high resolution target localization is often challenged by the lack of sufficient data required for adaptive beamforming. This paper introduces a Toeplitz-constrained estimator of the clairvoyant signal covariance matrix corresponding to multiple far-field targets embedded in background isotropic noise. The estimator is obtained by averaging along subdiagonals of the sample covariance matrix, followed by covariance extrapolation using the method of maximum entropy. The sample covariance is computed from limited data snapshots, a situation commonly encountered with large-aperture arrays in environments characterized by short periods of local stationarity. Eigenvectors computed from the Toeplitz-constrained covariance are used to construct signal-subspace projector matrices, which are shown to reduce background noise and improve detection of closely spaced targets when applied to subspace beamforming. Monte Carlo simulations corresponding to increasing array aperture suggest convergence of the proposed projector to the clairvoyant signal projector, thereby outperforming the classic projector obtained from the sample eigenvectors. Beamforming performance of the proposed method is analyzed using simulated data, as well as experimental data from the Shallow Water Array Performance experiment.
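
    A minimal sketch of the diagonal-averaging step and the resulting signal-subspace projector, assuming snapshots from an N-element uniform line array stacked as columns of X (the maximum-entropy extrapolation step described in the abstract is omitted):

    ```python
    import numpy as np
    from scipy.linalg import toeplitz

    def toeplitz_averaged_covariance(X):
        """Average subdiagonals of the sample covariance of X (N sensors x
        L snapshots), imposing the Toeplitz structure expected for far-field
        sources in isotropic noise on a uniform array."""
        N, L = X.shape
        R = (X @ X.conj().T) / L                            # sample covariance
        col = np.array([np.diag(R, -k).mean() for k in range(N)])
        return toeplitz(col)                                # Hermitian Toeplitz

    def signal_subspace_projector(R, num_sources):
        """Projector onto the span of the dominant eigenvectors of R."""
        w, V = np.linalg.eigh(R)
        Vs = V[:, np.argsort(w)[::-1][:num_sources]]
        return Vs @ Vs.conj().T
    ```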

  20. Structural and Maturational Covariance in Early Childhood Brain Development.

    Science.gov (United States)

    Geng, Xiujuan; Li, Gang; Lu, Zhaohua; Gao, Wei; Wang, Li; Shen, Dinggang; Zhu, Hongtu; Gilmore, John H

    2017-03-01

    Brain structural covariance networks (SCNs) composed of regions with correlated variation are altered in neuropsychiatric disease and change with age. Little is known about the development of SCNs in early childhood, a period of rapid cortical growth. We investigated the development of structural and maturational covariance networks, including default, dorsal attention, primary visual and sensorimotor networks, in a longitudinal population of 118 children from birth to 2 years of age and compared them with intrinsic functional connectivity networks. We found that the structural covariance of all networks exhibits strong correlations mostly limited to their seed regions. By age 2, default and dorsal attention structural networks are much less distributed compared with their functional maps. The maturational covariance maps, however, revealed significant couplings in rates of change between distributed regions, which partially recapitulate their functional networks. The structural and maturational covariance of the primary visual and sensorimotor networks shows similar patterns to the corresponding functional networks. Results indicate that functional networks are in place prior to structural networks, that correlated structural patterns in adults may arise in part from coordinated cortical maturation, and that regional co-activation in functional networks may guide and refine the maturation of SCNs over childhood development. © The Author 2016. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com.

  1. Network-level structural covariance in the developing brain.

    Science.gov (United States)

    Zielinski, Brandon A; Gennatas, Efstathios D; Zhou, Juan; Seeley, William W

    2010-10-19

    Intrinsic or resting state functional connectivity MRI and structural covariance MRI have begun to reveal the adult human brain's multiple network architectures. How and when these networks emerge during development remains unclear, but understanding ontogeny could shed light on network function and dysfunction. In this study, we applied structural covariance MRI techniques to 300 children in four age categories (early childhood, 5-8 y; late childhood, 8.5-11 y; early adolescence, 12-14 y; late adolescence, 16-18 y) to characterize gray matter structural relationships between cortical nodes that make up large-scale functional networks. Network nodes identified from eight widely replicated functional intrinsic connectivity networks served as seed regions to map whole-brain structural covariance patterns in each age group. In general, structural covariance in the youngest age group was limited to seed and contralateral homologous regions. Networks derived using primary sensory and motor cortex seeds were already well-developed in early childhood but expanded in early adolescence before pruning to a more restricted topology resembling adult intrinsic connectivity network patterns. In contrast, language, social-emotional, and other cognitive networks were relatively undeveloped in younger age groups and showed increasingly distributed topology in older children. The so-called default-mode network provided a notable exception, following a developmental trajectory more similar to the primary sensorimotor systems. Relationships between functional maturation and structural covariance network topology warrant future exploration.

  2. A simple procedure for the comparison of covariance matrices

    Science.gov (United States)

    2012-01-01

    Background Comparing the covariation patterns of populations or species is a basic step in the evolutionary analysis of quantitative traits. Here I propose a new, simple method to make this comparison in two population samples that is based on comparing the variance explained in each sample by the eigenvectors of its own covariance matrix with that explained by the covariance matrix eigenvectors of the other sample. The rationale of this procedure is that the matrix eigenvectors of two similar samples would explain similar amounts of variance in the two samples. I use computer simulation and morphological covariance matrices from the two morphs in a marine snail hybrid zone to show how the proposed procedure can be used to measure the contribution of the matrices' orientation and shape to the overall differentiation. Results I show how this procedure can detect even modest differences between matrices calculated with moderately sized samples, and how it can be used as the basis for more detailed analyses of the nature of these differences. Conclusions The new procedure constitutes a useful resource for the comparison of covariance matrices. It could fill the gap between procedures resulting in a single, overall measure of differentiation, and analytical methods based on multiple model comparison not providing such a measure. PMID:23171139
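
    The rationale translates into a few lines of code; this sketch (my own naming, not the paper's implementation) compares the proportion of variance that each sample's leading eigenvectors explain in the other sample:

    ```python
    import numpy as np

    def variance_explained(S, V, k):
        """Fraction of the variance in covariance S explained by the
        first k columns of the eigenvector matrix V."""
        return np.trace(V[:, :k].T @ S @ V[:, :k]) / np.trace(S)

    def compare_covariances(S1, S2, k):
        _, V1 = np.linalg.eigh(S1)
        _, V2 = np.linalg.eigh(S2)
        V1, V2 = V1[:, ::-1], V2[:, ::-1]  # order by decreasing eigenvalue
        # For similar matrices each ratio should be close to 1: the other
        # sample's eigenvectors explain almost as much variance as its own.
        return (variance_explained(S1, V2, k) / variance_explained(S1, V1, k),
                variance_explained(S2, V1, k) / variance_explained(S2, V2, k))
    ```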

  3. Effective Teaching Practices in Handling Non Readers

    Directory of Open Access Journals (Sweden)

    Jacklyn S. Dacalos

    2016-08-01

    Full Text Available The study determined the effective teaching practices in handling nonreaders. It sought to: describe the adjustments, effective strategies, and scaffolds utilized by teachers in handling nonreaders; differentiate the teachers' reading adjustments, strategies and scaffolds in teaching nonreaders; analyze the teaching reading efficiency of nonreaders using effective teaching reading strategies; and find significant correlations between nonreaders' grades and reading teachers' reading adjustments, strategies and scaffolds. This study utilized mixed methods of research. Case studies of five public school teachers were selected as primary subjects, who were interviewed on handling nonreaders in the areas of adjustments, strategies, and reading scaffolds. Actual teaching observation was conducted at each of the five subjects' most convenient time. In ascertaining the nonreaders' academic performance, the students' grades in the English subject were analyzed using a within-subject t-test design. Teaching nonreaders to read and understand the lesson better is an arduous task; yet, once done with effectiveness and passion, it yields a great amount of learning success. Effective teaching practices in handling nonreaders comprised the use of teachers' adjustments, strategies, and scaffolds to establish reading mastery, exposing them to letter sounds and short stories, and the use of follow-up WH questions, which enhanced their reading performance significantly. Variations in the reading teachers' nature (as an enabler, a facilitator, a humanist, a behaviorist, and an expert) with regard to their teaching practices proved significant to students' reading effectiveness.

  4. Rotorcraft handling-qualities design criteria development

    Science.gov (United States)

    Aiken, Edwin W.; Lebacqz, J. Victor; Chen, Robert T. N.; Key, David L.

    1988-01-01

    Joint NASA/Army efforts at the Ames Research Center to develop rotorcraft handling-qualities design criteria began in earnest in 1975. Notable results were the UH-1H VSTOLAND variable stability helicopter, the VFA-2 camera-and-terrain-board simulator visual system, and the generic helicopter real-time mathematical model, ARMCOP. An initial series of handling-qualities studies was conducted to assess the effects of rotor design parameters, interaxis coupling, and various levels of stability and control augmentation. The ability to conduct in-flight handling-qualities research was enhanced by the development of the NASA/Army CH-47 variable-stability helicopter. Research programs conducted using this vehicle include vertical-response investigations, hover augmentation systems, and the effects of control-force characteristics. The handling-qualities data base was judged to be sufficient to allow an update of the military helicopter handling-qualities specification, MIL-H-8501. These efforts included not only the in-house experimental work but also contracted research and collaborative programs performed under the auspices of various international agreements. The report concludes by reviewing the topics that are currently most in need of work, and the plans for addressing these topics.

  5. The MISSE-9 Polymers and Composites Experiment Being Flown on the MISSE-Flight Facility

    Science.gov (United States)

    De Groh, Kim K.; Banks, Bruce A.

    2017-01-01

    Materials on the exterior of spacecraft in low Earth orbit (LEO) are subject to extremely harsh environmental conditions, including various forms of radiation (cosmic rays, ultraviolet, x-ray, and charged particle radiation), micrometeoroids and orbital debris, temperature extremes, thermal cycling, and atomic oxygen (AO). These environmental exposures can result in erosion, embrittlement and optical property degradation of susceptible materials, threatening spacecraft performance and durability. To increase our understanding of space environmental effects such as AO erosion and radiation induced embrittlement of spacecraft materials, NASA Glenn has developed a series of experiments flown as part of the Materials International Space Station Experiment (MISSE) missions on the exterior of the International Space Station (ISS). These experiments have provided critical LEO space environment durability data such as AO erosion yield values for many materials and mechanical properties changes after long term space exposure. In continuing these studies, a new Glenn experiment has been proposed, and accepted, for flight on the new MISSE-Flight Facility (MISSE-FF). This experiment is called the Polymers and Composites Experiment and it will be flown as part of the MISSE-9 mission, the inaugural mission of MISSE-FF. Figure 1 provides an artist's rendition of the MISSE-FF external platform on the ISS. The MISSE-FF is manifested for launch on SpaceX-13.

  6. Extreme eigenvalues of sample covariance and correlation matrices

    DEFF Research Database (Denmark)

    Heiny, Johannes

    This thesis is concerned with asymptotic properties of the eigenvalues of high-dimensional sample covariance and correlation matrices under an infinite fourth moment of the entries. In the first part, we study the joint distributional convergence of the largest eigenvalues of the sample covariance matrix of a p-dimensional heavy-tailed time series when p converges to infinity together with the sample size n. We generalize the growth rates of p existing in the literature. Assuming a regular variation condition with tail index α, we show that the extreme eigenvalues are essentially determined by the extreme order statistics from an array of iid random variables. The asymptotic behavior of the extreme eigenvalues is then derived routinely from classical extreme value theory. The resulting approximations are strikingly simple considering the high dimension of the problem at hand. We develop a theory for the point process of the normalized eigenvalues of the sample covariance matrix in the case where rows and columns of the data are linearly dependent. Based on the weak convergence of this point process we derive the limit laws of various functionals ...

  7. Extreme eigenvalues of sample covariance and correlation matrices

    DEFF Research Database (Denmark)

    Heiny, Johannes

    2017-01-01

    This thesis is concerned with asymptotic properties of the eigenvalues of high-dimensional sample covariance and correlation matrices under an infinite fourth moment of the entries. In the first part, we study the joint distributional convergence of the largest eigenvalues of the sample covariance matrix of a p-dimensional heavy-tailed time series when p converges to infinity together with the sample size n. We generalize the growth rates of p existing in the literature. Assuming a regular variation condition with tail index α, we show that the extreme eigenvalues are essentially determined by the extreme order statistics from an array of iid random variables. The asymptotic behavior of the extreme eigenvalues is then derived routinely from classical extreme value theory. The resulting approximations are strikingly simple considering the high dimension of the problem at hand. We develop a theory for the point process of the normalized eigenvalues of the sample covariance matrix in the case where rows and columns of the data are linearly dependent. Based on the weak convergence of this point process we derive the limit laws of various functionals ...

  8. Femtosecond Studies Of Coulomb Explosion Utilizing Covariance Mapping

    CERN Document Server

    Card, D A

    2000-01-01

    The studies presented herein elucidate details of the Coulomb explosion event initiated through the interaction of molecular clusters with an intense femtosecond laser beam (≥ 1 PW/cm²). Clusters studied include ammonia, titanium-hydrocarbon, pyridine, and 7-azaindole. Covariance analysis is presented as a general technique to study the dynamical processes in clusters and to discern whether the fragmentation channels are competitive. Positive covariance determinations identify concerted processes such as the concomitant explosion of protonated cluster ions of asymmetrical size. Anti-covariance mapping is exploited to distinguish competitive reaction channels such as the production of highly charged nitrogen atoms formed at the expense of the protonated members of a cluster ion ensemble. This technique is exemplified in each cluster system studied. Kinetic energy analyses, from experiment and simulation, are presented to fully understand the Coulomb explosion event. A cutoff study strongly suggests that...

  9. Sparse reduced-rank regression with covariance estimation

    KAUST Repository

    Chen, Lisha

    2014-12-08

    Improving the predictive performance of multiple-response regression relative to separate linear regressions is a challenging problem. On the one hand, it is desirable to seek model parsimony when facing a large number of parameters. On the other hand, for certain applications it is necessary to take into account the general covariance structure for the errors of the regression model. We assume a reduced-rank regression model and work with the likelihood function with general error covariance to achieve both objectives. In addition we propose to select relevant variables for reduced-rank regression by using a sparsity-inducing penalty, and to estimate the error covariance matrix simultaneously by using a similar penalty on the precision matrix. We develop a numerical algorithm to solve the penalized regression problem. In a simulation study and real data analysis, the new method is compared with two recent methods for multivariate regression and exhibits competitive performance in prediction and variable selection.

  10. Disjunct eddy covariance technique for trace gas flux measurements

    Science.gov (United States)

    Rinne, H. J. I.; Guenther, A. B.; Warneke, C.; de Gouw, J. A.; Luxembourg, S. L.

    A new approach for eddy covariance flux measurements is developed and applied to trace gas fluxes in the atmospheric surface layer. In the disjunct eddy covariance technique, quick samples with a relatively long time interval between them are taken instead of sampling air continuously. This subset of the time series, together with vertical wind velocity data at the corresponding sampling times, can be correlated to give a flux. Disjunct eddy sampling allows more time to analyze the trace gas concentrations and thus makes eddy covariance measurements possible with slower sensors. In this study a proton-transfer-reaction mass spectrometer with a response time of about 1 s was used with a disjunct eddy sampler to measure fluxes of volatile organic compounds from an alfalfa field. The measured daytime maximum methanol fluxes ranged from 1 mg m-2 h-1 from uncut alfalfa to 8 mg m-2 h-1 from freshly cut alfalfa. Night-time fluxes were around zero.
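
    The sampling idea fits in a few lines; a toy sketch with hypothetical, synchronously sampled arrays w (vertical wind) and c (scalar concentration):

    ```python
    import numpy as np

    def disjunct_ec_flux(w, c, step):
        """Flux from a disjunct subsample: the covariance of vertical wind w
        and concentration c, using only every `step`-th sample."""
        w_s, c_s = w[::step], c[::step]
        return np.mean((w_s - w_s.mean()) * (c_s - c_s.mean()))

    # e.g. one grab sample every 30 ticks of the sonic anemometer:
    # flux = disjunct_ec_flux(w, c, step=30)
    ```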

  11. Structure of irreducibly covariant quantum channels for finite groups

    Science.gov (United States)

    Mozrzymas, Marek; Studziński, Michał; Datta, Nilanjana

    2017-05-01

    We obtain an explicit characterization of linear maps, in particular quantum channels, which are covariant with respect to an irreducible representation U of a finite group G, whenever U ⊗ Uc is simply reducible (with Uc being the contragredient representation). Using the theory of group representations, we obtain the spectral decomposition of any such linear map. The eigenvalues and orthogonal projections arising in this decomposition are expressed entirely in terms of representation characteristics of the group G. This in turn yields necessary and sufficient conditions on the eigenvalues of any such linear map for it to be a quantum channel. We also obtain a wide class of quantum channels which are irreducibly covariant by construction. For two-dimensional irreducible representations of the symmetric group S(3) and the quaternion group Q, we also characterize quantum channels which are both irreducibly covariant and entanglement breaking.

  12. Covariate selection for the semiparametric additive risk model

    DEFF Research Database (Denmark)

    Martinussen, Torben; Scheike, Thomas

    2009-01-01

    This paper considers covariate selection for the additive hazards model. This model is particularly simple to study theoretically and its practical implementation has several major advantages over the similar methodology for the proportional hazards model. One complication compared with the proportional hazards model ... of observations. We do this by studying the properties of the so-called Dantzig selector in the setting of the additive risk model. Specifically, we establish a bound on how close the solution is to a true sparse signal in the case where the number of covariates is large. In a simulation study, we also compare ...

  13. A Lorentz-Covariant Connection for Canonical Gravity

    Directory of Open Access Journals (Sweden)

    Marc Geiller

    2011-08-01

    Full Text Available We construct a Lorentz-covariant connection in the context of first order canonical gravity with non-vanishing Barbero-Immirzi parameter. To do so, we start with the phase space formulation derived from the canonical analysis of the Holst action in which the second class constraints have been solved explicitly. This allows us to avoid the use of Dirac brackets. In this context, we show that there is a "unique" Lorentz-covariant connection which is commutative in the sense of the Poisson bracket, and which furthermore agrees with the connection found by Alexandrov using the Dirac bracket. This result opens a new way toward the understanding of Lorentz-covariant loop quantum gravity.

  14. Extreme eigenvalues of sample covariance and correlation matrices

    DEFF Research Database (Denmark)

    Heiny, Johannes

    2017-01-01

    This thesis is concerned with asymptotic properties of the eigenvalues of high-dimensional sample covariance and correlation matrices under an infinite fourth moment of the entries. In the first part, we study the joint distributional convergence of the largest eigenvalues of the sample covariance matrix of a p-dimensional heavy-tailed time series when p converges to infinity together with the sample size n. We generalize the growth rates of p existing in the literature. Assuming a regular variation condition with tail index α, we show that the extreme eigenvalues are essentially determined by the extreme order statistics from an array of iid random variables. The asymptotic behavior of the extreme eigenvalues is then derived routinely from classical extreme value theory. The resulting approximations are strikingly simple considering the high dimension of the problem at hand. We develop a theory for the point process of the normalized eigenvalues of the sample covariance matrix in the case where rows and columns of the data are linearly dependent. Based on the weak convergence of this point process we derive the limit laws of various functionals ...

  15. Extreme eigenvalues of sample covariance and correlation matrices

    DEFF Research Database (Denmark)

    Heiny, Johannes

    This thesis is concerned with asymptotic properties of the eigenvalues of high-dimensional sample covariance and correlation matrices under an infinite fourth moment of the entries. In the first part, we study the joint distributional convergence of the largest eigenvalues of the sample covariance matrix of a p-dimensional heavy-tailed time series when p converges to infinity together with the sample size n. We generalize the growth rates of p existing in the literature. Assuming a regular variation condition with tail index α, we show that the extreme eigenvalues are essentially determined by the extreme order statistics from an array of iid random variables. The asymptotic behavior of the extreme eigenvalues is then derived routinely from classical extreme value theory. The resulting approximations are strikingly simple considering the high dimension of the problem at hand. We develop a theory for the point process of the normalized eigenvalues of the sample covariance matrix in the case where rows and columns of the data are linearly dependent. Based on the weak convergence of this point process we derive the limit laws of various functionals ...

  16. Measuring the Association Between Body Mass Index and All-Cause Mortality in the Presence of Missing Data: Analyses From the Scottish National Diabetes Register.

    Science.gov (United States)

    Read, Stephanie H; Lewis, Steff C; Halbesma, Nynke; Wild, Sarah H

    2017-04-15

    Incorrectly handling missing data can lead to imprecise and biased estimates. We describe the effect of applying different approaches to handling missing data in an analysis of the association between body mass index and all-cause mortality among people with type 2 diabetes. We used data from the Scottish diabetes register that were linked to hospital admissions data and death registrations. The analysis was based on people diagnosed with type 2 diabetes between 2004 and 2011, with follow-up until May 31, 2014. The association between body mass index and mortality was investigated using Cox proportional hazards models. Findings were compared using 4 different missing-data methods: complete-case analysis, 2 multiple-imputation models, and nearest-neighbor imputation. There were 124,451 cases of type 2 diabetes, among which there were 17,085 deaths during 787,275 person-years of follow-up. Patients with missing data (24.8%) had higher mortality than those without missing data (adjusted hazard ratio = 1.36, 95% confidence interval: 1.31, 1.41). A U-shaped relationship between body mass index and mortality was observed, with the lowest hazard ratios occurring among moderately obese people, regardless of the chosen approach for handling missing data. Missing data may affect absolute and relative risk estimates differently and should be considered in analyses of routinely collected data. © The Author 2017. Published by Oxford University Press on behalf of the Johns Hopkins Bloomberg School of Public Health. All rights reserved. For permissions, please e-mail: journals.permissions@oup.com.
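
    For readers who want to experiment with such comparisons, here is a hedged sketch (hypothetical column names; lifelines and scikit-learn stand in for the authors' actual software) of complete-case analysis versus multiple imputation for a Cox model:

    ```python
    import numpy as np
    import pandas as pd
    from sklearn.experimental import enable_iterative_imputer  # noqa: F401
    from sklearn.impute import IterativeImputer
    from lifelines import CoxPHFitter

    def fit_cox(df):
        return CoxPHFitter().fit(df, duration_col="time", event_col="died")

    def complete_case_hr(df):
        return np.exp(fit_cox(df.dropna()).params_["bmi"])

    def multiple_imputation_hr(df, m=5, seed=0):
        """Pooled point estimate across m imputed data sets (Rubin's rule).
        A real analysis must also make the imputation model compatible
        with the survival outcome."""
        log_hrs = []
        for i in range(m):
            imputer = IterativeImputer(sample_posterior=True, random_state=seed + i)
            filled = pd.DataFrame(imputer.fit_transform(df), columns=df.columns)
            log_hrs.append(fit_cox(filled).params_["bmi"])
        return np.exp(np.mean(log_hrs))
    ```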

  17. Random sampling and validation of covariance matrices of resonance parameters

    Directory of Open Access Journals (Sweden)

    Plevnik Lucijan

    2017-01-01

    Full Text Available Analytically exact methods for random sampling of arbitrarily correlated parameters are presented. Emphasis is given, on the one hand, to possible inconsistencies in the covariance data, concentrating on positive semi-definiteness and consistent sampling of correlated, inherently positive parameters, and, on the other hand, to optimization of the implementation of the methods themselves. The methods have been applied in the program ENDSAM, written in Fortran, which, from a file of a chosen isotope from a nuclear data library in ENDF-6 format, produces an arbitrary number of new ENDF-6 files containing random samples of the resonance parameters (in accordance with the corresponding covariance matrices) in place of the original values. The source code for the program ENDSAM is available from the OECD/NEA Data Bank. The program works in the following steps: it reads resonance parameters and their covariance data from a nuclear data library, checks whether the covariance data are consistent, and produces random samples of the resonance parameters. The code has been validated with both realistic and artificial data to show that the produced samples are statistically consistent. Additionally, the code was used to validate covariance data in existing nuclear data libraries. A list of inconsistencies observed in the covariance data of resonance parameters in ENDF-VII.1, JEFF-3.2 and JENDL-4.0 is presented. For now, the work has been limited to resonance parameters; however, the methods presented are general and can in principle be extended to the sampling and validation of any nuclear data.
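
    The core sampling step can be illustrated briefly (a sketch under my own assumptions, not the ENDSAM source): check positive semi-definiteness, then draw correlated Gaussian samples via a Cholesky factor; sampling in log space would be one way to keep inherently positive parameters positive.

    ```python
    import numpy as np

    def is_psd(C, tol=1e-10):
        """Consistency check: a covariance matrix must be positive semi-definite."""
        return np.linalg.eigvalsh(C).min() >= -tol

    def sample_correlated(mean, C, n_samples, rng=None):
        """Draw n_samples correlated samples with mean `mean` and covariance C."""
        if rng is None:
            rng = np.random.default_rng(0)
        if not is_psd(C):
            raise ValueError("inconsistent covariance data: matrix is not PSD")
        L = np.linalg.cholesky(C + 1e-12 * np.eye(len(mean)))  # jitter for safety
        return mean + rng.standard_normal((n_samples, len(mean))) @ L.T
    ```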

  18. Missing Strands? Dealing with Hair Loss

    Science.gov (United States)

    Hair loss is often associated with men and aging, but ... depends on the cause. A family history of baldness, medical conditions or their treatments, and many other ...

  19. India's missing daughters | IDRC - International Development ...

    International Development Research Centre (IDRC) Digital Library (Canada)

    2010-11-04

    In the case of the missing females study, for example, one of the more striking conclusions is that selective abortion of female fetuses is practiced more in families where the mothers are better educated. Among women with a Grade 10 education or higher, the sex ratio for second and third births was 683 ...

  20. Scalable Tensor Factorizations with Missing Data

    DEFF Research Database (Denmark)

    Acar, Evrim; Dunlavy, Daniel M.; Kolda, Tamara G.

    2010-01-01

    ... is shown to successfully factor tensors with noise and up to 70% missing data. Moreover, our approach is significantly faster than the leading alternative and scales to larger problems. To show the real-world usefulness of CP-WOPT, we illustrate its applicability on a novel EEG (electroencephalogram ...

  1. Missing Energy Reconstruction and Dark Matter Searches

    CERN Document Server

    Roskas, Christos

    2015-01-01

    The missing transverse momentum (${E}_{T}^{miss}$) measurement is a powerful method of searching for particles that interact via the weak interaction. Precision measurements of ${E}_{T}^{miss}$ are critical for various studies in CMS, ranging from Standard Model physics, e.g. W mass measurements, to beyond-the-Standard-Model physics, e.g. searches for Dark Matter. The first part of this report discusses the reconstruction of ${E}_{T}^{miss}$ in the W mass measurements and the calibration of the hadronic recoil created in W boson production. The main focus was on understanding the recoil fit model in order to assign systematic uncertainties to the calibration. The second part concerns Dark Matter searches at the LHC, studying the annihilation process of Dark Matter in the Universe. Based on this annihilation process, the new results impose additional constraints on the models studied at the LHC. Furthermore, the interpretations are expanded to higher mass regions ...

  2. Missed Distal Tracheal Foreign Body in Consecutive ...

    African Journals Online (AJOL)

    2017-05-18

    Since its invention, bronchoscopy has become the gold standard in the diagnosis and extraction of airway FB [4]. Foreign bodies may be missed at bronchoscopy if covered by granulation tissue, or when multiple, with the remaining ones not searched for. This article reports the case of a 6-year-old boy who had three ...

  3. Missed opportunities and inappropriately given vaccines reduce ...

    African Journals Online (AJOL)

    Objectives: To quantify missed opportunities for immunisation, document reasons for their occurrence and evaluate the extent of inappropriately given vaccine doses. Design: A cross sectional study of children under two years of age attending health facilities. Setting: Six health facilities predominantly serving the slums of ...

  4. Missed isolated volar dislocation of the scaphoid

    DEFF Research Database (Denmark)

    Kolby, Lise; Larsen, Søren; Jørring, Stig

    2007-01-01

    A patient presented with volar dislocation of the scaphoid, the diagnosis of which had been missed for two weeks. He was treated with open reduction through a combined volar and dorsal approach with decompression of the median nerve, internal fixation, and a cast for eight weeks. One year...

  5. Missed medical appointment among hypertensive and diabetic ...

    African Journals Online (AJOL)

    Purpose: To explore the reasons for missed medical appointments, patients' awareness of their consequences, and strategies to reduce them among the study population. Methods: This was a descriptive cross-sectional survey among 300 hypertensive and 200 diabetic outpatients accessing care at the University College ...

  6. What's Missing? Anti-Racist Sex Education!

    Science.gov (United States)

    Whitten, Amanda; Sethna, Christabelle

    2014-01-01

    Contemporary sexual health curricula in Canada include information about sexual diversity and queer identities, but what remains missing is any explicit discussion of anti-racist sex education. Although there exists federal and provincial support for multiculturalism and anti-racism in schools, contemporary Canadian sex education omits crucial…

  7. Flash CS4: The Missing Manual

    CERN Document Server

    Grover, Chris

    2008-01-01

    Unlock the power of Flash and bring gorgeous animations to life onscreen. Flash CS4: The Missing Manual includes a complete primer on animation, a guided tour of the program's tools and capabilities, lots of new illustrations, and more details on working with video. Beginners will learn to use the software in no time, and experienced Flash designers will improve their skills.

  8. Missed opportunities and caretaker constraints to childhood ...

    African Journals Online (AJOL)

    Background: Despite concerted support to vaccination programmes, coverage remains low. While health service reasons for this are known, there is little information on caretaker constraints to vaccination in Africa. Objective: To establish the prevalence of missed vaccination opportunities and caretaker constraints to ...

  9. The Missing Link: Research on Teacher Education

    Science.gov (United States)

    Wiens, Peter D.

    2012-01-01

    Teacher education has recently come under attack for its perceived lack of efficacy in preparing teachers for classroom duty. A lack of comprehensive research in teacher education makes it difficult to understand the effects of teacher education programs on student learning. There is a missing link between what happens in teacher education…

  10. High-performance computing reveals missing genes

    OpenAIRE

    Whyte, Barry James

    2010-01-01

    Scientists at the Virginia Bioinformatics Institute and the Department of Computer Science at Virginia Tech have used high-performance computing to locate small genes that have been missed by scientists in their quest to define the microbial DNA sequences of life.

  11. DOE handbook: Tritium handling and safe storage

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    1999-03-01

    The DOE Handbook was developed as an educational supplement and reference for operations and maintenance personnel. Most tritium publications are written from a radiological protection perspective. This handbook provides more extensive guidance and advice on the full range of tritium operations. This handbook can be used by personnel involved in the full range of tritium handling from receipt to ultimate disposal. Compliance issues are addressed at each stage of handling. This handbook can also be used as a reference for those individuals involved in real-time determination of bounding doses resulting from inadvertent tritium releases. This handbook provides useful information for establishing processes and procedures for the receipt, storage, assay, handling, packaging, and shipping of tritium and tritiated wastes. It includes discussions and advice on compliance-based issues and adds insight to those areas that currently possess unclear DOE guidance.

  12. MHSS: a material handling system simulator

    Energy Technology Data Exchange (ETDEWEB)

    Pomernacki, L.; Hollstien, R.B.

    1976-04-07

    A Material Handling System Simulator (MHSS) program is described that provides specialized functional blocks for modeling and simulation of nuclear material handling systems. Models of nuclear fuel fabrication plants may be built using functional blocks that simulate material receiving, storage, transport, inventory, processing, and shipping operations as well as the control and reporting tasks of operators or on-line computers. Blocks are also provided that allow the user to observe and gather statistical information on the dynamic behavior of simulated plants over single or replicated runs. Although it is currently being developed for the nuclear materials handling application, MHSS can be adapted to other industries in which material accountability is important. In this paper, emphasis is on the simulation methodology of the MHSS program with application to the nuclear material safeguards problem. (auth)

  13. Buckley-James Estimator of AFT Models with Auxiliary Covariates

    Science.gov (United States)

    Granville, Kevin; Fan, Zhaozhi

    2014-01-01

    In this paper we study the Buckley-James estimator of accelerated failure time models with auxiliary covariates. Instead of postulating distributional assumptions on the auxiliary covariates, we use a local polynomial approximation method to accommodate them in the Buckley-James estimating equations. The regression parameters are obtained iteratively by minimizing the distance between consecutive estimates. Asymptotic properties of the proposed estimator are investigated. Simulation studies show that the efficiency gain from using auxiliary information is remarkable when compared to just using the validation sample. The method is applied to the PBC data from the Mayo Clinic trial in primary biliary cirrhosis as an illustration. PMID:25127479

  14. Spatial implications of covariate adjustment on patterns of risk

    DEFF Research Database (Denmark)

    Sabel, Clive Eric; Wilson, Jeff Gaines; Kingham, Simon

    2007-01-01

    ... localised factors that influence the exposure-response relationship. This paper examines the spatial patterns of relative risk and clusters of hospitalisations based on an illustrative small-area example from Christchurch, New Zealand. A four-stage test of the spatial relocation effects of covariate ... area to a mixed residential/industrial area, possibly introducing new environmental exposures. Researchers should be aware of the potential spatial effects inherent in adjusting for covariates when considering study design and interpreting results. © 2007 Elsevier Ltd. All rights reserved.

  15. Fission yield covariances for JEFF: A Bayesian Monte Carlo method

    Science.gov (United States)

    Leray, Olivier; Rochman, Dimitri; Fleming, Michael; Sublet, Jean-Christophe; Koning, Arjan; Vasiliev, Alexander; Ferroukhi, Hakim

    2017-09-01

    The JEFF library does not contain fission yield covariances, but simply best estimates and uncertainties. This situation is not unique as all libraries are facing this deficiency, firstly due to the lack of a defined format. An alternative approach is to provide a set of random fission yields, themselves reflecting covariance information. In this work, these random files are obtained combining the information from the JEFF library (fission yields and uncertainties) and the theoretical knowledge from the GEF code. Examples of this method are presented for the main actinides together with their impacts on simple burn-up and decay heat calculations.

  16. On spectral distribution of high dimensional covariation matrices

    DEFF Research Database (Denmark)

    Heinrich, Claudio; Podolskij, Mark

    In this paper we present the asymptotic theory for spectral distributions of high dimensional covariation matrices of Brownian diffusions. More specifically, we consider N-dimensional Itô integrals with time varying matrix-valued integrands. We observe n equidistant high frequency data points of the underlying Brownian diffusion and we assume that N/n → c ∈ (0, ∞). We show that under a certain mixed spectral moment condition the spectral distribution of the empirical covariation matrix converges in distribution almost surely. Our proof relies on the method of moments and applications of graph theory.

  17. Positive semidefinite integrated covariance estimation, factorizations and asynchronicity

    DEFF Research Database (Denmark)

    Sauri, Orimar; Lunde, Asger; Laurent, Sébastien

    2017-01-01

    An estimator of the ex-post covariation of log-prices under asynchronicity and microstructure noise is proposed. It uses the Cholesky factorization of the covariance matrix in order to exploit the heterogeneity in trading intensities to estimate the different parameters sequentially with as many...... observations as possible. The estimator is positive semidefinite by construction. We derive asymptotic results and confirm their good finite sample properties by means of a Monte Carlo simulation. In the application we forecast portfolio Value-at-Risk and sector risk exposures for a portfolio of 52 stocks. We...

  18. Fission yield covariances for JEFF: A Bayesian Monte Carlo method

    Directory of Open Access Journals (Sweden)

    Leray Olivier

    2017-01-01

    Full Text Available The JEFF library does not contain fission yield covariances, but simply best estimates and uncertainties. This situation is not unique, as all libraries face this deficiency, firstly due to the lack of a defined format. An alternative approach is to provide a set of random fission yields, themselves reflecting covariance information. In this work, these random files are obtained by combining the information from the JEFF library (fission yields and uncertainties) and the theoretical knowledge from the GEF code. Examples of this method are presented for the main actinides together with their impacts on simple burn-up and decay heat calculations.

  19. Portfolio management using realized covariances: Evidence from Brazil

    Directory of Open Access Journals (Sweden)

    João F. Caldeira

    2017-09-01

    Full Text Available It is often argued that intraday returns can be used to construct covariance estimates that are more accurate than those based on daily returns. However, it is still unclear whether high frequency data provide more precise covariance estimates in markets more contaminated from microstructure noise such as higher bid-ask spreads and lower liquidity. We address this question by investigating the benefits of using high frequency data in the Brazilian equities market to construct optimal minimum variance portfolios. We implement alternative realized covariance estimators based on intraday returns sampled at alternative frequencies and obtain their dynamic versions using a multivariate GARCH framework. Our evidence based on a high-dimensional data set suggests that realized covariance estimators performed significantly better from an economic point of view in comparison to standard estimators based on low-frequency (close-to-close data as they delivered less risky portfolios. Resumo: Argumenta-se frequentemente que retornos intradiários podem ser usados para construir estimativas de covariâncias mais precisas em relação àquelas obtidas com retornos diários. No entanto, ainda não está claro se os dados de alta freqüência fornecem estimativas de covariância mais precisas em mercados mais contaminados pelo ruído da microestrutura, como maiores spreads entre ofertas de compra e venda e baixa liquidez. Abordamos essa questão investigando os benefícios do uso de dados de alta freqüência no mercado de ações brasileiro através da construção de portfólios ótimos de variância mínima. Implementamos diversos estimadores de covariâncias realizadas com base em retornos intradiários amostrados em diferentes frequências e obtemos suas versões dinâmicas usando uma estrutura GARCH multivariada. Nossa evidência baseada em um conjunto de dados de alta dimensão sugere que os estimadores de covariâncias realizadas obtiveram um desempenho

  20. Neutron Resonance Parameters and Covariance Matrix of 239Pu

    Energy Technology Data Exchange (ETDEWEB)

    Derrien, Herve [ORNL]; Leal, Luiz C [ORNL]; Larson, Nancy M [ORNL]

    2008-08-01

    In order to obtain the resonance parameters in a single energy range and the corresponding covariance matrix, a reevaluation of 239Pu was performed with the code SAMMY. The most recent experimental data were analyzed or reanalyzed in the energy range thermal to 2.5 keV. The normalization of the fission cross section data was reconsidered by taking into account the most recent measurements of Weston et al. and Wagemans et al. A full resonance parameter covariance matrix was generated. The method used to obtain realistic uncertainties on the average cross section calculated by SAMMY or other processing codes was examined.

  1. Covariant description of transformation optics in nonlinear media.

    Science.gov (United States)

    Paul, Oliver; Rahm, Marco

    2012-04-09

    The technique of transformation optics (TO) is an elegant method for the design of electromagnetic media with tailored optical properties. In this paper, we focus on the formal structure of TO theory. By using a complete covariant formalism, we present a general transformation law that holds for arbitrary materials including bianisotropic, magneto-optical, nonlinear and moving media. Due to the principle of general covariance, the formalism is applicable to arbitrary space-time coordinate transformations and automatically accounts for magneto-electric coupling terms. The formalism is demonstrated for the calculation of the second harmonic wave generation in a twisted TO concentrator.

  2. Dirac oscillator in a Galilean covariant non-commutative space

    Energy Technology Data Exchange (ETDEWEB)

    Melo, G.R. de [Universidade Federal do Reconcavo da Bahia, BA (Brazil); Montigny, M. [University of Alberta (Canada); Pompeia, P.J. [Instituto de Fomento e Coordecacao Industrial, Sao Jose dos Campos, SP (Brazil); Santos, Esdras S. [Universidade Federal da Bahia, Salvador (Brazil)

    2013-07-01

    Full text: Even though Galilean kinematics is only an approximation of relativistic kinematics, the structure of Galilean kinematics is more intricate than relativistic kinematics. For instance, the Galilean algebra admits a nontrivial central extension and projective representations, whereas the Poincare algebra does not. It is possible to construct representations of the Galilei algebra with three possible methods: (1) directly from the Galilei algebra, (2) from contractions of the Poincare algebra with the same space-time dimension, or (3) from the Poincare algebra in a space-time with one additional dimension. In this paper, we follow the third approach, which we refer to as 'Galilean covariance' because the equations are Lorentz covariant in the extended manifold. These equations become Galilean invariant after projection to the lower dimension. Our motivation is that this covariant approach provides one more unifying feature of field theory models. Indeed, particle physics (with Poincare kinematics) and condensed matter physics (with Galilean kinematics) share many tools of quantum field theory (e.g. gauge invariance, spontaneous symmetry breaking, Goldstone bosons), but Galilean kinematics does not admit a metric structure. However, since the Galilean Lie algebra is a subalgebra of the Poincare Lie algebra if one more space-like dimension is added, we can achieve 'Galilean covariance' with a metric in an extended manifold; that makes non-relativistic models look similar to Lorentz-covariant relativistic models. In this context we study the Galilei covariant five-dimensional formulation applied to the Galilean Dirac oscillator in a non-commutative situation, with space-space and momentum-momentum non-commutativity. The wave equation is obtained via a 'Galilean covariant' approach, which consists in projecting the covariant equations of motion from a (4, 1)-dimensional manifold with light-cone coordinates to a (3, 1

  3. FUEL HANDLING FACILITY WORKER DOSE ASSESSMENT

    Energy Technology Data Exchange (ETDEWEB)

    A. Achudume

    2004-08-09

    The purpose of this design calculation is to estimate radiation doses received by personnel working in the Fuel Handling Facility (FHF) of the Monitored Geological Repository (MGR). The FHF is a surface facility supporting waste handling operations, i.e., receiving transportation casks, transferring wastes, preparing waste packages, and shipping out loaded waste packages and empty casks. The specific scope of work contained in this calculation covers both collective doses and individual worker group doses on an annual basis, and includes the contributions due to external and internal radiation. The results are also limited to normal operations only. Results of this calculation will be used to support the FHF design and License Application.

  4. Missed retinal breaks in rhegmatogenous retinal detachment

    Directory of Open Access Journals (Sweden)

    Brijesh Takkar

    2016-12-01

    Full Text Available AIM: To evaluate the causes and associations of missed retinal breaks (MRBs) and posterior vitreous detachment (PVD) in patients with rhegmatogenous retinal detachment (RRD). METHODS: Case sheets of patients undergoing vitreoretinal surgery for RRD at a tertiary eye care centre were evaluated retrospectively. Out of the 378 records screened, 253 were included for analysis of MRBs and 191 patients were included for analysis of PVD, depending on the inclusion criteria. Features of RRD and retinal breaks noted on examination were compared to the status of MRBs and PVD detected during surgery for possible associations. RESULTS: Overall, 27% of patients had MRBs. Retinal holes were commonly missed in patients with lattice degeneration, while missed retinal tears were associated with the presence of complete PVD. Patients who had undergone cataract surgery were significantly associated with MRBs (P=0.033), with the odds of missing a retinal break being 1.91 compared with patients with a natural lens. Advanced proliferative vitreoretinopathy (PVR) and retinal bullae were the most common reasons for missing a retinal break during examination. PVD was present in 52% of the cases and was wrongly assessed in 16%. Retinal bullae, pseudophakia/aphakia, myopia, and horseshoe retinal tears were strongly associated with the presence of PVD. Traumatic RRDs were rarely associated with PVD. CONCLUSION: Pseudophakic patients and patients with retinal bullae or advanced PVR should be carefully screened for MRBs. Though a Weiss ring is a good indicator of PVD, it may still be over-diagnosed in some cases. PVD is associated with retinal bullae and pseudophakia, and inversely with traumatic RRD.

  5. An Empirical State Error Covariance Matrix Orbit Determination Example

    Science.gov (United States)

    Frisbee, Joseph H., Jr.

    2015-01-01

    State estimation techniques serve effectively to provide mean state estimates. However, the state error covariance matrices provided as part of these techniques suffer from some degree of lack of confidence in their ability to adequately describe the uncertainty in the estimated states. A specific problem with the traditional form of state error covariance matrices is that they represent only a mapping of the assumed observation error characteristics into the state space. Any errors that arise from other sources (environment modeling, precision, etc.) are not directly represented in a traditional, theoretical state error covariance matrix. First, consider that an actual observation contains only measurement error and that an estimated observation contains all other errors, known and unknown. Then it follows that a measurement residual (the difference between expected and observed measurements) contains all errors for that measurement. Therefore, a direct and appropriate inclusion of the actual measurement residuals in the state error covariance matrix of the estimate will result in an empirical state error covariance matrix. This empirical state error covariance matrix will fully include all of the errors in the state estimate. The empirical error covariance matrix is determined from a literal reinterpretation of the equations involved in the weighted least squares estimation algorithm. It is a formally correct, empirical state error covariance matrix obtained through use of the average form of the weighted measurement residual variance performance index rather than the usual total weighted residual form. Based on its formulation, this matrix will contain the total uncertainty in the state estimate, regardless of the source of the uncertainty and whether the source is anticipated or not. It is expected that the empirical error covariance matrix will give a better statistical representation of the state error in poorly modeled systems or when sensor performance

  6. Covariant Spectator Theory of heavy–light and heavy mesons and the predictive power of covariant interaction kernels

    Energy Technology Data Exchange (ETDEWEB)

    Leitão, Sofia, E-mail: sofia.leitao@tecnico.ulisboa.pt [CFTP, Instituto Superior Técnico, Universidade de Lisboa, Av. Rovisco Pais 1, 1049-001 Lisboa (Portugal); Stadler, Alfred, E-mail: stadler@uevora.pt [Departamento de Física, Universidade de Évora, 7000-671 Évora (Portugal); CFTP, Instituto Superior Técnico, Universidade de Lisboa, Av. Rovisco Pais 1, 1049-001 Lisboa (Portugal); Peña, M.T., E-mail: teresa.pena@tecnico.ulisboa.pt [Departamento de Física, Instituto Superior Técnico, Universidade de Lisboa, Av. Rovisco Pais 1, 1049-001 Lisboa (Portugal); CFTP, Instituto Superior Técnico, Universidade de Lisboa, Av. Rovisco Pais 1, 1049-001 Lisboa (Portugal); Biernat, Elmar P., E-mail: elmar.biernat@tecnico.ulisboa.pt [CFTP, Instituto Superior Técnico, Universidade de Lisboa, Av. Rovisco Pais 1, 1049-001 Lisboa (Portugal)

    2017-01-10

    The Covariant Spectator Theory (CST) is used to calculate the mass spectrum and vertex functions of heavy–light and heavy mesons in Minkowski space. The covariant kernel contains Lorentz scalar, pseudoscalar, and vector contributions. The numerical calculations are performed in momentum space, where special care is taken to treat the strong singularities present in the confining kernel. The observed meson spectrum is very well reproduced after fitting a small number of model parameters. Remarkably, a fit to a few pseudoscalar meson states only, which are insensitive to spin–orbit and tensor forces and do not allow to separate the spin–spin from the central interaction, leads to essentially the same model parameters as a more general fit. This demonstrates that the covariance of the chosen interaction kernel is responsible for the very accurate prediction of the spin-dependent quark–antiquark interactions.

  7. Imputing Missing Race/Ethnicity in Pediatric Electronic Health Records: Reducing Bias with Use of U.S. Census Location and Surname Data.

    Science.gov (United States)

    Grundmeier, Robert W; Song, Lihai; Ramos, Mark J; Fiks, Alexander G; Elliott, Marc N; Fremont, Allen; Pace, Wilson; Wasserman, Richard C; Localio, Russell

    2015-08-01

    To assess the utility of imputing race/ethnicity using U.S. Census race/ethnicity, residential address, and surname information compared to standard missing data methods in a pediatric cohort. Electronic health record data from 30 pediatric practices with known race/ethnicity. In a simulation experiment, we constructed dichotomous and continuous outcomes with pre-specified associations with known race/ethnicity. Bias was introduced by nonrandomly setting race/ethnicity to missing. We compared typical methods for handling missing race/ethnicity (multiple imputation alone with clinical factors, complete case analysis, indicator variables) to multiple imputation incorporating surname and address information. Imputation using U.S. Census information reduced bias for both continuous and dichotomous outcomes. The new method reduces bias when race/ethnicity is partially, nonrandomly missing. © Health Research and Educational Trust.
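
    The surname-and-geography idea reduces to a single Bayes update; the sketch below uses placeholder probability vectors, not the U.S. Census tables the authors drew on:

    ```python
    import numpy as np

    def posterior_race(p_race_given_surname, p_geo_given_race):
        """P(race | surname, geography), proportional to
        P(race | surname) * P(geography | race)."""
        post = np.asarray(p_race_given_surname) * np.asarray(p_geo_given_race)
        return post / post.sum()

    # Made-up numbers for three race/ethnicity categories:
    # posterior_race([0.7, 0.2, 0.1], [0.3, 0.5, 0.2])
    # -> array([0.636, 0.303, 0.061])
    ```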

  8. A True Eddy Accumulation - Eddy Covariance hybrid for measurements of turbulent trace gas fluxes

    Science.gov (United States)

    Siebicke, Lukas

    2016-04-01

    Eddy covariance (EC) is state-of-the-art in directly and continuously measuring turbulent fluxes of carbon dioxide and water vapor. However, low signal-to-noise ratios, high flow rates and missing or complex gas analyzers limit its application to few scalars. True eddy accumulation, based on conditional sampling ideas by Desjardins in 1972, requires no fast response analyzers and is therefore potentially applicable to a wider range of scalars. Recently we showed possibly the first successful implementation of True Eddy Accumulation (TEA) measuring net ecosystem exchange of carbon dioxide of a grassland. However, most accumulation systems share the complexity of having to store discrete air samples in physical containers representing entire flux averaging intervals. The current study investigates merging principles of eddy accumulation and eddy covariance, which we here refer to as "true eddy accumulation in transient mode" (TEA-TM). This direct flux method TEA-TM combines true eddy accumulation with continuous sampling. The TEA-TM setup is simpler than discrete accumulation methods while avoiding the need for fast response gas analyzers and high flow rates required for EC. We implemented the proposed TEA-TM method and measured fluxes of carbon dioxide (CO2), methane (CH4) and water vapor (H2O) above a mixed beech forest at the Hainich Fluxnet and ICOS site, Germany, using a G2301 laser spectrometer (Picarro Inc., USA). We further simulated a TEA-TM sampling system using measured high frequency CO2 time series from an open-path gas analyzer. We operated TEA-TM side-by-side with open-, enclosed- and closed-path EC flux systems for CO2, H2O and CH4 (LI-7500, LI-7200, LI-6262, LI-7700, Licor, USA, and FGGA LGR, USA). First results show that TEA-TM CO2 fluxes were similar to EC fluxes. Remaining differences were similar to those between the three eddy covariance setups (open-, enclosed- and closed-path gas analyzers). Measured TEA-TM CO2 fluxes from our physical

  9. The discoursivity in/of the miss(es body – muslim and world

    Directory of Open Access Journals (Sweden)

    Adriana Stela Bassini Edral

    2017-01-01

    Full Text Available Assuming that the discursive functioning in/of the body reveals the presence of social and ideological values congruous with a people's culture, we propose, drawing on discourses aired by online media about the beauty contests Miss World and Miss Muslim, to understand in what way power is symbolized in the body, and what strategies have been used by occidental and oriental media, from the perspective of gender studies.

  10. SAFE HANDLING LABELS AND CONSUMER BEHAVIOR IN THE SOUTHERN US

    OpenAIRE

    Adu-Nyako, Kofi; Kunda, Danny; Ralston, Katherine L.

    2003-01-01

    The impact of safe handling labels on food handling practices is assessed using a two-step procedure to adjust for sample selection bias in the label use decision. A significant positive influence of labels on safe handling practices is found. Food safety knowledge, consumer risk perception, and illness experience positively impacted handling practices.

  11. 7 CFR 948.8 - Handle or ship.

    Science.gov (United States)

    2010-01-01

    ... 7 Agriculture 8 2010-01-01 2010-01-01 false Handle or ship. 948.8 Section 948.8 Agriculture Regulations of the Department of Agriculture (Continued) AGRICULTURAL MARKETING SERVICE (Marketing Agreements... Order Regulating Handling Definitions § 948.8 Handle or ship. Handle or ship means to transport, sell...

  12. Type-Safe Compilation of Covariant Specialization: A Practical Case

    Science.gov (United States)

    1995-11-01

    modify the semantics of languages that use covariant specialization in order to improve their type safety. We demonstrate our technique using O2, a...not affect the semantics of those computations without type errors. Furthermore, the new semantics of the previously ill-typed computations is defined

  13. Pseudo-observations for competing risks with covariate dependent censoring

    DEFF Research Database (Denmark)

    Binder, Nadine; Gerds, Thomas A; Andersen, Per Kragh

    2014-01-01

    that the probability of not being lost to follow-up (un-censored) is independent of the covariates. Modified pseudo-values are proposed which rely on a correctly specified regression model for the censoring times. Bias and efficiency of these methods are compared in a simulation study. Further illustration...
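
    Pseudo-observations are typically obtained as jackknife leave-one-out values of a marginal estimator, after which ordinary regression on covariates can be run on the pseudo-values. A sketch for a Kaplan-Meier survival probability at a fixed time point, on simulated data (the competing-risks setting of the paper replaces Kaplan-Meier with, e.g., the Aalen-Johansen estimator):

      import numpy as np

      def km_survival(time, event, t0):
          """Kaplan-Meier estimate of S(t0); event=True marks an event."""
          order = np.argsort(time)
          time, event = time[order], event[order]
          at_risk, surv = len(time), 1.0
          for t, d in zip(time, event):
              if t > t0:
                  break
              if d:
                  surv *= (at_risk - 1) / at_risk
              at_risk -= 1
          return surv

      rng = np.random.default_rng(1)
      n = 50
      time = rng.exponential(5.0, n)
      event = rng.random(n) < 0.7               # roughly 30% censoring
      t0 = 3.0

      theta = km_survival(time, event, t0)
      # jackknife pseudo-observation for subject i:
      #   theta_i = n * theta_hat - (n - 1) * theta_hat(-i)
      pseudo = np.array([n * theta - (n - 1) *
                         km_survival(np.delete(time, i), np.delete(event, i), t0)
                         for i in range(n)])
      print(theta, pseudo.mean())               # pseudo-values center near theta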

  14. Modeling the Conditional Covariance between Stock and Bond Returns

    NARCIS (Netherlands)

    P. de Goeij (Peter); W.A. Marquering (Wessel)

    2002-01-01

    To analyze the intertemporal interaction between the stock and bond market returns, we allow the conditional covariance matrix to vary over time according to a multivariate GARCH model similar to Bollerslev, Engle and Wooldridge (1988). We extend the model such that it allows for

  15. Modeling corporate defaults: Poisson autoregressions with exogenous covariates (PARX)

    DEFF Research Database (Denmark)

    Agosto, Arianna; Cavaliere, Guiseppe; Kristensen, Dennis

    We develop a class of Poisson autoregressive models with additional covariates (PARX) that can be used to model and forecast time series of counts. We establish the time series properties of the models, including conditions for stationarity and existence of moments. These results are in turn used...

  16. Application of covariance analysis to feed/ ration experimental data ...

    African Journals Online (AJOL)

    Correlation and regression analyses were used to adjust for the covariate – initial weight of the experimental birds. Fisher's F statistic for the straightforward analysis of variance (ANOVA) showed significant differences among the rations. With the ANOVA, the calculated F statistic was 4.025, with a probability of 0.0149.

  17. Globally covering a-priori regional gravity covariance models

    Directory of Open Access Journals (Sweden)

    D. Arabelos

    2003-01-01

    Gravity anomaly data generated using Wenzel's GPM98A model complete to degree 1800, from which OSU91A has been subtracted, have been used to estimate covariance functions for a set of globally covering equal-area blocks of size 22.5° × 22.5° at the Equator, having a 2.5° overlap. For each block an analytic covariance function model was determined. The models are based on 4 parameters: the depth to the Bjerhammar sphere (which determines the correlation), the free-air gravity anomaly variance, a scale factor of the OSU91A error degree-variances and a maximal summation index, N, of the error degree-variances. The depth of the Bjerhammar sphere varies from -134 km to nearly zero, N varies from 360 to 40, the scale factor from 0.03 to 38.0 and the gravity variance from 1081 to 24 (10 µm s⁻²)². The parameters are interpreted in terms of the quality of the data used to construct OSU91A and GPM98A and general conditions such as the occurrence of mountain chains. The variation of the parameters shows that it is necessary to use regional covariance models in order to obtain a realistic signal-to-noise ratio in global applications. Key words: GOCE mission, covariance function, spacewise approach

  18. Eddy covariance based methane flux in Sundarbans mangroves, India

    Indian Academy of Sciences (India)

    Keywords: eddy covariance; mangrove forests; methane flux; Sundarbans. J. Earth Syst. Sci. 123(5), July 2014, pp. 1089–1096. ... terrestrial biomes in India. The main objective of this paper is to present.

  19. Do more detailed environmental covariates deliver more accurate soil maps?

    NARCIS (Netherlands)

    Samuel Rosa, A.; Heuvelink, G.B.M.; Vasques, G.M.; Anjos, L.H.C.

    2015-01-01

    In this study we evaluated whether investing in more spatially detailed environmental covariates improves the accuracy of digital soil maps. We used a case study from Southern Brazil to map clay content (CLAY), organic carbon content (SOC), and effective cation exchange capacity (ECEC) of the

  20. Different Approaches to Covariate Inclusion in the Mixture Rasch Model

    Science.gov (United States)

    Li, Tongyun; Jiao, Hong; Macready, George B.

    2016-01-01

    The present study investigates different approaches to adding covariates and the impact in fitting mixture item response theory models. Mixture item response theory models serve as an important methodology for tackling several psychometric issues in test development, including the detection of latent differential item functioning. A Monte Carlo…

  1. Reproducibility of regional metabolic covariance patterns : Comparison of four populations

    NARCIS (Netherlands)

    Moeller; Nakamura, T; Mentis, MJ; Dhawan; Spetsieres, P; Antonini, A; Missimer, J; Leenders, KL; Eidelberg, D

    In a previous [F-18]fluorodeoxyglucose (FDG) PET study we analyzed regional metabolic data from a combined group of Parkinson's disease (PD) patients and healthy volunteers (N), using network analysis. By this method, we identified a unique pattern of regional metabolic covariation with an

  2. Covariation of spectral and nonlinear EEG measures with alpha biofeedback.

    NARCIS (Netherlands)

    Fell, J.; Elfadil, H.; Klaver, P.; Roschke, J.; Elger, C.E.; Fernandez, G.S.E.

    2002-01-01

    This study investigated how different spectral and nonlinear EEG measures covaried with alpha power during auditory alpha biofeedback training, performed by 13 healthy subjects. We found a significant positive correlation of alpha power with the largest Lyapunov-exponent, pointing to an increased

  3. RNA search with decision trees and partial covariance models.

    Science.gov (United States)

    Smith, Jennifer A

    2009-01-01

    The use of partial covariance models to search for RNA family members in genomic sequence databases is explored. The partial models are formed from contiguous subranges of the overall RNA family multiple alignment columns. A binary decision-tree framework is presented for choosing the order in which to apply the partial models and the score thresholds on which to make the decisions. The decision trees are chosen to minimize computation time subject to the constraint that all of the training sequences are passed to the full covariance model for final evaluation. Computational intelligence methods are suggested to select the decision tree, since the tree can be quite complex and there is no obvious method to build the tree in these cases. Experimental results from seven RNA families show execution times of 0.066-0.268 relative to using the full covariance model alone. Tests on the full sets of known sequences for each family show that at least 95 percent of these sequences are found for two families and 100 percent for five others. Since the full covariance model is run on all sequences accepted by the partial model decision tree, the false alarm rate is at least as low as that of the full model alone.

  4. Unified Approach to Universal Cloning and Phase-Covariant Cloning

    OpenAIRE

    Hu, Jia-Zhong; Yu, Zong-Wen; Wang, Xiang-Bin

    2008-01-01

    We analyze the problem of approximate quantum cloning when the quantum state lies between two latitudes on the Bloch sphere. We present an analytical formula for the optimized 1-to-2 cloning. The formula unifies universal quantum cloning (UQCM) and phase-covariant quantum cloning.

  5. Proton-proton virtual bremsstrahlung in a relativistic covariant model

    NARCIS (Netherlands)

    Martinus, GH; Scholten, O; Tjon, J

    1999-01-01

    Lepton-pair production (virtual bremsstrahlung) in proton-proton scattering is investigated using a relativistic covariant model. The effects of negative-energy states and two-body currents are studied. These are shown to have large effects in some particular structure functions, even at the

  6. Covariant Structure of Models of Geophysical Fluid Motion

    Science.gov (United States)

    Dubos, Thomas

    2018-01-01

    Geophysical models approximate classical fluid motion in rotating frames. Even accurate approximations can have profound consequences, such as the loss of inertial frames. If geophysical fluid dynamics are not strictly equivalent to Newtonian hydrodynamics observed in a rotating frame, what kind of dynamics are they? We aim to clarify fundamental similarities and differences between relativistic, Newtonian, and geophysical hydrodynamics, using variational and covariant formulations as tools to shed the necessary light. A space-time variational principle for the motion of a perfect fluid is introduced. The geophysical action is interpreted as a synchronous limit of the relativistic action. The relativistic Levi-Civita connection also has a finite synchronous limit, which provides a connection with which to endow geophysical space-time, generalizing Cartan (1923). A covariant mass-momentum budget is obtained using covariance of the action and metric-preserving properties of the connection. Ultimately, geophysical models are found to differ from the standard compressible Euler model only by a specific choice of a metric-Coriolis-geopotential tensor akin to the relativistic space-time metric. Once this choice is made, the same covariant mass-momentum budget applies to Newtonian and all geophysical hydrodynamics, including those models lacking an inertial frame. Hence, it is argued that this mass-momentum budget provides an appropriate, common fundamental principle of dynamics. The postulate that Euclidean, inertial frames exist can then be regarded as part of the Newtonian theory of gravitation, which some models of geophysical hydrodynamics slightly violate.

  7. Eddy Covariance Measurements of the Sea-Spray Aerosol Flux

    Science.gov (United States)

    Brooks, I. M.; Norris, S. J.; Yelland, M. J.; Pascal, R. W.; Prytherch, J.

    2015-12-01

    Historically, almost all estimates of the sea-spray aerosol source flux have been inferred through various indirect methods. Direct estimates via eddy covariance have been attempted by only a handful of studies, most of which measured only the total number flux, or achieved rather coarse size segregation. Applying eddy covariance to the measurement of sea-spray fluxes is challenging: most instrumentation must be located in a laboratory space requiring long sample lines to an inlet collocated with a sonic anemometer; however, larger particles are easily lost to the walls of the sample line. Marine particle concentrations are generally low, requiring a high sample volume to achieve adequate statistics. The highly hygroscopic nature of sea salt means particles change size rapidly with fluctuations in relative humidity; this introduces an apparent bias in flux measurements if particles are sized at ambient humidity. The Compact Lightweight Aerosol Spectrometer Probe (CLASP) was developed specifically to make high rate measurements of aerosol size distributions for use in eddy covariance measurements, and the instrument and data processing and analysis techniques have been refined over the course of several projects. Here we will review some of the issues and limitations related to making eddy covariance measurements of the sea spray source flux over the open ocean, summarise some key results from the last decade, and present new results from a 3-year long ship-based measurement campaign as part of the WAGES project. Finally we will consider requirements for future progress.

  8. Detection of fungal damaged popcorn using image property covariance features

    Science.gov (United States)

    Covariance-matrix-based features were applied to the detection of popcorn infected by a fungus that causes a symptom called “blue-eye.” This infection of popcorn kernels causes economic losses because of the kernels' poor appearance and the frequently disagreeable flavor of the popped kernels. Images of ker...

  9. Covariance Structure Models for Gene Expression Microarray Data

    Science.gov (United States)

    Xie, Jun; Bentler, Peter M.

    2003-01-01

    Covariance structure models are applied to gene expression data using a factor model, a path model, and their combination. The factor model is based on a few factors that capture most of the expression information. A common factor of a group of genes may represent a common protein factor for the transcript of the co-expressed genes, and hence, it…

  10. From covariant to canonical formulations of discrete gravity

    NARCIS (Netherlands)

    Dittrich, B.; Höhn, P.A.

    2010-01-01

    Starting from an action for discretized gravity, we derive a canonical formalism that exactly reproduces the dynamics and (broken) symmetries of the covariant formalism. For linearized Regge calculus on a flat background—which exhibits exact gauge symmetries—we derive local and first-class

  11. Proportional Hazards Model with Covariate Measurement Error and Instrumental Variables.

    Science.gov (United States)

    Song, Xiao; Wang, Ching-Yun

    2014-12-01

    In biomedical studies, covariates with measurement error may occur in survival data. Existing approaches mostly require certain replications on the error-contaminated covariates, which may not be available in the data. In this paper, we develop a simple nonparametric correction approach for estimation of the regression parameters in the proportional hazards model using a subset of the sample where instrumental variables are observed. The instrumental variables are related to the covariates through a general nonparametric model, and no distributional assumptions are placed on the error and the underlying true covariates. We further propose a novel generalized method of moments nonparametric correction estimator to improve the efficiency over the simple correction approach. The efficiency gain can be substantial when the calibration subsample is small compared to the whole sample. The estimators are shown to be consistent and asymptotically normal. Performance of the estimators is evaluated via simulation studies and by an application to data from an HIV clinical trial. Estimation of the baseline hazard function is not addressed.

  12. Covariation of Color and Luminance Facilitate Object Individuation in Infancy

    Science.gov (United States)

    Woods, Rebecca J.; Wilcox, Teresa

    2010-01-01

    The ability to individuate objects is one of our most fundamental cognitive capacities. Recent research has revealed that when objects vary in color or luminance alone, infants fail to individuate those objects until 11.5 months. However, color and luminance frequently covary in the natural environment, thus providing a more salient and reliable…

  13. Analysis of Covariance and Randomized Block Design with Heterogeneous Slopes.

    Science.gov (United States)

    Klockars, Alan J.; Beretvas, S. Natasha

    2001-01-01

    Compared the Type I error rate and the power to detect differences in slopes and additive treatment effects of analysis of covariance (ANCOVA) and randomized block designs through a Monte Carlo simulation. Results show that the more powerful option in almost all simulations for tests of both slope and means was ANCOVA. (SLD)

  14. Scale-dependent background-error covariance localisation

    Directory of Open Access Journals (Sweden)

    Mark Buehner

    2015-12-01

    A new approach is presented and evaluated for efficiently applying scale-dependent spatial localisation to ensemble background-error covariances within an ensemble-variational data assimilation system. The approach is primarily motivated by the requirements of future data assimilation systems for global numerical weather prediction that will be capable of resolving the convective scale. Such systems must estimate the global and synoptic scales at least as well as current global systems while also effectively making use of information from frequent and spatially dense observation networks to constrain convective-scale features. Scale-dependent covariance localisation allows a wider range of scales to be efficiently estimated while simultaneously assimilating all available observations. In the context of an idealised numerical experiment, it is shown that using scale-dependent localisation produces an improved ensemble-based estimate of spatially varying covariances as compared with standard spatial localisation. When applied to an ensemble of Arctic sea-ice concentration, it is demonstrated that strong spatial gradients in the relative contribution of different spatial scales in the ensemble covariances result in strong spatial variations in the overall amount of spatial localisation. This feature is qualitatively similar to what might be expected when applying an adaptive localisation approach that estimates a spatially varying localisation function from the ensemble itself. When compared with standard spatial localisation, scale-dependent localisation also results in a lower analysis error for sea-ice concentration over all spatial scales.
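
    As a point of reference for the scale-dependent scheme, standard (scale-independent) spatial localisation is simply a Schur (elementwise) product of the sample covariance with a distance-based taper. A sketch on a 1-D grid, using a Gaussian taper in place of the Gaspari-Cohn-type functions used operationally:

      import numpy as np

      rng = np.random.default_rng(2)
      n_grid, n_ens = 60, 20
      X = rng.normal(size=(n_grid, n_ens))          # ensemble of 1-D states
      Xp = X - X.mean(axis=1, keepdims=True)
      B_sample = Xp @ Xp.T / (n_ens - 1)            # noisy sample covariance

      x = np.arange(n_grid)
      dist = np.abs(x[:, None] - x[None, :])        # grid-point separations
      L = 10.0                                      # localisation length scale
      taper = np.exp(-0.5 * (dist / L) ** 2)        # Gaussian taper

      B_localised = B_sample * taper                # Schur (elementwise) product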

  15. An alternative covariance estimator to investigate genetic heterogeneity in populations

    Science.gov (United States)

    Genomic predictions and GWAS have used mixed models for identification of associations and trait predictions. In both cases, the covariance between individuals for performance is estimated using molecular markers. Mixed model properties indicate that the use of the data for prediction is optimal if ...

  16. Further Note on the Probabilistic Constraint Handling

    NARCIS (Netherlands)

    Ciftcioglu, O.; Bittermann, M.S.; Datta, R

    2016-01-01

    A robust probabilistic constraint handling approach in the framework of joint evolutionary-classical optimization has been presented earlier. In this work, the theoretical foundations of the method are presented in detail. The method is known as the bi-objective method, where the conventional

  17. Generic control of material handling systems

    NARCIS (Netherlands)

    Haneyah, S.W.A.

    2013-01-01

    Material handling systems (MHSs) are in general complex installations that raise challenging design and control problems. In the literature, design and control problems have received a lot of attention within distinct business sectors or systems, but primarily from a system’s user perspective. Much

  18. Intertextuality for Handling Complex Environmental Issues

    Science.gov (United States)

    Byhring, Anne Kristine; Knain, Erik

    2016-01-01

    Nowhere is the need for handling complexity more pertinent than in addressing environmental issues. Our study explores students' situated constructs of complexity in unfolding discourses on socio-scientific issues. Students' dialogues in two group-work episodes are analysed in detail, with tools from Systemic Functional Linguistics. We identify…

  19. Confluence Modulo Equivalence in Constraint Handling Rules

    DEFF Research Database (Denmark)

    Christiansen, Henning; Kirkeby, Maja Hanne

    2014-01-01

    Previous results on confluence for Constraint Handling Rules, CHR, are generalized to take into account user-defined state equivalence relations. This allows a much larger class of programs to enjoy the advantages of confluence, which include various optimization techniques and simplified corre...

  20. Materials handling centre: making business more efficient

    NARCIS (Netherlands)

    B. Bollen (Brian)

    2012-01-01

    The aim of the Materials Handling Forum at RSM is to narrow the gap between research and practice by promoting and disseminating academic knowledge, sharing innovative ideas, generating research questions, and co-developing new research themes with industry partners.

  1. 7 CFR 983.14 - Handle.

    Science.gov (United States)

    2010-01-01

    ..., cleaning, salting, and/or packaging for marketing in or transporting to any and all markets in the current... 7 Agriculture 8 2010-01-01 2010-01-01 false Handle. 983.14 Section 983.14 Agriculture Regulations of the Department of Agriculture (Continued) AGRICULTURAL MARKETING SERVICE (Marketing Agreements and...

  2. Technical Guidelines for Sodium Storage and Handling

    Energy Technology Data Exchange (ETDEWEB)

    Lee, Y. B.; Kim, J. M.; Kim, T. J.; Nam, H. Y.; Lee, T. H.; Jeong, J. Y.; Choi, B. H.; Choi, J. H.

    2010-09-15

    This document serves as a technical guideline for the education and training of beginners who engage in sodium facility operation and R&D activities for the first time. The guideline covers the following technical areas: general properties of sodium; sodium handling technology; sodium fire and fire fighting; and material safety data sheets (MSDS).

  3. Biodiesel Handling and Use Guide (Fifth Edition)

    Energy Technology Data Exchange (ETDEWEB)

    Alleman, Teresa L.; McCormick, Robert L.; Christensen, Earl D.; Fioroni, Gina; Moriarty, Kristi; Yanowitz, Janet

    2016-11-08

    This document is a guide for those who blend, distribute, and use biodiesel and biodiesel blends. It provides basic information on the proper and safe use of biodiesel and biodiesel blends in engines and boilers, and is intended to help fleets, individual users, blenders, distributors, and those involved in related activities understand procedures for handling and using biodiesel fuels.

  4. [Cutting and incision tools: the scalpel: handles].

    Science.gov (United States)

    Illana Esteban, Emilio

    2006-10-01

    In its current form, the scalpel is the best-known cutting tool. It comes with a versatile metallic handle, available in a variety of models with differentiated characteristics. With a few simple movements, all models allow multiple cutting blades to be fitted and removed.

  5. 7 CFR 915.10 - Handle.

    Science.gov (United States)

    2010-01-01

    ... 7 Agriculture 8 2010-01-01 2010-01-01 false Handle. 915.10 Section 915.10 Agriculture Regulations of the Department of Agriculture (Continued) AGRICULTURAL MARKETING SERVICE (Marketing Agreements and Orders; Fruits, Vegetables, Nuts), DEPARTMENT OF AGRICULTURE AVOCADOS GROWN IN SOUTH FLORIDA Order...

  6. 7 CFR 1219.11 - Handle.

    Science.gov (United States)

    2010-01-01

    ... 7 Agriculture 10 2010-01-01 2010-01-01 false Handle. 1219.11 Section 1219.11 Agriculture Regulations of the Department of Agriculture (Continued) AGRICULTURAL MARKETING SERVICE (MARKETING AGREEMENTS AND ORDERS; MISCELLANEOUS COMMODITIES), DEPARTMENT OF AGRICULTURE HASS AVOCADO PROMOTION, RESEARCH...

  7. Australia: round module handling and cotton classing

    Science.gov (United States)

    Round modules of seed cotton produced via on-board module building harvesters are the reality of the cotton industry, worldwide. Although round modules have been available to the industry for almost a decade, there is still no consensus on the best method to handle the modules, particularly when th...

  8. 7 CFR 956.8 - Handle.

    Science.gov (United States)

    2010-01-01

    ... 7 Agriculture 8 2010-01-01 2010-01-01 false Handle. 956.8 Section 956.8 Agriculture Regulations of the Department of Agriculture (Continued) AGRICULTURAL MARKETING SERVICE (Marketing Agreements and... the production area and any point outside thereof. Such term shall not include the transportation...

  9. 7 CFR 959.7 - Handle.

    Science.gov (United States)

    2010-01-01

    ... 7 Agriculture 8 2010-01-01 2010-01-01 false Handle. 959.7 Section 959.7 Agriculture Regulations of the Department of Agriculture (Continued) AGRICULTURAL MARKETING SERVICE (Marketing Agreements and... area and any point outside thereof. Such term shall not include the transportation, sale, or delivery...

  10. 7 CFR 946.7 - Handle.

    Science.gov (United States)

    2010-01-01

    ... 7 Agriculture 8 2010-01-01 2010-01-01 false Handle. 946.7 Section 946.7 Agriculture Regulations of the Department of Agriculture (Continued) AGRICULTURAL MARKETING SERVICE (Marketing Agreements and...: Provided, That, the definition of “handle” shall not include the transportation of ungraded potatoes within...

  11. 7 CFR 929.10 - Handle.

    Science.gov (United States)

    2010-01-01

    ... 7 Agriculture 8 2010-01-01 2010-01-01 false Handle. 929.10 Section 929.10 Agriculture Regulations of the Department of Agriculture (Continued) AGRICULTURAL MARKETING SERVICE (Marketing Agreements and... transportation of cranberries from the bog where grown to a packing or processing facility located within the...

  12. 7 CFR 955.7 - Handle.

    Science.gov (United States)

    2010-01-01

    ... 7 Agriculture 8 2010-01-01 2010-01-01 false Handle. 955.7 Section 955.7 Agriculture Regulations of the Department of Agriculture (Continued) AGRICULTURAL MARKETING SERVICE (Marketing Agreements and... transportation, sale, or delivery of field-run Vidalia onions to a person within the production area for the...

  13. 7 CFR 1210.307 - Handle.

    Science.gov (United States)

    2010-01-01

    ... 7 Agriculture 10 2010-01-01 2010-01-01 false Handle. 1210.307 Section 1210.307 Agriculture Regulations of the Department of Agriculture (Continued) AGRICULTURAL MARKETING SERVICE (MARKETING AGREEMENTS... transportation or delivery of field run watermelons by the producer thereof to a handler for grading, sizing or...

  14. 7 CFR 985.8 - Handle.

    Science.gov (United States)

    2010-01-01

    ... the Department of Agriculture (Continued) AGRICULTURAL MARKETING SERVICE (Marketing Agreements and Orders; Fruits, Vegetables, Nuts), DEPARTMENT OF AGRICULTURE MARKETING ORDER REGULATING THE HANDLING OF...) The sale or transportation of salable oil by a producer to a handler of record within the production...

  15. 7 CFR 922.13 - Handle.

    Science.gov (United States)

    2010-01-01

    ... 7 Agriculture 8 2010-01-01 2010-01-01 false Handle. 922.13 Section 922.13 Agriculture Regulations of the Department of Agriculture (Continued) AGRICULTURAL MARKETING SERVICE (Marketing Agreements and... outside thereof: Provided, That the term “handle” shall not include the transportation within the...

  16. 7 CFR 947.7 - Handle.

    Science.gov (United States)

    2010-01-01

    ... 7 Agriculture 8 2010-01-01 2010-01-01 false Handle. 947.7 Section 947.7 Agriculture Regulations of the Department of Agriculture (Continued) AGRICULTURAL MARKETING SERVICE (Marketing Agreements and... definition of “handle” shall not include the transportation of ungraded potatoes within the district where...

  17. 7 CFR 925.10 - Handle.

    Science.gov (United States)

    2010-01-01

    ... 7 Agriculture 8 2010-01-01 2010-01-01 false Handle. 925.10 Section 925.10 Agriculture Regulations of the Department of Agriculture (Continued) AGRICULTURAL MARKETING SERVICE (Marketing Agreements and... regulations are effective pursuant to § 925.52(a)(5) shall not include the transportation or delivery of...

  18. 7 CFR 924.13 - Handle.

    Science.gov (United States)

    2010-01-01

    ... 7 Agriculture 8 2010-01-01 2010-01-01 false Handle. 924.13 Section 924.13 Agriculture Regulations of the Department of Agriculture (Continued) AGRICULTURAL MARKETING SERVICE (Marketing Agreements and... transportation within the production area of prunes from the orchard where grown to a packing facility located...

  19. 7 CFR 920.11 - Handle.

    Science.gov (United States)

    2010-01-01

    ... 7 Agriculture 8 2010-01-01 2010-01-01 false Handle. 920.11 Section 920.11 Agriculture Regulations of the Department of Agriculture (Continued) AGRICULTURAL MARKETING SERVICE (Marketing Agreements and... not include the sale of kiwifruit on the vine, the transportation within the production area of...

  20. 7 CFR 916.11 - Handle.

    Science.gov (United States)

    2010-01-01

    ... 7 Agriculture 8 2010-01-01 2010-01-01 false Handle. 916.11 Section 916.11 Agriculture Regulations of the Department of Agriculture (Continued) AGRICULTURAL MARKETING SERVICE (Marketing Agreements and... transportation within the production area of nectarines from the orchard where grown to a packing facility...

  1. 7 CFR 981.16 - To handle.

    Science.gov (United States)

    2010-01-01

    ... 7 Agriculture 8 2010-01-01 2010-01-01 false To handle. 981.16 Section 981.16 Agriculture Regulations of the Department of Agriculture (Continued) AGRICULTURAL MARKETING SERVICE (Marketing Agreements... production or to sell, consign, transport, ship (except as a common carrier of almonds owned by another) or...

  2. 7 CFR 1205.312 - Handle.

    Science.gov (United States)

    2010-01-01

    ... 7 Agriculture 10 2010-01-01 2010-01-01 false Handle. 1205.312 Section 1205.312 Agriculture Regulations of the Department of Agriculture (Continued) AGRICULTURAL MARKETING SERVICE (MARKETING AGREEMENTS..., compress, purchase, market, transport, or otherwise acquire ownership or control of cotton. ...

  3. 7 CFR 987.9 - Handle.

    Science.gov (United States)

    2010-01-01

    ... 7 Agriculture 8 2010-01-01 2010-01-01 false Handle. 987.9 Section 987.9 Agriculture Regulations of the Department of Agriculture (Continued) AGRICULTURAL MARKETING SERVICE (Marketing Agreements and..., consign, transport, or ship (except as a common or contract carrier of dates owned by another person) or...

  4. Guidance Counsellor Strategies for Handling Bullying

    Science.gov (United States)

    Power-Elliott, Michleen; Harris, Gregory E.

    2012-01-01

    The purpose of this exploratory-descriptive study was to examine how guidance counsellors in the province of Newfoundland and Labrador would handle a specific verbal-relational bullying incident. Also of interest was guidance counsellor involvement and training in bullying programmes and Positive Behaviour Supports. Data for this study was…

  5. Exploring Reflective Means to Handle Plagiarism

    Science.gov (United States)

    Dalal, Nikunj

    2016-01-01

    Plagiarism has become widespread in the university teaching environment. This article presents practical wisdom from several years of experience handling plagiarism in two Information Systems (IS) courses with the exploratory use of reflective means such as dialogues and essays. There has been very little work on the use of reflective approaches…

  6. Instrumentation to handle thermal polarized neutron beams

    NARCIS (Netherlands)

    Kraan, W.H.

    2004-01-01

    In this thesis we investigate devices needed to handle the polarization of thermal neutron beams: π/2-flippers (to start/stop Larmor precession) and π-flippers (to reverse polarization/precession direction) and illustrate how these devices are used to investigate the properties of matter and of the

  7. Confluence Modulo Equivalence in Constraint Handling Rules

    DEFF Research Database (Denmark)

    Christiansen, Henning; Kirkeby, Maja Hanne

    2015-01-01

    Previous results on confluence for Constraint Handling Rules, CHR, are generalized to take into account user-defined state equivalence relations. This allows a much larger class of programs to enjoy the advantages of confluence, which include various optimization techniques and simplified...

  8. 29 CFR 1926.953 - Material handling.

    Science.gov (United States)

    2010-07-01

    ... Material handling. (a) Unloading. Prior to unloading steel, poles, cross arms and similar material, the... stored in temporary containers other than those required in § 1926.152, such as pillow tanks. (f) Framing. During framing operations, employees shall not work under a pole or a structure suspended by a crane, A...

  9. How marketers handled deliveries last winter

    Energy Technology Data Exchange (ETDEWEB)

    1984-10-01

    A special study on how fuel oil marketers handled deliveries last winter is presented. A questionnaire was sent to the marketers asking how many fuel oil trucks they had, how penalties for small deliveries are assessed, and if many customers are calling for a summer fill. The results of the questionnaire are presented.

  10. 9 CFR 3.19 - Handling.

    Science.gov (United States)

    2010-01-01

    ... WELFARE STANDARDS Specifications for the Humane Handling, Care, Treatment, and Transportation of Dogs and... sunlight and extreme heat. Sufficient shade must be provided to protect the dog or cat from the direct rays... provided to allow the dogs and cats to remain dry during rain, snow, and other precipitation. (3) Shelter...

  11. 9 CFR 3.92 - Handling.

    Science.gov (United States)

    2010-01-01

    ... WELFARE STANDARDS Specifications for the Humane Handling, Care, Treatment, and Transportation of Nonhuman...: (1) Shelter from sunlight and extreme heat. Sufficient shade must be provided to protect the nonhuman... nonhuman primates to remain dry during rain, snow, and other precipitation. (3) Shelter from cold...

  12. 9 CFR 3.41 - Handling.

    Science.gov (United States)

    2010-01-01

    ... WELFARE STANDARDS Specifications for the Humane Handling, Care, Treatment, and Transportation of Guinea... Welfare regulations and who moves live guinea pigs or hamsters from an animal holding area of a terminal... area of a terminal facility or transporting any live guinea pig or hamster to or from a terminal...

  13. Laboratory rearing and handling of cerambycids

    Science.gov (United States)

    Melody A. Keena

    2017-01-01

    Lack of suitable rearing and handling techniques has hampered research on the biology and control of many species of cerambycids that feed on host species of economic importance. Furthermore, because cerambycids spend most or all of their pre-adult life cycle inside the host plant, the biology of many is not well-known and would be dif

  14. 7 CFR 1207.307 - Handle.

    Science.gov (United States)

    2010-01-01

    ... 7 Agriculture 10 2010-01-01 2010-01-01 false Handle. 1207.307 Section 1207.307 Agriculture Regulations of the Department of Agriculture (Continued) AGRICULTURAL MARKETING SERVICE (MARKETING AGREEMENTS AND ORDERS; MISCELLANEOUS COMMODITIES), DEPARTMENT OF AGRICULTURE POTATO RESEARCH AND PROMOTION PLAN...

  15. Prioritising the prevention of medication handling errors.

    Science.gov (United States)

    Bertsche, Thilo; Niemann, Dorothee; Mayer, Yvonne; Ingram, Katrin; Hoppe-Tichy, Torsten; Haefeli, Walter E

    2008-12-01

    Medication errors are frequent in a hospital setting and often caused by inappropriate drug handling. Systematic strategies for their prevention however are still lacking. We developed and applied a classification model to categorise medication handling errors and defined the urgency of correction on the basis of these findings. Nurses on medical wards (including intensive and intermediate care units) of a 1,680-bed teaching hospital. In a prospective observational study we evaluated the prevalence of 20 predefined medication handling errors on the ward. In a concurrent questionnaire survey, we assessed the knowledge of the nurses on medication handling. The severity of errors observed in individual areas was scored considering prevalence, potential risk of an error, and the involved drug. These scores and the prevalence of corresponding knowledge deficits were used to define the urgency of preventive strategies according to a four-field decision matrix. Prevalence and potential risk of medication handling errors, corresponding knowledge deficits in nurses committing the errors, and priority of quality improvement. In 1,376 observed processes 833 medication handling errors were detected. Errors concerning preparation (mean 0.88 errors per observed process [95% CI: 0.81-0.96], N = 645) were more frequent than administration errors (0.36 [0.32-0.41], N = 701, P errors than enteral drugs (0.32 [0.28-0.36], N = 794, P medication errors 30.9% concerned processes of high risk, 19.0% of moderate risk, and 50.1% of low risk. Of these errors 11.4% were caused by critical dose drugs, 81.6% by uncomplicated drugs, and 6.9% by nutritional supplements or diluents without active ingredient. According to the decision matrix that also considered knowledge deficits two error types concerning enteral drugs (flaws in light protection and prescribing information) were given maximum priority for quality improvement. For parenteral drugs five errors (incompatibilities, flaws in hygiene

  16. Gini covariance matrix and its affine equivariant version

    Science.gov (United States)

    Weatherall, Lauren Anne

    Gini's mean difference (GMD) and its derivatives such as the Gini index have been widely used as alternative measures of variability for over a century in many research fields, especially in finance, economics and social welfare. In this dissertation, we generalize the univariate GMD to the multivariate case and propose a new covariance matrix, the so-called Gini covariance matrix (GCM). The extension is natural, being based on the covariance representation of GMD with the notion of a multivariate spatial rank function. In order to gain the affine equivariance property for GCM, we utilize the transformation-retransformation (TR) technique and obtain the TR version of GCM, which turns out to be a symmetrized M-functional. Indeed, both GCMs are symmetrized approaches based on the difference of two independent variables without reference to a location, hence avoiding an arbitrary definition of location for non-symmetric distributions. We study the properties of both GCMs. They possess the so-called independence property, which is highly important, for example, in independent component analysis. Influence functions of the two GCMs are derived to assess their robustness. They are found to be more robust than the regular covariance matrix but less robust than the Tyler and Dümbgen M-functionals. Under elliptical distributions, the relationship between the scatter parameter and the two GCMs is obtained. With this relationship, principal component analysis (PCA) based on GCM is possible. Estimation of the two GCMs is presented. We study the asymptotic behavior of the estimators. √n-consistency and asymptotic normality of the estimators are established. The asymptotic relative efficiency (ARE) of the TR-GCM estimator with respect to the sample covariance matrix is compared to that of the Tyler and Dümbgen M-estimators. With little loss on efficiency (UCI machine learning repository. Relying on some graphical and numerical summaries, Gini-based PCA demonstrates its competitive performance.
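
    For concreteness, the univariate building block that the dissertation generalizes is Gini's mean difference, GMD = E|X1 - X2| for two independent copies of X. A small sketch (for a normal variable the population value is 2σ/√π ≈ 1.128σ):

      import numpy as np

      def gini_mean_difference(x):
          """Pairwise estimate of GMD = E|X1 - X2|."""
          x = np.asarray(x, dtype=float)
          n = len(x)
          return np.abs(x[:, None] - x[None, :]).sum() / (n * (n - 1))

      rng = np.random.default_rng(3)
      print(gini_mean_difference(rng.normal(0.0, 1.0, 1000)))  # close to 1.128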

  17. Interpolation of missing data in image sequences.

    Science.gov (United States)

    Kokaram, A C; Morris, R D; Fitzgerald, W J; Rayner, P W

    1995-01-01

    This paper presents a number of model based interpolation schemes tailored to the problem of interpolating missing regions in image sequences. These missing regions may be of arbitrary size and of random, but known, location. This problem occurs regularly with archived film material. The film is abraded or obscured in patches, giving rise to bright and dark flashes, known as "dirt and sparkle" in the motion picture industry. Both 3-D autoregressive models and 3-D Markov random fields are considered in the formulation of the different reconstruction processes. The models act along motion directions estimated using a multiresolution block matching scheme. It is possible to address this sort of impulsive noise suppression problem with median filters, and comparisons with earlier work using multilevel median filters are performed. These comparisons demonstrate the higher reconstruction fidelity of the new interpolators.
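
    As a baseline for the model-based interpolators above, missing ("dirt and sparkle") pixels can be filled from temporal neighbours. The sketch below is a crude, motion-free stand-in for the motion-compensated median and autoregressive interpolators discussed in the paper:

      import numpy as np

      def fill_missing_temporal_median(frames, missing_mask):
          """Replace flagged pixels by the median of the co-located pixels
          in the adjacent frames (no motion compensation)."""
          filled = frames.copy()
          T = len(frames)
          for t in range(T):
              neighbours = [frames[s] for s in (t - 1, t + 1) if 0 <= s < T]
              med = np.median(np.stack(neighbours), axis=0)
              filled[t][missing_mask[t]] = med[missing_mask[t]]
          return filled

      rng = np.random.default_rng(4)
      frames = rng.random((5, 32, 32))              # a short grey-level sequence
      mask = rng.random((5, 32, 32)) < 0.02         # 2% "dirt and sparkle"
      restored = fill_missing_temporal_median(frames, mask)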

  18. Amplification of DNA mixtures - Missing data approach

    DEFF Research Database (Denmark)

    Tvedebrink, Torben; Eriksen, Poul Svante; Mogensen, Helle Smidt

    2008-01-01

    This paper presents a model for the interpretation of results of STR typing of DNA mixtures based on a multivariate normal distribution of peak areas. From previous analyses of controlled experiments with mixed DNA samples, we exploit the linear relationship between peak heights and peak areas...... DNA samples, it is only possible to observe the cumulative peak heights and areas. Complying with this latent structure, we use the EM-algorithm to impute the missing variables based on a compound symmetry model. That is the measurements are subject to intra- and inter-loci correlations not depending...... on the actual alleles of the DNA profiles. Due to factorization of the likelihood, properties of the normal distribution and use of auxiliary variables, an ordinary implementation of the EM-algorithm solves the missing data problem. We estimate the parameters in the model based on a training data set. In order...
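
    The generic shape of the E-step used in such EM schemes is the conditional expectation of the missing coordinates given the observed ones under a multivariate normal model. The sketch below iterates that step with moment re-estimation; it is a simplified illustration only (it omits the conditional-covariance correction and the compound-symmetry constraint of the paper's model):

      import numpy as np

      def em_impute(X, n_iter=50):
          """Iteratively impute NaNs under a multivariate normal model."""
          X, miss = X.copy(), np.isnan(X)
          col_means = np.nanmean(X, axis=0)
          X[miss] = np.take(col_means, np.where(miss)[1])   # crude start
          for _ in range(n_iter):
              mu = X.mean(axis=0)
              S = np.cov(X, rowvar=False)
              for i in range(len(X)):
                  m = miss[i]
                  if not m.any():
                      continue
                  o = ~m
                  # E-step: conditional mean of missing given observed
                  X[i, m] = mu[m] + S[np.ix_(m, o)] @ np.linalg.solve(
                      S[np.ix_(o, o)], X[i, o] - mu[o])
          return X

      rng = np.random.default_rng(5)
      cov = np.array([[1.0, 0.5, 0.3], [0.5, 1.0, 0.4], [0.3, 0.4, 1.0]])
      Z = rng.multivariate_normal(np.zeros(3), cov, size=200)
      Z[rng.random(Z.shape) < 0.1] = np.nan         # 10% missing at random
      Z_completed = em_impute(Z)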

  19. iLife '05 The Missing Manual

    CERN Document Server

    Pogue, David

    2005-01-01

    The incomparable iLife '05 is the must-have multimedia suite for everyone who owns a Mac--and the envy of everyone who doesn't. iLife '05: The Missing Manual is the definitive iLife '05 book--and what should have come with the suite. There's no better guide to your iLife experience than the #1 bestselling Macintosh author and expert--and Missing Manual series creator--David Pogue. Totally objective and utterly in-the-know, Pogue highlights the newest features, changes, and improvements of iLife '05, covers the capabilities and limitations of each program within the suite, and delivers count

  20. Filtering remotely sensed chlorophyll concentrations in the Red Sea using a space-time covariance model and a Kalman filter

    KAUST Repository

    Dreano, Denis

    2015-04-27

    A statistical model is proposed to filter satellite-derived chlorophyll concentration from the Red Sea, and to predict future chlorophyll concentrations. The seasonal trend is first estimated after filling missing chlorophyll data using an Empirical Orthogonal Function (EOF)-based algorithm (Data Interpolation EOF). The anomalies are then modeled as a stationary Gaussian process. A method proposed by Gneiting (2002) is used to construct positive-definite space-time covariance models for this process. After choosing an appropriate statistical model and identifying its parameters, Kriging is applied in the space-time domain to make a one step ahead prediction of the anomalies. The latter serves as the prediction model of a reduced-order Kalman filter, which is applied to assimilate and predict future chlorophyll concentrations. The proposed method decreases the root mean square (RMS) prediction error by about 11% compared with the seasonal average.
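
    The reduced-order Kalman filter in this pipeline follows the usual predict/update cycle, with the kriging one-step-ahead prediction acting as the forecast model. A generic linear Kalman step (all operators and covariances below are placeholders, not the paper's actual reduced-order quantities):

      import numpy as np

      def kalman_step(x, P, y, M, H, Q, R):
          """One predict/update cycle of a linear Kalman filter."""
          x_f = M @ x                        # forecast mean
          P_f = M @ P @ M.T + Q              # forecast covariance
          S = H @ P_f @ H.T + R              # innovation covariance
          K = P_f @ H.T @ np.linalg.inv(S)   # Kalman gain
          x_a = x_f + K @ (y - H @ x_f)      # analysis mean
          P_a = (np.eye(len(x)) - K @ H) @ P_f
          return x_a, P_a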

  1. Robust parametric indirect estimates of the expected cost of a hospital stay with covariates and censored data.

    Science.gov (United States)

    Locatelli, Isabella; Marazzi, Alfio

    2013-06-30

    We consider the problem of estimating the mean hospital cost of stays of a class of patients (e.g., a diagnosis-related group) as a function of patient characteristics. The statistical analysis is complicated by the asymmetry of the cost distribution, the possibility of censoring on the cost variable, and the occurrence of outliers. These problems have often been treated separately in the literature, and a method offering a joint solution to all of them is still missing. Indirect procedures have been proposed, combining an estimate of the duration distribution with an estimate of the conditional cost for a given duration. We propose a parametric version of this approach, allowing for asymmetry and censoring in the cost distribution and providing a mean cost estimator that is robust in the presence of extreme values. In addition, the new method takes covariate information into account. Copyright © 2012 John Wiley & Sons, Ltd.

  2. Anticipating missing reference standard data when planning diagnostic accuracy studies

    NARCIS (Netherlands)

    Naaktgeboren, Christiana A; de Groot, Joris A H; Rutjes, Anne W S; Bossuyt, Patrick M M; Reitsma, Johannes B; Moons, Karel G M

    2016-01-01

    Results obtained using a reference standard may be missing for some participants in diagnostic accuracy studies. This paper looks at methods for dealing with such missing data when designing or conducting a prospective diagnostic accuracy study.

  3. Gastric cancer missed at endoscopy | Gado | Alexandria Journal of ...

    African Journals Online (AJOL)

    with biopsies) is the gold standard for its diagnosis but missed oesophageal and gastric cancers are not infrequent in patients who have undergone previous endoscopy. Errors by the endoscopist account for the majority of these missed lesions.

  4. Missing solution in a Cornell potential

    Energy Technology Data Exchange (ETDEWEB)

    Castro, L.B., E-mail: luis.castro@pgfsc.ufsc.br [Departamento de Física, CFM, Universidade Federal de Santa Catarina, 88040-900, Florianópolis - SC (Brazil); Castro, A.S. de, E-mail: castro@pq.cnpq.br [Departamento de Física e Química, Campus de Guaratinguetá, Universidade Estadual Paulista, 12516-410, Guaratinguetá - SP (Brazil)

    2013-11-15

    Missing bound-state solutions for fermions in the background of a Cornell potential consisting of a mixed scalar–vector–pseudoscalar coupling are examined. Charge-conjugation operation, degeneracy and localization are discussed. Highlights: the Dirac equation with a scalar–vector–pseudoscalar Cornell potential is investigated; the isolated solution from the Sturm–Liouville problem is found; charge-conjugation operation, degeneracy and localization are discussed.

  5. iWork '09 The Missing Manual

    CERN Document Server

    Clark, Josh

    2009-01-01

    With iWork '09: The Missing Manual, you'll quickly learn everything you need to know about Apple's incredible productivity programs, including the Pages word-processor, the Numbers spreadsheet, and the Keynote presentation program that Al Gore and Steve Jobs made famous. This book gives you crystal-clear and jargon-free explanations of iWork's capabilities, advantages, and limitations to help you produce stunning documents and cinema-quality digital presentations in no time.

  6. Missing and forbidden links in mutualistic networks.

    Science.gov (United States)

    Olesen, Jens M; Bascompte, Jordi; Dupont, Yoko L; Elberling, Heidi; Rasmussen, Claus; Jordano, Pedro

    2011-03-07

    Ecological networks are complexes of interacting species, but not all potential links among species are realized. Unobserved links are either missing or forbidden. Missing links exist, but require more sampling or alternative ways of detection to be verified. Forbidden links remain unobservable, irrespective of sampling effort. They are caused by linkage constraints. We studied one Arctic pollination network and two Mediterranean seed-dispersal networks. In the first, for example, we recorded flower-visit links for one full season, arranged data in an interaction matrix and got a connectance C of 15 per cent. Interaction accumulation curves documented our sampling of interactions through observation of visits to be robust. Then, we included data on pollen from the body surface of flower visitors as an additional link 'currency'. This resulted in 98 new links, missing from the visitation data. Thus, the combined visit-pollen matrix got an increased C of 20 per cent. For the three networks, C ranged from 20 to 52 per cent, and thus the percentage of unobserved links (100 - C) was 48 to 80 per cent; these were assumed forbidden because of linkage constraints and not missing because of under-sampling. Phenological uncoupling (i.e. non-overlapping phenophases between interacting mutualists) is one kind of constraint, and it explained 22 to 28 per cent of all possible, but unobserved links. Increasing phenophase overlap between species increased link probability, but extensive overlaps were required to achieve a high probability. Other kinds of constraint, such as size mismatch and accessibility limitations, are briefly addressed.
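
    The connectance figures quoted above follow directly from the interaction matrix: C is the number of realised links divided by the number of possible links. A toy example:

      import numpy as np

      # rows = plant species, columns = animal species; 1 = observed link
      A = np.array([[1, 0, 1, 0],
                    [0, 1, 0, 0],
                    [1, 1, 0, 1]])
      connectance = A.sum() / A.size        # realised / possible links
      print(f"C = {connectance:.0%}")       # 50% here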

  7. Monoclonal gammopathy missed by capillary zone electrophoresis

    OpenAIRE

    Schild, Christof; Egger, Florence; Kaelin-Lang, Alain; Nuoffer, Jean-Marc

    2017-01-01

    Background: Serum protein electrophoresis is used as a screening test for monoclonal gammopathies. Here, we present a case of a high-concentration monoclonal immunoglobulin (M-protein) that was missed by serum protein electrophoresis on a Capillarys 2 capillary zone electrophoresis system. The aim of our study was to identify the reason for the failure of the system to detect the M-protein. Methods: M-protein solubility was examined in response to temperature, pH, ionic strength, the chaotrop...

  8. Posttraumatic Stress Disorder: The Missed Diagnosis

    OpenAIRE

    Grasso, Damion; Boonsiri, Joseph; Lipschitz, Deborah; Guyer, Amanda; Houshyar, Shadi; Douglas-Palumberi, Heather; Massey, Johari; Kaufman, Joan

    2009-01-01

    Posttraumatic stress disorder (PTSD) is frequently under-diagnosed in maltreated samples. Protective services information is critical for obtaining complete trauma histories and determining whether to survey PTSD symptoms in maltreated children. In the current study, without protective services information to supplement parent and child report, the diagnosis of PTSD was missed in a significant proportion of the cases. Collaboration between mental health professionals and protective service wo...

  9. Photoshop Elements 6 The Missing Manual

    CERN Document Server

    Brundage, Barbara

    2009-01-01

    With Photoshop Elements 6, the most popular photo-editing program on Earth just keeps getting better. It's perfect for scrapbooking, email-ready slideshows, Web galleries, you name it. But knowing what to do and when is tricky. That's why our Missing Manual is the bestselling book on the topic. This fully revised guide explains not only how the tools and commands work, but when to use them.

  10. Office 2008 for Macintosh The Missing Manual

    CERN Document Server

    Elferdink, Jim

    2008-01-01

    Though Office 2008 has been improved to take advantage of the latest Mac OS X features, you don't get a single page of printed instructions to guide you through the changes. Office 2008 for Macintosh: The Missing Manual gives you the friendly and thorough introduction you need, whether you're a beginner who can't do more than point and click, or a power user who's ready for a few advanced techniques.

  11. MOLDOVA: MISSED ADVANTAGES OF EURASIAN INTEGRATION

    Directory of Open Access Journals (Sweden)

    Ludmila Vasiljevna Fokina

    2015-01-01

    The article is devoted to the potentially missed advantages of Eurasian integration (EAEU) for Moldova. Special attention is given to the branches in which the country could gain evident advantages, including agriculture, power engineering, and external trade ties with the EAEU countries. Possible positive effects of Eurasian integration on the solution of the Transnistrian problem, in the sphere of labour migration, and in other fields are also shown.

  12. Holt-Winters Method with Missing Observations

    OpenAIRE

    Tomá\\v{s} Cipra; José Trujillo; Asunción Robio

    1995-01-01

    The paper presents a simple procedure for interpolating, smoothing, and predicting in seasonal time series with missing observations. The approach suggested by Wright (Wright, D. J. 1986. Forecasting data published at irregular time intervals using extension of Holt's method. Management Sci. 32 499--510.) for Holt's method with nonseasonal data published at irregular time intervals is extended to the Holt-Winters method in the seasonal case. Numerical examples demonstrate the procedure.
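
    The core idea — when an observation is missing, propagate the smoothed components and skip the correction — can be sketched as follows for additive Holt-Winters in error-correction form (this captures the spirit of the extension, not the paper's exact irregular-interval recursions):

      import numpy as np

      def holt_winters_missing(y, alpha, beta, gamma, m):
          """Additive Holt-Winters (error-correction form) that simply
          propagates the components when an observation is NaN."""
          level, trend = np.nanmean(y[:m]), 0.0
          season = np.zeros(m)
          fitted = np.full(len(y), np.nan)
          for t in range(len(y)):
              s = season[t % m]
              fitted[t] = level + trend + s
              if np.isnan(y[t]):
                  level += trend                   # pure propagation
                  continue
              err = y[t] - fitted[t]
              level += trend + alpha * err
              trend += alpha * beta * err
              season[t % m] = s + gamma * (1 - alpha) * err
          return fitted

      t = np.arange(48, dtype=float)
      y = 10 + 0.1 * t + 2 * np.sin(2 * np.pi * t / 12)
      y[[5, 17, 18, 30]] = np.nan                  # missing observations
      fitted = holt_winters_missing(y, alpha=0.3, beta=0.1, gamma=0.2, m=12)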

  13. Gaussian regression and power spectral density estimation with missing data: The MICROSCOPE space mission as a case study

    CERN Document Server

    Baghi, Quentin; Bergé, Joël; Christophe, Bruno; Touboul, Pierre; Rodrigues, Manuel

    2016-01-01

    We present a Gaussian regression method for time series with missing data and stationary residuals of unknown power spectral density (PSD). The missing data are efficiently estimated by their conditional expectation as in universal Kriging, based on the circulant approximation of the complete data covariance. After initialization with an autoregressive fit of the noise, a few iterations of estimation/reconstruction steps are performed until convergence of the regression and PSD estimates, in a way similar to the expectation-conditional-maximization algorithm. The estimation can be performed for an arbitrary PSD provided that it is sufficiently smooth. The algorithm is developed in the framework of the MICROSCOPE space mission whose goal is to test the weak equivalence principle (WEP) with a precision of $10^{-15}$. We show by numerical simulations that the developed method allows us to meet three major requirements: to maintain the targeted precision of the WEP test in spite of the loss of data, to calculate a...
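
    The conditional-expectation (universal-Kriging-type) reconstruction of the missing samples has a closed form under the Gaussian model: E[y_m | y_o] = μ + C_mo C_oo^{-1} (y_o - μ). A dense-matrix sketch with a toy covariance (the paper instead exploits a circulant approximation for speed):

      import numpy as np

      def conditional_expectation(y, missing, C, mu=0.0):
          """E[y_miss | y_obs] for a Gaussian vector with covariance C."""
          o, m = ~missing, missing
          return mu + C[np.ix_(m, o)] @ np.linalg.solve(
              C[np.ix_(o, o)], y[o] - mu)

      n = 200
      lag = np.abs(np.subtract.outer(np.arange(n), np.arange(n)))
      C = np.exp(-lag / 20.0)                      # toy stationary covariance

      rng = np.random.default_rng(6)
      y = rng.multivariate_normal(np.zeros(n), C)
      missing = rng.random(n) < 0.2                # 20% gaps
      y_filled = y.copy()
      y_filled[missing] = conditional_expectation(y, missing, C)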

  14. Missing teeth and pediatric obstructive sleep apnea.

    Science.gov (United States)

    Guilleminault, Christian; Abad, Vivien C; Chiu, Hsiao-Yean; Peters, Brandon; Quo, Stacey

    2016-05-01

    Missing teeth in early childhood can result in abnormal facial morphology with narrow upper airway. The potential association between dental agenesis or early dental extractions and the presence of obstructive sleep apnea (OSA) was investigated. We reviewed clinical data, results of polysomnographic sleep studies, and orthodontic imaging studies of children with dental agenesis (n = 32) or early extraction of permanent teeth (n = 11) seen during the past 5 years and compared their findings to those of age-, gender-, and body mass index-matched children with normal teeth development but tonsilloadenoid (T&A) hypertrophy and symptoms of OSA (n = 64). The 31 children with dental agenesis and 11 children with early dental extractions had at least 2 permanent teeth missing. All children with missing teeth (n = 43) had clinical complaints and signs evoking OSA. There was a significant difference in mean apnea-hypopnea indices (AHI) in the three dental agenesis, dental extraction, and T&A studied groups (p sleep, and presented with OSA recognized at a later age. Due to the low-grade initial symptomatology, sleep-disordered breathing may be left untreated for a prolonged period with progressive worsening of symptoms over time.

  15. Imputation-based strategies for clinical trial longitudinal data with nonignorable missing values

    Science.gov (United States)

    Yang, Xiaowei; Li, Jinhui; Shoptaw, Steven

    2011-01-01

    Biomedical research is plagued with problems of missing data, especially in clinical trials of medical and behavioral therapies adopting longitudinal design. After a literature review on modeling incomplete longitudinal data based on full-likelihood functions, this paper proposes a set of imputation-based strategies for implementing selection, pattern-mixture, and shared-parameter models for handling intermittent missing values and dropouts that are potentially nonignorable according to various criteria. Within the framework of multiple partial imputation, intermittent missing values are first imputed several times; then, each partially imputed data set is analyzed to deal with dropouts with or without further imputation. Depending on the choice of imputation model or measurement model, there exist various strategies that can be jointly applied to the same set of data to study the effect of treatment or intervention from multi-faceted perspectives. For illustration, the strategies were applied to a data set with continuous repeated measures from a smoking cessation clinical trial. PMID:18205247
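
    Whichever of the selection, pattern-mixture, or shared-parameter strategies is used, the m completed data sets are ultimately combined with Rubin's rules: the pooled estimate is the mean of the per-imputation estimates, and the total variance adds the between-imputation spread to the average within-imputation variance. A minimal sketch with made-up numbers:

      import numpy as np

      def rubin_pool(estimates, variances):
          """Pool m per-imputation estimates with Rubin's rules."""
          q = np.asarray(estimates, dtype=float)
          u = np.asarray(variances, dtype=float)
          m = len(q)
          q_bar = q.mean()                   # pooled point estimate
          w = u.mean()                       # within-imputation variance
          b = q.var(ddof=1)                  # between-imputation variance
          return q_bar, w + (1 + 1 / m) * b  # estimate, total variance

      est = [1.8, 2.1, 1.9, 2.3, 2.0]        # e.g. treatment effects, m = 5
      var = [0.20, 0.22, 0.19, 0.25, 0.21]   # their squared standard errors
      print(rubin_pool(est, var))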

  16. Miss rate of colorectal neoplastic polyps and risk factors for missed polyps in consecutive colonoscopies.

    Science.gov (United States)

    Kim, Nam Hee; Jung, Yoon Suk; Jeong, Woo Shin; Yang, Hyo-Joon; Park, Soo-Kyung; Choi, Kyuyong; Park, Dong Il

    2017-07-01

    Colonoscopic polypectomy is the best diagnostic and therapeutic tool to detect and prevent colorectal neoplasms. However, previous studies have reported that 17% to 28% of colorectal polyps are missed during colonoscopy. We investigated the miss rate of neoplastic polyps and the factors associated with missed polyps in quality-adjusted consecutive colonoscopies. We reviewed the medical records of patients who were found to have colorectal polyps at a medical examination center of the Kangbuk Samsung Hospital between March 2012 and February 2013. Patients who were referred to a single tertiary academic medical center and underwent colonoscopic polypectomy on the same day were enrolled in our study. The odds ratios (ORs) associated with polyp-related and patient-related factors were evaluated using logistic regression analyses. A total of 463 patients and 1,294 neoplastic polyps were analyzed. The miss rates for adenomas, advanced adenomas, and carcinomas were 24.1% (312/1,294), 1.2% (15/1,294), and 0% (0/1,294), respectively. Flat/sessile-shaped adenomas (adjusted OR, 3.62; 95% confidence interval [CI], 2.40-5.46) and smaller adenomas (adjusted OR, 5.63; 95% CI, 2.84-11.15 for ≤5 mm; adjusted OR, 3.18; 95% CI, 1.60-6.30 for 6-9 mm) were missed more frequently than pedunculated/sub-pedunculated and larger adenomas. In patients with 2 or more polyps detected during the first endoscopy, compared with only one, the risk of missing an additional polyp was significantly higher (adjusted OR, 2.37; 95% CI, 1.55-3.61 for 2-4 polyps; adjusted OR, 11.52; 95% CI, 4.61-28.79 for ≥5 polyps). One-quarter of neoplastic polyps were missed during colonoscopy. We encourage endoscopists to detect smaller and flat or sessile polyps by using an optimal withdrawal technique.
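    Adjusted ORs of this kind come from ordinary logistic regression on polyp-level predictors. The following is a minimal sketch on synthetic data with hypothetical variable names and codings (the real covariates are in the paper), showing that the point estimates and 95% CIs are exponentiated model coefficients.

        import numpy as np
        import pandas as pd
        import statsmodels.formula.api as smf

        rng = np.random.default_rng(0)
        df = pd.DataFrame({
            "missed":  rng.binomial(1, 0.24, 500),  # polyp missed at first exam
            "flat":    rng.binomial(1, 0.40, 500),  # flat/sessile vs pedunculated
            "size_mm": rng.integers(2, 15, 500),    # polyp size in mm
        })
        df["size_cat"] = pd.cut(df.size_mm, [0, 5, 9, 99],
                                labels=["le5mm", "6to9mm", "ge10mm"])

        fit = smf.logit("missed ~ flat + C(size_cat, Treatment('ge10mm'))",
                        data=df).fit(disp=0)
        ci = fit.conf_int()
        print(pd.DataFrame({"OR": np.exp(fit.params),
                            "CI_low": np.exp(ci[0]),
                            "CI_high": np.exp(ci[1])}))

    On this random data the ORs hover around 1; running the same computation on the study's real data yields the adjusted estimates quoted above.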

  17. Missed opportunities for immunisation at hospitals in the western Cape

    African Journals Online (AJOL)

    Opportunities to immunise children, and thereby improve vaccine coverage in the community, are missed. In February 1990, just before the national measles immunisation campaign, a study was carried out at 8 hospitals in the western Cape to determine the extent of missed opportunities for measles immunisation in children.

  18. Planned Missing Data Designs in Educational Psychology Research

    NARCIS (Netherlands)

    Rhemtulla, M.; Hancock, G.R.

    2016-01-01

    Although missing data are often viewed as a challenge for applied researchers, in fact missing data can be highly beneficial. Specifically, when the amount of missing data on specific variables is carefully controlled, a balance can be struck between statistical power and research costs, as the sketch below illustrates.
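    A common planned missing data scheme is the three-form design: every respondent answers a common item block X plus two of three rotating blocks A, B, C, so each rotating block is missing for a random third of the sample by design. A minimal sketch, with block names and uniform form assignment as illustrative assumptions:

        import numpy as np

        def assign_three_forms(n_respondents, seed=0):
            """Randomly assign each respondent one of the three forms of the
            classic three-form planned missing design; every form contains
            the common block X and omits exactly one rotating block."""
            forms = {1: ("X", "A", "B"),   # block C missing by design
                     2: ("X", "A", "C"),   # block B missing by design
                     3: ("X", "B", "C")}   # block A missing by design
            rng = np.random.default_rng(seed)
            return [forms[f] for f in rng.integers(1, 4, size=n_respondents)]

    Because the omitted blocks are missing completely at random, full-information maximum likelihood or multiple imputation can recover the complete covariance structure while each respondent answers only two-thirds of the rotating items.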

  19. Missed injury – decreasing morbidity and mortality: A literature review

    African Journals Online (AJOL)

    This brief literature review examines the causes of missed injury, the typical clinical pictures associated with missed injury, and techniques and procedures that help avoid missing injuries, in the light of the recent literature, while highlighting the cost implications for clinicians.

  20. Missing data in randomized clinical trials for weight loss: scope of the problem, state of the field, and performance of statistical methods.

    Directory of Open Access Journals (Sweden)

    Mai A Elobeid

    2009-08-01

    Dropouts and missing data are nearly ubiquitous in obesity randomized controlled trials, threatening the validity and generalizability of conclusions. Herein, we meta-analytically evaluate the extent of missing data, the frequency with which various analytic methods are employed to accommodate dropouts, and the performance of multiple statistical methods. We searched the PubMed and Cochrane databases (2000-2006) for articles published in English and manually searched bibliographic references. Articles on pharmaceutical randomized controlled trials with weight loss or weight gain prevention as major endpoints were included. Two authors independently reviewed each publication for inclusion; 121 articles met the inclusion criteria. Two authors independently extracted treatment, sample size, drop-out rates, study duration, and the statistical method used to handle missing data from all articles, and resolved disagreements by consensus. In the meta-analysis, drop-out rates were substantial, with the survival (non-dropout) rates being approximated by an exponential decay curve e^(-λt), where λ was estimated to be 0.0088 (95% bootstrap confidence interval: 0.0076 to 0.0100) and t represents time in weeks. The estimated drop-out rate at 1 year was 37%. Most studies used last observation carried forward as the primary analytic method to handle missing data. We also obtained 12 raw obesity randomized controlled trial datasets for empirical analyses. Analyses of the raw randomized controlled trial data suggested that both mixed models and multiple imputation performed well, but that multiple imputation may be more robust when missing data are extensive. Our analysis offers an equation for predicting dropout rates that is useful for future study planning. Our raw data analyses suggest that multiple imputation is better than other methods for handling missing data in obesity randomized controlled trials, followed closely by mixed models. We suggest these methods supplant last observation carried forward.
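    The fitted dropout equation can be applied directly for such planning. A small sketch evaluating the survival curve e^(-λt) at a few horizons; the 52-week value reproduces the 37% one-year dropout rate quoted above:

        import math

        lam = 0.0088                  # estimated weekly dropout hazard
        for t in (12, 26, 52):        # planning horizons in weeks
            print(t, round(1 - math.exp(-lam * t), 3))
        # 12 -> 0.1, 26 -> 0.205, 52 -> 0.367, i.e. ~37% dropout at one year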