Semiparametric approach for non-monotone missing covariates in a parametric regression model
Sinha, Samiran
2014-02-26
Missing covariate data often arise in biomedical studies, and analysis of such data that ignores subjects with incomplete information may lead to inefficient and possibly biased estimates. A great deal of attention has been paid to handling a single missing covariate or a monotone pattern of missing data when the missingness mechanism is missing at random. In this article, we propose a semiparametric method for handling non-monotone patterns of missing data. The proposed method relies on the assumption that the missingness mechanism of a variable does not depend on the missing variable itself but may depend on the other missing variables. This mechanism is somewhat less general than the completely non-ignorable mechanism but is sometimes more flexible than the missing at random mechanism, where the missingness mechanism is allowed to depend only on the completely observed variables. The proposed approach is robust to misspecification of the distribution of the missing covariates, and the proposed mechanism helps to nullify (or reduce) the problems due to non-identifiability that result from the non-ignorable missingness mechanism. The asymptotic properties of the proposed estimator are derived. Finite sample performance is assessed through simulation studies. Finally, for the purpose of illustration, we analyze an endometrial cancer dataset and a hip fracture dataset.
The prevention and handling of the missing data
Kang, Hyun
2013-01-01
Even in a well-designed and controlled study, missing data occurs in almost all research. Missing data can reduce the statistical power of a study and can produce biased estimates, leading to invalid conclusions. This manuscript reviews the problems and types of missing data, along with the techniques for handling missing data. The mechanisms by which missing data occurs are illustrated, and the methods for handling the missing data are discussed. The paper concludes with recommendations for ...
Missing continuous outcomes under covariate dependent missingness in cluster randomised trials.
Hossain, Anower; Diaz-Ordaz, Karla; Bartlett, Jonathan W
2017-06-01
Attrition is a common occurrence in cluster randomised trials, which leads to missing outcome data. Two approaches for analysing such trials are cluster-level analysis and individual-level analysis. This paper compares the performance of unadjusted cluster-level analysis, baseline covariate adjusted cluster-level analysis and linear mixed model analysis, under baseline covariate dependent missingness in continuous outcomes, in terms of bias, average estimated standard error and coverage probability. The methods of complete records analysis and multiple imputation are used to handle the missing outcome data. We considered four scenarios, with the missingness mechanism and baseline covariate effect on outcome either the same or different between intervention groups. We show that both unadjusted cluster-level analysis and baseline covariate adjusted cluster-level analysis give unbiased estimates of the intervention effect only if both intervention groups have the same missingness mechanisms and there is no interaction between baseline covariate and intervention group. Linear mixed model and multiple imputation give unbiased estimates under all four considered scenarios, provided that an interaction of intervention and baseline covariate is included in the model when appropriate. Cluster mean imputation has been proposed as a valid approach for handling missing outcomes in cluster randomised trials. We show that cluster mean imputation only gives unbiased estimates when the missingness mechanism is the same between the intervention groups and there is no interaction between baseline covariate and intervention group. Multiple imputation shows overcoverage for a small number of clusters in each intervention group.
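The unadjusted cluster-level analysis compared in this abstract reduces, in its simplest form, to a two-sample comparison of cluster means. The sketch below is an illustration of that idea under complete-records handling of missing outcomes, not the paper's exact analysis; the unpooled (Welch-style) standard error is an assumption of this sketch.

```python
from statistics import mean, stdev

def cluster_level_t(control_clusters, treatment_clusters):
    """Unadjusted cluster-level analysis: two-sample t statistic on
    cluster means.  Complete-records version: missing outcomes (None)
    are dropped before each cluster mean is formed."""
    def cluster_means(clusters):
        return [mean(x for x in c if x is not None) for c in clusters]

    a = cluster_means(control_clusters)
    b = cluster_means(treatment_clusters)
    diff = mean(b) - mean(a)
    # unpooled standard error of the difference in arm-level means
    se = (stdev(a) ** 2 / len(a) + stdev(b) ** 2 / len(b)) ** 0.5
    return diff, diff / se
```

Under covariate-dependent missingness this complete-records estimate is exactly where the bias described above can enter, which is the comparison the paper makes.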
Empirical Likelihood in Nonignorable Covariate-Missing Data Problems.
Xie, Yanmei; Zhang, Biao
2017-04-20
Missing covariate data occur often in regression analysis, arising frequently in the health and social sciences as well as in survey sampling. We study methods for the analysis of a nonignorable covariate-missing data problem in an assumed conditional mean function when some covariates are completely observed but other covariates are missing for some subjects. We adopt the semiparametric perspective of Bartlett et al. (Improving upon the efficiency of complete case analysis when covariates are MNAR. Biostatistics 2014;15:719-30) on regression analyses with nonignorable missing covariates, in which they introduced the use of two working models, the working probability model of missingness and the working conditional score model. In this paper, we study an empirical likelihood approach to nonignorable covariate-missing data problems with the objective of effectively utilizing the two working models in the analysis of covariate-missing data. We propose a unified approach to constructing a system of unbiased estimating equations, where there are more equations than unknown parameters of interest. One useful feature of these unbiased estimating equations is that they naturally incorporate the incomplete data into the data analysis, making it possible to seek efficient estimation of the parameter of interest even when the working regression function is not specified to be the optimal regression function. We apply the general methodology of empirical likelihood to optimally combine these unbiased estimating equations. We propose three maximum empirical likelihood estimators of the underlying regression parameters and compare their efficiencies with other existing competitors. We present a simulation study to compare the finite-sample performance of various methods with respect to bias, efficiency, and robustness to model misspecification. The proposed empirical likelihood method is also illustrated by an analysis of a data set from the US National Health and
Methods for Handling Missing Secondary Respondent Data
Young, Rebekah; Johnson, David
2013-01-01
Secondary respondent data are underutilized because researchers avoid using these data in the presence of substantial missing data. The authors reviewed, evaluated, and tested solutions to this problem. Five strategies of dealing with missing partner data were reviewed: (a) complete case analysis, (b) inverse probability weighting, (c) correction…
Handling missing data in ranked set sampling
Bouza-Herrera, Carlos N
2013-01-01
The existence of missing observations is a very important aspect to be considered in the application of survey sampling, for example. In human populations they may be caused by a refusal of some interviewees to give the true value for the variable of interest. Traditionally, simple random sampling is used to select samples. Most statistical models are supported by the use of samples selected by means of this design. In recent decades, an alternative design has started being used, which, in many cases, shows an improvement in terms of accuracy compared with traditional sampling. It is called R
Handling of incidents, near-misses
International Nuclear Information System (INIS)
Renborg, Bo; Jonsson, Klas; Broqvist, Kristoffer; Keski-Seppaelae, Sven
2006-12-01
This work has primarily been done as a study of available literature about reporting systems. The following items have also been considered: the participants' experience of safety work in general and reporting systems in particular, as well as correspondence with researchers and organisations that have experience from reporting systems in safety-critical applications. A number of definitions of the English term 'near-miss' have been found in the documentation about safety-critical systems. An important conclusion is that creating a precise definition in itself is not critical. The main objective is to persuade the individuals to report perceived risks as well as actual events or conditions. In this report, we have chosen to use the following definition of what should be reported: A condition or an incident with potential for more serious consequences. The reporting systems that have been evaluated have all data in the same system; they do not divide data into separate systems for incidents or 'near-misses'. The term incident in the literature is not used consistently, especially if both Swedish and English texts are considered. In a large portion of the documentation where the reporting system is mentioned, the focus lies more on analysis than on the problem with the willingness to report. Even when the focus is on reporting, it often deals with the design of the actual report in order to enable the subsequent treatment of data. In some cases this has led to unnecessarily complicated report forms. The cornerstone of a high willingness to report is the creation of a 'no-blame' culture. Based on experience, it can be concluded that the question of whether a report could lead to personal reprisals is crucial. Even a system that explicitly gives the reporter immunity is still brittle. The bare suspicion (that immunity may vanish) in the mind of the one reporting reduces the willingness to report dramatically. This means that the purpose of the analysis of reports must be to
Handling missing values in the MDS-UPDRS.
Goetz, Christopher G; Luo, Sheng; Wang, Lu; Tilley, Barbara C; LaPelle, Nancy R; Stebbins, Glenn T
2015-10-01
This study was undertaken to define the number of missing values permissible to render valid total scores for each Movement Disorder Society Unified Parkinson's Disease Rating Scale (MDS-UPDRS) part. To handle missing values, imputation strategies serve as guidelines to reject an incomplete rating or create a surrogate score. We tested a rigorous, scale-specific, data-based approach to handling missing values for the MDS-UPDRS. From two large MDS-UPDRS datasets, we sequentially deleted item scores, either consistently (same items) or randomly (different items) across all subjects. Lin's Concordance Correlation Coefficient (CCC) compared scores calculated without missing values with prorated scores based on sequentially increasing missing values. The maximal number of missing values retaining a CCC greater than 0.95 determined the threshold for rendering a valid prorated score. A second confirmatory sample was selected from the MDS-UPDRS international translation program. To provide valid part scores applicable across all Hoehn and Yahr (H&Y) stages when the same items are consistently missing, one missing item from Part I, one from Part II, three from Part III, but none from Part IV can be allowed. To provide valid part scores applicable across all H&Y stages when random item entries are missing, one missing item from Part I, two from Part II, seven from Part III, but none from Part IV can be allowed. All cutoff values were confirmed in the validation sample. These analyses are useful for constructing valid surrogate part scores for MDS-UPDRS when missing items fall within the identified threshold and give scientific justification for rejecting partially completed ratings that fall below the threshold. © 2015 International Parkinson and Movement Disorder Society.
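The prorating strategy described here can be sketched as a small scoring rule: sum the observed items and scale up to the full item count, but only when the number of missing items stays within the part-specific threshold. The function below is an illustration of that rule, not the official MDS-UPDRS scoring program; the item counts and thresholds are passed in by the caller.

```python
def prorated_score(item_scores, n_items, max_missing):
    """Prorate a part score when some items are missing.

    item_scores: observed item values, with None for a missing item.
    Returns the prorated total, or None when more than max_missing
    items are missing (the rating should then be rejected, not scored).
    """
    observed = [s for s in item_scores if s is not None]
    n_missing = n_items - len(observed)
    if n_missing > max_missing:
        return None  # below the validity threshold identified in the study
    if n_missing == 0:
        return float(sum(observed))
    # surrogate score: scale the observed sum up to the full item count
    return sum(observed) * n_items / len(observed)
```

For example, per the thresholds reported above, Part III with consistently missing items would be scored with `n_items=33` and `max_missing=3`.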
Software for handling and replacement of missing data
Directory of Open Access Journals (Sweden)
Mayer, Benjamin
2009-10-01
In medical research, missing values often arise in the course of a data analysis. This constitutes a problem for several reasons: standard methods for analyzing data require complete data sets and therefore omit incomplete cases, which leads to biased estimates and a loss of statistical power. Furthermore, missing values imply a loss of information, so the validity of the results of a study with missing values must be rated lower than in a case where all data were available. For years there have been methods for the replacement of missing values (Rubin, Schafer) to tackle these problems and solve them in part. Hence, in this article we present the existing software for handling and replacing missing values, and give an outline of the available options for obtaining further information. The methodological aspects of the replacement strategies are delineated only briefly in this article.
Cox regression with missing covariate data using a modified partial likelihood method
DEFF Research Database (Denmark)
Martinussen, Torben; Holst, Klaus K.; Scheike, Thomas H.
2016-01-01
Missing covariate values are a common problem in survival analysis. In this paper we propose a novel method for the Cox regression model that is close to maximum likelihood but avoids the use of the EM-algorithm. It exploits the fact that the observed hazard function is multiplicative in the baseline hazard...
Shen, Chung-Wei; Chen, Yi-Hau
2015-10-01
Missing observations and covariate measurement error commonly arise in longitudinal data. However, existing methods for model selection in marginal regression analysis of longitudinal data fail to address the potential bias resulting from these issues. To tackle this problem, we propose a new model selection criterion, the Generalized Longitudinal Information Criterion, which is based on an approximately unbiased estimator for the expected quadratic error of a considered marginal model accounting for both data missingness and covariate measurement error. The simulation results reveal that the proposed method performs quite well in the presence of missing data and covariate measurement error. In contrast, naive procedures that do not account for such complexity in the data may perform quite poorly. The proposed method is applied to data from the Taiwan Longitudinal Study on Aging to assess the relationship of depression with health and social status in the elderly, accommodating measurement error in the covariate as well as missing observations. © The Author 2015. Published by Oxford University Press. All rights reserved. For permissions, please e-mail: journals.permissions@oup.com.
CSIR Research Space (South Africa)
Kim, S
2008-03-01
used in Table 4 are as follows — βk: direct effect; βTk: total effect; and βsbk: superbeta. There are some interesting findings from the results presented in Table 4. For outcome variable Customer satisfaction, the superbeta measure was strongest... corresponding 95% HPD interval contains 0. This suggests that ignoring the heterogeneity and/or covariates gives different conclusions based on the total-effect measure. Also from Table 4, we see that for outcome variable Customer satisfaction, all the 3...
TRANSPOSABLE REGULARIZED COVARIANCE MODELS WITH AN APPLICATION TO MISSING DATA IMPUTATION.
Allen, Genevera I; Tibshirani, Robert
2010-06-01
Missing data estimation is an important challenge with high-dimensional data arranged in the form of a matrix. Typically this data matrix is transposable, meaning that either the rows, columns, or both can be treated as features. To model transposable data, we present a modification of the matrix-variate normal, the mean-restricted matrix-variate normal, in which the rows and columns each have a separate mean vector and covariance matrix. By placing additive penalties on the inverse covariance matrices of the rows and columns, these so-called transposable regularized covariance models allow for maximum likelihood estimation of the mean and non-singular covariance matrices. Using these models, we formulate EM-type algorithms for missing data imputation in both the multivariate and transposable frameworks. We present theoretical results exploiting the structure of our transposable models that allow these models and imputation methods to be applied to high-dimensional data. Simulations and results on microarray data and the Netflix data show that these imputation techniques often outperform existing methods and offer a greater degree of flexibility.
The handling of missing binary data in language research
Directory of Open Access Journals (Sweden)
François Pichette
2015-01-01
Researchers are frequently confronted with unanswered questions or items on their questionnaires and tests, due to factors such as item difficulty, lack of testing time, or participant distraction. This paper first presents results from a poll confirming previous claims (Rietveld & van Hout, 2006; Schafer & Graham, 2002) that data replacement and deletion methods are common in research. Language researchers declared that when faced with missing answers of the yes/no type (that translate into zero or one in data tables), the three most common solutions they adopt are to exclude the participant’s data from the analyses, to leave the square empty, or to fill in with zero, as for an incorrect answer. This study then examines the impact on Cronbach’s α of five types of data insertion, using simulated and actual data with various numbers of participants and missing percentages. Our analyses indicate that the three most common methods we identified among language researchers are the ones with the greatest impact on Cronbach's α coefficients; in other words, they are the least desirable solutions to the missing data problem. On the basis of our results, we make recommendations for language researchers concerning the best way to deal with missing data. Given that none of the most common simple methods works properly, we suggest that the missing data be replaced either by the item’s mean or by the participants’ overall mean to provide a better, more accurate image of the instrument’s internal consistency.
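The effect being measured here can be made concrete with a minimal sketch of Cronbach's α together with the item-mean replacement the authors recommend. This is illustrative code under the standard formula, not the study's own scripts.

```python
from statistics import variance

def impute_item_means(data):
    """Replace None entries with the mean of that item's observed values."""
    n_items = len(data[0])
    means = []
    for j in range(n_items):
        obs = [row[j] for row in data if row[j] is not None]
        means.append(sum(obs) / len(obs))
    return [[row[j] if row[j] is not None else means[j] for j in range(n_items)]
            for row in data]

def cronbach_alpha(data):
    """Cronbach's alpha for complete data: rows are participants,
    columns are items; alpha = k/(k-1) * (1 - sum(item vars)/total var)."""
    k = len(data[0])
    item_vars = [variance([row[j] for row in data]) for j in range(k)]
    total_var = variance([sum(row) for row in data])
    return k / (k - 1) * (1 - sum(item_vars) / total_var)
```

Running `cronbach_alpha(impute_item_means(...))` under different replacement rules (zeros, item mean, participant mean) reproduces the kind of comparison the study reports.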
Wolgast, Anett; Schwinger, Malte; Hahnel, Carolin; Stiensmeier-Pelster, Joachim
2017-01-01
Introduction: Multiple imputation (MI) is one of the most highly recommended methods for replacing missing values in research data. The scope of this paper is to demonstrate missing data handling in SEM by analyzing two modified data examples from educational psychology, and to give practical recommendations for applied researchers. Method: We…
Bayesian nonparametric generative models for causal inference with missing at random covariates.
Roy, Jason; Lum, Kirsten J; Zeldow, Bret; Dworkin, Jordan D; Re, Vincent Lo; Daniels, Michael J
2018-03-26
We propose a general Bayesian nonparametric (BNP) approach to causal inference in the point treatment setting. The joint distribution of the observed data (outcome, treatment, and confounders) is modeled using an enriched Dirichlet process. The combination of the observed data model and causal assumptions allows us to identify any type of causal effect: differences, ratios, or quantile effects, either marginally or for subpopulations of interest. The proposed BNP model is well-suited for causal inference problems, as it does not require parametric assumptions about the distribution of confounders and naturally leads to a computationally efficient Gibbs sampling algorithm. By flexibly modeling the joint distribution, we are also able to impute (via data augmentation) values for missing covariates within the algorithm under an assumption of ignorable missingness, obviating the need to create separate imputed data sets. This approach for imputing the missing covariates has the additional advantage of guaranteeing congeniality between the imputation model and the analysis model, and because we use a BNP approach, parametric models are avoided for imputation. The performance of the method is assessed using simulation studies. The method is applied to data from a cohort study of human immunodeficiency virus/hepatitis C virus co-infected patients. © 2018, The International Biometric Society.
Individual Information-Centered Approach for Handling Physical Activity Missing Data
Kang, Minsoo; Rowe, David A.; Barreira, Tiago V.; Robinson, Terrance S.; Mahar, Matthew T.
2009-01-01
The purpose of this study was to validate individual information (II)-centered methods for handling missing data, using data samples of 118 middle-aged adults and 91 older adults equipped with Yamax SW-200 pedometers and Actigraph accelerometers for 7 days. We used a semisimulation approach to create six data sets: three physical activity outcome…
Li, Siying; Koch, Gary G; Preisser, John S; Lam, Diana; Sanchez-Kam, Matilde
2017-01-01
Dichotomous endpoints in clinical trials have only two possible outcomes, either directly or via categorization of an ordinal or continuous observation. It is common to have missing data for one or more visits during a multi-visit study. This paper presents a closed form method for sensitivity analysis of a randomized multi-visit clinical trial that possibly has missing not at random (MNAR) dichotomous data. Counts of missing data are redistributed to the favorable and unfavorable outcomes mathematically to address possibly informative missing data. Adjusted proportion estimates and their closed form covariance matrix estimates are provided. Treatment comparisons over time are addressed with Mantel-Haenszel adjustment for a stratification factor and/or randomization-based adjustment for baseline covariables. The application of such sensitivity analyses is illustrated with an example. An appendix outlines an extension of the methodology to ordinal endpoints.
Directory of Open Access Journals (Sweden)
Gulhan Bourget
The Transmission Disequilibrium Test (TDT) compares frequencies of transmission of two alleles from heterozygote parents to an affected offspring. This test requires all genotypes to be known for all members of the nuclear families. However, obtaining all genotypes in a study might not be possible for some families, in which case the data set contains missing genotypes. There are many techniques for handling missing genotypes in parents but only a few in offspring. The robust TDT (rTDT) is one of the methods that handles missing genotypes for all members of nuclear families (with one affected offspring). Even though all family members can be imputed, the rTDT is a conservative test with low power. We propose a new method, Mendelian Inheritance TDT (MITDT-ONE), that controls type I error and has high power. The MITDT-ONE uses Mendelian Inheritance properties, and takes population frequencies of the disease allele and marker allele into account in the rTDT method. One of the advantages of using the MITDT-ONE is that it can identify additional significant genes that are not found by the rTDT. We demonstrate the performances of both tests along with the Sib-TDT (S-TDT) in Monte Carlo simulation studies. Moreover, we apply our method to the type 1 diabetes data from the Warren families in the United Kingdom to identify significant genes that are related to type 1 diabetes.
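For orientation, the classical TDT that the rTDT and MITDT-ONE extend is a McNemar-type chi-square on the two transmission counts from heterozygous parents. The sketch below is the textbook statistic, not the proposed MITDT-ONE.

```python
import math

def tdt(b, c):
    """Classical TDT.  b = number of transmissions of allele A1 from
    heterozygous parents to affected offspring, c = transmissions of A2.
    Returns (chi-square statistic, p-value) with 1 degree of freedom."""
    if b + c == 0:
        raise ValueError("no informative heterozygous parents")
    stat = (b - c) ** 2 / (b + c)
    # upper tail of a chi-square(1) variate: P(Z^2 > x) = erfc(sqrt(x/2))
    p = math.erfc(math.sqrt(stat / 2))
    return stat, p
```

Missing genotypes reduce the informative (b, c) counts, which is exactly the information loss the imputation-based variants above try to recover.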
Directory of Open Access Journals (Sweden)
Jiangxiu Zhou
2014-09-01
The purpose of this study is to demonstrate a way of dealing with missing data in clustered randomized trials by doing multiple imputation (MI) with the PAN package in R through SAS. The procedure for doing MI with PAN through SAS is demonstrated in detail in order for researchers to be able to use this procedure with their own data. An illustration of the technique with empirical data is also included. In this illustration the PAN results were compared with pairwise deletion and three types of MI: (1) Normal Model (NM-MI) ignoring the cluster structure; (2) NM-MI with dummy-coded cluster variables (fixed cluster structure); and (3) a hybrid NM-MI which imputes half the time ignoring the cluster structure, and the other half including the dummy-coded cluster variables. The empirical analysis showed that using PAN and the other strategies produced comparable parameter estimates. However, the dummy-coded MI overestimated the intraclass correlation, whereas MI ignoring the cluster structure and the hybrid MI underestimated the intraclass correlation. When compared with PAN, the p-value and standard error for the treatment effect were higher with dummy-coded MI, and lower with MI ignoring the cluster structure, the hybrid MI approach, and pairwise deletion. Previous studies have shown that NM-MI is not appropriate for handling missing data in clustered randomized trials. This approach, in addition to the pairwise deletion approach, leads to a biased intraclass correlation and faulty statistical conclusions. Imputation in clustered randomized trials should be performed with PAN. We have demonstrated an easy way of using PAN through SAS.
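Whatever engine produces the m completed datasets (PAN or otherwise), the per-imputation estimates are combined with Rubin's rules. The sketch below implements the standard pooling formulas; it is generic, not PAN-specific code.

```python
def pool_rubin(estimates, variances):
    """Combine m >= 2 completed-data estimates by Rubin's rules.

    estimates: point estimate from each imputed dataset.
    variances: the corresponding squared standard errors.
    Returns (pooled estimate, total variance T = W + (1 + 1/m) * B)."""
    m = len(estimates)
    q_bar = sum(estimates) / m
    w = sum(variances) / m                                   # within-imputation
    b = sum((q - q_bar) ** 2 for q in estimates) / (m - 1)   # between-imputation
    t = w + (1 + 1 / m) * b
    return q_bar, t
```

The between-imputation term B is what inflates standard errors relative to treating any single imputed dataset as if it were complete.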
DEFF Research Database (Denmark)
Jakobsen, Janus Christian; Gluud, Christian; Wetterslev, Jørn
2017-01-01
the missingness. Therefore, the analysis of trial data with missing values requires careful planning and attention. METHODS: The authors had several meetings and discussions considering optimal ways of handling missing data to minimise the bias potential. We also searched PubMed (key words: missing data; randomi...
Lee, In Heok
2012-01-01
Researchers in career and technical education often ignore more effective ways of reporting and treating missing data and instead implement traditional, but ineffective, missing data methods (Gemici, Rojewski, & Lee, 2012). The recent methodological, and even the non-methodological, literature has increasingly emphasized the importance of…
DEFF Research Database (Denmark)
Edriss, Vahid; Guldbrandtsen, Bernt; Lund, Mogens Sandø
2013-01-01
The aim of this study was to investigate the effect of different strategies for handling low-quality or missing data on prediction accuracy for direct genomic values of protein yield, mastitis and fertility using a Bayesian variable model and a GBLUP model in the Danish Jersey population. The data contained 1071 Jersey bulls that were genotyped with the Illumina Bovine 50K chip. After preliminary editing, 39227 SNP remained in the dataset. The four methods used to handle missing genotypes were: 1) BEAGLE: missing markers were imputed using Beagle 3.3 software, 2) COMMON: missing genotypes at a locus were...
Li, Tianjing; Hutfless, Susan; Scharfstein, Daniel O; Daniels, Michael J; Hogan, Joseph W; Little, Roderick J A; Roy, Jason A; Law, Andrew H; Dickersin, Kay
2014-01-01
To recommend methodological standards in the prevention and handling of missing data for primary patient-centered outcomes research (PCOR). We searched National Library of Medicine Bookshelf and Catalog as well as regulatory agencies' and organizations' Web sites in January 2012 for guidance documents that had formal recommendations regarding missing data. We extracted the characteristics of included guidance documents and recommendations. Using a two-round modified Delphi survey, a multidisciplinary panel proposed mandatory standards on the prevention and handling of missing data for PCOR. We identified 1,790 records and assessed 30 as having relevant recommendations. We proposed 10 standards as mandatory, covering three domains. First, the single best approach is to prospectively prevent missing data occurrence. Second, use of valid statistical methods that properly reflect multiple sources of uncertainty is critical when analyzing missing data. Third, transparent and thorough reporting of missing data allows readers to judge the validity of the findings. We urge researchers to adopt rigorous methodology and promote good science by applying best practices to the prevention and handling of missing data. Developing guidance on the prevention and handling of missing data for observational studies and studies that use existing records is a priority for future research. Copyright © 2014 Elsevier Inc. All rights reserved.
Directory of Open Access Journals (Sweden)
J. P. DiGangi
2011-10-01
We report the first observations of formaldehyde (HCHO) flux measured via eddy covariance, as well as HCHO concentrations and gradients, as observed by the Madison Fiber Laser-Induced Fluorescence Instrument during the BEACHON-ROCS 2010 campaign in a rural Ponderosa Pine forest northwest of Colorado Springs, CO. A median noon upward flux of ~80 μg m⁻² h⁻¹ (~24 pptv m s⁻¹) was observed, with a noon range of 37 to 131 μg m⁻² h⁻¹. Enclosure experiments were performed to determine the HCHO branch (3.5 μg m⁻² h⁻¹) and soil (7.3 μg m⁻² h⁻¹) direct emission rates in the canopy. A zero-dimensional canopy box model, used to determine the apportionment of HCHO source and sink contributions to the flux, underpredicted the observed HCHO flux by a factor of 6. Simulated increases in concentrations of species similar to monoterpenes resulted in poor agreement with measurements, while simulated increases in direct HCHO emissions and/or concentrations of species similar to 2-methyl-3-buten-2-ol best improved model/measurement agreement. Given the typical diurnal variability of these BVOC emissions and direct HCHO emissions, this suggests that the source of the missing flux is a process with both a strong temperature and radiation dependence.
A general method for handling missing binary outcome data in randomized controlled trials
Jackson, Dan; White, Ian R; Mason, Dan; Sutton, Stephen
2014-01-01
Aims The analysis of randomized controlled trials with incomplete binary outcome data is challenging. We develop a general method for exploring the impact of missing data in such trials, with a focus on abstinence outcomes. Design We propose a sensitivity analysis where standard analyses, which could include ‘missing = smoking’ and ‘last observation carried forward’, are embedded in a wider class of models. Setting We apply our general method to data from two smoking cessation trials. Partici...
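The embedding of 'missing = smoking' in a wider class of models can be illustrated by redistributing the missing count with a sensitivity parameter. This single-visit toy version sweeps between the conservative and optimistic extremes; the function name and parameterisation are assumptions of the illustration, not the paper's method.

```python
def adjusted_abstinence(n_abstinent, n_smoking, n_missing, gamma):
    """Estimate the abstinence proportion when a fraction gamma of the
    missing participants is counted as abstinent.

    gamma = 0 reproduces the conservative 'missing = smoking' rule;
    gamma = 1 counts every missing participant as abstinent."""
    n = n_abstinent + n_smoking + n_missing
    return (n_abstinent + gamma * n_missing) / n
```

Plotting the estimate as gamma sweeps from 0 to 1 shows how sensitive the trial conclusion is to assumptions about the missing participants.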
Energy Technology Data Exchange (ETDEWEB)
Riggi, S., E-mail: sriggi@oact.inaf.it [INAF - Osservatorio Astrofisico di Catania (Italy); Riggi, D. [Keras Strategy - Milano (Italy); Riggi, F. [Dipartimento di Fisica e Astronomia - Università di Catania (Italy); INFN, Sezione di Catania (Italy)
2015-04-21
Identification of charged particles in a multilayer detector by the energy loss technique may also be achieved by the use of a neural network. The performance of the network becomes worse when a large fraction of information is missing, for instance due to detector inefficiencies. Algorithms which provide a way to impute missing information have been developed over the past years. Among the various approaches, we focused on normal mixture models in comparison with standard mean imputation and multiple imputation methods. Further, to account for the intrinsic asymmetry of the energy loss data, we considered skew-normal mixture models and provided a closed-form implementation in the Expectation-Maximization (EM) algorithm framework to handle missing patterns. The method has been applied to a test case where the energy losses of pions, kaons and protons in a six-layer silicon detector are considered as input neurons to a neural network. Results are given in terms of reconstruction efficiency and purity of the various species in different momentum bins.
Scalable Data Quality for Big Data: The Pythia Framework for Handling Missing Values.
Cahsai, Atoshum; Anagnostopoulos, Christos; Triantafillou, Peter
2015-09-01
Solving the missing-value (MV) problem with small estimation errors in large-scale data environments is a notoriously resource-demanding task. The most widely used MV imputation approaches are computationally expensive because they explicitly depend on the volume and the dimension of the data. Moreover, as datasets and their user community continuously grow, the problem can only be exacerbated. In an attempt to deal with this problem, in our previous work we introduced a novel framework coined Pythia, which employs a number of distributed data nodes (cohorts), each of which contains a partition of the original dataset. To perform MV imputation, Pythia, based on specific machine and statistical learning structures (signatures), selects the most appropriate subset of cohorts to locally perform a missing value substitution algorithm (MVA). This selection relies on the principle that a particular subset of cohorts maintains the most relevant partition of the dataset. In addition, as Pythia uses only part of the dataset for imputation and accesses different cohorts in parallel, it improves efficiency, scalability, and accuracy compared to a single machine (coined Godzilla), which uses the entire massive dataset to compute imputation requests. Although this article is an extension of our previous work, we particularly investigate the robustness of the Pythia framework and show that Pythia is independent of any MVA and signature construction algorithms. To facilitate our research, we considered two well-known MVAs (namely K-nearest neighbor and expectation-maximization imputation algorithms), as well as two machine and neural computational learning signature construction algorithms based on adaptive vector quantization and competitive learning. We provide comprehensive experiments to assess the performance of Pythia against Godzilla and showcase the benefits stemming from this framework.
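Of the two MVAs named, K-nearest-neighbour imputation is easy to sketch. The toy single-machine version below (closer in spirit to 'Godzilla' than to the distributed Pythia) fills each missing entry from the k rows nearest on the shared observed coordinates; it assumes each missing column has at least k donor rows.

```python
import math

def knn_impute(rows, k=2):
    """Fill None entries with the mean of that column over the k rows
    nearest in Euclidean distance on the mutually observed columns."""
    def dist(a, b):
        shared = [(x, y) for x, y in zip(a, b) if x is not None and y is not None]
        if not shared:
            return math.inf
        # normalise by the number of shared coordinates so that rows with
        # different missingness patterns remain comparable
        return math.sqrt(sum((x - y) ** 2 for x, y in shared) / len(shared))

    completed = [row[:] for row in rows]
    for i, row in enumerate(rows):
        for j, value in enumerate(row):
            if value is None:
                donors = sorted(
                    (r for r in rows if r is not row and r[j] is not None),
                    key=lambda r: dist(row, r))[:k]
                completed[i][j] = sum(r[j] for r in donors) / len(donors)
    return completed
```

A distributed variant would route each incomplete row to the cohorts whose signatures mark them as holding the most relevant donors, which is the selection problem Pythia addresses.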
Handling of incidents, near-misses; Hantering av haendelser, naera misstag
Energy Technology Data Exchange (ETDEWEB)
Renborg, Bo; Jonsson, Klas; Broqvist, Kristoffer; Keski-Seppaelae, Sven [Professor Sten Luthander Ingenjoersbyraa AB, Gaevlegatan 22, SE-113 30 Stockholm (Sweden)
2006-12-15
This work has primarily been done as a study of available literature about reporting systems. The following items have also been considered: the participants' experience of safety work in general and reporting systems in particular, as well as correspondence with researchers and organisations that have experience with reporting systems in safety-critical applications. A number of definitions of the English term 'near-miss' have been found in the documentation about safety-critical systems. An important conclusion is that creating a precise definition is not in itself critical. The main objective is to persuade individuals to report perceived risks as well as actual events or conditions. In this report, we have chosen to use the following definition of what should be reported: a condition or an incident with potential for more serious consequences. The reporting systems that have been evaluated keep all data in the same system; they do not divide data into separate systems for incidents or 'near-misses'. The term incident is not used consistently in the literature, especially when both Swedish and English texts are considered. In a large portion of the documentation where reporting systems are mentioned, the focus lies more on analysis than on the problem of willingness to report. Even when the focus is on reporting, it often concerns the design of the actual report form in order to enable subsequent treatment of the data. In some cases this has led to unnecessarily complicated report forms. The cornerstone of a high willingness to report is the creation of a 'no-blame' culture. Based on experience, it can be concluded that the question of whether a report could lead to personal reprisals is crucial. Even a system that explicitly gives the reporter immunity is still brittle: the bare suspicion (that immunity may vanish) in the mind of the one reporting reduces the willingness to report dramatically. Meaning that the purpose of
Liu, M; Wei, L; Zhang, J
2006-01-01
Missing data in clinical trials are inevitable. We highlight the ICH guidelines and the CPMP points to consider on missing data. Specifically, we outline how missing data issues should be considered when designing, planning, and conducting studies so as to minimize their impact. We also go beyond the coverage of the above two documents: we provide a more detailed review of the basic concepts of missing data and frequently used terminology, give examples of typical missing data mechanisms, and discuss technical details and literature for several frequently used statistical methods and associated software. Finally, we provide a case study in which the principles outlined in this paper are applied to one clinical program at the protocol design, data analysis plan, and other stages of a clinical trial.
Hipp, John R; Wang, Cheng; Butts, Carter T; Jose, Rupa; Lakon, Cynthia M
2015-05-01
Although stochastic actor-based models (e.g., as implemented in the SIENA software program) are growing in popularity as a technique for analyzing longitudinal network data, a relatively understudied issue is the consequence of missing network data for longitudinal analysis. We explore this issue in our research note by utilizing data from four schools in an existing dataset (the AddHealth dataset) over three time points, assessing the substantive consequences of four different strategies for addressing missing network data. The results indicate that whereas some measures in such models are estimated relatively robustly regardless of the strategy chosen, some substantive conclusions will differ based on the missing data strategy chosen. These results have important implications for this burgeoning applied research area, implying that researchers should more carefully consider how they address missing data when estimating such models.
Godin, Judith; Keefe, Janice; Andrew, Melissa K
2017-04-01
Missing values are commonly encountered on the Mini-Mental State Examination (MMSE), particularly when it is administered to frail older people. This presents challenges for MMSE scoring in research settings. We sought to describe missingness in MMSEs administered in long-term-care facilities (LTCF) and to compare and contrast approaches to dealing with missing items. As part of the Care and Construction project in Nova Scotia, Canada, LTCF residents completed an MMSE. Different methods of dealing with missing values (e.g., use of raw scores, raw scores divided by the number of items attempted, scale-level multiple imputation [MI], and blended approaches) are compared to item-level MI. The MMSE was administered to 320 residents living in 23 LTCF. The sample was predominantly female (73%), and 38% of participants were aged >85 years. At least one item was missing from 122 (38.2%) of the MMSEs. Data were not Missing Completely at Random (MCAR), χ²(1110) = 1,351, p < 0.001. Using raw scores for those missing <6 items, in combination with scale-level MI, resulted in the regression coefficients and standard errors closest to item-level MI. Patterns of missing items often suggest systematic problems, such as trouble with manual dexterity, literacy, or visual impairment. While these observations may be relatively easy to take into account in clinical settings, non-random missingness presents challenges for research and must be considered in statistical analyses. We present suggestions for dealing with missing MMSE data based on the extent of missingness and the goal of the analyses. Copyright © 2016 The Authors. Production and hosting by Elsevier B.V. All rights reserved.
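One of the simple strategies compared above, pro-rating raw scores by the points attempted and falling back to imputation when too much is missing, can be sketched as follows. The function name and thresholds are illustrative, not the authors' implementation:

```python
def prorated_mmse(items, max_missing=5):
    """Pro-rate an item-level score list: sum of observed points, rescaled
    to the full test length. Returns None when more than `max_missing`
    items are missing (None), signalling that scale-level imputation
    should be used instead, echoing the blended '<6 items missing' rule."""
    observed = [s for s in items if s is not None]
    n_missing = len(items) - len(observed)
    if n_missing > max_missing:
        return None
    # rescale: mean observed score times total number of items
    return len(items) * sum(observed) / len(observed)

full = prorated_mmse([1] * 20 + [0] * 5 + [None] * 5)   # 20 of 25 attempted points
too_sparse = prorated_mmse([1] * 24 + [None] * 6)       # too many missing items
```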
Martín-Merino, Elisa; Calderón-Larrañaga, Amaia; Hawley, Samuel; Poblador-Plou, Beatriz; Llorente-García, Ana; Petersen, Irene; Prieto-Alhambra, Daniel
2018-01-01
Background: Missing data are often an issue in electronic medical records (EMRs) research. However, there are many ways that people deal with missing data in drug safety studies. Aim: To compare the risk estimates resulting from different strategies for the handling of missing data in the study of venous thromboembolism (VTE) risk associated with antiosteoporotic medications (AOM). Methods: New users of AOM (alendronic acid, other bisphosphonates, strontium ranelate, selective estrogen receptor modulators, teriparatide, or denosumab) aged ≥50 years during 1998–2014 were identified in two Spanish (the Base de datos para la Investigación Farmacoepidemiológica en Atención Primaria [BIFAP] and the EpiChron cohort) and one UK (Clinical Practice Research Datalink [CPRD]) EMR. Hazard ratios (HRs) according to AOM (with alendronic acid as reference) were calculated adjusting for VTE risk factors, body mass index (missing in 61% of patients included in the three databases), and smoking (missing in 23% of patients) in the year of AOM therapy initiation. HRs and standard errors obtained using cross-sectional multiple imputation (MI) (the reference method) were compared to complete case (CC) analysis, which uses only patients with complete data, and to longitudinal MI, which adds to the cross-sectional MI model the body mass index/smoking values as recorded in the year before and after therapy initiation. Results: Overall, 422/95,057 (0.4%), 19/12,688 (0.1%), and 2,051/161,202 (1.3%) VTE cases/participants were seen in BIFAP, EpiChron, and CPRD, respectively. HRs moved from 100.00% underestimation to 40.31% overestimation in CC compared with cross-sectional MI, while longitudinal MI provided similar risk estimates compared with cross-sectional MI. Precision for HR improved in cross-sectional MI versus CC by up to 160.28%, while longitudinal MI improved precision (compared with cross-sectional) only minimally (up to 0.80%). Conclusion: CC may substantially
Spineli, Loukia M.; Pandis, Nikolaos; Salanti, Georgia
2015-01-01
Objectives: The purpose of the study was to provide empirical evidence about the reporting of methodology to address missing outcome data and the acknowledgement of their impact in Cochrane systematic reviews in the mental health field. Methods: Systematic reviews published in the Cochrane Database of Systematic Reviews after January 1, 2009 by…
Eekhout, I.; Wiel, M.A. van de; Heymans, M.W.
2017-01-01
Background. Multiple imputation is a recommended method to handle missing data. For significance testing after multiple imputation, Rubin's Rules (RR) are easily applied to pool parameter estimates. In a logistic regression model, to consider whether a categorical covariate with more than two levels
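The pooling step of Rubin's Rules mentioned above combines the m per-imputation estimates into one point estimate and a total variance that reflects both within- and between-imputation uncertainty. A minimal sketch (function name and example numbers are illustrative):

```python
import math

def pool_rubin(estimates, variances):
    """Pool m point estimates and their squared standard errors from m
    imputed datasets using Rubin's Rules."""
    m = len(estimates)
    q_bar = sum(estimates) / m                               # pooled estimate
    w = sum(variances) / m                                   # within-imputation variance
    b = sum((q - q_bar) ** 2 for q in estimates) / (m - 1)   # between-imputation variance
    t = w + (1 + 1 / m) * b                                  # total variance
    return q_bar, t, math.sqrt(t)

# Example: a coefficient estimated on m = 3 imputed datasets
q_bar, t, se = pool_rubin([1.0, 1.2, 0.8], [0.04, 0.05, 0.045])
```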
Covariant field equations in supergravity
Energy Technology Data Exchange (ETDEWEB)
Vanhecke, Bram [KU Leuven, Institute for Theoretical Physics, Leuven (Belgium); Ghent University, Faculty of Physics, Gent (Belgium); Proeyen, Antoine van [KU Leuven, Institute for Theoretical Physics, Leuven (Belgium)
2017-12-15
Covariance is a useful property for handling supergravity theories. In this paper, we prove a covariance property of supergravity field equations: under reasonable conditions, field equations of supergravity are covariant modulo other field equations. We prove that for any supergravity there exist such covariant equations of motion, other than the regular equations of motion, that are equivalent to the latter. The relations that we find between field equations and their covariant form can be used to obtain multiplets of field equations. In practice, the covariant field equations are easily found by simply covariantizing the ordinary field equations. (copyright 2017 WILEY-VCH Verlag GmbH and Co. KGaA, Weinheim)
Austin, Peter C; Manca, Andrea; Zwarenstein, Merrick; Juurlink, David N; Stanbrook, Matthew B
2010-02-01
Statisticians have criticized the use of significance testing to compare the distribution of baseline covariates between treatment groups in randomized controlled trials (RCTs). Furthermore, some have advocated for the use of regression adjustment to estimate the effect of treatment after adjusting for potential imbalances in prognostically important baseline covariates between treatment groups. We examined 114 RCTs published in the New England Journal of Medicine, the Journal of the American Medical Association, The Lancet, and the British Medical Journal between January 1, 2007 and June 30, 2007. Significance testing was used to compare baseline characteristics between treatment arms in 38% of the studies. The practice was very rare in British journals and more common in the U.S. journals. In 29% of the studies, the primary outcome was continuous, whereas in 65% of the studies, the primary outcome was either dichotomous or time-to-event in nature. Adjustment for baseline covariates was reported when estimating the treatment effect in 34% of the studies. Our findings suggest the need for greater editorial consistency across journals in the reporting of RCTs. Furthermore, there is a need for greater debate about the relative merits of unadjusted vs. adjusted estimates of treatment effect. Copyright 2010 Elsevier Inc. All rights reserved.
Kisil, Vladimir V.
2010-01-01
The paper develops the theory of the covariant transform, which is inspired by the wavelet construction. It was observed that many interesting types of wavelets (or coherent states) arise from group representations which are not square integrable, or from vacuum vectors which are not admissible. The covariant transform extends the applicability of the popular wavelet construction to classic examples like the Hardy space H_2, Banach spaces, covariant functional calculus and many others. Keywords: Wavelets, cohe...
Principled Missing Data Treatments.
Lang, Kyle M; Little, Todd D
2018-04-01
We review a number of issues regarding missing data treatments for intervention and prevention researchers. Many of the common missing data practices in prevention research are still, unfortunately, ill-advised (e.g., use of listwise and pairwise deletion, insufficient use of auxiliary variables). Our goal is to promote better practice in the handling of missing data. We review the current state of missing data methodology and recent missing data reporting in prevention research. We describe antiquated, ad hoc missing data treatments and discuss their limitations. We discuss two modern, principled missing data treatments: multiple imputation and full information maximum likelihood, and we offer practical tips on how to best employ these methods in prevention research. The principled missing data treatments that we discuss are couched in terms of how they improve causal and statistical inference in the prevention sciences. Our recommendations are firmly grounded in missing data theory and well-validated statistical principles for handling the missing data issues that are ubiquitous in biosocial and prevention research. We augment our broad survey of missing data analysis with references to more exhaustive resources.
Flexible Imputation of Missing Data
van Buuren, Stef
2012-01-01
Missing data form a problem in every scientific discipline, yet the techniques required to handle them are complicated and often lacking. One of the great ideas in statistical science--multiple imputation--fills gaps in the data with plausible values, the uncertainty of which is coded in the data itself. It also solves other problems, many of which are missing data problems in disguise. Flexible Imputation of Missing Data is supported by many examples using real data taken from the author's vast experience of collaborative research, and presents a practical guide for handling missing data unde
Multivariate covariance generalized linear models
DEFF Research Database (Denmark)
Bonat, W. H.; Jørgensen, Bent
2016-01-01
We propose a general framework for non-normal multivariate data analysis called multivariate covariance generalized linear models, designed to handle multivariate response variables, along with a wide range of temporal and spatial correlation structures defined in terms of a covariance link function combined with a matrix linear predictor involving known matrices. The models are fitted by using an efficient Newton scoring algorithm based on quasi-likelihood and Pearson estimating functions, using only second-moment assumptions. This provides a unified approach to a wide variety of types of response variables and covariance structures, including multivariate extensions. The method is motivated by three data examples that are not easily handled by existing methods. The first example concerns multivariate count data, the second involves response variables of mixed types, combined with repeated...
Covariance Manipulation for Conjunction Assessment
Hejduk, M. D.
2016-01-01
The manipulation of space object covariances to try to provide additional or improved information to conjunction risk assessment is not an uncommon practice. Types of manipulation include fabricating a covariance when it is missing or unreliable to force the probability of collision (Pc) to a maximum value ('PcMax'), scaling a covariance to try to improve its realism or see the effect of covariance volatility on the calculated Pc, and constructing the equivalent of an epoch covariance at a convenient future point in the event ('covariance forecasting'). In bringing these methods to bear for Conjunction Assessment (CA) operations, however, some do not remain fully consistent with best practices for conducting risk management, some seem to be of relatively low utility, and some require additional information before they can contribute fully to risk analysis. This study describes some basic principles of modern risk management (following the Kaplan construct) and then examines the PcMax and covariance forecasting paradigms for alignment with these principles; it then further examines the expected utility of these methods in the modern CA framework. Both paradigms are found to be not without utility, but only in situations that are somewhat carefully circumscribed.
A cautionary note on generalized linear models for covariance of unbalanced longitudinal data
Huang, Jianhua Z.; Chen, Min; Maadooliat, Mehdi; Pourahmadi, Mohsen
2012-01-01
Missing data in longitudinal studies can create enormous challenges in data analysis when coupled with the positive-definiteness constraint on a covariance matrix. For complete balanced data, the Cholesky decomposition of a covariance matrix makes
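The appeal of the Cholesky decomposition here is that it turns the positive-definiteness constraint into an unconstrained parameterization: any lower-triangular factor with positive diagonal yields a valid covariance matrix. A minimal sketch of that mapping (illustrative, not the paper's model):

```python
import numpy as np

def chol_to_cov(theta, p):
    """Map an unconstrained parameter vector of length p(p+1)/2 to a
    positive-definite p x p covariance matrix via a Cholesky factor whose
    diagonal is exponentiated (hence always positive)."""
    L = np.zeros((p, p))
    L[np.tril_indices(p)] = theta                # fill the lower triangle
    L[np.diag_indices(p)] = np.exp(np.diag(L))   # force a positive diagonal
    return L @ L.T                               # guaranteed positive definite

theta = np.array([0.1, 0.5, -0.2, 0.3, 0.7, 0.0])   # p = 3 needs 6 parameters
sigma = chol_to_cov(theta, 3)
```

Because any real vector theta maps to a valid covariance matrix, an optimizer can search over theta freely, which is precisely why Cholesky-based reparameterizations are popular for covariance modeling.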
Covariant electromagnetic field lines
Hadad, Y.; Cohen, E.; Kaminer, I.; Elitzur, A. C.
2017-08-01
Faraday introduced electric field lines as a powerful tool for understanding the electric force, and these field lines are still used today in classrooms and textbooks teaching the basics of electromagnetism within the electrostatic limit. However, despite attempts at generalizing this concept beyond the electrostatic limit, such a fully relativistic field line theory still appears to be missing. In this work, we propose such a theory and define covariant electromagnetic field lines that naturally extend electric field lines to relativistic systems and general electromagnetic fields. We derive a closed-form formula for the field lines curvature in the vicinity of a charge, and show that it is related to the world line of the charge. This demonstrates how the kinematics of a charge can be derived from the geometry of the electromagnetic field lines. Such a theory may also provide new tools in modeling and analyzing electromagnetic phenomena, and may entail new insights regarding long-standing problems such as radiation-reaction and self-force. In particular, the electromagnetic field lines curvature has the attractive property of being non-singular everywhere, thus eliminating all self-field singularities without using renormalization techniques.
Directory of Open Access Journals (Sweden)
Bruce Weaver
2014-09-01
Missing data is a frequent problem for researchers conducting exploratory factor analysis (EFA) or reliability analysis. The SPSS FACTOR procedure allows users to select listwise deletion, pairwise deletion or mean substitution as a method for dealing with missing data. The shortcomings of these methods are well known. Graham (2009) argues that a much better way to deal with missing data in this context is to use a matrix of expectation maximization (EM) covariances (or correlations) as input for the analysis. SPSS users who have the Missing Values Analysis add-on module can obtain vectors of EM means and standard deviations plus EM correlation and covariance matrices via the MVA procedure. But unfortunately, MVA has no /MATRIX subcommand, and therefore cannot write the EM correlations directly to a matrix dataset of the type needed as input to the FACTOR and RELIABILITY procedures. We describe two macros that (in conjunction with an intervening MVA command) carry out the data management steps needed to create two matrix datasets, one containing EM correlations and the other EM covariances. Either of those matrix datasets can then be used as input to the FACTOR procedure, and the EM correlations can also be used as input to RELIABILITY. We provide an example that illustrates the use of the two macros to generate the matrix datasets and how to use those datasets as input to the FACTOR and RELIABILITY procedures. We hope that this simple method for handling missing data will prove useful to both students and researchers who are conducting EFA or reliability analysis.
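Outside SPSS, the EM mean and covariance estimates that this workflow relies on can be computed directly. Below is a minimal numpy sketch of the textbook EM algorithm for a multivariate normal with values missing at random; the function name is ours and this is not the macros described above:

```python
import numpy as np

def em_mean_cov(X, n_iter=50):
    """EM estimates of the mean vector and covariance matrix for data
    with missing values (NaN), assuming multivariate normality."""
    X = np.asarray(X, dtype=float)
    n, p = X.shape
    mu = np.nanmean(X, axis=0)
    sigma = np.diag(np.nanvar(X, axis=0))        # crude starting values
    for _ in range(n_iter):
        x_hat = np.empty_like(X)
        second = np.zeros((p, p))                # accumulates E[x xT | observed]
        for i in range(n):
            m = np.isnan(X[i])                   # missing indices
            o = ~m                               # observed indices
            xi = X[i].copy()
            cov_cond = np.zeros((p, p))
            if m.any():
                s_oo_inv = np.linalg.inv(sigma[np.ix_(o, o)])
                reg = sigma[np.ix_(m, o)] @ s_oo_inv
                # conditional mean of the missing part given the observed part
                xi[m] = mu[m] + reg @ (X[i, o] - mu[o])
                # conditional covariance enters the expected second moment
                cov_cond[np.ix_(m, m)] = sigma[np.ix_(m, m)] - reg @ sigma[np.ix_(o, m)]
            x_hat[i] = xi
            second += np.outer(xi, xi) + cov_cond
        mu = x_hat.mean(axis=0)                  # M-step updates
        sigma = second / n - np.outer(mu, mu)
    return mu, sigma

X = np.array([[1.0, 2.0], [2.0, 1.0], [3.0, 4.0], [4.0, 3.0]])
mu, sigma = em_mean_cov(X)   # with complete data this is the sample mean/covariance
```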
International Nuclear Information System (INIS)
Kawano, Toshihiko; Shibata, Keiichi.
1997-09-01
A covariance evaluation system for the evaluated nuclear data library was established. The parameter estimation method and the least squares method with a spline function are used to generate the covariance data. Uncertainties of nuclear reaction model parameters are estimated from experimental data uncertainties, then the covariance of the evaluated cross sections is calculated by means of error propagation. Computer programs ELIESE-3, EGNASH4, ECIS, and CASTHY are used. Covariances of 238 U reaction cross sections were calculated with this system. (author)
A Review of Methods for Missing Data.
Pigott, Therese D.
2001-01-01
Reviews methods for handling missing data in a research study. Model-based methods, such as maximum likelihood using the EM algorithm and multiple imputation, hold more promise than ad hoc methods. Although model-based methods require more specialized computer programs and assumptions about the nature of missing data, these methods are appropriate…
Székely, Gábor J.; Rizzo, Maria L.
2010-01-01
Distance correlation is a new class of multivariate dependence coefficients applicable to random vectors of arbitrary and not necessarily equal dimension. Distance covariance and distance correlation are analogous to product-moment covariance and correlation, but generalize and extend these classical bivariate measures of dependence. Distance correlation characterizes independence: it is zero if and only if the random vectors are independent. The notion of covariance with...
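The sample version of distance correlation has a short closed form: double-center the pairwise distance matrices of each sample, then correlate them. A minimal numpy sketch for the one-dimensional case (function names are ours):

```python
import numpy as np

def distance_correlation(x, y):
    """Sample distance correlation of two 1-D samples of equal length,
    following the Szekely-Rizzo construction."""
    x = np.asarray(x, dtype=float)[:, None]
    y = np.asarray(y, dtype=float)[:, None]

    def centered(z):
        d = np.abs(z - z.T)   # pairwise distances (Euclidean, 1-D case)
        # double centering: subtract row and column means, add grand mean
        return d - d.mean(axis=0) - d.mean(axis=1)[:, None] + d.mean()

    a, b = centered(x), centered(y)
    dcov2 = (a * b).mean()                        # squared distance covariance
    dvar_x, dvar_y = (a * a).mean(), (b * b).mean()
    return np.sqrt(dcov2 / np.sqrt(dvar_x * dvar_y))

x = np.arange(10.0)
perfect = distance_correlation(x, 3 * x + 1)   # exact linear dependence
```

Unlike the product-moment correlation, this quantity is zero only under independence, which is the property the abstract highlights.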
Bergshoeff, E.; Pope, C.N.; Stelle, K.S.
1990-01-01
We discuss the notion of higher-spin covariance in w∞ gravity. We show how a recently proposed covariant w∞ gravity action can be obtained from non-chiral w∞ gravity by making field redefinitions that introduce new gauge-field components with corresponding new gauge transformations.
Shardell, Michelle; Hicks, Gregory E
2014-11-10
In studies of older adults, researchers often recruit proxy respondents, such as relatives or caregivers, when study participants cannot provide self-reports (e.g., because of illness). Proxies are usually only sought to report on behalf of participants with missing self-reports; thus, either a participant self-report or proxy report, but not both, is available for each participant. Furthermore, the missing-data mechanism for participant self-reports is not identifiable and may be nonignorable. When exposures are binary and participant self-reports are conceptualized as the gold standard, substituting error-prone proxy reports for missing participant self-reports may produce biased estimates of outcome means. Researchers can handle this data structure by treating the problem as one of misclassification within the stratum of participants with missing self-reports. Most methods for addressing exposure misclassification require validation data, replicate data, or an assumption of nondifferential misclassification; other methods may result in an exposure misclassification model that is incompatible with the analysis model. We propose a model that makes none of the aforementioned requirements and still preserves model compatibility. Two user-specified tuning parameters encode the exposure misclassification model. Two proposed approaches estimate outcome means standardized for (potentially) high-dimensional covariates using multiple imputation followed by propensity score methods. The first method is parametric and uses maximum likelihood to estimate the exposure misclassification model (i.e., the imputation model) and the propensity score model (i.e., the analysis model); the second method is nonparametric and uses boosted classification and regression trees to estimate both models. We apply both methods to a study of elderly hip fracture patients. Copyright © 2014 John Wiley & Sons, Ltd.
Covariant representations of nuclear *-algebras
International Nuclear Information System (INIS)
Moore, S.M.
1978-01-01
Extensions of the C*-algebra theory for covariant representations to nuclear *-algebras are considered. Irreducible covariant representations are essentially unique, an invariant state produces a covariant representation with stable vacuum, and the usual relation between ergodic states and covariant representations holds. There exist construction and decomposition theorems and a possible relation between derivations and covariant representations.
Directory of Open Access Journals (Sweden)
Torres Munguía, Juan Armando
2014-06-01
This paper examines sample proportion estimates in the presence of univariate missing categorical data. A database about smoking habits (2011 National Addiction Survey of Mexico) was used to create simulated yet realistic datasets with missingness rates of 5% and 15%, each under the MCAR, MAR, and MNAR mechanisms. The performance of six methods for addressing missingness is then evaluated: listwise deletion, mode imputation, random imputation, hot-deck, imputation by polytomous regression, and random forests. Results showed that the most effective methods for dealing with missing categorical data, in most of the scenarios assessed in this paper, were the hot-deck and polytomous regression approaches.
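Of the six methods compared above, random hot-deck imputation is the simplest to state: each missing value is replaced by a value sampled from the observed "donors" of the same variable. A minimal sketch (function name and example data are illustrative):

```python
import random

def random_hot_deck(values, rng=None):
    """Fill missing entries (None) in a categorical variable by sampling
    uniformly from the observed donor values."""
    rng = rng or random.Random(42)   # fixed seed for reproducibility
    donors = [v for v in values if v is not None]
    return [v if v is not None else rng.choice(donors) for v in values]

smoking = ["never", "daily", None, "former", None, "daily"]
completed = random_hot_deck(smoking)   # every None replaced by a donor value
```

Because donors are drawn from the observed marginal distribution, hot-deck preserves category proportions far better than mode imputation, which is consistent with the paper's findings.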
Covariant Noncommutative Field Theory
Energy Technology Data Exchange (ETDEWEB)
Estrada-Jimenez, S [Licenciaturas en Fisica y en Matematicas, Facultad de Ingenieria, Universidad Autonoma de Chiapas Calle 4a Ote. Nte. 1428, Tuxtla Gutierrez, Chiapas (Mexico); Garcia-Compean, H [Departamento de Fisica, Centro de Investigacion y de Estudios Avanzados del IPN P.O. Box 14-740, 07000 Mexico D.F., Mexico and Centro de Investigacion y de Estudios Avanzados del IPN, Unidad Monterrey Via del Conocimiento 201, Parque de Investigacion e Innovacion Tecnologica (PIIT) Autopista nueva al Aeropuerto km 9.5, Lote 1, Manzana 29, cp. 66600 Apodaca Nuevo Leon (Mexico); Obregon, O [Instituto de Fisica de la Universidad de Guanajuato P.O. Box E-143, 37150 Leon Gto. (Mexico); Ramirez, C [Facultad de Ciencias Fisico Matematicas, Universidad Autonoma de Puebla, P.O. Box 1364, 72000 Puebla (Mexico)
2008-07-02
The covariant approach to noncommutative field and gauge theories is revisited. In the process the formalism is applied to field theories invariant under diffeomorphisms. Local differentiable forms are defined in this context. The lagrangian and hamiltonian formalism is consistently introduced.
Covariance data processing code. ERRORJ
International Nuclear Information System (INIS)
Kosako, Kazuaki
2001-01-01
The covariance data processing code, ERRORJ, was developed to process the covariance data of JENDL-3.2. ERRORJ has the processing functions of covariance data for cross sections including resonance parameters, angular distribution and energy distribution. (author)
Bayes reconstruction of missing teeth
DEFF Research Database (Denmark)
Sporring, Jon; Jensen, Katrine Hommelhoff
2008-01-01
...or equivalently noise elimination, and for data analysis. However, for small sets of high-dimensional data, the log-likelihood estimator for the covariance matrix is often far from convergence, and therefore reliable models must be obtained by use of prior information. We propose a natural and intrinsic... The approach contains two major parts: a statistical model of a selection of tooth shapes and a reconstruction of missing data. We use a training set consisting of 3D scans of dental cast models obtained with a laser scanner, and we have built a model of the shape variability of the teeth and their neighbors...
Pozsgay, Victor; Hirsch, Flavien; Branciard, Cyril; Brunner, Nicolas
2017-12-01
We introduce Bell inequalities based on covariance, one of the most common measures of correlation. Explicit examples are discussed, and violations in quantum theory are demonstrated. A crucial feature of these covariance Bell inequalities is their nonlinearity; this has nontrivial consequences for the derivation of their local bound, which is not reached by deterministic local correlations. For our simplest inequality, we derive analytically tight bounds for both local and quantum correlations. An interesting application of covariance Bell inequalities is that they can act as "shared randomness witnesses": specifically, the value of the Bell expression gives device-independent lower bounds on both the dimension and the entropy of the shared random variable in a local model.
Energy Technology Data Exchange (ETDEWEB)
Bourget, Antoine; Troost, Jan [Laboratoire de Physique Théorique, École Normale Supérieure, 24 rue Lhomond, 75005 Paris (France)
2016-03-23
We construct a covariant generating function for the spectrum of chiral primaries of symmetric orbifold conformal field theories with N=(4,4) supersymmetry in two dimensions. For seed target spaces K3 and T^4, the generating functions capture the SO(21) and SO(5) representation theoretic content of the chiral ring, respectively. Via string dualities, we relate the transformation properties of the chiral ring under these isometries of the moduli space to the Lorentz covariance of perturbative string partition functions in flat space.
Dimension from covariance matrices.
Carroll, T L; Byers, J M
2017-02-01
We describe a method to estimate embedding dimension from a time series. This method includes an estimate of the probability that the dimension estimate is valid. Such validity estimates are not common in algorithms for calculating the properties of dynamical systems. The algorithm described here compares the eigenvalues of covariance matrices created from an embedded signal to the eigenvalues for a covariance matrix of a Gaussian random process with the same dimension and number of points. A statistical test gives the probability that the eigenvalues for the embedded signal did not come from the Gaussian random process.
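The comparison the abstract describes, eigenvalues of a delay-embedded signal's covariance matrix against a full-rank Gaussian baseline, can be roughed out in a few lines. The helper names and parameter choices below are illustrative, not the authors' algorithm or its statistical test:

```python
import numpy as np

def embed(signal, dim, delay=1):
    """Delay-embed a 1-D time series into vectors of length `dim`."""
    n = len(signal) - (dim - 1) * delay
    return np.column_stack([signal[i * delay:i * delay + n] for i in range(dim)])

def covariance_eigs(signal, dim):
    """Eigenvalues (descending) of the covariance of the embedded signal."""
    emb = embed(signal, dim)
    emb = emb - emb.mean(axis=0)
    return np.sort(np.linalg.eigvalsh(np.cov(emb.T)))[::-1]

rng = np.random.default_rng(0)
t = np.linspace(0, 40 * np.pi, 4000)
eig_sine = covariance_eigs(np.sin(t), dim=6)                   # low-dimensional signal
eig_noise = covariance_eigs(rng.standard_normal(4000), dim=6)  # full-rank baseline
```

A sine wave occupies a two-dimensional subspace of the embedding, so its trailing eigenvalues collapse toward zero, whereas the Gaussian baseline keeps all six eigenvalues of comparable size; the paper's contribution is turning this contrast into a probability that a given dimension estimate is valid.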
Generalized Linear Covariance Analysis
Carpenter, James R.; Markley, F. Landis
2014-01-01
This talk presents a comprehensive approach to filter modeling for generalized covariance analysis of both batch least-squares and sequential estimators. We review and extend in two directions the results of prior work that allowed for partitioning of the state space into "solve-for" and "consider" parameters, accounted for differences between the formal values and the true values of the measurement noise, process noise, and a priori solve-for and consider covariances, and explicitly partitioned the errors into subspaces containing only the influence of the measurement noise, process noise, and solve-for and consider covariances. In this work, we explicitly add sensitivity analysis to this prior work, and relax an implicit assumption that the batch estimator's epoch time occurs prior to the definitive span. We also apply the method to an integrated orbit and attitude problem, in which gyro and accelerometer errors, though not estimated, influence the orbit determination performance. We illustrate our results using two graphical presentations, which we call the "variance sandpile" and the "sensitivity mosaic," and we compare the linear covariance results to confidence intervals associated with ensemble statistics from a Monte Carlo analysis.
Multiple feature fusion via covariance matrix for visual tracking
Jin, Zefenfen; Hou, Zhiqiang; Yu, Wangsheng; Wang, Xin; Sun, Hui
2018-04-01
Aiming at the problem of complicated dynamic scenes in visual target tracking, a multi-feature fusion tracking algorithm based on the covariance matrix is proposed to improve the robustness of the tracking algorithm. Within the framework of a quantum genetic algorithm, this paper uses the region covariance descriptor to fuse color, edge, and texture features. It also uses a fast covariance intersection algorithm to update the model. The low dimension of the region covariance descriptor, the fast convergence and strong global optimization ability of the quantum genetic algorithm, and the speed of the fast covariance intersection algorithm together improve the computational efficiency of the fusion, matching, and updating processes, so that the algorithm achieves fast and effective multi-feature fusion tracking. Experiments show that the proposed algorithm not only achieves fast and robust tracking but also effectively handles interference from occlusion, rotation, deformation, motion blur, and so on.
CSIR Research Space (South Africa)
Kim, S
2009-01-01
for i = 1, 2, ..., I and j = 1, 2, ..., n_i, where I denotes the total number of facilities, n_i is the number of individuals within the ith facility, and K is the total number of responses considered. Let L_k be the number of levels... (3), the cut-points are subject to the constraints: λ_{k0} = −∞ < λ_{k1} = 0 < λ_{k2} < ⋯ < λ_{k,L_k−2} < λ_{k,L_k−1} < λ_{k,L_k} = ∞, (3.4) for k = 1, 2, ..., K. Note that since y_{ij,25} is a binary response, there are no unknown cut-points. Thus, for the AES 2001 data...
Missing data imputation: focusing on single imputation.
Zhang, Zhongheng
2016-01-01
Complete case analysis is widely used for handling missing data, and it is the default method in many statistical packages. However, this method may introduce bias, and useful information is omitted from the analysis. Therefore, many imputation methods have been developed to fill the gap. The present article focuses on single imputation. Imputation with the mean, median, or mode is simple but, like complete case analysis, can bias estimates of the mean and standard deviation; furthermore, it ignores the relationships with other variables. Regression imputation can preserve the relationship between missing values and other variables. Many sophisticated methods exist to handle missing values in longitudinal data. This article focuses primarily on how to implement R code to perform single imputation, while avoiding complex mathematical calculations.
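The trade-off between mean imputation and regression imputation described above can be shown in a few lines. This is a sketch with simulated data (the article itself works in R):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 500
x = rng.normal(10, 2, n)                  # fully observed covariate
y = 3.0 + 0.5 * x + rng.normal(0, 1, n)   # variable that will have missing entries
miss = rng.random(n) < 0.2                # ~20% missing completely at random
y_obs = y.copy()
y_obs[miss] = np.nan

# Mean imputation: simple, but shrinks the variance of y.
y_mean_imp = np.where(np.isnan(y_obs), np.nanmean(y_obs), y_obs)

# Regression imputation: predict y from x using the complete cases,
# preserving the y-x relationship.
ok = ~np.isnan(y_obs)
beta1, beta0 = np.polyfit(x[ok], y_obs[ok], 1)
y_reg_imp = np.where(np.isnan(y_obs), beta0 + beta1 * x, y_obs)

print("true var:", y.var(), "mean-imputed var:", y_mean_imp.var())
print("corr with x:", np.corrcoef(x, y_mean_imp)[0, 1],
      np.corrcoef(x, y_reg_imp)[0, 1])
```

Mean imputation deflates the variance and attenuates the correlation with the covariate, while regression imputation keeps the relationship, which is exactly the contrast the abstract describes.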
On covariance structure in noisy, big data
Paffenroth, Randy C.; Nong, Ryan; Du Toit, Philip C.
2013-09-01
Herein we describe theory and algorithms for detecting covariance structures in large, noisy data sets. Our work uses ideas from matrix completion and robust principal component analysis to detect the presence of low-rank covariance matrices, even when the data is noisy, distorted by large corruptions, and only partially observed. In fact, the ability to handle partial observations combined with ideas from randomized algorithms for matrix decomposition enables us to produce asymptotically fast algorithms. Herein we will provide numerical demonstrations of the methods and their convergence properties. While such methods have applicability to many problems, including mathematical finance, crime analysis, and other large-scale sensor fusion problems, our inspiration arises from applying these methods in the context of cyber network intrusion detection.
Generally covariant gauge theories
International Nuclear Information System (INIS)
Capovilla, R.
1992-01-01
A new class of generally covariant gauge theories in four space-time dimensions is investigated. The field variables are taken to be a Lie algebra valued connection 1-form and a scalar density. Modulo an important degeneracy, complex [euclidean] vacuum general relativity corresponds to a special case in this class. A canonical analysis of the generally covariant gauge theories with the same gauge group as general relativity shows that they describe two degrees of freedom per space point, qualifying therefore as a new set of neighbors of general relativity. The modification of the algebra of the constraints with respect to the general relativity case is computed; this is used in addressing the question of how general relativity stands out from its neighbors. (orig.)
The Bayesian Covariance Lasso.
Khondker, Zakaria S; Zhu, Hongtu; Chu, Haitao; Lin, Weili; Ibrahim, Joseph G
2013-04-01
Estimation of sparse covariance matrices and their inverse subject to positive definiteness constraints has drawn a lot of attention in recent years. The abundance of high-dimensional data, where the sample size (n) is less than the dimension (d), requires shrinkage estimation methods since the maximum likelihood estimator is not positive definite in this case. Furthermore, when n is larger than d but not sufficiently larger, shrinkage estimation is more stable than maximum likelihood as it reduces the condition number of the precision matrix. Frequentist methods have utilized penalized likelihood methods, whereas Bayesian approaches rely on matrix decompositions or Wishart priors for shrinkage. In this paper we propose a new method, called the Bayesian Covariance Lasso (BCLASSO), for the shrinkage estimation of a precision (covariance) matrix. We consider a class of priors for the precision matrix that leads to the popular frequentist penalties as special cases, develop a Bayes estimator for the precision matrix, and propose an efficient sampling scheme that does not precalculate boundaries for positive definiteness. The proposed method is permutation invariant and performs shrinkage and estimation simultaneously for non-full-rank data. Simulations show that the proposed BCLASSO performs similarly to frequentist methods for non-full-rank data.
Lorentz Covariance of Langevin Equation
International Nuclear Information System (INIS)
Koide, T.; Denicol, G.S.; Kodama, T.
2008-01-01
Relativistic covariance of a Langevin type equation is discussed. The requirement of Lorentz invariance generates an entanglement between the force and noise terms so that the noise itself should not be a covariant quantity. (author)
Robotics and remote handling in the nuclear industry
Energy Technology Data Exchange (ETDEWEB)
1984-01-01
This book presents the papers given at a conference on the use of remote handling equipment in nuclear facilities. Topics considered at the conference included dose reduction, artificial intelligence in nuclear plant maintenance, robotic welding, uncertainty covariances, reactor operation and inspection, reactor maintenance and repair, uranium mining, fuel fabrication, reactor component manufacture, irradiated fuel and radioactive waste management, and radioisotope handling.
Distance covariance for stochastic processes
DEFF Research Database (Denmark)
Matsui, Muneya; Mikosch, Thomas Valentin; Samorodnitsky, Gennady
2017-01-01
The distance covariance of two random vectors is a measure of their dependence. The empirical distance covariance and correlation can be used as statistical tools for testing whether two random vectors are independent. We propose an analog of the distance covariance for two stochastic processes...
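The empirical distance covariance referred to above has a compact direct O(n^2) form: double-center the pairwise distance matrices and average their elementwise product. The sketch below is for 1-D samples and is not tied to the stochastic-process extension the paper proposes:

```python
import numpy as np

def dcov2(x, y):
    """Empirical squared distance covariance of two 1-D samples."""
    x = np.asarray(x, float)[:, None]
    y = np.asarray(y, float)[:, None]
    a = np.abs(x - x.T)                                  # pairwise distance matrices
    b = np.abs(y - y.T)
    A = a - a.mean(0) - a.mean(1)[:, None] + a.mean()    # double centering
    B = b - b.mean(0) - b.mean(1)[:, None] + b.mean()
    return (A * B).mean()

def dcor(x, y):
    """Empirical distance correlation in [0, 1]."""
    v = dcov2(x, y)
    denom = np.sqrt(dcov2(x, x) * dcov2(y, y))
    return np.sqrt(v / denom) if denom > 0 else 0.0

rng = np.random.default_rng(2)
x = rng.standard_normal(400)
y_indep = rng.standard_normal(400)
y_dep = x ** 2        # nonlinear dependence with (near) zero Pearson correlation

print(dcor(x, y_indep))   # small for independent samples
print(dcor(x, y_dep))     # clearly positive despite zero linear correlation
```

Detecting the quadratic dependence that Pearson correlation misses is the standard motivating example for the distance-covariance independence test.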
Coquet, Julia Becaria; Tumas, Natalia; Osella, Alberto Ruben; Tanzi, Matteo; Franco, Isabella; Diaz, Maria Del Pilar
2016-01-01
A number of studies have evidenced the effect of modifiable lifestyle factors such as diet, breastfeeding, and nutritional status on breast cancer risk. However, none have addressed the missing data problem in nutritional epidemiologic research in South America. Missing data are a frequent problem in breast cancer studies and in epidemiological settings in general. Estimates of effect obtained from these studies may be biased if no appropriate method for handling missing data is applied. We performed multiple imputation for missing values on covariates in a breast cancer case-control study of Córdoba (Argentina) to optimize risk estimates. Data were obtained from a breast cancer case-control study conducted from 2008 to 2015 (318 cases, 526 controls). Complete case analysis and multiple imputation using chained equations were the methods applied to estimate the effects of a Traditional dietary pattern and other recognized factors associated with breast cancer. Physical activity and socioeconomic status were imputed. Logistic regression models were performed. When complete case analysis was performed, only 31% of women were included. Although a positive association between the Traditional dietary pattern and breast cancer was observed with both approaches (complete case analysis OR=1.3, 95%CI=1.0-1.7; multiple imputation OR=1.4, 95%CI=1.2-1.7), effects of other covariates, such as BMI and breastfeeding, were identified only when multiple imputation was used. A Traditional dietary pattern, BMI, and breastfeeding are associated with the occurrence of breast cancer in this Argentinean population when multiple imputation is appropriately performed. Multiple imputation is recommended for epidemiologic studies in Latin America to optimize effect estimates in the future. PMID:27892664
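Multiple imputation by chained equations, the approach used in this study, cycles regression imputations across the incomplete variables and adds a random draw to each prediction to preserve variability, producing several completed datasets whose estimates are then pooled. A minimal numpy sketch on generic toy data (not the study's actual imputation model; the variables and dimensions are invented):

```python
import numpy as np

rng = np.random.default_rng(3)

# Toy data with missingness in two covariates; column 0 is fully observed.
n = 300
z = rng.standard_normal((n, 3))
X_full = np.column_stack([z[:, 0], 0.6 * z[:, 0] + 0.8 * z[:, 1], z[:, 2]])
M = rng.random((n, 3)) < [0.0, 0.2, 0.15]
Xm = X_full.copy()
Xm[M] = np.nan

def mice_once(Xm, sweeps=10, rng=rng):
    """One imputed dataset via chained equations with Gaussian noise draws."""
    X = np.where(np.isnan(Xm), np.nanmean(Xm, 0), Xm)   # start from mean fill
    for _ in range(sweeps):
        for j in range(X.shape[1]):
            miss = np.isnan(Xm[:, j])
            if not miss.any():
                continue
            # Regress variable j on the current values of the others.
            others = np.delete(X, j, axis=1)
            A = np.column_stack([np.ones(len(X)), others])
            beta, *_ = np.linalg.lstsq(A[~miss], Xm[~miss, j], rcond=None)
            resid = Xm[~miss, j] - A[~miss] @ beta
            # Impute prediction plus noise, to avoid deflating variability.
            X[miss, j] = A[miss] @ beta + rng.normal(0, resid.std(), miss.sum())
    return X

imputed = [mice_once(Xm) for _ in range(5)]    # m = 5 imputed datasets
pooled_corr = np.mean([np.corrcoef(Xi[:, 0], Xi[:, 1])[0, 1] for Xi in imputed])
print("true corr:", np.corrcoef(X_full[:, 0], X_full[:, 1])[0, 1])
print("pooled MI corr:", pooled_corr)
```

In practice the per-imputation analysis would be a full logistic regression and the pooling would follow Rubin's rules; the pooled correlation here just shows that the association survives the imputation step.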
Earth Observing System Covariance Realism
Zaidi, Waqar H.; Hejduk, Matthew D.
2016-01-01
The purpose of covariance realism is to properly size a primary object's covariance in order to add validity to the calculation of the probability of collision. The covariance realism technique in this paper consists of three parts: collection/calculation of definitive state estimates through orbit determination, calculation of covariance realism test statistics at each covariance propagation point, and proper assessment of those test statistics. An empirical cumulative distribution function (ECDF) Goodness-of-Fit (GOF) method is employed to determine if a covariance is properly sized by comparing the empirical distribution of Mahalanobis distance calculations to the hypothesized parent 3-DoF chi-squared distribution. To realistically size a covariance for collision probability calculations, this study uses a state noise compensation algorithm that adds process noise to the definitive epoch covariance to account for uncertainty in the force model. Process noise is added until the GOF tests pass a group significance level threshold. The results of this study indicate that when outliers attributed to persistently high or extreme levels of solar activity are removed, the aforementioned covariance realism compensation method produces a tuned covariance with up to 80 to 90% of the covariance propagation timespan passing the GOF tests (against a 60% minimum passing threshold), a quite satisfactory and useful result.
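The GOF test described here can be sketched as comparing the empirical CDF of squared Mahalanobis distances against the 3-DoF chi-squared CDF. This is an illustrative reconstruction, not the operational tool; the covariances and sample sizes are invented:

```python
import numpy as np
from math import erf, sqrt, pi, exp

def chi2_cdf_3(x):
    """Chi-squared CDF with 3 degrees of freedom (closed form)."""
    return erf(sqrt(x / 2)) - sqrt(2 * x / pi) * exp(-x / 2)

def ks_stat(maha_sq):
    """Max distance between the ECDF of squared Mahalanobis distances
    and the hypothesized 3-DoF chi-squared CDF (KS-type statistic)."""
    s = np.sort(maha_sq)
    n = len(s)
    F = np.array([chi2_cdf_3(v) for v in s])
    return max(np.max(np.arange(1, n + 1) / n - F),
               np.max(F - np.arange(0, n) / n))

def maha_sq(errors, P):
    """Squared Mahalanobis distance of each error vector under covariance P."""
    Pinv = np.linalg.inv(P)
    return np.einsum("ij,jk,ik->i", errors, Pinv, errors)

rng = np.random.default_rng(4)
P_true = np.diag([4.0, 1.0, 0.25])     # "true" 3-D position-error covariance
errors = rng.multivariate_normal(np.zeros(3), P_true, size=2000)

d_good = ks_stat(maha_sq(errors, P_true))        # realistic covariance: passes
d_bad = ks_stat(maha_sq(errors, 0.25 * P_true))  # undersized covariance: fails
print(d_good, d_bad)
```

An undersized covariance inflates every Mahalanobis distance, so its ECDF pulls far away from the chi-squared reference, which is what the test statistic detects before process noise is added to re-tune the covariance.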
Lorentz covariant canonical symplectic algorithms for dynamics of charged particles
Wang, Yulei; Liu, Jian; Qin, Hong
2016-12-01
In this paper, the Lorentz covariance of algorithms is introduced. Under Lorentz transformation, both the form and performance of a Lorentz covariant algorithm are invariant. To acquire the advantages of symplectic algorithms and Lorentz covariance, a general procedure for constructing Lorentz covariant canonical symplectic algorithms (LCCSAs) is provided, based on which an explicit LCCSA for dynamics of relativistic charged particles is built. LCCSA possesses Lorentz invariance as well as long-term numerical accuracy and stability, due to the preservation of a discrete symplectic structure and the Lorentz symmetry of the system. For situations with time-dependent electromagnetic fields, which are difficult to handle in traditional construction procedures of symplectic algorithms, LCCSA provides a perfect explicit canonical symplectic solution by implementing the discretization in 4-spacetime. We also show that LCCSA has built-in energy-based adaptive time steps, which can optimize the computation performance when the Lorentz factor varies.
Miss Lora juveelikauplus = Miss Lora jewellery store
2009-01-01
On the interior design of the Miss Lora jewellery store in the Fama shopping centre in Narva (Tallinna mnt. 19c). Interior architects: Annes Arro and Hanna Karits. The shop's fittings - display cabinets, carpet, light fixtures - were made to order. Includes a list of the interior architects' most important works.
Lagishetty, Chakradhar V; Duffull, Stephen B
2015-11-01
Clinical studies include occurrences of rare variables, such as genotypes, whose low frequency and effect strength make their effects difficult to estimate from a dataset. Variables that influence the estimated value of a model-based parameter are termed covariates. It is often difficult to determine whether such an effect is significant, since type I error can be inflated when the covariate is rare. Rare covariates may have an insubstantial effect on the parameters of interest, and hence be ignorable, or conversely they may be influential and therefore non-ignorable. When such covariate effects cannot be estimated due to limited power yet are non-ignorable, they are considered nuisance effects: they must be accounted for but, because of type I error inflation, are of limited interest. This study assesses methods of handling nuisance covariate effects. The specific objectives include (1) calibrating the frequency of a covariate that is associated with type I error inflation, (2) calibrating the strength that renders it non-ignorable, and (3) evaluating methods for handling these non-ignorable covariates in a nonlinear mixed effects model setting. Type I error was determined for the Wald test. Methods considered for handling the nuisance covariate effects were case deletion, Box-Cox transformation, and inclusion of a specific fixed effects parameter. Non-ignorable nuisance covariates were found to be effectively handled through addition of a fixed effect parameter.
Contributions to Large Covariance and Inverse Covariance Matrices Estimation
Kang, Xiaoning
2016-01-01
Estimation of covariance matrix and its inverse is of great importance in multivariate statistics with broad applications such as dimension reduction, portfolio optimization, linear discriminant analysis and gene expression analysis. However, accurate estimation of covariance or inverse covariance matrices is challenging due to the positive definiteness constraint and large number of parameters, especially in the high-dimensional cases. In this thesis, I develop several approaches for estimat...
International Nuclear Information System (INIS)
Ginelli, Francesco; Politi, Antonio; Chaté, Hugues; Livi, Roberto
2013-01-01
Recent years have witnessed a growing interest in covariant Lyapunov vectors (CLVs) which span local intrinsic directions in the phase space of chaotic systems. Here, we review the basic results of ergodic theory, with a specific reference to the implications of Oseledets’ theorem for the properties of the CLVs. We then present a detailed description of a ‘dynamical’ algorithm to compute the CLVs and show that it generically converges exponentially in time. We also discuss its numerical performance and compare it with other algorithms presented in the literature. We finally illustrate how CLVs can be used to quantify deviations from hyperbolicity with reference to a dissipative system (a chain of Hénon maps) and a Hamiltonian model (a Fermi–Pasta–Ulam chain). This article is part of a special issue of Journal of Physics A: Mathematical and Theoretical devoted to ‘Lyapunov analysis: from dynamical systems theory to applications’. (paper)
Deriving covariant holographic entanglement
Energy Technology Data Exchange (ETDEWEB)
Dong, Xi [School of Natural Sciences, Institute for Advanced Study, Princeton, NJ 08540 (United States); Lewkowycz, Aitor [Jadwin Hall, Princeton University, Princeton, NJ 08544 (United States); Rangamani, Mukund [Center for Quantum Mathematics and Physics (QMAP), Department of Physics, University of California, Davis, CA 95616 (United States)
2016-11-07
We provide a gravitational argument in favour of the covariant holographic entanglement entropy proposal. In general time-dependent states, the proposal asserts that the entanglement entropy of a region in the boundary field theory is given by a quarter of the area of a bulk extremal surface in Planck units. The main element of our discussion is an implementation of an appropriate Schwinger-Keldysh contour to obtain the reduced density matrix (and its powers) of a given region, as is relevant for the replica construction. We map this contour into the bulk gravitational theory, and argue that the saddle point solutions of these replica geometries lead to a consistent prescription for computing the field theory Rényi entropies. In the limiting case where the replica index is taken to unity, a local analysis suffices to show that these saddles lead to the extremal surfaces of interest. We also comment on various properties of holographic entanglement that follow from this construction.
Networks of myelin covariance.
Melie-Garcia, Lester; Slater, David; Ruef, Anne; Sanabria-Diaz, Gretel; Preisig, Martin; Kherif, Ferath; Draganski, Bogdan; Lutti, Antoine
2018-04-01
Networks of anatomical covariance have been widely used to study connectivity patterns in both normal and pathological brains based on the concurrent changes of morphometric measures (i.e., cortical thickness) between brain structures across subjects (Evans, 2013). However, the existence of networks of microstructural changes within brain tissue has been largely unexplored so far. In this article, we studied in vivo the concurrent myelination processes among brain anatomical structures that gathered together emerge to form nonrandom networks. We name these "networks of myelin covariance" (Myelin-Nets). The Myelin-Nets were built from quantitative Magnetization Transfer data-an in-vivo magnetic resonance imaging (MRI) marker of myelin content. The synchronicity of the variations in myelin content between anatomical regions was measured by computing the Pearson's correlation coefficient. We were especially interested in elucidating the effect of age on the topological organization of the Myelin-Nets. We therefore selected two age groups: Young-Age (20-31 years old) and Old-Age (60-71 years old) and a pool of participants from 48 to 87 years old for a Myelin-Nets aging trajectory study. We found that the topological organization of the Myelin-Nets is strongly shaped by aging processes. The global myelin correlation strength, between homologous regions and locally in different brain lobes, showed a significant dependence on age. Interestingly, we also showed that the aging process modulates the resilience of the Myelin-Nets to damage of principal network structures. In summary, this work sheds light on the organizational principles driving myelination and myelin degeneration in brain gray matter and how such patterns are modulated by aging. © 2017 The Authors Human Brain Mapping Published by Wiley Periodicals, Inc.
Sehgal, Muhammad Shoaib B; Gondal, Iqbal; Dooley, Laurence S
2005-05-15
Microarray data are used in a range of application areas in biology, although they often contain considerable numbers of missing values. These missing values can significantly affect subsequent statistical analysis and machine learning algorithms, so there is a strong motivation to estimate them as accurately as possible before such algorithms are applied. While many imputation algorithms have been proposed, more robust techniques need to be developed so that further analysis of biological data can be accurately undertaken. In this paper, an innovative missing value imputation algorithm called collateral missing value estimation (CMVE) is presented, which uses multiple covariance-based imputation matrices for the final prediction of missing values. The matrices are computed and optimized using least squares regression and linear programming methods. The new CMVE algorithm has been compared with existing estimation techniques including Bayesian principal component analysis imputation (BPCA), least squares imputation (LSImpute), and K-nearest neighbour (KNN). All these methods were rigorously tested to estimate missing values in three separate non-time series (ovarian cancer based) datasets and one time series (yeast sporulation) dataset. Each method was quantitatively analyzed using the normalized root mean square (NRMS) error measure, covering a wide range of randomly introduced missing value probabilities from 0.01 to 0.2. Experiments were also undertaken on the yeast dataset, which comprised 1.7% actual missing values, to test the hypothesis that CMVE performed better not only for randomly occurring but also for a real distribution of missing values. The results confirmed that CMVE consistently demonstrated superior and robust estimation of missing values compared with the other methods for both types of data, for the same order of computational complexity. A concise theoretical framework has also been formulated to validate the improved performance of the CMVE
Directory of Open Access Journals (Sweden)
Ma Jinhui
2013-01-01
Background: The objective of this simulation study is to compare the accuracy and efficiency of population-averaged (i.e., generalized estimating equations, GEE) and cluster-specific (i.e., random-effects logistic regression, RELR) models for analyzing data from cluster randomized trials (CRTs) with missing binary responses. Methods: In this simulation study, clustered responses were generated from a beta-binomial distribution. The number of clusters per trial arm, the number of subjects per cluster, the intra-cluster correlation coefficient, and the percentage of missing data were allowed to vary. Under the assumption of covariate-dependent missingness, missing outcomes were handled by complete case analysis, standard multiple imputation (MI), and within-cluster MI strategies. Data were analyzed using GEE and RELR. Performance of the methods was assessed using standardized bias, empirical standard error, root mean squared error (RMSE), and coverage probability. Results: GEE performs well on all four measures, provided the downward bias of the standard error (when the number of clusters per arm is small) is adjusted appropriately, under the following scenarios: complete case analysis for CRTs with a small amount of missing data; standard MI for CRTs with variance inflation factor (VIF 50. RELR performs well only when a small amount of data was missing and complete case analysis was applied. Conclusion: GEE performs well as long as appropriate missing data strategies are adopted based on the design of CRTs and the percentage of missing data. In contrast, RELR does not perform well when either the standard or the within-cluster MI strategy is applied prior to the analysis.
General Galilei Covariant Gaussian Maps
Gasbarri, Giulio; Toroš, Marko; Bassi, Angelo
2017-09-01
We characterize general non-Markovian Gaussian maps which are covariant under Galilean transformations. In particular, we consider translational and Galilean covariant maps and show that they reduce to the known Holevo result in the Markovian limit. We apply the results to discuss measures of macroscopicity based on classicalization maps, specifically addressing dissipation, Galilean covariance and non-Markovianity. We further suggest a possible generalization of the macroscopicity measure defined by Nimmrichter and Hornberger [Phys. Rev. Lett. 110, 16 (2013)].
Fast Computing for Distance Covariance
Huo, Xiaoming; Szekely, Gabor J.
2014-01-01
Distance covariance and distance correlation have been widely adopted for measuring the dependence of a pair of random variables or random vectors. If the computation of distance covariance and distance correlation is implemented directly according to its definition, then its computational complexity is O($n^2$), which is a disadvantage compared to other, faster methods. In this paper we show that the computation of distance covariance and distance correlation of real-valued random variables can be...
OD Covariance in Conjunction Assessment: Introduction and Issues
Hejduk, M. D.; Duncan, M.
2015-01-01
Primary and secondary covariances are combined and projected into the conjunction plane (the plane perpendicular to the relative velocity vector at TCA). The primary is placed on the x-axis at (miss distance, 0) and is represented by a circle whose radius equals the sum of the two spacecraft circumscribing radii; the z-axis is perpendicular to the x-axis in the conjunction plane. Pc is the portion of the combined error ellipsoid that falls within the hard-body radius circle.
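Under these definitions, Pc can be approximated directly by Monte Carlo in the conjunction plane: sample the combined covariance centered at the miss vector and count the probability mass inside the hard-body circle. All numbers below are illustrative:

```python
import numpy as np

rng = np.random.default_rng(5)

# Conjunction-plane setup (invented values): combined covariance of primary
# and secondary projected into the 2-D conjunction plane, primary offset at
# (miss_distance, 0), hard-body radius = sum of circumscribing radii.
miss_distance = 120.0      # m
hbr = 20.0                 # m, combined hard-body radius
C = np.array([[2500.0, 400.0],
              [400.0, 900.0]])   # combined in-plane covariance (m^2)

# Pc = probability mass of the relative-position distribution that falls
# inside the hard-body circle.
samples = rng.multivariate_normal([miss_distance, 0.0], C, size=200_000)
pc = np.mean(np.hypot(samples[:, 0], samples[:, 1]) < hbr)
print("Monte Carlo Pc ~", pc)
```

Operational tools evaluate the same 2-D Gaussian integral over the hard-body circle with deterministic quadrature; the Monte Carlo version is just the most transparent way to see what the projected-covariance picture computes.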
Bioinspired Computational Approach to Missing Value Estimation
Directory of Open Access Journals (Sweden)
Israel Edem Agbehadji
2018-01-01
Missing data occur when values of variables in a dataset are not stored. Estimating these missing values is a significant step during the data cleansing phase of a big data management approach. Missing data may be due to nonresponse or omitted entries, and if not handled properly may produce inaccurate results during data analysis. Although a traditional method such as maximum likelihood can extrapolate missing values, this paper proposes a bioinspired method based on the behavior of birds, specifically the Kestrel bird. This paper describes the behavior and characteristics of the Kestrel bird, a bioinspired approach, in modeling an algorithm to estimate missing values. The proposed algorithm (KSA) was compared with the WSAMP, Firefly, and BAT algorithms. The results were evaluated using the mean absolute error (MAE). Statistical tests (the Wilcoxon signed-rank test and the Friedman test) were conducted to compare the performance of the algorithms. The results of the Wilcoxon test indicate that time does not have a significant effect on performance, and that the quality of estimation between the paired algorithms differed significantly; the Friedman test ranked KSA as the best evolutionary algorithm.
Covariation in Natural Causal Induction.
Cheng, Patricia W.; Novick, Laura R.
1991-01-01
Biases and models usually offered by cognitive and social psychology and by philosophy to explain causal induction are evaluated with respect to focal sets (contextually determined sets of events over which covariation is computed). A probabilistic contrast model is proposed as underlying covariation computation in natural causal induction. (SLD)
A cautionary note on generalized linear models for covariance of unbalanced longitudinal data
Huang, Jianhua Z.
2012-03-01
Missing data in longitudinal studies can create enormous challenges in data analysis when coupled with the positive-definiteness constraint on a covariance matrix. For complete balanced data, the Cholesky decomposition of a covariance matrix makes it possible to remove the positive-definiteness constraint and use a generalized linear model setup to jointly model the mean and covariance using covariates (Pourahmadi, 2000). However, this approach may not be directly applicable when the longitudinal data are unbalanced, as coherent regression models for the dependence across all times and subjects may not exist. Within the existing generalized linear model framework, we show how to overcome this and other challenges by embedding the covariance matrix of the observed data for each subject in a larger covariance matrix and employing the familiar EM algorithm to compute the maximum likelihood estimates of the parameters and their standard errors. We illustrate and assess the methodology using real data sets and simulations. © 2011 Elsevier B.V.
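The Cholesky-based reparameterization underlying this line of work (Pourahmadi, 2000) is what removes the positive-definiteness constraint: any unconstrained real autoregressive coefficients and log innovation variances map back to a valid covariance matrix. A minimal sketch of that mapping:

```python
import numpy as np

def covariance_from_unconstrained(phi, log_d):
    """Modified-Cholesky reparameterization: arbitrary real autoregressive
    coefficients phi (below-diagonal entries of unit lower-triangular T) and
    log innovation variances log_d always yield a positive-definite
    covariance Sigma = T^{-1} D T^{-T}."""
    m = len(log_d)
    T = np.eye(m)
    T[np.tril_indices(m, k=-1)] = -np.asarray(phi, float)
    D = np.diag(np.exp(log_d))          # exp keeps variances positive
    Tinv = np.linalg.inv(T)
    return Tinv @ D @ Tinv.T

rng = np.random.default_rng(6)
m = 4
phi = rng.standard_normal(m * (m - 1) // 2)   # no constraints needed
log_d = rng.standard_normal(m)
Sigma = covariance_from_unconstrained(phi, log_d)

eigs = np.linalg.eigvalsh(Sigma)
print("min eigenvalue:", eigs.min())   # positive for any phi, log_d
```

Because phi and log_d are unconstrained, both can be modeled with ordinary regressions on covariates, which is the generalized linear model setup the abstract refers to; the paper's contribution is making this workable when the longitudinal data are unbalanced.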
Real-time probabilistic covariance tracking with efficient model update.
Wu, Yi; Cheng, Jian; Wang, Jinqiao; Lu, Hanqing; Wang, Jun; Ling, Haibin; Blasch, Erik; Bai, Li
2012-05-01
The recently proposed covariance region descriptor has been proven robust and versatile for a modest computational cost. The covariance matrix enables efficient fusion of different types of features, where the spatial and statistical properties, as well as their correlation, are characterized. The similarity between two covariance descriptors is measured on Riemannian manifolds. Based on the same metric but with a probabilistic framework, we propose a novel tracking approach on Riemannian manifolds with a novel incremental covariance tensor learning (ICTL). To address the appearance variations, ICTL incrementally learns a low-dimensional covariance tensor representation and efficiently adapts online to appearance changes of the target with only O(1) computational complexity, resulting in a real-time performance. The covariance-based representation and the ICTL are then combined with the particle filter framework to allow better handling of background clutter, as well as the temporary occlusions. We test the proposed probabilistic ICTL tracker on numerous benchmark sequences involving different types of challenges including occlusions and variations in illumination, scale, and pose. The proposed approach demonstrates excellent real-time performance, both qualitatively and quantitatively, in comparison with several previously proposed trackers.
Statistical methods for handling incomplete data
Kim, Jae Kwang
2013-01-01
""… this book nicely blends the theoretical material and its application through examples, and will be of interest to students and researchers as a textbook or a reference book. Extensive coverage of recent advances in handling missing data provides resources and guidelines for researchers and practitioners in implementing the methods in new settings. … I plan to use this as a textbook for my teaching and highly recommend it.""-Biometrics, September 2014
Covariance matrices of experimental data
International Nuclear Information System (INIS)
Perey, F.G.
1978-01-01
A complete statement of the uncertainties in data is given by its covariance matrix. It is shown how the covariance matrix of data can be generated using the information available to obtain their standard deviations. Determination of resonance energies by the time-of-flight method is used as an example. The procedure for combining data when the covariance matrix is non-diagonal is given. The method is illustrated by means of examples taken from the recent literature to obtain an estimate of the energy of the first resonance in carbon and for five resonances of ²³⁸U
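Combining data with a non-diagonal covariance matrix amounts to generalized least squares: for repeated measurements of one quantity, the best linear unbiased estimate is (1ᵀV⁻¹1)⁻¹ 1ᵀV⁻¹y with variance (1ᵀV⁻¹1)⁻¹. A sketch with invented numbers showing how a shared systematic component raises the combined uncertainty:

```python
import numpy as np

def combine(y, V):
    """BLUE of a common quantity from measurements y with (possibly
    non-diagonal) covariance V, plus the variance of the estimate."""
    Vinv = np.linalg.inv(V)
    one = np.ones(len(y))
    var = 1.0 / (one @ Vinv @ one)
    return var * (one @ Vinv @ y), var

# Two measurements of the same resonance energy sharing a fully correlated
# systematic component: V = diag(stat^2) + sys^2 * J.
stat = np.array([0.5, 0.3])
sys = 0.2
V = np.diag(stat ** 2) + sys ** 2 * np.ones((2, 2))
y = np.array([10.2, 10.6])

est, var = combine(y, V)
print("correlated:", est, np.sqrt(var))

# Treating the same total uncertainties as independent understates the
# combined uncertainty.
est_d, var_d = combine(y, np.diag(stat ** 2 + sys ** 2))
print("independent:", est_d, np.sqrt(var_d))
```

The off-diagonal terms cannot be averaged away, so the correlated combination is (correctly) less optimistic, which is the practical point of propagating full covariance matrices for evaluated data.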
Evaluation and processing of covariance data
International Nuclear Information System (INIS)
Wagner, M.
1993-01-01
These proceedings of a specialists' meeting on evaluation and processing of covariance data are divided into 4 parts bearing on: part 1 - needs for evaluated covariance data (2 papers); part 2 - generation of covariance data (15 papers); part 3 - processing of covariance files (2 papers); part 4 - experience in the use of evaluated covariance data (2 papers)
Minimax Rate-optimal Estimation of High-dimensional Covariance Matrices with Incomplete Data.
Cai, T Tony; Zhang, Anru
2016-09-01
Missing data occur frequently in a wide range of applications. In this paper, we consider estimation of high-dimensional covariance matrices in the presence of missing observations under a general missing completely at random model in the sense that the missingness is not dependent on the values of the data. Based on incomplete data, estimators for bandable and sparse covariance matrices are proposed and their theoretical and numerical properties are investigated. Minimax rates of convergence are established under the spectral norm loss and the proposed estimators are shown to be rate-optimal under mild regularity conditions. Simulation studies demonstrate that the estimators perform well numerically. The methods are also illustrated through an application to data from four ovarian cancer studies. The key technical tools developed in this paper are of independent interest and potentially useful for a range of related problems in high-dimensional statistical inference with missing data.
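The estimation idea in the abstract — building a covariance estimate from incomplete data under missing completely at random — can be sketched with a pairwise-complete sample covariance, where each entry is averaged over only the samples in which both variables are observed. This is a generic sketch of the idea, not the authors' exact bandable/sparse estimator.

```python
import numpy as np

rng = np.random.default_rng(0)

def pairwise_complete_cov(X):
    """Pairwise-complete covariance. X: (n, p) array, np.nan marks missing."""
    obs = ~np.isnan(X)                                  # observation indicator
    counts = obs.T.astype(float) @ obs.astype(float)    # pairwise sample sizes
    means = np.nansum(X, axis=0) / obs.sum(axis=0)      # means over observed
    Xc = np.where(obs, X - means, 0.0)                  # centered, 0 at missing
    return (Xc.T @ Xc) / counts

# Simulate identity-covariance data with ~20% of entries missing at random.
X = rng.normal(size=(500, 4))
X[rng.random(X.shape) < 0.2] = np.nan
S = pairwise_complete_cov(X)
```

Banding or thresholding would then be applied to `S` to exploit bandable or sparse structure; the entrywise missingness correction is the part specific to incomplete data.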
Treatment of Nuclear Data Covariance Information in Sample Generation
Energy Technology Data Exchange (ETDEWEB)
Swiler, Laura Painton [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Adams, Brian M. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Wieselquist, William [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States). Reactor and Nuclear Systems Division
2017-10-01
This report summarizes a NEAMS (Nuclear Energy Advanced Modeling and Simulation) project focused on developing a sampling capability that can handle the challenges of generating samples from nuclear cross-section data. The covariance information between energy groups tends to be very ill-conditioned and thus poses a problem using traditional methods for generated correlated samples. This report outlines a method that addresses the sample generation from cross-section matrices.
On Galilean covariant quantum mechanics
International Nuclear Information System (INIS)
Horzela, A.; Kapuscik, E.; Kempczynski, J.; Joint Inst. for Nuclear Research, Dubna
1991-08-01
A formalism exhibiting the Galilean covariance of wave mechanics is proposed, and a new notion of quantum mechanical forces is introduced. The formalism is illustrated with the example of the harmonic oscillator. (author)
ERROR HANDLING IN INTEGRATION WORKFLOWS
Directory of Open Access Journals (Sweden)
Alexey M. Nazarenko
2017-01-01
Simulation experiments performed while solving multidisciplinary engineering and scientific problems require joint usage of multiple software tools. Further, when following a preset plan of experiment or searching for optimum solutions, the same sequence of calculations is run multiple times with various simulation parameters, input data, or conditions, while the overall workflow does not change. Automation of simulations like these requires implementing a workflow where tool execution and data exchange are usually controlled by a special type of software, an integration environment or platform. The result is an integration workflow (a platform-dependent implementation of some computing workflow) which, in the context of automation, is a composition of weakly coupled (in terms of communication intensity) typical subtasks. These compositions can then be decomposed back into a few workflow patterns (types of subtask interaction). The patterns, in their turn, can be interpreted as higher-level subtasks. This paper considers execution control and data exchange rules that should be imposed by the integration environment in the case of an error encountered by some integrated software tool. An error is defined as any abnormal behavior of a tool that invalidates its result data, thus disrupting the data flow within the integration workflow. The main requirement for the error handling mechanism implemented by the integration environment is to prevent abnormal termination of the entire workflow in case of missing intermediate result data. Error handling rules are formulated on the basic pattern level and on the level of a composite task that can combine several basic patterns as next-level subtasks. The cases where workflow behavior may differ, depending on the user's purposes, when an error takes place, and the possible error handling options that can be specified by the user, are also noted in the work.
Nuclear fuel handling apparatus
International Nuclear Information System (INIS)
Andrea, C.; Dupen, C.F.G.; Noyes, R.C.
1977-01-01
A fuel handling machine for a liquid metal cooled nuclear reactor in which a retractable handling tube and gripper are lowered into the reactor to withdraw a spent fuel assembly into the handling tube. The handling tube containing the fuel assembly immersed in liquid sodium is then withdrawn completely from the reactor into the outer barrel of the handling machine. The machine is then used to transport the spent fuel assembly directly to a remotely located decay tank. The fuel handling machine includes a decay heat removal system which continuously removes heat from the interior of the handling tube and which is capable of operating at its full cooling capacity at all times. The handling tube is supported in the machine from an articulated joint which enables it to readily align itself with the correct position in the core. An emergency sodium supply is carried directly by the machine to provide make up in the event of a loss of sodium from the handling tube during transport to the decay tank. 5 claims, 32 drawing figures
GLq(N)-covariant quantum algebras and covariant differential calculus
International Nuclear Information System (INIS)
Isaev, A.P.; Pyatov, P.N.
1992-01-01
GLq(N)-covariant quantum algebras with generators satisfying quadratic polynomial relations are considered. It is shown that, up to some inessential arbitrariness, there are only two kinds of such quantum algebras, namely, the algebras with q-deformed commutation and q-deformed anticommutation relations. 25 refs
GLq(N)-covariant quantum algebras and covariant differential calculus
International Nuclear Information System (INIS)
Isaev, A.P.; Pyatov, P.N.
1993-01-01
We consider GLq(N)-covariant quantum algebras with generators satisfying quadratic polynomial relations. We show that, up to some inessential arbitrariness, there are only two kinds of such quantum algebras, namely, the algebras with q-deformed commutation and q-deformed anticommutation relations. The connection with the bicovariant differential calculus on the linear quantum groups is discussed. (orig.)
A class of covariate-dependent spatiotemporal covariance functions
Reich, Brian J; Eidsvik, Jo; Guindani, Michele; Nail, Amy J; Schmidt, Alexandra M.
2014-01-01
In geostatistics, it is common to model spatially distributed phenomena through an underlying stationary and isotropic spatial process. However, these assumptions are often untenable in practice because of the influence of local effects in the correlation structure. Therefore, there has been prolonged interest in the literature in providing flexible and effective ways to model non-stationarity in the spatial effects. Arguably, due to the local nature of the problem, we might envision that the correlation structure would be highly dependent on local characteristics of the domain of study, namely the latitude, longitude and altitude of the observation sites, as well as other locally defined covariate information. In this work, we provide a flexible and computationally feasible way of allowing the correlation structure of the underlying processes to depend on local covariate information. We discuss the properties of the induced covariance functions and present methods to assess their dependence on local covariate information by means of a simulation study and the analysis of data observed at ozone-monitoring stations in the Southeast United States. PMID:24772199
Cosmic censorship conjecture revisited: covariantly
International Nuclear Information System (INIS)
Hamid, Aymen I M; Goswami, Rituparno; Maharaj, Sunil D
2014-01-01
In this paper we study the dynamics of the trapped region using a frame independent semi-tetrad covariant formalism for general locally rotationally symmetric (LRS) class II spacetimes. We covariantly prove some important geometrical results for the apparent horizon, and state the necessary and sufficient conditions for a singularity to be locally naked. These conditions bring out, for the first time in a quantitative and transparent manner, the importance of the Weyl curvature in deforming and delaying the trapped region during continual gravitational collapse, making the central singularity locally visible. (paper)
Harris, Troy G.; Minor, John
This text for a secondary- or postecondary-level course in grain handling and storage contains ten chapters. Chapter titles are (1) Introduction to Grain Handling and Storage, (2) Elevator Safety, (3) Grain Grading and Seed Identification, (4) Moisture Control, (5) Insect and Rodent Control, (6) Grain Inventory Control, (7) Elevator Maintenance,…
Biersdorfer, J
2009-01-01
Netbooks are the hot new thing in PCs -- small, inexpensive laptops designed for web browsing, email, and working with web-based programs. But chances are you don't know how to choose a netbook, let alone use one. Not to worry: with this Missing Manual, you'll learn which netbook is right for you and how to set it up and use it for everything from spreadsheets for work to hobbies like gaming and photo sharing. Netbooks: The Missing Manual provides easy-to-follow instructions and lots of advice to help you: Learn the basics for using a Windows- or Linux-based netbookConnect speakers, printe
Karp, David
2005-01-01
Your vacuum comes with one. Even your blender comes with one. But your PC--something that costs a whole lot more and is likely to be used daily and for tasks of far greater importance and complexity--doesn't come with a printed manual. Thankfully, that's not a problem any longer: PCs: The Missing Manual explains everything you need to know about PCs, both inside and out, and how to keep them running smoothly and working the way you want them to work. A complete PC manual for both beginners and power users, PCs: The Missing Manual has something for everyone. PC novices will appreciate the una
Covariance matrix estimation for stationary time series
Xiao, Han; Wu, Wei Biao
2011-01-01
We obtain a sharp convergence rate for banded covariance matrix estimates of stationary processes. A precise order of magnitude is derived for the spectral radius of sample covariance matrices. We also consider a thresholded covariance matrix estimator that can better characterize sparsity if the true covariance matrix is sparse. As our main tool, we implement Toeplitz's [Math. Ann. 70 (1911) 351–376] idea and relate eigenvalues of covariance matrices to the spectral densities or Fourier transforms...
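The banding idea in the abstract can be sketched directly: estimate the sample autocovariances up to a lag l, place them on a Toeplitz matrix, and set all longer-range lags to zero. The AR(1) data and the band width below are illustrative choices, not the paper's setup.

```python
import numpy as np

rng = np.random.default_rng(1)

def banded_toeplitz_cov(x, l):
    """Banded Toeplitz covariance estimate from one stationary series."""
    n = len(x)
    xc = x - x.mean()
    # Sample autocovariance gamma(k) for lags 0..l.
    gamma = np.array([xc[:n - k] @ xc[k:] / n for k in range(l + 1)])
    S = np.zeros((n, n))
    for k in range(l + 1):
        idx = np.arange(n - k)
        S[idx, idx + k] = gamma[k]   # super-diagonal at lag k
        S[idx + k, idx] = gamma[k]   # sub-diagonal, by symmetry
    return S

# AR(1) series with coefficient 0.5; its true variance is 1/(1-0.25) = 4/3.
n = 2000
e = rng.normal(size=n)
x = np.empty(n)
x[0] = e[0]
for t in range(1, n):
    x[t] = 0.5 * x[t - 1] + e[t]

S = banded_toeplitz_cov(x, l=3)
```

Choosing the band width l is the delicate part in practice; the rate results in the paper concern precisely how this truncation trades bias against variance.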
Modelling non-ignorable missing data mechanisms with item response theory models
Holman, Rebecca; Glas, Cornelis A.W.
2005-01-01
A model-based procedure for assessing the extent to which missing data can be ignored and handling non-ignorable missing data is presented. The procedure is based on item response theory modelling. As an example, the approach is worked out in detail in conjunction with item response data modelled
Results of Database Studies in Spine Surgery Can Be Influenced by Missing Data.
Basques, Bryce A; McLynn, Ryan P; Fice, Michael P; Samuel, Andre M; Lukasiewicz, Adam M; Bohl, Daniel D; Ahn, Junyoung; Singh, Kern; Grauer, Jonathan N
2017-12-01
National databases are increasingly being used for research in spine surgery; however, one limitation of such databases that has received sparse mention is the frequency of missing data. Studies using these databases often do not emphasize the percentage of missing data for each variable used and do not specify how patients with missing data are incorporated into analyses. This study uses the American College of Surgeons National Surgical Quality Improvement Program (ACS-NSQIP) database to examine whether different treatments of missing data can influence the results of spine studies. (1) What is the frequency of missing data fields for demographics, medical comorbidities, preoperative laboratory values, operating room times, and length of stay recorded in ACS-NSQIP? (2) Using three common approaches to handling missing data, how frequently do those approaches agree in terms of finding particular variables to be associated with adverse events? (3) Do different approaches to handling missing data influence the outcomes and effect sizes of an analysis testing for an association with these variables with occurrence of adverse events? Patients who underwent spine surgery between 2005 and 2013 were identified from the ACS-NSQIP database. A total of 88,471 patients undergoing spine surgery were identified. The most common procedures were anterior cervical discectomy and fusion, lumbar decompression, and lumbar fusion. Demographics, comorbidities, and perioperative laboratory values were tabulated for each patient, and the percent of missing data was noted for each variable. These variables were tested for an association with "any adverse event" using three separate multivariate regressions that used the most common treatments for missing data. In the first regression, patients with any missing data were excluded. In the second regression, missing data were treated as a negative or "reference" value; for continuous variables, the mean of each variable's reference range
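The three treatments of missing covariate data compared in the study can be sketched on synthetic data: drop incomplete rows, fill missing values with a "reference" value, or fill with the observed mean. The variables below are invented, not NSQIP fields, and the reference value 0 is an arbitrary stand-in for a reference-range endpoint.

```python
import numpy as np

rng = np.random.default_rng(2)

# Synthetic data: outcome y depends on covariate x with true slope 0.1.
n = 200
x = rng.normal(50.0, 10.0, n)                 # e.g. a lab value
y = 2.0 + 0.1 * x + rng.normal(0.0, 1.0, n)
x_obs = x.copy()
x_obs[rng.random(n) < 0.3] = np.nan           # ~30% missing at random

def ols_slope(x, y):
    """Slope from an ordinary least squares fit with intercept."""
    X = np.column_stack([np.ones_like(x), x])
    return np.linalg.lstsq(X, y, rcond=None)[0][1]

miss = np.isnan(x_obs)
slope_cc = ols_slope(x_obs[~miss], y[~miss])                         # (1) complete case
slope_ref = ols_slope(np.where(miss, 0.0, x_obs), y)                 # (2) reference fill
slope_mean = ols_slope(np.where(miss, np.nanmean(x_obs), x_obs), y)  # (3) mean fill
```

Filling with a reference value far from the observed range drags the fitted slope toward zero, which is the kind of treatment-dependent discrepancy the study documents.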
Condition Number Regularized Covariance Estimation.
Won, Joong-Ho; Lim, Johan; Kim, Seung-Jean; Rajaratnam, Bala
2013-06-01
Estimation of high-dimensional covariance matrices is known to be a difficult problem, has many applications, and is of current interest to the larger statistics community. In many applications, including the so-called "large p, small n" setting, the estimate of the covariance matrix is required to be not only invertible, but also well-conditioned. Although many regularization schemes attempt to do this, none of them address the ill-conditioning problem directly. In this paper, we propose a maximum likelihood approach, with the direct goal of obtaining a well-conditioned estimator. No sparsity assumption on either the covariance matrix or its inverse is imposed, thus making our procedure more widely applicable. We demonstrate that the proposed regularization scheme is computationally efficient, yields a type of Steinian shrinkage estimator, and has a natural Bayesian interpretation. We investigate the theoretical properties of the regularized covariance estimator comprehensively, including its regularization path, and proceed to develop an approach that adaptively determines the level of regularization that is required. Finally, we demonstrate the performance of the regularized estimator in decision-theoretic comparisons and in the financial portfolio optimization setting. The proposed approach has desirable properties, and can serve as a competitive procedure, especially when the sample size is small and when a well-conditioned estimator is required.
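A condition-number constraint of the kind described can be sketched by clipping the sample eigenvalues into an interval [u, kappa*u]. The interval choice below (u = lambda_max/kappa) is a simplification for illustration; the paper derives the interval from the likelihood rather than fixing it this way.

```python
import numpy as np

rng = np.random.default_rng(3)

def clip_condition(S, kappa):
    """Return a covariance estimate whose condition number is at most kappa."""
    lam, Q = np.linalg.eigh(S)
    u = lam.max() / kappa                      # simplified lower bound
    lam_clipped = np.clip(lam, u, kappa * u)   # truncate the spectrum
    return Q @ np.diag(lam_clipped) @ Q.T

# "Large p, small n": the sample covariance is singular (rank at most n).
n, p = 20, 50
X = rng.normal(size=(n, p))
S = X.T @ X / n
S_reg = clip_condition(S, kappa=30.0)
lam = np.linalg.eigvalsh(S_reg)
```

Clipping only the spectrum keeps the sample eigenvectors intact, which is why estimators of this family take the form of eigenvalue shrinkage.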
Covariant Gauss law commutator anomaly
International Nuclear Information System (INIS)
Dunne, G.V.; Trugenberger, C.A.; Massachusetts Inst. of Tech., Cambridge
1990-01-01
Using a (fixed-time) hamiltonian formalism we derive a covariant form for the anomaly in the commutator algebra of Gauss law generators for chiral fermions interacting with a dynamical non-abelian gauge field in 3+1 dimensions. (orig.)
Covariant gauges for constrained systems
International Nuclear Information System (INIS)
Gogilidze, S.A.; Khvedelidze, A.M.; Pervushin, V.N.
1995-01-01
A method of constructing an extended phase space for singular theories, which permits the consideration of covariant gauges without introducing ghost fields, is proposed. The extension of the phase space is carried out by identifying the initial theory with an equivalent theory with higher derivatives and applying to it the Ostrogradsky method of Hamiltonian description. 7 refs
Uncertainty covariances in robotics applications
International Nuclear Information System (INIS)
Smith, D.L.
1984-01-01
The application of uncertainty covariance matrices in the analysis of robot trajectory errors is explored. First, relevant statistical concepts are reviewed briefly. Then, a simple, hypothetical robot model is considered to illustrate methods for error propagation and performance test data evaluation. The importance of including error correlations is emphasized
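The error-propagation step described in the abstract can be sketched with first-order propagation of joint-angle uncertainty to end-effector position, C_y = J C_theta J^T. The two-link arm, link lengths, and angle covariance below are illustrative values, not the report's robot model.

```python
import numpy as np

# Planar two-link arm: end-effector position as a function of joint angles.
l1, l2 = 1.0, 0.8
theta = np.array([0.3, 0.7])          # joint angles (rad)
C_theta = np.diag([1e-4, 4e-4])       # joint-angle covariance (rad^2)

# Jacobian of (x, y) end-effector position w.r.t. the joint angles.
t12 = theta[0] + theta[1]
J = np.array([
    [-l1*np.sin(theta[0]) - l2*np.sin(t12), -l2*np.sin(t12)],
    [ l1*np.cos(theta[0]) + l2*np.cos(t12),  l2*np.cos(t12)],
])

C_y = J @ C_theta @ J.T               # propagated position covariance (m^2)
```

Note that the propagated covariance acquires off-diagonal terms even though the joint-angle errors were uncorrelated, which is exactly why neglecting error correlations downstream misstates the trajectory uncertainty.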
The Missing Entrepreneurs 2014
DEFF Research Database (Denmark)
Halabisky, David; Potter, Jonathan; Thompson, Stuart
OECD's LEED Programme and the European Commission's DG on Employment, Social Affairs and Inclusion recently published the second book as part of their programme of work on inclusive entrepreneurship. The Missing Entrepreneurs 2014 examines how public policies at national and local levels can...
Balfanz, Robert
2016-01-01
Results of a survey conducted by the Office for Civil Rights show that 6 million public school students (13%) are not attending school regularly. Chronic absenteeism--defined as missing more than 10% of school for any reason--has been negatively linked to many key academic outcomes. Evidence shows that students who exit chronic absentee status can…
Energy Technology Data Exchange (ETDEWEB)
Alnajjar, Mikhail S.; Haynie, Todd O.
2009-08-14
Pyrophoric reagents are extremely hazardous. Special handling techniques are required to prevent contact with air and the resulting fire. This document provides several methods for working with pyrophoric reagents outside of an inert atmosphere.
International Nuclear Information System (INIS)
Clement, G.
1984-01-01
After a definition of intervention, problems encountered in working in an adverse environment are briefly analyzed with respect to the development of various remote handling equipment. Some examples of existing equipment are given [fr]
Ergonomics and patient handling.
McCoskey, Kelsey L
2007-11-01
This study aimed to describe patient-handling demands in inpatient units during a 24-hour period at a military health care facility. A 1-day total population survey described the diverse nature and impact of patient-handling tasks relative to a variety of nursing care units, patient characteristics, and transfer equipment. Productivity baselines were established based on patient dependency, physical exertion, type of transfer, and time spent performing the transfer. Descriptions of the physiological effect of transfers on staff based on patient, transfer, and staff characteristics were developed. Nursing staff response to surveys demonstrated how patient-handling demands are impacted by the staff's physical exertion and level of patient dependency. The findings of this study describe the types of transfers occurring in these inpatient units and the physical exertion and time requirements for these transfers. This description may guide selection of the most appropriate and cost-effective patient-handling equipment required for specific units and patients.
Flexible Bayesian Dynamic Modeling of Covariance and Correlation Matrices
Lan, Shiwei
2017-11-08
Modeling covariance (and correlation) matrices is a challenging problem due to the large dimensionality and positive-definiteness constraint. In this paper, we propose a novel Bayesian framework based on decomposing the covariance matrix into variance and correlation matrices. The highlight is that the correlations are represented as products of vectors on unit spheres. We propose a variety of distributions on spheres (e.g. the squared-Dirichlet distribution) to induce flexible prior distributions for covariance matrices that go beyond the commonly used inverse-Wishart prior. To handle the intractability of the resulting posterior, we introduce the adaptive Δ-Spherical Hamiltonian Monte Carlo. We also extend our structured framework to dynamic cases and introduce unit-vector Gaussian process priors for modeling the evolution of correlation among multiple time series. Using an example of the Normal-Inverse-Wishart problem, a simulated periodic process, and an analysis of local field potential data (collected from the hippocampus of rats performing a complex sequence memory task), we demonstrate the validity and effectiveness of our proposed framework for (dynamic) modeling of covariance and correlation matrices.
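The separation strategy in the abstract can be sketched as Sigma = D R D, with D a diagonal matrix of standard deviations and the correlation matrix R built from unit vectors on the sphere, R_ij = u_i · u_j. Any R of this form automatically has unit diagonal and is positive semi-definite, which is what makes the sphere parameterization convenient; the specific vectors and standard deviations below are arbitrary illustrations.

```python
import numpy as np

rng = np.random.default_rng(4)

p = 4
# Rows of U are unit vectors on the sphere; R_ij = u_i . u_j.
U = rng.normal(size=(p, p))
U /= np.linalg.norm(U, axis=1, keepdims=True)
R = U @ U.T                                   # valid correlation matrix

sd = np.array([0.5, 1.0, 2.0, 1.5])           # standard deviations (diag of D)
Sigma = np.diag(sd) @ R @ np.diag(sd)         # Sigma = D R D
```

In the Bayesian framework of the paper, priors are then placed on the sphere vectors and on the variances separately, rather than on Sigma as a whole.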
Estimating range of influence in case of missing spatial data
DEFF Research Database (Denmark)
Bihrmann, Kristine; Ersbøll, Annette Kjær
2015-01-01
BACKGROUND: The range of influence refers to the average distance between locations at which the observed outcome is no longer correlated. In many studies, missing data occur and a popular tool for handling missing data is multiple imputation. The objective of this study was to investigate how the estimated range of influence is affected when 1) the outcome is only observed at some of a given set of locations, and 2) multiple imputation is used to impute the outcome at the non-observed locations. METHODS: The study was based on the simulation of missing outcomes in a complete data set. The range of influence was estimated from a logistic regression model with a spatially structured random effect, modelled by a Gaussian field. Results were evaluated by comparing estimates obtained from complete, missing, and imputed data. RESULTS: In most simulation scenarios, the range estimates were consistent...
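Once the analysis has been run on each imputed data set, the multiple-imputation estimates are combined with Rubin's rules: the pooled point estimate is the mean over the m analyses, and its variance combines within- and between-imputation variance. The numbers below are invented estimates of a "range of influence", purely illustrative.

```python
import numpy as np

# Per-imputation estimates and their variances (m = 5 analyses).
est = np.array([12.1, 11.4, 13.0, 12.6, 11.9])
var = np.array([0.8, 0.9, 0.7, 0.85, 0.75])

m = len(est)
q_bar = est.mean()                 # pooled point estimate
w_bar = var.mean()                 # within-imputation variance
b = est.var(ddof=1)                # between-imputation variance
t = w_bar + (1 + 1/m) * b          # total variance (Rubin's rules)
```

The between-imputation component b is what carries the extra uncertainty due to the missing outcomes; ignoring it (using w_bar alone) understates the standard error of the pooled range estimate.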
Group covariance and metrical theory
International Nuclear Information System (INIS)
Halpern, L.
1983-01-01
The a priori introduction of a Lie group of transformations into a physical theory has often proved to be useful; it usually serves to describe special simplified conditions before a general theory can be worked out. Newton's assumptions of absolute space and time are examples where the Euclidean group and translation group have been introduced. These groups were extended to the Galilei group and modified in the special theory of relativity to the Poincare group to describe physics under the given conditions covariantly in the simplest way. The criticism of the a priori character leads to the formulation of the general theory of relativity. The general metric theory does not really give preference to a particular invariance group - even the principle of equivalence can be adapted to a whole family of groups. The physical laws covariantly inserted into the metric space are however adapted to the Poincare group. 8 references
Missed opportunities in crystallography.
Dauter, Zbigniew; Jaskolski, Mariusz
2014-09-01
Scrutinized from the perspective of time, the giants in the history of crystallography more than once missed a nearly obvious chance to make another great discovery, or went in the wrong direction. This review analyzes such missed opportunities focusing on macromolecular crystallographers (using Perutz, Pauling, Franklin as examples), although cases of particular historical (Kepler), methodological (Laue, Patterson) or structural (Pauling, Ramachandran) relevance are also described. Linus Pauling, in particular, is presented several times in different circumstances, as a man of vision, oversight, or even blindness. His example underscores the simple truth that also in science incessant creativity is inevitably connected with some probability of fault. Published 2014. This article is a U.S. Government work and is in the public domain in the USA.
DEFF Research Database (Denmark)
Wildermuth, Norbert
In November 1996 the South Indian metropolis Bangalore hosted the annual Miss World show. The live event and its televisualisation became a prominent symbol of India's economic liberalisation and of the immanent globalizing dimensions of this development. As such, the highly prestigious…, the pageant's contestation, which gave rise to a series of vehement protests and a broad public debate about the country's cultural alienation, marked a crucial point in time and trend towards the (re)localisation of the Indian television landscape. In consequence, the 1996 Miss World show and its… their vision and politics of gender, nation and modernity on the larger Indian public, over the last two decades. Engaging the Indian population increasingly by way of the new electronic, c&s distributed media, competing discourses of gender and sexuality were projected, basically as a necessary, effective…
Phenotypic covariance at species' borders.
Caley, M Julian; Cripps, Edward; Game, Edward T
2013-05-28
Understanding the evolution of species limits is important in ecology, evolution, and conservation biology. Despite its likely importance in the evolution of these limits, little is known about phenotypic covariance in geographically marginal populations, and the degree to which it constrains, or facilitates, responses to selection. We investigated phenotypic covariance in morphological traits at species' borders by comparing phenotypic covariance matrices (P), including the degree of shared structure, the distribution of strengths of pair-wise correlations between traits, the degree of morphological integration of traits, and the ranks of matrices, between central and marginal populations of three species-pairs of coral reef fishes. Greater structural differences in P were observed between populations close to range margins and conspecific populations toward range centres, than between pairs of conspecific populations that were both more centrally located within their ranges. Approximately 80% of all pair-wise trait correlations within populations were greater in the north, but these differences were unrelated to the position of the sampled population with respect to the geographic range of the species. Neither the degree of morphological integration, nor ranks of P, indicated greater evolutionary constraint at range edges. Characteristics of P observed here provide no support for constraint contributing to the formation of these species' borders, but may instead reflect structural change in P caused by selection or drift, and their potential to evolve in the future.
International Nuclear Information System (INIS)
Bundy, A.L.
1988-01-01
One of the questions that haunts the radiologist as he shuffles through piles of films is "What am I missing?" This same question takes on even more meaning when the radiologist is pressed for time, when he reluctantly checks the night work of the resident, when the patient left before more or better films could be obtained, or when the radiologist is involved in a subspecialty in which he is not properly trained. According to Dr. Berlin's survey, the missed diagnosis category accounted for the largest number of radiology malpractice cases. We all know that many diagnoses are more easily made using the "retrospectoscope." But is the plaintiff attorney also adept at using this instrument? Just how knowledgeable must the radiologist be in the use of the "prospectoscope"? A familiarity with cases that have already been tried should at least alert radiologists to the chances of their own involvement in litigation. While the missed diagnosis is by no means peculiar to the radiologist, it is one of the principal reasons that he may find himself in court
DEFF Research Database (Denmark)
Tanggaard, Lene; Glaveanu, Vlad Petre
creative learning at the borders need not minimize differences, but handle and learn from them? If not, schools and educational institutions risk becoming bad copies of the labour market instead of enabling students to enter the market with something new, something radically dissimilar from what...
Semiparametric Theory and Missing Data
Tsiatis, Anastasios A
2006-01-01
Missing data arise in almost all scientific disciplines. In many cases, missing data in an analysis is treated in a casual and ad-hoc manner, leading to invalid inferences and erroneous conclusions. This book summarizes knowledge regarding the theory of estimation for semiparametric models with missing data.
Linear Regression with a Randomly Censored Covariate: Application to an Alzheimer's Study.
Atem, Folefac D; Qian, Jing; Maye, Jacqueline E; Johnson, Keith A; Betensky, Rebecca A
2017-01-01
The association between maternal age of onset of dementia and amyloid deposition (measured by in vivo positron emission tomography (PET) imaging) in cognitively normal older offspring is of interest. In a regression model for amyloid, special methods are required due to the random right censoring of the covariate of maternal age of onset of dementia. Prior literature has proposed methods to address the problem of censoring due to assay limit of detection, but not random censoring. We propose imputation methods and a survival regression method that do not require parametric assumptions about the distribution of the censored covariate. Existing imputation methods address missing covariates, but not right censored covariates. In simulation studies, we compare these methods to the simple, but inefficient complete case analysis, and to thresholding approaches. We apply the methods to the Alzheimer's study.
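The simple complete-case comparator the abstract mentions can be sketched in a few lines. This is a generic illustration under our own setup (the function name and interface are ours, not the authors'): subjects whose covariate is censored are simply dropped before an ordinary least-squares fit.

```python
import numpy as np

def complete_case_fit(y, x, censored):
    """Complete-case least squares: drop subjects whose covariate is censored.

    A sketch of the simple but inefficient comparator described in the
    abstract; the paper's imputation and survival-regression estimators
    are not reproduced here. `censored` is a boolean array with True
    where x is right-censored (e.g. a parent not yet affected).
    """
    keep = ~np.asarray(censored, bool)
    X = np.column_stack([np.ones(keep.sum()), np.asarray(x, float)[keep]])
    beta, *_ = np.linalg.lstsq(X, np.asarray(y, float)[keep], rcond=None)
    return beta  # [intercept, slope]
```

Dropping censored subjects is unbiased under random censoring but wastes the partial information that the true covariate exceeds the censored value, which is what the proposed methods recover.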
Modeling Covariance Breakdowns in Multivariate GARCH
Jin, Xin; Maheu, John M
2014-01-01
This paper proposes a flexible way of modeling dynamic heterogeneous covariance breakdowns in multivariate GARCH (MGARCH) models. During periods of normal market activity, volatility dynamics are governed by an MGARCH specification. A covariance breakdown is any significant temporary deviation of the conditional covariance matrix from its implied MGARCH dynamics. This is captured through a flexible stochastic component that allows for changes in the conditional variances, covariances and impl...
Principal Component Analysis of Process Datasets with Missing Values
Directory of Open Access Journals (Sweden)
Kristen A. Severson
2017-07-01
Datasets with missing values arising from causes such as sensor failure, inconsistent sampling rates, and merging data from different systems are common in the process industry. Methods for handling missing data typically operate during data pre-processing, but can also be applied during model building. This article considers missing data within the context of principal component analysis (PCA), a method originally developed for complete data that has widespread industrial application in multivariate statistical process control. Due to the prevalence of missing data and the success of PCA for handling complete data, several PCA algorithms that can act on incomplete data have been proposed. Here, algorithms for applying PCA to datasets with missing values are reviewed. A case study is presented to demonstrate the performance of the algorithms, and suggestions are made with respect to choosing which algorithm is most appropriate for particular settings. An alternating algorithm based on the singular value decomposition achieved the best results in the majority of test cases involving process datasets.
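The alternating-SVD idea singled out in the abstract can be sketched as iterative low-rank imputation. This is a generic version of the approach (sometimes called hard-impute), not the specific algorithm benchmarked in the article: missing cells are seeded with column means, then repeatedly replaced by their rank-k SVD reconstruction until the filled values stabilize.

```python
import numpy as np

def svd_impute_pca(X, rank=2, tol=1e-6, max_iter=500):
    """Alternating SVD imputation for PCA on data with NaNs (generic sketch)."""
    X = np.asarray(X, dtype=float)
    mask = np.isnan(X)                     # True where data are missing
    col_means = np.nanmean(X, axis=0)
    Xf = np.where(mask, col_means, X)      # initial fill with column means
    for _ in range(max_iter):
        U, s, Vt = np.linalg.svd(Xf, full_matrices=False)
        Xhat = (U[:, :rank] * s[:rank]) @ Vt[:rank]  # rank-k reconstruction
        change = np.max(np.abs(Xhat[mask] - Xf[mask])) if mask.any() else 0.0
        Xf[mask] = Xhat[mask]              # update only the missing cells
        if change < tol:
            break
    scores = Xf @ Vt[:rank].T              # PCA scores from the final fit
    return Xf, scores
```

Each iteration can only decrease the distance between the filled matrix and its best rank-k approximation, which is why the scheme converges to a self-consistent low-rank completion.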
International Nuclear Information System (INIS)
Sato, Shinri
1985-01-01
In nuclear power facilities, radioactive wastes are managed with remote and automatic handling techniques. Maintenance and support of such systems under radiation fields are important. To cope with this situation, the MF-2 system, the MF-3 system and a manipulator system are described as remote handling machines. The MF-2 system consists of an MF-2 carrier truck, a control unit and a command trailer. It is capable of handling heavy-weight objects. The system is driven not hydraulically but electrically. The MF-3 system consists of a four-crawler truck and a manipulator. The truck can vary its posture by means of the four independent crawlers. The manipulator system is bilateral in operation, so that delicate handling is possible. (Mori, K.)
DEFF Research Database (Denmark)
Ræbild, Ulla
to touch, pick up, carry, or feel with the hands. Figuratively it is to manage, deal with, direct, train, or control. Additionally, as a noun, a handle is something by which we grasp or open up something. Lastly, handle also has a Nordic root, here meaning to trade, bargain or deal. Together all four...... meanings seem to merge in the fashion design process, thus opening up for an embodied engagement with matter that entails direction giving, organizational management and negotiation. By seeing processes of handling as a key fashion methodological practice, it is possible to divert the discourse away from...... introduces four ways whereby fashion designers apply their own bodies as tools for design; a) re-activating past garment-design experiences, b) testing present garment-design experiences c) probing for new garment-design experiences and d) design of future garment experiences by body proxy. The paper...
International Nuclear Information System (INIS)
Grisham, D.L.; Lambert, J.E.
1983-01-01
Experimental area A at the Clinton P. Anderson Meson Physics Facility (LAMPF) encompasses a large area. Presently there are four experimental target cells along the main proton beam line that have become highly radioactive, thus dictating that all maintenance be performed remotely. The Monitor remote handling system was developed to perform in situ maintenance at any location within area A. Due to the complexity of experimental systems and confined space, conventional remote handling methods based upon hot cell and/or hot bay concepts are not workable. Contrary to conventional remote handling, which requires special tooling for each specifically planned operation, the Monitor concept is aimed at providing a totally flexible system capable of remotely performing general mechanical and electrical maintenance operations using standard tools. The Monitor system is described.
Groupe ST/HM
2002-01-01
A new EDH document entitled 'Transport/Handling Request' will be in operation as of Monday, 11th February 2002, when the corresponding icon will be accessible from the EDH desktop, together with the application instructions. This EDH form will replace the paper-format transport/handling request form for all activities involving the transport of equipment and materials. However, the paper form will still be used for all vehicle-hire requests. The introduction of the EDH transport/handling request form is accompanied by the establishment of the following time limits for the various services concerned: 24 hours for the removal of office items, 48 hours for the transport of heavy items (of up to 6 metric tons and of standard road width), 5 working days for a crane operation, extra-heavy transport operation or complete removal, 5 working days for all transport operations relating to LHC installation. ST/HM Group, Logistics Section Tel: 72672 - 72202
Proofs of Contracted Length Non-covariance
International Nuclear Information System (INIS)
Strel'tsov, V.N.
1994-01-01
Different proofs of contracted length non-covariance are discussed. The way based on the establishment of interval inconstancy (dependence on velocity) seems to be the most convincing one. It is stressed that the known non-covariance of the electromagnetic field energy and momentum of a moving charge (the '4/3 problem') is a direct consequence of contracted length non-covariance. 8 refs
Structural Analysis of Covariance and Correlation Matrices.
Joreskog, Karl G.
1978-01-01
A general approach to analysis of covariance structures is considered, in which the variances and covariances or correlations of the observed variables are directly expressed in terms of the parameters of interest. The statistical problems of identification, estimation and testing of such covariance or correlation structures are discussed.…
Construction of covariance matrix for experimental data
International Nuclear Information System (INIS)
Liu Tingjin; Zhang Jianhua
1992-01-01
For evaluators and experimenters, information is complete only when the covariance matrix is given. The construction of the covariance matrix for indirectly measured data is presented and discussed. As an example, the covariance matrix of the ²³Na(n,2n) cross section is constructed. A reasonable result is obtained.
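A common textbook construction of an experimental covariance matrix treats statistical uncertainties as uncorrelated point to point and adds a fully correlated systematic component as an outer product. The sketch below is that generic recipe (not necessarily the paper's specific construction):

```python
import numpy as np

def build_covariance(stat, sys_corr):
    """Covariance matrix from statistical and correlated systematic errors.

    stat     : 1-D array of absolute statistical uncertainties (uncorrelated),
               contributing only to the diagonal.
    sys_corr : 1-D array of absolute, fully correlated systematic
               uncertainties, contributing the rank-one outer product.
    """
    stat = np.asarray(stat, float)
    sys = np.asarray(sys_corr, float)
    return np.diag(stat**2) + np.outer(sys, sys)
```

By construction the result is symmetric and positive semi-definite, which is exactly the property evaluators must verify before a covariance matrix is accepted into a library.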
A simple method for analyzing data from a randomized trial with a missing binary outcome
Directory of Open Access Journals (Sweden)
Freedman Laurence S
2003-05-01
Background: Many randomized trials involve missing binary outcomes. Although many previous adjustments for missing binary outcomes have been proposed, none of these makes explicit use of randomization to bound the bias when the data are not missing at random. Methods: We propose a novel approach that uses the randomization distribution to compute the anticipated maximum bias when missing at random does not hold due to an unobserved binary covariate (implying that missingness depends on outcome and treatment group). The anticipated maximum bias equals the product of two factors: (a) the anticipated maximum bias if there were complete confounding of the unobserved covariate with treatment group among subjects with an observed outcome, and (b) an upper bound factor that depends only on the fraction missing in each randomization group. If less than 15% of subjects are missing in each group, the upper bound factor is less than .18. Results: We illustrated the methodology using data from the Polyp Prevention Trial. We anticipated a maximum bias under complete confounding of .25. With only 7% and 9% missing in each arm, the upper bound factor, after adjusting for age and sex, was .10. The anticipated maximum bias of .25 × .10 = .025 would not have affected the conclusion of no treatment effect. Conclusion: This approach is easy to implement and is particularly informative when less than 15% of subjects are missing in each arm.
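The product bound described in the abstract is simple arithmetic once the two factors are in hand. A minimal sketch (the closed form of the upper bound factor as a function of the missing fractions is in the paper and is not reproduced here; the factors are simply supplied as numbers):

```python
def anticipated_max_bias(bias_complete_confounding, upper_bound_factor):
    """Product form of the bound: (complete-confounding bias) x (upper bound factor)."""
    return bias_complete_confounding * upper_bound_factor

# The Polyp Prevention Trial example from the abstract: a maximum bias of
# .25 under complete confounding and an upper bound factor of .10 give
bias = anticipated_max_bias(0.25, 0.10)  # 0.025
```

Comparing this worst-case bias with the estimated treatment effect is what lets the authors conclude that the missing outcomes could not have changed the trial's conclusion.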
Lorentz covariant theory of gravitation
International Nuclear Information System (INIS)
Fagundes, H.V.
1974-12-01
An alternative method for the calculation of second-order effects, like the secular shift of Mercury's perihelion, is developed. This method uses the basic ideas of Thirring combined with the more mathematical approach of Feynman. In the case of a static source, the treatment used is greatly simplified. Besides, the Einstein-Infeld-Hoffmann Lagrangian for a system of two particles, and the spin-orbit and spin-spin interactions of two particles with classical spin, i.e. internal angular momentum in Møller's sense, are obtained from the Lorentz covariant theory.
International Nuclear Information System (INIS)
Sebestyen, A.
1975-07-01
The principle of covariance is extended to coordinates corresponding to internal degrees of freedom. The conditions for a system to be isolated are given. It is shown how internal forces arise in such systems. Equations for internal fields are derived. By an interpretation of the generalized coordinates based on group theory it is shown how particles of the ordinary sense enter into the model and as a simple application the gravitational interaction of two pointlike particles is considered and the shift of the perihelion is deduced. (Sz.Z.)
Covariant gauges at finite temperature
Landshoff, Peter V
1992-01-01
A prescription is presented for real-time finite-temperature perturbation theory in covariant gauges, in which only the two physical degrees of freedom of the gauge-field propagator acquire thermal parts. The propagators for the unphysical degrees of freedom of the gauge field, and for the Faddeev-Popov ghost field, are independent of temperature. This prescription is applied to the calculation of the one-loop gluon self-energy and the two-loop interaction pressure, and is found to be simpler to use than the conventional one.
Veer, E
2011-01-01
Facebook's spreading about as far and fast as the Web itself: 500 million members and counting. But there's a world of fun packed into the site that most folks miss. With this bestselling guide, learn how to unlock Facebook's talents as personal website creator, souped-up address book, and bustling community forum. It's an eye-opening, timesaving tour, guaranteed to help you get the most out of your Facebook experience. Coverage includes: Get started, get connected. Signing up is easy, but the real payoff comes when you tap into networks of coworkers, classmates, and friends. Pick and choose
Covariance Evaluation Methodology for Neutron Cross Sections
Energy Technology Data Exchange (ETDEWEB)
Herman, M.; Arcilla, R.; Mattoon, C.M.; Mughabghab, S.F.; Oblozinsky, P.; Pigni, M.; Pritychenko, B.; Sonzogni, A.A.
2008-09-01
We present the NNDC-BNL methodology for estimating neutron cross section covariances in thermal, resolved resonance, unresolved resonance and fast neutron regions. The three key elements of the methodology are Atlas of Neutron Resonances, nuclear reaction code EMPIRE, and the Bayesian code implementing Kalman filter concept. The covariance data processing, visualization and distribution capabilities are integral components of the NNDC methodology. We illustrate its application on examples including relatively detailed evaluation of covariances for two individual nuclei and massive production of simple covariance estimates for 307 materials. Certain peculiarities regarding evaluation of covariances for resolved resonances and the consistency between resonance parameter uncertainties and thermal cross section uncertainties are also discussed.
International Nuclear Information System (INIS)
1991-01-01
The main objective of this publication is to provide practical guidance and recommendations on operational radiation protection aspects related to the safe handling of tritium in laboratories, industrial-scale nuclear facilities such as heavy-water reactors, tritium removal plants and fission fuel reprocessing plants, and facilities for manufacturing commercial tritium-containing devices and radiochemicals. The requirements of nuclear fusion reactors are not addressed specifically, since there is as yet no tritium handling experience with them. However, much of the material covered is expected to be relevant to them as well. Annex III briefly addresses problems in the comparatively small-scale use of tritium at universities, medical research centres and similar establishments. However, the main subject of this publication is the handling of larger quantities of tritium. Operational aspects include designing for tritium safety, safe handling practice, the selection of tritium-compatible materials and equipment, exposure assessment, monitoring, contamination control and the design and use of personal protective equipment. This publication does not address the technologies involved in tritium control and cleanup of effluents, tritium removal, or immobilization and disposal of tritium wastes, nor does it address the environmental behaviour of tritium. Refs, figs and tabs
Rendleman, Matt; Legacy, James
This publication provides an introduction to grain grading and handling for adult students in vocational and technical education programs. Organized in five chapters, the booklet provides a brief overview of the jobs performed at a grain elevator and of the techniques used to grade grain. The first chapter introduces the grain industry and…
Mars Sample Handling Functionality
Meyer, M. A.; Mattingly, R. L.
2018-04-01
The final leg of a Mars Sample Return campaign would be an entity that we have referred to as Mars Returned Sample Handling (MRSH.) This talk will address our current view of the functional requirements on MRSH, focused on the Sample Receiving Facility (SRF).
Energy Technology Data Exchange (ETDEWEB)
1974-09-18
Details of bulk handling equipment suitable for collection and compressing wood waste from commercial joinery works are discussed. The Redler Bin Discharger ensures free flow of chips from storage silo discharge prior to compression into briquettes for use as fuel or processing into chipboard.
Poincare covariance and κ-Minkowski spacetime
International Nuclear Information System (INIS)
Dabrowski, Ludwik; Piacitelli, Gherardo
2011-01-01
A fully Poincare covariant model is constructed as an extension of the κ-Minkowski spacetime. Covariance is implemented by a unitary representation of the Poincare group, and thus complies with the original Wigner approach to quantum symmetries. This provides yet another example (besides the DFR model), where Poincare covariance is realised a la Wigner in the presence of two characteristic dimensionful parameters: the light speed and the Planck length. In other words, a Doubly Special Relativity (DSR) framework may well be realised without deforming the meaning of 'Poincare covariance'. -- Highlights: → We construct a 4d model of noncommuting coordinates (quantum spacetime). → The coordinates are fully covariant under the undeformed Poincare group. → Covariance a la Wigner holds in presence of two dimensionful parameters. → Hence we are not forced to deform covariance (e.g. as quantum groups). → The underlying κ-Minkowski model is unphysical; covariantisation does not cure this.
Bayesian Sensitivity Analysis of Statistical Models with Missing Data.
Zhu, Hongtu; Ibrahim, Joseph G; Tang, Niansheng
2014-04-01
Methods for handling missing data depend strongly on the mechanism that generated the missing values, such as missing completely at random (MCAR) or missing at random (MAR), as well as other distributional and modeling assumptions at various stages. It is well known that the resulting estimates and tests may be sensitive to these assumptions as well as to outlying observations. In this paper, we introduce various perturbations to modeling assumptions and individual observations, and then develop a formal sensitivity analysis to assess these perturbations in the Bayesian analysis of statistical models with missing data. We develop a geometric framework, called the Bayesian perturbation manifold, to characterize the intrinsic structure of these perturbations. We propose several intrinsic influence measures to perform sensitivity analysis and quantify the effect of various perturbations to statistical models. We use the proposed sensitivity analysis procedure to systematically investigate the tenability of the not missing at random (NMAR) assumption. Simulation studies are conducted to evaluate our methods, and a dataset is analyzed to illustrate the use of our diagnostic measures.
COVARIANCE ASSISTED SCREENING AND ESTIMATION.
Ke, Tracy; Jin, Jiashun; Fan, Jianqing
2014-11-01
Consider a linear model Y = Xβ + z, where X = X_{n,p} and z ~ N(0, I_n). The vector β is unknown and it is of interest to separate its nonzero coordinates from the zero ones (i.e., variable selection). Motivated by examples in long-memory time series (Fan and Yao, 2003) and the change-point problem (Bhattacharya, 1994), we are primarily interested in the case where the Gram matrix G = X'X is non-sparse but sparsifiable by a finite order linear filter. We focus on the regime where signals are both rare and weak so that successful variable selection is very challenging but is still possible. We approach this problem by a new procedure called the Covariance Assisted Screening and Estimation (CASE). CASE first uses a linear filtering to reduce the original setting to a new regression model where the corresponding Gram (covariance) matrix is sparse. The new covariance matrix induces a sparse graph, which guides us to conduct multivariate screening without visiting all the submodels. By interacting with the signal sparsity, the graph enables us to decompose the original problem into many separated small-size subproblems (if only we know where they are!). Linear filtering also induces a so-called problem of information leakage, which can be overcome by the newly introduced patching technique. Together, these give rise to CASE, which is a two-stage Screen and Clean (Fan and Song, 2010; Wasserman and Roeder, 2009) procedure, where we first identify candidates of these submodels by patching and screening, and then re-examine each candidate to remove false positives. For any procedure β̂ for variable selection, we measure the performance by the minimax Hamming distance between the sign vectors of β̂ and β. We show that in a broad class of situations where the Gram matrix is non-sparse but sparsifiable, CASE achieves the optimal rate of convergence. The results are successfully applied to long-memory time series and the change-point model.
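The "sparsifiable by a finite order linear filter" premise can be illustrated with a toy AR(1) design (our own illustration, not the paper's long-memory setting): the raw Gram matrix has entries decaying like ρ^|i-j| and is dense, but applying the first-order filter (1, -ρ) to the predictor sequence leaves columns with an approximately diagonal covariance.

```python
import numpy as np

rng = np.random.default_rng(0)
rho, n, p = 0.7, 50_000, 6

# Stationary AR(1) sequence: corr(x_s, x_t) = rho^|s-t|
e = rng.standard_normal(n + p)
x = np.empty(n + p)
x[0] = e[0]
for t in range(1, n + p):
    x[t] = rho * x[t - 1] + np.sqrt(1 - rho**2) * e[t]

# Design whose columns are consecutive lags of the sequence
X = np.lib.stride_tricks.sliding_window_view(x, p)[:n]
G_dense = np.corrcoef(X, rowvar=False)   # entries ~ rho^|i-j|: all nonzero

# First-order linear filter: the filtered columns are (scaled) innovations
Xf = X[:, 1:] - rho * X[:, :-1]
G_sparse = np.cov(Xf, rowvar=False)      # approximately diagonal
```

After filtering, the induced covariance graph is (nearly) edgeless, which is the structure CASE exploits to break the screening problem into small subproblems.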
DEFF Research Database (Denmark)
Franzosi, Diogo Buarque; Frandsen, Mads T.; Shoemaker, Ian M.
2016-01-01
flavor structures. Monojet data alone can be used to infer the mass of the "missing particle" from the shape of the missing energy distribution. In particular, 13 TeV LHC data will have sensitivity to DM masses greater than ~1 TeV. In addition to the monojet channel, NSI can be probed in multi......Missing energy signals such as monojets are a possible signature of Dark Matter (DM) at colliders. However, neutrino interactions beyond the Standard Model may also produce missing energy signals. In order to conclude that new "missing particles" are observed the hypothesis of BSM neutrino......-lepton searches which we find to yield stronger limits at heavy mediator masses. The sensitivity offered by these multi-lepton channels provide a method to reject or confirm the DM hypothesis in missing energy searches....
Non-Critical Covariant Superstrings
Grassi, P A
2005-01-01
We construct a covariant description of non-critical superstrings in even dimensions. We construct explicitly supersymmetric hybrid type variables in a linear dilaton background, and study an underlying N=2 twisted superconformal algebra structure. We find similarities between non-critical superstrings in 2n+2 dimensions and critical superstrings compactified on CY_(4-n) manifolds. We study the spectrum of the non-critical strings, and in particular the Ramond-Ramond massless fields. We use the supersymmetric variables to construct the non-critical superstrings sigma-model action in curved target space backgrounds with coupling to the Ramond-Ramond fields. We consider as an example non-critical type IIA strings on AdS_2 background with Ramond-Ramond 2-form flux.
Broughton, John
2008-01-01
Want to be part of the largest group-writing project in human history? Learn how to contribute to Wikipedia, the user-generated online reference for the 21st century. Considered more popular than eBay, Microsoft.com, and Amazon.com, Wikipedia servers respond to approximately 30,000 requests per second, or about 2.5 billion per day. It's become the first point of reference for people the world over who need a fact fast.If you want to jump on board and add to the content, Wikipedia: The Missing Manual is your first-class ticket. Wikipedia has more than 9 million entries in 250 languages, over 2
Test sample handling apparatus
International Nuclear Information System (INIS)
1981-01-01
A test sample handling apparatus using automatic scintillation counting for gamma detection, for use in such fields as radioimmunoassay, is described. The apparatus automatically and continuously counts large numbers of samples rapidly and efficiently by the simultaneous counting of two samples. By means of sequential ordering of non-sequential counting data, it is possible to obtain precisely ordered data while utilizing sample carrier holders having a minimum length. (U.K.)
Handling and Transport Problems
Energy Technology Data Exchange (ETDEWEB)
Pomarola, J. [Head of Technical Section, Atomic Energy Commission, Saclay (France); Savouyaud, J. [Head of Electro-Mechanical Sub-Division, Atomic Energy Commission, Saclay (France)
1960-07-01
Arrangements for special or dangerous transport operations by road arising out of the activities of the Atomic Energy Commission are made by the Works and Installations Division which acts in concert with the Monitoring and Protection Division (MPD) whenever radioactive substances or appliances are involved. In view of the risk of irradiation and contamination entailed in handling and transporting radioactive substances, including waste, a specialized transport and storage team has been formed as a complement to the emergency and decontamination teams.
International Nuclear Information System (INIS)
Parazin, R.J.
1995-01-01
This study presents estimates of the solid radioactive waste quantities that will be generated in the Separations, Low-Level Waste Vitrification and High-Level Waste Vitrification facilities, collectively called the Tank Waste Remediation System Treatment Complex, over the life of these facilities. This study then considers previous estimates from other 200 Area generators and compares alternative methods of handling (segregation, packaging, assaying, shipping, etc.)
International Nuclear Information System (INIS)
Sanhueza Mir, Azucena
1998-01-01
Based on the characteristics and quantities of the different types of radioactive waste produced in the country, this paper presents achievements in infrastructure and approaches to solving problems related to radioactive waste handling and management. The objectives of maintaining facilities and capacities for controlling, processing and storing radioactive waste in conditioned form are attained within a legal framework defined to ensure the safety of people and the environment. (au)
Renal phosphate handling: Physiology
Directory of Open Access Journals (Sweden)
Narayan Prasad
2013-01-01
Phosphorus is a common anion. It plays an important role in energy generation. Renal phosphate handling is regulated by three organs, parathyroid, kidney and bone, through feedback loops. These counter-regulatory loops also regulate intestinal absorption and thus maintain the serum phosphorus concentration in the physiologic range. Parathyroid hormone, vitamin D, fibroblast growth factor 23 (FGF23) and the klotho coreceptor are the key regulators of phosphorus balance in the body.
International Nuclear Information System (INIS)
1991-01-01
The United States Department of Energy, Oak Ridge Field Office, and Martin Marietta Energy Systems, Inc., are co-sponsoring this Second International Conference on Uranium Hexafluoride Handling. The conference is offered as a forum for the exchange of information and concepts regarding the technical and regulatory issues and the safety aspects which relate to the handling of uranium hexafluoride. Through the papers presented here, we attempt not only to share technological advances and lessons learned, but also to demonstrate that we are concerned about the health and safety of our workers and the public, and are good stewards of the environment in which we all work and live. These proceedings are a compilation of the work of many experts in that phase of world-wide industry which comprises the nuclear fuel cycle. Their experience spans the entire range over which uranium hexafluoride is involved in the fuel cycle, from the production of UF₆ from the naturally-occurring oxide to its re-conversion to oxide for reactor fuels. The papers furnish insights into the chemical, physical, and nuclear properties of uranium hexafluoride as they influence its transport, storage, and the design and operation of plant-scale facilities for production, processing, and conversion to oxide. The papers demonstrate, in an industry often cited for its excellent safety record, continuing efforts to further improve safety in all areas of handling uranium hexafluoride.
Uranium hexafluoride handling. Proceedings
Energy Technology Data Exchange (ETDEWEB)
1991-12-31
The United States Department of Energy, Oak Ridge Field Office, and Martin Marietta Energy Systems, Inc., are co-sponsoring this Second International Conference on Uranium Hexafluoride Handling. The conference is offered as a forum for the exchange of information and concepts regarding the technical and regulatory issues and the safety aspects which relate to the handling of uranium hexafluoride. Through the papers presented here, we attempt not only to share technological advances and lessons learned, but also to demonstrate that we are concerned about the health and safety of our workers and the public, and are good stewards of the environment in which we all work and live. These proceedings are a compilation of the work of many experts in that phase of world-wide industry which comprises the nuclear fuel cycle. Their experience spans the entire range over which uranium hexafluoride is involved in the fuel cycle, from the production of UF₆ from the naturally-occurring oxide to its re-conversion to oxide for reactor fuels. The papers furnish insights into the chemical, physical, and nuclear properties of uranium hexafluoride as they influence its transport, storage, and the design and operation of plant-scale facilities for production, processing, and conversion to oxide. The papers demonstrate, in an industry often cited for its excellent safety record, continuing efforts to further improve safety in all areas of handling uranium hexafluoride. Selected papers were processed separately for inclusion in the Energy Science and Technology Database.
International Nuclear Information System (INIS)
Grisham, D.L.
1981-01-01
A remote handling system is proposed for moving a torus sector of the accelerator from under the cryostat to a point where it can be handled by a crane and for the reverse process for a new sector. Equipment recommendations are presented, as well as possible alignment schemes. Some general comments about future remote-handling methods and the present capabilities of existing systems will also be included. The specific task to be addressed is the removal and replacement of a 425 to 450 ton torus sector. This requires a horizontal movement of approx. 10 m from a normal operating position to a point where its further transport can be accomplished by more conventional means (crane or floor transporter). The same horizontal movement is required for reinstallation, but a positional tolerance of 2 cm is required to allow reasonable fit-up for the vacuum seal from the radial frames to the torus sector. Since the sectors are not only heavy but rather tall and narrow, the transport system must provide a safe, stable, and repeatable method of sector movement. This limited study indicates that the LAMPF-based method of transporting torus sectors offers a proven method of moving heavy items. In addition, the present state of the art in remote equipment is adequate for FED maintenance.
International Nuclear Information System (INIS)
Medina Bermudez, Clara Ines
1999-01-01
The topic of solid residues is of great interest and concern for the authorities, institutions and community, which identify in them a true threat to human health and the environment: the aesthetic deterioration of urban centres and of the natural landscape, the proliferation of disease vectors, and effects on biodiversity. Within the wide spectrum of topics related to environmental protection, the inadequate handling of solid and hazardous residues occupies an important place in the definition of environmentally sustainable policies and practices. Industrial development and population growth have caused a continuous increase in the production of solid residues; likewise, their composition grows more heterogeneous by the day. The basis for good handling is appropriate intervention at the different stages of integrated waste management, which include separation at the source, collection, handling, use, treatment, final disposal and the institutional organization of the administration. The topic of hazardous residues generates still greater concern. These residues range from the pathogenic type generated in health-care establishments to the combustible, inflammable, explosive, radioactive, volatile, corrosive, reactive or toxic types associated with numerous industrial processes common in our developing countries.
ISSUES IN NEUTRON CROSS SECTION COVARIANCES
Energy Technology Data Exchange (ETDEWEB)
Mattoon, C.M.; Oblozinsky,P.
2010-04-30
We review neutron cross section covariances in both the resonance and fast neutron regions with the goal of identifying existing issues in evaluation methods and their impact on covariances. We also outline ideas for suitable covariance quality assurance procedures. We show that the topic of covariance data remains controversial, the evaluation methodologies are not fully established, and covariances produced by different approaches have unacceptable spread. The main controversy lies between the very low uncertainties generated by rigorous evaluation methods and the much larger uncertainties based on simple estimates from experimental data. Since the evaluators tend to trust the former, while the users tend to trust the latter, this controversy has considerable practical implications. Dedicated effort is needed to arrive at covariance evaluation methods that would resolve this issue and produce results accepted internationally both by evaluators and users.
Covariant diagrams for one-loop matching
Energy Technology Data Exchange (ETDEWEB)
Zhang, Zhengkang [Michigan Center for Theoretical Physics (MCTP), University of Michigan,450 Church Street, Ann Arbor, MI 48109 (United States); Deutsches Elektronen-Synchrotron (DESY),Notkestraße 85, 22607 Hamburg (Germany)
2017-05-30
We present a diagrammatic formulation of recently-revived covariant functional approaches to one-loop matching from an ultraviolet (UV) theory to a low-energy effective field theory. Various terms following from a covariant derivative expansion (CDE) are represented by diagrams which, unlike conventional Feynman diagrams, involve gauge-covariant quantities and are thus dubbed “covariant diagrams.” The use of covariant diagrams helps organize and simplify one-loop matching calculations, which we illustrate with examples. Of particular interest is the derivation of UV model-independent universal results, which reduce matching calculations of specific UV models to applications of master formulas. We show how such derivation can be done in a more concise manner than the previous literature, and discuss how additional structures that are not directly captured by existing universal results, including mixed heavy-light loops, open covariant derivatives, and mixed statistics, can be easily accounted for.
Covariant diagrams for one-loop matching
International Nuclear Information System (INIS)
Zhang, Zhengkang
2017-01-01
We present a diagrammatic formulation of recently-revived covariant functional approaches to one-loop matching from an ultraviolet (UV) theory to a low-energy effective field theory. Various terms following from a covariant derivative expansion (CDE) are represented by diagrams which, unlike conventional Feynman diagrams, involve gauge-covariant quantities and are thus dubbed “covariant diagrams.” The use of covariant diagrams helps organize and simplify one-loop matching calculations, which we illustrate with examples. Of particular interest is the derivation of UV model-independent universal results, which reduce matching calculations of specific UV models to applications of master formulas. We show how such derivation can be done in a more concise manner than the previous literature, and discuss how additional structures that are not directly captured by existing universal results, including mixed heavy-light loops, open covariant derivatives, and mixed statistics, can be easily accounted for.
Improvement of covariance data for fast reactors
International Nuclear Information System (INIS)
Shibata, Keiichi; Hasegawa, Akira
2000-02-01
We estimated covariances of the JENDL-3.2 data on the nuclides and reactions needed to analyze fast-reactor cores over the past three years, and produced covariance files. The present work was undertaken to re-examine the covariance files and to make some improvements. The covariances improved are those for the inelastic scattering cross section of 16O, the total cross section of 23Na, the fission cross section of 235U, the capture cross section of 238U, and the resolved resonance parameters for 238U. Moreover, the covariances of 233U data were newly estimated in the present work. The covariances obtained were compiled in the ENDF-6 format. (author)
Sample-Based Extreme Learning Machine with Missing Data
Directory of Open Access Journals (Sweden)
Hang Gao
2015-01-01
Full Text Available Extreme learning machine (ELM) has been extensively studied in the machine learning community during the last few decades due to its high efficiency and its unification of classification, regression, and so forth. Despite these merits, existing ELM algorithms cannot efficiently handle missing data, which are relatively common in practical applications. The problem of missing data is commonly handled by imputation (i.e., replacing missing values with substituted values according to available information). However, imputation methods are not always effective. In this paper, we propose a sample-based learning framework to address this issue. Based on this framework, we develop two sample-based ELM algorithms for classification and regression, respectively. Comprehensive experiments have been conducted on synthetic data sets, UCI benchmark data sets, and a real-world fingerprint image data set. As indicated, without introducing extra computational complexity, the proposed algorithms achieve more accurate and stable learning than other state-of-the-art ones, especially at higher missing ratios.
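The imputation baseline that the sample-based framework is contrasted against can be sketched as simple column-mean imputation; this is an illustrative toy (the sample-based ELM itself is not reproduced here, and the data are hypothetical):

```python
# Baseline the paper argues against: replace each missing feature (None)
# with the mean of the observed values in its column, then train as usual.
def mean_impute(rows):
    """Column-mean imputation over a list of feature rows."""
    n_cols = len(rows[0])
    means = []
    for j in range(n_cols):
        observed = [r[j] for r in rows if r[j] is not None]
        means.append(sum(observed) / len(observed))
    return [[means[j] if r[j] is None else r[j] for j in range(n_cols)]
            for r in rows]

data = [[1.0, 2.0], [3.0, None], [None, 6.0]]
print(mean_impute(data))  # [[1.0, 2.0], [3.0, 4.0], [2.0, 6.0]]
```

The sample-based alternative instead adapts the learner to each sample's observed features, avoiding the distortion such substituted values can introduce.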
ANL Critical Assembly Covariance Matrix Generation - Addendum
Energy Technology Data Exchange (ETDEWEB)
McKnight, Richard D. [Argonne National Lab. (ANL), Argonne, IL (United States); Grimm, Karl N. [Argonne National Lab. (ANL), Argonne, IL (United States)
2014-01-13
In March 2012, a report was issued on covariance matrices for Argonne National Laboratory (ANL) critical experiments. That report detailed the theory behind the calculation of covariance matrices and the methodology used to determine the matrices for a set of 33 ANL experimental set-ups. Since that time, three new experiments have been evaluated and approved. This report essentially updates the previous report by adding in these new experiments to the preceding covariance matrix structure.
Neutron spectrum adjustment. The role of covariances
International Nuclear Information System (INIS)
Remec, I.
1992-01-01
The neutron spectrum adjustment method is briefly reviewed. A practical example dealing with the determination of power reactor pressure vessel exposure rates is analysed. The adjusted exposure rates are found to be only slightly affected by the covariances of the measured reaction rates and activation cross sections, while the multigroup spectra covariances are found to be important. Approximate spectra covariance matrices, as suggested in ASTM E944-89, were found useful, but care is advised if they are applied in adjustments of spectra at locations without dosimetry. (author)
Missing value imputation for epistatic MAPs
LENUS (Irish Health Repository)
Ryan, Colm
2010-04-20
Abstract Background Epistatic miniarray profiling (E-MAPs) is a high-throughput approach capable of quantifying aggravating or alleviating genetic interactions between gene pairs. The datasets resulting from E-MAP experiments typically take the form of a symmetric pairwise matrix of interaction scores. These datasets have a significant number of missing values - up to 35% - that can reduce the effectiveness of some data analysis techniques and prevent the use of others. An effective method for imputing interactions would therefore increase the types of possible analysis, as well as increase the potential to identify novel functional interactions between gene pairs. Several methods have been developed to handle missing values in microarray data, but it is unclear how applicable these methods are to E-MAP data because of their pairwise nature and the significantly larger number of missing values. Here we evaluate four alternative imputation strategies, three local (Nearest neighbor-based) and one global (PCA-based), that have been modified to work with symmetric pairwise data. Results We identify different categories for the missing data based on their underlying cause, and show that values from the largest category can be imputed effectively. We compare local and global imputation approaches across a variety of distinct E-MAP datasets, showing that both are competitive and preferable to filling in with zeros. In addition we show that these methods are effective in an E-MAP from a different species, suggesting that pairwise imputation techniques will be increasingly useful as analogous epistasis mapping techniques are developed in different species. We show that strongly alleviating interactions are significantly more difficult to predict than strongly aggravating interactions. Finally we show that imputed interactions, generated using nearest neighbor methods, are enriched for annotations in the same manner as measured interactions. Therefore our method potentially
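A nearest-neighbour imputation adapted to symmetric pairwise data, of the kind evaluated above, can be sketched as follows. This is a simplified illustration with hypothetical names and an unweighted average over the k most similar rows, not the paper's exact weighting scheme:

```python
# Impute missing entries (None) of a symmetric interaction matrix M from
# the k rows most similar to row i, preserving symmetry M[i][j] == M[j][i].
def knn_impute_symmetric(M, k=2):
    n = len(M)

    def dist(a, b):
        # mean squared difference over columns observed in both rows
        shared = [(M[a][j], M[b][j]) for j in range(n)
                  if M[a][j] is not None and M[b][j] is not None]
        return sum((x - y) ** 2 for x, y in shared) / len(shared)

    out = [row[:] for row in M]
    for i in range(n):
        for j in range(i + 1, n):
            if M[i][j] is None:
                # rows most similar to row i that did observe column j
                cands = sorted((dist(i, a), M[a][j]) for a in range(n)
                               if a != i and M[a][j] is not None)[:k]
                val = sum(v for _, v in cands) / len(cands)
                out[i][j] = out[j][i] = val
    return out

M = [[0.0, None, 2.0],
     [None, 0.0, 2.0],
     [2.0, 2.0, 0.0]]
print(knn_impute_symmetric(M)[0][1])  # 1.0
```

A real E-MAP matrix would be much larger and sparser; distance weighting and handling of rows with no shared observations would then matter.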
Modifications of Sp(2) covariant superfield quantization
Energy Technology Data Exchange (ETDEWEB)
Gitman, D.M.; Moshin, P.Yu
2003-12-04
We propose a modification of the Sp(2) covariant superfield quantization to realize a superalgebra of generating operators isomorphic to the massless limit of the corresponding superalgebra of the osp(1,2) covariant formalism. The modified scheme ensures the compatibility of the superalgebra of generating operators with extended BRST symmetry without imposing restrictions eliminating superfield components from the quantum action. The formalism coincides with the Sp(2) covariant superfield scheme and with the massless limit of the osp(1,2) covariant quantization in particular cases of gauge-fixing and solutions of the quantum master equations.
Competing risks and time-dependent covariates
DEFF Research Database (Denmark)
Cortese, Giuliana; Andersen, Per K
2010-01-01
Time-dependent covariates are frequently encountered in regression analysis for event history data and competing risks. They are often essential predictors, which cannot be substituted by time-fixed covariates. This study briefly recalls the different types of time-dependent covariates, as classified by Kalbfleisch and Prentice [The Statistical Analysis of Failure Time Data, Wiley, New York, 2002], with the intent of clarifying their role and emphasizing the limitations in standard survival models and in the competing risks setting. If random (internal) time-dependent covariates...
Activities of covariance utilization working group
International Nuclear Information System (INIS)
Tsujimoto, Kazufumi
2013-01-01
During the past decade, there has been an interest in the calculational uncertainties induced by nuclear data uncertainties in the neutronics design of advanced nuclear systems. Covariance nuclear data are absolutely essential for uncertainty analysis. In the latest version of JENDL, JENDL-4.0, the covariance data for many nuclides, especially actinide nuclides, were substantially enhanced. The growing interest in uncertainty analysis and covariance data has led to the organisation of the working group for covariance utilization under the JENDL committee. (author)
Missed Opportunities for Hepatitis A Vaccination, National Immunization Survey-Child, 2013.
Casillas, Shannon M; Bednarczyk, Robert A
2017-08-01
To quantify the number of missed opportunities for vaccination with hepatitis A vaccine in children and assess the association of missed opportunities for hepatitis A vaccination with covariates of interest. Weighted data from the 2013 National Immunization Survey of US children aged 19-35 months were used. Analysis was restricted to children with provider-verified vaccination history (n = 13 460). Missed opportunities for vaccination were quantified by determining the number of medical visits a child made when another vaccine was administered during eligibility for hepatitis A vaccine, but hepatitis A vaccine was not administered. Cross-sectional bivariate and multivariate polytomous logistic regression were used to assess the association of missed opportunities for vaccination with child and maternal demographic, socioeconomic, and geographic covariates. In 2013, 85% of children in our study population had initiated the hepatitis A vaccine series, and 60% received 2 or more doses. Children who received zero doses of hepatitis A vaccine had an average of 1.77 missed opportunities for vaccination compared with 0.43 missed opportunities for vaccination in those receiving 2 doses. Children with 2 or more missed opportunities for vaccination initiated the vaccine series 6 months later than children without missed opportunities. In the fully adjusted multivariate model, children who were younger, had ever received WIC benefits, or lived in a state with childcare entry mandates were at reduced odds of 2 or more missed opportunities for vaccination; children living in the Northeast census region were at increased odds. Missed opportunities for vaccination likely contribute to the poor coverage for hepatitis A vaccination in children; it is important to understand why children are not receiving the vaccine when eligible. Copyright © 2017 Elsevier Inc. All rights reserved.
Bayesian estimation of covariance matrices: Application to market risk management at EDF
International Nuclear Information System (INIS)
Jandrzejewski-Bouriga, M.
2012-01-01
In this thesis, we develop new methods of regularized covariance matrix estimation in a Bayesian setting. The regularization methodology employed is first related to shrinkage. We investigate a new Bayesian model of the covariance matrix, based on a hierarchical inverse-Wishart distribution, and then derive different estimators under standard loss functions. Comparisons between shrunk and empirical estimators are performed in terms of frequentist performance under different losses. This allows us to highlight the critical importance of the definition of the cost function and to show the persistent effect of the shrinkage-type prior on inference. Next, we consider the problem of covariance matrix estimation in Gaussian graphical models. While this issue is well treated in the decomposable case, it is not when non-decomposable graphs are also considered. We then describe a Bayesian and operational methodology to carry out the estimation of the covariance matrix of Gaussian graphical models, decomposable or not. This procedure is based on a new and objective method of graphical-model selection, combined with a constrained and regularized estimation of the covariance matrix of the chosen model. The procedures studied effectively manage missing data. These estimation techniques were applied to calculate the covariance matrices involved in market risk management for portfolios of EDF (Electricité de France), in particular for problems of calculating Value-at-Risk or in Asset Liability Management. (author)
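The shrinkage idea behind such regularized estimators can be illustrated by linear shrinkage of a sample covariance toward a scaled identity target; the fixed weight `lam` below is illustrative only (the thesis derives the amount of shrinkage from the prior, and Ledoit-Wolf-type estimators choose it from data):

```python
# Shrink a sample covariance S toward mu*I, where mu is the average
# variance: S_shrunk = (1 - lam) * S + lam * mu * I. This pulls a
# singular or noisy S toward a well-conditioned target.
def shrink_covariance(S, lam):
    p = len(S)
    mu = sum(S[i][i] for i in range(p)) / p  # average of the diagonal
    return [[(1 - lam) * S[i][j] + (lam * mu if i == j else 0.0)
             for j in range(p)] for i in range(p)]

S = [[4.0, 2.0], [2.0, 1.0]]          # singular sample covariance
print(shrink_covariance(S, 0.5))      # [[3.25, 1.0], [1.0, 1.75]]
```

Note that the shrunk matrix is invertible even though `S` is not, which is exactly what downstream uses such as Value-at-Risk calculations require.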
Preference Handling for Artificial Intelligence
Goldsmith, Judy; University of Kentucky; Junker, Ulrich; ILOG
2009-01-01
This article explains the benefits of preferences for AI systems and draws a picture of current AI research on preference handling. It thus provides an introduction to the topics covered by this special issue on preference handling.
International Nuclear Information System (INIS)
Smith, J.C.; Manuel, R.J.; McAllister, J.E.
1981-01-01
A process for handling the problems of crud formation during the solvent extraction of wet-process phosphoric acid, e.g. for uranium and rare earth removal, is described. It involves clarification of the crud-solvent mixture, settling, water washing of the residue, and treatment of the crud with a caustic wash to remove and regenerate the solvent. The process is applicable to synergistic mixtures of dialkylphosphoric acids and trialkylphosphine oxides dissolved in inert diluents, and more preferably to the reductive stripping technique. (U.K.)
International Nuclear Information System (INIS)
Schwarz, N.; Komurka, M.
1983-03-01
As a result of fast breeder development, extensive experience with sodium technology is available worldwide. Owing to the extension of the research program to topping cycles with potassium as the working medium, test facilities using potassium have been designed and operated in the Institute of Reactor Safety. The different chemical properties of sodium and potassium give rise to new safety concepts and operating procedures. The handling problems of potassium are described in the light of theoretical properties and our own experience. Selected literature on the main safety and operating problems completes this report. (Author)
Energy Technology Data Exchange (ETDEWEB)
Bradbury, S; Homleid, D. [Air Control Science Inc. (United States)
2004-04-01
Within the journal's 'Focus on O & M' section is a short article describing modifications to coal handling systems at Eielson Air Force Base near Fairbanks, Alaska, which is supplied with power and heat from a subbituminous coal-fired central plant. Measures to reduce dust include the addition of an enclosed recirculation chamber at each transfer point and new chute designs to reduce coal velocity, turbulence, and induced air. The modifications were developed by Air Control Science (ACS). 7 figs., 1 tab.
Missing money and missing markets: Reliability, capacity auctions and interconnectors
International Nuclear Information System (INIS)
Newbery, David
2016-01-01
In the energy trilemma of reliability, sustainability and affordability, politicians treat reliability as over-riding. The EU assumes the energy-only Target Electricity Model will deliver reliability but the UK argues that a capacity remuneration mechanism is needed. This paper argues that capacity auctions tend to over-procure capacity, exacerbating the missing money problem they were designed to address. The bias is further exacerbated by failing to address some of the missing market problems also neglected in the debate. It examines the case for, criticisms of, and outcome of the first GB capacity auction and problems of trading between different capacity markets. - Highlights: •Energy-only markets can work if they avoid missing money and missing market problems. •Policy makers over-estimate the cost of so-called “loss of load events”. •Policy makers tend to over-procure capacity, exacerbating the missing money problem. •Rectifying missing market problems simplifies trade between different capacity markets. •Addressing missing market problems makes under-procurement cheaper than over-procurement.
General covariance and quantum theory
International Nuclear Information System (INIS)
Mashhoon, B.
1986-01-01
The extension of the principle of relativity to general coordinate systems is based on the hypothesis that an accelerated observer is locally equivalent to a hypothetical inertial observer with the same velocity as the noninertial observer. This hypothesis of locality is expected to be valid for classical particle phenomena as well as for classical wave phenomena but only in the short-wavelength approximation. The generally covariant theory is therefore expected to be in conflict with the quantum theory which is based on wave-particle duality. This is explicitly demonstrated for the frequency of electromagnetic radiation measured by a uniformly rotating observer. The standard Doppler formula is shown to be valid only in the geometric optics approximation. A new definition for the frequency is proposed, and the resulting formula for the frequency measured by the rotating observer is shown to be consistent with expectations based on the classical theory of electrons. A tentative quantum theory is developed on the basis of the generalization of the Bohr frequency condition to include accelerated observers. The description of the causal sequence of events is assumed to be independent of the motion of the observer. Furthermore, the quantum hypothesis is supposed to be valid for all observers. The implications of this theory are critically examined. The new formula for frequency, which is still based on the hypothesis of locality, leads to the observation of negative energy quanta by the rotating observer and is therefore in conflict with the quantum theory
Semiparametric approach for non-monotone missing covariates in a parametric regression model
Sinha, Samiran; Saha, Krishna K.; Wang, Suojin
2014-01-01
mechanism helps to nullify (or reduce) the problems due to non-identifiability that result from the non-ignorable missingness mechanism. The asymptotic properties of the proposed estimator are derived. Finite sample performance is assessed through simulation
CERN Sells its Electronic Document Handling System
2001-01-01
The EDH team. Left to right: Derek Mathieson, Rotislav Titov, Per Gunnar Jonsson, Ivica Dobrovicova, James Purvis. Missing from the photo is Jurgen De Jonghe. In a 1 MCHF deal announced this week, the British company Transacsys bought the rights to CERN's Electronic Document Handling (EDH) system, which has revolutionised the Laboratory's administrative procedures over the last decade. Under the deal, CERN and Transacsys will collaborate on developing EDH over the coming 12 months. CERN will provide manpower and expertise and will retain the rights to use EDH, which will also be available freely to other particle physics laboratories. This development is an excellent example of the active technology transfer policy CERN is currently pursuing. The negotiations were carried out through a fruitful collaboration between AS and ETT Divisions, following the recommendations of the Technology Advisory Board, and with the help of SPL Division. EDH was born in 1991 when John Ferguson and Achille Petrilli of AS Divisi...
Parameters of the covariance function of galaxies
International Nuclear Information System (INIS)
Fesenko, B.I.; Onuchina, E.V.
1988-01-01
The two-point angular covariance functions for two samples of galaxies are considered using quick methods of analysis. It is concluded that in the previous investigations the amplitude of the covariance function in the Lick counts was overestimated and the rate of decrease of the function underestimated
Covariance Function for Nearshore Wave Assimilation Systems
2018-01-30
...which is applicable for any spectral wave model. The four-dimensional variational (4DVar) assimilation methods are based on the mathematical ... covariance can be modeled by a parameterized Gaussian function; for nearshore wave assimilation applications, the covariance function depends primarily on...
(Figure captions: spectral action density; statistical analysis of the wave-field properties.)
Treatment Effects with Many Covariates and Heteroskedasticity
DEFF Research Database (Denmark)
Cattaneo, Matias D.; Jansson, Michael; Newey, Whitney K.
The linear regression model is widely used in empirical work in Economics. Researchers often include many covariates in their linear model specification in an attempt to control for confounders. We give inference methods that allow for many covariates and heteroskedasticity. Our results...
Covariance and sensitivity data generation at ORNL
International Nuclear Information System (INIS)
Leal, L. C.; Derrien, H.; Larson, N. M.; Alpan, A.
2005-01-01
Covariance data are required to assess uncertainties in design parameters in several nuclear applications. The error estimation of calculated quantities relies on the nuclear data uncertainty information available in the basic nuclear data libraries, such as the US Evaluated Nuclear Data Library, ENDF/B. The uncertainty files in the ENDF/B library are obtained from the analysis of experimental data and are stored as variance and covariance data. In this paper we address the generation of covariance data in the resonance region with the computer code SAMMY. SAMMY is used in the evaluation of experimental data in the resolved and unresolved resonance energy regions. The data fitting of cross sections is based on the generalised least-squares formalism (Bayesian theory) together with the resonance formalism described by R-matrix theory. Two approaches are used in SAMMY for the generation of resonance parameter covariance data. In the evaluation process SAMMY generates a set of resonance parameters that fit the data, and it provides the resonance parameter covariances. For resonance parameter evaluations where no resonance parameter covariance data are available, the alternative is an approach called 'retroactive' resonance parameter covariance generation. In this paper, we describe the application of the retroactive covariance generation approach to the gadolinium isotopes. (authors)
Position Error Covariance Matrix Validation and Correction
Frisbee, Joe, Jr.
2016-01-01
In order to calculate operationally accurate collision probabilities, the position error covariance matrices predicted at times of closest approach must be sufficiently accurate representations of the position uncertainties. This presentation will discuss why the Gaussian distribution is a reasonable expectation for the position uncertainty and how this assumed distribution type is used in the validation and correction of position error covariance matrices.
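Under the Gaussian assumption discussed above, a collision probability can be sketched by Monte Carlo integration of a two-dimensional Gaussian position error over the hard-body disk. The diagonal covariance and all numbers below are illustrative assumptions, not values from the presentation:

```python
import random

# Probability that the true relative position (predicted miss vector plus
# Gaussian position error) falls inside a hard-body radius in the
# encounter plane. Diagonal covariance is assumed for simplicity.
def collision_probability(miss, sigma, radius, n=200_000, seed=1):
    rng = random.Random(seed)
    hits = 0
    for _ in range(n):
        x = miss[0] + rng.gauss(0.0, sigma[0])
        y = miss[1] + rng.gauss(0.0, sigma[1])
        hits += (x * x + y * y <= radius * radius)
    return hits / n

p = collision_probability(miss=(0.0, 0.0), sigma=(1.0, 1.0), radius=1.0)
# analytically, for a centered unit-variance Gaussian: 1 - exp(-0.5) ≈ 0.393
```

This makes the presentation's point concrete: the computed probability is only as good as the covariance fed into it, so validating and correcting the predicted covariance matrix directly controls the answer.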
Quality Quantification of Evaluated Cross Section Covariances
International Nuclear Information System (INIS)
Varet, S.; Dossantos-Uzarralde, P.; Vayatis, N.
2015-01-01
Presently, several methods are used to estimate the covariance matrix of evaluated nuclear cross sections. Because the resulting covariance matrices can differ according to the method used and the assumptions of the method, we propose a general and objective approach to quantify the quality of the covariance estimation for evaluated cross sections. The first step consists in defining an objective criterion. The second step is computation of the criterion. In this paper the Kullback-Leibler distance is proposed for the quality quantification of a covariance matrix estimation and its inverse. It is based on the distance to the true covariance matrix. A method based on the bootstrap is presented for the estimation of this criterion, which can be applied with most methods for covariance matrix estimation and without knowledge of the true covariance matrix. The full approach is illustrated on the 85Rb nucleus evaluations, and the results are then used for a discussion on scoring and Monte Carlo approaches for covariance matrix estimation of the cross section evaluations.
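For zero-mean Gaussians the Kullback-Leibler distance between two covariance matrices has a closed form; a minimal sketch restricted to diagonal covariances keeps the linear algebra trivial (the paper applies the same quantity to full matrices):

```python
import math

# KL(N(0, diag(var_p)) || N(0, diag(var_q))) for diagonal covariances:
# 0.5 * sum_i (var_p[i]/var_q[i] - 1 + ln(var_q[i]/var_p[i]))
def kl_gaussian_diag(var_p, var_q):
    return 0.5 * sum(vp / vq - 1.0 + math.log(vq / vp)
                     for vp, vq in zip(var_p, var_q))

print(kl_gaussian_diag([1.0, 1.0], [1.0, 1.0]))  # 0.0 for identical matrices
print(kl_gaussian_diag([2.0], [1.0]))            # > 0 once variances differ
```

The distance is zero exactly when the two covariance matrices agree, which is what makes it usable as an objective quality criterion for an estimate against a reference.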
On the algebraic structure of covariant anomalies and covariant Schwinger terms
International Nuclear Information System (INIS)
Kelnhofer, G.
1992-01-01
A cohomological characterization of covariant anomalies and covariant Schwinger terms in an anomalous Yang-Mills theory is formulated and will be geometrically interpreted. The BRS and anti-BRS transformations are defined as purely differential geometric objects. Finally, the covariant descent equations are formulated within this context. (author)
Covariant diagrams for one-loop matching
International Nuclear Information System (INIS)
Zhang, Zhengkang
2016-10-01
We present a diagrammatic formulation of recently-revived covariant functional approaches to one-loop matching from an ultraviolet (UV) theory to a low-energy effective field theory. Various terms following from a covariant derivative expansion (CDE) are represented by diagrams which, unlike conventional Feynman diagrams, involve gauge-covariant quantities and are thus dubbed "covariant diagrams." The use of covariant diagrams helps organize and simplify one-loop matching calculations, which we illustrate with examples. Of particular interest is the derivation of UV model-independent universal results, which reduce matching calculations of specific UV models to applications of master formulas. We show how such derivation can be done in a more concise manner than the previous literature, and discuss how additional structures that are not directly captured by existing universal results, including mixed heavy-light loops, open covariant derivatives, and mixed statistics, can be easily accounted for.
Covariant diagrams for one-loop matching
Energy Technology Data Exchange (ETDEWEB)
Zhang, Zhengkang [Michigan Univ., Ann Arbor, MI (United States). Michigan Center for Theoretical Physics; Deutsches Elektronen-Synchrotron (DESY), Hamburg (Germany)
2016-10-15
We present a diagrammatic formulation of recently-revived covariant functional approaches to one-loop matching from an ultraviolet (UV) theory to a low-energy effective field theory. Various terms following from a covariant derivative expansion (CDE) are represented by diagrams which, unlike conventional Feynman diagrams, involve gauge-covariant quantities and are thus dubbed "covariant diagrams." The use of covariant diagrams helps organize and simplify one-loop matching calculations, which we illustrate with examples. Of particular interest is the derivation of UV model-independent universal results, which reduce matching calculations of specific UV models to applications of master formulas. We show how such derivation can be done in a more concise manner than the previous literature, and discuss how additional structures that are not directly captured by existing universal results, including mixed heavy-light loops, open covariant derivatives, and mixed statistics, can be easily accounted for.
On estimating cosmology-dependent covariance matrices
International Nuclear Information System (INIS)
Morrison, Christopher B.; Schneider, Michael D.
2013-01-01
We describe a statistical model to estimate the covariance matrix of matter tracer two-point correlation functions with cosmological simulations. Assuming a fixed number of cosmological simulation runs, we describe how to build a 'statistical emulator' of the two-point function covariance over a specified range of input cosmological parameters. Because the simulation runs with different cosmological models help to constrain the form of the covariance, we predict that the cosmology-dependent covariance may be estimated with a comparable number of simulations as would be needed to estimate the covariance for fixed cosmology. Our framework is a necessary first step in planning a simulations campaign for analyzing the next generation of cosmological surveys
Covariance descriptor fusion for target detection
Cukur, Huseyin; Binol, Hamidullah; Bal, Abdullah; Yavuz, Fatih
2016-05-01
Target detection is one of the most important topics for military and civilian applications. To address such detection tasks, hyperspectral imaging sensors provide useful image data containing both spatial and spectral information. Target detection presents various challenging scenarios for hyperspectral images. To overcome these challenges, the covariance descriptor offers many advantages. The detection capability of the conventional covariance descriptor technique can be improved by fusion methods. In this paper, hyperspectral bands are clustered according to inter-band correlation. Target detection is then realized by fusing covariance descriptor results based on the band clusters. The proposed combination technique is denoted Covariance Descriptor Fusion (CDF). The efficiency of the CDF is evaluated by applying it to hyperspectral imagery to detect man-made objects. The obtained results show that the CDF presents better performance than the conventional covariance descriptor.
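A covariance descriptor for a region is the sample covariance of its per-pixel feature vectors; a minimal sketch (the feature vectors here are hypothetical, and the band-clustering and fusion steps of CDF are not reproduced):

```python
# Sample covariance of a region's per-pixel feature vectors (one row per
# pixel). The resulting d x d matrix is the region's covariance descriptor.
def covariance_descriptor(features):
    n, d = len(features), len(features[0])
    mean = [sum(f[j] for f in features) / n for j in range(d)]
    return [[sum((f[i] - mean[i]) * (f[j] - mean[j]) for f in features) / (n - 1)
             for j in range(d)] for i in range(d)]

region = [[1.0, 2.0], [3.0, 4.0], [5.0, 6.0]]  # 3 pixels, 2 features each
print(covariance_descriptor(region))  # [[4.0, 4.0], [4.0, 4.0]]
```

In the fusion setting, one such descriptor would be computed per band cluster and the per-cluster detection scores combined.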
Directory of Open Access Journals (Sweden)
Geneviève Fabre
2006-05-01
Full Text Available Considering the wide range of conversations in the autobiography, this essay will attempt to appraise the importance of these verbal exchanges in relation to the overall narrative structure of the book and to the prevalent oral tradition in Louisiana culture, as both an individual and communal expression. The variety of circumstances, the setting and staging, the interlocutors, and the complex intersection of time and place, of stories and History, will be examined; in these conversations with Miss Jane many actors participate, from the interviewer-narrator to most characters; even the reader becomes involved. Speaking, hearing, listening, keeping silent is an elaborate ritual that performs many functions; besides conveying news or rumors, it imparts information on the times and on the life of a "representative" woman whose existence - spanning a whole century - is both singular and emblematic. Most importantly, this essay will analyse the resonance of an eventful and often dramatic era on her sensibility and conversely show how her evolving sensibility informs that history and draws attention to aspects that might have passed unnoticed or been forever silenced. Jane's desire for liberty and justice is often challenged as she faces the possibilities of life or death. Conversations build up a complex, often contradictory, but compelling portrait: torn between silence and vehemence, between memories and the urge to meet the future, Jane summons body and mind to find her way through the maze of a fast-changing world; self-willed and obstinate, she claims her right to speak, to express with wit and wisdom her firm belief in the word, in the ability to express deep-seated convictions and faith and a whole array of feelings and emotions.
Del Re, A C; Maisel, Natalya C; Blodgett, Janet C; Finney, John W
2013-11-12
Intention to treat (ITT) is an analytic strategy for reducing potential bias in treatment effects arising from missing data in randomised controlled trials (RCTs). Currently, no universally accepted definition of ITT exists, although many researchers consider it to require either no attrition or a strategy to handle missing data. Using the reports of a large pool of RCTs, we examined discrepancies between the types of analyses that alcohol pharmacotherapy researchers stated they used versus those they actually used. We also examined the linkage between analytic strategy (ie, ITT or not) and how missing data on outcomes were handled (if at all), and whether data analytic and missing data strategies have changed over time. Descriptive statistics were generated for reported and actual data analytic strategy and for missing data strategy. In addition, generalised linear models determined changes over time in the use of ITT analyses and missing data strategies. 165 RCTs of pharmacotherapy for alcohol use disorders. Of the 165 studies, 74 reported using an ITT strategy. However, less than 40% of the studies actually conducted ITT according to the rigorous definition above. Whereas no change in the use of ITT analyses over time was found, censored (last follow-up completed) and imputed missing data strategies have increased over time, while analyses of data only for the sample actually followed have decreased. Discrepancies in reporting versus actually conducting ITT analyses were found in this body of RCTs. Lack of clarity regarding the missing data strategy used was common. Consensus on a definition of ITT is important for an adequate understanding of research findings. Clearer reporting standards for analyses and the handling of missing data in pharmacotherapy trials and other intervention studies are needed.
International Nuclear Information System (INIS)
Andelfinger, C.; Lackner, E.; Ulrich, M.; Weber, G.; Schilling, H.B.
1982-04-01
A conceptual design of the ZEPHYR building is described. The listed radiation data show that remote handling devices will be necessary in most areas of the building. For difficult repair and maintenance work it is intended to transfer complete units from the experimental hall to a hot cell, which provides better working conditions. The necessary crane systems and other transport means are summarized, as well as suitable commercially available manipulators and observation devices. The concept of automatic devices for cutting, welding and other operations inside the vacuum vessel, and the associated position control system, is sketched. Guidelines for the design of passive components are set up in order to facilitate remote operation. (orig.)
1992-04-01
Hunger strikes are being used increasingly and not only by those with a political point to make. Whereas in the past, hunger strikes in the United Kingdom seemed mainly to be started by terrorist prisoners for political purposes, the most recent was begun by a Tamil convicted of murder, to protest his innocence. In the later stages of his strike, before calling it off, he was looked after at the Hammersmith Hospital. So it is not only prison doctors who need to know how to handle a hunger strike. The following guidelines, adopted by the 43rd World Medical Assembly in Malta in November 1991, are therefore a timely reminder of the doctor's duties during a hunger strike.
MFTF exception handling system
International Nuclear Information System (INIS)
Nowell, D.M.; Bridgeman, G.D.
1979-01-01
In the design of large experimental control systems, a major concern is ensuring that operators are quickly alerted to emergency or other exceptional conditions and that they are provided with sufficient information to respond adequately. This paper describes how the MFTF exception handling system satisfies these requirements. Conceptually, exceptions are divided into two classes: those which affect command status by producing an abort or suspend condition, and those which fall into a softer notification category requiring report only or operator acknowledgement. Additionally, an operator may choose to accept an exception condition as operational, or to turn off monitoring for sensors determined to be malfunctioning. Control panels and displays used in operator response to exceptions are described.
International Nuclear Information System (INIS)
Tvehlov, Yu.
2000-01-01
This abstract, prepared on the basis of the new IAEA guidance on the safe handling and storage of plutonium (publication No. 9 in the Safety Reports Series), presents internationally acknowledged criteria for evaluating radiation danger and summarizes the experience accumulated in the nuclear states in the safe management of large quantities of plutonium. Data are presented on weapon-grade and civil plutonium, the degree of its danger, and the measures for ensuring its safety, including data on the radiation consequences of an accident involving 10^18 fissions. Recommendations are given that make it possible to eliminate the dangers of supercriticality, ignition and explosion; to maintain the tightness of the facility so as to exclude radioactive contamination and internal irradiation; and to provide for plutonium security and physical protection and to reduce irradiation. [ru]
Energy Technology Data Exchange (ETDEWEB)
NONE
1965-03-15
Full text: A film dealing with transport of radioactive materials by everyday means - rail, road, sea and air transport - has been made for IAEA. It illustrates in broad terms some of the simple precautions which should be followed by persons dealing with such materials during shipment. Throughout, the picture stresses the transport regulations drawn up and recommended by the Agency, and in particular the need to carry out carefully the instructions based on these regulations in order to ensure that there is no hazard to the public nor to those who handle radioactive materials in transit and storage. In straightforward language, the film addresses the porter of a goods wagon, an airline cargo clerk, a dockside crane operator, a truck driver and others who load and ship freight. It shows the various types of package used to contain different categories of radioactive substances according to the intensity of the radiation emitted. It also illustrates their robustness by a series of tests involving drops, fires, impact, crushing, etc. Clear instructions are conveyed on what to do in the event of an unlikely accident with any type of package. The film is entitled, 'The Safe Transport of Radioactive Materials', and is No. 3 in the series entitled, 'Handle with Care'. It was made for IAEA through the United Kingdom Atomic Energy Authority by the Film Producers' Guild in the United Kingdom. It is in 16 mm colour, optical sound, with a running time of 20 minutes. It is available for order at $50 either direct from IAEA or through any of its Member Governments. Prints can be supplied in English, French, Russian or Spanish. Copies are also available for adaptation for commentaries in other languages. (author)
Covariance Based Pre-Filters and Screening Criteria for Conjunction Analysis
George, E.; Chan, K.
2012-09-01
Several relationships are developed relating object size, initial covariance and range at closest approach to probability of collision. These relationships address the following questions:
- Given the objects' initial covariance and combined hard-body size, what is the maximum possible value of the probability of collision (Pc)?
- Given the objects' initial covariance, what is the maximum combined hard-body radius for which the probability of collision does not exceed the tolerance limit?
- Given the objects' initial covariance and the combined hard-body radius, what is the minimum miss distance for which the probability of collision does not exceed the tolerance limit?
- Given the objects' initial covariance and the miss distance, what is the maximum combined hard-body radius for which the probability of collision does not exceed the tolerance limit?
The first relationship above allows the elimination of object pairs from conjunction analysis (CA) on the basis of the initial covariance and hard-body sizes of the objects. The application of this pre-filter to present-day catalogs with estimated covariance results in the elimination of approximately 35% of object pairs as unable to ever conjunct with a probability of collision exceeding 1x10^-6. Because Pc is directly proportional to object size and inversely proportional to covariance size, this pre-filter will have a significantly larger impact on future catalogs, which are expected to contain a much larger fraction of small debris tracked only by a limited subset of available sensors. This relationship also provides a mathematically rigorous basis for eliminating objects from analysis entirely based on element set age or quality, a practice commonly done by rough rules of thumb today. Further, these relations can be used to determine the required geometric screening radius for all objects. This analysis reveals the screening volumes for small objects are much larger than needed, while the screening volumes for
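The first question above can be explored numerically. The sketch below integrates the 2D Gaussian relative-position density over the hard-body disk in the encounter plane; the grid quadrature and the example numbers are illustrative assumptions, not the paper's pre-filter formulas.

```python
import numpy as np

def collision_probability(miss, R, C, n=400):
    """Encounter-plane collision probability: integrate the 2D Gaussian
    relative-position density (mean = miss vector, covariance C) over
    the disk of combined hard-body radius R, by simple grid quadrature."""
    xs = np.linspace(-R, R, n)
    X, Y = np.meshgrid(xs, xs)
    inside = X**2 + Y**2 <= R**2                      # the hard-body disk
    P = np.stack([X - miss[0], Y - miss[1]], axis=-1)
    Cinv = np.linalg.inv(C)
    norm = 1.0 / (2.0 * np.pi * np.sqrt(np.linalg.det(C)))
    dens = norm * np.exp(-0.5 * np.einsum('...i,ij,...j', P, Cinv, P))
    dA = (xs[1] - xs[0]) ** 2
    return float(np.sum(dens[inside]) * dA)
```

Setting the miss vector to zero gives the maximum possible Pc for a fixed covariance and hard-body radius, which is exactly the quantity the first pre-filter needs; for isotropic covariance this maximum reduces to 1 - exp(-R^2 / (2*sigma^2)).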
Causal inference with missing exposure information: Methods and applications to an obstetric study.
Zhang, Zhiwei; Liu, Wei; Zhang, Bo; Tang, Li; Zhang, Jun
2016-10-01
Causal inference in observational studies is frequently challenged by the occurrence of missing data, in addition to confounding. Motivated by the Consortium on Safe Labor, a large observational study of obstetric labor practice and birth outcomes, this article focuses on the problem of missing exposure information in a causal analysis of observational data. This problem can be approached from different angles (i.e. missing covariates and causal inference), and useful methods can be obtained by drawing upon the available techniques and insights in both areas. In this article, we describe and compare a collection of methods based on different modeling assumptions, under standard assumptions for missing data (i.e. missing-at-random and positivity) and for causal inference with complete data (i.e. no unmeasured confounding and another positivity assumption). These methods involve three models: one for treatment assignment, one for the dependence of outcome on treatment and covariates, and one for the missing data mechanism. In general, consistent estimation of causal quantities requires correct specification of at least two of the three models, although there may be some flexibility as to which two models need to be correct. Such flexibility is afforded by doubly robust estimators adapted from the missing covariates literature and the literature on causal inference with complete data, and by a newly developed triply robust estimator that is consistent if any two of the three models are correct. The methods are applied to the Consortium on Safe Labor data and compared in a simulation study mimicking the Consortium on Safe Labor. © The Author(s) 2013.
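The abstract does not spell out the triply robust estimator, but the standard doubly robust (AIPW) estimator from the complete-data causal inference literature that it builds on can be sketched as follows; the simple logistic and linear working models are assumptions chosen for illustration.

```python
import numpy as np
from sklearn.linear_model import LinearRegression, LogisticRegression

def aipw_ate(y, a, x):
    """Doubly robust (AIPW) estimate of the average treatment effect
    E[Y(1) - Y(0)] with fully observed data: consistent if either the
    propensity model or the outcome model is correctly specified."""
    ps = LogisticRegression(max_iter=1000).fit(x, a).predict_proba(x)[:, 1]
    m1 = LinearRegression().fit(x[a == 1], y[a == 1]).predict(x)
    m0 = LinearRegression().fit(x[a == 0], y[a == 0]).predict(x)
    phi1 = m1 + a * (y - m1) / ps                 # augmented IPW, treated arm
    phi0 = m0 + (1 - a) * (y - m0) / (1 - ps)     # augmented IPW, control arm
    return float(np.mean(phi1 - phi0))
```

The triply robust extension described in the article additionally models the missing-data mechanism for the exposure, so that any two correct models out of three suffice; that layer is omitted here.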
Missing data in trauma registries: A systematic review.
Shivasabesan, Gowri; Mitra, Biswadev; O'Reilly, Gerard M
2018-03-30
Trauma registries play an integral role in trauma systems but their valid use hinges on data quality. The aim of this study was to determine, among contemporary publications using trauma registry data, the level of reporting of data completeness and the methods used to deal with missing data. A systematic review was conducted of all trauma registry-based manuscripts published from 01 January 2015 to current date (17 March 2017). Studies were identified by searching MEDLINE, EMBASE, and CINAHL using relevant subject headings and keywords. Included manuscripts were evaluated based on previously published recommendations regarding the reporting and discussion of missing data. Manuscripts were graded on their degree of characterization of such observations. In addition, the methods used to manage missing data were examined. There were 539 manuscripts that met inclusion criteria. Among these, 208 (38.6%) manuscripts did not mention data completeness and 88 (16.3%) mentioned missing data but did not quantify the extent. Only a handful (n = 26; 4.8%) quantified the 'missingness' of all variables. Most articles (n = 477; 88.5%) contained no details such as a comparison between patient characteristics in cohorts with and without missing data. Of the 331 articles which made at least some mention of data completeness, the method of managing missing data was unknown in 34 (10.3%). When method(s) to handle missing data were identified, 234 (78.8%) manuscripts used complete case analysis only, 18 (6.1%) used multiple imputation only and 34 (11.4%) used a combination of these. Most manuscripts using trauma registry data did not quantify the extent of missing data for any variables and contained minimal discussion regarding missingness. Out of the studies which identified a method of managing missing data, most used complete case analysis, a method that may bias results. The lack of standardization in the reporting and management of missing data questions the validity of
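The minimal reporting the review calls for, quantifying missingness for every variable and stating how many records a complete-case analysis would retain, can be sketched against a hypothetical registry table (the column names below are invented):

```python
import pandas as pd

def missingness_report(df):
    """Per-variable missingness counts and percentages, plus the number
    of records a complete-case analysis would retain."""
    per_var = pd.DataFrame({
        "n_missing": df.isna().sum(),
        "pct_missing": (100 * df.isna().mean()).round(1),
    })
    return per_var, int(df.dropna().shape[0])
```

Publishing such a table alongside a comparison of patient characteristics between complete and incomplete cases would address most of the reporting gaps the review identifies.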
Covariant quantizations in plane and curved spaces
International Nuclear Information System (INIS)
Assirati, J.L.M.; Gitman, D.M.
2017-01-01
We present covariant quantization rules for nonsingular finite-dimensional classical theories with flat and curved configuration spaces. We begin by constructing a family of covariant quantizations in flat spaces and Cartesian coordinates. This family is parametrized by a function ω(θ), θ ∈ (0, 1), which describes an ambiguity of the quantization. We generalize this construction to covariant quantizations of theories with flat configuration spaces but arbitrary curvilinear coordinates. Then we construct a so-called minimal family of covariant quantizations for theories with curved configuration spaces; this family is parametrized by the same function ω(θ). Finally, we describe a wider family of covariant quantizations in curved spaces, parametrized by two functions: the previous ω(θ) and an additional function Θ(x,ξ). The minimal family is the Θ = 1 part of this wider family. We study the constructed quantizations in detail, proving their consistency and covariance. As a physical application, we consider the quantization of a non-relativistic particle moving in a curved space, discussing the problem of a quantum potential. Applying the covariant quantizations in flat spaces to the old problem of constructing a quantum Hamiltonian in polar coordinates, we directly obtain the correct result. (orig.)
International Nuclear Information System (INIS)
MCDONALD, K.M.
2000-01-01
This drum-handling plan proposes a method to deal with unvented transuranic drums encountered during retrieval of drums. Finding unvented drums during retrieval activities was expected, as identified in the Transuranic (TRU) Phase I Retrieval Plan (HNF-4781). However, significant numbers of unvented drums were not expected until excavation of buried drums began. This plan represents accelerated planning for management of unvented drums. A plan is proposed that manages unvented drums differently based on three categories. The first category of drums is any that visually appear to be pressurized. These will be vented immediately, using either the Hanford Fire Department Hazardous Materials (Haz. Mat.) team, if such are encountered before the facilities' capabilities are established, or using internal capabilities, once established. To date, no drums have been retrieved that showed signs of pressurization. The second category consists of drums that contain a minimal amount of Pu isotopes. This minimal amount is typically less than 1 gram of Pu, but may be waste-stream dependent. Drums in this category are assayed to determine if they are low-level waste (LLW). LLW drums are typically disposed of without venting. Any unvented drums that assay as TRU will be staged for a future venting campaign, using appropriate safety precautions in their handling. The third category of drums is those for which records show larger amounts of Pu isotopes (typically greater than or equal to 1 gram of Pu). These are assumed to be TRU and are not assayed at this point, but are staged for a future venting campaign. Any of these drums that do not have a visible venting device will be staged awaiting venting, and will be managed under appropriate controls, including covering the drums to protect from direct solar exposure, minimizing of container movement, and placement of a barrier to restrict vehicle access. There are a number of equipment options available to perform the venting. The
New transport and handling contract
SC Department
2008-01-01
A new transport and handling contract entered into force on 1.10.2008. As with the previous contract, the user interface is the internal transport/handling request form on EDH: https://edh.cern.ch/Document/TransportRequest/ To ensure that you receive the best possible service, we invite you to complete the various fields as accurately as possible and to include a mobile telephone number on which we can reach you. You can follow the progress of your request (schedule, completion) in the EDH request routing information. We remind you that the following deadlines apply:
- 48 hours for the transport of heavy goods (up to 8 tonnes) or simple handling operations;
- 5 working days for crane operations, transport of extra-heavy goods, complex handling operations and combined transport and handling operations in the tunnel.
For all enquiries, the number to contact remains unchanged: 72202. Heavy Handling Section TS-HE-HH 72672 - 160319
Some observations on interpolating gauges and non-covariant gauges
International Nuclear Information System (INIS)
Joglekar, Satish D.
2003-01-01
We discuss the viability of using interpolating gauges to define the non-covariant gauges starting from the covariant ones. We draw attention to the need for a very careful treatment of the boundary-condition-defining term. We show that the boundary condition needed to maintain gauge invariance as the interpolating parameter θ varies depends very sensitively on the parameter variation. We do this with a gauge used by Doust. We also consider the Lagrangian path integrals in Minkowski space for gauges with a residual gauge invariance. We point out the necessity of including an ε-term (even) in the formal treatments, without which one may reach incorrect conclusions. We further point out that the ε-term can contribute to the BRST WT-identities in a non-trivial way (even as ε → 0). These contributions lead to additional constraints on Green's functions that are not normally taken into account in the BRST formalism that ignores the ε-term, and they are characteristic of the way the singularities in propagators are handled. We argue that a prescription will, in general, require renormalization if it is to be viable. (author)
Remote handling and accelerators
International Nuclear Information System (INIS)
Wilson, M.T.
1983-01-01
The high-current levels of contemporary and proposed accelerator facilities induce radiation levels into components, requiring consideration be given to maintenance techniques that reduce personnel exposure. Typical components involved include beamstops, targets, collimators, windows, and instrumentation that intercepts the direct beam. Also included are beam extraction, injection, splitting, and kicking regions, as well as purposeful spill areas where beam tails are trimmed and neutral particles are deposited. Scattered beam and secondary particles activate components all along a beamline such as vacuum pipes, magnets, and shielding. Maintenance techniques vary from hands-on to TV-viewed operation using state-of-the-art servomanipulators. Bottom- or side-entry casks are used with thimble-type target and diagnostic assemblies. Long-handled tools are operated from behind shadow shields. Swinging shield doors, unstacking block, and horizontally rolling shield roofs are all used to provide access. Common to all techniques is the need to make operations simple and to provide a means of seeing and reaching the area
TFTR tritium handling concepts
International Nuclear Information System (INIS)
Garber, H.J.
1976-01-01
The Tokamak Fusion Test Reactor, to be located on the Princeton Forrestal Campus, is expected to operate with 1 to 2.5 MA tritium--deuterium plasmas, with the pulses involving injection of 50 to 150 Ci (5 to 16 mg) of tritium. Attainment of fusion conditions is based on generation of an approximately 1 keV tritium plasma by ohmic heating and conversion to a moderately hot tritium--deuterium ion plasma by injection of a ''preheating'' deuterium neutral beam (40 to 80 keV), followed by injection of a ''reacting'' beam of high energy neutral deuterium (120 to 150 keV). Additionally, compressions accompany the beam injections. Environmental, safety and cost considerations led to the decision to limit the amount of tritium gas on-site to that required for an experiment, maintaining all other tritium in ''solidified'' form. The form of the tritium supply is as uranium tritide, while the spent tritium and other hydrogen isotopes are getter-trapped by zirconium--aluminum alloy. The issues treated include: (1) design concepts for the tritium generator and its purification, dispensing, replenishment, containment, and containment--cleanup systems; (2) features of the spent plasma trapping system, particularly the regenerable absorption cartridges, their integration into the vacuum system, and the handling of non-getterables; (3) tritium permeation through the equipment and the anticipated releases to the environment; (4) overview of the tritium related ventilation systems; and (5) design bases for the facility's tritium clean-up systems
Safe Handling of Radioisotopes
International Nuclear Information System (INIS)
1958-01-01
Under its Statute the International Atomic Energy Agency is empowered to provide for the application of standards of safety for protection against radiation to its own operations and to operations making use of assistance provided by it or with which it is otherwise directly associated. To this end authorities receiving such assistance are required to observe relevant health and safety measures prescribed by the Agency. As a first step, it has been considered an urgent task to provide users of radioisotopes with a manual of practice for the safe handling of these substances. Such a manual is presented here and represents the first of a series of manuals and codes to be issued by the Agency. It has been prepared after careful consideration of existing national and international codes of radiation safety, by a group of international experts and in consultation with other international bodies. At the same time it is recommended that the manual be taken into account as a basic reference document by Member States of the Agency in the preparation of national health and safety documents covering the use of radioisotopes.
Radioactive wastes handling facility
International Nuclear Information System (INIS)
Hirose, Emiko; Inaguma, Masahiko; Ozaki, Shigeru; Matsumoto, Kaname.
1997-01-01
The facility comprises an area where a conveyor is disposed for separating miscellaneous radioactive solid wastes such as metals, an area for operators disposed perpendicular to the transferring direction of the conveyor, an area for receiving the radioactive wastes and placing them on the conveyor, and an area for collecting the radioactive wastes transferred by the conveyor. Since an operator can work while wearing a working cloth attached to a partition wall over his ordinary clothes, working conditions are improved and the efficiency of the separating work is increased. When the conveyor area and the operator area are depressurized, cruds on the surface of the wastes are not released to the outside and the working clothes are prevented from being caught in. Since the wastes are transferred by the conveyor, the operator's moving range is reduced; removed materials fall and move along a sliding way to the area for collecting materials to be separated. Accordingly, the materials to be removed can be accumulated easily. (N.H.)
Trends in Modern Exception Handling
Directory of Open Access Journals (Sweden)
Marcin Kuta
2003-01-01
Full Text Available Exception handling is nowadays a necessary component of error-proof information systems. The paper presents an overview of techniques and models of exception handling, problems connected with them and potential solutions. The aspects of implementation of propagation mechanisms and exception handling, and their effect on semantics and general program efficiency, are also taken into account. The mechanisms presented have been adopted in modern programming languages. In the areas of design, formal methods and formal verification of program properties, exception handling mechanisms are only weakly present, which leaves a field for future research.
Students’ Covariational Reasoning in Solving Integrals’ Problems
Harini, N. V.; Fuad, Y.; Ekawati, R.
2018-01-01
Covariational reasoning plays an important role in understanding how quantities vary in learning calculus. This study investigates students' covariational reasoning concerning two covarying quantities in integral problems. Six undergraduate students were chosen to solve problems that involved interpreting and representing how quantities change in tandem. Interviews were conducted to reveal the students' reasoning while solving covariational problems. The results emphasize that the undergraduate students were able to construct the relation of dependent variables that change in tandem with the independent variable. However, students faced difficulty in forming images of continuously changing rates and could not accurately apply the concept of integrals. These findings suggest that the learning of calculus should place increased emphasis on coordinating images of two quantities changing in tandem, on instantaneous rate of change, and on promoting conceptual knowledge of integral techniques.
Covariant Quantization with Extended BRST Symmetry
Geyer, B.; Gitman, D. M.; Lavrov, P. M.
1999-01-01
A short review of covariant quantization methods based on BRST-antiBRST symmetry is given. In particular, problems of the correct definition of the Sp(2)-symmetric quantization scheme known as triplectic quantization are considered.
Covariant extensions and the nonsymmetric unified field
International Nuclear Information System (INIS)
Borchsenius, K.
1976-01-01
The problem of generally covariant extension of Lorentz invariant field equations, by means of covariant derivatives extracted from the nonsymmetric unified field, is considered. It is shown that the contracted curvature tensor can be expressed in terms of a covariant gauge derivative which contains the gauge derivative corresponding to minimal coupling, if the universal constant p, characterizing the nonsymmetric theory, is fixed in terms of Planck's constant and the elementary quantum of charge. By this choice the spinor representation of the linear connection becomes closely related to the spinor affinity used by Infeld and Van Der Waerden (Sitzungsber. Preuss. Akad. Wiss. Phys. Math. Kl.; 9:380 (1933)) in their generally covariant formulation of Dirac's equation. (author)
Covariance Spectroscopy for Fissile Material Detection
International Nuclear Information System (INIS)
Trainham, Rusty; Tinsley, Jim; Hurley, Paul; Keegan, Ray
2009-01-01
Nuclear fission produces multiple prompt neutrons and gammas at each fission event. The resulting daughter nuclei continue to emit delayed radiation as neutrons boil off, beta decay occurs, etc. All of these radiations are causally connected, and therefore correlated. The correlations are generally positive, but when different decay channels compete, so that some radiations tend to exclude others, negative correlations can also be observed. A similar problem of reduced complexity is that of cascade radiation, whereby a simple radioactive decay produces two or more correlated gamma rays at each decay. Covariance is the usual means for measuring correlation, and techniques of covariance mapping may be useful to produce distinct signatures of special nuclear materials (SNM). A covariance measurement can also be used to filter data streams, because uncorrelated signals are largely rejected; the technique is generally more effective than a coincidence measurement. In this poster, we concentrate on cascades and the covariance filtering problem.
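A toy covariance map illustrates the idea: two gamma channels fed by the same simulated decays develop a positive off-diagonal covariance, while uncorrelated Poisson background averages toward zero. The channel numbers and rates below are invented for illustration.

```python
import numpy as np

def covariance_map(shots):
    """Covariance map of per-shot detector spectra; shots has shape
    (n_shots, n_channels). Channels fed by the same decay (a cascade)
    appear as positive off-diagonal islands, while uncorrelated
    background averages toward zero."""
    return np.cov(shots, rowvar=False)

# Toy demonstration: a cascade feeds channels 2 and 7 simultaneously.
rng = np.random.default_rng(2)
decays = rng.binomial(1, 0.3, size=20000).astype(float)
shots = rng.poisson(1.0, size=(20000, 10)).astype(float)  # flat background
shots[:, 2] += decays
shots[:, 7] += decays
cmap = covariance_map(shots)   # cmap[2, 7] is clearly positive
```

The off-diagonal element between the two cascade channels approaches the variance of the decay indicator (0.21 here), while unrelated channel pairs fluctuate around zero, which is exactly the filtering behaviour the poster describes.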
Covariant amplitudes in Polyakov string theory
International Nuclear Information System (INIS)
Aoyama, H.; Dhar, A.; Namazie, M.A.
1986-01-01
A manifestly Lorentz-covariant and reparametrization-invariant procedure for computing string amplitudes using Polyakov's formulation is described. Both bosonic and superstring theories are dealt with. The computation of string amplitudes is greatly facilitated by this formalism. (orig.)
Covariance upperbound controllers for networked control systems
International Nuclear Information System (INIS)
Ko, Sang Ho
2012-01-01
This paper deals with designing covariance upperbound controllers for a linear system that can be used in a networked control environment in which control laws are calculated in a remote controller and transmitted through a shared communication link to the plant. In order to compensate for possible packet losses during the transmission, two different techniques are often employed: the zero-input and the hold-input strategy. These use zero input and the latest control input, respectively, when a packet is lost. For each strategy, we synthesize a class of output covariance upperbound controllers for a given covariance upperbound and a packet loss probability. Existence conditions of the covariance upperbound controller are also provided for each strategy. Through numerical examples, performance of the two strategies is compared in terms of feasibility of implementing the controllers
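The two packet-loss strategies can be contrasted in a toy scalar simulation. The plant, feedback gain and loss probability below are invented for illustration; the paper synthesizes controllers that guarantee a covariance upper bound, which this sketch does not attempt.

```python
import numpy as np

def simulate(strategy, n_steps=200, p_loss=0.1, seed=0):
    """Empirical state variance of a scalar plant x_{k+1} = a*x_k + b*u_k + w_k
    when the control packet u_k = -K*x_k is lost with probability p_loss.
    'zero' applies u = 0 on a loss; 'hold' reapplies the last input."""
    rng = np.random.default_rng(seed)
    a, b, K = 1.1, 1.0, 0.8          # open-loop unstable; a - b*K = 0.3
    x, u_prev = 0.0, 0.0
    history = []
    for _ in range(n_steps):
        u = -K * x                    # input computed by the remote controller
        if rng.random() < p_loss:     # packet lost in the network
            u = 0.0 if strategy == "zero" else u_prev
        x = a * x + b * u + rng.normal(scale=0.1)
        u_prev = u
        history.append(x)
    return float(np.var(history))
```

Running both strategies on the same plant makes the trade-off the paper studies concrete: which strategy yields the smaller output covariance depends on the dynamics and the loss probability, which is why the paper derives existence conditions for each separately.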
Forecasting Covariance Matrices: A Mixed Frequency Approach
DEFF Research Database (Denmark)
Halbleib, Roxana; Voev, Valeri
This paper proposes a new method for forecasting covariance matrices of financial returns. The model mixes volatility forecasts from a dynamic model of daily realized volatilities estimated with high-frequency data with correlation forecasts based on daily data. This new approach allows for flexible dependence patterns for volatilities and correlations, and can be applied to covariance matrices of large dimensions. The separate modeling of volatility and correlation forecasts considerably reduces the estimation and measurement error implied by the joint estimation and modeling of covariance...
Covariance data evaluation for experimental data
International Nuclear Information System (INIS)
Liu Tingjin
1993-01-01
Some methods and codes have been developed and utilized for covariance data evaluation of experimental data, including parameter analysis, physical analysis, spline fitting, etc. These methods and codes can be used in many different cases.
Earth Observing System Covariance Realism Updates
Ojeda Romero, Juan A.; Miguel, Fred
2017-01-01
This presentation will be given at the International Earth Science Constellation Mission Operations Working Group meetings June 13-15, 2017 to discuss the Earth Observing System Covariance Realism updates.
Laser Covariance Vibrometry for Unsymmetrical Mode Detection
National Research Council Canada - National Science Library
Kobold, Michael C
2006-01-01
Simulated cross-spectral covariance (CSC) of the optical return from simulated surface vibration indicates that CW phase modulation may be an appropriate phenomenology for adequate classification of vehicles by structural mode...
Error Covariance Estimation of Mesoscale Data Assimilation
National Research Council Canada - National Science Library
Xu, Qin
2005-01-01
The goal of this project is to explore and develop new methods of error covariance estimation that will provide necessary statistical descriptions of prediction and observation errors for mesoscale data assimilation...
Heteroscedasticity resistant robust covariance matrix estimator
Czech Academy of Sciences Publication Activity Database
Víšek, Jan Ámos
2010-01-01
Roč. 17, č. 27 (2010), s. 33-49 ISSN 1212-074X Grant - others:GA UK(CZ) GA402/09/0557 Institutional research plan: CEZ:AV0Z10750506 Keywords : Regression * Covariance matrix * Heteroscedasticity * Resistant Subject RIV: BB - Applied Statistics, Operational Research http://library.utia.cas.cz/separaty/2011/SI/visek-heteroscedasticity resistant robust covariance matrix estimator.pdf
Phase-covariant quantum cloning of qudits
International Nuclear Information System (INIS)
Fan Heng; Imai, Hiroshi; Matsumoto, Keiji; Wang, Xiang-Bin
2003-01-01
We study the phase-covariant quantum cloning machine for qudits, i.e., the input states in a d-level quantum system have complex coefficients with arbitrary phase but constant modulus. A cloning unitary transformation is proposed. After optimizing the fidelity between input state and single qudit reduced density operator of output state, we obtain the optimal fidelity for 1 to 2 phase-covariant quantum cloning of qudits and the corresponding cloning transformation
Noncommutative Gauge Theory with Covariant Star Product
International Nuclear Information System (INIS)
Zet, G.
2010-01-01
We present a noncommutative gauge theory with covariant star product on a space-time with torsion. In order to obtain the covariant star product one imposes some restrictions on the connection of the space-time. Then, a noncommutative gauge theory is developed applying this product to the case of differential forms. Some comments on the advantages of using a space-time with torsion to describe the gravitational field are also given.
Covariant phase difference observables in quantum mechanics
International Nuclear Information System (INIS)
Heinonen, Teiko; Lahti, Pekka; Pellonpaeae, Juha-Pekka
2003-01-01
Covariant phase difference observables are determined in two different ways, by a direct computation and by a group theoretical method. A characterization of phase difference observables which can be expressed as the difference of two phase observables is given. The classical limits of such phase difference observables are determined and the Pegg-Barnett phase difference distribution is obtained from the phase difference representation. The relation of Ban's theory to the covariant phase theories is exhibited
Hierarchical matrix approximation of large covariance matrices
Litvinenko, Alexander
2015-01-07
We approximate large non-structured covariance matrices in the H-matrix format with a log-linear computational cost and storage O(n log n). We compute inverse, Cholesky decomposition and determinant in H-format. As an example we consider the class of Matern covariance functions, which are very popular in spatial statistics, geostatistics, machine learning and image analysis. Applications are: kriging and optimal design
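For context, the quantities the abstract lists (a Matérn-type covariance, its Cholesky factor, its determinant) can be reproduced densely in a few lines; the H-matrix format replaces the O(n³) dense algebra below with log-linear-cost approximations. This is a hedged sketch using the closed-form Matérn ν = 3/2 kernel with assumed range and variance parameters, not an H-matrix implementation:

```python
import numpy as np
from scipy.spatial.distance import cdist

rng = np.random.default_rng(1)
pts = rng.uniform(0, 1, size=(200, 2))   # assumed spatial locations in the unit square

# Matérn covariance with smoothness nu = 3/2 (closed form), range ell, variance s2.
ell, s2, nugget = 0.2, 1.0, 1e-8
d = cdist(pts, pts)
C = s2 * (1 + np.sqrt(3) * d / ell) * np.exp(-np.sqrt(3) * d / ell)
C += nugget * np.eye(len(pts))           # tiny jitter for numerical stability

# Dense reference computations that the H-matrix format accelerates:
L = np.linalg.cholesky(C)                 # O(n^3) dense; ~O(n log n) in H-format
logdet = 2.0 * np.sum(np.log(np.diag(L))) # log-determinant from the Cholesky factor
print(logdet)
```

Kriging and optimal design then reduce to solves with `L`, which is where the dense cost becomes prohibitive for large n.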
Hierarchical matrix approximation of large covariance matrices
Litvinenko, Alexander
2015-01-05
We approximate large non-structured covariance matrices in the H-matrix format with a log-linear computational cost and storage O(n log n). We compute inverse, Cholesky decomposition and determinant in H-format. As an example we consider the class of Matern covariance functions, which are very popular in spatial statistics, geostatistics, machine learning and image analysis. Applications are: kriging and optimal design.
Covariate analysis of bivariate survival data
Energy Technology Data Exchange (ETDEWEB)
Bennett, L.E.
1992-01-01
The methods developed are used to analyze the effects of covariates on bivariate survival data when censoring and ties are present. The proposed method provides models for bivariate survival data that include differential covariate effects and censored observations. The proposed models are based on an extension of the univariate Buckley-James estimators which replace censored data points by their expected values, conditional on the censoring time and the covariates. For the bivariate situation, it is necessary to determine the expectation of the failure times for one component conditional on the failure or censoring time of the other component. Two different methods have been developed to estimate these expectations. In the semiparametric approach these expectations are determined from a modification of Burke's estimate of the bivariate empirical survival function. In the parametric approach censored data points are also replaced by their conditional expected values where the expected values are determined from a specified parametric distribution. The model estimation will be based on the revised data set, comprised of uncensored components and expected values for the censored components. The variance-covariance matrix for the estimated covariate parameters has also been derived for both the semiparametric and parametric methods. Data from the Demographic and Health Survey was analyzed by these methods. The two outcome variables are post-partum amenorrhea and breastfeeding; education and parity were used as the covariates. Both the covariate parameter estimates and the variance-covariance estimates for the semiparametric and parametric models will be compared. In addition, a multivariate test statistic was used in the semiparametric model to examine contrasts. The significance of the statistic was determined from a bootstrap distribution of the test statistic.
Hierarchical matrix approximation of large covariance matrices
Litvinenko, Alexander; Genton, Marc G.; Sun, Ying; Tempone, Raul
2015-01-01
We approximate large non-structured covariance matrices in the H-matrix format with a log-linear computational cost and storage O(n log n). We compute inverse, Cholesky decomposition and determinant in H-format. As an example we consider the class of Matern covariance functions, which are very popular in spatial statistics, geostatistics, machine learning and image analysis. Applications are: kriging and optimal design
Covariant perturbations of Schwarzschild black holes
International Nuclear Information System (INIS)
Clarkson, Chris A; Barrett, Richard K
2003-01-01
We present a new covariant and gauge-invariant perturbation formalism for dealing with spacetimes having spherical symmetry (or some preferred spatial direction) in the background, and apply it to the case of gravitational wave propagation in a Schwarzschild black-hole spacetime. The 1 + 3 covariant approach is extended to a '1 + 1 + 2 covariant sheet' formalism by introducing a radial unit vector in addition to the timelike congruence, and decomposing all covariant quantities with respect to this. The background Schwarzschild solution is discussed and a covariant characterization is given. We give the full first-order system of linearized 1 + 1 + 2 covariant equations, and we show how, by introducing (time and spherical) harmonic functions, these may be reduced to a system of first-order ordinary differential equations and algebraic constraints for the 1 + 1 + 2 variables which may be solved straightforwardly. We show how both odd- and even-parity perturbations may be unified by the discovery of a covariant, frame- and gauge-invariant, transverse-traceless tensor describing gravitational waves, which satisfies a covariant wave equation equivalent to the Regge-Wheeler equation for both even- and odd-parity perturbations. We show how the Zerilli equation may be derived from this tensor, and derive a similar transverse-traceless tensor equation equivalent to this equation. The so-called special quasinormal modes with purely imaginary frequency emerge naturally. The significance of the degrees of freedom in the choice of the two frame vectors is discussed, and we demonstrate that, for a certain frame choice, the underlying dynamics is governed purely by the Regge-Wheeler tensor. The two transverse-traceless Weyl tensors which carry the curvature of gravitational waves are discussed, and we give the closed system of four first-order ordinary differential equations describing their propagation. Finally, we consider the extension of this work to the study of
Daye, Dania; Carrodeguas, Emmanuel; Glover, McKinley; Guerrier, Claude Emmanuel; Harvey, H Benjamin; Flores, Efrén J
2018-05-01
The aim of this study was to investigate the impact of wait days (WDs) on missed outpatient MRI appointments across different demographic and socioeconomic factors. An institutional review board-approved retrospective study was conducted among adult patients scheduled for outpatient MRI during a 12-month period. Scheduling data and demographic information were obtained. Imaging missed appointments were defined as missed scheduled imaging encounters. WDs were defined as the number of days from study order to appointment. Multivariate logistic regression was applied to assess the contribution of race and socioeconomic factors to missed appointments. Linear regression was performed to assess the relationship between missed appointment rates and WDs stratified by race, income, and patient insurance groups with analysis of covariance statistics. A total of 42,727 patients met the inclusion criteria. Mean WDs were 7.95 days. Multivariate regression showed increased odds ratio for missed appointments for patients with increased WDs (7-21 days: odds ratio [OR], 1.39; >21 days: OR, 1.77), African American patients (OR, 1.71), Hispanic patients (OR, 1.30), patients with noncommercial insurance (OR, 2.00-2.55), and those with imaging performed at the main hospital campus (OR, 1.51). Missed appointment rate linearly increased with WDs, with analysis of covariance revealing underrepresented minorities and Medicaid insurance as significant effect modifiers. Increased WDs for advanced imaging significantly increases the likelihood of missed appointments. This effect is most pronounced among underrepresented minorities and patients with lower socioeconomic status. Efforts to reduce WDs may improve equity in access to and utilization of advanced diagnostic imaging for all patients. Copyright © 2018. Published by Elsevier Inc.
Safety measuring for sodium handling
Energy Technology Data Exchange (ETDEWEB)
Jeong, Ji Young; Jeong, K C; Kim, T J; Kim, B H; Choi, J H
2001-09-01
This report describes the safety measures for sodium handling. These contents are prerequisites for the development of sodium technology, and thus the workers participating in sodium handling and experiments have to know them perfectly. As an appendix, the relevant parts of the laws are presented.
Ole E. Barndorff-Nielsen; Neil Shephard
2002-01-01
This paper analyses multivariate high frequency financial data using realised covariation. We provide a new asymptotic distribution theory for standard methods such as regression, correlation analysis and covariance. It will be based on a fixed interval of time (e.g. a day or week), allowing the number of high frequency returns during this period to go to infinity. Our analysis allows us to study how high frequency correlations, regressions and covariances change through time. In particular w...
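The core estimator discussed above, realised covariation over a fixed interval, is simply the sum of outer products of the high-frequency return vectors. A small simulated sketch (assumed one-minute returns over one trading day; not the authors' data):

```python
import numpy as np

rng = np.random.default_rng(2)
m = 390  # one-minute returns in a 6.5-hour trading day (assumption)

# Simulate two assets whose intraday returns have a known daily covariance.
daily_cov = np.array([[1.0, 0.6], [0.6, 1.5]])
r = rng.multivariate_normal([0.0, 0.0], daily_cov / m, size=m)

# Realised covariation: sum over the interval of outer products of returns.
rc = r.T @ r   # ~ daily_cov up to sampling error
realised_corr = rc[0, 1] / np.sqrt(rc[0, 0] * rc[1, 1])
print(rc, realised_corr)
```

Repeating this day by day yields the time series of high-frequency covariances and correlations whose asymptotics (m going to infinity over a fixed interval) the paper studies.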
Waste Handling Building Conceptual Study
International Nuclear Information System (INIS)
G.W. Rowe
2000-01-01
The objective of the "Waste Handling Building Conceptual Study" is to develop proposed design requirements for the repository Waste Handling System in sufficient detail to allow the surface facility design to proceed to the License Application effort if the proposed requirements are approved by DOE. Proposed requirements were developed to further refine waste handling facility performance characteristics and design constraints, with an emphasis on supporting modular construction, minimizing fuel inventory, and optimizing facility maintainability and dry handling operations. To meet this objective, this study attempts to provide an alternative design to the Site Recommendation design that is flexible, simple, reliable, and can be constructed in phases. The design concept will be input to the "Modular Design/Construction and Operation Options Report", which will address the overall program objectives and direction, including options and issues associated with transportation, the subsurface facility, and Total System Life Cycle Cost. This study (herein) is limited to the Waste Handling System and associated fuel staging system.
A special covariance structure for random coefficient models with both between and within covariates
International Nuclear Information System (INIS)
Riedel, K.S.
1990-07-01
We review random coefficient (RC) models in linear regression and propose a bias correction to the maximum likelihood (ML) estimator. Asymptotic expansions of the ML equations are given when the between-individual variance is much larger or smaller than the variance from within-individual fluctuations. The standard model assumes that all but one covariate vary within each individual (we denote the within covariates by the vector χ₁). We consider random coefficient models where some of the covariates do not vary in any single individual (we denote the between covariates by the vector χ₀). The regression coefficients β_k can only be estimated in the subspace X_k of X. Thus the number of individuals necessary to estimate β and the covariance matrix Δ of β increases significantly in the presence of more than one between covariate. When the number of individuals is sufficient to estimate β but not the entire matrix Δ, additional assumptions must be imposed on the structure of Δ. A simple reduced model is that the between component of β is fixed and only the within component varies randomly. This model fails because it is not invariant under linear coordinate transformations and it can significantly overestimate the variance of new observations. We propose a covariance structure for Δ without these difficulties by first projecting the within covariates onto the space perpendicular to the between covariates. (orig.)
Are your covariates under control? How normalization can re-introduce covariate effects.
Pain, Oliver; Dudbridge, Frank; Ronald, Angelica
2018-04-30
Many statistical tests rely on the assumption that the residuals of a model are normally distributed. Rank-based inverse normal transformation (INT) of the dependent variable is one of the most popular approaches to satisfy the normality assumption. When covariates are included in the analysis, a common approach is to first adjust for the covariates and then normalize the residuals. This study investigated the effect of regressing covariates against the dependent variable and then applying rank-based INT to the residuals. The correlation between the dependent variable and covariates at each stage of processing was assessed. An alternative approach was tested in which rank-based INT was applied to the dependent variable before regressing covariates. Analyses based on both simulated and real data examples demonstrated that applying rank-based INT to the dependent variable residuals after regressing out covariates re-introduces a linear correlation between the dependent variable and covariates, increasing type-I errors and reducing power. On the other hand, when rank-based INT was applied prior to controlling for covariate effects, residuals were normally distributed and linearly uncorrelated with covariates. This latter approach is therefore recommended in situations where normality of the dependent variable is required.
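The order-of-operations effect described above can be checked directly. Below is a hedged simulation sketch (the skewed-outcome model is illustrative, not the paper's exact design): residualizing first and then applying rank-based INT can leave a correlation with the covariate, whereas transforming first and then residualizing yields residuals exactly uncorrelated with the covariate by construction of least squares.

```python
import numpy as np
from scipy.stats import rankdata, norm

rng = np.random.default_rng(3)
n = 5_000
covariate = rng.normal(size=n)
y = np.exp(covariate + rng.normal(size=n))   # assumed skewed (log-normal) outcome

def rank_int(x):
    """Rank-based inverse normal transformation (Blom offset c = 3/8)."""
    return norm.ppf((rankdata(x) - 0.375) / (len(x) + 0.25))

def residualize(dep, cov):
    """OLS residuals of dep regressed on cov (with intercept)."""
    X = np.column_stack([np.ones_like(cov), cov])
    beta, *_ = np.linalg.lstsq(X, dep, rcond=None)
    return dep - X @ beta

# Residualize first, then transform: the nonlinear rank transform can
# re-introduce dependence on the covariate (the effect the paper warns about).
corr_int_last = np.corrcoef(rank_int(residualize(y, covariate)), covariate)[0, 1]
# Transform first, then residualize: residuals are orthogonal to the
# covariate by construction of least squares.
corr_int_first = np.corrcoef(residualize(rank_int(y), covariate), covariate)[0, 1]
print(corr_int_last, corr_int_first)
```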
Missed hormonal contraceptives: new recommendations.
Guilbert, Edith; Black, Amanda; Dunn, Sheila; Senikas, Vyta
2008-11-01
To provide evidence-based guidance for women and their health care providers on the management of missed or delayed hormonal contraceptive doses in order to prevent unintended pregnancy. Medline, PubMed, and the Cochrane Database were searched for articles published in English, from 1974 to 2007, about hormonal contraceptive methods that are available in Canada and that may be missed or delayed. Relevant publications and position papers from appropriate reproductive health and family planning organizations were also reviewed. The quality of evidence is rated using the criteria developed by the Canadian Task Force on Preventive Health Care. This committee opinion will help health care providers offer clear information to women who have not been adherent in using hormonal contraception with the purpose of preventing unintended pregnancy. The Society of Obstetricians and Gynaecologists of Canada. SUMMARY STATEMENTS: 1. Instructions for what women should do when they miss hormonal contraception have been complex and women do not understand them correctly. (I) 2. The highest risk of ovulation occurs when the hormone-free interval is prolonged for more than seven days, either by delaying the start of combined hormonal contraceptives or by missing active hormone doses during the first or third weeks of combined oral contraceptives. (II) Ovulation rarely occurs after seven consecutive days of combined oral contraceptive use. (II) RECOMMENDATIONS: 1. Health care providers should give clear, simple instructions, both written and oral, on missed hormonal contraceptive pills as part of contraceptive counselling. (III-A) 2. Health care providers should provide women with telephone/electronic resources for reference in the event of missed or delayed hormonal contraceptives. (III-A) 3. In order to avoid an increased risk of unintended pregnancy, the hormone-free interval should not exceed seven days in combined hormonal contraceptive users. (II-A) 4. Back-up contraception should
An Adaptive Approach to Mitigate Background Covariance Limitations in the Ensemble Kalman Filter
Song, Hajoon
2010-07-01
A new approach is proposed to address the background covariance limitations arising from undersampled ensembles and unaccounted model errors in the ensemble Kalman filter (EnKF). The method enhances the representativeness of the EnKF ensemble by augmenting it with new members chosen adaptively to add missing information that prevents the EnKF from fully fitting the data to the ensemble. The vectors to be added are obtained by back projecting the residuals of the observation misfits from the EnKF analysis step onto the state space. The back projection is done using an optimal interpolation (OI) scheme based on an estimated covariance of the subspace missing from the ensemble. In the experiments reported here, the OI uses a preselected stationary background covariance matrix, as in the hybrid EnKF–three-dimensional variational data assimilation (3DVAR) approach, but the resulting correction is included as a new ensemble member instead of being added to all existing ensemble members. The adaptive approach is tested with the Lorenz-96 model. The hybrid EnKF–3DVAR is used as a benchmark to evaluate the performance of the adaptive approach. Assimilation experiments suggest that the new adaptive scheme significantly improves the EnKF behavior when it suffers from small size ensembles and neglected model errors. It was further found to be competitive with the hybrid EnKF–3DVAR approach, depending on ensemble size and data coverage.
Missing data in forest ecology and management: advances in quantitative methods [Preface
Tara Barrett; Matti Maltomo
2012-01-01
In recent years, substantial progress has been made in handling missing data issues for applications in forest ecology and management, particularly in the area of imputation techniques. A session on this topic was held at the XXIII IUFRO World Congress in Seoul, South Korea, on August 23-28, 2010, resulting in this special issue of six papers that address recent...
Droid X The Missing Manual
Gralla, Preston
2011-01-01
Get the most from your Droid X right away with this entertaining Missing Manual. Veteran tech author Preston Gralla offers a guided tour of every feature, with lots of expert tips and tricks along the way. You'll learn how to use calling and texting features, take and share photos, enjoy streaming music and video, and much more. Packed with full-color illustrations, this engaging book covers everything from getting started to advanced features and troubleshooting. Unleash the power of Motorola's hot new device with Droid X: The Missing Manual. Get organized. Import your contacts and sync wit
Motorola Xoom The Missing Manual
Gralla, Preston
2011-01-01
Motorola Xoom is the first tablet to rival the iPad, and no wonder with all of the great features packed into this device. But learning how to use everything can be tricky-and Xoom doesn't come with a printed guide. That's where this Missing Manual comes in. Gadget expert Preston Gralla helps you master your Xoom with step-by-step instructions and clear explanations. As with all Missing Manuals, this book offers refreshing, jargon-free prose and informative illustrations. Use your Xoom as an e-book reader, music player, camcorder, and phoneKeep in touch with email, video and text chat, and so
Nuclear data covariances in the Indian context
International Nuclear Information System (INIS)
Ganesan, S.
2014-01-01
The topic of covariances has been recognized as an important part of several ongoing nuclear data science activities, since 2007, in the Nuclear Data Physics Centre of India (NDPCI). A Phase-1 project on nuclear data covariances, in collaboration with the Statistics department of Manipal University, Karnataka (Prof. K.M. Prasad and Prof. S. Nair), was executed successfully during the 2007-2011 period. In Phase-1, the NDPCI conducted three national Theme meetings on nuclear data covariances, sponsored by the DAE-BRNS, in 2008, 2010 and 2013. The Phase-1 emphasis was on a thorough basic understanding of the concept of covariances, including assigning uncertainties to experimental data in terms of partial errors and micro-correlations, through a study and detailed discussion of the open literature. Towards the end of Phase-1, measurements and a first-time covariance analysis of cross-sections for the ⁵⁸Ni(n,p)⁵⁸Co reaction, measured at the Mumbai Pelletron accelerator using the ⁷Li(p,n) reaction as a neutron source in the MeV energy region, were performed under a PhD programme on nuclear data covariances in which two students, Shri B.S. Shivashankar and Ms. Shanti Sheela, are enrolled. India is also successfully evolving a team of young researchers to encode nuclear data uncertainties, with a perspective on covariances, in the IAEA-EXFOR format. A Phase-II DAE-BRNS-NDPCI project proposal at Manipal has been submitted and is undergoing peer review at this time. In Phase-2, modern nuclear data evaluation techniques that include covariances will be further studied as a first-time research and development effort. These efforts include the use of techniques such as the Kalman filter. Presently, a 48-hour lecture series on the treatment of errors and their propagation is being formulated under the auspices of the Homi Bhabha National Institute. The talk describes the progress achieved thus far in the learning curve of the above-mentioned and exciting
Cross-covariance functions for multivariate geostatistics
Genton, Marc G.
2015-05-01
Continuously indexed datasets with multiple variables have become ubiquitous in the geophysical, ecological, environmental and climate sciences, and pose substantial analysis challenges to scientists and statisticians. For many years, scientists developed models that aimed at capturing the spatial behavior for an individual process; only within the last few decades has it become commonplace to model multiple processes jointly. The key difficulty is in specifying the cross-covariance function, that is, the function responsible for the relationship between distinct variables. Indeed, these cross-covariance functions must be chosen to be consistent with marginal covariance functions in such a way that the second-order structure always yields a nonnegative definite covariance matrix. We review the main approaches to building cross-covariance models, including the linear model of coregionalization, convolution methods, the multivariate Matérn and nonstationary and space-time extensions of these among others. We additionally cover specialized constructions, including those designed for asymmetry, compact support and spherical domains, with a review of physics-constrained models. We illustrate select models on a bivariate regional climate model output example for temperature and pressure, along with a bivariate minimum and maximum temperature observational dataset; we compare models by likelihood value as well as via cross-validation co-kriging studies. The article closes with a discussion of unsolved problems. © Institute of Mathematical Statistics, 2015.
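The consistency requirement described above (cross-covariance functions chosen so that the full second-order structure stays nonnegative definite) is guaranteed by construction in the linear model of coregionalization, one of the approaches the review covers. A minimal sketch with assumed loadings and exponential latent correlations:

```python
import numpy as np
from scipy.spatial.distance import cdist

rng = np.random.default_rng(4)
pts = rng.uniform(0, 1, size=(60, 2))   # assumed spatial locations
h = cdist(pts, pts)                     # pairwise distances

# Linear model of coregionalization (LMC): C = sum_k (a_k a_k^T) x rho_k(h).
# Each rho_k is a valid univariate correlation, so the full (2n x 2n)
# bivariate covariance is nonnegative definite by construction.
rho1 = np.exp(-h / 0.3)          # long-range exponential correlation
rho2 = np.exp(-h / 0.05)         # short-range component
a1 = np.array([1.0, 0.7])        # loadings of the two variables on latent field 1
a2 = np.array([0.3, 1.0])        # loadings on latent field 2

C = np.kron(np.outer(a1, a1), rho1) + np.kron(np.outer(a2, a2), rho2)
eigmin = np.linalg.eigvalsh(C).min()
print(eigmin)   # smallest eigenvalue; >= 0 up to round-off
```

The off-diagonal n x n blocks of `C` are the cross-covariances between the two variables; richer constructions (multivariate Matérn, convolution methods) generalize this while preserving the same validity constraint.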
Schroedinger covariance states in anisotropic waveguides
International Nuclear Information System (INIS)
Angelow, A.; Trifonov, D.
1995-03-01
In this paper, squeezed and covariance states based on the Schroedinger inequality, and their connection with other nonclassical states, are considered for the particular case of an anisotropic waveguide in LiNbO₃. Here, the problem of photon creation and generation of squeezed and Schroedinger covariance states in optical waveguides is solved in two steps: 1. Quantization of the electromagnetic field is provided in the presence of the dielectric waveguide using a normal-mode expansion. The photon creation and annihilation operators are introduced by expanding the solution A(r,t) in a series in terms of the Sturm-Liouville mode functions. 2. In terms of these operators the Hamiltonian of the field in a nonlinear waveguide is derived. For this Hamiltonian we construct the covariance states as stable states (with nonzero covariance) which minimize the Schroedinger uncertainty relation. The evolutions of the three second moments of q̂_j and p̂_j are calculated. For this Hamiltonian all three moments are expressed in terms of one real parameter s only. It is found how the covariance, via this parameter s, depends on the waveguide profile n(x,y), on the mode distributions u_j(x,y), and on the waveguide phase mismatch Δβ. (author). 37 refs
Form of the manifestly covariant Lagrangian
Johns, Oliver Davis
1985-10-01
The preferred form for the manifestly covariant Lagrangian function of a single, charged particle in a given electromagnetic field is the subject of some disagreement in the textbooks. Some authors use a "homogeneous" Lagrangian and others use a "modified" form in which the covariant Hamiltonian function is made to be nonzero. We argue in favor of the "homogeneous" form. We show that the covariant Lagrangian theories can be understood only if one is careful to distinguish quantities evaluated on the varied (in the sense of the calculus of variations) world lines from quantities evaluated on the unvaried world lines. By making this distinction, we are able to derive the Hamilton-Jacobi and Klein-Gordon equations from the "homogeneous" Lagrangian, even though the covariant Hamiltonian function is identically zero on all world lines. The derivation of the Klein-Gordon equation in particular gives Lagrangian theoretical support to the derivations found in standard quantum texts, and is also shown to be consistent with the Feynman path-integral method. We conclude that the "homogeneous" Lagrangian is a completely adequate basis for covariant Lagrangian theory both in classical and quantum mechanics. The article also explores the analogy with the Fermat theorem of optics, and illustrates a simple invariant notation for the Lagrangian and other four-vector equations.
Cross-covariance functions for multivariate geostatistics
Genton, Marc G.; Kleiber, William
2015-01-01
Continuously indexed datasets with multiple variables have become ubiquitous in the geophysical, ecological, environmental and climate sciences, and pose substantial analysis challenges to scientists and statisticians. For many years, scientists developed models that aimed at capturing the spatial behavior for an individual process; only within the last few decades has it become commonplace to model multiple processes jointly. The key difficulty is in specifying the cross-covariance function, that is, the function responsible for the relationship between distinct variables. Indeed, these cross-covariance functions must be chosen to be consistent with marginal covariance functions in such a way that the second-order structure always yields a nonnegative definite covariance matrix. We review the main approaches to building cross-covariance models, including the linear model of coregionalization, convolution methods, the multivariate Matérn and nonstationary and space-time extensions of these among others. We additionally cover specialized constructions, including those designed for asymmetry, compact support and spherical domains, with a review of physics-constrained models. We illustrate select models on a bivariate regional climate model output example for temperature and pressure, along with a bivariate minimum and maximum temperature observational dataset; we compare models by likelihood value as well as via cross-validation co-kriging studies. The article closes with a discussion of unsolved problems. © Institute of Mathematical Statistics, 2015.
Convex Banding of the Covariance Matrix.
Bien, Jacob; Bunea, Florentina; Xiao, Luo
2016-01-01
We introduce a new sparse estimator of the covariance matrix for high-dimensional models in which the variables have a known ordering. Our estimator, which is the solution to a convex optimization problem, is equivalently expressed as an estimator which tapers the sample covariance matrix by a Toeplitz, sparsely-banded, data-adaptive matrix. As a result of this adaptivity, the convex banding estimator enjoys theoretical optimality properties not attained by previous banding or tapered estimators. In particular, our convex banding estimator is minimax rate adaptive in Frobenius and operator norms, up to log factors, over commonly-studied classes of covariance matrices, and over more general classes. Furthermore, it correctly recovers the bandwidth when the true covariance is exactly banded. Our convex formulation admits a simple and efficient algorithm. Empirical studies demonstrate its practical effectiveness and illustrate that our exactly-banded estimator works well even when the true covariance matrix is only close to a banded matrix, confirming our theoretical results. Our method compares favorably with all existing methods, in terms of accuracy and speed. We illustrate the practical merits of the convex banding estimator by showing that it can be used to improve the performance of discriminant analysis for classifying sound recordings.
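The core intuition behind banding can be shown with a plain hard-banding baseline: zeroing noisy sample-covariance entries that are truly zero reduces estimation error. This is only a fixed-bandwidth illustration; the paper's convex estimator instead selects the bandwidth adaptively through an optimization problem. The bandwidth, dimensions and covariance model below are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hard banding of a sample covariance: a simple baseline for the idea the
# paper refines (its estimator chooses the band data-adaptively via a
# convex program; the fixed bandwidth k here is an illustrative assumption).
def band(S, k):
    """Zero out entries of S more than k off-diagonals from the main diagonal."""
    p = S.shape[0]
    i, j = np.indices((p, p))
    return np.where(np.abs(i - j) <= k, S, 0.0)

# True covariance: banded, AR(1)-like structure with bandwidth 2
p, n = 20, 500
Sigma = np.array([[0.5 ** abs(i - j) if abs(i - j) <= 2 else 0.0
                   for j in range(p)] for i in range(p)])
X = rng.multivariate_normal(np.zeros(p), Sigma, size=n)
S = np.cov(X, rowvar=False)

S_band = band(S, 2)
# Banding at the true bandwidth replaces noisy estimates of true zeros with
# exact zeros, so the Frobenius error can only decrease
err_raw = np.linalg.norm(S - Sigma)
err_band = np.linalg.norm(S_band - Sigma)
```

When the assumed bandwidth is wrong, hard banding can discard real signal, which is exactly the failure mode the adaptive convex formulation is designed to avoid.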
Sophisticated fuel handling system evolved
International Nuclear Information System (INIS)
Ross, D.A.
1988-01-01
The control systems at Sellafield fuel handling plant are described. The requirements called for built-in diagnostic features as well as the ability to handle a large sequencing application. Speed was also important; responses better than 50ms were required. The control systems are used to automate operations within each of the three main process caves - two Magnox fuel decanners and an advanced gas-cooled reactor fuel dismantler. The fuel route within the fuel handling plant is illustrated and described. ASPIC (Automated Sequence Package for Industrial Control) which was developed as a controller for the plant processes is described. (U.K.)
Production management of window handles
Directory of Open Access Journals (Sweden)
Manuela Ingaldi
2014-12-01
In this chapter a company involved in the production of aluminum window and door handles is presented. The main customers of the company are primarily producers of PVC joinery and the wholesalers supplying them. One chosen product of the company, a single-arm pin-lift window handle, is described and its production process depicted technologically. The chapter also includes a SWOT analysis conducted in the company and the value stream of the single-arm pin-lift window handle.
CERN Bulletin
2015-01-01
Not wanting to miss a moment of the beautiful celestial dance that played out on Friday, 20 March, Jens Roder of CERN’s PH group took to the Jura mountains, where he got several shots of the event. Here are a selection of his photos, which he was kind enough to share with the Bulletin and its readers.
Missing Boxes in Central Europe
DEFF Research Database (Denmark)
Prockl, Günter; Weibrecht Kristensen, Kirsten
2015-01-01
The Chinese New Year is an event that obviously happens every year. Every year, however, it also causes severe problems for the companies involved in the industry in the form of missing containers throughout the chain, and in particular in the European hinterland. Illustrated on the symptoms of the Chin...
An adaptive ES with a ranking based constraint handling strategy
Directory of Open Access Journals (Sweden)
Kusakci Ali Osman
2014-01-01
To solve a constrained optimization problem, equality constraints can be used to eliminate a problem variable. If that is not feasible, the relations imposed implicitly by the constraints can still be exploited. Most conventional constraint-handling methods in Evolutionary Algorithms (EAs) do not consider the correlations between problem variables imposed by the constraints. This paper relies on the idea that a proper search operator, which captures these implicit correlations, can improve the performance of evolutionary constrained optimization algorithms. To realize this, an Evolution Strategy (ES) with a simplified Covariance Matrix Adaptation (CMA) based mutation operator is used together with a ranking-based constraint-handling method. The proposed algorithm is tested on 13 benchmark problems as well as on a real-life design problem. The algorithm significantly outperforms conventional ES-based methods.
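The ranking idea can be sketched with a toy (μ, λ)-ES on a constrained sphere problem. The lexicographic feasibility-first ranking below (violation first, objective second) is a simplified stand-in for the paper's ranking-based method, and the problem, population sizes and step-size schedule are all illustrative assumptions; the CMA-based mutation operator is omitted for brevity.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy sketch of ranking-based constraint handling in a (mu, lambda)-ES.
# Feasibility-first lexicographic ranking is used as a simplified stand-in
# for the paper's method; all parameters are illustrative assumptions.
def f(x):                      # objective: sphere function
    return float(np.sum(x ** 2))

def viol(x):                   # constraint g(x) = 1 - x[0] <= 0, i.e. x[0] >= 1
    return max(0.0, 1.0 - x[0])

mu, lam, dim, sigma = 5, 30, 5, 0.3
pop = rng.normal(0.0, 1.0, (lam, dim))
best = pop[0].copy()

for gen in range(200):
    # rank: low constraint violation first, then by objective value
    order = sorted(range(lam), key=lambda i: (viol(pop[i]), f(pop[i])))
    parents = pop[order[:mu]]
    cand = parents[0]
    if (viol(cand), f(cand)) < (viol(best), f(best)):
        best = cand.copy()
    sigma *= 0.98              # crude deterministic step-size decay
    idx = rng.integers(0, mu, lam)
    pop = parents[idx] + sigma * rng.normal(0.0, 1.0, (lam, dim))

# the optimum of this toy problem is x = (1, 0, ..., 0) with f = 1
```

In the paper's setting the isotropic mutation above is replaced by a CMA-shaped one, precisely so the search distribution can align with the correlations the constraints induce between variables.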
Progress on Nuclear Data Covariances: AFCI-1.2 Covariance Library
International Nuclear Information System (INIS)
Oblozinsky, P.; Oblozinsky, P.; Mattoon, C.M.; Herman, M.; Mughabghab, S.F.; Pigni, M.T.; Talou, P.; Hale, G.M.; Kahler, A.C.; Kawano, T.; Little, R.C.; Young, P.G
2009-01-01
Improved neutron cross section covariances were produced for 110 materials including 12 light nuclei (coolants and moderators), 78 structural materials and fission products, and 20 actinides. The improved covariances were organized into the AFCI-1.2 covariance library in 33 energy groups, from 10⁻⁵ eV to 19.6 MeV. BNL contributed improved covariance data for the following materials: ²³Na and ⁵⁵Mn, where more detailed evaluations were done; improvements in the major structural materials ⁵²Cr, ⁵⁶Fe and ⁵⁸Ni; improved estimates for the remaining structural materials and fission products; improved covariances for 14 minor actinides; and estimates of mubar covariances for ²³Na and ⁵⁶Fe. LANL contributed improved covariance data for ²³⁵U and ²³⁹Pu, including prompt neutron fission spectra, and a completely new evaluation for ²⁴⁰Pu. A new R-matrix evaluation for ¹⁶O, including mubar covariances, is under completion. BNL assembled the library and performed basic testing using improved procedures, including inspection of uncertainty and correlation plots for each material. The AFCI-1.2 library was released to ANL and INL in August 2009.
ACORNS, Covariance and Correlation Matrix Diagonalization
International Nuclear Information System (INIS)
Szondi, E.J.
1990-01-01
1 - Description of program or function: The program allows the user to verify the different types of covariance/correlation matrices used in activation neutron spectrometry. 2 - Method of solution: The program performs the diagonalization of the input covariance/relative covariance/correlation matrices. The eigenvalues are then analyzed to determine the rank of the matrices. If the eigenvectors of the pertinent correlation matrix have also been calculated, the program can perform a complete factor analysis (generation of the factor matrix and its rotation in Kaiser's 'varimax' sense to select the origin of the correlations). 3 - Restrictions on the complexity of the problem: Matrix size is limited to 60 on PDP and to 100 on IBM PC/AT.
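The rank check at the heart of such a diagnostic can be sketched in a few lines: diagonalize the matrix and count the eigenvalues that are significant relative to the largest one. The example matrix and tolerance below are illustrative assumptions, not ACORNS internals.

```python
import numpy as np

# Minimal version of the kind of check the program performs: diagonalize an
# input correlation matrix and determine its numerical rank from the
# eigenvalues. Matrix and tolerance are illustrative assumptions.
def rank_by_eigenvalues(C, tol=1e-10):
    w = np.linalg.eigvalsh(C)          # eigenvalues, ascending order
    return int(np.sum(w > tol * w.max()))

# Build a deliberately rank-deficient 3x3 correlation matrix: the third
# variable is an exact (normalized) sum of the first two.
A = np.array([[1.0, 0.0],
              [0.0, 1.0],
              [1 / np.sqrt(2), 1 / np.sqrt(2)]])
C = A @ A.T                            # unit diagonal, but only rank 2

deficient_rank = rank_by_eigenvalues(C)   # 2, not the full rank 3
```

A rank below the matrix dimension signals an exact linear dependence among the input quantities, which is exactly the kind of defect one wants to catch before using the matrix in a least-squares adjustment.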
Group covariant protocols for quantum string commitment
International Nuclear Information System (INIS)
Tsurumaru, Toyohiro
2006-01-01
We study the security of quantum string commitment (QSC) protocols with a group covariant encoding scheme. First we consider a class of QSC protocols which is general enough to incorporate all the QSC protocols given in the preceding literature. Then, among those protocols, we consider group covariant protocols and show that the exact upper bound on the binding condition can be calculated. Next, using this result, we prove that for every irreducible representation of a finite group there always exists a corresponding nontrivial QSC protocol which reaches a level of security impossible to achieve classically.
The covariant entropy bound in gravitational collapse
International Nuclear Information System (INIS)
Gao, Sijie; Lemos, Jose P. S.
2004-01-01
We study the covariant entropy bound in the context of gravitational collapse. First, we discuss critically the heuristic arguments advanced by Bousso. Then we solve the problem through an exact model: a Tolman-Bondi dust shell collapsing into a Schwarzschild black hole. After the collapse, a new black hole with a larger mass is formed. The horizon, L, of the old black hole then terminates at the singularity. We show that the entropy crossing L does not exceed a quarter of the area of the old horizon. Therefore, the covariant entropy bound is satisfied in this process. (author)
Modular invariance and covariant loop calculus
International Nuclear Information System (INIS)
Petersen, J.L.; Roland, K.O.; Sidenius, J.R.
1988-01-01
The covariant loop calculus provides an efficient technique for computing explicit expressions for the density on moduli space corresponding to arbitrary (bosonic string) loop diagrams. Since modular invariance is not manifest, however, we carry out a detailed comparison with known explicit two- and three-loop results derived using analytic geometry (one loop is known to be okay). We establish identity to 'high' order in some moduli and exactly in others. Agreement is found as a result of various nontrivial cancellations, in part related to number theory. We feel our results provide very strong support for the correctness of the covariant loop calculus approach. (orig.)
Remarks on Bousso's covariant entropy bound
Mayo, A E
2002-01-01
Bousso's covariant entropy bound is put to the test in the context of a non-singular cosmological solution of general relativity found by Bekenstein. Although the model complies with every assumption made in Bousso's original conjecture, the entropy bound is violated due to the occurrence of negative energy density associated with the interaction of some of the matter components in the model. We demonstrate how this property allows the test model to 'elude' a proof of Bousso's conjecture which was given recently by Flanagan, Marolf and Wald. This corroborates the view that the covariant entropy bound should be applied only to stable systems for which every matter component carries positive energy density.
Covariant n²-plet mass formulas
International Nuclear Information System (INIS)
Davidson, A.
1979-01-01
Using a generalized internal symmetry group analogous to the Lorentz group, we have constructed a covariant n²-plet mass operator. This operator is built as a scalar matrix in the (n;n*) representation, and its SU(n) breaking parameters are identified as intrinsic boost ones. Its basic properties are: covariance, Hermiticity, positivity, charge conjugation, quark contents, and a self-consistent n²-1, 1 mixing. The GMO and the Okubo formulas are obtained by considering two different limits of the same generalized mass formula.
Parametric number covariance in quantum chaotic spectra.
Vinayak; Kumar, Sandeep; Pandey, Akhilesh
2016-03-01
We study spectral parametric correlations in quantum chaotic systems and introduce the number covariance as a measure of such correlations. We derive analytic results for the classical random matrix ensembles using the binary correlation method and obtain compact expressions for the covariance. We illustrate the universality of this measure by presenting the spectral analysis of the quantum kicked rotors for the time-reversal invariant and time-reversal noninvariant cases. A local version of the parametric number variance introduced earlier is also investigated.
Safe handling of radiation sources
International Nuclear Information System (INIS)
Abd Nasir Ibrahim; Azali Muhammad; Ab Razak Hamzah; Abd Aziz Mohamed; Mohammad Pauzi Ismail
2004-01-01
This chapter discusses the subjects related to the safe handling of radiation sources: types of radiation sources; methods of use; transport within and outside premises; and the disposal of gamma sources.
How Retailers Handle Complaint Management
DEFF Research Database (Denmark)
Hansen, Torben; Wilke, Ricky; Zaichkowsky, Judy
2009-01-01
This article fills a gap in the literature by providing insight about the handling of complaint management (CM) across a large cross section of retailers in the grocery, furniture, electronic and auto sectors. Determinants of retailers’ CM handling are investigated, and insight is gained as to the links between CM and redress of consumers’ complaints. The results suggest that retailers who attach large negative consequences to consumer dissatisfaction are more likely than other retailers to develop a positive strategic view on customer complaining, but at the same time an increase in perceived...
Ergonomic material-handling device
Barsnick, Lance E.; Zalk, David M.; Perry, Catherine M.; Biggs, Terry; Tageson, Robert E.
2004-08-24
A hand-held ergonomic material-handling device capable of moving heavy objects, such as large waste containers and other large objects requiring mechanical assistance. The ergonomic material-handling device can be used with neutral postures of the back, shoulders, wrists and knees, thereby reducing potential injury to the user. The device involves two key features: 1) it gives the user the ability to adjust the height of the handles of the device to ergonomically fit the needs of the user's back, wrists and shoulders; and 2) its rounded handlebar shape, together with the size and configuration of the handles, keeps the user's wrists in a neutral posture during manipulation of the device.
Activities on covariance estimation in Japanese Nuclear Data Committee
Energy Technology Data Exchange (ETDEWEB)
Shibata, Keiichi [Japan Atomic Energy Research Inst., Tokai, Ibaraki (Japan). Tokai Research Establishment
1997-03-01
Described are activities on covariance estimation in the Japanese Nuclear Data Committee. Covariances are obtained from measurements by using the least-squares methods. A simultaneous evaluation was performed to deduce covariances of fission cross sections of U and Pu isotopes. A code system, KALMAN, is used to estimate covariances of nuclear model calculations from uncertainties in model parameters. (author)
Analysis of the progression of systolic blood pressure using imputation of missing phenotype values
Vaitsiakhovich, Tatsiana; Drichel, Dmitriy; Angisch, Marina; Becker, Tim; Herold, Christine; Lacour, André
2014-01-01
We present a genome-wide association study of a quantitative trait, "progression of systolic blood pressure in time," in which 142 unrelated individuals of the Genetic Analysis Workshop 18 real genotype data were analyzed. Information on systolic blood pressure and other phenotypic covariates was missing at certain time points for a considerable part of the sample. We observed that the dropout process causing missingness is not independent of the initial systolic blood pressure; that is, the ...
The technique on handling radiation
International Nuclear Information System (INIS)
1997-11-01
This book describes the measurement and handling of radiation. The first part deals with the measurement of radiation. Its contents are: characteristics of radiation measurement techniques, radiation detectors, measurement of energy spectra, measurement of radioactivity, measurement of radiation levels, and counting statistics. The second part explains the handling of radiation: the treatment of sealed radioisotopes, the treatment of unsealed sources, and radiation shielding.
Civilsamfundets ABC: H for Handling
DEFF Research Database (Denmark)
Lund, Anker Brink; Meyer, Gitte
2015-01-01
What is civil society? Anker Brink Lund and Gitte Meyer from the CBS Center for Civil Society Studies go through civil society letter by letter. We have reached H for Handling (action).
Covariant canonical quantization of fields and Bohmian mechanics
International Nuclear Information System (INIS)
Nikolic, H.
2005-01-01
We propose a manifestly covariant canonical method of field quantization based on the classical De Donder-Weyl covariant canonical formulation of field theory. Owing to covariance, the space and time arguments of fields are treated on an equal footing. To achieve both covariance and consistency with standard non-covariant canonical quantization of fields in Minkowski spacetime, it is necessary to adopt a covariant Bohmian formulation of quantum field theory. A preferred foliation of spacetime emerges dynamically owing to a purely quantum effect. The application to a simple time-reparametrization invariant system and quantum gravity is discussed and compared with the conventional non-covariant Wheeler-DeWitt approach. (orig.)
Hierarchical matrix approximation of large covariance matrices
Litvinenko, Alexander; Genton, Marc G.; Sun, Ying
2015-01-01
We approximate large non-structured Matérn covariance matrices of size n×n in the H-matrix format with a log-linear computational cost and storage O(kn log n), where rank k ≪ n is a small integer. Applications are: spatial statistics, machine learning and image analysis, kriging and optimal design.
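The mechanism that makes H-matrix compression work can be shown on a tiny example: off-diagonal blocks of a smooth covariance admit accurate low-rank approximation, so they can be stored in factored form. The kernel below is exponential (Matérn with ν = 1/2), and the sizes, length scale and rank are illustrative assumptions; real H-matrix codes use a recursive block partition and cheaper factorizations than a full SVD.

```python
import numpy as np

# Sketch of the idea behind H-matrix compression of covariance matrices:
# the off-diagonal block between two clusters of points is numerically
# low rank. Sizes, length scale and rank are illustrative assumptions.
n, ell = 200, 2.0
x = np.linspace(0.0, 10.0, n)
C = np.exp(-np.abs(x[:, None] - x[None, :]) / ell)   # exponential covariance

# Off-diagonal block between the two halves of the domain
B = C[: n // 2, n // 2:]

# Truncated SVD gives the best rank-k approximation of the block
k = 4
U, s, Vt = np.linalg.svd(B, full_matrices=False)
B_k = (U[:, :k] * s[:k]) @ Vt[:k]

rel_err = np.linalg.norm(B - B_k) / np.linalg.norm(B)
# the factored block stores about k*(n/2 + n/2) numbers instead of (n/2)^2
```

For this 1-D exponential kernel the off-diagonal block is essentially rank one, so the error is at round-off level; for rougher kernels or higher dimensions the rank grows, but stays far below the block size, which is what yields the O(kn log n) storage quoted in the abstract.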
Zero curvature conditions and conformal covariance
International Nuclear Information System (INIS)
Akemann, G.; Grimm, R.
1992-05-01
Two-dimensional zero curvature conditions are investigated in detail, with special emphasis on conformal properties, and the appearance of covariant higher order differential operators constructed in terms of a projective connection is elucidated. The analysis is based on the Kostant decomposition of simple Lie algebras in terms of representations with respect to their 'principal' SL(2) subalgebra. (author) 27 refs
On superfield covariant quantization in general coordinates
International Nuclear Information System (INIS)
Gitman, D.M.; Moshin, P. Yu.; Tomazelli, J.L.
2005-01-01
We propose a natural extension of the BRST-antiBRST superfield covariant scheme in general coordinates. Thus, the coordinate dependence of the basic tensor fields and scalar density of the formalism is extended from the base supermanifold to the complete set of superfield variables. (orig.)
Covariant field theory of closed superstrings
International Nuclear Information System (INIS)
Siopsis, G.
1989-01-01
The authors construct covariant field theories of both type-II and heterotic strings. Toroidal compactification is also considered. The interaction vertices are based on Witten's vertex representing three strings interacting at the mid-point. For closed strings, the authors thus obtain a bilocal interaction
Conformally covariant composite operators in quantum chromodynamics
International Nuclear Information System (INIS)
Craigie, N.S.; Dobrev, V.K.; Todorov, I.T.
1983-03-01
Conformal covariance is shown to determine renormalization properties of composite operators in QCD and in the six-dimensional φ³ model at the one-loop level. Its relevance to higher order (renormalization group improved) perturbative calculations in the short distance limit is also discussed. Light cone operator product expansions and spectral representations for wave functions in QCD are derived. (author)
Soft covariant gauges on the lattice
Energy Technology Data Exchange (ETDEWEB)
Henty, D.S.; Oliveira, O.; Parrinello, C.; Ryan, S. [Department of Physics and Astronomy, University of Edinburgh, Edinburgh EH9 3JZ, Scotland (UKQCD Collaboration)
1996-12-01
We present an exploratory study of a one-parameter family of covariant, nonperturbative lattice gauge-fixing conditions that can be implemented through a simple Monte Carlo algorithm. We demonstrate that at the numerical level the procedure is feasible, and as a first application we examine the gauge dependence of the gluon propagator. © 1996 The American Physical Society.
Covariant differential calculus on the quantum hyperplane
International Nuclear Information System (INIS)
Wess, J.
1991-01-01
We develop a differential calculus on the quantum hyperplane covariant with respect to the action of the quantum group GL_q(n). This is a concrete example of noncommutative differential geometry. We describe the general constraints for a noncommutative differential calculus and verify that the example given here satisfies all these constraints. We also discuss briefly the integration over the quantum plane. (orig.)
Covariant single-hole optical potential
International Nuclear Information System (INIS)
Kam, J. de
1982-01-01
In this investigation a covariant optical potential model is constructed for scattering processes of mesons from nuclei in which the meson interacts repeatedly with one of the target nucleons. The nuclear binding interactions in the intermediate scattering state are consistently taken into account. In particular for pions and K⁻ projectiles this is important in view of the strong energy dependence of the elementary projectile-nucleon amplitude. Furthermore, this optical potential satisfies unitarity and relativistic covariance. The starting point in our discussion is the three-body model for the optical potential. To obtain a practical covariant theory I formulate the three-body model as a relativistic quasi two-body problem. Expressions for the transition interactions and propagators in the quasi two-body equations are found by imposing the correct s-channel unitarity relations and by using dispersion integrals. This is done in such a way that the correct non-relativistic limit is obtained, avoiding clustering problems. Corrections to the quasi two-body treatment from the Pauli principle and the required ground-state exclusion are taken into account. The covariant equations that we arrive at are amenable to practical calculations. (orig.)
Nonlinear realization of general covariance group
International Nuclear Information System (INIS)
Hamamoto, Shinji
1979-01-01
The structure of the theory resulting from the nonlinear realization of general covariance group is analysed. We discuss the general form of free Lagrangian for Goldstone fields, and propose as a special choice one reasonable form which is shown to describe a gravitational theory with massless tensor graviton and massive vector tordion. (author)
Covariant quantum mechanics on a null plane
International Nuclear Information System (INIS)
Leutwyler, H.; Stern, J.
1977-03-01
Lorentz invariance implies that the null plane wave functions factorize into a kinematical part describing the motion of the system as a whole and an inner wave function that involves the specific dynamical properties of the system - in complete correspondence with the non-relativistic situation. Covariance is equivalent to an angular condition which admits non-trivial solutions
Approximate methods for derivation of covariance data
International Nuclear Information System (INIS)
Tagesen, S.
1992-01-01
Several approaches for the derivation of covariance information for evaluated nuclear data files (EFF2 and ENDF/B-VI) have been developed and used at IRK and ORNL, respectively. Considerations governing the choice of a particular method, depending on the quantity and quality of available data, are presented; advantages and disadvantages are discussed; and examples of results are given.
Optimal covariate designs theory and applications
Das, Premadhis; Mandal, Nripes Kumar; Sinha, Bikas Kumar
2015-01-01
This book primarily addresses the optimality aspects of covariate designs. A covariate model is a combination of ANOVA and regression models. Optimal estimation of the parameters of the model using a suitable choice of designs is of great importance; as such choices allow experimenters to extract maximum information for the unknown model parameters. The main emphasis of this monograph is to start with an assumed covariate model in combination with some standard ANOVA set-ups such as CRD, RBD, BIBD, GDD, BTIBD, BPEBD, cross-over, multi-factor, split-plot and strip-plot designs, treatment control designs, etc. and discuss the nature and availability of optimal covariate designs. In some situations, optimal estimations of both ANOVA and the regression parameters are provided. Global optimality and D-optimality criteria are mainly used in selecting the design. The standard optimality results of both discrete and continuous set-ups have been adapted, and several novel combinatorial techniques have been applied for...
Asymptotics for the minimum covariance determinant estimator
Butler, R.W.; Davies, P.L.; Jhun, M.
1993-01-01
Consistency is shown for the minimum covariance determinant (MCD) estimators of multivariate location and scale and asymptotic normality is shown for the former. The proofs are made possible by showing a separating ellipsoid property for the MCD subset of observations. An analogous property is shown
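The MCD idea can be illustrated with a toy implementation based on concentration steps (the core of Rousseeuw's FAST-MCD algorithm): repeatedly keep the h observations closest, in Mahalanobis distance, to the current subset's location and scatter, and retain the start with the smallest covariance determinant. The data, subset size and start counts below are illustrative assumptions, not the asymptotic setting of the paper.

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy minimum covariance determinant (MCD) estimate via concentration
# steps; all tuning values are illustrative assumptions.
def mcd(X, h, n_starts=10, n_csteps=20):
    n, d = X.shape
    best = (np.inf, None, None)
    for _ in range(n_starts):
        idx = rng.choice(n, d + 1, replace=False)    # small random start
        for _ in range(n_csteps):
            mu = X[idx].mean(axis=0)
            S = np.cov(X[idx], rowvar=False)
            # Mahalanobis distance of every point to the current fit
            dist = np.einsum('ij,jk,ik->i', X - mu, np.linalg.inv(S), X - mu)
            idx = np.argsort(dist)[:h]               # C-step: keep h closest
        det = np.linalg.det(np.cov(X[idx], rowvar=False))
        if det < best[0]:
            best = (det, X[idx].mean(axis=0), np.cov(X[idx], rowvar=False))
    return best[1], best[2]

# 200 clean points plus 40 gross outliers
X = np.vstack([rng.normal(0.0, 1.0, (200, 2)),
               rng.normal(10.0, 0.5, (40, 2))])
mu_mcd, S_mcd = mcd(X, h=120)
# the sample mean is pulled toward the outliers; the MCD location is not
```

The "separating ellipsoid" property proved in the paper is what makes such subset-based estimators analytically tractable: the optimal h-subset is the set of points inside some ellipsoid.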
Equivalent models in covariance structure analysis
LUIJBEN, TCW
1991-01-01
Defining equivalent models as those that reproduce the same set of covariance matrices, necessary and sufficient conditions are stated for the local equivalence of two expanded identified models M1 and M2 when fitting the more restricted model M0. Assuming several regularity conditions, the rank
Brier, Matthew R; Mitra, Anish; McCarthy, John E; Ances, Beau M; Snyder, Abraham Z
2015-11-01
Functional connectivity refers to shared signals among brain regions and is typically assessed in a task free state. Functional connectivity commonly is quantified between signal pairs using Pearson correlation. However, resting-state fMRI is a multivariate process exhibiting a complicated covariance structure. Partial covariance assesses the unique variance shared between two brain regions excluding any widely shared variance, hence is appropriate for the analysis of multivariate fMRI datasets. However, calculation of partial covariance requires inversion of the covariance matrix, which, in most functional connectivity studies, is not invertible owing to rank deficiency. Here we apply Ledoit-Wolf shrinkage (L2 regularization) to invert the high dimensional BOLD covariance matrix. We investigate the network organization and brain-state dependence of partial covariance-based functional connectivity. Although resting-state networks (RSNs) are conventionally defined in terms of shared variance, removal of widely shared variance, surprisingly, improved the separation of RSNs in a spring embedded graphical model. This result suggests that pair-wise unique shared variance plays a heretofore unrecognized role in RSN covariance organization. In addition, application of partial correlation to fMRI data acquired in the eyes open vs. eyes closed states revealed focal changes in uniquely shared variance between the thalamus and visual cortices. This result suggests that partial correlation of resting state BOLD time series reflect functional processes in addition to structural connectivity. Copyright © 2015 Elsevier Inc. All rights reserved.
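The rank-deficiency problem and the shrinkage fix can be sketched directly: with fewer time points than regions the sample covariance is singular, but blending it with a scaled identity makes it invertible, after which partial correlations follow from the precision matrix. For simplicity the sketch uses a fixed shrinkage weight rather than computing the Ledoit-Wolf optimal weight used in the study, and the dimensions are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(3)

# Sketch: shrinkage-regularized inversion of a rank-deficient covariance,
# then partial correlations from the precision matrix. A fixed weight
# alpha stands in for the Ledoit-Wolf optimal weight; dimensions are
# illustrative assumptions.
p, n = 50, 30                     # more variables (regions) than time points
X = rng.normal(0.0, 1.0, (n, p))
S = np.cov(X, rowvar=False)       # rank <= n - 1 < p: not invertible

alpha = 0.2
target = (np.trace(S) / p) * np.eye(p)
S_shrunk = (1 - alpha) * S + alpha * target   # full rank for any alpha > 0

# Partial correlations from the precision (inverse covariance) matrix
Theta = np.linalg.inv(S_shrunk)
d = np.sqrt(np.diag(Theta))
P = -Theta / np.outer(d, d)       # off-diagonal partial correlations
np.fill_diagonal(P, 1.0)
```

Entry P[i, j] measures the association between regions i and j after regressing out all other regions, which is precisely the "unique shared variance" the abstract emphasizes.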
ENDF-6 File 30: Data covariances obtained from parameter covariances and sensitivities
International Nuclear Information System (INIS)
Muir, D.W.
1989-01-01
File 30 is provided as a means of describing the covariances of tabulated cross sections, multiplicities, and energy-angle distributions that result from propagating the covariances of a set of underlying parameters (for example, the input parameters of a nuclear-model code), using an evaluator-supplied set of parameter covariances and sensitivities. Whenever nuclear data are evaluated primarily through the application of nuclear models, the covariances of the resulting data can be described very adequately, and compactly, by specifying the covariance matrix for the underlying nuclear parameters, along with a set of sensitivity coefficients giving the rate of change of each nuclear datum of interest with respect to each of the model parameters. Although motivated primarily by these applications of nuclear theory, use of File 30 is not restricted to any one particular evaluation methodology. It can be used to describe data covariances of any origin, so long as they can be formally separated into a set of parameters with specified covariances and a set of data sensitivities
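The propagation mechanism File 30 encodes can be written in one line of linear algebra: a parameter covariance matrix and a sensitivity matrix determine the covariance of the derived data as cov_data = S · cov_param · Sᵀ. The numbers below are illustrative assumptions, not evaluated nuclear data.

```python
import numpy as np

# Miniature of the File 30 mechanism: propagate a parameter covariance
# through sensitivities to obtain a data covariance. All values are
# illustrative assumptions.
cov_param = np.array([[0.04, 0.01],
                      [0.01, 0.09]])         # covariance of 2 model parameters

# Sensitivities of 3 derived quantities w.r.t. the 2 parameters
S = np.array([[1.0, 0.2],
              [0.5, 0.8],
              [0.0, 1.5]])

cov_data = S @ cov_param @ S.T               # 3x3 covariance of the data
```

Note the compactness argument from the text: the 3x3 data covariance is fully determined by a 2x2 parameter covariance plus six sensitivities, and the gain grows rapidly when thousands of data points depend on a few dozen model parameters.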
Miss Julie: A Psychoanalytic Study
Directory of Open Access Journals (Sweden)
Sonali Jain
2015-10-01
Sigmund Freud theorized that ‘the hero of the tragedy must suffer…to bear the burden of tragic guilt…(that lay in rebellion against some divine or human authority)’. August Strindberg, the Swedish poet, playwright, author and visual artist, like Shakespeare before him, portrayed insanity as the ultimate of tragic conflict. In this paper I seek to explore and reiterate the dynamics of human relationships that are as relevant today as they were in Strindberg’s time. I propose to examine Strindberg’s Miss Julie, a play set in nineteenth century Sweden, through a psychoanalytic lens. The play deals with bold themes of class and sexual identity politics. Notwithstanding the progress made in breaking down gender barriers, the inequalities inherent in a patriarchal system persist in modern society. Miss Julie highlights these imbalances. My analysis of the play deals with issues of culture and psyche, and draws on Freud, Melanie Klein, Lacan, Luce Irigaray and other contemporary feminists. Miss Julie is a discourse on hysteria, which is still pivotal to psychoanalysis. Prominent thinkers like the philosopher Hegel and the psychoanalyst Jacques Lacan have written about the dialectic of the master and the slave – a relationship characterized by dependence, demand and cruelty. The history of human civilization shows beyond any doubt that there is an intimate connection between cruelty and the sexual instinct. An analysis of the text is carried out using the sado-masochistic dynamic as well as the slave-master discourse. I argue that Miss Julie subverts the slave-master relationship. The struggle for dominance and power is closely linked with the theme of sexuality in the unconscious. To quote the English actor and director Alan Rickman, ‘Watching or working on the plays of Strindberg is like seeing the skin, flesh and bones of life separated from each other. Challenging and timeless.’
DEFF Research Database (Denmark)
Barndorff-Nielsen, Ole Eiler; Shephard, N.
2004-01-01
This paper analyses multivariate high frequency financial data using realized covariation. We provide a new asymptotic distribution theory for standard methods such as regression, correlation analysis, and covariance. It will be based on a fixed interval of time (e.g., a day or week), allowing … the number of high frequency returns during this period to go to infinity. Our analysis allows us to study how high frequency correlations, regressions, and covariances change through time. In particular we provide confidence intervals for each of these quantities…
PUFF-IV, Code System to Generate Multigroup Covariance Matrices from ENDF/B-VI Uncertainty Files
International Nuclear Information System (INIS)
2007-01-01
1 - Description of program or function: The PUFF-IV code system processes ENDF/B-VI formatted nuclear cross section covariance data into multigroup covariance matrices. PUFF-IV is the newest release in this series of codes used to process ENDF uncertainty information and to generate the desired multi-group correlation matrix for the evaluation of interest. This version includes corrections and enhancements over previous versions. It is written in Fortran 90 and allows for a more modular design, thus facilitating future upgrades. PUFF-IV enhances support for resonance parameter covariance formats described in the ENDF standard and now handles almost all resonance parameter covariance information in the resolved region, with the exception of the long range covariance sub-subsections. PUFF-IV is normally used in conjunction with an AMPX master library containing group averaged cross section data. Two utility modules are included in this package to facilitate the data interface. The module SMILER allows one to use NJOY generated GENDF files containing group averaged cross section data in conjunction with PUFF-IV. The module COVCOMP allows one to compare two files written in COVERX format. 2 - Methods: Cross section and flux values on a 'super energy grid,' consisting of the union of the required energy group structure and the energy data points in the ENDF/B-V file, are interpolated from the input cross sections and fluxes. Covariance matrices are calculated for this grid and then collapsed to the required group structure. 3 - Restrictions on the complexity of the problem: PUFF-IV cannot process covariance information for energy and angular distributions of secondary particles. PUFF-IV does not process covariance information in Files 34 and 35; nor does it process covariance information in File 40. These new formats will be addressed in a future version of PUFF.
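The collapse step described above (covariances computed on the super energy grid, then condensed to the user's group structure) is, in essence, a flux-weighted sandwich product. A minimal numpy sketch with made-up grid sizes, fluxes, and fine-grid covariance (an illustration of the general technique, not PUFF-IV's actual code or data):

```python
import numpy as np

# Hypothetical "super grid" of 6 fine points collapsing into 2 coarse groups.
fine_to_coarse = np.array([0, 0, 0, 1, 1, 1])    # coarse-group index of each fine point
flux = np.array([1.0, 2.0, 3.0, 2.0, 1.0, 0.5])  # fine-grid flux weights

# Collapse operator: each coarse group is a flux-weighted average of its members.
n_coarse = fine_to_coarse.max() + 1
W = np.zeros((n_coarse, flux.size))
for g in range(n_coarse):
    members = fine_to_coarse == g
    W[g, members] = flux[members] / flux[members].sum()

# Made-up symmetric positive-definite covariance on the fine grid.
rng = np.random.default_rng(0)
A = rng.normal(size=(flux.size, flux.size))
cov_fine = A @ A.T

# Sandwich product collapses the covariance to the coarse group structure.
cov_coarse = W @ cov_fine @ W.T
print(cov_coarse.shape)  # (2, 2)
```

The same sandwich form applies whatever the weighting scheme; only the rows of the collapse operator change.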
Managing distance and covariate information with point-based clustering
Directory of Open Access Journals (Sweden)
Peter A. Whigham
2016-09-01
Full Text Available Abstract Background Geographic perspectives of disease and the human condition often involve point-based observations and questions of clustering or dispersion within a spatial context. These problems involve a finite set of point observations and are constrained by a larger, but finite, set of locations where the observations could occur. Developing a rigorous method for pattern analysis in this context requires handling spatial covariates, a method for constrained finite spatial clustering, and addressing bias in geographic distance measures. An approach, based on Ripley’s K and applied to the problem of clustering of deliberate self-harm (DSH), is presented. Methods Point-based Monte-Carlo simulation of Ripley’s K, accounting for socio-economic deprivation and sources of distance measurement bias, was developed to estimate clustering of DSH at a range of spatial scales. A rotated Minkowski L1 distance metric allowed variation in physical distance and clustering to be assessed. Self-harm data was derived from an audit of 2 years’ emergency hospital presentations (n = 136) in a New Zealand town (population ~50,000). Study area was defined by residential (housing) land parcels representing a finite set of possible point addresses. Results Area-based deprivation was spatially correlated. Accounting for deprivation and distance bias showed evidence for clustering of DSH for spatial scales up to 500 m with a one-sided 95% CI, suggesting that social contagion may be present for this urban cohort. Conclusions Many problems involve finite locations in geographic space that require estimates of distance-based clustering at many scales. A Monte-Carlo approach to Ripley’s K, incorporating covariates and models for distance bias, is crucial when assessing health-related clustering. The case study showed that social network structure defined at the neighbourhood level may account for aspects of neighbourhood clustering of DSH. Accounting for
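The core of the method above, a Monte-Carlo null distribution for Ripley's K over a finite set of candidate locations, using a Minkowski L1 (city-block) metric, can be sketched as follows. The parcel coordinates, case counts, and study-area size are invented for illustration, and edge correction and the deprivation covariate are omitted:

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical finite set of candidate addresses (land parcels) and observed cases.
parcels = rng.uniform(0, 1000, size=(400, 2))     # coordinates in metres
case_idx = rng.choice(400, size=30, replace=False)

def k_hat(points, r, area=1000.0**2):
    """Unadjusted Ripley's K at radius r, using an L1 (city-block) metric."""
    n = len(points)
    d = np.abs(points[:, None, :] - points[None, :, :]).sum(-1)  # pairwise L1 distances
    pairs = (d < r).sum() - n          # exclude the n zero self-distances
    return area * pairs / (n * (n - 1))

r = 200.0
k_obs = k_hat(parcels[case_idx], r)

# Monte-Carlo null: random relabelling of cases over the same finite parcel set.
k_null = np.array([k_hat(parcels[rng.choice(400, 30, replace=False)], r)
                   for _ in range(199)])
p_upper = (k_null >= k_obs).mean()     # one-sided evidence for clustering at scale r
print(round(p_upper, 3))
```

Constraining the null to the finite parcel set, rather than to a continuous window, is what makes the test appropriate for point-based address data.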
Resche-Rigon, Matthieu; White, Ian R
2018-06-01
In multilevel settings such as individual participant data meta-analysis, a variable is 'systematically missing' if it is wholly missing in some clusters and 'sporadically missing' if it is partly missing in some clusters. Previously proposed methods to impute incomplete multilevel data handle either systematically or sporadically missing data, but frequently both patterns are observed. We describe a new multiple imputation by chained equations (MICE) algorithm for multilevel data with arbitrary patterns of systematically and sporadically missing variables. The algorithm is described for multilevel normal data but can easily be extended for other variable types. We first propose two methods for imputing a single incomplete variable: an extension of an existing method and a new two-stage method which conveniently allows for heteroscedastic data. We then discuss the difficulties of imputing missing values in several variables in multilevel data using MICE, and show that even the simplest joint multilevel model implies conditional models which involve cluster means and heteroscedasticity. However, a simulation study finds that the proposed methods can be successfully combined in a multilevel MICE procedure, even when cluster means are not included in the imputation models.
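To illustrate the chained-equations idea that MICE builds on (though not the multilevel, two-stage machinery of the paper), here is a deliberately simplified single-chain sketch on toy bivariate data. A proper implementation would add posterior parameter draws, residual noise, multiple imputations, and cluster means:

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy data: two correlated variables, each with a non-monotone missing pattern.
n = 200
x = rng.normal(size=n)
y = 0.8 * x + rng.normal(scale=0.6, size=n)
data = np.column_stack([x, y])
mask = rng.random((n, 2)) < 0.2             # ~20% missing in each variable
data_miss = np.where(mask, np.nan, data)

# Chained equations: start from a mean fill, then cycle regression imputations.
imp = np.where(np.isnan(data_miss), np.nanmean(data_miss, axis=0), data_miss)
for _ in range(10):                          # a few chained-equation cycles
    for j in range(2):
        other = 1 - j
        obs = ~mask[:, j]
        X = np.column_stack([np.ones(n), imp[:, other]])
        beta, *_ = np.linalg.lstsq(X[obs], imp[obs, j], rcond=None)
        imp[mask[:, j], j] = X[mask[:, j]] @ beta  # deterministic fill (no noise draw)

print(np.corrcoef(imp[:, 0], imp[:, 1])[0, 1])
```

Each variable is regressed on the others using its observed rows only, and its missing rows are refilled; iterating lets the imputations stabilise even when the pattern is non-monotone.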
Survival analysis with functional covariates for partial follow-up studies.
Fang, Hong-Bin; Wu, Tong Tong; Rapoport, Aaron P; Tan, Ming
2016-12-01
Predictive or prognostic analysis plays an increasingly important role in the era of personalized medicine to identify subsets of patients whom the treatment may benefit the most. Although various time-dependent covariate models are available, such models require that covariates be followed in the whole follow-up period. This article studies a new class of functional survival models where the covariates are only monitored in a time interval that is shorter than the whole follow-up period. This paper is motivated by the analysis of a longitudinal study on advanced myeloma patients who received stem cell transplants and T cell infusions after the transplants. The absolute lymphocyte cell counts were collected serially during hospitalization. Those patients are still followed up if they are alive after hospitalization, while their absolute lymphocyte cell counts cannot be measured after that. Another complication is that absolute lymphocyte cell counts are sparsely and irregularly measured. The conventional method using the Cox model with time-varying covariates is not applicable because of the different lengths of observation periods. Analysis based on each single observation obviously underutilizes available information and, more seriously, may yield misleading results. This so-called partial follow-up study design represents an increasingly common predictive modeling problem where we have serial multiple biomarkers up to a certain time point, which is shorter than the total length of follow-up. We therefore propose a solution to the partial follow-up design. The new method combines functional principal components analysis and survival analysis with selection of those functional covariates. It also has the advantage of handling sparse and irregularly measured longitudinal observations of covariates and measurement errors. Our analysis based on functional principal components reveals that it is the patterns of the trajectories of absolute lymphocyte cell counts, instead of
Methods for Mediation Analysis with Missing Data
Zhang, Zhiyong; Wang, Lijuan
2013-01-01
Despite wide applications of both mediation models and missing data techniques, formal discussion of mediation analysis with missing data is still rare. We introduce and compare four approaches to dealing with missing data in mediation analysis including listwise deletion, pairwise deletion, multiple imputation (MI), and a two-stage maximum…
Normal Theory Two-Stage ML Estimator When Data Are Missing at the Item Level.
Savalei, Victoria; Rhemtulla, Mijke
2017-08-01
In many modeling contexts, the variables in the model are linear composites of the raw items measured for each participant; for instance, regression and path analysis models rely on scale scores, and structural equation models often use parcels as indicators of latent constructs. Currently, no analytic estimation method exists to appropriately handle missing data at the item level. Item-level multiple imputation (MI), however, can handle such missing data straightforwardly. In this article, we develop an analytic approach for dealing with item-level missing data, that is, one that obtains a unique set of parameter estimates directly from the incomplete data set and does not require imputations. The proposed approach is a variant of the two-stage maximum likelihood (TSML) methodology, and it is the analytic equivalent of item-level MI. We compare the new TSML approach to three existing alternatives for handling item-level missing data: scale-level full information maximum likelihood, available-case maximum likelihood, and item-level MI. We find that the TSML approach is the best analytic approach, and its performance is similar to item-level MI. We recommend its implementation in popular software and its further study.
Asthma, guides for diagnostic and handling
International Nuclear Information System (INIS)
Salgado, Carlos E; Caballero A, Andres S; Garcia G, Elizabeth
1999-01-01
The paper defines asthma and covers topics including diagnosis, management of asthma, and special situations such as asthma in pregnancy, perioperative management of the asthmatic patient, and occupational asthma.
International Nuclear Information System (INIS)
Yamada, Koji
1987-01-01
An automatic handling device for the steam relief valves (SRVs) is developed in order to achieve a decrease in exposure of workers, an increase in availability factor, improvement in reliability, improvement in safety of operation, and labor saving. A survey is made during a periodical inspection to examine the actual SRV handling operation. The SRV automatic handling device consists of four components: conveyor, armed conveyor, lifting machine, and control/monitoring system. The conveyor is so designed that the existing I-rail installed in the containment vessel can be used without any modification. This is employed for conveying an SRV along the rail. The armed conveyor, designed for a box rail, is used for an SRV installed away from the rail. By using the lifting machine, an SRV installed away from the I-rail is brought to a spot just below the rail so that the SRV can be transferred by the conveyor. The control/monitoring system consists of a control computer, operation panel, TV monitor and annunciator. The SRV handling device is operated by remote control from a control room. Trial equipment is constructed and performance/function testing is carried out using actual SRVs. As a result, it is shown that the SRV handling device requires only two operators to serve satisfactorily. The required time for removal and replacement of one SRV is about 10 minutes. (Nogami, K.)
Missing Data Bias on a Selective Hedging Strategy
Directory of Open Access Journals (Sweden)
Kiss Gábor Dávid
2017-03-01
Full Text Available Foreign exchange rates affect corporate profitability both on the macro and cash-flow level. The current study analyses the bias of missing data on a selective hedging strategy, where currency options are applied in case of Value at Risk (1%) signs. However, there can be special occasions when some data are missing due to a lack of trading activity. This paper focuses on the impact of different missing data handling methods on GARCH and Value at Risk model parameters, because of selective hedging and option pricing based on them. The main added value of the current paper is the comparison of the impact of different methods, such as listwise deletion, mean substitution, and maximum likelihood based Expectation Maximization, on risk management, because this subject has insufficient literature. The current study tested daily closing data of floating currencies from Kenya (KES), Ghana (GHS), South Africa (ZAR), Tanzania (TZS), Uganda (UGX), Gambia (GMD), Madagascar (MGA) and Mozambique (MZN) in USD denomination against the EUR/USD rate between March 8, 2000 and March 6, 2015, acquired from the Bloomberg database. Our results suggested the biases of missingness on Value at Risk and volatility models, presenting significant differences among the number of extreme fluctuations or model parameters. A selective hedging strategy can have different expenditures due to the choice of method. This paper suggests the usage of mean substitution or listwise deletion for daily financial time series due to their tendency to have a close-to-zero first moment.
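Two of the treatments compared above, listwise deletion and mean substitution, are simple enough to sketch directly on a toy return series (synthetic data, not the Bloomberg series used in the paper). Note how mean substitution pads the centre of the distribution before the 1% historical Value at Risk is taken:

```python
import numpy as np

rng = np.random.default_rng(7)

# Hypothetical daily log-returns with a handful of missing observations.
returns = rng.normal(0.0, 0.01, size=500)
miss = rng.choice(500, size=25, replace=False)
r_miss = returns.copy()
r_miss[miss] = np.nan

# Listwise deletion: simply drop the missing days.
r_listwise = r_miss[~np.isnan(r_miss)]

# Mean substitution: replace missing days with the observed mean.
r_meansub = np.where(np.isnan(r_miss), np.nanmean(r_miss), r_miss)

# Daily 1% historical Value at Risk under each treatment.
var_listwise = -np.quantile(r_listwise, 0.01)
var_meansub = -np.quantile(r_meansub, 0.01)
print(var_listwise, var_meansub)
```

Because mean substitution inserts values at the centre of the distribution, it tends to understate dispersion, which is exactly the kind of bias in volatility and VaR parameters the paper measures.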
International Nuclear Information System (INIS)
Olson, P.H.
1994-01-01
The regulations governing the handling of port-generated waste are often national and/or local legislation, whereas the handling of ship-generated waste is governed by the MARPOL Convention in most parts of the world. The handling of waste consists of two main phases: collection and treatment. Waste has to be collected in every port and on board every ship, whereas generally only some wastes are treated, and only to a certain degree, in ports and on board ships. This paper considers the different kinds of waste generated in both ports and on board ships, where and how it is generated, and how it could be collected and treated. The two sources are treated together to show how some ship-generated waste may be treated in port installations primarily constructed for the treatment of the port-generated waste, making integrated use of the available treatment facilities. (author)
Kessels-Habraken, M.M.P.; Schaaf, van der T.W.; Jonge, de J.; Rutte, C.G.
2010-01-01
Medical errors in health care still occur frequently. Unfortunately, errors cannot be completely prevented and 100% safety can never be achieved. Therefore, in addition to error reduction strategies, health care organisations could also implement strategies that promote timely error detection and
Hamaker, E.L.; Grasman, R.P.P.P.
2012-01-01
Many psychological processes are characterized by recurrent shifts between distinct regimes or states. Examples that are considered in this paper are the switches between different states associated with premenstrual syndrome, hourly fluctuations in affect during a major depressive episode, and
DEFF Research Database (Denmark)
Jørgensen, Anders W.; Lundstrøm, Lars H; Wetterslev, Jørn
2014-01-01
BACKGROUND: In randomised trials of medical interventions, the most reliable analysis follows the intention-to-treat (ITT) principle. However, the ITT analysis requires that missing outcome data have to be imputed. Different imputation techniques may give different results and some may lead to bias… of handling missing data in a 60-week placebo controlled anti-obesity drug trial on topiramate. METHODS: We compared an analysis of complete cases with datasets where missing body weight measurements had been replaced using three different imputation methods: LOCF, baseline carried forward (BOCF) and MI…
Nsubuga-Nyombi, Tamara; Sensalire, Simon; Karamagi, Esther; Aloyo, Judith; Byabagambi, John; Rahimzai, Mirwais; Nabitaka, Linda Kisaakye; Calnan, Jacqueline
2018-03-31
As part of efforts to improve the prevention of mother-to-child transmission in Northern Uganda, we explored reasons for poor viral suppression among 122 pregnant and lactating women who were in care, received viral load tests, but had not achieved viral suppression and had more than 1000 copies/mL. Understanding the patient factors associated with low viral suppression was of interest to the Ministry of Health to guide the development of tools and interventions to achieve viral suppression for pregnant and lactating women newly initiating on ART as well as those on ART with unsuppressed viral load. A facility-based cross-sectional and mixed methods study design was used, with retrospective medical record review. We assessed 122 HIV-positive mothers with known low viral suppression across 31 health facilities in Northern Uganda. Adjusted odds ratios were used to determine the covariates of adherence among HIV-positive mothers using logistic regression. A study among health care providers shed further light on predictors of low viral suppression and a history of low early retention. This study was part of a larger national evaluation of the performance of integrated care services for mothers. Adherence, defined as taking antiretroviral medications correctly every day, was low at 67.2%. The covariates of low adherence are: taking other medications in addition to ART, missed appointments in the past 6 months, experience of violence in the past 6 months, and obstacles to treatment. Mothers who were experiencing each of these covariates were less likely to adhere to treatment. These covariates were triangulated with perspectives of health providers as covariates of low adherence and included: long distances to health facility, missed appointments, running out of pills, sharing antiretroviral drugs, violence, and social lifestyles such as multiple sexual partners coupled with non-disclosure to partners. Inadequate counseling, stigma, and lack of client identity are
Directory of Open Access Journals (Sweden)
K. Karthikeyan
2012-10-01
Full Text Available This paper describes the application of an evolutionary algorithm, Restart Covariance Matrix Adaptation Evolution Strategy (RCMA-ES), to the Generation Expansion Planning (GEP) problem. RCMA-ES is a class of continuous Evolutionary Algorithm (EA) derived from the concept of self-adaptation in evolution strategies, which adapts the covariance matrix of a multivariate normal search distribution. The original GEP problem is modified by incorporating the Virtual Mapping Procedure (VMP). The GEP problem of synthetic test systems for 6-year, 14-year and 24-year planning horizons having five types of candidate units is considered. Two different constraint-handling methods are incorporated and the impact of each method has been compared. In addition, comparison and validation have also been made with the dynamic programming method.
Determination of covariant Schwinger terms in anomalous gauge theories
International Nuclear Information System (INIS)
Kelnhofer, G.
1991-01-01
A functional integral method is used to determine equal-time commutators between the covariant currents and the covariant Gauss-law operators in theories which are affected by an anomaly. By using a differential geometrical setup we show how the derivation of consistent and covariant Schwinger terms can be understood on an equal footing. We find a modified consistency condition for the covariant anomaly. As a by-product, the Bardeen-Zumino functional, which relates consistent and covariant anomalies, can be interpreted as a connection on a certain line bundle over all gauge potentials. Finally the covariant commutator anomalies are calculated for the two- and four-dimensional cases. (orig.)
Fischer, M. J.
2014-02-01
There are many different methods for investigating the coupling between two climate fields, which are all based on the multivariate regression model. Each different method of solving the multivariate model has its own attractive characteristics, but often the suitability of a particular method for a particular problem is not clear. Continuum regression methods search the solution space between the conventional methods and thus can find regression model subspaces that mix the attractive characteristics of the end-member subspaces. Principal covariates regression is a continuum regression method that is easily applied to climate fields and makes use of two end-members: principal components regression and redundancy analysis. In this study, principal covariates regression is extended to additionally span a third end-member (partial least squares or maximum covariance analysis). The new method, regularized principal covariates regression, has several attractive features including the following: it easily applies to problems in which the response field has missing values or is temporally sparse, it explores a wide range of model spaces, and it seeks a model subspace that will, for a set number of components, have a predictive skill that is the same or better than conventional regression methods. The new method is illustrated by applying it to the problem of predicting the southern Australian winter rainfall anomaly field using the regional atmospheric pressure anomaly field. Regularized principal covariates regression identifies four major coupled patterns in these two fields. The two leading patterns, which explain over half the variance in the rainfall field, are related to the subtropical ridge and features of the zonally asymmetric circulation.
Paragrassmann analysis and covariant quantum algebras
International Nuclear Information System (INIS)
Filippov, A.T.; Isaev, A.P.; Kurdikov, A.B.; Pyatov, P.N.
1993-01-01
This report is devoted to the consideration, from the algebraic point of view, of paragrassmann algebras with one and many paragrassmann generators Θ_i, Θ_i^{p+1} = 0. We construct the paragrassmann versions of the Heisenberg algebra. For the special case, this algebra is nothing but the algebra for coordinates and derivatives considered in the context of covariant differential calculus on the quantum hyperplane. The parameter of deformation q in our case is a (p+1)-th root of unity. Our construction is nondegenerate only for even p. Taking bilinear combinations of paragrassmann derivatives and coordinates we realize generators for the covariant quantum algebras as tensor products of (p+1) x (p+1) matrices. (orig./HSI)
Covariant holography of a tachyonic accelerating universe
Energy Technology Data Exchange (ETDEWEB)
Rozas-Fernandez, Alberto [Consejo Superior de Investigaciones Cientificas, Instituto de Fisica Fundamental, Madrid (Spain); University of Portsmouth, Institute of Cosmology and Gravitation, Portsmouth (United Kingdom)
2014-08-15
We apply the holographic principle to a flat dark energy dominated Friedmann-Robertson-Walker spacetime filled with a tachyon scalar field with constant equation of state w = p/ρ, both for w > -1 and w < -1. By using a geometrical covariant procedure, which allows the construction of holographic hypersurfaces, we have obtained for each case the position of the preferred screen and have then compared these with those obtained by using the holographic dark energy model with the future event horizon as the infrared cutoff. In the phantom scenario, one of the two obtained holographic screens is placed on the big rip hypersurface, both for the covariant holographic formalism and the holographic phantom model. It is also analyzed whether the existence of these preferred screens allows a mathematically consistent formulation of fundamental theories based on the existence of an S-matrix at infinite distances. (orig.)
Twisted covariant noncommutative self-dual gravity
International Nuclear Information System (INIS)
Estrada-Jimenez, S.; Garcia-Compean, H.; Obregon, O.; Ramirez, C.
2008-01-01
A twisted covariant formulation of noncommutative self-dual gravity is presented. The formulation for constructing twisted noncommutative Yang-Mills theories is used. It is shown that the noncommutative torsion is solved at any order of the θ expansion in terms of the tetrad and some extra fields of the theory. In the process the first order expansion in θ for the Plebanski action is explicitly obtained.
Superfield quantization in Sp(2) covariant formalism
Lavrov, P M
2001-01-01
The rules of superfield Sp(2) covariant quantization of arbitrary gauge theories are generalized to the case where the gauge is introduced through derivative equations for the gauge functional. Possible realizations of the extended antibrackets are considered, and it is shown that only one of these realizations is compatible with the transformations of extended BRST symmetry in the form of supertranslations along the Grassmann superspace coordinates.
Torsion and geometrostasis in covariant superstrings
International Nuclear Information System (INIS)
Zachos, C.
1985-01-01
The covariant action for freely propagating heterotic superstrings consists of a metric and a torsion term with a special relative strength. It is shown that the strength for which torsion flattens the underlying 10-dimensional superspace geometry is precisely that which yields free oscillators on the light cone. This is in complete analogy with the geometrostasis of two-dimensional sigma-models with Wess-Zumino interactions. 13 refs
Covariant derivatives of the Berezin transform
Czech Academy of Sciences Publication Activity Database
Engliš, Miroslav; Otáhalová, R.
2011-01-01
Vol. 363, No. 10 (2011), pp. 5111-5129 ISSN 0002-9947 R&D Projects: GA AV ČR IAA100190802 Keywords: Berezin transform * Berezin symbol * covariant derivative Subject RIV: BA - General Mathematics Impact factor: 1.093, year: 2011 http://www.ams.org/journals/tran/2011-363-10/S0002-9947-2011-05111-1/home.html
Covariance expressions for eigenvalue and eigenvector problems
Liounis, Andrew J.
There are a number of important scientific and engineering problems whose solutions take the form of an eigenvalue-eigenvector problem. Some notable examples include solutions to linear systems of ordinary differential equations, controllability of linear systems, finite element analysis, chemical kinetics, fitting ellipses to noisy data, and optimal estimation of attitude from unit vectors. In many of these problems, having knowledge of the eigenvalue and eigenvector Jacobians is either necessary or is nearly as important as having the solution itself. For instance, Jacobians are necessary to find the uncertainty in a computed eigenvalue or eigenvector estimate. This uncertainty, which is usually represented as a covariance matrix, has been well studied for problems similar to the eigenvalue and eigenvector problem, such as singular value decomposition. There has been substantially less research on the covariance of an optimal estimate originating from an eigenvalue-eigenvector problem. In this thesis we develop two general expressions for the Jacobians of eigenvalues and eigenvectors with respect to the elements of their parent matrix. The expressions developed make use of only the parent matrix and the eigenvalue and eigenvector pair under consideration. In addition, they are applicable to any general matrix (including complex valued matrices, eigenvalues, and eigenvectors) as long as the eigenvalues are simple. Alongside this, we develop expressions that determine the uncertainty in a vector estimate obtained from an eigenvalue-eigenvector problem given the uncertainty of the terms of the matrix. The Jacobian expressions developed are numerically validated with forward finite differencing, and the covariance expressions are validated using Monte Carlo analysis. Finally, the results from this work are used to determine covariance expressions for a variety of estimation problem examples and are also applied to the design of a dynamical system.
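For a symmetric matrix with a simple eigenvalue, the finite-difference Jacobian and first-order covariance propagation described above can be sketched as follows (a toy 2x2 matrix and an assumed i.i.d. covariance on the matrix entries, not the thesis's analytic expressions):

```python
import numpy as np

def eigval_jacobian(A, k, h=1e-6):
    """Jacobian of the k-th (sorted) eigenvalue w.r.t. the entries of A,
    by forward finite differences."""
    lam0 = np.sort(np.linalg.eigvals(A).real)[k]
    J = np.zeros(A.size)
    for idx in range(A.size):
        Ap = A.copy().ravel()
        Ap[idx] += h
        lam = np.sort(np.linalg.eigvals(Ap.reshape(A.shape)).real)[k]
        J[idx] = (lam - lam0) / h
    return J

A = np.array([[2.0, 1.0], [1.0, 3.0]])
J = eigval_jacobian(A, k=1)              # Jacobian of the largest eigenvalue

# First-order propagation: var(lambda) ~ J P J^T for entry covariance P.
P = 0.01 * np.eye(A.size)                # assumed i.i.d. noise on the entries
var_lam = J @ P @ J
print(var_lam)
```

For a symmetric matrix this Jacobian reduces to the outer product of the eigenvector with itself, which is why the propagated variance here comes out very close to 0.01.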
Linear Covariance Analysis for a Lunar Lander
Jang, Jiann-Woei; Bhatt, Sagar; Fritz, Matthew; Woffinden, David; May, Darryl; Braden, Ellen; Hannan, Michael
2017-01-01
A next-generation lunar lander Guidance, Navigation, and Control (GNC) system, which includes a state-of-the-art optical sensor suite, is proposed in a concept design cycle. The design goal is to allow the lander to softly land within the prescribed landing precision. The achievement of this precision landing requirement depends on proper selection of the sensor suite. In this paper, a robust sensor selection procedure is demonstrated using a Linear Covariance (LinCov) analysis tool developed by Draper.
The covariant formulation of f(T) gravity
International Nuclear Information System (INIS)
Krššák, Martin; Saridakis, Emmanuel N
2016-01-01
We show that the well-known problem of frame dependence and violation of local Lorentz invariance in the usual formulation of f(T) gravity is a consequence of neglecting the role of spin connection. We re-formulate f(T) gravity starting from, instead of the ‘pure tetrad’ teleparallel gravity, the covariant teleparallel gravity, using both the tetrad and the spin connection as dynamical variables, resulting in a fully covariant, consistent, and frame-independent version of f(T) gravity, which does not suffer from the notorious problems of the usual, pure tetrad, f(T) theory. We present the method to extract solutions for the most physically important cases, such as the Minkowski, the Friedmann–Robertson–Walker (FRW) and the spherically symmetric ones. We show that in covariant f(T) gravity we are allowed to use an arbitrary tetrad in an arbitrary coordinate system along with the corresponding spin connection, resulting always in the same physically relevant field equations. (paper)
Development of covariance capabilities in EMPIRE code
Energy Technology Data Exchange (ETDEWEB)
Herman,M.; Pigni, M.T.; Oblozinsky, P.; Mughabghab, S.F.; Mattoon, C.M.; Capote, R.; Cho, Young-Sik; Trkov, A.
2008-06-24
The nuclear reaction code EMPIRE has been extended to provide evaluation capabilities for neutron cross section covariances in the thermal, resolved resonance, unresolved resonance and fast neutron regions. The Atlas of Neutron Resonances by Mughabghab is used as a primary source of information on uncertainties at low energies. Care is taken to ensure consistency among the resonance parameter uncertainties and those for thermal cross sections. The resulting resonance parameter covariances are formatted in the ENDF-6 File 32. In the fast neutron range our methodology is based on model calculations with the code EMPIRE combined with experimental data through several available approaches. The model-based covariances can be obtained using deterministic (Kalman) or stochastic (Monte Carlo) propagation of model parameter uncertainties. We show that these two procedures yield comparable results. The Kalman filter and/or the generalized least square fitting procedures are employed to incorporate experimental information. We compare the two approaches analyzing results for the major reaction channels on ⁸⁹Y. We also discuss a long-standing issue of unreasonably low uncertainties and link it to the rigidity of the model.
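The two propagation routes mentioned, stochastic (Monte Carlo) and deterministic (first-order sandwich), can be illustrated on a toy cross-section model; the exponential form, the group grid, and the parameter uncertainties below are invented stand-ins, not EMPIRE's physics:

```python
import numpy as np

rng = np.random.default_rng(3)

# Toy "model": cross section over 8 energy groups with two uncertain parameters.
E = np.linspace(0.1, 5.0, 8)               # MeV, hypothetical group centres
def sigma(E, a, b):
    return a * np.exp(-b * E)               # invented stand-in for a reaction model

a0, b0 = 2.0, 0.4                           # nominal parameter values
sd_a, sd_b = 0.1, 0.02                      # assumed parameter uncertainties

# Stochastic route: Monte Carlo sampling of the model parameters.
samples = np.array([sigma(E, rng.normal(a0, sd_a), rng.normal(b0, sd_b))
                    for _ in range(5000)])
cov_mc = np.cov(samples, rowvar=False)      # model-based covariance, shape (8, 8)

# Deterministic route: first-order sensitivity (sandwich) propagation.
S = np.column_stack([np.exp(-b0 * E), -a0 * E * np.exp(-b0 * E)])  # d(sigma)/d(a, b)
P = np.diag([sd_a**2, sd_b**2])
cov_lin = S @ P @ S.T

# For a near-linear model the two procedures yield comparable results.
err = np.linalg.norm(cov_mc - cov_lin) / np.linalg.norm(cov_lin)
print(err)
```

The close agreement mirrors the abstract's observation that deterministic and stochastic propagation give comparable covariances when the model responds nearly linearly over the parameter uncertainties.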
Covariant electrodynamics in linear media: Optical metric
Thompson, Robert T.
2018-03-01
While the postulate of covariance of Maxwell's equations for all inertial observers led Einstein to special relativity, it was the further demand of general covariance—form invariance under general coordinate transformations, including between accelerating frames—that led to general relativity. Several lines of inquiry over the past two decades, notably the development of metamaterial-based transformation optics, have spurred a greater interest in the role of geometry and space-time covariance for electrodynamics in ponderable media. I develop a generally covariant, coordinate-free framework for electrodynamics in general dielectric media residing in curved background space-times. In particular, I derive a relation for the spatial medium parameters measured by an arbitrary timelike observer. In terms of those medium parameters I derive an explicit expression for the pseudo-Finslerian optical metric of birefringent media and show how it reduces to a pseudo-Riemannian optical metric for nonbirefringent media. This formulation provides a basis for a unified approach to ray and congruence tracing through media in curved space-times that may smoothly vary among positively refracting, negatively refracting, and vacuum.
SG39 Deliverables. Comments on Covariance Data
International Nuclear Information System (INIS)
Yokoyama, Kenji
2015-01-01
The covariance matrix of a scattered data set, x_i (i=1,n), must be symmetric and positive-definite. As one of WPEC/SG39's contributions to the SG40/CIELO project, several comments and recommendations on covariance data are described here from the viewpoint of nuclear-data users. To make the comments concrete and useful for nuclear-data evaluators, the covariance data of the latest evaluated nuclear data libraries, JENDL-4.0 and ENDF/B-VII.1, are treated here as representative materials. The surveyed nuclides are the five isotopes most important for fast reactor applications. The nuclides, reactions and energy regions dealt with are the following: Pu-239: fission (2.5∼10 keV) and capture (2.5∼10 keV); U-235: fission (500 eV∼10 keV) and capture (500 eV∼30 keV); U-238: fission (1∼10 MeV), capture (below 20 keV, 20∼150 keV), inelastic (above 100 keV) and elastic (above 20 keV); Fe-56: elastic (below 850 keV) and average scattering cosine (above 10 keV); and Na-23: capture (600 eV∼600 keV), inelastic (above 1 MeV) and elastic (around 2 keV)
Allahbadia, Gautam N
2002-09-01
The epidemic of gender selection is ravaging countries like India and China. Approximately fifty million women are "missing" from the Indian population. Generally, three principal causes are given: female infanticide, better food and health care for boys, and maternal death at childbirth. Prenatal sex determination and the abortion of female fetuses threaten to skew the sex ratio to new highs. Estimates of the number of female fetuses being destroyed every year in India vary from two million to five million. This review from India attempts to summarize all the currently available methods of sex selection and also highlights current medical practice regarding the subject in south-east Asia.
Measurement of Missing Transverse Energy
The ATLAS Collaboration
2009-01-01
This note discusses the overall ATLAS detector performance for the reconstruction of the missing transverse energy, ETmiss. Two reconstruction algorithms are discussed and their performance is evaluated for a variety of simulated physics processes which probe different topologies and different total transverse energy regimes. In addition, effects of fake ETmiss, resulting from instrumental effects and from false reconstructions are investigated. Finally, studies with first data, corresponding to an integrated luminosity of 100 pb-1, are suggested which can be used to assess and calibrate the ETmiss performance at the startup of data taking.
Pogue, David
2010-01-01
In early reviews, geeks raved about Windows 7. But if you're an ordinary mortal, learning what this new system is all about will be challenging. Fear not: David Pogue's Windows 7: The Missing Manual comes to the rescue. Like its predecessors, this book illuminates its subject with reader-friendly insight, plenty of wit, and hardnosed objectivity for beginners as well as veteran PC users. Windows 7 fixes many of Vista's most painful shortcomings. It's speedier, has fewer intrusive and nagging screens, and is more compatible with peripherals. Plus, Windows 7 introduces a slew of new features,
Are there missing convective currents?
International Nuclear Information System (INIS)
Chen, C.Y.
1992-01-01
It is revealed in this letter that, as far as distribution functions obtained from gyrokinetic equations are concerned, the standard formulae for evaluating currents in plasmas are not applicable, because those distribution functions are given in a moving coordinate frame whose motion is intrinsically related to the perturbed fields. With heuristic and analytic approaches, appropriate formulae are obtained for evaluating several types of currents in plasmas, some of which have been missing in previous approaches. (author). 6 refs, 1 fig
Camille R. Whitney; Jing Liu
2017-01-01
For schools and teachers to help students develop knowledge and skills, students need to show up to class. Yet absenteeism is prevalent, especially in secondary schools. This study uses a rich data set tracking class attendance by day for over 50,000 middle and high school students from an urban district in academic years 2007–2008 through 2012–2013. Our results extend and modify the extant findings on absenteeism that have been based almost exclusively on full-day absenteeism, missing class-...
International Nuclear Information System (INIS)
Van der Merwe, W.G.
1984-01-01
The report deals with SEMFIP, a computer code for determining magnetic field measurements. The program is written in FORTRAN and ASSEMBLER. The preparations for establishing SEMFIP, the actual measurements, data handling and the problems that were experienced are discussed. Details on the computer code are supplied in an appendix
Welding method by remote handling
International Nuclear Information System (INIS)
Hashinokuchi, Minoru.
1994-01-01
Water is charged into a pit (or a water reservoir) and an article to be welded is placed on a support in the pit by remote handling. A steel plate is disposed so as to cover the article to be welded by remote handling. The welding device is positioned to the portion to be welded and fixed in a state where the article to be welded is shielded from radiation by water and the steel plate. Water in the pit is drained till the portion to be welded is exposed to the atmosphere. Then, welding is conducted. After completion of the welding, water is charged again to the pit and the welding device and fixing jigs are decomposed in a state where the article to be welded is shielded again from radiation by water and the steel plate. Subsequently, the steel plate is removed by remote handling. Then, the article to be welded is returned from the pit to a temporary placing pool by remote handling. This can reduce operator's exposure. Further, since the amount of the shielding materials can be minimized, the amount of radioactive wastes can be decreased. (I.N.)
Flexible Bayesian Dynamic Modeling of Covariance and Correlation Matrices
Lan, Shiwei; Holbrook, Andrew; Fortin, Norbert J.; Ombao, Hernando; Shahbaba, Babak
2017-01-01
Modeling covariance (and correlation) matrices is a challenging problem due to the large dimensionality and positive-definiteness constraint. In this paper, we propose a novel Bayesian framework based on decomposing the covariance matrix
ERRORJ. Covariance processing code. Version 2.2
International Nuclear Information System (INIS)
Chiba, Go
2004-07-01
ERRORJ is a covariance processing code that can produce covariance data of multi-group cross sections, which are essential for uncertainty analyses of nuclear parameters such as the neutron multiplication factor. The ERRORJ code can process covariance data of cross sections including resonance parameters and angular and energy distributions of secondary neutrons; such covariance data cannot be processed by the other covariance processing codes. ERRORJ has been modified and version 2.2 has been developed. This document describes the modifications and how to use the code. The main modifications are as follows: non-diagonal elements of covariance matrices are calculated in the resonance energy region; an option for high-speed calculation is implemented; the perturbation amount is optimized in sensitivity calculations; the effect of resonance self-shielding on the covariance of multi-group cross sections can be considered; and it is possible to read a compact covariance format proposed by N.M. Larson. (author)
New perspective in covariance evaluation for nuclear data
International Nuclear Information System (INIS)
Kanda, Y.
1992-01-01
Methods of nuclear data evaluation have been highly developed during the past decade, especially after the introduction of the concept of covariance. This makes the question of how to evaluate covariance matrices for nuclear data of utmost importance. It can be said that covariance evaluation is the nuclear data evaluation itself, because the covariance matrix has a quantitatively decisive function in current evaluation methods. The covariance primarily represents experimental uncertainties. However, correlations of individual uncertainties between different data must be taken into account, and this cannot be done without detailed physical consideration of the experimental conditions. This procedure depends on the evaluator, and so does the estimated covariance. The mathematical properties of the covariance have been intensively discussed. Its physical properties should be studied in order to apply it to nuclear data evaluation, and, in this report, they are reviewed to provide a basis for further development of covariance applications. (orig.)
International Nuclear Information System (INIS)
Mazure, A.
1989-01-01
The first evidence for missing mass or dark matter dates from the 1930s. On the one hand, Oort noted that in the solar neighbourhood the mass of the stars (inferred from number counts) cannot account for their observed velocities. On the other hand, observations on the sky of various galaxy condensations, such as the Coma cluster, suggested that they are actual bound systems and not merely statistical fluctuations. Under this assumption, however, Zwicky concluded that the velocity dispersion of galaxies in Coma required 100 times more mass than is contained in the galaxies. Since that period, refined observations and analyses and a reevaluation of the cosmic distance scale have reduced this factor, but the problem is still present. It is particularly striking for spiral galaxies, where systematic observations of rotation curves lead one to infer the presence of massive spherical halos. These dynamical lines of evidence form the first missing mass problem. The second one appears with the development of Grand Unified Theories, for which the natural laboratory is the very early Universe. A consequence of these theories is that our Universe could be closed by exotic particles which interact only gravitationally.
High-dimensional covariance estimation with high-dimensional data
Pourahmadi, Mohsen
2013-01-01
Methods for estimating sparse and large covariance matrices Covariance and correlation matrices play fundamental roles in every aspect of the analysis of multivariate data collected from a variety of fields including business and economics, health care, engineering, and environmental and physical sciences. High-Dimensional Covariance Estimation provides accessible and comprehensive coverage of the classical and modern approaches for estimating covariance matrices as well as their applications to the rapidly developing areas lying at the intersection of statistics and mac
Development of software for handling ship's pharmacy.
Nittari, Giulio; Peretti, Alessandro; Sibilio, Fabio; Ioannidis, Nicholas; Amenta, Francesco
2016-01-01
Ships are required to carry a given amount of medicinal products and medications depending on the flag and the type of vessel. These medicines are stored in the so called ship's "medicine chest" or more properly - a ship pharmacy. Owing to the progress of medical sciences and to the increase in the mean age of seafarers employed on board ships, the number of pharmaceutical products and medical devices required by regulations to be carried on board ships is increasing. This may make handling of the ship's medicine chest a problem primarily on large ships sailing on intercontinental routes due to the difficulty in identifying the correspondence between medicines obtained abroad with those available at the national market. To minimise these problems a tool named Pharmacy Ship (acronym: PARSI) has been developed. The application PARSI is based on a database containing the information about medicines and medical devices required by different countries regulations. In the first application the system was standardised to comply with the Italian regulations issued on the 1st October, 2015 which entered into force on the 18 January 2016. Thanks to PARSI it was possible to standardize the inventory procedures, facilitate the work of maritime health authorities and make it easier for the crew, not professional in the field, to handle the 'medicine chest' correctly by automating the procedures for medicines management. As far as we know there are no other similar tools available at the moment. The application of the software, as well as the automation of different activities, currently carried out manually, will help manage (qualitatively and quantitatively) the ship's pharmacy. The system developed in this study has proved to be an effective tool which serves to guarantee the compliance of the ship pharmacy with regulations of the flag state in terms of medicinal products and medications. Sharing the system with the Telemedical Maritime Assistance Service may result in
Prague, Melanie; Wang, Rui; Stephens, Alisa; Tchetgen Tchetgen, Eric; DeGruttola, Victor
2016-12-01
Semi-parametric methods are often used for the estimation of intervention effects on correlated outcomes in cluster-randomized trials (CRTs). When outcomes are missing at random (MAR), Inverse Probability Weighted (IPW) methods incorporating baseline covariates can be used to deal with informative missingness. Also, augmented generalized estimating equations (AUG) correct for imbalance in baseline covariates but need to be extended for MAR outcomes. However, in the presence of interactions between treatment and baseline covariates, neither method alone produces consistent estimates for the marginal treatment effect if the model for interaction is not correctly specified. We propose an AUG-IPW estimator that weights by the inverse of the probability of being a complete case and allows different outcome models in each intervention arm. This estimator is doubly robust (DR); it gives correct estimates whether the missing data process or the outcome model is correctly specified. We consider the problem of covariate interference which arises when the outcome of an individual may depend on covariates of other individuals. When interfering covariates are not modeled, the DR property prevents bias as long as covariate interference is not present simultaneously for the outcome and the missingness. An R package is developed implementing the proposed method. An extensive simulation study and an application to a CRT of HIV risk reduction-intervention in South Africa illustrate the method. © 2016, The International Biometric Society.
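The inverse-probability-weighting idea underlying the estimator can be sketched on a single outcome; here the true observation probabilities are used directly, whereas in practice they would be estimated, for example by logistic regression on baseline covariates:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 20000
x = rng.normal(size=n)                    # baseline covariate
y = 2.0 + 1.0 * x + rng.normal(size=n)    # outcome; true marginal mean is 2.0

# MAR missingness: probability of being observed depends on x only.
p_obs = 1.0 / (1.0 + np.exp(-(0.5 + x)))
r = rng.random(n) < p_obs                 # True = complete case

# The naive complete-case mean is biased (larger x is observed more often),
# while weighting complete cases by 1/p_obs corrects for the missingness.
cc_mean = y[r].mean()
ipw_mean = np.sum(y[r] / p_obs[r]) / np.sum(1.0 / p_obs[r])
```

The doubly robust estimator in the abstract additionally augments this weighting with an outcome model, remaining consistent if either component is correctly specified.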
Comparative Analyses of Phenotypic Trait Covariation within and among Populations.
Peiman, Kathryn S; Robinson, Beren W
2017-10-01
Many morphological, behavioral, physiological, and life-history traits covary across the biological scales of individuals, populations, and species. However, the processes that cause traits to covary also change over these scales, challenging our ability to use patterns of trait covariance to infer process. Trait relationships are also widely assumed to have generic functional relationships with similar evolutionary potentials, and even though many different trait relationships are now identified, there is little appreciation that these may influence trait covariation and evolution in unique ways. We use a trait-performance-fitness framework to classify and organize trait relationships into three general classes, address which ones more likely generate trait covariation among individuals in a population, and review how selection shapes phenotypic covariation. We generate predictions about how trait covariance changes within and among populations as a result of trait relationships and in response to selection and consider how these can be tested with comparative data. Careful comparisons of covariation patterns can narrow the set of hypothesized processes that cause trait covariation when the form of the trait relationship and how it responds to selection yield clear predictions about patterns of trait covariation. We discuss the opportunities and limitations of comparative approaches to evaluate hypotheses about the evolutionary causes and consequences of trait covariation and highlight the importance of evaluating patterns within populations replicated in the same and in different selective environments. Explicit hypotheses about trait relationships are key to generating effective predictions about phenotype and its evolution using covariance data.
Covariant differential calculus on quantum spheres of odd dimension
International Nuclear Information System (INIS)
Welk, M.
1998-01-01
Covariant differential calculus on the quantum spheres S_q^{2N-1} is studied. Two classification results for covariant first order differential calculi are proved. As an important step towards a description of the noncommutative geometry of the quantum spheres, a framework of covariant differential calculus is established, including first and higher order calculi and a symmetry concept. (author)
On the covariance matrices in the evaluated nuclear data
International Nuclear Information System (INIS)
Corcuera, R.P.
1983-05-01
The implications of the uncertainties of nuclear data for reactor calculations are shown. The concepts of variance, covariance and correlation are introduced first through intuitive definitions and then through statistical theory. The format of the covariance data for ENDF/B is explained, and the formulas to obtain the multigroup covariances are given. (Author)
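The variance/covariance/correlation concepts can be illustrated numerically; the shared-error construction below is an illustrative example (a common systematic component correlating two measured quantities), not taken from the report:

```python
import numpy as np

rng = np.random.default_rng(2)
n = 10000
common = rng.normal(size=n)                # shared (systematic) error component
a = 1.0 * common + rng.normal(size=n)      # measured quantity A
b = 0.8 * common + rng.normal(size=n)      # measured quantity B

cov = np.cov(a, b)          # 2x2 covariance matrix: variances on the diagonal
corr = np.corrcoef(a, b)    # correlation matrix: covariance scaled to [-1, 1]
```

Here the off-diagonal covariance is nonzero purely because of the shared error, which is exactly the kind of inter-datum correlation the ENDF/B covariance files are designed to record.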
Evaluation of covariance in theoretical calculation of nuclear data
International Nuclear Information System (INIS)
Kikuchi, Yasuyuki
1981-01-01
Covariances of the cross sections are discussed for statistical model calculations. Two categories of covariance are considered: one caused by the model approximation and the other by errors in the model parameters. As an example, the covariances are calculated for ^100Ru. (author)
Covariate Imbalance and Precision in Measuring Treatment Effects
Liu, Xiaofeng Steven
2011-01-01
Covariate adjustment can increase the precision of estimates by removing unexplained variance from the error in randomized experiments, although chance covariate imbalance tends to counteract the improvement in precision. The author develops an easy measure to examine chance covariate imbalance in randomization by standardizing the average…
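One common way to quantify chance covariate imbalance between randomized arms is the standardized mean difference; this is an illustrative formula, not necessarily the author's exact measure:

```python
import numpy as np

def std_mean_diff(x_treat, x_ctrl):
    # Standardized mean difference: difference in arm means divided by the
    # pooled standard deviation (a conventional imbalance diagnostic).
    pooled_sd = np.sqrt((x_treat.var(ddof=1) + x_ctrl.var(ddof=1)) / 2)
    return (x_treat.mean() - x_ctrl.mean()) / pooled_sd

rng = np.random.default_rng(3)
x = rng.normal(size=200)              # a baseline covariate
arm = rng.permutation(200) < 100      # random 50/50 assignment
smd = std_mean_diff(x[arm], x[~arm])  # chance imbalance from randomization
```

Under pure randomization the SMD is centered at zero with spread shrinking as the sample grows, which is the "chance covariate imbalance" the abstract refers to.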
Earth Observation System Flight Dynamics System Covariance Realism
Zaidi, Waqar H.; Tracewell, David
2016-01-01
This presentation applies a covariance realism technique to the National Aeronautics and Space Administration (NASA) Earth Observation System (EOS) Aqua and Aura spacecraft based on inferential statistics. The technique consists of three parts: collection and calculation of definitive state estimates through orbit determination, calculation of covariance realism test statistics at each covariance propagation point, and proper assessment of those test statistics.
Evaluation of covariance for 238U cross sections
International Nuclear Information System (INIS)
Kawano, Toshihiko; Nakamura, Masahiro; Matsuda, Nobuyuki; Kanda, Yukinori
1995-01-01
Covariances of ^238U are generated using analytic functions for representation of the cross sections. The covariances of the (n,2n) and (n,3n) reactions are derived with a spline function, while the covariances of the total and the inelastic scattering cross sections are estimated with a linearized nuclear-model calculation. (author)
MATXTST, Basic Operations for Covariance Matrices
International Nuclear Information System (INIS)
Geraldo, Luiz P.; Smith, Donald
1989-01-01
1 - Description of program or function: MATXTST and MATXTST1 perform the following operations for a covariance matrix: - test for singularity; - test for positive definiteness; - compute the inverse if the matrix is non-singular; - compute the determinant; - determine the number of positive, negative, and zero eigenvalues; - examine all possible 3 X 3 cross correlations within a sub-matrix corresponding to a leading principal minor which is non-positive definite. While the two programs utilize the same input, the calculational procedures employed are somewhat different and their functions are complementary. The available input options include: i) the full covariance matrix, ii) the basic variables plus the relative covariance matrix, or iii) uncertainties in the basic variables plus the correlation matrix. 2 - Method of solution: MATXTST employs LINPACK subroutines SPOFA and SPODI to test for positive definiteness and to perform further optional calculations. Subroutine SPOFA factors a symmetric matrix M using the Cholesky algorithm to determine the elements of a matrix R which satisfies the relation M=R'R, where R' is the transpose of R. Each leading principal minor of M is tested until the first one is found which is not positive definite. MATXTST1 uses LINPACK subroutines SSICO, SSIFA, and SSIDI to estimate whether the matrix is near singularity (SSICO) and to perform the matrix diagonalization process (SSIFA). The algorithm used in SSIFA is a generalization of the method of Lagrange reduction. SSIDI is used to compute the determinant and inertia of the matrix. 3 - Restrictions on the complexity of the problem: Matrices of sizes up to 50 X 50 elements can be treated by the present versions of the programs
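The Cholesky-based positive-definiteness test that MATXTST performs with LINPACK's SPOFA can be sketched with modern tools; NumPy's `cholesky` plays the role of SPOFA here, succeeding exactly when the factorization M = R'R exists:

```python
import numpy as np

def is_positive_definite(m):
    # Attempt the Cholesky factorization M = R'R; it succeeds if and only if
    # the symmetric matrix M is positive definite (as SPOFA does in MATXTST).
    try:
        np.linalg.cholesky(m)
        return True
    except np.linalg.LinAlgError:
        return False

good = np.array([[4.0, 1.0],
                 [1.0, 3.0]])   # positive definite
bad = np.array([[1.0, 2.0],
                [2.0, 1.0]])    # indefinite: eigenvalues 3 and -1
```

A valid covariance matrix must pass this test; a failure, as in `bad`, signals an inconsistency in the evaluated uncertainty data.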
DEFF Research Database (Denmark)
Stentoft, Peter Alexander; Munk-Nielsen, Thomas; Mikkelsen, Peter Steen
2017-01-01
. The measurements may also be temporarily unavailable because of recalibration, communication faults or other errors. Here we present a method that handles such delay and missing observations. The model is based on zero order hold stochastic differential equations which use binary signals for influent flow...
Non-evaluation applications for covariance matrices
Energy Technology Data Exchange (ETDEWEB)
Smith, D.L.
1982-05-01
The possibility for application of covariance matrix techniques to a variety of common research problems other than formal data evaluation are demonstrated by means of several examples. These examples deal with such matters as fitting spectral data, deriving uncertainty estimates for results calculated from experimental data, obtaining the best values for plurally-measured quantities, and methods for analysis of cross section errors based on properties of the experiment. The examples deal with realistic situations encountered in the laboratory, and they are treated in sufficient detail to enable a careful reader to extrapolate the methods to related problems.
Covariant, chirally symmetric, confining model of mesons
International Nuclear Information System (INIS)
Gross, F.; Milana, J.
1991-01-01
We introduce a new model of mesons as quark-antiquark bound states. The model is covariant, confining, and chirally symmetric. Our equations give an analytic solution for a zero-mass pseudoscalar bound state in the case of exact chiral symmetry, and also reduce to the familiar, highly successful nonrelativistic linear potential models in the limit of heavy-quark mass and lightly bound systems. In this fashion we are constructing a unified description of all the mesons from the π through the Υ. Numerical solutions for other cases are also presented
Cosmology of a covariant Galilean field.
De Felice, Antonio; Tsujikawa, Shinji
2010-09-10
We study the cosmology of a covariant scalar field respecting a Galilean symmetry in flat space-time. We show the existence of a tracker solution that finally approaches a de Sitter fixed point responsible for cosmic acceleration today. The viable region of model parameters is clarified by deriving conditions under which ghosts and Laplacian instabilities of scalar and tensor perturbations are absent. The field equation of state exhibits a peculiar phantomlike behavior along the tracker, which allows a possibility to observationally distinguish the Galileon gravity from the cold dark matter model with a cosmological constant.
Covariant differential complexes of quantum linear groups
International Nuclear Information System (INIS)
Isaev, A.P.; Pyatov, P.N.
1993-01-01
We consider the possible covariant external algebra structures for Cartan's 1-forms (Ω) on GL_q(N) and SL_q(N). Our starting point is that the Ω's realize an adjoint representation of the quantum group and all monomials of the Ω's possess a unique ordering. For the obtained external algebras we define the differential mapping d possessing the usual nilpotence condition and a generally deformed version of the Leibniz rules. The status of the known examples of GL_q(N)-differential calculi in the proposed classification scheme and the problems of SL_q(N)-reduction are discussed. (author). 26 refs
Minimal covariant observables identifying all pure states
Energy Technology Data Exchange (ETDEWEB)
Carmeli, Claudio, E-mail: claudio.carmeli@gmail.com [D.I.M.E., Università di Genova, Via Cadorna 2, I-17100 Savona (Italy); I.N.F.N., Sezione di Genova, Via Dodecaneso 33, I-16146 Genova (Italy); Heinosaari, Teiko, E-mail: teiko.heinosaari@utu.fi [Turku Centre for Quantum Physics, Department of Physics and Astronomy, University of Turku (Finland); Toigo, Alessandro, E-mail: alessandro.toigo@polimi.it [Dipartimento di Matematica, Politecnico di Milano, Piazza Leonardo da Vinci 32, I-20133 Milano (Italy); I.N.F.N., Sezione di Milano, Via Celoria 16, I-20133 Milano (Italy)
2013-09-02
It has been recently shown by Heinosaari, Mazzarella and Wolf (2013) [1] that an observable that identifies all pure states of a d-dimensional quantum system has minimally 4d−4 outcomes or slightly less (the exact number depending on d). However, no simple construction of this type of minimal observable is known. We investigate covariant observables that identify all pure states and have minimal number of outcomes. It is shown that the existence of this kind of observables depends on the dimension of the Hilbert space.
Linear Covariance Analysis and Epoch State Estimators
Markley, F. Landis; Carpenter, J. Russell
2014-01-01
This paper extends in two directions the results of prior work on generalized linear covariance analysis of both batch least-squares and sequential estimators. The first is an improved treatment of process noise in the batch, or epoch state, estimator with an epoch time that may be later than some or all of the measurements in the batch. The second is to account for process noise in specifying the gains in the epoch state estimator. We establish the conditions under which the latter estimator is equivalent to the Kalman filter.
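The sequential covariance propagation with process noise that underlies this comparison follows the standard discrete-time update P_{k+1} = F P_k Fᵀ + Q; the two-state model and noise values below are assumptions for illustration, not the paper's specific system:

```python
import numpy as np

# Simple position/velocity system with unit time step.
F = np.array([[1.0, 1.0],
              [0.0, 1.0]])        # state transition matrix
Q = np.diag([1e-4, 1e-4])         # process-noise covariance (assumed)
P = np.diag([1.0, 0.1])           # initial state covariance

# Propagate the covariance forward 10 steps without measurements:
# velocity uncertainty feeds into position uncertainty, and Q keeps
# the matrix from collapsing.
for _ in range(10):
    P = F @ P @ F.T + Q
```

The epoch-state formulation discussed in the paper instead refers all measurements to a single epoch; the conditions in the paper establish when the two give identical covariances.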
Agnostic Estimation of Mean and Covariance
Lai, Kevin A.; Rao, Anup B.; Vempala, Santosh
2016-01-01
We consider the problem of estimating the mean and covariance of a distribution from iid samples in $\\mathbb{R}^n$, in the presence of an $\\eta$ fraction of malicious noise; this is in contrast to much recent work where the noise itself is assumed to be from a distribution of known type. The agnostic problem includes many interesting special cases, e.g., learning the parameters of a single Gaussian (or finding the best-fit Gaussian) when $\\eta$ fraction of data is adversarially corrupted, agn...
On the Galilean covariance of classical mechanics
International Nuclear Information System (INIS)
Horzela, A.; Kapuscik, E.; Kempczynski, J.; Joint Inst. for Nuclear Research, Dubna
1991-08-01
A Galilean covariant approach to the classical mechanics of a single interacting particle is described. In this scheme constitutive relations defining forces are rejected, and the acting forces are determined by some fundamental differential equations. It is shown that the total energy of the interacting particle transforms under Galilean transformations differently from the kinetic energy. The statement is illustrated with the exactly solvable examples of the harmonic oscillator and the case of constant forces and also, in a suitable version of perturbation theory, for the anharmonic oscillator. (author)
Experience in handling concentrated tritium
International Nuclear Information System (INIS)
Holtslander, W.J.
1985-12-01
The notes describe the experience in handling concentrated tritium in the hydrogen form accumulated in the Chalk River Nuclear Laboratories Tritium Laboratory. The techniques of box operation, pumping systems, hydriding and dehydriding operations, and analysis of tritium are discussed. Information on the Chalk River Tritium Extraction Plant is included as a collection of reprints of papers presented at the Dayton Meeting on Tritium Technology, 1985 April 30 - May 2
International handling of fissionable material
International Nuclear Information System (INIS)
1975-01-01
The opinion of the ministry for foreign affairs on international handling of fissionable materials is given. As an introduction a survey is given of the possibilities to produce nuclear weapons from materials used in or produced by power reactors. Principles for international control of fissionable materials are given. International agreements against proliferation of nuclear weapons are surveyed and methods to improve them are proposed. (K.K.)
Confinement facilities for handling plutonium
International Nuclear Information System (INIS)
Maraman, W.J.; McNeese, W.D.; Stafford, R.G.
1975-01-01
Plutonium handling on a multigram scale began in 1944. Early criteria, equipment, and techniques for confining contamination have been superseded by more stringent criteria and vastly improved equipment and techniques for in-process contamination control, effluent air cleaning and treatment of liquid wastes. This paper describes the evolution of equipment and practices to minimize exposure of workers and escape of contamination into work areas and into the environment. Early and current contamination controls are compared. (author)
Remote handling equipment for SNS
International Nuclear Information System (INIS)
Poulten, B.H.
1983-01-01
This report gives information on the areas of the SNS facility which become highly radioactive, preventing hands-on maintenance. Levels of activity are sufficiently high in the Target Station Area of the SNS, especially under fault conditions, to warrant reactor technology being used in the design of the water, drainage and ventilation systems. These problems, together with the type of remote handling equipment required in the SNS, are discussed
Remote handling in reprocessing plants
International Nuclear Information System (INIS)
Streiff, G.
1984-01-01
Remote control will be the rule for maintenance in the hot cells of future spent fuel reprocessing plants because of the radioactivity level. New handling equipment will be developed and intervention principles defined. Existing materials, recommendations for use and new manipulators are described in the PMDS documentation. It is also an aid in the choice and use of intervention means and a guide for the user
Equipment for the handling of thorium materials
International Nuclear Information System (INIS)
Heisler, S.W. Jr.; Mihalovich, G.S.
1988-01-01
The Feed Materials Production Center (FMPC) is the United States Department of Energy's storage facility for thorium. FMPC thorium handling and overpacking projects ensure the continued safe handling and storage of the thorium inventory until final disposition of the materials is determined and implemented. The handling and overpacking of the thorium materials requires the design of a system that utilizes remote handling and overpacking equipment not currently utilized at the FMPC in the handling of uranium materials. The use of remote equipment significantly reduces radiation exposure to personnel during the handling and overpacking efforts. The design system combines existing technologies from the nuclear industry, the materials processing and handling industry and the mining industry. The designed system consists of a modified fork lift truck for the transport of thorium containers, automated equipment for material identification and inventory control, and remote handling and overpacking equipment for repackaging of the thorium materials.
Time Series Forecasting with Missing Values
Shin-Fu Wu; Chia-Yung Chang; Shie-Jue Lee
2015-01-01
Time series prediction has become more popular in various kinds of applications such as weather prediction, control engineering, financial analysis, industrial monitoring, etc. To deal with real-world problems, we are often faced with missing values in the data due to sensor malfunctions or human errors. Traditionally, the missing values are simply omitted or replaced by means of imputation methods. However, omitting those missing values may cause temporal discontinuity. Imputation methods, o...
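The temporal-discontinuity point above can be illustrated with a minimal NumPy sketch (the function name is my own): omitting missing points shortens the series and distorts the time axis, while a simple interpolation fills the gaps at their original time steps.

```python
import numpy as np

def linear_impute(series):
    """Fill NaN gaps in a 1-D series by linear interpolation over the
    time index, instead of dropping the missing points."""
    s = np.asarray(series, dtype=float)
    idx = np.arange(len(s))
    miss = np.isnan(s)
    s[miss] = np.interp(idx[miss], idx[~miss], s[~miss])
    return s

y = [1.0, 2.0, np.nan, 4.0, np.nan, 6.0]
filled = linear_impute(y)      # gaps filled, time axis preserved
print(filled)                  # [1. 2. 3. 4. 5. 6.]
```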
Enteral Feeding Set Handling Techniques.
Lyman, Beth; Williams, Maria; Sollazzo, Janet; Hayden, Ashley; Hensley, Pam; Dai, Hongying; Roberts, Cristine
2017-04-01
Enteral nutrition therapy is common practice in pediatric clinical settings. Often patients will receive a pump-assisted bolus feeding over 30 minutes several times per day using the same enteral feeding set (EFS). This study aims to determine the safest and most efficacious way to handle the EFS between feedings. Three EFS handling techniques were compared through simulation for bacterial growth, nursing time, and supply costs: (1) rinsing the EFS with sterile water after each feeding, (2) refrigerating the EFS between feedings, and (3) using a ready-to-hang (RTH) product maintained at room temperature. Cultures were obtained at baseline, hour 12, and hour 21 of the 24-hour cycle. A time-in-motion analysis was conducted and reported in average number of seconds to complete each procedure. Supply costs were inventoried for 1 month comparing the actual usage to our estimated usage. Of 1080 cultures obtained, the overall bacterial growth rate was 8.7%. The rinse and refrigeration techniques displayed similar bacterial growth (11.4% vs 10.3%, P = .63). The RTH technique displayed the least bacterial growth of any method (4.4%, P = .002). The time analysis in minutes showed the rinse method was the most time-consuming (44.8 ± 2.7) vs refrigeration (35.8 ± 2.6) and RTH (31.08 ± 0.6). Refrigerating the EFS between uses is the next most efficacious method for handling the EFS between bolus feeds.
HTML5: The Missing Manual
MacDonald, Matthew
2011-01-01
HTML5 is more than a markup language-it's a dozen independent web standards all rolled into one. Until now, all it's been missing is a manual. With this thorough, jargon-free guide, you'll learn how to build web apps that include video tools, dynamic drawings, geolocation, offline web apps, drag-and-drop, and many other features. HTML5 is the future of the Web, and with this book you'll reach it quickly. The important stuff you need to know: Structure web pages in a new way. Learn how HTML5 helps make web design tools and search engines work smarter.Add audio and video without plugins. Build
Integrative missing value estimation for microarray data.
Hu, Jianjun; Li, Haifeng; Waterman, Michael S; Zhou, Xianghong Jasmine
2006-10-12
Missing value estimation is an important preprocessing step in microarray analysis. Although several methods have been developed to solve this problem, their performance is unsatisfactory for datasets with high rates of missing data, high measurement noise, or limited numbers of samples. In fact, more than 80% of the time-series datasets in Stanford Microarray Database contain less than eight samples. We present the integrative Missing Value Estimation method (iMISS) by incorporating information from multiple reference microarray datasets to improve missing value estimation. For each gene with missing data, we derive a consistent neighbor-gene list by taking reference data sets into consideration. To determine whether the given reference data sets are sufficiently informative for integration, we use a submatrix imputation approach. Our experiments showed that iMISS can significantly and consistently improve the accuracy of the state-of-the-art Local Least Square (LLS) imputation algorithm by up to 15% improvement in our benchmark tests. We demonstrated that the order-statistics-based integrative imputation algorithms can achieve significant improvements over the state-of-the-art missing value estimation approaches such as LLS and is especially good for imputing microarray datasets with a limited number of samples, high rates of missing data, or very noisy measurements. With the rapid accumulation of microarray datasets, the performance of our approach can be further improved by incorporating larger and more appropriate reference datasets.
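The neighbor-based idea behind LLS and iMISS can be sketched in plain NumPy (simplified: the nearest complete rows by Euclidean distance stand in for the consistent neighbor-gene list, and a mean replaces the least-squares step; the function name is my own):

```python
import numpy as np

def knn_impute(X, k=2):
    """Fill each missing entry with the column mean over the k nearest
    complete rows (Euclidean distance on the jointly observed columns).
    A simplified stand-in for neighbor-based schemes such as LLS."""
    X = np.asarray(X, dtype=float)
    out = X.copy()
    complete = [j for j in range(X.shape[0]) if not np.isnan(X[j]).any()]
    for i in range(X.shape[0]):
        miss = np.isnan(X[i])
        if not miss.any():
            continue
        obs = ~miss
        d = [np.linalg.norm(X[i, obs] - X[j, obs]) for j in complete]
        nearest = [complete[t] for t in np.argsort(d)[:k]]
        out[i, miss] = X[nearest][:, miss].mean(axis=0)
    return out

X = np.array([[1.0, 2.0, 3.0],
              [1.0, 2.0, np.nan],
              [5.0, 6.0, 7.0],
              [1.0, 2.0, 3.2]])
filled = knn_impute(X)     # missing entry -> mean of rows 0 and 3: 3.1
```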
Determination of covariant Schwinger terms in anomalous gauge theories
International Nuclear Information System (INIS)
Kelnhofer, G.
1991-01-01
A functional integral method is used to determine equal time commutators between the covariant currents and the covariant Gauss-law operators in theories which are affected by an anomaly. By using a differential geometrical setup we show how the derivation of consistent- and covariant Schwinger terms can be understood on an equal footing. We find a modified consistency condition for the covariant anomaly. As a by-product the Bardeen-Zumino functional, which relates consistent and covariant anomalies, can be interpreted as connection on a certain line bundle over all gauge potentials. Finally the commutator anomalies are calculated for the two- and four dimensional case. (Author) 13 refs
ERRORJ. Covariance processing code system for JENDL. Version 2
International Nuclear Information System (INIS)
Chiba, Gou
2003-09-01
ERRORJ is the covariance processing code system for Japanese Evaluated Nuclear Data Library (JENDL) that can produce group-averaged covariance data to apply it to the uncertainty analysis of nuclear characteristics. ERRORJ can treat the covariance data for cross sections including resonance parameters as well as angular distributions and energy distributions of secondary neutrons which could not be dealt with by former covariance processing codes. In addition, ERRORJ can treat various forms of multi-group cross section and produce multi-group covariance file with various formats. This document describes an outline of ERRORJ and how to use it. (author)
Hopke, P K; Liu, C; Rubin, D B
2001-03-01
Many chemical and environmental data sets are complicated by the existence of fully missing values or censored values known to lie below detection thresholds. For example, week-long samples of airborne particulate matter were obtained at Alert, NWT, Canada, between 1980 and 1991, where some of the concentrations of 24 particulate constituents were coarsened in the sense of being either fully missing or below detection limits. To facilitate scientific analysis, it is appealing to create complete data by filling in missing values so that standard complete-data methods can be applied. We briefly review commonly used strategies for handling missing values and focus on the multiple-imputation approach, which generally leads to valid inferences when faced with missing data. Three statistical models are developed for multiply imputing the missing values of airborne particulate matter. We expect that these models are useful for creating multiple imputations in a variety of incomplete multivariate time series data sets.
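A minimal sketch of the multiple-imputation strategy for below-detection-limit values (assumed lognormal concentrations; the rejection sampling and the pooling are simplified relative to a full Rubin's-rules analysis, and all names are my own):

```python
import numpy as np

rng = np.random.default_rng(0)

def impute_censored_mean(obs, n_censored, dl, m=20):
    """Multiply impute n_censored values known to lie below the detection
    limit dl, drawing from a lognormal fitted to the observed data and
    rejecting draws above dl.  Returns the pooled mean over m completed
    datasets and the between-imputation variance."""
    logs = np.log(obs)
    mu, sigma = logs.mean(), logs.std()
    means = []
    for _ in range(m):
        draws = []
        while len(draws) < n_censored:
            x = rng.lognormal(mu, sigma)
            if x < dl:                      # keep draws consistent with censoring
                draws.append(x)
        means.append(np.concatenate([obs, draws]).mean())
    means = np.asarray(means)
    return means.mean(), means.var(ddof=1)

obs = np.array([2.0, 3.0, 5.0, 8.0])       # observed concentrations
pooled, between = impute_censored_mean(obs, n_censored=2, dl=1.5)
```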
Piecewise linear regression splines with hyperbolic covariates
International Nuclear Information System (INIS)
Cologne, John B.; Sposto, Richard
1992-09-01
Consider the problem of fitting a curve to data that exhibit a multiphase linear response with smooth transitions between phases. We propose substituting hyperbolas as covariates in piecewise linear regression splines to obtain curves that are smoothly joined. The method provides an intuitive and easy way to extend the two-phase linear hyperbolic response model of Griffiths and Miller and Watts and Bacon to accommodate more than two linear segments. The resulting regression spline with hyperbolic covariates may be fit by nonlinear regression methods to estimate the degree of curvature between adjoining linear segments. The added complexity of fitting nonlinear, as opposed to linear, regression models is not great. The extra effort is particularly worthwhile when investigators are unwilling to assume that the slope of the response changes abruptly at the join points. We can also estimate the join points (the values of the abscissas where the linear segments would intersect if extrapolated) if their number and approximate locations may be presumed known. An example using data on changing age at menarche in a cohort of Japanese women illustrates the use of the method for exploratory data analysis. (author)
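The substitution described above is easy to sketch: with the join point and curvature held fixed, the hyperbola enters as an ordinary covariate and the model can be fit by linear least squares (the paper's nonlinear fit would additionally estimate the curvature; this fixed-parameter version is my simplification):

```python
import numpy as np

def hyperbolic_design(x, join, gamma):
    """Design matrix for y ~ b0 + b1*x + b2*sqrt((x-join)^2 + gamma^2).
    As gamma -> 0 the hyperbola tends to |x - join|, so the limiting
    slopes of the two linear segments are b1 - b2 and b1 + b2."""
    x = np.asarray(x, dtype=float)
    h = np.sqrt((x - join) ** 2 + gamma ** 2)
    return np.column_stack([np.ones_like(x), x, h])

# synthetic two-phase data: slope 1 before x = 5, slope 3 after
x = np.linspace(0.0, 10.0, 101)
y = np.where(x < 5.0, x, 5.0 + 3.0 * (x - 5.0))
beta, *_ = np.linalg.lstsq(hyperbolic_design(x, join=5.0, gamma=0.5), y,
                           rcond=None)
left_slope, right_slope = beta[1] - beta[2], beta[1] + beta[2]
```

With a small curvature parameter the fitted limiting slopes land close to the true values 1 and 3 while the joined curve stays smooth at the transition.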
Hierarchical multivariate covariance analysis of metabolic connectivity.
Carbonell, Felix; Charil, Arnaud; Zijdenbos, Alex P; Evans, Alan C; Bedell, Barry J
2014-12-01
Conventional brain connectivity analysis is typically based on the assessment of interregional correlations. Given that correlation coefficients are derived from both covariance and variance, group differences in covariance may be obscured by differences in the variance terms. To facilitate a comprehensive assessment of connectivity, we propose a unified statistical framework that interrogates the individual terms of the correlation coefficient. We have evaluated the utility of this method for metabolic connectivity analysis using [18F]2-fluoro-2-deoxyglucose (FDG) positron emission tomography (PET) data from the Alzheimer's Disease Neuroimaging Initiative (ADNI) study. As an illustrative example of the utility of this approach, we examined metabolic connectivity in angular gyrus and precuneus seed regions of mild cognitive impairment (MCI) subjects with low and high β-amyloid burdens. This new multivariate method allowed us to identify alterations in the metabolic connectome, which would not have been detected using classic seed-based correlation analysis. Ultimately, this novel approach should be extensible to brain network analysis and broadly applicable to other imaging modalities, such as functional magnetic resonance imaging (MRI).
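The decomposition the authors exploit is elementary: corr = cov/(sd1*sd2), so rescaling a group changes its covariance but not its correlation, and a correlation-only analysis sees no group difference. A small NumPy check (synthetic data, not ADNI):

```python
import numpy as np

rng = np.random.default_rng(1)
z = rng.standard_normal(10000)
e = rng.standard_normal(10000)

# group A: two variables with correlation ~0.6
a1, a2 = z, 0.6 * z + 0.8 * e
# group B: the same variables scaled by 2 -- the covariance is 4x larger,
# but the correlation coefficient is unchanged
b1, b2 = 2.0 * a1, 2.0 * a2

cov_a = np.cov(a1, a2)[0, 1]
cov_b = np.cov(b1, b2)[0, 1]
corr_a = np.corrcoef(a1, a2)[0, 1]
corr_b = np.corrcoef(b1, b2)[0, 1]
```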
Spatiotemporal noise covariance estimation from limited empirical magnetoencephalographic data
International Nuclear Information System (INIS)
Jun, Sung C; Plis, Sergey M; Ranken, Doug M; Schmidt, David M
2006-01-01
The performance of parametric magnetoencephalography (MEG) and electroencephalography (EEG) source localization approaches can be degraded by the use of poor background noise covariance estimates. In general, estimation of the noise covariance for spatiotemporal analysis is difficult mainly due to the limited noise information available. Furthermore, its estimation requires a large amount of storage and a one-time but very large (and sometimes intractable) calculation of its inverse. To overcome these difficulties, noise covariance models consisting of one pair or a sum of multi-pairs of Kronecker products of spatial covariance and temporal covariance have been proposed. However, these approaches cannot be applied when the noise information is very limited, i.e., the amount of noise information is less than the degrees of freedom of the noise covariance models. A common example of this is when only averaged noise data are available for a limited prestimulus region (typically at most a few hundred milliseconds duration). For such cases, a diagonal spatiotemporal noise covariance model consisting of sensor variances with no spatial or temporal correlation has been the common choice for spatiotemporal analysis. In this work, we propose a different noise covariance model which consists of diagonal spatial noise covariance and Toeplitz temporal noise covariance. It can easily be estimated from limited noise information, and no time-consuming optimization and data-processing are required. Thus, it can be used as an alternative choice when one-pair or multi-pair noise covariance models cannot be estimated due to lack of noise information. To verify its capability we used Bayesian inference dipole analysis and a number of simulated and empirical datasets. We compared this covariance model with other existing covariance models such as conventional diagonal covariance, one-pair and multi-pair noise covariance models, when noise information is sufficient to estimate them.
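The proposed structure (diagonal spatial, Toeplitz temporal) is cheap to build; a sketch, with the lag-correlation sequence assumed given (e.g. estimated from prestimulus data) and the function name my own:

```python
import numpy as np

def st_noise_cov(sensor_vars, lag_corr):
    """Spatiotemporal noise covariance = Kronecker product of a diagonal
    spatial covariance (per-sensor variances, no spatial correlation)
    and a Toeplitz temporal covariance built from the correlations at
    lags 0, 1, 2, ..."""
    r = np.asarray(lag_corr, dtype=float)
    T = len(r)
    lags = np.abs(np.subtract.outer(np.arange(T), np.arange(T)))
    temporal = r[lags]                    # Toeplitz: C[s, t] = r[|s - t|]
    spatial = np.diag(np.asarray(sensor_vars, dtype=float))
    return np.kron(spatial, temporal)

C = st_noise_cov([1.0, 2.0], [1.0, 0.5, 0.25])   # 2 sensors, 3 samples
```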
Gaskins, J T; Daniels, M J
2016-01-02
The estimation of the covariance matrix is a key concern in the analysis of longitudinal data. When data consists of multiple groups, it is often assumed the covariance matrices are either equal across groups or are completely distinct. We seek methodology to allow borrowing of strength across potentially similar groups to improve estimation. To that end, we introduce a covariance partition prior which proposes a partition of the groups at each measurement time. Groups in the same set of the partition share dependence parameters for the distribution of the current measurement given the preceding ones, and the sequence of partitions is modeled as a Markov chain to encourage similar structure at nearby measurement times. This approach additionally encourages a lower-dimensional structure of the covariance matrices by shrinking the parameters of the Cholesky decomposition toward zero. We demonstrate the performance of our model through two simulation studies and the analysis of data from a depression study. This article includes Supplementary Material available online.
2010-01-01
... the Department of Agriculture (Continued) AGRICULTURAL MARKETING SERVICE (Marketing Agreements and Orders; Fruits, Vegetables, Nuts), DEPARTMENT OF AGRICULTURE DATA COLLECTION, REPORTING AND RECORDKEEPING REQUIREMENTS APPLICABLE TO CRANBERRIES NOT SUBJECT TO THE CRANBERRY MARKETING ORDER § 926.9 Handle. Handle...
HMSRP Hawaiian Monk Seal Handling Data
National Oceanic and Atmospheric Administration, Department of Commerce — This data set contains records for all handling and measurement of Hawaiian monk seals since 1981. Live seals are handled and measured during a variety of events...
Student versus Faculty Perceptions of Missing Class.
Sleigh, Merry J.; Ritzer, Darren R.; Casey, Michael B.
2002-01-01
Examines and compares student and faculty attitudes towards students missing classes and class attendance. Surveys undergraduate students (n=231) in lower and upper level psychology courses and psychology faculty. Reports that students found more reasons acceptable for missing classes and that the amount of in-class material on the examinations…
Methods to Minimize Zero-Missing Phenomenon
DEFF Research Database (Denmark)
da Silva, Filipe Miguel Faria; Bak, Claus Leth; Gudmundsdottir, Unnur Stella
2010-01-01
With the increasing use of high-voltage AC cables at transmission levels, phenomena such as current zero-missing start to appear more often in transmission systems. Zero-missing phenomenon can occur when energizing cable lines with shunt reactors. This may considerably delay the opening of the ci...
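A rough numerical illustration of zero-missing (a simplified model with illustrative parameter values, not taken from the paper): a slowly decaying DC component larger than the AC amplitude keeps the breaker current away from zero, so a breaker waiting for a natural current zero cannot open.

```python
import numpy as np

t = np.linspace(0.0, 0.1, 10001)           # first 100 ms after energization
ac = np.cos(2 * np.pi * 50 * t)            # 50 Hz AC component (1 pu)
dc = 1.5 * np.exp(-t / 0.3)                # DC offset > 1 pu, tau = 300 ms

def zero_crossings(i):
    """Count sign changes of the sampled current."""
    return int(np.sum(np.diff(np.sign(i)) != 0))

with_dc = zero_crossings(ac + dc)   # DC dominates: no current zeros
ac_only = zero_crossings(ac)        # normal AC current crosses twice per cycle
```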
Noisy covariance matrices and portfolio optimization II
Pafka, Szilárd; Kondor, Imre
2003-03-01
Recent studies inspired by results from random matrix theory (Galluccio et al.: Physica A 259 (1998) 449; Laloux et al.: Phys. Rev. Lett. 83 (1999) 1467; Risk 12 (3) (1999) 69; Plerou et al.: Phys. Rev. Lett. 83 (1999) 1471) found that covariance matrices determined from empirical financial time series appear to contain such a high amount of noise that their structure can essentially be regarded as random. This seems, however, to be in contradiction with the fundamental role played by covariance matrices in finance, which constitute the pillars of modern investment theory and have also gained industry-wide applications in risk management. Our paper is an attempt to resolve this embarrassing paradox. The key observation is that the effect of noise strongly depends on the ratio r= n/ T, where n is the size of the portfolio and T the length of the available time series. On the basis of numerical experiments and analytic results for some toy portfolio models we show that for relatively large values of r (e.g. 0.6) noise does, indeed, have the pronounced effect suggested by Galluccio et al. (1998), Laloux et al. (1999) and Plerou et al. (1999) and illustrated later by Laloux et al. (Int. J. Theor. Appl. Finance 3 (2000) 391), Plerou et al. (Phys. Rev. E, e-print cond-mat/0108023) and Rosenow et al. (Europhys. Lett., e-print cond-mat/0111537) in a portfolio optimization context, while for smaller r (around 0.2 or below), the error due to noise drops to acceptable levels. Since the length of available time series is for obvious reasons limited in any practical application, any bound imposed on the noise-induced error translates into a bound on the size of the portfolio. In a related set of experiments we find that the effect of noise depends also on whether the problem arises in asset allocation or in a risk measurement context: if covariance matrices are used simply for measuring the risk of portfolios with a fixed composition rather than as inputs to optimization, the
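The r = n/T effect is easy to reproduce numerically: for uncorrelated unit-variance series the true covariance spectrum is flat at 1, so any spread of the sample eigenvalues is pure estimation noise and grows with r (roughly 4*sqrt(r) by the Marchenko-Pastur law). A toy experiment, not from the paper:

```python
import numpy as np

rng = np.random.default_rng(2)

def eig_spread(n, T):
    """Spread of sample-covariance eigenvalues for T observations of n
    uncorrelated unit-variance series; the true spectrum is flat at 1."""
    X = rng.standard_normal((T, n))
    eig = np.linalg.eigvalsh(np.cov(X, rowvar=False))
    return float(eig.max() - eig.min())

small_r = eig_spread(20, 1000)   # r = n/T = 0.02: modest noise
large_r = eig_spread(20, 33)     # r ~ 0.6: the noise-dominated regime
```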
LACIE data-handling techniques
Waits, G. H. (Principal Investigator)
1979-01-01
Techniques implemented to facilitate processing of LANDSAT multispectral data between 1975 and 1978 are described. The data that were handled during the large area crop inventory experiment and the storage mechanisms used for the various types of data are defined. The overall data flow, from the placing of the LANDSAT orders through the actual analysis of the data set, is discussed. An overview is provided of the status and tracking system that was developed and of the data base maintenance and operational task. The archiving of the LACIE data is explained.
The handling of radiation accidents
International Nuclear Information System (INIS)
Macdonald, H.F.; Orchard, H.C.; Walker, C.W.
1977-04-01
Some of the more interesting and important contributions to a recent International Symposium on the Handling of Radiation Accidents are discussed and personal comments on many of the papers presented are included. The principal conclusion of the Symposium was that although the nuclear industry has an excellent safety record, there is no room for complacency. Continuing attention to emergency planning and exercising are essential in order to maintain this position. A full list of the papers presented at the Symposium is included as an Appendix. (author)
Sensitivity analysis for missing data in regulatory submissions.
Permutt, Thomas
2016-07-30
The National Research Council Panel on Handling Missing Data in Clinical Trials recommended that sensitivity analyses have to be part of the primary reporting of findings from clinical trials. Their specific recommendations, however, seem not to have been taken up rapidly by sponsors of regulatory submissions. The NRC report's detailed suggestions are along rather different lines than what has been called sensitivity analysis in the regulatory setting up to now. Furthermore, the role of sensitivity analysis in regulatory decision-making, although discussed briefly in the NRC report, remains unclear. This paper will examine previous ideas of sensitivity analysis with a view to explaining how the NRC panel's recommendations are different and possibly better suited to coping with present problems of missing data in the regulatory setting. It will also discuss, in more detail than the NRC report, the relevance of sensitivity analysis to decision-making, both for applicants and for regulators. Published 2015. This article is a U.S. Government work and is in the public domain in the USA.
2010-01-01
... 7 Agriculture 3 2010-01-01 2010-01-01 false Whey handling. 58.443 Section 58.443 Agriculture... Procedures § 58.443 Whey handling. (a) Adequate sanitary facilities shall be provided for the handling of whey. If outside, necessary precautions shall be taken to minimize flies, insects and development of...
Computing more proper covariances of energy dependent nuclear data
International Nuclear Information System (INIS)
Vanhanen, R.
2016-01-01
Highlights: • We present conditions for covariances of energy dependent nuclear data to be proper. • We provide methods to detect non-positive and inconsistent covariances in ENDF-6 format. • We propose methods to find nearby more proper covariances. • The methods can be used as a part of a quality assurance program. - Abstract: We present conditions for covariances of energy dependent nuclear data to be proper in the sense that the covariances are positive, i.e., its eigenvalues are non-negative, and consistent with respect to the sum rules of nuclear data. For the ENDF-6 format covariances we present methods to detect non-positive and inconsistent covariances. These methods would be useful as a part of a quality assurance program. We also propose methods that can be used to find nearby more proper energy dependent covariances. These methods can be used to remove unphysical components, while preserving most of the physical components. We consider several different senses in which the nearness can be measured. These methods could be useful if a re-evaluation of improper covariances is not feasible. Two practical examples are processed and analyzed. These demonstrate some of the properties of the methods. We also demonstrate that the ENDF-6 format covariances of linearly dependent nuclear data should usually be encoded with the derivation rules.
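The positivity condition can be checked, and a nearby more proper matrix found, by eigenvalue methods. A minimal sketch of the simplest repair (clipping negative eigenvalues, which gives the Frobenius-nearest positive semidefinite matrix; it ignores the sum-rule consistency the paper also treats):

```python
import numpy as np

def is_positive(C, tol=1e-10):
    """A proper covariance matrix must have no negative eigenvalues."""
    return bool(np.linalg.eigvalsh(C).min() >= -tol)

def nearest_positive(C):
    """Clip negative eigenvalues to zero: the Frobenius-nearest
    positive semidefinite matrix to a symmetric C."""
    w, V = np.linalg.eigh(C)
    return V @ np.diag(np.clip(w, 0.0, None)) @ V.T

bad = np.array([[1.0, 0.9, 0.0],     # plausible entrywise, but it
                [0.9, 1.0, 0.9],     # has a negative eigenvalue
                [0.0, 0.9, 1.0]])
fixed = nearest_positive(bad)
```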
Impact of the 235U Covariance Data in Benchmark Calculations
International Nuclear Information System (INIS)
Leal, Luiz C.; Mueller, D.; Arbanas, G.; Wiarda, D.; Derrien, H.
2008-01-01
The error estimation for calculated quantities relies on nuclear data uncertainty information available in the basic nuclear data libraries such as the U.S. Evaluated Nuclear Data File (ENDF/B). The uncertainty files (covariance matrices) in the ENDF/B library are generally obtained from analysis of experimental data. In the resonance region, the computer code SAMMY is used for analyses of experimental data and generation of resonance parameters. In addition to resonance parameters evaluation, SAMMY also generates resonance parameter covariance matrices (RPCM). SAMMY uses the generalized least-squares formalism (Bayes method) together with the resonance formalism (R-matrix theory) for analysis of experimental data. Two approaches are available for creation of resonance-parameter covariance data. (1) During the data-evaluation process, SAMMY generates both a set of resonance parameters that fit the experimental data and the associated resonance-parameter covariance matrix. (2) For existing resonance-parameter evaluations for which no resonance-parameter covariance data are available, SAMMY can retroactively create a resonance-parameter covariance matrix. The retroactive method was used to generate covariance data for 235U. The resulting 235U covariance matrix was then used as input to the PUFF-IV code, which processed the covariance data into multigroup form, and to the TSUNAMI code, which calculated the uncertainty in the multiplication factor due to uncertainty in the experimental cross sections. The objective of this work is to demonstrate the use of the 235U covariance data in calculations of critical benchmark systems
Development of covariance data for fast reactor cores. 3
International Nuclear Information System (INIS)
Shibata, Keiichi; Hasegawa, Akira
1999-03-01
Covariances have been estimated for nuclear data contained in JENDL-3.2. As for Cr and Ni, the physical quantities for which covariances are deduced are cross sections and the first order Legendre-polynomial coefficient for the angular distribution of elastically scattered neutrons. The covariances were estimated by using the same methodology that had been used in the JENDL-3.2 evaluation in order to keep a consistency between mean values and their covariances. In a case where evaluated data were based on experimental data, the covariances were estimated from the same experimental data. For cross section that had been evaluated by nuclear model calculations, the same model was applied to generate the covariances. The covariances obtained were compiled into ENDF-6 format files. The covariances, which had been prepared by the previous fiscal year, were re-examined, and some improvements were performed. Parts of Fe and 235 U covariances were updated. Covariances of nu-p and nu-d for 241 Pu and of fission neutron spectra for 233,235,238 U and 239,240 Pu were newly added to data files. (author)
Anomalous current from the covariant Wigner function
Prokhorov, George; Teryaev, Oleg
2018-04-01
We consider accelerated and rotating media of weakly interacting fermions in local thermodynamic equilibrium on the basis of a kinetic approach. Kinetic properties of such media can be described by the covariant Wigner function incorporating the relativistic distribution functions of particles with spin. We obtain formulae for the axial current by summing the terms of all orders in the thermal vorticity tensor and chemical potential, for both massive and massless particles. In the massless limit, all terms of fourth and higher order in vorticity and third order in chemical potential and temperature vanish. It is shown that the axial current acquires a topological component along the 4-acceleration vector. The similarity between different approaches to baryon polarization is established.
Covariant non-commutative space–time
Directory of Open Access Journals (Sweden)
Jonathan J. Heckman
2015-05-01
We introduce a covariant non-commutative deformation of 3+1-dimensional conformal field theory. The deformation introduces a short-distance scale ℓp, and thus breaks scale invariance, but preserves all space–time isometries. The non-commutative algebra is defined on space–times with non-zero constant curvature, i.e. dS4 or AdS4. The construction makes essential use of the representation of CFT tensor operators as polynomials in an auxiliary polarization tensor. The polarization tensor takes active part in the non-commutative algebra, which for dS4 takes the form of so(5,1), while for AdS4 it assembles into so(4,2). The structure of the non-commutative correlation functions hints that the deformed theory contains gravitational interactions and a Regge-like trajectory of higher spin excitations.
Covariant entropy bound and loop quantum cosmology
International Nuclear Information System (INIS)
Ashtekar, Abhay; Wilson-Ewing, Edward
2008-01-01
We examine Bousso's covariant entropy bound conjecture in the context of radiation filled, spatially flat, Friedmann-Robertson-Walker models. The bound is violated near the big bang. However, the hope has been that quantum gravity effects would intervene and protect it. Loop quantum cosmology provides a near ideal setting for investigating this issue. For, on the one hand, quantum geometry effects resolve the singularity and, on the other hand, the wave function is sharply peaked at a quantum corrected but smooth geometry, which can supply the structure needed to test the bound. We find that the bound is respected. We suggest that the bound need not be an essential ingredient for a quantum gravity theory but may emerge from it under suitable circumstances.
Nonparametric Bayesian models for a spatial covariance.
Reich, Brian J; Fuentes, Montserrat
2012-01-01
A crucial step in the analysis of spatial data is to estimate the spatial correlation function that determines the relationship between a spatial process at two locations. The standard approach to selecting the appropriate correlation function is to use prior knowledge or exploratory analysis, such as a variogram analysis, to select the correct parametric correlation function. Rather than selecting a particular parametric correlation function, we treat the covariance function as an unknown function to be estimated from the data. We propose a flexible prior for the correlation function to provide robustness to the choice of correlation function. We specify the prior for the correlation function using spectral methods and the Dirichlet process prior, which is a common prior for an unknown distribution function. Our model does not require Gaussian data or spatial locations on a regular grid. The approach is demonstrated using a simulation study as well as an analysis of California air pollution data.
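The exploratory variogram analysis mentioned above can be sketched in a few lines (classical Matheron estimator; the function name and the toy sites are my own):

```python
import numpy as np

def empirical_variogram(coords, values, bins):
    """Classical (Matheron) empirical variogram: half the mean squared
    difference between site pairs, binned by inter-site distance."""
    coords = np.asarray(coords, dtype=float)
    values = np.asarray(values, dtype=float)
    n = len(values)
    d, sq = [], []
    for i in range(n):
        for j in range(i + 1, n):
            d.append(np.linalg.norm(coords[i] - coords[j]))
            sq.append(0.5 * (values[i] - values[j]) ** 2)
    d, sq = np.asarray(d), np.asarray(sq)
    which = np.digitize(d, bins)
    return np.array([sq[which == k].mean() if (which == k).any() else np.nan
                     for k in range(1, len(bins))])

gamma = empirical_variogram([[0.0], [1.0], [2.0]], [0.0, 1.0, 2.0],
                            bins=[0.0, 1.5, 2.5])
```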
Covariant Derivatives and the Renormalization Group Equation
Dolan, Brian P.
The renormalization group equation for N-point correlation functions can be interpreted in a geometrical manner as an equation for Lie transport of amplitudes in the space of couplings. The vector field generating the diffeomorphism has components given by the β functions of the theory. It is argued that this simple picture requires modification whenever any one of the points at which the amplitude is evaluated becomes close to any other. This modification necessitates the introduction of a connection on the space of couplings and new terms appear in the renormalization group equation involving covariant derivatives of the β function and the curvature associated with the connection. It is shown how the connection is related to the operator product expansion coefficients, but there remains an arbitrariness in its definition.
Generation of phase-covariant quantum cloning
International Nuclear Information System (INIS)
Karimipour, V.; Rezakhani, A.T.
2002-01-01
It is known that in phase-covariant quantum cloning, the equatorial states on the Bloch sphere can be cloned with a fidelity higher than the optimal bound established for universal quantum cloning. We generalize this concept to include other states on the Bloch sphere with a definite z component of spin. It is shown that once we know the z component, we can always clone a state with a fidelity higher than the universal value and that of equatorial states. We also make a detailed study of the entanglement properties of the output copies and show that the equatorial states are the only states that give rise to a separable density matrix for the outputs
Covariant formulation of scalar-torsion gravity
Hohmann, Manuel; Järv, Laur; Ualikhanova, Ulbossyn
2018-05-01
We consider a generalized teleparallel theory of gravitation, where the action contains an arbitrary function of the torsion scalar and a scalar field, f (T ,ϕ ) , thus encompassing the cases of f (T ) gravity and a nonminimally coupled scalar field as subclasses. The action is manifestly Lorentz invariant when besides the tetrad one allows for a flat but nontrivial spin connection. We derive the field equations and demonstrate how the antisymmetric part of the tetrad equations is automatically satisfied when the spin connection equation holds. The spin connection equation is a vital part of the covariant formulation, since it determines the spin connection associated with a given tetrad. We discuss how the spin connection equation can be solved in general and provide the cosmological and spherically symmetric examples. Finally, we generalize the theory to an arbitrary number of scalar fields.
Missed opportunities in child healthcare
Directory of Open Access Journals (Sweden)
Linda Jonker
2014-08-01
Objectives: This article describes the experiences of mothers that utilised comprehensive child health services in the Cape Metropolitan area of South Africa. Services included treatment for diseases; preventative interventions such as immunisation; and promotive interventions, such as improvement in nutrition and promotion of breastfeeding. Method: A qualitative, descriptive phenomenological approach was applied to explore the experiences and perceptions of mothers and/or carers utilising child healthcare services. Thirty percent of the clinics were selected purposively from the total population. A convenience purposive non-probability sampling method was applied to select 17 mothers who met the criteria and gave written consent. Interviews were conducted and recorded digitally using an interview guide. The data analysis was done using Tesch’s eight step model. Results: Findings of the study indicated varied experiences. Not all mothers received information about the Road to Health book or card. According to the mothers, integrated child healthcare services were not practised. The consequences were missed opportunities in immunisation, provision of vitamin A, absence of growth monitoring, feeding assessment and provision of nutritional advice. Conclusion: There is a need for simple interventions such as oral rehydration, early recognition and treatment of diseases, immunisation, growth monitoring and appropriate nutrition advice. These services were not offered diligently. Such interventions could contribute to reducing the incidence of child morbidity and mortality.
Directory of Open Access Journals (Sweden)
Caroline Long
2016-10-01
Full Text Available The purpose of the South African Mathematics Olympiad is to generate interest in mathematics and to identify the most talented mathematical minds. Our focus is on how the handling of missing data affects the selection of the ‘best’ contestants. Two approaches to handling missing data, applying the Rasch model, are described. The issue of guessing is investigated through a tailored analysis. We present two microanalyses to illustrate how missing data may impact selection; the first investigates groups of contestants that may miss selection under particular conditions; the second focuses on two contestants, each of whom answered 14 items correctly. This comparison raises questions about the proportion of correct to incorrect answers. Recommendations are made for future scoring of the test, which include reconsideration of negative marking and weighting, as well as considering the inclusion of 150 or 200 contestants, as opposed to 100 contestants, for participation in the final round.
Baker, Jannah; White, Nicole; Mengersen, Kerrie
2014-11-20
Spatial analysis is increasingly important for identifying modifiable geographic risk factors for disease. However, spatial health data from surveys are often incomplete, ranging from missing data for only a few variables, to missing data for many variables. For spatial analyses of health outcomes, selection of an appropriate imputation method is critical in order to produce the most accurate inferences. We present a cross-validation approach to select between three imputation methods for health survey data with correlated lifestyle covariates, using as a case study, type II diabetes mellitus (DM II) risk across 71 Queensland Local Government Areas (LGAs). We compare the accuracy of mean imputation to imputation using multivariate normal and conditional autoregressive prior distributions. Choice of imputation method depends upon the application and is not necessarily the most complex method. Mean imputation was selected as the most accurate method in this application. Selecting an appropriate imputation method for health survey data, after accounting for spatial correlation and correlation between covariates, allows more complete analysis of geographic risk factors for disease with more confidence in the results to inform public policy decision-making.
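The cross-validation idea described above, mask values that are actually observed, impute them, and score the imputations against the truth, can be sketched as follows. This is a hedged illustration with synthetic data and two simple imputers (mean and regression on a correlated covariate); the paper's actual candidates used multivariate normal and conditional autoregressive priors:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical survey: two correlated lifestyle covariates, one incomplete.
n = 500
x = rng.normal(size=n)
y = 0.8 * x + 0.6 * rng.normal(size=n)   # y is correlated with x

def cv_rmse(impute, y, x, n_folds=5):
    """Mask one fold of observed y values at a time, impute, score vs truth."""
    idx = rng.permutation(len(y))
    errs = []
    for hold in np.array_split(idx, n_folds):
        y_obs = y.copy()
        y_obs[hold] = np.nan
        y_hat = impute(y_obs, x)
        errs.append(np.sqrt(np.mean((y_hat[hold] - y[hold]) ** 2)))
    return float(np.mean(errs))

def mean_impute(y_obs, x):
    out = y_obs.copy()
    out[np.isnan(out)] = np.nanmean(y_obs)   # ignores the covariate
    return out

def regression_impute(y_obs, x):
    """Exploit the correlated covariate: fit y ~ x on observed rows."""
    obs = ~np.isnan(y_obs)
    b1, b0 = np.polyfit(x[obs], y_obs[obs], 1)   # slope first, then intercept
    out = y_obs.copy()
    out[~obs] = b0 + b1 * x[~obs]
    return out

rmse_mean = cv_rmse(mean_impute, y, x)
rmse_reg = cv_rmse(regression_impute, y, x)
# With strong correlation the regression imputer wins; with weak correlation
# the simpler mean imputer can come out ahead, as the paper found for DM II.
```

The design point matches the paper's conclusion: the cross-validated score, not model complexity, decides which imputer to use.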
Introduction to covariant formulation of superstring (field) theory
International Nuclear Information System (INIS)
Anon.
1987-01-01
The author discusses covariant formulation of superstring theories based on BRS invariance. A new formulation of superstring theory was constructed by Green and Schwarz, first in the light-cone gauge, and then a covariant action was discovered. The covariant action has an interesting geometrical interpretation; however, covariant quantization is difficult to perform because of the existence of local supersymmetries. Introducing extra variables into the action, a modified action has been proposed. However, it would be difficult to prescribe constraints to define a physical subspace, or to reproduce the correct physical spectrum. Hence the old formulation, i.e., the Neveu-Schwarz-Ramond (NSR) model, is used for covariant quantization. The author begins by quantizing the NSR model in a covariant way using BRS charges. Then the author discusses the field theory of (free) superstrings.
The method of covariant symbols in curved space-time
International Nuclear Information System (INIS)
Salcedo, L.L.
2007-01-01
Diagonal matrix elements of pseudodifferential operators are needed in order to compute effective Lagrangians and currents. For this purpose the method of symbols is often used, which however lacks manifest covariance. In this work the method of covariant symbols, introduced by Pletnev and Banin, is extended to curved space-time with arbitrary gauge and coordinate connections. For the Riemannian connection we compute the covariant symbols corresponding to external fields, the covariant derivative and the Laplacian, to fourth order in a covariant derivative expansion. This allows one to obtain the covariant symbol of general operators to the same order. The procedure is illustrated by computing the diagonal matrix element of a nontrivial operator to second order. Applications of the method are discussed. (orig.)
Physical properties of the Schur complement of local covariance matrices
International Nuclear Information System (INIS)
Haruna, L F; Oliveira, M C de
2007-01-01
General properties of global covariance matrices representing bipartite Gaussian states can be decomposed into properties of local covariance matrices and their Schur complements. We demonstrate that given a bipartite Gaussian state ρ_12 described by a 4 × 4 covariance matrix V, the Schur complement of a local covariance submatrix V_1 of it can be interpreted as a new covariance matrix representing a Gaussian operator of party 1 conditioned to local parity measurements on party 2. The connection with a partial parity measurement over a bipartite quantum state and the determination of the reduced Wigner function is given, and an operational process of parity measurement is developed. Generalization of this procedure to an n-partite Gaussian state is given, and it is demonstrated that the (n-1)-system state conditioned to a partial parity projection is given by a covariance matrix such that its 2 × 2 block elements are Schur complements of special local matrices.
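The central object here is the Schur complement of a block of a covariance matrix: for V partitioned into blocks [[V_1, C], [C^T, V_2]], the Schur complement V_1 - C V_2^{-1} C^T is again a valid covariance matrix (for a classical Gaussian it is the conditional covariance of the first block given the second). A minimal numerical sketch, not tied to the parity-measurement interpretation in the paper:

```python
import numpy as np

def schur_complement(V, k):
    """Schur complement of the trailing block of V with respect to the
    leading k x k block: V1 - C V2^{-1} C^T."""
    V1 = V[:k, :k]
    C = V[:k, k:]
    V2 = V[k:, k:]
    return V1 - C @ np.linalg.solve(V2, C.T)

# Build a random 4x4 covariance matrix (symmetric positive definite).
rng = np.random.default_rng(2)
A = rng.normal(size=(4, 4))
V = A @ A.T + 4 * np.eye(4)

S = schur_complement(V, 2)
# The Schur complement of a positive definite matrix is positive definite,
# so S is itself a legitimate 2x2 covariance matrix; it also satisfies the
# determinant identity det(V) = det(V2) * det(S).
eigvals = np.linalg.eigvalsh(S)
```

The determinant identity in the comment is the standard Schur determinant formula, which is one way the paper's decomposition of global properties into local ones becomes computationally useful.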
Sparse subspace clustering for data with missing entries and high-rank matrix completion.
Fan, Jicong; Chow, Tommy W S
2017-09-01
Many methods have recently been proposed for subspace clustering, but they are often unable to handle incomplete data because of missing entries. Using matrix completion methods to recover missing entries is a common way to solve the problem. Conventional matrix completion methods require that the matrix should be of low-rank intrinsically, but most matrices are of high-rank or even full-rank in practice, especially when the number of subspaces is large. In this paper, a new method called Sparse Representation with Missing Entries and Matrix Completion is proposed to solve the problems of incomplete-data subspace clustering and high-rank matrix completion. The proposed algorithm alternately computes the matrix of sparse representation coefficients and recovers the missing entries of a data matrix. The proposed algorithm recovers missing entries through minimizing the representation coefficients, representation errors, and matrix rank. Thorough experimental study and comparative analysis based on synthetic data and natural images were conducted. The presented results demonstrate that the proposed algorithm is more effective in subspace clustering and matrix completion compared with other existing methods. Copyright © 2017 Elsevier Ltd. All rights reserved.
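The conventional low-rank completion baseline that the paper contrasts with can be sketched as iterative SVD projection (a hard-impute style scheme): project onto rank-r matrices, then restore the known entries, and repeat. This is the baseline idea only, not the proposed Sparse Representation with Missing Entries and Matrix Completion algorithm:

```python
import numpy as np

def complete_low_rank(M, observed, rank, n_iter=200):
    """Fill missing entries by alternating a truncated-SVD projection onto
    rank-r matrices with restoring the observed entries."""
    X = np.where(observed, M, 0.0)
    for _ in range(n_iter):
        U, s, Vt = np.linalg.svd(X, full_matrices=False)
        X = (U[:, :rank] * s[:rank]) @ Vt[:rank]   # rank-r projection
        X[observed] = M[observed]                  # keep known entries fixed
    return X

rng = np.random.default_rng(3)
# Ground-truth rank-2 matrix with roughly 30% of entries missing.
M_true = rng.normal(size=(30, 2)) @ rng.normal(size=(2, 30))
observed = rng.random((30, 30)) > 0.3
X = complete_low_rank(M_true, observed, rank=2)
err = np.abs(X - M_true)[~observed].max()
```

This works well precisely because the ground truth is intrinsically low-rank; the paper's point is that union-of-subspaces data are often high-rank or full-rank, where this baseline breaks down.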
Safety of Cargo Aircraft Handling Procedure
Directory of Open Access Journals (Sweden)
Daniel Hlavatý
2017-07-01
Full Text Available The aim of this paper is to get acquainted with ways to improve the safety management system during cargo aircraft handling. The first chapter is dedicated to general information about air cargo transportation. This includes the history and types of cargo aircraft handling, as well as the means of handling. The second part is focused on a detailed description of cargo aircraft handling, including a description of activities that are performed before and after handling. The following part of this paper covers a theoretical interpretation of safety, safety indicators and legislative provisions related to the safety of cargo aircraft handling. The fourth part of this paper analyzes the fault trees of events which might occur during handling. The factors found by this analysis are compared with safety reports of FedEx. Based on the comparison, there is a proposal on how to improve the safety management in this transportation company.
Spatial Pyramid Covariance based Compact Video Code for Robust Face Retrieval in TV-series.
Li, Yan; Wang, Ruiping; Cui, Zhen; Shan, Shiguang; Chen, Xilin
2016-10-10
We address the problem of face video retrieval in TV-series, which searches video clips based on the presence of a specific character, given one face track of his/hers. This is tremendously challenging because on one hand, faces in TV-series are captured in largely uncontrolled conditions with complex appearance variations, and on the other hand the retrieval task typically needs efficient representation with low time and space complexity. To handle this problem, we propose a compact and discriminative representation for the huge body of video data, named Compact Video Code (CVC). Our method first models the face track by its sample (i.e., frame) covariance matrix to capture the video data variations in a statistical manner. To incorporate discriminative information and obtain a more compact video signature suitable for retrieval, the high-dimensional covariance representation is further encoded as a much lower-dimensional binary vector, which finally yields the proposed CVC. Specifically, each bit of the code, i.e., each dimension of the binary vector, is produced via supervised learning in a max margin framework, which aims to make a balance between the discriminability and stability of the code. Besides, we further extend the descriptive granularity of the covariance matrix from the traditional pixel level to the more general patch level, and proceed to propose a novel hierarchical video representation named Spatial Pyramid Covariance (SPC) along with a fast calculation method. Face retrieval experiments on two challenging TV-series video databases, i.e., the Big Bang Theory and Prison Break, demonstrate the competitiveness of the proposed CVC over state-of-the-art retrieval methods. In addition, as a general video matching algorithm, CVC is also evaluated in the traditional video face recognition task on a standard Internet database, i.e., YouTube Celebrities, showing its quite promising performance by using an extremely compact code with only 128 bits.
Fermionic covariant prolongation structure theory for supernonlinear evolution equation
International Nuclear Information System (INIS)
Cheng Jipeng; Wang Shikun; Wu Ke; Zhao Weizhong
2010-01-01
We investigate the superprincipal bundle and its associated superbundle. The super(nonlinear)connection on the superfiber bundle is constructed. Then by means of the connection theory, we establish the fermionic covariant prolongation structure theory of the supernonlinear evolution equation. In this geometry theory, the fermionic covariant fundamental equations determining the prolongation structure are presented. As an example, the supernonlinear Schroedinger equation is analyzed in the framework of this fermionic covariant prolongation structure theory. We obtain its Lax pairs and Baecklund transformation.
Bayesian hierarchical model for large-scale covariance matrix estimation.
Zhu, Dongxiao; Hero, Alfred O
2007-12-01
Many bioinformatics problems implicitly depend on estimating large-scale covariance matrix. The traditional approaches tend to give rise to high variance and low accuracy due to "overfitting." We cast the large-scale covariance matrix estimation problem into the Bayesian hierarchical model framework, and introduce dependency between covariance parameters. We demonstrate the advantages of our approaches over the traditional approaches using simulations and OMICS data analysis.
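The "overfitting" the authors describe is the familiar problem that when the number of variables exceeds the number of samples, the sample covariance matrix is singular and highly variable. A simple non-Bayesian remedy with the same regularizing flavor is linear shrinkage toward a scaled identity target; the sketch below shows that idea only, not the authors' hierarchical model:

```python
import numpy as np

def shrink_covariance(X, alpha):
    """Linear shrinkage: blend the sample covariance with a scaled identity,
    S_alpha = (1 - alpha) * S + alpha * mu * I, where mu = trace(S) / p."""
    S = np.cov(X, rowvar=False)
    mu = np.trace(S) / S.shape[0]
    return (1 - alpha) * S + alpha * mu * np.eye(S.shape[0])

rng = np.random.default_rng(4)
X = rng.normal(size=(20, 50))         # n = 20 samples, p = 50 variables: p > n
S = np.cov(X, rowvar=False)           # raw sample covariance, rank <= n-1 < p
S_shrunk = shrink_covariance(X, alpha=0.5)
# The shrunk estimate is full rank and well conditioned, at the cost of bias.
```

Choosing the shrinkage weight alpha from the data is the crux; the Bayesian hierarchical framing in the paper effectively lets the prior dependency between covariance parameters play that role.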
Transfer Area Mechanical Handling Calculation
International Nuclear Information System (INIS)
Dianda, B.
2004-01-01
This calculation is intended to support the License Application (LA) submittal of December 2004, in accordance with the directive given by DOE correspondence received on the 27th of January 2004 entitled: ''Authorization for Bechtel SAIC Company L.L.C. to Include a Bare Fuel Handling Facility and Increased Aging Capacity in the License Application, Contract Number DE-AC-28-01RW12101'' (Arthur, W.J., III 2004). This correspondence was appended by further correspondence received on the 19th of February 2004 entitled: ''Technical Direction to Bechtel SAIC Company L.L.C. for Surface Facility Improvements, Contract Number DE-AC-28-01RW12101; TDL No. 04-024'' (BSC 2004a). These documents give the authorization for a Fuel Handling Facility to be included in the baseline. The purpose of this calculation is to establish preliminary bounding equipment envelopes and weights for the Fuel Handling Facility (FHF) transfer areas equipment. This calculation provides preliminary information only to support development of facility layouts and preliminary load calculations. The limitations of this preliminary calculation lie within the assumptions of section 5, as this calculation is part of an evolutionary design process. It is intended that this calculation is superseded as the design advances to reflect information necessary to support License Application. The design choices outlined within this calculation represent a demonstration of feasibility and may or may not be included in the completed design. This calculation provides preliminary weight, dimensional envelope, and equipment position in building for the purposes of defining interface variables. This calculation identifies and sizes major equipment and assemblies that dictate overall equipment dimensions and facility interfaces. Sizing of components is based on the selection of commercially available products, where applicable. This is not a specific recommendation for the future use of these components or their
Imputing data that are missing at high rates using a boosting algorithm
Energy Technology Data Exchange (ETDEWEB)
Cauthen, Katherine Regina [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Lambert, Gregory [Apple Inc., Cupertino, CA (United States); Ray, Jaideep [Sandia National Lab. (SNL-CA), Livermore, CA (United States); Lefantzi, Sophia [Sandia National Lab. (SNL-CA), Livermore, CA (United States)
2016-09-01
Traditional multiple imputation approaches may perform poorly for datasets with high rates of missingness unless a large number m of imputations is used. This paper implements an alternative machine learning-based approach to imputing data that are missing at high rates. Here, we use boosting to create a strong learner from a weak learner fitted to a dataset missing many observations. This approach may be applied to a variety of types of learners (models). The approach is demonstrated by application to a spatiotemporal dataset for predicting dengue outbreaks in India from meteorological covariates. A Bayesian spatiotemporal CAR model is boosted to produce imputations, and the overall RMSE from a k-fold cross-validation is used to assess imputation accuracy.
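The core idea, boost a weak learner fitted to the observed rows, then predict the missing ones, can be sketched with regression stumps standing in for the authors' Bayesian spatiotemporal CAR model. All variable names and the synthetic "meteorological-style" signal below are illustrative assumptions:

```python
import numpy as np

def fit_stump(x, r):
    """Best single-threshold regression stump on one covariate."""
    best = (np.inf, None)
    for t in np.quantile(x, np.linspace(0.1, 0.9, 9)):
        left, right = r[x <= t], r[x > t]
        if len(left) == 0 or len(right) == 0:
            continue
        sse = ((left - left.mean()) ** 2).sum() + ((right - right.mean()) ** 2).sum()
        if sse < best[0]:
            best = (sse, (t, left.mean(), right.mean()))
    return best[1]

def boost_impute(x, y, n_rounds=100, lr=0.1):
    """Boosted imputer: repeatedly fit stumps to residuals on the observed
    rows, accumulating predictions for every row, including missing ones."""
    obs = ~np.isnan(y)
    pred = np.full_like(y, y[obs].mean())
    for _ in range(n_rounds):
        t, lo, hi = fit_stump(x[obs], y[obs] - pred[obs])
        pred += lr * np.where(x <= t, lo, hi)
    return pred

rng = np.random.default_rng(5)
x = rng.uniform(-3, 3, 400)
y = np.sin(x) + 0.1 * rng.normal(size=400)   # smooth covariate-driven signal
y_miss = y.copy()
miss = rng.random(400) < 0.6                 # 60% missing: a high rate
y_miss[miss] = np.nan
y_hat = boost_impute(x, y_miss)
rmse = float(np.sqrt(np.mean((y_hat[miss] - y[miss]) ** 2)))
```

Even with 60% of values missing, the boosted stumps recover the signal far better than a single weak learner would, which is the paper's motivation for boosting rather than brute-force multiple imputation.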
Missed opportunities in child healthcare
Directory of Open Access Journals (Sweden)
Linda Jonker
2014-01-01
Full Text Available Background: Various policies in health, such as Integrated Management of Childhood Illnesses, were introduced to enhance integrated service delivery in child healthcare. During clinical practice the researcher observed that integrated services may not be rendered. Objectives: This article describes the experiences of mothers that utilised comprehensive child health services in the Cape Metropolitan area of South Africa. Services included treatment for diseases; preventative interventions such as immunisation; and promotive interventions, such as improvement in nutrition and promotion of breastfeeding. Method: A qualitative, descriptive phenomenological approach was applied to explore the experiences and perceptions of mothers and/or carers utilising child healthcare services. Thirty percent of the clinics were selected purposively from the total population. A convenience purposive non-probability sampling method was applied to select 17 mothers who met the criteria and gave written consent. Interviews were conducted and recorded digitally using an interview guide. The data analysis was done using Tesch’s eight step model. Results: Findings of the study indicated varied experiences. Not all mothers received information about the Road to Health book or card. According to the mothers, integrated child healthcare services were not practised. The consequences were missed opportunities in immunisation, provision of vitamin A, absence of growth monitoring, feeding assessment and provision of nutritional advice. Conclusion: There is a need for simple interventions such as oral rehydration, early recognition and treatment of diseases, immunisation, growth monitoring and appropriate nutrition advice. These services were not offered diligently. Such interventions could contribute to reducing the incidence of child morbidity and mortality.
Some remarks on general covariance of quantum theory
International Nuclear Information System (INIS)
Schmutzer, E.
1977-01-01
If one accepts Einstein's general principle of relativity (covariance principle) also for the sphere of microphysics (quantum mechanics, quantum field theory, theory of elementary particles), one has to ask how far the fundamental laws of traditional quantum physics fulfil this principle. Attention is here drawn to a series of papers that have appeared during the last years, in which the author criticized the usual scheme of quantum theory (Heisenberg picture, Schroedinger picture, etc.) and presented a new foundation of the basic laws of quantum physics, obeying the 'principle of fundamental covariance' (Einstein's covariance principle in space-time and the covariance principle in the Hilbert space of quantum operators and states). (author)
How much do genetic covariances alter the rate of adaptation?
Agrawal, Aneil F; Stinchcombe, John R
2009-03-22
Genetically correlated traits do not evolve independently, and the covariances between traits affect the rate at which a population adapts to a specified selection regime. To measure the impact of genetic covariances on the rate of adaptation, we compare the rate at which fitness increases given the observed G matrix to the expected rate when all the covariances in the G matrix are set to zero. Using data from the literature, we estimate the effect of genetic covariances in real populations. We find no net tendency for covariances to constrain the rate of adaptation, though the quality and heterogeneity of the data limit the certainty of this result. There are some examples in which covariances strongly constrain the rate of adaptation, but these are balanced by counterexamples in which covariances facilitate the rate of adaptation; in many cases, covariances have little or no effect. We also discuss how our metric can be used to identify traits or suites of traits whose genetic covariances to other traits have a particularly large impact on the rate of adaptation.
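The comparison described above has a compact quantitative form: under the multivariate breeder's equation the response of trait means to a selection gradient β is Δz̄ = Gβ, and the rate of increase in mean fitness is β'Gβ; zeroing the off-diagonal covariances gives the reference rate. A toy numerical sketch with a hypothetical G matrix and gradient (not values from the paper's literature survey):

```python
import numpy as np

# Hypothetical G matrix with a negative genetic covariance between two traits.
G = np.array([[1.0, -0.6],
              [-0.6, 1.0]])
beta = np.array([1.0, 1.0])   # selection favors increases in both traits

rate_full = beta @ G @ beta                       # rate with observed G
rate_no_cov = beta @ np.diag(np.diag(G)) @ beta   # covariances set to zero

# Ratio < 1 means the covariance structure constrains adaptation toward
# this selection regime; a positive covariance here would give a ratio > 1.
constraint = rate_full / rate_no_cov
```

With these numbers the ratio is 0.4, a strong constraint; averaging this kind of ratio over empirical G matrices and selection regimes is, in spirit, the paper's metric.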
Summary report of technical meeting on neutron cross section covariances
International Nuclear Information System (INIS)
Trkov, A.; Smith, D.L.; Capote Noy, R.
2011-01-01
A summary is given of the Technical Meeting on Neutron Cross Section Covariances. The meeting goal was to assess covariance data needs and recommend appropriate methodologies to address those needs. Discussions on covariance data focused on three general topics: 1) Resonance and unresolved resonance regions; 2) Fast neutron region; and 3) Users' perspective: benchmarks' uncertainty and reactor dosimetry. A number of recommendations for further work were generated and the important work that remains to be done in the field of covariances was identified. (author)
CANISTER HANDLING FACILITY DESCRIPTION DOCUMENT
Energy Technology Data Exchange (ETDEWEB)
J.F. Beesley
2005-04-21
The purpose of this facility description document (FDD) is to establish requirements and associated bases that drive the design of the Canister Handling Facility (CHF), which will allow the design effort to proceed to license application. This FDD will be revised at strategic points as the design matures. This FDD identifies the requirements and describes the facility design, as it currently exists, with emphasis on attributes of the design provided to meet the requirements. This FDD is an engineering tool for design control; accordingly, the primary audience and users are design engineers. This FDD is part of an iterative design process. It leads the design process with regard to the flowdown of upper tier requirements onto the facility. Knowledge of these requirements is essential in performing the design process. The FDD follows the design with regard to the description of the facility. The description provided in this FDD reflects the current results of the design process.
Bulk handling benefits from ICT
Energy Technology Data Exchange (ETDEWEB)
NONE
2007-11-15
The efficiency and accuracy of bulk handling is being improved by the range of management information systems and services available today. As part of the program to extend Richards Bay Coal Terminal, Siemens is installing a manufacturing execution system which coordinates and monitors all movements of raw materials. The article also reports recent developments by AXSMarine, SunGuard Energy, Fuelworx and Railworx in providing integrated tools for tracking, managing and optimising solid/liquid fuels and rail car maintenance activities. QMASTOR Ltd. has secured a contract with Anglo Coal Australia to provide its Pit to Port.net® and iFuse® software systems across all their Australian sites, to include pit-to-product stockpile management. 2 figs.
Handling and transport problems (1960)
International Nuclear Information System (INIS)
Pomarola, J.; Savouyaud, J.
1960-01-01
I. The handling and transport of radioactive wastes involves the danger of irradiation and contamination. It is indispensable: - to lay down a special set of rules governing the removal and transport of wastes within centres or from one centre to another; - to give charge of this transportation to a group containing teams of specialists. The organisation, equipment and output of these teams is being examined. II. Certain materials are particularly dangerous to transport, and for these special vehicles and fixed installations are necessary. This is the case especially for the evacuation of very active liquids. A transport vehicle is described, consisting of a trailer tractor and a recipient holding 500 litres of liquid of which the activity can reach 1000 C/l; the decanting operation, the route to be followed by the vehicle, and the precautions taken are also described. (author)
CANISTER HANDLING FACILITY DESCRIPTION DOCUMENT
International Nuclear Information System (INIS)
Beesley. J.F.
2005-01-01
The purpose of this facility description document (FDD) is to establish requirements and associated bases that drive the design of the Canister Handling Facility (CHF), which will allow the design effort to proceed to license application. This FDD will be revised at strategic points as the design matures. This FDD identifies the requirements and describes the facility design, as it currently exists, with emphasis on attributes of the design provided to meet the requirements. This FDD is an engineering tool for design control; accordingly, the primary audience and users are design engineers. This FDD is part of an iterative design process. It leads the design process with regard to the flowdown of upper tier requirements onto the facility. Knowledge of these requirements is essential in performing the design process. The FDD follows the design with regard to the description of the facility. The description provided in this FDD reflects the current results of the design process
Fuel Handling Facility Description Document
International Nuclear Information System (INIS)
M.A. LaFountain
2005-01-01
The purpose of the facility description document (FDD) is to establish the requirements and their bases that drive the design of the Fuel Handling Facility (FHF) to allow the design effort to proceed to license application. This FDD is a living document that will be revised at strategic points as the design matures. It identifies the requirements and describes the facility design as it currently exists, with emphasis on design attributes provided to meet the requirements. This FDD was developed as an engineering tool for design control. Accordingly, the primary audience and users are design engineers. It leads the design process with regard to the flow down of upper tier requirements onto the facility. Knowledge of these requirements is essential to performing the design process. It trails the design with regard to the description of the facility. This description is a reflection of the results of the design process to date
Data Handling and Parameter Estimation
DEFF Research Database (Denmark)
Sin, Gürkan; Gernaey, Krist
2016-01-01
Modelling is one of the key tools at the disposal of modern wastewater treatment professionals, researchers and engineers. It enables them to study and understand complex phenomena underlying the physical, chemical and biological performance of wastewater treatment plants at different temporal scales. For the models selected to interpret the experimental data, this chapter uses available models from the literature that are mostly based on the Activated Sludge Model (ASM) framework and their appropriate extensions (Henze et al., 2000). The chapter presents an overview of the most commonly used methods in the estimation of parameters from experimental batch data, namely: (i) data handling and validation, (ii) ... It is aimed at practitioners, engineers, and professionals. However, it is also expected that it will be useful both for graduate teaching as well as a stepping stone for academic researchers who wish to expand their theoretical interest in the subject.
Creating Web Sites The Missing Manual
MacDonald, Matthew
2006-01-01
Think you have to be a technical wizard to build a great web site? Think again. For anyone who wants to create an engaging web site--for either personal or business purposes--Creating Web Sites: The Missing Manual demystifies the process and provides tools, techniques, and expert guidance for developing a professional and reliable web presence. Like every Missing Manual, you can count on Creating Web Sites: The Missing Manual to be entertaining and insightful and complete with all the vital information, clear-headed advice, and detailed instructions you need to master the task at hand.
Scalable Tensor Factorizations with Missing Data
DEFF Research Database (Denmark)
Acar, Evrim; Dunlavy, Daniel M.; Kolda, Tamara G.
2010-01-01
In the presence of missing data, many important data sets will be discarded or improperly analyzed. Therefore, we need a robust and scalable approach for factorizing multi-way arrays (i.e., tensors) in the presence of missing data. We focus on one of the most well-known tensor factorizations, CANDECOMP/PARAFAC (CP) ... The resulting method, CP-WOPT, is shown to successfully factor tensors with noise and up to 70% missing data. Moreover, our approach is significantly faster than the leading alternative and scales to larger problems. To show the real-world usefulness of CP-WOPT, we illustrate its applicability on a novel EEG (electroencephalogram) ...
Cask system design guidance for robotic handling
International Nuclear Information System (INIS)
Griesmeyer, J.M.; Drotning, W.D.; Morimoto, A.K.; Bennett, P.C.
1990-10-01
Remote automated cask handling has the potential to reduce both the occupational exposure and the time required to process a nuclear waste transport cask at a handling facility. The ongoing Advanced Handling Technologies Project (AHTP) at Sandia National Laboratories is described. AHTP was initiated to explore the use of advanced robotic systems to perform cask handling operations at handling facilities for radioactive waste, and to provide guidance to cask designers regarding the impact of robotic handling on cask design. The proof-of-concept robotic systems developed in AHTP are intended to extrapolate from currently available commercial systems to the systems that will be available by the time that a repository would be open for operation. The project investigates those cask handling operations that would be performed at a nuclear waste repository facility during cask receiving and handling. The ongoing AHTP indicates that design guidance, rather than design specification, is appropriate, since the requirements for robotic handling do not place severe restrictions on cask design but rather focus on attention to detail and design for limited dexterity. The cask system design features that facilitate robotic handling operations are discussed, and results obtained from AHTP design and operation experience are summarized. The application of these design considerations is illustrated by discussion of the robot systems and their operation on cask feature mock-ups used in the AHTP project. 11 refs., 11 figs
Smooth individual level covariates adjustment in disease mapping.
Huque, Md Hamidul; Anderson, Craig; Walton, Richard; Woolford, Samuel; Ryan, Louise
2018-05-01
Spatial models for disease mapping should ideally account for covariates measured both at individual and area levels. The newly available "indiCAR" model fits the popular conditional autoregressive (CAR) model by accommodating both individual and group level covariates while adjusting for spatial correlation in the disease rates. This algorithm has been shown to be effective but assumes log-linear associations between individual level covariates and outcome. In many studies, the relationship between individual level covariates and the outcome may be non-log-linear, and methods to capture such nonlinearity between individual level covariate and outcome in spatial regression modeling are not well developed. In this paper, we propose a new algorithm, smooth-indiCAR, to fit an extension to the popular conditional autoregressive model that can accommodate both linear and nonlinear individual level covariate effects while adjusting for group level covariates and spatial correlation in the disease rates. In this formulation, the effect of a continuous individual level covariate is accommodated via penalized splines. We describe a two-step estimation procedure to obtain reliable estimates of individual and group level covariate effects where both individual and group level covariate effects are estimated separately. This distributed computing framework enhances its application in the Big Data domain with a large number of individual/group level covariates. We evaluate the performance of smooth-indiCAR through simulation. Our results indicate that the smooth-indiCAR method provides reliable estimates of all regression and random effect parameters. We illustrate our proposed methodology with an analysis of data on neutropenia admissions in New South Wales (NSW), Australia. © 2018 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
Central subspace dimensionality reduction using covariance operators.
Kim, Minyoung; Pavlovic, Vladimir
2011-04-01
We consider the task of dimensionality reduction informed by real-valued multivariate labels. The problem is often treated as Dimensionality Reduction for Regression (DRR), whose goal is to find a low-dimensional representation, the central subspace, of the input data that preserves the statistical correlation with the targets. A class of DRR methods exploits the notion of inverse regression (IR) to discover central subspaces. Whereas most existing IR techniques rely on explicit output space slicing, we propose a novel method called the Covariance Operator Inverse Regression (COIR) that generalizes IR to nonlinear input/output spaces without explicit target slicing. COIR's unique properties make DRR applicable to problem domains with high-dimensional output data corrupted by potentially significant amounts of noise. Unlike recent kernel dimensionality reduction methods that employ iterative nonconvex optimization, COIR yields a closed-form solution. We also establish the link between COIR, other DRR techniques, and popular supervised dimensionality reduction methods, including canonical correlation analysis and linear discriminant analysis. We then extend COIR to semi-supervised settings where many of the input points lack their labels. We demonstrate the benefits of COIR on several important regression problems in both fully supervised and semi-supervised settings.
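COIR builds on the inverse-regression idea that means of the inputs, grouped by the response, span the central subspace. As a rough illustration (not COIR itself, which avoids slicing), here is a minimal sketch of classical sliced inverse regression, the explicit-slicing technique the abstract says COIR generalizes; all names and parameters are illustrative.

```python
import numpy as np

def sir_directions(X, y, n_slices=5, n_components=1):
    """Classical sliced inverse regression (SIR): estimate central-subspace
    directions from the covariance of slice means of whitened inputs."""
    n, p = X.shape
    mu = X.mean(axis=0)
    cov = np.cov(X, rowvar=False)
    # whitening transform via the inverse square root of the covariance
    evals, evecs = np.linalg.eigh(cov)
    W = evecs @ np.diag(evals ** -0.5) @ evecs.T
    Z = (X - mu) @ W
    # slice observations by sorted response values
    order = np.argsort(y)
    slices = np.array_split(order, n_slices)
    # weighted covariance of the slice means
    M = np.zeros((p, p))
    for idx in slices:
        m = Z[idx].mean(axis=0)
        M += (len(idx) / n) * np.outer(m, m)
    # leading eigenvectors, mapped back to the original coordinates
    vals, vecs = np.linalg.eigh(M)
    B = W @ vecs[:, ::-1][:, :n_components]
    return B / np.linalg.norm(B, axis=0)

# demo: y depends on X only through the first coordinate
rng = np.random.default_rng(0)
X = rng.normal(size=(2000, 4))
y = X[:, 0] + 0.1 * rng.normal(size=2000)
b = sir_directions(X, y).ravel()
```

With this monotone link, SIR recovers the first coordinate axis (up to sign) as the estimated central-subspace direction.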
The covariance of GPS coordinates and frames
International Nuclear Information System (INIS)
Lachieze-Rey, Marc
2006-01-01
We explore, in the general relativistic context, the properties of the recently introduced global positioning system (GPS) coordinates, as well as those of the associated frames and coframes that they define. We show that they are covariant and completely independent of any observer. We show that standard spectroscopic and astrometric observations allow any observer to measure (i) the values of the GPS coordinates at his position, (ii) the components of his 4-velocity, and (iii) the components of the metric in the GPS frame. This gives the system unique value both for conceptual discussion (no frame dependence) and for practical use (the involved quantities are directly measurable): localization, motion monitoring, astrometry, cosmography and tests of gravitation theories. We show explicitly, in the general relativistic context, how an observer may estimate his position and motion, and reconstruct the components of the metric. This arises from two main results: the extension of the velocity fields of the probes to the whole (curved) spacetime, and the identification of the components of the observer's velocity in the GPS frame with the (inverse) observed redshifts of the probes. Specific cases (non-relativistic velocities, Minkowski and Friedmann-Lemaitre spacetimes, geodesic motions) are studied in detail.
Covariant path integrals on hyperbolic surfaces
Schaefer, Joe
1997-11-01
DeWitt's covariant formulation of path integration [B. De Witt, "Dynamical theory in curved spaces. I. A review of the classical and quantum action principles," Rev. Mod. Phys. 29, 377-397 (1957)] has two practical advantages over the traditional methods of "lattice approximations;" there is no ordering problem, and classical symmetries are manifestly preserved at the quantum level. Applying the spectral theorem for unbounded self-adjoint operators, we provide a rigorous proof of the convergence of certain path integrals on Riemann surfaces of constant curvature -1. The Pauli-DeWitt curvature correction term arises, as in DeWitt's work. Introducing a Fuchsian group Γ of the first kind, and a continuous, bounded, Γ-automorphic potential V, we obtain a Feynman-Kac formula for the automorphic Schrödinger equation on the Riemann surface ΓH. We analyze the Wick rotation and prove the strong convergence of the so-called Feynman maps [K. D. Elworthy, Path Integration on Manifolds, Mathematical Aspects of Superspace, edited by Seifert, Clarke, and Rosenblum (Reidel, Boston, 1983), pp. 47-90] on a dense set of states. Finally, we give a new proof of some results in C. Grosche and F. Steiner, "The path integral on the Poincare upper half plane and for Liouville quantum mechanics," Phys. Lett. A 123, 319-328 (1987).
Maternal death and near miss measurement
African Journals Online (AJOL)
ABEOLUGBENGAS
2008-05-26
May 26, 2008 ... Maternal health services need to be accountable more than ever ... of maternal death and near miss audit, surveillance and review is ... (d) A fundamental principle of these ... quality assurance in obstetrics in Nigeria - a.
Clustering with Missing Values: No Imputation Required
Wagstaff, Kiri
2004-01-01
Clustering algorithms can identify groups in large data sets, such as star catalogs and hyperspectral images. In general, clustering methods cannot analyze items that have missing data values. Common solutions either fill in the missing values (imputation) or ignore the missing data (marginalization). Imputed values are treated as just as reliable as the truly observed data, but they are only as good as the assumptions used to create them. In contrast, we present a method for encoding partially observed features as a set of supplemental soft constraints and introduce the KSC algorithm, which incorporates constraints into the clustering process. In experiments on artificial data and data from the Sloan Digital Sky Survey, we show that soft constraints are an effective way to enable clustering with missing values.
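KSC encodes partially observed features as soft constraints; as a simpler baseline illustrating the marginalization alternative the abstract contrasts it with, here is a k-means variant whose distances and centroid updates use only the observed entries. This is a sketch of marginalization, not the KSC algorithm, and all names are illustrative.

```python
import numpy as np

def masked_kmeans(X, mask, k=2, n_iter=50, seed=0):
    """K-means that marginalizes missing values: distances and centroid
    updates use only observed entries (mask == True); X may hold NaN
    wherever mask is False."""
    rng = np.random.default_rng(seed)
    centers = np.nan_to_num(X[rng.choice(len(X), k, replace=False)].copy())
    for _ in range(n_iter):
        # squared distance over observed dims only, normalized by their count
        d = np.stack([
            np.where(mask, (X - c) ** 2, 0).sum(1) / mask.sum(1)
            for c in centers
        ])
        labels = d.argmin(axis=0)
        for j in range(k):
            pts, m = X[labels == j], mask[labels == j]
            if len(pts):
                # per-feature mean over observed entries only
                s = np.where(m, pts, 0).sum(0)
                n = np.maximum(m.sum(0), 1)
                centers[j] = s / n
    return labels, centers

# demo: two well-separated clusters with ~20% of entries missing
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(0, 0.3, (50, 3)), rng.normal(5, 0.3, (50, 3))])
mask = rng.random(X.shape) > 0.2
mask[:, 0] = True            # keep at least one feature observed per row
Xm = np.where(mask, X, np.nan)
labels, centers = masked_kmeans(Xm, mask)
```

No imputed values are ever created; incomplete rows simply contribute to distances along their observed dimensions.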
Missed medical appointment among hypertensive and diabetic ...
African Journals Online (AJOL)
Keywords: Missed medical appointments, Hypertensive, Diabetic outpatients, Medication adherence, ... 12 weeks, at 95 % confidence level and 5 % error margin, 300 hypertensive ... monthly income and health insurance status of respondents ...
Missed Radiation Therapy and Cancer Recurrence
Patients who miss radiation therapy sessions during cancer treatment have an increased risk of their disease returning, even if they eventually complete their course of radiation treatment, according to a new study.
Evaluation of digital soil mapping approaches with large sets of environmental covariates
Nussbaum, Madlene; Spiess, Kay; Baltensweiler, Andri; Grob, Urs; Keller, Armin; Greiner, Lucie; Schaepman, Michael E.; Papritz, Andreas
2018-01-01
The spatial assessment of soil functions requires maps of basic soil properties. Unfortunately, these are either missing for many regions or are not available at the desired spatial resolution or down to the required soil depth. The field-based generation of large soil datasets and conventional soil maps remains costly. Meanwhile, legacy soil data and comprehensive sets of spatial environmental data are available for many regions. Digital soil mapping (DSM) approaches relating soil data (responses) to environmental data (covariates) face the challenge of building statistical models from large sets of covariates originating, for example, from airborne imaging spectroscopy or multi-scale terrain analysis. We evaluated six approaches for DSM in three study regions in Switzerland (Berne, Greifensee, ZH forest) by mapping the effective soil depth available to plants (SD), pH, soil organic matter (SOM), effective cation exchange capacity (ECEC), clay, silt, gravel content and fine fraction bulk density for four soil depths (totalling 48 responses). Models were built from 300-500 environmental covariates by selecting linear models through (1) grouped lasso and (2) an ad hoc stepwise procedure for robust external-drift kriging (georob). For (3) geoadditive models we selected penalized smoothing spline terms by component-wise gradient boosting (geoGAM). We further used two tree-based methods: (4) boosted regression trees (BRTs) and (5) random forest (RF). Lastly, we computed (6) weighted model averages (MAs) from the predictions obtained from methods 1-5. Lasso, georob and geoGAM successfully selected strongly reduced sets of covariates (subsets of 3-6 % of all covariates). Differences in predictive performance, tested on independent validation data, were mostly small and did not reveal a single best method for 48 responses. Nevertheless, RF was often the best among methods 1-5 (28 of 48 responses), but was outcompeted by MA for 14 of these 28 responses. RF tended to over
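Method 6 above combines the predictions of methods 1-5 by weighted model averaging. One common weighting, shown here as an assumption (the paper's exact scheme may differ), makes each member's weight proportional to its inverse validation mean squared error.

```python
import numpy as np

def weighted_model_average(preds, y_val):
    """Combine member predictions with weights proportional to inverse
    validation MSE. `preds` has shape (n_models, n_points)."""
    preds = np.asarray(preds, float)
    mse = ((preds - y_val) ** 2).mean(axis=1)
    w = (1.0 / mse) / (1.0 / mse).sum()          # weights sum to one
    return w, (w[:, None] * preds).sum(axis=0)   # weighted average prediction

# demo: two hypothetical members of differing accuracy
y = np.linspace(0.0, 1.0, 20)
p1 = y + 0.01        # accurate member
p2 = y + 0.5         # strongly biased member
w, avg = weighted_model_average([p1, p2], y)
```

The accurate member dominates the average, so the combined prediction outperforms the weaker member.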
Hot Laboratories and Remote Handling
International Nuclear Information System (INIS)
Bart, G.; Blanc, J.Y.; Duwe, R.
2003-01-01
The European Working Group on 'Hot Laboratories and Remote Handling' is firmly established as the major contact forum for the nuclear R and D facilities at the European scale. The yearly plenary meetings intend to: - Exchange experience on analytical methods, their implementation in hot cells, the methodologies used and their application in nuclear research; - Share experience on common infrastructure exploitation matters such as remote handling techniques, safety features, QA-certification, waste handling; - Promote normalization and co-operation, e.g., by looking at mutual complementarities; - Prospect present and future demands from the nuclear industry and draw strategic conclusions regarding further needs. The 41st plenary meeting was held at CEA Saclay from September 22 to 24, 2003 in the premises and with the technical support of the INSTN (National Institute for Nuclear Science and Technology). The Nuclear Energy Division of CEA sponsored it. The Saclay meeting was divided into three topical oral sessions covering: - Post irradiation examination: new analysis methods and methodologies, small specimen technology, programmes and results; - Hot laboratory infrastructure: decommissioning, refurbishment, waste, safety, nuclear transports; - Prospective research on materials for future applications: innovative fuels (Generation IV, HTR, transmutation, ADS), spallation source materials, and candidate materials for fusion reactors. A poster session was opened to transport companies and laboratory suppliers. The meeting addressed the following items in three sessions: Session 1 - Post Irradiation Examinations: out of 12 papers (including 1 poster), 7 dealt with surface and solid state micro analysis, another one with an equally complex wet chemical instrumental analytical technique, while the other four papers (including the poster) presented new concepts for digital x-ray image analysis; Session 2 - Hot laboratory infrastructure (including waste theme), which was
Development of commercial robots for radwaste handling
International Nuclear Information System (INIS)
Colborn, K.A.
1988-01-01
The cost and dose burden associated with low level radwaste handling activities is a matter of increasing concern to the commercial nuclear power industry. This concern is evidenced by the fact that many utilities have begun to re-evaluate waste generation, handling, and disposal activities at their plants in an effort to improve their overall radwaste handling operations. This paper reports on the project, Robots for Radwaste Handling, which was initiated to identify the potential of robots to improve radwaste handling operations. The project has focussed on the potential of remote or automated technology to improve well defined, recognizable radwaste operations, concentrating on repetitive, low skill level radwaste handling and decontamination tasks which involve significant radiation exposure
Spinors, tensors and the covariant form of Dirac's equation
International Nuclear Information System (INIS)
Chen, W.Q.; Cook, A.H.
1986-01-01
The relations between tensors and spinors are used to establish the form of the covariant derivative of a spinor, making use of the fact that certain bilinear combinations of spinors are vectors. The covariant forms of Dirac's equation are thus obtained and examples in specific coordinate systems are displayed. (author)
A scale invariant covariance structure on jet space
DEFF Research Database (Denmark)
Pedersen, Kim Steenstrup; Loog, Marco; Markussen, Bo
2005-01-01
This paper considers scale invariance of statistical image models. We study statistical scale invariance of the covariance structure of jet space under scale space blurring and derive the necessary structure and conditions of the jet covariance matrix in order for it to be scale invariant. As par...
Transformation of covariant quark Wigner operator to noncovariant one
International Nuclear Information System (INIS)
Selikhov, A.V.
1989-01-01
The gauge in which covariant and noncovariant quark Wigner operators coincide has been found. In this gauge the representation of the vector potential via the field strength tensor is valid. The system of equations for the coefficients of the covariant Wigner operator expansion in the basis of the γ-matrix algebra is obtained. 12 refs.; 3 figs
Covariance as input to and output from resonance analyses
International Nuclear Information System (INIS)
Larson, N.M.
1992-01-01
Accurate data analysis requires understanding of the roles played by both data and parameter covariance matrices. In this paper the entire data reduction/analysis process is examined, for neutron-induced reactions in the resonance region. Interrelationships between data and parameter covariance matrices are examined and alternative reduction/analysis methods discussed
A three domain covariance framework for EEG/MEG data
Ros, B.P.; Bijma, F.; de Gunst, M.C.M.; de Munck, J.C.
2015-01-01
In this paper we introduce a covariance framework for the analysis of single subject EEG and MEG data that takes into account observed temporal stationarity on small time scales and trial-to-trial variations. We formulate a model for the covariance matrix, which is a Kronecker product of three
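The model above factorizes the covariance into per-domain components. A minimal sketch, in Python with NumPy, of assembling a Kronecker-structured covariance from small per-domain factors; the factor values below are illustrative assumptions, not the paper's EEG/MEG estimates.

```python
import numpy as np

def kron_covariance(*factors):
    """Kronecker product of per-domain covariance factors, e.g.
    Sigma = Sigma_space ⊗ Sigma_time ⊗ Sigma_trial."""
    out = np.array([[1.0]])
    for f in factors:
        out = np.kron(out, f)
    return out

# toy per-domain factors: sensors, time points, trials
S_space = np.array([[1.0, 0.3], [0.3, 1.0]])
S_time  = np.array([[1.0, 0.5], [0.5, 1.0]])
S_trial = np.eye(2)
Sigma = kron_covariance(S_space, S_time, S_trial)
```

The full matrix has dimension equal to the product of the factor dimensions, yet is parameterized by only the small per-domain blocks, which is what makes such models tractable for high-dimensional multichannel data.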
Application of covariance analysis to feed/ration experimental data
African Journals Online (AJOL)
Prince Acheampong
ABSTRACT. The use of Analysis of Covariance (ANOCOVA) on feed/ration experimental data for birds was examined. Correlation and Regression analyses were used to adjust for the covariate – initial weight of the experimental birds. The Fisher's F statistic for the straightforward Analysis of Variance (ANOVA) showed ...
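The covariate adjustment described above can be illustrated with a minimal ANCOVA-style computation. The pooled-slope shortcut and the simulated initial weights are assumptions for illustration, not the study's data or exact procedure.

```python
import numpy as np

def ancova_adjusted_means(y, group, x):
    """ANCOVA-style adjustment: regress the outcome on the covariate
    (pooled slope), then compare group means of the outcome adjusted
    to the overall covariate mean."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    b = np.cov(x, y)[0, 1] / np.var(x, ddof=1)   # pooled regression slope
    y_adj = y - b * (x - x.mean())               # adjust to common x
    groups = sorted(set(group))
    g = np.asarray(group)
    return {gr: y_adj[g == gr].mean() for gr in groups}

# demo: two rations; birds on ration "B" started heavier (the covariate),
# but there is no true ration effect on final weight beyond initial weight
rng = np.random.default_rng(2)
x_a = rng.normal(100, 5, 30); x_b = rng.normal(110, 5, 30)   # initial weights
y_a = 2.0 * x_a + rng.normal(0, 1, 30)
y_b = 2.0 * x_b + rng.normal(0, 1, 30)
means = ancova_adjusted_means(np.r_[y_a, y_b], ["A"] * 30 + ["B"] * 30,
                              np.r_[x_a, x_b])
```

The raw group means differ substantially because of the initial-weight imbalance, while the covariate-adjusted means are nearly equal, which is exactly the correction ANOCOVA provides.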
Covariant Theory of Gravitation in the Spacetime with Finsler Structure
Huang, Xin-Bing
2007-01-01
A theory of gravitation in spacetime with Finsler structure is constructed. It is shown that the theory maintains general covariance and reduces to Einstein's general relativity when the Finsler structure is Riemannian. This covariant theory of gravitation is therefore an elegant realization of Einstein's thoughts on gravitation in spacetime with Finsler structure.
Some observations on interpolating gauges and non-covariant gauges
Indian Academy of Sciences (India)
We discuss the viability of using interpolating gauges to define the non-covariant gauges starting from the covariant ones. We draw attention to the need for a very careful treatment of the boundary condition defining term. We show that the boundary condition needed to maintain gauge-invariance as the interpolating parameter ...
Theory of Covariance Equivalent ARMAV Models of Civil Engineering Structures
DEFF Research Database (Denmark)
Andersen, P.; Brincker, Rune; Kirkegaard, Poul Henning
1996-01-01
In this paper the theoretical background for using covariance equivalent ARMAV models in modal analysis is discussed. It is shown how to obtain a covariance equivalent ARMA model for a univariate linear second order continuous-time system excited by Gaussian white noise. This result is generalized...
Theory of Covariance Equivalent ARMAV Models of Civil Engineering Structures
DEFF Research Database (Denmark)
Andersen, P.; Brincker, Rune; Kirkegaard, Poul Henning
In this paper the theoretical background for using covariance equivalent ARMAV models in modal analysis is discussed. It is shown how to obtain a covariance equivalent ARMA model for a univariate linear second order continuous-time system excited by Gaussian white noise. This result is generalize...
Validity of covariance models for the analysis of geographical variation
DEFF Research Database (Denmark)
Guillot, Gilles; Schilling, Rene L.; Porcu, Emilio
2014-01-01
1. Due to the availability of large molecular data-sets, covariance models are increasingly used to describe the structure of genetic variation as an alternative to more heavily parametrised biological models. 2. We focus here on a class of parametric covariance models that received sustained att...
The K-Step Spatial Sign Covariance Matrix
Croux, C.; Dehon, C.; Yadine, A.
2010-01-01
The Sign Covariance Matrix is an orthogonal equivariant estimator of multivariate scale. It is often used as an easy-to-compute and highly robust estimator. In this paper we propose a k-step version of the Sign Covariance Matrix, which improves its efficiency while keeping the maximal breakdown
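The one-step estimator underlying the k-step proposal can be sketched as follows: each observation contributes only the direction of its deviation from a robust center, which is what makes the estimator insensitive to gross outliers. The coordinatewise-median center used here is a simplification (the spatial median is more common), and the demo data are illustrative.

```python
import numpy as np

def sign_covariance(X, center=None):
    """Spatial Sign Covariance Matrix: average outer product of the unit
    direction vectors (x_i - center) / ||x_i - center||."""
    X = np.asarray(X, float)
    if center is None:
        center = np.median(X, axis=0)       # coordinatewise median, for simplicity
    D = X - center
    norms = np.linalg.norm(D, axis=1, keepdims=True)
    S = D / np.where(norms == 0, 1, norms)  # unit directions (zero rows stay zero)
    return S.T @ S / len(X)

# demo: spherical data, then the same data with one gross outlier
rng = np.random.default_rng(3)
X = rng.normal(size=(200, 3))
V = sign_covariance(X)
X_out = X.copy()
X_out[0] = [1e6, 0.0, 0.0]                  # gross outlier
V_out = sign_covariance(X_out)
```

Because the outlier enters only through its (bounded) direction, the estimate barely moves, in contrast to the ordinary sample covariance, which the outlier would destroy.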
On the bilinear covariants associated to mass dimension one spinors
Energy Technology Data Exchange (ETDEWEB)
Silva, J.M.H. da; Villalobos, C.H.C.; Rogerio, R.J.B. [DFQ, UNESP, Guaratingueta, SP (Brazil); Scatena, E. [Universidade Federal de Santa Catarina-CEE, Blumenau, SC (Brazil)
2016-10-15
In this paper we approach the issue of Clifford algebra basis deformation, allowing for bilinear covariants associated to Elko spinors which satisfy the Fierz-Pauli-Kofink identities. We present a complete analysis of covariance, taking into account the involved dual structure associated to Elko spinors. Moreover, the possible generalizations to the recently presented new dual structure are performed. (orig.)
Multilevel maximum likelihood estimation with application to covariance matrices
Czech Academy of Sciences Publication Activity Database
Turčičová, Marie; Mandel, J.; Eben, Kryštof
Published online: 23 January 2018. ISSN 0361-0926. R&D Projects: GA ČR GA13-34856S. Institutional support: RVO:67985807. Keywords: Fisher information; High dimension; Hierarchical maximum likelihood; Nested parameter spaces; Spectral diagonal covariance model; Sparse inverse covariance model. Subject RIV: BB - Applied Statistics, Operational Research. Impact factor: 0.311, year: 2016
Positive semidefinite integrated covariance estimation, factorizations and asynchronicity
DEFF Research Database (Denmark)
Boudt, Kris; Laurent, Sébastien; Lunde, Asger
2017-01-01
An estimator of the ex-post covariation of log-prices under asynchronicity and microstructure noise is proposed. It uses the Cholesky factorization of the covariance matrix in order to exploit the heterogeneity in trading intensities to estimate the different parameters sequentially with as many...
Precomputing Process Noise Covariance for Onboard Sequential Filters
Olson, Corwin G.; Russell, Ryan P.; Carpenter, J. Russell
2017-01-01
Process noise is often used in estimation filters to account for unmodeled and mismodeled accelerations in the dynamics. The process noise covariance acts to inflate the state covariance over propagation intervals, increasing the uncertainty in the state. In scenarios where the acceleration errors change significantly over time, the standard process noise covariance approach can fail to provide effective representation of the state and its uncertainty. Consider covariance analysis techniques provide a method to precompute a process noise covariance profile along a reference trajectory using known model parameter uncertainties. The process noise covariance profile allows significantly improved state estimation and uncertainty representation over the traditional formulation. As a result, estimation performance on par with the consider filter is achieved for trajectories near the reference trajectory without the additional computational cost of the consider filter. The new formulation also has the potential to significantly reduce the trial-and-error tuning currently required of navigation analysts. A linear estimation problem as described in several previous consider covariance analysis studies is used to demonstrate the effectiveness of the precomputed process noise covariance, as well as a nonlinear descent scenario at the asteroid Bennu with optical navigation.
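The standard formulation the Note improves on can be sketched as a single Kalman-filter time update, where the process noise covariance Q inflates the propagated state covariance. The constant-velocity model and noise level below are illustrative assumptions; the Note's contribution is precomputing a time-varying Q profile along a reference trajectory rather than using a constant Q.

```python
import numpy as np

def time_update(x, P, F, Q):
    """Propagate state estimate and covariance one step through linear
    dynamics F; Q inflates P to account for unmodeled accelerations."""
    return F @ x, F @ P @ F.T + Q

dt = 1.0
F = np.array([[1.0, dt],
              [0.0, 1.0]])                   # constant-velocity model
Q = np.array([[dt**3 / 3, dt**2 / 2],
              [dt**2 / 2, dt]]) * 1e-3       # white-acceleration process noise
x = np.array([0.0, 1.0])                     # position, velocity
P = np.eye(2) * 0.1
x1, P1 = time_update(x, P, F, Q)
```

Over each propagation interval the covariance grows, so that the filter does not become overconfident between measurements; a precomputed Q profile simply substitutes a different Q at each step.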
Cross-population myelination covariance of human cerebral cortex.
Ma, Zhiwei; Zhang, Nanyin
2017-09-01
Cross-population covariance of brain morphometric quantities provides a measure of interareal connectivity, as it is believed to be determined by the coordinated neurodevelopment of connected brain regions. Although useful, structural covariance analysis predominantly employed bulky morphological measures with mixed compartments, whereas studies of the structural covariance of any specific subdivisions such as myelin are rare. Characterizing myelination covariance is of interest, as it will reveal connectivity patterns determined by coordinated development of myeloarchitecture between brain regions. Using myelin content MRI maps from the Human Connectome Project, here we showed that the cortical myelination covariance was highly reproducible, and exhibited a brain organization similar to that previously revealed by other connectivity measures. Additionally, the myelination covariance network shared common topological features of human brain networks such as small-worldness. Furthermore, we found that the correlation between myelination covariance and resting-state functional connectivity (RSFC) was uniform within each resting-state network (RSN), but could considerably vary across RSNs. Interestingly, this myelination covariance-RSFC correlation was appreciably stronger in sensory and motor networks than cognitive and polymodal association networks, possibly due to their different circuitry structures. This study has established a new brain connectivity measure specifically related to axons, and this measure can be valuable to investigating coordinated myeloarchitecture development. Hum Brain Mapp 38:4730-4743, 2017. © 2017 Wiley Periodicals, Inc.
Liu, Siwei; Molenaar, Peter C M
2014-12-01
This article introduces iVAR, an R program for imputing missing data in multivariate time series on the basis of vector autoregressive (VAR) models. We conducted a simulation study to compare iVAR with three methods for handling missing data: listwise deletion, imputation with sample means and variances, and multiple imputation ignoring time dependency. The results showed that iVAR produces better estimates for the cross-lagged coefficients than do the other three methods. We demonstrate the use of iVAR with an empirical example of time series electrodermal activity data and discuss the advantages and limitations of the program.
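A toy version of the idea behind iVAR (not the R package itself): fit a VAR(1) model by least squares on time points whose adjacent pairs are fully observed, then fill each missing entry with its one-step prediction. The simulation, the VAR(1) restriction, and all names are illustrative assumptions.

```python
import numpy as np

def var1_impute(Y, missing):
    """Impute missing entries of a multivariate time series Y using a
    VAR(1) fitted on fully observed (t-1, t) pairs: Y_t ≈ Y_{t-1} @ A."""
    Y = np.asarray(Y, float)
    obs = ~missing
    ok = obs[:-1].all(axis=1) & obs[1:].all(axis=1)
    X_past, X_next = Y[:-1][ok], Y[1:][ok]
    A, *_ = np.linalg.lstsq(X_past, X_next, rcond=None)
    out = Y.copy()
    for t in range(1, len(Y)):
        for j in np.where(missing[t])[0]:
            if obs[t - 1].all():                 # predict from previous step
                out[t, j] = Y[t - 1] @ A[:, j]
    return out

# demo: simulate a bivariate VAR(1), delete one value, impute it
rng = np.random.default_rng(4)
A_true = np.array([[0.8, 0.1], [0.0, 0.5]])
Y = np.zeros((300, 2))
for t in range(1, 300):
    Y[t] = Y[t - 1] @ A_true + rng.normal(0, 0.1, 2)
missing = np.zeros(Y.shape, dtype=bool)
missing[150, 0] = True
Y_obs = Y.copy(); Y_obs[150, 0] = np.nan
Y_imp = var1_impute(Y_obs, missing)
```

Because the imputation respects the series' time dependency, the cross-lagged structure is preserved, which is exactly where listwise deletion and mean imputation fall short in the simulation study above.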
Sequence trajectory generation for garment handling systems
Liu, Honghai; Lin, Hua
2008-01-01
This paper presents a novel generic approach to the planning strategy of garment handling systems. An assumption is proposed to separate the components of such systems into a component for intelligent gripper techniques and a component for handling planning strategies. Researchers can concentrate on one of the two components first, then merge the two problems together. An algorithm is addressed to generate the trajectory position and a clothes handling sequence of clothes partitions, which ar...
Enclosure for handling high activity materials
International Nuclear Information System (INIS)
Jimeno de Osso, F.
1977-01-01
One of the most important problems that are met at the laboratories producing and handling radioisotopes is that of designing, building and operating enclosures suitable for the safe handling of active substances. With this purpose in mind, an enclosure has been designed and built for handling moderately high activities under a shielding made of 150 mm thick lead. In this report a description is given of those aspects that may be of interest to people working in this field. (Author)
Enclosure for handling high activity materials abstract
International Nuclear Information System (INIS)
Jimeno de Osso, F.; Dominguez Rodriguez, G.; Cruz Castillo, F. de la; Rodriguez Esteban, A.
1977-01-01
One of the most important problems that are met at the laboratories producing and handling radioisotopes is that of designing, building and operating enclosures suitable for the safe handling of active substances. With that purpose in mind, an enclosure has been designed and built for handling moderately high activities under a shielding made of 150 mm thick lead. A description is given of those aspects that may be of interest to people working in this field. (author)
Scheduling of outbound luggage handling at airports
DEFF Research Database (Denmark)
Barth, Torben C.; Pisinger, David
2012-01-01
This article considers the outbound luggage handling problem at airports. The problem is to assign handling facilities to outbound flights and decide about the handling start time. This dynamic, near real-time assignment problem is part of the daily airport operations. Quality, efficiency ... Another solution method is a decomposition approach. The problem is divided into different subproblems and solved in iterative steps. The different solution approaches are tested on real world data from Frankfurt Airport.
ATA diagnostic data handling system: an overview
International Nuclear Information System (INIS)
Chambers, F.W.; Kallman, J.; McDonald, J.; Slominski, M.
1984-01-01
The functions to be performed by the ATA diagnostic data handling system are discussed. The capabilities of the present data acquisition system (System 0) are presented. The goals for the next generation acquisition system (System 1), currently under design, are discussed. Facilities on the Octopus system for data handling are reviewed. Finally, we discuss what has been learned about diagnostics and computer based data handling during the past year
Enclosure for handling high activity materials
Energy Technology Data Exchange (ETDEWEB)
Jimeno de Osso, F
1977-07-01
One of the most important problems that are met at the laboratories producing and handling radioisotopes is that of designing, building and operating enclosures suitable for the safe handling of active substances. With this purpose in mind, an enclosure has been designed and built for handling moderately high activities under a shielding made of 150 mm thick lead. In this report a description is given of those aspects that may be of interest to people working in this field. (Author)
HIGH DIMENSIONAL COVARIANCE MATRIX ESTIMATION IN APPROXIMATE FACTOR MODELS.
Fan, Jianqing; Liao, Yuan; Mincheva, Martina
2011-01-01
The variance covariance matrix plays a central role in the inferential theories of high dimensional factor models in finance and economics. Popular regularization methods of directly exploiting sparsity are not directly applicable to many financial problems. Classical methods of estimating the covariance matrices are based on the strict factor models, assuming independent idiosyncratic components. This assumption, however, is restrictive in practical applications. By assuming sparse error covariance matrix, we allow the presence of the cross-sectional correlation even after taking out common factors, and it enables us to combine the merits of both methods. We estimate the sparse covariance using the adaptive thresholding technique as in Cai and Liu (2011), taking into account the fact that direct observations of the idiosyncratic components are unavailable. The impact of high dimensionality on the covariance matrix estimation based on the factor structure is then studied.
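A simplified sketch of the covariance sparsification step described above. For brevity it applies a single universal soft threshold to the off-diagonal entries; the adaptive rule of Cai and Liu referenced in the abstract uses entry-specific thresholds, and here the thresholding is applied to a raw sample covariance rather than to the idiosyncratic residual covariance of a factor model.

```python
import numpy as np

def soft_threshold_cov(S, tau):
    """Soft-threshold the off-diagonal entries of a covariance matrix,
    leaving the variances on the diagonal untouched."""
    out = np.sign(S) * np.maximum(np.abs(S) - tau, 0.0)
    np.fill_diagonal(out, np.diag(S))
    return out

# demo: sample covariance of data whose true covariance is the identity
rng = np.random.default_rng(5)
X = rng.normal(size=(200, 6))
S = np.cov(X, rowvar=False)
S_hat = soft_threshold_cov(S, tau=0.3)
```

Small spurious off-diagonal entries, which are pure sampling noise here, are set exactly to zero, yielding a sparse and better-conditioned estimate.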
Covariate-adjusted measures of discrimination for survival data
DEFF Research Database (Denmark)
White, Ian R; Rapsomaniki, Eleni; Frikke-Schmidt, Ruth
2015-01-01
by the study design (e.g. age and sex) influence discrimination and can make it difficult to compare model discrimination between studies. Although covariate adjustment is a standard procedure for quantifying disease-risk factor associations, there are no covariate adjustment methods for discrimination ... statistics in censored survival data. OBJECTIVE: To develop extensions of the C-index and D-index that describe the prognostic ability of a model adjusted for one or more covariate(s). METHOD: We define a covariate-adjusted C-index and D-index for censored survival data, propose several estimators ..., and investigate their performance in simulation studies and in data from a large individual participant data meta-analysis, the Emerging Risk Factors Collaboration. RESULTS: The proposed methods perform well in simulations. In the Emerging Risk Factors Collaboration data, the age-adjusted C-index and D-index were
Parametric Covariance Model for Horizon-Based Optical Navigation
Hikes, Jacob; Liounis, Andrew J.; Christian, John A.
2016-01-01
This Note presents an entirely parametric version of the covariance for horizon-based optical navigation measurements. The covariance can be written as a function of only the spacecraft position, two sensor design parameters, the illumination direction, the size of the observed planet, the size of the lit arc to be used, and the total number of observed horizon points. As a result, one may now more clearly understand the sensitivity of horizon-based optical navigation performance as a function of these key design parameters, which is insight that was obscured in previous (and nonparametric) versions of the covariance. Finally, the new parametric covariance is shown to agree with both the nonparametric analytic covariance and results from a Monte Carlo analysis.
Moderating the Covariance Between Family Member’s Substance Use Behavior
Eaves, Lindon J.; Neale, Michael C.
2014-01-01
Twin and family studies implicitly assume that the covariation between family members remains constant across differences in age between the members of the family. However, age-specificity in gene expression for shared environmental factors could generate higher correlations between family members who are more similar in age. Cohort effects (cohort × genotype or cohort × common environment) could have the same effects, and both potentially reduce effect sizes estimated in genome-wide association studies where the subjects are heterogeneous in age. In this paper we describe a model in which the covariance between twins and non-twin siblings is moderated as a function of age difference. We describe the details of the model and simulate data using a variety of different parameter values to demonstrate that model fitting returns unbiased parameter estimates. Power analyses are then conducted to estimate the sample sizes required to detect the effects of moderation in a design of twins and siblings. Finally, the model is applied to data on cigarette smoking. We find that (1) the model effectively recovers the simulated parameters, (2) the power is relatively low and therefore requires large sample sizes before small to moderate effect sizes can be found reliably, and (3) the genetic covariance between siblings for smoking behavior decays very rapidly. Result 3 implies that, e.g., genome-wide studies of smoking behavior that use individuals assessed at different ages, or belonging to different birth-year cohorts may have had substantially reduced power to detect effects of genotype on cigarette use. It also implies that significant special twin environmental effects can be explained by age-moderation in some cases. This effect likely contributes to the missing heritability paradox. PMID:24647834
Structural covariance networks in the mouse brain.
Pagani, Marco; Bifone, Angelo; Gozzi, Alessandro
2016-04-01
The presence of networks of correlation between regional gray matter volume as measured across subjects in a group of individuals has been consistently described in several human studies, an approach termed structural covariance MRI (scMRI). Complementary to prevalent brain mapping modalities like functional and diffusion-weighted imaging, the approach can provide precious insights into the mutual influence of trophic and plastic processes in health and pathological states. To investigate whether analogous scMRI networks are present in lower mammal species amenable to genetic and experimental manipulation such as the laboratory mouse, we employed high resolution morphoanatomical MRI in a large cohort of genetically-homogeneous wild-type mice (C57Bl6/J) and mapped scMRI networks using a seed-based approach. We show that the mouse brain exhibits robust homotopic scMRI networks in both primary and associative cortices, a finding corroborated by independent component analyses of cortical volumes. Subcortical structures also showed highly symmetric inter-hemispheric correlations, with evidence of distributed antero-posterior networks in diencephalic regions of the thalamus and hypothalamus. Hierarchical cluster analysis revealed six identifiable clusters of cortical and sub-cortical regions corresponding to previously described neuroanatomical systems. Our work documents the presence of homotopic cortical and subcortical scMRI networks in the mouse brain, thus supporting the use of this species to investigate the elusive biological and neuroanatomical underpinnings of scMRI network development and its derangement in neuropathological states. The identification of scMRI networks in genetically homogeneous inbred mice is consistent with the emerging view of a key role of environmental factors in shaping these correlational networks. Copyright © 2016 Elsevier Inc. All rights reserved.
Covariant path integrals on hyperbolic surfaces
International Nuclear Information System (INIS)
Schaefer, J.
1997-01-01
DeWitt's covariant formulation of path integration [B. DeWitt, "Dynamical theory in curved spaces. I. A review of the classical and quantum action principles," Rev. Mod. Phys. 29, 377-397 (1957)] has two practical advantages over the traditional methods of "lattice approximations": there is no ordering problem, and classical symmetries are manifestly preserved at the quantum level. Applying the spectral theorem for unbounded self-adjoint operators, we provide a rigorous proof of the convergence of certain path integrals on Riemann surfaces of constant curvature -1. The Pauli-DeWitt curvature correction term arises, as in DeWitt's work. Introducing a Fuchsian group Γ of the first kind, and a continuous, bounded, Γ-automorphic potential V, we obtain a Feynman-Kac formula for the automorphic Schroedinger equation on the Riemann surface Γ\H. We analyze the Wick rotation and prove the strong convergence of the so-called Feynman maps [K. D. Elworthy, Path Integration on Manifolds, Mathematical Aspects of Superspace, edited by Seifert, Clarke, and Rosenblum (Reidel, Boston, 1983), pp. 47-90] on a dense set of states. Finally, we give a new proof of some results in C. Grosche and F. Steiner, "The path integral on the Poincare upper half plane and for Liouville quantum mechanics," Phys. Lett. A 123, 319-328 (1987). Copyright 1997 American Institute of Physics
Schwinger mechanism in linear covariant gauges
Aguilar, A. C.; Binosi, D.; Papavassiliou, J.
2017-02-01
In this work we explore the applicability of a special gluon mass generating mechanism in the context of the linear covariant gauges. In particular, the implementation of the Schwinger mechanism in pure Yang-Mills theories hinges crucially on the inclusion of massless bound-state excitations in the fundamental nonperturbative vertices of the theory. The dynamical formation of such excitations is controlled by a homogeneous linear Bethe-Salpeter equation, whose nontrivial solutions have been studied only in the Landau gauge. Here, the form of this integral equation is derived for general values of the gauge-fixing parameter, under a number of simplifying assumptions that reduce the degree of technical complexity. The kernel of this equation consists of fully dressed gluon propagators, for which recent lattice data are used as input, and of three-gluon vertices dressed by a single form factor, which is modeled by means of certain physically motivated Ansätze. The gauge-dependent terms contributing to this kernel impose considerable restrictions on the infrared behavior of the vertex form factor; specifically, only infrared finite Ansätze are compatible with the existence of nontrivial solutions. When such Ansätze are employed, the numerical study of the integral equation reveals a continuity in the type of solutions as one varies the gauge-fixing parameter, indicating a smooth departure from the Landau gauge. Instead, the logarithmically divergent form factor displaying the characteristic "zero crossing," while perfectly consistent in the Landau gauge, has to undergo a dramatic qualitative transformation away from it, in order to yield acceptable solutions. The possible implications of these results are briefly discussed.
Impact of the 235U covariance data in benchmark calculations
International Nuclear Information System (INIS)
Leal, Luiz; Mueller, Don; Arbanas, Goran; Wiarda, Dorothea; Derrien, Herve
2008-01-01
The error estimation for calculated quantities relies on nuclear data uncertainty information available in the basic nuclear data libraries such as the U.S. Evaluated Nuclear Data File (ENDF/B). The uncertainty files (covariance matrices) in the ENDF/B library are generally obtained from analysis of experimental data. In the resonance region, the computer code SAMMY is used for analyses of experimental data and generation of resonance parameters. In addition to resonance parameters evaluation, SAMMY also generates resonance parameter covariance matrices (RPCM). SAMMY uses the generalized least-squares formalism (Bayes' method) together with the resonance formalism (R-matrix theory) for analysis of experimental data. Two approaches are available for creation of resonance-parameter covariance data. (1) During the data-evaluation process, SAMMY generates both a set of resonance parameters that fit the experimental data and the associated resonance-parameter covariance matrix. (2) For existing resonance-parameter evaluations for which no resonance-parameter covariance data are available, SAMMY can retroactively create a resonance-parameter covariance matrix. The retroactive method was used to generate covariance data for 235U. The resulting 235U covariance matrix was then used as input to the PUFF-IV code, which processed the covariance data into multigroup form, and to the TSUNAMI code, which calculated the uncertainty in the multiplication factor due to uncertainty in the experimental cross sections. The objective of this work is to demonstrate the use of the 235U covariance data in calculations of critical benchmark systems. (authors)
A New Approach for Nuclear Data Covariance and Sensitivity Generation
International Nuclear Information System (INIS)
Leal, L.C.; Larson, N.M.; Derrien, H.; Kawano, T.; Chadwick, M.B.
2005-01-01
Covariance data are required to correctly assess uncertainties in design parameters in nuclear applications. The error estimation of calculated quantities relies on the nuclear data uncertainty information available in the basic nuclear data libraries, such as the U.S. Evaluated Nuclear Data File, ENDF/B. The uncertainty files in the ENDF/B library are obtained from the analysis of experimental data and are stored as variance and covariance data. The computer code SAMMY is used in the analysis of the experimental data in the resolved and unresolved resonance energy regions. The data fitting of cross sections is based on generalized least-squares formalism (Bayes' theory) together with the resonance formalism described by R-matrix theory. Two approaches are used in SAMMY for the generation of resonance-parameter covariance data. In the evaluation process SAMMY generates a set of resonance parameters that fit the data, and, in addition, it also provides the resonance-parameter covariances. For existing resonance-parameter evaluations where no resonance-parameter covariance data are available, the alternative is to use an approach called the 'retroactive' resonance-parameter covariance generation. In the high-energy region the methodology for generating covariance data consists of least-squares fitting and model parameter adjustment. The least-squares fitting method calculates covariances directly from experimental data. The parameter adjustment method employs a nuclear model calculation such as the optical model and the Hauser-Feshbach model, and estimates a covariance for the nuclear model parameters. In this paper we describe the application of the retroactive method and the parameter adjustment method to generate covariance data for the gadolinium isotopes
Ondeck, Nathaniel T; Fu, Michael C; Skrip, Laura A; McLynn, Ryan P; Su, Edwin P; Grauer, Jonathan N
2018-03-01
Despite the advantages of large, national datasets, one continuing concern is missing data values. Complete case analysis, where only cases with complete data are analyzed, is commonly used rather than more statistically rigorous approaches such as multiple imputation. This study characterizes the potential selection bias introduced using complete case analysis and compares the results of common regressions using both techniques following unicompartmental knee arthroplasty. Patients undergoing unicompartmental knee arthroplasty were extracted from the 2005 to 2015 National Surgical Quality Improvement Program. As examples, the demographics of patients with and without missing preoperative albumin and hematocrit values were compared. Missing data were then treated with both complete case analysis and multiple imputation (an approach that reproduces the variation and associations that would have been present in a full dataset) and the conclusions of common regressions for adverse outcomes were compared. A total of 6117 patients were included, of which 56.7% were missing at least one value. Younger, female, and healthier patients were more likely to have missing preoperative albumin and hematocrit values. The use of complete case analysis removed 3467 patients from the study in comparison with multiple imputation which included all 6117 patients. The 2 methods of handling missing values led to differing associations of low preoperative laboratory values with commonly studied adverse outcomes. The use of complete case analysis can introduce selection bias and may lead to different conclusions in comparison with the statistically rigorous multiple imputation approach. Joint surgeons should consider the methods of handling missing values when interpreting arthroplasty research. Copyright © 2017 Elsevier Inc. All rights reserved.
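The contrast between complete case analysis and multiple imputation described above can be sketched as follows. This is an illustrative toy example on synthetic data, not the study's NSQIP pipeline: the variable names, the missingness model (younger patients more likely to be missing albumin, as in the abstract), and the simple regression-with-noise imputer standing in for full multiple imputation are all assumptions.

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)
n = 1000
age = rng.normal(65, 10, n)
albumin = 4.0 - 0.01 * age + rng.normal(0, 0.3, n)
# Missing at random: younger patients are more likely to lack albumin values
miss = rng.random(n) < 1 / (1 + np.exp((age - 60) / 5))
df = pd.DataFrame({"age": age, "albumin": np.where(miss, np.nan, albumin)})

# Complete case analysis: drop every row with a missing value
cc = df.dropna()

# Crude stand-in for multiple imputation: m regression-based imputations
# with residual noise, then pool the point estimates across imputations
m, est = 5, []
obs = df.dropna()
beta = np.polyfit(obs["age"], obs["albumin"], 1)
resid_sd = np.std(obs["albumin"] - np.polyval(beta, obs["age"]))
for _ in range(m):
    filled = df["albumin"].copy()
    idx = filled.isna()
    filled[idx] = np.polyval(beta, df.loc[idx, "age"]) + rng.normal(0, resid_sd, idx.sum())
    est.append(filled.mean())

print(len(df), len(cc), np.mean(est))
```

The complete-case estimate uses only the rows that survive `dropna`, while the imputed estimate uses all rows, which is the source of the differing conclusions the abstract reports.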
Hot Laboratories and Remote Handling
International Nuclear Information System (INIS)
2007-01-01
The Opening talk of the workshop 'Hot Laboratories and Remote Handling' was given by Marin Ciocanescu with the communication 'Overview of R and D Program in Romanian Institute for Nuclear Research'. The works of the meeting were structured into three sections addressing the following items: Session 1. Hot cell facilities: Infrastructure, Refurbishment, Decommissioning; Session 2. Waste, transport, safety and remote handling issues; Session 3. Post-Irradiation examination techniques. In the frame of Section 1 the communication 'Overview of hot cell facilities in South Africa' by Wouter Klopper, Willie van Greunen et al, was presented. In the framework of the second session there were given the following four communications: 'The irradiated elements cell at PHENIX' by Laurent Breton et al., 'Development of remote equipment for DUPIC fuel fabrication at KAERI', by Jung Won Lee et al., 'Aspects of working with manipulators and small samples in an αβγ-box, by Robert Zubler et al., and 'The GIOCONDA experience of the Joint Research Centre Ispra: analysis of the experimental assemblies finalized to their safe recovery and dismantling', by Roberto Covini. Finally, in the framework of the third section the following five communications were presented: 'PIE of a CANDU fuel element irradiated for a load following test in the INR TRIGA reactor' by Marcel Parvan et al., 'Adaptation of the pole figure measurement to the irradiated items from zirconium alloys' by Yury Goncharenko et al., 'Fuel rod profilometry with a laser scan micrometer' by Daniel Kuster et al., 'Raman spectroscopy, a new facility at LECI laboratory to investigate neutron damage in irradiated materials' by Lionel Gosmain et al., and 'Analysis of complex nuclear materials with the PSI shielded analytical instruments' by Didier Gavillet. In addition, eleven more presentations were given as posters. Their titles were: 'Presentation of CETAMA activities (CEA analytic group)' by Alain Hanssens et al. 'Analysis of
Analysis of longitudinal data from animals where some data are missing in SPSS
Duricki, DA; Soleman, S; Moon, LDF
2017-01-01
Testing of therapies for disease or injury often involves analysis of longitudinal data from animals. Modern analytical methods have advantages over conventional methods (particularly where some data are missing), yet are not used widely by pre-clinical researchers. We provide here an easy-to-use protocol for analysing longitudinal data from animals and present a click-by-click guide for performing suitable analyses using the statistical package SPSS. We guide readers through analysis of a real-life data set obtained when testing a therapy for brain injury (stroke) in elderly rats. We show that repeated-measures analysis of covariance failed to detect a treatment effect when a few data points were missing (due to animal drop-out), whereas analysis using an alternative method detected a beneficial effect of treatment; specifically, we demonstrate the superiority of linear models (with various covariance structures) analysed using restricted maximum likelihood estimation (to include all available data). This protocol takes two hours to follow. PMID:27196723
Analysis of longitudinal data from animals with missing values using SPSS.
Duricki, Denise A; Soleman, Sara; Moon, Lawrence D F
2016-06-01
Testing of therapies for disease or injury often involves the analysis of longitudinal data from animals. Modern analytical methods have advantages over conventional methods (particularly when some data are missing), yet they are not used widely by preclinical researchers. Here we provide an easy-to-use protocol for the analysis of longitudinal data from animals, and we present a click-by-click guide for performing suitable analyses using the statistical package IBM SPSS Statistics software (SPSS). We guide readers through the analysis of a real-life data set obtained when testing a therapy for brain injury (stroke) in elderly rats. If a few data points are missing, as in this example data set (for example, because of animal dropout), repeated-measures analysis of covariance may fail to detect a treatment effect. An alternative analysis method, such as the use of linear models (with various covariance structures), and analysis using restricted maximum likelihood estimation (to include all available data) can be used to better detect treatment effects. This protocol takes 2 h to carry out.
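The protocol above works in SPSS, but the same idea (a linear mixed model fitted by restricted maximum likelihood so that animals with partly missing time courses still contribute) can be sketched in Python with statsmodels. The data here are synthetic and the column names are illustrative assumptions, not the rat stroke data set.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
animals, weeks = 20, 4
rows = []
for a in range(animals):
    treated = a < animals // 2          # first half of animals get the therapy
    base = rng.normal(50, 5)            # per-animal baseline score
    for w in range(weeks):
        score = base + (3.0 * w if treated else 1.0 * w) + rng.normal(0, 2)
        rows.append({"animal": a, "week": w, "treated": int(treated), "score": score})
df = pd.DataFrame(rows)

# Mimic animal drop-out by deleting 10% of observations at random;
# unlike repeated-measures ANCOVA, the mixed model keeps the rest
df = df.drop(df.sample(frac=0.1, random_state=2).index)

# Random intercept per animal, fitted by REML on all available data
fit = smf.mixedlm("score ~ week * treated", df, groups=df["animal"]).fit(reml=True)
print(fit.params["week:treated"])  # treatment-by-time effect
```

The `week:treated` interaction is the treatment effect on the slope over time; it is estimated from every remaining observation rather than only from animals with complete data.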
Fuel handling problems at KANUPP
Energy Technology Data Exchange (ETDEWEB)
Ahmed, I; Mazhar Hasan, S; Mugtadir, A [Karachi Nuclear Power Plant (KANUPP), Karachi (Pakistan)
1991-04-01
KANUPP experienced two abnormal fuel and fuel handling related problems during the year 1990. One of these had arisen due to development of end plate to end plate coupling between the two bundles at the leading end of the fuel string in channel HO2-S. The incident occurred when attempts were being made to fuel this channel. Due to pulling of sticking bundles into the acceptor fuelling machine (north) magazine, which was not designed to accommodate two bundles, a magazine rotary stop occurred. The forward motion of the charge tube was simultaneously discovered to be restricted. The incident led to stalling of the fuelling machine locked on to the channel HO2, necessitating a reactor shut down. Removal of the fuelling machine was accomplished sometime later after draining of the channel. The second incident, which made the fuelling of channel KO5-N temporarily inexecutable, occurred during attempts to remove its north end shield plug when this channel came up for fuelling. The incident resulted from breaking of the lugs of the shield plug, making its withdrawal impossible. The Plant however kept operating with suspended fuelling of channel KO5, until it could no longer sustain a further increase in fuel burnup at the maximum rating position. Resolving both these problems necessitated draining of the respective channels, leaving the resident fuel uncovered for the duration of the associated operation. Due to the substantial difference in the oxidation temperatures of UO2 and Zircaloy and its influence as such on the cooling requirement, it was necessary either to determine explicitly that the respective channels did not contain defective fuel bundles or to wait long enough to allow the decay heat to reduce to manageable proportions. This had a significant bearing on the Plant down time necessary for the rectification of the problems. This paper describes the two incidents in detail and dwells upon the measures adopted to resolve the related problems. (author)
Fuel handling problems at KANUPP
International Nuclear Information System (INIS)
Ahmed, I.; Mazhar Hasan, S.; Mugtadir, A.
1991-01-01
KANUPP experienced two abnormal fuel and fuel handling related problems during the year 1990. One of these had arisen due to development of end plate to end plate coupling between the two bundles at the leading end of the fuel string in channel HO2-S. The incident occurred when attempts were being made to fuel this channel. Due to pulling of sticking bundles into the acceptor fuelling machine (north) magazine, which was not designed to accommodate two bundles, a magazine rotary stop occurred. The forward motion of the charge tube was simultaneously discovered to be restricted. The incident led to stalling of the fuelling machine locked on to the channel HO2, necessitating a reactor shut down. Removal of the fuelling machine was accomplished sometime later after draining of the channel. The second incident, which made the fuelling of channel KO5-N temporarily inexecutable, occurred during attempts to remove its north end shield plug when this channel came up for fuelling. The incident resulted from breaking of the lugs of the shield plug, making its withdrawal impossible. The Plant however kept operating with suspended fuelling of channel KO5, until it could no longer sustain a further increase in fuel burnup at the maximum rating position. Resolving both these problems necessitated draining of the respective channels, leaving the resident fuel uncovered for the duration of the associated operation. Due to the substantial difference in the oxidation temperatures of UO2 and Zircaloy and its influence as such on the cooling requirement, it was necessary either to determine explicitly that the respective channels did not contain defective fuel bundles or to wait long enough to allow the decay heat to reduce to manageable proportions. This had a significant bearing on the Plant down time necessary for the rectification of the problems. This paper describes the two incidents in detail and dwells upon the measures adopted to resolve the related problems. (author)
Directory of Open Access Journals (Sweden)
Xiaosong Zhao
2015-01-01
Missing data is an inevitable problem when measuring CO2, water, and energy fluxes between biosphere and atmosphere by eddy covariance systems. To find the optimum gap-filling method for short vegetation, we review three methods for estimating missing values of net ecosystem CO2 exchange (NEE) in eddy covariance time series: mean diurnal variation (MDV), look-up tables (LUT), and nonlinear regression (NLR). We evaluate their performance for different artificial gap scenarios based on benchmark datasets from marsh and cropland sites in China. The cumulative errors of the three methods showed no consistent bias trends, ranging between -30 and +30 mg CO2 m-2 from May to October at the three sites. To minimize cumulative bias, combined gap-filling methods were selected for short vegetation: the NLR or LUT method was used after the period of rapid plant growth in spring and before the end of the growing season, and the MDV method was used for the other stages. The sum relative error (SRE) of the optimum method ranged between -2 and +4% for the four gap levels at the three sites, except for 55% gaps at the soybean site, and it also clearly reduced the standard deviation of the error.
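The mean diurnal variation (MDV) method mentioned above can be sketched compactly: a missing half-hourly NEE value is replaced by the mean of observed values at the same time of day within a window of adjacent days. The synthetic data, the 7-day window, and 48 half-hourly slots per day are illustrative assumptions, not the paper's benchmark configuration.

```python
import numpy as np

def mdv_fill(nee, slots_per_day=48, window_days=7):
    """Fill NaN gaps with the same-time-of-day mean over a +/- window_days window."""
    nee = nee.astype(float).copy()
    n_days = len(nee) // slots_per_day
    grid = nee[: n_days * slots_per_day].reshape(n_days, slots_per_day)
    filled = grid.copy()
    for d in range(n_days):
        lo, hi = max(0, d - window_days), min(n_days, d + window_days + 1)
        for s in range(slots_per_day):
            if np.isnan(grid[d, s]):
                neighbors = grid[lo:hi, s]         # same slot, nearby days
                if not np.all(np.isnan(neighbors)):
                    filled[d, s] = np.nanmean(neighbors)
    return filled.reshape(-1)

# Synthetic month of half-hourly NEE with a diurnal cycle and 10% gaps
rng = np.random.default_rng(3)
t = np.arange(30 * 48)
nee = -5 * np.sin(2 * np.pi * (t % 48) / 48) + rng.normal(0, 0.5, len(t))
nee[rng.random(len(t)) < 0.1] = np.nan
filled = mdv_fill(nee)
print(np.isnan(nee).sum(), np.isnan(filled).sum())
```

Because the method only averages observations sharing the time-of-day slot, it preserves the diurnal cycle but, as the abstract notes, performs worse during rapid phenological change, when nearby days are no longer representative.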
Galaxy-galaxy lensing estimators and their covariance properties
Singh, Sukhdeep; Mandelbaum, Rachel; Seljak, Uroš; Slosar, Anže; Vazquez Gonzalez, Jose
2017-11-01
We study the covariance properties of real space correlation function estimators - primarily galaxy-shear correlations, or galaxy-galaxy lensing - using SDSS data for both shear catalogues and lenses (specifically the BOSS LOWZ sample). Using mock catalogues of lenses and sources, we disentangle the various contributions to the covariance matrix and compare them with a simple analytical model. We show that not subtracting the lensing measurement around random points from the measurement around the lens sample is equivalent to performing the measurement using the lens density field instead of the lens overdensity field. While the measurement using the lens density field is unbiased (in the absence of systematics), its error is significantly larger due to an additional term in the covariance. Therefore, this subtraction should be performed regardless of its beneficial effects on systematics. Comparing the error estimates from data and mocks for estimators that involve the overdensity, we find that the errors are dominated by the shape noise and lens clustering, which the empirically estimated covariances (jackknife and standard deviation across mocks) capture in a manner consistent with theoretical estimates, and that both the connected parts of the four-point function and the supersample covariance can be neglected for the current levels of noise. While the trade-off between different terms in the covariance depends on the survey configuration (area, source number density), the diagnostics that we use in this work should be useful for future works to test their empirically determined covariances.
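The delete-one jackknife used above as an empirical covariance diagnostic can be sketched generically. This is a minimal illustration on synthetic data: the "measurements" are random values in 5 radial bins across 100 spatial subregions, an assumed setup rather than the paper's lensing pipeline.

```python
import numpy as np

def jackknife_cov(samples):
    """Delete-one jackknife covariance of the mean over subregions.

    samples: array of shape (n_subregions, n_bins).
    Returns the (n_bins, n_bins) covariance estimate for the mean signal.
    """
    n = samples.shape[0]
    # Leave-one-out means: the estimate recomputed with subregion i removed
    loo = (samples.sum(axis=0) - samples) / (n - 1)
    diff = loo - loo.mean(axis=0)
    return (n - 1) / n * diff.T @ diff

rng = np.random.default_rng(4)
n_sub, n_bins = 100, 5
samples = rng.normal(0.0, 1.0, (n_sub, n_bins))  # per-subregion signals
cov = jackknife_cov(samples)
print(np.diag(cov))
```

For independent unit-variance subregions the diagonal should land near 1/n_sub = 0.01, i.e. the variance of the mean, which is the consistency check one performs against theoretical or mock-based covariances.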
Graphical representation of covariant-contravariant modal formulae
Directory of Open Access Journals (Sweden)
Miguel Palomino
2011-08-01
Covariant-contravariant simulation is a combination of standard (covariant) simulation, its contravariant counterpart and bisimulation. We have previously studied its logical characterization by means of the covariant-contravariant modal logic. Moreover, we have investigated the relationships between this model and that of modal transition systems, where two kinds of transitions (the so-called may and must transitions) were combined in order to obtain a simple framework to express a notion of refinement over state-transition models. In a classic paper, Boudol and Larsen established a precise connection between the graphical approach, by means of modal transition systems, and the logical approach, based on Hennessy-Milner logic without negation, to system specification. They obtained a (graphical) representation theorem proving that a formula can be represented by a term if, and only if, it is consistent and prime. We show in this paper that the formulae from the covariant-contravariant modal logic that admit a "graphical" representation by means of processes, modulo the covariant-contravariant simulation preorder, are also the consistent and prime ones. In order to obtain the desired graphical representation result, we first restrict ourselves to the case of covariant-contravariant systems without bivariant actions. Bivariant actions can be incorporated later by means of an encoding that splits each bivariant action into its covariant and its contravariant parts.
Using machine learning to assess covariate balance in matching studies.
Linden, Ariel; Yarnold, Paul R
2016-12-01
In order to assess the effectiveness of matching approaches in observational studies, investigators typically present summary statistics for each observed pre-intervention covariate, with the objective of showing that matching reduces the difference in means (or proportions) between groups to as close to zero as possible. In this paper, we introduce a new approach to distinguish between study groups based on their distributions of the covariates using a machine-learning algorithm called optimal discriminant analysis (ODA). Assessing covariate balance using ODA as compared with the conventional method has several key advantages: the ability to ascertain how individuals self-select based on optimal (maximum-accuracy) cut-points on the covariates; the application to any variable metric and number of groups; its insensitivity to skewed data or outliers; and the use of accuracy measures that can be widely applied to all analyses. Moreover, ODA accepts analytic weights, thereby extending the assessment of covariate balance to any study design where weights are used for covariate adjustment. By comparing the two approaches using empirical data, we are able to demonstrate that using measures of classification accuracy as balance diagnostics produces highly consistent results to those obtained via the conventional approach (in our matched-pairs example, ODA revealed a weak statistically significant relationship not detected by the conventional approach). Thus, investigators should consider ODA as a robust complement, or perhaps alternative, to the conventional approach for assessing covariate balance in matching studies. © 2016 John Wiley & Sons, Ltd.
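The core ODA idea above, judging covariate balance by how accurately group membership can be classified from a maximum-accuracy cut-point, can be sketched without the ODA software itself. Here a brute-force search for the best single cut-point stands in for ODA, and the before/after-matching data are synthetic assumptions.

```python
import numpy as np

def best_cut_accuracy(x, y):
    """Accuracy of the best single cut-point rule on covariate x (ODA-style)."""
    best = 0.5
    for t in np.unique(x):
        pred = (x > t).astype(int)
        acc = (pred == y).mean()
        best = max(best, acc, 1 - acc)  # allow either direction of the rule
    return best

rng = np.random.default_rng(5)
n = 500
# Before matching: treated units are systematically older (imbalance)
x = np.concatenate([rng.normal(60, 8, n), rng.normal(50, 8, n)])
y = np.repeat([1, 0], n)
acc_imbalanced = best_cut_accuracy(x, y)

# After (idealized) matching both groups share one distribution, so the
# best cut-point should classify little better than the chance level 0.5
x_matched = rng.normal(55, 8, 2 * n)
acc_matched = best_cut_accuracy(x_matched, y)
print(acc_imbalanced, acc_matched)
```

High classification accuracy flags an imbalanced covariate; accuracy near 0.5 after matching indicates balance, mirroring the paper's use of accuracy measures as balance diagnostics.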
Galaxy–galaxy lensing estimators and their covariance properties
International Nuclear Information System (INIS)
Singh, Sukhdeep; Mandelbaum, Rachel; Seljak, Uros; Slosar, Anze; Gonzalez, Jose Vazquez
2017-01-01
Here, we study the covariance properties of real space correlation function estimators – primarily galaxy–shear correlations, or galaxy–galaxy lensing – using SDSS data for both shear catalogues and lenses (specifically the BOSS LOWZ sample). Using mock catalogues of lenses and sources, we disentangle the various contributions to the covariance matrix and compare them with a simple analytical model. We show that not subtracting the lensing measurement around random points from the measurement around the lens sample is equivalent to performing the measurement using the lens density field instead of the lens overdensity field. While the measurement using the lens density field is unbiased (in the absence of systematics), its error is significantly larger due to an additional term in the covariance. Therefore, this subtraction should be performed regardless of its beneficial effects on systematics. Comparing the error estimates from data and mocks for estimators that involve the overdensity, we find that the errors are dominated by the shape noise and lens clustering, which the empirically estimated covariances (jackknife and standard deviation across mocks) capture in a manner consistent with theoretical estimates, and that both the connected parts of the four-point function and the supersample covariance can be neglected for the current levels of noise. While the trade-off between different terms in the covariance depends on the survey configuration (area, source number density), the diagnostics that we use in this work should be useful for future works to test their empirically determined covariances.
2010-01-01
9 CFR § 3.118 (2010): Handling. Animals and Animal Products; Animal and Plant Health Inspection Service, Department of Agriculture; Animal Welfare Standards: Specifications for the Humane Handling, Care, Treatment, and Transportation of Marine...
How to Handle Impasses in Bargaining.
Durrant, Robert E.
Guidelines in an outline format are presented to school board members and administrators on how to handle impasses in bargaining. The following two rules are given: there sometimes may be strikes, but there always will be settlements; and on the way to settlements, there always will be impasses. Suggestions for handling impasses are listed under…
Handling uncertainty through adaptiveness in planning approaches
Zandvoort, M.; Vlist, van der M.J.; Brink, van den A.
2018-01-01
Planners and water managers seek to be adaptive to handle uncertainty through the use of planning approaches. In this paper, we study what type of adaptiveness is proposed and how this may be operationalized in planning approaches to adequately handle different uncertainties. We took a
Survey of postharvest handling, preservation and processing ...
African Journals Online (AJOL)
Survey of postharvest handling, preservation and processing practices along the camel milk chain in Isiolo district, Kenya. ... Despite the important contribution of camel milk to food security for pastoralists in Kenya, little is known about the postharvest handling, preservation and processing practices. In this study, existing ...
PND fuel handling decontamination: facilities and techniques
International Nuclear Information System (INIS)
Pan, R.Y.
1996-01-01
The use of various decontamination techniques and equipment has become a critical part of Fuel Handling maintenance work at Ontario Hydro's Pickering Nuclear Division. This paper presents an overview of the set up and techniques used for decontamination in the PND Fuel Handling Maintenance Facility and the effectiveness of each. (author). 1 tab., 9 figs
Handling Kids in Crisis with Care
Bushinski, Cari
2018-01-01
The Handle with Care program helps schools help students who experience trauma. While at the scene of an event like a domestic violence call, drug raid, or car accident, law enforcement personnel determine the names and school of any children present. They notify that child's school to "handle ___ with care" the next day, and the school…
PND fuel handling decontamination: facilities and techniques
Energy Technology Data Exchange (ETDEWEB)
Pan, R Y [Ontario Hydro, Toronto, ON (Canada)
1997-12-31
The use of various decontamination techniques and equipment has become a critical part of Fuel Handling maintenance work at Ontario Hydro's Pickering Nuclear Division. This paper presents an overview of the set up and techniques used for decontamination in the PND Fuel Handling Maintenance Facility and the effectiveness of each. (author). 1 tab., 9 figs.
Handling knowledge on osteoporosis - a qualitative study
DEFF Research Database (Denmark)
Nielsen, Dorthe; Huniche, Lotte; Brixen, Kim
2013-01-01
Scand J Caring Sci, 2012. The aim of this qualitative study was to increase understanding of the importance of osteoporosis information and knowledge for patients' ways of handling osteoporosis in their everyday lives. Interviews were...
MISSE PEACE Polymers Atomic Oxygen Erosion Results
deGroh, Kim, K.; Banks, Bruce A.; McCarthy, Catherine E.; Rucker, Rochelle N.; Roberts, Lily M.; Berger, Lauren A.
2006-01-01
Forty-one different polymer samples, collectively called the Polymer Erosion and Contamination Experiment (PEACE) Polymers, have been exposed to the low Earth orbit (LEO) environment on the exterior of the International Space Station (ISS) for nearly 4 years as part of Materials International Space Station Experiment 2 (MISSE 2). The objective of the PEACE Polymers experiment was to determine the atomic oxygen erosion yield of a wide variety of polymeric materials after long-term exposure to the space environment. The polymers range from those commonly used for spacecraft applications, such as Teflon (DuPont) FEP, to more recently developed polymers, such as high temperature polyimide PMR (polymerization of monomer reactants). Additional polymers were included to explore erosion yield dependence upon chemical composition. The MISSE PEACE Polymers experiment was flown in MISSE Passive Experiment Carrier 2 (PEC 2), tray 1, on the exterior of the ISS Quest Airlock and was exposed to atomic oxygen along with solar and charged particle radiation. MISSE 2 was successfully retrieved during a space walk on July 30, 2005, during Discovery's STS-114 Return to Flight mission. Details on the specific polymers flown, flight sample fabrication, pre-flight and post-flight characterization techniques, and atomic oxygen fluence calculations are discussed along with a summary of the atomic oxygen erosion yield results. The MISSE 2 PEACE Polymers experiment is unique because it has the widest variety of polymers flown in LEO for a long duration and provides extremely valuable erosion yield data for spacecraft design purposes.
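Erosion yields of the kind reported above are conventionally computed from mass loss. A minimal sketch, assuming the standard mass-loss formula Ey = ΔM/(A·ρ·F); the numbers below are illustrative stand-ins of roughly Kapton-like magnitude, not MISSE flight data:

```python
# Hedged sketch: mass-loss erosion-yield formula, Ey = dM / (A * rho * F).
# All sample values are illustrative, not measured MISSE results.

def erosion_yield(mass_loss_g, area_cm2, density_g_cm3, fluence_atoms_cm2):
    """Atomic oxygen erosion yield in cm^3 per incident atom."""
    return mass_loss_g / (area_cm2 * density_g_cm3 * fluence_atoms_cm2)

# Illustrative inputs: ~0.1 g lost from a 3 cm^2 sample of density 1.42 g/cm^3
# after a fluence of 8e21 atoms/cm^2.
ey = erosion_yield(mass_loss_g=0.1, area_cm2=3.0, density_g_cm3=1.42,
                   fluence_atoms_cm2=8.0e21)
print(f"Ey = {ey:.2e} cm^3/atom")
```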
Breast carcinomas: why are they missed?
Muttarak, M; Pojchamarnwiputh, S; Chaiwun, B
2006-10-01
Mammography has proven to be an effective modality for the detection of early breast carcinoma. However, 4-34 percent of breast cancers may be missed at mammography. Delayed diagnosis of breast carcinoma results in an unfavourable prognosis. The objective of this study was to determine the causes and characteristics of breast carcinomas missed by mammography at our institution, with the aim of reducing the rate of missed carcinoma. We reviewed the reports of 13,191 mammograms performed over a five-year period. The Breast Imaging Reporting and Data System (BI-RADS) was used for the mammographic assessment, and reports were cross-referenced with the histological diagnosis of breast carcinoma. Causes of missed carcinomas were classified. Of 344 patients who had breast carcinoma and had mammograms done prior to surgery, 18 (5.2 percent) failed to be diagnosed by mammography. Of these, five were caused by dense breast parenchyma obscuring the lesions, 11 were due to perception and interpretation errors, and one each resulted from unusual lesion characteristics and poor positioning. Several factors, including dense breast parenchyma obscuring a lesion, perception error, interpretation error, unusual lesion characteristics, and poor technique or positioning, are possible causes of missed breast cancers.
Substituting missing data in compositional analysis
Energy Technology Data Exchange (ETDEWEB)
Real, Carlos, E-mail: carlos.real@usc.es [Area de Ecologia, Departamento de Biologia Celular y Ecologia, Escuela Politecnica Superior, Universidad de Santiago de Compostela, 27002 Lugo (Spain); Angel Fernandez, J.; Aboal, Jesus R.; Carballeira, Alejo [Area de Ecologia, Departamento de Biologia Celular y Ecologia, Facultad de Biologia, Universidad de Santiago de Compostela, 15782 Santiago de Compostela (Spain)
2011-10-15
Multivariate analysis of environmental data sets requires the absence of missing values or their substitution by small values. However, if the data are transformed logarithmically prior to the analysis, this solution cannot be applied because the logarithm of a small value might become an outlier. Several methods for substituting the missing values can be found in the literature, although none of them guarantees that no distortion of the structure of the data set is produced. We propose a method for assessing these distortions that can be used to decide whether or not to retain the samples or variables containing missing values, and to investigate the performance of different substitution techniques. The method analyzes the structure of the distances among samples using Mantel tests. We present an application of the method to PCDD/F data measured in samples of terrestrial moss as part of a biomonitoring study. - Highlights: → Missing values in multivariate data sets must be substituted prior to analysis. → The substituted values can modify the structure of the data set. → We developed a method to estimate the magnitude of the alterations. → The method is simple and based on the Mantel test. → The method allowed the identification of problematic variables in a sample data set. - A method is presented for the assessment of the possible distortions in multivariate analysis caused by the substitution of missing values.
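The core idea of the record above, substitute small values for missing entries and then check whether the distance structure among samples is distorted via a Mantel test, can be sketched as follows. This is an illustrative reconstruction, not the authors' code; the lognormal toy data and the substitute-half-the-minimum rule are assumptions:

```python
# Hedged sketch: Mantel test (correlation between distance matrices, permutation
# p-value) used to gauge distortion introduced by small-value substitution.
import numpy as np
from scipy.spatial.distance import pdist, squareform

def mantel(d1, d2, n_perm=999, seed=0):
    """Mantel test between two square distance matrices."""
    rng = np.random.default_rng(seed)
    v1 = squareform(d1, checks=False)          # condensed upper triangle
    v2 = squareform(d2, checks=False)
    r_obs = np.corrcoef(v1, v2)[0, 1]
    n = d1.shape[0]
    exceed = 0
    for _ in range(n_perm):
        perm = rng.permutation(n)
        v2p = squareform(d2[np.ix_(perm, perm)], checks=False)
        if np.corrcoef(v1, v2p)[0, 1] >= r_obs:
            exceed += 1
    return r_obs, (exceed + 1) / (n_perm + 1)

rng = np.random.default_rng(0)
X = rng.lognormal(size=(20, 5))                   # complete, strictly positive data
X_sub = X.copy()
X_sub[rng.random(X.shape) < 0.1] = 0.5 * X.min()  # small-value substitution in ~10% of cells

d_full = squareform(pdist(np.log(X)))             # log-transform, as in the abstract
d_sub = squareform(pdist(np.log(X_sub)))
r, p = mantel(d_full, d_sub)
print(f"Mantel r = {r:.3f}, p = {p:.3f}")         # high r -> little structural distortion
```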
Covariant quantization of heterotic strings in supersymmetric chiral boson formulation
International Nuclear Information System (INIS)
Yu, F.
1992-01-01
This dissertation presents the covariant supersymmetric chiral boson formulation of the heterotic strings. The main feature of this formulation is the covariant quantization of the so-called leftons and rightons -- the (1,0) supersymmetric generalizations of the world-sheet chiral bosons -- that constitute basic building blocks of general heterotic-type string models. Although the (Neveu-Schwarz-Ramond or Green-Schwarz) heterotic strings provide the most realistic string models, their covariant quantization, with the widely-used Siegel formalism, has never been rigorously carried out. It is clarified in this dissertation that the covariant Siegel formalism is pathological upon quantization. As a test, a general classical covariant (NSR) heterotic string action that has the Siegel symmetry is constructed in arbitrary curved space-time coupled to (1,0) world-sheet supergravity. In the light-cone gauge quantization, the critical dimensions are derived for such an action with leftons and rightons compactified on group manifolds G_L × G_R. The covariant quantization of this action does not agree with the physical results in the light-cone gauge quantization. This dissertation establishes a new formalism for the covariant quantization of heterotic strings. The desired consistent covariant path-integral quantization of supersymmetric chiral bosons, and thus of the general (NSR) heterotic-type strings with leftons and rightons compactified on the torus (S^1)^{d_L} × (S^1)^{d_R}, is carried out. An infinite set of auxiliary (1,0) scalar superfields is introduced to convert the second-class chiral constraint into first-class ones. The covariant gauge-fixed action has an extended BRST symmetry described by the graded algebra GL(1|1). A regularization respecting this symmetry is proposed to deal with the contributions of the infinite towers of auxiliary fields and associated ghosts.
DDOS ATTACK DETECTION SIMULATION AND HANDLING MECHANISM
Directory of Open Access Journals (Sweden)
Ahmad Sanmorino
2013-11-01
In this study we discuss how to handle a DDoS attack coming from an attacker by using a detection method and a handling mechanism. Detection is performed by comparing the number of packets with the number of flows, whereas the handling mechanism limits or drops the packets detected as part of a DDoS attack. The study begins with a simulation on a real network, which aims to capture real traffic data. The dump of traffic data obtained from the simulation is then used by the detection method in our prototype system called DASHM (DDoS Attack Simulation and Handling Mechanism). In the experiments conducted, the proposed method successfully detected DDoS attacks and handled the incoming packets sent by the attacker.
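The packets-versus-flows comparison described above can be sketched roughly as follows. The flow definition, the time window, and the threshold are illustrative assumptions, not the DASHM implementation; a flood of spoofed sources shows up as an abnormally high ratio of distinct flows to packets:

```python
# Hedged sketch: flag a traffic window as a DDoS flood when almost every packet
# belongs to its own flow (many single-packet flows from spoofed sources).
from collections import Counter

def detect_ddos(packets, max_flow_ratio=0.5):
    """packets: list of (src_ip, dst_ip, dst_port) tuples in one time window.
    Returns (flagged, flows/packets ratio)."""
    flows = Counter(packets)
    ratio = len(flows) / max(len(packets), 1)
    return ratio > max_flow_ratio, ratio

# Normal window: a few flows carrying many packets each.
normal = [("10.0.0.%d" % (i % 3), "10.0.1.1", 80) for i in range(300)]
# Attack window: spoofed sources, so nearly every packet is its own flow.
attack = [("10.%d.%d.%d" % (i % 250, i // 250, i % 97), "10.0.1.1", 80)
          for i in range(300)]

print(detect_ddos(normal))  # (False, 0.01)
print(detect_ddos(attack))  # (True, 1.0)
```

Once a window is flagged, a handling mechanism in the spirit of the abstract would rate-limit or drop packets matching the offending flows.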
MRI of meniscal bucket-handle tears
Energy Technology Data Exchange (ETDEWEB)
Magee, T.H.; Hinson, G.W. [Menorah Medical Center, Overland Park, KS (United States). Dept. of Radiology
1998-09-01
A meniscal bucket-handle tear is a tear with an attached fragment displaced from the meniscus of the knee joint. Low sensitivity of MRI for the detection of bucket-handle tears (64%, compared with arthroscopy) has been reported previously. We report increased sensitivity for detecting bucket-handle tears with the use of coronal short tau inversion recovery (STIR) images. Using four criteria for the diagnosis of meniscal bucket-handle tears, our overall sensitivity compared with arthroscopy was 93% (28 of 30 meniscal bucket-handle tears seen at arthroscopy were detected by MRI). The meniscal fragment was well visualized in all 28 cases on coronal STIR images. The double posterior cruciate ligament sign was seen in 8 of 30 cases, a flipped meniscus in 10 of 30 cases, and a fragment in the intercondylar notch in 18 of 30 cases. (orig.)
ML-MG: Multi-label Learning with Missing Labels Using a Mixed Graph
Wu, Baoyuan
2015-12-07
This work focuses on the problem of multi-label learning with missing labels (MLML), which aims to label each test instance with multiple class labels given training instances that have an incomplete/partial set of these labels (i.e., some of their labels are missing). To handle missing labels, we propose a unified model of label dependencies by constructing a mixed graph, which jointly incorporates (i) instance-level similarity and class co-occurrence as undirected edges and (ii) semantic label hierarchy as directed edges. Unlike most MLML methods, we formulate this learning problem transductively as a convex quadratic matrix optimization problem that encourages training label consistency and encodes both types of label dependencies (i.e., undirected and directed edges) using quadratic terms and hard linear constraints. The alternating direction method of multipliers (ADMM) can be used to exactly and efficiently solve this problem. To evaluate our proposed method, we consider two popular applications (image and video annotation), where the label hierarchy can be derived from WordNet. Experimental results show that our method achieves a significant improvement over state-of-the-art methods in performance and robustness to missing labels.
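A heavily simplified stand-in for the quadratic formulation above (undirected instance-similarity edges only, no co-occurrence or hierarchy constraints, and a closed-form solve instead of ADMM) can illustrate how missing labels are filled in by smoothing over a similarity graph:

```python
# Hedged sketch: transductive label completion by minimizing
#   ||Z - Y||_F^2 + lam * tr(Z^T L Z),
# where L is the unnormalized Laplacian of the instance-similarity graph W.
# This is a toy simplification, not the ML-MG model from the paper.
import numpy as np

def propagate_labels(Y, W, lam=1.0):
    """Closed-form minimizer: Z = (I + lam*L)^{-1} Y."""
    L = np.diag(W.sum(axis=1)) - W
    return np.linalg.solve(np.eye(W.shape[0]) + lam * L, Y)

# 4 instances, 2 labels; observed labels are +1/-1, missing entries are 0.
Y = np.array([[ 1.0, -1.0],
              [ 1.0, -1.0],
              [ 0.0,  0.0],     # instance 2: both labels missing
              [-1.0,  1.0]])
W = np.array([[0., 1., 1., 0.],
              [1., 0., 1., 0.],
              [1., 1., 0., 0.],
              [0., 0., 0., 0.]])  # instance 2 is similar to instances 0 and 1

Z = propagate_labels(Y, W)
print(np.round(Z[2], 3))  # missing labels pulled toward [+, -] by the neighbors
```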
More on Estimation of Banded and Banded Toeplitz Covariance Matrices
Berntsson, Fredrik; Ohlson, Martin
2017-01-01
In this paper we consider two different linear covariance structures, i.e., banded and banded Toeplitz, and how to estimate them using different methods, e.g., by minimizing different norms. One way to estimate the parameters in a linear covariance structure is to use tapering, which has been shown to be the solution to a universal least squares problem. We know that tapering does not always guarantee the positive definite constraints on the estimated covariance matrix and may not be a suitable me...
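The tapering idea mentioned above, and its positive-definiteness caveat, can be illustrated with the simplest possible taper, a 0/1 band mask applied elementwise to the sample covariance. The toy data and bandwidth are assumptions:

```python
# Hedged sketch: banding as a special case of tapering -- multiply the sample
# covariance elementwise by a mask that zeroes entries more than k off the diagonal.
# As the abstract notes, the result is not guaranteed to be positive definite.
import numpy as np

def banded_taper_estimate(X, k):
    """Sample covariance of X (rows = observations) with a bandwidth-k band taper."""
    S = np.cov(X, rowvar=False)
    idx = np.arange(S.shape[0])
    taper = (np.abs(idx[:, None] - idx[None, :]) <= k).astype(float)
    return S * taper

rng = np.random.default_rng(1)
X = rng.normal(size=(200, 6))
S_band = banded_taper_estimate(X, k=1)
print(np.round(S_band, 3))
print("min eigenvalue:", np.linalg.eigvalsh(S_band).min())  # can be negative
```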
Covariance matrices and applications to the field of nuclear data
International Nuclear Information System (INIS)
Smith, D.L.
1981-11-01
A student's introduction to covariance error analysis and least-squares evaluation of data is provided. It is shown that the basic formulas used in error propagation can be derived from a consideration of the geometry of curvilinear coordinates. Procedures for deriving covariances for scalar and vector functions of several variables are presented. Proper methods for reporting experimental errors and for deriving covariance matrices from these errors are indicated. The generalized least-squares method for evaluating experimental data is described. Finally, the use of least-squares techniques in data fitting applications is discussed. Specific examples of the various procedures are presented to clarify the concepts.
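The error-propagation procedures summarized above rest on the linearized formula Cov(y) ≈ J Cov(x) Jᵀ, with J the Jacobian of y = f(x). A minimal sketch with an assumed toy example, the ratio r = a/b of two correlated measurements:

```python
# Hedged sketch of first-order covariance propagation: Cov(y) ~= J Cov(x) J^T.
# The measurement values and their covariance matrix below are illustrative.
import numpy as np

def propagate_cov(jacobian, cov_x):
    J = np.asarray(jacobian)
    return J @ cov_x @ J.T

a, b = 10.0, 4.0
cov_ab = np.array([[0.04, 0.01],
                   [0.01, 0.09]])          # variances of a, b and their covariance
J = np.array([[1.0 / b, -a / b**2]])       # partial derivatives of r = a/b
var_r = propagate_cov(J, cov_ab)[0, 0]
print(f"r = {a / b}, sigma_r = {var_r**0.5:.4f}")
```

Note that the off-diagonal covariance term contributes with a negative sign here because the two partial derivatives have opposite signs; ignoring correlations would overstate the uncertainty in this example.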
The utility of covariance of combining ability in plant breeding.
Arunachalam, V
1976-11-01
The definition of covariances of half- and full sibs, and hence that of variances of general and specific combining ability with regard to a quantitative character, is extended to take into account the respective covariances between a pair of characters. The interpretation of the dispersion and correlation matrices of general and specific combining ability is discussed by considering a set of single, three- and four-way crosses, made using diallel and line × tester mating systems in Pennisetum typhoides. The general implications of the concept of covariance of combining ability in plant breeding are discussed.
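The combining-ability quantities discussed above can be illustrated with a simplified Griffing-style decomposition of a complete diallel table of cross means; the paper's extension to covariances between pairs of characters is not shown, and the purely additive toy table is an assumption:

```python
# Hedged sketch: general (GCA) and specific (SCA) combining-ability effects from a
# complete p x p diallel table of cross means (simplified decomposition, not the
# exact Griffing method-specific formulas).
import numpy as np

def gca_sca(M):
    """M: symmetric p x p matrix of cross means (diagonal = selfs)."""
    mu = M.mean()
    gca = M.mean(axis=1) - mu                    # parental general effects
    sca = M - mu - gca[:, None] - gca[None, :]   # cross-specific residuals
    return gca, sca

g = np.array([1.0, 0.0, -1.0])
M = 10.0 + g[:, None] + g[None, :]   # purely additive table -> SCA should vanish
gca, sca = gca_sca(M)
print(np.round(gca, 3))              # recovers the additive effects [1, 0, -1]
print(np.round(np.abs(sca).max(), 6))  # 0.0
```

For the covariance of combining ability between two characters, the same decomposition would be applied to each character's table and the resulting GCA (or SCA) effects correlated across parents.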
Asset allocation with different covariance/correlation estimators
Μανταφούνη, Σοφία
2007-01-01
The subject of the study is to test whether the use of covariance/correlation estimators other than the widely used historical covariance matrix would help in portfolio optimization through mean-variance analysis. In other words, if an investor would like to use mean-variance analysis in order to invest in assets like stocks or indices, would it be of some help to use more sophisticated estimators for the covariance matrix of the returns of his portfolio? The procedure ...
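The comparison described above can be sketched with one alternative estimator: a fixed-intensity shrinkage of the historical covariance toward a scaled identity (in the spirit of Ledoit-Wolf, but with an assumed rather than optimized intensity), plugged into a minimum-variance portfolio. The simulated returns are illustrative:

```python
# Hedged sketch: historical vs shrunk covariance inside a minimum-variance portfolio.
# The shrinkage intensity alpha is fixed, not estimated; data are simulated.
import numpy as np

def min_variance_weights(cov):
    """Fully invested minimum-variance weights: w ∝ Σ^{-1} 1."""
    inv = np.linalg.inv(cov)
    w = inv @ np.ones(cov.shape[0])
    return w / w.sum()

def shrink_cov(S, alpha=0.2):
    """Shrink S toward a scaled identity with average variance on the diagonal."""
    target = (np.trace(S) / S.shape[0]) * np.eye(S.shape[0])
    return (1 - alpha) * S + alpha * target

rng = np.random.default_rng(2)
R = rng.normal(0.0005, 0.01, size=(60, 5))   # 60 days of returns for 5 assets
S_hist = np.cov(R, rowvar=False)
w_hist = min_variance_weights(S_hist)
w_shrunk = min_variance_weights(shrink_cov(S_hist))
print(np.round(w_hist, 3))
print(np.round(w_shrunk, 3))   # typically less extreme weights than w_hist
```

With few observations relative to the number of assets, the historical matrix is noisy and tends to produce extreme long/short weights; shrinkage regularizes the inverse and usually moderates them, which is the kind of effect the thesis sets out to measure.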
Handling Procedures of Vegetable Crops
Perchonok, Michele; French, Stephen J.
2004-01-01
The National Aeronautics and Space Administration (NASA) is working towards future long-duration manned space flights beyond low Earth orbit. The duration of these missions may be as long as 2.5 years and will likely include a stay on a lunar or planetary surface. The primary goal of the Advanced Food System in these long-duration exploratory missions is to provide the crew with a palatable, nutritious, and safe food system while minimizing volume, mass, and waste. Vegetable crops can provide the crew with added nutrition and variety. These crops do not require any cooking or food processing prior to consumption. The vegetable crops, unlike prepackaged foods, will provide bright colors, textures (crispy), and fresh aromas. Ten vegetable crops have been identified for possible use in long-duration missions: lettuce, spinach, carrot, tomato, green onion, radish, bell pepper, strawberries, fresh herbs, and cabbage. Whether these crops are grown on a transit vehicle (e.g., the International Space Station) or on the lunar or planetary surface, it will be necessary to determine how to safely handle the vegetables while maintaining acceptability. Since hydrogen peroxide degrades into water and oxygen and is generally recognized as safe (GRAS), hydrogen peroxide has been recommended as the sanitizer. The objective of this research is to determine the required effective concentration of hydrogen peroxide. In addition, it will be determined whether the use of hydrogen peroxide, although a viable sanitizer, adversely affects the quality of the vegetables. Vegetables will be dipped in 1% hydrogen peroxide, 3% hydrogen peroxide, or 5% hydrogen peroxide. Treated produce and controls will be stored in plastic bags at 5 C for up to 14 days. Sensory, color, texture, and total plate count will be measured. The effect on several vegetables, including lettuce, radish, tomato and strawberries, has been completed. Although each vegetable reacts to hydrogen peroxide differently, the
The handling of radiation accidents
International Nuclear Information System (INIS)
1977-01-01
The symposium was attended by 204 participants from 39 countries and 5 international organizations. Forty-two papers were presented in 8 sessions. The purpose of the meeting was to foster an exchange of experiences gained in establishing and exercising plans for mitigating the effects of radiation accidents and in the handling of actual accident situations. Only a small number of accidents were reported at the symposium, and this reflects the very high standards of safety that have been achieved by the nuclear industry. No accidents of radiological significance were reported to have occurred at commercial nuclear power plants. Of the accidents reported, industrial radiography continues to be the area in which most of the radiation accidents occur. The experience gained in the reported accident situations served to confirm the crucial importance of the prompt availability of medical and radiological services, particularly in the case of uptake of radioactive material, and emphasized the importance of detailed investigation into the causes of the accident in order to improve preventive measures. One of the principal themes of the symposium involved emergency procedures related to nuclear power plant accidents, and several papers defining the scope, progression and consequences of design-basis accidents for both thermal and fast reactor systems were presented. These were complemented by papers defining the resultant protection requirements that should be satisfied in the establishment of plans designed to mitigate the effects of the postulated accident situations. Several papers were presented describing existing emergency organizational arrangements relating both to specific nuclear power plants and to comprehensive national schemes, and a particularly informative session was devoted to the topic of training of personnel in the practical conduct of emergency arrangements. The general feeling of the participants was one of studied confidence in the competence and