WorldWideScience

Sample records for multiple imputation techniques

  1. The multiple imputation method: a case study involving secondary data analysis.

    Science.gov (United States)

    Walani, Salimah R; Cleland, Charles M

    2015-05-01

    Purpose: To illustrate, with the example of a secondary data analysis study, the use of the multiple imputation method to replace missing data. Background: Most large public datasets have missing data, which need to be handled by researchers conducting secondary data analysis studies. Multiple imputation is a technique widely used to replace missing values while preserving the sample size and sampling variability of the data. Data source: The 2004 National Sample Survey of Registered Nurses. Methods: The authors created a model to impute missing values using the chained equation method. They used imputation diagnostic procedures and conducted regression analysis of the imputed data to determine the differences between the log hourly wages of internationally educated and US-educated registered nurses. Results: The authors used multiple imputation procedures to replace missing values in a large dataset with 29,059 observations. Five multiply imputed datasets were created. Imputation diagnostics using time series and density plots showed that imputation was successful. The authors also present an example of the use of the multiply imputed datasets in a regression analysis answering a substantive research question. Conclusions: Multiple imputation is a powerful technique for imputing missing values in large datasets while preserving the sample size and variance of the data. Even though the chained equation method involves complex statistical computations, recent innovations in software and computation have made it possible for researchers to apply this technique to large datasets. The authors recommend that nurse researchers use multiple imputation methods for handling missing data to improve the statistical power and external validity of their studies.
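
    A minimal R sketch of the workflow this abstract describes (chained-equation imputation, convergence and density diagnostics, then a pooled regression) follows; the mice package is assumed, and the dataset and variable names are synthetic stand-ins, not the survey's actual fields.

        # Sketch of the chained-equation MI workflow with the mice package.
        # Synthetic stand-in data; 'log_wage' gets injected missingness.
        library(mice)

        set.seed(2015)
        n <- 500
        nurses <- data.frame(log_wage      = rnorm(n, 3, 0.3),
                             intl_educated = rbinom(n, 1, 0.15),
                             experience    = rpois(n, 10))
        nurses$log_wage[sample(n, 100)] <- NA

        imp <- mice(nurses, m = 5, printFlag = FALSE)  # 5 imputed datasets

        plot(imp)         # trace plots: convergence of the chained equations
        densityplot(imp)  # observed vs. imputed distributions

        # Fit the substantive model on each completed dataset; pool (Rubin's rules)
        fit <- with(imp, lm(log_wage ~ intl_educated + experience))
        summary(pool(fit))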

  2. Multiple Improvements of Multiple Imputation Likelihood Ratio Tests

    OpenAIRE

    Chan, Kin Wai; Meng, Xiao-Li

    2017-01-01

    Multiple imputation (MI) inference handles missing data by first properly imputing the missing values $m$ times, and then combining the $m$ analysis results from applying a complete-data procedure to each of the completed datasets. However, the existing method for combining likelihood ratio tests has multiple defects: (i) the combined test statistic can be negative in practice when the reference null distribution is a standard $F$ distribution; (ii) it is not invariant to re-parametrization; ...
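
    For context, the standard likelihood-ratio combination that this paper improves on is implemented as D3() in the R package mice; a schematic use on one of mice's built-in example datasets (the models themselves are illustrative):

        # Schematic: pooling likelihood ratio tests across m imputations.
        library(mice)

        imp  <- mice(nhanes2, m = 20, seed = 1, printFlag = FALSE)
        fit1 <- with(imp, glm(hyp ~ age + chl, family = binomial))  # full model
        fit0 <- with(imp, glm(hyp ~ age, family = binomial))        # nested model

        D3(fit1, fit0)  # combined LR test, referred to an F distribution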

  3. Multiple imputation and its application

    CERN Document Server

    Carpenter, James

    2013-01-01

    A practical guide to analysing partially observed data. Collecting, analysing and drawing inferences from data is central to research in the medical and social sciences. Unfortunately, it is rarely possible to collect all the intended data. The literature on inference from the resulting incomplete data is now huge, and continues to grow both as methods are developed for large and complex data structures, and as increasing computer power and suitable software enable researchers to apply these methods. This book focuses on a particular statistical method for analysing and drawing inferences from incomplete data, called Multiple Imputation (MI). MI is attractive because it is both practical and widely applicable. The authors' aim is to clarify the issues raised by missing data, describing the rationale for MI, the relationship between the various imputation models and associated algorithms and its application to increasingly complex data structures. Multiple Imputation and its Application: Discusses the issues ...

  4. Synthetic Multiple-Imputation Procedure for Multistage Complex Samples

    Directory of Open Access Journals (Sweden)

    Zhou Hanzhi

    2016-03-01

    Multiple imputation (MI) is commonly used when item-level missing data are present. However, MI requires that survey design information be built into the imputation models. For multistage stratified clustered designs, this requires dummy variables to represent strata as well as primary sampling units (PSUs) nested within each stratum in the imputation model. Such a modeling strategy is not only operationally burdensome but also inferentially inefficient when there are many strata in the sample design. Complexity only increases when sampling weights need to be modeled. This article develops a general-purpose analytic strategy for population inference from complex sample designs with item-level missingness. In a simulation study, the proposed procedures demonstrate efficient estimation and good coverage properties. We also consider an application to accommodate missing body mass index (BMI) data in the analysis of BMI percentiles using National Health and Nutrition Examination Survey (NHANES III) data. We argue that the proposed methods offer an easy-to-implement solution to problems that are not well handled by current MI techniques. Note that, while the proposed method borrows from the MI framework to develop its inferential methods, it is not designed as an alternative strategy to release multiply imputed datasets for complex sample design data, but rather as an analytic strategy in and of itself.

  5. Bootstrap inference when using multiple imputation.

    Science.gov (United States)

    Schomaker, Michael; Heumann, Christian

    2018-04-16

    Many modern estimators require bootstrapping to calculate confidence intervals because either no analytic standard error is available or the distribution of the parameter of interest is nonsymmetric. It remains unclear, however, how to obtain valid bootstrap inference when dealing with multiple imputation to address missing data. We present 4 methods that are intuitively appealing, easy to implement, and combine bootstrap estimation with multiple imputation. We show that 3 of the 4 approaches yield valid inference, but that the performance of the methods varies with respect to the number of imputed data sets and the extent of missingness. Simulation studies reveal the behavior of our approaches in finite samples. A topical analysis from HIV treatment research, which determines the optimal timing of antiretroviral treatment initiation in young children, demonstrates the practical implications of the 4 methods in a sophisticated and realistic setting. This analysis suffers from missing data and uses the g-formula for inference, a method for which no standard errors are available. Copyright © 2018 John Wiley & Sons, Ltd.
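
    One intuitively appealing combination of the two tools, resampling first and imputing within each bootstrap sample, can be sketched as follows; this is a simplified illustration for a mean estimand on synthetic data, not the authors' exact algorithms.

        # Sketch: bootstrap-then-impute with percentile confidence intervals.
        library(mice)

        set.seed(1)
        dat <- data.frame(y = rnorm(200), x = rnorm(200))
        dat$y[sample(200, 50)] <- NA  # inject missingness

        boot_mi <- function(dat, B = 50, m = 5) {
          replicate(B, {
            bs  <- dat[sample(nrow(dat), replace = TRUE), ]   # bootstrap sample
            imp <- mice(bs, m = m, printFlag = FALSE)         # impute within it
            mean(sapply(1:m, function(k) mean(complete(imp, k)$y)))
          })
        }

        est <- boot_mi(dat)
        quantile(est, c(0.025, 0.975))  # percentile bootstrap CI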

  6. Imputation and quality control steps for combining multiple genome-wide datasets

    Directory of Open Access Journals (Sweden)

    Shefali S Verma

    2014-12-01

    The electronic MEdical Records and GEnomics (eMERGE) network brings together DNA biobanks linked to electronic health records (EHRs) from multiple institutions. Approximately 52,000 DNA samples from distinct individuals have been genotyped using genome-wide SNP arrays across the nine sites of the network. The eMERGE Coordinating Center and the Genomics Workgroup developed a pipeline to impute and merge genomic data across the different SNP arrays to maximize sample size and power to detect associations with a variety of clinical endpoints. The 1000 Genomes cosmopolitan reference panel was used for imputation. Imputation results were evaluated using the following metrics: accuracy of imputation, allelic R2 (estimated correlation between the imputed and true genotypes), and the relationship between allelic R2 and minor allele frequency. Computation time and memory resources required by two different software packages (BEAGLE and IMPUTE2) were also evaluated. A number of challenges were encountered due to the complexity of using two different imputation software packages, multiple ancestral populations, and many different genotyping platforms. We present lessons learned and describe the pipeline implemented here to impute and merge genomic data sets. The eMERGE imputed dataset will serve as a valuable resource for discovery, leveraging the clinical data that can be mined from the EHR.
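
    As a point of reference, the allelic R2 metric described here is, per the abstract's own gloss, the squared correlation between imputed dosages and true genotypes; a toy R illustration with invented numbers:

        # Toy allelic R^2: squared correlation between imputed dosages (0-2)
        # and true minor-allele counts (0/1/2). All values are invented.
        true_geno <- c(0, 1, 2, 0, 1, 1, 0, 2, 1, 0)
        imp_dose  <- c(0.1, 0.9, 1.8, 0.2, 1.1, 0.8, 0.0, 1.9, 1.2, 0.3)

        allelic_r2 <- cor(true_geno, imp_dose)^2
        allelic_r2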

  7. Effects of Different Missing Data Imputation Techniques on the Performance of Undiagnosed Diabetes Risk Prediction Models in a Mixed-Ancestry Population of South Africa.

    Directory of Open Access Journals (Sweden)

    Katya L Masconi

    Imputation techniques used to handle missing data are based on the principle of replacement. It is widely advocated that multiple imputation is superior to other imputation methods; however, studies have suggested that simple methods for filling in missing data can be just as accurate as complex methods. The objective of this study was to implement a number of simple and more complex imputation methods and to assess the effect of these techniques on the performance of undiagnosed diabetes risk prediction models during external validation. Data from the Cape Town Bellville-South cohort served as the basis for this study. Imputation methods and models were identified via recent systematic reviews. Models' discrimination was assessed and compared using the C-statistic and non-parametric methods, before and after recalibration through simple intercept adjustment. The study sample consisted of 1256 individuals, of whom 173 were excluded due to previously diagnosed diabetes. Of the final 1083 individuals, 329 (30.4%) had missing data. Family history had the highest proportion of missing data (25%). Imputation of the outcome, undiagnosed diabetes, was highest in stochastic regression imputation (163 individuals). Overall, deletion resulted in the lowest model performances, while simple imputation yielded the highest C-statistic for the Cambridge Diabetes Risk model, Kuwaiti Risk model, Omani Diabetes Risk model and Rotterdam Predictive model. Multiple imputation yielded the highest C-statistic only for the Rotterdam Predictive model, and this was matched by simpler imputation methods. Deletion was confirmed as a poor technique for handling missing data. However, despite the emphasized disadvantages of simpler imputation methods, this study showed that implementing these methods results in predictive utility for undiagnosed diabetes similar to that of multiple imputation.

  8. Multiple Imputation of a Randomly Censored Covariate Improves Logistic Regression Analysis.

    Science.gov (United States)

    Atem, Folefac D; Qian, Jing; Maye, Jacqueline E; Johnson, Keith A; Betensky, Rebecca A

    2016-01-01

    Randomly censored covariates arise frequently in epidemiologic studies. The most commonly used methods, including complete case and single imputation or substitution, suffer from inefficiency and bias. They make strong parametric assumptions or they consider limit of detection censoring only. We employ multiple imputation, in conjunction with semi-parametric modeling of the censored covariate, to overcome these shortcomings and to facilitate robust estimation. We develop a multiple imputation approach for randomly censored covariates within the framework of a logistic regression model. We use the non-parametric estimate of the covariate distribution or the semiparametric Cox model estimate in the presence of additional covariates in the model. We evaluate this procedure in simulations, and compare its operating characteristics to those from the complete case analysis and a survival regression approach. We apply the procedures to an Alzheimer's study of the association between amyloid positivity and maternal age of onset of dementia. Multiple imputation achieves lower standard errors and higher power than the complete case approach under heavy and moderate censoring and is comparable under light censoring. The survival regression approach achieves the highest power among all procedures, but does not produce interpretable estimates of association. Multiple imputation offers a favorable alternative to complete case analysis and ad hoc substitution methods in the presence of randomly censored covariates within the framework of logistic regression.

  9. Multiple imputation in the presence of non-normal data.

    Science.gov (United States)

    Lee, Katherine J; Carlin, John B

    2017-02-20

    Multiple imputation (MI) is becoming increasingly popular for handling missing data. Standard approaches for MI assume normality for continuous variables (conditionally on the other variables in the imputation model). However, it is unclear how to impute non-normally distributed continuous variables. Using simulation and a case study, we compared various transformations applied prior to imputation, including a novel non-parametric transformation, to imputation on the raw scale and using predictive mean matching (PMM) when imputing non-normal data. We generated data from a range of non-normal distributions, and set 50% to missing completely at random or missing at random. We then imputed missing values on the raw scale, following a zero-skewness log, Box-Cox or non-parametric transformation and using PMM with both type 1 and 2 matching. We compared inferences regarding the marginal mean of the incomplete variable and the association with a fully observed outcome. We also compared results from these approaches in the analysis of depression and anxiety symptoms in parents of very preterm compared with term-born infants. The results provide novel empirical evidence that the decision regarding how to impute a non-normal variable should be based on the nature of the relationship between the variables of interest. If the relationship is linear in the untransformed scale, transformation can introduce bias irrespective of the transformation used. However, if the relationship is non-linear, it may be important to transform the variable to accurately capture this relationship. A useful alternative is to impute the variable using PMM with type 1 matching. Copyright © 2016 John Wiley & Sons, Ltd.
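
    The three strategies compared here map directly onto method choices in the R package mice; a sketch with a synthetic skewed variable (names and sample sizes illustrative):

        # Sketch: raw-scale imputation, transform-then-impute, and PMM.
        library(mice)

        set.seed(2)
        dat <- data.frame(x = rlnorm(300), z = rnorm(300))  # x is right-skewed
        dat$x[sample(300, 90)] <- NA

        # 1) Impute on the raw scale under a normal model
        imp_raw <- mice(dat, method = "norm", m = 20, printFlag = FALSE)

        # 2) Transform, impute on the log scale, then back-transform
        dat_t   <- transform(dat, x = log(x))
        imp_log <- mice(dat_t, method = "norm", m = 20, printFlag = FALSE)
        back    <- lapply(1:20, function(k) transform(complete(imp_log, k),
                                                      x = exp(x)))

        # 3) Predictive mean matching: borrows observed values as donors
        imp_pmm <- mice(dat, method = "pmm", m = 20, printFlag = FALSE)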

  10. Relative efficiency of joint-model and full-conditional-specification multiple imputation when conditional models are compatible: The general location model.

    Science.gov (United States)

    Seaman, Shaun R; Hughes, Rachael A

    2018-06-01

    Estimating the parameters of a regression model of interest is complicated by missing data on the variables in that model. Multiple imputation is commonly used to handle these missing data. Joint model multiple imputation and full-conditional specification multiple imputation are known to yield imputed data with the same asymptotic distribution when the conditional models of full-conditional specification are compatible with that joint model. We show that this asymptotic equivalence of imputation distributions does not imply that joint model multiple imputation and full-conditional specification multiple imputation will also yield asymptotically equally efficient inference about the parameters of the model of interest, nor that they will be equally robust to misspecification of the joint model. When the conditional models used by full-conditional specification multiple imputation are linear, logistic and multinomial regressions, these are compatible with a restricted general location joint model. We show that multiple imputation using the restricted general location joint model can be substantially more asymptotically efficient than full-conditional specification multiple imputation, but this typically requires very strong associations between variables. When associations are weaker, the efficiency gain is small. Moreover, full-conditional specification multiple imputation is shown to be potentially much more robust than joint model multiple imputation using the restricted general location model to misspecification of that model when there is substantial missingness in the outcome variable.

  11. Missing data treatments matter: an analysis of multiple imputation for anterior cervical discectomy and fusion procedures.

    Science.gov (United States)

    Ondeck, Nathaniel T; Fu, Michael C; Skrip, Laura A; McLynn, Ryan P; Cui, Jonathan J; Basques, Bryce A; Albert, Todd J; Grauer, Jonathan N

    2018-04-09

    …occurrence of any adverse event, severe adverse events, and hospital readmission. Multiple imputation is a rigorous statistical procedure that is increasingly being used to address missing values in large datasets. Using this technique for ACDF avoided the loss of cases that may have affected the representativeness and power of the study and led to different results than complete case analysis. Multiple imputation should be considered for future spine studies. Copyright © 2018 Elsevier Inc. All rights reserved.

  12. Flexible Modeling of Survival Data with Covariates Subject to Detection Limits via Multiple Imputation.

    Science.gov (United States)

    Bernhardt, Paul W; Wang, Huixia Judy; Zhang, Daowen

    2014-01-01

    Models for survival data generally assume that covariates are fully observed. However, in medical studies it is not uncommon for biomarkers to be censored at known detection limits. A computationally efficient multiple imputation procedure for modeling survival data with covariates subject to detection limits is proposed. This procedure is developed in the context of an accelerated failure time model with a flexible semi-nonparametric error distribution. The consistency and asymptotic normality of the multiple imputation estimator are established and a consistent variance estimator is provided. An iterative version of the proposed multiple imputation algorithm that approximates the EM algorithm for maximum likelihood is also suggested. Simulation studies demonstrate that the proposed multiple imputation methods work well while alternative methods lead to estimates that are either biased or more variable. The proposed methods are applied to analyze the dataset from a recently conducted GenIMS study.

  13. A nonparametric multiple imputation approach for missing categorical data

    Directory of Open Access Journals (Sweden)

    Muhan Zhou

    2017-06-01

    Background: Incomplete categorical variables with more than two categories are common in public health data. However, most of the existing missing-data methods do not use the information from nonresponse (missingness) probabilities. Methods: We propose a nearest-neighbour multiple imputation approach to impute a missing-at-random categorical outcome and to estimate the proportion of each category. The donor set for imputation is formed by measuring distances between each missing value and the non-missing values. The distance function is calculated based on a predictive score, which is derived from two working models: one fits a multinomial logistic regression for predicting the missing categorical outcome (the outcome model) and the other fits a logistic regression for predicting missingness probabilities (the missingness model). A weighting scheme is used to accommodate the contributions of the two working models when generating the predictive score. A missing value is imputed by randomly selecting one of the non-missing values with the smallest distances. We conduct a simulation to evaluate the performance of the proposed method and compare it with several alternative methods. A real-data application is also presented. Results: The simulation study suggests that the proposed method performs well when missingness probabilities are not extreme, even under some misspecifications of the working models. However, the calibration estimator, which is also based on two working models, can be highly unstable when missingness probabilities for some observations are extremely high. In this scenario, the proposed method produces more stable and better estimates. In addition, proper weights need to be chosen to balance the contributions of the two working models and achieve optimal results for the proposed method. Conclusions: We conclude that the proposed multiple imputation method is a reasonable approach to dealing with missing categorical outcome data …
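
    A compressed sketch of the proposed idea, two working models feeding a weighted predictive score and donors drawn from the nearest neighbours, follows; the weight w, the neighbourhood size and the data are illustrative, and the paper's full weighting scheme is omitted.

        # Sketch: nearest-neighbour MI from two working models (outcome +
        # missingness), on synthetic data with a 3-category outcome y.
        library(nnet)

        set.seed(3)
        n   <- 300
        dat <- data.frame(x1 = rnorm(n), x2 = rnorm(n))
        dat$y <- factor(sample(c("A", "B", "C"), n, replace = TRUE))
        dat$y[rbinom(n, 1, plogis(dat$x1)) == 1] <- NA   # MAR-type missingness

        miss    <- is.na(dat$y)
        out_mod <- multinom(y ~ x1 + x2, data = dat[!miss, ], trace = FALSE)
        mis_mod <- glm(miss ~ x1 + x2, data = dat, family = binomial)

        s_out <- predict(out_mod, newdata = dat, type = "probs")[, 1]  # P(y = "A")
        s_mis <- predict(mis_mod, type = "response")
        w     <- 0.5                           # balances the two working models
        score <- w * s_out + (1 - w) * s_mis

        impute_one <- function(i, k = 5) {     # draw one of the k nearest donors
          d <- abs(score[!miss] - score[i])
          donor <- sample(which(rank(d, ties.method = "random") <= k), 1)
          as.character(dat$y[!miss][donor])
        }
        dat$y[miss] <- sapply(which(miss), impute_one)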

  14. Multiple Imputation of Predictor Variables Using Generalized Additive Models

    NARCIS (Netherlands)

    de Jong, Roel; van Buuren, Stef; Spiess, Martin

    2016-01-01

    The sensitivity of multiple imputation methods to deviations from their distributional assumptions is investigated using simulations, where the parameters of scientific interest are the coefficients of a linear regression model, and values in predictor variables are missing at random. The …

  15. Treatments of Missing Values in Large National Data Affect Conclusions: The Impact of Multiple Imputation on Arthroplasty Research.

    Science.gov (United States)

    Ondeck, Nathaniel T; Fu, Michael C; Skrip, Laura A; McLynn, Ryan P; Su, Edwin P; Grauer, Jonathan N

    2018-03-01

    Despite the advantages of large, national datasets, one continuing concern is missing data values. Complete case analysis, where only cases with complete data are analyzed, is commonly used rather than more statistically rigorous approaches such as multiple imputation. This study characterizes the potential selection bias introduced using complete case analysis and compares the results of common regressions using both techniques following unicompartmental knee arthroplasty. Patients undergoing unicompartmental knee arthroplasty were extracted from the 2005 to 2015 National Surgical Quality Improvement Program. As examples, the demographics of patients with and without missing preoperative albumin and hematocrit values were compared. Missing data were then treated with both complete case analysis and multiple imputation (an approach that reproduces the variation and associations that would have been present in a full dataset) and the conclusions of common regressions for adverse outcomes were compared. A total of 6117 patients were included, of which 56.7% were missing at least one value. Younger, female, and healthier patients were more likely to have missing preoperative albumin and hematocrit values. The use of complete case analysis removed 3467 patients from the study in comparison with multiple imputation which included all 6117 patients. The 2 methods of handling missing values led to differing associations of low preoperative laboratory values with commonly studied adverse outcomes. The use of complete case analysis can introduce selection bias and may lead to different conclusions in comparison with the statistically rigorous multiple imputation approach. Joint surgeons should consider the methods of handling missing values when interpreting arthroplasty research. Copyright © 2017 Elsevier Inc. All rights reserved.
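
    The two treatments of missingness compared in this record can be reproduced schematically in R with mice; the dataset and outcome below are synthetic stand-ins for the registry variables.

        # Sketch: complete-case logistic regression vs. pooled MI estimates.
        library(mice)

        set.seed(4)
        n   <- 1000
        uka <- data.frame(age        = rnorm(n, 65, 10),
                          sex        = rbinom(n, 1, 0.6),
                          albumin    = rnorm(n, 4, 0.5),
                          hematocrit = rnorm(n, 40, 4),
                          adverse    = rbinom(n, 1, 0.1))
        uka$albumin[sample(n, 400)]    <- NA
        uka$hematocrit[sample(n, 300)] <- NA

        # Complete-case analysis silently drops rows with any missing predictor
        cc_fit <- glm(adverse ~ albumin + hematocrit + age + sex,
                      data = uka, family = binomial)

        # Multiple imputation keeps all cases and propagates imputation uncertainty
        imp    <- mice(uka, m = 10, printFlag = FALSE, seed = 7)
        mi_fit <- with(imp, glm(adverse ~ albumin + hematocrit + age + sex,
                                family = binomial))
        summary(pool(mi_fit))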

  16. The use of multiple imputation for the accurate measurements of individual feed intake by electronic feeders.

    Science.gov (United States)

    Jiao, S; Tiezzi, F; Huang, Y; Gray, K A; Maltecca, C

    2016-02-01

    Obtaining accurate individual feed intake records is the key first step in achieving genetic progress toward more efficient nutrient utilization in pigs. Feed intake records collected by electronic feeding systems contain errors (erroneous and abnormal values exceeding certain cutoff criteria), which are due to feeder malfunction or animal-feeder interaction. In this study, we examined the use of a novel data-editing strategy involving multiple imputation to minimize the impact of errors and missing values on the quality of feed intake data collected by an electronic feeding system. Accuracy of feed intake data adjustment obtained from the conventional linear mixed model (LMM) approach was compared with 2 alternative implementations of multiple imputation by chained equation, denoted as MI (multiple imputation) and MICE (multiple imputation by chained equation). The 3 methods were compared under 3 scenarios, where 5, 10, and 20% feed intake error rates were simulated. Each of the scenarios was replicated 5 times. Accuracy of the alternative error adjustment was measured as the correlation between the true daily feed intake (DFI; daily feed intake in the testing period) or true ADFI (the mean DFI across the testing period) and the adjusted DFI or adjusted ADFI. In the editing process, error cutoff criteria are used to define whether a feed intake visit contains errors. To investigate the possibility that the error cutoff criteria may affect any of the 3 methods, the simulation was repeated with 2 alternative error cutoff values. Multiple imputation methods outperformed the LMM approach in all scenarios, with mean accuracies of 96.7, 93.5, and 90.2% obtained with MI and 96.8, 94.4, and 90.1% obtained with MICE, compared with 91.0, 82.6, and 68.7% using LMM for DFI. Similar results were obtained for ADFI. Furthermore, multiple imputation methods consistently performed better than LMM regardless of the cutoff criteria applied to define errors. In conclusion, multiple imputation …

  17. Use of Multiple Imputation Method to Improve Estimation of Missing Baseline Serum Creatinine in Acute Kidney Injury Research

    Science.gov (United States)

    Peterson, Josh F.; Eden, Svetlana K.; Moons, Karel G.; Ikizler, T. Alp; Matheny, Michael E.

    2013-01-01

    Background and objectives: Baseline creatinine (BCr) is frequently missing in AKI studies. Common surrogate estimates can misclassify AKI and adversely affect the study of related outcomes. This study examined whether multiple imputation improved accuracy of estimating missing BCr beyond current recommendations to apply an assumed estimated GFR (eGFR) of 75 ml/min per 1.73 m² (eGFR 75). Design, setting, participants, & measurements: From 41,114 unique adult admissions (13,003 with and 28,111 without BCr data) at Vanderbilt University Hospital between 2006 and 2008, a propensity score model was developed to predict likelihood of missing BCr. Propensity scoring identified 6502 patients with highest likelihood of missing BCr among the 13,003 patients with known BCr to simulate a “missing” data scenario while preserving actual reference BCr. Within this cohort (n=6502), the ability of various multiple-imputation approaches to estimate BCr and classify AKI was compared with that of eGFR 75. Results: All multiple-imputation methods except the basic one more closely approximated actual BCr than did eGFR 75. Total AKI misclassification was lower with multiple imputation (full multiple imputation + serum creatinine) (9.0%) than with eGFR 75 (12.3%; P<0.001), as was misclassification of AKI stage (full multiple imputation + serum creatinine) (15.3%) versus eGFR 75 (40.5%; P<0.001). Multiple imputation improved specificity and positive predictive value for detecting AKI at the expense of modestly decreasing sensitivity relative to eGFR 75. Conclusions: Multiple imputation can improve accuracy in estimating missing BCr and reduce misclassification of AKI beyond currently proposed methods. PMID:23037980

  18. Limitations in Using Multiple Imputation to Harmonize Individual Participant Data for Meta-Analysis.

    Science.gov (United States)

    Siddique, Juned; de Chavez, Peter J; Howe, George; Cruden, Gracelyn; Brown, C Hendricks

    2018-02-01

    Individual participant data (IPD) meta-analysis is a meta-analysis in which the individual-level data for each study are obtained and used for synthesis. A common challenge in IPD meta-analysis is when variables of interest are measured differently in different studies. The term harmonization has been coined to describe the procedure of placing variables on the same scale in order to permit pooling of data from a large number of studies. Using data from an IPD meta-analysis of 19 adolescent depression trials, we describe a multiple imputation approach for harmonizing 10 depression measures across the 19 trials by treating those depression measures that were not used in a study as missing data. We then apply diagnostics to address the fit of our imputation model. Even after reducing the scale of our application, we were still unable to produce accurate imputations of the missing values. We describe those features of the data that made it difficult to harmonize the depression measures and provide some guidelines for using multiple imputation for harmonization in IPD meta-analysis.

  19. Flexible Imputation of Missing Data

    CERN Document Server

    van Buuren, Stef

    2012-01-01

    Missing data form a problem in every scientific discipline, yet the techniques required to handle them are complicated and often lacking. One of the great ideas in statistical science--multiple imputation--fills gaps in the data with plausible values, the uncertainty of which is coded in the data itself. It also solves other problems, many of which are missing data problems in disguise. Flexible Imputation of Missing Data is supported by many examples using real data taken from the author's vast experience of collaborative research, and presents a practical guide for handling missing data under …

  20. Multiple imputation by chained equations for systematically and sporadically missing multilevel data.

    Science.gov (United States)

    Resche-Rigon, Matthieu; White, Ian R

    2018-06-01

    In multilevel settings such as individual participant data meta-analysis, a variable is 'systematically missing' if it is wholly missing in some clusters and 'sporadically missing' if it is partly missing in some clusters. Previously proposed methods to impute incomplete multilevel data handle either systematically or sporadically missing data, but frequently both patterns are observed. We describe a new multiple imputation by chained equations (MICE) algorithm for multilevel data with arbitrary patterns of systematically and sporadically missing variables. The algorithm is described for multilevel normal data but can easily be extended for other variable types. We first propose two methods for imputing a single incomplete variable: an extension of an existing method and a new two-stage method which conveniently allows for heteroscedastic data. We then discuss the difficulties of imputing missing values in several variables in multilevel data using MICE, and show that even the simplest joint multilevel model implies conditional models which involve cluster means and heteroscedasticity. However, a simulation study finds that the proposed methods can be successfully combined in a multilevel MICE procedure, even when cluster means are not included in the imputation models.
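
    In the mice implementation of multilevel imputation, the cluster indicator is flagged in the predictor matrix; a minimal sketch for one sporadically missing normal variable (synthetic data, and only one of the settings the paper studies):

        # Sketch: multilevel imputation with mice's '2l.norm' method.
        library(mice)

        set.seed(5)
        dat <- data.frame(study = rep(1:10, each = 30), x = rnorm(300))
        dat$y <- 2 + 0.5 * dat$x + rep(rnorm(10), each = 30) + rnorm(300)
        dat$y[sample(300, 80)] <- NA   # sporadically missing outcome

        pred <- make.predictorMatrix(dat)
        pred["y", "study"] <- -2       # -2 flags the cluster (class) variable
        pred["y", "x"]     <-  2       # 2 requests a random slope (1 = fixed)

        meth      <- make.method(dat)
        meth["y"] <- "2l.norm"

        imp <- mice(dat, method = meth, predictorMatrix = pred,
                    m = 10, printFlag = FALSE)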

  1. Multiple imputation to account for missing data in a survey: estimating the prevalence of osteoporosis.

    Science.gov (United States)

    Kmetic, Andrew; Joseph, Lawrence; Berger, Claudie; Tenenhouse, Alan

    2002-07-01

    Nonresponse bias is a concern in any epidemiologic survey in which a subset of selected individuals declines to participate. We reviewed multiple imputation, a widely applicable and easy-to-implement Bayesian methodology to adjust for nonresponse bias. To illustrate the method, we used data from the Canadian Multicentre Osteoporosis Study, a large cohort study of 9423 randomly selected Canadians, designed in part to estimate the prevalence of osteoporosis. Although subjects were randomly selected, only 42% of individuals who were contacted agreed to participate fully in the study. The study design included a brief questionnaire for those invitees who declined further participation in order to collect information on the major risk factors for osteoporosis. These risk factors (which included age, sex, previous fractures, family history of osteoporosis, and current smoking status) were then used to estimate the missing osteoporosis status for nonparticipants using multiple imputation. Both ignorable and nonignorable imputation models were considered. Our results suggest that selection bias in the study is of concern, but only slightly, among the very elderly (age 80+ years), both women and men. Epidemiologists should consider using multiple imputation more often than is current practice.

  2. Methods for significance testing of categorical covariates in logistic regression models after multiple imputation: power and applicability analysis

    NARCIS (Netherlands)

    Eekhout, I.; Wiel, M.A. van de; Heymans, M.W.

    2017-01-01

    Background. Multiple imputation is a recommended method to handle missing data. For significance testing after multiple imputation, Rubin’s Rules (RR) are easily applied to pool parameter estimates. In a logistic regression model, to consider whether a categorical covariate with more than two levels …

  3. Missing value imputation for epistatic MAPs

    LENUS (Irish Health Repository)

    Ryan, Colm

    2010-04-20

    Background: Epistatic miniarray profiling (E-MAP) is a high-throughput approach capable of quantifying aggravating or alleviating genetic interactions between gene pairs. The datasets resulting from E-MAP experiments typically take the form of a symmetric pairwise matrix of interaction scores. These datasets have a significant number of missing values, up to 35%, that can reduce the effectiveness of some data analysis techniques and prevent the use of others. An effective method for imputing interactions would therefore increase the types of possible analysis, as well as increase the potential to identify novel functional interactions between gene pairs. Several methods have been developed to handle missing values in microarray data, but it is unclear how applicable these methods are to E-MAP data because of their pairwise nature and the significantly larger number of missing values. Here we evaluate four alternative imputation strategies, three local (nearest neighbor-based) and one global (PCA-based), that have been modified to work with symmetric pairwise data. Results: We identify different categories for the missing data based on their underlying cause, and show that values from the largest category can be imputed effectively. We compare local and global imputation approaches across a variety of distinct E-MAP datasets, showing that both are competitive and preferable to filling in with zeros. In addition, we show that these methods are effective in an E-MAP from a different species, suggesting that pairwise imputation techniques will be increasingly useful as analogous epistasis mapping techniques are developed in different species. We show that strongly alleviating interactions are significantly more difficult to predict than strongly aggravating interactions. Finally, we show that imputed interactions, generated using nearest neighbor methods, are enriched for annotations in the same manner as measured interactions. Therefore our method potentially …

  4. Practical considerations for sensitivity analysis after multiple imputation applied to epidemiological studies with incomplete data

    Science.gov (United States)

    2012-01-01

    Background: Multiple imputation as usually implemented assumes that data are Missing At Random (MAR), meaning that the underlying missing-data mechanism, given the observed data, is independent of the unobserved data. To explore the sensitivity of the inferences to departures from the MAR assumption, we applied the method proposed by Carpenter et al. (2007). This approach aims to approximate inferences under a Missing Not At Random (MNAR) mechanism by reweighting estimates obtained after multiple imputation, where the weights depend on the assumed degree of departure from the MAR assumption. Methods: The method is illustrated with epidemiological data from a surveillance system of hepatitis C virus (HCV) infection in France during the 2001–2007 period. The subpopulation studied included 4343 HCV-infected patients who reported drug use. Risk factors for severe liver disease were assessed. After performing complete-case and multiple imputation analyses, we applied the sensitivity analysis to 3 risk factors for severe liver disease: past excessive alcohol consumption, HIV co-infection and infection with HCV genotype 3. Results: In these data, the association between severe liver disease and HIV was underestimated if, given the observed data, the chance of observing HIV status is higher when it is positive. Inferences for the two other risk factors were robust to plausible local departures from the MAR assumption. Conclusions: We have demonstrated the practical utility of, and advocate, a pragmatic widely applicable approach to exploring plausible departures from the MAR assumption after multiple imputation. We have developed guidelines for applying this approach to epidemiological studies. PMID:22681630
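
    The reweighting idea can be caricatured in a few lines of R: estimates from the m imputed datasets are averaged with weights that depend on an assumed MNAR departure parameter delta. The weight formula below is only a schematic reading of the Carpenter et al. (2007) approach, not its exact form, and all numbers are invented.

        # Schematic MNAR sensitivity analysis by reweighting MI estimates.
        # delta = 0 recovers the usual MAR (equal-weight) pooling.
        est      <- c(1.10, 1.25, 1.18, 1.22, 1.15)  # estimate per imputation
        ybar_imp <- c(0.42, 0.55, 0.47, 0.53, 0.45)  # mean imputed value per dataset

        reweighted <- function(delta) {
          w <- exp(delta * ybar_imp)
          w <- w / sum(w)              # normalise across imputations
          sum(w * est)                 # weighted pooled estimate
        }
        sapply(c(0, 0.5, 1), reweighted)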

  5. Multiply-Imputed Synthetic Data: Advice to the Imputer

    Directory of Open Access Journals (Sweden)

    Loong Bronwyn

    2017-12-01

    Several statistical agencies have started to use multiply-imputed synthetic microdata to create public-use data in major surveys. The purpose of doing this is to protect the confidentiality of respondents’ identities and sensitive attributes, while allowing standard complete-data analyses of microdata. A key challenge, faced by advocates of synthetic data, is demonstrating that valid statistical inferences can be obtained from such synthetic data for non-confidential questions. Large discrepancies between observed-data and synthetic-data analytic results for such questions may arise because of uncongeniality; that is, differences in the types of inputs available to the imputer, who has access to the actual data, and to the analyst, who has access only to the synthetic data. Here, we discuss a simple, but possibly canonical, example of uncongeniality when using multiple imputation to create synthetic data, which specifically addresses the choices made by the imputer. An initial, unanticipated but not surprising, conclusion is that non-confidential design information used to impute synthetic data should be released with the confidential synthetic data to allow users of synthetic data to avoid possible grossly conservative inferences.

  6. Trend in BMI z-score among Private Schools’ Students in Delhi using Multiple Imputation for Growth Curve Model

    Directory of Open Access Journals (Sweden)

    Vinay K Gupta

    2016-06-01

    Objective: The aim of the study is to assess the trend in mean BMI z-score among private schools’ students from their anthropometric records when there were missing values in the outcome. Methodology: The anthropometric measurements of students from class 1 to 12 were taken from the records of two private schools in Delhi, India, from 2005 to 2010. These records comprise unbalanced longitudinal data; that is, not all the students had measurements recorded in each year. The trend in mean BMI z-score was estimated through a growth curve model. Prior to that, missing values of BMI z-score were imputed through multiple imputation using the same model. A complete case analysis was also performed after excluding missing values to compare the results with those obtained from analysis of the multiply imputed data. Results: The mean BMI z-score among school students significantly decreased over time in the imputed data (β = -0.2030, se = 0.0889, p = 0.0232) after adjusting for age, gender, class and school. Complete case analysis also showed a decrease in mean BMI z-score, though it was not statistically significant (β = -0.2861, se = 0.0987, p = 0.065). Conclusions: The estimates obtained from the multiple imputation analysis were better than those from complete case analysis in terms of lower standard errors. We showed that anthropometric measurements from school records can be used to monitor the weight status of children and adolescents, and that multiple imputation using a growth curve model can be useful when analyzing such data.

  7. Auxiliary variables in multiple imputation in regression with missing X: a warning against including too many in small sample research

    Directory of Open Access Journals (Sweden)

    Hardt Jochen

    2012-12-01

    Background: Multiple imputation is becoming increasingly popular. Theoretical considerations as well as simulation studies have shown that the inclusion of auxiliary variables is generally of benefit. Methods: A simulation study of a linear regression with a response Y and two predictors X1 and X2 was performed on data with n = 50, 100 and 200, using complete cases or multiple imputation with 0, 10, 20, 40 and 80 auxiliary variables. Mechanisms of missingness were either 100% MCAR or 50% MAR + 50% MCAR. Auxiliary variables had low (r = .10) vs. moderate (r = .50) correlations with the X's and Y. Results: The inclusion of auxiliary variables can improve a multiple imputation model. However, inclusion of too many variables leads to downward bias of regression coefficients and decreases precision. When the correlations are low, inclusion of auxiliary variables is not useful. Conclusion: More research on auxiliary variables in multiple imputation should be performed. A preliminary rule of thumb could be that the ratio of variables to cases with complete data should not go below 1:3.
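
    A practical guard against over-inclusion, consistent with this warning, is to screen candidate auxiliaries by correlation before imputation; mice's quickpred() helper does this kind of screening (data and thresholds below are illustrative):

        # Sketch: restrict the imputation model to auxiliaries that pass
        # minimum-correlation and usable-case thresholds.
        library(mice)

        set.seed(6)
        n   <- 100
        dat <- as.data.frame(matrix(rnorm(n * 12), n))
        names(dat) <- c("y", "x1", "x2", paste0("aux", 1:9))
        dat$x1[sample(n, 30)] <- NA

        pred <- quickpred(dat, mincor = 0.3, minpuc = 0.25)  # screening rules
        imp  <- mice(dat, predictorMatrix = pred, m = 10, printFlag = FALSE)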

  8. Factors associated with low birth weight in Nepal using multiple imputation

    Directory of Open Access Journals (Sweden)

    Usha Singh

    2017-02-01

    Background: Survey data from low-income countries on birth weight usually pose a persistent problem. Studies of birth weight have acknowledged missing data on birth weight, but these records are not included in the analysis. Furthermore, other missing data on the determinants of birth weight are not addressed. Thus, this study tries to identify determinants that are associated with low birth weight (LBW), using multiple imputation to handle missing data on birth weight and its determinants. Methods: The child dataset from the Nepal Demographic and Health Survey (NDHS, 2011) was utilized in this study. A total of 5,240 children were born between 2006 and 2011, of which 87% had at least one measured variable missing and 21% had no recorded birth weight. All the analyses were carried out in R version 3.1.3. The transform-then-impute method was applied to check for interaction between explanatory variables and imputed missing data. The survey package was applied to each imputed dataset to account for the survey design and sampling method. Survey logistic regression was applied to identify the determinants associated with LBW. Results: The prevalence of LBW was 15.4% after imputation. Women with the highest autonomy on their own health, compared to those with health decisions involving husband or others (adjusted odds ratio (OR) 1.87, 95% confidence interval (CI) 1.31, 2.67) or husband and woman together (adjusted OR 1.57, 95% CI 1.05, 2.35), were less likely to give birth to LBW infants. Mothers using highly polluting cooking fuels (adjusted OR 1.49, 95% CI 1.03, 2.22) were more likely to give birth to LBW infants than mothers using non-polluting cooking fuels. Conclusion: The findings of this study suggest that obtaining the prevalence of LBW from only the sample of measured birth weights and ignoring missing data results in underestimation.
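
    The analysis pattern used here, impute first, then run design-based logistic regression on each completed dataset and pool, is commonly written with the mice, survey and mitools packages; the design variables below are synthetic placeholders for the NDHS fields.

        # Sketch: survey-weighted logistic regression after MI.
        library(mice); library(survey); library(mitools)

        set.seed(7)
        n   <- 400
        dhs <- data.frame(psu      = rep(1:40, each = 10),
                          stratum  = rep(1:4,  each = 100),
                          wt       = runif(n, 0.5, 2),
                          autonomy = rbinom(n, 1, 0.4),
                          fuel     = rbinom(n, 1, 0.5),
                          lbw      = rbinom(n, 1, 0.15))
        dhs$lbw[sample(n, 80)] <- NA   # unrecorded birth weight status

        imp  <- mice(dhs, m = 10, printFlag = FALSE)
        impl <- imputationList(lapply(1:10, function(k) complete(imp, k)))

        des  <- svydesign(ids = ~psu, strata = ~stratum, weights = ~wt,
                          data = impl, nest = TRUE)
        fits <- with(des, svyglm(lbw ~ autonomy + fuel, family = quasibinomial()))
        summary(MIcombine(fits))  # Rubin's rules across the survey analyses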

  9. Comparison of results from different imputation techniques for missing data from an anti-obesity drug trial

    DEFF Research Database (Denmark)

    Jørgensen, Anders W.; Lundstrøm, Lars H; Wetterslev, Jørn

    2014-01-01

    BACKGROUND: In randomised trials of medical interventions, the most reliable analysis follows the intention-to-treat (ITT) principle. However, the ITT analysis requires that missing outcome data be imputed. Different imputation techniques may give different results and some may lead to bias... of handling missing data in a 60-week placebo-controlled anti-obesity drug trial on topiramate. METHODS: We compared an analysis of complete cases with datasets where missing body weight measurements had been replaced using three different imputation methods: last observation carried forward (LOCF), baseline carried forward (BOCF) and MI...

  10. Multiple imputation of rainfall missing data in the Iberian Mediterranean context

    Science.gov (United States)

    Miró, Juan Javier; Caselles, Vicente; Estrela, María José

    2017-11-01

    Given the increasing need for complete rainfall data networks, diverse methods have been proposed in recent years for filling gaps in observed precipitation series, progressively more advanced than traditional approaches. The present study validated 10 methods (6 linear, 2 non-linear and 2 hybrid) that allow multiple imputation, i.e., filling in missing data for multiple incomplete series in a dense network of neighboring stations at the same time. These were applied to daily and monthly rainfall in two sectors of the Júcar River Basin Authority (eastern Iberian Peninsula), which is characterized by high spatial irregularity and difficulty of rainfall estimation. A classification of precipitation according to its genetic origin was applied as a pre-processing step, and a quantile-mapping adjustment as a post-processing technique. The results showed in general better performance for the non-linear and hybrid methods; notably, the non-linear PCA (NLPCA) method considerably outperforms the Self Organizing Maps (SOM) method among the non-linear approaches. Among the linear methods, the Regularized Expectation-Maximization method (RegEM) was the best, but far behind NLPCA. Applying EOF filtering as post-processing of NLPCA (a hybrid approach) yielded the best results.

  11. Multiple imputation to account for measurement error in marginal structural models

    Science.gov (United States)

    Edwards, Jessie K.; Cole, Stephen R.; Westreich, Daniel; Crane, Heidi; Eron, Joseph J.; Mathews, W. Christopher; Moore, Richard; Boswell, Stephen L.; Lesko, Catherine R.; Mugavero, Michael J.

    2015-01-01

    Background: Marginal structural models are an important tool for observational studies. These models typically assume that variables are measured without error. We describe a method to account for differential and non-differential measurement error in a marginal structural model. Methods: We illustrate the method by estimating the joint effects of antiretroviral therapy initiation and current smoking on all-cause mortality in a United States cohort of 12,290 patients with HIV followed for up to 5 years between 1998 and 2011. Smoking status was likely measured with error, but a subset of 3686 patients who reported smoking status on separate questionnaires composed an internal validation subgroup. We compared a standard joint marginal structural model fit using inverse probability weights to a model that also accounted for misclassification of smoking status using multiple imputation. Results: In the standard analysis, current smoking was not associated with increased risk of mortality. After accounting for misclassification, current smoking without therapy was associated with increased mortality [hazard ratio (HR): 1.2 (95% CI: 0.6, 2.3)]. The HR for current smoking and therapy (0.4; 95% CI: 0.2, 0.7) was similar to the HR for no smoking and therapy (0.4; 95% CI: 0.2, 0.6). Conclusions: Multiple imputation can be used to account for measurement error in concert with methods for causal inference to strengthen results from observational studies. PMID:26214338

  12. Multiple Imputation to Account for Measurement Error in Marginal Structural Models.

    Science.gov (United States)

    Edwards, Jessie K; Cole, Stephen R; Westreich, Daniel; Crane, Heidi; Eron, Joseph J; Mathews, W Christopher; Moore, Richard; Boswell, Stephen L; Lesko, Catherine R; Mugavero, Michael J

    2015-09-01

    Marginal structural models are an important tool for observational studies. These models typically assume that variables are measured without error. We describe a method to account for differential and nondifferential measurement error in a marginal structural model. We illustrate the method by estimating the joint effects of antiretroviral therapy initiation and current smoking on all-cause mortality in a United States cohort of 12,290 patients with HIV followed for up to 5 years between 1998 and 2011. Smoking status was likely measured with error, but a subset of 3,686 patients who reported smoking status on separate questionnaires composed an internal validation subgroup. We compared a standard joint marginal structural model fit using inverse probability weights to a model that also accounted for misclassification of smoking status using multiple imputation. In the standard analysis, current smoking was not associated with increased risk of mortality. After accounting for misclassification, current smoking without therapy was associated with increased mortality (hazard ratio [HR] 1.2; 95% confidence interval [CI] 0.6, 2.3). The HR for current smoking and therapy (0.4; 95% CI 0.2, 0.7) was similar to the HR for no smoking and therapy (0.4; 95% CI 0.2, 0.6). Multiple imputation can be used to account for measurement error in concert with methods for causal inference to strengthen results from observational studies.
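
    The imputation step can be sketched directly: true smoking status is treated as missing outside the validation subgroup and imputed from the error-prone report plus covariates. All names and the error rate below are invented, and the weighted marginal structural model itself is omitted.

        # Sketch: misclassification correction via MI with a validation subset.
        library(mice)

        set.seed(8)
        n      <- 1000
        cohort <- data.frame(report = rbinom(n, 1, 0.3),  # error-prone smoking report
                             age    = rnorm(n, 40, 10))
        truth  <- ifelse(runif(n) < 0.9, cohort$report, 1 - cohort$report)
        valid  <- seq_len(n) <= 300                       # validation subgroup
        cohort$smoke_true <- factor(ifelse(valid, truth, NA))

        meth <- make.method(cohort)
        meth["smoke_true"] <- "logreg"   # binary imputation model

        imp <- mice(cohort, method = meth, m = 20, printFlag = FALSE)
        # completed datasets then feed the inverse-probability-weighted model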

  13. Missing data imputation using statistical and machine learning methods in a real breast cancer problem.

    Science.gov (United States)

    Jerez, José M; Molina, Ignacio; García-Laencina, Pedro J; Alba, Emilio; Ribelles, Nuria; Martín, Miguel; Franco, Leonardo

    2010-10-01

    Missing data imputation is an important task in cases where it is crucial to use all available data and not discard records with missing values. This work evaluates the performance of several statistical and machine learning imputation methods that were used to predict recurrence in patients in an extensive real breast cancer data set. Imputation methods based on statistical techniques, e.g., mean, hot-deck and multiple imputation, and machine learning techniques, e.g., multi-layer perceptron (MLP), self-organising maps (SOM) and k-nearest neighbour (KNN), were applied to data collected through the "El Álamo-I" project, and the results were then compared to those obtained from the listwise deletion (LD) imputation method. The database includes demographic, therapeutic and recurrence-survival information from 3679 women with operable invasive breast cancer diagnosed in 32 different hospitals belonging to the Spanish Breast Cancer Research Group (GEICAM). The accuracies of predictions on early cancer relapse were measured using artificial neural networks (ANNs), in which different ANNs were estimated using the data sets with imputed missing values. The imputation methods based on machine learning algorithms outperformed imputation statistical methods in the prediction of patient outcome. Friedman's test revealed a significant difference (p=0.0091) in the observed area under the ROC curve (AUC) values, and the pairwise comparison test showed that the AUCs for MLP, KNN and SOM were significantly higher (p=0.0053, p=0.0048 and p=0.0071, respectively) than the AUC from the LD-based prognosis model. The methods based on machine learning techniques were the most suited for the imputation of missing values and led to a significant enhancement of prognosis accuracy compared to imputation methods based on statistical procedures. Copyright © 2010 Elsevier B.V. All rights reserved.

  14. Sensitivity analysis in multiple imputation in effectiveness studies of psychotherapy.

    Science.gov (United States)

    Crameri, Aureliano; von Wyl, Agnes; Koemeda, Margit; Schulthess, Peter; Tschuschke, Volker

    2015-01-01

    The importance of preventing and treating incomplete data in effectiveness studies is nowadays emphasized. However, most of the publications focus on randomized clinical trials (RCT). One flexible technique for statistical inference with missing data is multiple imputation (MI). Since methods such as MI rely on the assumption of missing data being at random (MAR), a sensitivity analysis for testing the robustness against departures from this assumption is required. In this paper we present a sensitivity analysis technique based on posterior predictive checking, which takes into consideration the concept of clinical significance used in the evaluation of intra-individual changes. We demonstrate the possibilities this technique can offer with the example of irregular longitudinal data collected with the Outcome Questionnaire-45 (OQ-45) and the Helping Alliance Questionnaire (HAQ) in a sample of 260 outpatients. The sensitivity analysis can be used to (1) quantify the degree of bias introduced by missing not at random data (MNAR) in a worst reasonable case scenario, (2) compare the performance of different analysis methods for dealing with missing data, or (3) detect the influence of possible violations to the model assumptions (e.g., lack of normality). Moreover, our analysis showed that ratings from the patient's and therapist's version of the HAQ could significantly improve the predictive value of the routine outcome monitoring based on the OQ-45. Since analysis dropouts always occur, repeated measurements with the OQ-45 and the HAQ analyzed with MI are useful to improve the accuracy of outcome estimates in quality assurance assessments and non-randomized effectiveness studies in the field of outpatient psychotherapy.

  15. Multiple imputation strategies for zero-inflated cost data in economic evaluations: which method works best?

    NARCIS (Netherlands)

    MacNeil Vroomen, Janet; Eekhout, Iris; Dijkgraaf, Marcel G; van Hout, Hein; de Rooij, Sophia E; Heymans, Martijn W; Bosmans, Judith E

    2016-01-01

    Cost and effect data often have missing data because economic evaluations are frequently added onto clinical studies where cost data are rarely the primary outcome. The objective of this article was to investigate which multiple imputation strategy is most appropriate to use for missing …

  16. Multiple imputation for multivariate data with missing and below-threshold measurements: time-series concentrations of pollutants in the Arctic.

    Science.gov (United States)

    Hopke, P K; Liu, C; Rubin, D B

    2001-03-01

    Many chemical and environmental data sets are complicated by the existence of fully missing values or censored values known to lie below detection thresholds. For example, week-long samples of airborne particulate matter were obtained at Alert, NWT, Canada, between 1980 and 1991, where some of the concentrations of 24 particulate constituents were coarsened in the sense of being either fully missing or below detection limits. To facilitate scientific analysis, it is appealing to create complete data by filling in missing values so that standard complete-data methods can be applied. We briefly review commonly used strategies for handling missing values and focus on the multiple-imputation approach, which generally leads to valid inferences when faced with missing data. Three statistical models are developed for multiply imputing the missing values of airborne particulate matter. We expect that these models are useful for creating multiple imputations in a variety of incomplete multivariate time series data sets.
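
    For values known only to lie below a detection threshold, an imputation draw can come from the model's conditional distribution truncated above at that threshold; a base-R sketch under a normal model, with illustrative parameters:

        # Sketch: draw imputations for a left-censored value from a normal
        # distribution truncated above at the detection limit 'dl'.
        rtrunc_below <- function(n, mu, sigma, dl) {
          u <- runif(n, 0, pnorm(dl, mu, sigma))  # uniform over censored region
          qnorm(u, mu, sigma)                     # inverse-CDF draw, always < dl
        }

        set.seed(9)
        rtrunc_below(5, mu = 2.0, sigma = 0.8, dl = 1.0)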

  17. Handling missing data in cluster randomized trials: A demonstration of multiple imputation with PAN through SAS

    Directory of Open Access Journals (Sweden)

    Jiangxiu Zhou

    2014-09-01

    The purpose of this study is to demonstrate a way of dealing with missing data in cluster randomized trials by doing multiple imputation (MI) with the PAN package in R through SAS. The procedure for doing MI with PAN through SAS is demonstrated in detail in order for researchers to be able to use this procedure with their own data. An illustration of the technique with empirical data was also included. In this illustration the PAN results were compared with pairwise deletion and three types of MI: (1) Normal Model MI (NM-MI) ignoring the cluster structure; (2) NM-MI with dummy-coded cluster variables (fixed cluster structure); and (3) a hybrid NM-MI which imputes half the time ignoring the cluster structure, and the other half including the dummy-coded cluster variables. The empirical analysis showed that using PAN and the other strategies produced comparable parameter estimates. However, the dummy-coded MI overestimated the intraclass correlation, whereas MI ignoring the cluster structure and the hybrid MI underestimated the intraclass correlation. When compared with PAN, the p-value and standard error for the treatment effect were higher with dummy-coded MI, and lower with MI ignoring the cluster structure, the hybrid MI approach, and pairwise deletion. Previous studies have shown that NM-MI is not appropriate for handling missing data in clustered randomized trials. This approach, in addition to the pairwise deletion approach, leads to a biased intraclass correlation and faulty statistical conclusions. Imputation in clustered randomized trials should be performed with PAN. We have demonstrated an easy way of using PAN through SAS.

  18. A web-based approach to data imputation

    KAUST Repository

    Li, Zhixu

    2013-10-24

    In this paper, we present WebPut, a prototype system that adopts a novel web-based approach to the data imputation problem. Towards this, WebPut utilizes the available information in an incomplete database in conjunction with the data consistency principle. Moreover, WebPut extends effective Information Extraction (IE) methods for the purpose of formulating web search queries that are capable of effectively retrieving missing values with high accuracy. WebPut employs a confidence-based scheme that efficiently leverages our suite of data imputation queries to automatically select the most effective imputation query for each missing value. A greedy iterative algorithm is proposed to schedule the imputation order of the different missing values in a database, and in turn the issuing of their corresponding imputation queries, for improving the accuracy and efficiency of WebPut. Moreover, several optimization techniques are also proposed to reduce the cost of estimating the confidence of imputation queries at both the tuple-level and the database-level. Experiments based on several real-world data collections demonstrate not only the effectiveness of WebPut compared to existing approaches, but also the efficiency of our proposed algorithms and optimization techniques. © 2013 Springer Science+Business Media New York.

  19. Missing data imputation: focusing on single imputation.

    Science.gov (United States)

    Zhang, Zhongheng

    2016-01-01

    Complete case analysis is widely used for handling missing data, and it is the default method in many statistical packages. However, this method may introduce bias, and useful information may be omitted from the analysis. Many imputation methods have therefore been developed to fill the gap. The present article focuses on single imputation. Imputation with the mean, median or mode is simple but, like complete case analysis, can bias the mean and standard deviation; moreover, these approaches ignore relationships with other variables. Regression imputation can preserve the relationship between the missing values and other variables. Many sophisticated methods also exist for handling missing values in longitudinal data. This article focuses primarily on how to implement R code to perform single imputation, while avoiding complex mathematical calculations.
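
    The strategies discussed translate into a few lines of base R; a toy sketch (not the article's own code):

        x <- c(2.1, NA, 3.5, 4.0, NA, 5.2)
        x_mean   <- replace(x, is.na(x), mean(x, na.rm = TRUE))    # mean imputation
        x_median <- replace(x, is.na(x), median(x, na.rm = TRUE))  # median imputation

        # Regression imputation: predict the incomplete variable from a complete one.
        df <- data.frame(y = c(1.2, NA, 2.8, 3.1, NA, 4.4),
                         z = c(0.9, 1.1, 2.5, 3.0, 3.6, 4.1))
        fit <- lm(y ~ z, data = df)                # fitted on complete cases
        df$y[is.na(df$y)] <- predict(fit, df[is.na(df$y), ])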

  20. A Note on the Effect of Data Clustering on the Multiple-Imputation Variance Estimator: A Theoretical Addendum to the Lewis et al. article in JOS 2014

    Directory of Open Access Journals (Sweden)

    He Yulei

    2016-03-01

    Multiple imputation is a popular approach to handling missing data. Although it was originally motivated by survey nonresponse problems, it has been readily applied to other data settings. However, its general behavior still remains unclear when applied to survey data with complex sample designs, including clustering. Recently, Lewis et al. (2014) compared single- and multiple-imputation analyses for certain incomplete variables in the 2008 National Ambulatory Medical Care Survey, which has a nationally representative, multistage, and clustered sampling design. Their study results suggested that the increase of the variance estimate due to multiple imputation compared with single imputation largely disappears for estimates with large design effects. We complement their empirical research by providing some theoretical reasoning. We consider data sampled from an equally weighted, single-stage cluster design and characterize the process using a balanced, one-way normal random-effects model. Assuming that the missingness is completely at random, we derive analytic expressions for the within- and between-multiple-imputation variance estimators for the mean estimator, and thus conveniently reveal the impact of design effects on these variance estimators. We propose approximations for the fraction of missing information in clustered samples, extending previous results for simple random samples. We discuss some generalizations of this research and its practical implications for data release by statistical agencies.
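
    The within- and between-imputation components analyzed in this note are those of Rubin's combining rules, which are straightforward to compute; a base-R sketch with made-up estimates:

        # Rubin's rules for m completed-data estimates q with variances u.
        pool_rubin <- function(q, u) {
          m    <- length(q)
          qbar <- mean(q)               # pooled point estimate
          ubar <- mean(u)               # within-imputation variance
          b    <- var(q)                # between-imputation variance
          tot  <- ubar + (1 + 1/m) * b  # total variance
          list(estimate = qbar, variance = tot,
               lambda = (1 + 1/m) * b / tot)  # approx. fraction of missing info
        }
        pool_rubin(q = c(10.2, 9.8, 10.5, 10.1, 9.9),
                   u = c(0.40, 0.38, 0.42, 0.39, 0.41))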

  1. Statistical Analysis of a Class: Monte Carlo and Multiple Imputation Spreadsheet Methods for Estimation and Extrapolation

    Science.gov (United States)

    Fish, Laurel J.; Halcoussis, Dennis; Phillips, G. Michael

    2017-01-01

    The Monte Carlo method and related multiple imputation methods are traditionally used in math, physics and science to estimate and analyze data and are now becoming standard tools in analyzing business and financial problems. However, few sources explain the application of the Monte Carlo method for individuals and business professionals who are…

  2. Cost reduction for web-based data imputation

    KAUST Repository

    Li, Zhixu

    2014-01-01

    Web-based Data Imputation enables the completion of incomplete data sets by retrieving absent field values from the Web. In particular, complete fields can be used as keywords in imputation queries for absent fields. However, due to the ambiguity of these keywords and the complexity of data on the Web, different queries may retrieve different answers for the same absent field value. To decide the most probable right answer for each absent field value, existing methods issue quite a few of the available imputation queries for each absent value and then vote to decide the most probable right answer. As a result, a large number of imputation queries must be issued to fill all absent values in an incomplete data set, which brings a large overhead. In this paper, we work on reducing the cost of Web-based Data Imputation in two aspects: First, we propose a query execution scheme which can secure the most probable right answer to an absent field value by issuing as few imputation queries as possible. Second, we recognize and prune queries that will probably fail to return any answers a priori. Our extensive experimental evaluation shows that our proposed techniques substantially reduce the cost of Web-based Imputation without hurting its high imputation accuracy. © 2014 Springer International Publishing Switzerland.

  3. Comparison of different Methods for Univariate Time Series Imputation in R

    OpenAIRE

    Moritz, Steffen; Sardá, Alexis; Bartz-Beielstein, Thomas; Zaefferer, Martin; Stork, Jörg

    2015-01-01

    Missing values in datasets are a well-known problem, and quite a lot of R packages offer imputation functions. But while imputation in general is well covered within R, it is hard to find functions for the imputation of univariate time series. The problem is that most standard imputation techniques cannot be applied directly: most algorithms rely on inter-attribute correlations, whereas univariate time series imputation needs to exploit time dependencies. This paper provides an overview of ...
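
    A hedged sketch of what such univariate imputation looks like in practice, assuming the imputeTS package (by an overlapping set of authors) and its current function names:

        library(imputeTS)                         # assumed installed

        data(tsAirgap)                            # airline series with gaps
        x_int <- na_interpolation(tsAirgap)       # linear interpolation
        x_kal <- na_kalman(tsAirgap)              # Kalman smoothing, structural model
        x_ma  <- na_ma(tsAirgap, k = 4)           # weighted moving average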

  4. A Comparison of Joint Model and Fully Conditional Specification Imputation for Multilevel Missing Data

    Science.gov (United States)

    Mistler, Stephen A.; Enders, Craig K.

    2017-01-01

    Multiple imputation methods can generally be divided into two broad frameworks: joint model (JM) imputation and fully conditional specification (FCS) imputation. JM draws missing values simultaneously for all incomplete variables using a multivariate distribution, whereas FCS imputes variables one at a time from a series of univariate conditional…

  5. Which missing value imputation method to use in expression profiles: a comparative study and two selection schemes

    Directory of Open Access Journals (Sweden)

    Lotz, Meredith J

    2008-01-01

    Background: Gene expression data frequently contain missing values; however, most down-stream analyses for microarray experiments require complete data. In the literature many methods have been proposed to estimate missing values via information of the correlation patterns within the gene expression matrix. Each method has its own advantages, but the specific conditions for which each method is preferred remain largely unclear. In this report we describe an extensive evaluation of eight current imputation methods on multiple types of microarray experiments, including time series, multiple exposures, and multiple exposures × time series data. We then introduce two complementary selection schemes for determining the most appropriate imputation method for any given data set. Results: We found that the optimal imputation algorithms (LSA, LLS, and BPCA) are all highly competitive with each other, and that no method is uniformly superior in all the data sets we examined. The success of each method can also depend on the underlying "complexity" of the expression data, where we take complexity to indicate the difficulty in mapping the gene expression matrix to a lower-dimensional subspace. We developed an entropy measure to quantify the complexity of expression matrices and found that, by incorporating this information, the entropy-based selection (EBS) scheme is useful for selecting an appropriate imputation algorithm. We further propose a simulation-based self-training selection (STS) scheme. This technique has been used previously for microarray data imputation, but for different purposes. The scheme selects the optimal or near-optimal method with high accuracy but at an increased computational cost. Conclusion: Our findings provide insight into the problem of which imputation method is optimal for a given data set. Three top-performing methods (LSA, LLS and BPCA) are competitive with each other. Global-based imputation methods (PLS, SVD, BPCA) performed better on microarray data with lower complexity.

  6. Which missing value imputation method to use in expression profiles: a comparative study and two selection schemes.

    Science.gov (United States)

    Brock, Guy N; Shaffer, John R; Blakesley, Richard E; Lotz, Meredith J; Tseng, George C

    2008-01-10

    Gene expression data frequently contain missing values; however, most down-stream analyses for microarray experiments require complete data. In the literature many methods have been proposed to estimate missing values via information of the correlation patterns within the gene expression matrix. Each method has its own advantages, but the specific conditions for which each method is preferred remain largely unclear. In this report we describe an extensive evaluation of eight current imputation methods on multiple types of microarray experiments, including time series, multiple exposures, and multiple exposures × time series data. We then introduce two complementary selection schemes for determining the most appropriate imputation method for any given data set. We found that the optimal imputation algorithms (LSA, LLS, and BPCA) are all highly competitive with each other, and that no method is uniformly superior in all the data sets we examined. The success of each method can also depend on the underlying "complexity" of the expression data, where we take complexity to indicate the difficulty in mapping the gene expression matrix to a lower-dimensional subspace. We developed an entropy measure to quantify the complexity of expression matrices and found that, by incorporating this information, the entropy-based selection (EBS) scheme is useful for selecting an appropriate imputation algorithm. We further propose a simulation-based self-training selection (STS) scheme. This technique has been used previously for microarray data imputation, but for different purposes. The scheme selects the optimal or near-optimal method with high accuracy but at an increased computational cost. Our findings provide insight into the problem of which imputation method is optimal for a given data set. Three top-performing methods (LSA, LLS and BPCA) are competitive with each other. Global-based imputation methods (PLS, SVD, BPCA) performed better on microarray data with lower complexity.
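
    None of the benchmarked algorithms (LSA, LLS, BPCA) is reproduced here, but a minimal hand-rolled k-NN-style sketch for a genes × samples matrix conveys the local, correlation-driven family they belong to:

        # For each gene with gaps, average the k nearest complete genes
        # (Euclidean distance over the observed positions). Illustrative only.
        knn_impute <- function(expr, k = 5) {
          complete <- expr[rowSums(is.na(expr)) == 0, , drop = FALSE]
          k <- min(k, nrow(complete))
          for (i in which(rowSums(is.na(expr)) > 0)) {
            target <- expr[i, ]
            ok     <- !is.na(target)
            d  <- apply(complete, 1, function(g) sqrt(mean((g[ok] - target[ok])^2)))
            nn <- complete[order(d)[seq_len(k)], , drop = FALSE]
            expr[i, !ok] <- colMeans(nn)[!ok]
          }
          expr
        }

        set.seed(2)
        m <- matrix(rnorm(200), nrow = 20)   # 20 genes x 10 samples
        m[sample(200, 10)] <- NA
        m_imp <- knn_impute(m, k = 3)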

  7. R package imputeTestbench to compare imputations methods for univariate time series

    OpenAIRE

    Bokde, Neeraj; Kulat, Kishore; Beck, Marcus W; Asencio-Cortés, Gualberto

    2016-01-01

    This paper describes the R package imputeTestbench that provides a testbench for comparing imputation methods for missing data in univariate time series. The imputeTestbench package can be used to simulate the amount and type of missing data in a complete dataset and compare filled data using different imputation methods. The user has the option to simulate missing data by removing observations completely at random or in blocks of different sizes. Several default imputation methods are includ...
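
    Assuming the package's documented entry point, a comparison run is essentially one call; treat this as a sketch of the interface rather than a vetted recipe:

        library(imputeTestbench)        # assumed installed with default backends

        err <- impute_errors(dataIn = nottem)  # nottem: a complete base-R series
        err                                    # error profile by method and % missing
        plot_errors(err)                       # visual comparison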

  8. Multiple imputation of missing passenger boarding data in the national census of ferry operators

    Science.gov (United States)

    2008-08-01

    This report presents findings from the 2006 National Census of Ferry Operators (NCFO) augmented with imputed values for passengers and passenger miles. Due to the imputation procedures used to calculate missing data, totals in Table 1 may not corresp...

  9. Imputing data that are missing at high rates using a boosting algorithm

    Energy Technology Data Exchange (ETDEWEB)

    Cauthen, Katherine Regina [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Lambert, Gregory [Apple Inc., Cupertino, CA (United States); Ray, Jaideep [Sandia National Lab. (SNL-CA), Livermore, CA (United States); Lefantzi, Sophia [Sandia National Lab. (SNL-CA), Livermore, CA (United States)

    2016-09-01

    Traditional multiple imputation approaches may perform poorly for datasets with high rates of missingness unless a large number of imputations m is used. This paper implements an alternative machine learning-based approach to imputing data that are missing at high rates. Here, we use boosting to create a strong learner from a weak learner fitted to a dataset missing many observations. This approach may be applied to a variety of types of learners (models). The approach is demonstrated by application to a spatiotemporal dataset for predicting dengue outbreaks in India from meteorological covariates. A Bayesian spatiotemporal CAR model is boosted to produce imputations, and the overall RMSE from a k-fold cross-validation is used to assess imputation accuracy.
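
    The report boosts a Bayesian spatiotemporal CAR model, which is not reproduced here; as a generic stand-in for the idea of letting a boosted learner fill the gaps, a hypothetical sketch with the gbm package on simulated data:

        library(gbm)                 # assumed available; any boosted learner would do

        set.seed(7)
        n <- 500
        d <- data.frame(t = rnorm(n), h = rnorm(n))
        d$y <- 1.5 * d$t - 0.8 * d$h + rnorm(n, sd = 0.3)
        d$y[sample(n, 350)] <- NA                     # ~70% missing: a "high rate"

        obs <- !is.na(d$y)
        fit <- gbm(y ~ t + h, data = d[obs, ],
                   distribution = "gaussian", n.trees = 200, interaction.depth = 2)
        d$y[!obs] <- predict(fit, newdata = d[!obs, ], n.trees = 200)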

  10. Comparison of missing value imputation methods in time series: the case of Turkish meteorological data

    Science.gov (United States)

    Yozgatligil, Ceylan; Aslan, Sipan; Iyigun, Cem; Batmaz, Inci

    2013-04-01

    This study aims to compare several imputation methods for completing the missing values of spatio-temporal meteorological time series. To this end, six imputation methods are assessed with respect to various criteria, including accuracy, robustness, precision, and efficiency, for artificially created missing data in monthly total precipitation and mean temperature series obtained from the Turkish State Meteorological Service. Of these methods, the simple arithmetic average, normal ratio (NR), and NR weighted with correlations are the simple ones, whereas the multilayer perceptron neural network and the multiple imputation strategy based on Markov chain Monte Carlo with expectation-maximization (EM-MCMC) are computationally intensive. In addition, we propose a modification of the EM-MCMC method. Besides using a conventional accuracy measure based on squared errors, we also suggest the correlation dimension (CD) technique of nonlinear dynamic time series analysis, which takes spatio-temporal dependencies into account, for evaluating imputation performance. Based on detailed graphical and quantitative analysis, it can be said that although the computational methods, particularly the EM-MCMC method, are computationally inefficient, they appear favorable for the imputation of meteorological time series with respect to different missingness periods, considering both measures and both series studied. To conclude, using the EM-MCMC algorithm to impute missing values before conducting statistical analyses of meteorological data will decrease the amount of uncertainty and give more robust results. Moreover, the CD measure can be suggested for the performance evaluation of missing data imputation, particularly with computational methods, since it gives more precise results for meteorological time series.

  11. Dealing with missing data in a multi-question depression scale: a comparison of imputation methods

    Directory of Open Access Journals (Sweden)

    Stuart Heather

    2006-12-01

    Background: Missing data present a challenge to many research projects. The problem is often pronounced in studies utilizing self-report scales, and literature addressing different strategies for dealing with missing data in such circumstances is scarce. The objective of this study was to compare six different imputation techniques for dealing with missing data in the Zung Self-reported Depression Scale (SDS). Methods: 1580 participants from a surgical outcomes study completed the SDS. The SDS is a 20-question scale that respondents complete by circling a value of 1 to 4 for each question. The sum of the responses is calculated and respondents are classified as exhibiting depressive symptoms when their total score is over 40. Missing values were simulated by randomly selecting questions whose values were then deleted (a missing completely at random simulation). Additionally, missing at random and missing not at random simulations were completed. Six imputation methods were then considered: (1) multiple imputation, (2) single regression, (3) individual mean, (4) overall mean, (5) participant's preceding response, and (6) random selection of a value from 1 to 4. For each method, the imputed mean SDS score and standard deviation were compared to the population statistics. The Spearman correlation coefficient, percent misclassified and the Kappa statistic were also calculated. Results: When 10% of values are missing, all the imputation methods except random selection produce Kappa statistics greater than 0.80, indicating 'near perfect' agreement. MI produces the most valid imputed values with a high Kappa statistic (0.89), although both single regression and individual mean imputation also produced favorable results. As the percent of missing information increased to 30%, or when unbalanced missing data were introduced, MI maintained a high Kappa statistic. The individual mean and single regression methods produced Kappas in the 'substantial agreement' range.

  12. Randomly and Non-Randomly Missing Renal Function Data in the Strong Heart Study: A Comparison of Imputation Methods.

    Directory of Open Access Journals (Sweden)

    Nawar Shara

    Kidney and cardiovascular disease are widespread among populations with a high prevalence of diabetes, such as the American Indians participating in the Strong Heart Study (SHS). Studying these conditions simultaneously in longitudinal studies is challenging, because the morbidity and mortality associated with these diseases result in missing data, and these data are likely not missing at random. When such data are merely excluded, study findings may be compromised. In this article, a subset of 2264 participants with complete renal function data from Strong Heart Exams 1 (1989-1991), 2 (1993-1995), and 3 (1998-1999) was used to examine the performance of five methods used to impute missing data: listwise deletion, mean of serial measures, adjacent value, multiple imputation, and pattern-mixture. Three missing at random models and one non-missing at random model were used to compare the performance of the imputation techniques on randomly and non-randomly missing data. The pattern-mixture method was found to perform best for imputing renal function data that were not missing at random. Determining whether data are missing at random or not can help in choosing the imputation method that will provide the most accurate results.

  13. Collateral missing value imputation: a new robust missing value estimation algorithm for microarray data.

    Science.gov (United States)

    Sehgal, Muhammad Shoaib B; Gondal, Iqbal; Dooley, Laurence S

    2005-05-15

    Microarray data are used in a range of application areas in biology, although they often contain considerable numbers of missing values. These missing values can significantly affect subsequent statistical analysis and machine learning algorithms, so there is a strong motivation to estimate them as accurately as possible before such algorithms are applied. While many imputation algorithms have been proposed, more robust techniques need to be developed so that further analysis of biological data can be accurately undertaken. In this paper, an innovative missing value imputation algorithm called collateral missing value estimation (CMVE) is presented, which uses multiple covariance-based imputation matrices for the final prediction of missing values. The matrices are computed and optimized using least square regression and linear programming methods. The new CMVE algorithm has been compared with existing estimation techniques including Bayesian principal component analysis imputation (BPCA), least square impute (LSImpute) and K-nearest neighbour (KNN). All these methods were rigorously tested to estimate missing values in three separate non-time series (ovarian cancer based) and one time series (yeast sporulation) dataset. Each method was quantitatively analyzed using the normalized root mean square (NRMS) error measure, covering a wide range of randomly introduced missing value probabilities from 0.01 to 0.2. Experiments were also undertaken on the yeast dataset, which comprised 1.7% actual missing values, to test the hypothesis that CMVE performed better not only for randomly occurring but also for a real distribution of missing values. The results confirmed that CMVE consistently demonstrated superior and robust estimation capability of missing values compared with other methods for both series types of data, for the same order of computational complexity. A concise theoretical framework has also been formulated to validate the improved performance of the CMVE

  14. Genotype Imputation for Latinos Using the HapMap and 1000 Genomes Project Reference Panels

    Directory of Open Access Journals (Sweden)

    Gao, Xiaoyi

    2012-06-01

    Genotype imputation is a vital tool in genome-wide association studies (GWAS) and meta-analyses of multiple GWAS results. Imputation enables researchers to increase genomic coverage and to pool data generated using different genotyping platforms. HapMap samples are often employed as the reference panel. More recently, the 1000 Genomes Project resource is becoming the primary source of reference panels. Multiple GWAS and meta-analyses are targeting Latinos, the most populous and fastest-growing minority group in the US. However, genotype imputation resources for Latinos are rather limited compared to those for individuals of European ancestry, largely because of the lack of good reference data. One choice of reference panel for Latinos is one derived from the population of Mexican individuals in Los Angeles contained in the HapMap Phase 3 project and the 1000 Genomes Project. However, a detailed evaluation of the quality of imputed genotypes derived from the public reference panels has not yet been reported. Using simulation studies, the Illumina OmniExpress GWAS data from the Los Angeles Latino Eye Study and the MACH software package, we evaluated the accuracy of genotype imputation in Latinos. Our results show that the 1000 Genomes Project AMR+CEU+YRI reference panel provides the highest imputation accuracy for Latinos, and that also including Asian samples in the panel can reduce imputation accuracy. We also provide the imputation accuracy for each autosomal chromosome using the 1000 Genomes Project panel for Latinos. Our results serve as a guide to future imputation-based analysis in Latinos.

  15. 3D-MICE: integration of cross-sectional and longitudinal imputation for multi-analyte longitudinal clinical data.

    Science.gov (United States)

    Luo, Yuan; Szolovits, Peter; Dighe, Anand S; Baron, Jason M

    2018-06-01

    A key challenge in clinical data mining is that most clinical datasets contain missing data. Since many commonly used machine learning algorithms require complete datasets (no missing data), clinical analytic approaches often entail an imputation procedure to "fill in" missing data. However, although most clinical datasets contain a temporal component, most commonly used imputation methods do not adequately accommodate longitudinal time-based data. We sought to develop a new imputation algorithm, 3-dimensional multiple imputation with chained equations (3D-MICE), that can perform accurate imputation of missing clinical time series data. We extracted clinical laboratory test results for 13 commonly measured analytes (clinical laboratory tests). We imputed missing test results for the 13 analytes using 3 imputation methods: multiple imputation with chained equations (MICE), Gaussian process (GP), and 3D-MICE. 3D-MICE utilizes both MICE and GP imputation to integrate cross-sectional and longitudinal information. To evaluate imputation method performance, we randomly masked selected test results and imputed these masked results alongside results missing from our original data. We compared predicted results to measured results for masked data points. 3D-MICE performed significantly better than MICE and GP-based imputation in a composite of all 13 analytes, predicting missing results with a normalized root-mean-square error of 0.342, compared to 0.373 for MICE alone and 0.358 for GP alone. 3D-MICE offers a novel and practical approach to imputing clinical laboratory time series data. 3D-MICE may provide an additional tool for use as a foundation in clinical predictive analytics and intelligent clinical decision support.

  16. Improving accuracy of rare variant imputation with a two-step imputation approach

    DEFF Research Database (Denmark)

    Kreiner-Møller, Eskil; Medina-Gomez, Carolina; Uitterlinden, André G

    2015-01-01

    Genotype imputation has been the pillar of the success of genome-wide association studies (GWAS) for identifying common variants associated with common diseases. However, most GWAS have been run using only 60 HapMap samples as reference for imputation, meaning less frequent and rare variants have not been comprehensively scrutinized. Next-generation arrays ensuring sufficient coverage, together with new reference panels such as the 1000 Genomes panel, are emerging to facilitate imputation of low-frequency single-nucleotide polymorphisms (minor allele frequency (MAF) ... reference sample genotyped on a dense array and thereafter to the 1000 Genomes reference panel. We show that mean imputation quality, measured by the r(2) using this approach, increases by 28% for variants with a MAF between 1 and 5%, as compared with direct imputation to the 1000 Genomes reference. Similarly...

  17. On multivariate imputation and forecasting of decadal wind speed missing data.

    Science.gov (United States)

    Wesonga, Ronald

    2015-01-01

    This paper demonstrates the application of multiple imputation by chained equations and time series forecasting to wind speed data. The study was motivated by the high prevalence of missing values in historic wind speed data. Findings based on the fully conditional specification under multiple imputation by chained equations provided reliable imputations of the missing wind speed data. Further, the forecasting model's smoothing parameter, alpha (0.014), is close to zero, confirming that recent past observations are more suitable for forecasting wind speeds. The maximum decadal wind speed for Entebbe International Airport was estimated to be 17.6 metres per second at a 0.05 level of significance with a bound on the error of estimation of 10.8 metres per second. The large bound on the error of estimation confirms the dynamic tendencies of wind speed at the airport under study.
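
    The reported smoothing parameter is that of simple exponential smoothing, which base R exposes through HoltWinters; an illustrative series stands in for the Entebbe data:

        set.seed(5)
        ws  <- ts(rnorm(120, mean = 8, sd = 2), frequency = 12)  # placeholder winds
        fit <- HoltWinters(ws, beta = FALSE, gamma = FALSE)      # SES only
        fit$alpha      # weight on recent observations (0.014 in the study)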

  18. Avoid Filling Swiss Cheese with Whipped Cream: Imputation Techniques and Evaluation Procedures for Cross-Country Time Series

    OpenAIRE

    Michael Weber; Michaela Denk

    2011-01-01

    International organizations collect data from national authorities to create multivariate cross-sectional time series for their analyses. As data from countries whose statistical systems are not yet well established may be incomplete, the bridging of data gaps is a crucial challenge. This paper investigates data structures and missing data patterns in the cross-sectional time series framework, reviews missing value imputation techniques used for micro data in official statistics, and discusses the...

  19. [Imputing missing data in public health: general concepts and application to dichotomous variables].

    Science.gov (United States)

    Hernández, Gilma; Moriña, David; Navarro, Albert

    The presence of missing data in collected variables is common in health surveys, but the subsequent imputation thereof at the time of analysis is not. Working with imputed data may have certain benefits regarding the precision of the estimators and the unbiased identification of associations between variables. The imputation process is probably still little understood by many non-statisticians, who view it as highly complex and with an uncertain goal. To clarify these questions, this note aims to provide a straightforward, non-exhaustive overview of the imputation process to enable public health researchers to ascertain its strengths. All this is done in the context of dichotomous variables, which are commonplace in public health. To illustrate these concepts, an example is introduced in which missing data are handled by means of simple and multiple imputation. Copyright © 2017 SESPAS. Published by Elsevier España, S.L.U. All rights reserved.
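
    To make the concepts concrete for a dichotomous variable, a minimal illustration with the mice package and its bundled nhanes2 data (an assumption for illustration; the note itself is tool-agnostic):

        library(mice)

        imp <- mice(nhanes2, m = 5, seed = 123, printFlag = FALSE)  # logreg for factors
        fit <- with(imp, glm(hyp ~ age, family = binomial))         # hyp is dichotomous
        summary(pool(fit))                                          # Rubin-pooled estimates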

  20. Data imputation analysis for Cosmic Rays time series

    Science.gov (United States)

    Fernandes, R. C.; Lucio, P. S.; Fernandez, J. H.

    2017-05-01

    The occurrence of missing data in Galactic Cosmic Ray (GCR) time series is inevitable, since data loss is due to mechanical and human failure, technical problems, and the different periods of operation of GCR stations. The aim of this study was to perform multiple dataset imputation in order to reconstruct the observational dataset. The study used the monthly time series of GCR Climax (CLMX) and Roma (ROME) from 1960 to 2004 to simulate scenarios of 10%, 20%, 30%, 40%, 50%, 60%, 70%, 80% and 90% missing data relative to the observed ROME series, with 50 replicates; the CLMX station was used as a proxy for allocating these scenarios. Three different methods for monthly dataset imputation were selected: Amelia II, which runs the bootstrap expectation-maximization algorithm; MICE, which runs an algorithm based on multivariate imputation by chained equations; and MTSDI, an expectation-maximization-based method for the imputation of missing values in multivariate normal time series. The synthetic time series were compared with the observed ROME series using several skill measures, such as the RMSE, NRMSE, agreement index, R, R2, F-test and t-test. The results showed that for CLMX and ROME the R2 and R statistics were equal to 0.98 and 0.96, respectively. Increasing the number of gaps was observed to degrade the quality of the time series. Data imputation was most efficient with the MTSDI method, with negligible errors and the best skill coefficients. The results suggest a limit of about 60% missing data for the imputation of monthly averages. It is noteworthy that the CLMX, ROME and KIEL stations present no missing data in the target period. This methodology allowed 43 time series to be reconstructed.
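
    The skill measures used above are simple to compute directly; a base-R sketch for the RMSE and a range-normalised NRMSE (one common convention, assumed here):

        rmse  <- function(obs, imp) sqrt(mean((obs - imp)^2))
        nrmse <- function(obs, imp) rmse(obs, imp) / diff(range(obs))

        obs <- c(10, 12, 11, 14, 13)
        imp <- c(10.2, 11.7, 11.4, 13.6, 13.1)
        c(RMSE = rmse(obs, imp), NRMSE = nrmse(obs, imp))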

  1. LinkImputeR: user-guided genotype calling and imputation for non-model organisms.

    Science.gov (United States)

    Money, Daniel; Migicovsky, Zoë; Gardner, Kyle; Myles, Sean

    2017-07-10

    Genomic studies such as genome-wide association and genomic selection require genome-wide genotype data. All existing technologies used to create these data result in missing genotypes, which are often then inferred using genotype imputation software. However, existing imputation methods most often make use only of genotypes that are successfully inferred after having passed a certain read depth threshold. Because of this, any read information for genotypes that did not pass the threshold, and were thus set to missing, is ignored. Most genomic studies also choose read depth thresholds and quality filters without investigating their effects on the size and quality of the resulting genotype data. Moreover, almost all genotype imputation methods require ordered markers and are therefore of limited utility in non-model organisms. Here we introduce LinkImputeR, a software program that exploits the read count information that is normally ignored, and makes use of all available DNA sequence information for the purposes of genotype calling and imputation. It is specifically designed for non-model organisms since it requires neither ordered markers nor a reference panel of genotypes. Using next-generation DNA sequence (NGS) data from apple, cannabis and grape, we quantify the effect of varying read count and missingness thresholds on the quantity and quality of genotypes generated from LinkImputeR. We demonstrate that LinkImputeR can increase the number of genotype calls by more than an order of magnitude, can improve genotyping accuracy by several percent and can thus improve the power of downstream analyses. Moreover, we show that the effects of quality and read depth filters can differ substantially between data sets and should therefore be investigated on a per-study basis. By exploiting DNA sequence data that is normally ignored during genotype calling and imputation, LinkImputeR can significantly improve both the quantity and quality of genotype data generated from

  2. Two-pass imputation algorithm for missing value estimation in gene expression time series.

    Science.gov (United States)

    Tsiporkova, Elena; Boeva, Veselka

    2007-10-01

    Gene expression microarray experiments frequently generate datasets with multiple values missing. However, most of the analysis, mining, and classification methods for gene expression data require a complete matrix of gene array values. Therefore, the accurate estimation of missing values in such datasets has been recognized as an important issue, and several imputation algorithms have already been proposed to the biological community. Most of these approaches, however, are not particularly suitable for time series expression profiles. In view of this, we propose a novel imputation algorithm, which is specially suited for the estimation of missing values in gene expression time series data. The algorithm utilizes Dynamic Time Warping (DTW) distance in order to measure the similarity between time expression profiles, and subsequently selects for each gene expression profile with missing values a dedicated set of candidate profiles for estimation. Three different DTW-based imputation (DTWimpute) algorithms have been considered: position-wise, neighborhood-wise, and two-pass imputation. These have initially been prototyped in Perl, and their accuracy has been evaluated on yeast expression time series data using several different parameter settings. The experiments have shown that the two-pass algorithm consistently outperforms, in particular for datasets with a higher level of missing entries, the neighborhood-wise and the position-wise algorithms. The performance of the two-pass DTWimpute algorithm has further been benchmarked against the weighted K-Nearest Neighbors algorithm, which is widely used in the biological community; the former algorithm has appeared superior to the latter one. Motivated by these findings, indicating clearly the added value of the DTW techniques for missing value estimation in time series data, we have built an optimized C++ implementation of the two-pass DTWimpute algorithm. The software also provides for a choice between three different

  3. A Nonparametric, Multiple Imputation-Based Method for the Retrospective Integration of Data Sets

    Science.gov (United States)

    Carrig, Madeline M.; Manrique-Vallier, Daniel; Ranby, Krista W.; Reiter, Jerome P.; Hoyle, Rick H.

    2015-01-01

    Complex research questions often cannot be addressed adequately with a single data set. One sensible alternative to the high cost and effort associated with the creation of large new data sets is to combine existing data sets containing variables related to the constructs of interest. The goal of the present research was to develop a flexible, broadly applicable approach to the integration of disparate data sets that is based on nonparametric multiple imputation and the collection of data from a convenient, de novo calibration sample. We demonstrate proof of concept for the approach by integrating three existing data sets containing items related to the extent of problematic alcohol use and associations with deviant peers. We discuss both necessary conditions for the approach to work well and potential strengths and weaknesses of the method compared to other data set integration approaches. PMID:26257437

  4. Handling missing data for the identification of charged particles in a multilayer detector: A comparison between different imputation methods

    Energy Technology Data Exchange (ETDEWEB)

    Riggi, S., E-mail: sriggi@oact.inaf.it [INAF - Osservatorio Astrofisico di Catania (Italy); Riggi, D. [Keras Strategy - Milano (Italy); Riggi, F. [Dipartimento di Fisica e Astronomia - Università di Catania (Italy); INFN, Sezione di Catania (Italy)

    2015-04-21

    Identification of charged particles in a multilayer detector by the energy loss technique may also be achieved by the use of a neural network. The performance of the network becomes worse when a large fraction of information is missing, for instance due to detector inefficiencies. Algorithms which provide a way to impute missing information have been developed over the past years. Among the various approaches, we focused on normal mixture models in comparison with standard mean imputation and multiple imputation methods. Further, to account for the intrinsic asymmetry of the energy loss data, we considered skew-normal mixture models and provided a closed-form implementation in the Expectation-Maximization (EM) algorithm framework to handle missing patterns. The method has been applied to a test case where the energy losses of pions, kaons and protons in a six-layer silicon detector are considered as input neurons to a neural network. Results are given in terms of reconstruction efficiency and purity of the various species in different momentum bins.

  5. Differential network analysis with multiply imputed lipidomic data.

    Directory of Open Access Journals (Sweden)

    Maiju Kujala

    The importance of lipids for cell function and health has been widely recognized; for example, a disorder in the lipid composition of cells has been related to atherosclerosis-caused cardiovascular disease (CVD). Lipidomics analyses are characterized by a large, yet not huge, number of mutually correlated variables, and their associations with outcomes are potentially of a complex nature. Differential network analysis provides a formal statistical method, capable of inferential analysis, for examining differences in the network structures of the lipids under two biological conditions. It also guides us to identify potential relationships requiring further biological investigation. We provide a recipe for conducting a permutation test on association scores resulting from partial least squares regression with multiply imputed lipidomic data from the LUdwigshafen RIsk and Cardiovascular Health (LURIC) study, paying particular attention to the left-censored missing values typical of a wide range of data sets in the life sciences. Left-censored missing values are low-level concentrations known to exist somewhere between zero and a lower limit of quantification. To make full use of the LURIC data with their missing values, we utilize state-of-the-art multiple imputation techniques and propose solutions to the challenges that incomplete data sets bring to differential network analysis. The customized network analysis helps us to understand the complexities of the underlying biological processes by identifying lipids and lipid classes that interact with each other, and by recognizing the most important differentially expressed lipids between two subgroups of coronary artery disease (CAD) patients: those who had a fatal CVD event and those who remained stable during a two-year follow-up.

  6. Using imputation to provide location information for nongeocoded addresses.

    Directory of Open Access Journals (Sweden)

    Frank C Curriero

    2010-02-01

    The importance of geography as a source of variation in health research continues to receive sustained attention in the literature. The inclusion of geographic information in such research often begins by adding data to a map, which is predicated on some knowledge of location. A precise level of spatial information is conventionally achieved through geocoding, the geographic information system (GIS) process of translating mailing address information to coordinates on a map. The geocoding process is not without its limitations, though, since there is always a percentage of addresses which cannot be converted successfully (nongeocodable). This raises concerns regarding bias, since traditionally the practice has been to exclude nongeocoded data records from analysis. In this manuscript we develop and evaluate a set of imputation strategies for dealing with missing spatial information from nongeocoded addresses. The strategies are developed assuming a known zip code, with increasing use of collateral information, namely the spatial distribution of the population at risk. Strategies are evaluated using prostate cancer data obtained from the Maryland Cancer Registry. We consider total case enumerations at the Census county, tract, and block group level as the outcome of interest when applying and evaluating the methods. Multiple imputation is used to provide estimated total case counts based on complete data (geocodes plus imputed nongeocodes) with a measure of uncertainty. Results indicate that the imputation strategy based on available population-based age, gender, and race information performed the best overall at the county, tract, and block group levels. The procedure allows the potentially biased and likely under-reported outcome (case enumerations based only on the geocoded records) to be presented with a statistically adjusted count (imputed count), with a measure of uncertainty, based on all the case data, the geocodes and imputed

  7. Application of Multiple Imputation for Missing Values in Three-Way Three-Mode Multi-Environment Trial Data.

    Science.gov (United States)

    Tian, Ting; McLachlan, Geoffrey J; Dieters, Mark J; Basford, Kaye E

    2015-01-01

    It is a common occurrence in plant breeding programs to observe missing values in three-way three-mode multi-environment trial (MET) data. We proposed modifications of models for estimating missing observations in these data arrays, and developed a novel approach based on hierarchical clustering. Multiple imputation (MI) was used in four ways: multiple agglomerative hierarchical clustering, a normal distribution model, a normal regression model, and predictive mean matching. The latter three models were used with both Bayesian and non-Bayesian analysis, while the first approach used a clustering procedure with randomly selected attributes and assigned real values from the nearest neighbour to the record with missing observations. Different proportions of data entries in six complete datasets were randomly selected to be missing, and the MI methods were compared in terms of the efficiency and accuracy of estimating those values. The results indicated that the models using Bayesian analysis had slightly better estimation accuracy than those using non-Bayesian analysis, but were more time-consuming. However, the novel approach of multiple agglomerative hierarchical clustering demonstrated the best overall performance.

  8. Imputation methods for filling missing data in urban air pollution data for Malaysia

    Directory of Open Access Journals (Sweden)

    Nur Afiqah Zakaria

    2018-06-01

    The air quality measurement data obtained from a continuous ambient air quality monitoring (CAAQM) station usually contain missing values, which typically occur due to machine failure, routine maintenance and human error. In this study, hourly monitoring data for CO, O3, PM10, SO2, NOx, NO2, ambient temperature and humidity were used to evaluate four imputation methods (Mean Top Bottom, Linear Regression, Multiple Imputation and Nearest Neighbour). The air pollutant observations were simulated at four percentages of missing data, i.e. 5%, 10%, 15% and 20%. Performance measures, namely the Mean Absolute Error, Root Mean Squared Error, Coefficient of Determination and Index of Agreement, were used to describe the goodness of fit of the imputation methods. Based on these performance measures, the Mean Top Bottom method was selected as the most appropriate imputation method for filling in the missing values in the air pollutant data.
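
    As described, the Mean Top Bottom method replaces each gap with the mean of the nearest observed values above and below it; a minimal base-R sketch:

        mean_top_bottom <- function(x) {
          for (i in which(is.na(x))) {
            above <- tail(x[seq_len(i - 1)][!is.na(x[seq_len(i - 1)])], 1)
            below <- head(x[-seq_len(i)][!is.na(x[-seq_len(i)])], 1)
            x[i] <- mean(c(above, below))     # mean of the two nearest neighbours
          }
          x
        }
        mean_top_bottom(c(3.2, NA, 3.8, NA, NA, 4.4))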

  9. Improved imputation accuracy of rare and low-frequency variants using population-specific high-coverage WGS-based imputation reference panel.

    Science.gov (United States)

    Mitt, Mario; Kals, Mart; Pärn, Kalle; Gabriel, Stacey B; Lander, Eric S; Palotie, Aarno; Ripatti, Samuli; Morris, Andrew P; Metspalu, Andres; Esko, Tõnu; Mägi, Reedik; Palta, Priit

    2017-06-01

    Genetic imputation is a cost-efficient way to improve the power and resolution of genome-wide association (GWA) studies. Currently available public imputation reference panels accurately predict genotypes for common variants with minor allele frequency (MAF) ≥ 5% and low-frequency variants (0.5% ≤ MAF < 5%) across diverse populations, but the imputation of rare variants (MAF < 0.5%) is still rather limited. In the current study, we compare the imputation accuracy achieved with reference panels from diverse populations against that of a population-specific, high-coverage (30×) whole-genome sequencing (WGS) based reference panel comprising 2244 Estonian individuals (0.25% of adult Estonians). Although the Estonian-specific panel contains fewer haplotypes and variants, the imputation confidence and accuracy of imputed low-frequency and rare variants were significantly higher. The results indicate the utility of population-specific reference panels for human genetic studies.

  10. A New Missing Data Imputation Algorithm Applied to Electrical Data Loggers

    Directory of Open Access Journals (Sweden)

    Concepción Crespo Turrado

    2015-12-01

    Nowadays, data collection is a key process in the study of electrical power networks when searching for harmonics and a lack of balance among phases. In this context, missing data for any of the main electrical variables (phase-to-neutral voltage, phase-to-phase voltage, current in each phase, and power factor) adversely affect any time series study performed. When this occurs, a data imputation process must be carried out to substitute estimated values for the missing data. This paper presents a novel missing data imputation method based on multivariate adaptive regression splines (MARS) and compares it with the well-known technique called multivariate imputation by chained equations (MICE). The results obtained demonstrate how the proposed method outperforms the MICE algorithm.
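
    A hypothetical sketch of the MARS side of such a comparison, assuming the earth package as the MARS implementation (the paper's own code may differ):

        library(earth)               # assumed MARS implementation

        set.seed(3)
        d <- data.frame(v = rnorm(300), w = rnorm(300))
        d$u <- 2 * sin(d$v) + 0.5 * d$w + rnorm(300, sd = 0.2)
        d$u[sample(300, 60)] <- NA

        fit <- earth(u ~ v + w, data = d[!is.na(d$u), ])   # MARS on observed rows
        d$u[is.na(d$u)] <- predict(fit, d[is.na(d$u), ])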

  11. Data driven estimation of imputation error-a strategy for imputation with a reject option

    DEFF Research Database (Denmark)

    Bak, Nikolaj; Hansen, Lars Kai

    2016-01-01

    Missing data is a common problem in many research fields and is a challenge that always needs careful considerations. One approach is to impute the missing values, i.e., replace missing values with estimates. When imputation is applied, it is typically applied to all records with missing values i...

  12. Missing data methods for dealing with missing items in quality of life questionnaires. A comparison by simulation of personal mean score, full information maximum likelihood, multiple imputation, and hot deck techniques applied to the SF-36 in the French 2003 decennial health survey.

    Science.gov (United States)

    Peyre, Hugo; Leplège, Alain; Coste, Joël

    2011-03-01

    Missing items are common in quality of life (QoL) questionnaires and present a challenge for research in this field. It remains unclear which of the various methods proposed to deal with missing data performs best in this context. We compared personal mean score, full information maximum likelihood, multiple imputation, and hot deck techniques using various realistic simulation scenarios of item missingness in QoL questionnaires constructed within the framework of classical test theory. Samples of 300 and 1,000 subjects were randomly drawn from the 2003 INSEE Decennial Health Survey (of 23,018 subjects representative of the French population who had completed the SF-36), and various patterns of missing data were generated according to three item non-response rates (3, 6, and 9%) and three types of missing data (Little and Rubin's "missing completely at random," "missing at random," and "missing not at random"). The missing data methods were evaluated in terms of accuracy and precision for the analysis of one descriptive and one association parameter for three different scales of the SF-36. For all item non-response rates and types of missing data, multiple imputation and full information maximum likelihood appeared superior to the personal mean score, and especially to hot deck, in terms of accuracy and precision; however, the bias associated with the personal mean score was negligible, and it appears nonetheless appropriate for dealing with items missing from completed SF-36 questionnaires in most situations of routine use. These results can reasonably be extended to other questionnaires constructed according to classical test theory.
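
    Of the compared techniques, the personal mean score is the simplest to state in code: a respondent's missing items are replaced by the mean of that respondent's observed items. A base-R sketch:

        person_mean_impute <- function(items) {
          t(apply(items, 1, function(r) { r[is.na(r)] <- mean(r, na.rm = TRUE); r }))
        }
        person_mean_impute(rbind(c(1, 2, NA, 4),
                                 c(3, NA, NA, 2)))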

  13. Accounting for the Multiple Natures of Missing Values in Label-Free Quantitative Proteomics Data Sets to Compare Imputation Strategies.

    Science.gov (United States)

    Lazar, Cosmin; Gatto, Laurent; Ferro, Myriam; Bruley, Christophe; Burger, Thomas

    2016-04-01

    Missing values are a genuine issue in label-free quantitative proteomics. Recent works have surveyed the different statistical methods for imputation, compared them on real or simulated data sets, and recommended a list of missing value imputation methods for proteomics applications. Although insightful, these comparisons do not account for two important facts: (i) depending on the proteomics data set, the missingness mechanism may be of a different nature, and (ii) each imputation method is devoted to a specific type of missingness mechanism. As a result, we believe that the question at stake is not to find the most accurate imputation method in general, but instead the most appropriate one. We describe a series of comparisons that support our views: for instance, we show that a supposedly "under-performing" method (i.e., giving baseline average results), if applied at the "appropriate" time in the data-processing pipeline (before or after peptide aggregation) on a data set with the "appropriate" nature of missing values, can outperform a blindly applied, supposedly "better-performing" method (i.e., the reference method from the state of the art). This leads us to formulate a few practical guidelines regarding the choice and the application of an imputation method in a proteomics context.

  14. iVAR: a program for imputing missing data in multivariate time series using vector autoregressive models.

    Science.gov (United States)

    Liu, Siwei; Molenaar, Peter C M

    2014-12-01

    This article introduces iVAR, an R program for imputing missing data in multivariate time series on the basis of vector autoregressive (VAR) models. We conducted a simulation study to compare iVAR with three methods for handling missing data: listwise deletion, imputation with sample means and variances, and multiple imputation ignoring time dependency. The results showed that iVAR produces better estimates for the cross-lagged coefficients than do the other three methods. We demonstrate the use of iVAR with an empirical example of time series electrodermal activity data and discuss the advantages and limitations of the program.

  15. Analyzing the changing gender wage gap based on multiply imputed right censored wages

    OpenAIRE

    Gartner, Hermann; Rässler, Susanne

    2005-01-01

    "In order to analyze the gender wage gap with the German IAB-employment register we have to solve the problem of censored wages at the upper limit of the social security system. We treat this problem as a missing data problem. We regard the missingness mechanism as not missing at random (NMAR, according to Little and Rubin, 1987, 2002) as well as missing by design. The censored wages are multiply imputed by draws of a random variable from a truncated distribution. The multiple imputation is b...

  16. Combining Fourier and lagged k-nearest neighbor imputation for biomedical time series data.

    Science.gov (United States)

    Rahman, Shah Atiqur; Huang, Yuxiao; Claassen, Jan; Heintzman, Nathaniel; Kleinberg, Samantha

    2015-12-01

    Most clinical and biomedical data contain missing values. A patient's record may be split across multiple institutions, devices may fail, and sensors may not be worn at all times. While these missing values are often ignored, this can lead to bias and error when the data are mined. Further, the data are not simply missing at random. Instead, the measurement of a variable such as blood glucose may depend on its prior values as well as on those of other variables. These dependencies exist across time as well, but current methods have yet to incorporate these temporal relationships along with multiple types of missingness. To address this, we propose an imputation method (FLk-NN) that incorporates time-lagged correlations both within and across variables by combining two imputation methods, based on an extension of k-NN and the Fourier transform. This enables imputation of missing values even when all data at a time point are missing and when there are different types of missingness both within and across variables. In comparison to other approaches on three biological datasets (simulated and actual Type 1 diabetes datasets, and multi-modality neurological ICU monitoring), the proposed method has the highest imputation accuracy. This was true for up to half the data being missing and when consecutive missing values are a significant fraction of the overall time series length. Copyright © 2015 Elsevier Inc. All rights reserved.

  17. BRITS: Bidirectional Recurrent Imputation for Time Series

    OpenAIRE

    Cao, Wei; Wang, Dong; Li, Jian; Zhou, Hao; Li, Lei; Li, Yitan

    2018-01-01

    Time series are widely used as signals in many classification and regression tasks, and it is ubiquitous for time series to contain missing values. Given multiple correlated time series, how can we fill in the missing values and predict their class labels? Existing imputation methods often impose strong assumptions on the underlying data-generating process, such as linear dynamics in the state space. In this paper, we propose BRITS, a novel method based on recurrent neural networks for missing va...

  18. Gaussian mixture clustering and imputation of microarray data.

    Science.gov (United States)

    Ouyang, Ming; Welsh, William J; Georgopoulos, Panos

    2004-04-12

    In microarray experiments, missing entries arise from blemishes on the chips. In large-scale studies, virtually every chip contains some missing entries and more than 90% of the genes are affected. Many analysis methods require a full set of data. Either those genes with missing entries are excluded, or the missing entries are filled with estimates prior to the analyses. This study compares methods of missing value estimation. Two evaluation metrics of imputation accuracy are employed. First, the root mean squared error measures the difference between the true values and the imputed values. Second, the number of mis-clustered genes measures the difference between clustering with true values and that with imputed values; it examines the bias introduced by imputation to clustering. The Gaussian mixture clustering with model averaging imputation is superior to all other imputation methods, according to both evaluation metrics, on both time-series (correlated) and non-time series (uncorrelated) data sets.

  19. Multiple imputation for estimating the risk of developing dementia and its impact on survival.

    Science.gov (United States)

    Yu, Binbing; Saczynski, Jane S; Launer, Lenore

    2010-10-01

    Dementia, Alzheimer's disease in particular, is one of the major causes of disability and decreased quality of life among the elderly and a leading obstacle to successful aging. Given the profound impact on public health, much research has focused on the age-specific risk of developing dementia and the impact on survival. Early work has discussed various methods of estimating age-specific incidence of dementia, among which the illness-death model is popular for modeling disease progression. In this article we use multiple imputation to fit multi-state models for survival data with interval censoring and left truncation. This approach allows semi-Markov models in which survival after dementia depends on onset age. Such models can be used to estimate the cumulative risk of developing dementia in the presence of the competing risk of dementia-free death. Simulations are carried out to examine the performance of the proposed method. Data from the Honolulu Asia Aging Study are analyzed to estimate the age-specific and cumulative risks of dementia and to examine the effect of major risk factors on dementia onset and death.

  20. An efficient method to transcription factor binding sites imputation via simultaneous completion of multiple matrices with positional consistency.

    Science.gov (United States)

    Guo, Wei-Li; Huang, De-Shuang

    2017-08-22

    Transcription factors (TFs) are DNA-binding proteins that have a central role in regulating gene expression. Identification of DNA-binding sites of TFs is a key task in understanding transcriptional regulation, cellular processes and disease. Chromatin immunoprecipitation followed by high-throughput sequencing (ChIP-seq) enables genome-wide identification of in vivo TF binding sites. However, it is still difficult to map every TF in every cell line owing to cost and biological material availability, which poses an enormous obstacle for integrated analysis of gene regulation. To address this problem, we propose a novel computational approach, TFBSImpute, for predicting additional TF binding profiles by leveraging information from available ChIP-seq TF binding data. TFBSImpute fuses the dataset to a 3-mode tensor and imputes missing TF binding signals via simultaneous completion of multiple TF binding matrices with positional consistency. We show that signals predicted by our method achieve overall similarity with experimental data and that TFBSImpute significantly outperforms baseline approaches, by assessing the performance of imputation methods against observed ChIP-seq TF binding profiles. Besides, motif analysis shows that TFBSImpute performs better in capturing binding motifs enriched in observed data compared with baselines, indicating that the higher performance of TFBSImpute is not simply due to averaging related samples. We anticipate that our approach will constitute a useful complement to experimental mapping of TF binding, which is beneficial for further study of regulation mechanisms and disease.

  1. Comparison of simple and multiple imputation methods using a risk model for surgical mortality as example

    Directory of Open Access Journals (Sweden)

    Luciana Neves Nunes

    2010-12-01

    sample size was 450 patients. The imputation methods applied were two single imputations and one multiple imputation, under the MAR (Missing at Random) assumption. RESULTS: The variable with missing data was serum albumin, with a 27.1% missing rate. The logistic models adjusted by single imputation were similar, but differed from models obtained by multiple imputation in relation to the inclusion of variables. CONCLUSIONS: The results indicate that it is important to take into account the relationship of albumin to the other observed variables, because different models were obtained under single and multiple imputation. Single imputation underestimates the variability, generating narrower confidence intervals. It is important to consider the use of imputation methods when there is missing data, especially multiple imputation, which takes into account the variability between imputations for the model estimates.
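
    For readers who want the multiple-imputation-plus-pooling workflow described here in runnable form, a minimal sketch follows. It uses scikit-learn's IterativeImputer (with posterior sampling) as a stand-in for a chained-equations imputer and pools one logistic model's coefficients with Rubin's rules; variable names and m=5 are illustrative, not taken from the study.

```python
import numpy as np
import statsmodels.api as sm
from sklearn.experimental import enable_iterative_imputer  # noqa: F401
from sklearn.impute import IterativeImputer

def pooled_logit(X, y, m=5):
    """Create m imputed datasets, fit a logistic model on each, and pool
    coefficients and standard errors with Rubin's rules."""
    coefs, within = [], []
    for i in range(m):
        imputer = IterativeImputer(sample_posterior=True, random_state=i)
        Xi = sm.add_constant(imputer.fit_transform(X))
        fit = sm.Logit(y, Xi).fit(disp=0)
        coefs.append(np.asarray(fit.params))
        within.append(np.asarray(fit.bse) ** 2)
    q = np.mean(coefs, axis=0)              # pooled point estimates
    u = np.mean(within, axis=0)             # average within-imputation variance
    b = np.var(coefs, axis=0, ddof=1)       # between-imputation variance
    total = u + (1 + 1 / m) * b             # Rubin's total variance
    return q, np.sqrt(total)
```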

  2. Combining item response theory with multiple imputation to equate health assessment questionnaires.

    Science.gov (United States)

    Gu, Chenyang; Gutman, Roee

    2017-09-01

    The assessment of patients' functional status across the continuum of care requires a common patient assessment tool. However, assessment tools that are used in various health care settings differ and cannot be easily contrasted. For example, the Functional Independence Measure (FIM) is used to evaluate the functional status of patients who stay in inpatient rehabilitation facilities, the Minimum Data Set (MDS) is collected for all patients who stay in skilled nursing facilities, and the Outcome and Assessment Information Set (OASIS) is collected if they choose home health care provided by home health agencies. All three instruments or questionnaires include functional status items, but the specific items, rating scales, and instructions for scoring different activities vary between the different settings. We consider equating different health assessment questionnaires as a missing data problem, and propose a variant of predictive mean matching method that relies on Item Response Theory (IRT) models to impute unmeasured item responses. Using real data sets, we simulated missing measurements and compared our proposed approach to existing methods for missing data imputation. We show that, for all of the estimands considered, and in most of the experimental conditions that were examined, the proposed approach provides valid inferences, and generally has better coverages, relatively smaller biases, and shorter interval estimates. The proposed method is further illustrated using a real data set. © 2016, The International Biometric Society.
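
    The matching step of predictive mean matching is simple enough to sketch. The variant proposed in the paper drives the predictions with IRT models across questionnaires; the toy version below uses an ordinary linear regression for the predicted means, so it shows only the donor-matching mechanics (all names are ours).

```python
import numpy as np
from sklearn.linear_model import LinearRegression

def pmm_impute(x, Z, n_donors=5, seed=0):
    """x: 1-D target with NaNs; Z: fully observed predictor matrix.
    Fill each missing x with the *observed* value of a donor whose
    predicted mean is among the closest."""
    rng = np.random.default_rng(seed)
    miss = np.isnan(x)
    model = LinearRegression().fit(Z[~miss], x[~miss])
    pred_obs = model.predict(Z[~miss])       # predicted means, observed cases
    pred_mis = model.predict(Z[miss])        # predicted means, missing cases
    filled = x.copy()
    donations = []
    for p in pred_mis:
        # Donor pool: observed cases whose predicted mean is closest to p.
        pool = np.argsort(np.abs(pred_obs - p))[:n_donors]
        donations.append(x[~miss][rng.choice(pool)])
    filled[miss] = donations
    return filled
```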

  3. Estimating the accuracy of geographical imputation

    Directory of Open Access Journals (Sweden)

    Boscoe Francis P

    2008-01-01

    Full Text Available Abstract Background To reduce the number of non-geocoded cases researchers and organizations sometimes include cases geocoded to postal code centroids along with cases geocoded with the greater precision of a full street address. Some analysts then use the postal code to assign information to the cases from finer-level geographies such as a census tract. Assignment is commonly completed using either a postal centroid or by a geographical imputation method which assigns a location by using both the demographic characteristics of the case and the population characteristics of the postal delivery area. To date no systematic evaluation of geographical imputation methods ("geo-imputation") has been completed. The objective of this study was to determine the accuracy of census tract assignment using geo-imputation. Methods Using a large dataset of breast, prostate and colorectal cancer cases reported to the New Jersey Cancer Registry, we determined how often cases were assigned to the correct census tract using alternate strategies of demographic-based geo-imputation, and using assignments obtained from postal code centroids. Assignment accuracy was measured by comparing the tract assigned with the tract originally identified from the full street address. Results Assigning cases to census tracts using the race/ethnicity population distribution within a postal code resulted in more correctly assigned cases than when using postal code centroids. The addition of age characteristics increased the match rates even further. Match rates were highly dependent on both the geographic distribution of race/ethnicity groups and population density. Conclusion Geo-imputation appears to offer some advantages and no serious drawbacks as compared with the alternative of assigning cases to census tracts based on postal code centroids. For a specific analysis, researchers will still need to consider the potential impact of geocoding quality on their results and evaluate
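
    The core assignment rule being evaluated, stripped of the registry machinery, is a weighted draw: pick a tract within the case's postal code with probability proportional to the tract's population in the case's demographic group. A toy sketch with made-up tract names and counts:

```python
import numpy as np

def impute_tract(tracts, group_pops, case_group, seed=None):
    """Assign a case known only at postal-code level to one of that code's
    census tracts, weighted by the tract population in the case's
    demographic group (all names here are illustrative)."""
    rng = np.random.default_rng(seed)
    weights = np.array([group_pops[t].get(case_group, 0) for t in tracts],
                       dtype=float)
    if weights.sum() == 0:                  # fall back to uniform assignment
        weights[:] = 1.0
    return rng.choice(tracts, p=weights / weights.sum())

# Example: a postal code split across three tracts.
pops = {"tract_A": {"hispanic": 120, "white": 900},
        "tract_B": {"hispanic": 840, "white": 300},
        "tract_C": {"hispanic": 40,  "white": 150}}
print(impute_tract(list(pops), pops, "hispanic", seed=1))
```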

  4. Design of a bovine low-density SNP array optimized for imputation.

    Directory of Open Access Journals (Sweden)

    Didier Boichard

    Full Text Available The Illumina BovineLD BeadChip was designed to support imputation to higher density genotypes in dairy and beef breeds by including single-nucleotide polymorphisms (SNPs) that had a high minor allele frequency as well as uniform spacing across the genome except at the ends of the chromosome where densities were increased. The chip also includes SNPs on the Y chromosome and mitochondrial DNA loci that are useful for determining subspecies classification and certain paternal and maternal breed lineages. The total number of SNPs was 6,909. Accuracy of imputation to Illumina BovineSNP50 genotypes using the BovineLD chip was over 97% for most dairy and beef populations. The BovineLD imputations were about 3 percentage points more accurate than those from the Illumina GoldenGate Bovine3K BeadChip across multiple populations. The improvement was greatest when neither parent was genotyped. The minor allele frequencies were similar across taurine beef and dairy breeds as was the proportion of SNPs that were polymorphic. The new BovineLD chip should facilitate low-cost genomic selection in taurine beef and dairy cattle.

  5. Cost reduction for web-based data imputation

    KAUST Repository

    Li, Zhixu; Shang, Shuo; Xie, Qing; Zhang, Xiangliang

    2014-01-01

    Web-based Data Imputation enables the completion of incomplete data sets by retrieving absent field values from the Web. In particular, complete fields can be used as keywords in imputation queries for absent fields. However, due to the ambiguity

  6. TRANSPOSABLE REGULARIZED COVARIANCE MODELS WITH AN APPLICATION TO MISSING DATA IMPUTATION.

    Science.gov (United States)

    Allen, Genevera I; Tibshirani, Robert

    2010-06-01

    Missing data estimation is an important challenge with high-dimensional data arranged in the form of a matrix. Typically this data matrix is transposable, meaning that either the rows, columns or both can be treated as features. To model transposable data, we present a modification of the matrix-variate normal, the mean-restricted matrix-variate normal, in which the rows and columns each have a separate mean vector and covariance matrix. By placing additive penalties on the inverse covariance matrices of the rows and columns, these so-called transposable regularized covariance models allow for maximum likelihood estimation of the mean and non-singular covariance matrices. Using these models, we formulate EM-type algorithms for missing data imputation in both the multivariate and transposable frameworks. We present theoretical results exploiting the structure of our transposable models that allow these models and imputation methods to be applied to high-dimensional data. Simulations and results on microarray data and the Netflix data show that these imputation techniques often outperform existing methods and offer a greater degree of flexibility.

  7. Assessment of imputation methods using varying ecological information to fill the gaps in a tree functional trait database

    Science.gov (United States)

    Poyatos, Rafael; Sus, Oliver; Vilà-Cabrera, Albert; Vayreda, Jordi; Badiella, Llorenç; Mencuccini, Maurizio; Martínez-Vilalta, Jordi

    2016-04-01

    Plant functional traits are increasingly being used in ecosystem ecology thanks to the growing availability of large ecological databases. However, these databases usually contain a large fraction of missing data because measuring plant functional traits systematically is labour-intensive and because most databases are compilations of datasets with different sampling designs. As a result, within a given database, there is an inevitable variability in the number of traits available for each data entry and/or the species coverage in a given geographical area. The presence of missing data may severely bias trait-based analyses, such as the quantification of trait covariation or trait-environment relationships and may hamper efforts towards trait-based modelling of ecosystem biogeochemical cycles. Several data imputation (i.e. gap-filling) methods have been recently tested on compiled functional trait databases, but the performance of imputation methods applied to a functional trait database with a regular spatial sampling has not been thoroughly studied. Here, we assess the effects of data imputation on five tree functional traits (leaf biomass to sapwood area ratio, foliar nitrogen, maximum height, specific leaf area and wood density) in the Ecological and Forest Inventory of Catalonia, an extensive spatial database (covering 31,900 km²). We tested the performance of species mean imputation, single imputation by the k-nearest neighbors algorithm (kNN) and a multiple imputation method, Multivariate Imputation with Chained Equations (MICE) at different levels of missing data (10%, 30%, 50%, and 80%). We also assessed the changes in imputation performance when additional predictors (species identity, climate, forest structure, spatial structure) were added in kNN and MICE imputations. We evaluated the imputed datasets using a battery of indexes describing departure from the complete dataset in trait distribution, in the mean prediction error, in the correlation matrix
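
    Of the methods compared, kNN imputation is the quickest to reproduce; a minimal example with scikit-learn's KNNImputer on a hypothetical plots-by-traits matrix (the study's own kNN and MICE setups are not reproduced here):

```python
import numpy as np
from sklearn.impute import KNNImputer

# Hypothetical plots-by-traits matrix; NaN marks an unmeasured trait.
traits = np.array([[12.1, 1.8, np.nan, 14.2, 0.52],
                   [11.0, 2.1, 9.7, 18.0, np.nan],
                   [10.5, np.nan, 8.9, 13.1, 0.61],
                   [12.8, 1.9, 10.2, np.nan, 0.55]])
# Each gap is filled from the 2 most similar plots, weighted by distance.
imputed = KNNImputer(n_neighbors=2, weights="distance").fit_transform(traits)
```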

  8. Education and health and well-being: direct and indirect effects with multiple mediators and interactions with multiple imputed data in Stata.

    Science.gov (United States)

    Sheikh, Mashhood Ahmed; Abelsen, Birgit; Olsen, Jan Abel

    2017-11-01

    Previous methods for assessing mediation assume no multiplicative interactions. The inverse odds weighting (IOW) approach has been presented as a method that can be used even when interactions exist. The substantive aim of this study was to assess the indirect effect of education on health and well-being via four indicators of adult socioeconomic status (SES): income, management position, occupational hierarchy position and subjective social status. 8516 men and women from the Tromsø Study (Norway) were followed for 17 years. Education was measured at age 25-74 years, while SES and health and well-being were measured at age 42-91 years. Natural direct and indirect effects (NIE) were estimated using weighted Poisson regression models with IOW. Stata code is provided that makes it easy to assess mediation in any multiple imputed dataset with multiple mediators and interactions. Low education was associated with lower SES. Consequently, low SES was associated with being unhealthy and having a low level of well-being. The effect (NIE) of education on health and well-being is mediated by income, management position, occupational hierarchy position and subjective social status. This study contributes to the literature on mediation analysis, as well as the literature on the importance of education for health-related quality of life and subjective well-being. The influence of education on health and well-being had different pathways in this Norwegian sample. © Article author(s) (or their employer(s) unless otherwise stated in the text of the article) 2017. All rights reserved. No commercial use is permitted unless otherwise expressly granted.

  9. Accuracy of genome-wide imputation of untyped markers and impacts on statistical power for association studies

    Directory of Open Access Journals (Sweden)

    McElwee Joshua

    2009-06-01

    -eQTL discoveries detected by various methods can be interpreted as their relative statistical power in the GWAS. In this study, we find that imputation offers modest additional power (by 4%) on top of either Ilmn317K or Ilmn650Y, much less than the power gain from Ilmn317K to Ilmn650Y (13%). Conclusion Current algorithms can accurately impute genotypes for untyped markers, which enables researchers to pool data between studies conducted using different SNP sets. While genotyping itself results in a small error rate (e.g. 0.5%), imputing genotypes is surprisingly accurate. We found that dense marker sets (e.g. Ilmn650Y) outperform sparser ones (e.g. Ilmn317K) in terms of imputation yield and accuracy. We also noticed it was harder to impute genotypes for African American samples, partially due to population admixture, although using a pooled reference boosts performance. Interestingly, GWAS carried out using imputed genotypes only slightly increased power on top of assayed SNPs, likely because adding more markers via imputation yields only a modest gain in genetic coverage but worsens the multiple testing penalties. Furthermore, cis-eQTL mapping using a dense SNP set derived from imputation achieves high resolution and locates association peaks closer to causal variants than the conventional approach.

  10. Missing Data Imputation of Solar Radiation Data under Different Atmospheric Conditions

    Science.gov (United States)

    Turrado, Concepción Crespo; López, María del Carmen Meizoso; Lasheras, Fernando Sánchez; Gómez, Benigno Antonio Rodríguez; Rollé, José Luis Calvo; de Cos Juez, Francisco Javier

    2014-01-01

    Global solar broadband irradiance on a planar surface is measured at weather stations by pyranometers. In the case of the present research, solar radiation values from nine meteorological stations of the MeteoGalicia real-time observational network, captured and stored every ten minutes, are considered. In this kind of record, the lack of data and/or the presence of wrong values adversely affects any time series study. Consequently, when this occurs, a data imputation process must be performed in order to replace missing data with estimated values. This paper aims to evaluate the multivariate imputation of ten-minute scale data by means of the chained equations method (MICE). This method allows the network itself to impute the missing or wrong data of a solar radiation sensor, by using either all or just a group of the measurements of the remaining sensors. Very good results have been obtained with the MICE method in comparison with other methods employed in this field such as Inverse Distance Weighting (IDW) and Multiple Linear Regression (MLR). The average RMSE value of the predictions for the MICE algorithm was 13.37% while that for the MLR it was 28.19%, and 31.68% for the IDW. PMID:25356644

  11. Missing Data Imputation of Solar Radiation Data under Different Atmospheric Conditions

    Directory of Open Access Journals (Sweden)

    Concepción Crespo Turrado

    2014-10-01

    Full Text Available Global solar broadband irradiance on a planar surface is measured at weather stations by pyranometers. In the case of the present research, solar radiation values from nine meteorological stations of the MeteoGalicia real-time observational network, captured and stored every ten minutes, are considered. In this kind of record, the lack of data and/or the presence of wrong values adversely affects any time series study. Consequently, when this occurs, a data imputation process must be performed in order to replace missing data with estimated values. This paper aims to evaluate the multivariate imputation of ten-minute scale data by means of the chained equations method (MICE). This method allows the network itself to impute the missing or wrong data of a solar radiation sensor, by using either all or just a group of the measurements of the remaining sensors. Very good results have been obtained with the MICE method in comparison with other methods employed in this field such as Inverse Distance Weighting (IDW) and Multiple Linear Regression (MLR). The average RMSE value of the predictions for the MICE algorithm was 13.37% while that for the MLR it was 28.19%, and 31.68% for the IDW.

  12. Missing data imputation of solar radiation data under different atmospheric conditions.

    Science.gov (United States)

    Turrado, Concepción Crespo; López, María Del Carmen Meizoso; Lasheras, Fernando Sánchez; Gómez, Benigno Antonio Rodríguez; Rollé, José Luis Calvo; Juez, Francisco Javier de Cos

    2014-10-29

    Global solar broadband irradiance on a planar surface is measured at weather stations by pyranometers. In the case of the present research, solar radiation values from nine meteorological stations of the MeteoGalicia real-time observational network, captured and stored every ten minutes, are considered. In this kind of record, the lack of data and/or the presence of wrong values adversely affects any time series study. Consequently, when this occurs, a data imputation process must be performed in order to replace missing data with estimated values. This paper aims to evaluate the multivariate imputation of ten-minute scale data by means of the chained equations method (MICE). This method allows the network itself to impute the missing or wrong data of a solar radiation sensor, by using either all or just a group of the measurements of the remaining sensors. Very good results have been obtained with the MICE method in comparison with other methods employed in this field such as Inverse Distance Weighting (IDW) and Multiple Linear Regression (MLR). The average RMSE value of the predictions for the MICE algorithm was 13.37% while that for the MLR it was 28.19%, and 31.68% for the IDW.
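
    The evaluation protocol implied here (mask known readings, impute, compare) is easy to replicate. A hedged sketch, with scikit-learn's IterativeImputer standing in for MICE and `readings` denoting a hypothetical timestamps-by-stations matrix:

```python
import numpy as np
from sklearn.experimental import enable_iterative_imputer  # noqa: F401
from sklearn.impute import IterativeImputer, SimpleImputer

def masked_rmse(X, imputer, frac=0.1, seed=0):
    """Hide a fraction of the observed cells of float matrix X, impute,
    and return the RMSE on exactly those hidden cells."""
    rng = np.random.default_rng(seed)
    observed = np.argwhere(~np.isnan(X))
    held = observed[rng.choice(len(observed), int(frac * len(observed)),
                               replace=False)]
    X_masked = X.copy()
    X_masked[held[:, 0], held[:, 1]] = np.nan
    X_hat = imputer.fit_transform(X_masked)
    err = X_hat[held[:, 0], held[:, 1]] - X[held[:, 0], held[:, 1]]
    return float(np.sqrt(np.mean(err ** 2)))

# readings: hypothetical (timestamps x stations) array of irradiance values.
# print(masked_rmse(readings, IterativeImputer(random_state=0)))  # MICE-like
# print(masked_rmse(readings, SimpleImputer(strategy="mean")))    # baseline
```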

  13. VIGAN: Missing View Imputation with Generative Adversarial Networks.

    Science.gov (United States)

    Shang, Chao; Palmer, Aaron; Sun, Jiangwen; Chen, Ko-Shin; Lu, Jin; Bi, Jinbo

    2017-01-01

    In an era when big data are becoming the norm, there is less concern with the quantity but more with the quality and completeness of the data. In many disciplines, data are collected from heterogeneous sources, resulting in multi-view or multi-modal datasets. The missing data problem has been challenging to address in multi-view data analysis. Especially, when certain samples miss an entire view of data, it creates the missing view problem. Classic multiple imputation or matrix completion methods are hardly effective here, since there is no information in the missing view on which to base the imputation for such samples. The commonly used simple method of removing samples with a missing view can dramatically reduce sample size, thus diminishing the statistical power of a subsequent analysis. In this paper, we propose a novel approach for view imputation via generative adversarial networks (GANs), which we name VIGAN. This approach first treats each view as a separate domain and identifies domain-to-domain mappings via a GAN using randomly-sampled data from each view, and then employs a multi-modal denoising autoencoder (DAE) to reconstruct the missing view from the GAN outputs based on paired data across the views. Then, by optimizing the GAN and DAE jointly, our model enables the knowledge integration for domain mappings and view correspondences to effectively recover the missing view. Empirical results on benchmark datasets validate the VIGAN approach by comparing against the state of the art. The evaluation of VIGAN in a genetic study of substance use disorders further proves the effectiveness and usability of this approach in life science.

  14. Evaluating Imputation Algorithms for Low-Depth Genotyping-By-Sequencing (GBS) Data.

    Directory of Open Access Journals (Sweden)

    Ariel W Chan

    Full Text Available Well-powered genomic studies require genome-wide marker coverage across many individuals. For non-model species with few genomic resources, high-throughput sequencing (HTS) methods, such as Genotyping-By-Sequencing (GBS), offer an inexpensive alternative to array-based genotyping. Although affordable, datasets derived from HTS methods suffer from sequencing error, alignment errors, and missing data, all of which introduce noise and uncertainty to variant discovery and genotype calling. Under such circumstances, meaningful analysis of the data is difficult. Our primary interest lies in the issue of how one can accurately infer or impute missing genotypes in HTS-derived datasets. Many of the existing genotype imputation algorithms and software packages were primarily developed by and optimized for the human genetics community, a field where a complete and accurate reference genome has been constructed and SNP arrays have, in large part, been the common genotyping platform. We set out to answer two questions: 1) can we use existing imputation methods developed by the human genetics community to impute missing genotypes in datasets derived from non-human species and 2) are these methods, which were developed and optimized to impute ascertained variants, amenable for imputation of missing genotypes at HTS-derived variants? We selected Beagle v.4, a widely used algorithm within the human genetics community with reportedly high accuracy, to serve as our imputation contender. We performed a series of cross-validation experiments, using GBS data collected from the species Manihot esculenta by the Next Generation (NEXTGEN) Cassava Breeding Project. NEXTGEN currently imputes missing genotypes in their datasets using a LASSO-penalized, linear regression method (denoted 'glmnet'). We selected glmnet to serve as a benchmark imputation method for this reason. We obtained estimates of imputation accuracy by masking a subset of observed genotypes, imputing, and

  15. Evaluating Imputation Algorithms for Low-Depth Genotyping-By-Sequencing (GBS) Data.

    Science.gov (United States)

    Chan, Ariel W; Hamblin, Martha T; Jannink, Jean-Luc

    2016-01-01

    Well-powered genomic studies require genome-wide marker coverage across many individuals. For non-model species with few genomic resources, high-throughput sequencing (HTS) methods, such as Genotyping-By-Sequencing (GBS), offer an inexpensive alternative to array-based genotyping. Although affordable, datasets derived from HTS methods suffer from sequencing error, alignment errors, and missing data, all of which introduce noise and uncertainty to variant discovery and genotype calling. Under such circumstances, meaningful analysis of the data is difficult. Our primary interest lies in the issue of how one can accurately infer or impute missing genotypes in HTS-derived datasets. Many of the existing genotype imputation algorithms and software packages were primarily developed by and optimized for the human genetics community, a field where a complete and accurate reference genome has been constructed and SNP arrays have, in large part, been the common genotyping platform. We set out to answer two questions: 1) can we use existing imputation methods developed by the human genetics community to impute missing genotypes in datasets derived from non-human species and 2) are these methods, which were developed and optimized to impute ascertained variants, amenable for imputation of missing genotypes at HTS-derived variants? We selected Beagle v.4, a widely used algorithm within the human genetics community with reportedly high accuracy, to serve as our imputation contender. We performed a series of cross-validation experiments, using GBS data collected from the species Manihot esculenta by the Next Generation (NEXTGEN) Cassava Breeding Project. NEXTGEN currently imputes missing genotypes in their datasets using a LASSO-penalized, linear regression method (denoted 'glmnet'). We selected glmnet to serve as a benchmark imputation method for this reason. We obtained estimates of imputation accuracy by masking a subset of observed genotypes, imputing, and calculating the

  16. Partial F-tests with multiply imputed data in the linear regression framework via coefficient of determination.

    Science.gov (United States)

    Chaurasia, Ashok; Harel, Ofer

    2015-02-10

    Tests for regression coefficients such as global, local, and partial F-tests are common in applied research. In the framework of multiple imputation, there are several papers addressing tests for regression coefficients. However, for simultaneous hypothesis testing, the existing methods are computationally intensive because they involve calculation with vectors and (inversion of) matrices. In this paper, we propose a simple method based on the scalar entity, coefficient of determination, to perform (global, local, and partial) F-tests with multiply imputed data. The proposed method is evaluated using simulated data and applied to suicide prevention data. Copyright © 2014 John Wiley & Sons, Ltd.

  17. Outlier Removal in Model-Based Missing Value Imputation for Medical Datasets

    Directory of Open Access Journals (Sweden)

    Min-Wei Huang

    2018-01-01

    Full Text Available Many real-world medical datasets contain some proportion of missing (attribute) values. In general, missing value imputation can be performed to solve this problem, which is to provide estimations for the missing values by a reasoning process based on the (complete) observed data. However, if the observed data contain some noisy information or outliers, the estimations of the missing values may not be reliable or may even be quite different from the real values. The aim of this paper is to examine whether a combination of instance selection from the observed data and missing value imputation offers better performance than performing missing value imputation alone. In particular, three instance selection algorithms, DROP3, GA, and IB3, and three imputation algorithms, KNNI, MLP, and SVM, are used in order to find out the best combination. The experimental results show that performing instance selection can have a positive impact on missing value imputation over the numerical data type of medical datasets, and specific combinations of instance selection and imputation methods can improve the imputation results over the mixed data type of medical datasets. However, instance selection does not have a definitely positive impact on the imputation result for categorical medical datasets.

  18. Random Forest as an Imputation Method for Education and Psychology Research: Its Impact on Item Fit and Difficulty of the Rasch Model

    Science.gov (United States)

    Golino, Hudson F.; Gomes, Cristiano M. A.

    2016-01-01

    This paper presents a non-parametric imputation technique, named random forest, from the machine learning field. The random forest procedure has two main tuning parameters: the number of trees grown in the prediction and the number of predictors used. Fifty experimental conditions were created in the imputation procedure, with different…
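
    Random-forest imputation in the missForest style can be approximated with scikit-learn by plugging a forest into the chained-equations loop; this is a common reconstruction, not necessarily the authors' exact configuration:

```python
from sklearn.experimental import enable_iterative_imputer  # noqa: F401
from sklearn.ensemble import RandomForestRegressor
from sklearn.impute import IterativeImputer

# Round-robin imputation where each variable is regressed on the others
# with a random forest -- the two tuning knobs mentioned in the abstract
# (number of trees, number of predictors) map to n_estimators/max_features.
rf_imputer = IterativeImputer(
    estimator=RandomForestRegressor(n_estimators=100, random_state=0),
    max_iter=10, random_state=0)
# X_imputed = rf_imputer.fit_transform(X)  # X: numeric array containing NaNs
```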

  19. Assessing accuracy of genotype imputation in American Indians.

    Directory of Open Access Journals (Sweden)

    Alka Malhotra

    Full Text Available Genotype imputation is commonly used in genetic association studies to test untyped variants using information on linkage disequilibrium (LD) with typed markers. Imputing genotypes requires a suitable reference population in which the LD pattern is known, most often one selected from HapMap. However, some populations, such as American Indians, are not represented in HapMap. In the present study, we assessed accuracy of imputation using HapMap reference populations in a genome-wide association study in Pima Indians. Data from six randomly selected chromosomes were used. Genotypes in the study population were masked (either 1% or 20% of SNPs available for a given chromosome). The masked genotypes were then imputed using the software Markov Chain Haplotyping Algorithm. Using four HapMap reference populations, average genotype error rates ranged from 7.86% for Mexican Americans to 22.30% for Yoruba. In contrast, use of the original Pima Indian data as a reference resulted in an average error rate of 1.73%. Our results suggest that the use of HapMap reference populations results in substantial inaccuracy in the imputation of genotypes in American Indians. A possible solution would be to densely genotype or sequence a reference American Indian population.

  20. Highly accurate sequence imputation enables precise QTL mapping in Brown Swiss cattle.

    Science.gov (United States)

    Frischknecht, Mirjam; Pausch, Hubert; Bapst, Beat; Signer-Hasler, Heidi; Flury, Christine; Garrick, Dorian; Stricker, Christian; Fries, Ruedi; Gredler-Grandl, Birgit

    2017-12-29

    Within the last few years a large amount of genomic information has become available in cattle. Densities of genomic information vary from a few thousand variants up to whole genome sequence information. In order to combine genomic information from different sources and infer genotypes for a common set of variants, genotype imputation is required. In this study we evaluated the accuracy of imputation from high density chips to whole genome sequence data in Brown Swiss cattle. Using four popular imputation programs (Beagle, FImpute, Impute2, Minimac) and various compositions of reference panels, the accuracy of the imputed sequence variant genotypes was high and differences between the programs and scenarios were small. We imputed sequence variant genotypes for more than 1600 Brown Swiss bulls and performed genome-wide association studies for milk fat percentage at two stages of lactation. We found one and three quantitative trait loci for early and late lactation fat content, respectively. Known causal variants that were imputed from the sequenced reference panel were among the most significantly associated variants of the genome-wide association study. Our study demonstrates that whole-genome sequence information can be imputed at high accuracy in cattle populations. Using imputed sequence variant genotypes in genome-wide association studies may facilitate causal variant detection.

  1. Imputation of missing genotypes within LD-blocks relying on the basic coalescent and beyond: consideration of population growth and structure.

    Science.gov (United States)

    Kabisch, Maria; Hamann, Ute; Lorenzo Bermejo, Justo

    2017-10-17

    Genotypes not directly measured in genetic studies are often imputed to improve statistical power and to increase mapping resolution. The accuracy of standard imputation techniques strongly depends on the similarity of linkage disequilibrium (LD) patterns in the study and reference populations. Here we develop a novel approach for genotype imputation in low-recombination regions that relies on the coalescent and explicitly accounts for population demographic factors. To test the new method, study and reference haplotypes were simulated and gene trees were inferred under the basic coalescent and also considering population growth and structure. The reference haplotypes that first coalesced with study haplotypes were used as templates for genotype imputation. Computer simulations were complemented with the analysis of real data. Genotype concordance rates were used to compare the accuracies of coalescent-based and standard (IMPUTE2) imputation. Simulations revealed that, in LD-blocks, imputation accuracy relying on the basic coalescent was higher and less variable than with IMPUTE2. Explicit consideration of population growth and structure, even if present, did not practically improve accuracy. The advantage of coalescent-based over standard imputation increased with the minor allele frequency and decreased with population stratification. Results based on real data indicated that, even in low-recombination regions, further research is needed to incorporate recombination in coalescence inference, in particular for studies with genetically diverse and admixed individuals. To exploit the full potential of coalescent-based methods for the imputation of missing genotypes in genetic studies, further methodological research is needed to reduce computer time, to take into account recombination, and to implement these methods in user-friendly computer programs. Here we provide reproducible code which takes advantage of publicly available software to facilitate

  2. Imputation of variants from the 1000 Genomes Project modestly improves known associations and can identify low-frequency variant-phenotype associations undetected by HapMap based imputation.

    Science.gov (United States)

    Wood, Andrew R; Perry, John R B; Tanaka, Toshiko; Hernandez, Dena G; Zheng, Hou-Feng; Melzer, David; Gibbs, J Raphael; Nalls, Michael A; Weedon, Michael N; Spector, Tim D; Richards, J Brent; Bandinelli, Stefania; Ferrucci, Luigi; Singleton, Andrew B; Frayling, Timothy M

    2013-01-01

    Genome-wide association (GWA) studies have been limited by the reliance on common variants present on microarrays or imputable from the HapMap Project data. More recently, the completion of the 1000 Genomes Project has provided variant and haplotype information for several million variants derived from sequencing over 1,000 individuals. To help understand the extent to which more variants (including low frequency (1% ≤ MAF 1000 Genomes imputation, respectively, and 9 and 11 that reached a stricter, likely conservative, threshold of P1000 Genomes genotype data modestly improved the strength of known associations. Of 20 associations detected at P1000 Genomes imputed data and one was nominally more strongly associated in HapMap imputed data. We also detected an association between a low frequency variant and phenotype that was previously missed by HapMap based imputation approaches. An association between rs112635299 and alpha-1 globulin near the SERPINA gene represented the known association between rs28929474 (MAF = 0.007) and alpha1-antitrypsin that predisposes to emphysema (P = 2.5×10⁻¹²). Our data provide important proof of principle that 1000 Genomes imputation will detect novel, low frequency-large effect associations.

  3. Performance of genotype imputation for low frequency and rare variants from the 1000 genomes.

    Science.gov (United States)

    Zheng, Hou-Feng; Rong, Jing-Jing; Liu, Ming; Han, Fang; Zhang, Xing-Wei; Richards, J Brent; Wang, Li

    2015-01-01

    Genotype imputation is now routinely applied in genome-wide association studies (GWAS) and meta-analyses. However, most of these imputations have been run using HapMap samples as reference, and imputation of low frequency and rare variants (minor allele frequency (MAF) < 5%) had not been systematically assessed until large sequenced reference panels (such as the 1000 Genomes panel) became available to facilitate imputation of these variants. Therefore, in order to estimate the performance of low frequency and rare variants imputation, we imputed 153 individuals, each of whom had 3 different genotype array data including 317k, 610k and 1 million SNPs, to three different reference panels: the 1000 Genomes pilot March 2010 release (1KGpilot), the 1000 Genomes interim August 2010 release (1KGinterim), and the 1000 Genomes phase1 November 2010 and May 2011 release (1KGphase1) by using IMPUTE version 2. The differences between these three releases of the 1000 Genomes data are the sample size, ancestry diversity, number of variants and their frequency spectrum. We found that both reference panel and GWAS chip density affect the imputation of low frequency and rare variants. 1KGphase1 outperformed the other 2 panels, with a higher concordance rate, a higher proportion of well-imputed variants (info>0.4) and a higher mean info score in each MAF bin. Similarly, the 1M chip array outperformed 610K and 317K. However, for very rare variants (MAF ≤ 0.3%), only 0-1% of the variants were well imputed. We conclude that the imputation of low frequency and rare variants improves with larger reference panels and higher density of genome-wide genotyping arrays. Yet, despite a large reference panel size and dense genotyping density, very rare variants remain difficult to impute.

  4. Towards a more efficient representation of imputation operators in TPOT

    OpenAIRE

    Garciarena, Unai; Mendiburu, Alexander; Santana, Roberto

    2018-01-01

    Automated Machine Learning encompasses a set of meta-algorithms intended to design and apply machine learning techniques (e.g., model selection, hyperparameter tuning, model assessment, etc.). TPOT, a software for optimizing machine learning pipelines based on genetic programming (GP), is a novel example of this kind of application. Recently we have proposed a way to introduce imputation methods as part of TPOT. While our approach was able to deal with problems with missing data, it can prod...

  5. Public Undertakings and Imputability

    DEFF Research Database (Denmark)

    Ølykke, Grith Skovgaard

    2013-01-01

    In this article, the issue of imputability to the State of public undertakings' decision-making is analysed and discussed in the context of the DSBFirst case. DSBFirst is owned by the independent public undertaking DSB and the private undertaking FirstGroup plc and won the contracts in the 2008 Oeresund tender for the provision of passenger transport by railway. From the start, the services were provided at a loss, and in the end a part of DSBFirst was wound up. In order to frame the problems illustrated by this case, the jurisprudence-based imputability requirement in the definition of State aid in Article 107(1) TFEU is analysed. It is concluded that where the public undertaking transgresses the control system put in place by the State, the conditions for imputability are not fulfilled, and it is argued that in the current state of law, there is no conditional link between the level of control

  6. Accounting for one-channel depletion improves missing value imputation in 2-dye microarray data.

    Science.gov (United States)

    Ritz, Cecilia; Edén, Patrik

    2008-01-19

    For 2-dye microarray platforms, some missing values may arise from an un-measurably low RNA expression in one channel only. Information of such "one-channel depletion" is so far not included in algorithms for imputation of missing values. Calculating the mean deviation between imputed values and duplicate controls in five datasets, we show that KNN-based imputation gives a systematic bias of the imputed expression values of one-channel depleted spots. Evaluating the correction of this bias by cross-validation showed that the mean square deviation between imputed values and duplicates were reduced up to 51%, depending on dataset. By including more information in the imputation step, we more accurately estimate missing expression values.

  7. Missing value imputation for microarray gene expression data using histone acetylation information

    Directory of Open Access Journals (Sweden)

    Feng Jihua

    2008-05-01

    Full Text Available Abstract Background It is an important pre-processing step to accurately estimate missing values in microarray data, because complete datasets are required in numerous expression profile analyses in bioinformatics. Although several methods have been suggested, their performances are not satisfactory for datasets with high missing percentages. Results The paper explores the feasibility of doing missing value imputation with the help of gene regulatory mechanism. An imputation framework called the histone acetylation information aided imputation method (HAIimpute) is presented. It incorporates the histone acetylation information into the conventional KNN (k-nearest neighbor) and LLS (local least squares) imputation algorithms for final prediction of the missing values. The experimental results indicated that the use of acetylation information can provide significant improvements in microarray imputation accuracy. The HAIimpute methods consistently improve the widely used methods such as KNN and LLS in terms of normalized root mean squared error (NRMSE). Meanwhile, the genes imputed by HAIimpute methods are more correlated with the original complete genes in terms of Pearson correlation coefficients. Furthermore, the proposed methods also outperform GOimpute, which is one of the existing related methods that use functional similarity as the external information. Conclusion We demonstrated that the use of histone acetylation information could greatly improve the performance of the imputation, especially at high missing percentages. This idea can be generalized to various imputation methods to facilitate their performance. Moreover, with more knowledge accumulated on gene regulatory mechanisms in addition to histone acetylation, the performance of our approach can be further improved and verified.

  8. Evaluation and application of summary statistic imputation to discover new height-associated loci.

    Science.gov (United States)

    Rüeger, Sina; McDaid, Aaron; Kutalik, Zoltán

    2018-05-01

    As most of the heritability of complex traits is attributed to common and low frequency genetic variants, imputing them by combining genotyping chips and large sequenced reference panels is the most cost-effective approach to discover the genetic basis of these traits. Association summary statistics from genome-wide meta-analyses are available for hundreds of traits. Updating these to ever-increasing reference panels is very cumbersome as it requires reimputation of the genetic data, rerunning the association scan, and meta-analysing the results. A much more efficient method is to directly impute the summary statistics, termed summary statistics imputation, which we improved to accommodate variable sample size across SNVs. Its performance relative to genotype imputation and practical utility has not yet been fully investigated. To this end, we compared the two approaches on real (genotyped and imputed) data from 120K samples from the UK Biobank and show that genotype imputation boasts a 3- to 5-fold lower root-mean-square error and better distinguishes true associations from null ones: we observed the largest differences in power for variants with low minor allele frequency and low imputation quality. For fixed false positive rates of 0.001, 0.01, 0.05, using summary statistics imputation yielded a decrease in statistical power by 9, 43 and 35%, respectively. To test its capacity to discover novel associations, we applied summary statistics imputation to the GIANT height meta-analysis summary statistics covering HapMap variants, and identified 34 novel loci, 19 of which replicated using data in the UK Biobank. Additionally, we successfully replicated 55 out of the 111 variants published in an exome chip study. Our study demonstrates that summary statistics imputation is a very efficient and cost-effective way to identify and fine-map trait-associated loci. Moreover, the ability to impute summary statistics is important for follow-up analyses, such as Mendelian
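
    The core of summary statistics imputation is a conditional-expectation formula: under a multivariate normal model for association z-scores, unobserved scores are predicted from typed ones through the LD matrix. A bare-bones sketch follows (the paper's refinement for variable per-SNV sample sizes is omitted; names and the ridge term are illustrative):

```python
import numpy as np

def impute_z(z_typed, R_tt, R_ut, lam=0.1):
    """z_typed: z-scores at typed SNVs; R_tt: LD matrix among typed SNVs;
    R_ut: LD between untyped and typed SNVs; lam: ridge regularization
    for numerical stability."""
    # Conditional mean of the unobserved z-scores given the typed ones.
    W = R_ut @ np.linalg.inv(R_tt + lam * np.eye(R_tt.shape[0]))
    z_imputed = W @ z_typed
    # Rough per-SNV imputation quality, akin to an info/r^2 measure.
    quality = np.einsum("ij,ij->i", W, R_ut)
    return z_imputed, quality
```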

  9. Missing value imputation: with application to handwriting data

    Science.gov (United States)

    Xu, Zhen; Srihari, Sargur N.

    2015-01-01

    Missing values make pattern analysis difficult, particularly with limited available data. In longitudinal research, missing values accumulate, thereby aggravating the problem. Here we consider how to deal with temporal data with missing values in handwriting analysis. In the task of studying development of individuality of handwriting, we encountered the fact that feature values are missing for several individuals at several time instances. Six algorithms, i.e., random imputation, mean imputation, most likely independent value imputation, and three methods based on Bayesian network (static Bayesian network, parameter EM, and structural EM), are compared with children's handwriting data. We evaluate the accuracy and robustness of the algorithms under different ratios of missing data and missing values, and useful conclusions are given. Specifically, static Bayesian network is used for our data which contain around 5% missing data to provide adequate accuracy and low computational cost.

  10. Imputed prices of greenhouse gases and land forests

    International Nuclear Information System (INIS)

    Uzawa, Hirofumi

    1993-01-01

    The theory of dynamic optimum formulated by Mäler gives us the basic theoretical framework within which it is possible to analyse the economic and, possibly, political circumstances under which the phenomenon of global warming occurs, and to search for the policy and institutional arrangements whereby it would be effectively arrested. The analysis developed here is an application of Mäler's theory to atmospheric quality. In the analysis a central role is played by the concept of imputed price in the dynamic context. Our determination of imputed prices of atmospheric carbon dioxide and land forests takes into account the difference in the stages of economic development. Indeed, the ratios of the imputed prices of atmospheric carbon dioxide and land forests over the per capita level of real national income are identical for all countries involved. (3 figures, 2 tables) (Author)

  11. 48 CFR 1830.7002-4 - Determining imputed cost of money.

    Science.gov (United States)

    2010-10-01

    ... money. 1830.7002-4 Section 1830.7002-4 Federal Acquisition Regulations System NATIONAL AERONAUTICS AND... Determining imputed cost of money. (a) Determine the imputed cost of money for an asset under construction, fabrication, or development by applying a cost of money rate (see 1830.7002-2) to the representative...

  12. Assessing and comparison of different machine learning methods in parent-offspring trios for genotype imputation.

    Science.gov (United States)

    Mikhchi, Abbas; Honarvar, Mahmood; Kashan, Nasser Emam Jomeh; Aminafshar, Mehdi

    2016-06-21

    Genotype imputation is an important tool for prediction of unknown genotypes for both unrelated individuals and parent-offspring trios. Several imputation methods are available and can either employ universal machine learning methods or deploy algorithms dedicated to inferring missing genotypes. In this research the performance of eight machine learning methods (Support Vector Machine, K-Nearest Neighbors, Extreme Learning Machine, Radial Basis Function, Random Forest, AdaBoost, LogitBoost, and TotalBoost) was compared in terms of imputation accuracy, computation time and the factors affecting imputation accuracy. The methods were applied to real and simulated datasets to impute the un-typed SNPs in parent-offspring trios. The tested methods show that imputation of parent-offspring trios can be accurate. Random Forest and Support Vector Machine were more accurate than the other machine learning methods, while TotalBoost performed slightly worse than the others. Running times differed between methods: ELM was always the fastest algorithm, whereas RBF required long imputation times as the sample size increased. The tested methods can be an alternative for imputation of un-typed SNPs at low missing rates of data. However, it is recommended that other machine learning methods also be evaluated for imputation. Copyright © 2016 Elsevier Ltd. All rights reserved.
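
    The framing used here (imputation of one un-typed SNP as supervised classification from typed SNPs) can be illustrated in a few lines. The sketch below uses random genotypes, so accuracies sit at chance; with real LD-structured trio data the same loop would give the kind of method comparison the paper reports:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

rng = np.random.default_rng(0)
G = rng.integers(0, 3, size=(500, 21))       # 500 individuals, 21 SNPs (toy)
X, y = np.delete(G, 10, axis=1), G[:, 10]    # SNP 10 plays the un-typed SNP
for name, clf in [("Random Forest", RandomForestClassifier(random_state=0)),
                  ("SVM", SVC())]:
    acc = cross_val_score(clf, X, y, cv=5).mean()
    print(f"{name}: mean CV accuracy = {acc:.3f}")
```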

  13. TRIP: An interactive retrieving-inferring data imputation approach

    KAUST Repository

    Li, Zhixu

    2016-06-25

    Data imputation aims at filling in missing attribute values in databases. Existing imputation approaches to non-quantitative string data can be roughly put into two categories: (1) inferring-based approaches [2], and (2) retrieving-based approaches [1]. Specifically, the inferring-based approaches find substitutes or estimations for the missing ones from the complete part of the data set. However, they typically fall short in filling in unique missing attribute values which do not exist in the complete part of the data set [1]. The retrieving-based approaches resort to external resources for help by formulating proper web search queries to retrieve web pages containing the missing values from the Web, and then extracting the missing values from the retrieved web pages [1]. This web-based retrieving approach reaches a high imputation precision and recall, but on the other hand, issues a large number of web search queries, which brings a large overhead [1]. © 2016 IEEE.

  14. TRIP: An interactive retrieving-inferring data imputation approach

    KAUST Repository

    Li, Zhixu; Qin, Lu; Cheng, Hong; Zhang, Xiangliang; Zhou, Xiaofang

    2016-01-01

    Data imputation aims at filling in missing attribute values in databases. Existing imputation approaches to non-quantitative string data can be roughly put into two categories: (1) inferring-based approaches [2], and (2) retrieving-based approaches [1]. Specifically, the inferring-based approaches find substitutes or estimations for the missing ones from the complete part of the data set. However, they typically fall short in filling in unique missing attribute values which do not exist in the complete part of the data set [1]. The retrieving-based approaches resort to external resources for help by formulating proper web search queries to retrieve web pages containing the missing values from the Web, and then extracting the missing values from the retrieved web pages [1]. This web-based retrieving approach reaches a high imputation precision and recall, but on the other hand, issues a large number of web search queries, which brings a large overhead [1]. © 2016 IEEE.

  15. Double sampling with multiple imputation to answer large sample meta-research questions: Introduction and illustration by evaluating adherence to two simple CONSORT guidelines

    Directory of Open Access Journals (Sweden)

    Patrice L. Capers

    2015-03-01

    Full Text Available BACKGROUND: Meta-research can involve manual retrieval and evaluation of research, which is resource intensive. Creation of high throughput methods (e.g., search heuristics, crowdsourcing) has improved feasibility of large meta-research questions, but possibly at the cost of accuracy. OBJECTIVE: To evaluate the use of double sampling combined with multiple imputation (DS+MI) to address meta-research questions, using as an example adherence of PubMed entries to two simple Consolidated Standards of Reporting Trials (CONSORT) guidelines for titles and abstracts. METHODS: For the DS large sample, we retrieved all PubMed entries satisfying the filters: RCT; human; abstract available; and English language (n=322,107). For the DS subsample, we randomly sampled 500 entries from the large sample. The large sample was evaluated with a lower rigor, higher throughput (RLOTHI) method using search heuristics, while the subsample was evaluated using a higher rigor, lower throughput (RHITLO) human rating method. Multiple imputation of the missing-completely-at-random RHITLO data for the large sample was informed by: RHITLO data from the subsample; RLOTHI data from the large sample; whether a study was an RCT; and country and year of publication. RESULTS: The RHITLO and RLOTHI methods in the subsample largely agreed (phi coefficients: title=1.00, abstract=0.92). Compliance with abstract and title criteria has increased over time, with non-US countries improving more rapidly. DS+MI logistic regression estimates were more precise than subsample estimates (e.g., 95% CI for change in title and abstract compliance by year: subsample RHITLO 1.050-1.174 vs. DS+MI 1.082-1.151). As evidence of improved accuracy, DS+MI coefficient estimates were closer to RHITLO than the large sample RLOTHI. CONCLUSIONS: Our results support our hypothesis that DS+MI would result in improved precision and accuracy. This method is flexible and may provide a practical way to examine large corpora of

  16. Imputation of genotypes in Danish two-way crossbred pigs using low density panels

    DEFF Research Database (Denmark)

    Xiang, Tao; Christensen, Ole Fredslund; Legarra, Andres

    Genotype imputation is commonly used as an initial step of genomic selection. Studies on humans, plants and ruminants suggested many factors would affect the performance of imputation. However, studies rarely investigated pigs, especially crossbred pigs. In this study, different scenarios of imputation from 5K SNPs to 7K SNPs on Danish Landrace, Yorkshire, and crossbred Landrace-Yorkshire were compared. In conclusion, genotype imputation on crossbreds performs equally well as in purebreds, when parental breeds are used as the reference panel. When the size of the reference is considerably large... SNPs. This dataset will be analyzed for genomic selection in a future study...

  17. An Overview and Evaluation of Recent Machine Learning Imputation Methods Using Cardiac Imaging Data.

    Science.gov (United States)

    Liu, Yuzhe; Gopalakrishnan, Vanathi

    2017-03-01

    Many clinical research datasets have a large percentage of missing values that directly impacts their usefulness in yielding high accuracy classifiers when used for training in supervised machine learning. While missing value imputation methods have been shown to work well with smaller percentages of missing values, their ability to impute sparse clinical research data can be problem specific. We previously attempted to learn quantitative guidelines for ordering cardiac magnetic resonance imaging during the evaluation for pediatric cardiomyopathy, but missing data significantly reduced our usable sample size. In this work, we sought to determine if increasing the usable sample size through imputation would allow us to learn better guidelines. We first review several machine learning methods for estimating missing data. Then, we apply four popular methods (mean imputation, decision tree, k-nearest neighbors, and self-organizing maps) to a clinical research dataset of pediatric patients undergoing evaluation for cardiomyopathy. Using Bayesian Rule Learning (BRL) to learn ruleset models, we compared the performance of imputation-augmented models versus unaugmented models. We found that all four imputation-augmented models performed similarly to unaugmented models. While imputation did not improve performance, it did provide evidence for the robustness of our learned models.

  18. The Ability of Different Imputation Methods to Preserve the Significant Genes and Pathways in Cancer

    Directory of Open Access Journals (Sweden)

    Rosa Aghdam

    2017-12-01

    Deciphering important genes and pathways from incomplete gene expression data could facilitate a better understanding of cancer. Different imputation methods can be applied to estimate the missing values. In our study, we evaluated various imputation methods for their performance in preserving significant genes and pathways. In the first step, 5% of genes are selected at random for two types of ignorable and non-ignorable missingness mechanisms with various missing rates. Next, 10 well-known imputation methods were applied to the incomplete datasets. The significance analysis of microarrays (SAM) method was applied to detect the significant genes in rectal and lung cancers to showcase the utility of imputation approaches in preserving significant genes. To determine the impact of different imputation methods on the identification of important genes, the chi-squared test was used to compare the proportions of overlaps between significant genes detected from original data and those detected from the imputed datasets. Additionally, the significant genes were tested for their enrichment in important pathways, using the ConsensusPathDB. Our results showed that almost all the significant genes and pathways of the original dataset can be detected in all imputed datasets, indicating that there is no significant difference in the performance of the various imputation methods tested. The source code and selected datasets are available on http://profiles.bs.ipm.ir/softwares/imputation_methods/.

  19. Imputing amino acid polymorphisms in human leukocyte antigens.

    Directory of Open Access Journals (Sweden)

    Xiaoming Jia

    DNA sequence variation within human leukocyte antigen (HLA) genes mediates susceptibility to a wide range of human diseases. The complex genetic structure of the major histocompatibility complex (MHC) makes it difficult, however, to collect genotyping data in large cohorts. Long-range linkage disequilibrium between HLA loci and SNP markers across the MHC region offers an alternative approach through imputation to interrogate HLA variation in existing GWAS data sets. Here we describe a computational strategy, SNP2HLA, to impute classical alleles and amino acid polymorphisms at class I (HLA-A, -B, -C) and class II (-DPA1, -DPB1, -DQA1, -DQB1, and -DRB1) loci. To characterize performance of SNP2HLA, we constructed two European ancestry reference panels, one based on data collected in HapMap-CEPH pedigrees (90 individuals) and another based on data collected by the Type 1 Diabetes Genetics Consortium (T1DGC, 5,225 individuals). We imputed HLA alleles in an independent data set from the British 1958 Birth Cohort (N = 918) with gold standard four-digit HLA types and SNPs genotyped using the Affymetrix GeneChip 500K and Illumina Immunochip microarrays. We demonstrate that the sample size of the reference panel, rather than SNP density of the genotyping platform, is critical to achieve high imputation accuracy. Using the larger T1DGC reference panel, the average accuracy at four-digit resolution is 94.7% using the low-density Affymetrix GeneChip 500K, and 96.7% using the high-density Illumina Immunochip. For amino acid polymorphisms within HLA genes, we achieve 98.6% and 99.3% accuracy using the Affymetrix GeneChip 500K and Illumina Immunochip, respectively. Finally, we demonstrate how imputation and association testing at amino acid resolution can facilitate fine-mapping of primary MHC association signals, giving a specific example from type 1 diabetes.

  20. The Ability of Different Imputation Methods to Preserve the Significant Genes and Pathways in Cancer.

    Science.gov (United States)

    Aghdam, Rosa; Baghfalaki, Taban; Khosravi, Pegah; Saberi Ansari, Elnaz

    2017-12-01

    Deciphering important genes and pathways from incomplete gene expression data could facilitate a better understanding of cancer. Different imputation methods can be applied to estimate the missing values. In our study, we evaluated various imputation methods for their performance in preserving significant genes and pathways. In the first step, 5% of genes are selected at random for two types of ignorable and non-ignorable missingness mechanisms with various missing rates. Next, 10 well-known imputation methods were applied to the incomplete datasets. The significance analysis of microarrays (SAM) method was applied to detect the significant genes in rectal and lung cancers to showcase the utility of imputation approaches in preserving significant genes. To determine the impact of different imputation methods on the identification of important genes, the chi-squared test was used to compare the proportions of overlaps between significant genes detected from original data and those detected from the imputed datasets. Additionally, the significant genes were tested for their enrichment in important pathways, using the ConsensusPathDB. Our results showed that almost all the significant genes and pathways of the original dataset can be detected in all imputed datasets, indicating that there is no significant difference in the performance of the various imputation methods tested. The source code and selected datasets are available on http://profiles.bs.ipm.ir/softwares/imputation_methods/. Copyright © 2017. Production and hosting by Elsevier B.V.
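
    The overlap comparison described in this record can be illustrated with a small sketch (ours, not the authors' code; the counts are invented) using SciPy's chi-squared test on a 2x2 contingency table:

        from scipy.stats import chi2_contingency

        n_genes = 5000
        sig_original = 420     # genes significant in the complete data
        sig_recovered = 401    # genes significant after imputation

        # 2x2 table: significant vs not significant, original vs imputed analysis
        table = [[sig_original, n_genes - sig_original],
                 [sig_recovered, n_genes - sig_recovered]]
        chi2, p, dof, _ = chi2_contingency(table)
        print(f"chi2 = {chi2:.2f}, p = {p:.3f}")  # a large p suggests the proportions agree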

  1. A note on the relationships between multiple imputation, maximum likelihood and fully Bayesian methods for missing responses in linear regression models.

    Science.gov (United States)

    Chen, Qingxia; Ibrahim, Joseph G

    2014-07-01

    Multiple Imputation, Maximum Likelihood and Fully Bayesian methods are the three most commonly used model-based approaches in missing data problems. Although it is easy to show that when the responses are missing at random (MAR), the complete case analysis is unbiased and efficient, the aforementioned methods are still commonly used in practice for this setting. To examine the performance of and relationships between these three methods in this setting, we derive and investigate small sample and asymptotic expressions of the estimates and standard errors, and fully examine how these estimates are related for the three approaches in the linear regression model when the responses are MAR. We show that when the responses are MAR in the linear model, the estimates of the regression coefficients using these three methods are asymptotically equivalent to the complete case estimates under general conditions. One simulation and a real data set from a liver cancer clinical trial are given to compare the properties of these methods when the responses are MAR.
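
    As context for how multiple imputation estimates are pooled in practice, here is a generic sketch of Rubin's combining rules for a scalar coefficient (standard MI methodology, not code from this paper):

        import numpy as np

        def pool_rubin(estimates, variances):
            """estimates, variances: length-m arrays of per-imputation point
            estimates and squared standard errors of one coefficient."""
            estimates = np.asarray(estimates, dtype=float)
            variances = np.asarray(variances, dtype=float)
            m = len(estimates)
            qbar = estimates.mean()              # pooled point estimate
            ubar = variances.mean()              # within-imputation variance
            b = estimates.var(ddof=1)            # between-imputation variance
            t = ubar + (1 + 1 / m) * b           # total variance
            return qbar, np.sqrt(t)

        est, se = pool_rubin([1.02, 0.97, 1.05, 0.99, 1.01],
                             [0.04, 0.05, 0.04, 0.05, 0.04])
        print(f"pooled estimate {est:.3f} (SE {se:.3f})")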

  2. Imputation across genotyping arrays for genome-wide association studies: assessment of bias and a correction strategy.

    Science.gov (United States)

    Johnson, Eric O; Hancock, Dana B; Levy, Joshua L; Gaddis, Nathan C; Saccone, Nancy L; Bierut, Laura J; Page, Grier P

    2013-05-01

    A great promise of publicly sharing genome-wide association data is the potential to create composite sets of controls. However, studies often use different genotyping arrays, and imputation to a common set of SNPs has shown substantial bias: a problem which has no broadly applicable solution. Based on the idea that using differing genotyped SNP sets as inputs creates differential imputation errors and thus bias in the composite set of controls, we examined the degree to which each of the following occurs: (1) imputation based on the union of genotyped SNPs (i.e., SNPs available on one or more arrays) results in bias, as evidenced by spurious associations (type 1 error) between imputed genotypes and arbitrarily assigned case/control status; (2) imputation based on the intersection of genotyped SNPs (i.e., SNPs available on all arrays) does not evidence such bias; and (3) imputation quality varies by the size of the intersection of genotyped SNP sets. Imputations were conducted in European Americans and African Americans with reference to HapMap phase II and III data. Imputation based on the union of genotyped SNPs across the Illumina 1M and 550v3 arrays showed spurious associations for 0.2 % of SNPs: ~2,000 false positives per million SNPs imputed. Biases remained problematic for very similar arrays (550v1 vs. 550v3) and were substantial for dissimilar arrays (Illumina 1M vs. Affymetrix 6.0). In all instances, imputing based on the intersection of genotyped SNPs (as few as 30 % of the total SNPs genotyped) eliminated such bias while still achieving good imputation quality.

  3. Missing Value Imputation Based on Gaussian Mixture Model for the Internet of Things

    Directory of Open Access Journals (Sweden)

    Xiaobo Yan

    2015-01-01

    This paper addresses missing value imputation for the Internet of Things (IoT). Nowadays, the IoT is used widely across a variety of domains, such as transportation, logistics, and healthcare. However, missing values are very common in the IoT for a variety of reasons, so experimental data are often incomplete. As a result, some work that depends on IoT data cannot be carried out normally, and the accuracy and reliability of data analysis results are reduced. Based on the characteristics of the data and the features of missing data in the IoT, this paper divides the missing data into three types and defines three corresponding missing value imputation problems. We then propose three new models to solve the corresponding problems: a model of missing value imputation based on context and linear mean (MCL), a model of missing value imputation based on binary search (MBS), and a model of missing value imputation based on Gaussian mixture model (MGI). Experimental results showed that the three models can greatly and effectively improve the accuracy, reliability, and stability of missing value imputation.
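
    The idea of imputing from a Gaussian mixture can be sketched as follows (our own illustration under simplifying assumptions, not the paper's MGI model): fit a mixture on the complete rows, then fill each incomplete row with the responsibility-weighted conditional means. It assumes scikit-learn and SciPy, and that every row has at least one observed value:

        import numpy as np
        from scipy.stats import multivariate_normal
        from sklearn.mixture import GaussianMixture

        def gmm_impute(X, n_components=3, seed=0):
            """Fill NaNs with responsibility-weighted conditional means."""
            X = np.asarray(X, dtype=float)
            complete = ~np.isnan(X).any(axis=1)
            gmm = GaussianMixture(n_components, random_state=seed).fit(X[complete])
            X_out = X.copy()
            for i in np.where(~complete)[0]:
                o = ~np.isnan(X[i])                  # observed dimensions
                m = ~o                               # missing dimensions
                logw = np.log(gmm.weights_).copy()
                cond = np.empty((n_components, m.sum()))
                for k in range(n_components):
                    mu, S = gmm.means_[k], gmm.covariances_[k]
                    logw[k] += multivariate_normal.logpdf(
                        X[i, o], mu[o], S[np.ix_(o, o)])
                    # conditional mean of the missing block given the observed block
                    cond[k] = mu[m] + S[np.ix_(m, o)] @ np.linalg.solve(
                        S[np.ix_(o, o)], X[i, o] - mu[o])
                r = np.exp(logw - logw.max())
                r /= r.sum()                         # component responsibilities
                X_out[i, m] = r @ cond
            return X_out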

  4. Inference for multivariate regression model based on multiply imputed synthetic data generated via posterior predictive sampling

    Science.gov (United States)

    Moura, Ricardo; Sinha, Bimal; Coelho, Carlos A.

    2017-06-01

    The recent popularity of the use of synthetic data as a Statistical Disclosure Control technique has enabled the development of several methods of generating and analyzing such data, but these almost always rely on asymptotic distributions and are consequently not adequate for small sample datasets. Thus, a likelihood-based exact inference procedure is derived for the matrix of regression coefficients of the multivariate regression model, for multiply imputed synthetic data generated via Posterior Predictive Sampling. Since it is based on exact distributions, this procedure may be used even in small sample datasets. Simulation studies compare the results obtained from the proposed exact inferential procedure with the results obtained from an adaptation of Reiter's combination rule to multiply imputed synthetic datasets, and an application to the 2000 Current Population Survey is discussed.

  5. Whole-Genome Sequencing Coupled to Imputation Discovers Genetic Signals for Anthropometric Traits

    NARCIS (Netherlands)

    I. Tachmazidou (Ioanna); Süveges, D. (Dániel); J. Min (Josine); G.R.S. Ritchie (Graham R.S.); Steinberg, J. (Julia); K. Walter (Klaudia); V. Iotchkova (Valentina); J.A. Schwartzentruber (Jeremy); J. Huang (Jian); Y. Memari (Yasin); McCarthy, S. (Shane); Crawford, A.A. (Andrew A.); C. Bombieri (Cristina); M. Cocca (Massimiliano); A.-E. Farmaki (Aliki-Eleni); T.R. Gaunt (Tom); P. Jousilahti (Pekka); M.N. Kooijman (Marjolein ); Lehne, B. (Benjamin); G. Malerba (Giovanni); S. Männistö (Satu); A. Matchan (Angela); M.C. Medina-Gomez (Carolina); S. Metrustry (Sarah); A. Nag (Abhishek); I. Ntalla (Ioanna); L. Paternoster (Lavinia); N.W. Rayner (Nigel William); C. Sala (Cinzia); W.R. Scott (William R.); H.A. Shihab (Hashem A.); L. Southam (Lorraine); B. St Pourcain (Beate); M. Traglia (Michela); K. Trajanoska (Katerina); Zaza, G. (Gialuigi); W. Zhang (Weihua); M.S. Artigas; Bansal, N. (Narinder); M. Benn (Marianne); Chen, Z. (Zhongsheng); P. Danecek (Petr); Lin, W.-Y. (Wei-Yu); A. Locke (Adam); J. Luan (Jian'An); A.K. Manning (Alisa); Mulas, A. (Antonella); C. Sidore (Carlo); A. Tybjaerg-Hansen; A. Varbo (Anette); M. Zoledziewska (Magdalena); C. Finan (Chris); Hatzikotoulas, K. (Konstantinos); A.E. Hendricks (Audrey E.); J.P. Kemp (John); A. Moayyeri (Alireza); Panoutsopoulou, K. (Kalliope); Szpak, M. (Michal); S.G. Wilson (Scott); M. Boehnke (Michael); F. Cucca (Francesco); Di Angelantonio, E. (Emanuele); C. Langenberg (Claudia); C.M. Lindgren (Cecilia M.); McCarthy, M.I. (Mark I.); A.P. Morris (Andrew); B.G. Nordestgaard (Børge); R.A. Scott (Robert); M.D. Tobin (Martin); N.J. Wareham (Nick); P.R. Burton (Paul); J.C. Chambers (John); Smith, G.D. (George Davey); G.V. Dedoussis (George); J.F. Felix (Janine); O.H. Franco (Oscar); Gambaro, G. (Giovanni); P. Gasparini (Paolo); C.J. Hammond (Christopher J.); A. Hofman (Albert); V.W.V. Jaddoe (Vincent); M.E. Kleber (Marcus); J.S. Kooner (Jaspal S.); M. Perola (Markus); C.L. Relton (Caroline); S.M. Ring (Susan); F. Rivadeneira Ramirez (Fernando); V. Salomaa (Veikko); T.D. Spector (Timothy); O. Stegle (Oliver); D. Toniolo (Daniela); A.G. Uitterlinden (André); I.E. Barroso (Inês); C.M.T. Greenwood (Celia); Perry, J.R.B. (John R.B.); Walker, B.R. (Brian R.); A.S. Butterworth (Adam); Y. Xue (Yali); R. Durbin (Richard); K.S. Small (Kerrin); N. Soranzo (Nicole); N.J. Timpson (Nicholas); E. Zeggini (Eleftheria)

    2016-01-01

    Deep sequence-based imputation can enhance the discovery power of genome-wide association studies by assessing previously unexplored variation across the common- and low-frequency spectra. We applied a hybrid whole-genome sequencing (WGS) and deep imputation approach to examine the broader allelic architecture...

  6. Whole-Genome Sequencing Coupled to Imputation Discovers Genetic Signals for Anthropometric Traits

    DEFF Research Database (Denmark)

    Tachmazidou, Ioanna; Süveges, Dániel; Min, Josine L

    2017-01-01

    Deep sequence-based imputation can enhance the discovery power of genome-wide association studies by assessing previously unexplored variation across the common- and low-frequency spectra. We applied a hybrid whole-genome sequencing (WGS) and deep imputation approach to examine the broader allelic architecture...

  7. Missing value imputation in DNA microarrays based on conjugate gradient method.

    Science.gov (United States)

    Dorri, Fatemeh; Azmi, Paeiz; Dorri, Faezeh

    2012-02-01

    Analysis of gene expression profiles needs a complete matrix of gene array values; consequently, imputation methods have been suggested. In this paper, an algorithm based on the conjugate gradient (CG) method is proposed to estimate missing values. The k-nearest neighbors of the missing entry are first selected based on the absolute values of their Pearson correlation coefficients. Then a subset of genes among the k-nearest neighbors is labeled as the best similar ones. The CG algorithm with this subset as its input is then used to estimate the missing values. Our proposed CG-based algorithm (CGimpute) is evaluated on different data sets. The results are compared with the sequential local least squares (SLLSimpute), Bayesian principal component analysis (BPCAimpute), local least squares imputation (LLSimpute), iterated local least squares imputation (ILLSimpute) and adaptive k-nearest neighbors imputation (KNNKimpute) methods. The average normalized root mean squares error (NRMSE) and relative NRMSE in different data sets with various missing rates show that CGimpute outperforms the other methods. Copyright © 2011 Elsevier Ltd. All rights reserved.
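
    A rough sketch of the neighbour-selection step described above (ours, not the paper's implementation); for brevity, NumPy's least-squares solver stands in for the conjugate-gradient iteration:

        import numpy as np

        def correlated_neighbour_estimate(expr, target_row, missing_col, k=10):
            """expr: genes x samples matrix; expr[target_row, missing_col] is NaN."""
            obs = ~np.isnan(expr[target_row])
            others = [g for g in range(expr.shape[0])
                      if g != target_row and not np.isnan(expr[g, obs]).any()
                      and not np.isnan(expr[g, missing_col])]
            corr = [abs(np.corrcoef(expr[g, obs], expr[target_row, obs])[0, 1])
                    for g in others]
            nbrs = [others[j] for j in np.argsort(corr)[-k:]]  # top-k by |r|
            A = expr[nbrs][:, obs].T                           # samples x k
            y = expr[target_row, obs]
            coef, *_ = np.linalg.lstsq(A, y, rcond=None)       # regression weights
            return float(expr[nbrs, missing_col] @ coef)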

  8. Clustering with Missing Values: No Imputation Required

    Science.gov (United States)

    Wagstaff, Kiri

    2004-01-01

    Clustering algorithms can identify groups in large data sets, such as star catalogs and hyperspectral images. In general, clustering methods cannot analyze items that have missing data values. Common solutions either fill in the missing values (imputation) or ignore the missing data (marginalization). Imputed values are treated as just as reliable as the truly observed data, but they are only as good as the assumptions used to create them. In contrast, we present a method for encoding partially observed features as a set of supplemental soft constraints and introduce the KSC algorithm, which incorporates constraints into the clustering process. In experiments on artificial data and data from the Sloan Digital Sky Survey, we show that soft constraints are an effective way to enable clustering with missing values.

  9. Time Series Imputation via L1 Norm-Based Singular Spectrum Analysis

    Science.gov (United States)

    Kalantari, Mahdi; Yarmohammadi, Masoud; Hassani, Hossein; Silva, Emmanuel Sirimal

    Missing values in time series data are a well-known and important problem which many researchers have studied extensively in various fields. In this paper, a new nonparametric approach for missing value imputation in time series is proposed. The main novelty of this research is applying the L1 norm-based version of Singular Spectrum Analysis (SSA), namely L1-SSA, which is robust against outliers. The performance of the new imputation method has been compared with many other established methods. The comparison is done by applying them to various real and simulated time series. The obtained results confirm that the SSA-based methods, especially L1-SSA, can provide better imputation in comparison to other methods.

  10. GACT: a Genome build and Allele definition Conversion Tool for SNP imputation and meta-analysis in genetic association studies.

    Science.gov (United States)

    Sulovari, Arvis; Li, Dawei

    2014-07-19

    Genome-wide association studies (GWAS) have successfully identified genes associated with complex human diseases. Although much of the heritability remains unexplained, combining single nucleotide polymorphism (SNP) genotypes from multiple studies for meta-analysis will increase the statistical power to identify new disease-associated variants. Meta-analysis requires the same allele definition (nomenclature) and genome build among individual studies. Similarly, imputation, commonly used prior to meta-analysis, requires the same consistency. However, the genotypes from various GWAS are generated using different genotyping platforms, arrays or SNP-calling approaches, resulting in the use of different genome builds and allele definitions. Incorrect assumptions of identical allele definition among combined GWAS lead to a large portion of discarded genotypes or incorrect association findings. There is no published tool that predicts and converts among all major allele definitions. In this study, we have developed a tool, GACT, which stands for Genome build and Allele definition Conversion Tool, that predicts and inter-converts between any of the common SNP allele definitions and between the major genome builds. In addition, we assessed several factors that may affect imputation quality, and our results indicated that inclusion of singletons in the reference had detrimental effects while ambiguous SNPs had no measurable effect. Unexpectedly, exclusion of genotypes with missing rate > 0.001 (40% of study SNPs) showed no significant decrease of imputation quality (even significantly higher when compared to the imputation with singletons in the reference), especially for rare SNPs. GACT is a new, powerful, and user-friendly tool with both command-line and interactive online versions that can accurately predict, and convert between, any of the common allele definitions and between genome builds for genome-wide meta-analysis and imputation of genotypes from SNP-arrays or deep sequencing.

  11. Fully conditional specification in multivariate imputation

    NARCIS (Netherlands)

    van Buuren, S.; Brand, J. P.L.; Groothuis-Oudshoorn, C. G.M.; Rubin, D. B.

    2006-01-01

    The use of the Gibbs sampler with fully conditionally specified models, where the distribution of each variable given the other variables is the starting point, has become a popular method to create imputations in incomplete multivariate data. The theoretical weakness of this approach is that the specified conditional densities can be incompatible, so that the stationary distribution to which the Gibbs sampler attempts to converge may not exist.

  12. Gap-filling a spatially explicit plant trait database: comparing imputation methods and different levels of environmental information

    Science.gov (United States)

    Poyatos, Rafael; Sus, Oliver; Badiella, Llorenç; Mencuccini, Maurizio; Martínez-Vilalta, Jordi

    2018-05-01

    The ubiquity of missing data in plant trait databases may hinder trait-based analyses of ecological patterns and processes. Spatially explicit datasets with information on intraspecific trait variability are rare but offer great promise in improving our understanding of functional biogeography. At the same time, they offer specific challenges in terms of data imputation. Here we compare statistical imputation approaches, using varying levels of environmental information, for five plant traits (leaf biomass to sapwood area ratio, leaf nitrogen content, maximum tree height, leaf mass per area and wood density) in a spatially explicit plant trait dataset of temperate and Mediterranean tree species (Ecological and Forest Inventory of Catalonia, IEFC, dataset for Catalonia, north-east Iberian Peninsula, 31 900 km2). We simulated gaps at different missingness levels (10-80 %) in a complete trait matrix, and we used overall trait means, species means, k nearest neighbours (kNN), ordinary and regression kriging, and multivariate imputation using chained equations (MICE) to impute missing trait values. We assessed these methods in terms of their accuracy and of their ability to preserve trait distributions, multi-trait correlation structure and bivariate trait relationships. The relatively good performance of mean and species mean imputations in terms of accuracy masked a poor representation of trait distributions and multivariate trait structure. Species identity improved MICE imputations for all traits, whereas forest structure and topography improved imputations for some traits. No method performed best consistently for the five studied traits, but, considering all traits and performance metrics, MICE informed by relevant ecological variables gave the best results. However, at higher missingness (> 30 %), species mean imputations and regression kriging tended to outperform MICE for some traits. MICE informed by relevant ecological variables allowed us to fill the gaps in the trait dataset.
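
    A MICE-style imputation informed by auxiliary environmental variables can be sketched with scikit-learn's experimental IterativeImputer (an illustration with invented column names, not the study's code):

        import numpy as np
        import pandas as pd
        from sklearn.experimental import enable_iterative_imputer  # noqa: F401
        from sklearn.impute import IterativeImputer

        rng = np.random.default_rng(1)
        df = pd.DataFrame({
            "wood_density": rng.normal(0.6, 0.1, 300),
            "max_height": rng.normal(20, 5, 300),
            "elevation": rng.normal(800, 300, 300),  # auxiliary environmental covariate
        })
        df.loc[rng.random(300) < 0.3, "wood_density"] = np.nan  # simulate 30% gaps

        # Chained equations: each variable is regressed on the others, so the
        # environmental column informs the trait imputations.
        filled = IterativeImputer(max_iter=10, random_state=0).fit_transform(df)
        df_filled = pd.DataFrame(filled, columns=df.columns)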

  13. Multi-generational imputation of single nucleotide polymorphism marker genotypes and accuracy of genomic selection.

    Science.gov (United States)

    Toghiani, S; Aggrey, S E; Rekaya, R

    2016-07-01

    Availability of high-density single nucleotide polymorphism (SNP) genotyping platforms provided unprecedented opportunities to enhance breeding programmes in livestock, poultry and plant species, and to better understand the genetic basis of complex traits. Using this genomic information, genomic breeding values (GEBVs) can be estimated, which are more accurate than conventional breeding values. The superiority of genomic selection is possible only when high-density SNP panels are used to track genes and QTLs affecting the trait. Unfortunately, even with the continuous decrease in genotyping costs, only a small fraction of the population has been genotyped with these high-density panels. It is often the case that a larger portion of the population is genotyped with low-density and low-cost SNP panels and then imputed to a higher density. Accuracy of SNP genotype imputation tends to be high when minimum requirements are met. Nevertheless, a certain rate of genotype imputation errors is unavoidable. Thus, it is reasonable to assume that the accuracy of GEBVs will be affected by imputation errors; especially, their cumulative effects over time. To evaluate the impact of multi-generational selection on the accuracy of SNP genotypes imputation and the reliability of resulting GEBVs, a simulation was carried out under varying updating of the reference population, distance between the reference and testing sets, and the approach used for the estimation of GEBVs. Using fixed reference populations, imputation accuracy decayed by about 0.5% per generation. In fact, after 25 generations, the accuracy was only 7% lower than the first generation. When the reference population was updated by either 1% or 5% of the top animals in the previous generations, decay of imputation accuracy was substantially reduced. These results indicate that low-density panels are useful, especially when the generational interval between reference and testing population is small. As the generational interval increases, imputation accuracy declines unless the reference population is updated.

  14. Comparison of three boosting methods in parent-offspring trios for genotype imputation using simulation study

    Directory of Open Access Journals (Sweden)

    Abbas Mikhchi

    2016-01-01

    Background: Genotype imputation is an important process of predicting unknown genotypes, in which a reference population with dense genotypes is used to predict missing genotypes for both human and animal genetic variations at a low cost. Machine learning methods, especially boosting methods, have been used in genetic studies to explore the underlying genetic profile of disease and to build models capable of predicting missing values of a marker. Methods: In this study, strategies and factors affecting the imputation accuracy of parent-offspring trios from a lower-density SNP panel (5K) to a higher-density panel (10K) were compared using three different boosting methods, namely TotalBoost (TB), LogitBoost (LB) and AdaBoost (AB). The methods were applied to simulated data to impute the un-typed SNPs in parent-offspring trios. Four different datasets were simulated: G1 (100 trios with 5K SNPs), G2 (100 trios with 10K SNPs), G3 (500 trios with 5K SNPs), and G4 (500 trios with 10K SNPs). In all four datasets all parents were genotyped completely, and offspring were genotyped with a lower-density panel. Results: Comparison of the three imputation methods showed that LB outperformed AB and TB in imputation accuracy. Computation times differed between methods, with AB being the fastest algorithm. Higher SNP densities increased imputation accuracy, and larger numbers of trios (i.e., 500) improved the performance of LB and TB. Conclusions: All three methods perform well in terms of imputation accuracy, and the denser chip is recommended for imputation of parent-offspring trios.

  15. Nearest neighbor imputation using spatial-temporal correlations in wireless sensor networks.

    Science.gov (United States)

    Li, YuanYuan; Parker, Lynne E

    2014-01-01

    Missing data is common in Wireless Sensor Networks (WSNs), especially with multi-hop communications. There are many reasons for this phenomenon, such as unstable wireless communications, synchronization issues, and unreliable sensors. Unfortunately, missing data creates a number of problems for WSNs. First, since most sensor nodes in the network are battery-powered, it is too expensive to have the nodes retransmit missing data across the network. Data re-transmission may also cause time delays when detecting abnormal changes in an environment. Furthermore, localized reasoning techniques on sensor nodes (such as machine learning algorithms to classify states of the environment) are generally not robust enough to handle missing data. Since sensor data collected by a WSN is generally correlated in time and space, we illustrate how replacing missing sensor values with spatially and temporally correlated sensor values can significantly improve the network's performance. However, our studies show that it is important to determine which nodes are spatially and temporally correlated with each other. Simple techniques based on Euclidean distance are not sufficient for complex environmental deployments. Thus, we have developed a novel Nearest Neighbor (NN) imputation method that estimates missing data in WSNs by learning spatial and temporal correlations between sensor nodes. To improve the search time, we utilize a kd-tree data structure, which is a non-parametric, data-driven binary search tree. Instead of using traditional mean and variance of each dimension for kd-tree construction, and Euclidean distance for kd-tree search, we use weighted variances and weighted Euclidean distances based on measured percentages of missing data. We have evaluated this approach through experiments on sensor data from a volcano dataset collected by a network of Crossbow motes, as well as experiments using sensor data from a highway traffic monitoring application. Our experimental results show the effectiveness of the proposed approach.
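
    The weighted-distance kd-tree lookup described above can be approximated as follows (our sketch, not the authors' code). Since a kd-tree answers plain Euclidean queries, per-dimension weights are folded into the coordinates: scaling each axis by the square root of its weight makes Euclidean distance in the scaled space equal the weighted distance in the original space:

        import numpy as np
        from scipy.spatial import cKDTree

        def weighted_nn_impute(history, query, weights, k=3):
            """history: n x d array of complete past sensor readings; query:
            length-d reading with NaNs; weights: length-d weights (e.g.,
            derived from measured percentages of missing data)."""
            obs = ~np.isnan(query)
            w = np.sqrt(np.asarray(weights, dtype=float)[obs])
            tree = cKDTree(history[:, obs] * w)          # scaled coordinates
            _, idx = tree.query(query[obs] * w, k=k)     # k nearest past readings
            filled = query.astype(float).copy()
            filled[~obs] = history[np.atleast_1d(idx)][:, ~obs].mean(axis=0)
            return filled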

  16. Imputation of missing data in time series for air pollutants

    Science.gov (United States)

    Junger, W. L.; Ponce de Leon, A.

    2015-02-01

    Missing data are major concerns in epidemiological studies of the health effects of environmental air pollutants. This article presents an imputation-based method that is suitable for multivariate time series data, which uses the EM algorithm under the assumption of normal distribution. Different approaches are considered for filtering the temporal component. A simulation study was performed to assess the validity and performance of the proposed method in comparison with some frequently used methods. Simulations showed that when the amount of missing data was as low as 5%, the complete data analysis yielded satisfactory results regardless of the generating mechanism of the missing data, whereas the validity began to degenerate when the proportion of missing values exceeded 10%. The proposed imputation method exhibited good accuracy and precision in different settings with respect to the patterns of missing observations. Most of the imputations obtained valid results, even under a missing-not-at-random mechanism. The methods proposed in this study are implemented as a package called mtsdi for the statistical software system R.
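
    A much-simplified sketch of EM-style imputation under a multivariate normal model (ours, not the mtsdi package; it ignores the temporal filtering step, omits the conditional-covariance correction of a full EM, and assumes no row or column is entirely missing):

        import numpy as np

        def em_impute(X, n_iter=50):
            X = np.asarray(X, dtype=float)
            miss = np.isnan(X)
            X_f = np.where(miss, np.nanmean(X, axis=0), X)  # start at column means
            for _ in range(n_iter):
                mu = X_f.mean(axis=0)
                S = np.cov(X_f, rowvar=False)
                for i in np.where(miss.any(axis=1))[0]:
                    m, o = miss[i], ~miss[i]
                    # conditional mean of missing entries given observed entries
                    X_f[i, m] = mu[m] + S[np.ix_(m, o)] @ np.linalg.solve(
                        S[np.ix_(o, o)], X_f[i, o] - mu[o])
            return X_f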

  17. Imputation of the rare HOXB13 G84E mutation and cancer risk in a large population-based cohort.

    Directory of Open Access Journals (Sweden)

    Thomas J Hoffmann

    2015-01-01

    An efficient approach to characterizing the disease burden of rare genetic variants is to impute them into large well-phenotyped cohorts with existing genome-wide genotype data using large sequenced reference panels. The success of this approach hinges on the accuracy of rare variant imputation, which remains controversial. For example, a recent study suggested that one cannot adequately impute the HOXB13 G84E mutation associated with prostate cancer risk (carrier frequency of 0.0034 in European ancestry participants in the 1000 Genomes Project). We show that by utilizing the 1000 Genomes Project data plus an enriched reference panel of mutation carriers we were able to accurately impute the G84E mutation into a large cohort of 83,285 non-Hispanic White participants from the Kaiser Permanente Research Program on Genes, Environment and Health Genetic Epidemiology Research on Adult Health and Aging cohort. Imputation authenticity was confirmed via a novel classification and regression tree method, and then empirically validated by analyzing a subset of these subjects plus an additional 1,789 men from Kaiser specifically genotyped for the G84E mutation (r2 = 0.57, 95% CI = 0.37-0.77). We then show the value of this approach by using the imputed data to investigate the impact of the G84E mutation on age-specific prostate cancer risk and on risk of fourteen other cancers in the cohort. The age-specific risk of prostate cancer among G84E mutation carriers was higher than among non-carriers. Risk estimates from Kaplan-Meier curves were 36.7% versus 13.6% by age 72, and 64.2% versus 24.2% by age 80, for G84E mutation carriers and non-carriers, respectively (p = 3.4x10-12). The G84E mutation was also associated with an increase in risk for the fourteen other most common cancers considered collectively (p = 5.8x10-4), and more so in cases diagnosed with multiple cancer types, both those including and not including prostate cancer, strongly suggesting...

  18. PRIMAL: Fast and accurate pedigree-based imputation from sequence data in a founder population.

    Directory of Open Access Journals (Sweden)

    Oren E Livne

    2015-03-01

    Founder populations and large pedigrees offer many well-known advantages for genetic mapping studies, including cost-efficient study designs. Here, we describe PRIMAL (PedigRee IMputation ALgorithm), a fast and accurate pedigree-based phasing and imputation algorithm for founder populations. PRIMAL incorporates both existing and original ideas, such as a novel indexing strategy of Identity-By-Descent (IBD) segments based on clique graphs. We were able to impute the genomes of 1,317 South Dakota Hutterites, who had genome-wide genotypes for ~300,000 common single nucleotide variants (SNVs), from 98 whole genome sequences. Using a combination of pedigree-based and LD-based imputation, we were able to assign 87% of genotypes with >99% accuracy over the full range of allele frequencies. Using the IBD cliques we were also able to infer the parental origin of 83% of alleles, and genotypes of deceased recent ancestors for whom no genotype information was available. This imputed data set will enable us to better study the relative contribution of rare and common variants on human phenotypes, as well as parental origin effects of disease risk alleles, in >1,000 individuals at minimal cost.

  19. Sequence imputation of HPV16 genomes for genetic association studies.

    Directory of Open Access Journals (Sweden)

    Benjamin Smith

    Human Papillomavirus type 16 (HPV16) causes over half of all cervical cancer and some HPV16 variants are more oncogenic than others. The genetic basis for the extraordinary oncogenic properties of HPV16 compared to other HPVs is unknown. In addition, we neither know which nucleotides vary across and within HPV types and lineages, nor which of the single nucleotide polymorphisms (SNPs) determine oncogenicity. A reference set of 62 HPV16 complete genome sequences was established and used to examine patterns of evolutionary relatedness amongst variants using a pairwise identity heatmap and HPV16 phylogeny. A BLAST-based algorithm was developed to impute complete genome data from partial sequence information using the reference database. To interrogate the oncogenic risk of determined and imputed HPV16 SNPs, odds ratios for each SNP were calculated in a case-control viral genome-wide association study (VWAS) using biopsy-confirmed high-grade cervix neoplasia and self-limited HPV16 infections from Guanacaste, Costa Rica. HPV16 variants display evolutionarily stable lineages that contain conserved diagnostic SNPs. The imputation algorithm indicated that an average of 97.5±1.03% of SNPs could be accurately imputed. The VWAS revealed specific HPV16 viral SNPs associated with variant lineages and elevated odds ratios; however, individual causal SNPs could not be distinguished with certainty due to the nature of HPV evolution. Conserved and lineage-specific SNPs can be imputed with a high degree of accuracy from limited viral polymorphic data due to the lack of recombination and the stochastic mechanism of variation accumulation in the HPV genome. However, determining the role of novel variants or non-lineage-specific SNPs by VWAS will require direct sequence analysis. The investigation of patterns of genetic variation and the identification of diagnostic SNPs for lineages of HPV16 variants provides a valuable resource for future studies of HPV16...

  20. Gap-filling a spatially explicit plant trait database: comparing imputation methods and different levels of environmental information

    Directory of Open Access Journals (Sweden)

    R. Poyatos

    2018-05-01

    The ubiquity of missing data in plant trait databases may hinder trait-based analyses of ecological patterns and processes. Spatially explicit datasets with information on intraspecific trait variability are rare but offer great promise in improving our understanding of functional biogeography. At the same time, they offer specific challenges in terms of data imputation. Here we compare statistical imputation approaches, using varying levels of environmental information, for five plant traits (leaf biomass to sapwood area ratio, leaf nitrogen content, maximum tree height, leaf mass per area and wood density) in a spatially explicit plant trait dataset of temperate and Mediterranean tree species (Ecological and Forest Inventory of Catalonia, IEFC, dataset for Catalonia, north-east Iberian Peninsula, 31 900 km2). We simulated gaps at different missingness levels (10-80 %) in a complete trait matrix, and we used overall trait means, species means, k nearest neighbours (kNN), ordinary and regression kriging, and multivariate imputation using chained equations (MICE) to impute missing trait values. We assessed these methods in terms of their accuracy and of their ability to preserve trait distributions, multi-trait correlation structure and bivariate trait relationships. The relatively good performance of mean and species mean imputations in terms of accuracy masked a poor representation of trait distributions and multivariate trait structure. Species identity improved MICE imputations for all traits, whereas forest structure and topography improved imputations for some traits. No method performed best consistently for the five studied traits, but, considering all traits and performance metrics, MICE informed by relevant ecological variables gave the best results. However, at higher missingness (> 30 %), species mean imputations and regression kriging tended to outperform MICE for some traits. MICE informed by relevant ecological variables allowed us to fill the gaps in the trait dataset.

  1. Imputation Accuracy from Low to Moderate Density Single Nucleotide Polymorphism Chips in a Thai Multibreed Dairy Cattle Population

    Directory of Open Access Journals (Sweden)

    Danai Jattawa

    2016-04-01

    The objective of this study was to investigate the accuracy of imputation from low density (LDC) to moderate density SNP chips (MDC) in a Thai Holstein-Other multibreed dairy cattle population. Dairy cattle with complete pedigree information (n = 1,244) from 145 dairy farms were genotyped with GeneSeek GGP20K (n = 570), GGP26K (n = 540) and GGP80K (n = 134) chips. After checking for single nucleotide polymorphism (SNP) quality, 17,779 SNP markers in common between the GGP20K, GGP26K, and GGP80K were used to represent MDC. Animals were divided into two groups, a reference group (n = 912) and a test group (n = 332). The SNP markers chosen for the test group were those located in positions corresponding to GeneSeek GGP9K (n = 7,652). The LDC to MDC genotype imputation was carried out using three different software packages, namely Beagle 3.3 (population-based algorithm), FImpute 2.2 (combined family- and population-based algorithms) and Findhap 4 (combined family- and population-based algorithms). Imputation accuracies within and across chromosomes were calculated as ratios of correctly imputed SNP markers to overall imputed SNP markers. Imputation accuracy for the three software packages ranged from 76.79% to 93.94%. FImpute had higher imputation accuracy (93.94%) than Findhap (84.64%) and Beagle (76.79%). Imputation accuracies were similar and consistent across chromosomes for FImpute, but not for Findhap and Beagle. Most chromosomes that showed either high or low imputation accuracies were the same chromosomes that had above- or below-average linkage disequilibrium (LD; defined here as the correlation between pairs of adjacent SNP within chromosomes less than or equal to 1 Mb apart). Results indicated that FImpute was more suitable than Findhap and Beagle for genotype imputation in this Thai multibreed population. Perhaps additional increments in imputation accuracy could be achieved by increasing the completeness of pedigree information.
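
    The accuracy metric used above, the ratio of correctly imputed markers to all imputed markers, reduces to a few lines (illustrative code, not from the study):

        import numpy as np

        def imputation_accuracy(true_geno, imputed_geno, masked, chrom):
            """All arguments are arrays over SNPs; genotypes coded 0/1/2,
            masked marks the entries that were hidden and then imputed."""
            ok = (true_geno == imputed_geno) & masked
            overall = ok.sum() / masked.sum()
            per_chrom = {c: ok[chrom == c].sum() / masked[chrom == c].sum()
                         for c in np.unique(chrom) if masked[chrom == c].any()}
            return overall, per_chrom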

  2. The utility of imputed matched sets. Analyzing probabilistically linked databases in a low information setting.

    Science.gov (United States)

    Thomas, A M; Cook, L J; Dean, J M; Olson, L M

    2014-01-01

    To compare results from high probability matched sets versus imputed matched sets across differing levels of linkage information. A series of linkages with varying amounts of available information were performed on two simulated datasets derived from multiyear motor vehicle crash (MVC) and hospital databases, where true matches were known. Distributions of high probability and imputed matched sets were compared against the true match population for occupant age, MVC county, and MVC hour. Regression models were fit to simulated log hospital charges and hospitalization status. High probability and imputed matched sets did not differ significantly from the true match population with respect to occupant age, MVC county, and MVC hour in high information settings (p > 0.999). In low information settings, high probability matched sets differed significantly with respect to occupant age and MVC county, while imputed matched sets did not (p > 0.493). High information settings saw no significant differences in inference of simulated log hospital charges and hospitalization status between the two methods. High probability and imputed matched sets were significantly different from the outcomes in low information settings; however, imputed matched sets were more robust. The level of information available to a linkage is an important consideration. High probability matched sets are suitable for high to moderate information settings and for situations involving case-specific analysis. Conversely, imputed matched sets are preferable for low information settings when conducting population-based analyses.

  3. An Imputation Model for Dropouts in Unemployment Data

    Directory of Open Access Journals (Sweden)

    Nilsson Petra

    2016-09-01

    Incomplete unemployment data is a fundamental problem when evaluating labour market policies in several countries. Many unemployment spells end for unknown reasons; in the Swedish Public Employment Service's register, as many as 20 percent. This leads to an ambiguity regarding destination states (employment, unemployment, retired, etc.). According to complete combined administrative data, the employment rate among dropouts was close to 50 percent for the years 1992 to 2006, but from 2007 the employment rate has dropped to 40 percent or less. This article explores an imputation approach. We investigate imputation models estimated both on survey data from 2005/2006 and on complete combined administrative data from 2005/2006 and 2011/2012. The models are evaluated in terms of their ability to make correct predictions. The models have relatively high predictive power.

  4. A suggested approach for imputation of missing dietary data for young children in daycare.

    Science.gov (United States)

    Stevens, June; Ou, Fang-Shu; Truesdale, Kimberly P; Zeng, Donglin; Vaughn, Amber E; Pratt, Charlotte; Ward, Dianne S

    2015-01-01

    Parent-reported 24-h diet recalls are an accepted method of estimating intake in young children. However, many children eat while at childcare making accurate proxy reports by parents difficult. The goal of this study was to demonstrate a method to impute missing weekday lunch and daytime snack nutrient data for daycare children and to explore the concurrent predictive and criterion validity of the method. Data were from children aged 2-5 years in the My Parenting SOS project (n=308; 870 24-h diet recalls). Mixed models were used to simultaneously predict breakfast, dinner, and evening snacks (B+D+ES); lunch; and daytime snacks for all children after adjusting for age, sex, and body mass index (BMI). From these models, we imputed the missing weekday daycare lunches by interpolation using the mean lunch to B+D+ES [L/(B+D+ES)] ratio among non-daycare children on weekdays and the L/(B+D+ES) ratio for all children on weekends. Daytime snack data were used to impute snacks. The reported mean (± standard deviation) weekday intake was lower for daycare children [725 (±324) kcal] compared to non-daycare children [1,048 (±463) kcal]. Weekend intake for all children was 1,173 (±427) kcal. After imputation, weekday caloric intake for daycare children was 1,230 (±409) kcal. Daily intakes that included imputed data were associated with age and sex but not with BMI. This work indicates that imputation is a promising method for improving the precision of daily nutrient data from young children.
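
    The ratio interpolation described above can be illustrated in a toy pandas sketch (ours, not the study's code; the mixed-model predictions are replaced by reported totals):

        import pandas as pd

        recalls = pd.DataFrame({
            "child": [1, 2, 3, 4],
            "daycare": [True, True, False, False],
            "b_d_es_kcal": [600, 650, 700, 750],   # breakfast + dinner + evening snack
            "lunch_kcal": [None, None, 380, 410],  # missing for daycare children
        })

        # Mean lunch-to-(B+D+ES) ratio among children whose lunch was reported
        # (here, the non-daycare children); NaNs are skipped automatically.
        ratio = (recalls["lunch_kcal"] / recalls["b_d_es_kcal"]).mean()
        missing = recalls["lunch_kcal"].isna()
        recalls.loc[missing, "lunch_kcal"] = recalls.loc[missing, "b_d_es_kcal"] * ratio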

  5. UniFIeD Univariate Frequency-based Imputation for Time Series Data

    OpenAIRE

    Friese, Martina; Stork, Jörg; Ramos Guerra, Ricardo; Bartz-Beielstein, Thomas; Thaker, Soham; Flasch, Oliver; Zaefferer, Martin

    2013-01-01

    This paper introduces UniFIeD, a new data preprocessing method for time series. UniFIeD can cope with large intervals of missing data. A scalable test function generator, which allows the simulation of time series with different gap sizes, is presented additionally. An experimental study demonstrates that (i) UniFIeD shows a significant better performance than simple imputation methods and (ii) UniFIeD is able to handle situations, where advanced imputation methods fail. The results are indep...

  6. Candidate gene analysis using imputed genotypes: cell cycle single-nucleotide polymorphisms and ovarian cancer risk

    DEFF Research Database (Denmark)

    Goode, Ellen L; Fridley, Brooke L; Vierkant, Robert A

    2009-01-01

    Polymorphisms in genes critical to cell cycle control are outstanding candidates for association with ovarian cancer risk; numerous genes have been interrogated by multiple research groups using differing tagging single-nucleotide polymorphism (SNP) sets. To maximize information gleaned from... and rs3212891; CDK2 rs2069391, rs2069414, and rs17528736; and CCNE1 rs3218036. These results exemplify the utility of imputation in candidate gene studies and lend evidence to a role of cell cycle genes in ovarian cancer etiology, and suggest a reduced set of SNPs to target in additional cases and controls...

  7. Quick, “Imputation-free” meta-analysis with proxy-SNPs

    Directory of Open Access Journals (Sweden)

    Meesters Christian

    2012-09-01

    Background: Meta-analysis (MA) is widely used to pool genome-wide association studies (GWASes) in order to (a) increase the power to detect strong or weak genotype effects or (b) serve as a result verification method. As a consequence of differing SNP panels among genotyping chips, imputation is the method of choice within GWAS consortia to avoid losing too many SNPs in a MA. YAMAS (Yet Another Meta Analysis Software), however, enables cross-GWAS conclusions prior to finished and polished imputation runs, which can be time-consuming. Results: Here we present a fast method to avoid forfeiting SNPs present in only a subset of studies, without relying on imputation. This is accomplished by using reference linkage disequilibrium data from the 1,000 Genomes/HapMap projects to find proxy-SNPs together with in-phase alleles for SNPs missing in at least one study. MA is conducted by combining association effect estimates of a SNP and those of its proxy-SNPs. Our algorithm is implemented in the MA software YAMAS. Association results from GWAS analysis applications can be used as input files for MA, tremendously speeding up MA compared to the conventional imputation approach. We show that our proxy algorithm is well-powered and yields valuable ad hoc results, possibly providing an incentive for follow-up studies. We propose our method as a quick screening step prior to imputation-based MA, as well as an additional main approach for studies without available reference data matching the ethnicities of study participants. As a proof of principle, we analyzed six dbGaP Type II Diabetes GWAS and found that the proxy algorithm clearly outperforms naïve MA on the p-value level: for 17 out of 23 we observe an improvement on the p-value level by a factor of more than two, and a maximum improvement by a factor of 2127. Conclusions: YAMAS is an efficient and fast meta-analysis program which offers various methods, including conventional MA as well as inserting proxy-SNPs.
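
    The combining step, pooling the effect estimate of a SNP with those of its proxy-SNPs across studies, amounts to a fixed-effect inverse-variance meta-analysis, sketched below (our illustration, not YAMAS code; numbers are invented and proxy alleles are assumed pre-aligned):

        import numpy as np

        def fixed_effect_meta(betas, ses):
            """betas/ses: per-study effect estimates and standard errors, taken
            from the index SNP where genotyped, else from its best proxy-SNP."""
            w = 1.0 / np.asarray(ses, dtype=float) ** 2   # inverse-variance weights
            beta = np.sum(w * betas) / np.sum(w)
            se = np.sqrt(1.0 / np.sum(w))
            return beta, se

        beta, se = fixed_effect_meta([0.12, 0.08, 0.15], [0.05, 0.04, 0.06])
        print(f"pooled beta {beta:.3f}, z = {beta / se:.2f}")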

  8. Learning-Based Adaptive Imputation Methodwith kNN Algorithm for Missing Power Data

    Directory of Open Access Journals (Sweden)

    Minkyung Kim

    2017-10-01

    This paper proposes a learning-based adaptive imputation method (LAI) for imputing missing power data in an energy system. This method estimates the missing power data by using the patterns that appear in the collected data. Here, in order to capture the patterns from past power data, we newly model a feature vector by using past data and its variations. The proposed LAI then learns the optimal length of the feature vector and the optimal historical length, which are significant hyperparameters of the proposed method, by utilizing intentional missing data. Based on a weighted distance between feature vectors representing a missing situation and past situations, missing power data are estimated by referring to the k most similar past situations in the optimal historical length. We further extend the proposed LAI to alleviate the effect of unexpected variation in power data and refer to this new approach as the extended LAI method (eLAI). The eLAI selects between linear interpolation (LI) and the proposed LAI to improve accuracy under unexpected variations. Finally, from a simulation under various energy consumption profiles, we verify that the proposed eLAI achieves about a 74% reduction of the average imputation error in an energy system, compared to existing imputation methods.

  9. A suggested approach for imputation of missing dietary data for young children in daycare

    Directory of Open Access Journals (Sweden)

    June Stevens

    2015-12-01

    Background: Parent-reported 24-h diet recalls are an accepted method of estimating intake in young children. However, many children eat while at childcare making accurate proxy reports by parents difficult. Objective: The goal of this study was to demonstrate a method to impute missing weekday lunch and daytime snack nutrient data for daycare children and to explore the concurrent predictive and criterion validity of the method. Design: Data were from children aged 2-5 years in the My Parenting SOS project (n=308; 870 24-h diet recalls). Mixed models were used to simultaneously predict breakfast, dinner, and evening snacks (B+D+ES); lunch; and daytime snacks for all children after adjusting for age, sex, and body mass index (BMI). From these models, we imputed the missing weekday daycare lunches by interpolation using the mean lunch to B+D+ES [L/(B+D+ES)] ratio among non-daycare children on weekdays and the L/(B+D+ES) ratio for all children on weekends. Daytime snack data were used to impute snacks. Results: The reported mean (± standard deviation) weekday intake was lower for daycare children [725 (±324) kcal] compared to non-daycare children [1,048 (±463) kcal]. Weekend intake for all children was 1,173 (±427) kcal. After imputation, weekday caloric intake for daycare children was 1,230 (±409) kcal. Daily intakes that included imputed data were associated with age and sex but not with BMI. Conclusion: This work indicates that imputation is a promising method for improving the precision of daily nutrient data from young children.

  10. Inclusion of Population-specific Reference Panel from India to the 1000 Genomes Phase 3 Panel Improves Imputation Accuracy.

    Science.gov (United States)

    Ahmad, Meraj; Sinha, Anubhav; Ghosh, Sreya; Kumar, Vikrant; Davila, Sonia; Yajnik, Chittaranjan S; Chandak, Giriraj R

    2017-07-27

    Imputation is a computational method based on the principle of haplotype sharing that allows enrichment of genome-wide association study datasets. It depends on the haplotype structure of the population and the density of the genotype data. The 1000 Genomes Project led to the generation of imputation reference panels which have been used globally. However, recent studies have shown that population-specific panels provide better enrichment of genome-wide variants. We compared the imputation accuracy using the 1000 Genomes phase 3 reference panel and a panel generated from genome-wide data on 407 individuals from Western India (WIP). The concordance of imputed variants was cross-checked with next-generation re-sequencing data on a subset of genomic regions. Further, using the genome-wide data from 1880 individuals, we demonstrate that WIP works better than the 1000 Genomes phase 3 panel and, when merged with it, significantly improves the imputation accuracy throughout the minor allele frequency range. We also show that imputation using only the South Asian component of the 1000 Genomes phase 3 panel works as well as the merged panel, making it a computationally less intensive job. Thus, our study stresses that imputation accuracy using the 1000 Genomes phase 3 panel can be further improved by including population-specific reference panels from South Asia.

  11. Increasing imputation and prediction accuracy for Chinese Holsteins using joint Chinese-Nordic reference population

    DEFF Research Database (Denmark)

    Ma, Peipei; Lund, Mogens Sandø; Ding, X

    2015-01-01

    This study investigated the effect of including Nordic Holsteins in the reference population on the imputation accuracy and prediction accuracy for Chinese Holsteins. The data used in this study include 85 Chinese Holstein bulls genotyped with both the 54K chip and the 777K (HD) chip, and 2862 Chinese cows... was improved slightly when using the marker data imputed based on the combined HD reference data, compared with using the marker data imputed based on the Chinese HD reference data only. On the other hand, when using the combined reference population including 4398 Nordic Holstein bulls, the accuracy... to increase the reference population rather than increasing marker density...

  12. Simultaneous Treatment of Missing Data and Measurement Error in HIV Research Using Multiple Overimputation.

    Science.gov (United States)

    Schomaker, Michael; Hogger, Sara; Johnson, Leigh F; Hoffmann, Christopher J; Bärnighausen, Till; Heumann, Christian

    2015-09-01

    Both CD4 count and viral load in HIV-infected persons are measured with error. There is no clear guidance on how to deal with this measurement error in the presence of missing data. We used multiple overimputation, a method recently developed in the political sciences, to account for both measurement error and missing data in CD4 count and viral load measurements from four South African cohorts of a Southern African HIV cohort collaboration. Our knowledge about the measurement error of ln CD4 and log10 viral load is part of an imputation model that imputes both missing and mismeasured data. In an illustrative example, we estimate the association of CD4 count and viral load with the hazard of death among patients on highly active antiretroviral therapy by means of a Cox model. Simulation studies evaluate the extent to which multiple overimputation is able to reduce bias in survival analyses. Multiple overimputation emphasizes more strongly the influence of having high baseline CD4 counts compared to both a complete case analysis and multiple imputation (hazard ratio for >200 cells/mm³ vs. <25 cells/mm³: 0.21 [95% confidence interval: 0.18, 0.24] vs. 0.38 [0.29, 0.48], and 0.29 [0.25, 0.34], respectively). Similar results are obtained when varying assumptions about measurement error, when using p-splines, and when evaluating time-updated CD4 count in a longitudinal analysis. The estimates of the association with viral load are slightly more attenuated when using multiple imputation instead of multiple overimputation. Our simulation studies suggest that multiple overimputation is able to reduce bias and mean squared error in survival analyses. Multiple overimputation, which can be used with existing software, offers a convenient approach to account for both missing and mismeasured data in HIV research.
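
    Conceptually, multiple overimputation treats every observed-but-mismeasured value as partially missing. A deliberately simplified sketch (ours, not the authors' software, which builds a full imputation model rather than the marginal normal used here):

        import numpy as np

        def overimpute(x, me_sd, m=5, seed=0):
            """x: observed values (e.g., ln CD4) with NaN for missing; me_sd:
            assumed measurement-error SD on the observed values; returns m
            overimputed copies of the data."""
            rng = np.random.default_rng(seed)
            obs = ~np.isnan(x)
            mu, sd = x[obs].mean(), x[obs].std(ddof=1)
            datasets = []
            for _ in range(m):
                xi = np.where(obs,
                              rng.normal(x, me_sd),             # overimpute observed values
                              rng.normal(mu, sd, size=x.size))  # impute missing values
                datasets.append(xi)
            return datasets

    Each of the m datasets is then analyzed (e.g., with a Cox model) and the results pooled with the usual multiple imputation combining rules.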

  13. Multiple Imputation of Groundwater Data to Evaluate Spatial and Temporal Anthropogenic Influences on Subsurface Water Fluxes in Los Angeles, CA

    Science.gov (United States)

    Manago, K. F.; Hogue, T. S.; Hering, A. S.

    2014-12-01

    In the City of Los Angeles, groundwater accounts for 11% of the total water supply on average, and 30% during drought years. Due to ongoing drought in California, increased reliance on local water supply highlights the need for better understanding of regional groundwater dynamics and estimating sustainable groundwater supply. However, in an urban setting such as Los Angeles, understanding or modeling groundwater levels is extremely complicated due to various anthropogenic influences such as groundwater pumping, artificial recharge, landscape irrigation, leaking infrastructure, seawater intrusion, and extensive impervious surfaces. This study analyzes anthropogenic effects on groundwater levels using groundwater monitoring well data from the County of Los Angeles Department of Public Works. The groundwater data are irregularly sampled with large gaps between samples, resulting in a sparsely populated dataset. A multiple imputation method is used to fill the missing data, allowing for multiple ensembles and improved error estimates. The filled data are interpolated to create spatial groundwater maps utilizing information from all wells. The groundwater data are evaluated at a monthly time step over the last several decades to analyze the effect of land cover and identify other influencing factors on groundwater levels spatially and temporally. Preliminary results show that irrigated parks have the largest influence on groundwater fluctuations, resulting in large seasonal changes that exceed those in spreading grounds. It is assumed that these fluctuations are caused by watering practices required to sustain non-native vegetation. Conversely, high-intensity urbanized areas showed muted groundwater fluctuations and behavior decoupled from climate patterns. The results provide improved understanding of anthropogenic effects on groundwater levels, in addition to providing high-quality datasets for validation of regional groundwater models.

  14. Comparison of different methods for imputing genome-wide marker genotypes in Swedish and Finnish Red Cattle

    DEFF Research Database (Denmark)

    Ma, Peipei; Brøndum, Rasmus Froberg; Qin, Zhang

    2013-01-01

    This study investigated the imputation accuracy of different methods, considering both the minor allele frequency and relatedness between individuals in the reference and test data sets. Two data sets from the combined population of Swedish and Finnish Red Cattle were used to test the influence...... coefficient was lower when the minor allele frequency was lower. The results indicate that Beagle and IMPUTE2 provide the most robust and accurate imputation accuracies, but considering computing time and memory usage, FImpute is another alternative method....

  15. Imputing forest carbon stock estimates from inventory plots to a nationally continuous coverage

    Directory of Open Access Journals (Sweden)

    Wilson Barry Tyler

    2013-01-01

    The U.S. has been providing national-scale estimates of forest carbon (C) stocks and stock change to meet United Nations Framework Convention on Climate Change (UNFCCC) reporting requirements for years. Although these currently are provided as national estimates by pool and year to meet greenhouse gas monitoring requirements, there is growing need to disaggregate these estimates to finer scales to enable strategic forest management and monitoring activities focused on various ecosystem services such as C storage enhancement. Through application of a nearest-neighbor imputation approach, spatially extant estimates of forest C density were developed for the conterminous U.S. using the U.S.’s annual forest inventory. Results suggest that an existing forest inventory plot imputation approach can be readily modified to provide raster maps of C density across a range of pools (e.g., live tree to soil organic carbon) and spatial scales (e.g., sub-county to biome). Comparisons among imputed maps indicate strong regional differences across C pools. The C density of pools closely related to detrital input (e.g., dead wood) is often highest in forests suffering from recent mortality events such as those in the northern Rocky Mountains (e.g., beetle infestations). In contrast, live tree carbon density is often highest on the highest quality forest sites such as those found in the Pacific Northwest. Validation results suggest strong agreement between the estimates produced from the forest inventory plots and those from the imputed maps, particularly when the C pool is closely associated with the imputation model (e.g., aboveground live biomass and live tree basal area), with weaker agreement for detrital pools (e.g., standing dead trees). Forest inventory imputed plot maps provide an efficient and flexible approach to monitoring diverse C pools at national (e.g., UNFCCC) and regional scales (e.g., Reducing Emissions from Deforestation and Forest Degradation).

  16. Using the Superpopulation Model for Imputations and Variance Computation in Survey Sampling

    Directory of Open Access Journals (Sweden)

    Petr Novák

    2012-03-01

    This study is aimed at variance computation techniques for estimates of population characteristics based on survey sampling and imputation. We use the superpopulation regression model, which means that the target variable values for each statistical unit are treated as random realizations of a linear regression model with weighted variance. We focus on regression models with one auxiliary variable and no intercept, which have many applications and straightforward interpretation in business statistics. Furthermore, we deal with cases where the estimates are not independent and thus the covariance must be computed. We also consider chained regression models with auxiliary variables as random variables instead of constants.
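
    For the no-intercept model with variance proportional to the auxiliary variable, weighted least squares reduces to the classical ratio estimator, which also supplies the imputed values. A minimal Python sketch with simulated data (all values illustrative, not from the paper):

        import numpy as np

        rng = np.random.default_rng(2)

        # Model: y_k = beta * x_k + e_k with Var(e_k) proportional to x_k;
        # x is known for every unit, y only for respondents.
        n = 200
        x = rng.uniform(1, 10, size=n)
        y = 3.0 * x + rng.normal(0, np.sqrt(x))
        responded = rng.random(n) < 0.7

        # WLS with weights 1/x_k and no intercept gives the ratio estimator.
        beta_hat = y[responded].sum() / x[responded].sum()

        # Impute nonrespondents by their model prediction beta_hat * x_k.
        y_imp = y.copy()
        y_imp[~responded] = beta_hat * x[~responded]
        print(beta_hat, y_imp.mean())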

  17. A comprehensive evaluation of popular proteomics software workflows for label-free proteome quantification and imputation.

    Science.gov (United States)

    Välikangas, Tommi; Suomi, Tomi; Elo, Laura L

    2017-05-31

    Label-free mass spectrometry (MS) has developed into an important tool applied in various fields of the biological and life sciences. Several software packages exist to process the raw MS data into quantified protein abundances, including open-source and commercial solutions. Each package includes a set of unique algorithms for different tasks of the MS data processing workflow. While many of these algorithms have been compared separately, a thorough and systematic evaluation of their overall performance is missing. Moreover, systematic information is lacking about the number of missing values produced by the different proteomics software packages and the capabilities of different data imputation methods to account for them. In this study, we evaluated the performance of five popular quantitative label-free proteomics software workflows using four different spike-in data sets. Our extensive testing included the number of proteins quantified and the number of missing values produced by each workflow, the accuracy of detecting differential expression and logarithmic fold change, and the effect of different imputation and filtering methods on the differential expression results. We found that the Progenesis software performed consistently well in the differential expression analysis and produced few missing values. The missing values produced by the other software packages decreased their performance, but this difference could be mitigated using proper data filtering or imputation methods. Among the imputation methods, we found that local least squares (lls) regression imputation consistently increased the performance of the software in the differential expression analysis, and a combination of both data filtering and local least squares imputation increased performance the most in the tested data sets. © The Author 2017. Published by Oxford University Press.
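
    A minimal sketch of local least squares imputation may make the idea concrete: for each protein column with missing values, the most correlated neighbour proteins serve as regressors. This is a simplified illustration with simulated stand-in data, not the exact algorithm of the published lls implementations:

        import numpy as np

        def lls_impute(X, k=5):
            """Local least squares imputation sketch for a samples-by-proteins
            matrix X containing NaNs."""
            X = X.copy()
            col_means = np.nanmean(X, axis=0)
            filled = np.where(np.isnan(X), col_means, X)  # rough start for correlations
            corr = np.corrcoef(filled, rowvar=False)
            for j in range(X.shape[1]):
                miss = np.isnan(X[:, j])
                if not miss.any() or miss.all():
                    continue
                # k neighbour columns most correlated with the target column.
                order = np.argsort(-np.abs(corr[j]))
                neighbours = [c for c in order if c != j][:k]
                A = filled[~miss][:, neighbours]
                b = X[~miss, j]
                coef, *_ = np.linalg.lstsq(A, b, rcond=None)
                X[miss, j] = filled[miss][:, neighbours] @ coef
            return X

        # Example: a small expression matrix with 10% of entries missing.
        rng = np.random.default_rng(3)
        M = rng.normal(size=(30, 12))
        M[rng.random(M.shape) < 0.1] = np.nan
        print(np.isnan(lls_impute(M)).sum())   # 0 remaining missing values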

  18. Advancing US GHG Inventory by Incorporating Survey Data using Machine-Learning Techniques

    Science.gov (United States)

    Alsaker, C.; Ogle, S. M.; Breidt, J.

    2017-12-01

    Crop management data are used in the National Greenhouse Gas Inventory that is compiled annually and reported to the United Nations Framework Convention on Climate Change. Carbon stock change and N2O emissions for US agricultural soils are estimated using the USDA National Resources Inventory (NRI). The NRI provides basic information on land use and cropping histories, but it does not provide much detail on other management practices. In contrast, the Conservation Effects Assessment Project (CEAP) survey collects detailed crop management data that could be used in the GHG Inventory. The survey data were collected every 10 years from NRI survey locations that are a subset of the NRI. Therefore, imputation of the CEAP data is needed to represent the management practices across all NRI survey locations both spatially and temporally. Predictive mean matching and artificial neural network methods have been applied to develop the imputation model under a multiple imputation framework. Temporal imputation involves adjusting the imputation model using state-level USDA Agricultural Resource Management Survey data. Distributional and predictive accuracy is assessed for the imputed data, providing not only the management data needed for the inventory but also rigorous estimates of uncertainty.
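
    A bare-bones Python sketch of predictive mean matching, one of the two imputation engines named above (simulated stand-in data; a real application would embed such draws in a full multiple imputation framework):

        import numpy as np

        def pmm_impute(x, y, n_donors=5, rng=None):
            """Predictive mean matching sketch: regress y on x, then fill each
            missing y with an observed y drawn from the donors whose predicted
            means are closest to the missing case's predicted mean."""
            rng = rng or np.random.default_rng()
            miss = np.isnan(y)
            X = np.column_stack([np.ones_like(x), x])
            beta, *_ = np.linalg.lstsq(X[~miss], y[~miss], rcond=None)
            yhat = X @ beta
            y_obs = y[~miss]
            y_imp = y.copy()
            for i in np.where(miss)[0]:
                donors = np.argsort(np.abs(yhat[~miss] - yhat[i]))[:n_donors]
                y_imp[i] = y_obs[rng.choice(donors)]
            return y_imp

        # Example: impute a management variable from a correlated covariate.
        rng = np.random.default_rng(4)
        x = rng.normal(size=300)
        y = 2 * x + rng.normal(size=300)
        y[rng.random(300) < 0.3] = np.nan
        print(np.isnan(pmm_impute(x, y, rng=rng)).sum())   # 0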

  19. Accuracy of hemoglobin A1c imputation using fasting plasma glucose in diabetes research using electronic health records data

    Directory of Open Access Journals (Sweden)

    Stanley Xu

    2014-05-01

    In studies that use electronic health record data, imputation of important data elements such as glycated hemoglobin (A1c) has become common. However, few studies have systematically examined the validity of various imputation strategies for missing A1c values. We derived a complete dataset using an incident diabetes population that has no missing values in A1c, fasting and random plasma glucose (FPG and RPG), age, and gender. We then created missing A1c values under two assumptions: missing completely at random (MCAR) and missing at random (MAR). We then imputed A1c values, compared the imputed values to the true A1c values, and used these data to assess the impact of A1c on initiation of antihyperglycemic therapy. Under MCAR, imputation of A1c based on FPG (1) estimated a continuous A1c within ±1.88% of the true A1c 68.3% of the time, and (2) estimated a categorical A1c within ± one category from the true A1c about 50% of the time. Including RPG in imputation slightly improved the precision but did not improve the accuracy. Under MAR, including gender and age in addition to FPG improved the accuracy of imputed continuous A1c but not categorical A1c. Moreover, imputation of up to 33% of missing A1c values did not change the accuracy and precision and did not alter the impact of A1c on initiation of antihyperglycemic therapy. When using A1c values as a predictor variable, a simple imputation algorithm based only on age, sex, and fasting plasma glucose gave acceptable results.
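
    The FPG-based strategy evaluated here can be sketched as stochastic regression imputation under MCAR; the coefficients and data below are simulated stand-ins, not the study's fitted model:

        import numpy as np

        rng = np.random.default_rng(5)

        # Stand-in data: A1c (%) roughly linear in fasting plasma glucose (mg/dL).
        n = 2_000
        fpg = rng.normal(140, 35, size=n)
        a1c = 0.03 * fpg + 2.5 + rng.normal(0, 0.8, size=n)

        # Make 33% of A1c values missing completely at random (MCAR).
        observed = rng.random(n) >= 0.33

        # Stochastic regression imputation: fit A1c ~ FPG on complete cases,
        # then predict and add a residual draw to keep realistic spread.
        X = np.column_stack([np.ones(n), fpg])
        beta, *_ = np.linalg.lstsq(X[observed], a1c[observed], rcond=None)
        resid_sd = np.std(a1c[observed] - X[observed] @ beta, ddof=2)
        a1c_imp = a1c.copy()
        a1c_imp[~observed] = X[~observed] @ beta + rng.normal(0, resid_sd, (~observed).sum())

        # Accuracy check against the held-out truth, as in the study's design.
        within = np.abs(a1c_imp[~observed] - a1c[~observed]) <= 1.0
        print(f"imputed within ±1.0% of truth: {within.mean():.1%}")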

  20. Multiple imputation using linked proxy outcome data resulted in important bias reduction and efficiency gains: a simulation study.

    Science.gov (United States)

    Cornish, R P; Macleod, J; Carpenter, J R; Tilling, K

    2017-01-01

    When an outcome variable is missing not at random (MNAR: probability of missingness depends on outcome values), estimates of the effect of an exposure on this outcome are often biased. We investigated the extent of this bias and examined whether the bias can be reduced through incorporating proxy outcomes obtained through linkage to administrative data as auxiliary variables in multiple imputation (MI). Using data from the Avon Longitudinal Study of Parents and Children (ALSPAC) we estimated the association between breastfeeding and IQ (continuous outcome), incorporating linked attainment data (proxies for IQ) as auxiliary variables in MI models. Simulation studies explored the impact of varying the proportion of missing data (from 20 to 80%), the correlation between the outcome and its proxy (0.1-0.9), the strength of the missing data mechanism, and having a proxy variable that was incomplete. Incorporating a linked proxy for the missing outcome as an auxiliary variable reduced bias and increased efficiency in all scenarios, even when 80% of the outcome was missing. Using an incomplete proxy was similarly beneficial. High correlations (> 0.5) between the outcome and its proxy substantially reduced the missing information. Consistent with this, ALSPAC analysis showed inclusion of a proxy reduced bias and improved efficiency. Gains with additional proxies were modest. In longitudinal studies with loss to follow-up, incorporating proxies for this study outcome obtained via linkage to external sources of data as auxiliary variables in MI models can give practically important bias reduction and efficiency gains when the study outcome is MNAR.
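
    A minimal Python sketch of this design using scikit-learn's chained-equations imputer, with a simulated proxy included as an auxiliary variable (names and effect sizes are illustrative, not ALSPAC values):

        import numpy as np
        from sklearn.experimental import enable_iterative_imputer  # noqa: F401
        from sklearn.impute import IterativeImputer

        rng = np.random.default_rng(6)

        # Stand-in data: an exposure, an outcome that is 50% missing, and a
        # linked proxy correlated with the outcome (attainment as a proxy for IQ).
        n = 1_000
        exposure = rng.normal(size=n)
        outcome = 0.4 * exposure + rng.normal(size=n)
        proxy = outcome + rng.normal(0, 1.0, size=n)
        y = outcome.copy()
        y[rng.random(n) < 0.5] = np.nan

        # Chained-equations MI with the proxy as an auxiliary variable;
        # sample_posterior=True yields proper draws for multiple imputation.
        slopes = []
        for i in range(10):
            imp = IterativeImputer(sample_posterior=True, random_state=i)
            filled = imp.fit_transform(np.column_stack([exposure, y, proxy]))
            slopes.append(np.polyfit(filled[:, 0], filled[:, 1], 1)[0])

        print(np.mean(slopes))   # pooled exposure effect across imputations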

  1. Comparing strategies for selection of low-density SNPs for imputation-mediated genomic prediction in U. S. Holsteins.

    Science.gov (United States)

    He, Jun; Xu, Jiaqi; Wu, Xiao-Lin; Bauck, Stewart; Lee, Jungjae; Morota, Gota; Kachman, Stephen D; Spangler, Matthew L

    2018-04-01

    SNP chips are commonly used for genotyping animals in genomic selection but strategies for selecting low-density (LD) SNPs for imputation-mediated genomic selection have not been addressed adequately. The main purpose of the present study was to compare the performance of eight LD (6K) SNP panels, each selected by a different strategy exploiting a combination of three major factors: evenly-spaced SNPs, increased minor allele frequencies, and SNP-trait associations either for single traits independently or for all the three traits jointly. The imputation accuracies from 6K to 80K SNP genotypes were between 96.2 and 98.2%. Genomic prediction accuracies obtained using imputed 80K genotypes were between 0.817 and 0.821 for daughter pregnancy rate, between 0.838 and 0.844 for fat yield, and between 0.850 and 0.863 for milk yield. The two SNP panels optimized on the three major factors had the highest genomic prediction accuracy (0.821-0.863), and these accuracies were very close to those obtained using observed 80K genotypes (0.825-0.868). Further exploration of the underlying relationships showed that genomic prediction accuracies did not respond linearly to imputation accuracies, but were significantly affected by genotype (imputation) errors of SNPs in association with the traits to be predicted. SNPs optimal for map coverage and MAF were favorable for obtaining accurate imputation of genotypes whereas trait-associated SNPs improved genomic prediction accuracies. Thus, optimal LD SNP panels were the ones that combined both strengths. The present results have practical implications on the design of LD SNP chips for imputation-enabled genomic prediction.

  2. Missing in space: an evaluation of imputation methods for missing data in spatial analysis of risk factors for type II diabetes.

    Science.gov (United States)

    Baker, Jannah; White, Nicole; Mengersen, Kerrie

    2014-11-20

    Spatial analysis is increasingly important for identifying modifiable geographic risk factors for disease. However, spatial health data from surveys are often incomplete, ranging from missing data for only a few variables to missing data for many variables. For spatial analyses of health outcomes, selection of an appropriate imputation method is critical in order to produce the most accurate inferences. We present a cross-validation approach to select between three imputation methods for health survey data with correlated lifestyle covariates, using as a case study type II diabetes mellitus (DM II) risk across 71 Queensland Local Government Areas (LGAs). We compare the accuracy of mean imputation to imputation using multivariate normal and conditional autoregressive prior distributions. The best choice of imputation method depends upon the application and is not necessarily the most complex method. Mean imputation was selected as the most accurate method in this application. Selecting an appropriate imputation method for health survey data, after accounting for spatial correlation and correlation between covariates, allows more complete analysis of geographic risk factors for disease with more confidence in the results to inform public policy decision-making.

  3. Multiple-energy Techniques in Industrial Computerized Tomography

    Science.gov (United States)

    Schneberk, D.; Martz, H.; Azevedo, S.

    1990-08-01

    Considerable effort is being applied to develop multiple-energy industrial CT techniques for materials characterization. Multiple-energy CT can provide reliable estimates of effective Z (Z_eff), weight fraction, and rigorous calculations of absolute density, all at the spatial resolution of the scanner. Currently, a wide variety of techniques exist for CT scanners, but each has certain problems and limitations. Ultimately, the best multi-energy CT technique would combine the qualities of accuracy, reliability, and wide range of application, and would require the smallest number of additional measurements. We have developed techniques for calculating material properties of industrial objects that differ somewhat from currently used methods. In this paper, we present our methods for calculating Z_eff, weight fraction, and density. We begin with the simplest case -- methods for multiple-energy CT using isotopic sources -- and proceed to multiple-energy work with x-ray machine sources. The methods discussed here are illustrated on CT scans of PBX-9502 high explosives, a lexan-aluminum phantom, and a cylinder of glass beads used in a preliminary study to determine if CT can resolve three phases: air, water, and a high-Z oil. In the CT project at LLNL, we have constructed several CT scanners of varying scanning geometries using γ- and x-ray sources. In our research, we employed two of these scanners: pencil-beam CAT for CT data using isotopic sources and video-CAT equipped with an IRT micro-focal x-ray machine source.

  4. Imputation of microsatellite alleles from dense SNP genotypes for parental verification

    Directory of Open Access Journals (Sweden)

    Matthew McClure

    2012-08-01

    Microsatellite (MS) markers have recently been used for parental verification and are still the international standard despite higher cost, error rate, and turnaround time compared with Single Nucleotide Polymorphism (SNP)-based assays. Despite domestic and international interest from producers and research communities, no viable means currently exist to verify parentage for an individual unless all familial connections were analyzed using the same DNA marker type (MS or SNP). A simple and cost-effective method was devised to impute MS alleles from SNP haplotypes within breeds. For some MS, imputation results may allow inference across breeds. A total of 347 dairy cattle representing 4 dairy breeds (Brown Swiss, Guernsey, Holstein, and Jersey) were used to generate reference haplotypes. This approach has been verified (>98% accurate) for imputing the International Society of Animal Genetics (ISAG) recommended panel of 12 MS for cattle parentage verification across a validation set of 1,307 dairy animals. Implementation of this method will allow producers and breed associations to transition to SNP-based parentage verification utilizing MS genotypes from historical data on parents where SNP genotypes are missing. This approach may be applicable to additional cattle breeds and other species that wish to migrate from MS- to SNP-based parental verification.

  5. ParaHaplo 3.0: A program package for imputation and a haplotype-based whole-genome association study using hybrid parallel computing

    Directory of Open Access Journals (Sweden)

    Kamatani Naoyuki

    2011-05-01

    Background: The use of missing genotype imputation and haplotype reconstruction is valuable in genome-wide association studies (GWASs). By modeling the patterns of linkage disequilibrium in a reference panel, genotypes not directly measured in the study samples can be imputed and used for GWASs. Since millions of single nucleotide polymorphisms need to be imputed in a GWAS, faster methods for genotype imputation and haplotype reconstruction are required. Results: We developed a program package for parallel computation of genotype imputation and haplotype reconstruction. Our program package, ParaHaplo 3.0, is intended for use in workstation clusters using the Intel Message Passing Interface. We compared the performance of ParaHaplo 3.0 on the Japanese in Tokyo, Japan, and Han Chinese in Beijing, China, samples in the HapMap dataset. A parallel version of ParaHaplo 3.0 can conduct genotype imputation 20 times faster than a non-parallel version of ParaHaplo. Conclusions: ParaHaplo 3.0 is an invaluable tool for conducting haplotype-based GWASs. The need for faster genotype imputation and haplotype reconstruction using parallel computing will become increasingly important as the data sizes of such projects continue to increase. ParaHaplo executable binaries and program sources are available at http://en.sourceforge.jp/projects/parallelgwas/releases/.

  6. Saturated linkage map construction in Rubus idaeus using genotyping by sequencing and genome-independent imputation

    Directory of Open Access Journals (Sweden)

    Ward Judson A

    2013-01-01

    Background: Rapid development of highly saturated genetic maps aids molecular breeding, which can accelerate gain per breeding cycle in woody perennial plants such as Rubus idaeus (red raspberry). Recently, robust genotyping methods based on high-throughput sequencing were developed, which provide high marker density but result in some genotype errors and a large number of missing genotype values. Imputation can reduce the number of missing values and can correct genotyping errors, but current methods of imputation require a reference genome and thus are not an option for most species. Results: Genotyping by Sequencing (GBS) was used to produce highly saturated maps for a R. idaeus pseudo-testcross progeny. While low coverage and high variance in sequencing resulted in a large number of missing values for some individuals, a novel method of imputation based on maximum likelihood marker ordering from initial marker segregation overcame the challenge of missing values and made map construction computationally tractable. The two resulting parental maps contained 4521 and 2391 molecular markers spanning 462.7 and 376.6 cM respectively over seven linkage groups. Detection of precise genomic regions with segregation distortion was possible because of map saturation. Microsatellites (SSRs) linked these results to published maps for cross-validation and map comparison. Conclusions: GBS together with genome-independent imputation provides a rapid method for genetic map construction in any pseudo-testcross progeny. Our method of imputation estimates the correct genotype call of missing values and corrects genotyping errors that lead to inflated map size and reduced precision in marker placement. Comparison of SSRs to published R. idaeus maps showed that the linkage maps constructed with GBS and our method of imputation were robust, and marker positioning reliable. The high marker density allowed identification of genomic regions with segregation distortion.

  7. Simple nuclear norm based algorithms for imputing missing data and forecasting in time series

    OpenAIRE

    Butcher, Holly Louise; Gillard, Jonathan William

    2017-01-01

    There has been much recent progress on the use of the nuclear norm for the so-called matrix completion problem (the problem of imputing missing values of a matrix). In this paper we investigate the use of the nuclear norm for modelling time series, with particular attention to imputing missing data and forecasting. We introduce a simple alternating projections type algorithm based on the nuclear norm for these tasks, and consider a number of practical examples.
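
    One standard way to make the nuclear-norm idea concrete is the soft-impute iteration: alternately fill the missing entries with the current low-rank estimate and soft-threshold the singular values. The Python sketch below is illustrative and not the authors' exact alternating projections algorithm:

        import numpy as np

        def soft_impute(X, lam=1.0, n_iter=100):
            """Nuclear-norm-style matrix completion sketch: fill missing entries
            with the current estimate, then soft-threshold singular values."""
            miss = np.isnan(X)
            Z = np.where(miss, 0.0, X)
            for _ in range(n_iter):
                U, s, Vt = np.linalg.svd(Z, full_matrices=False)
                Z_low = U @ np.diag(np.maximum(s - lam, 0.0)) @ Vt
                Z = np.where(miss, Z_low, X)   # keep observed entries fixed
            return Z

        # Example: a noisy rank-2 matrix (series in columns) with 20% gaps.
        rng = np.random.default_rng(7)
        M = rng.normal(size=(60, 2)) @ rng.normal(size=(2, 8)) + 0.1 * rng.normal(size=(60, 8))
        M_obs = M.copy()
        M_obs[rng.random(M.shape) < 0.2] = np.nan
        M_hat = soft_impute(M_obs, lam=0.5)
        print(np.sqrt(np.mean((M_hat - M) ** 2)))   # reconstruction RMSE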

  8. A spatial haplotype copying model with applications to genotype imputation.

    Science.gov (United States)

    Yang, Wen-Yun; Hormozdiari, Farhad; Eskin, Eleazar; Pasaniuc, Bogdan

    2015-05-01

    Ever since its introduction, the haplotype copy model has proven to be one of the most successful approaches for modeling genetic variation in human populations, with applications ranging from ancestry inference to genotype phasing and imputation. Motivated by coalescent theory, this approach assumes that any chromosome (haplotype) can be modeled as a mosaic of segments copied from a set of chromosomes sampled from the same population. At the core of the model is the assumption that any chromosome from the sample is equally likely to contribute a priori to the copying process. Motivated by recent works that model genetic variation in a geographic continuum, we propose a new spatial-aware haplotype copy model that jointly models geography and the haplotype copying process. We extend hidden Markov models of haplotype diversity such that at any given location, haplotypes that are closest in the genetic-geographic continuum map are a priori more likely to contribute to the copying process than distant ones. Through simulations starting from the 1000 Genomes data, we show that our model achieves superior accuracy in genotype imputation over the standard spatial-unaware haplotype copy model. In addition, we show the utility of our model in selecting a small personalized reference panel for imputation that leads to both improved accuracy as well as to a lower computational runtime than the standard approach. Finally, we show our proposed model can be used to localize individuals on the genetic-geographical map on the basis of their genotype data.

  9. Data Editing and Imputation in Business Surveys Using “R”

    Directory of Open Access Journals (Sweden)

    Elena Romascanu

    2014-06-01

    Purpose – Missing data are a recurring problem that can cause bias or lead to inefficient analyses. The objective of this paper is a direct comparison between the features of two statistical software packages, R and SPSS, in order to take full advantage of existing automated methods for the data editing process and imputation in business surveys (with a proper design of consistency rules) as a partial alternative to the manual editing of data. Approach – We compare different methods for editing survey data in R using the 'editrules' and 'survey' packages, which contain transformations commonly used in official statistics, visualize missing-value patterns using the 'Amelia' and 'VIM' packages, apply imputation approaches for longitudinal data using 'VIMGUI', and compare the performance of another statistical software package, SPSS, on the same features. Findings – Data on business statistics received by NISs (National Institutes of Statistics) are not ready to be used for direct analysis due to in-record inconsistencies, errors, and missing values in the collected data sets. The appropriate automated methods from R packages offer the ability to flag the erroneous fields in edit-violating records and to verify the results after the imputation of missing values, providing users a flexible, less time-consuming approach that is easier to automate in R than with SPSS macro syntax.

  10. Nonparametric autocovariance estimation from censored time series by Gaussian imputation.

    Science.gov (United States)

    Park, Jung Wook; Genton, Marc G; Ghosh, Sujit K

    2009-02-01

    One of the most frequently used methods to model the autocovariance function of a second-order stationary time series is to use the parametric framework of autoregressive and moving average models developed by Box and Jenkins. However, such parametric models, though very flexible, may not always be adequate to model autocovariance functions with sharp changes. Furthermore, if the data do not follow the parametric model and are censored at a certain value, the estimation results may not be reliable. We develop a Gaussian imputation method to estimate an autocovariance structure via nonparametric estimation of the autocovariance function in order to address both censoring and incorrect model specification. We demonstrate the effectiveness of the technique in terms of bias and efficiency with simulations under various rates of censoring and underlying models. We describe its application to a time series of silicon concentrations in the Arctic.
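
    A simplified Python sketch of the setting: impute left-censored observations from a truncated normal and then estimate the autocovariance nonparametrically. The paper's method conditions the imputations on the fitted Gaussian dependence structure; the marginal draws below are purely illustrative:

        import numpy as np
        from scipy.stats import truncnorm

        rng = np.random.default_rng(8)

        # Stand-in AR(1) series, left-censored at a detection limit c.
        n, phi, c = 500, 0.7, -0.5
        x = np.zeros(n)
        for t in range(1, n):
            x[t] = phi * x[t - 1] + rng.normal(0, np.sqrt(1 - phi ** 2))
        censored = x < c
        y = np.where(censored, c, x)

        # Gaussian imputation sketch: draw censored values from a normal fitted
        # to the observed part, truncated to lie below the detection limit.
        mu, sd = y[~censored].mean(), y[~censored].std(ddof=1)
        a, b = -np.inf, (c - mu) / sd       # standardized truncation bounds
        y_imp = y.copy()
        y_imp[censored] = truncnorm.rvs(a, b, loc=mu, scale=sd,
                                        size=censored.sum(), random_state=9)

        # Nonparametric autocovariance estimate from the imputed series.
        def acov(z, max_lag=10):
            z = z - z.mean()
            return np.array([np.mean(z[:len(z) - h] * z[h:]) for h in range(max_lag + 1)])

        print(acov(y_imp)[:4])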

  11. Estimating Classification Errors Under Edit Restrictions in Composite Survey-Register Data Using Multiple Imputation Latent Class Modelling (MILC)

    Directory of Open Access Journals (Sweden)

    Boeschoten Laura

    2017-12-01

    Both registers and surveys can contain classification errors. These errors can be estimated by making use of a composite data set. We propose a new method based on latent class modelling to estimate the number of classification errors across several sources while taking into account impossible combinations with scores on other variables. Furthermore, the latent class model, by multiply imputing a new variable, enhances the quality of statistics based on the composite data set. The performance of this method is investigated by a simulation study, which shows that whether or not the method can be applied depends on the entropy R² of the latent class model and the type of analysis a researcher is planning to do. Finally, the method is applied to public data from Statistics Netherlands.

  12. Improved Ancestry Estimation for both Genotyping and Sequencing Data using Projection Procrustes Analysis and Genotype Imputation

    Science.gov (United States)

    Wang, Chaolong; Zhan, Xiaowei; Liang, Liming; Abecasis, Gonçalo R.; Lin, Xihong

    2015-01-01

    Accurate estimation of individual ancestry is important in genetic association studies, especially when a large number of samples are collected from multiple sources. However, existing approaches developed for genome-wide SNP data do not work well with modest amounts of genetic data, such as in targeted sequencing or exome chip genotyping experiments. We propose a statistical framework to estimate individual ancestry in a principal component ancestry map generated by a reference set of individuals. This framework extends and improves upon our previous method for estimating ancestry using low-coverage sequence reads (LASER 1.0) to analyze either genotyping or sequencing data. In particular, we introduce a projection Procrustes analysis approach that uses high-dimensional principal components to estimate ancestry in a low-dimensional reference space. Using extensive simulations and empirical data examples, we show that our new method (LASER 2.0), combined with genotype imputation on the reference individuals, can substantially outperform LASER 1.0 in estimating fine-scale genetic ancestry. Specifically, LASER 2.0 can accurately estimate fine-scale ancestry within Europe using either exome chip genotypes or targeted sequencing data with off-target coverage as low as 0.05×. Under the framework of LASER 2.0, we can estimate individual ancestry in a shared reference space for samples assayed at different loci or by different techniques. Therefore, our ancestry estimation method will accelerate discovery in disease association studies not only by helping model ancestry within individual studies but also by facilitating combined analysis of genetic data from multiple sources. PMID:26027497
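
    Ordinary Procrustes alignment, the building block that the projection variant extends, is available in SciPy; a sketch with simulated coordinates (LASER 2.0 itself applies projection Procrustes to high-dimensional principal components):

        import numpy as np
        from scipy.spatial import procrustes

        rng = np.random.default_rng(10)

        # Stand-in PCA coordinates: a reference ancestry map (n x 2) and the same
        # individuals' coordinates from a second analysis, rotated and rescaled.
        ref = rng.normal(size=(100, 2))
        theta = 0.8
        R = np.array([[np.cos(theta), -np.sin(theta)],
                      [np.sin(theta),  np.cos(theta)]])
        other = 2.5 * ref @ R + rng.normal(0, 0.05, size=ref.shape)

        # Procrustes analysis finds the rotation/scaling/translation that best
        # aligns the two maps; disparity measures the residual mismatch.
        m1, m2, disparity = procrustes(ref, other)
        print(disparity)   # near 0 => the maps agree after alignment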

  13. RIDDLE: Race and ethnicity Imputation from Disease history with Deep LEarning

    KAUST Repository

    Kim, Ji-Sung; Gao, Xin; Rzhetsky, Andrey

    2018-01-01

    are predictive of race and ethnicity. We used these characterizations of informative features to perform a systematic comparison of differential disease patterns by race and ethnicity. The fact that clinical histories are informative for imputing race

  14. Effect of imputing markers from a low-density chip on the reliability of genomic breeding values in Holstein populations

    DEFF Research Database (Denmark)

    Dassonneville, R; Brøndum, Rasmus Froberg; Druet, T

    2011-01-01

    The purpose of this study was to investigate the imputation error and loss of reliability of direct genomic values (DGV) or genomically enhanced breeding values (GEBV) when using genotypes imputed from a 3,000-marker single nucleotide polymorphism (SNP) panel to a 50,000-marker SNP panel. Data...... of missing markers and prediction of breeding values were performed using 2 different reference populations in each country: either a national reference population or a combined EuroGenomics reference population. Validation for accuracy of imputation and genomic prediction was done based on national test...... with a national reference data set gave an absolute loss of 0.05 in mean reliability of GEBV in the French study, whereas a loss of 0.03 was obtained for reliability of DGV in the Nordic study. When genotypes were imputed using the EuroGenomics reference, a loss of 0.02 in mean reliability of GEBV was detected...

  15. Imputation by the mean score should be avoided when validating a Patient Reported Outcomes questionnaire by a Rasch model in presence of informative missing data

    LENUS (Irish Health Repository)

    Hardouin, Jean-Benoit

    2011-07-14

    Background: Nowadays, more and more clinical scales consisting of responses given by patients to a set of items (Patient Reported Outcomes - PRO) are validated with models based on Item Response Theory, and more specifically, with a Rasch model. In the validation sample, the presence of missing data is frequent. The aim of this paper is to compare sixteen methods for handling missing data (mainly based on simple imputation) in the context of psychometric validation of PRO by a Rasch model. The main indexes used for validation by a Rasch model are compared. Methods: A simulation study was performed allowing several cases to be considered, notably the possibility for the missing values to be informative or not and the rate of missing data. Results: Several imputation methods produce bias in psychometrical indexes (generally, the imputation methods artificially improve the psychometric qualities of the scale). In particular, this is the case with the method based on the Personal Mean Score (PMS), which is the most commonly used imputation method in practice. Conclusions: Several imputation methods should be avoided, in particular PMS imputation. From a general point of view, it is important to use an imputation method that considers both the ability of the patient (measured for example by his/her score) and the difficulty of the item (measured for example by its rate of favourable responses). Another recommendation is to always consider the addition of a random process in the imputation method, because such a process allows reducing the bias. Last, the analysis realized without imputation of the missing data (available case analyses) is an interesting alternative to simple imputation in this context.

  16. Construction and application of a Korean reference panel for imputing classical alleles and amino acids of human leukocyte antigen genes.

    Science.gov (United States)

    Kim, Kwangwoo; Bang, So-Young; Lee, Hye-Soon; Bae, Sang-Cheol

    2014-01-01

    Genetic variations of human leukocyte antigen (HLA) genes within the major histocompatibility complex (MHC) locus are strongly associated with disease susceptibility and prognosis for many diseases, including many autoimmune diseases. In this study, we developed a Korean HLA reference panel for imputing classical alleles and amino acid residues of several HLA genes. An HLA reference panel has potential for use in identifying and fine-mapping disease associations with the MHC locus in East Asian populations, including Koreans. A total of 413 unrelated Korean subjects were analyzed for single nucleotide polymorphisms (SNPs) at the MHC locus and six HLA genes, including HLA-A, -B, -C, -DRB1, -DPB1, and -DQB1. The HLA reference panel was constructed by phasing the 5,858 MHC SNPs, 233 classical HLA alleles, and 1,387 amino acid residue markers from 1,025 amino acid positions as binary variables. The imputation accuracy of the HLA reference panel was assessed by measuring concordance rates between imputed and genotyped alleles of the HLA genes from a subset of the study subjects and East Asian HapMap individuals. Average concordance rates were 95.6% and 91.1% at 2-digit and 4-digit allele resolutions, respectively. The imputation accuracy was minimally affected by SNP density of a test dataset for imputation. In conclusion, the Korean HLA reference panel we developed was highly suitable for imputing HLA alleles and amino acids from MHC SNPs in East Asians, including Koreans.

  17. DTW-APPROACH FOR UNCORRELATED MULTIVARIATE TIME SERIES IMPUTATION

    OpenAIRE

    Phan , Thi-Thu-Hong; Poisson Caillault , Emilie; Bigand , André; Lefebvre , Alain

    2017-01-01

    Missing data are inevitable in almost all domains of applied science. Data analysis with missing values can lead to a loss of efficiency and unreliable results, especially for large missing sub-sequence(s). Some well-known methods for multivariate time series imputation require high correlations between series or their features. In this paper, we propose an approach based on the shape-behaviour relation in low/un-correlated multivariate time series under an assumption of...
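
    For reference, the dynamic time warping distance at the heart of such shape-based matching can be computed with the classic dynamic program; a short Python sketch (illustrative only, not the authors' implementation):

        import numpy as np

        def dtw_distance(a, b):
            """Classic dynamic time warping distance between two 1-D sequences,
            the similarity measure underpinning shape-based imputation."""
            n, m = len(a), len(b)
            D = np.full((n + 1, m + 1), np.inf)
            D[0, 0] = 0.0
            for i in range(1, n + 1):
                for j in range(1, m + 1):
                    cost = abs(a[i - 1] - b[j - 1])
                    D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
            return D[n, m]

        # Example: find the candidate window whose shape best matches a query,
        # e.g. to fill a gap from the most similar stretch of another series.
        t = np.linspace(0, 4 * np.pi, 80)
        query = np.sin(t[:20])
        candidates = [np.sin(t[i:i + 20] + 0.2) for i in range(0, 60, 5)]
        best = min(range(len(candidates)),
                   key=lambda i: dtw_distance(query, candidates[i]))
        print(best)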

  18. Using mi impute chained to fit ANCOVA models in randomized trials with censored dependent and independent variables

    DEFF Research Database (Denmark)

    Andersen, Andreas; Rieckmann, Andreas

    2016-01-01

    In this article, we illustrate how to use mi impute chained with intreg to fit an analysis of covariance (ANCOVA) model to censored and nondetectable immunological concentrations measured in a randomized pretest–posttest design.

  19. Evaluating geographic imputation approaches for zip code level data: an application to a study of pediatric diabetes

    Directory of Open Access Journals (Sweden)

    Puett Robin C

    2009-10-01

    Background: There is increasing interest in the study of place effects on health, facilitated in part by geographic information systems. Incomplete or missing address information reduces geocoding success. Several geographic imputation methods have been suggested to overcome this limitation. Accuracy evaluation of these methods can be focused at the level of individuals and at higher group levels (e.g., spatial distribution). Methods: We evaluated the accuracy of eight geo-imputation methods for address allocation from ZIP codes to census tracts at the individual and group level. The spatial apportioning approaches underlying the imputation methods included four fixed (deterministic) and four random (stochastic) allocation methods using land area, total population, population under age 20, and race/ethnicity as weighting factors. Data included more than 2,000 geocoded cases of diabetes mellitus among youth aged 0-19 in four U.S. regions. The imputed distribution of cases across tracts was compared to the true distribution using a chi-squared statistic. Results: At the individual level, population-weighted (total or under age 20) fixed allocation showed the greatest level of accuracy, with correct census tract assignments averaging 30.01% across all regions, followed by the race/ethnicity-weighted random method (23.83%). The true distribution of cases across census tracts was that 58.2% of tracts exhibited no cases, 26.2% had one case, 9.5% had two cases, and less than 3% had three or more. This distribution was best captured by random allocation methods, with no significant differences (p-value > 0.90). However, significant differences in distributions based on fixed allocation methods were found. Conclusion: Fixed imputation methods seemed to yield the greatest accuracy at the individual level, suggesting use for studies on area-level environmental exposures. Fixed methods result in artificial clusters in single census tracts. For studies...
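
    The fixed-versus-random contrast is easy to see in a few lines of Python (the tract weights and case counts below are invented for illustration):

        import numpy as np

        rng = np.random.default_rng(11)

        # Stand-in setting: one ZIP code overlapping 4 census tracts, with tract
        # population weights, and 20 cases geocoded only to the ZIP code.
        weights = np.array([0.50, 0.30, 0.15, 0.05])
        n_cases = 20

        # Fixed (deterministic) allocation: every case goes to the highest-weight
        # tract -- most accurate per case, but piles cases into a single tract.
        fixed = np.full(n_cases, weights.argmax())

        # Random (stochastic) allocation: draw each case's tract with probability
        # proportional to the weights -- better recovers the spatial distribution.
        random_alloc = rng.choice(len(weights), size=n_cases, p=weights)

        print(np.bincount(fixed, minlength=4), np.bincount(random_alloc, minlength=4))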

  1. Multiple Access Techniques for Next Generation Wireless: Recent Advances and Future Perspectives

    Directory of Open Access Journals (Sweden)

    Shree Krishna Sharma

    2016-01-01

    The advances in multiple access techniques have been one of the key drivers in moving from one cellular generation to another. Starting from the first generation, several multiple access techniques have been explored in different generations, and various emerging multiplexing/multiple access techniques are being investigated for the next generation of cellular networks. In this context, this paper first provides a detailed review of the existing Space Division Multiple Access (SDMA)-related work. Subsequently, it highlights the main features and the drawbacks of various existing and emerging multiplexing/multiple access techniques. Finally, we propose a novel concept of clustered orthogonal signature division multiple access for the next generation of cellular networks. The proposed concept envisions employing joint antenna coding in order to enhance the orthogonality of SDMA beams with the objective of enhancing the spectral efficiency of future cellular networks.

  2. Multiple sectioning and perforation techniques for TEM sub-surface studies

    International Nuclear Information System (INIS)

    Lee, E.H.; Rowcliffe, A.F.

    1978-01-01

    Techniques for preparing multiple electron transparent regions at several depth levels below the surface of a metal disk specimen are described. These techniques are relatively rapid and find application in many areas involving surface studies. Examples are shown of multiple thin areas produced at intervals of approximately 200 nm below the original surface of a stainless steel bombarded with 4 MeV Ni2+ ions for void swelling studies.

  3. FCMPSO: An Imputation for Missing Data Features in Heart Disease Classification

    Science.gov (United States)

    Salleh, Mohd Najib Mohd; Ashikin Samat, Nurul

    2017-08-01

    The application of data mining and machine learning to direct clinical research toward possible hidden knowledge is becoming greatly influential in medical areas. Heart disease is a killer disease around the world, and early prevention through efficient methods can help to reduce the mortality rate. Medical data may contain many uncertainties, as they are fuzzy and vague in nature. Imprecise feature data, such as null or missing values, can affect the quality of classification results; nevertheless, the remaining complete features are still capable of providing useful information. Therefore, an imputation approach based on Fuzzy C-Means and Particle Swarm Optimization (FCMPSO) is developed in the preprocessing stage to help fill in the missing values. Then, the completed dataset is used to train a classification algorithm, a Decision Tree. The experiment is run on the Heart Disease dataset and the performance is analysed using accuracy, precision, and ROC values. Results show that the performance of the Decision Tree is increased after the application of FCMPSO for imputation.

  4. Improved Correction of Misclassification Bias With Bootstrap Imputation.

    Science.gov (United States)

    van Walraven, Carl

    2018-07-01

    Diagnostic codes used in administrative database research can create bias due to misclassification. Quantitative bias analysis (QBA) can correct for this bias, requires only code sensitivity and specificity, but may return invalid results. Bootstrap imputation (BI) can also address misclassification bias but traditionally requires multivariate models to accurately estimate disease probability. This study compared misclassification bias correction using QBA and BI. Serum creatinine measures were used to determine severe renal failure status in 100,000 hospitalized patients. Prevalence of severe renal failure in 86 patient strata and its association with 43 covariates was determined and compared with results in which renal failure status was determined using diagnostic codes (sensitivity 71.3%, specificity 96.2%). Differences in results (misclassification bias) were then corrected with QBA or BI (using progressively more complex methods to estimate disease probability). In total, 7.4% of patients had severe renal failure. Imputing disease status with diagnostic codes exaggerated prevalence estimates [median relative change (range), 16.6% (0.8%-74.5%)] and its association with covariates [median (range) exponentiated absolute parameter estimate difference, 1.16 (1.01-2.04)]. QBA produced invalid results 9.3% of the time and increased bias in estimates of both disease prevalence and covariate associations. BI decreased misclassification bias with increasingly accurate disease probability estimates. QBA can produce invalid results and increase misclassification bias. BI avoids invalid results and can importantly decrease misclassification bias when accurate disease probability estimates are used.
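
    The "invalid results" failure mode of QBA follows from the standard prevalence correction formula; in the Python sketch below, the sensitivity and specificity are the study's reported code accuracy, while the observed prevalences are invented:

        def qba_prevalence(p_obs, sensitivity, specificity):
            """Classic quantitative bias analysis correction of an observed
            prevalence for misclassification; can fall outside [0, 1] (the
            'invalid results' noted above) when the codes' accuracy is
            incompatible with the observed prevalence."""
            return (p_obs + specificity - 1) / (sensitivity + specificity - 1)

        # With the study's code accuracy (sensitivity 71.3%, specificity 96.2%):
        print(qba_prevalence(0.09, 0.713, 0.962))   # plausible corrected prevalence
        print(qba_prevalence(0.02, 0.713, 0.962))   # negative => invalid result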

  5. Estimating past hepatitis C infection risk from reported risk factor histories: implications for imputing age of infection and modeling fibrosis progression

    Directory of Open Access Journals (Sweden)

    Busch Michael P

    2007-12-01

    Background: Chronic hepatitis C virus infection is prevalent and often causes hepatic fibrosis, which can progress to cirrhosis and cause liver cancer or liver failure. Study of fibrosis progression often relies on imputing the time of infection, often as the reported age of first injection drug use. We sought to examine the accuracy of such imputation and implications for modeling factors that influence progression rates. Methods: We analyzed cross-sectional data on hepatitis C antibody status and reported risk factor histories from two large studies, the Women's Interagency HIV Study and the Urban Health Study, using modern survival analysis methods for current status data to model past infection risk year by year. We compared fitted distributions of past infection risk to reported age of first injection drug use. Results: Although injection drug use appeared to be a very strong risk factor, models for both studies showed that many subjects had considerable probability of having been infected substantially before or after their reported age of first injection drug use. Persons reporting younger age of first injection drug use were more likely to have been infected after, and persons reporting older age of first injection drug use were more likely to have been infected before. Conclusion: In cross-sectional studies of fibrosis progression where date of HCV infection is estimated from risk factor histories, modern methods such as multiple imputation should be used to account for the substantial uncertainty about when infection occurred. The models presented here can provide the inputs needed by such methods. Using reported age of first injection drug use as the time of infection in studies of fibrosis progression is likely to produce a spuriously strong association of younger age of infection with slower rate of progression.

  6. Using imputed genotype data in the joint score tests for genetic association and gene-environment interactions in case-control studies.

    Science.gov (United States)

    Song, Minsun; Wheeler, William; Caporaso, Neil E; Landi, Maria Teresa; Chatterjee, Nilanjan

    2018-03-01

    Genome-wide association studies (GWAS) are now routinely imputed for untyped single nucleotide polymorphisms (SNPs) based on various powerful statistical algorithms for imputation trained on reference datasets. The use of predicted allele counts for imputed SNPs as the dosage variable is known to produce a valid score test for genetic association. In this paper, we investigate how to best handle imputed SNPs in various modern complex tests for genetic associations incorporating gene-environment interactions. We focus on case-control association studies where inference for an underlying logistic regression model can be performed using alternative methods that rely to varying degrees on an assumption of gene-environment independence in the underlying population. As increasingly large-scale GWAS are being performed through consortium efforts where it is preferable to share only summary-level information across studies, we also describe simple mechanisms for implementing score tests based on standard meta-analysis of "one-step" maximum-likelihood estimates across studies. Applications of the methods in simulation studies and a dataset from a GWAS of lung cancer illustrate the ability of the proposed methods to maintain type-I error rates for the underlying testing procedures. For analysis of imputed SNPs, similar to typed SNPs, the retrospective methods can lead to considerable efficiency gain for modeling of gene-environment interactions under the assumption of gene-environment independence. Methods are made available for public use through the CGEN R software package. © 2017 WILEY PERIODICALS, INC.
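
    For context, the dosage variable referred to above is simply the expected allele count under the imputed genotype probabilities; a tiny Python illustration (probabilities invented):

        import numpy as np

        # Expected allele count ("dosage") from imputed genotype probabilities
        # P(AA), P(Aa), P(aa) -- the standard dosage variable used in score tests.
        geno_probs = np.array([[0.90, 0.09, 0.01],
                               [0.10, 0.60, 0.30],
                               [0.02, 0.18, 0.80]])
        dosage = geno_probs @ np.array([0.0, 1.0, 2.0])
        print(dosage)   # 0.11, 1.20, 1.78 copies of the coded allele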

  7. Whose Music of a Century? Performance, History and Multiple Voices

    African Journals Online (AJOL)

    It's the small words that do the most cultural work. THE MUSIC OF A CENTURY, the title of the conference for which this paper was written, imputes a spurious singularity to a multiplicity of cultural practices, and begs the question of in whose interests this singularity is being constructed. An alternative question, 'WHOSE ...

  8. Discovery and Fine-Mapping of Glycaemic and Obesity-Related Trait Loci Using High-Density Imputation.

    Science.gov (United States)

    Horikoshi, Momoko; Mägi, Reedik; van de Bunt, Martijn; Surakka, Ida; Sarin, Antti-Pekka; Mahajan, Anubha; Marullo, Letizia; Thorleifsson, Gudmar; Hägg, Sara; Hottenga, Jouke-Jan; Ladenvall, Claes; Ried, Janina S; Winkler, Thomas W; Willems, Sara M; Pervjakova, Natalia; Esko, Tõnu; Beekman, Marian; Nelson, Christopher P; Willenborg, Christina; Wiltshire, Steven; Ferreira, Teresa; Fernandez, Juan; Gaulton, Kyle J; Steinthorsdottir, Valgerdur; Hamsten, Anders; Magnusson, Patrik K E; Willemsen, Gonneke; Milaneschi, Yuri; Robertson, Neil R; Groves, Christopher J; Bennett, Amanda J; Lehtimäki, Terho; Viikari, Jorma S; Rung, Johan; Lyssenko, Valeriya; Perola, Markus; Heid, Iris M; Herder, Christian; Grallert, Harald; Müller-Nurasyid, Martina; Roden, Michael; Hypponen, Elina; Isaacs, Aaron; van Leeuwen, Elisabeth M; Karssen, Lennart C; Mihailov, Evelin; Houwing-Duistermaat, Jeanine J; de Craen, Anton J M; Deelen, Joris; Havulinna, Aki S; Blades, Matthew; Hengstenberg, Christian; Erdmann, Jeanette; Schunkert, Heribert; Kaprio, Jaakko; Tobin, Martin D; Samani, Nilesh J; Lind, Lars; Salomaa, Veikko; Lindgren, Cecilia M; Slagboom, P Eline; Metspalu, Andres; van Duijn, Cornelia M; Eriksson, Johan G; Peters, Annette; Gieger, Christian; Jula, Antti; Groop, Leif; Raitakari, Olli T; Power, Chris; Penninx, Brenda W J H; de Geus, Eco; Smit, Johannes H; Boomsma, Dorret I; Pedersen, Nancy L; Ingelsson, Erik; Thorsteinsdottir, Unnur; Stefansson, Kari; Ripatti, Samuli; Prokopenko, Inga; McCarthy, Mark I; Morris, Andrew P

    2015-07-01

    Reference panels from the 1000 Genomes (1000G) Project Consortium provide near complete coverage of common and low-frequency genetic variation with minor allele frequency ≥0.5% across European ancestry populations. Within the European Network for Genetic and Genomic Epidemiology (ENGAGE) Consortium, we have undertaken the first large-scale meta-analysis of genome-wide association studies (GWAS), supplemented by 1000G imputation, for four quantitative glycaemic and obesity-related traits, in up to 87,048 individuals of European ancestry. We identified two loci for body mass index (BMI) at genome-wide significance, and two for fasting glucose (FG), none of which has been previously reported in larger meta-analysis efforts to combine GWAS of European ancestry. Through conditional analysis, we also detected multiple distinct signals of association mapping to established loci for waist-hip ratio adjusted for BMI (RSPO3) and FG (GCK and G6PC2). The index variant for one association signal at the G6PC2 locus is a low-frequency coding allele, H177Y, which has recently been demonstrated to have a functional role in glucose regulation. Fine-mapping analyses revealed that the non-coding variants most likely to drive association signals at established and novel loci were enriched for overlap with enhancer elements, which for FG mapped to promoter and transcription factor binding sites in pancreatic islets, in particular. Our study demonstrates that 1000G imputation and genetic fine-mapping of common and low-frequency variant association signals at GWAS loci, integrated with genomic annotation in relevant tissues, can provide insight into the functional and regulatory mechanisms through which their effects on glycaemic and obesity-related traits are mediated.

  9. Defining, evaluating, and removing bias induced by linear imputation in longitudinal clinical trials with MNAR missing data.

    Science.gov (United States)

    Helms, Ronald W; Reece, Laura Helms; Helms, Russell W; Helms, Mary W

    2011-03-01

    Missing not at random (MNAR) post-dropout missing data from a longitudinal clinical trial result in the collection of "biased data," which leads to biased estimators and tests of corrupted hypotheses. In a full rank linear model analysis the model equation, E[Y] = Xβ, leads to the definition of the primary parameter β = (X'X)^(-1)X'E[Y], and the definition of linear secondary parameters of the form θ = Lβ = L(X'X)^(-1)X'E[Y], including, for example, a parameter representing a "treatment effect." These parameters depend explicitly on E[Y], which raises the questions: What is E[Y] when some elements of the incomplete random vector Y are not observed and MNAR, or when such a Y is "completed" via imputation? We develop a rigorous, readily interpretable definition of E[Y] in this context that leads directly to definitions of β̂, Bias(β̂) = E[β̂] - β, Bias(θ̂) = E[θ̂] - Lβ, and the extent of hypothesis corruption. These definitions provide a basis for evaluating, comparing, and removing biases induced by various linear imputation methods for MNAR incomplete data from longitudinal clinical trials. Linear imputation methods use earlier data from a subject to impute values for post-dropout missing values and include "Last Observation Carried Forward" (LOCF) and "Baseline Observation Carried Forward" (BOCF), among others. We illustrate the methods of evaluating, comparing, and removing biases and the effects of testing corresponding corrupted hypotheses via a hypothetical but very realistic longitudinal analgesic clinical trial.
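
    For concreteness, the two named single-imputation schemes look like this in pandas (an illustrative toy subject, not trial data):

        import numpy as np
        import pandas as pd

        # One subject's weekly pain scores with post-dropout missing values.
        y = pd.Series([6.0, 5.5, 5.0, np.nan, np.nan, np.nan],
                      index=[f"week{w}" for w in range(6)])

        locf = y.ffill()                # Last Observation Carried Forward
        bocf = y.fillna(y.iloc[0])      # Baseline Observation Carried Forward

        print(pd.DataFrame({"observed": y, "LOCF": locf, "BOCF": bocf}))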

  10. Benefits of the Multiple Echo Technique for Ultrasonic Thickness Testing

    Energy Technology Data Exchange (ETDEWEB)

    Elder, J.; Vandekamp, R.

    2011-02-10

    Much effort has been put into determining methods to make accurate thickness measurements, especially at elevated temperatures. An accuracy of +/- 0.001 inches is typically quoted for commercial ultrasonic thickness gauges and ultrasonic thickness techniques. Codes and standards put limitations on many inspection factors, including equipment, calibration tolerance, and temperature variations. These factors are important and should be controlled, but unfortunately do not guarantee accurate and repeatable measurements in the field. Most technicians long for a single technique that is best for every situation; unfortunately, there are no 'silver bullets' when it comes to nondestructive testing. This paper describes some of the major contributors to measurement error, discusses the advantages and limitations of multiple echo techniques, and explains why multiple echo techniques should be more widely utilized for ultrasonic thickness measurements.
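
    As a minimal sketch of the multiple echo idea (hypothetical Python; it assumes a known sound velocity and measured backwall-echo arrival times, with 5.92 mm/µs as the nominal longitudinal velocity in steel):

      import statistics

      def thickness_multiple_echo(echo_times_us, velocity_mm_per_us=5.92):
          # Use the spacing between successive backwall echoes rather than
          # the time from the initial pulse: fixed offsets (couplant,
          # coatings, transducer delay) cancel out of the differences.
          intervals = [b - a for a, b in zip(echo_times_us, echo_times_us[1:])]
          return velocity_mm_per_us * statistics.mean(intervals) / 2.0

      # Three backwall echoes spaced 3.4 us apart -> roughly 10 mm of steel.
      print(round(thickness_multiple_echo([5.1, 8.5, 11.9]), 2))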

  11. Implementation of Multiple Access Techniques Applicable for Maritime Satellite Communications

    Directory of Open Access Journals (Sweden)

    Stojce Dimov Ilcev

    2013-12-01

    Full Text Available In this paper we introduce the fundamentals, characteristics, advantages and disadvantages of Multiple Access (MA) employed as transmission techniques in Maritime Mobile Satellite Communications (MMSC) between ships and a Coast Earth Station (CES) via Geostationary Earth Orbit (GEO) or non-GEO satellite constellations. In fixed satellite communication, and especially in MMSC, many users are, as a rule, active at the same time. The problem of simultaneous communications between many single or multipoint mobile satellite users can be solved by using an MA technique, such as Frequency Division Multiple Access (FDMA), Time Division Multiple Access (TDMA), Code Division Multiple Access (CDMA), Space Division Multiple Access (SDMA) and Random (Packet) Division Multiple Access (RDMA). Since system resources such as transmitting power and bandwidth are limited, it is advisable to load the channels fully and to provide different MA schemes to the channel. This creates the problem of summation and separation of signals in the transmission and reception parts, respectively. Solving this problem requires the development of orthogonal transmission channels so that signals from various users can be separated unambiguously at the reception part.

  12. Multiple-walled BN nanotubes obtained with a mechanical alloying technique

    International Nuclear Information System (INIS)

    Rosas, G.; Sistos, J.; Ascencio, J.A.; Medina, A.; Perez, R.

    2005-01-01

    An experimental method to obtain multiple-walled nanotubes of BN using low energy is presented. The method is based on the use of mechanical alloying techniques with elemental boron powders and nitrogen gas mixed in an autoclave at room temperature. The chemical and structural characteristics of the multiple-walled nanotubes were obtained using different techniques, such as X-ray diffraction, transmission electron microscopy, EELS microanalysis, high-resolution electron microscopy images and theoretical simulations based on the multisliced approach of the electron diffraction theory. This investigation clearly illustrates the production of multiple-wall BN nanotubes at room temperature. These results open up a new kind of synthesis method with low expense and important perspectives for use in large-quantity production. (orig.)

  13. Applying an efficient K-nearest neighbor search to forest attribute imputation

    Science.gov (United States)

    Andrew O. Finley; Ronald E. McRoberts; Alan R. Ek

    2006-01-01

    This paper explores the utility of an efficient nearest neighbor (NN) search algorithm for applications in multi-source kNN forest attribute imputation. The search algorithm reduces the number of distance calculations between a given target vector and each reference vector, thereby decreasing the time needed to discover the NN subset. Results of five trials show gains...
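
    The paper's algorithm is not reproduced here, but a partial-distance search in the same spirit, which abandons a distance computation as soon as it exceeds the current k-th best, can be sketched as follows (hypothetical Python):

      import numpy as np

      def knn_partial_distance(target, references, k=5):
          # Exact k-NN with partial-distance pruning: stop accumulating a
          # squared distance once it exceeds the current k-th best, skipping
          # the remaining coordinates and so reducing distance calculations.
          best = []  # sorted list of (squared distance, index), length <= k
          for i, ref in enumerate(references):
              bound = best[-1][0] if len(best) == k else np.inf
              d, pruned = 0.0, False
              for a, b in zip(target, ref):
                  d += (a - b) ** 2
                  if d >= bound:
                      pruned = True
                      break
              if not pruned:
                  best = sorted(best + [(d, i)])[:k]
          return [i for _, i in best]  # impute by averaging these neighbors

      rng = np.random.default_rng(0)
      refs = rng.normal(size=(1000, 20))
      print(knn_partial_distance(rng.normal(size=20), refs, k=5))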

  14. Local exome sequences facilitate imputation of less common variants and increase power of genome wide association studies.

    Directory of Open Access Journals (Sweden)

    Peter K Joshi

    Full Text Available The analysis of less common variants in genome-wide association studies promises to elucidate complex trait genetics but is hampered by low power to reliably detect association. We show that addition of population-specific exome sequence data to global reference data allows more accurate imputation, particularly of less common SNPs (minor allele frequency 1-10%), in two very different European populations. The imputation improvement corresponds to an increase in effective sample size of 28-38%, for SNPs with a minor allele frequency in the range 1-3%.

  15. Missing data in clinical trials: control-based mean imputation and sensitivity analysis.

    Science.gov (United States)

    Mehrotra, Devan V; Liu, Fang; Permutt, Thomas

    2017-09-01

    In some randomized (drug versus placebo) clinical trials, the estimand of interest is the between-treatment difference in population means of a clinical endpoint that is free from the confounding effects of "rescue" medication (e.g., HbA1c change from baseline at 24 weeks that would be observed without rescue medication regardless of whether or when the assigned treatment was discontinued). In such settings, a missing data problem arises if some patients prematurely discontinue from the trial or initiate rescue medication while in the trial, the latter necessitating the discarding of post-rescue data. We caution that the commonly used mixed-effects model repeated measures analysis with the embedded missing at random assumption can deliver an exaggerated estimate of the aforementioned estimand of interest. This happens, in part, due to implicit imputation of an overly optimistic mean for "dropouts" (i.e., patients with missing endpoint data of interest) in the drug arm. We propose an alternative approach in which the missing mean for the drug arm dropouts is explicitly replaced with either the estimated mean of the entire endpoint distribution under placebo (primary analysis) or a sequence of increasingly more conservative means within a tipping point framework (sensitivity analysis); patient-level imputation is not required. A supplemental "dropout = failure" analysis is considered in which a common poor outcome is imputed for all dropouts followed by a between-treatment comparison using quantile regression. All analyses address the same estimand and can adjust for baseline covariates. Three examples and simulation results are used to support our recommendations. Copyright © 2017 John Wiley & Sons, Ltd.
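
    A schematic of the proposed control-based mean imputation and tipping-point analysis (hypothetical Python with synthetic data; variance estimation and covariate adjustment, which the method also addresses, are omitted):

      import numpy as np

      def control_based_estimate(drug_completers, placebo_all,
                                 n_drug_dropouts, delta=0.0):
          # Replace the unobserved mean of drug-arm dropouts with the
          # placebo-arm mean shifted by delta. delta = 0 is the primary
          # analysis; a grid of increasingly unfavourable deltas gives the
          # tipping-point sensitivity analysis. No patient-level imputation.
          n_total = len(drug_completers) + n_drug_dropouts
          dropout_mean = np.mean(placebo_all) + delta
          drug_mean = (np.sum(drug_completers)
                       + n_drug_dropouts * dropout_mean) / n_total
          return drug_mean - np.mean(placebo_all)

      rng = np.random.default_rng(1)
      drug = rng.normal(-1.0, 1.0, 80)      # completers on drug
      placebo = rng.normal(-0.2, 1.0, 100)  # all placebo endpoints
      for delta in (0.0, 0.2, 0.4):         # tipping-point grid
          print(delta, round(control_based_estimate(drug, placebo, 20, delta), 3))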

  16. Discovery and Fine-Mapping of Glycaemic and Obesity-Related Trait Loci Using High-Density Imputation.

    Directory of Open Access Journals (Sweden)

    Momoko Horikoshi

    2015-07-01

    Full Text Available Reference panels from the 1000 Genomes (1000G) Project Consortium provide near complete coverage of common and low-frequency genetic variation with minor allele frequency ≥0.5% across European ancestry populations. Within the European Network for Genetic and Genomic Epidemiology (ENGAGE) Consortium, we have undertaken the first large-scale meta-analysis of genome-wide association studies (GWAS), supplemented by 1000G imputation, for four quantitative glycaemic and obesity-related traits, in up to 87,048 individuals of European ancestry. We identified two loci for body mass index (BMI) at genome-wide significance, and two for fasting glucose (FG), none of which has been previously reported in larger meta-analysis efforts to combine GWAS of European ancestry. Through conditional analysis, we also detected multiple distinct signals of association mapping to established loci for waist-hip ratio adjusted for BMI (RSPO3) and FG (GCK and G6PC2). The index variant for one association signal at the G6PC2 locus is a low-frequency coding allele, H177Y, which has recently been demonstrated to have a functional role in glucose regulation. Fine-mapping analyses revealed that the non-coding variants most likely to drive association signals at established and novel loci were enriched for overlap with enhancer elements, which for FG mapped to promoter and transcription factor binding sites in pancreatic islets, in particular. Our study demonstrates that 1000G imputation and genetic fine-mapping of common and low-frequency variant association signals at GWAS loci, integrated with genomic annotation in relevant tissues, can provide insight into the functional and regulatory mechanisms through which their effects on glycaemic and obesity-related traits are mediated.

  17. Age at menopause: imputing age at menopause for women with a hysterectomy with application to risk of postmenopausal breast cancer

    Science.gov (United States)

    Rosner, Bernard; Colditz, Graham A.

    2011-01-01

    Purpose: Age at menopause, a major marker in the reproductive life, may bias results for evaluation of breast cancer risk after menopause. Methods: We follow 38,948 premenopausal women in 1980 and identify 2,586 who reported hysterectomy without bilateral oophorectomy, and 31,626 who reported natural menopause during 22 years of follow-up. We evaluate risk factors for natural menopause, impute age at natural menopause for women reporting hysterectomy without bilateral oophorectomy, and estimate the hazard of reaching natural menopause in the next 2 years. We apply this imputed age at menopause both to increase sample size and to evaluate the relation between postmenopausal exposures and risk of breast cancer. Results: Age, cigarette smoking, age at menarche, pregnancy history, body mass index, history of benign breast disease, and history of breast cancer were each significantly related to age at natural menopause; duration of oral contraceptive use and family history of breast cancer were not. The imputation increased sample size substantially, and although some risk factors after menopause were weaker in the expanded model (height and alcohol use), the estimate for use of hormone therapy is less biased. Conclusions: Imputing age at menopause increases sample size, broadens generalizability by making results applicable to women with hysterectomy, and reduces bias. PMID:21441037

  18. Using multiple linear regression techniques to quantify carbon ...

    African Journals Online (AJOL)

    Fallow ecosystems provide a significant carbon stock that can be quantified for inclusion in the accounts of global carbon budgets. Process and statistical models of productivity, though useful, are often technically rigid as the conditions for their application are not easy to satisfy. Multiple regression techniques have been ...
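
    As a toy illustration of quantifying carbon stock with multiple linear regression (hypothetical Python; the predictor names and values are invented):

      import numpy as np

      # Hypothetical fallow-plot predictors: age (years), basal area
      # (m2/ha), stem density (stems/ha); response: carbon stock (Mg C/ha).
      X = np.array([[3, 4.2, 950], [5, 7.8, 1200], [8, 12.1, 1400],
                    [10, 15.3, 1350], [14, 19.6, 1500]], dtype=float)
      y = np.array([12.5, 21.0, 34.8, 43.1, 55.9])

      # Ordinary least squares with an intercept term.
      A = np.column_stack([np.ones(len(X)), X])
      coef, *_ = np.linalg.lstsq(A, y, rcond=None)
      print("intercept and slopes:", np.round(coef, 3))
      print("predicted stock, 7-year fallow:", round(coef @ [1, 7, 10.0, 1300], 1))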

  19. A new multiple noncontinuous puncture (pointage) technique for corneal tattooing

    Directory of Open Access Journals (Sweden)

    Jin Hyoung Park

    2015-10-01

    Full Text Available AIM: To assess the safety and cosmetic efficacy of a new multiple noncontinuous transepithelial puncture technique for tattooing a decompensated cornea. METHODS: This was a non-comparative clinical case series study. The study examines 33 eyes in 33 patients with total corneal opacity due to corneal decompensation, which developed following intraocular surgery. Corneal tattooing was performed using the multiple noncontinuous transepithelial puncture technique (i.e., pointage). The safety of this new surgical strategy was assessed by the occurrence of adverse events during the follow-up period. The cosmetic efficacy was determined by the patient's cosmetic satisfaction and an independent observer's opinion about patient appearance. RESULTS: Seven women and 26 men were included in the study. The mean age was 46.4±17.5y (range: 7-67). In total, 30 of 33 patients (91%) reported cosmetic satisfaction within the follow-up period. Only 3 patients (9%) required additional tattooing due to cosmetic dissatisfaction. Cosmetic outcomes were analyzed and classified as excellent or good in 13 (39%) and 17 (52%) patients, respectively. No serious adverse events developed, except delayed epithelial healing in 3 cases. CONCLUSION: The cosmetic outcomes of the multiple noncontinuous transepithelial puncture technique for corneal tattooing were good. The safety of this method is higher than that of conventional procedures. This new procedure also provides improved cost-effectiveness and safety over current corneal tattooing techniques.

  20. Which DTW Method Applied to Marine Univariate Time Series Imputation

    OpenAIRE

    Phan , Thi-Thu-Hong; Caillault , Émilie; Lefebvre , Alain; Bigand , André

    2017-01-01

    Missing data are ubiquitous in all domains of applied science. Processing datasets containing missing values can lead to a loss of efficiency and unreliable results, especially for large missing sub-sequence(s). The aim of this paper is therefore to build a framework for filling missing values in univariate time series and to compare different similarity metrics used for the imputation task. This allows us to suggest the most suitable methods for the imp...
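
    A bare-bones sketch of DTW-based gap filling in a univariate series (hypothetical Python; the paper compares several similarity measures, of which only classic DTW is shown):

      import numpy as np

      def dtw_distance(a, b):
          # Classic O(len(a) * len(b)) dynamic-time-warping distance.
          n, m = len(a), len(b)
          D = np.full((n + 1, m + 1), np.inf)
          D[0, 0] = 0.0
          for i in range(1, n + 1):
              for j in range(1, m + 1):
                  cost = abs(a[i - 1] - b[j - 1])
                  D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
          return D[n, m]

      def impute_gap(series, gap_start, gap_len, query_len):
          # Find the historical window most similar (by DTW) to the data
          # just before the gap and copy what followed it into the gap.
          query = series[gap_start - query_len:gap_start]
          best_s, best_d = 0, np.inf
          for s in range(gap_start - query_len - gap_len + 1):
              d = dtw_distance(query, series[s:s + query_len])
              if d < best_d:
                  best_d, best_s = d, s
          follow = best_s + query_len
          return series[follow:follow + gap_len]

      series = np.sin(np.arange(300) / 10.0)
      print(impute_gap(series, gap_start=250, gap_len=5, query_len=20))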

  1. Teaching Multiple Online Sections/Courses: Tactics and Techniques

    Science.gov (United States)

    Bates, Rodger; LaBrecque, Bryan; Fortner, Emily

    2016-01-01

    The challenge of teaching online increases as the number of sections or courses increases in a semester. The tactics and techniques which enrich online instruction in the tradition of Quality Matters can be modified and adapted to the demands of multiple instructional needs during a semester. This paper addresses time management and instructional…

  2. Imputation-based analysis of association studies: candidate regions and quantitative traits.

    Directory of Open Access Journals (Sweden)

    Bertrand Servin

    2007-07-01

    Full Text Available We introduce a new framework for the analysis of association studies, designed to allow untyped variants to be more effectively and directly tested for association with a phenotype. The idea is to combine knowledge on patterns of correlation among SNPs (e.g., from the International HapMap project or resequencing data in a candidate region of interest) with genotype data at tag SNPs collected on a phenotyped study sample, to estimate ("impute") unmeasured genotypes, and then assess association between the phenotype and these estimated genotypes. Compared with standard single-SNP tests, this approach results in increased power to detect association, even in cases in which the causal variant is typed, with the greatest gain occurring when multiple causal variants are present. It also provides more interpretable explanations for observed associations, including assessing, for each SNP, the strength of the evidence that it (rather than another correlated SNP) is causal. Although we focus on association studies with a quantitative phenotype and a relatively restricted region (e.g., a candidate gene), the framework is applicable and computationally practical for whole genome association studies. Methods described here are implemented in a software package, BIMBAM, available from the Stephens Lab website http://stephenslab.uchicago.edu/software.html.
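
    As a loose illustration of the imputation idea (hypothetical Python; BIMBAM itself uses haplotype-based models rather than the simple linear regression shown here):

      import numpy as np

      # Toy setup: estimate an untyped SNP's allele dosage in a study
      # sample from tag-SNP genotypes, learning weights in a reference
      # panel where both the tag SNPs and the untyped SNP are observed.
      rng = np.random.default_rng(2)
      panel_tags = rng.integers(0, 3, size=(120, 5)).astype(float)
      w_true = np.array([0.7, 0.0, 0.4, 0.0, 0.2])
      panel_untyped = np.clip(panel_tags @ w_true
                              + rng.normal(0, 0.3, 120), 0, 2)

      A = np.column_stack([np.ones(120), panel_tags])
      w, *_ = np.linalg.lstsq(A, panel_untyped, rcond=None)

      study_tags = rng.integers(0, 3, size=(10, 5)).astype(float)
      imputed = np.clip(np.column_stack([np.ones(10), study_tags]) @ w, 0, 2)
      print(np.round(imputed, 2))  # test these dosages against the phenotype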

  3. Exploring the Interplay between Rescue Drugs, Data Imputation, and Study Outcomes: Conceptual Review and Qualitative Analysis of an Acute Pain Data Set.

    Science.gov (United States)

    Singla, Neil K; Meske, Diana S; Desjardins, Paul J

    2017-12-01

    In placebo-controlled acute surgical pain studies, provisions must be made for study subjects to receive adequate analgesic therapy. As such, most protocols allow study subjects to receive a pre-specified regimen of open-label analgesic drugs (rescue drugs) as needed. The selection of an appropriate rescue regimen is a critical experimental design choice. We hypothesized that a rescue regimen that is too liberal could lead to all study arms receiving similar levels of pain relief (thereby confounding experimental results), while a regimen that is too stringent could lead to a high subject dropout rate (giving rise to a preponderance of missing data). Despite the importance of the rescue regimen as a study design feature, there exist no published review articles or meta-analyses focusing on the impact of rescue therapy on experimental outcomes. Therefore, when selecting a rescue regimen, researchers must rely on clinical factors (what analgesics patients usually receive in similar surgical scenarios) and/or anecdotal evidence. In the following article, we attempt to bridge this gap by reviewing and discussing the experimental impacts of rescue therapy on a common acute surgical pain population: first metatarsal bunionectomy. The function of this analysis is to (1) create a framework for discussion and future exploration of rescue as a methodological study design feature, (2) discuss the interplay between data imputation techniques and rescue drugs, and (3) inform the readership regarding the impact of data imputation techniques on the validity of study conclusions. Our findings indicate that liberal rescue may degrade assay sensitivity, while stringent rescue may lead to unacceptably high dropout rates.

  4. Imputation of genotypes from low density (50,000 markers) to high density (700,000 markers) of cows from research herds in Europe, North America, and Australasia using 2 reference populations

    DEFF Research Database (Denmark)

    Pryce, J E; Johnston, J; Hayes, B J

    2014-01-01

    detection in genome-wide association studies and the accuracy of genomic selection may increase when the low-density genotypes are imputed to higher density. Genotype data were available from 10 research herds: 5 from Europe [Denmark, Germany, Ireland, the Netherlands, and the United Kingdom (UK)], 2 from...... reference populations. Although it was not possible to use a combined reference population, which would probably result in the highest accuracies of imputation, differences arising from using 2 high-density reference populations on imputing 50,000-marker genotypes of 583 animals (from the UK) were...... information exploited. The UK animals were also included in the North American data set (n = 1,579) that was imputed to high density using a reference population of 2,018 bulls. After editing, 591,213 genotypes on 5,999 animals from 10 research herds remained. The correlation between imputed allele...

  5. Application of multiplicative array techniques for multibeam sounder systems

    Digital Repository Service at National Institute of Oceanography (India)

    Chakraborty, B.

    modification in terms of additional computation or hardware for improved array gain. The present work is devoted to the study of a better beamforming method, i.e., a multiplicative array technique with some modifications proposed by Brown and Rowland...

  6. Mapping change of older forest with nearest-neighbor imputation and Landsat time-series

    Science.gov (United States)

    Janet L. Ohmann; Matthew J. Gregory; Heather M. Roberts; Warren B. Cohen; Robert E. Kennedy; Zhiqiang. Yang

    2012-01-01

    The Northwest Forest Plan (NWFP), which aims to conserve late-successional and old-growth forests (older forests) and associated species, established new policies on federal lands in the Pacific Northwest USA. As part of monitoring for the NWFP, we tested nearest-neighbor imputation for mapping change in older forest, defined by threshold values for forest attributes...

  7. Simple lock-in detection technique utilizing multiple harmonics for digital PGC demodulators.

    Science.gov (United States)

    Duan, Fajie; Huang, Tingting; Jiang, Jiajia; Fu, Xiao; Ma, Ling

    2017-06-01

    A simple lock-in detection technique especially suited for digital phase-generated carrier (PGC) demodulators is proposed in this paper. It mixes the interference signal with rectangular waves whose Fourier expansions contain multiple odd or multiple even harmonics of the carrier to recover the quadrature components needed for interference phase demodulation. In this way, the use of a multiplier is avoided and the efficiency of the algorithm is improved. Noise performance with regard to light intensity variation and circuit noise is analyzed theoretically for both the proposed technique and the traditional lock-in technique, and results show that the former provides a better signal-to-noise ratio than the latter with proper modulation depth and average interference phase. Detailed simulations were conducted and the theoretical analysis was verified. A fiber-optic Michelson interferometer was constructed and the feasibility of the proposed technique is demonstrated.
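
    A numerical sketch of the mixing step (hypothetical Python; sign() stands in for the rectangular references, whose Fourier series contain only odd, respectively only even, harmonics of the carrier, so the recovered terms are proportional to the sine and cosine of the interference phase up to sign and Bessel-function factors):

      import numpy as np

      fs, fc, n = 100_000, 1_000, 100_000   # sample rate, carrier, samples
      t = np.arange(n) / fs
      C, phi = 2.37, 0.8                    # modulation depth and phase
      s = np.cos(C * np.cos(2 * np.pi * fc * t) + phi)  # PGC signal

      # Rectangular references replace sinusoidal multipliers, so the
      # "mixing" needs sign flips only, not true multiplications.
      ref_odd = np.sign(np.cos(2 * np.pi * fc * t))    # odd harmonics of fc
      ref_even = np.sign(np.cos(4 * np.pi * fc * t))   # even harmonics of fc
      q1 = np.mean(s * ref_odd)    # proportional to sin(phi)
      q2 = np.mean(s * ref_even)   # proportional to cos(phi)
      print(q1, q2)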

  8. Estimating Stand Height and Tree Density in Pinus taeda plantations using in-situ data, airborne LiDAR and k-Nearest Neighbor Imputation.

    Science.gov (United States)

    Silva, Carlos Alberto; Klauberg, Carine; Hudak, Andrew T; Vierling, Lee A; Liesenberg, Veraldo; Bernett, Luiz G; Scheraiber, Clewerson F; Schoeninger, Emerson R

    2018-01-01

    Accurate forest inventory is of great economic importance for optimizing the entire supply chain management in pulp and paper companies. The aim of this study was to estimate stand dominant and mean heights (HD and HM) and tree density (TD) of Pinus taeda plantations located in South Brazil using in-situ measurements, airborne Light Detection and Ranging (LiDAR) data and nonparametric k-nearest neighbor (k-NN) imputation. Forest inventory attributes and LiDAR-derived metrics were calculated at 53 regular sample plots, and we used imputation models to retrieve the forest attributes at plot and landscape levels. The best LiDAR-derived metrics to predict HD, HM and TD were H99TH, HSD, SKE and HMIN. The imputation model using the selected metrics was more effective for retrieving height than tree density. The model coefficients of determination (adj.R2) and root mean squared differences (RMSD) for HD, HM and TD were 0.90, 0.94 and 0.38, and 6.99%, 5.70% and 12.92%, respectively. Our results show that LiDAR and k-NN imputation can be used to predict stand heights with high accuracy in Pinus taeda. However, further studies are needed to improve the prediction accuracy of TD and to evaluate and compare the cost of acquisition and processing of LiDAR data against conventional inventory procedures.

  9. Hydrologic Response to Climate Change: Missing Precipitation Data Matters for Computed Timing Trends

    Science.gov (United States)

    Daniels, B.

    2016-12-01

    This work demonstrates the derivation of climate timing statistics and their application in determining the resulting hydroclimate impacts. Long-term daily precipitation observations from 50 California stations were used to compute climate trends of precipitation event Intensity, event Duration and Pause between events. Each precipitation event trend was then applied as input to a PRMS hydrology model, which showed hydrology changes to recharge, baseflow, streamflow, etc. An important concern was precipitation uncertainty induced by missing observation values, which causes errors in the quantification of precipitation trends. Many standard statistical techniques, such as ARIMA and simple endogenous or even exogenous imputation, were applied but failed to resolve these uncertainties. What did resolve them was the use of multiple imputation techniques. This involved fitting Weibull probability distributions to multiply imputed values for the three precipitation trends. Permutation resampling techniques using Monte Carlo processing were then applied to the multiple imputation values to derive significance p-values for each trend. Significance at the 95% level for Intensity was found for 11 of the 50 stations, for Duration 16 of the 50, and for Pause 19, of which 12 were significant at the 99% level. The significance-weighted trends for California are Intensity -4.61% per decade, Duration +3.49% per decade, and Pause +3.58% per decade. Two California basins with PRMS hydrologic models were studied: the Feather River in the northern Sierra Nevada mountains and the central coast Soquel-Aptos. Each local trend was changed without changing the other trends or the total precipitation. The Feather River Basin's critical supply to Lake Oroville and the State Water Project benefited from a total streamflow increase of 1.5%. The Soquel-Aptos Basin water supply was impacted by a total groundwater recharge decrease of -7.5% and a streamflow decrease of -3.2%.
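
    A sketch of the two statistical steps described above, fitting a Weibull distribution to multiply imputed trend values and attaching a Monte Carlo permutation p-value (hypothetical Python with synthetic data):

      import numpy as np
      from scipy import stats

      rng = np.random.default_rng(3)
      trend_samples = stats.weibull_min.rvs(1.8, scale=0.04, size=200,
                                            random_state=rng)  # imputed trends

      shape, loc, scale = stats.weibull_min.fit(trend_samples, floc=0)
      print("Weibull fit: shape", round(shape, 2), "scale", round(scale, 4))

      def permutation_pvalue(x, y, n_perm=10_000):
          # Two-sample permutation test on the difference in means, a
          # stand-in for the Monte Carlo resampling used on the trends.
          observed = abs(x.mean() - y.mean())
          pooled = np.concatenate([x, y])
          hits = 0
          for _ in range(n_perm):
              perm = rng.permutation(pooled)
              if abs(perm[:len(x)].mean() - perm[len(x):].mean()) >= observed:
                  hits += 1
          return (hits + 1) / (n_perm + 1)

      print(permutation_pvalue(trend_samples[:100], trend_samples[100:]))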

  10. Multiple matings among glossina and the sterile male technique

    International Nuclear Information System (INIS)

    Pinhao, R.C.

    1980-01-01

    The fact that multiple matings are a common phenomenon among glossina turns the sterile male technique into a competition not between adult insects but between two types of sperm, and the proportion of females inseminated with the one or the other is given by the binomial expansion of (p + q)^n, where p is the proportion of normal males, q the proportion of sterile males and n the average number of matings per female. However, multiple matings cannot damage the effectiveness of the technique unless two conditions are present, either separately or simultaneously: precocious death of the spermatozoa and reduced inseminating potential among the sterile males. Study of the factors which can alter the inseminating potential is thus important for those who wish to use the sterile male technique. These factors are of three kinds: factors connected with quality, with quantity and with availability. The first are associated with the nature and intensity of the alterations brought about in the spermatozoa by the sterilizing agent, the second with possible variations in the amount of sperm reaching the spermatheca, the third with the behaviour of the sterile males in nature, that is, the question whether sterilization has a favourable or unfavourable influence on their chances of mating with wild females. The author describes his observations of the quantity of sperm produced by Glossina morsitans submorsitans males from the colony reared at the Institute for Tropical Hygiene and Medicine in Lisbon, compares them with the observations of other authors and discusses their practical significance. Specific research is suggested, as are the advantages of assessing the behaviour of colonies not by female productivity but by male inseminating potential, together with appropriate laboratory techniques.
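
    The binomial expansion referred to above is easy to make concrete (hypothetical Python):

      from math import comb

      def sterile_mating_distribution(p, q, n):
          # Term k of (p + q)^n: the share of females that mated with
          # exactly k sterile males out of n matings.
          return [comb(n, k) * (q ** k) * (p ** (n - k)) for k in range(n + 1)]

      # Nine sterile males per wild male (q = 0.9), three matings per female.
      dist = sterile_mating_distribution(p=0.1, q=0.9, n=3)
      print([round(x, 4) for x in dist])
      print("females never exposed to sterile sperm:", dist[0])  # p ** n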

  11. Video Multiple Watermarking Technique Based on Image Interlacing Using DWT

    Directory of Open Access Journals (Sweden)

    Mohamed M. Ibrahim

    2014-01-01

    Full Text Available Digital watermarking is one of the important techniques to secure digital media files in the domains of data authentication and copyright protection. In nonblind watermarking systems, the need for the original host file in the watermark recovery operation imposes an overhead on system resources, doubles memory capacity, and doubles communications bandwidth. In this paper, a robust video multiple watermarking technique is proposed to solve this problem. This technique is based on image interlacing. In this technique, a three-level discrete wavelet transform (DWT) is used as the watermark embedding/extracting domain, the Arnold transform is used as the watermark encryption/decryption method, and different types of media (gray image, color image, and video) are used as watermarks. The robustness of this technique is tested by applying different types of attacks such as geometric, noising, format-compression, and image-processing attacks. The simulation results show the effectiveness and good performance of the proposed technique in saving system resources, memory capacity, and communications bandwidth.

  12. Video multiple watermarking technique based on image interlacing using DWT.

    Science.gov (United States)

    Ibrahim, Mohamed M; Abdel Kader, Neamat S; Zorkany, M

    2014-01-01

    Digital watermarking is one of the important techniques to secure digital media files in the domains of data authentication and copyright protection. In nonblind watermarking systems, the need for the original host file in the watermark recovery operation imposes an overhead on system resources, doubles memory capacity, and doubles communications bandwidth. In this paper, a robust video multiple watermarking technique is proposed to solve this problem. This technique is based on image interlacing. In this technique, a three-level discrete wavelet transform (DWT) is used as the watermark embedding/extracting domain, the Arnold transform is used as the watermark encryption/decryption method, and different types of media (gray image, color image, and video) are used as watermarks. The robustness of this technique is tested by applying different types of attacks such as geometric, noising, format-compression, and image-processing attacks. The simulation results show the effectiveness and good performance of the proposed technique in saving system resources, memory capacity, and communications bandwidth.
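
    A stripped-down sketch of DWT-domain additive embedding (hypothetical Python using PyWavelets; the image-interlacing step and Arnold-transform encryption described in the paper are omitted):

      import numpy as np
      import pywt

      def embed_watermark(host, mark, alpha=0.05, wavelet="haar", level=3):
          # Three-level 2-D DWT of the host frame; the watermark is added
          # to the coarsest diagonal detail band, then the frame is rebuilt.
          coeffs = pywt.wavedec2(host, wavelet, level=level)
          cH, cV, cD = coeffs[1]                # coarsest detail sub-bands
          cD = cD + alpha * np.resize(mark, cD.shape)
          coeffs[1] = (cH, cV, cD)
          return pywt.waverec2(coeffs, wavelet)

      host = np.random.default_rng(4).random((256, 256))
      mark = np.random.default_rng(5).integers(0, 2, size=(32, 32))
      marked = embed_watermark(host, mark)
      print(np.abs(marked - host).max())  # embedding distortion stays small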

  13. Is missing geographic positioning system data in accelerometry studies a problem, and is imputation the solution?

    DEFF Research Database (Denmark)

    Meseck, Kristin; Jankowska, Marta M; Schipperijn, Jasper

    2016-01-01

    The main purpose of the present study was to assess the impact of global positioning system (GPS) signal lapse on physical activity analyses, discover any existing associations between missing GPS data and environmental and demographics attributes, and to determine whether imputation is an accurate...

  14. Genome of the Netherlands population-specific imputations identify an ABCA6 variant associated with cholesterol levels

    NARCIS (Netherlands)

    van Leeuwen, E.M.; Karssen, L.C.; Deelen, J.; Isaacs, A.; Medina-Gomez, C.; Mbarek, H.; Kanterakis, A.; Trompet, S.; Postmus, I.; Verweij, N.; van Enckevort, D.; Huffman, J.E.; White, C.C.; Feitosa, M.F.; Bartz, T.M.; Manichaikul, A.; Joshi, P.K.; Peloso, G.M.; Deelen, P.; Dijk, F.; Willemsen, G.; de Geus, E.J.C.; Milaneschi, Y.; Penninx, B.W.J.H.; Francioli, L.C.; Menelaou, A.; Pulit, S.L.; Rivadeneira, F.; Hofman, A.; Oostra, B.A.; Franco, O.H.; Mateo Leach, I.; Beekman, M.; de Craen, A.J.; Uh, H.W.; Trochet, H.; Hocking, L.J.; Porteous, D.J.; Sattar, N.; Packard, C.J.; Buckley, B.M.; Brody, J.A.; Bis, J.C.; Rotter, J.I.; Mychaleckyj, J.C.; Campbell, H.; Duan, Q.; Lange, L.A.; Wilson, J.F.; Hayward, C.; Polasek, O.; Vitart, V.; Rudan, I.; Wright, A.F.; Rich, S.S.; Psaty, B.M.; Borecki, I.B.; Kearney, P.M.; Stott, D.J.; Cupples, L.A.; Jukema, J.W.; van der Harst, P.; Sijbrands, E.J.; Hottenga, J.J.; Uitterlinden, A.G.; Swertz, M.A.; van Ommen, G.J.B; Bakker, P.I.W.; Slagboom, P.E.; Boomsma, D.I.; Wijmenga, C.; van Duijn, C.M.

    2015-01-01

    Variants associated with blood lipid levels may be population-specific. To identify low-frequency variants associated with this phenotype, population-specific reference panels may be used. Here we impute nine large Dutch biobanks (∼35,000 samples) with the population-specific reference panel created

  15. Estimating Stand Height and Tree Density in Pinus taeda plantations using in-situ data, airborne LiDAR and k-Nearest Neighbor Imputation

    Directory of Open Access Journals (Sweden)

    CARLOS ALBERTO SILVA

    Full Text Available ABSTRACT Accurate forest inventory is of great economic importance for optimizing the entire supply chain management in pulp and paper companies. The aim of this study was to estimate stand dominant and mean heights (HD and HM) and tree density (TD) of Pinus taeda plantations located in South Brazil using in-situ measurements, airborne Light Detection and Ranging (LiDAR) data and nonparametric k-nearest neighbor (k-NN) imputation. Forest inventory attributes and LiDAR-derived metrics were calculated at 53 regular sample plots, and we used imputation models to retrieve the forest attributes at plot and landscape levels. The best LiDAR-derived metrics to predict HD, HM and TD were H99TH, HSD, SKE and HMIN. The imputation model using the selected metrics was more effective for retrieving height than tree density. The model coefficients of determination (adj.R2) and root mean squared differences (RMSD) for HD, HM and TD were 0.90, 0.94 and 0.38, and 6.99%, 5.70% and 12.92%, respectively. Our results show that LiDAR and k-NN imputation can be used to predict stand heights with high accuracy in Pinus taeda. However, further studies are needed to improve the prediction accuracy of TD and to evaluate and compare the cost of acquisition and processing of LiDAR data against conventional inventory procedures.

  16. Improved imputation of low-frequency and rare variants using the UK10K haplotype reference panel

    NARCIS (Netherlands)

    J. Huang (Jie); B. Howie (Bryan); S. McCarthy (Shane); Y. Memari (Yasin); K. Walter (Klaudia); J.L. Min (Josine L.); P. Danecek (Petr); G. Malerba (Giovanni); E. Trabetti (Elisabetta); H.-F. Zheng (Hou-Feng); G. Gambaro (Giovanni); J.B. Richards (Brent); R. Durbin (Richard); N.J. Timpson (Nicholas); J. Marchini (Jonathan); N. Soranzo (Nicole); S.H. Al Turki (Saeed); A. Amuzu (Antoinette); C. Anderson (Carl); R. Anney (Richard); D. Antony (Dinu); M.S. Artigas; M. Ayub (Muhammad); S. Bala (Senduran); J.C. Barrett (Jeffrey); I.E. Barroso (Inês); P.L. Beales (Philip); M. Benn (Marianne); J. Bentham (Jamie); S. Bhattacharya (Shoumo); E. Birney (Ewan); D.H.R. Blackwood (Douglas); M. Bobrow (Martin); E. Bochukova (Elena); P.F. Bolton (Patrick F.); R. Bounds (Rebecca); C. Boustred (Chris); G. Breen (Gerome); M. Calissano (Mattia); K. Carss (Keren); J.P. Casas (Juan Pablo); J.C. Chambers (John C.); R. Charlton (Ruth); K. Chatterjee (Krishna); L. Chen (Lu); A. Ciampi (Antonio); S. Cirak (Sebahattin); P. Clapham (Peter); G. Clement (Gail); G. Coates (Guy); M. Cocca (Massimiliano); D.A. Collier (David); C. Cosgrove (Catherine); T. Cox (Tony); N.J. Craddock (Nick); L. Crooks (Lucy); S. Curran (Sarah); D. Curtis (David); A. Daly (Allan); I.N.M. Day (Ian N.M.); A.G. Day-Williams (Aaron); G.V. Dedoussis (George); T. Down (Thomas); Y. Du (Yuanping); C.M. van Duijn (Cornelia); I. Dunham (Ian); T. Edkins (Ted); R. Ekong (Rosemary); P. Ellis (Peter); D.M. Evans (David); I.S. Farooqi (I. Sadaf); D.R. Fitzpatrick (David R.); P. Flicek (Paul); J. Floyd (James); A.R. Foley (A. Reghan); C.S. Franklin (Christopher S.); M. Futema (Marta); L. Gallagher (Louise); P. Gasparini (Paolo); T.R. Gaunt (Tom); M. Geihs (Matthias); D. Geschwind (Daniel); C.M.T. Greenwood (Celia); H. Griffin (Heather); D. Grozeva (Detelina); X. Guo (Xiaosen); X. Guo (Xueqin); H. Gurling (Hugh); D. Hart (Deborah); A.E. Hendricks (Audrey E.); P.A. Holmans (Peter A.); L. Huang (Liren); T. Hubbard (Tim); S.E. Humphries (Steve E.); M.E. Hurles (Matthew); P.G. Hysi (Pirro); V. Iotchkova (Valentina); A. Isaacs (Aaron); D.K. Jackson (David K.); Y. Jamshidi (Yalda); J. Johnson (Jon); C. Joyce (Chris); K.J. Karczewski (Konrad); J. Kaye (Jane); T. Keane (Thomas); J.P. Kemp (John); K. Kennedy (Karen); A. Kent (Alastair); J. Keogh (Julia); F. Khawaja (Farrah); M.E. Kleber (Marcus); M. Van Kogelenberg (Margriet); A. Kolb-Kokocinski (Anja); J.S. Kooner (Jaspal S.); G. Lachance (Genevieve); C. Langenberg (Claudia); C. Langford (Cordelia); D. Lawson (Daniel); I. Lee (Irene); E.M. van Leeuwen (Elisa); M. Lek (Monkol); R. Li (Rui); Y. Li (Yingrui); J. Liang (Jieqin); H. Lin (Hong); R. Liu (Ryan); J. Lönnqvist (Jouko); L.R. Lopes (Luis R.); M.C. Lopes (Margarida); J. Luan; D.G. MacArthur (Daniel G.); M. Mangino (Massimo); G. Marenne (Gaëlle); W. März (Winfried); J. Maslen (John); A. Matchan (Angela); I. Mathieson (Iain); P. McGuffin (Peter); A.M. McIntosh (Andrew); A.G. McKechanie (Andrew G.); A. McQuillin (Andrew); S. Metrustry (Sarah); N. Migone (Nicola); H.M. Mitchison (Hannah M.); A. Moayyeri (Alireza); J. Morris (James); R. Morris (Richard); D. Muddyman (Dawn); F. Muntoni; B.G. Nordestgaard (Børge G.); K. Northstone (Kate); M.C. O'donovan (Michael); S. O'Rahilly (Stephen); A. Onoufriadis (Alexandros); K. Oualkacha (Karim); M.J. Owen (Michael J.); A. Palotie (Aarno); K. Panoutsopoulou (Kalliope); V. Parker (Victoria); J.R. Parr (Jeremy R.); L. Paternoster (Lavinia); T. Paunio (Tiina); F. Payne (Felicity); S.J. Payne (Stewart J.); J.R.B. 
Perry (John); O.P.H. Pietiläinen (Olli); V. Plagnol (Vincent); R.C. Pollitt (Rebecca C.); S. Povey (Sue); M.A. Quail (Michael A.); L. Quaye (Lydia); L. Raymond (Lucy); K. Rehnström (Karola); C.K. Ridout (Cheryl K.); S.M. Ring (Susan); G.R.S. Ritchie (Graham R.S.); N. Roberts (Nicola); R.L. Robinson (Rachel L.); D.B. Savage (David); P.J. Scambler (Peter); S. Schiffels (Stephan); M. Schmidts (Miriam); N. Schoenmakers (Nadia); R.H. Scott (Richard H.); R.A. Scott (Robert); R.K. Semple (Robert K.); E. Serra (Eva); S.I. Sharp (Sally I.); A.C. Shaw (Adam C.); H.A. Shihab (Hashem A.); S.-Y. Shin (So-Youn); D. Skuse (David); K.S. Small (Kerrin); C. Smee (Carol); G.D. Smith; L. Southam (Lorraine); O. Spasic-Boskovic (Olivera); T.D. Spector (Timothy); D. St. Clair (David); B. St Pourcain (Beate); J. Stalker (Jim); E. Stevens (Elizabeth); J. Sun (Jianping); G. Surdulescu (Gabriela); J. Suvisaari (Jaana); P. Syrris (Petros); I. Tachmazidou (Ioanna); R. Taylor (Rohan); J. Tian (Jing); M.D. Tobin (Martin); D. Toniolo (Daniela); M. Traglia (Michela); A. Tybjaerg-Hansen; A.M. Valdes; A.M. Vandersteen (Anthony M.); A. Varbo (Anette); P. Vijayarangakannan (Parthiban); P.M. Visscher (Peter); L.V. Wain (Louise); J.T. Walters (James); G. Wang (Guangbiao); J. Wang (Jun); Y. Wang (Yu); K. Ward (Kirsten); E. Wheeler (Eleanor); P.H. Whincup (Peter); T. Whyte (Tamieka); H.J. Williams (Hywel J.); K.A. Williamson (Kathleen); C. Wilson (Crispian); S.G. Wilson (Scott); K. Wong (Kim); C. Xu (Changjiang); J. Yang (Jian); G. Zaza (Gianluigi); E. Zeggini (Eleftheria); F. Zhang (Feng); P. Zhang (Pingbo); W. Zhang (Weihua)

    2015-01-01

    textabstractImputing genotypes from reference panels created by whole-genome sequencing (WGS) provides a cost-effective strategy for augmenting the single-nucleotide polymorphism (SNP) content of genome-wide arrays. The UK10K Cohorts project has generated a data set of 3,781 whole genomes sequenced

  17. Techniques for Performance Improvement of Integer Multiplication in Cryptographic Applications

    Directory of Open Access Journals (Sweden)

    Robert Brumnik

    2014-01-01

    Full Text Available The performance of arithmetic operations in number fields is actively researched by many scientists, as evidenced by significant publications in this field. In this work, we offer some techniques to increase the performance of software implementations of a finite field multiplication algorithm, for both 32-bit and 64-bit platforms. The developed technique, called the "delayed carry mechanism," avoids the need to propagate the carry of the most significant bit at each iteration of the sum-accumulation loop. This mechanism reduces the total number of additions and allows modern parallelization technologies to be applied effectively.
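
    A schematic of the delayed-carry idea, emulated on base-2^32 limbs (hypothetical Python; a real implementation would keep the column accumulator in a native 64-bit register in C or assembly):

      MASK32 = (1 << 32) - 1

      def mul_delayed_carry(a_limbs, b_limbs):
          # Column-wise (comba) multiplication of little-endian base-2^32
          # integers. Each column sum accumulates 32x32 -> 64-bit partial
          # products; the carry is resolved once per column instead of
          # after every addition: the "delayed carry mechanism".
          n, m = len(a_limbs), len(b_limbs)
          out = [0] * (n + m)
          carry = 0
          for k in range(n + m - 1):
              acc = carry
              for i in range(max(0, k - m + 1), min(n, k + 1)):
                  acc += a_limbs[i] * b_limbs[k - i]
              out[k] = acc & MASK32   # low 32 bits of the column
              carry = acc >> 32      # deferred into the next column
          out[n + m - 1] = carry
          return out

      a = [0xFFFFFFFF, 0x12345678]   # (0x12345678 << 32) | 0xFFFFFFFF
      b = [0x9ABCDEF0]
      limbs = mul_delayed_carry(a, b)
      value = sum(l << (32 * i) for i, l in enumerate(limbs))
      assert value == ((0x12345678 << 32) | 0xFFFFFFFF) * 0x9ABCDEF0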

  18. Imputation of Baseline LDL Cholesterol Concentration in Patients with Familial Hypercholesterolemia on Statins or Ezetimibe.

    Science.gov (United States)

    Ruel, Isabelle; Aljenedil, Sumayah; Sadri, Iman; de Varennes, Émilie; Hegele, Robert A; Couture, Patrick; Bergeron, Jean; Wanneh, Eric; Baass, Alexis; Dufour, Robert; Gaudet, Daniel; Brisson, Diane; Brunham, Liam R; Francis, Gordon A; Cermakova, Lubomira; Brophy, James M; Ryomoto, Arnold; Mancini, G B John; Genest, Jacques

    2018-02-01

    Familial hypercholesterolemia (FH) is the most frequent genetic disorder seen clinically and is characterized by increased LDL cholesterol (LDL-C) (>95th percentile), family history of increased LDL-C, premature atherosclerotic cardiovascular disease (ASCVD) in the patient or in first-degree relatives, presence of tendinous xanthomas or premature corneal arcus, or presence of a pathogenic mutation in the LDLR, PCSK9, or APOB genes. A diagnosis of FH has important clinical implications with respect to lifelong risk of ASCVD and requirement for intensive pharmacological therapy. The concentration of baseline (untreated) LDL-C is essential for the diagnosis of FH but is often not available because the individual is already on statin therapy. To validate a new algorithm to impute baseline LDL-C, we examined 1297 patients. The baseline LDL-C was compared with the imputed baseline obtained within 18 months of the initiation of therapy. We compared the percent reduction in LDL-C on treatment from baseline with the published percent reductions. After eliminating individuals with missing data, nonstandard doses of statins, or medications other than statins or ezetimibe, we provide data on 951 patients. The mean ± SE baseline LDL-C was 243.0 (2.2) mg/dL [6.28 (0.06) mmol/L], and the mean ± SE imputed baseline LDL-C was 244.2 (2.6) mg/dL [6.31 (0.07) mmol/L] (P = 0.48). There was no difference in response according to the patient's sex or in percent reduction between observed and expected for individual doses or types of statin or ezetimibe. We provide a validated estimation of baseline LDL-C for patients with FH that may help clinicians in making a diagnosis. © 2017 American Association for Clinical Chemistry.
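
    The imputation reduces to dividing the on-treatment value by the fraction of LDL-C remaining under the expected reduction for the regimen. A hypothetical Python sketch (the reduction table is invented for illustration and is not the validated table from the paper):

      # Hypothetical expected fractional LDL-C reductions by regimen.
      EXPECTED_REDUCTION = {
          ("atorvastatin", 40): 0.47,
          ("rosuvastatin", 20): 0.50,
          ("atorvastatin", 40, "ezetimibe"): 0.60,
      }

      def impute_baseline_ldl(on_treatment_ldl, regimen):
          # baseline = observed / (1 - expected fractional reduction)
          return on_treatment_ldl / (1.0 - EXPECTED_REDUCTION[regimen])

      # 3.3 mmol/L on atorvastatin 40 mg -> imputed baseline ~ 6.2 mmol/L
      print(round(impute_baseline_ldl(3.3, ("atorvastatin", 40)), 2))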

  19. Improved imputation of low-frequency and rare variants using the UK10K haplotype reference panel

    DEFF Research Database (Denmark)

    Huang, Jie; Howie, Bryan; Mccarthy, Shane

    2015-01-01

    Imputing genotypes from reference panels created by whole-genome sequencing (WGS) provides a cost-effective strategy for augmenting the single-nucleotide polymorphism (SNP) content of genome-wide arrays. The UK10K Cohorts project has generated a data set of 3,781 whole genomes sequenced at low de...

  20. Imputing historical statistics, soils information, and other land-use data to crop area

    Science.gov (United States)

    Perry, C. R., Jr.; Willis, R. W.; Lautenschlager, L.

    1982-01-01

    In foreign crop condition monitoring, satellite-acquired imagery is routinely used. To facilitate interpretation of this imagery, it is advantageous to have estimates of the crop types and their extent for small area units, i.e., grid cells on a map that, at 60 deg latitude, each represent an area nominally 25 by 25 nautical miles in size. The feasibility of imputing historical crop statistics, soils information, and other ancillary data to crop area for a province in Argentina is studied.

  1. Imputation of single nucleotide polymorhpism genotypes of Hereford cattle: reference panel size, family relationship and population structure

    Science.gov (United States)

    The objective of this study is to investigate single nucleotide polymorphism (SNP) genotypes imputation of Hereford cattle. Purebred Herefords were from two sources, Line 1 Hereford (N=240) and representatives of Industry Herefords (N=311). Using different reference panels of 62 and 494 males with 1...

  2. Multiple Solutions of Nonlinear Boundary Value Problems of Fractional Order: A New Analytic Iterative Technique

    Directory of Open Access Journals (Sweden)

    Omar Abu Arqub

    2014-01-01

    Full Text Available The purpose of this paper is to present a new kind of analytical method, the so-called residual power series, to predict and represent the multiplicity of solutions to nonlinear boundary value problems of fractional order. The present method is capable of calculating all branches of solutions simultaneously, even if these multiple solutions are very close and thus rather difficult to distinguish even by numerical techniques. To verify the computational efficiency of the proposed technique, two nonlinear models are solved, one of which arises in mixed convection flows and the other in heat transfer; both admit multiple solutions. The results reveal that the method is very effective, straightforward, and powerful for formulating these multiple solutions.

  3. Traffic speed data imputation method based on tensor completion.

    Science.gov (United States)

    Ran, Bin; Tan, Huachun; Feng, Jianshuai; Liu, Ying; Wang, Wuhong

    2015-01-01

    Traffic speed data plays a key role in Intelligent Transportation Systems (ITS); however, missing traffic data would affect the performance of ITS as well as Advanced Traveler Information Systems (ATIS). In this paper, we handle this issue by a novel tensor-based imputation approach. Specifically, tensor pattern is adopted for modeling traffic speed data and then High accurate Low Rank Tensor Completion (HaLRTC), an efficient tensor completion method, is employed to estimate the missing traffic speed data. This proposed method is able to recover missing entries from given entries, which may be noisy, considering severe fluctuation of traffic speed data compared with traffic volume. The proposed method is evaluated on Performance Measurement System (PeMS) database, and the experimental results show the superiority of the proposed approach over state-of-the-art baseline approaches.
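
    HaLRTC itself is not reproduced here, but the underlying low-rank completion idea can be sketched on a matrix with a truncated-SVD iteration (hypothetical Python; HaLRTC extends this to tensors via nuclear-norm relaxations of the mode unfoldings):

      import numpy as np

      def complete_lowrank(M, observed, rank=3, n_iter=200):
          # Alternate between a rank-r SVD approximation and restoring the
          # observed entries; missing entries converge to low-rank values.
          X = np.where(observed, M, 0.0)
          for _ in range(n_iter):
              U, s, Vt = np.linalg.svd(X, full_matrices=False)
              low = (U[:, :rank] * s[:rank]) @ Vt[:rank]
              X = np.where(observed, M, low)
          return X

      rng = np.random.default_rng(6)
      speeds = rng.random((20, 3)) @ rng.random((3, 30)) * 60  # rank-3 data
      observed = rng.random(speeds.shape) > 0.3                # ~30% missing
      est = complete_lowrank(speeds, observed)
      print(np.abs(est - speeds)[~observed].mean())  # error on missing entries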

  4. Cohort-specific imputation of gene expression improves prediction of warfarin dose for African Americans.

    Science.gov (United States)

    Gottlieb, Assaf; Daneshjou, Roxana; DeGorter, Marianne; Bourgeois, Stephane; Svensson, Peter J; Wadelius, Mia; Deloukas, Panos; Montgomery, Stephen B; Altman, Russ B

    2017-11-24

    Genome-wide association studies are useful for discovering genotype-phenotype associations but are limited because they require large cohorts to identify a signal, which can be population-specific. Mapping genetic variation to genes improves power and allows the effects of both protein-coding variation as well as variation in expression to be combined into "gene level" effects. Previous work has shown that warfarin dose can be predicted using information from genetic variation that affects protein-coding regions. Here, we introduce a method that improves dose prediction by integrating tissue-specific gene expression. In particular, we use drug pathways and expression quantitative trait loci knowledge to impute gene expression-on the assumption that differential expression of key pathway genes may impact dose requirement. We focus on 116 genes from the pharmacokinetic and pharmacodynamic pathways of warfarin within training and validation sets comprising both European and African-descent individuals. We build gene-tissue signatures associated with warfarin dose in a cohort-specific manner and identify a signature of 11 gene-tissue pairs that significantly augments the International Warfarin Pharmacogenetics Consortium dosage-prediction algorithm in both populations. Our results demonstrate that imputed expression can improve dose prediction and bridge population-specific compositions. MATLAB code is available at https://github.com/assafgo/warfarin-cohort.

  5. 31 CFR 19.630 - May the Department of the Treasury impute conduct of one person to another?

    Science.gov (United States)

    2010-07-01

    May the Department of the Treasury impute conduct of one person to another? Section 19.630, Money and Finance: Treasury, Office of the Secretary of the Treasury, GOVERNMENTWIDE DEBARMENT AND SUSPENSION (NONPROCUREMENT), General Principles...

  6. The Use of Imputed Sibling Genotypes in Sibship-Based Association Analysis: On Modeling Alternatives, Power and Model Misspecification

    NARCIS (Netherlands)

    Minica, C.C.; Dolan, C.V.; Willemsen, G.; Vink, J.M.; Boomsma, D.I.

    2013-01-01

    When phenotypic, but no genotypic data are available for relatives of participants in genetic association studies, previous research has shown that family-based imputed genotypes can boost the statistical power when included in such studies. Here, using simulations, we compared the performance of

  7. Semiautomatic imputation of activity travel diaries : use of global positioning system traces, prompted recall, and context-sensitive learning algorithms

    NARCIS (Netherlands)

    Moiseeva, A.; Jessurun, A.J.; Timmermans, H.J.P.; Stopher, P.

    2016-01-01

    Anastasia Moiseeva, Joran Jessurun and Harry Timmermans (2010), ‘Semiautomatic Imputation of Activity Travel Diaries: Use of Global Positioning System Traces, Prompted Recall, and Context-Sensitive Learning Algorithms’, Transportation Research Record: Journal of the Transportation Research Board,

  8. Estimating cavity tree and snag abundance using negative binomial regression models and nearest neighbor imputation methods

    Science.gov (United States)

    Bianca N.I. Eskelson; Hailemariam Temesgen; Tara M. Barrett

    2009-01-01

    Cavity tree and snag abundance data are highly variable and contain many zero observations. We predict cavity tree and snag abundance from variables that are readily available from forest cover maps or remotely sensed data using negative binomial (NB), zero-inflated NB, and zero-altered NB (ZANB) regression models as well as nearest neighbor (NN) imputation methods....

  9. Non-imputability, criminal dangerousness and curative safety measures: myths and realities

    Directory of Open Access Journals (Sweden)

    Frank Harbottle Quirós

    2017-04-01

    Full Text Available Curative safety measures are imposed in criminal proceedings on non-imputable persons, provided that a prognosis concludes affirmatively as to their criminal dangerousness. Although this statement seems elementary, in judicial practice several myths persist in relation to these legal institutes, and their versions may vary to a greater or lesser extent between different countries of the world. In this context, the present article formulates ten myths based on the experience of Costa Rica and provides explanations that seek to weaken or knock them down, inviting the reader to reflect on them.

  10. 29 CFR 1471.630 - May the Federal Mediation and Conciliation Service impute conduct of one person to another?

    Science.gov (United States)

    2010-07-01

    May the Federal Mediation and Conciliation Service impute conduct of one person to another? Section 1471.630, Labor, FEDERAL MEDIATION AND CONCILIATION SERVICE, GOVERNMENTWIDE DEBARMENT AND SUSPENSION (NONPROCUREMENT), General Principles Relating to Suspension and Debarment Actions...

  11. The use of the bootstrap in the analysis of case-control studies with missing data

    DEFF Research Database (Denmark)

    Siersma, Volkert Dirk; Johansen, Christoffer

    2004-01-01

    nonparametric bootstrap, bootstrap confidence intervals, missing values, multiple imputation, matched case-control study.

  12. Traffic Speed Data Imputation Method Based on Tensor Completion

    Directory of Open Access Journals (Sweden)

    Bin Ran

    2015-01-01

    Full Text Available Traffic speed data plays a key role in Intelligent Transportation Systems (ITS); however, missing traffic data would affect the performance of ITS as well as Advanced Traveler Information Systems (ATIS). In this paper, we handle this issue by a novel tensor-based imputation approach. Specifically, tensor pattern is adopted for modeling traffic speed data and then High accurate Low Rank Tensor Completion (HaLRTC), an efficient tensor completion method, is employed to estimate the missing traffic speed data. This proposed method is able to recover missing entries from given entries, which may be noisy, considering severe fluctuation of traffic speed data compared with traffic volume. The proposed method is evaluated on Performance Measurement System (PeMS) database, and the experimental results show the superiority of the proposed approach over state-of-the-art baseline approaches.

  13. [Investigation of RNA viral genome amplification by multiple displacement amplification technique].

    Science.gov (United States)

    Pang, Zheng; Li, Jian-Dong; Li, Chuan; Liang, Mi-Fang; Li, De-Xin

    2013-06-01

    In order to facilitate the detection of newly emerging or rare viral infectious diseases, a negative-strand RNA virus (severe fever with thrombocytopenia syndrome bunyavirus) and a positive-strand RNA virus (dengue virus) were used to investigate nonspecific amplification of RNA viral genomes from clinical samples by the multiple displacement amplification technique. Serial 10-fold dilutions of purified viral RNA were used as mock samples with different pathogen loads. After a series of reactions was processed sequentially, single-stranded cDNA, double-stranded cDNA, and double-stranded cDNA treated with ligation, without or with supplemental RNA, were generated; then Phi29 DNA polymerase-dependent isothermal amplification was employed, and finally the target gene copies were detected by real-time PCR assays to evaluate the amplification efficiencies of the various methods. The results showed that the multiple displacement amplification effects of single-stranded or double-stranded cDNA templates were limited, while the fold increase for double-stranded cDNA templates treated with ligation could be up to 6 × 10^3, and even 2 × 10^5 when supplemental RNA was present, with better results obtained when viral RNA loads were lower. An RNA viral genome amplification system using the multiple displacement amplification technique was established in this study, and effective amplification of RNA viral genomes with low loads was achieved, which could provide a tool for synthesizing adequate viral genome material for multiplex pathogen detection.

  14. Implementation of Multiple Access Techniques Applicable for Maritime Satellite Communications

    OpenAIRE

    Stojce Dimov Ilcev

    2013-01-01

    In this paper are introduced fundamentals, characteristics, advantages and disadvantages of Multiple Access (MA) employed as transmission techniques in the Maritime Mobile Satellite Communications (MMSC) between ships and Coast Earth Station (CES) via Geostationary Earth Orbit (GEO) or Not-GEO satellite constellations. In fixed satellite communication, as a rule, especially in MMSC many users are active at the same time. The problem of simultaneous communications between many single or multip...

  15. Improving accuracy of genomic prediction in Brangus cattle by adding animals with imputed low-density SNP genotypes.

    Science.gov (United States)

    Lopes, F B; Wu, X-L; Li, H; Xu, J; Perkins, T; Genho, J; Ferretti, R; Tait, R G; Bauck, S; Rosa, G J M

    2018-02-01

    Reliable genomic prediction of breeding values for quantitative traits requires a sufficient number of animals with genotypes and phenotypes in the training set. As of 31 October 2016, there were 3,797 Brangus animals with genotypes and phenotypes. These Brangus animals were genotyped using different commercial SNP chips. Of them, the largest group consisted of 1,535 animals genotyped with the GGP-LDV4 SNP chip. The remaining 2,262 genotypes were imputed to the SNP content of the GGP-LDV4 chip, so that the number of animals available for training the genomic prediction models was more than doubled. The present study showed that pooling animals with either original or imputed 40K SNP genotypes substantially increased genomic prediction accuracies for the ten traits. By supplementing imputed genotypes, the relative gains in genomic prediction accuracy on estimated breeding values (EBV) were from 12.60% to 31.27%, and the relative gains in genomic prediction accuracy on de-regressed EBV were somewhat smaller (0.87%-18.75%). The present study also compared the performance of five genomic prediction models and two cross-validation methods. The five genomic models predicted EBV and de-regressed EBV of the ten traits similarly well. Of the two cross-validation methods, leave-one-out cross-validation maximized the number of animals available at the training stage for genomic prediction. Genomic prediction accuracy (GPA) on the ten quantitative traits was validated in 1,106 newly genotyped Brangus animals based on the SNP effects estimated in the previous set of 3,797 Brangus animals, and it was slightly lower than the GPA in the original data. The present study was the first to leverage currently available genotype and phenotype resources to harness genomic prediction in Brangus beef cattle. © 2018 Blackwell Verlag GmbH.

  16. Missing Value Imputation Based on Gaussian Mixture Model for the Internet of Things

    OpenAIRE

    Yan, Xiaobo; Xiong, Weiqing; Hu, Liang; Wang, Feng; Zhao, Kuo

    2015-01-01

    This paper addresses missing value imputation for the Internet of Things (IoT). Nowadays, the IoT is used widely and commonly across a variety of domains, such as the transportation and logistics domain and the healthcare domain. However, missing values are very common in the IoT for a variety of reasons, which leaves the experimental data incomplete. As a result, some work related to IoT data can’t be carried out normally. And it leads to the red...

  17. Cohort-specific imputation of gene expression improves prediction of warfarin dose for African Americans

    Directory of Open Access Journals (Sweden)

    Assaf Gottlieb

    2017-11-01

    Full Text Available Abstract. Background: Genome-wide association studies are useful for discovering genotype–phenotype associations but are limited because they require large cohorts to identify a signal, which can be population-specific. Mapping genetic variation to genes improves power and allows the effects of both protein-coding variation as well as variation in expression to be combined into “gene level” effects. Methods: Previous work has shown that warfarin dose can be predicted using information from genetic variation that affects protein-coding regions. Here, we introduce a method that improves dose prediction by integrating tissue-specific gene expression. In particular, we use drug pathways and expression quantitative trait loci knowledge to impute gene expression, on the assumption that differential expression of key pathway genes may impact dose requirement. We focus on 116 genes from the pharmacokinetic and pharmacodynamic pathways of warfarin within training and validation sets comprising both European and African-descent individuals. Results: We build gene-tissue signatures associated with warfarin dose in a cohort-specific manner and identify a signature of 11 gene-tissue pairs that significantly augments the International Warfarin Pharmacogenetics Consortium dosage-prediction algorithm in both populations. Conclusions: Our results demonstrate that imputed expression can improve dose prediction and bridge population-specific compositions. MATLAB code is available at https://github.com/assafgo/warfarin-cohort

  18. Imputed DC Link (IDCL) cell-based power converters and control thereof

    Science.gov (United States)

    Divan, Deepakraj M.; Prasai, Anish; Hernendez, Jorge; Moghe, Rohit; Iyer, Amrit; Kandula, Rajendra Prasad

    2016-04-26

    Power flow controllers based on Imputed DC Link (IDCL) cells are provided. The IDCL cell is a self-contained power electronic building block (PEBB). IDCL cells may be stacked in series and parallel to achieve power flow control at higher voltage and current levels. Each IDCL cell may comprise a gate drive, a voltage sharing module, and a thermal management component in order to facilitate easy integration of the cell into a variety of applications. By providing direct AC conversion, IDCL-cell-based AC/AC converters reduce device count, eliminate the use of electrolytic capacitors that have life and reliability issues, and improve system efficiency compared with a similarly rated back-to-back inverter system.

  19. A novel fiber-free technique for brain activity imaging in multiple freely behaving mice

    Science.gov (United States)

    Inagaki, Shigenori; Agetsuma, Masakazu; Nagai, Takeharu

    2018-02-01

    Brain functions and related psychiatric disorders have been investigated by recording the electrophysiological field potential. Conventional recording methods require fiber-based apparatus connected to the brain, which hampers simultaneous measurement in multiple animals (e.g. through a tangle of fibers). Here, we propose a fiber-free recording technique in conjunction with a ratiometric bioluminescent voltage indicator. Our method allows investigation of electrophysiological field potential dynamics in multiple freely behaving animals simultaneously over a long time period. This fiber-free technique therefore opens the way to investigating new mechanisms of brain function that govern social behaviors and animal-to-animal interaction.

  20. Reducing BER of spectral-amplitude coding optical code-division multiple-access systems by single photodiode detection technique

    Science.gov (United States)

    Al-Khafaji, H. M. R.; Aljunid, S. A.; Amphawan, A.; Fadhil, H. A.; Safar, A. M.

    2013-03-01

    In this paper, we present a single photodiode detection (SPD) technique for spectral-amplitude coding optical code-division multiple-access (SAC-OCDMA) systems. The proposed technique eliminates both phase-induced intensity noise (PIIN) and multiple-access interference (MAI) in the optical domain. Analytical results show that for 35 simultaneous users transmitting at a data rate of 622 Mbps, the bit-error rate (BER) of 1.4 × 10^-28 for the SPD technique is much better than the 9.3 × 10^-6 and 9.6 × 10^-3 achieved by the modified-AND and the AND detection techniques, respectively. Moreover, we verified the improved performance afforded by the proposed technique using data transmission simulations.

  1. A suggested approach for imputation of missing dietary data for young children in daycare

    OpenAIRE

    Stevens, June; Ou, Fang-Shu; Truesdale, Kimberly P.; Zeng, Donglin; Vaughn, Amber E.; Pratt, Charlotte; Ward, Dianne S.

    2015-01-01

    Background: Parent-reported 24-h diet recalls are an accepted method of estimating intake in young children. However, many children eat while at childcare, making accurate proxy reports by parents difficult. Objective: The goal of this study was to demonstrate a method to impute missing weekday lunch and daytime snack nutrient data for daycare children and to explore the concurrent predictive and criterion validity of the method. Design: Data were from children aged 2-5 years in the My Parenting...

  2. GRIMP: A web- and grid-based tool for high-speed analysis of large-scale genome-wide association using imputed data.

    NARCIS (Netherlands)

    K. Estrada Gil (Karol); A. Abuseiris (Anis); F.G. Grosveld (Frank); A.G. Uitterlinden (André); T.A. Knoch (Tobias); F. Rivadeneira Ramirez (Fernando)

    2009-01-01

    textabstractThe current fast growth of genome-wide association studies (GWAS) combined with now common computationally expensive imputation requires the online access of large user groups to high-performance computing resources capable of analyzing rapidly and efficiently millions of genetic

  3. Techniques necessary for multiple tracer quantitative small-animal imaging studies

    International Nuclear Information System (INIS)

    Sharp, Terry L.; Dence, Carmen S.; Engelbach, John A.; Herrero, Pilar; Gropler, Robert J.; Welch, Michael J.

    2005-01-01

    Introduction: An increasing number and variety of studies on rodent models are being conducted using small-animal positron emission tomography scanners. We aimed to determine if animal handling techniques could be developed to perform routine animal imaging in a timely and efficient manner and with minimal effect on animal physiology. These techniques need to be reproducible in the same animal while maintaining hemodynamic and physiological stability. Methods: The necessary techniques include (a) the use of inhalant anesthesia, (b) arterial and venous cannulation for multiple tracer administrations and blood sampling, (c) development of small-volume analytic columns and techniques and (d) measurement of the physiological environment during the imaging session. Results: We provide an example of a cardiac imaging study using four radiotracers (15O-water, 1-[11C]-acetate, 1-[11C]-palmitate and 1-[11C]-glucose) injected into normal rats. Plasma substrates, CO2 production and total metabolites were measured. The animals remained anesthetized over the entire imaging session, and their physiological state was maintained. Conclusion: The intrastudy stability of the physiological measurements and substrate levels and the interstudy reproducibility of the measurements are reported.

  4. Mapping wildland fuels and forest structure for land management: a comparison of nearest neighbor imputation and other methods

    Science.gov (United States)

    Kenneth B. Pierce; Janet L. Ohmann; Michael C. Wimberly; Matthew J. Gregory; Jeremy S. Fried

    2009-01-01

    Land managers need consistent information about the geographic distribution of wildland fuels and forest structure over large areas to evaluate fire risk and plan fuel treatments. We compared spatial predictions for 12 fuel and forest structure variables across three regions in the western United States using gradient nearest neighbor (GNN) imputation, linear models (...

  5. Treating pre-instrumental data as "missing" data: using a tree-ring-based paleoclimate record and imputations to reconstruct streamflow in the Missouri River Basin

    Science.gov (United States)

    Ho, M. W.; Lall, U.; Cook, E. R.

    2015-12-01

    Advances in paleoclimatology in the past few decades have provided opportunities to expand the temporal perspective of the hydrological and climatological variability across the world. The North American region is particularly fortunate in this respect where a relatively dense network of high resolution paleoclimate proxy records have been assembled. One such network is the annually-resolved Living Blended Drought Atlas (LBDA): a paleoclimate reconstruction of the Palmer Drought Severity Index (PDSI) that covers North America on a 0.5° × 0.5° grid based on tree-ring chronologies. However, the use of the LBDA to assess North American streamflow variability requires a model by which streamflow may be reconstructed. Paleoclimate reconstructions have typically used models that first seek to quantify the relationship between the paleoclimate variable and the environmental variable of interest before extrapolating the relationship back in time. In contrast, the pre-instrumental streamflow is here considered as "missing" data. A method of imputing the "missing" streamflow data, prior to the instrumental record, is applied through multiple imputation using chained equations for streamflow in the Missouri River Basin. In this method, the distribution of the instrumental streamflow and LBDA is used to estimate sets of plausible values for the "missing" streamflow data resulting in a ~600 year-long streamflow reconstruction. Past research into external climate forcings, oceanic-atmospheric variability and its teleconnections, and assessments of rare multi-centennial instrumental records demonstrate that large temporal oscillations in hydrological conditions are unlikely to be captured in most instrumental records. The reconstruction of multi-centennial records of streamflow will enable comprehensive assessments of current and future water resource infrastructure and operations under the existing scope of natural climate variability.
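
    The chained-equations imputation step described here can be sketched with scikit-learn's IterativeImputer; the column names, the synthetic PDSI/streamflow relationship, and the five-imputation loop below are illustrative assumptions, not the authors' code.

    ```python
    # Sketch: treat pre-instrumental streamflow as "missing" and impute it from a
    # tree-ring-based PDSI reconstruction, in the spirit of multiple imputation
    # by chained equations. Data and column names are synthetic stand-ins.
    import numpy as np
    import pandas as pd
    from sklearn.experimental import enable_iterative_imputer  # noqa: F401
    from sklearn.impute import IterativeImputer

    rng = np.random.default_rng(0)
    years = np.arange(1400, 2001)
    pdsi = rng.normal(0, 1, years.size)                    # stand-in for LBDA PDSI
    flow = 100 + 25 * pdsi + rng.normal(0, 5, years.size)  # synthetic streamflow
    flow[years < 1929] = np.nan                            # pre-instrumental era

    data = pd.DataFrame({"pdsi": pdsi, "flow": flow})
    # sample_posterior=True draws plausible values rather than conditional means,
    # so the five runs below form a set of imputations that reflects uncertainty.
    imputations = [
        IterativeImputer(sample_posterior=True, random_state=m).fit_transform(data)[:, 1]
        for m in range(5)
    ]
    reconstruction = np.mean(imputations, axis=0)
    print(reconstruction[:10])
    ```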

  6. Methods for Mediation Analysis with Missing Data

    Science.gov (United States)

    Zhang, Zhiyong; Wang, Lijuan

    2013-01-01

    Despite wide applications of both mediation models and missing data techniques, formal discussion of mediation analysis with missing data is still rare. We introduce and compare four approaches to dealing with missing data in mediation analysis, including listwise deletion, pairwise deletion, multiple imputation (MI), and a two-stage maximum…

  7. A comparison of genomic selection models across time in interior spruce (Picea engelmannii × glauca) using unordered SNP imputation methods.

    Science.gov (United States)

    Ratcliffe, B; El-Dien, O G; Klápště, J; Porth, I; Chen, C; Jaquish, B; El-Kassaby, Y A

    2015-12-01

    Genomic selection (GS) potentially offers an unparalleled advantage over traditional pedigree-based selection (TS) methods by reducing the time commitment required to carry out a single cycle of tree improvement. This quality is particularly appealing to tree breeders, where lengthy improvement cycles are the norm. We explored the prospect of implementing GS for interior spruce (Picea engelmannii × glauca) utilizing a genotyped population of 769 trees belonging to 25 open-pollinated families. A series of repeated tree height measurements through ages 3-40 years permitted the testing of GS methods temporally. The genotyping-by-sequencing (GBS) platform was used for single nucleotide polymorphism (SNP) discovery in conjunction with three unordered imputation methods applied to a data set with 60% missing information. Further, three diverse GS models were evaluated based on predictive accuracy (PA) and their marker effects. Moderate levels of PA (0.31-0.55) were observed and were of sufficient capacity to deliver improved selection response over TS. Additionally, PA varied substantially through time in line with spatial competition among trees. As expected, temporal PA was well correlated with age-age genetic correlation (r=0.99), and decreased substantially with increasing difference in age between the training and validation populations (0.04-0.47). Moreover, our imputation comparisons indicate that k-nearest neighbor and singular value decomposition yielded a greater number of SNPs and higher predictive accuracies than imputing with the mean. Furthermore, the ridge regression (rrBLUP) and BayesCπ (BCπ) models yielded equal PA, both better than that of the generalized ridge regression heteroscedastic effect model for the traits evaluated.
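
    The imputation comparison can be illustrated with scikit-learn's imputers; the toy 0/1/2 genotype matrix, the missingness rate, and the choice of k below are assumptions for demonstration, not the study's pipeline.

    ```python
    # Sketch: unordered SNP imputation by k-nearest neighbours versus the column
    # mean, on a toy 0/1/2 genotype matrix with ~60% of entries missing.
    import numpy as np
    from sklearn.impute import KNNImputer, SimpleImputer

    rng = np.random.default_rng(42)
    genotypes = rng.integers(0, 3, size=(200, 50)).astype(float)
    mask = rng.random(genotypes.shape) < 0.6   # ~60% missing, as in the study
    observed = genotypes.copy()
    observed[mask] = np.nan

    knn = KNNImputer(n_neighbors=10).fit_transform(observed)
    mean = SimpleImputer(strategy="mean").fit_transform(observed)
    for name, imputed in [("kNN", knn), ("mean", mean)]:
        rmse = np.sqrt(np.mean((imputed[mask] - genotypes[mask]) ** 2))
        print(f"{name} imputation RMSE: {rmse:.3f}")
    ```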

  8. Using beta coefficients to impute missing correlations in meta-analysis research: Reasons for caution.

    Science.gov (United States)

    Roth, Philip L; Le, Huy; Oh, In-Sue; Van Iddekinge, Chad H; Bobko, Philip

    2018-06-01

    Meta-analysis has become a well-accepted method for synthesizing empirical research about a given phenomenon. Many meta-analyses focus on synthesizing correlations across primary studies, but some primary studies do not report correlations. Peterson and Brown (2005) suggested that researchers could use standardized regression weights (i.e., beta coefficients) to impute missing correlations. Indeed, their beta estimation procedures (BEPs) have been used in meta-analyses in a wide variety of fields. In this study, the authors evaluated the accuracy of BEPs in meta-analysis. We first examined how use of BEPs might affect results from a published meta-analysis. We then developed a series of Monte Carlo simulations that systematically compared the use of existing correlations (that were not missing) to data sets that incorporated BEPs (that impute missing correlations from corresponding beta coefficients). These simulations estimated ρ̄ (mean population correlation) and SDρ (true standard deviation) across a variety of meta-analytic conditions. Results from both the existing meta-analysis and the Monte Carlo simulations revealed that BEPs were associated with potentially large biases when estimating ρ̄ and even larger biases when estimating SDρ. Using only existing correlations often substantially outperformed use of BEPs and virtually never performed worse than BEPs. Overall, the authors urge a return to the standard practice of using only existing correlations in meta-analysis. (PsycINFO Database Record (c) 2018 APA, all rights reserved).
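
    The conversion at issue is simple enough to state in code. The sketch below assumes the commonly cited form of Peterson and Brown's rule (r ≈ β + .05λ, with λ = 1 for nonnegative β and 0 otherwise); it is one reading of the BEP for illustration, not the authors' simulation code, and the abstract above is precisely a caution against relying on it.

    ```python
    # Sketch: the beta estimation procedure (BEP) for imputing a missing
    # correlation from a standardized regression weight, in the commonly cited
    # form r_hat = beta + 0.05 * lambda (lambda = 1 if beta >= 0, else 0).
    # The study above reports that this imputation can introduce large biases.
    def impute_r_from_beta(beta: float) -> float:
        lam = 1.0 if beta >= 0 else 0.0
        return beta + 0.05 * lam

    print(impute_r_from_beta(0.30))   # 0.35
    print(impute_r_from_beta(-0.20))  # -0.20
    ```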

  9. A sectional-splinting technique for impressing multiple implant units by eliminating the use of an open tray

    Directory of Open Access Journals (Sweden)

    Suryakant C. Deogade

    2014-01-01

    Full Text Available Since the inception of root-form implant dentistry by P-I Branemark in the early 1980s, many technical advances have been put forward by several authors. However, the open tray impression technique is still performed for impressing multiple implant fixtures as first described in the original Branemark procedure manual. The most critical aspect of a successful implant-supported restoration is the passive and accurate fit of superstructures, which avoids preload and loading stresses. The splinting impression technique for multiple implants has gained popularity. Auto-polymerizing acrylic resin is among the most routinely practiced splinting materials for multiple implant units. Unfortunately, however, it exhibits shrinkage, which makes an impression inaccurate. This case report presents a solution to minimize the shrinkage of resin by utilizing the sectional-splinting technique advocated in the previous implant literature.

  10. Analysis of multiple instructional techniques on the understanding and retention of select mechanical topics

    Science.gov (United States)

    Fetsco, Sara Elizabeth

    There are several topics that introductory physics students typically have difficulty understanding. The purpose of this thesis is to investigate if multiple instructional techniques will help students to better understand and retain the material. The three units analyzed in this study are graphing motion, projectile motion, and conservation of momentum. For each unit students were taught using new or altered instructional methods including online laboratory simulations, inquiry labs, and interactive demonstrations. Additionally, traditional instructional methods such as lecture and problem sets were retained. Effectiveness was measured through pre- and post-tests and student opinion surveys. Results suggest that incorporating multiple instructional techniques into teaching will improve student understanding and retention. Students stated that they learned well from all of the instructional methods used except the online simulations.

  11. Preparation of thin actinide metal disks using a multiple disk casting technique

    International Nuclear Information System (INIS)

    Conner, W.V.

    1975-01-01

    A casting technique has been developed for preparing multiple actinide metal disks which have a minimum thickness of 0.006 inch. This technique was based on an injection casting procedure which utilizes the weight of a tantalum metal rod to force the molten metal into the mold cavity. Using the proper mold design and casting parameters, it has been possible to prepare ten 1/2 inch diameter neptunium or plutonium metal disks in a single casting, This casting technique is capable of producing disks which are very uniform. The average thickness of the disks from a typical casting will vary no more than 0.001 inch and the variation in the thickness of the individual disks will range from 0.0001 to 0.0005 inch. (Auth.)

  12. Preparation of thin actinide metal disks using a multiple disk casting technique

    International Nuclear Information System (INIS)

    Conner, W.V.

    1976-01-01

    A casting technique has been developed for preparing multiple actinide metal disks which have a minimum thickness of 0.006 inch. This technique was based on an injection casting procedure which utilizes the weight of a tantalum metal rod to force the molten metal into the mold cavity. Using the proper mold design and casting parameters, it has been possible to prepare ten 1/2 inch diameter neptunium or plutonium metal disks in a single casting. This casting technique is capable of producing disks which are very uniform. The average thickness of the disks from a typical casting will vary no more than 0.001 inch and the variation in the thickness of the individual disks will range from 0.0001 to 0.0005 inch. (author)

  13. Breast Cancer and Modifiable Lifestyle Factors in Argentinean Women: Addressing Missing Data in a Case-Control Study

    Science.gov (United States)

    Coquet, Julia Becaria; Tumas, Natalia; Osella, Alberto Ruben; Tanzi, Matteo; Franco, Isabella; Diaz, Maria Del Pilar

    2016-01-01

    A number of studies have evidenced the effect of modifiable lifestyle factors such as diet, breastfeeding and nutritional status on breast cancer risk. However, none have addressed the missing data problem in nutritional epidemiologic research in South America. Missing data are a frequent problem in breast cancer studies and in epidemiological settings in general, and estimates of effect obtained from these studies may be biased if no appropriate method for handling missing data is applied. We performed multiple imputation for missing values on covariates in a breast cancer case-control study of Córdoba (Argentina) to optimize risk estimates. Data were obtained from a breast cancer case-control study conducted from 2008 to 2015 (318 cases, 526 controls). Complete case analysis and multiple imputation using chained equations were the methods applied to estimate the effects of a Traditional dietary pattern and other recognized factors associated with breast cancer. Physical activity and socioeconomic status were imputed. Logistic regression models were performed. When complete case analysis was performed, only 31% of women were considered. Although a positive association of the Traditional dietary pattern with breast cancer was observed with both approaches (complete case analysis OR=1.3, 95%CI=1.0-1.7; multiple imputation OR=1.4, 95%CI=1.2-1.7), effects of other covariates, like BMI and breastfeeding, were only identified when multiple imputation was considered. A Traditional dietary pattern, BMI and breastfeeding are associated with the occurrence of breast cancer in this Argentinean population when multiple imputation is appropriately performed. Multiple imputation is suggested for Latin American epidemiologic studies to optimize effect estimates in the future. PMID:27892664
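
    A chained-equations workflow of the kind described can be sketched with statsmodels' MICE tools; the variable names, missingness pattern, and model formula below are synthetic placeholders, not the study's covariates.

    ```python
    # Sketch: multiple imputation by chained equations followed by logistic
    # regression, mirroring the analysis pattern described above.
    import numpy as np
    import pandas as pd
    import statsmodels.api as sm
    from statsmodels.imputation.mice import MICE, MICEData

    rng = np.random.default_rng(1)
    n = 844  # 318 cases + 526 controls
    df = pd.DataFrame({
        "case": rng.integers(0, 2, n).astype(float),
        "diet_pattern": rng.normal(0, 1, n),
        "bmi": rng.normal(26, 4, n),
        "phys_activity": rng.normal(0, 1, n),
    })
    df.loc[rng.random(n) < 0.3, "phys_activity"] = np.nan  # covariate to impute

    imp = MICEData(df)
    mice = MICE("case ~ diet_pattern + bmi + phys_activity", sm.GLM, imp,
                init_kwds={"family": sm.families.Binomial()})
    result = mice.fit(n_burnin=5, n_imputations=10)
    print(result.summary())
    ```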

  14. Multiple predictor smoothing methods for sensitivity analysis: Description of techniques

    International Nuclear Information System (INIS)

    Storlie, Curtis B.; Helton, Jon C.

    2008-01-01

    The use of multiple predictor smoothing methods in sampling-based sensitivity analyses of complex models is investigated. Specifically, sensitivity analysis procedures based on smoothing methods employing the stepwise application of the following nonparametric regression techniques are described: (i) locally weighted regression (LOESS), (ii) additive models, (iii) projection pursuit regression, and (iv) recursive partitioning regression. Then, in the second and concluding part of this presentation, the indicated procedures are illustrated with both simple test problems and results from a performance assessment for a radioactive waste disposal facility (i.e., the Waste Isolation Pilot Plant). As shown by the example illustrations, the use of smoothing procedures based on nonparametric regression techniques can yield more informative sensitivity analysis results than can be obtained with more traditional sensitivity analysis procedures based on linear regression, rank regression or quadratic regression when nonlinear relationships between model inputs and model predictions are present.
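
    As a rough illustration, the sketch below ranks inputs by how much a LOESS fit of the output against each input reduces residual variance; this is a simplified stand-in for the stepwise procedures the paper describes, with a made-up test model.

    ```python
    # Sketch: smoothing-based sensitivity analysis. Each input is scored by the
    # variance explained when the output is smoothed against it with LOESS, a
    # single-variable simplification of the stepwise procedures described above.
    import numpy as np
    from statsmodels.nonparametric.smoothers_lowess import lowess

    rng = np.random.default_rng(7)
    n = 500
    X = rng.uniform(-1, 1, size=(n, 3))
    # Nonlinear test model: input 0 matters most, input 2 not at all.
    y = np.sin(3 * X[:, 0]) + 0.3 * X[:, 1] ** 2 + rng.normal(0, 0.1, n)

    for j in range(X.shape[1]):
        fitted = lowess(y, X[:, j], frac=0.4, return_sorted=False)
        r2 = 1 - np.var(y - fitted) / np.var(y)
        print(f"input {j}: LOESS R^2 = {r2:.3f}")
    ```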

  15. On Matrix Sampling and Imputation of Context Questionnaires with Implications for the Generation of Plausible Values in Large-Scale Assessments

    Science.gov (United States)

    Kaplan, David; Su, Dan

    2016-01-01

    This article presents findings on the consequences of matrix sampling of context questionnaires for the generation of plausible values in large-scale assessments. Three studies are conducted. Study 1 uses data from PISA 2012 to examine several different forms of missing data imputation within the chained equations framework: predictive mean…

  16. 21 CFR 1404.630 - May the Office of National Drug Control Policy impute conduct of one person to another?

    Science.gov (United States)

    2010-04-01

    21 CFR, Food and Drugs (2010-04-01): May the Office of National Drug Control Policy impute conduct of one person to another? Section 1404.630, Office of National Drug Control Policy, Governmentwide Debarment and Suspension (Nonprocurement), General Principles Relating to Suspension and Debarment Actions, § 1404.630...

  17. A new strategy for enhancing imputation quality of rare variants from next-generation sequencing data via combining SNP and exome chip data

    NARCIS (Netherlands)

    Y.J. Kim (Young Jin); J. Lee (Juyoung); B.-J. Kim (Bong-Jo); T. Park (Taesung); G.R. Abecasis (Gonçalo); M.A.A. De Almeida (Marcio); D. Altshuler (David); J.L. Asimit (Jennifer L.); G. Atzmon (Gil); M. Barber (Mathew); A. Barzilai (Ari); N.L. Beer (Nicola L.); G.I. Bell (Graeme I.); J. Below (Jennifer); T. Blackwell (Tom); J. Blangero (John); M. Boehnke (Michael); D.W. Bowden (Donald W.); N.P. Burtt (Noël); J.C. Chambers (John); H. Chen (Han); P. Chen (Ping); P.S. Chines (Peter); S. Choi (Sungkyoung); C. Churchhouse (Claire); P. Cingolani (Pablo); B.K. Cornes (Belinda); N.J. Cox (Nancy); A.G. Day-Williams (Aaron); A. Duggirala (Aparna); J. Dupuis (Josée); T. Dyer (Thomas); S. Feng (Shuang); J. Fernandez-Tajes (Juan); T. Ferreira (Teresa); T.E. Fingerlin (Tasha E.); J. Flannick (Jason); J.C. Florez (Jose); P. Fontanillas (Pierre); T.M. Frayling (Timothy); C. Fuchsberger (Christian); E. Gamazon (Eric); K. Gaulton (Kyle); S. Ghosh (Saurabh); B. Glaser (Benjamin); A.L. Gloyn (Anna); R.L. Grossman (Robert L.); J. Grundstad (Jason); C. Hanis (Craig); A. Heath (Allison); H. Highland (Heather); M. Horikoshi (Momoko); I.-S. Huh (Ik-Soo); J.R. Huyghe (Jeroen R.); M.K. Ikram (Kamran); K.A. Jablonski (Kathleen); Y. Jun (Yang); N. Kato (Norihiro); J. Kim (Jayoun); Y.J. Kim (Young Jin); B.-J. Kim (Bong-Jo); J. Lee (Juyoung); C.R. King (C. Ryan); J.S. Kooner (Jaspal S.); M.-S. Kwon (Min-Seok); H.K. Im (Hae Kyung); M. Laakso (Markku); K.K.-Y. Lam (Kevin Koi-Yau); J. Lee (Jaehoon); S. Lee (Selyeong); S. Lee (Sungyoung); D.M. Lehman (Donna M.); H. Li (Heng); C.M. Lindgren (Cecilia); X. Liu (Xuanyao); O.E. Livne (Oren E.); A.E. Locke (Adam E.); A. Mahajan (Anubha); J.B. Maller (Julian B.); A.K. Manning (Alisa K.); T.J. Maxwell (Taylor J.); A. Mazoure (Alexander); M.I. McCarthy (Mark); J.B. Meigs (James B.); B. Min (Byungju); K.L. Mohlke (Karen); A.P. Morris (Andrew); S. Musani (Solomon); Y. Nagai (Yoshihiko); M.C.Y. Ng (Maggie C.Y.); D. Nicolae (Dan); S. Oh (Sohee); N.D. Palmer (Nicholette); T. Park (Taesung); T.I. Pollin (Toni I.); I. Prokopenko (Inga); D. Reich (David); M.A. Rivas (Manuel); L.J. Scott (Laura); M. Seielstad (Mark); Y.S. Cho (Yoon Shin); X. Sim (Xueling); R. Sladek (Rob); P. Smith (Philip); I. Tachmazidou (Ioanna); E.S. Tai (Shyong); Y.Y. Teo (Yik Ying); T.M. Teslovich (Tanya M.); J. Torres (Jason); V. Trubetskoy (Vasily); S.M. Willems (Sara); A.L. Williams (Amy L.); J.G. Wilson (James); S. Wiltshire (Steven); S. Won (Sungho); A.R. Wood (Andrew); W. Xu (Wang); J. Yoon (Joon); M. Zawistowski (Matthew); E. Zeggini (Eleftheria); W. Zhang (Weihua); S. Zöllner (Sebastian)

    2015-01-01

    textabstractBackground: Rare variants have gathered increasing attention as a possible alternative source of missing heritability. Since next generation sequencing technology is not yet cost-effective for large-scale genomic studies, a widely used alternative approach is imputation. However, the

  18. Super-Resolution Enhancement From Multiple Overlapping Images: A Fractional Area Technique

    Science.gov (United States)

    Michaels, Joshua A.

    With large quantities of relatively low-resolution data available from several decades of spaceborne imaging, methods of creating an accurate, higher-resolution image from multiple lower-resolution images (i.e. super-resolution) have been developed almost since such imagery has existed. The fractional-area super-resolution technique developed in this thesis has not previously been documented. Satellite orbits, like Landsat's, have a quantifiable variation, meaning each image is not centered on the exact same spot more than once, and the overlapping information from these multiple images may be used for super-resolution enhancement. By splitting a single initial pixel into many smaller, desired pixels, a relationship can be created between them using the ratio of the area within the initial pixel. The ideal goal of this technique is to obtain smaller pixels with exact values and no error, yielding a better potential result than methods that yield interpolated pixel values with a consequent loss of spatial resolution. A Fortran 95 program was developed to perform all calculations associated with the fractional-area super-resolution technique. The fractional areas are calculated using traditional trigonometry and coordinate geometry, and the Linear Algebra Package (LAPACK; Anderson et al., 1999) is used to solve for the higher-resolution pixel values. To demonstrate proof-of-concept, a synthetic dataset was created using the intrinsic Fortran random number generator and Adobe Illustrator CS4 (for geometry). To test the real-life application, digital pictures of a large US geological map were taken under fluorescent lighting with a Sony DSC-S600 point-and-shoot camera on a tripod. While the fractional-area super-resolution technique works in perfect synthetic conditions, it did not produce a reasonable or consistent solution in the digital photograph enhancement test. The prohibitive amount of processing time (up to
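
    The pixel-splitting relationship can be written as a linear system whose coefficients are fractional overlap areas. The one-dimensional toy below is an assumption-laden sketch of that idea (numpy least squares in place of LAPACK); a second coarse image at a coprime pixel size stands in for the irregular sub-pixel offsets and rotations that make the real two-dimensional problem well-posed.

    ```python
    # Sketch: fractional-area super-resolution as a linear system. Each coarse
    # pixel value is the area-weighted average of the fine pixels it overlaps;
    # stacking several overlapping coarse images gives a system that least
    # squares can invert. One-dimensional toy with synthetic, noise-free data.
    import numpy as np

    rng = np.random.default_rng(3)
    n_fine = 40
    fine = rng.uniform(0, 255, n_fine)        # "true" fine-resolution signal

    def coarse_rows(ratio, offsets):
        """Rows of fractional-area weights for coarse pixels of width `ratio`,
        one shifted image per offset (offsets are in fine-pixel units)."""
        rows = []
        for off in offsets:
            start = off
            while start + ratio <= n_fine:
                row = np.zeros(n_fine)
                for k in range(n_fine):
                    # overlap of fine pixel [k, k+1) with the coarse footprint
                    row[k] = max(0.0, min(k + 1, start + ratio) - max(k, start))
                rows.append(row / ratio)
                start += ratio
        return rows

    rows = coarse_rows(4, offsets=[0, 1, 2, 3]) + coarse_rows(5, offsets=[0])
    A = np.array(rows)
    b = A @ fine                               # simulated coarse observations
    recovered, *_ = np.linalg.lstsq(A, b, rcond=None)
    print("max abs error:", np.abs(recovered - fine).max())
    ```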

  19. Initial Clinical Experience in Multiple Myeloma Staging by Means of Whole-Body Resonance Techniques

    International Nuclear Information System (INIS)

    Gallego, J. I.; Concepcion, L.; Alonso, S.; Sanchez, B.; Manzi, F.

    2003-01-01

    To develop a magnetic resonance (MR) exploratory technique equivalent to the serial bone X-ray, and to compare their precision in the staging of multiple myeloma (MM) patients. Multiple-acquisition T1-weighted TSE and STIR sequences in the coronal plane were performed. Ten healthy volunteers and 11 patients diagnosed with multiple myeloma were included. The visualization of bony structures was particularly noted, with special attention given to those that would normally be included in a serial bone X-ray. For the patients, the diagnostic capacities of the MR sequences were compared. MR highlighted significantly more (p<0.05) bony elements than did the serial bone X-ray, largely owing to the sequential displacement of the scanner bed, which allowed fields of view covering at minimum the head to the proximal third of the leg. Magnetic resonance detected a significantly higher number (p<0.05) of lesions. It was, in turn, capable of revealing greater lesion extents, even to the point of implying staging classification changes in 18% of the patients. The utilization of whole-body MR techniques in multiple myeloma patients is feasible and clinically beneficial. MR is both more sensitive and more specific than the serial bone X-ray for evaluation of bony lesions in MM, and it is currently serving as a valid alternative in a growing number of patients. (Author) 10 refs

  20. Endoscopic treatment of multilocular walled-off pancreatic necrosis with the multiple transluminal gateway technique.

    Science.gov (United States)

    Jagielski, Mateusz; Smoczyński, Marian; Adrych, Krystian

    2017-06-01

    The development of minimally invasive techniques has allowed access to the necrotic cavity through transperitoneal, retroperitoneal, transmural and transpapillary routes. The choice of access to walled-off pancreatic necrosis (WOPN) should depend not only on the spread of the necrosis, but also on the experience of the clinical center. Herein we describe the treatment of a patient with multilocular symptomatic walled-off pancreatic necrosis using minimally invasive techniques. Single transmural access (single transluminal gateway technique, SGT) to the patient's necrotic collection was ineffective. A second gastrocystostomy was therefore performed using the same minimally invasive technique as an additional route of access to the necrosis (multiple transluminal gateway technique, MTGT). In the described case, the new technique of multiple endoscopic transmural access (MTGT) was effective and led to complete recovery of the patient.

  1. Different methods for analysing and imputing missing values in wind speed series (The problem of information quality in wind speed series: methodologies for the analysis and imputation of missing data)

    Energy Technology Data Exchange (ETDEWEB)

    Ferreira, A. M.

    2004-07-01

    This study concerns different methods for analysing and imputing missing values in wind speed series. The EM algorithm and a methodology derived from the sequential hot deck were used. Series with imputed missing values are compared with the original, complete series using several criteria, such as the wind potential; a significant goodness of fit appears to exist between the estimated and real values. (Author)
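
    A sequential hot deck of the general kind mentioned can be sketched in a few lines: sort records by a correlated auxiliary variable and carry the nearest observed donor value into each gap. The auxiliary column and data below are illustrative assumptions, not the study's series.

    ```python
    # Sketch: simple sequential hot-deck imputation for a wind speed series.
    # Records are ordered by an auxiliary variable (a nearby reference station);
    # each missing value takes the closest preceding donor in that ordering.
    import numpy as np
    import pandas as pd

    rng = np.random.default_rng(11)
    n = 1000
    reference = rng.weibull(2.0, n) * 7          # nearby station, fully observed
    target = reference + rng.normal(0, 0.8, n)   # station with gaps
    target[rng.random(n) < 0.15] = np.nan        # ~15% missing

    df = pd.DataFrame({"reference": reference, "target": target})
    donors = df.sort_values("reference")["target"].ffill().bfill()
    df["imputed"] = donors.reindex(df.index)
    print(df.head())
    ```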

  2. Comparison of peripheral nerve stimulator versus ultrasonography guided axillary block using multiple injection technique.

    Science.gov (United States)

    Kumar, Alok; Sharma, Dk; Sibi, Maj E; Datta, Barun; Gogoi, Biraj

    2014-01-01

    The established methods of nerve location are based either on a proper motor response on nerve stimulation (NS) or on ultrasound guidance. In this prospective, randomised, observer-blinded study, we compared ultrasound guidance with NS for axillary brachial plexus block using 0.5% bupivacaine with the multiple injection technique. A total of 120 patients receiving axillary brachial plexus block with 0.5% bupivacaine, using a multiple injection technique, were randomly allocated to receive either NS (group NS, n = 60) or ultrasound guidance (group US, n = 60) for nerve location. A blinded observer recorded the onset of sensory and motor blocks, skin punctures, needle redirections, procedure-related pain and patient satisfaction. The median (range) number of skin punctures was 2 (2-4) in group US and 3 (2-5) in group NS (P = 0.27). Insufficient block was observed in three patients (5%) of group US and four patients (6.67%) of group NS (P ≥ 0.35). Patient acceptance was similarly good in the two groups. Multiple injection axillary blocks with ultrasound guidance provided similar success rates and a comparable incidence of complications as compared with NS guidance with 20 ml 0.5% bupivacaine.

  3. Comparison of peripheral nerve stimulator versus ultrasonography guided axillary block using multiple injection technique

    Directory of Open Access Journals (Sweden)

    Alok Kumar

    2014-01-01

    Full Text Available Background: The established methods of nerve location are based either on a proper motor response on nerve stimulation (NS) or on ultrasound guidance. In this prospective, randomised, observer-blinded study, we compared ultrasound guidance with NS for axillary brachial plexus block using 0.5% bupivacaine with the multiple injection technique. Methods: A total of 120 patients receiving axillary brachial plexus block with 0.5% bupivacaine, using a multiple injection technique, were randomly allocated to receive either NS (group NS, n = 60) or ultrasound guidance (group US, n = 60) for nerve location. A blinded observer recorded the onset of sensory and motor blocks, skin punctures, needle redirections, procedure-related pain and patient satisfaction. Results: The median (range) number of skin punctures was 2 (2-4) in group US and 3 (2-5) in group NS (P = 0.27). Insufficient block was observed in three patients (5%) of group US and four patients (6.67%) of group NS (P ≥ 0.35). Patient acceptance was similarly good in the two groups. Conclusion: Multiple injection axillary blocks with ultrasound guidance provided similar success rates and a comparable incidence of complications as compared with NS guidance with 20 ml 0.5% bupivacaine.

  4. A Technique of Fuzzy C-Mean in Multiple Linear Regression Model toward Paddy Yield

    Science.gov (United States)

    Syazwan Wahab, Nur; Saifullah Rusiman, Mohd; Mohamad, Mahathir; Amira Azmi, Nur; Che Him, Norziha; Ghazali Kamardan, M.; Ali, Maselan

    2018-04-01

    In this paper, we propose a hybrid model that combines a multiple linear regression model with the fuzzy c-means method. This research involved the relationship between 20 variates of the top soil analyzed prior to the planting of paddy at standard fertilizer rates. The data used were from the multi-location trials for rice carried out by MARDI at major paddy granaries in Peninsular Malaysia during the period from 2009 to 2012. Missing observations were estimated using mean estimation techniques. The data were analyzed using a multiple linear regression model and a combination of the multiple linear regression model and the fuzzy c-means method. Analysis of normality and multicollinearity indicated that the data are normally scattered without multicollinearity among the independent variables. Fuzzy c-means analysis clusters the paddy yield into two clusters before the multiple linear regression model is applied. The comparison between the two methods indicates that the hybrid of the multiple linear regression model and the fuzzy c-means method outperforms the multiple linear regression model alone, with a lower mean square error.
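
    The hybrid scheme can be sketched directly: cluster the observations with fuzzy c-means, then fit a separate linear regression within each cluster. The fuzzy c-means updates below are a textbook implementation, and the data are synthetic stand-ins for the soil variates and paddy yield.

    ```python
    # Sketch: fuzzy c-means clustering followed by per-cluster multiple linear
    # regression, the general hybrid scheme described above. Synthetic data.
    import numpy as np
    from sklearn.linear_model import LinearRegression

    rng = np.random.default_rng(5)
    n, p, c, m = 300, 4, 2, 2.0        # samples, predictors, clusters, fuzzifier
    X = rng.normal(0, 1, (n, p))
    beta1 = np.array([1.0, 2.0, 0.0, 0.0])
    beta2 = np.array([0.0, 0.0, 2.0, 1.0])
    y = np.where(X[:, 0] > 0, 3 + X @ beta1, -3 + X @ beta2)
    y = y + rng.normal(0, 0.3, n)

    # Fuzzy c-means on the predictors: alternate center and membership updates.
    U = rng.dirichlet(np.ones(c), n)   # initial memberships, rows sum to 1
    for _ in range(100):
        W = U ** m
        centers = (W.T @ X) / W.sum(axis=0)[:, None]
        d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2) + 1e-12
        U = 1.0 / np.sum((d[:, :, None] / d[:, None, :]) ** (2 / (m - 1)), axis=2)

    # Fit one regression per (hardened) cluster.
    labels = U.argmax(axis=1)
    for k in range(c):
        model = LinearRegression().fit(X[labels == k], y[labels == k])
        mse = np.mean((model.predict(X[labels == k]) - y[labels == k]) ** 2)
        print(f"cluster {k}: n={np.sum(labels == k)}, MSE={mse:.3f}")
    ```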

  5. Comparison of static conformal field with multiple noncoplanar arc techniques for stereotactic radiosurgery or stereotactic radiotherapy

    International Nuclear Information System (INIS)

    Hamilton, Russell J.; Kuchnir, Franca T.; Sweeney, Patrick; Rubin, Steven J.; Dujovny, Manuel; Pelizzari, Charles A.; Chen, George T. Y.

    1995-01-01

    Purpose: Compare the use of static conformal fields with the use of multiple noncoplanar arcs for stereotactic radiosurgery or stereotactic radiotherapy treatment of intracranial lesions. Evaluate the efficacy of these treatment techniques to deliver dose distributions comparable to those considered acceptable in current radiotherapy practice. Methods and Materials: A previously treated radiosurgery case of a patient presenting with an irregularly shaped intracranial lesion was selected. Using a three-dimensional (3D) treatment-planning system, treatment plans using a single isocenter multiple noncoplanar arc technique and multiple noncoplanar conformal static fields were generated. Isodose distributions and dose volume histograms (DVHs) were computed for each treatment plan. We required that the 80% (of maximum dose) isodose surface enclose the target volume for all treatment plans. The prescription isodose was set equal to the minimum target isodose. The DVHs were analyzed to evaluate and compare the different treatment plans. Results: The dose distribution in the target volume becomes more uniform as the number of conformal fields increases. The volume of normal tissue receiving low doses (> 10% of prescription isodose) increases as the number of static fields increases. The single isocenter multiple arc plan treats the greatest volume of normal tissue to low doses, approximately 1.6 times more volume than that treated by four static fields. The volume of normal tissue receiving high (> 90% of prescription isodose) and intermediate (> 50% of prescription isodose) doses decreases by 29 and 22%, respectively, as the number of static fields is increased from four to eight. Increasing the number of static fields to 12 only further reduces the high and intermediate dose volumes by 10 and 6%, respectively. The volume receiving the prescription dose is more than 3.5 times larger than the target volume for all treatment plans. Conclusions: Use of a multiple noncoplanar

  6. Building a new predictor for multiple linear regression technique-based corrective maintenance turnaround time.

    Science.gov (United States)

    Cruz, Antonio M; Barr, Cameron; Puñales-Pozo, Elsa

    2008-01-01

    This research's main goals were to build a predictor for a turnaround time (TAT) indicator, to estimate its values, and to use a numerical clustering technique to find possible causes of undesirable TAT values. The following stages were used: domain understanding, data characterisation and sample reduction, and insight characterisation. Multiple linear regression and clustering techniques were used to build the TAT indicator predictor and to improve corrective maintenance task efficiency in a clinical engineering department (CED). The indicator being studied was turnaround time (TAT). Multiple linear regression was used to build a predictive TAT value model. The variables contributing to this model were clinical engineering department response time (CE(rt), 0.415 positive coefficient), stock service response time (Stock(rt), 0.734 positive coefficient), priority level (0.21 positive coefficient) and service time (0.06 positive coefficient). The regression process showed heavy reliance on Stock(rt), CE(rt) and priority, in that order. Clustering techniques revealed the main causes of high TAT values. This examination has provided a means for analysing current technical service quality and effectiveness; in doing so, it has demonstrated a process for identifying areas and methods of improvement and a model against which to analyse these methods' effectiveness.
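
    Using the reported coefficients, the regression predictor can be written down directly; note that the intercept is not given in the abstract, so it is left as an explicit placeholder, and the argument names are shorthand for the variables described.

    ```python
    # Sketch: the reported multiple linear regression predictor for corrective
    # maintenance turnaround time (TAT). Coefficients are from the abstract; the
    # intercept is NOT reported there, so 0.0 is a placeholder assumption.
    def predict_tat(ce_rt: float, stock_rt: float, priority: float,
                    service_time: float, intercept: float = 0.0) -> float:
        return (intercept
                + 0.415 * ce_rt        # clinical engineering dept response time
                + 0.734 * stock_rt     # stock service response time
                + 0.210 * priority     # priority level
                + 0.060 * service_time)

    print(predict_tat(ce_rt=4.0, stock_rt=12.0, priority=2.0, service_time=6.0))
    ```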

  7. An Improved Fuzzy Based Missing Value Estimation in DNA Microarray Validated by Gene Ranking

    Directory of Open Access Journals (Sweden)

    Sujay Saha

    2016-01-01

    Full Text Available Most gene expression data analysis algorithms require the entire gene expression matrix without any missing values. Hence, it is necessary to devise methods that impute missing data values accurately. A number of imputation algorithms exist to estimate those missing values. This work starts with a microarray dataset containing multiple missing values. We first apply a modified version of the existing fuzzy-theory-based method LRFDVImpute to impute multiple missing values of time series gene expression data, and then validate the result of imputation by a genetic algorithm (GA) based gene ranking methodology along with some regular statistical validation techniques, like the RMSE method. Gene ranking, to the best of our knowledge, has not previously been used to validate the result of missing value estimation. Firstly, the proposed method was tested on the very popular Spellman dataset, and the results show that error margins were drastically reduced compared to some previous works, which indirectly validates the statistical significance of the proposed method. It was then applied to four other 2-class benchmark datasets, namely the Colorectal Cancer tumours dataset (GDS4382), the Breast Cancer dataset (GSE349-350), the Prostate Cancer dataset, and DLBCL-FL (Leukaemia), for both missing value estimation and gene ranking, and the results show that the proposed method can reach 100% classification accuracy with very few dominant genes, which indirectly validates its biological significance.

  8. Whitelists Based Multiple Filtering Techniques in SCADA Sensor Networks

    Directory of Open Access Journals (Sweden)

    DongHo Kang

    2014-01-01

    Full Text Available The Internet of Things (IoT) consists of several tiny devices connected together to form a collaborative computing environment. Recently, IoT technologies have begun to merge with supervisory control and data acquisition (SCADA) sensor networks to more efficiently gather and analyze real-time data from sensors in industrial environments. But SCADA sensor networks are becoming more and more vulnerable to cyber-attacks due to increased connectivity. To safely adopt IoT technologies in SCADA environments, it is important to improve the security of SCADA sensor networks. In this paper we propose a multiple filtering technique based on whitelists to detect illegitimate packets. Our proposed system detects the traffic of network and application protocol attacks with a set of whitelists collected from normal traffic.
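
    The layered-whitelist idea can be sketched as two filter stages, one on the network flow tuple and one on the application-level command; the packet fields and whitelist contents below are invented for illustration, not taken from the paper.

    ```python
    # Sketch: two-stage whitelist filtering in the spirit of the multiple
    # filtering technique above. Stage 1 checks the network flow tuple; stage 2
    # checks the application-protocol command. Whitelists and packets are made up.
    from dataclasses import dataclass

    @dataclass
    class Packet:
        src: str
        dst: str
        port: int
        command: str

    FLOW_WHITELIST = {("10.0.0.5", "10.0.0.9", 502)}   # learned from normal traffic
    COMMAND_WHITELIST = {"READ_COILS", "READ_HOLDING_REGISTERS"}

    def is_legitimate(pkt: Packet) -> bool:
        if (pkt.src, pkt.dst, pkt.port) not in FLOW_WHITELIST:
            return False                               # unknown flow: drop
        return pkt.command in COMMAND_WHITELIST        # unknown command: drop

    print(is_legitimate(Packet("10.0.0.5", "10.0.0.9", 502, "READ_COILS")))      # True
    print(is_legitimate(Packet("10.0.0.5", "10.0.0.9", 502, "WRITE_REGISTER")))  # False
    ```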

  9. Datafish Multiphase Data Mining Technique to Match Multiple Mutually Inclusive Independent Variables in Large PACS Databases.

    Science.gov (United States)

    Kelley, Brendan P; Klochko, Chad; Halabi, Safwan; Siegal, Daniel

    2016-06-01

    Retrospective data mining has tremendous potential in research but is time and labor intensive. Current data mining software contains many advanced search features but is limited in its ability to identify patients who meet multiple complex independent search criteria. Simple keyword and Boolean search techniques are ineffective when more complex searches are required, or when a search for multiple mutually inclusive variables becomes important. This is particularly true when trying to identify patients with a set of specific radiologic findings or proximity in time across multiple different imaging modalities. Another challenge that arises in retrospective data mining is that much variation still exists in how image findings are described in radiology reports. We present an algorithmic approach to solve this problem and describe a specific use case scenario in which we applied our technique to a real-world data set in order to identify patients who matched several independent variables in our institution's picture archiving and communication systems (PACS) database.

  10. Photoelectrode Fabrication of Dye-Sensitized Nanosolar Cells Using Multiple Spray Coating Technique

    Directory of Open Access Journals (Sweden)

    Chien-Chih Chen

    2013-01-01

    Full Text Available This paper presents a spray coating technique for fabricating the nanoporous film of the photoelectrode in dye-sensitized nanosolar cells (DSSCs). Spray coating can quickly fabricate the nanoporous film of the photoelectrode at lower cost, which can further help DSSCs to be commercialized in the future. This paper analyzed the photoelectric conversion efficiency of DSSCs using a spray-coated photoelectrode in comparison with a photoelectrode made with the doctor blade method. Spray coating can easily control the transmittance of the photoelectrode through the multiple spray coating process. This work mainly used a dispersant, with the help of ultrasonic oscillation, to prepare the required nano-TiO2 solution, which was then sprayed on the ITO glasses. A motor-operated conveyor belt was built to transport the ITO glasses automatically for alternating cycles of spray coating and drying. Experiments used the transmittance of the photoelectrode as a fabrication parameter to analyze the photoelectric conversion efficiency of the DSSCs. The factors influencing the photoelectrode transmittance during fabrication are the spray flow rate, the spray distance, and the moving speed of the conveyor belt. The results show that the DSSC with a photoelectrode transmittance of ca. 68.0 ± 1.5%, coated by the spray coating technique, has the best photoelectric conversion efficiency in this work.

  11. Key issues of multiple access technique for LEO satellite communication systems

    Institute of Scientific and Technical Information of China (English)

    温萍萍; 顾学迈

    2004-01-01

    The large carrier frequency shift caused by the high-speed movement of the satellite (Doppler effects) and the propagation delay on the up-down link are critical issues in an LEO satellite communication system, affecting both the selection and the implementation of a suitable access method. A Doppler-based multiple access technique is used here to control the flow, and an MPRMA-HS protocol is proposed for application in LEO satellite communication systems. Extended simulation trials show that the proposed scheme is a very promising access method.

  12. Missing Value Imputation Improves Mortality Risk Prediction Following Cardiac Surgery: An Investigation of an Australian Patient Cohort.

    Science.gov (United States)

    Karim, Md Nazmul; Reid, Christopher M; Tran, Lavinia; Cochrane, Andrew; Billah, Baki

    2017-03-01

    The aim of this study was to evaluate the impact of missing values on the prediction performance of a model predicting 30-day mortality following cardiac surgery, as an example. Information from 83,309 eligible patients who underwent cardiac surgery, recorded in the Australia and New Zealand Society of Cardiac and Thoracic Surgeons (ANZSCTS) database registry between 2001 and 2014, was used. An existing 30-day mortality risk prediction model developed from the ANZSCTS database was re-estimated using complete cases (CC) analysis and using multiple imputation (MI) analysis. Agreement between the risks generated by the CC and MI analysis approaches was assessed by the Bland-Altman method. The performances of the two models were compared. One or more missing predictor variables were present in 15.8% of the patients in the dataset. The Bland-Altman plot demonstrated significant disagreement between the risk scores generated by the two approaches. Compared to CC analysis, MI analysis resulted in an average 8.5% decrease in standard error, a measure of uncertainty. The MI model provided better prediction of mortality risk (observed: 2.69%; MI: 2.63% versus CC: 2.37%). Imputation of missing values improved the 30-day mortality risk prediction following cardiac surgery. Copyright © 2016 Australian and New Zealand Society of Cardiac and Thoracic Surgeons (ANZSCTS) and the Cardiac Society of Australia and New Zealand (CSANZ). Published by Elsevier B.V. All rights reserved.

  13. Analysis of Case-Control Association Studies: SNPs, Imputation and Haplotypes

    KAUST Repository

    Chatterjee, Nilanjan; Chen, Yi-Hau; Luo, Sheng; Carroll, Raymond J.

    2009-01-01

    Although prospective logistic regression is the standard method of analysis for case-control data, it has been recently noted that in genetic epidemiologic studies one can use the "retrospective" likelihood to gain major power by incorporating various population genetics model assumptions such as Hardy-Weinberg-Equilibrium (HWE), gene-gene and gene-environment independence. In this article we review these modern methods and contrast them with the more classical approaches through two types of applications (i) association tests for typed and untyped single nucleotide polymorphisms (SNPs) and (ii) estimation of haplotype effects and haplotype-environment interactions in the presence of haplotype-phase ambiguity. We provide novel insights to existing methods by construction of various score-tests and pseudo-likelihoods. In addition, we describe a novel two-stage method for analysis of untyped SNPs that can use any flexible external algorithm for genotype imputation followed by a powerful association test based on the retrospective likelihood. We illustrate applications of the methods using simulated and real data. © Institute of Mathematical Statistics, 2009.

  14. Analysis of Case-Control Association Studies: SNPs, Imputation and Haplotypes

    KAUST Repository

    Chatterjee, Nilanjan

    2009-11-01

    Although prospective logistic regression is the standard method of analysis for case-control data, it has been recently noted that in genetic epidemiologic studies one can use the "retrospective" likelihood to gain major power by incorporating various population genetics model assumptions such as Hardy-Weinberg-Equilibrium (HWE), gene-gene and gene-environment independence. In this article we review these modern methods and contrast them with the more classical approaches through two types of applications (i) association tests for typed and untyped single nucleotide polymorphisms (SNPs) and (ii) estimation of haplotype effects and haplotype-environment interactions in the presence of haplotype-phase ambiguity. We provide novel insights to existing methods by construction of various score-tests and pseudo-likelihoods. In addition, we describe a novel two-stage method for analysis of untyped SNPs that can use any flexible external algorithm for genotype imputation followed by a powerful association test based on the retrospective likelihood. We illustrate applications of the methods using simulated and real data. © Institute of Mathematical Statistics, 2009.

  15. Retrospective Study on Laser Treatment of Oral Vascular Lesions Using the "Leopard Technique": The Multiple Spot Irradiation Technique with a Single-Pulsed Wave.

    Science.gov (United States)

    Miyazaki, Hidetaka; Ohshiro, Takafumi; Romeo, Umberto; Noguchi, Tadahide; Maruoka, Yutaka; Gaimari, Gianfranco; Tomov, Georgi; Wada, Yoshitaka; Tanaka, Kae; Ohshiro, Toshio; Asamura, Shinichi

    2018-06-01

    This study aimed to retrospectively evaluate the efficacy and safety of laser treatment of oral vascular lesions using the multiple spot irradiation technique with a single-pulsed wave. In laser therapy for vascular lesions, heat accumulation induced by excessive irradiation can cause adverse events postoperatively, including ulcer formation, resultant scarring, and severe pain. To prevent heat accumulation and side effects, we have applied a multiple pulsed spot irradiation technique, the so-called "leopard technique" (LT) to oral vascular lesions. This approach was originally proposed for laser treatment of nevi. It can avoid thermal concentration at the same spot and spare the epithelium, which promotes smooth healing. The goal of the study was to evaluate this procedure and treatment outcomes. The subjects were 46 patients with 47 oral vascular lesions treated with the LT using a Nd:YAG laser (1064 nm), including 24 thick lesions treated using a combination of the LT and intralesional photocoagulation. All treatment outcomes were satisfactory without serious complications such as deep ulcer formation, scarring, bleeding, or severe swelling. Laser therapy with the LT is a promising less-invasive treatment for oral vascular lesions.

  16. Investigation of multiple visualisation techniques and dynamic queries in conjunction with direct sonification to support the browsing of audio resources

    OpenAIRE

    Brazil, Eoin

    2003-01-01

    non-peer-reviewed In this thesis, a prototype system for the browsing of audio resources was developed and an initial evaluation of this system was performed. The main contributions of this thesis are dynamic queries and multiple visualisation techniques in conjunction with direct sonification. Dynamic queries are queries that provide immediate feedback while maintaining consistency between the queries themselves and the graphical/auditory display. The multiple visualisation techniques are...

  17. Synthetic Minority Oversampling Technique and Fractal Dimension for Identifying Multiple Sclerosis

    Science.gov (United States)

    Zhang, Yu-Dong; Zhang, Yin; Phillips, Preetha; Dong, Zhengchao; Wang, Shuihua

    Multiple sclerosis (MS) is a severe brain disease. Early detection can provide timely treatment. Fractal dimension can provide a statistical index of pattern changes with scale in a given brain image. In this study, our team used the susceptibility weighted imaging technique to obtain 676 MS slices and 880 healthy slices. We used the synthetic minority oversampling technique to process the unbalanced dataset. Then, we used the Canny edge detector to extract distinguishing edges. The Minkowski-Bouligand dimension, a fractal dimension estimation method, was used to extract features from the edges. A single-hidden-layer neural network was used as the classifier. Finally, we proposed a three-segment representation biogeography-based optimization to train the classifier. Our method achieved a sensitivity of 97.78±1.29%, a specificity of 97.82±1.60% and an accuracy of 97.80±1.40%. The proposed method is superior to seven state-of-the-art methods in terms of sensitivity and accuracy.
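
    The Minkowski-Bouligand (box-counting) dimension at the heart of the feature extraction can be estimated in a few lines; the synthetic binary image below stands in for a Canny edge map of an MS slice, and the box sizes are arbitrary choices.

    ```python
    # Sketch: box-counting estimate of the Minkowski-Bouligand fractal dimension
    # of a binary edge map, the feature used above. The input is a synthetic
    # edge image standing in for a Canny-detected brain slice.
    import numpy as np

    def box_counting_dimension(edges: np.ndarray) -> float:
        n = min(edges.shape)
        sizes = [s for s in (2, 4, 8, 16, 32) if s <= n // 2]
        counts = []
        for s in sizes:
            h, w = edges.shape[0] // s, edges.shape[1] // s
            blocks = edges[:h * s, :w * s].reshape(h, s, w, s)
            counts.append(np.sum(blocks.any(axis=(1, 3))))  # boxes with edges
        # Slope of log(count) against log(1/size) estimates the dimension.
        slope, _ = np.polyfit(np.log(1.0 / np.array(sizes)), np.log(counts), 1)
        return slope

    rng = np.random.default_rng(9)
    edges = rng.random((128, 128)) < 0.05   # sparse synthetic "edge" pixels
    print(f"estimated fractal dimension: {box_counting_dimension(edges):.2f}")
    ```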

  18. Code division multiple-access techniques in optical fiber networks. II - Systems performance analysis

    Science.gov (United States)

    Salehi, Jawad A.; Brackett, Charles A.

    1989-08-01

    A technique based on optical orthogonal codes was presented by Salehi (1989) to establish a fiber-optic code-division multiple-access (FO-CDMA) communications system. The results are used to derive the bit error rate of the proposed FO-CDMA system as a function of data rate, code length, code weight, number of users, and receiver threshold. The performance characteristics for a variety of system parameters are discussed. A means of reducing the effective multiple-access interference signal by placing an optical hard-limiter at the front end of the desired optical correlator is presented. Performance calculations are shown for the FO-CDMA with an ideal optical hard-limiter, and it is shown that using an optical hard-limiter would, in general, improve system performance.

  19. Estimation of Tree Lists from Airborne Laser Scanning Using Tree Model Clustering and k-MSN Imputation

    Directory of Open Access Journals (Sweden)

    Jörgen Wallerman

    2013-04-01

    Full Text Available Individual tree crowns may be delineated from airborne laser scanning (ALS) data by segmentation of surface models or by 3D analysis. Segmentation of surface models benefits from using a priori knowledge about the proportions of tree crowns, which has not yet been utilized for 3D analysis to any great extent. In this study, an existing surface segmentation method was used as a basis for a new tree model 3D clustering method applied to ALS returns in 104 circular field plots with 12 m radius in pine-dominated boreal forest (64°14'N, 19°50'E). For each cluster below the tallest canopy layer, a parabolic surface was fitted to model a tree crown. The tree model clustering identified more trees than segmentation of the surface model, especially smaller trees below the tallest canopy layer. Stem attributes were estimated with k-Most Similar Neighbours (k-MSN) imputation of the clusters based on field-measured trees. The accuracy at plot level from the k-MSN imputation (stem density root mean square error, RMSE, 32.7%; stem volume RMSE 28.3%) was similar to the corresponding results from the surface model (stem density RMSE 33.6%; stem volume RMSE 26.1%), with leave-one-out cross-validation for one field plot at a time. Three-dimensional analysis of ALS data should also be evaluated in multi-layered forests since it identified a larger number of small trees below the tallest canopy layer.
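
    For readers unfamiliar with imputing stem attributes from reference plots, a nearest-neighbour sketch is given below. True k-MSN weights the feature space using canonical correlation analysis; this minimal Python stand-in uses plain Euclidean k-NN on standardized ALS metrics, and all column meanings and values are hypothetical:

      import numpy as np
      from sklearn.neighbors import KNeighborsRegressor
      from sklearn.pipeline import make_pipeline
      from sklearn.preprocessing import StandardScaler

      # Hypothetical training data: per-cluster ALS metrics -> field-measured volume.
      X_train = np.array([[18.2, 0.62, 3.1], [24.5, 0.71, 4.0],
                          [12.9, 0.48, 2.2], [21.0, 0.66, 3.5]])  # height, density, crown
      y_train = np.array([210.0, 340.0, 120.0, 265.0])            # stem volume, m^3/ha

      # k nearest neighbours in standardized feature space; the prediction is the
      # average of the k most similar field-measured references.
      knn = make_pipeline(StandardScaler(), KNeighborsRegressor(n_neighbors=2))
      knn.fit(X_train, y_train)

      X_new = np.array([[19.8, 0.64, 3.3]])   # ALS metrics for an unmeasured cluster
      print(knn.predict(X_new))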

  20. Single, double or multiple-injection techniques for non-ultrasound guided axillary brachial plexus block in adults undergoing surgery of the lower arm.

    Science.gov (United States)

    Chin, Ki Jinn; Alakkad, Husni; Cubillos, Javier E

    2013-08-08

    Regional anaesthesia comprising axillary block of the brachial plexus is a common anaesthetic technique for distal upper limb surgery. This is an update of a review first published in 2006 and updated in 2011. To compare the relative effects (benefits and harms) of three injection techniques (single, double and multiple) of axillary block of the brachial plexus for distal upper extremity surgery. We considered these effects primarily in terms of anaesthetic effectiveness; the complication rate (neurological and vascular); and pain and discomfort caused by performance of the block. We searched the Cochrane Central Register of Controlled Trials (CENTRAL) (The Cochrane Library), MEDLINE, EMBASE and reference lists of trials. We contacted trial authors. The date of the last search was March 2013 (updated from March 2011). We included randomized controlled trials that compared double with single-injection techniques, multiple with single-injection techniques, or multiple with double-injection techniques for axillary block in adults undergoing surgery of the distal upper limb. We excluded trials using ultrasound-guided techniques. Independent study selection, risk of bias assessment and data extraction were performed by at least two investigators. We undertook meta-analysis. The 21 included trials involved a total of 2148 participants who received regional anaesthesia for hand, wrist, forearm or elbow surgery. Risk of bias assessment indicated that trial design and conduct were generally adequate; the most common areas of weakness were in blinding and allocation concealment. Eight trials comparing double versus single injections showed a statistically significant decrease in primary anaesthesia failure (risk ratio (RR) 0.51, 95% confidence interval (CI) 0.30 to 0.85). Subgroup analysis by method of nerve location showed that the effect size was greater when neurostimulation was used rather than the transarterial technique. Eight trials comparing multiple with single

  1. Insertion of central venous catheters for hemodialysis using angiographic techniques in patients with previous multiple catheterizations

    Energy Technology Data Exchange (ETDEWEB)

    Kotsikoris, Ioannis, E-mail: gkotsikoris@gmail.com [Department of Vascular Surgery, “Erythros Stauros” General Hospital (Greece); Zygomalas, Apollon, E-mail: azygomalas@upatras.gr [Department of General Surgery, University Hospital of Patras (Greece); Papas, Theofanis, E-mail: pfanis@otenet.gr [Department of Vascular Surgery, “Erythros Stauros” General Hospital (Greece); Maras, Dimitris, E-mail: dimmaras@gmail.com [Department of Vascular Surgery, “Erythros Stauros” General Hospital (Greece); Pavlidis, Polyvios, E-mail: polpavlidis@yahoo.gr [Department of Vascular Surgery, “Erythros Stauros” General Hospital (Greece); Andrikopoulou, Maria, E-mail: madric@gmail.com [Department of Vascular Surgery, “Erythros Stauros” General Hospital (Greece); Tsanis, Antonis, E-mail: atsanis@gmail.com [Department of Interventional Radiology, “Erythros Stauros” General Hospital (Greece); Alivizatos, Vasileios, E-mail: valiviz@hol.gr [Department of General Surgery and Artificial Nutrition Unit, “Agios Andreas” General Hospital of Patras (Greece); Bessias, Nikolaos, E-mail: bessias@otenet.gr [Department of Vascular Surgery, “Erythros Stauros” General Hospital (Greece)

    2012-09-15

    Introduction: Central venous catheter placement is an effective alternative vascular access for dialysis in patients with chronic renal failure. The purpose of this study was to evaluate the insertion of central venous catheters for hemodialysis using angiographic techniques in patients with previous multiple catheterizations, in terms of efficacy of the procedure and early complications. Materials and methods: Between 2008 and 2010, the vascular access team of our hospital placed 409 central venous catheters in patients with chronic renal failure. The procedure was performed using the blind Seldinger technique. In 18 (4.4%) cases it was impossible to advance the guidewire, and so the patients were transported to the angiography suite. Results: Using the angiographic technique, the guidewire was advanced in order to position the central venous catheter. The latter was inserted into the subclavian vein in 12 (66.6%) cases, into the internal jugular vein in 4 (22.2%) and into the femoral vein in 2 (11.1%) cases. The only complication was severe arrhythmia, which occurred in 1 (5.5%) patient. Conclusion: Our results suggest that insertion of central venous catheters using angiographic techniques in hemodialysis patients with previous multiple catheterizations is a safe and effective procedure with few complications and high success rates.

  2. Insertion of central venous catheters for hemodialysis using angiographic techniques in patients with previous multiple catheterizations

    International Nuclear Information System (INIS)

    Kotsikoris, Ioannis; Zygomalas, Apollon; Papas, Theofanis; Maras, Dimitris; Pavlidis, Polyvios; Andrikopoulou, Maria; Tsanis, Antonis; Alivizatos, Vasileios; Bessias, Nikolaos

    2012-01-01

    Introduction: Central venous catheter placement is an effective alternative vascular access for dialysis in patients with chronic renal failure. The purpose of this study was to evaluate the insertion of central venous catheters for hemodialysis using angiographic techniques in patients with previous multiple catheterizations, in terms of efficacy of the procedure and early complications. Materials and methods: Between 2008 and 2010, the vascular access team of our hospital placed 409 central venous catheters in patients with chronic renal failure. The procedure was performed using the blind Seldinger technique. In 18 (4.4%) cases it was impossible to advance the guidewire, and so the patients were transported to the angiography suite. Results: Using the angiographic technique, the guidewire was advanced in order to position the central venous catheter. The latter was inserted into the subclavian vein in 12 (66.6%) cases, into the internal jugular vein in 4 (22.2%) and into the femoral vein in 2 (11.1%) cases. The only complication was severe arrhythmia, which occurred in 1 (5.5%) patient. Conclusion: Our results suggest that insertion of central venous catheters using angiographic techniques in hemodialysis patients with previous multiple catheterizations is a safe and effective procedure with few complications and high success rates.

  3. Human mixed lymphocyte cultures. Evaluation of microculture technique utilizing the multiple automated sample harvester (MASH)

    Science.gov (United States)

    Thurman, G. B.; Strong, D. M.; Ahmed, A.; Green, S. S.; Sell, K. W.; Hartzman, R. J.; Bach, F. H.

    1973-01-01

    Use of lymphocyte cultures for in vitro studies such as pretransplant histocompatibility testing has established the need for standardization of this technique. A microculture technique has been developed that has facilitated the culturing of lymphocytes and increased the quantity of cultures feasible, while lowering the variation between replicate samples. Cultures were prepared for determination of tritiated thymidine incorporation using a Multiple Automated Sample Harvester (MASH). Using this system, the parameters that influence the in vitro responsiveness of human lymphocytes to allogeneic lymphocytes have been investigated. PMID:4271568

  4. Nuclear techniques in the development of fertilizer practices for multiple cropping systems

    International Nuclear Information System (INIS)

    1986-12-01

    This document summarizes the results of a coordinated research programme. Eight Member States of the FAO and IAEA carried out a series of field studies aimed at identifying optimum practices for the use of fertilizers in multiple cropping systems and for maximizing the contribution of atmospheric nitrogen biologically fixed by the legume component of such systems to the non-fixing cereal component or to the succeeding crop. Isotope techniques allowed the researchers to accurately determine the uptake of specific nutrients and to compare selected treatments

  5. Continuous analog of multiplicative algebraic reconstruction technique for computed tomography

    Science.gov (United States)

    Tateishi, Kiyoko; Yamaguchi, Yusaku; Abou Al-Ola, Omar M.; Kojima, Takeshi; Yoshinaga, Tetsuya

    2016-03-01

    We propose a hybrid dynamical system as a continuous analog to the block-iterative multiplicative algebraic reconstruction technique (BI-MART), which is a well-known iterative image reconstruction algorithm for computed tomography. The hybrid system is described by a switched nonlinear system with a piecewise smooth vector field or differential equation and, for consistent inverse problems, the convergence of non-negatively constrained solutions to a globally stable equilibrium is guaranteed by the Lyapunov theorem. Namely, we can prove theoretically that a weighted Kullback-Leibler divergence measure can be a common Lyapunov function for the switched system. We show that discretizing the differential equation using the first-order approximation (Euler's method) based on the geometric multiplicative calculus leads to the same iterative formula as the BI-MART, with the scaling parameter acting as the time step of the numerical discretization. The present paper is the first to reveal that an iterative image reconstruction algorithm of this kind can be constructed by discretizing a continuous-time dynamical system for solving tomographic inverse problems. Iterative algorithms obtained by discretizing the continuous-time system with not only the Euler method but also lower-order Runge-Kutta methods can be used for image reconstruction. A numerical example showing the characteristics of the discretized iterative methods is presented.
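
    The relation the abstract describes can be sketched explicitly (a reconstruction, not quoted from the paper: x_j are the image values, a_ij the system matrix entries, b_i the measured projections, ⟨a_i, x⟩ the forward projection, and h the step size; the block-iterative form restricts the sums to the current block):

      \frac{dx_j}{dt} = x_j(t) \sum_i a_{ij} \,\log\frac{b_i}{\langle a_i, x(t)\rangle},
      \qquad
      x_j^{(k+1)} = x_j^{(k)} \prod_i \left( \frac{b_i}{\langle a_i, x^{(k)}\rangle} \right)^{h\, a_{ij}}

    The second expression is one Euler step of size h in the geometric (multiplicative) calculus applied to the first, which recovers the MART update with the scaling parameter playing the role of the time step.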

  6. Self-normalizing multiple-echo technique for measuring the in vivo apparent diffusion coefficient

    International Nuclear Information System (INIS)

    Perman, W.H.; Gado, M.; Sandstrom, J.C.

    1989-01-01

    This paper presents work to develop a new technique for quantitating the in vivo apparent diffusion/perfusion coefficient (ADC) by obtaining multiple data points from only two images, with the capability to normalize the data from consecutive images, thus minimizing the effect of interimage variation. Two multiple-echo (six- to eight-echo) cardiac-gated images are obtained, one without and one with additional diffusion/perfusion encoding gradients placed about the 180° RF pulses of all but the first echo. Since the first echoes of both images have identical pulse sequence parameters, variations in signal intensity between the first echoes represent image-to-image variation. The signal intensities of the subsequent echoes with additional diffusion/perfusion encoding gradients are then normalized by using the ratio of the first-echo signal intensities.
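
    The underlying arithmetic is the standard monoexponential diffusion decay (a sketch with assumed symbols, not the paper's notation): each echo n carries a diffusion weighting b_n, and the signal follows S(b_n) = S_0 exp(-b_n · ADC). Because the first echoes of the two images are acquired identically, their ratio estimates the interimage scale factor; after normalization, the ADC follows from any pair of weightings,

      \mathrm{ADC} = \frac{\ln\!\big(S(b_1)/S(b_2)\big)}{b_2 - b_1},

    or, with six to eight echoes, from the slope of a log-linear fit of normalized intensity against b.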

  7. Impression of multiple implants using photogrammetry: description of technique and case presentation.

    Science.gov (United States)

    Peñarrocha-Oltra, David; Agustín-Panadero, Rubén; Bagán, Leticia; Giménez, Beatriz; Peñarrocha, María

    2014-07-01

    To describe a technique for registering the positions of multiple dental implants using a system based on photogrammetry. A case is presented in which a prosthetic treatment was performed using this technique. Three Euroteknika® dental implants were placed to rehabilitate a 55-year-old male patient with right posterior maxillary edentulism. Three months later, the positions of the implants were registered using a photogrammetry-based stereo-camera (PICcamera®). After processing patient and implant data, special abutments (PICabutment®) were screwed onto each implant. The PICcamera® was then used to capture images of the implant positions, automatically taking 150 images in less than 60 seconds. From this information a file was obtained describing the relative positions - angles and distances - of each implant in vector form. Information regarding the soft tissues was obtained from an alginate impression that was cast in plaster and scanned. A Cr-Co structure was obtained using CAD/CAM, and its passive fit was verified in the patient's mouth using the Sheffield test and the screw resistance test. Twelve months after loading, peri-implant tissues were healthy and no marginal bone loss was observed. The clinical application of this new system using photogrammetry to record the position of multiple dental implants facilitated the rehabilitation of a patient with posterior maxillary edentulism by means of a prosthesis with optimal fit. The prosthetic process was accurate, fast, simple to apply and comfortable for the patient.

  8. Multiple Input - Multiple Output (MIMO) SAR

    Data.gov (United States)

    National Aeronautics and Space Administration — This effort will research and implement advanced Multiple-Input Multiple-Output (MIMO) Synthetic Aperture Radar (SAR) techniques which have the potential to improve...

  9. A three-source capture-recapture estimate of the number of new HIV diagnoses in children in France from 2003–2006 with multiple imputation of a variable of heterogeneous catchability

    Directory of Open Access Journals (Sweden)

    Héraud-Bousquet Vanina

    2012-10-01

    Full Text Available Background: Nearly all HIV infections in children worldwide are acquired through mother-to-child transmission (MTCT) during pregnancy, labour, delivery or breastfeeding. The objective of our study was to estimate the number and rate of new HIV diagnoses in children less than 13 years of age in mainland France from 2003–2006. Methods: We performed a capture-recapture analysis based on three sources of information: the mandatory HIV case reporting (DOVIH), the French Perinatal Cohort (ANRS-EPF) and a laboratory-based surveillance of HIV (LaboVIH). The missing values of a variable of heterogeneous catchability were estimated through multiple imputation. Log-linear modelling provided estimates of the number of new HIV infections in children, taking into account dependencies between sources and variables of heterogeneous catchability. Results: The three sources observed 216 new HIV diagnoses after record-linkage. The number of new HIV diagnoses in children was estimated at 387 (95% CI [271–503]) from 2003–2006, among whom 60% were born abroad. The estimated rate of new HIV diagnoses in children in mainland France was 9.1 per million in 2006 and was 38 times higher in children born abroad than in those born in France. The estimated completeness of the three sources combined was 55.8% (95% CI [42.9–79.7]) and varied according to the source; the completeness of DOVIH (28.4%) and ANRS-EPF (26.1%) were lower than that of LaboVIH (33.3%). Conclusion: Our study provided, for the first time, an estimated annual rate of new HIV diagnoses in children under 13 years old in mainland France. A more systematic HIV screening of pregnant women, repeated during pregnancy for women likely to engage in risky behaviour, is needed to optimise the prevention of MTCT. HIV screening for children who migrate from countries with high HIV prevalence to France could be recommended to facilitate early diagnosis and treatment.
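
    A three-source log-linear capture-recapture estimate of this kind can be reproduced in a few lines. The sketch below (Python with pandas/statsmodels; the counts and the single A:B dependence term are invented for illustration, and the heterogeneous-catchability variable and its imputation are omitted) fits a Poisson model to the seven observed capture histories and predicts the unobservable cell:

      import pandas as pd
      import statsmodels.api as sm
      import statsmodels.formula.api as smf

      # Hypothetical 2^3 - 1 observed capture histories for three sources
      # (A = case reporting, B = cohort, C = laboratory surveillance).
      cells = pd.DataFrame({
          "A":     [1, 1, 1, 1, 0, 0, 0],
          "B":     [1, 1, 0, 0, 1, 1, 0],
          "C":     [1, 0, 1, 0, 1, 0, 1],
          "count": [12, 25, 9, 48, 18, 37, 67],
      })

      # Log-linear model with one pairwise dependence (A:B); richer dependence
      # structures can be compared by AIC, as is standard practice.
      fit = smf.glm("count ~ A + B + C + A:B", data=cells,
                    family=sm.families.Poisson()).fit()

      # Predict the unobservable cell: cases missed by all three sources.
      missed = float(fit.predict(pd.DataFrame({"A": [0], "B": [0], "C": [0]}))[0])
      total = cells["count"].sum() + missed
      print(f"estimated missed: {missed:.1f}, estimated total: {total:.1f}")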

  10. A new basaltic glass microanalytical reference material for multiple techniques

    Science.gov (United States)

    Wilson, Steve; Koenig, Alan; Lowers, Heather

    2012-01-01

    The U.S. Geological Survey (USGS) has been producing reference materials since the 1950s. Over 50 materials have been developed to cover bulk rock, sediment, and soils for the geological community. These materials are used globally in geochemistry, environmental, and analytical laboratories that perform bulk chemistry and/or microanalysis for instrument calibration and quality assurance testing. To answer the growing demand for higher spatial resolution and sensitivity, there is a need to create a new generation of microanalytical reference materials suitable for a variety of techniques, such as scanning electron microscopy/X-ray spectrometry (SEM/EDS), electron probe microanalysis (EPMA), laser ablation inductively coupled mass spectrometry (LA-ICP-MS), and secondary ion mass spectrometry (SIMS). As such, the microanalytical reference material (MRM) needs to be stable under the beam, be homogeneous at scales of better than 10–25 micrometers for the major to ultra-trace element level, and contain all of the analytes (elements or isotopes) of interest. Previous development of basaltic glasses intended for LA-ICP-MS has resulted in a synthetic basaltic matrix series of glasses (USGS GS-series) and a natural basalt series of glasses (BCR-1G, BHVO-2G, and NKT-1G). These materials have been useful for the LA-ICP-MS community but were not originally intended for use by the electron or ion beam community. A material developed from start to finish with intended use in multiple microanalytical instruments would be useful for inter-laboratory and inter-instrument platform comparisons. This article summarizes the experiments undertaken to produce a basalt glass reference material suitable for distribution as a multiple-technique round robin material. The goal of the analytical work presented here is to demonstrate that the elemental homogeneity of the new glass is acceptable for its use as a reference material. Because the round robin exercise is still underway, only

  11. System health monitoring using multiple-model adaptive estimation techniques

    Science.gov (United States)

    Sifford, Stanley Ryan

    Monitoring system health for fault detection and diagnosis by tracking system parameters concurrently with state estimates is approached using a new multiple-model adaptive estimation (MMAE) method. This novel method is called GRid-based Adaptive Parameter Estimation (GRAPE). GRAPE expands existing MMAE methods by using new techniques to sample the parameter space, under the hypothesis that sample models can be applied and resampled without relying on a predefined set of models. GRAPE is initially implemented in a linear framework using Kalman filter models. A more generalized GRAPE formulation is presented using extended Kalman filter (EKF) models to represent nonlinear systems. GRAPE can handle both time-invariant and time-varying systems as it is designed to track parameter changes. Two techniques are presented to generate parameter samples for the parallel filter models. The first approach is called selected grid-based stratification (SGBS). SGBS divides the parameter space into equally spaced strata. The second approach uses Latin Hypercube Sampling (LHS) to determine the parameter locations and minimize the total number of required models. LHS is particularly useful as the parameter dimensions grow: adding more parameters does not require the model count to increase. Each resample is independent of the prior sample set other than the location of the parameter estimate. SGBS and LHS can be used for both the initial sample and subsequent resamples, and resamples are not required to use the same technique. Both techniques are demonstrated for both linear and nonlinear frameworks. The GRAPE framework further formalizes the parameter tracking process through a general approach for nonlinear systems. These additional methods allow GRAPE to either narrow the focus to converged values within a parameter range or expand the range in the appropriate direction to track the parameters outside the current parameter range boundary
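
    Two of the ingredients are easy to illustrate: Latin hypercube sampling of the parameter space, and MMAE-style likelihood weighting of the candidate models. The sketch below (Python/SciPy; the bounds, dimensions and stand-in residuals are assumptions, and the bank of Kalman filters is omitted entirely) shows one weight-update step:

      import numpy as np
      from scipy.stats import qmc

      rng = np.random.default_rng(0)

      # Latin hypercube sample of an assumed 2-D parameter space (e.g. mass, damping).
      sampler = qmc.LatinHypercube(d=2, seed=0)
      grid = qmc.scale(sampler.random(n=8), l_bounds=[0.5, 0.1], u_bounds=[2.0, 1.0])

      # One MMAE-style step: weight each candidate model by the Gaussian likelihood
      # of its measurement residual (innovation), then renormalize.
      def update_weights(weights, residuals, S):
          like = np.exp(-0.5 * residuals**2 / S) / np.sqrt(2 * np.pi * S)
          weights = weights * like
          return weights / weights.sum()

      weights = np.full(len(grid), 1.0 / len(grid))
      residuals = rng.normal(0.0, 1.0, size=len(grid))   # stand-in innovations
      weights = update_weights(weights, residuals, S=1.0)
      print(grid[np.argmax(weights)])    # current best parameter hypothesis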

  12. Comprehensive comparison of gap filling techniques for eddy covariance net carbon fluxes

    Science.gov (United States)

    Moffat, A. M.; Papale, D.; Reichstein, M.; Hollinger, D. Y.; Richardson, A. D.; Barr, A. G.; Beckstein, C.; Braswell, B. H.; Churkina, G.; Desai, A. R.; Falge, E.; Gove, J. H.; Heimann, M.; Hui, D.; Jarvis, A. J.; Kattge, J.; Noormets, A.; Stauch, V. J.

    2007-12-01

    Review of fifteen techniques for estimating missing values of net ecosystem CO2 exchange (NEE) in eddy covariance time series and evaluation of their performance for different artificial gap scenarios based on a set of ten benchmark datasets from six forested sites in Europe. The goal of gap filling is the reproduction of the NEE time series and hence this present work focuses on estimating missing NEE values, not on editing or the removal of suspect values in these time series due to systematic errors in the measurements (e.g. nighttime flux, advection). The gap filling was examined by generating fifty secondary datasets with artificial gaps (ranging in length from single half-hours to twelve consecutive days) for each benchmark dataset and evaluating the performance with a variety of statistical metrics. The performance of the gap filling varied among sites and depended on the level of aggregation (native half-hourly time step versus daily), long gaps were more difficult to fill than short gaps, and differences among the techniques were more pronounced during the day than at night. The non-linear regression techniques (NLRs), the look-up table (LUT), marginal distribution sampling (MDS), and the semi-parametric model (SPM) generally showed good overall performance. The artificial neural network based techniques (ANNs) were generally, if only slightly, superior to the other techniques. The simple interpolation technique of mean diurnal variation (MDV) showed a moderate but consistent performance. Several sophisticated techniques, the dual unscented Kalman filter (UKF), the multiple imputation method (MIM), the terrestrial biosphere model (BETHY), but also one of the ANNs and one of the NLRs showed high biases which resulted in a low reliability of the annual sums, indicating that additional development might be needed. An uncertainty analysis comparing the estimated random error in the ten benchmark datasets with the artificial gap residuals suggested that the
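
    Of the techniques reviewed, mean diurnal variation (MDV) is the simplest to reproduce: a missing half-hour is replaced by the mean of observations at the same time of day within a surrounding window. A minimal pandas sketch (the window length and the synthetic series are illustrative assumptions):

      import numpy as np
      import pandas as pd

      def fill_mdv(nee: pd.Series, window_days: int = 7) -> pd.Series:
          """Mean diurnal variation gap filling: replace a missing half-hour with
          the mean of the same time-of-day within +/- window_days."""
          out = nee.copy()
          for t in out.index[out.isna()]:
              lo = t - pd.Timedelta(days=window_days)
              hi = t + pd.Timedelta(days=window_days)
              nearby = nee.loc[lo:hi]
              same_tod = nearby[nearby.index.time == t.time()]
              out.loc[t] = same_tod.mean()   # NaN-aware; stays NaN if no data
          return out

      idx = pd.date_range("2007-06-01", periods=48 * 30, freq="30min")
      nee = pd.Series(np.sin(np.linspace(0, 60, len(idx))), index=idx)
      nee.iloc[200:296] = np.nan             # a two-day artificial gap
      print(fill_mdv(nee).isna().sum())      # 0: the gap has been filled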

  13. Magnetic resonance techniques for investigation of multiple sclerosis

    Science.gov (United States)

    MacKay, Alex; Laule, Cornelia; Li, David K. B.; Meyers, Sandra M.; Russell-Schulz, Bretta; Vavasour, Irene M.

    2014-11-01

    Multiple sclerosis (MS) is a common neurological disease which can cause loss of vision and balance, muscle weakness, impaired speech, fatigue, cognitive dysfunction and even paralysis. The key pathological processes in MS are inflammation, edema, myelin loss, axonal loss and gliosis. Unfortunately, the cause of MS is still not understood and there is currently no cure. Magnetic resonance imaging (MRI) is an important clinical and research tool for MS. 'Conventional' MRI images of the MS brain reveal bright lesions, or plaques, which demarcate regions of severe tissue damage. Conventional MRI has been extremely valuable for the diagnosis and management of people who have MS and also for the assessment of therapies designed to reduce inflammation and promote repair. While conventional MRI is clearly valuable, it lacks pathological specificity and, in some cases, sensitivity to non-lesional pathology. Advanced MR techniques have been developed to provide information that is more sensitive and specific than what is available with clinical scanning. Diffusion tensor imaging and magnetization transfer provide a general but non-specific measure of the pathological state of brain tissue. MR spectroscopy provides concentrations of brain metabolites which can be related to specific pathologies. Myelin water imaging was designed to assess brain myelination and has proved useful for measuring myelin loss in MS. To combat MS, it is crucial that the pharmaceutical industry finds therapies which can reverse the neurodegenerative processes which occur in the disease. The challenge for magnetic resonance researchers is to design imaging techniques which can provide detailed pathological information relating to the mechanisms of MS therapies. This paper briefly describes the pathologies of MS and demonstrates how MS-associated pathologies can be followed using both conventional and advanced MR imaging protocols.

  14. Assessment of Consequences of Replacement of System of the Uniform Tax on Imputed Income Patent System of the Taxation

    Directory of Open Access Journals (Sweden)

    Galina A. Manokhina

    2012-11-01

    Full Text Available The article highlights the main questions concerning the possible consequences of replacing the currently operating system of a single tax on imputed income with the patent system of taxation. The main advantages and drawbacks of the new system of taxation are shown, including the view that it would be more effective not to replace one special tax regime with another, but to introduce the patent system of taxation as an auxiliary system.

  15. Disclosure Control using Partially Synthetic Data for Large-Scale Health Surveys, with Applications to CanCORS

    OpenAIRE

    Loong, Bronwyn; Zaslavsky, Alan M.; He, Yulei; Harrington, David P.

    2013-01-01

    Statistical agencies have begun to partially synthesize public-use data for major surveys to protect the confidentiality of respondents’ identities and sensitive attributes, by replacing high disclosure risk and sensitive variables with multiple imputations. To date, there are few applications of synthetic data techniques to large-scale healthcare survey data. Here, we describe partial synthesis of survey data collected by CanCORS, a comprehensive observational study of the experiences, treat...

  16. Fusing Data Mining, Machine Learning and Traditional Statistics to Detect Biomarkers Associated with Depression

    OpenAIRE

    Dipnall, Joanna F.; Pasco, Julie A.; Berk, Michael; Williams, Lana J.; Dodd, Seetal; Jacka, Felice N.; Meyer, Denny

    2016-01-01

    Background Atheoretical large-scale data mining techniques using machine learning algorithms have promise in the analysis of large epidemiological datasets. This study illustrates the use of a hybrid methodology for variable selection that took account of missing data and complex survey design to identify key biomarkers associated with depression from a large epidemiological study. Methods The study used a three-step methodology amalgamating multiple imputation, a machine learning boosted reg...

  17. Combined interpretation of multiple geophysical techniques: an archaeological case study

    Science.gov (United States)

    Riedl, S.; Reichmann, S.; Tronicke, J.; Lück, E.

    2009-04-01

    In order to locate and ascertain the dimensions of an ancient orangery, we explored an area of about 70 m x 60 m in the Rheinsberg Palace Garden (Germany) with multiple geophysical techniques. The Rheinsberg Park, situated about 100 km northwest of Berlin, Germany, was established by the Prussian emperors in the 18th century. Due to redesign of the architecture and the landscaping during the past 300 years, buildings were dismantled and detailed knowledge about some original buildings got lost. We surveyed an area close to a gazebo where, according to historical sources, an orangery was planned around the year 1740. However, today it is not clear to what extent this plan was realized and whether remains of this building are still buried in the subsurface. Applied geophysical techniques include magnetic gradiometry, frequency domain electromagnetic (FDEM) and direct current (DC) resistivity mapping as well as ground penetrating radar (GPR). To get an overview of the site, we performed FDEM electrical conductivity mapping using an EM38 instrument and magnetic gradiometry with caesium magnetometers. Both data sets were collected with an in- and crossline data point spacing of ca. 10 cm and 50 cm, respectively. DC resistivity surveying was performed using a pole-pole electrode configuration with an electrode spacing of 1.5 m and a spacing of 1.0 m between individual readings. A 3-D GPR survey was conducted using 200 MHz antennae and in- and crossline spacing of ca. 10 cm and 40 cm, respectively. A standard processing sequence including 3-D migration was applied. A combined interpretation of all collected data sets illustrates that the magnetic gradient and EM38 conductivity maps are dominated by anomalies from metallic water pipes belonging to the irrigation system of the park. The DC resistivity map outlines a rectangular area which might indicate the extent of a former building south of the gazebo. The 3-D GPR data set provides further insights about

  18. Multiple and high-throughput droplet reactions via combination of microsampling technique and microfluidic chip

    KAUST Repository

    Wu, Jinbo

    2012-11-20

    Microdroplets offer unique compartments for accommodating a large number of chemical and biological reactions in tiny volume with precise control. A major concern in droplet-based microfluidics is the difficulty to address droplets individually and achieve high throughput at the same time. Here, we have combined an improved cartridge sampling technique with a microfluidic chip to perform droplet screenings and aggressive reaction with minimal (nanoliter-scale) reagent consumption. The droplet composition, distance, volume (nanoliter to subnanoliter scale), number, and sequence could be precisely and digitally programmed through the improved sampling technique, while sample evaporation and cross-contamination are effectively eliminated. Our combined device provides a simple model to utilize multiple droplets for various reactions with low reagent consumption and high throughput. © 2012 American Chemical Society.

  19. Estimating range of influence in case of missing spatial data

    DEFF Research Database (Denmark)

    Bihrmann, Kristine; Ersbøll, Annette Kjær

    2015-01-01

    BACKGROUND: The range of influence refers to the average distance between locations at which the observed outcome is no longer correlated. In many studies, missing data occur and a popular tool for handling missing data is multiple imputation. The objective of this study was to investigate how the estimated range of influence is affected when 1) the outcome is only observed at some of a given set of locations, and 2) multiple imputation is used to impute the outcome at the non-observed locations. METHODS: The study was based on the simulation of missing outcomes in a complete data set. The range of influence was estimated from a logistic regression model with a spatially structured random effect, modelled by a Gaussian field. Results were evaluated by comparing estimates obtained from complete, missing, and imputed data. RESULTS: In most simulation scenarios, the range estimates were consistent...

  20. Multiple-Trait Genomic Selection Methods Increase Genetic Value Prediction Accuracy

    Science.gov (United States)

    Jia, Yi; Jannink, Jean-Luc

    2012-01-01

    Genetic correlations between quantitative traits measured in many breeding programs are pervasive. These correlations indicate that measurements of one trait carry information on other traits. Current single-trait (univariate) genomic selection does not take advantage of this information. Multivariate genomic selection on multiple traits could accomplish this but has been little explored and tested in practical breeding programs. In this study, three multivariate linear models (i.e., GBLUP, BayesA, and BayesCπ) were presented and compared to univariate models using simulated and real quantitative traits controlled by different genetic architectures. We also extended BayesA with fixed hyperparameters to a full hierarchical model that estimated hyperparameters and BayesCπ to impute missing phenotypes. We found that optimal marker-effect variance priors depended on the genetic architecture of the trait so that estimating them was beneficial. We showed that the prediction accuracy for a low-heritability trait could be significantly increased by multivariate genomic selection when a correlated high-heritability trait was available. Further, multiple-trait genomic selection had higher prediction accuracy than single-trait genomic selection when phenotypes are not available on all individuals and traits. Additional factors affecting the performance of multiple-trait genomic selection were explored. PMID:23086217
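
    The single-trait GBLUP baseline can be sketched compactly, since GBLUP is equivalent to ridge regression with the genomic relationship matrix as a precomputed kernel. The toy example below (Python/scikit-learn; the marker matrix, phenotype model and shrinkage value are invented, and the multi-trait extension, which adds a trait covariance structure, is not shown) illustrates the idea:

      import numpy as np
      from sklearn.kernel_ridge import KernelRidge

      rng = np.random.default_rng(0)

      # Hypothetical marker matrix (individuals x markers, centered) and phenotype.
      M = rng.integers(0, 3, size=(200, 500)).astype(float)
      M -= M.mean(axis=0)
      y = M[:, :20].sum(axis=1) + rng.normal(0, 5, size=200)

      # Genomic relationship matrix; GBLUP = kernel ridge with this kernel.
      G = M @ M.T / M.shape[1]
      gblup = KernelRidge(alpha=1.0, kernel="precomputed")
      gblup.fit(G, y)

      # Predicted genomic values follow from an individual's relationships to the
      # training set (here simply the training individuals themselves).
      print(gblup.predict(G)[:5])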

  1. Multiple regression technique for Pth degree polynomials with and without linear cross products

    Science.gov (United States)

    Davis, J. W.

    1973-01-01

    A multiple regression technique was developed by which the nonlinear behavior of specified independent variables can be related to a given dependent variable. The polynomial expression can be of Pth degree and can incorporate N independent variables. Two cases are treated such that mathematical models can be studied both with and without linear cross products. The resulting surface fits can be used to summarize trends for a given phenomenon and provide a mathematical relationship for subsequent analysis. To implement this technique, separate computer programs were developed for the case without linear cross products and for the case incorporating such cross products, which evaluate the various constants in the model regression equation. In addition, the significance of the estimated regression equation is considered, and the standard deviation, the F statistic, the maximum absolute percent error, and the average of the absolute values of the percent error are evaluated. The computer programs and their manner of utilization are described. Sample problems are included to illustrate the use and capability of the technique; they show the output formats and typical plots comparing computer results to each set of input data.
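
    The distinction between the two cases can be made concrete in a few lines. In the sketch below (Python/scikit-learn rather than the original programs; the data and degree are illustrative), the "with cross products" model uses all monomials of total degree at most P, while the "without" model keeps only pure powers of each variable:

      import numpy as np
      from sklearn.linear_model import LinearRegression
      from sklearn.preprocessing import PolynomialFeatures

      rng = np.random.default_rng(0)
      X = rng.uniform(-1, 1, size=(100, 2))
      y = 1 + 2*X[:, 0] - 3*X[:, 1]**2 + 0.5*X[:, 0]*X[:, 1] + rng.normal(0, 0.05, 100)

      # With cross products: all monomials of total degree <= P (here P = 3).
      full = PolynomialFeatures(degree=3, include_bias=False)

      # Without cross products: pure powers x_i, x_i^2, ..., x_i^P only.
      def pure_powers(X, degree):
          return np.hstack([X**d for d in range(1, degree + 1)])

      fit_full = LinearRegression().fit(full.fit_transform(X), y)
      fit_pure = LinearRegression().fit(pure_powers(X, 3), y)
      print(fit_full.score(full.transform(X), y), fit_pure.score(pure_powers(X, 3), y))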

  2. Analyzing the Impacts of Alternated Number of Iterations in Multiple Imputation Method on Explanatory Factor Analysis

    Directory of Open Access Journals (Sweden)

    Duygu KOÇAK

    2017-11-01

    Full Text Available The study aims to identify the effects of the number of iterations used in the multiple imputation method, one of the methods used to cope with missing values, on the results of factor analysis. With this aim, artificial datasets of different sample sizes were created. Values missing at random and missing completely at random were created in various ratios by deleting data. For the data missing at random, a second variable was generated at the ordinal scale level and datasets with different ratios of missing values were obtained based on the levels of this variable. The data were generated using the "psych" package in R, while the "dplyr" package was used to write code that deleted values according to the predetermined conditions of the missing value mechanism. Different datasets were generated by applying different numbers of iterations. Exploratory factor analysis was conducted on the completed datasets and the factors and total explained variances are presented. These values were first evaluated against the number of factors and total explained variance of the complete datasets. The results indicate that the multiple imputation method performs better for values missing at random than for values missing completely at random. It was also found that increasing the number of iterations in both missing value datasets decreases the difference from the results obtained with the complete datasets.
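
    The experiment translates naturally into scikit-learn, whose IterativeImputer exposes the iteration count directly. A minimal sketch (synthetic two-factor data, an assumed 15% missingness rate, and the imputer's default chained-regression estimator rather than the authors' exact R setup):

      import numpy as np
      from sklearn.experimental import enable_iterative_imputer  # noqa: F401
      from sklearn.impute import IterativeImputer
      from sklearn.decomposition import FactorAnalysis

      rng = np.random.default_rng(0)
      latent = rng.normal(size=(300, 2))                  # two true factors
      X = latent @ rng.normal(size=(2, 8)) + rng.normal(0, 0.3, size=(300, 8))
      X[rng.random(X.shape) < 0.15] = np.nan              # ~15% missing at random

      for n_iter in (1, 5, 20):                           # vary the iteration count
          X_imp = IterativeImputer(max_iter=n_iter, sample_posterior=True,
                                   random_state=0).fit_transform(X)
          fa = FactorAnalysis(n_components=2).fit(X_imp)
          print(n_iter, fa.loglike_[-1])                  # fit of the factor solution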

  3. Security of public key encryption technique based on multiple chaotic systems

    International Nuclear Information System (INIS)

    Wang Kai; Pei Wenjiang; Zou Liuhua; Cheung Yiuming; He Zhenya

    2006-01-01

    Recently, a new public key encryption technique based on multiple chaotic systems has been proposed [B. Ranjan, Phys. Rev. Lett. 95 (2005) 098702]. This scheme employs m chaotic systems and a set of linear functions for key exchange over an insecure channel. Security of the proposed algorithm grows as (NP)^m, where N and P are the size of the key and the computational complexity of the linear functions, respectively. In this Letter, the fundamental weakness of the cryptosystem is pointed out and a successful attack is described. Given the public keys and the initial vector, one can calculate the secret key based on Parseval's theorem. Both theoretical and experimental results show that the attacker can recover the secret key without difficulty. The lack of security discourages the use of such an algorithm for practical applications

  4. PHEA-PLA biocompatible nanoparticles by technique of solvent evaporation from multiple emulsions.

    Science.gov (United States)

    Cavallaro, Gennara; Craparo, Emanuela Fabiola; Sardo, Carla; Lamberti, Gaetano; Barba, Anna Angela; Dalmoro, Annalisa

    2015-11-30

    Nanocarriers of amphiphilic polymeric materials represent versatile delivery systems for poorly water-soluble drugs. In this work the technique of solvent evaporation from multiple emulsions was applied to produce nanovectors based on a new amphiphilic copolymer, α,β-poly(N-2-hydroxyethyl)-DL-aspartamide-polylactic acid (PHEA-PLA), purposely synthesized to be used in the controlled release of active molecules poorly soluble in water. To this aim an amphiphilic derivative of PHEA, a hydrophilic polymer, was synthesized by derivatization of the polymeric backbone with hydrophobic grafts of polylactic acid (PLA). The copolymer thus obtained was used to produce nanoparticles loaded with α-tocopherol (vitamin E), adopted as a lipophilic model molecule. Applying a protocol based on solvent evaporation from multiple emulsions assisted by ultrasonic energy and optimizing the emulsification process (solvent selection/separation stages), PHEA-PLA nanostructured particles with total α-tocopherol entrapment efficiency (100%) were obtained. Drug release is expected to take place over shorter times than with PLA alone, owing to the presence of the hydrophilic PHEA, so the produced nanoparticles can be used as semi-long-term release drug delivery systems. Copyright © 2015 Elsevier B.V. All rights reserved.

  5. Security and reliability analysis of diversity combining techniques in SIMO mixed RF/FSO with multiple users

    KAUST Repository

    Abd El-Malek, Ahmed H.; Salhab, Anas M.; Zummo, Salam A.; Alouini, Mohamed-Slim

    2016-01-01

    In this paper, we investigate the impact of different diversity combining techniques on the security and reliability analysis of a single-input-multiple-output (SIMO) mixed radio frequency (RF)/free space optical (FSO) relay network with opportunistic multiuser scheduling. In this model, the user with the best channel among multiple users communicates with a multiple-antenna relay node over an RF link, and then the relay node employs the amplify-and-forward (AF) protocol in retransmitting the user data to the destination over an FSO link. Moreover, the authorized transmission is assumed to be attacked by a single passive RF eavesdropper equipped with multiple antennas. Therefore, the system security-reliability trade-off analysis is investigated. Closed-form expressions for the system outage probability and the system intercept probability are derived. Then, the newly derived expressions are simplified to their asymptotic formulas in the high signal-to-noise ratio (SNR) region. Numerical results are presented to validate the achieved exact and asymptotic results and to illustrate the impact of various system parameters on the system performance. © 2016 IEEE.

  6. Security and reliability analysis of diversity combining techniques in SIMO mixed RF/FSO with multiple users

    KAUST Repository

    Abd El-Malek, Ahmed H.

    2016-07-26

    In this paper, we investigate the impact of different diversity combining techniques on the security and reliability analysis of a single-input-multiple-output (SIMO) mixed radio frequency (RF)/free space optical (FSO) relay network with opportunistic multiuser scheduling. In this model, the user with the best channel among multiple users communicates with a multiple-antenna relay node over an RF link, and then the relay node employs the amplify-and-forward (AF) protocol in retransmitting the user data to the destination over an FSO link. Moreover, the authorized transmission is assumed to be attacked by a single passive RF eavesdropper equipped with multiple antennas. Therefore, the system security-reliability trade-off analysis is investigated. Closed-form expressions for the system outage probability and the system intercept probability are derived. Then, the newly derived expressions are simplified to their asymptotic formulas in the high signal-to-noise ratio (SNR) region. Numerical results are presented to validate the achieved exact and asymptotic results and to illustrate the impact of various system parameters on the system performance. © 2016 IEEE.
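
    The reliability side of such an analysis is often checked against Monte Carlo simulation. The sketch below is a deliberately simplified illustration of diversity combining on a Rayleigh-fading RF leg only (the thresholds and SNRs are assumptions; the paper's mixed RF/FSO model and eavesdropper analysis are not reproduced): it compares the outage probability of selection combining and maximal-ratio combining:

      import numpy as np

      rng = np.random.default_rng(0)
      n, L, snr_db, th_db = 200_000, 4, 10.0, 5.0   # trials, branches, mean SNR, threshold

      # Per-branch instantaneous SNRs over Rayleigh fading (exponentially distributed).
      g = rng.exponential(scale=10 ** (snr_db / 10), size=(n, L))

      # Selection combining keeps the best branch; maximal-ratio combining sums all.
      sc, mrc = g.max(axis=1), g.sum(axis=1)
      th = 10 ** (th_db / 10)
      print("outage SC :", np.mean(sc < th))
      print("outage MRC:", np.mean(mrc < th))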

  7. Fovea sparing internal limiting membrane peeling using multiple parafoveal curvilinear peels for myopic foveoschisis: technique and outcome.

    Science.gov (United States)

    Jin, Haiying; Zhang, Qi; Zhao, Peiquan

    2016-10-18

    To introduce a modified surgical technique, "parafoveal multiple curvilinear internal limiting membrane (ILM) peeling", to preserve the epi-foveal ILM in myopic foveoschisis surgery. Consecutive patients with myopic foveoschisis were enrolled in the present prospective interventional case series. The surgeries were performed using a transconjunctival 23-gauge system. The macular area was divided into quadrants. The ILM was peeled off in a curvilinear manner centered around a site away from the central fovea in each quadrant. Shearing forces were used to control the direction and keep the peeling away from the central fovea. The ILM at the central fovea, about 500 to 1000 μm across, was preserved by this technique. This technique was performed in 20 eyes of 20 consecutive patients. The epi-foveal ILM was successfully preserved in all cases using the technique. Patients were followed up for more than 12 months. The mean postoperative logMAR visual acuity improved from 1.67 ± 0.65 preoperatively to 1.15 ± 0.49 (P = 0.015; paired t-test). Postoperative OCT examinations showed that full-thickness macular holes (MHs) did not develop in any case. Central fovea thickness decreased from 910 ± 261 μm preoperatively to 125 ± 85 μm postoperatively (P = 0.001; paired t-test). Fovea-sparing ILM peeling using multiple parafoveal curvilinear peels prevents the development of postoperative full-thickness MHs in eyes with myopic foveoschisis.

  8. Preserving Logical Relations while Estimating Missing Values

    Directory of Open Access Journals (Sweden)

    Ton de Waal

    2017-09-01

    Full Text Available Item-nonresponse is often treated by means of an imputation technique. In some cases, the data have to satisfy certain constraints, which are frequently referred to as edits. An example of an edit for numerical data is that the profit of an enterprise equals its turnover minus its costs. Edits place restrictions on the imputations that are allowed and hence complicate the imputation process. In this paper we explore an adjustment approach. This adjustment approach consists of three steps. In the first step, the imputation step, nearest neighbour hot deck imputation is used to find several pre-imputed values. In a second step, the adjustment step, these pre-imputed values are adjusted so the resulting records satisfy all edits. In a third step, the best donor record is selected. The adjusted record corresponding to that donor record is the final imputed record. In principle, a potential donor that is not the closest to the record to be imputed may still give the best results after adjustment. In this paper we therefore focus on the number of potential donor records that are considered in the imputation step.
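
    The three steps can be illustrated on the edit mentioned above, profit = turnover - costs. In the minimal Python sketch below (the donor values are invented, and the paper's distance metric, adjustment method and donor ranking are more elaborate than this), each candidate donor is pre-imputed, minimally adjusted in the least-squares sense onto the edit, and the donor whose adjustment is smallest is kept:

      import numpy as np

      # Record to impute: turnover observed, costs and profit missing.
      turnover = 120.0

      # Candidate donors (turnover, costs, profit) from responding enterprises.
      donors = np.array([[100.0, 80.0, 20.0],
                         [130.0, 95.0, 35.0],
                         [400.0, 390.0, 10.0]])

      # Step 2 (adjustment): project (costs, profit) onto the line
      # costs + profit = turnover, the smallest change satisfying the edit.
      def adjust(costs, profit, turnover):
          gap = turnover - (costs + profit)
          return costs + gap / 2, profit + gap / 2

      # Steps 1 and 3: pre-impute from each donor, keep the one that moved least.
      best = None
      for t, c, p in donors:
          c_adj, p_adj = adjust(c, p, turnover)
          dist = np.hypot(c_adj - c, p_adj - p)
          if best is None or dist < best[0]:
              best = (dist, c_adj, p_adj)
      print("imputed costs/profit:", best[1], best[2])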

  9. Simultaneous rotational and vibrational CARS generation through a multiple-frequency combination technique

    International Nuclear Information System (INIS)

    Alden, M.; Bengtsson, P.E.; Edner, H.

    1987-01-01

    One of the most promising laser techniques for probing combustion processes is coherent anti-Stokes Raman scattering (CARS), which due to its coherent nature and signal strength is applied in several real-world applications. To date, almost all CARS experiments have been based on probing the population of molecular vibrational energy levels. However, there are several reasons why rotational CARS, i.e. probing of rotational energy levels, may provide a complement to, or even a better choice than, vibrational CARS. Recently an alternative way to produce rotational CARS spectra was proposed, based on a multiple-frequency combination technique. The energy-level diagram for this process is presented. Two dye laser beams at ω_r and one fixed-frequency laser beam at ω_g are employed; ω_r,1 and ω_r,2 are two frequencies of the many possible pairs with a frequency difference matching a rotational transition in a molecule. The excitation induced by ω_r,1 and ω_r,2 is then scattered by the narrowband ω_g beam, resulting in a CARS beam at ω_g + ω_r,1 - ω_r,2. An interesting feature of this technique is that it is possible to simultaneously generate a rotational and a vibrational CARS spectrum by using a double-folded BOXCARS phase-matching approach. The authors believe that the proposed technique for producing rotational and vibrational CARS spectra could be of interest, e.g., when measuring in highly turbulent flows. In this case the rotational CARS spectra could be used for temperature measurements in the cooler parts, whereas vibrational CARS is to be preferred when measuring in the hotter parts

  10. Combination of individual tree detection and area-based approach in imputation of forest variables using airborne laser data

    Science.gov (United States)

    Vastaranta, Mikko; Kankare, Ville; Holopainen, Markus; Yu, Xiaowei; Hyyppä, Juha; Hyyppä, Hannu

    2012-01-01

    The two main approaches to deriving forest variables from laser-scanning data are the statistical area-based approach (ABA) and individual tree detection (ITD). With ITD it is feasible to acquire single tree information, as in field measurements. Here, ITD was used for measuring training data for the ABA. In addition to automatic ITD (ITD-auto), we tested a combination of ITD-auto and visual interpretation (ITD-visual). ITD-visual had two stages: in the first, ITD-auto was carried out and in the second, the results of the ITD-auto were visually corrected by interpreting three-dimensional laser point clouds. The field data comprised 509 circular plots (r = 10 m) that were divided equally for testing and training. ITD-derived forest variables were used for training the ABA and the accuracies of the k-most similar neighbor (k-MSN) imputations were evaluated and compared with the ABA trained with traditional measurements. The root-mean-squared error (RMSE) in the mean volume was 24.8%, 25.9%, and 27.2% with the ABA trained with field measurements, ITD-auto, and ITD-visual, respectively. When ITD methods were applied in acquiring training data, the mean volume, basal area, and basal area-weighted mean diameter were underestimated in the ABA by 2.7-9.2%. This project constituted a pilot study for using ITD measurements as training data for the ABA. Further studies are needed to reduce the bias and to determine the accuracy obtained in imputation of species-specific variables. The method could be applied in areas with sparse road networks or when the costs of fieldwork must be minimized.

  11. A single-gradient junction technique to replace multiple-junction shifts for craniospinal irradiation treatment

    International Nuclear Information System (INIS)

    Hadley, Austin; Ding, George X.

    2014-01-01

    Craniospinal irradiation (CSI) requires abutting fields at the cervical spine. Junction shifts are conventionally used to prevent setup error-induced overdosage/underdosage from occurring at the same location. This study compared the dosimetric differences at the cranial-spinal junction between a single-gradient junction technique and conventional multiple-junction shifts, and evaluated the effect of setup errors on the dose distributions for both techniques over a treatment course and for a single fraction. Conventionally, 2 lateral brain fields and a posterior spine field(s) are used for CSI with weekly 1-cm junction shifts. We retrospectively replanned 4 CSI patients using a single-gradient junction between the lateral brain fields and the posterior spine field. The fields were extended to allow a minimum 3-cm field overlap. The dose gradient at the junction was achieved using dose painting and intensity-modulated radiation therapy planning. The effect of positioning setup errors on the dose distributions for both techniques was simulated by applying shifts of ± 3 and 5 mm. The resulting cervical spine doses across the field junction for both techniques were calculated and compared. Dose profiles were obtained for both a single fraction and an entire treatment course to include the effects of the conventional weekly junction shifts. Compared with the conventional technique, the gradient-dose technique resulted in higher dose uniformity and conformity to the target volumes, lower organ at risk (OAR) mean and maximum doses, and diminished hot spots from systematic positioning errors over the course of treatment. Single-fraction hot and cold spots were improved for the gradient-dose technique. The single-gradient junction technique provides improved conformity, dose uniformity, diminished hot spots, lower OAR mean and maximum dose, and one plan for the entire treatment course, which reduces the potential for human error associated with the conventional four-shift plans

  12. Fusing Data Mining, Machine Learning and Traditional Statistics to Detect Biomarkers Associated with Depression

    Science.gov (United States)

    Dipnall, Joanna F.

    2016-01-01

    Background Atheoretical large-scale data mining techniques using machine learning algorithms have promise in the analysis of large epidemiological datasets. This study illustrates the use of a hybrid methodology for variable selection that took account of missing data and complex survey design to identify key biomarkers associated with depression from a large epidemiological study. Methods The study used a three-step methodology amalgamating multiple imputation, a machine learning boosted regression algorithm and logistic regression, to identify key biomarkers associated with depression in the National Health and Nutrition Examination Study (2009–2010). Depression was measured using the Patient Health Questionnaire-9 and 67 biomarkers were analysed. Covariates in this study included gender, age, race, smoking, food security, Poverty Income Ratio, Body Mass Index, physical activity, alcohol use, medical conditions and medications. The final imputed weighted multiple logistic regression model included possible confounders and moderators. Results After the creation of 20 imputation data sets from multiple chained regression sequences, machine learning boosted regression initially identified 21 biomarkers associated with depression. Using traditional logistic regression methods, including controlling for possible confounders and moderators, a final set of three biomarkers were selected. The final three biomarkers from the novel hybrid variable selection methodology were red cell distribution width (OR 1.15; 95% CI 1.01, 1.30), serum glucose (OR 1.01; 95% CI 1.00, 1.01) and total bilirubin (OR 0.12; 95% CI 0.05, 0.28). Significant interactions were found between total bilirubin with Mexican American/Hispanic group (p = 0.016), and current smokers (p<0.001). Conclusion The systematic use of a hybrid methodology for variable selection, fusing data mining techniques using a machine learning algorithm with traditional statistical modelling, accounted for missing data and

  13. Fusing Data Mining, Machine Learning and Traditional Statistics to Detect Biomarkers Associated with Depression.

    Directory of Open Access Journals (Sweden)

    Joanna F Dipnall

    Full Text Available Atheoretical large-scale data mining techniques using machine learning algorithms have promise in the analysis of large epidemiological datasets. This study illustrates the use of a hybrid methodology for variable selection that took account of missing data and complex survey design to identify key biomarkers associated with depression from a large epidemiological study. The study used a three-step methodology amalgamating multiple imputation, a machine learning boosted regression algorithm and logistic regression, to identify key biomarkers associated with depression in the National Health and Nutrition Examination Study (2009-2010). Depression was measured using the Patient Health Questionnaire-9 and 67 biomarkers were analysed. Covariates in this study included gender, age, race, smoking, food security, Poverty Income Ratio, Body Mass Index, physical activity, alcohol use, medical conditions and medications. The final imputed weighted multiple logistic regression model included possible confounders and moderators. After the creation of 20 imputation data sets from multiple chained regression sequences, machine learning boosted regression initially identified 21 biomarkers associated with depression. Using traditional logistic regression methods, including controlling for possible confounders and moderators, a final set of three biomarkers were selected. The final three biomarkers from the novel hybrid variable selection methodology were red cell distribution width (OR 1.15; 95% CI 1.01, 1.30), serum glucose (OR 1.01; 95% CI 1.00, 1.01) and total bilirubin (OR 0.12; 95% CI 0.05, 0.28). Significant interactions were found between total bilirubin with the Mexican American/Hispanic group (p = 0.016), and current smokers (p<0.001). The systematic use of a hybrid methodology for variable selection, fusing data mining techniques using a machine learning algorithm with traditional statistical modelling, accounted for missing data and complex survey sampling
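
    The three-step pipeline maps onto standard Python tooling. The sketch below (scikit-learn; synthetic data, 20 stand-in biomarkers instead of 67, five imputations, a top-3 cut-off, and no survey weights or interaction terms, so it is a schematic of the workflow rather than the authors' analysis) chains stochastic imputation, boosted-tree variable screening and a final logistic regression:

      import numpy as np
      from sklearn.experimental import enable_iterative_imputer  # noqa: F401
      from sklearn.impute import IterativeImputer
      from sklearn.ensemble import GradientBoostingClassifier
      from sklearn.linear_model import LogisticRegression

      rng = np.random.default_rng(0)
      X = rng.normal(size=(300, 20))                  # 20 hypothetical biomarkers
      y = (X[:, 0] - X[:, 1] + rng.normal(0, 1, 300) > 0).astype(int)
      X[rng.random(X.shape) < 0.1] = np.nan           # 10% missing values

      # Step 1: several stochastic imputations of the biomarker matrix.
      imputations = [IterativeImputer(sample_posterior=True, random_state=m)
                     .fit_transform(X) for m in range(5)]

      # Step 2: boosted trees screen variables in each imputed dataset;
      # keep the biomarkers that rank highly on average.
      importance = np.mean([GradientBoostingClassifier(random_state=0)
                            .fit(Xi, y).feature_importances_
                            for Xi in imputations], axis=0)
      keep = np.argsort(importance)[::-1][:3]

      # Step 3: a conventional logistic regression on the screened variables.
      final = LogisticRegression().fit(imputations[0][:, keep], y)
      print("selected biomarker columns:", keep, "coefs:", final.coef_.round(2))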

  14. Multiple-Input Multiple-Output OFDM with Index Modulation

    OpenAIRE

    Basar, Ertugrul

    2015-01-01

    Orthogonal frequency division multiplexing with index modulation (OFDM-IM) is a novel multicarrier transmission technique which has been proposed as an alternative to classical OFDM. The main idea of OFDM-IM is the use of the indices of the active subcarriers in an OFDM system as an additional source of information. In this work, we propose multiple-input multiple-output OFDM-IM (MIMO-OFDM-IM) scheme by combining OFDM-IM and MIMO transmission techniques. The low complexity transceiver structu...
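
    The core idea of OFDM-IM, spending some bits on which subcarriers are active, fits in a toy mapper. The sketch below (Python/NumPy; a group of N = 4 subcarriers with K = 2 active, a 4-entry pattern look-up table and QPSK on the active tones are all illustrative choices, not the paper's configuration) maps 6 bits to one frequency-domain group:

      import numpy as np

      # 2 index bits choose one of 4 allowed activation patterns (a look-up
      # table, as in OFDM-IM); each active subcarrier carries one QPSK symbol.
      PATTERNS = [(0, 1), (2, 3), (0, 2), (1, 3)]         # 4 = 2^2 patterns used
      QPSK = np.array([1 + 1j, -1 + 1j, -1 - 1j, 1 - 1j]) / np.sqrt(2)

      def map_group(bits):                    # bits: 2 index bits + 4 symbol bits
          idx = PATTERNS[bits[0] * 2 + bits[1]]
          x = np.zeros(4, dtype=complex)      # inactive tones stay zero
          for k, pos in enumerate(idx):
              b0, b1 = bits[2 + 2 * k], bits[3 + 2 * k]
              x[pos] = QPSK[b0 * 2 + b1]      # 2-bit QPSK mapping per active tone
          return x                            # one frequency-domain group

      bits = [1, 0, 0, 1, 1, 1]
      freq = map_group(bits)
      time = np.fft.ifft(freq)                # per-group OFDM modulation
      print(freq)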

  15. Characterising and modelling regolith stratigraphy using multiple geophysical techniques

    Science.gov (United States)

    Thomas, M.; Cremasco, D.; Fotheringham, T.; Hatch, M. A.; Triantifillis, J.; Wilford, J.

    2013-12-01

    Regolith is the weathered, typically mineral-rich layer from fresh bedrock to the land surface. It encompasses soil (A, E and B horizons) that has undergone pedogenesis. Below is the weathered C horizon that retains at least some of the original rocky fabric and structure. At the base of this is the lower regolith boundary of continuous hard bedrock (the R horizon). Regolith may be absent, e.g. at rocky outcrops, or may be many tens of metres deep. Comparatively little is known about regolith, and critical questions remain regarding composition and characteristics - especially deeper, where the challenge of collecting reliable data increases with depth. In Australia research is underway to characterise and map regolith using consistent methods at scales ranging from local (e.g. hillslope) to continental. These efforts are driven by many research needs, including Critical Zone modelling and simulation. Pilot research in South Australia using digitally-based environmental correlation techniques modelled the depth to bedrock to 9 m for an upland area of 128 000 ha. One finding was the inability to reliably model local scale depth variations over horizontal distances of 2 - 3 m and vertical distances of 1 - 2 m. The need to better characterise variations in regolith to strengthen models at these fine scales was discussed. Addressing this need, we describe high intensity, ground-based multi-sensor geophysical profiling of three hillslope transects in different regolith-landscape settings to characterise fine resolution regolith variation. The techniques comprised electromagnetic induction at a number of frequencies (multiple frequency, multiple coil electromagnetic induction) and high resolution resistivity. These were accompanied by georeferenced, closely spaced deep cores to 9 m - or to core refusal. The intact cores were sub-sampled to standard depths and analysed for regolith properties to compile core datasets consisting of: water content; texture; electrical conductivity; and weathered state. After preprocessing (filtering, georeferencing), ...

  16. Missing continuous outcomes under covariate dependent missingness in cluster randomised trials.

    Science.gov (United States)

    Hossain, Anower; Diaz-Ordaz, Karla; Bartlett, Jonathan W

    2017-06-01

    Attrition is a common occurrence in cluster randomised trials, which leads to missing outcome data. Two approaches for analysing such trials are cluster-level analysis and individual-level analysis. This paper compares the performance of unadjusted cluster-level analysis, baseline covariate adjusted cluster-level analysis and linear mixed model analysis, under baseline covariate dependent missingness in continuous outcomes, in terms of bias, average estimated standard error and coverage probability. The methods of complete records analysis and multiple imputation are used to handle the missing outcome data. We considered four scenarios, with the missingness mechanism and baseline covariate effect on outcome either the same or different between intervention groups. We show that both unadjusted cluster-level analysis and baseline covariate adjusted cluster-level analysis give unbiased estimates of the intervention effect only if both intervention groups have the same missingness mechanisms and there is no interaction between baseline covariate and intervention group. Linear mixed model and multiple imputation give unbiased estimates under all four considered scenarios, provided that an interaction of intervention and baseline covariate is included in the model when appropriate. Cluster mean imputation has been proposed as a valid approach for handling missing outcomes in cluster randomised trials. We show that cluster mean imputation only gives unbiased estimates when the missingness mechanism is the same between the intervention groups and there is no interaction between baseline covariate and intervention group. Multiple imputation shows overcoverage for a small number of clusters in each intervention group.
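
    The abstract pools results across imputed datasets; the standard machinery for that step is Rubin's rules, sketched below under the assumption that each imputed analysis returns a point estimate and its variance.

        # Pooling an estimate across m imputed datasets with Rubin's rules:
        # pooled point estimate, within- and between-imputation variance, and
        # the total variance used for confidence intervals.
        import numpy as np

        def rubins_rules(estimates, variances):
            """estimates, variances: length-m arrays of per-imputation results."""
            estimates = np.asarray(estimates, dtype=float)
            variances = np.asarray(variances, dtype=float)
            m = len(estimates)
            q_bar = estimates.mean()              # pooled estimate
            w = variances.mean()                  # within-imputation variance
            b = estimates.var(ddof=1)             # between-imputation variance
            t = w + (1 + 1 / m) * b               # total variance
            return q_bar, t

        # e.g. intervention effects and squared SEs from 5 imputed analyses
        est, var = rubins_rules([1.8, 2.1, 1.9, 2.3, 2.0],
                                [0.25, 0.28, 0.24, 0.30, 0.26])
        print(est, var ** 0.5)                    # pooled effect and its SE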

  17. An Improved Clutter Suppression Method for Weather Radars Using Multiple Pulse Repetition Time Technique

    Directory of Open Access Journals (Sweden)

    Yingjie Yu

    2017-01-01

    Full Text Available This paper describes the implementation of an improved clutter suppression method for the multiple pulse repetition time (PRT) technique based on simulated radar data. The suppression method is constructed using maximum likelihood methodology in the time domain and is called the parametric time domain method (PTDM). The procedure relies on the assumption that precipitation and clutter signal spectra follow a Gaussian functional form. The multiple interleaved pulse repetition frequencies (PRFs) that are used in this work are set to four PRFs (952, 833, 667, and 513 Hz). Based on radar simulation, it is shown that the new method can provide accurate retrieval of Doppler velocity even in the case of strong clutter contamination. The obtained velocity is nearly unbiased over the entire Nyquist velocity interval. Also, the performance of the method is illustrated on simulated radar data for a plan position indicator (PPI) scan. Compared with staggered 2-PRT transmission schemes with PTDM, the proposed method presents better estimation accuracy under certain clutter situations.
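
    A minimal sketch of the Gaussian-spectrum assumption the method rests on: a Doppler spectrum modelled as a clutter peak at 0 m/s plus a shifted precipitation peak, recovered here by nonlinear least squares on synthetic data rather than the paper's time-domain maximum likelihood estimator. All numbers are illustrative.

        # Fit a two-Gaussian Doppler spectrum: ground clutter centred at 0 m/s
        # plus a shifted precipitation peak, per the PTDM spectral assumption.
        import numpy as np
        from scipy.optimize import curve_fit

        def spectrum(v, p_c, w_c, p_s, v_s, w_s):
            clutter = p_c * np.exp(-v**2 / (2 * w_c**2))          # clutter at 0 m/s
            precip = p_s * np.exp(-(v - v_s)**2 / (2 * w_s**2))   # weather signal
            return clutter + precip

        v = np.linspace(-15, 15, 301)
        truth = (5.0, 0.4, 1.0, 6.5, 1.8)
        noise = np.random.default_rng(1).gamma(20, 1 / 20, v.size)
        noisy = spectrum(v, *truth) * noise                       # multiplicative noise

        popt, _ = curve_fit(spectrum, v, noisy, p0=(4, 1, 1, 5, 2))
        print("estimated Doppler velocity:", popt[3])             # ~6.5 m/s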

  18. A field comparison of multiple techniques to quantify groundwater - surface-water interactions

    Science.gov (United States)

    González-Pinzón, Ricardo; Ward, Adam S; Hatch, Christine E; Wlostowski, Adam N; Singha, Kamini; Gooseff, Michael N.; Haggerty, Roy; Harvey, Judson; Cirpka, Olaf A; Brock, James T

    2015-01-01

    ... implementing multiple techniques through collaborative research.

  19. Imputing Variants in HLA-DR Beta Genes Reveals That HLA-DRB1 Is Solely Associated with Rheumatoid Arthritis and Systemic Lupus Erythematosus.

    Directory of Open Access Journals (Sweden)

    Kwangwoo Kim

    Full Text Available The genetic association of HLA-DRB1 with rheumatoid arthritis (RA) and systemic lupus erythematosus (SLE) is well documented, but association with other HLA-DR beta genes (HLA-DRB3, HLA-DRB4 and HLA-DRB5) has not been thoroughly studied, despite their similar functions and chromosomal positions. We examined variants in all functional HLA-DR beta genes in RA and SLE patients and controls, down to the amino-acid level, to better understand disease association with the HLA-DR locus. To this end, we improved an existing HLA reference panel to impute variants in all protein-coding HLA-DR beta genes. Using the reference panel, HLA variants were inferred from high-density SNP data of 9,271 RA-control subjects and 5,342 SLE-control subjects. Disease association tests were performed by logistic regression and log-likelihood ratio tests. After imputation using the newly constructed HLA reference panel and statistical analysis, we observed that HLA-DRB1 variants better accounted for the association between MHC and susceptibility to RA and SLE than did the other three HLA-DRB variants. Moreover, there were no secondary effects in HLA-DRB3, HLA-DRB4, or HLA-DRB5 in RA or SLE. Of all the HLA-DR beta chain paralogs, those encoded by HLA-DRB1 solely or dominantly influence susceptibility to RA and SLE.

  20. Research on Innovating, Applying Multiple Paths Routing Technique Based on Fuzzy Logic and Genetic Algorithm for Routing Messages in Service - Oriented Routing

    Directory of Open Access Journals (Sweden)

    Nguyen Thanh Long

    2015-02-01

    Full Text Available MANET (short for Mobile Ad-Hoc Network) consists of a set of mobile network nodes, and the network configuration changes very fast. In content based routing, the transfer of data from the source node to requesting nodes is not based on destination addresses. Therefore, it is very flexible and reliable, because the source node does not need to know the destination nodes. If multiple paths satisfying the bandwidth requirement can be found, the original message is split into multiple smaller messages that are transmitted concurrently on these paths; at the destination nodes, the separated messages are combined into the original message. Hence the network resources are used more efficiently, the data transfer rate is higher, and load balancing and failover are provided. Service Oriented Routing is inherited from the model of content based routing (CBR), combined with several advanced techniques such as multicast, multiple path routing, and genetic algorithms to increase the data rate, and data encryption to ensure information security. Fuzzy logic is a field of logic that evaluates the accuracy of results based on approximations of the components involved, making decisions from many factors of relative accuracy grounded in experiment or mathematical proof. This article presents some techniques to support multiple path routing from one network node to a set of nodes with guaranteed quality of service. Using these techniques can decrease network load and congestion and use network resources efficiently.
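
    A toy sketch of the split-and-merge idea from the abstract: a message is divided into indexed chunks sent concurrently over k paths and reassembled at the destination. Path discovery, fuzzy scoring, genetic optimisation and encryption are out of scope here.

        # Round-robin split of a message across k paths and lossless reassembly.
        def split_message(message: bytes, k: int) -> list[tuple[int, bytes]]:
            """Divide the message into k indexed chunks, one per path."""
            return [(i, message[i::k]) for i in range(k)]

        def merge_chunks(chunks: list[tuple[int, bytes]]) -> bytes:
            """Rebuild the original message from (path index, chunk) pairs."""
            chunks = sorted(chunks)
            out = bytearray()
            for j in range(max(len(c) for _, c in chunks)):
                for _, c in chunks:
                    if j < len(c):
                        out.append(c[j])
            return bytes(out)

        msg = b"service-oriented routing over MANET"
        assert merge_chunks(split_message(msg, 3)) == msg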

  1. Multiple-Choice Testing Using Immediate Feedback-Assessment Technique (IF-AT®) Forms: Second-Chance Guessing vs. Second-Chance Learning?

    Science.gov (United States)

    Merrel, Jeremy D.; Cirillo, Pier F.; Schwartz, Pauline M.; Webb, Jeffrey A.

    2015-01-01

    Multiple choice testing is a common but often ineffective method for evaluating learning. A newer approach, however, using Immediate Feedback-Assessment Technique (IF-AT®, Epstein Educational Enterprise, Inc.) forms, offers several advantages. In particular, a student learns immediately if his or her answer is correct and, in the case of an…

  2. A Technique for Estimating Intensity of Emotional Expressions and Speaking Styles in Speech Based on Multiple-Regression HSMM

    Science.gov (United States)

    Nose, Takashi; Kobayashi, Takao

    In this paper, we propose a technique for estimating the degree or intensity of emotional expressions and speaking styles appearing in speech. The key idea is based on a style control technique for speech synthesis using a multiple regression hidden semi-Markov model (MRHSMM), and the proposed technique can be viewed as the inverse of the style control. In the proposed technique, the acoustic features of spectrum, power, fundamental frequency, and duration are simultaneously modeled using the MRHSMM. We derive an algorithm for estimating explanatory variables of the MRHSMM, each of which represents the degree or intensity of emotional expressions and speaking styles appearing in acoustic features of speech, based on a maximum likelihood criterion. We show experimental results to demonstrate the ability of the proposed technique using two types of speech data, simulated emotional speech and spontaneous speech with different speaking styles. It is found that the estimated values have correlation with human perception.

  3. Application of Soft Computing Techniques and Multiple Regression Models for CBR prediction of Soils

    Directory of Open Access Journals (Sweden)

    Fatimah Khaleel Ibrahim

    2017-08-01

    Full Text Available Soft computing techniques such as the Artificial Neural Network (ANN) have improved predicting capability and have found application in geotechnical engineering. The aim of this research is to utilize soft computing techniques and Multiple Regression Models (MLR) for forecasting the California Bearing Ratio (CBR) of soil from its index properties. The CBR of a soil can be predicted from various soil characterization parameters with the assistance of MLR and ANN methods. The database was collected in the laboratory by conducting tests on 86 soil samples gathered from different projects in the Basrah districts. Data gained from the experimental results were used in the regression models and in the soft computing technique using an artificial neural network. The liquid limit, plasticity index, modified compaction and CBR tests were determined. In this work, different ANN and MLR models were formulated with different collections of inputs in order to recognize their significance in the prediction of CBR. The strengths of the developed models were examined in terms of the regression coefficient (R2), relative error (RE%) and mean square error (MSE) values. From the results of this paper, it was noticed that all the proposed ANN models perform better than the MLR model. A specific ANN model with all input parameters reveals better outcomes than the other ANN models.
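
    The comparison described above can be prototyped in a few lines; the sketch below fits a multiple linear regression and a small neural network to synthetic stand-in data (the study used 86 laboratory-tested samples) and reports R2 and MSE for each. Variable names and coefficients are hypothetical.

        # MLR versus a small ANN predicting CBR from index properties.
        import numpy as np
        from sklearn.linear_model import LinearRegression
        from sklearn.neural_network import MLPRegressor
        from sklearn.model_selection import train_test_split
        from sklearn.metrics import r2_score, mean_squared_error

        rng = np.random.default_rng(0)
        n = 86
        # columns: liquid limit, plasticity index, max dry density (stand-ins)
        X = rng.uniform([20, 5, 1.6], [60, 30, 2.2], size=(n, 3))
        cbr = 2 + 0.1 * X[:, 2] ** 4 - 0.05 * X[:, 1] + rng.normal(0, 0.3, n)

        X_tr, X_te, y_tr, y_te = train_test_split(X, cbr, random_state=0)
        for name, model in [("MLR", LinearRegression()),
                            ("ANN", MLPRegressor(hidden_layer_sizes=(8,),
                                                 max_iter=5000, random_state=0))]:
            y_hat = model.fit(X_tr, y_tr).predict(X_te)
            print(name, "R2=%.2f" % r2_score(y_te, y_hat),
                  "MSE=%.3f" % mean_squared_error(y_te, y_hat))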

  4. An imputation/copula-based stochastic individual tree growth model for mixed species Acadian forests: a case study using the Nova Scotia permanent sample plot network

    Directory of Open Access Journals (Sweden)

    John A. KershawJr

    2017-09-01

    Full Text Available Background A novel approach to modelling individual tree growth dynamics is proposed. The approach combines multiple imputation and copula sampling to produce a stochastic individual tree growth and yield projection system. Methods The Nova Scotia, Canada permanent sample plot network is used as a case study to develop and test the modelling approach. Predictions from this model are compared to predictions from the Acadian variant of the Forest Vegetation Simulator, a widely used statistical individual tree growth and yield model. Results Diameter and height growth rates were predicted with error rates consistent with those produced using statistical models. Mortality and ingrowth error rates were higher than those observed for diameter and height, but also were within the bounds produced by traditional approaches for predicting these rates. Ingrowth species composition was very poorly predicted. The model was capable of reproducing a wide range of stand dynamic trajectories and in some cases reproduced trajectories that the statistical model was incapable of reproducing. Conclusions The model has potential to be used as a benchmarking tool for evaluating statistical and process models and may provide a mechanism to separate signal from noise and improve our ability to analyze and learn from large regional datasets that often have underlying flaws in sample design.
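
    A minimal sketch of the copula-sampling ingredient: correlated growth increments are drawn by sampling a Gaussian copula and mapping each margin back through the empirical quantiles of observed data. The correlation value and margins below are hypothetical; the paper couples this step with imputation from measured tree records.

        # Gaussian-copula sampling of correlated growth increments.
        import numpy as np
        from scipy.stats import norm

        rng = np.random.default_rng(42)
        obs_dbh_growth = rng.gamma(2.0, 0.15, 500)        # stand-in observed margins
        obs_ht_growth = 0.5 * obs_dbh_growth + rng.normal(0, 0.05, 500)

        rho = 0.7                                         # assumed dependence
        cov = [[1, rho], [rho, 1]]
        z = rng.multivariate_normal([0, 0], cov, size=1000)
        u = norm.cdf(z)                                   # uniforms with Gaussian copula

        # empirical inverse-CDF (quantile) transform for each margin
        dbh_new = np.quantile(obs_dbh_growth, u[:, 0])
        ht_new = np.quantile(obs_ht_growth, u[:, 1])
        print(np.corrcoef(dbh_new, ht_new)[0, 1])         # dependence is preserved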

  5. Discovery and fine-mapping of adiposity loci using high density imputation of genome-wide association studies in individuals of African ancestry: African Ancestry Anthropometry Genetics Consortium.

    Science.gov (United States)

    Ng, Maggie C Y; Graff, Mariaelisa; Lu, Yingchang; Justice, Anne E; Mudgal, Poorva; Liu, Ching-Ti; Young, Kristin; Yanek, Lisa R; Feitosa, Mary F; Wojczynski, Mary K; Rand, Kristin; Brody, Jennifer A; Cade, Brian E; Dimitrov, Latchezar; Duan, Qing; Guo, Xiuqing; Lange, Leslie A; Nalls, Michael A; Okut, Hayrettin; Tajuddin, Salman M; Tayo, Bamidele O; Vedantam, Sailaja; Bradfield, Jonathan P; Chen, Guanjie; Chen, Wei-Min; Chesi, Alessandra; Irvin, Marguerite R; Padhukasahasram, Badri; Smith, Jennifer A; Zheng, Wei; Allison, Matthew A; Ambrosone, Christine B; Bandera, Elisa V; Bartz, Traci M; Berndt, Sonja I; Bernstein, Leslie; Blot, William J; Bottinger, Erwin P; Carpten, John; Chanock, Stephen J; Chen, Yii-Der Ida; Conti, David V; Cooper, Richard S; Fornage, Myriam; Freedman, Barry I; Garcia, Melissa; Goodman, Phyllis J; Hsu, Yu-Han H; Hu, Jennifer; Huff, Chad D; Ingles, Sue A; John, Esther M; Kittles, Rick; Klein, Eric; Li, Jin; McKnight, Barbara; Nayak, Uma; Nemesure, Barbara; Ogunniyi, Adesola; Olshan, Andrew; Press, Michael F; Rohde, Rebecca; Rybicki, Benjamin A; Salako, Babatunde; Sanderson, Maureen; Shao, Yaming; Siscovick, David S; Stanford, Janet L; Stevens, Victoria L; Stram, Alex; Strom, Sara S; Vaidya, Dhananjay; Witte, John S; Yao, Jie; Zhu, Xiaofeng; Ziegler, Regina G; Zonderman, Alan B; Adeyemo, Adebowale; Ambs, Stefan; Cushman, Mary; Faul, Jessica D; Hakonarson, Hakon; Levin, Albert M; Nathanson, Katherine L; Ware, Erin B; Weir, David R; Zhao, Wei; Zhi, Degui; Arnett, Donna K; Grant, Struan F A; Kardia, Sharon L R; Oloapde, Olufunmilayo I; Rao, D C; Rotimi, Charles N; Sale, Michele M; Williams, L Keoki; Zemel, Babette S; Becker, Diane M; Borecki, Ingrid B; Evans, Michele K; Harris, Tamara B; Hirschhorn, Joel N; Li, Yun; Patel, Sanjay R; Psaty, Bruce M; Rotter, Jerome I; Wilson, James G; Bowden, Donald W; Cupples, L Adrienne; Haiman, Christopher A; Loos, Ruth J F; North, Kari E

    2017-04-01

    Genome-wide association studies (GWAS) have identified >300 loci associated with measures of adiposity including body mass index (BMI) and waist-to-hip ratio (adjusted for BMI, WHRadjBMI), but few have been identified through screening of the African ancestry genomes. We performed large scale meta-analyses and replications in up to 52,895 individuals for BMI and up to 23,095 individuals for WHRadjBMI from the African Ancestry Anthropometry Genetics Consortium (AAAGC) using 1000 Genomes phase 1 imputed GWAS to improve coverage of both common and low frequency variants in the low linkage disequilibrium African ancestry genomes. In the sex-combined analyses, we identified one novel locus (TCF7L2/HABP2) for WHRadjBMI and eight previously established loci at P < 5 × 10^-8 in African ancestry individuals. An additional novel locus (SPRYD7/DLEU2) was identified for WHRadjBMI when combined with European GWAS. In the sex-stratified analyses, we identified three novel loci for BMI (INTS10/LPL and MLC1 in men, IRX4/IRX2 in women) and four for WHRadjBMI (SSX2IP, CASC8, PDE3B and ZDHHC1/HSD11B2 in women) in individuals of African ancestry or both African and European ancestry. For four of the novel variants, the minor allele frequency was low (<5%). In the African ancestry sex-combined and sex-stratified analyses, 26 BMI loci and 17 WHRadjBMI loci contained ≤ 20 variants in the credible sets that jointly account for 99% posterior probability of driving the associations. The lead variants in 13 of these loci had a high probability of being causal. As compared to our previous HapMap imputed GWAS for BMI and WHRadjBMI including up to 71,412 and 27,350 African ancestry individuals, respectively, our results suggest that 1000 Genomes imputation showed modest improvement in identifying GWAS loci including low frequency variants. Trans-ethnic meta-analyses further improved fine mapping of putative causal variants in loci shared between the African and European ancestry populations.

  6. RIDDLE: Race and ethnicity Imputation from Disease history with Deep LEarning.

    Directory of Open Access Journals (Sweden)

    Ji-Sung Kim

    2018-04-01

    Full Text Available Anonymized electronic medical records are an increasingly popular source of research data. However, these datasets often lack race and ethnicity information. This creates problems for researchers modeling human disease, as race and ethnicity are powerful confounders for many health exposures and treatment outcomes; race and ethnicity are closely linked to population-specific genetic variation. We showed that deep neural networks generate more accurate estimates for missing racial and ethnic information than competing methods (e.g., logistic regression, random forest, support vector machines, and gradient-boosted decision trees). RIDDLE yielded significantly better classification performance across all metrics that were considered: accuracy, cross-entropy loss (error), precision, recall, and area under the curve for receiver operating characteristic plots (all p < 10^-9). We made specific efforts to interpret the trained neural network models to identify, quantify, and visualize medical features which are predictive of race and ethnicity. We used these characterizations of informative features to perform a systematic comparison of differential disease patterns by race and ethnicity. The fact that clinical histories are informative for imputing race and ethnicity could reflect (1) a skewed distribution of blue- and white-collar professions across racial and ethnic groups, (2) uneven accessibility and subjective importance of prophylactic health, (3) possible variation in lifestyle, such as dietary habits, and (4) differences in background genetic variation which predispose to diseases.

  7. RIDDLE: Race and ethnicity Imputation from Disease history with Deep LEarning

    KAUST Repository

    Kim, Ji-Sung

    2018-04-26

    Anonymized electronic medical records are an increasingly popular source of research data. However, these datasets often lack race and ethnicity information. This creates problems for researchers modeling human disease, as race and ethnicity are powerful confounders for many health exposures and treatment outcomes; race and ethnicity are closely linked to population-specific genetic variation. We showed that deep neural networks generate more accurate estimates for missing racial and ethnic information than competing methods (e.g., logistic regression, random forest, support vector machines, and gradient-boosted decision trees). RIDDLE yielded significantly better classification performance across all metrics that were considered: accuracy, cross-entropy loss (error), precision, recall, and area under the curve for receiver operating characteristic plots (all p < 10^-9). We made specific efforts to interpret the trained neural network models to identify, quantify, and visualize medical features which are predictive of race and ethnicity. We used these characterizations of informative features to perform a systematic comparison of differential disease patterns by race and ethnicity. The fact that clinical histories are informative for imputing race and ethnicity could reflect (1) a skewed distribution of blue- and white-collar professions across racial and ethnic groups, (2) uneven accessibility and subjective importance of prophylactic health, (3) possible variation in lifestyle, such as dietary habits, and (4) differences in background genetic variation which predispose to diseases.
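
    A scaled-down sketch of the underlying idea, with a small scikit-learn network standing in for the paper's deep model: train a classifier to impute a categorical attribute from binary disease-history features and score it with the accuracy and cross-entropy metrics named above. Data and dimensions are synthetic.

        # Impute a categorical attribute from binary history features with an MLP.
        import numpy as np
        from sklearn.neural_network import MLPClassifier
        from sklearn.model_selection import train_test_split
        from sklearn.metrics import accuracy_score, log_loss

        rng = np.random.default_rng(7)
        n, n_codes, n_groups = 5000, 40, 4
        group = rng.integers(n_groups, size=n)            # attribute to impute
        base = rng.random((n_groups, n_codes)) * 0.3      # group-specific code rates
        X = (rng.random((n, n_codes)) < base[group]).astype(float)

        X_tr, X_te, y_tr, y_te = train_test_split(X, group, random_state=0)
        clf = MLPClassifier(hidden_layer_sizes=(64, 64), max_iter=500,
                            random_state=0).fit(X_tr, y_tr)
        proba = clf.predict_proba(X_te)
        print("accuracy:", accuracy_score(y_te, clf.predict(X_te)))
        print("cross-entropy:", log_loss(y_te, proba))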

  8. Incomplete Data in Smart Grid: Treatment of Values in Electric Vehicle Charging Data

    Energy Technology Data Exchange (ETDEWEB)

    Majidpour, Mostafa; Chu, Peter; Gadh, Rajit; Pota, Hemanshu R.

    2014-11-03

    In this paper, five imputation methods, namely Constant (zero), Mean, Median, Maximum Likelihood, and Multiple Imputation, have been applied to compensate for missing values in Electric Vehicle (EV) charging data. The outcome of each of these methods has been used as the input to a prediction algorithm to forecast the EV load in the next 24 hours at each individual outlet. The data is real world data at the outlet level from the UCLA campus parking lots. Given the sparsity of the data, both Median and Constant (=zero) imputations improved the prediction results. Since in most missing value cases in our database all values of that instance are missing, the multivariate imputation methods did not improve the results significantly compared to univariate approaches.
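
    The univariate fill-in rules compared in the paper are easy to reproduce on a synthetic, mostly-zero charging series; the sketch below knocks out values at random and scores constant (zero), mean and median imputation by RMSE against the ground truth. Data values are stand-ins, not UCLA measurements.

        # Compare zero, mean and median imputation on a sparse hourly load series.
        import numpy as np
        import pandas as pd

        rng = np.random.default_rng(3)
        load = pd.Series(np.where(rng.random(24 * 60) < 0.85, 0.0,
                                  rng.uniform(1, 7, 24 * 60)))   # sparse kWh series
        truth = load.copy()
        load[rng.choice(load.index, 300, replace=False)] = np.nan  # knock out values

        for name, filled in [("zero", load.fillna(0.0)),
                             ("mean", load.fillna(load.mean())),
                             ("median", load.fillna(load.median()))]:
            rmse = np.sqrt(((filled - truth) ** 2).mean())
            print(f"{name:>6}: RMSE={rmse:.3f}")
        # With mostly-zero demand, the zero and median fills tend to beat the
        # mean fill, consistent with the paper's observation for sparse EV data.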

  9. Disclosure control using partially synthetic data for large-scale health surveys, with applications to CanCORS.

    Science.gov (United States)

    Loong, Bronwyn; Zaslavsky, Alan M; He, Yulei; Harrington, David P

    2013-10-30

    Statistical agencies have begun to partially synthesize public-use data for major surveys to protect the confidentiality of respondents' identities and sensitive attributes by replacing high disclosure risk and sensitive variables with multiple imputations. To date, there are few applications of synthetic data techniques to large-scale healthcare survey data. Here, we describe partial synthesis of survey data collected by the Cancer Care Outcomes Research and Surveillance (CanCORS) project, a comprehensive observational study of the experiences, treatments, and outcomes of patients with lung or colorectal cancer in the USA. We review inferential methods for partially synthetic data and discuss selection of high disclosure risk variables for synthesis, specification of imputation models, and identification disclosure risk assessment. We evaluate data utility by replicating published analyses and comparing results using original and synthetic data, and we discuss practical issues in preserving inferential conclusions. We found that important subgroup relationships must be included in the synthetic data imputation model to preserve the data utility of the observed data for a given analysis procedure. We conclude that synthetic CanCORS data are best suited for preliminary data analysis purposes. These methods address the requirement to share data in clinical research without compromising confidentiality. Copyright © 2013 John Wiley & Sons, Ltd.

  10. Geomorphological and Geoelectric Techniques for Kwoi's Multiple Tremor Assessment

    Science.gov (United States)

    Dikedi, P. N.

    2017-12-01

    This work centres on geomorphological and geoelectric techniques for multiple tremor assessment in Kwoi, Nigeria. Earth tremor occurrences have been noted by Akpan and Yakubu (2010) within the last 70 years in nine regions in Nigeria; on September 11, 12, 20, 22, 23 and 24, 2016, additional earth tremors rocked the village of Kwoi eleven times. Houses cracked and collapsed, a rock split and slid, and smoke evolved at N9°27'5.909", E8°0'44.951", at an altitude of 798 m. By employing the Ohmega Meter and Schlumberger configuration, four VES points are sounded for subsurface structure characterisation. Thereafter, a cylindrical steel ring is hammered into the ground at the first point (VES 1) and earth samples are scooped from this location; this procedure is repeated for the other points (VES 2, 3 and 4). Winresist, Geo-earth, and Surfer version 12.0.626 software are employed to generate geo-sections, lithology, resistivity profiles, iso-resistivity and isopach maps of the region. The results reveal lithological formations of lateritic topsoil, fractured basement and fresh basement; additionally, they reveal fractured basement thicknesses of 206.6 m, 90.7 m, 73.2 m and 99.4 m at the four points. Scooped samples are transferred to the specimen stage of a Scanning Electron Microscope (SEM). SEM images show rounded inter-granular boundaries; the granular structures act like micro-wheels, making the upper crustal mass susceptible to movement at the slightest vibration. Collapsed buildings are sited around the VES 1 location; samples from VES 1 are the most fragmented owing to multiple microfractures, which explains why VES 1 has the thickest fractured basement. Abrupt frictional sliding occurs between networks of fault lines; there is a likelihood that friction is most intense at the rock slide site at N9°27'21.516" and E8°0'44.9993", at VES 1 at N9°27'5.819" and E8°5'3.1120", and at the smoke sites; holo-centres are suspected below these locations.

  11. Fusing Data Mining, Machine Learning and Traditional Statistics to Detect Biomarkers Associated with Depression.

    Science.gov (United States)

    Dipnall, Joanna F; Pasco, Julie A; Berk, Michael; Williams, Lana J; Dodd, Seetal; Jacka, Felice N; Meyer, Denny

    2016-01-01

    Atheoretical large-scale data mining techniques using machine learning algorithms have promise in the analysis of large epidemiological datasets. This study illustrates the use of a hybrid methodology for variable selection that took account of missing data and complex survey design to identify key biomarkers associated with depression from a large epidemiological study. The study used a three-step methodology amalgamating multiple imputation, a machine learning boosted regression algorithm and logistic regression, to identify key biomarkers associated with depression in the National Health and Nutrition Examination Study (2009-2010). Depression was measured using the Patient Health Questionnaire-9 and 67 biomarkers were analysed. Covariates in this study included gender, age, race, smoking, food security, Poverty Income Ratio, Body Mass Index, physical activity, alcohol use, medical conditions and medications. The final imputed weighted multiple logistic regression model included possible confounders and moderators. After the creation of 20 imputation data sets from multiple chained regression sequences, machine learning boosted regression initially identified 21 biomarkers associated with depression. Using traditional logistic regression methods, including controlling for possible confounders and moderators, a final set of three biomarkers was selected. The final three biomarkers from the novel hybrid variable selection methodology were red cell distribution width (OR 1.15; 95% CI 1.01, 1.30), serum glucose (OR 1.01; 95% CI 1.00, 1.01) and total bilirubin (OR 0.12; 95% CI 0.05, 0.28). Significant interactions were found between total bilirubin with the Mexican American/Hispanic group (p = 0.016) and current smokers (p<0.001). The systematic use of a hybrid methodology for variable selection, fusing data mining techniques using a machine learning algorithm with traditional statistical modelling, accounted for missing data and complex survey sampling methodology and was demonstrated to be a useful tool for detecting three biomarkers associated with depression for future hypothesis generation.

  12. Laboratory model study of newly deposited dredger fills using improved multiple-vacuum preloading technique

    Directory of Open Access Journals (Sweden)

    Jingjin Liu

    2017-10-01

    Full Text Available Problems continue to be encountered in the field with the traditional vacuum preloading method during the treatment of newly deposited dredger fills. In this paper, an improved multiple-vacuum preloading method was developed to consolidate newly deposited dredger fills that are hydraulically placed in seawater for land reclamation in the Lingang Industrial Zone of Tianjin City, China. With this multiple-vacuum preloading method, the newly deposited dredger fills could be treated effectively by adopting a novel moisture separator and a rapid improvement technique without a sand cushion. A series of model tests was conducted in the laboratory to compare the results from the multiple-vacuum preloading method and the traditional one. Ten piezometers and settlement plates were installed to measure the variations in excess pore water pressures and moisture content, and vane shear strength was measured at different positions. The testing results indicate that the water discharge-time curves obtained by the traditional vacuum preloading method can be divided into three phases: rapid growth phase, slow growth phase, and steady phase. Reflecting the process by which fluid flow concentrates along tiny ripples and builds larger channels inside the soil during vacuum loading, the fluctuations of pore water pressure during each loading step are divided into three phases: steady phase, rapid dissipation phase, and slow dissipation phase. An optimal loading pattern with the best treatment effect was proposed for calculating the water discharge and pore water pressure of soil using the improved multiple-vacuum preloading method. For the newly deposited dredger fills at the Lingang Industrial Zone of Tianjin City, the best loading step was 20 kPa and the loading of 40-50 kPa produced the highest drainage consolidation. The measured moisture content and vane shear strength were discussed in terms of the effect of reinforcement, both of which indicate the effectiveness of the improved method.

  13. The new technique of using the epigastric arteries in renal transplantation with multiple renal arteries

    Directory of Open Access Journals (Sweden)

    Mohammad Ali Amirzargar

    2013-01-01

    Full Text Available The most common anatomic variant seen in donor kidneys for renal transplantation is multiple renal arteries (MRA), which can cause an increased risk of complications. We describe the long-term outcomes of 16 years of experience in 76 kidney transplantations with MRAs. In a new reconstruction technique, we remove the arterial clamps after anastomosing the donor to the recipient's main renal vessels, which causes backflow from the accessory arteries and prevents thrombosis. With this technique, we reduce the ischemic times as well as the operating times. In both live and cadaver donor kidneys, lower polar arteries were anastomosed to the inferior epigastric artery and upper polar arteries were anastomosed to the superior epigastric arteries. Injection of papaverine and ablation of the sympathetic nerves of these arteries dilates them and prevents post-operative spasm. Follow-up DTPA renal scans in all patients showed good perfusion and function of the transplanted kidney, except for two cases of polar arterial thrombosis. Mean creatinine levels during at least two years of follow-up remained acceptable. Patient and graft survival were excellent. No cases of ATN, hypertension, rejection or urologic complications were found. In conclusion, this technique can be safely and successfully utilized for renal transplantation with kidneys having MRAs, and may be associated with a lower complication rate and better graft function compared with the existing techniques.

  14. Impact of airway gas exchange on the multiple inert gas elimination technique: theory.

    Science.gov (United States)

    Anderson, Joseph C; Hlastala, Michael P

    2010-03-01

    The multiple inert gas elimination technique (MIGET) provides a method for estimating alveolar gas exchange efficiency. Six soluble inert gases are infused into a peripheral vein. Measurements of these gases in breath, arterial blood, and venous blood are interpreted using a mathematical model of alveolar gas exchange (MIGET model) that neglects airway gas exchange. A mathematical model describing airway and alveolar gas exchange predicts that two of these gases, ether and acetone, exchange primarily within the airways. To determine the effect of airway gas exchange on the MIGET, we selected two additional gases, toluene and m-dichlorobenzene, that have the same blood solubility as ether and acetone and minimize airway gas exchange via their low water solubility. The airway-alveolar gas exchange model simulated the exchange of toluene, m-dichlorobenzene, and the six MIGET gases under multiple conditions of alveolar ventilation-to-perfusion, VA/Q, heterogeneity. We increased the importance of airway gas exchange by changing bronchial blood flow, Qbr. From these simulations, we calculated the excretion and retention of the eight inert gases and divided the results into two groups: (1) the standard MIGET gases which included acetone and ether and (2) the modified MIGET gases which included toluene and m-dichlorobenzene. The MIGET mathematical model predicted distributions of ventilation and perfusion for each grouping of gases and multiple perturbations of VA/Q and Qbr. Using the modified MIGET gases, MIGET predicted a smaller dead space fraction, greater mean VA, greater log(SDVA), and more closely matched the imposed VA distribution than that using the standard MIGET gases. Perfusion distributions were relatively unaffected.
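
    For reference, the classical alveolar relation that the MIGET model builds on gives the steady-state retention of an inert gas in a single compartment as R = lambda / (lambda + VA/Q), where lambda is the blood-gas partition coefficient. The sketch below evaluates perfusion-weighted retention over a three-compartment VA/Q distribution with illustrative solubilities; the airway exchange that the paper adds is deliberately absent from this classical model.

        # Perfusion-weighted inert-gas retention over a VA/Q distribution.
        import numpy as np

        def retention(lam, va_q):
            """Retention (arterial/venous partial-pressure ratio) of one compartment."""
            return lam / (lam + va_q)

        # illustrative blood-gas partition coefficients spanning the MIGET range
        gases = {"SF6": 0.005, "ethane": 0.1, "cyclopropane": 0.5,
                 "halothane": 2.4, "ether": 12.0, "acetone": 300.0}
        va_q = np.array([0.1, 1.0, 10.0])       # low, normal, high VA/Q regions
        q_frac = np.array([0.2, 0.6, 0.2])      # perfusion fractions summing to 1

        for name, lam in gases.items():
            r = np.sum(q_frac * retention(lam, va_q))   # mixed arterial retention
            print(f"{name:>12}: R = {r:.3f}")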

  15. What are the appropriate methods for analyzing patient-reported outcomes in randomized trials when data are missing?

    Science.gov (United States)

    Hamel, J F; Sebille, V; Le Neel, T; Kubis, G; Boyer, F C; Hardouin, J B

    2017-12-01

    Subjective health measurements using Patient Reported Outcomes (PRO) are increasingly used in randomized trials, particularly for patient group comparisons. Two main types of analytical strategies can be used for such data: Classical Test Theory (CTT) and Item Response Theory (IRT) models. These two strategies display very similar characteristics when data are complete, but in the common case when data are missing, whether IRT or CTT would be the most appropriate remains unknown and was investigated using simulations. We simulated PRO data such as quality of life data. Missing responses to items were simulated as being completely random, depending on an observable covariate, or depending on an unobserved latent trait. The considered CTT-based methods allowed comparing scores using complete-case analysis, personal mean imputations or multiple imputations based on a two-way procedure. The IRT-based method was the Wald test on a Rasch model including a group covariate. The IRT-based method and the multiple-imputation-based method for CTT displayed the highest observed power and were the only unbiased methods whatever the kind of missing data. Online software and Stata® modules compatible with the native mi impute suite are provided for performing such analyses. Traditional procedures (listwise deletion and personal mean imputations) should be avoided, due to inevitable problems of bias and lack of power.
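
    A sketch of the two-way procedure mentioned above as a basis for multiple imputation: each missing item is replaced by person mean + item mean - grand mean, plus residual noise to generate several completed datasets. The dimensions and missingness rate are hypothetical.

        # Two-way imputation of questionnaire items, repeated to form m datasets.
        import numpy as np

        rng = np.random.default_rng(5)
        scores = rng.integers(1, 6, size=(50, 10)).astype(float)  # 50 persons x 10 items
        scores[rng.random(scores.shape) < 0.15] = np.nan          # 15% missing items

        person_mean = np.nanmean(scores, axis=1, keepdims=True)
        item_mean = np.nanmean(scores, axis=0, keepdims=True)
        grand_mean = np.nanmean(scores)
        expected = person_mean + item_mean - grand_mean           # two-way estimate

        resid_sd = np.nanstd(scores - expected)
        completed = []
        for _ in range(5):                                        # 5 imputed datasets
            filled = scores.copy()
            miss = np.isnan(filled)
            filled[miss] = (expected + rng.normal(0, resid_sd, scores.shape))[miss]
            completed.append(filled)
        print(np.round(completed[0][:2], 2))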

  16. Genome of the Netherlands population-specific imputations identify an ABCA6 variant associated with cholesterol levels

    Science.gov (United States)

    van Leeuwen, Elisabeth M.; Karssen, Lennart C.; Deelen, Joris; Isaacs, Aaron; Medina-Gomez, Carolina; Mbarek, Hamdi; Kanterakis, Alexandros; Trompet, Stella; Postmus, Iris; Verweij, Niek; van Enckevort, David J.; Huffman, Jennifer E.; White, Charles C.; Feitosa, Mary F.; Bartz, Traci M.; Manichaikul, Ani; Joshi, Peter K.; Peloso, Gina M.; Deelen, Patrick; van Dijk, Freerk; Willemsen, Gonneke; de Geus, Eco J.; Milaneschi, Yuri; Penninx, Brenda W.J.H.; Francioli, Laurent C.; Menelaou, Androniki; Pulit, Sara L.; Rivadeneira, Fernando; Hofman, Albert; Oostra, Ben A.; Franco, Oscar H.; Leach, Irene Mateo; Beekman, Marian; de Craen, Anton J.M.; Uh, Hae-Won; Trochet, Holly; Hocking, Lynne J.; Porteous, David J.; Sattar, Naveed; Packard, Chris J.; Buckley, Brendan M.; Brody, Jennifer A.; Bis, Joshua C.; Rotter, Jerome I.; Mychaleckyj, Josyf C.; Campbell, Harry; Duan, Qing; Lange, Leslie A.; Wilson, James F.; Hayward, Caroline; Polasek, Ozren; Vitart, Veronique; Rudan, Igor; Wright, Alan F.; Rich, Stephen S.; Psaty, Bruce M.; Borecki, Ingrid B.; Kearney, Patricia M.; Stott, David J.; Adrienne Cupples, L.; Neerincx, Pieter B.T.; Elbers, Clara C.; Francesco Palamara, Pier; Pe'er, Itsik; Abdellaoui, Abdel; Kloosterman, Wigard P.; van Oven, Mannis; Vermaat, Martijn; Li, Mingkun; Laros, Jeroen F.J.; Stoneking, Mark; de Knijff, Peter; Kayser, Manfred; Veldink, Jan H.; van den Berg, Leonard H.; Byelas, Heorhiy; den Dunnen, Johan T.; Dijkstra, Martijn; Amin, Najaf; Joeri van der Velde, K.; van Setten, Jessica; Kattenberg, Mathijs; van Schaik, Barbera D.C.; Bot, Jan; Nijman, Isaäc J.; Mei, Hailiang; Koval, Vyacheslav; Ye, Kai; Lameijer, Eric-Wubbo; Moed, Matthijs H.; Hehir-Kwa, Jayne Y.; Handsaker, Robert E.; Sunyaev, Shamil R.; Sohail, Mashaal; Hormozdiari, Fereydoun; Marschall, Tobias; Schönhuth, Alexander; Guryev, Victor; Suchiman, H. Eka D.; Wolffenbuttel, Bruce H.; Platteel, Mathieu; Pitts, Steven J.; Potluri, Shobha; Cox, David R.; Li, Qibin; Li, Yingrui; Du, Yuanping; Chen, Ruoyan; Cao, Hongzhi; Li, Ning; Cao, Sujie; Wang, Jun; Bovenberg, Jasper A.; Jukema, J. Wouter; van der Harst, Pim; Sijbrands, Eric J.; Hottenga, Jouke-Jan; Uitterlinden, Andre G.; Swertz, Morris A.; van Ommen, Gert-Jan B.; de Bakker, Paul I.W.; Eline Slagboom, P.; Boomsma, Dorret I.; Wijmenga, Cisca; van Duijn, Cornelia M.

    2015-01-01

    Variants associated with blood lipid levels may be population-specific. To identify low-frequency variants associated with this phenotype, population-specific reference panels may be used. Here we impute nine large Dutch biobanks (~35,000 samples) with the population-specific reference panel created by the Genome of the Netherlands Project and perform association testing with blood lipid levels. We report the discovery of five novel associations at four loci (P value <6.61 × 10−4), including a rare missense variant in ABCA6 (rs77542162, p.Cys1359Arg, frequency 0.034), which is predicted to be deleterious. The frequency of this ABCA6 variant is 3.65-fold increased in the Dutch and its effect (βLDL-C=0.135, βTC=0.140) is estimated to be very similar to those observed for single variants in well-known lipid genes, such as LDLR. PMID:25751400

  17. Variable weight Khazani-Syed code using hybrid fixed-dynamic technique for optical code division multiple access system

    Science.gov (United States)

    Anas, Siti Barirah Ahmad; Seyedzadeh, Saleh; Mokhtar, Makhfudzah; Sahbudin, Ratna Kalos Zakiah

    2016-10-01

    Future Internet consists of a wide spectrum of applications with different bit rates and quality of service (QoS) requirements. Prioritizing the services is essential to ensure that the delivery of information is at its best. Existing technologies have demonstrated how service differentiation techniques can be implemented in optical networks using data link and network layer operations. However, a physical layer approach can further improve system performance at a prescribed received signal quality by applying control at the bit level. This paper proposes a coding algorithm to support optical domain service differentiation using spectral amplitude coding techniques within an optical code division multiple access (OCDMA) scenario. A particular user or service has a varying weight applied to obtain the desired signal quality. The properties of the new code are compared with other OCDMA codes proposed for service differentiation. In addition, a mathematical model is developed for performance evaluation of the proposed code using two different detection techniques, namely direct decoding and complementary subtraction.

  18. Improving Conductivity Image Quality Using Block Matrix-based Multiple Regularization (BMMR) Technique in EIT: A Simulation Study

    Directory of Open Access Journals (Sweden)

    Tushar Kanti Bera

    2011-06-01

    Full Text Available A Block Matrix based Multiple Regularization (BMMR) technique is proposed for improving conductivity image quality in EIT. The response matrix (JTJ) has been partitioned into several sub-block matrices and the highest eigenvalue of each sub-block matrix has been chosen as the regularization parameter for the nodes contained by that sub-block. Simulated boundary data are generated for a circular domain with circular inhomogeneity and the conductivity images are reconstructed with a Model Based Iterative Image Reconstruction (MoBIIR) algorithm. Conductivity images are reconstructed with the BMMR technique and the results are compared with the Single-step Tikhonov Regularization (STR) and modified Levenberg-Marquardt Regularization (LMR) methods. It is observed that the BMMR technique reduces the projection error and solution error and improves the conductivity reconstruction in EIT. Results show that the BMMR method also improves the image contrast and inhomogeneity conductivity profile, and hence the reconstructed image quality is enhanced. doi:10.5617/jeb.170 J Electr Bioimp, vol. 2, pp. 33-47, 2011
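
    The block-wise regularization idea can be sketched directly: partition JTJ into sub-blocks, take each sub-block's largest eigenvalue as the local regularization parameter, and solve the damped normal equations. A random matrix stands in for the EIT sensitivity (Jacobian) below; this is not the paper's forward model.

        # One damped Gauss-Newton step with per-block regularization parameters.
        import numpy as np

        rng = np.random.default_rng(2)
        J = rng.normal(size=(200, 64))          # measurements x node parameters
        r = rng.normal(size=200)                # residual (measured - modelled data)
        JTJ = J.T @ J

        block = 16                              # nodes per sub-block
        lam = np.empty(JTJ.shape[0])
        for s in range(0, JTJ.shape[0], block):
            sub = JTJ[s:s + block, s:s + block]
            lam[s:s + block] = np.linalg.eigvalsh(sub).max()   # per-block parameter

        delta = np.linalg.solve(JTJ + np.diag(lam), J.T @ r)   # one update step
        print(delta[:4])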

  19. Fine-mapping identifies multiple prostate cancer risk loci at 5p15, one of which associates with TERT expression

    Science.gov (United States)

    Kote-Jarai, Zsofia; Saunders, Edward J.; Leongamornlert, Daniel A.; Tymrakiewicz, Malgorzata; Dadaev, Tokhir; Jugurnauth-Little, Sarah; Ross-Adams, Helen; Al Olama, Ali Amin; Benlloch, Sara; Halim, Silvia; Russel, Roslin; Dunning, Alison M.; Luccarini, Craig; Dennis, Joe; Neal, David E.; Hamdy, Freddie C.; Donovan, Jenny L.; Muir, Ken; Giles, Graham G.; Severi, Gianluca; Wiklund, Fredrik; Gronberg, Henrik; Haiman, Christopher A.; Schumacher, Fredrick; Henderson, Brian E.; Le Marchand, Loic; Lindstrom, Sara; Kraft, Peter; Hunter, David J.; Gapstur, Susan; Chanock, Stephen; Berndt, Sonja I.; Albanes, Demetrius; Andriole, Gerald; Schleutker, Johanna; Weischer, Maren; Canzian, Federico; Riboli, Elio; Key, Tim J.; Travis, Ruth C.; Campa, Daniele; Ingles, Sue A.; John, Esther M.; Hayes, Richard B.; Pharoah, Paul; Khaw, Kay-Tee; Stanford, Janet L.; Ostrander, Elaine A.; Signorello, Lisa B.; Thibodeau, Stephen N.; Schaid, Dan; Maier, Christiane; Vogel, Walther; Kibel, Adam S.; Cybulski, Cezary; Lubinski, Jan; Cannon-Albright, Lisa; Brenner, Hermann; Park, Jong Y.; Kaneva, Radka; Batra, Jyotsna; Spurdle, Amanda; Clements, Judith A.; Teixeira, Manuel R.; Govindasami, Koveela; Guy, Michelle; Wilkinson, Rosemary A.; Sawyer, Emma J.; Morgan, Angela; Dicks, Ed; Baynes, Caroline; Conroy, Don; Bojesen, Stig E.; Kaaks, Rudolf; Vincent, Daniel; Bacot, François; Tessier, Daniel C.; Easton, Douglas F.; Eeles, Rosalind A.

    2013-01-01

    Associations between single nucleotide polymorphisms (SNPs) at 5p15 and multiple cancer types have been reported. We have previously shown evidence for a strong association between prostate cancer (PrCa) risk and rs2242652 at 5p15, intronic in the telomerase reverse transcriptase (TERT) gene that encodes TERT. To comprehensively evaluate the association between genetic variation across this region and PrCa, we performed a fine-mapping analysis by genotyping 134 SNPs using a custom Illumina iSelect array or Sequenom MassArray iPlex, followed by imputation of 1094 SNPs in 22 301 PrCa cases and 22 320 controls in The PRACTICAL consortium. Multiple stepwise logistic regression analysis identified four signals in the promoter or intronic regions of TERT that independently associated with PrCa risk. Gene expression analysis of normal prostate tissue showed evidence that SNPs within one of these regions also associated with TERT expression, providing a potential mechanism for predisposition to disease. PMID:23535824

  20. Reconstruction of elongated bubbles fusing the information from multiple optical probes through a Bayesian inference technique

    Energy Technology Data Exchange (ETDEWEB)

    Chakraborty, Shubhankar; Das, Prasanta Kr., E-mail: pkd@mech.iitkgp.ernet.in [Department of Mechanical Engineering, Indian Institute of Technology Kharagpur, Kharagpur 721302 (India); Roy Chaudhuri, Partha [Department of Physics, Indian Institute of Technology Kharagpur, Kharagpur 721302 (India)

    2016-07-15

    In this communication, a novel optical technique has been proposed for the reconstruction of the shape of a Taylor bubble using measurements from multiple arrays of optical sensors. The deviation of an optical beam passing through the bubble depends on the contour of bubble surface. A theoretical model of the deviation of a beam during the traverse of a Taylor bubble through it has been developed. Using this model and the time history of the deviation captured by the sensor array, the bubble shape has been reconstructed. The reconstruction has been performed using an inverse algorithm based on Bayesian inference technique and Markov chain Monte Carlo sampling algorithm. The reconstructed nose shape has been compared with the true shape, extracted through image processing of high speed images. Finally, an error analysis has been performed to pinpoint the sources of the errors.

  1. Probabilistic images (PBIS): A concise image representation technique for multiple parameters

    International Nuclear Information System (INIS)

    Wu, L.C.; Yeh, S.H.; Chen, Z.; Liu, R.S.

    1984-01-01

    Based on m parametric images (PIs) derived from a dynamic series (DS), each pixel of the DS is regarded as an m-dimensional vector. Given one set of normal samples (pixels) N and another of abnormal samples A, the probability density functions (pdfs) of both sets are estimated. Any unknown sample is classified into N or A by calculating the probability of its being in the abnormal set using Bayes' theorem. Instead of estimating the multivariate pdfs, a distance ratio transformation is introduced to map the m-dimensional sample space to one-dimensional Euclidean space. Consequently, the image that localizes the regional abnormalities is characterized by the probability of being abnormal. This leads to the new representation scheme of PBIs. A Tc-99m HIDA study for detecting intrahepatic lithiasis (IL) was chosen as an example of constructing a PBI from 3 parameters derived from the DS, and such a PBI was compared with those 3 PIs, namely the retention ratio image (RRI), peak time image (TMAX) and excretion mean transit time image (EMTT). 32 normal subjects and 20 patients with proven IL were collected and analyzed. The resultant sensitivity and specificity of PBI were 97% and 98% respectively. They were superior to those of any of the 3 PIs: RRI (94/97), TMAX (86/88) and EMTT (94/97). Furthermore, the contrast of PBI was much better than that of any other image. This new image formation technique, based on multiple parameters, shows the functional abnormalities in a structural way. Its good contrast makes the interpretation easy. This technique is powerful compared to the existing parametric image methods.
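
    A compact sketch of the classification step: after the distance ratio transformation reduces each pixel to one dimension, class densities estimated from normal and abnormal training pixels yield the posterior probability of abnormality via Bayes' theorem. The kernel density estimates and prior below are assumptions, not the paper's exact estimator.

        # Posterior probability of abnormality for 1-D transformed pixels.
        import numpy as np
        from scipy.stats import gaussian_kde

        rng = np.random.default_rng(11)
        normal_train = rng.normal(0.0, 1.0, 400)      # transformed normal pixels
        abnormal_train = rng.normal(2.5, 1.2, 400)    # transformed abnormal pixels

        pdf_n = gaussian_kde(normal_train)
        pdf_a = gaussian_kde(abnormal_train)
        prior_a = 0.5                                 # assumed prior of abnormality

        def p_abnormal(x):
            """Posterior P(abnormal | x) via Bayes' theorem on the 1-D feature."""
            like_a = pdf_a(x) * prior_a
            like_n = pdf_n(x) * (1 - prior_a)
            return like_a / (like_a + like_n)

        pixels = np.array([-1.0, 1.0, 3.0])           # transformed test pixels
        print(np.round(p_abnormal(pixels), 3))        # maps directly to a PBI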

  2. Neutron Multiplicity Analysis

    Energy Technology Data Exchange (ETDEWEB)

    Frame, Katherine Chiyoko [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)

    2017-06-28

    Neutron multiplicity measurements are widely used for nondestructive assay (NDA) of special nuclear material (SNM). When combined with isotopic composition information, neutron multiplicity analysis can be used to estimate the spontaneous fission rate and leakage multiplication of SNM. When combined with isotopic information, the total mass of fissile material can also be determined. This presentation provides an overview of this technique.

  3. Immunochip Analysis Identifies Multiple Susceptibility Loci for Systemic Sclerosis

    NARCIS (Netherlands)

    Mayes, Maureen D.; Bossini-Castillo, Lara; Gorlova, Olga; Martin, Jose Ezequiel; Zhou, Xiaodong; Chen, Wei V.; Assassi, Shervin; Ying, Jun; Tan, Filemon K.; Arnett, Frank C.; Reveille, John D.; Guerra, Sandra; Teruel, Maria; Carmona, Francisco David; Gregersen, Peter K.; Lee, Annette T.; Lopez-Isac, Elena; Ochoa, Eguzkine; Carreira, Patricia; Simeon, Carmen Pilar; Castellvi, Ivan; Angel Gonzalez-Gay, Miguel; Zhernakova, Alexandra; Padyukov, Leonid; Alarcon-Riquelme, Marta; Wijmenga, Cisca; Beretta, Lorenzo; Riemekasten, Gabriela; Witte, Torsten; Hunzelmann, Nicolas; Kreuter, Alexander; Distler, Jorg H. W.; Voskuyl, Alexandre E.; Schuerwegh, Annemie J.; Hesselstrand, Roger; Nordin, Annika; Airo, Paolo; Lunardi, Claudio; Shiels, Paul; van Laar, Jacob M.; Herrick, Ariane; Worthington, Jane; Denton, Christopher; Wigley, Fredrick M.; Hummers, Laura K.; Varga, John; Hinchcliff, Monique E.; Baron, Murray; Hudson, Marie; Pope, Janet E.

    2014-01-01

    In this study, 1,833 systemic sclerosis (SSc) cases and 3,466 controls were genotyped with the Immunochip array. Classical alleles, amino acid residues, and SNPs across the human leukocyte antigen (HLA) region were imputed and tested. These analyses resulted in a model composed of six polymorphic amino acid positions and seven SNPs that explained the observed significant associations in the region.

  4. Comparison of Imputation Methods for Handling Missing Categorical Data with Univariate Pattern

    Directory of Open Access Journals (Sweden)

    Torres Munguía, Juan Armando

    2014-06-01

    Full Text Available This paper examines sample proportion estimates in the presence of univariate missing categorical data. A database about smoking habits (2011 National Addiction Survey of Mexico) was used to create simulated yet realistic datasets at rates of 5% and 15% missingness, each for the MCAR, MAR and MNAR mechanisms. The performance of six methods for addressing missingness was then evaluated: listwise deletion, mode imputation, random imputation, hot-deck, imputation by polytomous regression and random forests. Results showed that the most effective methods for dealing with missing categorical data in most of the scenarios assessed in this paper were the hot-deck and polytomous regression approaches.
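
    A minimal sketch of random hot-deck imputation, one of the two best performers reported above: each missing value is replaced by the response of a randomly drawn donor from the same adjustment class. The age-group classes and rates below are hypothetical.

        # Random hot-deck imputation of a categorical variable within classes.
        import numpy as np
        import pandas as pd

        rng = np.random.default_rng(9)
        df = pd.DataFrame({
            "age_group": rng.choice(["18-29", "30-49", "50+"], 300),
            "smokes": rng.choice(["yes", "no"], 300, p=[0.25, 0.75]),
        })
        df.loc[rng.choice(df.index, 45, replace=False), "smokes"] = np.nan  # 15% missing

        def hot_deck(group: pd.Series) -> pd.Series:
            """Fill each missing value with a random donor from the same class."""
            group = group.copy()
            donors = group.dropna().to_numpy()      # assumes each class has donors
            n_miss = group.isna().sum()
            if n_miss:
                group.loc[group.isna()] = rng.choice(donors, size=n_miss)
            return group

        df["smokes"] = df.groupby("age_group")["smokes"].transform(hot_deck)
        print(df["smokes"].value_counts())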

  5. Comparison of multiple support excitation solution techniques for piping systems

    International Nuclear Information System (INIS)

    Sterkel, H.P.; Leimbach, K.R.

    1980-01-01

    Design and analysis of nuclear power plant piping systems exposed to a variety of dynamic loads often require multiple support excitation analysis by modal or direct time integration methods. Both methods have recently been implemented in the computer program KWUROHR for static and dynamic analysis of piping systems, following the previous implementation of the multiple support excitation response spectrum method (see papers K 6/15 and K 6/15a of the SMiRT-4 Conference). The results of multiple support excitation response spectrum analyses can be examined by carrying out the equivalent time history analyses which do not distort the time phase relationship between the excitations at different support points. A frequent point of discussion is multiple versus single support excitation. A single support excitation analysis is computationally straightforward and tends to be on the conservative side, as the numerical results show. A multiple support excitation analysis, however, does not incur much more additional computer cost than the expenditure for an initial static solution involving three times the number, L, of excitation levels, i.e. 3L static load cases. The results are more realistic than those from a single support excitation analysis. A number of typical nuclear plant piping systems have been analyzed using single and multiple support excitation algorithms for: (1) the response spectrum method, (2) the modal time history method via the Wilson, Newmark and Goldberg integration operators and (3) the direct time history method via the Wilson integration operator. Characteristic results are presented to compare the computational quality of all three methods. (orig.)

  6. Penalized regression procedures for variable selection in the potential outcomes framework.

    Science.gov (United States)

    Ghosh, Debashis; Zhu, Yeying; Coffman, Donna L

    2015-05-10

    A recent topic of much interest in causal inference is model selection. In this article, we describe a framework in which to consider penalized regression approaches to variable selection for causal effects. The framework leads to a simple 'impute, then select' class of procedures that is agnostic to the type of imputation algorithm as well as penalized regression used. It also clarifies how model selection involves a multivariate regression model for causal inference problems and that these methods can be applied for identifying subgroups in which treatment effects are homogeneous. Analogies and links with the literature on machine learning methods, missing data, and imputation are drawn. A difference least absolute shrinkage and selection operator algorithm is defined, along with its multiple imputation analogs. The procedures are illustrated using a well-known right-heart catheterization dataset. Copyright © 2015 John Wiley & Sons, Ltd.
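
    A minimal sketch of the 'impute, then select' idea under stated assumptions: lasso selection is run on each of m imputed datasets and variables are kept by majority vote. Both the imputation engine and the penalized learner are interchangeable, as the abstract notes; the data here are synthetic.

        # Impute m times, run lasso on each completed dataset, vote on variables.
        import numpy as np
        from sklearn.experimental import enable_iterative_imputer  # noqa: F401
        from sklearn.impute import IterativeImputer
        from sklearn.linear_model import LassoCV

        rng = np.random.default_rng(1)
        n, p = 300, 15
        X = rng.normal(size=(n, p))
        y = 2 * X[:, 0] - 1.5 * X[:, 4] + rng.normal(size=n)
        X[rng.random(X.shape) < 0.1] = np.nan        # 10% missing at random

        m = 5
        selected = np.zeros(p)
        for k in range(m):
            Xi = IterativeImputer(sample_posterior=True,
                                  random_state=k).fit_transform(X)
            coef = LassoCV(cv=5, random_state=k).fit(Xi, y).coef_
            selected += (np.abs(coef) > 1e-8)

        print("kept variables:", np.where(selected >= m / 2)[0])   # majority vote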

  7. A Time-Series Water Level Forecasting Model Based on Imputation and Variable Selection Method.

    Science.gov (United States)

    Yang, Jun-He; Cheng, Ching-Hsue; Chan, Chia-Pan

    2017-01-01

    Reservoirs are important for households and impact the national economy. This paper proposes a time-series forecasting model based on estimating missing values followed by variable selection to forecast the reservoir's water level. This study collected data from the Taiwan Shimen Reservoir as well as daily atmospheric data from 2008 to 2015. The two datasets are concatenated into an integrated dataset, based on the ordering of the data, as the research dataset. The proposed time-series forecasting model has three foci. First, this study uses five imputation methods to estimate missing values rather than directly deleting them. Second, we identified the key variables via factor analysis and then deleted the unimportant variables sequentially via the variable selection method. Finally, the proposed model uses a Random Forest to build the forecasting model of the reservoir's water level, for comparison with the listing method under the forecasting error. The experimental results indicate that the Random Forest forecasting model, when applied to variable selection with full variables, has better forecasting performance than the listing model. In addition, this experiment shows that the proposed variable selection can help the five forecast methods used here improve their forecasting capability.
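
    The three foci can be lined up as a small pipeline; the sketch below fills gaps with a simple median rule (one of several options), screens predictors, and fits a Random Forest on a synthetic series. All variable names and coefficients are hypothetical, not the Shimen Reservoir data.

        # Impute -> select key variables -> Random Forest water-level forecast.
        import numpy as np
        import pandas as pd
        from sklearn.feature_selection import SelectKBest, f_regression
        from sklearn.ensemble import RandomForestRegressor

        rng = np.random.default_rng(4)
        n = 2000
        df = pd.DataFrame({
            "rain": rng.gamma(1.5, 4.0, n),
            "inflow": rng.gamma(2.0, 3.0, n),
            "temp": rng.normal(22, 5, n),
            "noise": rng.normal(0, 1, n),
        })
        level = 240 + 0.08 * df["rain"] + 0.2 * df["inflow"] + rng.normal(0, 0.5, n)
        df.loc[rng.random(n) < 0.05, "rain"] = np.nan    # gaps in the record

        X = df.fillna(df.median())                       # step 1: simple imputation

        sel = SelectKBest(f_regression, k=2).fit(X, level)   # step 2: key variables
        X_key = sel.transform(X)
        print("selected:", X.columns[sel.get_support()].tolist())

        split = int(0.8 * n)                             # step 3: train on the past
        rf = RandomForestRegressor(n_estimators=200, random_state=0)
        rf.fit(X_key[:split], level[:split])
        print("test R2:", rf.score(X_key[split:], level[split:]))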

  8. Early cost estimating for road construction projects using multiple regression techniques

    Directory of Open Access Journals (Sweden)

    Ibrahim Mahamid

    2011-12-01

    Full Text Available The objective of this study is to develop early cost estimating models for road construction projects using multiple regression techniques, based on 131 sets of data collected in the West Bank in Palestine. As the cost estimates are required at early stages of a project, consideration was given to the fact that the input data for the required regression model should be easily extracted from sketches or the scope definition of the project. 11 regression models are developed to estimate the total cost of a road construction project in US dollars; 5 of them include bid quantities as input variables and 6 include road length and road width. The coefficient of determination r2 for the developed models ranges from 0.92 to 0.98, which indicates that the values predicted by the forecast models fit the real-life data well. The values of the mean absolute percentage error (MAPE) of the developed regression models range from 13% to 31%; these results compare favorably with past research, which has shown that estimate accuracy in the early stages of a project is between ±25% and ±50%.
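
    The sketch below shows the general form of such an early estimate: ordinary least squares on road length and width, scored by R² and MAPE. The data are synthetic stand-ins, not the 131 West Bank projects.

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.metrics import mean_absolute_percentage_error

rng = np.random.default_rng(2)
length_km = rng.uniform(1, 20, 131)
width_m = rng.uniform(4, 12, 131)
cost = 50_000 * length_km * width_m * rng.lognormal(0, 0.15, 131)

X = np.column_stack([length_km, width_m])
model = LinearRegression().fit(X, cost)
mape = mean_absolute_percentage_error(cost, model.predict(X))
print(f"R^2 = {model.score(X, cost):.2f}, MAPE = {mape:.0%}")
```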

  9. Ant colony optimisation-direct cover: a hybrid ant colony direct cover technique for multi-level synthesis of multiple-valued logic functions

    Science.gov (United States)

    Abd-El-Barr, Mostafa

    2010-12-01

    The use of non-binary (multiple-valued) logic in the synthesis of digital systems can lead to savings in chip area. Advances in very large scale integration (VLSI) technology have enabled the successful implementation of multiple-valued logic (MVL) circuits. A number of heuristic algorithms for the synthesis of (near) minimal sum-of-products (two-level) realisations of MVL functions have been reported in the literature. The direct cover (DC) technique is one such algorithm. The ant colony optimisation (ACO) algorithm is a meta-heuristic that uses constructive greediness to explore a large solution space in finding (near) optimal solutions. The ACO algorithm mimics the behaviour of real ants in using the shortest path to reach food sources. We have previously introduced an ACO-based heuristic for the synthesis of two-level MVL functions. In this article, we introduce the ACO-DC hybrid technique for the synthesis of multi-level MVL functions. The basic idea is to use an ant to decompose a given MVL function into a number of levels and then synthesise each sub-function using a DC-based technique. The results obtained using the proposed approach are compared to those obtained using existing techniques reported in the literature. A benchmark set consisting of 50,000 randomly generated 2-variable 4-valued functions is used in the comparison. The proposed ACO-DC technique is shown to produce efficient realisations in terms of the average number of gates (as a measure of chip area) needed for the synthesis of a given MVL function.

  10. Resolvent-Techniques for Multiple Exercise Problems

    International Nuclear Information System (INIS)

    Christensen, Sören; Lempa, Jukka

    2015-01-01

    We study optimal multiple stopping of strong Markov processes with random refraction periods. The refraction periods are assumed to be exponentially distributed with a common rate and independent of the underlying dynamics. Our main tool is the resolvent operator. In the first part, we reduce infinite stopping problems to ordinary ones in a general strong Markov setting. This leads to explicit solutions for wide classes of such problems. Starting from this result, we analyze problems with finitely many exercise rights and explain solution methods for some classes of problems with underlying Lévy and diffusion processes, where the optimal characteristics of the problems can be identified more explicitly. We illustrate the main results with explicit examples.

  11. Resolvent-Techniques for Multiple Exercise Problems

    Energy Technology Data Exchange (ETDEWEB)

    Christensen, Sören, E-mail: christensen@math.uni-kiel.de [Christian–Albrechts-University in Kiel, Mathematical Institute (Germany); Lempa, Jukka, E-mail: jukka.lempa@hioa.no [Oslo and Akershus University College, School of business, Faculty of Social Sciences (Norway)

    2015-02-15

    We study optimal multiple stopping of strong Markov processes with random refraction periods. The refraction periods are assumed to be exponentially distributed with a common rate and independent of the underlying dynamics. Our main tool is the resolvent operator. In the first part, we reduce infinite stopping problems to ordinary ones in a general strong Markov setting. This leads to explicit solutions for wide classes of such problems. Starting from this result, we analyze problems with finitely many exercise rights and explain solution methods for some classes of problems with underlying Lévy and diffusion processes, where the optimal characteristics of the problems can be identified more explicitly. We illustrate the main results with explicit examples.

  12. Who cares and how much? The imputed economic contribution to the Canadian healthcare system of middle-aged and older unpaid caregivers providing care to the elderly.

    Science.gov (United States)

    Hollander, Marcus J; Liu, Guiping; Chappell, Neena L

    2009-01-01

    Canadians provide significant amounts of unpaid care to elderly family members and friends with long-term health problems. While some information is available on the nature of the tasks unpaid caregivers perform, and the amounts of time they spend on these tasks, the contribution of unpaid caregivers is often hidden. (It is recognized that some caregiving may be for short periods of time or may entail matters better described as "help" or "assistance," such as providing transportation. However, we use caregiving to cover the full range of unpaid care provided, from some basic help to personal care.) Aggregate estimates of the market costs to replace the unpaid care provided are important to governments for policy development, as they provide a means to situate the contributions of unpaid caregivers within Canada's healthcare system. The purpose of this study was to obtain an assessment of the imputed costs of replacing the unpaid care provided by Canadians to the elderly. (The term imputed costs refers to the costs that would be incurred if the care provided by an unpaid caregiver were, instead, provided by a paid caregiver on a direct hour-for-hour substitution basis.) The economic value of unpaid care as understood in this study is defined as the cost to replace the services provided by unpaid caregivers at rates for paid care providers.

  13. Bipolar radiofrequency ablation of benign thyroid nodules using a multiple overlapping shot technique in a 3-month follow-up.

    Science.gov (United States)

    Kohlhase, Konstantin David; Korkusuz, Yücel; Gröner, Daniel; Erbelding, Christian; Happel, Christian; Luboldt, Wolfgang; Grünwald, Frank

    2016-08-01

    Purpose: The aim of this study was to evaluate the volume decrease of benign thyroid nodules after bipolar radiofrequency ablation (RFA) in a 3-month follow-up using a multiple overlapping shot technique ('MOST'). Methods: A total of 18 patients with 20 symptomatic benign thyroid nodules (17 cold nodules, 3 hyperfunctioning nodules) were treated in one single session by bipolar RFA. Bipolar ablation was performed using MOST. The nodule volumes were measured prior to ablation and 3 months after the procedure using ultrasound. The treated nodules were solid (>80% solid tissue within the volume of interest), complex, or cystic. Results: The absolute volume reduction (ΔV) was a median of 5.3 mL (range 0.13-43.1 mL), corresponding to a mean relative reduction of 56 ± 17.9%. The median initial volume was 8 mL (range 0.48-62 mL); 3 months after ablation a median volume of 2.3 mL (range 0.3-32 mL) was measured. A volume reduction of ≥50% occurred in 70% of nodules (14 nodules). At follow-up no complications such as infections, persisting pain, nerve injuries or immunogenic stimulation occurred. Patients with cold nodules (15) remained euthyroid; patients with hyperfunctioning nodules were either euthyroid (2) or latently hypofunctional (1). Conclusion: Bipolar RFA is an effective, safe and suitable thermoablative technique to treat benign thyroid nodules. Combined with the multiple overlapping shot technique it allows sufficient ablation.

  14. Diagnosis of soil-transmitted helminthiasis in an Amazonic community of Peru using multiple diagnostic techniques.

    Science.gov (United States)

    Machicado, Jorge D; Marcos, Luis A; Tello, Raul; Canales, Marco; Terashima, Angelica; Gotuzzo, Eduardo

    2012-06-01

    An observational descriptive study was conducted in a Shipibo-Conibo/Ese'Eja community of the rainforest in Peru to compare the Kato-Katz method and the spontaneous sedimentation in tube technique (SSTT) for the diagnosis of intestinal parasites as well as to report the prevalence of soil-transmitted helminth (STH) infections in this area. A total of 73 stool samples were collected and analysed by several parasitological techniques, including Kato-Katz, SSTT, modified Baermann technique (MBT), agar plate culture, Harada-Mori culture and the direct smear examination. Kato-Katz and SSTT had the same rate of detection for Ascaris lumbricoides (5%), Trichuris trichiura (5%), hookworm (14%) and Hymenolepis nana (26%). The detection rate for Strongyloides stercoralis larvae was 16% by SSTT and 0% by Kato-Katz, but 18% by agar plate culture and 16% by MBT. The SSTT also had the advantage of detecting multiple intestinal protozoa such as Blastocystis hominis (40%), Giardia intestinalis (29%) and Entamoeba histolytica/E. dispar (16%). The most common intestinal parasites found in this community were B. hominis, G. intestinalis, H. nana, S. stercoralis and hookworm. In conclusion, the SSTT is not inferior to Kato-Katz for the diagnosis of common STH infections but is largely superior for detecting intestinal protozoa and S. stercoralis larvae. Copyright © 2012 Royal Society of Tropical Medicine and Hygiene. Published by Elsevier Ltd. All rights reserved.

  15. Genetic evaluation with major genes and polygenic inheritance when some animals are not genotyped using gene content multiple-trait BLUP.

    Science.gov (United States)

    Legarra, Andrés; Vitezica, Zulma G

    2015-11-17

    In pedigreed populations with a major gene segregating for a quantitative trait, it is not clear how to use pedigree, genotype and phenotype information when some individuals are not genotyped. We propose to consider gene content at the major gene as a second trait correlated to the quantitative trait, in a gene content multiple-trait best linear unbiased prediction (GCMTBLUP) method. The genetic covariance between the trait and gene content at the major gene is a function of the substitution effect of the gene. This genetic covariance can be written in a multiple-trait form that accommodates any pattern of missing values for either genotype or phenotype data. Effects of major gene alleles and the genetic covariance between genotype at the major gene and the phenotype can be estimated using standard EM-REML or Gibbs sampling. Prediction of breeding values with genotypes at the major gene can use multiple-trait BLUP software. Major genes with more than two alleles can be considered by including negative covariances between gene contents at each different allele. We simulated two scenarios: a selected and an unselected trait with heritabilities of 0.05 and 0.5, respectively. In both cases, the major gene explained half the genetic variation. Competing methods used imputed gene contents derived by the method of Gengler et al. or by iterative peeling. Imputed gene contents, in contrast to GCMTBLUP, do not consider information on the quantitative trait for genotype prediction. GCMTBLUP gave unbiased estimates of the gene effect, in contrast to the other methods, with less bias and better or equal accuracy of prediction. GCMTBLUP improved estimation of genotypes in non-genotyped individuals, in particular if these individuals had own phenotype records and the trait had a high heritability. Ignoring the major gene in genetic evaluation led to serious biases and decreased prediction accuracy. GCMTBLUP is the best linear predictor of additive genetic merit including…

  16. Prediction of regulatory gene pairs using dynamic time warping and gene ontology.

    Science.gov (United States)

    Yang, Andy C; Hsu, Hui-Huang; Lu, Ming-Da; Tseng, Vincent S; Shih, Timothy K

    2014-01-01

    Selecting informative genes is the most important task for data analysis on microarray gene expression data. In this work, we aim at identifying regulatory gene pairs from microarray gene expression data. However, microarray data often contain multiple missing expression values. Missing value imputation is thus needed before further processing for regulatory gene pairs becomes possible. We develop a novel approach that first imputes missing values in microarray time series data by combining k-Nearest Neighbour (KNN), Dynamic Time Warping (DTW) and Gene Ontology (GO). After missing values are imputed, we then perform gene regulation prediction based on our proposed DTW-GO distance measurement of gene pairs. Experimental results show that our approach is more accurate than existing missing value imputation methods on real microarray data sets. Furthermore, our approach also discovers more of the regulatory gene pairs known in the literature than other methods do.
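
    The sketch below shows the imputation step only: a textbook O(nm) dynamic-time-warping distance selects the k nearest gene profiles, whose values fill the gap. The GO-based weighting of the paper is omitted, and the data are synthetic.

```python
import numpy as np

def dtw(a, b):
    """Classic dynamic-time-warping distance between two 1-D series."""
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[n, m]

def knn_dtw_impute(expr, gene, t, k=3):
    """Fill expr[gene, t] with the mean of the k DTW-nearest genes' values."""
    mask = ~np.isnan(expr[gene])
    dists = sorted((dtw(expr[g][mask], expr[gene][mask]), g)
                   for g in range(expr.shape[0])
                   if g != gene and not np.isnan(expr[g, t]))
    return np.mean([expr[g, t] for _, g in dists[:k]])

rng = np.random.default_rng(3)
expr = rng.normal(size=(20, 12))        # 20 genes x 12 time points
expr[0, 5] = np.nan
print("imputed value:", knn_dtw_impute(expr, 0, 5))
```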

  17. Multiple sensitive estimation and optimal sample size allocation in the item sum technique.

    Science.gov (United States)

    Perri, Pier Francesco; Rueda García, María Del Mar; Cobo Rodríguez, Beatriz

    2018-01-01

    For surveys of sensitive issues in the life sciences, statistical procedures can be used to reduce nonresponse and social desirability response bias. Both of these phenomena provoke nonsampling errors that are difficult to deal with and can seriously flaw the validity of the analyses. The item sum technique (IST) is a very recent indirect questioning method derived from the item count technique that seeks to procure more reliable responses on quantitative items than direct questioning while preserving respondents' anonymity. This article addresses two important questions concerning the IST: (i) its implementation when two or more sensitive variables are investigated and efficient estimates of their unknown population means are required; (ii) the determination of the optimal sample size to achieve minimum-variance estimates. These aspects are of great relevance for survey practitioners engaged in sensitive research and, to the best of our knowledge, had not been studied before. In this article, theoretical results for multiple estimation and optimal allocation are obtained under a generic sampling design and then particularized to simple random sampling and stratified sampling designs. Theoretical considerations are integrated with a number of simulation studies based on data from two real surveys, conducted to ascertain the efficiency gain derived from optimal allocation in different situations. One of the surveys concerns cannabis consumption among university students. Our findings highlight some methodological advances that can be obtained in life sciences IST surveys when optimal allocation is achieved. © 2017 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
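
    A minimal sketch of the basic IST estimator and a Neyman-style split of the total sample follows. The simulation and the proportional-to-standard-deviation allocation are illustrative; the paper derives the general optimal-allocation results.

```python
import numpy as np

rng = np.random.default_rng(4)
n_total = 1000
sensitive = rng.poisson(3.0, n_total)           # hypothetical sensitive count
innocuous = rng.poisson(5.0, n_total)           # total of the innocuous items

long_list = sensitive[:500] + innocuous[:500]   # group A reports only the sum
short_list = innocuous[500:]                    # group B reports innocuous sum

# IST estimator of the sensitive mean: difference of the group means.
print("estimated sensitive mean:", long_list.mean() - short_list.mean())

# Neyman-type allocation: group sizes proportional to group std deviations.
s_a, s_b = long_list.std(ddof=1), short_list.std(ddof=1)
print("optimal long-list group size:", round(n_total * s_a / (s_a + s_b)))
```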

  18. New insights into the pharmacogenomics of antidepressant response from the GENDEP and STAR*D studies: rare variant analysis and high-density imputation.

    Science.gov (United States)

    Fabbri, C; Tansey, K E; Perlis, R H; Hauser, J; Henigsberg, N; Maier, W; Mors, O; Placentino, A; Rietschel, M; Souery, D; Breen, G; Curtis, C; Sang-Hyuk, L; Newhouse, S; Patel, H; Guipponi, M; Perroud, N; Bondolfi, G; O'Donovan, M; Lewis, G; Biernacka, J M; Weinshilboum, R M; Farmer, A; Aitchison, K J; Craig, I; McGuffin, P; Uher, R; Lewis, C M

    2017-11-21

    Genome-wide association studies have generally failed to identify polymorphisms associated with antidepressant response. Possible reasons include limited coverage of genetic variants, which this study tried to address by exome genotyping and dense imputation. A meta-analysis of the Genome-Based Therapeutic Drugs for Depression (GENDEP) and Sequenced Treatment Alternatives to Relieve Depression (STAR*D) studies was performed at the single-nucleotide polymorphism (SNP), gene and pathway levels. Coverage of genetic variants was increased compared with previous studies by adding exome genotypes to previously available genome-wide data and using the Haplotype Reference Consortium panel for imputation. Standard quality control was applied. Phenotypes were symptom improvement and remission after 12 weeks of antidepressant treatment. Significant findings were investigated in NEWMEDS consortium samples and the Pharmacogenomic Research Network Antidepressant Medication Pharmacogenomic Study (PGRN-AMPS) for replication. A total of 7,062,950 SNPs were analyzed in GENDEP (n=738) and STAR*D (n=1409). rs116692768 (P=1.80e-08, ITGA9 (integrin α9)) and rs76191705 (P=2.59e-08, NRXN3 (neurexin 3)) were significantly associated with symptom improvement during citalopram/escitalopram treatment. At the gene level, no consistent effect was found. At the pathway level, the Gene Ontology (GO) terms GO: 0005694 (chromosome) and GO: 0044427 (chromosomal part) were associated with improvement (corrected P=0.007 and 0.045, respectively). The association between rs116692768 and symptom improvement was replicated in PGRN-AMPS (P=0.047), whereas rs76191705 was not. The two SNPs did not replicate in NEWMEDS. ITGA9 codes for a membrane receptor for neurotrophins and NRXN3 is a transmembrane neuronal adhesion receptor involved in synaptic differentiation. Despite their meaningful biological rationale for being involved in antidepressant effect, replication was only partial. Further studies may help in clarifying…

  19. A Time-Series Water Level Forecasting Model Based on Imputation and Variable Selection Method

    Directory of Open Access Journals (Sweden)

    Jun-He Yang

    2017-01-01

    Full Text Available Reservoirs are important for households and impact the national economy. This paper proposes a time-series forecasting model based on estimating missing values followed by variable selection to forecast the reservoir’s water level. This study collected data from the Taiwan Shimen Reservoir as well as daily atmospheric data from 2008 to 2015. The two datasets were concatenated by date ordering into an integrated research dataset. The proposed time-series forecasting model has three foci. First, this study uses five imputation methods to estimate missing values instead of deleting them directly. Second, we identified the key variables via factor analysis and then deleted the unimportant variables sequentially via the variable selection method. Finally, the proposed model uses a Random Forest to build the forecasting model of the reservoir’s water level, and compares it with the listed benchmark methods in terms of forecasting error. The experimental results indicate that the Random Forest forecasting model, whether applied with the selected variables or with the full variable set, has better forecasting performance than the listed models. In addition, the experiment shows that the proposed variable selection can help the five forecasting methods used here improve their forecasting capability.

  20. Dynamic reliability assessment and prediction for repairable systems with interval-censored data

    International Nuclear Information System (INIS)

    Peng, Yizhen; Wang, Yu; Zi, YanYang; Tsui, Kwok-Leung; Zhang, Chuhua

    2017-01-01

    The ‘Test, Analyze and Fix’ process is widely applied to improve the reliability of a repairable system. In this process, dynamic reliability assessment for the system has received a great deal of attention. Due to instrument malfunctions, staff omissions and imperfect inspection strategies, field reliability data are often subject to interval censoring, making dynamic reliability assessment a difficult task. Most traditional methods treat such data as multiple normally distributed variables or assume the missing mechanism is missing at random, which may cause a large bias in parameter estimation. This paper proposes a novel method to evaluate and predict the dynamic reliability of a repairable system subject to interval censoring. First, a multiple imputation strategy based on the assumption that the reliability growth trend follows a nonhomogeneous Poisson process is developed to derive the distributions of the missing data. Second, a new order statistic model that can transform the dependent variables into independent variables is developed to simplify the imputation procedure. The unknown parameters of the model are iteratively inferred by the Monte Carlo expectation maximization (MCEM) algorithm. Finally, to verify the effectiveness of the proposed method, a simulation and a real case study of a gas pipeline compressor system are presented. - Highlights: • A new multiple imputation strategy was developed to derive the PDF of missing data. • A new order statistic model was developed to simplify the imputation procedure. • The parameters of the order statistic model were iteratively inferred by MCEM. • A real case study was conducted to verify the effectiveness of the proposed method.
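
    To illustrate the imputation step, the sketch below draws multiple imputations of an event time known only to lie in a censoring interval, using the power-law (NHPP) mean function conditioned on that interval. The shape parameter and interval are invented; the paper estimates such quantities with MCEM.

```python
import numpy as np

def draw_event_times(L, R, beta=1.5, m=5, rng=None):
    """Inverse-CDF draws from a power-law NHPP restricted to [L, R].

    With mean function m(t) = (t/eta)**beta, the conditional CDF on [L, R]
    is (t**beta - L**beta) / (R**beta - L**beta); the scale eta cancels.
    """
    rng = rng or np.random.default_rng()
    u = rng.random(m)
    return (L**beta + u * (R**beta - L**beta)) ** (1.0 / beta)

imps = draw_event_times(100.0, 160.0, m=5, rng=np.random.default_rng(5))
print("five imputed failure times:", np.round(imps, 1))
```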

  1. Mouse epileptic seizure detection with multiple EEG features and simple thresholding technique

    Science.gov (United States)

    Tieng, Quang M.; Anbazhagan, Ashwin; Chen, Min; Reutens, David C.

    2017-12-01

    Objective. Epilepsy is a common neurological disorder characterized by recurrent, unprovoked seizures. The search for new treatments for seizures and epilepsy relies upon studies in animal models of epilepsy. To capture data on seizures, many applications require prolonged electroencephalography (EEG) with recordings that generate voluminous data. The desire for efficient evaluation of these recordings motivates the development of automated seizure detection algorithms. Approach. A new seizure detection method is proposed, based on multiple features and a simple thresholding technique. The features are derived from chaos theory, information theory and the power spectrum of EEG recordings and optimally exploit both linear and nonlinear characteristics of EEG data. Main result. The proposed method was tested with real EEG data from an experimental mouse model of epilepsy and distinguished seizures from other patterns with high sensitivity and specificity. Significance. The proposed approach introduces two new features: negative logarithm of adaptive correlation integral and power spectral coherence ratio. The combination of these new features with two previously described features, entropy and phase coherence, improved seizure detection accuracy significantly. Negative logarithm of adaptive correlation integral can also be used to compute the duration of automatically detected seizures.
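
    As a loose sketch of the 'features plus simple threshold' recipe, the code below computes one illustrative feature (spectral entropy) per one-second window and flags windows below a fixed threshold; rhythmic seizure-like activity concentrates spectral power and lowers entropy. The paper's chaos-theoretic features and tuned thresholds are not reproduced.

```python
import numpy as np

def spectral_entropy(window):
    psd = np.abs(np.fft.rfft(window)) ** 2
    p = psd / psd.sum()
    return -np.sum(p * np.log(p + 1e-12))

rng = np.random.default_rng(11)
fs, n = 256, 256 * 60                     # one minute of synthetic "EEG"
eeg = rng.normal(size=n)
# Inject 10 s of rhythmic 7 Hz activity between seconds 20 and 30.
eeg[fs * 20:fs * 30] += 3 * np.sin(2 * np.pi * 7 * np.arange(fs * 10) / fs)

ent = np.array([spectral_entropy(eeg[i:i + fs]) for i in range(0, n - fs, fs)])
threshold = ent.mean() - 2 * ent.std()
print("seizure-like seconds:", np.flatnonzero(ent < threshold))
```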

  2. Multiple photon resonances

    International Nuclear Information System (INIS)

    Elliott, C.J.; Feldman, B.J.

    1979-02-01

    A detailed theoretical analysis is presented of the interaction of intense near-resonant monochromatic radiation with an N-level anharmonic oscillator. In particular, the phenomenon of multiple photon resonance, the process by which an N-level system resonantly absorbs two or more photons simultaneously, is investigated. Starting from the Schroedinger equation, diagrammatic techniques are developed that allow the resonant process to be analyzed quantitatively, in analogy with well-known two-level coherent phenomena. In addition, multiple photon Stark shifts of the resonances, shifts absent in two-level theory, are obtained from the diagrams. Insights into the nature of multiple photon resonances are gained by comparing the quantum mechanical system with classical coupled pendulums whose equations of motion possess identical eigenvalues and eigenvectors. In certain limiting cases, including that of the resonantly excited N-level harmonic oscillator and that of the equally spaced N-level system with equal matrix elements, analytic results are derived. The influence of population relaxation and phase-disrupting collisions on the multiple photon process are also analyzed, the latter by extension of the diagrammatic technique to the density matrix equations of motion. 11 figures

  3. [Study on correction of data bias caused by different missing mechanisms in survey of medical expenditure among students enrolling in Urban Resident Basic Medical Insurance].

    Science.gov (United States)

    Zhang, Haixia; Zhao, Junkang; Gu, Caijiao; Cui, Yan; Rong, Huiying; Meng, Fanlong; Wang, Tong

    2015-05-01

    A study of medical expenditure and its influencing factors among students enrolled in Urban Resident Basic Medical Insurance (URBMI) in Taiyuan indicated that nonresponse bias and selection bias coexist in the dependent variable of the survey data. Unlike previous studies that focused on only one missing mechanism, this study suggests a two-stage method to deal with both missing mechanisms simultaneously, combining multiple imputation with a sample selection model. A total of 1 190 questionnaires were returned by the students (or their parents) selected in child care settings, schools and universities in Taiyuan by stratified cluster random sampling in 2012. In the returned questionnaires, 2.52% exhibited not-missing-at-random (NMAR) missingness of the dependent variable and 7.14% exhibited missing-at-random (MAR) missingness of the dependent variable. First, multiple imputation was conducted for the MAR values using the completed data; then a sample selection model was used to correct for NMAR within the multiple imputation, and a multivariable analysis model of the influencing factors was established. Based on 1 000 resamples, the best scheme for filling the randomly missing values was the predictive mean matching (PMM) method at the observed missing proportion. With this optimal scheme, the two-stage analysis was conducted. Finally, it was found that the influencing factors on annual medical expenditure among the students enrolled in URBMI in Taiyuan included population group, annual household gross income, affordability of medical insurance expenditure, chronic disease, seeking medical care in hospital, seeking medical care in a community health center or private clinic, hospitalization, hospitalization canceled for some reason, self-medication and the acceptable proportion of self-paid medical expenditure. The two-stage method combining multiple imputation with a sample selection model can deal effectively with nonresponse bias and selection bias in the dependent variable of survey data.
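
    A minimal sketch of predictive mean matching, the scheme the study found best for its MAR stage, is given below on synthetic data: regress the outcome on observed cases, then replace each missing value with the observed value of a randomly chosen donor among the k cases with the closest predicted mean.

```python
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(6)
n = 400
x = rng.normal(size=(n, 2))
y = 1.0 + x @ np.array([2.0, -1.0]) + rng.normal(0, 1, n)
miss = rng.random(n) < 0.15                 # MAR gaps in the outcome

reg = LinearRegression().fit(x[~miss], y[~miss])
pred = reg.predict(x)                       # predicted means for all cases

k = 5
y_imp = y.copy()
for i in np.flatnonzero(miss):
    donors = np.argsort(np.abs(pred[~miss] - pred[i]))[:k]
    y_imp[i] = rng.choice(y[~miss][donors])  # draw a real observed value
print("imputed", int(miss.sum()), "values by PMM")
```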

  4. Family-based Association Analyses of Imputed Genotypes Reveal Genome-Wide Significant Association of Alzheimer’s disease with OSBPL6, PTPRG and PDCL3

    Science.gov (United States)

    Herold, Christine; Hooli, Basavaraj V.; Mullin, Kristina; Liu, Tian; Roehr, Johannes T; Mattheisen, Manuel; Parrado, Antonio R.; Bertram, Lars; Lange, Christoph; Tanzi, Rudolph E.

    2015-01-01

    The genetic basis of Alzheimer's disease (AD) is complex and heterogeneous. Over 200 highly penetrant pathogenic variants in the genes APP, PSEN1 and PSEN2 cause a subset of early-onset familial Alzheimer's disease (EOFAD). On the other hand, susceptibility to late-onset forms of AD (LOAD) is indisputably associated with the ε4 allele in the gene APOE, and more recently with variants in more than two dozen additional genes identified in large-scale genome-wide association studies (GWAS) and meta-analysis reports. Taken together, however, although the heritability of AD is estimated to be as high as 80%, a large proportion of the underlying genetic factors remains to be elucidated. In this study we performed a systematic family-based genome-wide association and meta-analysis on close to 15 million imputed variants from three large collections of AD families (~3,500 subjects from 1,070 families). Using a multivariate phenotype combining affection status and onset age, meta-analysis of the association results revealed three single nucleotide polymorphisms (SNPs) that achieved genome-wide significance for association with AD risk: rs7609954 in the gene PTPRG (P-value = 3.98×10⁻⁸), rs1347297 in the gene OSBPL6 (P-value = 4.53×10⁻⁸), and rs1513625 near PDCL3 (P-value = 4.28×10⁻⁸). In addition, rs72953347 in OSBPL6 (P-value = 6.36×10⁻⁷) and two SNPs in the gene CDKAL1 showed marginally significant association with LOAD (rs10456232, P-value: 4.76×10⁻⁷; rs62400067, P-value: 3.54×10⁻⁷). In summary, family-based GWAS meta-analysis of imputed SNPs revealed novel genomic variants in (or near) PTPRG, OSBPL6, and PDCL3 that influence risk for AD with genome-wide significance. PMID:26830138

  5. Estimation of caries experience by multiple imputation and direct standardization

    NARCIS (Netherlands)

    Schuller, A. A.; Van Buuren, S.

    2014-01-01

    Valid estimates of caries experience are needed to monitor oral population health. Obtaining such estimates in practice is often complicated by nonresponse and missing data. The goal of this study was to estimate caries experiences in a population of children aged 5 and 11 years in the presence of nonresponse and missing data.

  6. Estimation of Caries Experience by Multiple Imputation and Direct Standardization

    NARCIS (Netherlands)

    Schuller, A. A.; van Buuren, S.

    2014-01-01

    Valid estimates of caries experience are needed to monitor oral population health. Obtaining such estimates in practice is often complicated by nonresponse and missing data. The goal of this study was to estimate caries experiences in a population of children aged 5 and 11 years in the presence of nonresponse and missing data.

  7. IMPLANTABLE RESONATORS – A TECHNIQUE FOR REPEATED MEASUREMENT OF OXYGEN AT MULTIPLE DEEP SITES WITH IN VIVO EPR

    Science.gov (United States)

    Li, Hongbin; Hou, Huagang; Sucheta, Artur; Williams, Benjamin B.; Lariviere, Jean P.; Khan, Nadeem; Lesniewski, Piotr N.; Swartz, Harold M.

    2013-01-01

    EPR oximetry using implantable resonators allows measurements at much deeper sites than are possible with surface resonators (>80 mm vs. 10 mm) and has greater sensitivity at any depth. We report here an improvement of the technique that now enables us to obtain information from multiple sites at a variety of depths. The measurements from the various sites are resolved using a simple magnetic field gradient. In the rat brain, multi-probe implanted resonators measured pO2 at several sites simultaneously for over 6 months, recording under normoxic, hypoxic and hyperoxic conditions. This technique also facilitates measurements in moving parts of the animal, such as the heart, because the orientation of the paramagnetic material relative to the sensitive small loop is not altered by the motion. The measured response is very fast, enabling real-time measurements of physiological and pathological changes such as experimental cardiac ischemia in the mouse heart. The technique is also quite useful for following changes in tumor pO2, including applications with simultaneous measurements in tumors and adjacent normal tissues. PMID:20204802

  8. Multivariate two-part statistics for analysis of correlated mass spectrometry data from multiple biological specimens.

    Science.gov (United States)

    Taylor, Sandra L; Ruhaak, L Renee; Weiss, Robert H; Kelly, Karen; Kim, Kyoungmi

    2017-01-01

    High-throughput mass spectrometry (MS) is now being used to profile small molecular compounds across multiple biological sample types from the same subjects with the goal of leveraging information across biospecimens. Multivariate statistical methods that combine information from all biospecimens could be more powerful than the usual univariate analyses. However, missing values are common in MS data and imputation can impact between-biospecimen correlation and multivariate analysis results. We propose two multivariate two-part statistics that accommodate missing values and combine data from all biospecimens to identify differentially regulated compounds. Statistical significance is determined using a multivariate permutation null distribution. Relative to univariate tests, the multivariate procedures detected more significant compounds in three biological datasets. In a simulation study, we showed that multi-biospecimen testing procedures were more powerful than single-biospecimen methods when compounds are differentially regulated in multiple biospecimens but univariate methods can be more powerful if compounds are differentially regulated in only one biospecimen. We provide R functions to implement and illustrate our method as supplementary information. Contact: sltaylor@ucdavis.edu. Supplementary information: Supplementary data are available at Bioinformatics online. © The Author 2016. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com.

  9. Treatment of severe mitral regurgitation caused by lesions in both leaflets using multiple mitral valve plasty techniques in a small dog

    Directory of Open Access Journals (Sweden)

    Satoko Yokoyama

    2017-11-01

    Full Text Available Mitral valve plasty (MVP) is preferred over mitral valve replacement (MVR) for mitral regurgitation in humans because of its favorable effect on quality of life. In small dogs, it is difficult to repair multiple lesions in both leaflets using MVP. Herein, we report a case of severe mitral regurgitation caused by multiple severe lesions in the posterior leaflet (PL) in a mixed Chihuahua. Initially, we had planned MVR with an artificial valve. However, MVP combined with artificial chordal reconstruction of both leaflets, semicircular suture annuloplasty, and valvuloplasty using a newly devised direct scallop suture for the PL was attempted in this dog. The dog recovered well, showed no adverse cardiac signs, and survived the two major operations. The dog died 4 years and 10 months after the MVP due to non-cardiovascular disease. Our additional technique of using a direct scallop suture seems useful for PL repair involving multiple scallops in a small dog.

  10. Interference-Assisted Techniques for Transmission and Multiple Access in Optical Communications

    Science.gov (United States)

    Guan, Xun

    The first part of this thesis applies physical-layer network coding (PNC) to visible light communication (VLC), with a newly proposed phase-aligning method. PNC can improve the throughput at the bottlenecking relay node in a VLC system, and the proposed phase-aligning method improves the BER performance. The second part of this thesis discusses another interference-assisted technology in communication: non-orthogonal multiple access (NOMA). NOMA multiplexes signals from multiple users in an additional dimension, the power domain, while the users share other dimensions such as time, frequency and code non-orthogonally. Three schemes are proposed in this part. The first and second schemes both realize NOMA in VLC, with different multiuser detection (MUD) techniques and a proposed phase pre-distortion method. Although both can decrease the system BER compared to conventional NOMA, the scheme using joint detection (JD) outperforms the one using successive interference cancellation (SIC). The third scheme investigated in this part is a combination of NOMA and a multicarrier precoding (MP) technology based on an orthogonal circulant transform matrix (OCT). This combination avoids complicated adaptive bit loading and electronic equalization, making NOMA more attractive in a practical system.

  11. A prospective study of calf factors affecting age, body size, and body condition score at first calving of holstein dairy heifers.

    Science.gov (United States)

    Heinrichs, A J; Heinrichs, B S; Harel, O; Rogers, G W; Place, N T

    2005-08-01

    Data were collected prospectively on parameters related to first calving on 18 farms located in northeastern Pennsylvania. This project was designed to study possible residual effects of calf management practices and events occurring during the first 16 wk of life on age, BW, skeletal growth, and body condition score at first calving. The multiple imputation method for handling missing data was incorporated in these analyses. This method has an advantage over ad hoc single imputation because the appropriate error structure is maintained. Much similarity was found between the multiple imputation method and a traditional mixed model analysis, except that some estimates from the multiple imputation method seemed more logical in their effects on the parameter measured. Factors related to increased age at first calving were increased difficulty of delivery, antibiotic treatment of sick calves, increased amount of milk or milk replacer fed before weaning, reduced quality of forage fed to weaned calves, maximum humidity, mean daily temperature, and maximum ammonia levels in calf housing areas. Body weight at calving tended to increase with parity of the dam, increased amount of grain fed to calves, increased ammonia levels, and increased mean temperature of the calf housing area. Body condition score at calving tended to be positively influenced by delivery score at first calving, dam parity, and milk or milk replacer dry matter intake. Withers height at calving was positively affected by treatment of animals with antibiotics and increased mean temperature in the calf area. This study demonstrated that nutrition, housing, and management factors that affect the health and growth of calves have long-term effects on the animal at least through first calving.

  12. Study of ferrallitisation process in soil by application of isotopic dilution kinetic technique to iron

    International Nuclear Information System (INIS)

    Thomann, Christiane

    1978-01-01

    The isotopic dilution kinetic technique applied to iron may help clarify the conditions of ''potential'' mobility of iron in soils under the action of three factors: moisture, incubation period and organic matter inputs. Comparison of the surface horizons of three tropical soils (leached ferruginous tropical soil, weakly ferrallitic red soil and ferrallitic soil) shows that in the ferrallitisation process the weakly ferrallitic soil falls between the two other types of soil, with a maximum mobility of iron. This mobility decreases when the organic matter rate decreases, leading to ''beige'' soil (leached ferruginous tropical soil), and when the hydroxide rate increases, which leads to ferrallitic soil. In podzol (A1 horizon), for the same rate of organic matter, the potential mobility of iron is higher than in ferrallitic soil, because the latter contains ten times more free iron than the podzol [fr

  13. [Difference between perinatal mortality in multiple pregnancies obtained spontaneously versus assisted reproduction].

    Science.gov (United States)

    del Rayo Rivas-Ortiz, Yazmín; Hernández-Herrera, Ricardo Jorge

    2010-06-01

    Recently, assisted reproduction techniques have become more common, which increases multiple pregnancies and adverse perinatal outcomes. Some authors report increased mortality among products of multiple pregnancies obtained by assisted reproduction techniques vs. those conceived spontaneously, although other authors found no significant difference. To evaluate the mortality rate of multiple pregnancies, comparing those obtained by assisted reproduction vs. spontaneous conception. Retrospective, observational and comparative study. We included women pregnant with 3 or more products seen at the Unidad Médica de Alta Especialidad No. 23, IMSS, in Monterrey, NL (Mexico), between 2002-2008. We compared the number of complicated pregnancies and dead products obtained by a technique of assisted reproduction vs. spontaneous conception. 68 multiple pregnancies were included. On average, spontaneously conceived fetuses had more weeks of gestation and higher birth weight than those achieved by assisted reproduction techniques (p = NS). 20.5% (14/68) of multiple pregnancies had one or more fatal events: 10/40 (25%) by assisted reproduction techniques vs. 4/28 (14%) of spontaneous multiple pregnancies (p = 0.22). 21/134 (16%) of the products conceived by assisted reproduction techniques died vs. 6/88 (7%) of those conceived spontaneously. Overall, 21% of the cases had one or more fatal events (11% more in pregnancies achieved by assisted reproduction techniques), and 12% of the products of multiple pregnancies died (9% more in those obtained by a technique of assisted reproduction).

  14. The population genomics of archaeological transition in west Iberia: Investigation of ancient substructure using imputation and haplotype-based methods.

    Directory of Open Access Journals (Sweden)

    Rui Martiniano

    2017-07-01

    Full Text Available We analyse new genomic data (0.05-2.95x) from 14 ancient individuals from Portugal distributed from the Middle Neolithic (4200-3500 BC) to the Middle Bronze Age (1740-1430 BC) and impute genome-wide diploid genotypes in these together with published ancient Eurasians. While discontinuity is evident in the transition to agriculture across the region, sensitive haplotype-based analyses suggest a significant degree of local hunter-gatherer contribution to later Iberian Neolithic populations. A more subtle genetic influx is also apparent in the Bronze Age, detectable from analyses including haplotype sharing with both ancient and modern genomes, D-statistics and Y-chromosome lineages. However, the limited nature of this introgression contrasts with the major Steppe migration turnovers within third-millennium northern Europe and echoes the survival of non-Indo-European language in Iberia. Changes in genomic estimates of individual height across Europe are also associated with these major cultural transitions, and ancestral components continue to correlate with modern differences in stature.

  15. Analyzing the changing gender wage gap based on multiply imputed right-censored wages

    OpenAIRE

    Gartner, Hermann; Rässler, Susanne

    2005-01-01

    In order to analyze the gender wage gap with the German IAB-employment sample, we have to solve the problem of wages censored at the upper limit of the social security system. We treat this as a missing data problem. We regard the missingness mechanism as not missing at random (NMAR, according to Little and Rubin, 1987, 2002) as well as missing by design. The censored wages are multiply imputed by draws of a random variable from a truncated distribution. The multiple imputation is based…
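
    The core step, replacing wages recorded at the contribution limit with draws from a distribution truncated below at that limit, can be sketched as follows. The single common mean and variance are a simplification of the paper's regression-based conditional draws; all numbers are hypothetical.

```python
import numpy as np
from scipy.stats import truncnorm

rng = np.random.default_rng(7)
limit = 170.0                               # hypothetical censoring threshold
mu, sigma = 120.0, 40.0                     # assumed latent wage distribution

censored = np.full(50, limit)               # wages recorded at the cap
a = (limit - mu) / sigma                    # standardized lower bound
draws = truncnorm.rvs(a, np.inf, loc=mu, scale=sigma,
                      size=censored.size, random_state=rng)
print("mean imputed wage above the cap:", round(float(draws.mean()), 1))
```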

  16. ACM-based automatic liver segmentation from 3-D CT images by combining multiple atlases and improved mean-shift techniques.

    Science.gov (United States)

    Ji, Hongwei; He, Jiangping; Yang, Xin; Deklerck, Rudi; Cornelis, Jan

    2013-05-01

    In this paper, we present an autocontext model (ACM)-based automatic liver segmentation algorithm, which combines ACM, multi-atlases, and mean-shift techniques to segment the liver from 3-D CT images. Our algorithm is a learning-based method and can be divided into two stages. At the first stage, i.e., the training stage, ACM is performed to learn a sequence of classifiers in each atlas space (based on each atlas and other aligned atlases). With the use of multiple atlases, multiple sequences of ACM-based classifiers are obtained. At the second stage, i.e., the segmentation stage, the test image is segmented in each atlas space by applying each sequence of ACM-based classifiers. The final segmentation result is obtained by fusing the segmentation results from all atlas spaces via a multi-classifier fusion technique. Specifically, in order to speed up segmentation, given a test image, we first use an improved mean-shift algorithm to perform over-segmentation and then implement region-based image labeling instead of the original inefficient pixel-based image labeling. The proposed method is evaluated on the datasets of the MICCAI 2007 liver segmentation challenge. The experimental results show that the average volume overlap error and the average surface distance achieved by our method are 8.3% and 1.5 mm, respectively, which are comparable to the results reported in the existing state-of-the-art work on liver segmentation.

  17. The Rebirth of the Theory of Imputation in the Science of Criminal Law: to an Overcoming Stage or an Involution to Pre-Scientific Conceptions?

    Directory of Open Access Journals (Sweden)

    Nicolás Santiago Cordini

    2015-06-01

    Full Text Available The Science of Criminal Law is going through a moment that can be characterized as a “crisis”. Faced with this situation, theories have proliferated that define themselves as “theories of imputation” and that depart, in whole or in part, from the theory of crime that has been dominant up to now. The aim of this article is to analyze three theories framed under the concept of imputation and to determine to what extent they retain or depart from the categories proposed by the theory of crime. We then establish to what extent these theories constitute an advance for the Science of Criminal Law or, on the contrary, are manifestations of a regression to a pre-scientific stage.

  18. Use of modified lip repositioning technique associated with esthetic crown lengthening for treatment of excessive gingival display: A case report of multiple etiologies

    Science.gov (United States)

    Mantovani, Matheus Bortoluzzi; Souza, Eduardo Clemente; Marson, Fabiano Carlos; Corrêa, Giovani Oliveira; Progiante, Patrícia Saram; Silva, Cléverson Oliveira

    2016-01-01

    Excessive gingival display during smiling can result in compromised esthetics. This study reports a case of excessive gingival display with multiple etiologies treated by means of a modified lip repositioning technique associated with esthetic crown lengthening. A 23-year-old female patient with 5 mm of gingival display during smiling, caused by altered passive eruption and hypermobility of the upper lip, underwent the modified lip repositioning technique associated with gingivectomy followed by flap elevation and ostectomy/osteoplasty. Seven months after the second procedure, the patient's esthetic complaint was resolved and the result remained stable over the observation period. The modified lip repositioning technique is an effective procedure employed to reduce gingival display and, when associated with esthetic clinical crown lengthening, can appropriately treat cases of gummy smile. PMID:27041845

  19. A comparison of multiple regression and neural network techniques for mapping in situ pCO2 data

    International Nuclear Information System (INIS)

    Lefevre, Nathalie; Watson, Andrew J.; Watson, Adam R.

    2005-01-01

    Using about 138,000 measurements of surface pCO2 in the Atlantic subpolar gyre (50-70 deg N, 60-10 deg W) during 1995-1997, we compare two methods of interpolation in space and time: a monthly distribution of surface pCO2 constructed using multiple linear regressions on position and temperature, and a self-organizing neural network approach. Both methods confirm characteristics of the region found in previous work, i.e. the subpolar gyre is a sink for atmospheric CO2 throughout the year, and exhibits a strong seasonal variability with the highest undersaturations occurring in spring and summer due to biological activity. As an annual average the surface pCO2 is higher than estimates based on available syntheses of surface pCO2. This supports earlier suggestions that the sink of CO2 in the Atlantic subpolar gyre has decreased over the last decade instead of increasing as previously assumed. The neural network is able to capture a more complex distribution than can be well represented by linear regressions, but both techniques agree relatively well on the average values of pCO2 and derived fluxes. However, when both techniques are used with a subset of the data, the neural network predicts the remaining data to a much better accuracy than the regressions, with a residual standard deviation ranging from 3 to 11 μatm. The subpolar gyre is a net sink of CO2 of 0.13 Gt-C/yr using the multiple linear regressions and 0.15 Gt-C/yr using the neural network, on average between 1995 and 1997. Both calculations were made with the NCEP monthly wind speeds converted to 10 m height and averaged between 1995 and 1997, and using the gas exchange coefficient of Wanninkhof
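
    The comparison can be sketched on synthetic data as below: a linear model in position and temperature against a small neural network, scored on held-out points. The coefficients and noise are invented, not fitted to the cruise data.

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPRegressor
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(8)
n = 3000
lat = rng.uniform(50, 70, n)
lon = rng.uniform(-60, -10, n)
sst = rng.uniform(4, 14, n)
pco2 = (330 - 8 * (sst - 9) + 0.5 * (lat - 60)
        + 5 * np.sin(lon / 8) + rng.normal(0, 5, n))

X = np.column_stack([lat, lon, sst])
Xtr, Xte, ytr, yte = train_test_split(X, pco2, random_state=0)
lin = LinearRegression().fit(Xtr, ytr)
net = make_pipeline(StandardScaler(),
                    MLPRegressor(hidden_layer_sizes=(16, 16),
                                 max_iter=2000, random_state=0)).fit(Xtr, ytr)
for name, m in [("regression", lin), ("neural net", net)]:
    print(f"{name}: residual std = {(yte - m.predict(Xte)).std():.1f} uatm")
```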

  20. Multivariate missing data in hydrology - Review and applications

    Science.gov (United States)

    Ben Aissia, Mohamed-Aymen; Chebana, Fateh; Ouarda, Taha B. M. J.

    2017-12-01

    Water resources planning and management require complete data sets for a number of hydrological variables, such as flood peaks and volumes. However, hydrologists are often faced with the problem of missing data (MD) in hydrological databases. Several methods are used to deal with the imputation of MD. During the last decade, multivariate approaches have gained popularity in the field of hydrology, especially in hydrological frequency analysis (HFA). However, the treatment of MD remains neglected in the multivariate HFA literature, where the focus has mainly been on the modeling component. For a complete analysis and in order to optimize the use of data, MD should also be treated in the multivariate setting prior to modeling and inference. Imputation of MD in the multivariate hydrological framework can have direct implications on the quality of the estimation. Indeed, the dependence between the series represents important additional information that can be included in the imputation process. The objective of the present paper is to highlight the importance of treating MD in multivariate hydrological frequency analysis by reviewing and applying multivariate imputation methods and by comparing univariate and multivariate imputation methods. An application is carried out for multiple flood attributes at three sites in order to evaluate the performance of the different methods based on the leave-one-out procedure. The results indicate that the performance of imputation methods can be improved by adopting the multivariate setting, compared to mean substitution and interpolation methods, especially when using the copula-based approach.
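
    The review's conclusion can be illustrated with a small contrast between mean substitution and a multivariate imputer that exploits the dependence between two correlated flood attributes. The synthetic peak-volume relationship stands in for real series.

```python
import numpy as np
from sklearn.experimental import enable_iterative_imputer  # noqa: F401
from sklearn.impute import IterativeImputer, SimpleImputer

rng = np.random.default_rng(9)
n = 300
peak = rng.gamma(5, 100, n)
volume = 40 * peak + rng.normal(0, 800, n)     # strongly dependent attribute
true = np.column_stack([peak, volume])
data = true.copy()
holes = rng.random(n) < 0.2
data[holes, 1] = np.nan                        # leave-one-out style gaps

for name, imp in [("mean substitution", SimpleImputer()),
                  ("multivariate (iterative)", IterativeImputer(random_state=0))]:
    filled = imp.fit_transform(data)
    rmse = np.sqrt(np.mean((filled[holes, 1] - true[holes, 1]) ** 2))
    print(f"{name}: RMSE = {rmse:.0f}")
```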

  1. A regressive methodology for estimating missing data in rainfall daily time series

    Science.gov (United States)

    Barca, E.; Passarella, G.

    2009-04-01

    The "presence" of gaps in environmental data time series represents a very common, but extremely critical problem, since it can produce biased results (Rubin, 1976). Missing data plagues almost all surveys. The problem is how to deal with missing data once it has been deemed impossible to recover the actual missing values. Apart from the amount of missing data, another issue which plays an important role in the choice of any recovery approach is the evaluation of "missingness" mechanisms. When data missing is conditioned by some other variable observed in the data set (Schafer, 1997) the mechanism is called MAR (Missing at Random). Otherwise, when the missingness mechanism depends on the actual value of the missing data, it is called NCAR (Not Missing at Random). This last is the most difficult condition to model. In the last decade interest arose in the estimation of missing data by using regression (single imputation). More recently multiple imputation has become also available, which returns a distribution of estimated values (Scheffer, 2002). In this paper an automatic methodology for estimating missing data is presented. In practice, given a gauging station affected by missing data (target station), the methodology checks the randomness of the missing data and classifies the "similarity" between the target station and the other gauging stations spread over the study area. Among different methods useful for defining the similarity degree, whose effectiveness strongly depends on the data distribution, the Spearman correlation coefficient was chosen. Once defined the similarity matrix, a suitable, nonparametric, univariate, and regressive method was applied in order to estimate missing data in the target station: the Theil method (Theil, 1950). Even though the methodology revealed to be rather reliable an improvement of the missing data estimation can be achieved by a generalization. A first possible improvement consists in extending the univariate technique to

  2. Hierarchical assembly of inorganic nanostructure building blocks to octahedral superstructures-a true template-free self-assembly

    International Nuclear Information System (INIS)

    Kuchibhatla, Satyanarayana V N T; Karakoti, Ajay S; Seal, Sudipta

    2007-01-01

    A room-temperature, template-free, wet chemical synthesis of ceria nanoparticles and their long-term ageing characteristics are reported. High resolution transmission electron microscopy and UV-visible spectroscopy are used to observe the variation in size, structure and oxidation state as a function of time. The morphology variation and the hierarchical assembly (octahedral superstructure) of the nanostructures are imputed to the inherent structural aspects of cerium oxide. It is hypothesized that the 3-5 nm individual building blocks undergo an intra-agglomerate re-orientation to attain the low-energy configuration. This communication also emphasizes the need for long-term ageing studies of nanomaterials in various solvents for multiple functionalities.

  3. MULTIPLE OBJECTS

    Directory of Open Access Journals (Sweden)

    A. A. Bosov

    2015-04-01

    Full Text Available Purpose. The development of complicated production and management processes, information systems, computer science, applied systems theory and other fields requires improved mathematical methods and new approaches to the study of applied systems. The variety and diversity of subject systems make it necessary to develop a model that generalizes classical sets and their extension, sets of sets. Multiple objects, unlike sets, are built from multiple structures and are represented by structure and content. The aim of this work is the analysis of the multiple structures that generate multiple objects, and the further development of operations on these objects in applied systems. Methodology. To achieve the objectives of the research, the structure of a multiple object is represented as a constructive triple consisting of carrier, signature and axiomatics. A multiple object is determined by its structure and content, and is represented as a hybrid superposition composed of sets, multisets, ordered sets (lists) and heterogeneous sets (sequences, tuples). Findings. We study the properties and characteristics of the components of hybrid multiple objects of complex systems, propose assessments of their complexity, and show the rules of internal and external operations on the objects. We introduce relations of arbitrary order over multiple objects and define functions and mappings on objects of multiple structures. Originality. The paper further develops the multiple structures that generate multiple objects. Practical value. The transition from abstract multiple structures to a subject domain requires transformation of the system and of the multiple objects. The transformation involves three successive stages: specification (binding to the domain), interpretation (multiple sites) and particularization (goals). The proposed approach describes systems on the basis of hybrid sets.

  4. Neutron/gamma dose separation by the multiple-ion-chamber technique

    International Nuclear Information System (INIS)

    Goetsch, S.J.

    1983-01-01

    Many mixed n/γ dosimetry systems rely on two dosimeters, one composed of a tissue-equivalent material and the other made from a non-hydrogenous material. The paired chamber technique works well in fields of neutron radiation nearly identical in spectral composition to that in which the dosimeters were calibrated. However, this technique is drastically compromised in phantom due to the degradation of the neutron spectrum. The three-dosimeter technique allows for the fall-off in neutron sensitivity of the two non-hydrogenous dosimeters. Precise and physically meaningful results were obtained with this technique with a D-T source in air and in phantom and with simultaneous D-T neutron and 60Co gamma ray irradiation in air. The MORSE-CG coupled n/γ three-dimensional Monte Carlo code was employed to calculate neutron and gamma doses in a water phantom. Gamma doses calculated in phantom with this code were generally lower than corresponding ion chamber measurements. This can be explained by the departure of irradiation conditions from ideal narrow-beam geometry. 97 references

  5. Multiple-Access Quantum-Classical Networks

    Science.gov (United States)

    Razavi, Mohsen

    2011-10-01

    A multi-user network that supports both classical and quantum communication is proposed. By relying on optical code-division multiple access techniques, this system offers simultaneous key exchange between multiple pairs of network users. A lower bound on the secure key generation rate is derived for decoy-state quantum key distribution protocols.

  6. Study of the uranium-water multiplying medium of the RESUCO (Subcritical Experimental Reactor of Uranium with Oxygen) subcritical assembly by the pulsed neutron technique

    International Nuclear Information System (INIS)

    Jesus Barbosa, S. de.

    1987-01-01

    The effective multiplication factor and the nuclear parameters associated with variations of the configuration of the RESUCO (Subcritical Experimental Reactor of Uranium with Oxygen) subcritical assembly are analysed using the pulsed neutron technique. BF3 detectors, positioned parallel to the fuel elements, were used to detect the variation of thermal neutrons in the system, and a proton-recoil detector was used for monitoring the neutron generation. (M.C.K.) [pt

  7. Parkinson’s Disease Severity at 3 Years Can Be Predicted from Non-Motor Symptoms at Baseline

    Directory of Open Access Journals (Sweden)

    Alba Ayala

    2017-10-01

    Full Text Available Objective: The aim of this study is to present a predictive model of Parkinson's disease (PD) global severity, measured with the Clinical Impression of Severity Index for Parkinson's Disease (CISI-PD). Methods: This is an observational, longitudinal study with annual follow-up assessments over 3 years (four time points). Multilevel analysis and multiple imputation techniques were used to generate a predictive model that estimates changes in the CISI-PD at 1, 2, and 3 years. Results: The clinical state of patients (CISI-PD) worsened significantly over the 3-year follow-up, but the change was of small magnitude (effect size: 0.44). The following baseline variables were significant predictors of the change in global severity: baseline global severity of disease, levodopa-equivalent dose, depression and anxiety symptoms, autonomic dysfunction, and cognitive state. The goodness-of-fit of the model was adequate, and the sensitivity analysis showed that the data imputation method applied was suitable. Conclusion: Disease progression depends more on the individual's baseline characteristics than on the 3-year time period. The results may contribute to a better understanding of the evolution of PD, including the non-motor manifestations of the disease.
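
    As a point of reference for the imputation step, one chained-equations pass can be sketched with scikit-learn's IterativeImputer; this is a generic illustration with stand-in data, not the study's actual model:

```python
import numpy as np
from sklearn.experimental import enable_iterative_imputer  # noqa: F401
from sklearn.impute import IterativeImputer

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 4))            # stand-in clinical covariates
X[rng.random(X.shape) < 0.1] = np.nan    # ~10% values missing at random

# One chained-equations imputation per random state; repeating with
# different states yields the multiple completed datasets that MI combines.
imputed = [
    IterativeImputer(random_state=s, sample_posterior=True).fit_transform(X)
    for s in range(5)
]
```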

  8. 3-D thermal weight function method and multiple virtual crack extension technique for thermal shock problems

    International Nuclear Information System (INIS)

    Lu Yanlin; Zhou Xiao; Qu Jiadi; Dou Yikang; He Yinbiao

    2005-01-01

    An efficient scheme, the 3-D thermal weight function (TWF) method, and a novel numerical technique, the multiple virtual crack extension (MVCE) technique, were developed for determining the histories of transient stress intensity factor (SIF) distributions along 3-D crack fronts of a body subjected to thermal shock. The TWF is a universal function that depends only on the crack configuration and body geometry. Because the TWF is independent of time during thermal shock, the whole history of transient SIF distributions along crack fronts can be calculated directly by integrating the products of the TWF with the transient temperatures and temperature gradients. The repeated determination of stress (or displacement) fields at individual time instants is thus avoided in the TWF method. An expression of the basic equation of the 3-D universal weight function method for Mode I in an isotropic elastic body is derived. This equation can also be derived from Bueckner-Rice's 3-D WF formulations in the framework of transformation strain. It follows from this equation that the so-called thermal WF in fact coincides with the mechanical WF except for some elastic constants. The details and formulations of the MVCE technique are given for elliptical cracks. The MVCE technique possesses several advantages. The specially selected linearly independent VCE modes can be used directly as shape functions for the interpolation of the unknown SIFs. As a result, the coefficient matrix of the final system of equations in the MVCE method is tridiagonal with large values on the main diagonal, so the system of equations has good numerical properties. The number of linearly independent VCE modes that can be introduced in a problem is unlimited. Complex situations in which the SIFs vary dramatically along crack fronts can be simulated numerically by the MVCE technique. An integrated system of programs for solving the
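
    Schematically, the weight-function evaluation the abstract describes takes the following form (a generic textbook statement with assumed symbols, not the paper's exact equation):

```latex
% Mode-I SIF at crack-front point P' and time t: integrate the weight
% function h (geometry-dependent, time-independent) against the
% uncracked-body thermal stress field sigma over the crack area A.
K_I(P', t) = \int_A h(P, P')\, \sigma(P, t)\, \mathrm{d}A
```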

  9. A Review of the State-of-the-Art on Combining Multiple NDT Techniques in Terms of Precise Fault Detection

    Directory of Open Access Journals (Sweden)

    Ashish Khaira

    2017-06-01

    Full Text Available The present industrial scenario demands optimum quality, feasible processing time and enhanced machine availability to cope with continuously increasing customer expectations. To achieve this target, it is mandatory to ensure the optimum performance and high availability of machinery. The present work therefore begins with a review of research covering the NDT combinations applied from 2000 to 2016, proceeds with a discussion of the parameters to be checked before combining NDT methods, and finally covers the maintenance performance parameters for quantifying the improvement achieved after combining NDT methods. The results indicate that very few studies use combinations of NDT methods in areas such as aeronautics and compressors, and that most of the work on composites is carried out without any decision-making technique. Researchers and practitioners can use the outcome of this work as a guideline for combining multiple NDT techniques to achieve precise fault detection and forecasting of upcoming failures.

  10. An Innovative Technique to Assess Spontaneous Baroreflex Sensitivity with Short Data Segments: Multiple Trigonometric Regressive Spectral Analysis.

    Science.gov (United States)

    Li, Kai; Rüdiger, Heinz; Haase, Rocco; Ziemssen, Tjalf

    2018-01-01

    Objective: As the multiple trigonometric regressive spectral (MTRS) analysis is extraordinary in its ability to analyze short local data segments down to 12 s, we wanted to evaluate the impact of the data segment settings by applying the technique of MTRS analysis for baroreflex sensitivity (BRS) estimation using a standardized data pool. Methods: Spectral and baroreflex analyses were performed on the EuroBaVar dataset (42 recordings, including lying and standing positions). For this analysis, the technique of MTRS was used. We used different global and local data segment lengths, and chose the global data segments from different positions. Three global data segments of 1 and 2 min and three local data segments of 12, 20, and 30 s were used in MTRS analysis for BRS. Results: All the BRS-values calculated on the three global data segments were highly correlated, both in the supine and standing positions; the different global data segments provided similar BRS estimations. When using different local data segments, all the BRS-values were also highly correlated. However, in the supine position, using short local data segments of 12 s overestimated BRS compared with those using 20 and 30 s. In the standing position, the BRS estimations using different local data segments were comparable. There was no proportional bias for the comparisons between different BRS estimations. Conclusion: We demonstrate that BRS estimation by the MTRS technique is stable when using different global data segments, and MTRS is extraordinary in its ability to evaluate BRS in even short local data segments (20 and 30 s). Because of the non-stationary character of most biosignals, the MTRS technique would be preferable for BRS analysis especially in conditions when only short stationary data segments are available or when dynamic changes of BRS should be monitored.
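
    MTRS itself is a specialized trigonometric-regression method, so the sketch below substitutes the classical spectral alpha index, which conveys the same idea of relating RR-interval power to systolic pressure power; the 4 Hz resampling rate and LF band edges are conventional assumptions, not the authors' settings:

```python
import numpy as np
from scipy.signal import welch

def alpha_index(rr, sbp, fs=4.0, band=(0.04, 0.15)):
    """Classical spectral BRS (alpha index) in the LF band:
    sqrt of the ratio of RR power to SBP power. Not MTRS itself;
    rr and sbp are assumed resampled to a common rate fs."""
    f, p_rr = welch(rr, fs=fs, nperseg=256)
    _, p_sbp = welch(sbp, fs=fs, nperseg=256)
    lf = (f >= band[0]) & (f <= band[1])
    return np.sqrt(p_rr[lf].mean() / p_sbp[lf].mean())
```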

  11. Class II HLA interactions modulate genetic risk for multiple sclerosis

    Science.gov (United States)

    Dilthey, Alexander T; Xifara, Dionysia K; Ban, Maria; Shah, Tejas S; Patsopoulos, Nikolaos A; Alfredsson, Lars; Anderson, Carl A; Attfield, Katherine E; Baranzini, Sergio E; Barrett, Jeffrey; Binder, Thomas M C; Booth, David; Buck, Dorothea; Celius, Elisabeth G; Cotsapas, Chris; D’Alfonso, Sandra; Dendrou, Calliope A; Donnelly, Peter; Dubois, Bénédicte; Fontaine, Bertrand; Fugger, Lars; Goris, An; Gourraud, Pierre-Antoine; Graetz, Christiane; Hemmer, Bernhard; Hillert, Jan; Kockum, Ingrid; Leslie, Stephen; Lill, Christina M; Martinelli-Boneschi, Filippo; Oksenberg, Jorge R; Olsson, Tomas; Oturai, Annette; Saarela, Janna; Søndergaard, Helle Bach; Spurkland, Anne; Taylor, Bruce; Winkelmann, Juliane; Zipp, Frauke; Haines, Jonathan L; Pericak-Vance, Margaret A; Spencer, Chris C A; Stewart, Graeme; Hafler, David A; Ivinson, Adrian J; Harbo, Hanne F; Hauser, Stephen L; De Jager, Philip L; Compston, Alastair; McCauley, Jacob L; Sawcer, Stephen; McVean, Gil

    2016-01-01

    Association studies have greatly refined the understanding of how variation within the human leukocyte antigen (HLA) genes influences risk of multiple sclerosis. However, the extent to which major effects are modulated by interactions is poorly characterized. We analyzed high-density SNP data on 17,465 cases and 30,385 controls from 11 cohorts of European ancestry, in combination with imputation of classical HLA alleles, to build a high-resolution map of HLA genetic risk and assess the evidence for interactions involving classical HLA alleles. Among new and previously identified class II risk alleles (HLA-DRB1*15:01, HLA-DRB1*13:03, HLA-DRB1*03:01, HLA-DRB1*08:01 and HLA-DQB1*03:02) and class I protective alleles (HLA-A*02:01, HLA-B*44:02, HLA-B*38:01 and HLA-B*55:01), we find evidence for two interactions involving pairs of class II alleles: HLA-DQA1*01:01–HLA-DRB1*15:01 and HLA-DQB1*03:01–HLA-DQB1*03:02. We find no evidence for interactions between classical HLA alleles and non-HLA risk-associated variants and estimate a minimal effect of polygenic epistasis in modulating major risk alleles. PMID:26343388

  12. Neutron multiplicity of fission fragments

    Energy Technology Data Exchange (ETDEWEB)

    Abdelrahman, Y. S. [Physics Department, Mu'tah University, Al-Karak (Jordan)]

    1995-10-01

    The total average neutron multiplicity of the fission fragments produced by the spontaneous fission of 248Cm has been measured. This measurement has been done by using a new experimental technique. This technique mainly depends on γ-γ coincidence using a very high-resolution high-purity germanium (HPGe) detector. 2 figs.

  13. Principle Component Analysis with Incomplete Data: A simulation of R pcaMethods package in Constructing an Environmental Quality Index with Missing Data

    Science.gov (United States)

    Missing data is a common problem in the application of statistical techniques. In principal component analysis (PCA), a technique for dimensionality reduction, incomplete data points are either discarded or imputed using interpolation methods. Such approaches are less valid when ...
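
    pcaMethods is an R package; as a language-neutral illustration of imputing within PCA, the EM-style iteration below alternates PCA reconstruction with refilling the missing cells (a simplified sketch, not the package's PPCA/NIPALS algorithms):

```python
import numpy as np

def em_pca_impute(X, n_comp=2, n_iter=100):
    """Iterative PCA imputation: start from column means, then
    repeatedly reconstruct with a rank-n_comp SVD and refill the
    missing entries from the reconstruction."""
    X = X.copy()
    miss = np.isnan(X)
    col_means = np.nanmean(X, axis=0)
    X[miss] = np.take(col_means, np.where(miss)[1])
    for _ in range(n_iter):
        mu = X.mean(axis=0)
        U, s, Vt = np.linalg.svd(X - mu, full_matrices=False)
        recon = (U[:, :n_comp] * s[:n_comp]) @ Vt[:n_comp] + mu
        X[miss] = recon[miss]
    return X
```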

  14. Faster Double-Size Bipartite Multiplication out of Montgomery Multipliers

    Science.gov (United States)

    Yoshino, Masayuki; Okeya, Katsuyuki; Vuillaume, Camille

    This paper proposes novel algorithms for computing double-size modular multiplications with few modulus-dependent precomputations. Low-end devices such as smartcards are usually equipped with hardware Montgomery multipliers. However, due to progress in mathematical attacks, security institutions such as NIST have steadily demanded longer bit-lengths for public-key cryptography, making the multipliers quickly obsolete. In an attempt to extend the lifespan of such multipliers, double-size techniques compute modular multiplications with twice the bit-length of the multipliers. Techniques are known for extending the bit-length of classical Euclidean multipliers, of Montgomery multipliers and of their combination, namely bipartite multipliers. However, unlike classical and bipartite multiplications, Montgomery multiplications involve modulus-dependent precomputations, which account for a large part of an RSA encryption or signature verification. The proposed double-size technique simulates double-size multiplications based on single-size Montgomery multipliers, and yet the precomputations are essentially free: in a 2048-bit RSA encryption or signature verification with public exponent e = 2^16 + 1, the proposal with a 1024-bit Montgomery multiplier is at least 1.5 times faster than previous double-size Montgomery multiplications.
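
    For readers unfamiliar with the primitive being extended, single-size Montgomery reduction (REDC) and multiplication look as follows; this is the textbook algorithm in Python, not the paper's double-size construction:

```python
def redc(t, n, r_bits, n_prime):
    """Montgomery reduction: returns t * R^-1 mod n, with R = 2^r_bits
    and n_prime = -n^-1 mod R. Requires t < n * R."""
    mask = (1 << r_bits) - 1
    m = ((t & mask) * n_prime) & mask   # m = t * (-n^-1) mod R
    u = (t + m * n) >> r_bits           # exact division by R
    return u - n if u >= n else u

def mont_mul(a, b, n, r_bits, n_prime):
    """Multiply two residues already in Montgomery form."""
    return redc(a * b, n, r_bits, n_prime)

# Small demo: compute 7 * 11 mod 13 through the Montgomery domain.
n, r_bits = 13, 8
R = 1 << r_bits
n_prime = (-pow(n, -1, R)) % R          # -n^-1 mod R (Python >= 3.8)
a_m, b_m = (7 * R) % n, (11 * R) % n    # convert to Montgomery form
c_m = mont_mul(a_m, b_m, n, r_bits, n_prime)
assert redc(c_m, n, r_bits, n_prime) == (7 * 11) % n
```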

  15. The advantages, and challenges, in using multiple techniques in the estimation of surface water-groundwater fluxes.

    Science.gov (United States)

    Shanafield, M.; Cook, P. G.

    2014-12-01

    When estimating surface water-groundwater fluxes, the use of complementary techniques helps to fill in the uncertainties of any individual method and to gain a better understanding of the spatial and temporal variability in a system. It can also prevent the loss of data during infrequent and unpredictable flow events. For example, much of arid Australia relies on groundwater, which is recharged by streamflow through ephemeral streams during flood events. Three recent surface water/groundwater investigations in arid Australian systems provide good examples of how using multiple field and analysis techniques can help to characterize surface water-groundwater fluxes more fully, but can also yield conflicting values over varying spatial and temporal scales. In the Pilbara region of Western Australia, combining streambed radon measurements, vertical heat transport modeling, and a tracer test helped constrain very short streambed residence times, on the order of minutes. Spatial and temporal variability between the methods yielded hyporheic exchange estimates between 10^-4 m^2 s^-1 and 4.2 x 10^-2 m^2 s^-1. In South Australia, three-dimensional heat transport modeling captured heterogeneity within 20 square meters of streambed, identifying areas of sandy soil (flux rates of up to 3 m d^-1) and clay (flux rates too slow to be accurately characterized). Streamflow front modeling showed similar flux rates, but averaged over 100 m long stream segments for a 1.6 km reach. Finally, in central Australia, several methods were used to determine whether any of the flow down a highly ephemeral river contributes to regional groundwater recharge, showing that evaporation and evapotranspiration likely account for all of the infiltration into the perched aquifer. Lessons learned from these examples demonstrate the influence of the spatial and temporal variability between techniques on the estimated fluxes.

  16. Decoupling Solar Variability and Instrument Trends Using the Multiple Same-Irradiance-Level (MuSIL) Analysis Technique

    Science.gov (United States)

    Woods, Thomas N.; Eparvier, Francis G.; Harder, Jerald; Snow, Martin

    2018-05-01

    The solar spectral irradiance (SSI) dataset is a key record for studying and understanding the energetics and radiation balance in Earth's environment. Understanding the long-term variations of the SSI over timescales of the 11-year solar activity cycle and longer is critical for many Sun-Earth research topics. Satellite measurements of the SSI have been made since the 1970s, most of them in the ultraviolet, but recently also in the visible and near-infrared. A limiting factor for the accuracy of previous solar variability results is the uncertainty of the instrument degradation corrections, which are fairly large relative to the amount of solar cycle variability at some wavelengths. The primary objective of this investigation has been to separate solar cycle variability from any residual uncorrected instrumental trends in the SSI measurements from the Solar Radiation and Climate Experiment (SORCE) mission and the Thermosphere, Ionosphere, Mesosphere Energetics and Dynamics (TIMED) mission. A new technique called the Multiple Same-Irradiance-Level (MuSIL) analysis has been developed, which examines an SSI time series at different levels of solar activity to extract long-term trends in an SSI record; the most common result is a downward trend that most likely stems from uncorrected instrument degradation. This technique has been applied to each wavelength in the SSI records from SORCE (2003 - present) and TIMED (2002 - present) to provide new solar cycle variability results between 27 nm and 1600 nm with a resolution of about 1 nm at most wavelengths. The technique, which was validated against the highly accurate total solar irradiance (TSI) record, has an estimated relative uncertainty of about 5% of the measured solar cycle variability. The MuSIL results are further validated by comparing the new solar cycle variability results from different solar cycles.
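
    The core idea, comparing irradiance at epochs of equal solar activity so that any residual slope exposes instrument drift, can be caricatured in a few lines; the activity proxy and tolerance are assumptions, not the authors' implementation:

```python
import numpy as np

def same_level_trend(t, ssi, proxy, level, tol=0.02):
    """Linear trend of SSI across epochs where an activity proxy sits
    near one fixed level (needs at least two such epochs); a nonzero
    slope suggests uncorrected instrument drift rather than the Sun."""
    sel = np.abs(proxy - level) < tol
    slope, _ = np.polyfit(t[sel], ssi[sel], 1)
    return slope
```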

  17. Approach and landing guidance design for reusable launch vehicle using multiple sliding surfaces technique

    Directory of Open Access Journals (Sweden)

    Xiangdong LIU

    2017-08-01

    Full Text Available An autonomous approach and landing (A&L) guidance law is presented in this paper for landing an unpowered reusable launch vehicle (RLV) at the designated runway touchdown point. Considering the full nonlinear point-mass dynamics, a guidance scheme is developed in three-dimensional space. In order to guarantee a successful A&L manoeuvre, the multiple sliding surfaces guidance (MSSG) technique is applied to derive the closed-loop guidance law, which stems from higher-order sliding mode control theory and has the advantage of a finite-time reaching property. The global stability of the proposed guidance approach is proved by a Lyapunov-based method. The designed guidance law can generate new trajectories on-line without any off-line analysis beyond the boundary conditions of the A&L phase and the instantaneous states of the RLV. The guidance law is therefore flexible enough to target different touchdown points on the runway and is capable of dealing with large initial condition errors resulting from the previous flight phase. Finally, simulation results show the effectiveness of the proposed guidance law in different scenarios.

  18. Visualizing Matrix Multiplication

    Science.gov (United States)

    Daugulis, Peteris; Sondore, Anita

    2018-01-01

    Efficient visualizations of computational algorithms are important tools for students, educators, and researchers. In this article, we point out an innovative visualization technique for matrix multiplication. This method differs from the standard, formal approach by using block matrices to make computations more visual. We find this method a…
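
    The block-matrix view advocated here is easy to check numerically; a plain NumPy sketch (not the article's notation) verifies the 2x2 block identity C11 = A11 B11 + A12 B21:

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.integers(0, 5, (4, 4))
B = rng.integers(0, 5, (4, 4))

# Split each matrix into four 2x2 blocks and multiply blockwise.
A11, A12, A21, A22 = A[:2, :2], A[:2, 2:], A[2:, :2], A[2:, 2:]
B11, B12, B21, B22 = B[:2, :2], B[:2, 2:], B[2:, :2], B[2:, 2:]
C11 = A11 @ B11 + A12 @ B21

assert np.array_equal(C11, (A @ B)[:2, :2])  # matches the full product
```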

  19. System identification via sparse multiple kernel-based regularization using sequential convex optimization techniques

    DEFF Research Database (Denmark)

    Chen, Tianshi; Andersen, Martin Skovgaard; Ljung, Lennart

    2014-01-01

    Model estimation and structure detection with short data records are two issues that receive increasing interest in System Identification. In this paper, a multiple kernel-based regularization method is proposed to handle those issues. Multiple kernels are conic combinations of fixed kernels...

  20. An efficient method for generalized linear multiplicative programming problem with multiplicative constraints.

    Science.gov (United States)

    Zhao, Yingfeng; Liu, Sanyang

    2016-01-01

    We present a practical branch and bound algorithm for globally solving the generalized linear multiplicative programming problem with multiplicative constraints. To solve the problem, a relaxation programming problem that is equivalent to a linear program is proposed by utilizing a new two-phase relaxation technique. In the algorithm, lower and upper bounds are obtained simultaneously by solving a number of linear relaxation programming problems. Global convergence is proved, and the results of some sample examples and a small random experiment show that the proposed algorithm is feasible and efficient.

  1. ENVIRONMENT INDEPENDENT DIRECTIONAL GESTURE RECOGNITION TECHNIQUE FOR ROBOTS USING MULTIPLE DATA FUSION

    Directory of Open Access Journals (Sweden)

    Kishore Abishek

    2013-10-01

    Full Text Available A technique is presented here for directional gesture recognition by robots. The technique usually employed now is camera vision with image processing, whose major disadvantage is the environmental constraint: a machine vision system has many lighting constraints, so it can only be used in a conditioned environment where the lighting is compatible with the camera system. The technique presented here is designed to work in any environment. It does not employ machine vision; instead, it utilizes a set of sensors fixed on the hands of a human to identify the direction in which the hand is pointing. The technique uses a cylindrical coordinate system to determine the direction precisely, and a programmed computing block in the robot identifies the direction accurately within the given range.
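
    To make the cylindrical-coordinate step concrete, the sketch below converts a pointing vector into cylindrical components; the vector and function names are hypothetical, since the paper does not specify its sensor model:

```python
import numpy as np

def pointing_direction(v):
    """Cylindrical decomposition of a pointing vector v = (x, y, z):
    returns (rho, phi, z); phi is the azimuth the robot steers toward."""
    x, y, z = v
    rho = np.hypot(x, y)      # radial distance in the horizontal plane
    phi = np.arctan2(y, x)    # azimuth, in radians
    return rho, phi, z
```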

  2. Efficient Time-Domain Ray-Tracing Technique for the Analysis of Ultra-Wideband Indoor Environments including Lossy Materials and Multiple Effects

    Directory of Open Access Journals (Sweden)

    F. Saez de Adana

    2009-01-01

    Full Text Available This paper presents an efficient application of the Time-Domain Uniform Theory of Diffraction (TD-UTD for the analysis of Ultra-Wideband (UWB mobile communications for indoor environments. The classical TD-UTD formulation is modified to include the contribution of lossy materials and multiple-ray interactions with the environment. The electromagnetic analysis is combined with a ray-tracing acceleration technique to treat realistic and complex environments. The validity of this method is tested with measurements performed inside the Polytechnic building of the University of Alcala and shows good performance of the model for the analysis of UWB propagation.

  3. Multiple variables data sets visualization in ROOT

    International Nuclear Information System (INIS)

    Couet, O

    2008-01-01

    The ROOT graphical framework provides support for many different functions including basic graphics, high-level visualization techniques, output on files, 3D viewing etc. They use well-known world standards to render graphics on screen, to produce high-quality output files, and to generate images for Web publishing. Many techniques allow visualization of all the basic ROOT data types, but the graphical framework was still a bit weak in the visualization of multiple variables data sets. This paper presents latest developments done in the ROOT framework to visualize multiple variables (>4) data sets

  4. Radiotherapy in the treatment of multiple myeloma.

    Science.gov (United States)

    Bosch, A; Frias, Z

    1988-12-01

    Fifty-nine patients with multiple myeloma referred for treatment of painful bony lesions received irradiation to 95 local areas, and 16 of the 59 were irradiated using hemibody techniques. Pain relief was obtained in practically all of the irradiated regions. Most local areas were treated to doses of 3000 cGy in 10 to 15 fractions. Patients with generalized pain due to multiple-site involvement were treated with single-dose hemibody irradiation, to doses of 600 cGy to the upper hemibody and 800 cGy to the lower hemibody. This treatment was well tolerated and side effects were minimal. Median survival from diagnosis was 30 months, and survival at 1, 3, and 5 years was 80%, 42%, and 12%, respectively. Key articles on the radiation therapy of multiple myeloma are reviewed and discussed. Since all patients eventually relapse after chemotherapy, the role of radiotherapy using present techniques should be fully evaluated and considered as an alternative in the primary treatment of multiple myeloma.

  5. fcGENE: a versatile tool for processing and transforming SNP datasets.

    Directory of Open Access Journals (Sweden)

    Nab Raj Roshyara

    Full Text Available Modern analysis of high-dimensional SNP data requires a number of biometrical and statistical methods such as pre-processing, analysis of population structure, association analysis and genotype imputation. Software used for these purposes often relies on specific and incompatible input and output data formats, so extensive data management, including multiple format conversions, is necessary during analyses. In order to support fast and efficient management and bio-statistical quality control of high-dimensional SNP data, we developed the publicly available software fcGENE using the C++ object-oriented programming language. This software simplifies and automates the use of different existing analysis packages, especially during the workflow of genotype imputation and the corresponding analyses. fcGENE transforms SNP data and imputation results into the different formats required by a large variety of analysis packages such as PLINK, SNPTEST, HAPLOVIEW, EIGENSOFT and GenABEL, and by tools used for genotype imputation such as MaCH, IMPUTE, BEAGLE and others. Data management tasks like merging, splitting, and extracting SNP and pedigree information can be performed. fcGENE also supports a number of bio-statistical quality control processes and quality-based filtering at SNP and sample level. The tool also generates templates of the commands required to run specific software packages, especially those required for genotype imputation. We demonstrate the functionality of fcGENE with example workflows of SNP data analyses and provide a comprehensive manual of commands, options and applications. We have developed a user-friendly open-source software, fcGENE, which comprehensively supports SNP data management, quality control and analysis workflows. Download statistics and corresponding feedback indicate that the software is highly recognised and extensively applied by the scientific community.

  6. White matter tract-specific quantitative analysis in multiple sclerosis: Comparison of optic radiation reconstruction techniques.

    Directory of Open Access Journals (Sweden)

    Chenyu Wang

    Full Text Available The posterior visual pathway is commonly affected by multiple sclerosis (MS pathology that results in measurable clinical and electrophysiological impairment. Due to its highly structured retinotopic mapping, the visual pathway represents an ideal substrate for investigating patho-mechanisms in MS. Therefore, a reliable and robust imaging segmentation method for in-vivo delineation of the optic radiations (OR is needed. However, diffusion-based tractography approaches, which are typically used for OR segmentation are confounded by the presence of focal white matter lesions. Current solutions require complex acquisition paradigms and demand expert image analysis, limiting application in both clinical trials and clinical practice. In the current study, using data acquired in a clinical setting on a 3T scanner, we optimised and compared two approaches for optic radiation (OR reconstruction: individual probabilistic tractography-based and template-based methods. OR segmentation results were applied to subjects with MS and volumetric and diffusivity parameters were compared between OR segmentation techniques. Despite differences in reconstructed OR volumes, both OR lesion volume and OR diffusivity measurements in MS subjects were highly comparable using optimised probabilistic tractography-based, and template-based, methods. The choice of OR reconstruction technique should be determined primarily by the research question and the nature of the available dataset. Template-based approaches are particularly suited to the semi-automated analysis of large image datasets and have utility even in the absence of dMRI acquisitions. Individual tractography methods, while more complex than template based OR reconstruction, permit measurement of diffusivity changes along fibre bundles that are affected by specific MS lesions or other focal pathologies.

  7. Search Strategy of Detector Position For Neutron Source Multiplication Method by Using Detected-Neutron Multiplication Factor

    International Nuclear Information System (INIS)

    Endo, Tomohiro

    2011-01-01

    In this paper, an alternative definition of a neutron multiplication factor, the detected-neutron multiplication factor k_det, is introduced for the neutron source multiplication method (NSM). Using k_det, a strategy for finding an appropriate detector position for NSM is also proposed. The NSM is one of the practical subcritical measurement techniques: it does not require any special equipment other than a stationary external neutron source and an ordinary neutron detector. Additionally, the NSM is based on steady-state analysis, so the technique is very suitable for quasi-real-time measurement. It is noted that correction factors play important roles in accurately estimating subcriticality from the measured neutron count rates. The present paper aims to clarify how to correct the subcriticality measured by the NSM, the physical meaning of the correction factors, and how to reduce the impact of the correction factors by setting a neutron detector at an appropriate detector position
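
    The point-kinetics relation that underlies NSM can be stated compactly; this is the textbook simplification, deliberately omitting the correction factors the abstract emphasizes:

```latex
% Stationary count rate C from source strength S, detector efficiency
% \varepsilon, and multiplication 1/(1 - k_{eff}):
C = \frac{\varepsilon S}{1 - k_{\mathrm{eff}}}
% Hence, between a known reference state (0) and a measured state (1):
\frac{1 - k_{\mathrm{eff}}^{(1)}}{1 - k_{\mathrm{eff}}^{(0)}}
  = \frac{C^{(0)}}{C^{(1)}}
```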

  8. Multiple equilibria of divertor plasmas

    International Nuclear Information System (INIS)

    Vu, H.X.; Prinja, A.K.

    1993-01-01

    A one-dimensional, two-fluid transport model with a temperature-dependent neutral recycling coefficient is shown to give rise to multiple equilibria of divertor plasmas (bifurcation). Numerical techniques for obtaining these multiple equilibria and for examining their stability are presented. Although these numerical techniques are well known to the scientific community, this is the first time they have been applied to divertor plasma modeling to show the existence of multiple equilibria and the stability of the solutions. Numerical and approximate analytical solutions of the present one-dimensional transport model both indicate that there exist three steady-state solutions corresponding to (1) a high-temperature, low-density equilibrium, (2) a low-temperature, high-density equilibrium, and (3) an intermediate-temperature equilibrium. While both the low-temperature and the high-temperature equilibria are stable with respect to small perturbations in the plasma conditions, the intermediate-temperature equilibrium is physically unstable: any small perturbation about this equilibrium will cause a transition toward either the high-temperature or the low-temperature equilibrium

  9. Robust multiple frequency multiple power localization schemes in the presence of multiple jamming attacks.

    Directory of Open Access Journals (Sweden)

    Ahmed Abdulqader Hussein

    Full Text Available Localization in wireless sensor networks is a vital area attracting considerable research interest and expected to expand further with the growth of its applications. As localization gains prominence in wireless sensor networks, it becomes vulnerable to jamming attacks. Jamming attacks disrupt communication between sender and receiver and deeply impact the localization process, leading to large errors in the estimated sensor node positions. Detection and elimination of jamming influence are therefore indispensable. Range-based techniques, especially those using Received Signal Strength (RSS), are severely affected by these attacks. This paper proposes algorithms based on Combination Multiple Frequency Multiple Power Localization (C-MFMPL) and Step Function Multiple Frequency Multiple Power Localization (SF-MFMPL). The algorithms have been tested in the presence of multiple types of jamming attacks, including capture-and-replay, random and constant jammers, over a log-normal shadow fading propagation model. In order to overcome the impact of random and constant jammers, the proposed method uses two sets of frequencies shared by the implemented anchor nodes to average the RSS readings over the transmitted frequencies. In addition, three stages of filters are used to cope with the replayed beacons caused by the capture-and-replay jammers. In this paper the localization performance of the proposed algorithms in the ideal case, defined as the absence of jamming attacks, is compared with the performance under jamming attacks. The main contribution of this paper is to achieve robust localization performance in the presence of multiple jamming attacks under a log-normal shadow fading environment with different simulation conditions and scenarios.
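
    At its core, RSS ranging inverts a log-normal path-loss model, which is exactly what jamming corrupts; a minimal sketch with assumed calibration values (p0, n) follows:

```python
def rss_to_distance(rss_dbm, p0=-40.0, n=2.7, d0=1.0):
    """Invert the log-normal path-loss model
    RSS(d) = p0 - 10 * n * log10(d / d0).
    p0 (RSS at reference distance d0) and the path-loss exponent n
    are assumed calibration values, not the paper's parameters."""
    return d0 * 10 ** ((p0 - rss_dbm) / (10 * n))

# Averaging RSS over the two frequency sets, as the paper proposes,
# would happen before this inversion.
```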

  10. Update on rehabilitation in multiple sclerosis.

    Science.gov (United States)

    Donzé, Cécile

    2015-04-01

    Given that mobility impairment is a hallmark of multiple sclerosis, people with this disease are likely to benefit from rehabilitation therapy throughout the course of their illness. This review provides an update on rehabilitation focused on balance and walking impairment. Classical rehabilitation, focusing on muscle rehabilitation and neurotherapeutic facilitation, is effective and recommended. Other techniques have not proved their superiority: transcutaneous neurostimulation, repetitive magnetic stimulation, electromagnetic therapy, whole-body vibration and robot-assisted gait rehabilitation need more studies before conclusions can be drawn. Cooling therapy, hydrotherapy, orthoses and textured insoles could complement other techniques in specific conditions. A multidisciplinary rehabilitation programme provides positive effects and high satisfaction for patients with multiple sclerosis but needs more evaluation. New technologies using serious games and telerehabilitation seem to be an interesting way to promote physical activity, self-management and quality of life. Rehabilitation, like any other therapy, needs regular clinical evaluation to adapt the programme and propose appropriate techniques. Moreover, the objectives of rehabilitation need to be decided with the patient, with realistic expectations. Copyright © 2015 Elsevier Masson SAS. All rights reserved.

  11. Immunoassay separation technique

    International Nuclear Information System (INIS)

    1977-01-01

    A method for effecting the immunoassay of a multiplicity of samples, each possibly containing an antigen or an antibody to be assayed, is discussed. Each sample is incubated with a solution containing a detectable antigen or antibody to form a multiplicity of mixtures, each mixture containing as components antigen-antibody complex, non-complexed antigen and non-complexed antibody. At least one of the components of the mixture is separated by adsorption. Thereafter, the quantity of detectable antigen or antibody is determined in the non-adsorbed portion of the mixture. An improvement compared with other techniques is the continuous and sequential separation of at least one component from each of the multiplicity of mixtures

  12. The comparison of cardiovascular risk scores using two methods of substituting missing risk factor data in patient medical records

    Directory of Open Access Journals (Sweden)

    Andrew Dalton

    2011-07-01

    Conclusions A simple method of substituting missing risk factor data can produce reliable estimates of CVD risk scores. Targeted screening for high CVD risk, using pre-existing electronic medical record data, does not require multiple imputation methods in risk estimation.
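
    The "simple method of substituting missing risk factor data" can be as basic as an age-sex group mean; a hypothetical pandas sketch (invented variable names, not the study's dataset) follows:

```python
import pandas as pd

df = pd.DataFrame({"sex": ["F", "F", "M", "M"],
                   "age_band": ["50-59"] * 4,
                   "sbp": [128.0, None, 141.0, None]})

# Replace each missing systolic blood pressure with its group mean,
# the kind of simple substitution compared against MI in the study.
df["sbp"] = df.groupby(["sex", "age_band"])["sbp"].transform(
    lambda s: s.fillna(s.mean()))
```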

  13. Hypertension in Pregnancy among Rural Women in Katsina State ...

    African Journals Online (AJOL)

    2017-06-30

    Jun 30, 2017 ... studies to the best of our search were in southern Nigeria. [7,12 ... Nigeria has the worst statistics of maternal death in ... data on socio-demographic, medical history and risk ... multiple imputations for a few missing values.

  14. Multiple k Nearest Neighbor Query Processing in Spatial Network Databases

    DEFF Research Database (Denmark)

    Xuegang, Huang; Jensen, Christian Søndergaard; Saltenis, Simonas

    2006-01-01

    This paper concerns the efficient processing of multiple k nearest neighbor queries in a road-network setting. The assumed setting covers a range of scenarios such as the one where a large population of mobile service users that are constrained to a road network issue nearest-neighbor queries for points of interest that are accessible via the road network. Given multiple k nearest neighbor queries, the paper proposes progressive techniques that selectively cache query results in main memory and subsequently reuse these for query processing. The paper initially proposes techniques for the case where an upper bound on k is known a priori and then extends the techniques to the case where this is not so. Based on empirical studies with real-world data, the paper offers insight into the circumstances under which the different proposed techniques can be used with advantage for multiple k nearest neighbor queries.

  15. The use of multiple respiratory inhalers requiring different inhalation techniques has an adverse effect on COPD outcomes

    Directory of Open Access Journals (Sweden)

    Bosnic-Anticevich S

    2016-12-01

    Full Text Available Sinthia Bosnic-Anticevich,1 Henry Chrystyn,2 Richard W Costello,3,4 Myrna B Dolovich,5 Monica J Fletcher,6 Federico Lavorini,7 Roberto Rodríguez-Roisin,8 Dermot Ryan,9,10 Simon Wan Yau Ming,2 David B Price2,11 1Woolcock Institute of Medical Research, School of Medical Sciences, University of Sydney and Sydney Local Health District, Sydney, NSW, Australia; 2Observational and Pragmatic Research Institute Pte Ltd, Singapore; 3RCSI Medicine, Royal College of Surgeons, 4RCSI Education & Research Centre, Beaumont Hospital, Beaumont, Dublin, Ireland; 5Department of Medicine, Respirology, McMaster University, ON, Canada; 6Education for Health, Warwick, UK; 7Department of Experimental and Clinical Medicine, University of Florence, Florence, Italy; 8Respiratory Institute, Hospital Clinic, Universitat de Barcelona, Barcelona, Spain; 9Optimum Patient Care, Cambridge, 10Centre for Population Health Sciences, University of Edinburgh, Edinburgh, 11Academic Primary Care, University of Aberdeen, Aberdeen, UK Background: Patients with COPD may be prescribed multiple inhalers as part of their treatment regimen, which require different inhalation techniques. Previous literature has shown that the effectiveness of inhaled treatment can be adversely affected by incorrect inhaler technique. Prescribing a range of device types could worsen this problem, leading to poorer outcomes in COPD patients, but the impact is not yet known. Aims: To compare clinical outcomes of COPD patients who use devices requiring similar inhalation technique with those who use devices with mixed techniques. Methods: A matched cohort design was used, with 2 years of data from the Optimum Patient Care Research Database. Matching variables were established from a baseline year of follow-up data, and two cohorts were formed: a “similar-devices cohort” and a “mixed-devices cohort”. COPD-related events were recorded during an outcome year of follow-up. The primary outcome measure was an

  16. Inventory verification measurements using neutron multiplicity counting

    International Nuclear Information System (INIS)

    Ensslin, N.; Foster, L.A.; Harker, W.C.; Krick, M.S.; Langner, D.G.

    1998-01-01

    This paper describes a series of neutron multiplicity measurements of large plutonium samples at the Los Alamos Plutonium Facility. The measurements were corrected for bias caused by neutron energy spectrum shifts and nonuniform multiplication, and are compared with calorimetry/isotopics. The results show that multiplicity counting can increase measurement throughput and yield good verification results for some inventory categories. The authors provide recommendations on the future application of the technique to inventory verification

  17. Expected precision of neutron multiplicity measurements of waste drums

    International Nuclear Information System (INIS)

    Ensslin, N.; Krick, M.S.; Menlove, H.O.

    1995-01-01

    DOE facilities are beginning to apply passive neutron multiplicity counting techniques to the assay of plutonium scrap and residues. There is also considerable interest in applying this new measurement technique to 208-liter waste drums. The additional information available from multiplicity counting could flag the presence of shielding materials or improve assay accuracy by correcting for matrix effects such as (α,n) induced fission or detector efficiency variations. The potential for multiplicity analysis of waste drums, and the importance of better detector design, can be estimated by calculating the expected assay precision using a Figure of Merit code for assay variance. This paper reports results obtained as a function of waste drum content and detector characteristics. We find that multiplicity analysis of waste drums is feasible if a high-efficiency neutron counter is used. However, results are significantly poorer if the multiplicity analysis must be used to solve for detection efficiency

  18. A new islanding detection technique for multiple mini hydro based on rate of change of reactive power and load connecting strategy

    International Nuclear Information System (INIS)

    Laghari, J.A.; Mokhlis, H.; Bakar, A.H.A.; Karimi, M.

    2013-01-01

    Highlights: • The requirements for DG interconnection with the existing power system are discussed. • Various islanding detection techniques are discussed with their merits and demerits. • A new islanding detection strategy is proposed for multiple mini-hydro type DGs. • The proposed strategy is based on dQ/dt and a load-connecting strategy. • The effectiveness of the strategy is verified on various other cases. - Abstract: The interconnection of distributed generation (DG) into distribution networks is undergoing a rapid global expansion. It enhances the system's reliability while simultaneously reducing pollution problems related to the generation of electrical power. To fully utilize the benefits of DGs, certain technical issues need to be addressed, one of the most important being islanding detection. This paper presents a new islanding detection technique that is suitable for multiple mini-hydro type DG units. The proposed strategy is based on the rate of change of reactive power (dQ/dt) and a load-connecting strategy to detect islanding within the system. For a large power mismatch, islanding is detected by the rate of change of reactive power alone. For a close power mismatch, however, the rate of change of reactive power initiates a load-connecting strategy, which in turn alters the load on the distribution network. This load variation causes a variation in the rate of change of reactive power, which is used to distinguish islanding from other events. The simulation results show that the proposed strategy is effective in detecting islanding occurrences in a distribution network
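
    The detection core reduces to thresholding the rate of change of reactive power; the sketch below is schematic, with an assumed per-unit threshold rather than the paper's settings:

```python
import numpy as np

def islanding_suspected(q, t, threshold=0.3):
    """Return a boolean mask of samples where |dQ/dt| exceeds the
    threshold. For close power mismatches the paper's scheme would
    then trigger the load-connecting step rather than trip directly.
    q: reactive power samples [pu]; t: timestamps [s]."""
    dq_dt = np.gradient(q, t)   # numerical derivative of reactive power
    return np.abs(dq_dt) > threshold
```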

  19. One-dimensional multiple-well oscillators: A time-dependent

    Indian Academy of Sciences (India)

    ... quantum mechanical multiple-well oscillators. An imaginary-time evolution technique, coupled with the minimization of the energy expectation value to reach a global minimum, subject to an orthogonality constraint (for excited states), has been employed. Pseudodegeneracy in symmetric, deep multiple-well potentials, probability ...
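
    Imaginary-time evolution is compact to demonstrate: repeatedly applying psi <- psi - dt*H*psi and renormalizing projects out the ground state. A minimal finite-difference sketch for an assumed symmetric double well (hbar = m = 1; not the paper's implementation):

```python
import numpy as np

x = np.linspace(-4.0, 4.0, 400)
dx = x[1] - x[0]
V = 0.25 * (x**2 - 2.0) ** 2    # assumed double-well potential, minima at x = +-sqrt(2)

psi = np.exp(-x**2)             # arbitrary trial state
dt = 0.25 * dx**2               # imaginary-time step, within explicit-Euler stability

for _ in range(20000):
    lap = (np.roll(psi, 1) - 2 * psi + np.roll(psi, -1)) / dx**2
    psi += dt * (0.5 * lap - V * psi)      # psi <- psi - dt * H psi
    psi /= np.sqrt(np.sum(psi**2) * dx)    # keep the norm equal to 1

lap = (np.roll(psi, 1) - 2 * psi + np.roll(psi, -1)) / dx**2
energy = np.sum(psi * (-0.5 * lap + V * psi)) * dx
print(f"ground-state energy estimate: {energy:.4f}")
```

    Excited states, as the abstract notes, would additionally be projected orthogonal to the states already found at each step.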

  20. Gamma spectroscopic studies of the neutron-deficient g-g nucleus 74Kr by means of a neutron multiplicity measurement technique

    International Nuclear Information System (INIS)

    Roth, J.

    1981-01-01

    The g-g nucleus 74Kr was studied by means of the reaction 58Ni(19F, p2nγ)74Kr. In order to perform gamma-spectroscopic studies of neutron-deficient nuclei like 74Kr, a neutron multiplicity measurement technique was developed. Besides γ singles spectra, γ excitation functions, γ-γ coincidences, γ angular distributions, and lifetime measurements, all measurements were also taken in coincidence with up to two neutrons by means of this technique. From these data an extended level scheme with 17 newly found excited states could be extracted, and spins and parities could be assigned to all levels. The mean lifetimes of the four energetically lowest levels of the yrast cascade could be determined. A double backbending in the yrast cascade was interpreted as a crossing of the g9/2 bands. The irregularities in the lower part of the yrast band correspond to the shape coexistence picture. The results were considered in connection with the systematics of the even krypton isotopes and compared with a two-quasiparticle-plus-rotor model calculation. (HSI)

  1. Applying contemporary statistical techniques

    CERN Document Server

    Wilcox, Rand R

    2003-01-01

    Applying Contemporary Statistical Techniques explains why traditional statistical methods are often inadequate or outdated when applied to modern problems. Wilcox demonstrates how new and more powerful techniques address these problems far more effectively, making these modern robust methods understandable, practical, and easily accessible. * Assumes no previous training in statistics * Explains how and why modern statistical methods provide more accurate results than conventional methods * Covers the latest developments on multiple comparisons * Includes recent advances

  2. Immunochip analyses identify a novel risk locus for primary biliary cirrhosis at 13q14, multiple independent associations at four established risk loci and epistasis between 1p31 and 7q32 risk variants

    Science.gov (United States)

    Juran, Brian D.; Hirschfield, Gideon M.; Invernizzi, Pietro; Atkinson, Elizabeth J.; Li, Yafang; Xie, Gang; Kosoy, Roman; Ransom, Michael; Sun, Ye; Bianchi, Ilaria; Schlicht, Erik M.; Lleo, Ana; Coltescu, Catalina; Bernuzzi, Francesca; Podda, Mauro; Lammert, Craig; Shigeta, Russell; Chan, Landon L.; Balschun, Tobias; Marconi, Maurizio; Cusi, Daniele; Heathcote, E. Jenny; Mason, Andrew L.; Myers, Robert P.; Milkiewicz, Piotr; Odin, Joseph A.; Luketic, Velimir A.; Bacon, Bruce R.; Bodenheimer, Henry C.; Liakina, Valentina; Vincent, Catherine; Levy, Cynthia; Franke, Andre; Gregersen, Peter K.; Bossa, Fabrizio; Gershwin, M. Eric; deAndrade, Mariza; Amos, Christopher I.; Lazaridis, Konstantinos N.; Seldin, Michael F.; Siminovitch, Katherine A.

    2012-01-01

    To further characterize the genetic basis of primary biliary cirrhosis (PBC), we genotyped 2426 PBC patients and 5731 unaffected controls from three independent cohorts using a single nucleotide polymorphism (SNP) array (Immunochip) enriched for autoimmune disease risk loci. Meta-analysis of the genotype data sets identified a novel disease-associated locus near the TNFSF11 gene at 13q14, provided evidence for association at six additional immune-related loci not previously implicated in PBC and confirmed associations at 19 of 22 established risk loci. Results of conditional analyses also provided evidence for multiple independent association signals at four risk loci, with haplotype analyses suggesting independent SNP effects at the 2q32 and 16p13 loci, but complex haplotype-driven effects at the 3q25 and 6p21 loci. By imputing classical HLA alleles from this data set, four class II alleles independently contributing to the association signal from this region were identified. Imputation of genotypes at the non-HLA loci also provided additional associations, but none with stronger effects than the genotyped variants. An epistatic interaction between the IL12RB2 risk locus at 1p31 and the IRF5 risk locus at 7q32 was also identified and suggests a complementary effect of these loci in predisposing to disease. These data expand the repertoire of genes with potential roles in PBC pathogenesis that need to be explored by follow-up biological studies. PMID:22936693

  3. Optimal Design of Low-Density SNP Arrays for Genomic Prediction: Algorithm and Applications.

    Directory of Open Access Journals (Sweden)

    Xiao-Lin Wu

    Full Text Available Low-density (LD) single nucleotide polymorphism (SNP) arrays provide a cost-effective solution for genomic prediction and selection, but algorithms and computational tools are needed for the optimal design of LD SNP chips. A multiple-objective, local optimization (MOLO) algorithm was developed for the design of optimal LD SNP chips that can be imputed accurately to medium-density (MD) or high-density (HD) SNP genotypes for genomic prediction. The objective function facilitates maximization of the non-gap map length and the system information for the SNP chip, the latter computed either as locus-averaged (LASE) or haplotype-averaged Shannon entropy (HASE) and adjusted for uniformity of the SNP distribution. HASE performed better than LASE with ≤1,000 SNPs, but required considerably more computing time; nevertheless, the differences diminished when >5,000 SNPs were selected. Optimization was accomplished conditionally on the presence of SNPs that were obligated to each chromosome. The frame location of SNPs on a chip can be either uniform (evenly spaced) or non-uniform. For the latter design, a tunable empirical Beta distribution was used to guide the location distribution of frame SNPs such that both ends of each chromosome were enriched with SNPs. The SNP distribution on each chromosome was finalized through the objective function, which was locally and empirically maximized. This MOLO algorithm was capable of selecting a set of approximately evenly-spaced and highly-informative SNPs, which in turn led to increased imputation accuracy compared with selection solely of evenly-spaced SNPs. Imputation accuracy increased with LD chip size, and the imputation error rate was extremely low for chips with ≥3,000 SNPs. Assuming that genotyping or imputation error occurs at random, the imputation error rate can be viewed as the upper limit for genomic prediction error. Our results show that about 25% of the imputation error rate was propagated to genomic prediction in an Angus
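
    The locus-averaged entropy criterion can be illustrated directly from allele frequencies (a simplified reading of LASE; the frequencies below are invented):

```python
import numpy as np

def locus_entropy(p):
    """Shannon entropy (bits) of a biallelic SNP with allele frequency p."""
    q = 1.0 - p
    with np.errstate(divide="ignore", invalid="ignore"):
        h = -(p * np.log2(p) + q * np.log2(q))
    return np.nan_to_num(h)   # define H = 0 at p in {0, 1}

freqs = np.array([0.50, 0.30, 0.05, 1.00])   # hypothetical SNP panel
print(locus_entropy(freqs).mean())           # locus-averaged (LASE-like) value
```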

  4. Selecting a Sustainable Disinfection Technique for Wastewater Reuse Projects

    Directory of Open Access Journals (Sweden)

    Jorge Curiel-Esparza

    2014-09-01

    Full Text Available This paper presents an application of the Analytical Hierarchy Process (AHP) integrated with a Delphi process for selecting the best sustainable disinfection technique for wastewater reuse projects. The proposed methodology provides project managers with a tool to evaluate problems with multiple criteria and multiple alternatives that involve non-commensurable decision criteria, with expert opinions playing a major role in the selection of these treatment technologies. Five disinfection techniques for wastewater reuse have been evaluated against each of nine criteria weighted according to the opinions of the consulted experts. Finally, the VIKOR method has been applied to determine a compromise solution and to establish the stability of the results. The expert system proposed to select the optimal disinfection alternative is therefore a hybrid method combining the AHP with the Delphi method and the VIKOR technique, which is shown to be appropriate in realistic scenarios where multiple stakeholders are involved in the selection of a sustainable disinfection technique for wastewater reuse projects.
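
    The AHP weights at the heart of this method are the principal eigenvector of a pairwise comparison matrix; the sketch below uses an invented three-criterion matrix, not the paper's expert judgments:

```python
import numpy as np

# Pairwise comparison matrix for 3 hypothetical criteria (Saaty 1-9 scale).
A = np.array([[1.0,   3.0, 5.0],
              [1/3.0, 1.0, 2.0],
              [1/5.0, 1/2.0, 1.0]])

eigvals, eigvecs = np.linalg.eig(A)
k = np.argmax(eigvals.real)
w = np.abs(eigvecs[:, k].real)
w /= w.sum()                      # priority weights of the criteria

lam_max = eigvals.real[k]
n = A.shape[0]
CI = (lam_max - n) / (n - 1)      # consistency index
RI = 0.58                         # Saaty's random index for n = 3
print("weights:", w, "consistency ratio:", CI / RI)
```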

  5. Regression modeling strategies with applications to linear models, logistic and ordinal regression, and survival analysis

    CERN Document Server

    Harrell , Jr , Frank E

    2015-01-01

    This highly anticipated second edition features new chapters and sections, 225 new references, and comprehensive R software. In keeping with the previous edition, this book is about the art and science of data analysis and predictive modeling, which entails choosing and using multiple tools. Instead of presenting isolated techniques, this text emphasizes problem solving strategies that address the many issues arising when developing multivariable models using real data and not standard textbook examples. It includes imputation methods for dealing with missing data effectively, methods for fitting nonlinear relationships and for making the estimation of transformations a formal part of the modeling process, methods for dealing with "too many variables to analyze and not enough observations," and powerful model validation techniques based on the bootstrap.  The reader will gain a keen understanding of predictive accuracy, and the harm of categorizing continuous predictors or outcomes.  This text realistically...

  6. Characterization of forsythoside A metabolites in rats by a combination of UHPLC-LTQ-Orbitrap mass spectrometer with multiple data processing techniques.

    Science.gov (United States)

    Wang, Fei; Cao, Guang-Shang; Li, Yun; Xu, Lu-Lu; Wang, Zhi-Bin; Liu, Ying; Lu, Jian-Qiu; Zhang, Jia-Yu

    2018-05-01

    Forsythoside A (FTA), the main active constituent isolated from Fructus Forsythiae, has various biological functions, including anti-oxidant, anti-viral and anti-microbial activities. However, research on FTA has mainly focused on the material basis of its therapeutic effects, and FTA metabolites in vivo have not been comprehensively evaluated. Here, a rapid and sensitive method using a UHPLC-LTQ-Orbitrap mass spectrometer with multiple data processing techniques, including high-resolution extracted ion chromatograms, multiple mass defect filters and diagnostic product ions, was developed for the screening and identification of FTA metabolites in rats. As a result, a total of 43 metabolites were identified in biological samples: 42 metabolites in urine, 22 in plasma and 15 in feces. These results demonstrate that FTA underwent a series of in vivo metabolic reactions, including methylation, dimethylation, sulfation, glucuronidation, diglucuronidation, cysteine conjugation and their composite reactions. The research enhances our understanding of FTA metabolism and builds a foundation for further toxicity and safety studies. Copyright © 2017 John Wiley & Sons, Ltd.

  7. A modified discrete algebraic reconstruction technique for multiple grey image reconstruction for limited angle range tomography.

    Science.gov (United States)

    Liang, Zhiting; Guan, Yong; Liu, Gang; Chen, Xiangyu; Li, Fahu; Guo, Pengfei; Tian, Yangchao

    2016-03-01

    The 'missing wedge' due to a restricted rotation range is a major challenge for quantitative analysis of an object using tomography. With prior knowledge of the grey levels, the discrete algebraic reconstruction technique (DART) is able to reconstruct objects accurately from projections over a limited angle range. However, the quality of the reconstructions declines as the number of grey levels increases. In this paper, a modified DART (MDART) is proposed, in which each independent region of homogeneous material, rather than the grey values, is chosen as the object of study. The grey value of each discrete region is estimated from the solution of the linear projection equations, and boundary-pixel updating and grey-value correction are executed alternately in an iterative process. Simulation experiments on binary phantoms as well as multi-grey-level phantoms show that MDART is capable of achieving high-quality reconstructions from projections over a limited angle range. An interesting advance of MDART is that neither prior knowledge of the grey values nor the number of grey levels is necessary.
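
    MDART, like DART, wraps a continuous algebraic solver; the classical ART/Kaczmarz sweep below shows that underlying update (the region-discretization and boundary-update logic of MDART is not reproduced here):

```python
import numpy as np

def kaczmarz(A, b, n_iter=50, x0=None):
    """Basic ART (Kaczmarz) sweeps for the projection system A x = b:
    project the current estimate onto each row's hyperplane in turn."""
    m, n = A.shape
    x = np.zeros(n) if x0 is None else x0.astype(float).copy()
    row_norms = (A**2).sum(axis=1)
    for _ in range(n_iter):
        for i in range(m):
            if row_norms[i] > 0:
                x += (b[i] - A[i] @ x) / row_norms[i] * A[i]
    return x
```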

  8. Locating non-volcanic tremor along the San Andreas Fault using a multiple array source imaging technique

    Science.gov (United States)

    Ryberg, T.; Haberland, C.H.; Fuis, G.S.; Ellsworth, W.L.; Shelly, D.R.

    2010-01-01

    Non-volcanic tremor (NVT) has been observed at several subduction zones and at the San Andreas Fault (SAF). Tremor locations are commonly derived by cross-correlating envelope-transformed seismic traces in combination with source-scanning techniques. Recently, they have also been located using relative relocations with master events, that is, low-frequency earthquakes that are part of the tremor, with locations derived by conventional traveltime-based methods. Here we present a method to locate the sources of NVT using an imaging approach for multiple-array data. The performance of the method is checked with synthetic tests and the relocation of earthquakes. We also applied the method to tremor occurring near Cholame, California. A set of small-aperture arrays (i.e., an array consisting of arrays) installed around Cholame provided the data set for this study. We observed several tremor episodes and located tremor sources in the vicinity of the SAF. During individual tremor episodes, we observed a systematic change of source location, indicating rapid migration of the tremor source along the SAF.

  9. Optimal assignment of multiple utilities in heat exchange networks

    International Nuclear Information System (INIS)

    Salama, A.I.A.

    2009-01-01

    Existing numerical geometry-based techniques, developed by [A.I.A. Salama, Numerical techniques for determining heat energy targets in pinch analysis, Computers and Chemical Engineering 29 (2005) 1861-1866; A.I.A. Salama, Determination of the optimal heat energy targets in heat pinch analysis using a geometry-based approach, Computers and Chemical Engineering 30 (2006) 758-764], have been extended to optimally assign multiple utilities in heat exchange networks (HENs). These techniques use the horizontal shift between the cold composite curve (CC) and the stationary hot CC to determine the HEN optimal energy targets, the grand composite curve (GCC), and the complement grand composite curve (CGCC). The numerical technique developed in this paper is direct and simultaneously determines the optimal heat-energy targets while optimally assigning multiple utilities, in contrast with an existing technique based on sequential assignment of multiple utilities. The technique starts by arranging the HEN stream and target temperatures in ascending order; the resulting set is labelled T. The temperature sets where multiple utilities are introduced are likewise arranged in ascending order and labelled T_ic and T_ih for the cold and hot sides, respectively. The graphical presentation of the results is facilitated by inserting, at each multiple-utility temperature, a perturbed temperature equal to the insertion temperature minus a small perturbation. Furthermore, using the HEN minimum temperature-differential approach (ΔT_min) and the stream heat-capacity flow rates, the presentation is facilitated by the conventional temperature shift of the HEN CCs. The set of temperature-shifted stream, target, and perturbed temperatures in the overlap range between the CCs is labelled T_ol. Using T_ol, a simple formula employing enthalpy-flow differences between the hot composite curve CC_h and the cold composite curve CC_c is…

  10. Interleaved Practice with Multiple Representations: Analyses with Knowledge Tracing Based Techniques

    Science.gov (United States)

    Rau, Martina A.; Pardos, Zachary A.

    2012-01-01

    The goal of this paper is to use Knowledge Tracing to augment the results obtained from an experiment that investigated the effects of practice schedules using an intelligent tutoring system for fractions. Specifically, this experiment compared different practice schedules of multiple representations of fractions: representations were presented to…

  11. Multiple sclerosis risk sharing scheme: two year results of clinical cohort study with historical comparator.

    Science.gov (United States)

    Boggild, Mike; Palace, Jackie; Barton, Pelham; Ben-Shlomo, Yoav; Bregenzer, Thomas; Dobson, Charles; Gray, Richard

    2009-12-02

    To generate evidence on the longer term cost effectiveness of disease modifying treatments in patients with relapsing-remitting multiple sclerosis. Prospective cohort study with historical comparator. Specialist multiple sclerosis clinics in 70 centres in the United Kingdom. Patients with relapsing-remitting multiple sclerosis who started treatment from May 2002 to April 2005 under the UK risk sharing scheme. Treatment with interferon beta or glatiramer acetate in accordance with guidelines of the UK Association of British Neurologists. Observed utility weighted progression in disability at two years' follow-up assessed on the expanded disability status scale (EDSS) compared with that expected by applying the progression rates in a comparator dataset, modified for patients receiving treatment by multiplying by the hazard ratio derived separately for each disease modifying treatment from the randomised trials. In the primary per protocol analysis, progression in disability was worse than that predicted and worse than that in the untreated comparator dataset ("deviation score" of 113%; excess in mean disability status scale 0.28). In sensitivity analyses, however, the deviation score varied from -72% (using raw baseline disability status scale scores, rather than applying a "no improvement" algorithm) to 156% (imputing missing data for year two from progression rates for year one). It is too early to reach any conclusion about the cost effectiveness of disease modifying treatments from this first interim analysis. Important methodological issues, including the need for additional comparator datasets, the potential bias from missing data, and the impact of the "no improvement" rule, will need to be addressed and long term follow-up of all patients is essential to secure meaningful results. Future analyses of the cohort are likely to be more informative, not least because they will be less sensitive to short term fluctuations in disability.

  12. Impact of the 'non-destructive' multiple-readout on the Lorentzian noise

    International Nuclear Information System (INIS)

    Guazzoni, C.; Gatti, E.; Geraci, A.

    2006-01-01

    In this paper, we discuss the effect of 'non-destructive' multiple readout on the Lorentzian noise. To our knowledge, this is the first time that such an interaction has been investigated. We have studied the peculiarities of the shape of the optimum weighting function for the multiple-readout technique in the presence of Lorentzian noise and other noise sources. The impact of the Lorentzian noise on the resolution achievable with the multiple-readout technique is analyzed in detail with respect to the interaction between the oscillation time and the characteristic time constant of the Lorentzian noise.

  13. Multiple M2-branes and the embedding tensor

    NARCIS (Netherlands)

    Bergshoeff, Eric A.; de Roo, Mees; Hohm, Olaf

    2008-01-01

    We show that the Bagger-Lambert theory of multiple M2-branes fits into the general construction of maximally supersymmetric gauge theories using the embedding tensor technique. We apply the embedding tensor technique in order to systematically obtain the consistent gaugings of N = 8 superconformal…

  14. Free Surface Downgoing VSP Multiple Imaging

    Science.gov (United States)

    Maula, Fahdi; Dac, Nguyen

    2018-03-01

    The common usage of a vertical seismic profile (VSP) is to capture the reflection wavefield (the upgoing wavefield) so that it can be used for well ties or other interpretations. Borehole seismic (VSP) receivers capture reflections from below the well trajectory; traditionally, no seismic image information is obtained above the trajectory. A non-traditional way of processing VSP multiples can be used to extend the imaging above the well trajectory. This paper presents a case study of using VSP downgoing multiples for such non-traditional imaging applications. In conventional VSP processing, upgoing and downgoing arrivals are separated; the upgoing wavefield is used for subsurface illumination, whereas the downgoing wavefield and multiples are normally excluded from processing. In a situation where the downgoing wavefield passes the reflectors several times (as a multiple), the downgoing wavefield carries reflection information. Its benefit is that it can be used for seismic ties up to the seabed and, potentially, for shallow hazard identification. One concept of downgoing imaging is widely known as the mirror-imaging technique. The case study, from deep water offshore Vietnam, demonstrates the robustness of the technique and the limitations encountered during its processing.

  15. Blower-door techniques for measuring interzonal leakage

    Energy Technology Data Exchange (ETDEWEB)

    Hult, Erin L.; Sherman, Max H.; Walker, Iain

    2013-01-01

    The standard blower door test methods, such as ASTM E779, describe how to use a single blower door to determine the total leakage of a single-zone structure such as a detached single-family home. There are no standard test methods for measuring interzonal leakage in a two-zone or multi-zone building envelope, such as might be encountered in a home with an attached garage or in a multifamily building. Some practitioners have been using techniques that involve making multiple measurements with a single blower door, as well as combined measurements using multiple blower doors. Even for just two zones there are dozens of combinations of one-door and two-door test protocols that could conceivably be used to determine the interzonal air tightness. We examined many of these two-zone configurations using both simulation and measured data to estimate the accuracy and precision of each technique for realistic measurement scenarios. We also considered the impact of taking measurements at a single pressure versus over multiple pressures. We compared the various techniques and evaluated them for specific uses. Some techniques work better in one leakage regime; some are more sensitive to wind and other noise; some are more suited to determining only a subset of the leakage values. This paper makes recommendations on which techniques to use or not use for various cases and provides data that could be used to develop future test methods.

  16. Efficient Processing of Multiple DTW Queries in Time Series Databases

    DEFF Research Database (Denmark)

    Kremer, Hardy; Günnemann, Stephan; Ivanescu, Anca-Maria

    2011-01-01

    …In many of today’s applications, however, large numbers of queries arise at any given time. Existing DTW techniques do not process multiple DTW queries simultaneously, a serious limitation which slows down overall processing. In this paper, we propose an efficient processing approach for multiple DTW queries…

  17. Cryptanalysis of the public key encryption based on multiple chaotic systems

    International Nuclear Information System (INIS)

    Zhang Linhua

    2008-01-01

    Recently, Ranjan proposed a novel public key encryption technique based on multiple chaotic systems [Phys Lett 2005;95]. Unfortunately, Wang soon gave a successful attack on a special case of it, based on Parseval's theorem [Wang K, Pei W, Zhou L, et al. Security of public key encryption technique based on multiple chaotic system. Phys Lett A, in press]. In this letter, we give an improved example which can avoid the attack and point out that Wang's attack does not expose the essential drawback of the technique. However, further experimental results show that Ranjan's encryption technique is inefficient, and detailed theoretical analysis shows that the complexity of breaking the cryptosystem is overestimated.

  18. Craniospinal irradiation techniques

    Energy Technology Data Exchange (ETDEWEB)

    Scarlatescu, Ioana, E-mail: scarlatescuioana@gmail.com; Avram, Calin N. [Faculty of Physics, West University of Timisoara, Bd. V. Parvan 4, 300223 Timisoara (Romania); Virag, Vasile [County Hospital “Gavril Curteanu” - Oradea (Romania)

    2015-12-07

    In this paper we present a treatment plan for irradiation cases that involve a complex multiple-beam technique, using the 3D conformal technique. As the main purpose of radiotherapy is to deliver a precise dose to the tumor volume while protecting the surrounding healthy tissues as much as possible, for a case diagnosed with a primitive neuroectodermal tumor we developed a new treatment plan by controlling one of the two adjacent fields used for the spinal field in a way that avoids field superposition. The risk of overdose is thereby reduced by eliminating the field divergence.

  19. Spacecraft Multiple Array Communication System Performance Analysis

    Science.gov (United States)

    Hwu, Shian U.; Desilva, Kanishka; Sham, Catherine C.

    2010-01-01

    The Communication Systems Simulation Laboratory (CSSL) at the NASA Johnson Space Center is tasked to perform spacecraft and ground network communication system simulations, design validation, and performance verification. The CSSL has developed simulation tools that model spacecraft communication systems and the space and ground environment in which the tools operate. In this paper, a spacecraft communication system with multiple arrays is simulated. A multiple-array combining technique is used to increase the radio frequency coverage and data rate performance. The technique achieves phase coherence among the phased arrays so that their signals combine constructively at the target receiver. There are many technical challenges in integrating a high-transmit-power communication system on a spacecraft. The array combining technique can improve the communication system's data rate and coverage performance without increasing the system's transmit power requirements. Example simulation results indicate that significant performance improvement can be achieved with a phase-coherence implementation.
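
    To see why phase coherence pays off, the short numpy sketch below (an illustration we added, not the CSSL simulation tools) combines the outputs of N arrays after removing each array's path phase: signal amplitudes add, so signal power grows as N^2, while independent noise powers add only as N, for a net SNR gain of N.

      import numpy as np

      rng = np.random.default_rng(0)
      n_arrays, n_samples, snr_single = 4, 100_000, 1.0

      # Unit-power signal seen by every array, each with its own path phase
      signal = np.exp(1j * 2 * np.pi * 0.01 * np.arange(n_samples))
      phases = rng.uniform(0, 2 * np.pi, n_arrays)
      noise = (rng.normal(size=(n_arrays, n_samples)) +
               1j * rng.normal(size=(n_arrays, n_samples))) * np.sqrt(0.5 / snr_single)
      received = signal * np.exp(1j * phases)[:, None] + noise

      # Coherent combining: undo each path phase (assumed known), then sum
      combined = (received * np.exp(-1j * phases)[:, None]).sum(axis=0)
      noise_out = combined - n_arrays * signal
      snr_combined = (np.abs(n_arrays * signal) ** 2).mean() / (np.abs(noise_out) ** 2).mean()
      print(f"combined SNR ~ {snr_combined:.2f} (theory: N x SNR = {n_arrays * snr_single:.1f})")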

  20. Multiple reflection Michelson interferometer with picometer resolution.

    Science.gov (United States)

    Pisani, Marco

    2008-12-22

    A Michelson interferometer based on an optical set-up allowing multiple reflection between two plane mirrors performs the multiplication of the optical path by a factor N, proportionally increasing the resolution of the measurement. A multiplication factor of almost two orders of magnitude has been demonstrated with a simple set-up. The technique can be applied to any interferometric measurement where the classical interferometer limits due to fringe nonlinearities and quantum noise are an issue. Applications in precision engineering, vibration analysis, nanometrology, and spectroscopy are foreseen.

  1. Application of stepwise multiple regression techniques to inversion of Nimbus 'IRIS' observations.

    Science.gov (United States)

    Ohring, G.

    1972-01-01

    Exploratory studies with Nimbus-3 infrared interferometer-spectrometer (IRIS) data indicate that, in addition to temperature, such meteorological parameters as geopotential heights of pressure surfaces, tropopause pressure, and tropopause temperature can be inferred from the observed spectra with the use of simple regression equations. The technique of screening the IRIS spectral data by means of stepwise regression to obtain the best radiation predictors of meteorological parameters is validated. The simplicity of application of the technique and the simplicity of the derived linear regression equations - which contain only a few terms - suggest usefulness for this approach. Based upon the results obtained, suggestions are made for further development and exploitation of the stepwise regression analysis technique.
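
    As a concrete illustration of the screening idea, here is a small Python sketch of forward stepwise selection: simulated radiances stand in for the IRIS spectral channels, and the few channels that most reduce the residual sum of squares are retained as predictors of a meteorological parameter. All data and names here are invented for the example; the original work used the actual IRIS spectra and observed parameters.

      import numpy as np

      def forward_stepwise(X, y, max_terms=4):
          # Greedily add the predictor (spectral channel) that most reduces
          # the residual sum of squares of a linear regression on y
          n, p = X.shape
          selected = []
          for _ in range(max_terms):
              best_j, best_rss = None, np.inf
              for j in range(p):
                  if j in selected:
                      continue
                  Xd = np.column_stack([np.ones(n)] + [X[:, c] for c in selected + [j]])
                  beta, *_ = np.linalg.lstsq(Xd, y, rcond=None)
                  rss = float(((y - Xd @ beta) ** 2).sum())
                  if rss < best_rss:
                      best_j, best_rss = j, rss
              selected.append(best_j)
          return selected

      # Toy stand-in: 200 soundings, 20 channels; the "tropopause temperature"
      # depends on channels 3 and 11 plus noise
      rng = np.random.default_rng(42)
      X = rng.normal(size=(200, 20))
      y = 2.0 * X[:, 3] - 1.5 * X[:, 11] + rng.normal(scale=0.3, size=200)
      print(forward_stepwise(X, y))   # channels 3 and 11 should be picked first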

  2. “TOTTO-CHAN”: INSIGHTS INTO MULTIPLE INTELLIGENCES-BASED ENGLISH TEACHING TO YOUNG LEARNERS

    Directory of Open Access Journals (Sweden)

    K. M. Widi Hadiyanti

    2017-04-01

    Children are unique individuals who have their own ways of learning about the world and solving problems. This is supported by Gardner's ideas about multiple intelligences: Gardner (1993, 1998) suggested several kinds of intelligences. In their attempts to learn about the world, children make use of all resources, including their multiple intelligences. The application of multiple intelligences in teaching young learners is apparent in Tetsuko Kuroyanagi's autobiographical memoir, "Totto-chan: The Little Girl at the Window". This study identified and analyzed the techniques used to apply multiple intelligences in teaching young learners in Totto-chan's elementary school, Tomoe Gakuen. Based on the identification and analysis conducted on "Totto-chan", this study elaborates techniques for teaching English to young learners; in particular, it proposes teaching techniques for the four language skills. This study will be of great benefit to teachers of English for young learners, enriching their teaching techniques in accordance with children's nature of development.

  3. Missing data in randomized clinical trials for weight loss: scope of the problem, state of the field, and performance of statistical methods.

    Directory of Open Access Journals (Sweden)

    Mai A Elobeid

    2009-08-01

    Dropouts and missing data are nearly ubiquitous in obesity randomized controlled trials, threatening the validity and generalizability of conclusions. Herein, we meta-analytically evaluate the extent of missing data, the frequency with which various analytic methods are employed to accommodate dropouts, and the performance of multiple statistical methods. We searched PubMed and Cochrane databases (2000-2006) for articles published in English and manually searched bibliographic references. Articles of pharmaceutical randomized controlled trials with weight loss or weight gain prevention as major endpoints were included. Two authors independently reviewed each publication for inclusion. 121 articles met the inclusion criteria. Two authors independently extracted treatment, sample size, drop-out rates, study duration, and the statistical method used to handle missing data from all articles, and resolved disagreements by consensus. In the meta-analysis, drop-out rates were substantial, with the survival (non-dropout) rates being approximated by an exponential decay curve e^(−λt), where λ was estimated to be 0.0088 (95% bootstrap confidence interval: 0.0076 to 0.0100) and t represents time in weeks. The estimated drop-out rate at 1 year was 37%. Most studies used last observation carried forward as the primary analytic method to handle missing data. We also obtained 12 raw obesity randomized controlled trial datasets for empirical analyses. Analyses of raw randomized controlled trial data suggested that both mixed models and multiple imputation performed well, but that multiple imputation may be more robust when missing data are extensive. Our analysis offers an equation for predicting dropout rates that is useful for future study planning. Our raw data analyses suggest that multiple imputation is better than other methods for handling missing data in obesity randomized controlled trials, followed closely by mixed models. We suggest these methods supplant last observation carried forward.
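
    The reported one-year figure follows directly from the fitted curve: survival at 52 weeks is S(52) = e^(−0.0088·52) = e^(−0.4576) ≈ 0.633, i.e. about 36.7% dropout. A two-line check using the values quoted above:

      import math

      lam, t = 0.0088, 52                  # weekly hazard, one year in weeks
      survival = math.exp(-lam * t)
      print(f"retention {survival:.3f}, dropout {1 - survival:.1%}")
      # retention 0.633, dropout 36.7% -- the ~37% quoted in the abstract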

  4. Channel coding techniques for wireless communications

    CERN Document Server

    Deergha Rao, K

    2015-01-01

    The book discusses modern channel coding techniques for wireless communications such as turbo codes, low-density parity check (LDPC) codes, space–time (ST) coding, RS (or Reed–Solomon) codes and convolutional codes. Many illustrative examples are included in each chapter for easy understanding of the coding techniques. The text is integrated with MATLAB-based programs to enhance the understanding of the subject’s underlying theories. It includes current topics of increasing importance such as turbo codes, LDPC codes, Luby transform (LT) codes, Raptor codes, and ST coding in detail, in addition to the traditional codes such as cyclic codes, BCH (or Bose–Chaudhuri–Hocquenghem) and RS codes and convolutional codes. Multiple-input and multiple-output (MIMO) communications is a multiple antenna technology, which is an effective method for high-speed or high-reliability wireless communications. PC-based MATLAB m-files for the illustrative examples are provided on the book page on Springer.com for free dow...

  5. Classification of Single and Multiple Disturbances in Electric Signals

    Directory of Open Access Journals (Sweden)

    Ribeiro Moisés Vidal

    2007-01-01

    This paper discusses and presents a different perspective for classifying single and multiple disturbances in electric signals, such as voltage and current signals. Basically, the principle of divide and conquer is applied to decompose the electric signals into what we call primitive signals or components, from which primitive patterns can be independently recognized. A technique based on this concept is introduced to demonstrate the effectiveness of the idea. This technique decomposes the electric signals into three main primitive components; in each primitive component, a few higher-order-statistics (HOS) based features are extracted. Then, techniques based on Bayes' theory are applied to verify the occurrence or not of single or multiple disturbances in the electric signals. The performance analysis carried out on a large number of data indicates that the proposed technique outperforms the technique introduced by He and Starzyk. Additionally, the numerical results verify that the proposed technique is capable of offering interesting results when applied to classify several sets of disturbances, provided at least one cycle of the mains frequency is considered.

  6. An Application of Matrix Multiplication

    Indian Academy of Sciences (India)

    IAS Admin

    …intelligence, image processing, mathematical modeling, optimization techniques, environmental science, mathematical linguistics, graph theory applications to biological networks, social networks, electrical engineering. … the diagonal entries of A are 0; and if G has no multiple edges, then all the entries of A are either 1 or 0.

  7. Fast matrix multiplication and its algebraic neighbourhood

    Science.gov (United States)

    Pan, V. Ya.

    2017-11-01

    Matrix multiplication is among the most fundamental operations of modern computations. By 1969 it was still commonly believed that the classical algorithm was optimal, although the experts already knew that this was not so. Worldwide interest in matrix multiplication instantly exploded in 1969, when Strassen decreased the exponent 3 of cubic time to 2.807. Then everyone expected to see matrix multiplication performed in quadratic or nearly quadratic time very soon. Further progress, however, turned out to be capricious. It was at a stalemate for almost a decade, then a combination of surprising techniques (completely independent of Strassen's original ones and much more advanced) enabled a new decrease of the exponent in 1978-1981 and then again in 1986, to 2.376. By 2017 the exponent had still not passed through the barrier of 2.373, but most disturbing was the curse of recursion: even the decrease of exponents below 2.7733 required numerous recursive steps, and each of them squared the problem size. As a result, all algorithms supporting such exponents supersede the classical algorithm only for inputs of immense sizes, far beyond any potential interest for the user. We survey the long study of fast matrix multiplication, focusing on neglected algorithms for feasible matrix multiplication. We comment on their design, the techniques involved, implementation issues, the impact of their study on the modern theory and practice of Algebraic Computations, and perspectives for fast matrix multiplication. Bibliography: 163 titles.
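
    For readers who want the 1969 scheme itself, here is a minimal Python sketch of Strassen's recursion for square matrices whose size is a power of two (a textbook simplification; production implementations pad or peel odd dimensions and, as here, switch to the classical algorithm below a leaf size). Seven recursive products per level instead of eight is what yields the O(n^log2 7) ≈ O(n^2.807) bound.

      import numpy as np

      def strassen(A, B, leaf=64):
          # Strassen multiplication for n x n matrices, n a power of two
          n = A.shape[0]
          if n <= leaf:
              return A @ B
          h = n // 2
          A11, A12, A21, A22 = A[:h, :h], A[:h, h:], A[h:, :h], A[h:, h:]
          B11, B12, B21, B22 = B[:h, :h], B[:h, h:], B[h:, :h], B[h:, h:]
          # Seven half-size products instead of eight
          M1 = strassen(A11 + A22, B11 + B22, leaf)
          M2 = strassen(A21 + A22, B11, leaf)
          M3 = strassen(A11, B12 - B22, leaf)
          M4 = strassen(A22, B21 - B11, leaf)
          M5 = strassen(A11 + A12, B22, leaf)
          M6 = strassen(A21 - A11, B11 + B12, leaf)
          M7 = strassen(A12 - A22, B21 + B22, leaf)
          C = np.empty_like(A)
          C[:h, :h] = M1 + M4 - M5 + M7
          C[:h, h:] = M3 + M5
          C[h:, :h] = M2 + M4
          C[h:, h:] = M1 - M2 + M3 + M6
          return C

      A = np.random.rand(256, 256); B = np.random.rand(256, 256)
      assert np.allclose(strassen(A, B), A @ B)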

  8. HARMONIC ANALYSIS OF SVPWM INVERTER USING MULTIPLE-PULSES METHOD

    Directory of Open Access Journals (Sweden)

    Mehmet YUMURTACI

    2009-01-01

    Space Vector Modulation (SVM) is a popular and important PWM technique for three-phase voltage source inverters in the control of induction motors. In this study, harmonic analysis of Space Vector PWM (SVPWM) is investigated using the multiple-pulses method. The multiple-pulses method calculates the Fourier coefficients of the individual positive and negative pulses of the output PWM waveform and adds them together, using the principle of superposition, to calculate the Fourier coefficients of the whole PWM output signal. Harmonic magnitudes can be calculated directly by this method without linearization, look-up tables or Bessel functions. In this study, the results obtained from applying SVPWM for various values of the variable parameters are compared with the results obtained with the multiple-pulses method.
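
    The superposition idea is simple to state in code: each rectangular pulse's contribution to the n-th Fourier coefficient has a closed form, and the contributions of all pulses add. The sketch below is a generic illustration of that principle (our own simplification, not the authors' SVPWM implementation); it is checked against the known square-wave spectrum b_n = 4/(nπ) for odd n.

      import numpy as np

      def pulse_fourier(pulses, T, n):
          # Fourier coefficients (a_n, b_n) of a waveform built from
          # rectangular pulses (amplitude, t_start, t_end) over period T,
          # summed by superposition: (2/T) * integral of A cos/sin(n w t)
          w = 2 * np.pi / T
          a = b = 0.0
          for A, t1, t2 in pulses:
              a += A / (n * np.pi) * (np.sin(n * w * t2) - np.sin(n * w * t1))
              b += A / (n * np.pi) * (np.cos(n * w * t1) - np.cos(n * w * t2))
          return a, b

      # Sanity check: a square wave (+1 first half-period, -1 second half)
      # has b_n = 4/(n*pi) for odd n and no even harmonics
      T = 1.0
      square = [(1.0, 0.0, 0.5), (-1.0, 0.5, 1.0)]
      for n in (1, 2, 3):
          print(n, pulse_fourier(square, T, n))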

  9. Application of the Modified Source Multiplication (MSM) technique to subcritical reactivity worth measurements in thermal and fast reactor systems

    International Nuclear Information System (INIS)

    Blaise, P.; Fougeras, P.; Mellier, F.

    2009-01-01

    The Amplified Source Multiplication (ASM) method and its improved variant, the Modified Source Multiplication (MSM) method, have been widely used in the CEA's EOLE and MASURCA critical facilities over the past decades for the determination of reactivity worths from fission chamber measurements in subcritical configurations. They have been successfully applied to absorber worths (single rods or clusters) in both thermal and fast spectra, and to (sodium or water) void reactivity worths. The ASM methodology, which is the basic technique for estimating a reactivity worth, uses relatively simple relationships between the count rates of efficient miniature fission chambers located in slightly subcritical reference and perturbed configurations. While this method works quite well for small reactivity variations (a few effective delayed neutron fractions), its raw results need to be corrected to take into account the flux perturbation at the fission chamber. This is done by applying to the measurement a correction factor called the MSM factor. Its characteristic is that it takes into account the local space and energy variation of the spectrum at the fission chamber, through standard perturbation theory applied to a neutron transport calculation of the perturbed configuration. This paper describes both methodologies in detail, with their associated uncertainties. Applications to absorber cluster worths in the MISTRAL-4 full-MOX mock-up core and the last core loaded in MASURCA show the importance of the MSM correction to the raw data. (authors)
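
    In its simplest point-kinetics form, the ASM estimate rests on the count rate of a source-driven subcritical core being inversely proportional to the magnitude of its negative reactivity, and the MSM factor corrects the raw ratio for the local flux and spectrum change seen by each fission chamber. The sketch below encodes only that textbook relation, with invented numbers; in practice the MSM factor comes from the perturbation/transport calculation the abstract describes.

      def asm_reactivity(rho_ref, count_ref, count_pert, msm_factor=1.0):
          # Source-driven subcritical core: count rate C ~ 1/(-rho), hence
          #   rho_pert = rho_ref * (C_ref / C_pert) * f_MSM
          # msm_factor = 1.0 recovers the uncorrected ASM estimate
          return rho_ref * (count_ref / count_pert) * msm_factor

      # Hypothetical numbers: a -0.5 $ reference configuration whose count
      # rate halves after inserting an absorber cluster
      rho = asm_reactivity(rho_ref=-0.5, count_ref=12000.0,
                           count_pert=6000.0, msm_factor=1.05)
      print(f"perturbed reactivity ~ {rho:.2f} $")    # about -1.05 $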

  10. ANGSD

    DEFF Research Database (Denmark)

    Korneliussen, Thorfinn Sand; Albrechtsen, Anders; Nielsen, Rasmus

    2014-01-01

    …is available at http://www.popgen.dk/angsd. The program is tested and validated on GNU/Linux systems. The program facilitates multiple input formats, including BAM and imputed Beagle genotype probability files. The program allows the user to choose between combinations of existing methods and can perform…

  11. Single-instruction multiple-data execution

    CERN Document Server

    Hughes, Christopher J

    2015-01-01

    Having hit power limitations to even more aggressive out-of-order execution in processor cores, many architects in the past decade have turned to single-instruction-multiple-data (SIMD) execution to increase single-threaded performance. SIMD execution, or having a single instruction drive execution of an identical operation on multiple data items, was already well established as a technique to efficiently exploit data parallelism. Furthermore, support for it was already included in many commodity processors. However, in the past decade, SIMD execution has seen a dramatic increase in the set of…

  12. Multiple inert gas elimination technique by micropore membrane inlet mass spectrometry--a comparison with reference gas chromatography.

    Science.gov (United States)

    Kretzschmar, Moritz; Schilling, Thomas; Vogt, Andreas; Rothen, Hans Ulrich; Borges, João Batista; Hachenberg, Thomas; Larsson, Anders; Baumgardner, James E; Hedenstierna, Göran

    2013-10-15

    The mismatching of alveolar ventilation and perfusion (VA/Q) is the major determinant of impaired gas exchange. The gold standard for measuring VA/Q distributions is based on measurements of the elimination and retention of infused inert gases. Conventional multiple inert gas elimination technique (MIGET) uses gas chromatography (GC) to measure the inert gas partial pressures, which requires tonometry of blood samples with a gas that can then be injected into the chromatograph. The method is laborious and requires meticulous care. A new technique based on micropore membrane inlet mass spectrometry (MMIMS) facilitates the handling of blood and gas samples and provides nearly real-time analysis. In this study we compared MIGET by GC and MMIMS in 10 piglets: 1) 3 with healthy lungs; 2) 4 with oleic acid injury; and 3) 3 with isolated left lower lobe ventilation. The different protocols ensured a large range of normal and abnormal VA/Q distributions. Eight inert gases (SF6, krypton, ethane, cyclopropane, desflurane, enflurane, diethyl ether, and acetone) were infused; six of these gases were measured with MMIMS, and six were measured with GC. We found close agreement of retention and excretion of the gases and the constructed VA/Q distributions between GC and MMIMS, and predicted PaO2 from both methods compared well with measured PaO2. VA/Q by GC produced more widely dispersed modes than MMIMS, explained in part by differences in the algorithms used to calculate VA/Q distributions. In conclusion, MMIMS enables faster measurement of VA/Q, is less demanding than GC, and produces comparable results.
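
    The inert gas arithmetic behind MIGET is compact: at steady state a single compartment retains a gas according to R = λ/(λ + VA/Q), where λ is the blood-gas partition coefficient, and the whole lung's retention is the perfusion-weighted sum over compartments. The sketch below runs this forward model for a log-normal perfusion distribution; the λ values are rough illustrative magnitudes, and recovering the VA/Q distribution from measured retentions is the much harder inverse problem that the methods above address.

      import numpy as np

      def retention(lam, va_q):
          # Steady-state inert gas retention of one compartment:
          # R = lambda / (lambda + VA/Q)
          return lam / (lam + va_q)

      va_q = np.logspace(-3, 3, 50)                    # 50-compartment grid
      q = np.exp(-0.5 * (np.log(va_q) / 0.6) ** 2)     # log-normal perfusion
      q /= q.sum()
      for gas, lam in [("SF6", 0.005), ("ethane", 0.1), ("acetone", 300.0)]:
          R = (q * retention(lam, va_q)).sum()         # overall retention
          print(f"{gas}: overall retention = {R:.3f}")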

  13. TDM/FM/FDMA - A modulation technique for multiple-beam satellites which precludes cochannel interference and allows non-uniform geographic distribution of user channels

    Science.gov (United States)

    Springett, J. C.

    1982-01-01

    The technique outlined in this paper is intended to eliminate the problems of cochannel interference and uniform geographic distribution of user channels which arise in conventional designs for a multiple spot beam communication satellite to serve mobile telephony users across the CONUS. By time multiplexing FM/FDMA signal ensembles so that only those beams operating on distinct frequency subbands are allowed to transmit concurrently, cochannel interference arising from simultaneous frequency subband reuse is precluded. Thus, time disjoint frequency reuse is accomplished over a repetitive sequence of fixed time slots. By assigning different size subbands to each time slot, a market of nonuniform users can be accommodated. The technique results in a greatly simplified antenna feed system design for the satellite, at a cost of imposing the need for time slot synchronization on the mobile FM receivers whose ability for rejecting adjacent channel interference is somewhat diminished.

  15. Estimation of missing values in solar radiation data using piecewise interpolation methods: Case study at Penang city

    International Nuclear Information System (INIS)

    Zainudin, Mohd Lutfi; Saaban, Azizan; Bakar, Mohd Nazari Abu

    2015-01-01

    Solar radiation values are recorded by an automatic weather station using a device called a pyranometer. The device records the dispersed radiation values, and these data are very useful for experimental work and for the development of solar devices. In addition, complete data records are needed for modelling and designing solar radiation system applications. Unfortunately, incomplete solar radiation data frequently occur due to several technical problems, mainly attributable to the monitoring device. To deal with this, missing values are estimated so that absent values can be substituted with imputed data. This paper evaluates several piecewise interpolation techniques, namely linear, spline, cubic and nearest-neighbour interpolation, for dealing with missing values in hourly solar radiation data, and proposes an extension investigating the potential of the cubic Bezier technique and the cubic Said-Ball method as estimators. The results show that the cubic Bezier and Said-Ball methods perform best compared with the other piecewise imputation techniques.
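
    The standard piecewise interpolators named above are available off the shelf, so the paper's comparison is easy to mimic. The sketch below fills synthetic gaps in a toy hourly radiation curve with linear, nearest-neighbour and cubic-spline interpolation and compares the errors; all data are invented, and the Bezier and Said-Ball variants studied in the paper are not in scipy and are left out.

      import numpy as np
      from scipy.interpolate import CubicSpline, interp1d

      hours = np.arange(24)
      radiation = np.clip(900 * np.sin(np.pi * (hours - 6) / 12), 0, None)
      missing = np.zeros(24, dtype=bool); missing[[9, 10, 14]] = True

      known_h, known_r = hours[~missing], radiation[~missing]
      fills = {
          "linear":  np.interp(hours[missing], known_h, known_r),
          "nearest": interp1d(known_h, known_r, kind="nearest")(hours[missing]),
          "cubic":   CubicSpline(known_h, known_r)(hours[missing]),
      }
      for name, vals in fills.items():
          err = np.abs(vals - radiation[missing]).mean()
          print(f"{name:8s} mean absolute error: {err:6.1f} W/m^2")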

  16. Estimation of missing values in solar radiation data using piecewise interpolation methods: Case study at Penang city

    Science.gov (United States)

    Zainudin, Mohd Lutfi; Saaban, Azizan; Bakar, Mohd Nazari Abu

    2015-12-01

    Solar radiation values are recorded by an automatic weather station using a device called a pyranometer. The device records the dispersed radiation values, and these data are very useful for experimental work and for the development of solar devices. In addition, complete data records are needed for modelling and designing solar radiation system applications. Unfortunately, incomplete solar radiation data frequently occur due to several technical problems, mainly attributable to the monitoring device. To deal with this, missing values are estimated so that absent values can be substituted with imputed data. This paper evaluates several piecewise interpolation techniques, namely linear, spline, cubic and nearest-neighbour interpolation, for dealing with missing values in hourly solar radiation data, and proposes an extension investigating the potential of the cubic Bezier technique and the cubic Said-Ball method as estimators. The results show that the cubic Bezier and Said-Ball methods perform best compared with the other piecewise imputation techniques.

  17. Estimation of missing values in solar radiation data using piecewise interpolation methods: Case study at Penang city

    Energy Technology Data Exchange (ETDEWEB)

    Zainudin, Mohd Lutfi, E-mail: mdlutfi07@gmail.com [School of Quantitative Sciences, UUMCAS, Universiti Utara Malaysia, 06010 Sintok, Kedah (Malaysia); Institut Matematik Kejuruteraan (IMK), Universiti Malaysia Perlis, 02600 Arau, Perlis (Malaysia); Saaban, Azizan, E-mail: azizan.s@uum.edu.my [School of Quantitative Sciences, UUMCAS, Universiti Utara Malaysia, 06010 Sintok, Kedah (Malaysia); Bakar, Mohd Nazari Abu, E-mail: mohdnazari@perlis.uitm.edu.my [Faculty of Applied Science, Universiti Teknologi Mara, 02600 Arau, Perlis (Malaysia)

    2015-12-11

    Solar radiation values are recorded by an automatic weather station using a device called a pyranometer. The device records the dispersed radiation values, and these data are very useful for experimental work and for the development of solar devices. In addition, complete data records are needed for modelling and designing solar radiation system applications. Unfortunately, incomplete solar radiation data frequently occur due to several technical problems, mainly attributable to the monitoring device. To deal with this, missing values are estimated so that absent values can be substituted with imputed data. This paper evaluates several piecewise interpolation techniques, namely linear, spline, cubic and nearest-neighbour interpolation, for dealing with missing values in hourly solar radiation data, and proposes an extension investigating the potential of the cubic Bezier technique and the cubic Said-Ball method as estimators. The results show that the cubic Bezier and Said-Ball methods perform best compared with the other piecewise imputation techniques.

  18. Analysis and prediction of Multiple-Site Damage (MSD) fatigue crack growth

    Science.gov (United States)

    Dawicke, D. S.; Newman, J. C., Jr.

    1992-08-01

    A technique was developed to calculate the stress intensity factor for multiple interacting cracks. The analysis was verified through comparison with accepted methods of calculating stress intensity factors. The technique was incorporated into a fatigue crack growth prediction model and used to predict the fatigue crack growth life for multiple-site damage (MSD). The analysis was verified through comparison with experiments conducted on uniaxially loaded flat panels with multiple cracks. Configurations with nearly equal and unequal crack distributions were examined. The fatigue crack growth predictions agreed within 20 percent of the experimental lives for all crack configurations considered.

  20. Spectral amplitude coding OCDMA using AND subtraction technique.

    Science.gov (United States)

    Hasoon, Feras N; Aljunid, S A; Samad, M D A; Abdullah, Mohamad Khazani; Shaari, Sahbudin

    2008-03-20

    An optical decoding technique is proposed for spectral-amplitude-coding optical code division multiple access, namely the AND subtraction technique. The theory is elaborated and experimental results are obtained by comparing a double-weight code against an existing code, the Hadamard code. We show that the AND subtraction technique gives better bit error rate performance than the conventional complementary subtraction technique for a given received power level.

  1. Generating Cognitive Dissonance in Student Interviews through Multiple Representations

    Science.gov (United States)

    Linenberger, Kimberly J.; Bretz, Stacey Lowery

    2012-01-01

    This study explores what students understand about enzyme-substrate interactions, using multiple representations of the phenomenon. In this paper we describe our use of the 3 Phase-Single Interview Technique with multiple representations to generate cognitive dissonance within students in order to uncover misconceptions of enzyme-substrate…

  2. Combining results of multiple search engines in proteomics.

    Science.gov (United States)

    Shteynberg, David; Nesvizhskii, Alexey I; Moritz, Robert L; Deutsch, Eric W

    2013-09-01

    A crucial component of the analysis of shotgun proteomics datasets is the search engine, an algorithm that attempts to identify the peptide sequence from the parent molecular ion that produced each fragment ion spectrum in the dataset. There are many different search engines, both commercial and open source, each employing a somewhat different technique for spectrum identification. The set of high-scoring peptide-spectrum matches for a defined set of input spectra differs markedly among the various search engine results; individual engines each provide unique correct identifications among a core set of correlative identifications. This has led to the approach of combining the results from multiple search engines to achieve improved analysis of each dataset. Here we review the techniques and available software for combining the results of multiple search engines and briefly compare the relative performance of these techniques.

  4. Use of missing data methods in longitudinal studies: the persistence of bad practices in developmental psychology.

    Science.gov (United States)

    Jelicić, Helena; Phelps, Erin; Lerner, Richard M

    2009-07-01

    Developmental science rests on describing, explaining, and optimizing intraindividual changes and, hence, empirically requires longitudinal research. Problems of missing data arise in most longitudinal studies, thus creating challenges for interpreting the substance and structure of intraindividual change. Using a sample of reports of longitudinal studies obtained from three flagship developmental journals (Child Development, Developmental Psychology, and Journal of Research on Adolescence), we examined the number of longitudinal studies reporting missing data and the missing data techniques used. Of the 100 longitudinal studies sampled, 57 either reported having missing data or had discrepancies in sample sizes reported for different analyses. The majority of these studies (82%) used missing data techniques that are statistically problematic (either listwise deletion or pairwise deletion) and not among the methods recommended by statisticians (i.e., the direct maximum likelihood method and the multiple imputation method). Implications of these results for developmental theory and application, and the need for understanding the consequences of using statistically inappropriate missing data techniques with actual longitudinal data sets, are discussed.
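
    The contrast the authors draw can be made concrete with a small simulation: when dropout at wave 2 depends on the wave-1 score, the completers-only (listwise) mean of wave 2 is biased, while multiple imputation that conditions on wave 1 recovers it. The sketch below uses scikit-learn's IterativeImputer as a generic chained-equations engine; all numbers are simulated, and only point estimates are pooled here (a full analysis would also pool variances with Rubin's rules, sketched further below).

      import numpy as np
      from sklearn.experimental import enable_iterative_imputer  # noqa: F401
      from sklearn.impute import IterativeImputer

      rng = np.random.default_rng(7)
      n = 500
      wave1 = rng.normal(50, 10, n)
      wave2 = 0.8 * wave1 + rng.normal(0, 5, n)          # true mean is 40
      # Dropout probability falls with the wave-1 score (MAR given wave 1)
      drop = rng.random(n) < 1 / (1 + np.exp((wave1 - 45) / 5))
      wave2_obs = np.where(drop, np.nan, wave2)

      # Listwise deletion: completers only (biased upward here, because
      # low scorers at wave 1 dropped out more often)
      mean_listwise = np.nanmean(wave2_obs)

      # Multiple imputation: m imputed datasets, point estimates averaged
      means = []
      for m in range(20):
          imp = IterativeImputer(sample_posterior=True, random_state=m)
          filled = imp.fit_transform(np.column_stack([wave1, wave2_obs]))
          means.append(filled[:, 1].mean())
      print(f"true {wave2.mean():.2f}  listwise {mean_listwise:.2f}  "
            f"MI {np.mean(means):.2f}")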

  5. Deep convolutional neural network based antenna selection in multiple-input multiple-output system

    Science.gov (United States)

    Cai, Jiaxin; Li, Yan; Hu, Ying

    2018-03-01

    Antenna selection in wireless communication systems has attracted increasing attention due to the challenge of balancing communication performance against computational complexity in large-scale Multiple-Input Multiple-Output antenna systems. Recently, deep learning based methods have achieved promising performance for large-scale data processing and analysis in many application fields. This paper is the first attempt to introduce the deep learning technique into the field of Multiple-Input Multiple-Output antenna selection in wireless communications. First, labels for the attenuation-coefficient channel matrices are generated by minimizing the key performance indicator on the training antenna systems. Then, a deep convolutional neural network that explicitly exploits the massive latent cues of the attenuation coefficients is learned on the training antenna systems. Finally, the trained deep convolutional neural network is used to classify the channel matrix labels of the test antennas and select the optimal antenna subset. Simulation results demonstrate that our method can achieve better performance than the state-of-the-art baselines for data-driven wireless antenna selection.

  6. Multiple biopsy probe sampling enabled minimally invasive electrical impedance tomography

    International Nuclear Information System (INIS)

    Shini, Mohanad; Rubinsky, Boris

    2008-01-01

    Biopsies are a reliable method for examining tissues and organs inside the body, in particular for detection of tumors. However, a single biopsy produces only limited information on the site from which it is taken. Therefore, tumor detection now employs multiple biopsy samplings to examine larger volumes of tissue. Nevertheless, even with multiple biopsies, the information remains discrete, while the costs of biopsy increase. Here we propose and evaluate the feasibility of using minimally invasive medical imaging as a means to overcome the limitations of discrete biopsy sampling. The minimally invasive medical imaging technique employs the biopsy probe as electrodes for measurements of electrical impedance tomography relevant data during each biopsy sampling. The data from multiple samplings are combined and used to produce an EIT image of the tissue. Two- and three-dimensional mathematical simulations confirm that the minimally invasive medical imaging technique can produce electrical impedance tomography images of the tissues between the biopsy probe insertion sites. We show that these images can detect tumors that would be missed with multiple biopsy samplings only, and that the technique may facilitate the detection of tumors with fewer biopsies, thereby reducing the cost of cancer detection

  7. Calculation of the flux attenuation and multiple scattering correction factors in time of flight technique for double differential cross section measurements

    International Nuclear Information System (INIS)

    Martin, G.; Coca, M.; Capote, R.

    1996-01-01

    Using the Monte Carlo technique, a computer code was developed that simulates the time-of-flight experiment for measuring double differential cross sections. The correction factors for flux attenuation and multiple scattering, which distort the measured spectrum, were calculated. The energy dependence of the correction factor was determined and a comparison with other works is shown. Calculations were made for Fe-56 at two different scattering angles. We also reproduce the experiment performed at the Nuclear Analysis Laboratory for C-12 at 25 °C, and the calculated correction factor for this measurement is shown. We found a linear relation between the scatterer size and the correction factor for flux attenuation.

  8. Hypergraph partitioning implementation for parallelizing matrix-vector multiplication using CUDA GPU-based parallel computing

    Science.gov (United States)

    Murni, Bustamam, A.; Ernastuti, Handhika, T.; Kerami, D.

    2017-07-01

    Calculation of matrix-vector multiplication in real-world problems often involves large matrices of arbitrary size. Therefore, parallelization is needed to speed up a calculation process that usually takes a long time. The graph partitioning techniques discussed in previous studies cannot be used for parallelized matrix-vector multiplication with matrices of arbitrary size, because graph partitioning assumes a square, symmetric matrix. Hypergraph partitioning techniques overcome this shortcoming of graph partitioning. This paper addresses the efficient parallelization of matrix-vector multiplication through hypergraph partitioning techniques using CUDA GPU-based parallel computing. CUDA (compute unified device architecture) is a parallel computing platform and programming model that was created by NVIDIA and implemented on the GPU (graphics processing unit).
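
    The decomposition being parallelized is easy to picture: split the rows of A into blocks, compute each block's partial product independently, and concatenate. The CPU sketch below shows that structure, with Python multiprocessing standing in for CUDA and a naive equal-row split standing in for a hypergraph partitioner (which would instead choose the blocks so that communication volume for a sparse matrix is minimized).

      import numpy as np
      from multiprocessing import Pool

      def partial_matvec(args):
          block, x = args
          return block @ x            # one worker's share of y = A @ x

      def parallel_matvec(A, x, n_workers=4):
          blocks = np.array_split(A, n_workers, axis=0)   # naive 1-D split
          with Pool(n_workers) as pool:
              parts = pool.map(partial_matvec, [(b, x) for b in blocks])
          return np.concatenate(parts)

      if __name__ == "__main__":
          A = np.random.rand(1000, 800); x = np.random.rand(800)
          assert np.allclose(parallel_matvec(A, x), A @ x)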

  9. A faster technique for rendering meshes in multiple display systems

    Science.gov (United States)

    Hand, Randall E.; Moorhead, Robert J., II

    2003-05-01

    Level-of-detail algorithms have been widely implemented in architectural VR walkthroughs and video games, but have not had widespread use in VR terrain visualization systems. This thesis explains a set of optimizations that allow most current level-of-detail algorithms to run on the types of multiple display systems used in VR. It improves both the visual quality of the system, through use of graphics hardware acceleration, and the framerate and running time, through modifications to the computations that drive the algorithms. Using ROAM as a testbed, results show improvements between 10% and 100% on various machines.

  10. Integrative missing value estimation for microarray data.

    Science.gov (United States)

    Hu, Jianjun; Li, Haifeng; Waterman, Michael S; Zhou, Xianghong Jasmine

    2006-10-12

    Missing value estimation is an important preprocessing step in microarray analysis. Although several methods have been developed to solve this problem, their performance is unsatisfactory for datasets with high rates of missing data, high measurement noise, or limited numbers of samples. In fact, more than 80% of the time-series datasets in the Stanford Microarray Database contain fewer than eight samples. We present the integrative Missing Value Estimation method (iMISS), which incorporates information from multiple reference microarray datasets to improve missing value estimation. For each gene with missing data, we derive a consistent neighbor-gene list by taking reference datasets into consideration. To determine whether the given reference datasets are sufficiently informative for integration, we use a submatrix imputation approach. Our experiments showed that iMISS can significantly and consistently improve the accuracy of the state-of-the-art Local Least Squares (LLS) imputation algorithm by up to 15% in our benchmark tests. We demonstrated that the order-statistics-based integrative imputation algorithms can achieve significant improvements over state-of-the-art missing value estimation approaches such as LLS and are especially good for imputing microarray datasets with a limited number of samples, high rates of missing data, or very noisy measurements. With the rapid accumulation of microarray datasets, the performance of our approach can be further improved by incorporating larger and more appropriate reference datasets.
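
    For orientation, the local baseline that iMISS builds on is neighbour-based imputation within a single dataset: each missing entry is filled from the k most similar genes that are observed in that column. Below is a minimal sketch of that baseline (classic KNNimpute-style; it is not the iMISS algorithm, which additionally draws neighbours from external reference datasets).

      import numpy as np

      def knn_impute(X, k=10):
          # Fill each missing entry of a genes-by-samples matrix with the
          # mean of the k nearest genes (rows) observed in that column;
          # distances use only the columns both genes have observed
          X_imp = X.copy()
          for i, j in zip(*np.where(np.isnan(X))):
              cand = np.where(~np.isnan(X[:, j]))[0]
              cand = cand[cand != i]
              dists = []
              for g in cand:
                  both = ~np.isnan(X[i]) & ~np.isnan(X[g])
                  dists.append(np.sqrt(np.mean((X[i, both] - X[g, both]) ** 2))
                               if both.any() else np.inf)
              nearest = cand[np.argsort(dists)[:k]]
              if nearest.size:
                  X_imp[i, j] = X[nearest, j].mean()
          return X_imp

      # Toy demo: 100 genes x 8 samples with 5% of entries knocked out
      rng = np.random.default_rng(3)
      X_true = rng.normal(size=(100, 8))
      X_obs = X_true.copy()
      X_obs[rng.random(X_true.shape) < 0.05] = np.nan
      X_hat = knn_impute(X_obs)
      mask = np.isnan(X_obs)
      print("mean absolute error:", np.abs(X_hat[mask] - X_true[mask]).mean())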

  11. Randomization-Based Inference about Latent Variables from Complex Samples: The Case of Two-Stage Sampling

    Science.gov (United States)

    Li, Tiandong

    2012-01-01

    In large-scale assessments, such as the National Assessment of Educational Progress (NAEP), plausible values based on Multiple Imputations (MI) have been used to estimate population characteristics for latent constructs under complex sample designs. Mislevy (1991) derived a closed-form analytic solution for a fixed-effect model in creating…
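
    The MI machinery behind plausible values is Rubin's set of combining rules: analyze each of the m imputed (plausible-value) datasets separately, then pool the estimates and their uncertainty. A compact sketch with invented numbers:

      import numpy as np

      def rubin_pool(estimates, variances):
          # Rubin's rules for m imputed-data analyses:
          #   pooled estimate Q = mean of the m estimates
          #   within variance W = mean of the m sampling variances
          #   between variance B = variance of the estimates across imputations
          #   total variance  T = W + (1 + 1/m) * B
          q = np.asarray(estimates, dtype=float)
          u = np.asarray(variances, dtype=float)
          m = len(q)
          Q, W, B = q.mean(), u.mean(), q.var(ddof=1)
          T = W + (1 + 1 / m) * B
          return Q, T

      # Five plausible-value analyses of the same latent-proficiency mean
      Q, T = rubin_pool([501.2, 499.8, 502.1, 500.5, 501.0],
                        [4.0, 4.1, 3.9, 4.0, 4.2])
      print(f"pooled mean {Q:.1f}, standard error {np.sqrt(T):.2f}")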

  12. Effect of multiple scattering on lidar measurements

    International Nuclear Information System (INIS)

    Cohen, A.

    1977-01-01

    The lidar equation in its standard form involves the assumption that the scattered irradiance reaching the lidar receiver has been only singly scattered. However, in the cases of scattering from clouds and thick aerosol layers, it is shown that multiple scattering cannot be neglected. An experimental method for the detection of multiple scattering by depolarization measurement techniques is discussed. One method of theoretical calculations of double-scattering is presented and discussed

  13. Growth techniques for monolithic YBCO solenoidal magnets

    International Nuclear Information System (INIS)

    Scruggs, S.J.; Putman, P.T.; Fang, H.; Alessandrini, M.; Salama, K.

    2006-01-01

    The possibility of growing large single-domain YBCO solenoids by the use of a large seed has been investigated. There are two known methods for producing a similar solenoid. The first is a conventional top-seeded melt growth process followed by a post-processing machining step to create the bore. The second involves using multiple seeds spaced around the magnet bore. The appeal of the new technique lies in decreasing processing time compared to the single-seed technique, while avoiding the alignment problems found in the multiple-seeding technique. By avoiding these problems, larger diameter monoliths can be produced. Large diameter monoliths are beneficial because the maximum magnetic field produced by a trapped-field magnet is proportional to the radius of the sample. Furthermore, the availability of trapped-field magnets with large diameters could enable their use in applications that have traditionally been considered to require wound electromagnets, such as beam bending magnets for particle accelerators or electric propulsion. A comparison is made between YBCO solenoids grown by the use of a large seed and by the use of two small seeds simulating multiple seeding. Trapped-field measurements as well as microstructure evaluation were used to characterize each solenoid. Results indicate that high-quality growth occurs only in the vicinity of the seeds for the multiple-seeded sample, while the sample with the large seed exhibited high-quality growth throughout.

  14. Growth techniques for monolithic YBCO solenoidal magnets

    Energy Technology Data Exchange (ETDEWEB)

    Scruggs, S.J. [Texas Center for Superconductivity at University of Houston, 4800 Calhoun, Houston, TX 77204 (United States)]. E-mail: Sscruggs2@uh.edu; Putman, P.T. [Texas Center for Superconductivity at University of Houston, 4800 Calhoun, Houston, TX 77204 (United States); Fang, H. [Texas Center for Superconductivity at University of Houston, 4800 Calhoun, Houston, TX 77204 (United States); Alessandrini, M. [Texas Center for Superconductivity at University of Houston, 4800 Calhoun, Houston, TX 77204 (United States); Salama, K. [Texas Center for Superconductivity at University of Houston, 4800 Calhoun, Houston, TX 77204 (United States)

    2006-10-01

    The possibility of growing large single-domain YBCO solenoids by the use of a large seed has been investigated. There are two known methods for producing a similar solenoid. The first is a conventional top-seeded melt growth process followed by a post-processing machining step to create the bore. The second involves using multiple seeds spaced around the magnet bore. The appeal of the new technique lies in decreasing processing time compared to the single-seed technique, while avoiding the alignment problems found in the multiple-seeding technique. By avoiding these problems, larger diameter monoliths can be produced. Large diameter monoliths are beneficial because the maximum magnetic field produced by a trapped-field magnet is proportional to the radius of the sample. Furthermore, the availability of trapped-field magnets with large diameters could enable their use in applications that have traditionally been considered to require wound electromagnets, such as beam bending magnets for particle accelerators or electric propulsion. A comparison is made between YBCO solenoids grown by the use of a large seed and by the use of two small seeds simulating multiple seeding. Trapped-field measurements as well as microstructure evaluation were used to characterize each solenoid. Results indicate that high-quality growth occurs only in the vicinity of the seeds for the multiple-seeded sample, while the sample with the large seed exhibited high-quality growth throughout.

  15. X-ray topography and multiple diffraction

    International Nuclear Information System (INIS)

    Chang, S.-L.

    1983-01-01

    A short summary of X-ray topography, which is based on the dynamical theory of X-ray diffraction, is given. The applications and properties related to the use of the multiple diffraction technique are analyzed and discussed. (L.C.)

  16. Better experiences with quality of care predict well-being of patients with chronic obstructive pulmonary disease in the Netherlands

    Directory of Open Access Journals (Sweden)

    Jane Murray Cramm

    2015-06-01

    Objective: This study was conducted to (1) identify improvements in care quality and well-being of patients with chronic obstructive pulmonary disease in the Netherlands and (2) investigate the longitudinal relationship between these factors. Methods: This longitudinal study was conducted among patients diagnosed with chronic obstructive pulmonary disease enrolled in the Kennemer Lucht care programme in the Netherlands. Biomarker data (lung capacity) were collected at patients' health care practices in 2012. Complete case analysis was conducted, and the multiple imputation technique allowed us to report pooled results from imputed datasets. Results: Surveys were filled out by 548/1303 (42%) patients at T0 (2012) and 569/996 (57%) remaining participants at T1. Quality of care improved significantly (p < 0.05). Analyses adjusted for well-being at T0, age, educational level, marital status, gender, lung function and health behaviours showed that patients' assessments of the quality of chronic care delivery at T0 (p < 0.01) and changes therein (p < 0.001) predicted patients' well-being at T1. Conclusion: These results clearly show that the quality of care and changes therein are important for the well-being of patients with chronic obstructive pulmonary disease in the primary care setting. Practice implications: To improve quality of care for chronically ill patients, multicomponent interventions may be needed.

  17. The role of chemometrics in single and sequential extraction assays: a review. Part II. Cluster analysis, multiple linear regression, mixture resolution, experimental design and other techniques.

    Science.gov (United States)

    Giacomino, Agnese; Abollino, Ornella; Malandrino, Mery; Mentasti, Edoardo

    2011-03-04

    Single and sequential extraction procedures are used for studying element mobility and availability in solid matrices, like soils, sediments, sludge, and airborne particulate matter. In the first part of this review we reported an overview on these procedures and described the applications of chemometric uni- and bivariate techniques and of multivariate pattern recognition techniques based on variable reduction to the experimental results obtained. The second part of the review deals with the use of chemometrics not only for the visualization and interpretation of data, but also for the investigation of the effects of experimental conditions on the response, the optimization of their values and the calculation of element fractionation. We will describe the principles of the multivariate chemometric techniques considered, the aims for which they were applied and the key findings obtained. The following topics will be critically addressed: pattern recognition by cluster analysis (CA), linear discriminant analysis (LDA) and other less common techniques; modelling by multiple linear regression (MLR); investigation of spatial distribution of variables by geostatistics; calculation of fractionation patterns by a mixture resolution method (Chemometric Identification of Substrates and Element Distributions, CISED); optimization and characterization of extraction procedures by experimental design; other multivariate techniques less commonly applied.
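
    To make the cluster-analysis item concrete, here is a hedged sketch of CA applied to sequential-extraction profiles: samples with similar element fractionation patterns should cluster together. The data, step names, and profile shapes are synthetic inventions, not taken from the review.

```python
# Pattern recognition by cluster analysis on synthetic extraction profiles.
import numpy as np
from sklearn.preprocessing import StandardScaler
from sklearn.cluster import KMeans

rng = np.random.default_rng(1)
# Rows: samples; columns: fraction of an element released in each extraction
# step (exchangeable, reducible, oxidisable, residual) -- rows sum to 1.
anthropogenic = rng.dirichlet([8, 4, 2, 1], size=20)   # mobile-element profile
geogenic      = rng.dirichlet([1, 2, 4, 8], size=20)   # residual-bound profile
X = np.vstack([anthropogenic, geogenic])

X_std = StandardScaler().fit_transform(X)   # autoscaling, common in chemometrics
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X_std)
print(labels)  # samples with similar fractionation patterns share a label
```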

  18. Curability of Multiple Myeloma

    Directory of Open Access Journals (Sweden)

    Raymond Alexanian

    2012-01-01

    Among 792 patients with multiple myeloma treated from 1987 to 2010 and assessed after 18 months, there were 167 patients with complete remission. For the 60 patients treated between 1987 and 1998 and with long follow-up, the latest relapse occurred after 11.8 years, so that 13 patients have remained in sustained complete remission for longer than 12 years (range, 12–22 years). These results suggest that 3% of all patients treated during that period may be cured of multiple myeloma. In addition to immunofixation, more sensitive techniques for the detection of residual disease should be applied more consistently in patients with apparent complete remission in order to identify those with potential cure.

  19. Diversity Performance Analysis on Multiple HAP Networks

    Science.gov (United States)

    Dong, Feihong; Li, Min; Gong, Xiangwu; Li, Hongjun; Gao, Fengyue

    2015-01-01

    One of the main design challenges in wireless sensor networks (WSNs) is achieving a high-data-rate transmission for individual sensor devices. The high altitude platform (HAP) is an important communication relay platform for WSNs and next-generation wireless networks. Multiple-input multiple-output (MIMO) techniques provide the diversity and multiplexing gain, which can improve the network performance effectively. In this paper, a virtual MIMO (V-MIMO) model is proposed by networking multiple HAPs with the concept of multiple assets in view (MAV). In a shadowed Rician fading channel, the diversity performance is investigated. The probability density function (PDF) and cumulative distribution function (CDF) of the received signal-to-noise ratio (SNR) are derived. In addition, the average symbol error rate (ASER) with BPSK and QPSK is given for the V-MIMO model. The system capacity is studied for both perfect channel state information (CSI) and unknown CSI individually. The ergodic capacity with various SNR and Rician factors for different network configurations is also analyzed. The simulation results validate the effectiveness of the performance analysis. It is shown that the performance of the HAPs network in WSNs can be significantly improved by utilizing the MAV to achieve overlapping coverage, with the help of the V-MIMO techniques. PMID:26134102
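
    The diversity gain discussed above can be illustrated with a Monte Carlo estimate of the BPSK average error rate over a Rician fading channel with L-branch maximum-ratio combining. This is a deliberate simplification: the paper analyzes a shadowed Rician channel and a V-MIMO model, while the sketch below shows only the generic mechanism by which more branches steepen the error-rate curve.

```python
# BPSK average error rate over Rician fading with L-branch MRC (Monte Carlo).
import numpy as np
from scipy.special import erfc

def rician_gains(K, shape, rng):
    """Complex channel gains with Rician factor K and unit mean power."""
    los = np.sqrt(K / (K + 1))
    nlos = np.sqrt(1 / (2 * (K + 1))) * (rng.standard_normal(shape)
                                         + 1j * rng.standard_normal(shape))
    return los + nlos

def aser_bpsk(snr_db, L, K=5.0, trials=200_000, seed=0):
    rng = np.random.default_rng(seed)
    snr = 10 ** (snr_db / 10)
    h = rician_gains(K, (trials, L), rng)
    gamma = snr * np.sum(np.abs(h) ** 2, axis=1)   # post-MRC SNR
    return np.mean(0.5 * erfc(np.sqrt(gamma)))      # BPSK: Q(sqrt(2*gamma))

for L in (1, 2, 4):                                 # more branches -> steeper slope
    print(L, [f"{aser_bpsk(s, L):.2e}" for s in (0, 5, 10)])
```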

  1. Fast damage imaging using the time-reversal technique in the frequency–wavenumber domain

    International Nuclear Information System (INIS)

    Zhu, R; Huang, G L; Yuan, F G

    2013-01-01

    The time-reversal technique has been successfully used in structural health monitoring (SHM) for quantitative imaging of damage. However, the technique is very time-consuming when it is implemented in the time domain. In this paper, we study the technique in the frequency–wavenumber (f–k) domain for fast real-time imaging of multiple damage sites in plates using scattered flexural plate waves. Based on Mindlin plate theory, the time reversibility of dispersive flexural waves in an isotropic plate is theoretically investigated in the f–k domain. A fast damage imaging technique is developed by using the cross-correlation between the back-propagated scattered wavefield and the incident wavefield in the frequency domain. Numerical simulations demonstrate that the proposed technique can not only localize multiple damage sites but also potentially identify their sizes. Moreover, the time-reversal technique in the f–k domain is about two orders of magnitude faster than the method in the time domain. Finally, experimental testing of an on-line SHM system with a sparse piezoelectric sensor array is conducted for fast multiple-damage identification using the proposed technique.
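
    A highly simplified sketch of the frequency-domain imaging condition described above: the scattered field recorded on an array is back-propagated by phase conjugation and cross-correlated with the incident field at each image point, producing a peak at the scatterer. Non-dispersive scalar waves and free-space Green's functions stand in for the paper's flexural (Mindlin) plate waves; all geometry and values are illustrative assumptions.

```python
# Frequency-domain time-reversal imaging of a point scatterer (toy model).
import numpy as np

c = 1000.0                                    # wave speed [m/s]
freqs = np.linspace(40e3, 60e3, 21)           # processing band [Hz]
sensors = np.column_stack([np.linspace(-0.2, 0.2, 9), np.zeros(9)])
source = np.array([0.0, -0.1])
flaw = np.array([0.05, 0.12])                 # "damage" = point scatterer

def green(r, k):
    """2D far-field Green's function, exp(ikr)/sqrt(r)."""
    return np.exp(1j * k * r) / np.sqrt(np.maximum(r, 1e-6))

xs = np.linspace(-0.15, 0.15, 61)
ys = np.linspace(0.0, 0.2, 41)
image = np.zeros((ys.size, xs.size))
for f in freqs:
    k = 2 * np.pi * f / c
    # Scattered field at the array: source -> flaw -> each sensor.
    u_scat = green(np.linalg.norm(flaw - source), k) * \
             green(np.linalg.norm(sensors - flaw, axis=1), k)
    for iy, y in enumerate(ys):
        for ix, x in enumerate(xs):
            p = np.array([x, y])
            # Back-propagate the conjugated scattered field to point p ...
            back = np.sum(np.conj(u_scat) *
                          green(np.linalg.norm(sensors - p, axis=1), k))
            # ... and cross-correlate with the incident field there.
            u_inc = green(np.linalg.norm(p - source), k)
            image[iy, ix] += np.real(back * u_inc)

print("peak at", np.unravel_index(np.argmax(image), image.shape))  # ~ flaw
```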

  2. Use of multiple molecular subtyping techniques to investigate a Legionnaires' disease outbreak due to identical strains at two tourist lodges.

    Science.gov (United States)

    Mamolen, M; Breiman, R F; Barbaree, J M; Gunn, R A; Stone, K M; Spika, J S; Dennis, D T; Mao, S H; Vogt, R L

    1993-10-01

    A multistate outbreak of Legionnaires' disease occurred among nine tour groups of senior citizens returning from stays at one of two lodges in a Vermont resort in October 1987. Interviews and serologic studies of 383 (85%) of the tour members revealed 17 individuals (attack rate, 4.4%) with radiologically documented pneumonia and laboratory evidence of legionellosis. A survey of tour groups staying at four nearby lodges and of Vermont-area medical facilities revealed no additional cases. Environmental investigation of common tour stops revealed no likely aerosol source of Legionella infection outside the lodges. Legionella pneumophila serogroup 1 was isolated from water sources at both implicated lodges, and the monoclonal antibody subtype matched those of the isolates from six patients from whom clinical isolates were obtained. The cultures reacted with monoclonal antibodies MAB1, MAB2, 33G2, and 144C2 to yield a 1,2,5,7 or a Benidorm 030E pattern. The strains were also identical by alloenzyme electrophoresis and DNA ribotyping techniques. The epidemiologic and laboratory data suggest that concurrent outbreaks occurred following exposures to the same L. pneumophila serogroup 1 strain at two separate lodges. Multiple molecular subtyping techniques can provide essential information for epidemiologic investigations of Legionnaires' disease.

  3. Multiple Breed Validation of Five QTL Affecting Mastitis Resistance

    DEFF Research Database (Denmark)

    Vilkki, Johanna; Dolezal, Marlies A; Sahana, Goutam

    to mastitis were identified by GWAS using high-density SNP array in the Finnish Ayrshire and Brown Swiss breeds. These targeted regions were analyzed for polymorphisms from 20X whole-genome sequences of 38 ancestral bulls of the two populations. A set of 384 SNPs were selected based on their ranking from...... (on BTA3, BTA6, BTA8, BTA19, and BTA27) agreed across the breeds, but no identical associated SNPs were detected. Higher power (imputation to bigger population samples) will be needed to confirm results. On BTA6 the results indicate several QTL within a 5 Mb region. The results provide a basis...

  4. Combination of various data analysis techniques for efficient track reconstruction in very high multiplicity events

    Science.gov (United States)

    Siklér, Ferenc

    2017-08-01

    A novel combination of established data analysis techniques for reconstructing charged particles in high-energy collisions is proposed. It uses all information available in a collision event while keeping competing choices open as long as possible. Suitable track candidates are selected by transforming measured hits to a binned, three- or four-dimensional track parameter space. This is accomplished by the use of templates, taking advantage of the translational and rotational symmetries of the detectors. Track candidates and their corresponding hits (the nodes) form a usually highly connected network, a bipartite graph, in which multiple hit-to-track assignments (the edges) are allowed. In order to obtain a manageable problem, the graph is cut into many minigraphs by removing a few of its vulnerable components, edges and nodes. Finally, the hits are distributed among the track candidates by exploring a deterministic decision tree. A depth-limited search is performed, maximizing the number of hits on tracks and minimizing the sum of track-fit χ². Simplified but realistic models of LHC silicon trackers, including the relevant physics processes, are used to test and study the performance (efficiency, purity, timing) of the proposed method in the case of single or many simultaneous proton-proton collisions (high pileup), and for single heavy-ion collisions at the highest available energies.
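
    The first step above (a Hough-style vote of hits into a binned track-parameter space) can be sketched in two dimensions with straight-line tracks standing in for the full three- or four-parameter case. Everything below is an illustrative assumption, not the paper's implementation.

```python
# Binned parameter-space voting: accumulate (slope, intercept) votes per hit,
# then keep bins with enough entries as track candidates.
import numpy as np

rng = np.random.default_rng(2)
# Simulate hits from 3 straight tracks y = m*x + b on 8 layers, plus noise.
tracks = [(0.5, 1.0), (-0.3, 2.0), (1.2, -1.0)]
layers = np.arange(1.0, 9.0)
hits = [(x, m * x + b + rng.normal(0, 0.02)) for m, b in tracks for x in layers]
hits += [(rng.uniform(1, 8), rng.uniform(-4, 8)) for _ in range(20)]  # noise

m_grid = np.linspace(-2, 2, 81)          # slope bins
b_min, db, nb = -5.0, 0.1, 101           # intercept binning
acc = np.zeros((m_grid.size, nb), dtype=int)
for x, y in hits:
    for i, m in enumerate(m_grid):
        b = y - m * x                     # parameters consistent with this hit
        j = int(round((b - b_min) / db))
        if 0 <= j < nb:
            acc[i, j] += 1

# Track candidates: bins where at least 6 of the 8 layers voted.
for i, j in np.argwhere(acc >= 6):
    print(f"candidate: m ~ {m_grid[i]:.2f}, b ~ {b_min + j * db:.2f}")
```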

  5. MIMO Techniques for Jamming Threat Suppression in Vehicular Networks

    Directory of Open Access Journals (Sweden)

    Dimitrios Kosmanos

    2016-01-01

    Vehicular ad hoc networks have emerged as a promising field of research and development, since they will be able to accommodate a variety of applications, ranging from infotainment to traffic management and road safety. A specific security-related concern that vehicular ad hoc networks face is how to keep communication alive in the presence of radio frequency jamming, especially during emergency situations. Multiple-input multiple-output (MIMO) techniques have been shown to improve crucial parameters of vehicular communications, such as communication range and throughput. In this article, we investigate how MIMO techniques can be used in vehicular ad hoc networks as active defense mechanisms to mitigate jamming threats. To this end, a variation of spatial multiplexing, vSP4, is proposed, which achieves not only high throughput but also a stable diversity gain under interference from a malicious jammer.

  6. Genome-wide association study with 1000 genomes imputation identifies signals for nine sex hormone-related phenotypes.

    Science.gov (United States)

    Ruth, Katherine S; Campbell, Purdey J; Chew, Shelby; Lim, Ee Mun; Hadlow, Narelle; Stuckey, Bronwyn G A; Brown, Suzanne J; Feenstra, Bjarke; Joseph, John; Surdulescu, Gabriela L; Zheng, Hou Feng; Richards, J Brent; Murray, Anna; Spector, Tim D; Wilson, Scott G; Perry, John R B

    2016-02-01

    Genetic factors contribute strongly to sex hormone levels, yet knowledge of the regulatory mechanisms remains incomplete. Genome-wide association studies (GWAS) have identified only a small number of loci associated with sex hormone levels, with several reproductive hormones yet to be assessed. The aim of the study was to identify novel genetic variants contributing to the regulation of sex hormones. We performed GWAS using genotypes imputed from the 1000 Genomes reference panel. The study used genotype and phenotype data from a UK twin register. We included 2913 individuals (up to 294 males) from the Twins UK study, excluding individuals receiving hormone treatment. Phenotypes were standardised for age, sex, BMI, stage of menstrual cycle and menopausal status. We tested 7,879,351 autosomal SNPs for association with levels of dehydroepiandrosterone sulphate (DHEAS), oestradiol, free androgen index (FAI), follicle-stimulating hormone (FSH), luteinizing hormone (LH), prolactin, progesterone, sex hormone-binding globulin and testosterone. Eight independent genetic variants reached genome-wide significance (P < 5 × 10^-8), with minor allele frequencies of 1.3-23.9%. Novel signals included variants for progesterone (P = 7.68 × 10^-12), oestradiol (P = 1.63 × 10^-8) and FAI (P = 1.50 × 10^-8). A genetic variant near the FSHB gene was identified which influenced both FSH (P = 1.74 × 10^-8) and LH (P = 3.94 × 10^-9) levels. A separate locus on chromosome 7 was associated with both DHEAS (P = 1.82 × 10^-14) and progesterone (P = 6.09 × 10^-14). This study highlights loci that are relevant to reproductive function and suggests overlap in the genetic basis of hormone regulation.

  7. Hybrid multiple criteria decision-making methods

    DEFF Research Database (Denmark)

    Zavadskas, Edmundas Kazimieras; Govindan, K.; Antucheviciene, Jurgita

    2016-01-01

    Formal decision-making methods can be used to help improve the overall sustainability of industries and organisations. Recently, there has been a great proliferation of works aggregating sustainability criteria by using diverse multiple criteria decision-making (MCDM) techniques. A number of revi...

  8. A Clinical Decision Support System for Breast Cancer Patients

    Science.gov (United States)

    Fernandes, Ana S.; Alves, Pedro; Jarman, Ian H.; Etchells, Terence A.; Fonseca, José M.; Lisboa, Paulo J. G.

    This paper proposes a Web clinical decision support system for clinical oncologists and for breast cancer patients making prognostic assessments, using the particular characteristics of the individual patient. This system comprises three different prognostic modelling methodologies: the clinically widely used Nottingham prognostic index (NPI); Cox regression modelling; and a partial logistic artificial neural network with automatic relevance determination (PLANN-ARD). All three models yield a different prognostic index that can be analysed together in order to obtain a more accurate prognostic assessment of the patient. Missing data, a common issue in medical data, are handled in the aforementioned models using multiple imputation techniques. Risk group assignments are also provided through a methodology based on regression trees, where Boolean rules expressed in terms of patient characteristics can be obtained.
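
    One of the three components above, a Cox regression yielding a prognostic index, can be sketched as follows. This assumes the lifelines package; the covariate names and synthetic survival data are hypothetical, and the block is not the paper's actual model.

```python
# Cox proportional-hazards model whose linear predictor serves as a
# prognostic index for a new patient.
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter

rng = np.random.default_rng(3)
n = 300
df = pd.DataFrame({
    "tumour_size_mm": rng.normal(25, 8, n),
    "nodes_positive": rng.poisson(2, n),
})
risk = 0.03 * df["tumour_size_mm"] + 0.2 * df["nodes_positive"]
df["time_months"] = rng.exponential(60 / np.exp(risk - risk.mean()))
df["event"] = (rng.random(n) < 0.7).astype(int)  # 1 = event observed, 0 = censored

cph = CoxPHFitter()
cph.fit(df, duration_col="time_months", event_col="event")
# The fitted linear predictor beta'x ranks patients by risk.
new_patient = pd.DataFrame({"tumour_size_mm": [30.0], "nodes_positive": [3]})
print(cph.predict_partial_hazard(new_patient))
```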

  9. Advanced statistics: linear regression, part II: multiple linear regression.

    Science.gov (United States)

    Marill, Keith A

    2004-01-01

    The applications of simple linear regression in medical research are limited, because in most situations, there are multiple relevant predictor variables. Univariate statistical techniques such as simple linear regression use a single predictor variable, and they often may be mathematically correct but clinically misleading. Multiple linear regression is a mathematical technique used to model the relationship between multiple independent predictor variables and a single dependent outcome variable. It is used in medical research to model observational data, as well as in diagnostic and therapeutic studies in which the outcome is dependent on more than one factor. Although the technique generally is limited to data that can be expressed with a linear function, it benefits from a well-developed mathematical framework that yields unique solutions and exact confidence intervals for regression coefficients. Building on Part I of this series, this article acquaints the reader with some of the important concepts in multiple regression analysis. These include multicollinearity, interaction effects, and an expansion of the discussion of inference testing, leverage, and variable transformations to multivariate models. Examples from the first article in this series are expanded on using a primarily graphic, rather than mathematical, approach. The importance of the relationships among the predictor variables and the dependence of the multivariate model coefficients on the choice of these variables are stressed. Finally, concepts in regression model building are discussed.
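
    A short numeric companion to the concepts above: a multiple linear regression with an interaction term, plus a multicollinearity check via variance inflation factors. The data and variable names are synthetic illustrations, not taken from the article.

```python
# Multiple linear regression with interaction, plus VIF diagnostics.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf
from statsmodels.stats.outliers_influence import variance_inflation_factor

rng = np.random.default_rng(4)
n = 200
age = rng.normal(50, 12, n)
weight = 0.4 * age + rng.normal(70, 10, n)        # deliberately correlated with age
bp = 90 + 0.5 * age + 0.2 * weight + rng.normal(0, 8, n)
df = pd.DataFrame({"age": age, "weight": weight, "bp": bp})

model = smf.ols("bp ~ age * weight", data=df).fit()  # main effects + interaction
print(model.summary())

# VIF > ~10 is a common rule-of-thumb flag for multicollinearity.
X = model.model.exog
for idx, name in enumerate(model.model.exog_names):
    print(name, variance_inflation_factor(X, idx))
```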

  10. Handling Missing Data in Structural Equation Models in R: A Replication Study for Applied Researchers

    Science.gov (United States)

    Wolgast, Anett; Schwinger, Malte; Hahnel, Carolin; Stiensmeier-Pelster, Joachim

    2017-01-01

    Introduction: Multiple imputation (MI) is one of the most highly recommended methods for replacing missing values in research data. The scope of this paper is to demonstrate missing data handling in SEM by analyzing two modified data examples from educational psychology, and to give practical recommendations for applied researchers. Method: We…

  11. Multiple splenic abscesses

    Directory of Open Access Journals (Sweden)

    Harish Kumar

    2015-01-01

    Splenic abscess is a rare clinical entity with a poor prognosis, although modern imaging techniques have improved both diagnosis and outcome. Most patients who present with splenic abscess are immunocompromised owing to predisposing risk factors such as diabetes mellitus, intravenous drug abuse, trauma, bacterial endocarditis, human immunodeficiency virus (HIV) infection, chemotherapy, or steroids. Here, we present the rarer case of multiple splenic abscesses, with complications, in an immunocompetent healthy adult male without any risk factor.

  12. An Improved Wake Vortex Tracking Algorithm for Multiple Aircraft

    Science.gov (United States)

    Switzer, George F.; Proctor, Fred H.; Ahmad, Nashat N.; LimonDuparcmeur, Fanny M.

    2010-01-01

    The accurate tracking of vortex evolution from Large Eddy Simulation (LES) data is a complex and computationally intensive problem. The vortex tracking requires the analysis of very large three-dimensional and time-varying datasets. The complexity of the problem is further compounded by the fact that these vortices are embedded in a background turbulence field, and they may interact with the ground surface. Another level of complication can arise if vortices from multiple aircraft are simulated. This paper presents a new technique for post-processing LES data to obtain wake vortex tracks and wake intensities. The new approach isolates vortices by defining "regions of interest" (ROI) around each vortex and has the ability to identify vortex pairs from multiple aircraft. The paper describes the new methodology for tracking wake vortices and presents applications of the technique for single and multiple aircraft.
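
    A toy version of the region-of-interest idea described above: track vortex centres through time by searching for the vorticity extremum only inside a small box around each previous position. The 2D Gaussian-vortex field and drift model below are synthetic stand-ins for LES data, not the paper's algorithm.

```python
# ROI-based tracking of two counter-rotating vortex centres.
import numpy as np

nx = ny = 120
x = np.linspace(0, 1, nx); y = np.linspace(0, 1, ny)
X, Y = np.meshgrid(x, y)

def vorticity(t, centres, signs):
    """Two Gaussian vortices on a weak random background field."""
    rng = np.random.default_rng(t)
    field = 0.05 * rng.standard_normal((ny, nx))   # background "turbulence"
    for (cx, cy), s in zip(centres, signs):
        field += s * np.exp(-((X - cx) ** 2 + (Y - cy) ** 2) / 0.002)
    return field

signs = (+1.0, -1.0)
true = [np.array([0.35, 0.8]), np.array([0.65, 0.8])]
est = [c.copy() for c in true]          # ROI centres carried between frames
half = 10                               # ROI half-width in grid cells

for t in range(1, 6):
    for c in true:
        c += np.array([0.0, -0.03])     # vortex pair descends between frames
    w = vorticity(t, true, signs)
    for k, s in enumerate(signs):
        i = np.argmin(np.abs(y - est[k][1])); j = np.argmin(np.abs(x - est[k][0]))
        i0, j0 = max(i - half, 0), max(j - half, 0)
        roi = w[i0:i + half, j0:j + half]
        di, dj = np.unravel_index(np.argmax(s * roi), roi.shape)
        est[k] = np.array([x[j0 + dj], y[i0 + di]])   # updated centre estimate
    print(t, [np.round(e, 3) for e in est])
```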

  13. Estimation of Correlation Functions by the Random Decrement Technique

    DEFF Research Database (Denmark)

    Brincker, Rune; Krenk, Steen; Jensen, Jakob Laigaard

    The Random Decrement (RDD) Technique is a versatile technique for characterization of random signals in the time domain. In this paper a short review of the theoretical basis is given, and the technique is illustrated by estimating auto-correlation functions and cross-correlation functions on modal responses simulated by two SDOF ARMA models loaded by the same bandlimited white noise. The speed and the accuracy of the RDD technique is compared to the Fast Fourier Transform (FFT) technique. The RDD technique does not involve multiplications, but only additions. Therefore, the technique is very fast...
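
    A minimal sketch of the technique as described: average the signal segments that follow each up-crossing of a trigger level; under broad conditions the average is proportional to the auto-correlation function, and only additions are involved. The AR(2) response below is a simple stand-in for the paper's SDOF ARMA simulations.

```python
# Random Decrement estimate of an auto-correlation-shaped signature.
import numpy as np

def random_decrement(x, trigger, seg_len):
    """Average segments starting at each up-crossing of the trigger level."""
    idx = np.where((x[:-1] < trigger) & (x[1:] >= trigger))[0] + 1
    idx = idx[idx + seg_len <= x.size]
    segs = np.stack([x[i:i + seg_len] for i in idx])
    return segs.mean(axis=0), idx.size

# Lightly damped SDOF system driven by white noise, as an AR(2) filter.
rng = np.random.default_rng(5)
wn, zeta, dt = 2 * np.pi * 2.0, 0.02, 0.01
a1 = 2 * np.exp(-zeta * wn * dt) * np.cos(wn * dt * np.sqrt(1 - zeta ** 2))
a2 = -np.exp(-2 * zeta * wn * dt)
x = np.zeros(200_000)
e = rng.standard_normal(x.size)
for t in range(2, x.size):
    x[t] = a1 * x[t - 1] + a2 * x[t - 2] + e[t]

sig, n_trig = random_decrement(x, trigger=x.std(), seg_len=400)
print(n_trig, sig[:5])   # decaying oscillation ~ scaled auto-correlation
```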

  14. Estimation of Correlation Functions by the Random Decrement Technique

    DEFF Research Database (Denmark)

    Brincker, Rune; Krenk, Steen; Jensen, Jacob Laigaard

    1991-01-01

    The Random Decrement (RDD) Technique is a versatile technique for characterization of random signals in the time domain. In this paper a short review of the theoretical basis is given, and the technique is illustrated by estimating auto-correlation functions and cross-correlation functions on modal responses simulated by two SDOF ARMA models loaded by the same band-limited white noise. The speed and the accuracy of the RDD technique is compared to the Fast Fourier Transform (FFT) technique. The RDD technique does not involve multiplications, but only additions. Therefore, the technique is very fast...

  15. Estimation of Correlation Functions by the Random Decrement Technique

    DEFF Research Database (Denmark)

    Brincker, Rune; Krenk, Steen; Jensen, Jakob Laigaard

    1992-01-01

    The Random Decrement (RDD) Technique is a versatile technique for characterization of random signals in the time domain. In this paper a short review of the theoretical basis is given, and the technique is illustrated by estimating auto-correlation functions and cross-correlation functions on modal responses simulated by two SDOF ARMA models loaded by the same bandlimited white noise. The speed and the accuracy of the RDD technique is compared to the Fast Fourier Transform (FFT) technique. The RDD technique does not involve multiplications, but only additions. Therefore, the technique is very fast...

  16. Helical Tomotherapy for Whole-Brain Irradiation With Integrated Boost to Multiple Brain Metastases: Evaluation of Dose Distribution Characteristics and Comparison With Alternative Techniques

    International Nuclear Information System (INIS)

    Levegrün, Sabine; Pöttgen, Christoph; Wittig, Andrea; Lübcke, Wolfgang; Abu Jawad, Jehad; Stuschke, Martin

    2013-01-01

    Purpose: To quantitatively evaluate dose distribution characteristics achieved with helical tomotherapy (HT) for whole-brain irradiation (WBRT) with integrated boost (IB) to multiple brain metastases in comparison with alternative techniques. Methods and Materials: Dose distributions for 23 patients with 81 metastases treated with WBRT (30 Gy/10 fractions) and IB (50 Gy) were analyzed. The median number of metastases per patient (N_mets) was 3 (range, 2-8). Mean values of the composite planning target volume of all metastases per patient (PTV_mets) and of the individual metastasis planning target volume (PTV_ind met) were 8.7 ± 8.9 cm³ (range, 1.3-35.5 cm³) and 2.5 ± 4.5 cm³ (range, 0.19-24.7 cm³), respectively. Dose distributions in PTV_mets and PTV_ind met were evaluated with respect to dose conformity (conformation number [CN], RTOG conformity index [PITV]), target coverage (TC), and homogeneity (homogeneity index [HI], ratio of maximum dose to prescription dose [MDPD]). The dependence of dose conformity on target size and N_mets was investigated. The dose distribution characteristics were benchmarked against alternative irradiation techniques identified in a systematic literature review. Results: Mean ± standard deviation of dose distribution characteristics derived for PTV_mets amounted to CN = 0.790 ± 0.101, PITV = 1.161 ± 0.154, TC = 0.95 ± 0.01, HI = 0.142 ± 0.022, and MDPD = 1.147 ± 0.029, respectively, demonstrating high dose conformity with acceptable homogeneity. Corresponding numbers for PTV_ind met were CN = 0.708 ± 0.128, PITV = 1.174 ± 0.237, TC = 0.90 ± 0.10, HI = 0.140 ± 0.027, and MDPD = 1.129 ± 0.030, respectively. The target size had a statistically significant influence on dose conformity to PTV_mets (CN = 0.737 for PTV_mets ≤ 4.32 cm³ vs CN = 0.848 for PTV_mets > 4.32 cm³, P = .006), in contrast to N_mets. The achieved dose conformity to PTV_mets, assessed by both CN and PITV, was in all investigated volume strata …
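
    For reference, the conventional definitions behind most of the indices quoted above, following the RTOG (PITV) and van't Riet (CN) conventions. These are assumptions about the paper's usage, not taken from it; the homogeneity index HI in particular has several competing definitions and is omitted. Here TV is the target volume, PIV the volume enclosed by the prescription isodose, and TV_PIV their intersection.

```latex
\begin{align*}
\mathrm{TC} &= \frac{TV_{\mathrm{PIV}}}{TV}, &
\mathrm{PITV} &= \frac{PIV}{TV},\\
\mathrm{CN} &= \frac{TV_{\mathrm{PIV}}}{TV}\cdot\frac{TV_{\mathrm{PIV}}}{PIV}, &
\mathrm{MDPD} &= \frac{D_{\max}}{D_{\mathrm{prescription}}}
\end{align*}
```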

  17. Exploration of machine learning techniques in predicting multiple sclerosis disease course

    OpenAIRE

    Zhao, Yijun; Healy, Brian C.; Rotstein, Dalia; Guttmann, Charles R. G.; Bakshi, Rohit; Weiner, Howard L.; Brodley, Carla E.; Chitnis, Tanuja

    2017-01-01

    Objective To explore the value of machine learning methods for predicting multiple sclerosis disease course. Methods 1693 CLIMB study patients were classified as having an EDSS increase ≥ 1.5 (worsening) or not (non-worsening) at up to five years after baseline visit. Support vector machines (SVM) were used to build the classifier, and compared to logistic regression (LR) using demographic, clinical and MRI data obtained at years one and two to predict EDSS at five years follow-up. Results Baseline data...
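
    The model comparison described above can be sketched with scikit-learn: an SVM versus logistic regression for a binary worsening label, scored by cross-validation. The features and data below are synthetic placeholders for the demographic, clinical and MRI variables of the CLIMB cohort.

```python
# SVM vs logistic regression on a synthetic binary "worsening" outcome.
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(6)
n, p = 600, 12
X = rng.standard_normal((n, p))                    # stand-in year-1/2 features
logit = X[:, 0] + 0.5 * X[:, 1] - 0.8 * X[:, 2]
y = (rng.random(n) < 1 / (1 + np.exp(-logit))).astype(int)  # worsening label

svm = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0))
lr = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))
for name, clf in [("SVM", svm), ("LR", lr)]:
    acc = cross_val_score(clf, X, y, cv=5, scoring="accuracy")
    print(name, acc.mean().round(3))
```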

  18. Collectively loading programs in a multiple program multiple data environment

    Science.gov (United States)

    Aho, Michael E.; Attinella, John E.; Gooding, Thomas M.; Miller, Samuel J.

    2016-11-08

    Techniques are disclosed for loading programs efficiently in a parallel computing system. In one embodiment, nodes of the parallel computing system receive a load description file which indicates, for each program of a multiple program multiple data (MPMD) job, nodes which are to load the program. The nodes determine, using collective operations, a total number of programs to load and a number of programs to load in parallel. The nodes further generate a class route for each program to be loaded in parallel, where the class route generated for a particular program includes only those nodes on which the program needs to be loaded. For each class route, a node is selected using a collective operation to be a load leader which accesses a file system to load the program associated with a class route and broadcasts the program via the class route to other nodes which require the program.
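
    A hedged sketch of the collective-loading idea in this abstract, using mpi4py as a stand-in for the Blue Gene-style machinery it describes: ranks agree collectively on the number of programs, split into one communicator per program (playing the role of a class route), elect a load leader, and broadcast the program image within each group. The program assignment and "image" are placeholders.

```python
# Run with, e.g.: mpiexec -n 8 python load.py
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank = comm.Get_rank()

# Load description: which program each node should run (illustrative rule;
# program ids are assumed contiguous from 0 so MAX+1 counts them).
my_program = rank % 2
n_programs = comm.allreduce(my_program, op=MPI.MAX) + 1   # collective count

# One sub-communicator per program stands in for a class route.
route = comm.Split(color=my_program, key=rank)

# The lowest rank in each route acts as load leader: only it would touch
# the file system; it then broadcasts the image along its route.
if route.Get_rank() == 0:
    image = f"<bytes of program {my_program}>"   # placeholder for a real read
else:
    image = None
image = route.bcast(image, root=0)
print(f"global rank {rank}: got {image!r} ({n_programs} programs total)")
```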

  19. Comparison of HapMap and 1000 Genomes Reference Panels in a Large-Scale Genome-Wide Association Study.

    Directory of Open Access Journals (Sweden)

    Paul S de Vries

    An increasing number of genome-wide association (GWA) studies are now using the higher resolution 1000 Genomes Project reference panel (1000G) for imputation, with the expectation that 1000G imputation will lead to the discovery of additional associated loci when compared to HapMap imputation. In order to assess the improvement of 1000G over HapMap imputation in identifying associated loci, we compared the results of GWA studies of circulating fibrinogen based on the two reference panels. Using both HapMap and 1000G imputation we performed a meta-analysis of 22 studies comprising the same 91,953 individuals. We identified six additional signals using 1000G imputation, while 29 loci were associated using both HapMap and 1000G imputation. One locus identified using HapMap imputation was not significant using 1000G imputation. The genome-wide significance threshold of 5 × 10^-8 is based on the number of independent statistical tests using HapMap imputation, and 1000G imputation may lead to further independent tests that should be corrected for. When using a stricter Bonferroni correction for the 1000G GWA study (P-value < 2.5 × 10^-8), the number of loci significant only using HapMap imputation increased to 4 while the number of loci significant only using 1000G decreased to 5. In conclusion, 1000G imputation enabled the identification of 20% more loci than HapMap imputation, although the advantage of 1000G imputation became less clear when a stricter Bonferroni correction was used. More generally, our results provide insights that are applicable to the implementation of other dense reference panels that are under development.
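
    The multiple-testing logic behind the two thresholds above, in its simplest (Bonferroni) form: with family-wise error rate α and M effectively independent tests, the per-test threshold is α/M. The conventional 5 × 10^-8 corresponds to α = 0.05 with roughly one million independent HapMap-era tests; doubling the effective number of tests halves the threshold.

```latex
\[
  P_{\mathrm{threshold}} = \frac{\alpha}{M}, \qquad
  \frac{0.05}{10^{6}} = 5\times 10^{-8}, \qquad
  \frac{0.05}{2\times 10^{6}} = 2.5\times 10^{-8}
\]
```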

  20. Modern imaging techniques in patients with multiple myeloma

    International Nuclear Information System (INIS)

    Bannas, Peter; Adam, G.; Derlin, T.; Kroeger, N.

    2013-01-01

    Imaging studies are essential for both diagnosis and initial staging of multiple myeloma, as well as for differentiation from other monoclonal plasma cell diseases. Apart from conventional radiography, a variety of newer imaging modalities including whole-body low-dose CT, whole-body MRI and 18F-FDG PET/CT may be used for detection of osseous and extraosseous myeloma manifestations. Despite known limitations such as limited sensitivity and specificity and the inability to detect extraosseous lesions, conventional radiography still remains the gold standard for staging newly diagnosed myeloma, partly due to its wide availability and low cost. Whole-body low-dose CT is increasingly used due to its higher sensitivity for the detection of osseous lesions and its ability to diagnose extraosseous lesions, and is replacing conventional radiography at selected centres. The highest sensitivity for both detection of bone marrow disease and extraosseous lesions can be achieved with whole-body MRI or 18F-FDG PET/CT. Diffuse bone marrow infiltration may be visualized by whole-body MRI with high sensitivity. Whole-body MRI is recommended at least in all patients with normal conventional radiography and in all patients with an apparently solitary plasmacytoma of bone. To obtain the most precise readings, optimized examination protocols and dedicated radiologists and nuclear medicine physicians familiar with the complex and variable morphologies of myeloma lesions are required.