WorldWideScience

Sample records for evaluate imputation reliability

  1. Comprehensive evaluation of imputation performance in African Americans.

    Science.gov (United States)

    Chanda, Pritam; Yuhki, Naoya; Li, Man; Bader, Joel S; Hartz, Alex; Boerwinkle, Eric; Kao, W H Linda; Arking, Dan E

    2012-07-01

    Imputation of genome-wide single-nucleotide polymorphism (SNP) arrays to a larger known reference panel of SNPs has become a standard and an essential part of genome-wide association studies. However, little is known about the behavior of imputation in African Americans with respect to the different imputation algorithms, the reference population(s) and the reference SNP panels used. Genome-wide SNP data (Affymetrix 6.0) from 3207 African American samples in the Atherosclerosis Risk in Communities Study (ARIC) was used to systematically evaluate imputation quality and yield. Imputation was performed with the imputation algorithms MACH, IMPUTE and BEAGLE using several combinations of three reference panels of HapMap III (ASW, YRI and CEU) and 1000 Genomes Project (pilot 1 YRI June 2010 release, EUR and AFR August 2010 and June 2011 releases) panels with SNP data on chromosomes 18, 20 and 22. About 10% of the directly genotyped SNPs from each chromosome were masked, and SNPs common between the reference panels were used for evaluating the imputation quality using two statistical metrics: concordance accuracy and Cohen's kappa (κ) coefficient. The dependencies of these metrics on the minor allele frequencies (MAF) and specific genotype categories (minor allele homozygotes, heterozygotes and major allele homozygotes) were thoroughly investigated to determine the best panel and method for imputation in African Americans. In addition, the power to detect imputed SNPs associated with simulated phenotypes was studied using the mean genotype of each masked SNP in the imputed data. Our results indicate that the genotype concordances after stratification into each genotype category and Cohen's κ coefficient are considerably better equipped to differentiate imputation performance compared with the traditionally used total concordance statistic, and both statistics improved with increasing MAF irrespective of the imputation method. We also find that both MACH and IMPUTE
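
    For readers who want to reproduce the two evaluation metrics named in this record, a minimal sketch follows, assuming genotype calls coded 0/1/2 (major homozygote, heterozygote, minor homozygote); the example data are illustrative, not from the study:

```python
import numpy as np

def concordance_and_kappa(true_g, imputed_g):
    """Concordance accuracy and Cohen's kappa between true and imputed
    genotype calls, coded 0/1/2 (major hom., het., minor hom.)."""
    true_g, imputed_g = np.asarray(true_g), np.asarray(imputed_g)
    p_o = np.mean(true_g == imputed_g)            # observed (total) concordance
    # Chance agreement from the marginal genotype frequencies.
    p_e = sum(np.mean(true_g == g) * np.mean(imputed_g == g) for g in (0, 1, 2))
    kappa = (p_o - p_e) / (1 - p_e)
    # Concordance within each genotype category, as stratified in the study.
    per_class = {g: float(np.mean(imputed_g[true_g == g] == g)) for g in (0, 1, 2)}
    return float(p_o), float(kappa), per_class

truth = np.array([0, 0, 1, 2, 1, 0, 2, 1, 0, 0])
guess = np.array([0, 0, 1, 1, 1, 0, 2, 1, 0, 1])
print(concordance_and_kappa(truth, guess))
```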

  2. Evaluation of the imputation performance of the program IMPUTE in an admixed sample from Mexico City using several model designs

    Directory of Open Access Journals (Sweden)

    Krithika S

    2012-05-01

    Full Text Available Abstract Background We explored the imputation performance of the program IMPUTE in an admixed sample from Mexico City. The following issues were evaluated: (a) the impact of different reference panels (HapMap vs. 1000 Genomes) on imputation; (b) potential differences in imputation performance between single-step vs. two-step (phasing and imputation) approaches; (c) the effect of different INFO score thresholds on imputation performance; and (d) imputation performance in common vs. rare markers. Methods The sample from Mexico City comprised 1,310 individuals genotyped with the Affymetrix 5.0 array. We randomly masked 5% of the markers directly genotyped on chromosome 12 (n = 1,046) and compared the imputed genotypes with the microarray genotype calls. Imputation was carried out with the program IMPUTE. The concordance rates between the imputed and observed genotypes were used as a measure of imputation accuracy, and the proportion of non-missing genotypes as a measure of imputation efficacy. Results The single-step imputation approach produced slightly higher concordance rates than the two-step strategy (99.1% vs. 98.4% when using the HapMap phase II combined panel), but at the expense of a lower proportion of non-missing genotypes (85.5% vs. 90.1%). The 1000 Genomes reference sample produced concordance rates similar to the HapMap phase II panel (98.4% for both datasets, using the two-step strategy). However, the 1000 Genomes reference sample substantially increased the proportion of non-missing genotypes (94.7% vs. 90.1%). Rare variants (... Conclusions The program IMPUTE had an excellent imputation performance for common alleles in an admixed sample from Mexico City, which has primarily Native American (62%) and European (33%) contributions. Genotype concordances were higher than 98.4% using all the imputation strategies, in spite of the fact that no Native American samples are present in the HapMap and 1000 Genomes reference panels. The best balance of
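
    The masking-based evaluation described here (randomly hide a fraction of genotyped markers, re-impute, then score concordance as accuracy and the proportion of non-missing calls as efficacy) can be sketched as follows; `impute_fn` is a hypothetical stand-in for any imputer that returns NaN where it makes no confident call:

```python
import numpy as np

rng = np.random.default_rng(42)

def evaluate_imputation(genotypes, impute_fn, mask_frac=0.05):
    """Mask a fraction of observed genotypes, re-impute, and score.

    genotypes: (n_samples, n_markers) array coded 0/1/2.
    impute_fn: callable returning an array of the same shape, with np.nan
               where no confident call was made (hypothetical interface).
    """
    masked = genotypes.astype(float).copy()
    cells = np.argwhere(~np.isnan(masked))
    chosen = cells[rng.choice(len(cells), int(mask_frac * len(cells)), replace=False)]
    masked[chosen[:, 0], chosen[:, 1]] = np.nan
    imputed = impute_fn(masked)
    truth = genotypes[chosen[:, 0], chosen[:, 1]]
    calls = imputed[chosen[:, 0], chosen[:, 1]]
    called = ~np.isnan(calls)
    efficacy = called.mean()                              # non-missing proportion
    concordance = (calls[called] == truth[called]).mean() # accuracy on called SNPs
    return float(concordance), float(efficacy)

# Trivial baseline imputer for demonstration: fill with the rounded column mean.
def mode_like_impute(m):
    out = m.copy()
    for j in range(m.shape[1]):
        out[np.isnan(m[:, j]), j] = np.round(np.nanmean(m[:, j]))
    return out

g = rng.integers(0, 3, size=(100, 50))
print(evaluate_imputation(g, mode_like_impute))
```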

  3. Effect of imputing markers from a low-density chip on the reliability of genomic breeding values in Holstein populations

    DEFF Research Database (Denmark)

    Dassonneville, R; Brøndum, Rasmus Froberg; Druet, T

    2011-01-01

    The purpose of this study was to investigate the imputation error and loss of reliability of direct genomic values (DGV) or genomically enhanced breeding values (GEBV) when using genotypes imputed from a 3,000-marker single nucleotide polymorphism (SNP) panel to a 50,000-marker SNP panel. Data co...

  4. An Overview and Evaluation of Recent Machine Learning Imputation Methods Using Cardiac Imaging Data.

    Science.gov (United States)

    Liu, Yuzhe; Gopalakrishnan, Vanathi

    2017-03-01

    Many clinical research datasets have a large percentage of missing values that directly impacts their usefulness in yielding high accuracy classifiers when used for training in supervised machine learning. While missing value imputation methods have been shown to work well with smaller percentages of missing values, their ability to impute sparse clinical research data can be problem specific. We previously attempted to learn quantitative guidelines for ordering cardiac magnetic resonance imaging during the evaluation for pediatric cardiomyopathy, but missing data significantly reduced our usable sample size. In this work, we sought to determine if increasing the usable sample size through imputation would allow us to learn better guidelines. We first review several machine learning methods for estimating missing data. Then, we apply four popular methods (mean imputation, decision tree, k-nearest neighbors, and self-organizing maps) to a clinical research dataset of pediatric patients undergoing evaluation for cardiomyopathy. Using Bayesian Rule Learning (BRL) to learn ruleset models, we compared the performance of imputation-augmented models versus unaugmented models. We found that all four imputation-augmented models performed similarly to unaugmented models. While imputation did not improve performance, it did provide evidence for the robustness of our learned models.
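
    Two of the four imputation methods mentioned (mean imputation and k-nearest neighbors) have standard implementations in scikit-learn; decision-tree and self-organizing-map imputers do not, and are omitted here. A minimal sketch on synthetic MCAR data, not the study's clinical dataset:

```python
import numpy as np
from sklearn.impute import SimpleImputer, KNNImputer

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 8))            # stand-in for clinical features
X[rng.random(X.shape) < 0.3] = np.nan    # ~30% missing, MCAR for simplicity

X_mean = SimpleImputer(strategy="mean").fit_transform(X)  # mean imputation
X_knn = KNNImputer(n_neighbors=5).fit_transform(X)        # k-nearest neighbors
# Each completed matrix can then be fed to the same learner (BRL in the study)
# to compare imputation-augmented models against unaugmented ones.
```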

  5. Evaluation of Multiple Imputation in Missing Data Analysis: An Application on Repeated Measurement Data in Animal Science

    Directory of Open Access Journals (Sweden)

    Gazel Ser

    2015-12-01

    Full Text Available The purpose of this study was to evaluate the performance of the multiple imputation method, within the framework of the general linear mixed model, when observations are missing at random or missing completely at random. The data comprised a total of 77 Norduz ram lambs at 7 months of age. After slaughtering, pH values measured at five different time points were taken as the dependent variable. In addition, hot carcass weight, muscle glycogen level and fasting duration were included as independent variables in the model. Starting from the dependent variable without missing observations, two missing-observation structures, Missing Completely at Random (MCAR) and Missing at Random (MAR), were created by deleting observations at certain rates (10% and 25%). Complete data sets were then obtained from the data sets with missing observations using multiple imputation (MI). The results obtained by applying the general linear mixed model to the MI-completed data sets were compared to the results for the complete data. The mixed models fitted to the complete data and to the MI data sets selected the same covariance structures, and the parameter estimates and standard errors from the MI data sets were close to those from the complete data. In conclusion, this study showed that reliable mixed-model results can be obtained when MI is chosen as the imputation method, for both missingness structures and both missingness rates.
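
    A minimal sketch of how the two missingness structures can be generated at the stated rates; the MAR mechanism shown (deletion probability driven by a fully observed covariate) is one common construction, not necessarily the authors':

```python
import numpy as np

rng = np.random.default_rng(1)

def make_mcar(y, rate):
    """Delete observations completely at random at the given rate."""
    y = y.astype(float).copy()
    y[rng.random(y.shape) < rate] = np.nan
    return y

def make_mar(y, x, rate):
    """Delete y-values with probability driven by a fully observed
    covariate x (higher x -> higher deletion chance); mean rate ~= rate."""
    y = y.astype(float).copy()
    ranks = np.argsort(np.argsort(x)) / (len(x) - 1)   # uniform on [0, 1]
    y[rng.random(len(y)) < rate * 2 * ranks] = np.nan
    return y

pH = rng.normal(6.0, 0.3, 77)          # e.g. 77 lambs' pH values
weight = rng.normal(30.0, 5.0, 77)     # e.g. hot carcass weight (observed)
pH_mcar10 = make_mcar(pH, 0.10)
pH_mar25 = make_mar(pH, weight, 0.25)
```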

  6. A comprehensive evaluation of popular proteomics software workflows for label-free proteome quantification and imputation.

    Science.gov (United States)

    Välikangas, Tommi; Suomi, Tomi; Elo, Laura L

    2017-05-31

    Label-free mass spectrometry (MS) has developed into an important tool applied in various fields of biological and life sciences. Several software packages exist to process the raw MS data into quantified protein abundances, including open source and commercial solutions. Each package includes a set of unique algorithms for the different tasks of the MS data processing workflow. While many of these algorithms have been compared separately, a thorough and systematic evaluation of their overall performance is missing. Moreover, systematic information is lacking about the number of missing values produced by the different proteomics software packages and the capabilities of different data imputation methods to account for them. In this study, we evaluated the performance of five popular quantitative label-free proteomics software workflows using four different spike-in data sets. Our extensive testing included the number of proteins quantified and the number of missing values produced by each workflow, the accuracy of detecting differential expression and logarithmic fold change, and the effect of different imputation and filtering methods on the differential expression results. We found that the Progenesis software performed consistently well in the differential expression analysis and produced few missing values. The missing values produced by the other software packages decreased their performance, but this difference could be mitigated using proper data filtering or imputation methods. Among the imputation methods, we found that local least squares (lls) regression imputation consistently increased the performance of the software in the differential expression analysis, and a combination of data filtering and local least squares imputation increased performance the most in the tested data sets. © The Author 2017. Published by Oxford University Press.
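
    A compact sketch of local least squares (lls) imputation as commonly described (regress each incomplete protein on its k most-correlated complete proteins); this is an illustrative implementation assuming enough fully observed proteins, not the package code benchmarked in the study:

```python
import numpy as np

def lls_impute(X, k=10):
    """Local least squares imputation (sketch). X: proteins x runs, with
    np.nan marking missing intensities."""
    X = X.astype(float).copy()
    complete = X[~np.isnan(X).any(axis=1)]           # candidate predictor proteins
    for i in np.where(np.isnan(X).any(axis=1))[0]:
        obs = ~np.isnan(X[i])
        # |correlation| of protein i with each complete protein, on observed runs.
        r = np.array([abs(np.corrcoef(X[i, obs], c[obs])[0, 1]) for c in complete])
        nbrs = complete[np.argsort(r)[-k:]]          # k most-correlated neighbors
        # Least squares fit on observed runs, prediction on the missing runs.
        coef, *_ = np.linalg.lstsq(nbrs[:, obs].T, X[i, obs], rcond=None)
        X[i, ~obs] = nbrs[:, ~obs].T @ coef
    return X
```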

  7. MVIAeval: a web tool for comprehensively evaluating the performance of a new missing value imputation algorithm.

    Science.gov (United States)

    Wu, Wei-Sheng; Jhou, Meng-Jhun

    2017-01-13

    Missing value imputation is important for microarray data analyses because microarray data with missing values would significantly degrade the performance of the downstream analyses. Although many microarray missing value imputation algorithms have been developed, an objective and comprehensive performance comparison framework is still lacking. To solve this problem, we previously proposed a framework which can perform a comprehensive performance comparison of different existing algorithms; the performance of a new algorithm can also be evaluated within this framework. However, constructing our framework is not an easy task for interested researchers. To save researchers' time and effort, here we present an easy-to-use web tool named MVIAeval (Missing Value Imputation Algorithm evaluator) which implements our performance comparison framework. MVIAeval provides a user-friendly interface allowing users to upload the R code of their new algorithm and select (i) the test datasets among 20 benchmark microarray (time series and non-time series) datasets, (ii) the compared algorithms among 12 existing algorithms, (iii) the performance indices from three existing ones, (iv) the comprehensive performance scores from two possible choices, and (v) the number of simulation runs. The comprehensive performance comparison results are then generated and shown as both figures and tables. MVIAeval is a useful tool for researchers to easily conduct a comprehensive and objective performance evaluation of their newly developed missing value imputation algorithm for microarray data, or for any data which can be represented in matrix form (e.g. NGS data or proteomics data). Thus, MVIAeval will greatly expedite progress in the research of missing value imputation algorithms.

  8. Impact of pre-imputation SNP-filtering on genotype imputation results.

    Science.gov (United States)

    Roshyara, Nab Raj; Kirsten, Holger; Horn, Katrin; Ahnert, Peter; Scholz, Markus

    2014-08-12

    Imputation of partially missing or unobserved genotypes is an indispensable tool for SNP data analyses. However, research and understanding of the impact of initial SNP-data quality control on imputation results is still limited. In this paper, we aim to evaluate the effect of different strategies of pre-imputation quality filtering on the performance of the widely used imputation algorithms MaCH and IMPUTE. We considered three scenarios: imputation of partially missing genotypes with usage of an external reference panel, without usage of an external reference panel, and imputation of completely un-typed SNPs using an external reference panel. We first created various datasets applying different SNP quality filters and masking certain percentages of randomly selected high-quality SNPs. We imputed these SNPs and compared the results between the different filtering scenarios by using established and newly proposed measures of imputation quality. While the established measures assess the certainty of imputation results, our newly proposed measures focus on the agreement with true genotypes. These measures showed that pre-imputation SNP-filtering can be detrimental to imputation quality. Moreover, the strongest drivers of imputation quality were in general the burden of missingness and the number of SNPs used for imputation. We also found that using a reference panel always improves imputation quality of partially missing genotypes. MaCH performed slightly better than IMPUTE2 in most of our scenarios. Again, these results were more pronounced when using our newly defined measures of imputation quality. Even moderate filtering has a detrimental effect on imputation quality. Therefore, little or no SNP filtering prior to imputation appears to be the best strategy for imputing small to moderately sized datasets. Our results also showed that for these datasets, MaCH performs slightly better than IMPUTE2 in most scenarios at the cost of increased computing

  9. Reliability evaluation of power systems

    CERN Document Server

    Billinton, Roy

    1996-01-01

    The Second Edition of this well-received textbook presents over a decade of new research in power system reliability, while maintaining the general concept, structure, and style of the original volume. This edition features new chapters on the growing areas of Monte Carlo simulation and reliability economics. In addition, chapters cover the latest developments in techniques and their application to real problems. The text also explores the progress occurring in the structure, planning, and operation of real power systems due to changing ownership, regulation, and access. This work serves as a companion volume to Reliability Evaluation of Engineering Systems: Second Edition (1992).

  10. RELIABILITY EVALUATION OF PRIMARY CELLS

    African Journals Online (AJOL)

    Dr Obe

    ABSTRACT. Evaluation of the reliability of a primary cell took place in three stages: 192 cells went through a slow-discharge test. A designed experiment was conducted on 144 cells; there were three factors in the experiment: storage temperature (three levels), thermal shock (two levels) and date code (two levels). 16 cells ...

  11. Multiple imputation of missing covariates with non-linear effects and interactions: an evaluation of statistical methods.

    Science.gov (United States)

    Seaman, Shaun R; Bartlett, Jonathan W; White, Ian R

    2012-04-10

    Multiple imputation is often used for missing data. When a model contains as covariates more than one function of a variable, it is not obvious how best to impute missing values in these covariates. Consider a regression with outcome Y and covariates X and X2. In 'passive imputation' a value X* is imputed for X and then X2 is imputed as (X*)2. A recent proposal is to treat X2 as 'just another variable' (JAV) and impute X and X2 under multivariate normality. We use simulation to investigate the performance of three methods that can easily be implemented in standard software: 1) linear regression of X on Y to impute X then passive imputation of X2; 2) the same regression but with predictive mean matching (PMM); and 3) JAV. We also investigate the performance of analogous methods when the analysis involves an interaction, and study the theoretical properties of JAV. The application of the methods when complete or incomplete confounders are also present is illustrated using data from the EPIC Study. JAV gives consistent estimation when the analysis is linear regression with a quadratic or interaction term and X is missing completely at random. When X is missing at random, JAV may be biased, but this bias is generally less than for passive imputation and PMM. Coverage for JAV was usually good when bias was small. However, in some scenarios with a more pronounced quadratic effect, bias was large and coverage poor. When the analysis was logistic regression, JAV's performance was sometimes very poor. PMM generally improved on passive imputation, in terms of bias and coverage, but did not eliminate the bias. Given the current state of available software, JAV is the best of a set of imperfect imputation methods for linear regression with a quadratic or interaction effect, but should not be used for logistic regression.
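
    The contrast between passive imputation and JAV can be demonstrated with a small simulation; note that the JAV step below imputes X² by its own regression on Y, as a simplified stand-in for joint multivariate normal imputation of X and X²:

```python
import numpy as np

rng = np.random.default_rng(7)
n = 5000
x = rng.normal(size=n)
y = 1 + 2 * x + 0.5 * x**2 + rng.normal(size=n)   # true coefficients (1, 2, 0.5)
miss = rng.random(n) < 0.4                        # X missing completely at random
cc = ~miss                                        # complete cases

def fit(xf, x2f):
    """OLS of y on [1, x, x^2] after filling in the covariates."""
    A = np.column_stack([np.ones(n), xf, x2f])
    return np.linalg.lstsq(A, y, rcond=None)[0]

# Passive: impute X by linear regression on Y (plus noise), then square it.
b = np.polyfit(y[cc], x[cc], 1)
sd = np.std(x[cc] - np.polyval(b, y[cc]))
x_pass = x.copy()
x_pass[miss] = np.polyval(b, y[miss]) + rng.normal(0, sd, miss.sum())
print("passive:", fit(x_pass, x_pass**2))

# JAV: treat X^2 as "just another variable" and impute it directly,
# rather than squaring the imputed X.
b2 = np.polyfit(y[cc], x[cc]**2, 1)
sd2 = np.std(x[cc]**2 - np.polyval(b2, y[cc]))
x2_jav = x**2
x2_jav[miss] = np.polyval(b2, y[miss]) + rng.normal(0, sd2, miss.sum())
print("JAV:    ", fit(x_pass, x2_jav))
```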

  12. 16 CFR 1115.11 - Imputed knowledge.

    Science.gov (United States)

    2010-01-01

    16 CFR 1115.11 (Commercial Practices; Product Hazard Reports, General Interpretation), § 1115.11 Imputed knowledge. (a) In evaluating whether or ... care to ascertain the truth of complaints or other representations. This includes the knowledge a firm ...

  13. Model checking in multiple imputation: an overview and case study

    Directory of Open Access Journals (Sweden)

    Cattram D. Nguyen

    2017-08-01

    Full Text Available Abstract Background Multiple imputation has become very popular as a general-purpose method for handling missing data. The validity of multiple-imputation-based analyses relies on the use of an appropriate model to impute the missing values. Despite the widespread use of multiple imputation, there are few guidelines available for checking imputation models. Analysis In this paper, we provide an overview of currently available methods for checking imputation models. These include graphical checks and numerical summaries, as well as simulation-based methods such as posterior predictive checking. These model checking techniques are illustrated using an analysis affected by missing data from the Longitudinal Study of Australian Children. Conclusions As multiple imputation becomes further established as a standard approach for handling missing data, it will become increasingly important that researchers employ appropriate model checking approaches to ensure that reliable results are obtained when using this method.

  14. Model checking in multiple imputation: an overview and case study.

    Science.gov (United States)

    Nguyen, Cattram D; Carlin, John B; Lee, Katherine J

    2017-01-01

    Multiple imputation has become very popular as a general-purpose method for handling missing data. The validity of multiple-imputation-based analyses relies on the use of an appropriate model to impute the missing values. Despite the widespread use of multiple imputation, there are few guidelines available for checking imputation models. In this paper, we provide an overview of currently available methods for checking imputation models. These include graphical checks and numerical summaries, as well as simulation-based methods such as posterior predictive checking. These model checking techniques are illustrated using an analysis affected by missing data from the Longitudinal Study of Australian Children. As multiple imputation becomes further established as a standard approach for handling missing data, it will become increasingly important that researchers employ appropriate model checking approaches to ensure that reliable results are obtained when using this method.
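
    One simple numerical check of an imputation model, in the spirit described above, is to compare summary statistics of observed versus imputed values across the completed data sets; discrepancies are not necessarily errors under MAR, but they flag models worth inspecting. A sketch assuming a dict-of-arrays data layout (illustrative, not the study's code):

```python
import numpy as np

rng = np.random.default_rng(5)

def check_observed_vs_imputed(original, completed_list, var):
    """Compare summary statistics of observed values with those of the
    imputed values in each multiply imputed (completed) data set."""
    obs = original[var][~np.isnan(original[var])]
    print(f"{var} observed:     mean={obs.mean():6.2f}  sd={obs.std():5.2f}")
    for m, comp in enumerate(completed_list, 1):
        imp = comp[var][np.isnan(original[var])]   # the filled-in entries only
        print(f"{var} imputation {m}: mean={imp.mean():6.2f}  sd={imp.std():5.2f}")

# Toy data: one variable with ~30% missing and two "completed" data sets.
y = rng.normal(50, 10, 200)
y[rng.random(200) < 0.3] = np.nan
completed = [{"y": np.where(np.isnan(y), rng.normal(50, 10, 200), y)}
             for _ in range(2)]
check_observed_vs_imputed({"y": y}, completed, "y")
```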

  15. Evaluation of MHTGR fuel reliability

    Energy Technology Data Exchange (ETDEWEB)

    Wichner, R.P. [Oak Ridge National Lab., TN (United States); Barthold, W.P. [Barthold Associates, Inc., Knoxville, TN (United States)

    1992-07-01

    Modular High-Temperature Gas-Cooled Reactor (MHTGR) concepts that house the reactor vessel in a tight but unsealed reactor building place heightened importance on the reliability of the fuel particle coatings as fission product barriers. Though accident consequence analyses continue to show favorable results, the increased dependence on one type of barrier, in addition to a number of other factors, has caused the Nuclear Regulatory Commission (NRC) to consider conservative assumptions regarding fuel behavior. For this purpose, the concept termed "weak fuel" has been proposed on an interim basis. "Weak fuel" is a penalty imposed on consequence analyses whereby the fuel is assumed to respond less favorably to environmental conditions than predicted by behavioral models. The rationale for adopting this penalty, as well as conditions that would permit its reduction or elimination, are examined in this report. The evaluation includes an examination of possible fuel-manufacturing defects, quality-control procedures for defect detection, and the mechanisms by which fuel defects may lead to failure.

  16. Reliability evaluation for offshore wind farms

    DEFF Research Database (Denmark)

    Zhao, Menghua; Blåbjerg, Frede; Chen, Zhe

    2005-01-01

    In this paper, a new reliability index, Loss Of Generation Ratio Probability (LOGRP), is proposed for evaluating the reliability of an electrical system for offshore wind farms, which emphasizes the design of wind farms rather than the adequacy for specific load demand. A practical method to calculate LOGRP of offshore wind farms is proposed and evaluated.

  17. Comparing performance of modern genotype imputation methods in different ethnicities

    Science.gov (United States)

    Roshyara, Nab Raj; Horn, Katrin; Kirsten, Holger; Ahnert, Peter; Scholz, Markus

    2016-10-01

    A variety of modern software packages are available for genotype imputation relying on advanced concepts such as pre-phasing of the target dataset or utilization of admixed reference panels. In this study, we performed a comprehensive evaluation of the accuracy of modern imputation methods on the basis of the publicly available POPRES samples. Good quality genotypes were masked and re-imputed by different imputation frameworks: namely MaCH, IMPUTE2, MaCH-Minimac, SHAPEIT-IMPUTE2 and MaCH-Admix. Results were compared to evaluate the relative merit of pre-phasing and the usage of admixed references. We showed that the pre-phasing framework SHAPEIT-IMPUTE2 can overestimate the certainty of genotype distributions, resulting in the lowest percentage of correctly imputed genotypes in our case. MaCH-Minimac performed better than SHAPEIT-IMPUTE2. Pre-phasing always reduced imputation accuracy. IMPUTE2 and MaCH-Admix, both relying on admixed reference panels, showed comparable results. MaCH showed superior results if well-matched references were available (Nei's GST ≤ 0.010). For small to medium datasets, frameworks using the genetically closest reference panel are recommended when the genetic distance between target and reference data sets is small. Our results are valid for small to medium data sets. As shown on a larger data set of population-based German samples, the disadvantage of pre-phasing decreases for larger sample sizes.
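
    Nei's GST, used above to judge how well-matched a reference panel is, can be computed for biallelic SNPs from population allele frequencies; a minimal sketch with made-up frequencies:

```python
import numpy as np

def nei_gst(p):
    """Nei's G_ST for biallelic SNPs.

    p: (n_populations, n_snps) array of reference-allele frequencies.
    Returns the multi-locus G_ST = (H_T - H_S) / H_T, with the within-
    population (H_S) and total (H_T) heterozygosities averaged over loci.
    """
    hs = np.mean(2 * p * (1 - p), axis=0)   # mean within-population heterozygosity
    pbar = p.mean(axis=0)
    ht = 2 * pbar * (1 - pbar)              # total expected heterozygosity
    return (ht.mean() - hs.mean()) / ht.mean()

p = np.array([[0.10, 0.50, 0.30],
              [0.12, 0.48, 0.35]])
print(nei_gst(p))   # small value -> genetically close populations
```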

  18. Public Undertakings and Imputability

    DEFF Research Database (Denmark)

    Ølykke, Grith Skovgaard

    2013-01-01

    ... Oeresund tender for the provision of passenger transport by railway. From the start, the services were provided at a loss, and in the end a part of DSBFirst was wound up. In order to frame the problems illustrated by this case, the jurisprudence-based imputability requirement in the definition of State aid ... exercised by the State, imputability to the State, and the State's fulfilment of the Market Economy Investor Principle. Furthermore, it is examined whether, in the absence of imputability, public undertakings' market behaviour is subject to the Market Economy Investor Principle, and it is concluded ...

  19. Scale Reliability Evaluation with Heterogeneous Populations

    Science.gov (United States)

    Raykov, Tenko; Marcoulides, George A.

    2015-01-01

    A latent variable modeling approach for scale reliability evaluation in heterogeneous populations is discussed. The method can be used for point and interval estimation of reliability of multicomponent measuring instruments in populations representing mixtures of an unknown number of latent classes or subpopulations. The procedure is helpful also…

  20. Distribution system reliability evaluation using credibility theory

    African Journals Online (AJOL)

    Xufeng Xu, Joydeep Mitra

    Trapezoidal fuzzy numbers have been used to express uncertainties (Lei et al., 2005), and Yuan et al. (2007) used an interval algorithm to deal with the uncertainty of component data and calculate interval reliability indices. Most fuzzy methods for the reliability evaluation of distribution systems are based on fuzzy set theory.

  1. Study on segmented distribution for reliability evaluation

    Directory of Open Access Journals (Sweden)

    Huaiyuan Li

    2017-02-01

    Full Text Available In practice, the failure rate of most equipment exhibits different tendencies at different stages, and its failure rate curve may even follow a multimodal trace over the life cycle. As a result, traditionally evaluating the reliability of equipment with a single model may lead to severe errors. However, if lifetime is divided into several intervals according to the characteristics of the failure rate, piecewise fitting can more accurately approximate the failure rate of equipment. Therefore, in this paper, failure rate is regarded as a piecewise function, and two kinds of segmented distribution are put forward to evaluate reliability. In order to estimate the parameters of the segmented reliability function, Bayesian estimation and maximum likelihood estimation (MLE) of the segmented distribution are discussed. Since traditional information criteria are not suitable for the segmented distribution, an improved information criterion is proposed to test and evaluate the segmented reliability model. After extensive testing and verification, the segmented reliability model and its estimation methods presented in this paper prove more efficient and accurate than the traditional non-segmented single model, especially when the change of the failure rate is time-phased or multimodal. The strong performance of the segmented reliability model in evaluating the reliability of proximity sensors of the leading-edge flap in civil aircraft indicates that the segmented distribution and its estimation method could be useful and accurate.
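
    A piecewise-constant (segmented exponential) failure rate has a standard closed-form MLE: within each interval, the rate is the number of failures divided by the total time at risk. A sketch with hypothetical lifetimes, not the sensor data from the study:

```python
import numpy as np

def piecewise_exponential_mle(times, events, cuts):
    """MLE for a piecewise-constant failure rate (segmented exponential).

    times:  observed lifetimes (failure or censoring times)
    events: 1 = failure, 0 = censored
    cuts:   interval boundaries, e.g. [0, 100, 500, np.inf]
    """
    times, events = np.asarray(times, float), np.asarray(events)
    rates = []
    for lo, hi in zip(cuts[:-1], cuts[1:]):
        exposure = np.clip(times, lo, hi) - lo               # time spent in [lo, hi)
        failures = np.sum(events[(times >= lo) & (times < hi)])
        rates.append(failures / exposure.sum())              # failures / time at risk
    return np.array(rates)

t = np.array([20, 80, 150, 300, 420, 600, 700, 950])
e = np.array([1, 1, 0, 1, 1, 0, 1, 1])
print(piecewise_exponential_mle(t, e, [0, 100, 500, np.inf]))
```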

  2. MOV reliability evaluation and periodic verification scheduling

    Energy Technology Data Exchange (ETDEWEB)

    Bunte, B.D.

    1996-12-01

    The purpose of this paper is to establish a periodic verification testing schedule based on the expected long term reliability of gate or globe motor operated valves (MOVs). The methodology in this position paper determines the nominal (best estimate) design margin for any MOV based on the best available information pertaining to the MOVs design requirements, design parameters, existing hardware design, and present setup. The uncertainty in this margin is then determined using statistical means. By comparing the nominal margin to the uncertainty, the reliability of the MOV is estimated. The methodology is appropriate for evaluating the reliability of MOVs in the GL 89-10 program. It may be used following periodic testing to evaluate and trend MOV performance and reliability. It may also be used to evaluate the impact of proposed modifications and maintenance activities such as packing adjustments. In addition, it may be used to assess the impact of new information of a generic nature which impacts safety related MOVs.
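
    One plausible reading of the margin-versus-uncertainty comparison described above is a normal stress-strength calculation: reliability as the probability that the true design margin is positive. A sketch with invented numbers, not the paper's actual procedure:

```python
from scipy.stats import norm

def margin_reliability(nominal_margin, margin_sd):
    """Probability that the true margin exceeds zero, treating the margin
    as normally distributed about its best estimate (sketch)."""
    return norm.cdf(nominal_margin / margin_sd)

# Hypothetical MOV: best-estimate thrust margin of 15% with an 8% (1-sigma)
# combined uncertainty from design parameters and present setup.
print(margin_reliability(15.0, 8.0))   # ~0.97
```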

  3. Molgenis-impute: imputation pipeline in a box.

    Science.gov (United States)

    Kanterakis, Alexandros; Deelen, Patrick; van Dijk, Freerk; Byelas, Heorhiy; Dijkstra, Martijn; Swertz, Morris A

    2015-08-19

    Genotype imputation is an important procedure in current genomic analysis such as genome-wide association studies, meta-analyses and fine mapping. Although high quality tools are available that perform the steps of this process, considerable effort and expertise is required to set up and run a best practice imputation pipeline, particularly for larger genotype datasets, where imputation has to scale out in parallel on computer clusters. Here we present MOLGENIS-impute, an 'imputation in a box' solution that seamlessly and transparently automates the set up and running of all the steps of the imputation process. These steps include genome build liftover (liftovering), genotype phasing with SHAPEIT2, quality control, sample and chromosomal chunking/merging, and imputation with IMPUTE2. MOLGENIS-impute builds on MOLGENIS-compute, a simple pipeline management platform for submission and monitoring of bioinformatics tasks in High Performance Computing (HPC) environments like local/cloud servers, clusters and grids. All the required tools, data and scripts are downloaded and installed in a single step. Researchers with diverse backgrounds and expertise have tested MOLGENIS-impute on different locations and imputed over 30,000 samples so far using the 1,000 Genomes Project and new Genome of the Netherlands data as the imputation reference. The tests have been performed on PBS/SGE clusters, cloud VMs and in a grid HPC environment. MOLGENIS-impute gives priority to the ease of setting up, configuring and running an imputation. It has minimal dependencies and wraps the pipeline in a simple command line interface, without sacrificing flexibility to adapt or limiting the options of underlying imputation tools. It does not require knowledge of a workflow system or programming, and is targeted at researchers who just want to apply best practices in imputation via simple commands. It is built on the MOLGENIS compute workflow framework to enable customization with additional

  4. Cost reduction for web-based data imputation

    KAUST Repository

    Li, Zhixu

    2014-01-01

    Web-based Data Imputation enables the completion of incomplete data sets by retrieving absent field values from the Web. In particular, complete fields can be used as keywords in imputation queries for absent fields. However, due to the ambiguity of these keywords and the complexity of data on the Web, different queries may retrieve different answers to the same absent field value. To decide the most probable right answer for each absent field value, the existing method issues quite a few imputation queries for each absent value and then votes on the most probable right answer. As a result, a large number of imputation queries must be issued to fill all absent values in an incomplete data set, which brings a large overhead. In this paper, we work on reducing the cost of Web-based Data Imputation in two aspects: First, we propose a query execution scheme which can secure the most probable right answer to an absent field value by issuing as few imputation queries as possible. Second, we recognize and prune, a priori, queries that will probably fail to return any answers. Our extensive experimental evaluation shows that our proposed techniques substantially reduce the cost of Web-based Imputation without hurting its high imputation accuracy. © 2014 Springer International Publishing Switzerland.

  5. Missing value imputation for epistatic MAPs

    LENUS (Irish Health Repository)

    Ryan, Colm

    2010-04-20

    Abstract Background Epistatic miniarray profiling (E-MAPs) is a high-throughput approach capable of quantifying aggravating or alleviating genetic interactions between gene pairs. The datasets resulting from E-MAP experiments typically take the form of a symmetric pairwise matrix of interaction scores. These datasets have a significant number of missing values (up to 35%) that can reduce the effectiveness of some data analysis techniques and prevent the use of others. An effective method for imputing interactions would therefore increase the types of possible analysis, as well as increase the potential to identify novel functional interactions between gene pairs. Several methods have been developed to handle missing values in microarray data, but it is unclear how applicable these methods are to E-MAP data because of their pairwise nature and the significantly larger number of missing values. Here we evaluate four alternative imputation strategies, three local (nearest neighbor-based) and one global (PCA-based), that have been modified to work with symmetric pairwise data. Results We identify different categories for the missing data based on their underlying cause, and show that values from the largest category can be imputed effectively. We compare local and global imputation approaches across a variety of distinct E-MAP datasets, showing that both are competitive and preferable to filling in with zeros. In addition we show that these methods are effective in an E-MAP from a different species, suggesting that pairwise imputation techniques will be increasingly useful as analogous epistasis mapping techniques are developed in different species. We show that strongly alleviating interactions are significantly more difficult to predict than strongly aggravating interactions. Finally we show that imputed interactions, generated using nearest neighbor methods, are enriched for annotations in the same manner as measured interactions. Therefore our method potentially

  6. Advancing Usability Evaluation through Human Reliability Analysis

    Energy Technology Data Exchange (ETDEWEB)

    Ronald L. Boring; David I. Gertman

    2005-07-01

    This paper introduces a novel augmentation to the current heuristic usability evaluation methodology. The SPAR-H human reliability analysis method was developed for categorizing human performance in nuclear power plants. Despite the specialized use of SPAR-H for safety critical scenarios, the method also holds promise for use in commercial off-the-shelf software usability evaluations. The SPAR-H method shares task analysis underpinnings with human-computer interaction, and it can be easily adapted to incorporate usability heuristics as performance shaping factors. By assigning probabilistic modifiers to heuristics, it is possible to arrive at the usability error probability (UEP). This UEP is not a literal probability of error but nonetheless provides a quantitative basis for heuristic evaluation. When combined with a consequence matrix for usability errors, this method affords ready prioritization of usability issues.
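
    A sketch of how a UEP might be composed from a nominal error probability and per-heuristic performance-shaping-factor multipliers, borrowing the SPAR-H style adjustment that keeps the composed probability below 1; the heuristic-to-multiplier mapping shown is hypothetical:

```python
def usability_error_probability(nominal_p, psf_multipliers):
    """Compose a usability error probability (UEP) from a nominal error
    probability and per-heuristic multipliers, with a SPAR-H style
    adjustment so the result stays in [0, 1]."""
    composite = 1.0
    for m in psf_multipliers:
        composite *= m
    return nominal_p * composite / (nominal_p * (composite - 1.0) + 1.0)

# Hypothetical heuristic ratings mapped to multipliers: poor visibility of
# system status (x5), adequate error prevention (x1), confusing dialog (x2).
print(usability_error_probability(0.01, [5, 1, 2]))   # ~0.092
```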

  7. Reliability of radiographic evaluation for acromial morphology

    Energy Technology Data Exchange (ETDEWEB)

    Bright, A.S.; Torpey, B.; Codd, T.; McFarland, E.G. [Johns Hopkins Univ., Baltimore, MD (United States). Dept. of Orthopaedics; Magid, D. [Johns Hopkins Univ., Baltimore, MD (United States). Dept. of Radiology

    1997-12-01

    Objective. Bigliani's classification system of acromial morphology utilizing the standard outlet radiograph has become an accepted method for evaluating patients with rotator cuff disease. This study evaluates the interobserver and intraobserver reliability of Bigliani's classification system using observers at various levels of training. Patients and design. Supraspinatus outlet view radiographs of 40 patients (aged 18-78 years) with shoulder pain were reviewed twice, 4 months apart, in a masked protocol by six reviewers, including two attending (fellowship-trained) shoulder surgeons, an attending musculoskeletal radiologist, an orthopedic surgery sports fellow, and two orthopedic residents (PGY-2 and PGY-5). The reviewers were given standard diagrams of the Bigliani classification system and were asked to classify each film as a type I, II, or III acromion. Interobserver reliability and intraobserver repeatability values were calculated using kappa statistic analysis (0-0.2 slight, 0.21-0.4 fair, 0.41-0.6 moderate, 0.61-0.8 substantial, and 0.8-1.0 excellent). Results and conclusion. For each of the two readings, all six observers agreed only 18% of the time. Kappa values for pairwise comparison of interobserver reliability among the six observers ranged from 0.01 to 0.75 (mean 0.35), and intraobserver repeatability ranged from 0.26 (PGY-5 resident) to 0.80 (fellowship-trained surgeon), with a mean of 0.55. Intraobserver repeatability was not significantly different for the different levels of expertise. More definitive criteria are needed to distinguish and classify the acromion. (orig.) With 1 fig., 2 tabs., 31 refs.

  8. Clustering with Missing Values: No Imputation Required

    Science.gov (United States)

    Wagstaff, Kiri

    2004-01-01

    Clustering algorithms can identify groups in large data sets, such as star catalogs and hyperspectral images. In general, clustering methods cannot analyze items that have missing data values. Common solutions either fill in the missing values (imputation) or ignore the missing data (marginalization). Imputed values are treated as just as reliable as the truly observed data, but they are only as good as the assumptions used to create them. In contrast, we present a method for encoding partially observed features as a set of supplemental soft constraints and introduce the KSC algorithm, which incorporates constraints into the clustering process. In experiments on artificial data and data from the Sloan Digital Sky Survey, we show that soft constraints are an effective way to enable clustering with missing values.

  9. Multiple Imputation of Squared Terms

    NARCIS (Netherlands)

    Vink, G.; Buuren, S. van

    2013-01-01

    We propose a new multiple imputation technique for imputing squares. Current methods yield either unbiased regression estimates or preserve data relations. No method, however, seems to deliver both, which limits researchers in the implementation of regression analysis in the presence of missing data.

  10. Reliability evaluation of a MEMS scanner

    Science.gov (United States)

    Lani, S.; Marozau, Y.; Dadras, M.

    2017-02-01

    Previously, the realization and closed-loop control of a MEMS scanner integrating piezoresistive position sensors was presented. It consisted of a silicon compliant membrane with integrated position sensors, on which a mirror and a magnet were assembled. This device was mounted on a PCB containing coils for electromagnetic actuation. In this work, the reliability of the system was evaluated through thermal and mechanical analysis. The thermal analysis, whose objective was to evaluate the lifetime of the MEMS scanner, consisted of temperature cycling (-40°C to 100°C) and accelerated electrical endurance testing (100°C with power supplied to all electrical components). The mechanical analysis, whose objective was to assess the resistance of the system to mechanical stress, consisted of mechanical shock and vibration testing. A high-speed camera was used to observe the behavior of the MEMS scanner. The use of shock stoppers to improve mechanical resistance was evaluated and demonstrated an increase in shock resistance from 250 g to 900 g. The minimum shock resistance required for the system is 500 g for transportation and 1000 g for portable devices.

  11. Restrictive Imputation of Incomplete Survey Data

    NARCIS (Netherlands)

    Vink, G.|info:eu-repo/dai/nl/323348793

    2015-01-01

    This dissertation focuses on finding plausible imputations when there is some restriction posed on the imputation model. In these restrictive situations, current imputation methodology does not lead to satisfactory imputations. The restrictions, and the resulting missing data problems are real-life

  12. Evaluation of reliability worth in an electric power system

    Energy Technology Data Exchange (ETDEWEB)

    Billinton, Roy [Saskatchewan Univ., Saskatoon, SK (Canada). Power System Research Group

    1994-12-31

    This paper illustrates the application of basic power system reliability evaluation techniques to the quantification of reliability worth. The approach presented links customer interruption cost estimates with predictable indices of power system reliability. The technique is illustrated by application in the areas of generation, composite generation and transmission, and distribution system assessment using a hypothetical test system. (author)
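
    The link between interruption cost estimates and reliability indices is often expressed as an expected annual interruption cost: outage frequency times interrupted load times a duration-dependent unit cost from a customer damage function. A sketch with invented numbers (the damage function and outage events below are hypothetical):

```python
import numpy as np

def expected_interruption_cost(outages, load_kw, cdf_hours, cdf_cost_per_kw):
    """Reliability worth at one load point: sum over outage events of
    frequency x interrupted load x unit cost at that duration, where the
    unit cost is interpolated from a customer damage function ($/kW)."""
    return sum(freq * load_kw * np.interp(dur, cdf_hours, cdf_cost_per_kw)
               for freq, dur in outages)

# Hypothetical customer damage function: $/kW at 1 min, 20 min, 1 h, 4 h, 8 h.
hours = [1 / 60, 1 / 3, 1.0, 4.0, 8.0]
costs = [0.1, 1.0, 3.0, 12.0, 30.0]
# Outage events as (occurrences per year, duration in hours).
events = [(0.5, 0.5), (0.2, 4.0)]
print(expected_interruption_cost(events, 1000.0, hours, costs))  # $/year
```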

  13. Multiply-Imputed Synthetic Data: Advice to the Imputer

    Directory of Open Access Journals (Sweden)

    Loong Bronwyn

    2017-12-01

    Full Text Available Several statistical agencies have started to use multiply-imputed synthetic microdata to create public-use data in major surveys. The purpose of doing this is to protect the confidentiality of respondents’ identities and sensitive attributes, while allowing standard complete-data analyses of microdata. A key challenge, faced by advocates of synthetic data, is demonstrating that valid statistical inferences can be obtained from such synthetic data for non-confidential questions. Large discrepancies between observed-data and synthetic-data analytic results for such questions may arise because of uncongeniality; that is, differences in the types of inputs available to the imputer, who has access to the actual data, and to the analyst, who has access only to the synthetic data. Here, we discuss a simple, but possibly canonical, example of uncongeniality when using multiple imputation to create synthetic data, which specifically addresses the choices made by the imputer. An initial, unanticipated but not surprising, conclusion is that non-confidential design information used to impute synthetic data should be released with the confidential synthetic data to allow users of synthetic data to avoid possible grossly conservative inferences.

  14. Reliability and Robustness Evaluation of Timber Structures

    DEFF Research Database (Denmark)

    Cizmar, Dean; Sørensen, John Dalsgaard; Kirkegaard, Poul Henning

    In the last few decades there has been intense research concerning the reliability of timber structures, primarily because of society's increased focus on sustainability and environmental aspects. Modern timber as a building material is also competitive compared to concrete...

  15. thermal power stations' reliability evaluation in a hydrothermal system

    African Journals Online (AJOL)

    Dr Obe

    A quantitative tool for the evaluation of thermal power stations' reliability in a hydrothermal system is presented. A reliable power station is one which would supply the required power within its installed capacity at any time, within the specified voltage and frequency limits. Required for this evaluation are the station's installed ...

  16. Multiple imputation and its application

    CERN Document Server

    Carpenter, James

    2013-01-01

    A practical guide to analysing partially observed data. Collecting, analysing and drawing inferences from data is central to research in the medical and social sciences. Unfortunately, it is rarely possible to collect all the intended data. The literature on inference from the resulting incomplete data is now huge, and continues to grow both as methods are developed for large and complex data structures, and as increasing computer power and suitable software enable researchers to apply these methods. This book focuses on a particular statistical method for analysing and drawing inferences from incomplete data, called Multiple Imputation (MI). MI is attractive because it is both practical and widely applicable. The authors' aim is to clarify the issues raised by missing data, describing the rationale for MI, the relationship between the various imputation models and associated algorithms, and its application to increasingly complex data structures. Multiple Imputation and its Application: Discusses the issues ...

  17. Flexible Imputation of Missing Data

    CERN Document Server

    van Buuren, Stef

    2012-01-01

    Missing data form a problem in every scientific discipline, yet the techniques required to handle them are complicated and often lacking. One of the great ideas in statistical science, multiple imputation, fills gaps in the data with plausible values, the uncertainty of which is coded in the data itself. It also solves other problems, many of which are missing data problems in disguise. Flexible Imputation of Missing Data is supported by many examples using real data taken from the author's vast experience of collaborative research, and presents a practical guide for handling missing data unde...

  18. Performance of genotype imputations using data from the 1000 Genomes Project.

    Science.gov (United States)

    Sung, Yun Ju; Wang, Lihua; Rankinen, Tuomo; Bouchard, Claude; Rao, D C

    2012-01-01

    Genotype imputations based on 1000 Genomes (1KG) Project data have the advantage of imputing many more SNPs than imputations based on HapMap data. They also provide an opportunity to discover associations with relatively rare variants. Recent investigations are increasingly using 1KG data for genotype imputations, but only limited evaluations of the performance of this approach are available. In this paper, we empirically evaluated imputation performance using 1KG data by comparing imputation results to those using the HapMap Phase II data that have been widely used. We used three reference panels: the CEU panel consisting of 120 haplotypes from HapMap II, a CEU panel from the 1KG data (June 2010 release), and the EUR panel consisting of 566 haplotypes, also from 1KG data (August 2010 release). We used 324,607 autosomal Illumina SNPs genotyped in 501 individuals of European ancestry. Our most important finding was that both 1KG reference panels provided much higher imputation yield than the HapMap II panel: there were more than twice as many successfully imputed SNPs as there were using the HapMap II panel (6.7 million vs. 2.5 million). Our second most important finding was that accuracy using both 1KG panels was high and almost identical to accuracy using the HapMap II panel. Furthermore, after removing SNPs with MACH Rsq ... Since the 1KG Project is still underway, we expect that later versions will provide even better imputation performance. Copyright © 2011 S. Karger AG, Basel.

  19. Imputation and quality control steps for combining multiple genome-wide datasets

    Directory of Open Access Journals (Sweden)

    Shefali S Verma

    2014-12-01

    Full Text Available The electronic MEdical Records and GEnomics (eMERGE) network brings together DNA biobanks linked to electronic health records (EHRs) from multiple institutions. Approximately 52,000 DNA samples from distinct individuals have been genotyped using genome-wide SNP arrays across the nine sites of the network. The eMERGE Coordinating Center and the Genomics Workgroup developed a pipeline to impute and merge genomic data across the different SNP arrays to maximize sample size and power to detect associations with a variety of clinical endpoints. The 1000 Genomes cosmopolitan reference panel was used for imputation. Imputation results were evaluated using the following metrics: accuracy of imputation, allelic R2 (estimated correlation between the imputed and true genotypes), and the relationship between allelic R2 and minor allele frequency. Computation time and memory resources required by two different software packages (BEAGLE and IMPUTE2) were also evaluated. A number of challenges were encountered due to the complexity of using two different imputation software packages, multiple ancestral populations, and many different genotyping platforms. We present lessons learned and describe the pipeline implemented here to impute and merge genomic data sets. The eMERGE imputed dataset will serve as a valuable resource for discovery, leveraging the clinical data that can be mined from the EHR.
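
    Allelic R2, one of the evaluation metrics listed above, is the squared correlation between true genotype counts and imputed allele dosages at a SNP; a minimal sketch with illustrative values:

```python
import numpy as np

def allelic_r2(true_genotypes, imputed_dosages):
    """Allelic R^2 for one SNP: squared Pearson correlation between true
    genotype counts (0/1/2) and imputed allele dosages (continuous)."""
    r = np.corrcoef(true_genotypes, imputed_dosages)[0, 1]
    return r ** 2

truth = np.array([0, 1, 2, 1, 0, 0, 2, 1])
dosage = np.array([0.1, 0.9, 1.8, 1.2, 0.0, 0.3, 2.0, 0.8])
print(allelic_r2(truth, dosage))
```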

  20. The Ability of Different Imputation Methods to Preserve the Significant Genes and Pathways in Cancer

    Directory of Open Access Journals (Sweden)

    Rosa Aghdam

    2017-12-01

    Full Text Available Deciphering important genes and pathways from incomplete gene expression data could facilitate a better understanding of cancer. Different imputation methods can be applied to estimate the missing values. In our study, we evaluated various imputation methods for their performance in preserving significant genes and pathways. In the first step, 5% of genes are selected at random for two types of missingness mechanism, ignorable and non-ignorable, with various missing rates. Next, 10 well-known imputation methods were applied to the incomplete datasets. The significance analysis of microarrays (SAM) method was applied to detect the significant genes in rectal and lung cancers, to showcase the utility of imputation approaches in preserving significant genes. To determine the impact of different imputation methods on the identification of important genes, the chi-squared test was used to compare the proportions of overlap between significant genes detected from original data and those detected from the imputed datasets. Additionally, the significant genes were tested for their enrichment in important pathways, using ConsensusPathDB. Our results showed that almost all the significant genes and pathways of the original dataset can be detected in all imputed datasets, indicating that there is no significant difference in the performance of the various imputation methods tested. The source code and selected datasets are available on http://profiles.bs.ipm.ir/softwares/imputation_methods/.

  1. Reliability evaluation of deregulated electric power systems for planning applications

    Energy Technology Data Exchange (ETDEWEB)

    Ehsani, A. [Electrical Engineering Department, Sharif University of Technology, PO Box 11365-8639, Tehran (Iran, Islamic Republic of)], E-mail: aehsani80@yahoo.com; Ranjbar, A.M. [Electrical Engineering Department, Sharif University of Technology, PO Box 11365-8639, Tehran (Iran, Islamic Republic of); Jafari, A. [Niroo Research Institute, PO Box 14665/517, Tehran (Iran, Islamic Republic of); Fotuhi-Firuzabad, M. [Electrical Engineering Department, Sharif University of Technology, PO Box 11365-8639, Tehran (Iran, Islamic Republic of)

    2008-10-15

    In a deregulated electric power utility industry in which a competitive electricity market can influence system reliability, market risks cannot be ignored. This paper (1) proposes an analytical probabilistic model for reliability evaluation of competitive electricity markets and (2) develops a methodology for incorporating the market reliability problem into HLII reliability studies. A Markov state space diagram is employed to evaluate the market reliability. Since the market is a continuously operated system, the concept of absorbing states is applied to it in order to evaluate the reliability. The market states are identified by using market performance indices, and the transition rates are calculated by using historical data. The key point in the proposed method is the concept that the reliability level of a restructured electric power system can be calculated using the availability of the composite power system (HLII) and the reliability of the electricity market. Two case studies are carried out on the Roy Billinton Test System (RBTS) to illustrate interesting features of the proposed methodology.
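
    Treating the failed-market state as absorbing, one standard way to quantify reliability in such a Markov model is the mean time to absorption of the continuous-time Markov chain; a sketch with a hypothetical three-state generator matrix, not the paper's actual market data:

```python
import numpy as np

def mean_time_to_failure(Q, transient, initial):
    """Mean time to absorption for a continuous-time Markov chain whose
    failure states are absorbing. Q: full generator matrix (rates/hour);
    transient: indices of working states; initial: starting state index."""
    Qtt = Q[np.ix_(transient, transient)]
    # Expected times to absorption t solve Qtt @ t = -1 (vector of ones).
    t = np.linalg.solve(Qtt, -np.ones(len(transient)))
    return t[transient.index(initial)]

# Hypothetical market: 0 = normal, 1 = stressed, 2 = failed (absorbing).
Q = np.array([[-0.2,  0.2,  0.0],
              [ 1.0, -1.5,  0.5],
              [ 0.0,  0.0,  0.0]])
print(mean_time_to_failure(Q, [0, 1], 0))   # expected hours until market failure
```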

  2. JUPITER PROJECT - JOINT UNIVERSAL PARAMETER IDENTIFICATION AND EVALUATION OF RELIABILITY

    Science.gov (United States)

    The JUPITER (Joint Universal Parameter IdenTification and Evaluation of Reliability) project builds on the technology of two widely used codes for sensitivity analysis, data assessment, calibration, and uncertainty analysis of environmental models: PEST and UCODE.

  3. Evaluation of reliability modeling tools for advanced fault tolerant systems

    Science.gov (United States)

    Baker, Robert; Scheper, Charlotte

    1986-01-01

    The Computer Aided Reliability Estimation (CARE III) and Automated Reliability Interactive Estimation System (ARIES 82) reliability tools were evaluated for application to advanced fault tolerant aerospace systems. To determine reliability modeling requirements, the evaluation focused on the Draper Laboratories' Advanced Information Processing System (AIPS) architecture as an example architecture for fault tolerant aerospace systems. Advantages and limitations were identified for each reliability evaluation tool. The CARE III program was designed primarily for analyzing ultrareliable flight control systems. The ARIES 82 program's primary use was to support university research and teaching. Neither CARE III nor ARIES 82 was suited for determining the reliability of complex nodal networks of the type used to interconnect processing sites in the AIPS architecture. It was concluded that ARIES was not suitable for modeling advanced fault tolerant systems. It was further concluded that, subject to some limitations (the difficulty in modeling systems with unpowered spare modules, systems where equipment maintenance must be considered, systems where failure depends on the sequence in which faults occurred, and systems where multiple faults beyond double near-coincident faults must be considered), CARE III is best suited for evaluating the reliability of advanced fault tolerant systems for air transport.

  4. Motion Reliability Modeling and Evaluation for Manipulator Path Planning Task

    Directory of Open Access Journals (Sweden)

    Tong Li

    2015-01-01

    Full Text Available Motion reliability, as a criterion, can reflect the accuracy of a manipulator in completing operations. Since the path planning task plays a significant role in manipulator operations, the motion reliability evaluation of the path planning task is discussed in this paper. First, a modeling method for motion reliability is proposed that takes factors related to the position accuracy of the manipulator into account. In the model, multidimensional integration of the probability density function (PDF) is carried out to calculate motion reliability. Considering the complexity of the multidimensional integral, the equivalent extreme value approach is introduced, converting the multidimensional integral into a one-dimensional integral for convenient calculation. Then a method based on the maximum entropy principle is proposed for model calculation. With this method, the PDF can be obtained efficiently at the state of maximum entropy. As a result, the evaluation of motion reliability can be achieved by a one-dimensional integral of the PDF. Simulations on a particular path planning task are carried out, verifying the feasibility and effectiveness of the proposed methods. In addition, the modeling method, which takes the factors related to position accuracy into account, can represent the contributions of these factors to motion reliability, and the model calculation method achieves motion reliability evaluation with high precision and efficiency.

  5. Reliability evaluation of microgrid considering incentive-based demand response

    Science.gov (United States)

    Huang, Ting-Cheng; Zhang, Yong-Jun

    2017-07-01

    Incentive-based demand response (IBDR) can guide customers to adjust their electricity consumption behaviour and actively curtail load. Meanwhile, distributed generation (DG) and energy storage systems (ESS) can provide time for the implementation of IBDR. This paper focuses on the reliability evaluation of a microgrid considering IBDR. Firstly, the mechanism of IBDR and its impact on power supply reliability are analysed. Secondly, an IBDR dispatch model considering customers' comprehensive assessment and a customer response model are developed. Thirdly, a reliability evaluation method considering IBDR, based on Monte Carlo simulation, is proposed. Finally, the validity of the above models and method is studied through numerical tests on the modified RBTS Bus6 test system. Simulation results demonstrate that IBDR can improve the reliability of the microgrid.
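
    A minimal Monte Carlo sketch of how IBDR can enter such an evaluation: sampled generation and load states produce deficits, and demand response absorbs deficits up to its curtailable capacity before loss of load is counted. All parameters below are invented, and the load model is deliberately crude:

```python
import numpy as np

rng = np.random.default_rng(3)

def lolp(n_trials, units_mw, unit_for, peak_load_mw, dr_mw):
    """Loss-of-load probability by non-sequential Monte Carlo sampling.
    IBDR curtails up to dr_mw of load before a shortfall is counted."""
    units = np.asarray(units_mw, dtype=float)
    shortfalls = 0
    for _ in range(n_trials):
        up = rng.random(len(units)) > unit_for            # unit availability draws
        capacity = units[up].sum()
        load = peak_load_mw * (0.7 + 0.6 * rng.random())  # crude load variation
        if load - capacity > dr_mw:                       # DR absorbs small deficits
            shortfalls += 1
    return shortfalls / n_trials

units = [5, 5, 3, 2]      # hypothetical DG ratings (MW), forced outage rate 8%
print("without DR: ", lolp(50_000, units, 0.08, 12.0, 0.0))
print("with 2 MW DR:", lolp(50_000, units, 0.08, 12.0, 2.0))
```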

  6. A novel reliability evaluation method for large engineering systems

    Directory of Open Access Journals (Sweden)

    Reda Farag

    2016-06-01

    Full Text Available A novel reliability evaluation method for large nonlinear engineering systems excited by dynamic loading applied in the time domain is presented. For this class of problems, the performance functions are expected to be functions of time and implicit in nature. Available first- and second-order reliability methods (FORM/SORM) are challenging to apply when estimating the reliability of such systems. Because of its inefficiency, the classical Monte Carlo simulation (MCS) method also cannot be used for large nonlinear dynamic systems. In the proposed approach, only tens, instead of hundreds or thousands, of deterministic evaluations at intelligently selected points are used to extract the reliability information. A hybrid approach is proposed, consisting of the stochastic finite element method (SFEM) developed by the author and his research team using FORM, the response surface method (RSM), an interpolation scheme, and advanced factorial schemes. The method is clarified with the help of several numerical examples.

  7. Evaluation of Information Requirements of Reliability Methods in Engineering Design

    DEFF Research Database (Denmark)

    Marini, Vinicius Kaster; Restrepo-Giraldo, John Dairo; Ahmed-Kristensen, Saeema

    2010-01-01

    This paper aims to characterize the information needed to perform methods for robustness and reliability, and to verify their applicability to early design stages. Several methods were evaluated on their support to synthesis in engineering design; of those methods, FMEA, FTA and HAZOP were selected... For that reason, new methods are needed to assist in assessing robustness and reliability at early design stages. A specific taxonomy of robustness and reliability information in design could support classifying available design information to orient new techniques for assessing innovative designs.

  8. Assessment of genotype imputation performance using 1000 Genomes in African American studies.

    Directory of Open Access Journals (Sweden)

    Dana B Hancock

    Full Text Available Genotype imputation, used in genome-wide association studies to expand coverage of single nucleotide polymorphisms (SNPs), has performed poorly in African Americans compared to less admixed populations. Overall, imputation has typically relied on HapMap reference haplotype panels from Africans (YRI), European Americans (CEU), and Asians (CHB/JPT). The 1000 Genomes project offers a wider range of reference populations, such as African Americans (ASW), but their imputation performance has had limited evaluation. Using 595 African Americans genotyped on Illumina's HumanHap550v3 BeadChip, we compared imputation results from four software programs (IMPUTE2, BEAGLE, MaCH, and MaCH-Admix) and three reference panels consisting of different combinations of 1000 Genomes populations (February 2012 release): (1) three specifically selected populations (YRI, CEU, and ASW); (2) eight populations of diverse African (AFR) or European (EUR) descent; and (3) all 14 available populations (ALL). Based on chromosome 22, we calculated three performance metrics: (1) concordance (percentage of masked genotyped SNPs with agreement between imputed and true genotypes); (2) imputation quality score (IQS; concordance adjusted for chance agreement, which is particularly informative for low minor allele frequency [MAF] SNPs); and (3) average r2hat (estimated correlation between the imputed and true genotypes) for all imputed SNPs. Across the reference panels, IMPUTE2 and MaCH had the highest concordance (91%-93%), but IMPUTE2 had the highest IQS (81%-83%) and average r2hat (0.68 using YRI+ASW+CEU, 0.62 using AFR+EUR, and 0.55 using ALL). Imputation quality for most programs was reduced by the addition of more distantly related reference populations, due entirely to the introduction of low frequency SNPs (MAF ≤ 2%) that are monomorphic in the more closely related panels. While imputation was optimized by using IMPUTE2 with reference to the ALL panel (average r2hat = 0.86 for SNPs with MAF > 2%), use of the ALL
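
    The two headline metrics in this record are simple to compute once masked genotypes are available: concordance is raw agreement, and IQS is concordance corrected for chance agreement, i.e. Cohen's kappa applied to 0/1/2 genotype calls. A minimal sketch with toy genotypes (not the study's data):

        import numpy as np

        def imputation_metrics(true_gt, imputed_gt):
            """Concordance, IQS (chance-corrected concordance, Cohen's kappa),
            and squared correlation; genotypes coded as 0/1/2 minor-allele counts."""
            true_gt, imputed_gt = np.asarray(true_gt), np.asarray(imputed_gt)
            po = np.mean(true_gt == imputed_gt)  # observed concordance
            pe = sum(np.mean(true_gt == g) * np.mean(imputed_gt == g) for g in (0, 1, 2))
            iqs = (po - pe) / (1.0 - pe)
            r2 = np.corrcoef(true_gt, imputed_gt)[0, 1] ** 2  # analogue of r2hat
            return po, iqs, r2

        true_gt    = [0, 1, 2, 0, 0, 1, 2, 1, 0, 0]
        imputed_gt = [0, 1, 2, 0, 1, 1, 2, 1, 0, 0]
        print(imputation_metrics(true_gt, imputed_gt))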

  9. Evaluating web sites: reliable child health resources for parents.

    Science.gov (United States)

    Golterman, Linda; Banasiak, Nancy C

    2011-01-01

    This article describes a framework for evaluating the quality of health care information on the Internet and identifies strategies for accessing reliable child health resources. A number of methods are reviewed, including how to evaluate Web sites for quality using the Health Information Technology Institute evaluation criteria, how to identify trustworthy Web sites accredited by Health On the Net Foundation Code of Conduct, and the use of portals to access prescreened Web sites by organizations, such as the Medical Library Association. Pediatric nurses can use one or all of these strategies to develop a list of reliable Web sites as a supplement to patient and family teaching.

  10. Evaluation of Stock Management Strategies Reliability at Dependent Demand

    Directory of Open Access Journals (Sweden)

    Lukinskiy Valery

    2017-03-01

    Full Text Available To increase the efficiency of logistic systems, specialists' attention has to be directed to reducing costs and increasing the reliability of supply chains. Considerable attention has already been paid to cost reduction, and significant progress has been made in that direction. But the problem of reliability evaluation is still insufficiently explored, particularly in such an important sphere as inventory management under dependent demand.

  11. Performance of genotype imputation for rare variants identified in exons and flanking regions of genes.

    Directory of Open Access Journals (Sweden)

    Li Li

    Full Text Available Genotype imputation has the potential to assess human genetic variation at a lower cost than assaying the variants using laboratory techniques. The performance of imputation for rare variants has not been comprehensively studied. We utilized 8865 human samples with high-depth resequencing data for the exons and flanking regions of 202 genes and Genome-Wide Association Study (GWAS) data to characterize the performance of genotype imputation for rare variants. We evaluated reference sets ranging from 100 to 3713 subjects for imputing into samples typed for the Affymetrix (500K and 6.0) and Illumina 550K GWAS panels. With a reference panel of 3713 individuals, the proportion of variants that could be well imputed (true r2 > 0.7) was 31% (Illumina 550K) or 25% (Affymetrix 500K) for MAF (minor allele frequency) ≤ 0.001, and 48% or 35% for 0.001 < MAF ≤ 0.005, rising with MAF up to the most common category (MAF > 0.05). The performance for common SNPs (MAF > 0.05) within exons and flanking regions is comparable to imputation of more uniformly distributed SNPs; the performance for rare SNPs (0.01 < MAF ≤ 0.05) is lower. These results demonstrate the utility of imputation for extending the assessment of common variants identified in humans via targeted exon resequencing into additional samples with GWAS data, but imputation of very rare variants (MAF ≤ 0.005) will require reference panels with thousands of subjects.

  12. Incorporating Cyber Layer Failures in Composite Power System Reliability Evaluations

    Directory of Open Access Journals (Sweden)

    Yuqi Han

    2015-08-01

    Full Text Available This paper proposes a novel approach to analyze the impacts of cyber layer failures (i.e., protection failures and monitoring failures) on the reliability evaluation of composite power systems. The reliability and availability of the cyber layer and its protection and monitoring functions with various topologies are derived based on a reliability block diagram method. The availability of the physical layer components is modified via a multi-state Markov chain model, in which the component protection and monitoring strategies, as well as the cyber layer topology, are simultaneously considered. Reliability indices of composite power systems are calculated through non-sequential Monte-Carlo simulation. Case studies demonstrate that operational reliability is downgraded when cyber layer functions fail. Moreover, protection function failures have a more significant impact on the downgraded reliability than monitoring function failures do, and the reliability indices are especially sensitive to changes in cyber layer function availability in the range from 0.95 to 1.
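
    The non-sequential Monte Carlo step itself is compact: sample each component's up/down state from its availability and check whether the surviving capacity covers the load. In the paper the availabilities come out of the cyber-aware multi-state Markov model; here they are plain assumed numbers, and the two-generator, two-line system is a toy.

        import numpy as np

        rng = np.random.default_rng(1)

        # Assumed availabilities; cyber-layer protection/monitoring failures would
        # enter by de-rating these values (e.g. 0.98 -> 0.96).
        avail = {"gen1": 0.98, "gen2": 0.97, "line1": 0.995, "line2": 0.995}
        cap   = {"gen1": 60.0, "gen2": 60.0, "line1": 80.0, "line2": 80.0}  # MW
        load = 100.0  # MW

        n = 100_000
        short_samples, ens = 0, 0.0
        for _ in range(n):
            up = {c: rng.random() < a for c, a in avail.items()}
            gen  = sum(cap[c] for c in ("gen1", "gen2")   if up[c])
            wire = sum(cap[c] for c in ("line1", "line2") if up[c])
            served = min(gen, wire, load)
            if served < load:
                short_samples += 1
                ens += load - served

        print(f"P(load curtailment) ~ {short_samples / n:.4f}")
        print(f"expected demand not supplied ~ {ens / n:.3f} MW per sampled state")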

  13. Multiple Imputations for Linear Regression Models

    OpenAIRE

    Brownstone, David

    1991-01-01

    Rubin (1987) has proposed multiple imputations as a general method for estimation in the presence of missing data. Rubin's results only strictly apply to Bayesian models, but Schenker and Welsh (1988) directly prove the consistency of multiple-imputation inferences when there are missing values of the dependent variable in linear regression models. This paper extends and modifies Schenker and Welsh's theorems to give conditions where multiple imputations yield consistent inferences for bo...
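
    Rubin's combining rules referenced in this record are mechanical: average the per-imputation estimates, then add within- and between-imputation variance. A sketch using scikit-learn's IterativeImputer as the imputation engine and statsmodels for the regressions; the simulated data and the choice of m = 10 imputations are illustrative assumptions.

        import numpy as np
        import statsmodels.api as sm
        from sklearn.experimental import enable_iterative_imputer  # noqa: F401
        from sklearn.impute import IterativeImputer

        rng = np.random.default_rng(2)
        n = 500
        x = rng.normal(size=n)
        y = 1.0 + 2.0 * x + rng.normal(size=n)
        y[rng.random(n) < 0.3] = np.nan  # 30% of the dependent variable missing

        m = 10
        betas, variances = [], []
        for k in range(m):
            imp = IterativeImputer(sample_posterior=True, random_state=k)
            filled = imp.fit_transform(np.column_stack([x, y]))
            fit = sm.OLS(filled[:, 1], sm.add_constant(filled[:, 0])).fit()
            betas.append(fit.params[1])
            variances.append(fit.bse[1] ** 2)

        qbar = np.mean(betas)            # pooled slope estimate
        ubar = np.mean(variances)        # within-imputation variance
        b = np.var(betas, ddof=1)        # between-imputation variance
        total = ubar + (1 + 1 / m) * b   # Rubin's total variance
        print(f"slope {qbar:.3f}, standard error {np.sqrt(total):.3f}")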

  14. The validity and reliability of attending evaluations of medicine residents

    OpenAIRE

    Jackson, Jeffrey L.; Cynthia Kay; Michael Frank

    2015-01-01

    Objectives: To assess the reliability and validity of faculty evaluations of medicine residents. Methods: We conducted a retrospective study (2004–2012) involving 228 internal medicine residency graduates at the Medical College of Wisconsin who were evaluated by 334 attendings. Measures included evaluations of residents by attendings based on six competencies, as well as interns' and residents' performance on the American Board of Internal Medicine certification exam and annual in-service training ex...

  15. Improving accuracy of rare variant imputation with a two-step imputation approach

    DEFF Research Database (Denmark)

    Kreiner-Møller, Eskil; Medina-Gomez, Carolina; Uitterlinden, André G

    2015-01-01

    not being comprehensively scrutinized. Next-generation arrays ensuring sufficient coverage, together with new reference panels such as the 1000 Genomes panel, are emerging to facilitate imputation of low-frequency single-nucleotide polymorphisms (minor allele frequency (MAF) ...). ... In the two-step approach, the concordance rate between calls of imputed and true genotypes was found to be significantly higher for heterozygotes (P ...), and the two-step approach in our setting improves imputation quality noteworthily compared with traditional direct imputation...

  16. Composite system reliability evaluation by stochastic calculation of system operation

    Energy Technology Data Exchange (ETDEWEB)

    Haubrick, H.-J.; Hinz, H.-J.; Landeck, E. [Dept. of Power Systems and Power Economics (Germany)

    1994-12-31

    This report describes a newly developed probabilistic approach for steady-state composite system reliability evaluation and its exemplary application to a bulk power test system. The new computer program, called PHOENIX, takes into consideration transmission limitations, outages of lines and power stations and, as a central element, a highly sophisticated model of the dispatcher performing remedial actions after disturbances. The kernel of the new method is a procedure for optimal power flow calculation that has been specially adapted for use in reliability evaluations under the above mentioned conditions. (author) 11 refs., 8 figs., 1 tab.

  17. An imputation-based genome-wide association study on traits related to male reproduction in a White Duroc × Erhualian F2 population.

    Science.gov (United States)

    Zhao, Xueyan; Zhao, Kewei; Ren, Jun; Zhang, Feng; Jiang, Chao; Hong, Yuan; Jiang, Kai; Yang, Qiang; Wang, Chengbin; Ding, Nengshui; Huang, Lusheng; Zhang, Zhiyan; Xing, Yuyun

    2016-05-01

    Boar reproductive traits are economically important for the pig industry. Here we conducted a genome-wide association study (GWAS) for 13 reproductive traits measured on 205 F2 boars at day 300, using 60K single nucleotide polymorphism (SNP) data imputed from a reference panel of 1200 pigs in a White Duroc × Erhualian F2 intercross population. We identified 10 significant loci for seven traits on eight pig chromosomes (SSC). Two loci surpassed the genome-wide significance level, including one for epididymal weight around 60.25 Mb on SSC7 and one for semen temperature around 43.69 Mb on SSC4. Four of the 10 significant loci that we identified were consistent with previously reported quantitative trait loci for boar reproduction traits. We highlighted several interesting candidate genes at these loci, including APN, TEP1, PARP2, SPINK1 and PDE1C. To evaluate the imputation accuracy, we further genotyped nine GWAS top SNPs using PCR restriction fragment length polymorphism or Sanger sequencing. We found an average genotype concordance of 91.44%, allelic concordance of 95.36% and r2 correlation of 0.85 between imputed and real genotype data. This indicates that our GWAS mapping results based on imputed SNP data are reliable, providing insights into the genetic basis of boar reproductive traits. © 2015 Japanese Society of Animal Science.

  18. Reliability Evaluation for Clustered WSNs under Malware Propagation.

    Science.gov (United States)

    Shen, Shigen; Huang, Longjun; Liu, Jianhua; Champion, Adam C; Yu, Shui; Cao, Qiying

    2016-06-10

    We consider a clustered wireless sensor network (WSN) under epidemic-malware propagation conditions and solve the problem of how to evaluate its reliability so as to ensure efficient, continuous, and dependable transmission of sensed data from sensor nodes to the sink. Facing the contradiction between malware intention and continuous-time Markov chain (CTMC) randomness, we introduce a strategic game that can predict malware infection in order to model a successful infection as a CTMC state transition. Next, we devise a novel measure to compute the Mean Time to Failure (MTTF) of a sensor node, which represents the reliability of a sensor node continuously performing tasks such as sensing, transmitting, and fusing data. Since clustered WSNs can be regarded as parallel-serial-parallel systems, the reliability of a clustered WSN can be evaluated via classical reliability theory. Numerical results show the influence of parameters such as the true positive rate and the false positive rate on a sensor node's MTTF. Furthermore, we validate the method of reliability evaluation for a clustered WSN according to the number of sensor nodes in a cluster, the number of clusters in a route, and the number of routes in the WSN.
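
    Once a per-node MTTF is in hand, the parallel-serial-parallel evaluation is classical reliability algebra. A toy sketch under an exponential-lifetime assumption; the MTTF, mission time and topology sizes are made up rather than taken from the paper.

        import numpy as np

        def series(rs):    # every subsystem must survive
            return float(np.prod(rs))

        def parallel(rs):  # at least one subsystem must survive
            return 1.0 - float(np.prod([1.0 - r for r in rs]))

        mttf_node = 4000.0   # hours; in the paper this comes from the CTMC/game analysis
        t = 1000.0           # mission time (hours)
        r_node = np.exp(-t / mttf_node)

        # Cluster: nodes in parallel; route: clusters in series; WSN: routes in parallel.
        r_cluster = parallel([r_node] * 5)
        r_route   = series([r_cluster] * 4)
        r_wsn     = parallel([r_route] * 3)
        print(f"R_cluster={r_cluster:.4f}  R_route={r_route:.4f}  R_WSN={r_wsn:.4f}")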

  19. Reliability of resting intramuscular fiber conduction velocity evaluation.

    Science.gov (United States)

    Methenitis, S; Karandreas, N; Terzis, G

    2017-05-06

    Characterization of the smallest number of muscle fibers that must be analyzed for a quick and reliable evaluation of intramuscular muscle fiber conduction velocity (MFCV) is of importance for sport scientists. The aim of this study was to evaluate the reliability of vastus lateralis intramuscular MFCV measured in either 25 or 50 different muscle fibers per participant, as well as to compare intramuscular MFCV measured in 25 (C25), 50 (C50), or 140 (C140) muscle fibers. Resting vastus lateralis MFCV was measured in 21 young healthy males (age 22.1±2.4 years) using intramuscular microelectrodes on different days. Test-retest reliability of MFCV parameters was calculated for C25 and C50, while MFCV was compared among C25, C50, and C140. Significant differences in MFCV parameters were observed between the C25 condition and the C50 and C140 conditions. The differences in MFCV values between conditions C50 and C140 were non-significant. A close correlation was found for MFCV between C50 and C140 (r=0.884-0.988, P=.000). All reliability measures of MFCV measured with 50 fibers were high (eg, ICC=0.813-0.980, P=.000), in contrast to C25 (eg, ICC=0.023-0.580, P>.05). In conclusion, an average of 50 different fibers per subject is sufficient to provide a quick and reliable intramuscular evaluation of vastus lateralis MFCV. © 2017 John Wiley & Sons A/S. Published by John Wiley & Sons Ltd.

  20. 3D-MICE: integration of cross-sectional and longitudinal imputation for multi-analyte longitudinal clinical data.

    Science.gov (United States)

    Luo, Yuan; Szolovits, Peter; Dighe, Anand S; Baron, Jason M

    2017-11-30

    A key challenge in clinical data mining is that most clinical datasets contain missing data. Since many commonly used machine learning algorithms require complete datasets (no missing data), clinical analytic approaches often entail an imputation procedure to "fill in" missing data. However, although most clinical datasets contain a temporal component, most commonly used imputation methods do not adequately accommodate longitudinal time-based data. We sought to develop a new imputation algorithm, 3-dimensional multiple imputation with chained equations (3D-MICE), that can perform accurate imputation of missing clinical time series data. We extracted clinical laboratory test results for 13 commonly measured analytes (clinical laboratory tests). We imputed missing test results for the 13 analytes using 3 imputation methods: multiple imputation with chained equations (MICE), Gaussian process (GP), and 3D-MICE. 3D-MICE utilizes both MICE and GP imputation to integrate cross-sectional and longitudinal information. To evaluate imputation method performance, we randomly masked selected test results and imputed these masked results alongside results missing from our original data. We compared predicted results to measured results for masked data points. 3D-MICE performed significantly better than MICE and GP-based imputation in a composite of all 13 analytes, predicting missing results with a normalized root-mean-square error of 0.342, compared to 0.373 for MICE alone and 0.358 for GP alone. 3D-MICE offers a novel and practical approach to imputing clinical laboratory time series data. 3D-MICE may provide an additional tool for use as a foundation in clinical predictive analytics and intelligent clinical decision support.
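
    The evaluation protocol described here (mask observed values, impute, score NRMSE on the masked entries) can be reproduced generically. The sketch below uses scikit-learn's IterativeImputer as a MICE-like stand-in; it is not the authors' 3D-MICE, which additionally blends in a Gaussian-process estimate along the time axis, and the synthetic lab-value matrix and per-analyte standard-deviation normalization are assumptions.

        import numpy as np
        from sklearn.experimental import enable_iterative_imputer  # noqa: F401
        from sklearn.impute import IterativeImputer

        rng = np.random.default_rng(3)

        # Toy stand-in for a patients x analytes matrix of laboratory results.
        n, p = 300, 13
        latent = rng.normal(size=(n, 3))
        data = latent @ rng.normal(size=(3, p)) + 0.3 * rng.normal(size=(n, p))

        # Randomly mask 10% of the entries, keeping the truth for scoring.
        mask = rng.random(data.shape) < 0.10
        observed = data.copy()
        observed[mask] = np.nan

        imputed = IterativeImputer(max_iter=20, random_state=0).fit_transform(observed)

        # Normalized RMSE on masked entries, normalizing by each analyte's SD
        # (one of several possible normalizations).
        err = (imputed - data)[mask]
        scale = data.std(axis=0)[np.nonzero(mask)[1]]
        nrmse = np.sqrt(np.mean((err / scale) ** 2))
        print(f"NRMSE on masked entries ~ {nrmse:.3f}")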

  1. Reliability and Validity of Speech Evaluation in Adductor Spasmodic Dysphonia.

    Science.gov (United States)

    Yanagida, Saori; Nishizawa, Noriko; Hashimoto, Ryusaku; Mizoguchi, Kenji; Hatakeyama, Hiromitsu; Homma, Akihiro; Fukuda, Satoshi

    2017-08-09

    The aim of this study was to evaluate speech in patients with adductor spasmodic dysphonia (ADSD) by perceptual evaluations and acoustic measures, and to examine the reliability and validity of these measures. Twenty-four patients with ADSD and 24 healthy volunteers were included in the study. Speech materials consisted of three sentences constructed from serial voiced syllables to elicit abductor voice breaks. Three otolaryngologists rated the degree of voice symptoms using a visual analog scale (VAS). VAS sheets with five 100-mm horizontal lines were given to each rater. The ends of the lines were labeled normal vs severe, and the five lines were labeled as overall severity and each of the four speech symptoms (strangulation, interruption, tremor and strained speech). Nine words were selected for acoustic analysis, and abnormal acoustic events were classified into one of three categories. To evaluate the intra- and inter-rater and intermeasurer reliabilities of the VAS scores and acoustic measures, Pearson r correlations were calculated. To examine the validity of the perceptual evaluations and acoustic measures, sensitivity and specificity were calculated. Pearson r correlation coefficients for overall severity showed the highest intra- and inter-rater reliabilities. For acoustic events, the intrameasurer reliabilities were r = .645 (frequency shifts), r = .969 (aperiodic segments), and r = 1.0 (phonation breaks), and the intermeasurer reliability ranged from r = .102 to r = 1.0. Perceptual evaluation showed high sensitivity (91.7%) and specificity (100%), whereas acoustic analysis showed low sensitivity (70.8%) and high specificity (100%). Both perceptual evaluation and acoustic measures alone were found likely to overlook patients with true ADSD. Copyright © 2017 The Voice Foundation. Published by Elsevier Inc. All rights reserved.

  2. Evaluation of Willingness to Pay for Reliable and Sustainable ...

    African Journals Online (AJOL)

    Evaluation of Willingness to Pay for Reliable and Sustainable household Water Use in Ilorin, Nigeria. ... consumers are willing to pay an average sum of N737.22 per month for improved water supply services; and gender, water quality and household income level have a significant impact on WTP at the 5% level of significance.

  3. Reliability Evaluation of Primary Cells | Anyaka | Nigerian Journal of ...

    African Journals Online (AJOL)

    Evaluation of the reliability of a primary cell took place in three stages: 192 cells went through a slow-discharge test. A designed experiment was conducted on 144 cells; there were three factors in the experiment: storage temperature (three levels), thermal shock (two levels) and date code (two levels). 16 cells ...

  4. Sponsorship Evaluation Scale (SES): a validity and reliability study ...

    African Journals Online (AJOL)

    The evaluation of consumer response to sport sponsorship is limited in the academic literature. This research aimed to conduct a dimensionality, validity and reliability study of the Speed and Thompson (2000) Sponsorship Questionnaire in Turkey. Eight hundred and fifty-two (852) university students participated in the ...

  5. Distribution system reliability evaluation using credibility theory | Xu ...

    African Journals Online (AJOL)

    This paper describes a new method of using credibility theory to evaluate distribution system reliability. The advantage of this method lies in its ability to account for both objective and subjective uncertainty by integrating stochastic and fuzzy approaches. Equipment failures are modeled as random events, while the ...

  6. Requirements for an evaluation infrastructure for reliable pervasive healthcare research

    DEFF Research Database (Denmark)

    Wagner, Stefan Rahr; Toftegaard, Thomas Skjødeberg; Bertelsen, Olav W.

    2012-01-01

    The need for a non-intrusive evaluation infrastructure platform to support research on reliable pervasive healthcare in the unsupervised setting is analyzed and challenges and possibilities are identified. A list of requirements is presented and a solution is suggested that would allow researchers...

  7. Missing Value Imputation Based on Gaussian Mixture Model for the Internet of Things

    Directory of Open Access Journals (Sweden)

    Xiaobo Yan

    2015-01-01

    Full Text Available This paper addresses missing value imputation for the Internet of Things (IoT). Nowadays, the IoT is used widely and commonly across a variety of domains, such as transportation and logistics and healthcare. However, missing values are very common in the IoT for a variety of reasons, so experimental data are often incomplete. As a result, some work that relies on IoT data cannot be carried out normally, which reduces the accuracy and reliability of data analysis results. Based on the characteristics of the data itself and the features of missing data in the IoT, this paper divides missing data into three types and defines three corresponding missing value imputation problems. We then propose three new models to solve the corresponding problems: a model of missing value imputation based on context and linear mean (MCL), a model of missing value imputation based on binary search (MBS), and a model of missing value imputation based on the Gaussian mixture model (MGI). Experimental results showed that the three models can greatly and effectively improve the accuracy, reliability, and stability of missing value imputation.
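
    The MGI idea in this record rests on a standard construction: fit a Gaussian mixture to complete records, then impute a missing coordinate by its conditional expectation under the mixture. The following is a generic sketch of that construction, not the authors' model, with made-up two-feature sensor data.

        import numpy as np
        from sklearn.mixture import GaussianMixture

        rng = np.random.default_rng(4)
        full = rng.multivariate_normal([20.0, 55.0], [[4.0, 3.0], [3.0, 9.0]], size=2000)
        x = full.copy()
        miss = rng.random(len(x)) < 0.2   # feature 1 missing in 20% of rows
        x[miss, 1] = np.nan

        gmm = GaussianMixture(n_components=3, random_state=0).fit(x[~miss])

        def impute_row(row):
            # E[x1 | x0]: each component's conditional mean, weighted by its
            # responsibility for the observed coordinate x0.
            x0 = row[0]
            log_r, cond_means = [], []
            for w, mu, cov in zip(gmm.weights_, gmm.means_, gmm.covariances_):
                var0 = cov[0, 0]
                log_r.append(np.log(w) - 0.5 * ((x0 - mu[0]) ** 2 / var0
                                                + np.log(2 * np.pi * var0)))
                cond_means.append(mu[1] + cov[1, 0] / var0 * (x0 - mu[0]))
            r = np.exp(np.array(log_r) - max(log_r))
            return float(r @ np.array(cond_means) / r.sum())

        x[miss, 1] = [impute_row(row) for row in x[miss]]
        rmse = np.sqrt(np.mean((x[miss, 1] - full[miss, 1]) ** 2))
        print(f"RMSE of GMM-based imputation ~ {rmse:.3f}")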

  8. Evaluation of aileron actuator reliability with censored data

    Directory of Open Access Journals (Sweden)

    Li Huaiyuan

    2015-08-01

    Full Text Available For the purpose of enhancing the reliability of the aileron of the Airbus new-generation A350XWB, an evaluation of aileron reliability on the basis of maintenance data is presented in this paper. Practical maintenance data contain a large number of censored samples, whose information uncertainty makes it hard to evaluate the reliability of the aileron actuator. Considering that the true lifetime of a censored sample has an identical distribution to that of complete samples, if a censored sample is transformed into a complete sample, the conversion frequency of the censored sample can be estimated from the frequency of complete samples. On the one hand, standard life table estimation and the product limit method are improved on the basis of such conversion frequency, enabling accurate estimation for various censored samples. On the other hand, by taking such frequency as one of the weight factors and integrating the variance of order statistics under a standard distribution, a weighted least squares estimation is formed for accurately estimating various censored samples. Extensive experiments and simulations show that the reliabilities from the improved life table and improved product limit method are closer to the true value and more conservative; moreover, the weighted least squares estimate (WLSE), with the conversion frequency of censored samples and the variances of order statistics as the weights, can still estimate accurately with a high proportion of censored data in samples. The algorithm in this paper performs well and can accurately estimate the reliability of the aileron actuator even with small samples and a high censoring rate. This research has certain significance in theory and engineering practice.
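
    The product-limit (Kaplan-Meier) estimator at the core of this comparison is compact enough to sketch directly; the lifetimes below are made-up censored maintenance records, not Airbus data.

        import numpy as np

        # Hypothetical actuator lifetimes (flight hours); event=1 failure, 0 censored.
        time  = np.array([1200, 1500, 1500, 1800, 2100, 2100, 2400, 2600, 3000, 3200])
        event = np.array([1, 0, 1, 1, 0, 1, 0, 1, 0, 0])

        # Sort by time, failures before censorings at ties (the usual KM convention).
        order = np.lexsort((-event, time))
        time, event = time[order], event[order]

        at_risk = len(time)
        s = 1.0
        for t, d in zip(time, event):
            if d:  # only failures step the survival curve down
                s *= (at_risk - 1) / at_risk
            at_risk -= 1
            print(f"t = {t:5d} h   S(t) ~ {s:.3f}")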

  9. RELIABILITY OF CERTAIN TESTS FOR EVALUATION OF JUDO TECHNIQUES

    Directory of Open Access Journals (Sweden)

    Slavko Obadov

    2007-05-01

    Full Text Available The sample included 106 judokas. Assessment of the level of mastery of judo techniques was carried out by evaluation from five competent judges. Each subject performed a technique three times and each performance was evaluated by the judges. In order to evaluate the measurement of each technique, Cronbach's coefficient of reliability (alpha) was calculated. During the procedure the subjects' results were also transformed to factor scores, i.e. the results of each performer on the main component of evaluation by the five judges. These factor scores could be used in the subsequent procedure of multivariate statistical analysis.
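
    Cronbach's alpha for a subjects-by-judges score matrix is a one-formula computation; a toy sketch with hypothetical ratings, not the study's data:

        import numpy as np

        def cronbach_alpha(scores):
            """scores: subjects x raters (or items) matrix."""
            scores = np.asarray(scores, dtype=float)
            k = scores.shape[1]
            item_var = scores.var(axis=0, ddof=1).sum()
            total_var = scores.sum(axis=1).var(ddof=1)
            return k / (k - 1) * (1.0 - item_var / total_var)

        # Six judokas scored by three judges on one technique (made-up numbers).
        ratings = [[7, 8, 7], [5, 5, 6], [9, 9, 8], [4, 5, 4], [6, 7, 6], [8, 8, 9]]
        print(f"Cronbach's alpha ~ {cronbach_alpha(ratings):.3f}")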

  10. Reliability evaluation of distribution systems containing renewable distributed generations

    Science.gov (United States)

    Alkuhayli, Abdulaziz Abddullah

    Reliability evaluation of distribution networks, including islanded microgrid cases, is presented. The Monte Carlo simulation algorithm is applied to a test network. The network includes three types of distributed energy resources: solar photovoltaic (PV), wind turbine (WT) and gas turbine (GT). These distributed generators contribute to supplying part of the load during grid-connected mode, but supply the entire load during islanded microgrid operation. PV and WT stochastic models have been used to simulate the randomness of these resources. This study shows that the implementation of distributed generation can improve the reliability of distribution networks.

  11. Reliability Evaluation considering Structures of a Large Scale Wind Farm

    DEFF Research Database (Denmark)

    Shin, Je-Seok; Cha, Seung-Tae; Wu, Qiuwei

    2012-01-01

    Wind energy is one of the most widely used renewable energy resources. Wind power has been connected to the grid as large-scale wind farms made up of dozens of wind turbines, and the scale of wind farms has increased further recently. Due to the intermittent and variable wind source, reliability ... of the wind farm, which is able to enhance the capability of delivering power instead of controlling an uncontrollable output of wind power. Therefore, this paper introduces a method to evaluate reliability depending upon the structure of the wind farm and to reflect the result in the planning stage of the wind farm...

  12. Standard and Robust Methods in Regression Imputation

    Science.gov (United States)

    Moraveji, Behjat; Jafarian, Koorosh

    2014-01-01

    The aim of this paper is to provide an introduction to new imputation algorithms for estimating missing values in larger data sets from official statistics, during data pre-processing or in the presence of outliers. The goal is to propose a new algorithm called IRMI (iterative robust model-based imputation). This algorithm is able to deal with all challenges like…
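
    The record is truncated, but the flavour of iterative robust model-based imputation can be sketched with scikit-learn by plugging a robust regressor into an iterative imputer. This is an IRMI-like stand-in, not the authors' algorithm, and the toy data with gross outliers is an assumption.

        import numpy as np
        from sklearn.experimental import enable_iterative_imputer  # noqa: F401
        from sklearn.impute import IterativeImputer
        from sklearn.linear_model import HuberRegressor

        rng = np.random.default_rng(5)
        n = 400
        x1 = rng.normal(10.0, 2.0, n)
        x2 = 3.0 * x1 + rng.normal(0.0, 1.0, n)
        x2[rng.random(n) < 0.05] += 40.0   # 5% gross outliers
        truth = x2.copy()
        lost = rng.random(n) < 0.15        # 15% missing values
        x2[lost] = np.nan

        # Each incomplete variable is regressed on the others with a robust estimator.
        imp = IterativeImputer(estimator=HuberRegressor(), max_iter=15, random_state=0)
        filled = imp.fit_transform(np.column_stack([x1, x2]))
        rmse = np.sqrt(np.mean((filled[lost, 1] - truth[lost]) ** 2))
        print(f"RMSE on the missing entries ~ {rmse:.2f}")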

  13. Environmental education curriculum evaluation questionnaire: A reliability and validity study

    Science.gov (United States)

    Minner, Daphne Diane

    The intention of this research project was to bridge the gap between social science research and application to the environmental domain through the development of a theoretically derived instrument designed to give educators a template by which to evaluate environmental education curricula. The theoretical base for instrument development was provided by several developmental theories such as Piaget's theory of cognitive development, Developmental Systems Theory, Life-span Perspective, as well as curriculum research within the area of environmental education. This theoretical base fueled the generation of a list of components which were then translated into a questionnaire with specific questions relevant to the environmental education domain. The specific research question for this project is: Can a valid assessment instrument based largely on human development and education theory be developed that reliably discriminates high, moderate, and low quality in environmental education curricula? The types of analyses conducted to answer this question were interrater reliability (percent agreement, Cohen's Kappa coefficient, Pearson's Product-Moment correlation coefficient), test-retest reliability (percent agreement, correlation), and criterion-related validity (correlation). Face validity and content validity were also assessed through thorough reviews. Overall results indicate that 29% of the questions on the questionnaire demonstrated a high level of interrater reliability and 43% of the questions demonstrated a moderate level of interrater reliability. Seventy-one percent of the questions demonstrated a high test-retest reliability and 5% a moderate level. Fifty-five percent of the questions on the questionnaire were reliable (high or moderate) both across time and raters. Only eight questions (8%) did not show either interrater or test-retest reliability. The global overall rating of high, medium, or low quality was reliable across both coders and time, indicating

  14. Accident Sequence Evaluation Program: Human reliability analysis procedure

    Energy Technology Data Exchange (ETDEWEB)

    Swain, A.D.

    1987-02-01

    This document presents a shortened version of the procedure, models, and data for human reliability analysis (HRA) which are presented in the Handbook of Human Reliability Analysis with Emphasis on Nuclear Power Plant Applications (NUREG/CR-1278, August 1983). This shortened version was prepared and tried out as part of the Accident Sequence Evaluation Program (ASEP) funded by the US Nuclear Regulatory Commission and managed by Sandia National Laboratories. The intent of this new HRA procedure, called the "ASEP HRA Procedure," is to enable systems analysts, with minimal support from experts in human reliability analysis, to make estimates of human error probabilities and other human performance characteristics which are sufficiently accurate for many probabilistic risk assessments. The ASEP HRA Procedure consists of a Pre-Accident Screening HRA, a Pre-Accident Nominal HRA, a Post-Accident Screening HRA, and a Post-Accident Nominal HRA. The procedure in this document includes changes made after tryout and evaluation of the procedure in four nuclear power plants by four different systems analysts and related personnel, including human reliability specialists. The changes consist of some additional explanatory material (including examples), and more detailed definitions of some of the terms. 42 refs.

  15. Reliability of tethered swimming evaluation in age group swimmers.

    Science.gov (United States)

    Amaro, Nuno; Marinho, Daniel A; Batalha, Nuno; Marques, Mário C; Morouço, Pedro

    2014-06-28

    The aim of the present study was to examine the reliability of tethered swimming in the evaluation of age group swimmers. The sample was composed of 8 male national level swimmers with at least 4 years of experience in competitive swimming. Each swimmer performed two 30-second maximal intensity tethered swimming tests, on separate days. Individual force-time curves were registered to assess maximum force, mean force and the mean impulse of force. Both consistency and reliability were very strong, with Cronbach's Alpha values ranging from 0.970 to 0.995. All the applied metrics presented a very high agreement between tests, with the mean impulse of force presenting the highest. These results indicate that tethered swimming can be used to evaluate age group swimmers. Furthermore, a better comprehension of the swimmers' ability to effectively exert force in the water can be obtained using the impulse of force.

  16. Reliability of Tethered Swimming Evaluation in Age Group Swimmers

    Directory of Open Access Journals (Sweden)

    Amaro Nuno

    2014-07-01

    Full Text Available The aim of the present study was to examine the reliability of tethered swimming in the evaluation of age group swimmers. The sample was composed of 8 male national level swimmers with at least 4 years of experience in competitive swimming. Each swimmer performed two 30-second maximal intensity tethered swimming tests, on separate days. Individual force-time curves were registered to assess maximum force, mean force and the mean impulse of force. Both consistency and reliability were very strong, with Cronbach's Alpha values ranging from 0.970 to 0.995. All the applied metrics presented a very high agreement between tests, with the mean impulse of force presenting the highest. These results indicate that tethered swimming can be used to evaluate age group swimmers. Furthermore, a better comprehension of the swimmers' ability to effectively exert force in the water can be obtained using the impulse of force.

  17. Genotype Imputation for Latinos Using the HapMap and 1000 Genomes Project Reference Panels.

    Science.gov (United States)

    Gao, Xiaoyi; Haritunians, Talin; Marjoram, Paul; McKean-Cowdin, Roberta; Torres, Mina; Taylor, Kent D; Rotter, Jerome I; Gauderman, William J; Varma, Rohit

    2012-01-01

    Genotype imputation is a vital tool in genome-wide association studies (GWAS) and meta-analyses of multiple GWAS results. Imputation enables researchers to increase genomic coverage and to pool data generated using different genotyping platforms. HapMap samples are often employed as the reference panel. More recently, the 1000 Genomes Project resource is becoming the primary source for reference panels. Multiple GWAS and meta-analyses are targeting Latinos, the most populous and fastest-growing minority group in the US. However, genotype imputation resources for Latinos are rather limited compared to individuals of European ancestry at present, largely because of the lack of good reference data. One choice of reference panel for Latinos is one derived from the population of Mexican individuals in Los Angeles contained in the HapMap Phase 3 project and the 1000 Genomes Project. However, a detailed evaluation of the quality of the imputed genotypes derived from the public reference panels has not yet been reported. Using simulation studies, the Illumina OmniExpress GWAS data from the Los Angeles Latino Eye Study and the MACH software package, we evaluated the accuracy of genotype imputation in Latinos. Our results show that the 1000 Genomes Project AMR + CEU + YRI reference panel provides the highest imputation accuracy for Latinos, and that also including Asian samples in the panel can reduce imputation accuracy. We also provide the imputation accuracy for each autosomal chromosome using the 1000 Genomes Project panel for Latinos. Our results serve as a guide to future imputation-based analysis in Latinos.

  18. Genotype Imputation for Latinos Using the HapMap and 1000 Genomes Project Reference Panels

    Directory of Open Access Journals (Sweden)

    Xiaoyi eGao

    2012-06-01

    Full Text Available Genotype imputation is a vital tool in genome-wide association studies (GWAS) and meta-analyses of multiple GWAS results. Imputation enables researchers to increase genomic coverage and to pool data generated using different genotyping platforms. HapMap samples are often employed as the reference panel. More recently, the 1000 Genomes Project resource is becoming the primary source for reference panels. Multiple GWAS and meta-analyses are targeting Latinos, the most populous and fastest-growing minority group in the US. However, genotype imputation resources for Latinos are rather limited compared to individuals of European ancestry at present, largely because of the lack of good reference data. One choice of reference panel for Latinos is one derived from the population of Mexican individuals in Los Angeles contained in the HapMap Phase 3 project and the 1000 Genomes Project. However, a detailed evaluation of the quality of the imputed genotypes derived from the public reference panels has not yet been reported. Using simulation studies, the Illumina OmniExpress GWAS data from the Los Angeles Latino Eye Study and the MACH software package, we evaluated the accuracy of genotype imputation in Latinos. Our results show that the 1000 Genomes Project AMR+CEU+YRI reference panel provides the highest imputation accuracy for Latinos, and that also including Asian samples in the panel can reduce imputation accuracy. We also provide the imputation accuracy for each autosomal chromosome using the 1000 Genomes Project panel for Latinos. Our results serve as a guide to future imputation-based analysis in Latinos.

  19. How to evaluate objective video quality metrics reliably

    DEFF Research Database (Denmark)

    Korhonen, Jari; Burini, Nino; You, Junyong

    2012-01-01

    The typical procedure for evaluating the performance of different objective quality metrics and indices involves comparisons between subjective quality ratings and the quality indices obtained using the objective metrics in question on known video sequences. Several correlation indicators can be used ... as well as processing of subjective data. We also suggest some general guidelines for researchers to make comparison studies of objective video quality metrics more reliable and useful for practitioners in the field.

  20. The validity and reliability of attending evaluations of medicine residents.

    Science.gov (United States)

    Jackson, Jeffrey L; Kay, Cynthia; Frank, Michael

    2015-01-01

    To assess the reliability and validity of faculty evaluations of medicine residents. We conducted a retrospective study (2004-2012) involving 228 internal medicine residency graduates at the Medical College of Wisconsin who were evaluated by 334 attendings. Measures included evaluations of residents by attendings based on six competencies, as well as interns' and residents' performance on the American Board of Internal Medicine certification exam and the annual in-service training examination. All residents had at least one in-service training examination result and 80% allowed the American Board of Internal Medicine to release their scores. Attending evaluations had good consistency (Cronbach's α = 0.96). There was poor construct validity, with modest inter-rater reliability and evidence that attendings were rating residents on a single factor rather than the six competencies intended to be measured. There was poor predictive validity, as attending ratings correlated weakly with performance on the in-service training examination or the American Board of Internal Medicine certification exam. We conclude that attending evaluations are poor measures for assessing progress toward competency. It may be time to move beyond evaluations that rely on global, end-of-rotation appraisals.

  1. The validity and reliability of attending evaluations of medicine residents

    Directory of Open Access Journals (Sweden)

    Jeffrey L Jackson

    2015-06-01

    Full Text Available Objectives: To assess the reliability and validity of faculty evaluations of medicine residents. Methods: We conducted a retrospective study (2004–2012) involving 228 internal medicine residency graduates at the Medical College of Wisconsin who were evaluated by 334 attendings. Measures included evaluations of residents by attendings based on six competencies, as well as interns' and residents' performance on the American Board of Internal Medicine certification exam and the annual in-service training examination. All residents had at least one in-service training examination result and 80% allowed the American Board of Internal Medicine to release their scores. Results: Attending evaluations had good consistency (Cronbach's α = 0.96). There was poor construct validity, with modest inter-rater reliability and evidence that attendings were rating residents on a single factor rather than the six competencies intended to be measured. There was poor predictive validity, as attending ratings correlated weakly with performance on the in-service training examination or the American Board of Internal Medicine certification exam. Conclusion: We conclude that attending evaluations are poor measures for assessing progress toward competency. It may be time to move beyond evaluations that rely on global, end-of-rotation appraisals.

  2. Composite system reliability evaluation using sequential Monte Carlo simulation

    Science.gov (United States)

    Jonnavithula, Annapoorani

    Monte Carlo simulation methods can be effectively used to assess the adequacy of composite power system networks. The sequential simulation approach is the most fundamental technique available and can be used to provide a wide range of indices. It can also be used to provide estimates which can serve as benchmarks against which other approximate techniques can be compared. The focus of this research work is on the reliability evaluation of composite generation and transmission systems with special reference to frequency and duration related indices and estimated power interruption costs at each load bus. One of the main objectives is to use the sequential simulation method to create a comprehensive technique for composite system adequacy evaluation. This thesis recognizes the need for an accurate representation of the load model at the load buses which depends on the mix of customer sectors at each bus. Chronological hourly load curves are developed in this thesis, recognizing the individual load profiles of the customers at each load bus. Reliability worth considerations are playing an ever increasing role in power system planning and operation. Different methods for bus outage cost evaluation are proposed in this thesis. It may not be computationally feasible to use the sequential simulation method with time varying loads at each bus in large electric power system networks. Time varying load data may also not be available at each bus. This research work uses the sequential methodology as a fundamental technique to calibrate other non sequential methods such as the state sampling and state transition sampling techniques. Variance reduction techniques that improve the efficiency of the sequential simulation procedure are investigated as a part of this research work. Pertinent features that influence reliability worth assessment are also incorporated. All the proposed methods in this thesis are illustrated by application to two reliability test systems. In addition
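
    A minimal chronological (sequential) simulation that yields probability, frequency and duration indices for a toy generating system; the unit data and the flat load are assumptions, and a real composite-system study would add the network model and the chronological load curves discussed above.

        import numpy as np

        rng = np.random.default_rng(6)

        lam, mu = 1 / 2000.0, 1 / 40.0       # failure / repair rates (per hour)
        n_units, cap, load = 3, 50.0, 120.0  # three 50-MW units, 120-MW flat load

        horizon = 2_000_000.0                # simulated hours
        state = np.ones(n_units, dtype=bool) # True = unit up
        remain = rng.exponential(1 / lam, n_units)

        deficit_hours, interruptions, in_deficit, clock = 0.0, 0, False, 0.0
        while clock < horizon:
            step = remain.min()              # advance to the next state transition
            if state.sum() * cap < load:     # capacity deficit during this segment
                deficit_hours += step
                if not in_deficit:
                    interruptions += 1
                in_deficit = True
            else:
                in_deficit = False
            clock += step
            remain -= step
            for i in np.where(remain <= 1e-9)[0]:
                state[i] = ~state[i]
                remain[i] = rng.exponential(1 / mu if not state[i] else 1 / lam)

        print(f"LOLP ~ {deficit_hours / clock:.5f}")
        print(f"failure frequency ~ {interruptions / clock * 8760:.3f} occ/yr")
        print(f"mean failure duration ~ {deficit_hours / max(interruptions, 1):.1f} h")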

  3. An Imputation Model for Dropouts in Unemployment Data

    Directory of Open Access Journals (Sweden)

    Nilsson Petra

    2016-09-01

    Full Text Available Incomplete unemployment data is a fundamental problem when evaluating labour market policies in several countries. Many unemployment spells end for unknown reasons; in the Swedish Public Employment Service's register, as many as 20 percent. This leads to ambiguity regarding destination states (employment, unemployment, retired, etc.). According to complete combined administrative data, the employment rate among dropouts was close to 50 percent for the years 1992 to 2006, but from 2007 the employment rate has dropped to 40 percent or less. This article explores an imputation approach. We investigate imputation models estimated both on survey data from 2005/2006 and on complete combined administrative data from 2005/2006 and 2011/2012. The models are evaluated in terms of their ability to make correct predictions. The models have relatively high predictive power.

  4. Reliability of repeated forensic evaluations of legal sanity.

    Science.gov (United States)

    Kacperska, Iwona; Heitzman, Janusz; Bąk, Tomasz; Leśko, Anna Walczyna; Opio, Małgorzata

    2016-01-01

    Criminal responsibility evaluation is a very complex and controversial issue due to the gravity of its consequences. Polish legislation allows courts to request multiple sanity evaluations. The purpose of this study was to assess the extent of agreement on sanity evaluations in written evidence provided by experts in criminal cases in Poland. A total of 381 forensic evaluation reports addressing 117 criminal defendants were analysed. In sixty-eight cases, there was more than one forensic evaluation report containing an assessment of legal sanity, including forty-one cases containing two assessments of criminal responsibility, seventeen containing three assessments, eight containing four assessments and two containing five assessments. We found that in 47% of the cases containing more than one sanity assessment, the initial criminal responsibility assessment was changed after a subsequent forensic evaluation. The agreement between repeated criminal responsibility evaluations was found to be fair. This study found a strong correlation between the number of forensic reports and the number of contradictory sanity assessments: fewer forensic opinions were involved in cases in which subsequent forensic evaluation reports reached the same conclusion regarding criminal responsibility than in cases in which the conclusions changed. There is a clear need for further research in this area, and it is necessary to standardise criminal responsibility evaluations in order to improve their reliability and to shorten legal proceedings. Copyright © 2015. Published by Elsevier Ltd.

  5. Genotype imputation for African Americans using data from HapMap phase II versus 1000 genomes projects.

    Science.gov (United States)

    Sung, Yun J; Gu, C Charles; Tiwari, Hemant K; Arnett, Donna K; Broeckel, Ulrich; Rao, Dabeeru C

    2012-07-01

    Genotype imputation provides imputation of untyped single nucleotide polymorphisms (SNPs) that are present on a reference panel such as those from the HapMap Project. It is popular for increasing statistical power and comparing results across studies using different platforms. Imputation for African American populations is challenging because their linkage disequilibrium blocks are shorter and also because no ideal reference panel is available due to admixture. In this paper, we evaluated three imputation strategies for African Americans. The intersection strategy used a combined panel consisting of SNPs polymorphic in both CEU and YRI. The union strategy used a panel consisting of SNPs polymorphic in either CEU or YRI. The merge strategy merged results from two separate imputations, one using CEU and the other using YRI. Because recent investigators are increasingly using the data from the 1000 Genomes (1KG) Project for genotype imputation, we evaluated both 1KG-based imputations and HapMap-based imputations. We used 23,707 SNPs from chromosomes 21 and 22 on Affymetrix SNP Array 6.0 genotyped for 1,075 HyperGEN African Americans. We found that 1KG-based imputations provided a substantially larger number of variants than HapMap-based imputations, about three times as many common variants and eight times as many rare and low-frequency variants. This higher yield is expected because the 1KG panel includes more SNPs. Accuracy rates using 1KG data were slightly lower than those using HapMap data before filtering, but slightly higher after filtering. The union strategy provided the highest imputation yield with the next-highest accuracy. The intersection strategy provided the lowest imputation yield but the highest accuracy. The merge strategy provided the lowest imputation accuracy. We observed that SNPs polymorphic only in CEU had much lower accuracy, reducing the accuracy of the union strategy. Our findings suggest that 1KG-based imputations can facilitate discovery of

  6. Reliability evaluation of nonlinear design space in pharmaceutical product development.

    Science.gov (United States)

    Hayashi, Yoshihiro; Kikuchi, Shingo; Onuki, Yoshinori; Takayama, Kozo

    2012-01-01

    The formulation design space of indomethacin tablets was investigated using a nonlinear response surface method incorporating multivariate spline interpolation (RSM-S). In this study, a resampling method with replacement was applied to evaluate the reliability of the border of the design space estimated by RSM-S. The quantities of lactose, cornstarch, and microcrystalline cellulose were chosen as the formulation factors. Response surfaces were estimated using RSM-S, and the nonlinear design space was defined under the restriction of more than 3 kgf hardness and more than 70% dissolution at 30 min, before and after an accelerated test. The accuracy of the resampling method was elucidated, and high correlation coefficients were produced. However, the distribution of the border of the design space generated by the resampling method was far from normal, so the confidence interval of the border was estimated using a nonparametric percentile technique. Consequently, the reliability of the design space decreased as the border approached the edge of the experimental design. RSM-S and this resampling method might be useful for estimating the reliability of a nonlinear design space. Copyright © 2011 Wiley-Liss, Inc.
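
    The resampling idea (refit the model on bootstrap samples drawn with replacement and take a nonparametric percentile interval for the border) looks like this in outline; the border_from_runs stand-in and all numbers are hypothetical rather than the paper's RSM-S fit.

        import numpy as np

        rng = np.random.default_rng(7)

        def border_from_runs(runs):
            # Stand-in for "fit RSM-S and intersect it with the hardness and
            # dissolution constraints"; here simply a high quantile of the runs.
            return np.quantile(runs, 0.9)

        runs = rng.normal(52.0, 3.0, size=30)   # toy experimental responses
        boot = np.array([
            border_from_runs(rng.choice(runs, size=len(runs), replace=True))
            for _ in range(5000)
        ])
        lo, hi = np.percentile(boot, [2.5, 97.5])  # nonparametric percentile interval
        print(f"border ~ {border_from_runs(runs):.2f}, 95% CI ({lo:.2f}, {hi:.2f})")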

  7. Reliability evaluation of oil pipelines operating in aggressive environment

    Science.gov (United States)

    Magomedov, R. M.; Paizulaev, M. M.; Gebel, E. S.

    2017-08-01

    In connection with modern increased requirements for ecology and safety, the development of a complex of diagnostic services is necessary to ensure the reliable operation of the gas transportation infrastructure. Estimation of the technical condition of oil pipelines should be carried out not only to establish the current values of the equipment's technological parameters in operation, but also to predict the dynamics of changes in the physical and mechanical characteristics of the material, the appearance of defects, etc., so as to ensure reliable and safe operation. In this paper, existing Russian and foreign methods for evaluating oil pipeline reliability are considered, taking into account one of the main factors leading to the appearance of crevices in the pipeline material and to changes in the shape of its cross-section: corrosion. Without compromising the generality of the reasoning, uniform corrosion wear of an initially rectangular cross-section is assumed. As a result, a formula for calculating the probability of failure-free operation is derived. The proposed mathematical model makes it possible to predict emergency situations, as well as to determine optimal operating conditions for oil pipelines.

  8. Accuracy of estimation of genomic breeding values in pigs using low-density genotypes and imputation.

    Science.gov (United States)

    Badke, Yvonne M; Bates, Ronald O; Ernst, Catherine W; Fix, Justin; Steibel, Juan P

    2014-04-16

    Genomic selection has the potential to increase genetic progress. Genotype imputation of high-density single-nucleotide polymorphism (SNP) genotypes can improve the cost efficiency of genomic breeding value (GEBV) prediction for pig breeding. Consequently, the objectives of this work were to: (1) estimate the accuracy of genomic evaluation and GEBV for three traits in a Yorkshire population and (2) quantify the loss of accuracy of genomic evaluation and GEBV when genotypes were imputed under two scenarios: a high-cost, high-accuracy scenario in which only selection candidates were imputed from a low-density platform and a low-cost, low-accuracy scenario in which all animals were imputed using a small reference panel of haplotypes. Phenotypes and genotypes obtained with the PorcineSNP60 BeadChip were available for 983 Yorkshire boars. Genotypes of selection candidates were masked and imputed using tagSNP in the GeneSeek Genomic Profiler (10K). Imputation was performed with BEAGLE using 128 or 1800 haplotypes as reference panels. GEBV were obtained through an animal-centric ridge regression model using de-regressed breeding values as response variables. Accuracy of genomic evaluation was estimated as the correlation between estimated breeding values and GEBV in a 10-fold cross validation design. Accuracy of genomic evaluation using observed genotypes was high for all traits (0.65-0.68). Using genotypes imputed from a large reference panel (accuracy: R2 = 0.95) for genomic evaluation did not significantly decrease accuracy, whereas a scenario with genotypes imputed from a small reference panel (R2 = 0.88) did show a significant decrease in accuracy. Genomic evaluation based on imputed genotypes in selection candidates can be implemented at a fraction of the cost of a genomic evaluation using observed genotypes and still yield virtually the same accuracy. On the other hand, using a very small reference panel of haplotypes to impute training animals and candidates for
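
    A generic SNP-BLUP-style stand-in for the genomic evaluation step: ridge regression of phenotypes on genotype dosages, with cross-validated accuracy taken as the correlation between GEBV and phenotype. The simulated genotypes, the heritability and the ridge penalty are assumptions, and the paper's animal-centric model with de-regressed breeding values differs in detail.

        import numpy as np
        from sklearn.linear_model import Ridge
        from sklearn.model_selection import KFold

        rng = np.random.default_rng(8)

        # Toy genotypes (individuals x SNPs, coded 0/1/2) and a simulated phenotype.
        n, p = 600, 3000
        geno = rng.binomial(2, 0.3, size=(n, p)).astype(float)
        qtl = rng.choice(p, 50, replace=False)
        g = geno[:, qtl] @ rng.normal(size=50)
        y = g + rng.normal(scale=g.std(), size=n)   # heritability around 0.5

        acc = []
        for tr, te in KFold(5, shuffle=True, random_state=0).split(geno):
            model = Ridge(alpha=float(p)).fit(geno[tr], y[tr])  # penalty is a guess
            gebv = model.predict(geno[te])
            acc.append(np.corrcoef(gebv, y[te])[0, 1])
        print(f"mean cross-validated accuracy ~ {np.mean(acc):.2f}")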

  9. Consequences of splitting whole-genome sequencing effort over multiple breeds on imputation accuracy.

    Science.gov (United States)

    Bouwman, Aniek C; Veerkamp, Roel F

    2014-10-03

    The aim of this study was to determine the consequences of splitting sequencing effort over multiple breeds for imputation accuracy from a high-density SNP chip towards whole-genome sequence. Such information would assist, for instance, numerically smaller cattle breeds, but also pig and chicken breeders, who have to choose wisely how to spend their sequencing efforts over all the breeds or lines they evaluate. Sequence data from cattle breeds was used, because there are currently relatively many individuals from several breeds sequenced within the 1,000 Bull Genomes project. The advantage of whole-genome sequence data is that it carries the causal mutations, but the question is whether it is possible to impute the causal variants accurately. This study therefore focussed on the imputation accuracy of variants with low minor allele frequency and breed-specific variants. Imputation accuracy was assessed for chromosomes 1 and 29 as the correlation between observed and imputed genotypes. For chromosome 1, the average imputation accuracy was 0.70 with a reference population of 20 Holstein, and increased to 0.83 when the reference population was increased by including 3 other dairy breeds with 20 animals each. When the same number of animals from the Holstein breed was added the accuracy improved to 0.88, while adding the 3 other breeds to the reference population of 80 Holstein improved the average imputation accuracy marginally to 0.89. For chromosome 29, the average imputation accuracy was lower. Some variants benefitted from the inclusion of other breeds in the reference population, initially determined by the MAF of the variant in each breed, but even Holstein-specific variants gained imputation accuracy from the multi-breed reference population. This study shows that splitting sequencing effort over multiple breeds and combining the reference populations is a good strategy for imputation from high-density SNP panels towards whole-genome sequence when reference

  10. Imputation and variable selection in linear regression models with missing covariates.

    Science.gov (United States)

    Yang, Xiaowei; Belin, Thomas R; Boscardin, W John

    2005-06-01

    Across multiply imputed data sets, variable selection methods such as stepwise regression and other criterion-based strategies that include or exclude particular variables typically result in models with different selected predictors, thus presenting a problem for combining the results from separate complete-data analyses. Here, drawing on a Bayesian framework, we propose two alternative strategies to address the problem of choosing among linear regression models when there are missing covariates. One approach, which we call "impute, then select" (ITS), involves initially performing multiple imputation and then applying Bayesian variable selection to the multiply imputed data sets. A second strategy is to conduct Bayesian variable selection and missing data imputation simultaneously within one Gibbs sampling process, which we call "simultaneously impute and select" (SIAS). The methods are implemented and evaluated using the Bayesian procedure known as stochastic search variable selection for multivariate normal data sets, but both strategies offer general frameworks within which different Bayesian variable selection algorithms could be used for other types of data sets. A study of mental health services utilization among children in foster care programs is used to illustrate the techniques. Simulation studies show that both ITS and SIAS outperform complete-case analysis with stepwise variable selection and that SIAS slightly outperforms ITS.
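
    A hedged sketch of the ITS idea, with scikit-learn's IterativeImputer standing in for the multiple-imputation step and a lasso standing in for the Bayesian stochastic search variable selection used in the paper; selection stability is summarized by how often each covariate is retained across the imputed data sets.

    ```python
    # "Impute, then select" (ITS), approximated with off-the-shelf tools:
    # impute m times, run a selector on each completed data set, pool results.
    import numpy as np
    from sklearn.experimental import enable_iterative_imputer  # noqa: F401
    from sklearn.impute import IterativeImputer
    from sklearn.linear_model import LassoCV

    rng = np.random.default_rng(1)
    X = rng.normal(size=(200, 10))
    y = X[:, 0] - 2 * X[:, 3] + rng.normal(size=200)
    X[rng.random(X.shape) < 0.15] = np.nan        # missing covariates

    m = 5
    selected = np.zeros(X.shape[1])
    for seed in range(m):
        X_imp = IterativeImputer(sample_posterior=True,
                                 random_state=seed).fit_transform(X)
        selected += np.abs(LassoCV(cv=5).fit(X_imp, y).coef_) > 1e-8

    print("selection frequency per covariate:", selected / m)
    ```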

  11. A multi breed reference improves genotype imputation accuracy in Nordic Red cattle

    DEFF Research Database (Denmark)

    Brøndum, Rasmus Froberg; Ma, Peipei; Lund, Mogens Sandø

    2012-01-01

    The objective of this study was to investigate if a multi-breed reference would improve genotype imputation accuracy from 50K to high-density (HD) single nucleotide polymorphism (SNP) marker data in Nordic Red Dairy Cattle, compared to using only a single-breed reference, and to check the subsequent effect of the imputed HD data on the reliability of genomic prediction. HD genotype data was available for 247 Danish, 210 Swedish and 249 Finnish Red bulls, and for 546 Holstein bulls. A subset of 50 bulls from each of the Nordic Red populations was selected for validation. After quality control, 612,615 SNPs on chromosomes 1-29 remained for analysis. Validation was done by masking markers in true HD data and imputing them using Beagle v. 3.3 and a reference group of either national Red, combined Red or combined Red and Holstein bulls. Results show a decrease in allele error rate from 2.64, 1…

  12. On multivariate imputation and forecasting of decadal wind speed missing data.

    Science.gov (United States)

    Wesonga, Ronald

    2015-01-01

    This paper demonstrates the application of multiple imputation by chained equations and time series forecasting to wind speed data. The study was motivated by the high prevalence of missing historic wind speed data. Findings based on the fully conditional specification under multiple imputation by chained equations provided reliable imputations of the missing wind speed data. Further, the forecasting model shows a smoothing parameter, alpha (0.014), close to zero, confirming that recent past observations are more suitable for use in forecasting wind speeds. The maximum decadal wind speed for Entebbe International Airport was estimated to be 17.6 metres per second at a 0.05 level of significance with a bound on the error of estimation of 10.8 metres per second. The large bound on the error of estimation confirms the dynamic tendencies of wind speed at the airport under study.
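
    For intuition on the smoothing-parameter remark, here is a minimal simple-exponential-smoothing forecaster run on simulated wind speeds; everything except the reported alpha of 0.014 is invented.

    ```python
    # One-step-ahead simple exponential smoothing; a small alpha updates the
    # level slowly, so the forecast carries a long memory of the series.
    import numpy as np

    def ses_forecast(series, alpha):
        level = series[0]
        for x in series[1:]:
            level = alpha * x + (1 - alpha) * level
        return level

    rng = np.random.default_rng(2)
    wind = 5 + rng.gamma(shape=2.0, scale=1.0, size=3650)  # simulated m/s
    print(f"next-step forecast: {ses_forecast(wind, alpha=0.014):.2f} m/s")
    ```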

  13. Methods for reliability evaluation of trust and reputation systems

    Science.gov (United States)

    Janiszewski, Marek B.

    2016-09-01

    Trust and reputation systems are a systematic approach to building security on the basis of observations of nodes' behaviour. Exchange of nodes' opinions about other nodes is very useful to indicate nodes which act selfishly or maliciously. The idea behind trust and reputation systems gains significance because conventional security measures (based on cryptography) are often not sufficient. Trust and reputation systems can be used in various types of networks such as WSN, MANET, P2P and also in e-commerce applications. Trust and reputation systems give not only benefits but could also be a threat themselves. Many attacks aimed at trust and reputation systems exist, but such attacks have still not gained enough attention from research teams. Moreover, the joint effects of many of the known attacks have been identified as a very interesting field of research. The lack of an acknowledged methodology for the evaluation of trust and reputation systems is a serious problem. This paper aims at presenting various approaches to the evaluation of such systems. This work also contains a description of a generalization of many trust and reputation systems which can be used to evaluate the reliability of such systems in the context of preventing various attacks.

  14. Saturated linkage map construction in Rubus idaeus using genotyping by sequencing and genome-independent imputation

    Directory of Open Access Journals (Sweden)

    Ward Judson A

    2013-01-01

    Full Text Available Abstract Background Rapid development of highly saturated genetic maps aids molecular breeding, which can accelerate gain per breeding cycle in woody perennial plants such as Rubus idaeus (red raspberry). Recently, robust genotyping methods based on high-throughput sequencing were developed, which provide high marker density, but result in some genotype errors and a large number of missing genotype values. Imputation can reduce the number of missing values and can correct genotyping errors, but current methods of imputation require a reference genome and thus are not an option for most species. Results Genotyping by Sequencing (GBS) was used to produce highly saturated maps for a R. idaeus pseudo-testcross progeny. While low coverage and high variance in sequencing resulted in a large number of missing values for some individuals, a novel method of imputation based on maximum likelihood marker ordering from initial marker segregation overcame the challenge of missing values, and made map construction computationally tractable. The two resulting parental maps contained 4521 and 2391 molecular markers spanning 462.7 and 376.6 cM respectively over seven linkage groups. Detection of precise genomic regions with segregation distortion was possible because of map saturation. Microsatellites (SSRs) linked these results to published maps for cross-validation and map comparison. Conclusions GBS together with genome-independent imputation provides a rapid method for genetic map construction in any pseudo-testcross progeny. Our method of imputation estimates the correct genotype call of missing values and corrects genotyping errors that lead to inflated map size and reduced precision in marker placement. Comparison of SSRs to published R. idaeus maps showed that the linkage maps constructed with GBS and our method of imputation were robust, and marker positioning reliable. The high marker density allowed identification of genomic regions with segregation

  15. Reliability Evaluation for Optimizing Electricity Supply in a Developing Country

    Directory of Open Access Journals (Sweden)

    Mark Ndubuka NWOHU

    2007-09-01

    Full Text Available The reliability standards for electricity supply in a developing country, like Nigeria, have to be determined from past engineering principles and practice. Because of the high demand for electrical power due to rapid development, industrialization and rural electrification, the economic, social and political climate in which the electric power supply industry now operates should be critically viewed to ensure that the production of electrical power is augmented and remains uninterrupted. This paper presents an economic framework that can be used to optimize electric power system reliability. Finally, the cost models are investigated to take into account the economic analysis of system reliability, which can be periodically updated to improve the overall reliability of the electric power system.

  16. Reliability Evaluation Of The City Transport Buses Under Actual Conditions

    Directory of Open Access Journals (Sweden)

    Rymarz Joanna

    2015-12-01

    Full Text Available The purpose of this paper was to present a reliability comparison of two types of city transport buses. A case study of two well-known brands of city buses, Solaris Urbino 12 and Mercedes-Benz 628 Conecto L, used at the Municipal Transport Company in Lublin is presented in detail. A reliability index for the most failure-prone parts and complex systems over the time-to-failure period was determined. The analysis covered failures of the following systems: engine, electrical system, pneumatic system, brake system, driving system, central heating and air-conditioning, and doors. Reliability was analyzed based on the Weibull model. It was demonstrated that significant reliability differences occur during operation between buses produced nowadays.
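
    A Weibull analysis of time-between-failure data along these lines can be sketched as follows, assuming simulated mileages rather than the Lublin fleet's records:

    ```python
    # Fit a two-parameter Weibull (location fixed at zero) and evaluate the
    # reliability function R(t) = exp(-(t/eta)**beta) at a chosen mileage.
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(3)
    ttf = rng.weibull(1.4, size=60) * 9000       # km between failures (simulated)

    beta, _, eta = stats.weibull_min.fit(ttf, floc=0)
    print(f"shape beta = {beta:.2f}, scale eta = {eta:.0f} km")
    print(f"R(5000 km) = {np.exp(-(5000.0 / eta) ** beta):.3f}")
    ```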

  17. Imputation of adverse drug reactions: Causality assessment in hospitals.

    Science.gov (United States)

    Varallo, Fabiana Rossi; Planeta, Cleopatra S; Herdeiro, Maria Teresa; Mastroianni, Patricia de Carvalho

    2017-01-01

    Different algorithms have been developed to standardize the causality assessment of adverse drug reactions (ADR). Although most share common characteristics, the results of the causality assessment vary depending on the algorithm used. Therefore, using 10 different algorithms, the study aimed to compare inter-rater and multi-rater agreement for ADR causality assessment and to identify the algorithm most consistent for hospitals. Using ten causality algorithms, four judges independently assessed the first 44 cases of ADRs reported during the first year of implementation of a risk management service in a medium complexity hospital in the state of Sao Paulo (Brazil). Owing to variations in the terminology used for causality, the equivalent imputation terms were grouped into four categories: definite, probable, possible and unlikely. Inter-rater and multi-rater agreement analysis was performed by calculating Cohen's and Light's kappa coefficients, respectively. None of the algorithms showed 100% reproducibility in the causal imputation. Fair inter-rater and multi-rater agreement was found. The Emanuele (1984) and WHO-UMC (2010) algorithms showed a fair rate of agreement between the judges (κ = 0.36). Although the ADR causality assessment algorithms were poorly reproducible, our data suggest that the WHO-UMC algorithm is the most consistent for imputation in hospitals, since it allows evaluating the quality of the report. However, to improve the ability to assess causality using algorithms, it is necessary to include criteria for the evaluation of drug-related problems, which may be related to confounding variables that underestimate the causal association.
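
    The pairwise part of such an agreement analysis might look like the sketch below, with invented ratings from four hypothetical judges over the four causality categories named above:

    ```python
    # Cohen's kappa for every pair of judges; ratings are fabricated examples.
    from itertools import combinations
    from sklearn.metrics import cohen_kappa_score

    categories = ["definite", "probable", "possible", "unlikely"]
    ratings = {
        "judge1": ["probable", "possible", "definite", "unlikely", "possible", "probable"],
        "judge2": ["probable", "probable", "definite", "possible", "possible", "possible"],
        "judge3": ["possible", "possible", "probable", "unlikely", "probable", "probable"],
        "judge4": ["probable", "possible", "definite", "unlikely", "possible", "definite"],
    }

    for a, b in combinations(ratings, 2):
        k = cohen_kappa_score(ratings[a], ratings[b], labels=categories)
        print(f"{a} vs {b}: kappa = {k:.2f}")
    ```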

  18. Multiple Imputation of a Randomly Censored Covariate Improves Logistic Regression Analysis.

    Science.gov (United States)

    Atem, Folefac D; Qian, Jing; Maye, Jacqueline E; Johnson, Keith A; Betensky, Rebecca A

    2016-01-01

    Randomly censored covariates arise frequently in epidemiologic studies. The most commonly used methods, including complete case and single imputation or substitution, suffer from inefficiency and bias. They make strong parametric assumptions or they consider limit of detection censoring only. We employ multiple imputation, in conjunction with semi-parametric modeling of the censored covariate, to overcome these shortcomings and to facilitate robust estimation. We develop a multiple imputation approach for randomly censored covariates within the framework of a logistic regression model. We use the non-parametric estimate of the covariate distribution or the semiparametric Cox model estimate in the presence of additional covariates in the model. We evaluate this procedure in simulations, and compare its operating characteristics to those from the complete case analysis and a survival regression approach. We apply the procedures to an Alzheimer's study of the association between amyloid positivity and maternal age of onset of dementia. Multiple imputation achieves lower standard errors and higher power than the complete case approach under heavy and moderate censoring and is comparable under light censoring. The survival regression approach achieves the highest power among all procedures, but does not produce interpretable estimates of association. Multiple imputation offers a favorable alternative to complete case analysis and ad hoc substitution methods in the presence of randomly censored covariates within the framework of logistic regression.

  19. Students' Evaluation Strategies in a Web Research Task: Are They Sensitive to Relevance and Reliability?

    Science.gov (United States)

    Rodicio, Héctor García

    2015-01-01

    When searching and using resources on the Web, students have to evaluate Web pages in terms of relevance and reliability. This evaluation can be done in a more or less systematic way, by either considering deep or superficial cues of relevance and reliability. The goal of this study was to examine how systematic students are when evaluating Web…

  1. Reliability and performance evaluation of stainless and mild steel ...

    African Journals Online (AJOL)

    Reliability and performance of stainless and mild steel products in methanolic and aqueous sodium chloride media have been investigated. Weight-loss and pre-exposure methods were used. There was a higher rate of weight-loss of mild steels and stainless steels in 1% HCl methanolic solution than in aqueous NaCl ...

  2. Reliability of FAMACHA© chart for the evaluation of anaemia in ...

    African Journals Online (AJOL)

    The reliability of FAMACHA© chart for identifying anaemic goats was compared with Packed Cell Volume (PCV). The colour of the lower eyelids was graded with FAMACHA© chart based on FAMACHA© scores (FS) of 1-5. The animals were scored from severely anaemic (white or FS 5) through moderately anaemic (pink or ...

  3. Imputation approaches for animal movement modeling

    Science.gov (United States)

    Scharf, Henry; Hooten, Mevin B.; Johnson, Devin S.

    2017-01-01

    The analysis of telemetry data is common in animal ecological studies. While the collection of telemetry data for individual animals has improved dramatically, the methods to properly account for inherent uncertainties (e.g., measurement error, dependence, barriers to movement) have lagged behind. Still, many new statistical approaches have been developed to infer unknown quantities affecting animal movement or predict movement based on telemetry data. Hierarchical statistical models are useful to account for some of the aforementioned uncertainties, as well as provide population-level inference, but they often come with an increased computational burden. For certain types of statistical models, it is straightforward to provide inference if the latent true animal trajectory is known, but challenging otherwise. In these cases, approaches related to multiple imputation have been employed to account for the uncertainty associated with our knowledge of the latent trajectory. Despite the increasing use of imputation approaches for modeling animal movement, the general sensitivity and accuracy of these methods have not been explored in detail. We provide an introduction to animal movement modeling and describe how imputation approaches may be helpful for certain types of models. We also assess the performance of imputation approaches in two simulation studies. Our simulation studies suggest that inference for model parameters directly related to the location of an individual may be more accurate than inference for parameters associated with higher-order processes such as velocity or acceleration. Finally, we apply these methods to analyze a telemetry data set involving northern fur seals (Callorhinus ursinus) in the Bering Sea. Supplementary materials accompanying this paper appear online.

  4. Genotype Imputation with Thousands of Genomes

    Science.gov (United States)

    Howie, Bryan; Marchini, Jonathan; Stephens, Matthew

    2011-01-01

    Genotype imputation is a statistical technique that is often used to increase the power and resolution of genetic association studies. Imputation methods work by using haplotype patterns in a reference panel to predict unobserved genotypes in a study dataset, and a number of approaches have been proposed for choosing subsets of reference haplotypes that will maximize accuracy in a given study population. These panel selection strategies become harder to apply and interpret as sequencing efforts like the 1000 Genomes Project produce larger and more diverse reference sets, which led us to develop an alternative framework. Our approach is built around a new approximation that uses local sequence similarity to choose a custom reference panel for each study haplotype in each region of the genome. This approximation makes it computationally efficient to use all available reference haplotypes, which allows us to bypass the panel selection step and to improve accuracy at low-frequency variants by capturing unexpected allele sharing among populations. Using data from HapMap 3, we show that our framework produces accurate results in a wide range of human populations. We also use data from the Malaria Genetic Epidemiology Network (MalariaGEN) to provide recommendations for imputation-based studies in Africa. We demonstrate that our approximation improves efficiency in large, sequence-based reference panels, and we discuss general computational strategies for modern reference datasets. Genome-wide association studies will soon be able to harness the power of thousands of reference genomes, and our work provides a practical way for investigators to use this rich information. New methodology from this study is implemented in the IMPUTE2 software package. PMID:22384356
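
    The custom-panel idea can be illustrated conceptually (this is not IMPUTE2's actual implementation) by ranking reference haplotypes on local allele sharing within a window:

    ```python
    # Choose a per-haplotype, per-region reference panel by Hamming similarity
    # in a local window; haplotypes are 0/1 arrays and all data is simulated.
    import numpy as np

    def custom_panel(study_hap, ref_haps, window, k):
        """Indices of the k reference haplotypes most similar within `window`."""
        seg = slice(*window)
        matches = (ref_haps[:, seg] == study_hap[seg]).sum(axis=1)
        return np.argsort(matches)[::-1][:k]

    rng = np.random.default_rng(4)
    ref = rng.integers(0, 2, size=(1000, 500))   # 1000 reference haplotypes
    study = rng.integers(0, 2, size=500)         # one study haplotype
    print(custom_panel(study, ref, window=(100, 200), k=50))
    ```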

  5. A technical survey on issues of the quantitative evaluation of software reliability

    Energy Technology Data Exchange (ETDEWEB)

    Park, J. K; Sung, T. Y.; Eom, H. S.; Jeong, H. S.; Park, J. H.; Kang, H. G.; Lee, K. Y.

    2000-04-01

    To develop a methodology for evaluating the reliability of software included in digital instrumentation and control (I and C) systems, many kinds of methodologies/techniques that have been proposed in the software reliability engineering field are analyzed to identify their strong and weak points. According to the analysis results, no methodology/technique exists that can be directly applied for the evaluation of software reliability. Thus additional research to combine the most appropriate of the existing methodologies/techniques would be needed to evaluate the software reliability. (author)

  6. Reliability Evaluation of Concentric Butterfly Valve Using Statistical Hypothesis Test

    Energy Technology Data Exchange (ETDEWEB)

    Chang, Mu Seong; Choi, Jong Sik; Choi, Byung Oh; Kim, Do Sik [Korea Institute of Machinery and Materials, Daejeon (Korea, Republic of)

    2015-12-15

    A butterfly valve is a type of flow-control device typically used to regulate a fluid flow. This paper presents an estimation of the shape parameter of the Weibull distribution, characteristic life, and B10 life for a concentric butterfly valve based on a statistical analysis of the reliability test data taken before and after the valve improvement. The difference in the shape and scale parameters between the existing and improved valves is reviewed using a statistical hypothesis test. The test results indicate that the shape parameter of the improved valve is similar to that of the existing valve, and that the scale parameter of the improved valve is found to have increased. These analysis results are particularly useful for a reliability qualification test and the determination of the service life cycles.

  7. Final report : testing and evaluation for solar hot water reliability.

    Energy Technology Data Exchange (ETDEWEB)

    Caudell, Thomas P. (University of New Mexico, Albuquerque, NM); He, Hongbo (University of New Mexico, Albuquerque, NM); Menicucci, David F. (Building Specialists, Inc., Albuquerque, NM); Mammoli, Andrea A. (University of New Mexico, Albuquerque, NM); Burch, Jay (National Renewable Energy Laboratory, Golden CO)

    2011-07-01

    Solar hot water (SHW) systems are being installed by the thousands. Tax credits and utility rebate programs are spurring this burgeoning market. However, the reliability of these systems is virtually unknown. Recent work by Sandia National Laboratories (SNL) has shown that few data exist to quantify the mean time to failure of these systems. However, there is keen interest in developing new techniques to measure SHW reliability, particularly among utilities that use ratepayer money to pay the rebates. This document reports on an effort to develop and test new, simplified techniques to directly measure the state of health of fielded SHW systems. One approach was developed by the National Renewable Energy Laboratory (NREL) and is based on the idea that the performance of the solar storage tank can reliably indicate the operational status of the SHW systems. Another approach, developed by the University of New Mexico (UNM), uses adaptive resonance theory, a type of neural network, to detect and predict failures. This method uses the same sensors that are normally used to control the SHW system. The NREL method uses two additional temperature sensors on the solar tank. The theories, development, application, and testing of both methods are described in the report. Testing was performed on the SHW Reliability Testbed at UNM, a highly instrumented SHW system developed jointly by SNL and UNM. The two methods were tested against a number of simulated failures. The results show that both methods show promise for inclusion in conventional SHW controllers, giving them advanced capability in detecting and predicting component failures.

  8. Assessing the Reliability of Student Evaluations of Teaching: Choosing the Right Coefficient

    Science.gov (United States)

    Morley, Donald

    2014-01-01

    Many of the studies used to support the claim that student evaluations of teaching are reliable measures of teaching effectiveness have calculated inappropriate reliability coefficients. This paper points to three coefficients that would be appropriate depending on whether student evaluations are used for formative or summative purposes.

  9. A Standardized Rubric for Evaluating Webquest Design: Reliability Analysis of ZUNAL Webquest Design Rubric

    Science.gov (United States)

    Unal, Zafer; Bodur, Yasar; Unal, Aslihan

    2012-01-01

    Current literature provides many examples of rubrics that are used to evaluate the quality of webquest designs. However, the reliability of these rubrics has not yet been researched. This is the first study to fully characterize and assess the reliability of a webquest evaluation rubric. The ZUNAL rubric was created to utilize the strengths of the…

  10. SPSS Macros for Assessing the Reliability and Agreement of Student Evaluations of Teaching

    Science.gov (United States)

    Morley, Donald D.

    2009-01-01

    This article reports and demonstrates two SPSS macros for calculating Krippendorff's alpha and intraclass reliability coefficients in repetitive situations where numerous coefficients are needed. Specifically, the reported SPSS macros were used to evaluate the interrater agreement and reliability of student evaluations of teaching in thousands of…

  11. Construction and Evaluation of Reliability and Validity of Reasoning Ability Test

    Science.gov (United States)

    Bhat, Mehraj A.

    2014-01-01

    This paper is based on the construction and evaluation of the reliability and validity of a reasoning ability test for secondary school students. In this paper an attempt was made to evaluate validity and reliability and to determine the appropriate standards for interpreting the results of the reasoning ability test. The test includes 45 items to measure six types…

  12. Data driven estimation of imputation error - a strategy for imputation with a reject option

    DEFF Research Database (Denmark)

    Bak, Nikolaj; Hansen, Lars Kai

    2016-01-01

    …to be a practical approach to help users employing imputation after the informed choice to impute the missing data has been made. To do this, all patterns of missing values are simulated in all complete cases, enabling calculation of the "true error" in each of these new cases. The error is then estimated for each case. … The effect of the threshold can be estimated using the complete cases. The user can set an a priori relevant threshold for what is acceptable or use cross-validation with the final analysis to choose the threshold. The choice can be presented along with argumentation for the choice rather than holding…
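
    Reading between the ellipses, the strategy can be sketched roughly as below, with scikit-learn's KNNImputer as a stand-in imputation method and an invented data set and threshold:

    ```python
    # Simulate each observed missingness pattern in held-out complete cases,
    # impute, and record the "true error" that a reject threshold acts on.
    import numpy as np
    from sklearn.impute import KNNImputer

    rng = np.random.default_rng(5)
    complete = rng.normal(size=(200, 5))          # complete cases
    patterns = [(1,), (3,), (1, 4)]               # missingness patterns in the data
    imputer = KNNImputer(n_neighbors=5)
    half = len(complete) // 2

    threshold = 0.8                               # a priori acceptable error
    for pattern in patterns:
        cols = list(pattern)
        masked = complete[:half].copy()
        masked[:, cols] = np.nan                  # impose the pattern on each case
        stacked = np.vstack([masked, complete[half:]])   # donors stay complete
        imputed = imputer.fit_transform(stacked)[:half]
        err = np.abs(imputed[:, cols] - complete[:half, cols]).mean(axis=1)
        print(pattern, f"would reject {np.mean(err > threshold):.0%} of cases")
    ```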

  13. Evaluation of nodal reliability risk in a deregulated power system with photovoltaic power penetration

    DEFF Research Database (Denmark)

    Zhao, Qian; Wang, Peng; Goel, Lalit

    2014-01-01

    Owing to the intermittent characteristic of solar radiation, power system reliability may be affected with high photovoltaic (PV) power penetration. To reduce large variation of PV power, additional system balancing reserve would be needed. In deregulated power systems, deployment of reserves … A simulation technique has been proposed to evaluate the reserve deployment and customers' nodal reliability with high PV power penetration. The proposed method can effectively model the chronological aspects and stochastic characteristics of PV power and system operation with high computation efficiency … considered in the proposed method. Nodal reliability indices and reserve deployment have been evaluated by applying the proposed method to the Institute of Electrical and Electronics Engineers reliability test system.

  14. Reliability Evaluation and Improvement Approach of Chemical Production Man-Machine-Environment System

    Science.gov (United States)

    Miao, Yongchun; Kang, Rongxue; Chen, Xuefeng

    2017-12-01

    In recent years, with the gradual extension of reliability research, the study of production system reliability has become a hot topic in various industries. A man-machine-environment system is a complex system composed of human factors, machinery equipment and the environment. The reliability of each individual factor must be analyzed in order to gradually transition to research on three-factor reliability. Meanwhile, the dynamic relationships among man, machine and environment should be considered to establish an effective fuzzy evaluation mechanism to truly and effectively analyze the reliability of such systems. In this paper, based on systems engineering, fuzzy theory, reliability theory, human error, environmental impact and machinery equipment failure theory, the reliabilities of the human factor, machinery equipment and environment of a chemical production system were studied by the method of fuzzy evaluation. At last, the reliability of the man-machine-environment system was calculated to obtain the weighted result, which indicated that the reliability value of this chemical production system was 86.29. Through the given evaluation domain it can be seen that the reliability of the man-machine-environment integrated system is in good status, and effective measures for further improvement were proposed according to the fuzzy calculation results.
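
    A toy fuzzy comprehensive evaluation in the spirit described, with invented weights and membership grades:

    ```python
    # Weighted fuzzy evaluation: weights x membership matrix -> overall score.
    import numpy as np

    weights = np.array([0.4, 0.35, 0.25])       # man, machine, environment
    membership = np.array([                      # grades: poor/fair/good/excellent
        [0.05, 0.15, 0.50, 0.30],
        [0.10, 0.20, 0.45, 0.25],
        [0.05, 0.25, 0.40, 0.30],
    ])

    evaluation = weights @ membership
    scores = np.array([40, 60, 80, 95])          # centroid score of each grade
    print("fuzzy evaluation vector:", evaluation.round(3))
    print("overall reliability score:", round(float(evaluation @ scores), 1))
    ```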

  15. The rise of multiple imputation: a review of the reporting and implementation of the method in medical research.

    Science.gov (United States)

    Hayati Rezvan, Panteha; Lee, Katherine J; Simpson, Julie A

    2015-04-07

    Missing data are common in medical research, which can lead to a loss in statistical power and potentially biased results if not handled appropriately. Multiple imputation (MI) is a statistical method, widely adopted in practice, for dealing with missing data. Many academic journals now emphasise the importance of reporting information regarding missing data, and proposed guidelines for documenting the application of MI have been published. This review evaluated the reporting of missing data, the application of MI including the details provided regarding the imputation model, and the frequency of sensitivity analyses within the MI framework in medical research articles. A systematic review of articles published in the Lancet and New England Journal of Medicine between January 2008 and December 2013 in which MI was implemented was carried out. We identified 103 papers that used MI, with the number of papers increasing from 11 in 2008 to 26 in 2013. Nearly half of the papers specified the proportion of complete cases or the proportion with missing data for each variable. In the majority of the articles (86%) the imputed variables were specified. Of the 38 papers (37%) that stated the method of imputation, 20 used chained equations, 8 used multivariate normal imputation, and 10 used alternative methods. Very few articles (9%) detailed how they handled non-normally distributed variables during imputation. Thirty-nine papers (38%) stated the variables included in the imputation model. Less than half of the papers (46%) reported the number of imputations, and only two papers compared the distribution of imputed and observed data. Sixty-six papers presented the results from MI as a secondary analysis. Only three articles carried out a sensitivity analysis following MI to assess departures from the missing at random assumption, with details of the sensitivity analyses only provided by one article. This review outlined deficiencies in the documenting of missing data and the

  16. Local exome sequences facilitate imputation of less common variants and increase power of genome wide association studies.

    Directory of Open Access Journals (Sweden)

    Peter K Joshi

    Full Text Available The analysis of less common variants in genome-wide association studies promises to elucidate complex trait genetics but is hampered by low power to reliably detect association. We show that addition of population-specific exome sequence data to global reference data allows more accurate imputation, particularly of less common SNPs (minor allele frequency 1-10%), in two very different European populations. The imputation improvement corresponds to an increase in effective sample size of 28-38% for SNPs with a minor allele frequency in the range 1-3%.

  17. Aerosol optical depth as a measure of particulate exposure using imputed censored data, and relationship with childhood asthma hospital admissions for 2004 in Athens, Greece.

    Science.gov (United States)

    Higgs, Gary; Sterling, David A; Aryal, Subhash; Vemulapalli, Abhilash; Priftis, Kostas N; Sifakis, Nicolas I

    2015-01-01

    An understanding of the human health implications of atmospheric exposure is a priority in both the geographic and the public health domains. The unique properties of geographic tools for remote sensing of the atmosphere offer a distinct ability to characterize and model aerosols in the urban atmosphere for evaluation of impacts on health. Asthma, as a manifestation of upper respiratory disease prevalence, is a good example of the potential interface of geographic and public health interests. The current study focused on Athens, Greece during the year of 2004 and (1) demonstrates a systemized process for aligning data obtained from satellite aerosol optical depth (AOD) with geographic location and time, (2) evaluates the ability to apply imputation methods to censored data, and (3) explores whether AOD data can be used satisfactorily to investigate the association between AOD and health impacts using an example of hospital admission for childhood asthma. This work demonstrates the ability to apply remote sensing data in the evaluation of health outcomes, that the alignment process for remote sensing data is readily feasible, and that missing data can be imputed with a sufficient degree of reliability to develop complete datasets. Individual variables demonstrated small but significant effect levels on hospital admission of children for AOD, nitrogen oxides (NOx), relative humidity (rH), temperature, smoke, and inversely for ozone. However, when applying a multivariable model, an association between asthma hospital admissions and air quality could not be demonstrated. This work is promising and will be expanded to include additional years.

  18. Comparison of results from different imputation techniques for missing data from an anti-obesity drug trial

    DEFF Research Database (Denmark)

    Jørgensen, Anders W.; Lundstrøm, Lars H; Wetterslev, Jørn

    2014-01-01

    BACKGROUND: In randomised trials of medical interventions, the most reliable analysis follows the intention-to-treat (ITT) principle. However, the ITT analysis requires that missing outcome data be imputed. Different imputation techniques may give different results and some may lead to bias. … RESULTS: 561 participants were randomised. Compared to placebo, there was a significantly greater weight loss with topiramate in all analyses: 9.5 kg (SE 1.17) in the complete case analysis (N = 86), 6.8 kg (SE 0.66) using LOCF (N = 561), 6.4 kg (SE 0.90) using MI (N = 561) and 1.5 kg (SE 0.28) using BOCF (N = 561). CONCLUSIONS: The different imputation methods gave very different results. Contrary to widely stated claims, LOCF did not produce a conservative (i.e., lower) efficacy estimate compared to MI. Also, LOCF had a lower SE than MI.
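
    A toy example of why these imputation choices diverge, using invented weight trajectories with dropout:

    ```python
    # LOCF vs BOCF on three participants, two of whom drop out early.
    import numpy as np
    import pandas as pd

    visits = ["baseline", "m3", "m6", "m12"]
    weights = pd.DataFrame(
        [[100, 96, 93, 90],              # completer
         [104, 99, np.nan, np.nan],      # dropout after month 3
         [98, np.nan, np.nan, np.nan]],  # dropout after baseline
        columns=visits,
    )

    locf = weights.ffill(axis=1)                      # last value carried forward
    bocf = weights.apply(lambda r: r.fillna(r["baseline"]), axis=1)

    print("LOCF mean weight loss:", (locf["baseline"] - locf["m12"]).mean())
    print("BOCF mean weight loss:", (bocf["baseline"] - bocf["m12"]).mean())
    ```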

  19. Establishing the reliability of natural language processing evaluation through linear regression modelling / E.R. Eiselen.

    OpenAIRE

    Eiselen, Ernst Roald

    2013-01-01

    Determining the quality of natural language applications is one of the most important aspects of technology development. There has, however, been very little work done on establishing how well the methods and measures represent the quality of the technology and how reliable the evaluation results presented in most research are. This study presents a new stepwise evaluation reliability methodology that provides a step-by-step framework for creating predictive models of evaluation metric reliab...

  20. Comparison of Imputation Methods for Handling Missing Categorical Data with Univariate Pattern

    OpenAIRE

    Torres Munguía, Juan Armando

    2014-01-01

    This paper examines sample proportion estimates in the presence of univariate missing categorical data. A database about smoking habits (2011 National Addiction Survey of Mexico) was used to create simulated yet realistic datasets at missingness rates of 5% and 15%, each for the MCAR, MAR and MNAR mechanisms. Then the performance of six methods for addressing missingness is evaluated: listwise, mode imputation, random imputation, hot-deck, imputation by polytomous regression and random fores...
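
    Two of the simpler methods compared are easy to sketch on invented categorical data (the survey data itself is not reproduced here):

    ```python
    # Mode imputation vs random hot-deck for a categorical variable with ~15%
    # of values missing completely at random; all values are fabricated.
    import numpy as np
    import pandas as pd

    rng = np.random.default_rng(6)
    smoker = pd.Series(rng.choice(["never", "former", "current"],
                                  p=[0.6, 0.25, 0.15], size=1000))
    smoker[rng.random(1000) < 0.15] = np.nan

    mode_imp = smoker.fillna(smoker.mode()[0])        # all gaps -> majority class
    hot_deck = smoker.copy()
    donors = smoker.dropna().to_numpy()
    hot_deck[hot_deck.isna()] = rng.choice(donors, size=hot_deck.isna().sum())

    for name, s in [("observed", smoker), ("mode", mode_imp), ("hot-deck", hot_deck)]:
        print(name, s.value_counts(normalize=True).round(3).to_dict())
    ```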

  1. Customer control and evaluation of service validity and reliability

    NARCIS (Netherlands)

    van Raaij, W. Fred; Pruyn, Adriaan T.H.

    1998-01-01

    A control and attribution model of service production and evaluation is proposed. Service production consists of the stages specification (input), realization (throughput), and outcome (output). Customers may exercise control over all three stages of the service. Critical factors of service

  2. A Review on VSC-HVDC Reliability Modeling and Evaluation Techniques

    Science.gov (United States)

    Shen, L.; Tang, Q.; Li, T.; Wang, Y.; Song, F.

    2017-05-01

    With the fast development of power electronics, voltage-source converter (VSC) HVDC technology presents cost-effective ways for bulk power transmission. An increasing number of VSC-HVDC projects have been installed worldwide. Their reliability affects the profitability of the system and therefore has a major impact on potential investors. In this paper, an overview of the recent advances in the area of reliability evaluation for VSC-HVDC systems is provided. Taking into account the latest multi-level converter topology, the VSC-HVDC system is categorized into several sub-systems, and the reliability data for the key components is discussed based on sources with academic and industrial backgrounds. The development of reliability evaluation methodologies is reviewed and the issues surrounding the different computation approaches are briefly analysed. A general VSC-HVDC reliability evaluation procedure is illustrated in this paper.

  3. On combining reference data to improve imputation accuracy.

    Directory of Open Access Journals (Sweden)

    Jun Chen

    Full Text Available Genotype imputation is an important tool in human genetics studies, which uses reference sets with known genotypes and prior knowledge on linkage disequilibrium and recombination rates to infer untyped alleles for human genetic variations at a low cost. The reference sets used by current imputation approaches are based on HapMap data, and/or on recently available next-generation sequencing (NGS) data such as data generated by the 1000 Genomes Project. However, with different coverage and call rates for different NGS data sets, how to integrate NGS data sets of different accuracy as well as previously available reference data as references in imputation is not an easy task and has not been systematically investigated. In this study, we performed a comprehensive assessment of three strategies for using NGS data and previously available reference data in genotype imputation, for both simulated data and empirical data, in order to obtain guidelines for optimal reference set construction. Briefly, we considered three strategies: strategy 1 uses one NGS data set as a reference; strategy 2 imputes samples by using multiple individual data sets of different accuracy as independent references and then, where overlaps occur, keeps the imputed samples based on the higher-accuracy reference; and strategy 3 combines multiple available data sets as a single reference after imputing each other. We used three software packages (MACH, IMPUTE2 and BEAGLE) for assessing the performances of these three strategies. Our results show that strategy 2 and strategy 3 have higher imputation accuracy than strategy 1. Particularly, strategy 2 is the best strategy across all the conditions that we have investigated, producing the best imputation accuracy for rare variants. Our study is helpful in guiding application of imputation methods in next-generation association analyses.

  4. On mining incomplete medical datasets: Ordering imputation and classification.

    Science.gov (United States)

    Chen, Chih-Wen; Lin, Wei-Chao; Ke, Shih-Wen; Tsai, Chih-Fong; Hu, Ya-Han

    2015-01-01

    When collecting medical datasets, it is usually the case that a number of data samples contain some missing values. Performing the data mining task over incomplete datasets is a difficult problem. In general, missing value imputation can be applied, which aims at providing estimations for missing values by reasoning from the observed data. Consequently, the effectiveness of missing value imputation is heavily dependent on the observed data (or complete data) in the incomplete datasets. In this paper, the research objective is to perform instance selection to filter out some noisy data (or outliers) from a given (complete) dataset to see its effect on the final imputation result. Specifically, four different processes of combining instance selection and missing value imputation are proposed and compared in terms of data classification. Experiments are conducted based on 11 medical-related datasets containing categorical, numerical, and mixed attribute types of data. In addition, missing values for each dataset are introduced into all attributes (the missing data rates are 10%, 20%, 30%, 40%, and 50%). For instance selection and missing value imputation, the DROP3 and k-nearest neighbor imputation methods are employed. On the other hand, the support vector machine (SVM) classifier is used to assess the final classification accuracy of the four different processes. The experimental results show that the second process, performing instance selection first and imputation second, allows the SVM classifiers to outperform the other processes. For incomplete medical datasets containing some missing values, it is necessary to perform missing value imputation. In this paper, we demonstrate that instance selection can be used to filter out some noisy data or outliers before the imputation process. In other words, the observed data for missing value imputation may contain some noisy information, which can degrade the quality of the imputation result as well as the
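
    A rough sketch of the favoured process order, with a z-score filter standing in for DROP3 and scikit-learn's KNNImputer for the k-nearest neighbor imputation:

    ```python
    # Instance selection on the complete cases first, then KNN imputation of
    # the incomplete cases; data, outliers and rates are simulated.
    import numpy as np
    from sklearn.impute import KNNImputer

    rng = np.random.default_rng(7)
    data = rng.normal(size=(300, 6))
    data[:10] += 8                                   # inject outliers
    data[rng.random(data.shape) < 0.2] = np.nan      # 20% missing rate

    is_complete = ~np.isnan(data).any(axis=1)
    complete = data[is_complete]
    z = np.abs((complete - complete.mean(axis=0)) / complete.std(axis=0))
    kept = complete[(z < 3).all(axis=1)]             # filter noisy complete cases

    imputed = KNNImputer(n_neighbors=5).fit_transform(
        np.vstack([kept, data[~is_complete]]))
    print("imputed dataset shape:", imputed.shape)
    ```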

  5. Reliability Evaluation for the Surface to Air Missile Weapon Based on Cloud Model

    Directory of Open Access Journals (Sweden)

    Deng Jianjun

    2015-01-01

    Full Text Available Fuzziness and randomness are integrated by using digital characteristics such as expected value, entropy and hyper-entropy. A cloud model adapted to reliability evaluation of the surface-to-air missile weapon is put forward on this basis. The cloud scale of the qualitative evaluation is constructed, and the quantitative and qualitative variables in the system reliability evaluation are placed in correspondence. The practical calculation result shows that it is more effective to analyze the reliability of the surface-to-air missile weapon in this way. The practical calculation result also reflects that the model expressed by cloud theory is more consistent with the human thinking style of uncertainty.

  6. How Reliable Are Students' Evaluations of Teaching Quality? A Variance Components Approach

    Science.gov (United States)

    Feistauer, Daniela; Richter, Tobias

    2017-01-01

    The inter-rater reliability of university students' evaluations of teaching quality was examined with cross-classified multilevel models. Students (N = 480) evaluated lectures and seminars over three years with a standardised evaluation questionnaire, yielding 4224 data points. The total variance of these student evaluations was separated into the…

  7. Reliability ...

    African Journals Online (AJOL)

    In this work, a FORTRAN-based computer program was developed to aid the design of reinforced concrete to Eurocode 2 (EC 2) [1]. ...

  8. Test-Retest Reliability of a Tutor Evaluation Form Used in a Problem-Based Curriculum.

    Science.gov (United States)

    Hay, John A.

    1997-01-01

    A study examined the test-retest reliability of 30 student evaluations of tutors in a problem-based learning curriculum at McMaster University in Hamilton, Ontario. Results were used for the improvement of reliability of the instrument. (JOW)

  9. Towards Reliable Evaluation of Anomaly-Based Intrusion Detection Performance

    Science.gov (United States)

    Viswanathan, Arun

    2012-01-01

    This report describes the results of research into the effects of environment-induced noise on the evaluation process for anomaly detectors in the cyber security domain. This research was conducted during a 10-week summer internship program from the 19th of August, 2012 to the 23rd of August, 2012 at the Jet Propulsion Laboratory in Pasadena, California. The research performed lies within the larger context of the Los Angeles Department of Water and Power (LADWP) Smart Grid cyber security project, a Department of Energy (DoE) funded effort involving the Jet Propulsion Laboratory, California Institute of Technology and the University of Southern California/Information Sciences Institute. The results of the present effort constitute an important contribution towards building more rigorous evaluation paradigms for anomaly-based intrusion detectors in complex cyber physical systems such as the Smart Grid. Anomaly detection is a key strategy for cyber intrusion detection; it operates by identifying deviations from profiles of nominal behavior and is thus conceptually appealing for detecting "novel" attacks. Evaluating the performance of such a detector requires assessing: (a) how well it captures the model of nominal behavior, and (b) how well it detects attacks (deviations from normality). Current evaluation methods produce results that give insufficient insight into the operation of a detector, inevitably resulting in a significantly poor characterization of a detector's performance. In this work, we first describe a preliminary taxonomy of key evaluation constructs that are necessary for establishing rigor in the evaluation regime of an anomaly detector. We then focus on clarifying the impact of the operational environment on the manifestation of attacks in monitored data. We show how dynamic and evolving environments can introduce high variability into the data stream perturbing detector performance. Prior research has focused on understanding the impact of this

  10. A New Tool for Nutrition App Quality Evaluation (AQEL): Development, Validation, and Reliability Testing.

    Science.gov (United States)

    DiFilippo, Kristen Nicole; Huang, Wenhao; Chapman-Novakofski, Karen M

    2017-10-27

    The extensive availability and increasing use of mobile apps for nutrition-based health interventions makes evaluation of the quality of these apps crucial for integration of apps into nutritional counseling. The goal of this research was the development, validation, and reliability testing of the app quality evaluation (AQEL) tool, an instrument for evaluating apps' educational quality and technical functionality. Items for evaluating app quality were adapted from website evaluations, with additional items added to evaluate the specific characteristics of apps, resulting in 79 initial items. Expert panels of nutrition and technology professionals and app users reviewed items for face and content validation. After recommended revisions, nutrition experts completed a second AQEL review to ensure clarity. On the basis of 150 sets of responses using the revised AQEL, principal component analysis was completed, reducing AQEL into 5 factors that underwent reliability testing, including internal consistency, split-half reliability, test-retest reliability, and interrater reliability (IRR). Two additional modifiable constructs for evaluating apps based on the age and needs of the target audience as selected by the evaluator were also tested for construct reliability. IRR testing using intraclass correlations (ICC) with all 7 constructs was conducted, with 15 dietitians evaluating one app. Development and validation resulted in the 51-item AQEL. These were reduced to 25 items in 5 factors after principal component analysis, plus 9 modifiable items in two constructs that were not included in the principal component analysis. Internal consistency and split-half reliability of the following constructs derived from the principal component analysis were good (Cronbach alpha >.80, Spearman-Brown coefficient >.80): behavior change potential, support of knowledge acquisition, app function, and skill development. Split-half reliability for the app purpose construct was .65. Test-retest reliability showed no

  11. CARES/PC - CERAMICS ANALYSIS AND RELIABILITY EVALUATION OF STRUCTURES

    Science.gov (United States)

    Szatmary, S. A.

    1994-01-01

    The beneficial properties of structural ceramics include their high-temperature strength, light weight, hardness, and corrosion and oxidation resistance. For advanced heat engines, ceramics have demonstrated functional abilities at temperatures well beyond the operational limits of metals. This is offset by the fact that ceramic materials tend to be brittle. When a load is applied, their lack of significant plastic deformation causes the material to crack at microscopic flaws, destroying the component. CARES/PC performs statistical analysis of data obtained from the fracture of simple, uniaxial tensile or flexural specimens and estimates the Weibull and Batdorf material parameters from this data. CARES/PC is a subset of the program CARES (COSMIC program number LEW-15168) which calculates the fast-fracture reliability or failure probability of ceramic components utilizing the Batdorf and Weibull models to describe the effects of multi-axial stress states on material strength. CARES additionally requires that the ceramic structure be modeled by a finite element program such as MSC/NASTRAN or ANSYS. The more limited CARES/PC does not perform fast-fracture reliability estimation of components. CARES/PC estimates ceramic material properties from uniaxial tensile or from three- and four-point bend bar data. In general, the parameters are obtained from the fracture stresses of many specimens (30 or more are recommended) whose geometry and loading configurations are held constant. Parameter estimation can be performed for single or multiple failure modes by using the least-squares analysis or the maximum likelihood method. Kolmogorov-Smirnov and Anderson-Darling goodness-of-fit tests measure the accuracy of the hypothesis that the fracture data comes from a population with a distribution specified by the estimated Weibull parameters. Ninety-percent confidence intervals on the Weibull parameters and the unbiased value of the shape parameter for complete samples are provided

  12. Evaluating the reliability of point estimates of wetland reference evaporation

    Directory of Open Access Journals (Sweden)

    H. Gavin

    2003-01-01

    Full Text Available The Penman-Monteith formulation of evaporation has been criticised for its reliance upon point estimates, so that areal estimates of wetland evaporation based upon single weather stations may be misleading. Typically, wetlands comprise a complex mosaic of land cover types, from each of which evaporative rates may differ. The need to account for wetland patches when monitoring hydrological fluxes has been noted. This paper presents work carried out over a wet grassland in Southern England. The significance of fetch on actual evaporation was examined using the approach adopted by Gash (1986), based upon surface roughness, to estimate the fraction of evaporation sensed from a specified distance upwind of the monitoring station. This theoretical analysis (assuming near-neutral conditions) reveals that the fraction of evaporation contributed by the surrounding area increases steadily to a value of 77% at a distance of 224 m and thereafter declines rapidly. Thus, point climate observations may not reflect surface conditions at greater distances. This result was tested through the deployment of four weather stations on the wetland. The resultant data suggested that homogeneous conditions prevailed, so that the central weather station provided reliable areal estimates of reference evaporation during the observation period March-April 1999. This may be a result of not accounting for the high wind speeds and roughness found in wetlands that lead to widespread atmospheric mixing. It should be noted that this analysis was based upon data collected during a period when wind direction was constant (westerly) and the land surface was moist. There could be more variation at other times of the year, which would lead to greater heterogeneity in actual evaporation. Keywords: evaporation, Penman-Monteith, automatic weather station, fetch, wetland

  13. Reliability and validity of the Chinese version Appropriateness Evaluation Protocol

    NARCIS (Netherlands)

    Liu, W. (Wenwei); Yuan, S. (Suwei); Wei, F. (Fengqing); Yang, J. (Jing); Zhang, Z. (Zhe); C. Zhu (Changbin); Ma, J. (Jin)

    2015-01-01

    Objective: To adapt the Appropriateness Evaluation Protocol (AEP) to the specific settings of health care in China and to validate the Chinese version AEP (C-AEP). Methods: Forward and backward translations were carried out to the original criteria. Twenty experts participated in the

  14. Reliability Evaluation of Machine Center Components Based on Cascading Failure Analysis

    Science.gov (United States)

    Zhang, Ying-Zhi; Liu, Jin-Tong; Shen, Gui-Xiang; Long, Zhe; Sun, Shu-Guang

    2017-07-01

    In order to rectify the problems that, in the traditional reliability evaluation of machine center components, the component reliability model exhibits deviation and the evaluation result is low because failure propagation is overlooked, a new reliability evaluation method based on cascading failure analysis and assessment of the failure influenced degree is proposed. A directed graph model of cascading failure among components is established according to cascading failure mechanism analysis and graph theory. The failure influenced degrees of the system components are assessed by the adjacency matrix and its transposition, combined with the PageRank algorithm. Based on the comprehensive failure probability function and the total probability formula, the inherent failure probability function is determined to realize the reliability evaluation of the system components. Finally, the method is applied to a machine center, and it shows the following: 1) The reliability evaluation values of the proposed method are at least 2.5% higher than those of the traditional method; 2) The difference between the comprehensive and inherent reliability of a system component presents a positive correlation with the failure influenced degree of the component, which provides a theoretical basis for reliability allocation of the machine center system.
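
    The failure-influenced-degree step can be approximated with an off-the-shelf PageRank (networkx here) over a toy cascading-failure digraph; the components and edges below are illustrative only:

    ```python
    # PageRank on a digraph whose edge u -> v means "a failure of u can
    # propagate to v"; higher scores mark components more exposed to cascades.
    import networkx as nx

    edges = [("hydraulics", "spindle"), ("hydraulics", "turret"),
             ("electrics", "hydraulics"), ("electrics", "cnc_unit"),
             ("cnc_unit", "spindle"), ("spindle", "tool_magazine")]
    g = nx.DiGraph(edges)

    influence = nx.pagerank(g, alpha=0.85)  # reversing the graph ranks sources instead
    for comp, score in sorted(influence.items(), key=lambda kv: -kv[1]):
        print(f"{comp:13s} {score:.3f}")
    ```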

  15. Reliability of ultrasound evaluation of the long head of the biceps tendon.

    Science.gov (United States)

    Drolet, Pascale; Martineau, Anne; Lacroix, Rémi; Roy, Jean-Sébastien

    2016-06-13

    To determine the reliability of quantitative measures of the long head of the biceps tendon using an ultrasound-imaging system. Intra- and inter-rater reliability study. Thirty-one participants without shoulder pain. All participants took part in 3 ultrasound imaging sessions; they were assessed by 2 evaluators (inter-rater reliability), one of whom assessed them twice (intra-rater reliability). All measurements were taken at the widest identified part of the tendon using longitudinal and transverse views. Measurements of the long head of the biceps tendon included width, thickness and cross-sectional area. Intraclass correlation coefficients and minimal detectable change were used to characterize reliability. Intra- and inter-rater reliabilities were excellent for all measures when the mean of 2 measures was considered, except for inter-rater reliability of the width, for which it ranged from 0.76 to 0.86. Minimal detectable change ranged from 0.3 to 1.6 mm for width and thickness, and from 2.8 to 4.9 mm² for cross-sectional area. Ultrasound measurement of the long head of the biceps tendon is a highly reliable method, except for the width. When measuring the long head of the biceps tendon, a mean of 2 measurements is recommended. Now that reliability has been shown in healthy individuals, the next step will be to determine the validity/reliability of these quantitative measures in symptomatic shoulders.

  16. Traffic Speed Data Imputation Method Based on Tensor Completion

    Directory of Open Access Journals (Sweden)

    Bin Ran

    2015-01-01

    Full Text Available Traffic speed data plays a key role in Intelligent Transportation Systems (ITS); however, missing traffic data would affect the performance of ITS as well as Advanced Traveler Information Systems (ATIS). In this paper, we handle this issue by a novel tensor-based imputation approach. Specifically, the tensor pattern is adopted for modeling traffic speed data, and then High accurate Low Rank Tensor Completion (HaLRTC), an efficient tensor completion method, is employed to estimate the missing traffic speed data. This proposed method is able to recover missing entries from given entries, which may be noisy, considering the severe fluctuation of traffic speed data compared with traffic volume. The proposed method is evaluated on the Performance Measurement System (PeMS) database, and the experimental results show the superiority of the proposed approach over state-of-the-art baseline approaches.
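
    HaLRTC itself minimizes a weighted sum of nuclear norms of the tensor's mode unfoldings via ADMM. As a simplified matrix analogue of the same low-rank completion idea, the sketch below iteratively soft-thresholds the singular values of a day × time-of-day speed matrix; the synthetic data and parameter values are ours, not the paper's.

    ```python
    import numpy as np

    def svt_complete(X, observed, tau=5.0, n_iter=200):
        """Fill missing entries of X by iterative singular-value soft-thresholding."""
        Y = np.where(observed, X, 0.0)
        for _ in range(n_iter):
            U, s, Vt = np.linalg.svd(Y, full_matrices=False)
            Z = (U * np.maximum(s - tau, 0.0)) @ Vt   # low-rank shrinkage
            Y = np.where(observed, X, Z)              # keep known speeds fixed
        return Y

    rng = np.random.default_rng(0)
    profile = 60 + 10 * np.sin(np.linspace(0, 6, 288))        # daily speed pattern
    X = np.tile(profile, (30, 1)) + rng.normal(0, 2, (30, 288))
    observed = rng.random(X.shape) > 0.3                      # ~30% missing
    X_hat = svt_complete(X, observed)
    ```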

  17. Missing value imputation in DNA microarrays based on conjugate gradient method.

    Science.gov (United States)

    Dorri, Fatemeh; Azmi, Paeiz; Dorri, Faezeh

    2012-02-01

    Analysis of gene expression profiles needs a complete matrix of gene array values; consequently, imputation methods have been suggested. In this paper, an algorithm based on the conjugate gradient (CG) method is proposed to estimate missing values. The k-nearest neighbors of the missing entry are first selected based on the absolute values of their Pearson correlation coefficients. Then a subset of genes among the k-nearest neighbors is labeled as the best similar ones. The CG algorithm with this subset as its input is then used to estimate the missing values. Our proposed CG-based algorithm (CGimpute) is evaluated on different data sets. The results are compared with the sequential local least squares (SLLSimpute), Bayesian principal component analysis (BPCAimpute), local least squares imputation (LLSimpute), iterated local least squares imputation (ILLSimpute) and adaptive k-nearest neighbors imputation (KNNKimpute) methods. The average normalized root mean square error (NRMSE) and relative NRMSE on different data sets with various missing rates show that CGimpute outperforms the other methods. Copyright © 2011 Elsevier Ltd. All rights reserved.
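
    A rough sketch of the two-stage idea: neighbour selection by absolute Pearson correlation, followed by a least-squares fit solved with SciPy's conjugate-gradient routine on the normal equations. The ridge term and all names are our simplifications, not CGimpute's code.

    ```python
    import numpy as np
    from scipy.sparse.linalg import cg

    def impute_entry(expr, gene, col, k=10):
        """Estimate expr[gene, col] from the k most correlated other genes."""
        mask = np.ones(expr.shape[1], dtype=bool)
        mask[col] = False                       # columns where the target is observed
        target = expr[gene, mask]
        others = np.delete(np.arange(expr.shape[0]), gene)
        corr = np.array([abs(np.corrcoef(target, expr[g, mask])[0, 1]) for g in others])
        nn = others[np.argsort(corr)[-k:]]      # k nearest neighbours
        A = expr[nn][:, mask].T                 # observed columns of the neighbours
        AtA = A.T @ A + 1e-8 * np.eye(k)        # small ridge keeps the system SPD
        w, _ = cg(AtA, A.T @ target)            # CG solve of the normal equations
        return expr[nn, col] @ w

    rng = np.random.default_rng(1)
    expr = rng.normal(size=(50, 20))
    print(impute_entry(expr, gene=0, col=3))
    ```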

  18. Enlargement of Traffic Information Coverage Area Using Selective Imputation of Floating Car Data

    Science.gov (United States)

    Kumagai, Masatoshi; Hiruta, Tomoaki; Fushiki, Takumi; Yokota, Takayoshi

    This paper discusses a real-time imputation method for sparse floating car data (FCD). Floating cars are an effective way to collect traffic information; however, because of the limited number of floating cars, there is a large amount of missing data in FCD. In an effort to address this problem, we previously proposed a new imputation method based on feature space projection. The method consists of three major processes: (i) determination of a feature space from past FCD history; (ii) feature space projection of current FCD; and (iii) estimation of missing data performed by inverse projection from the feature space. Since estimation is achieved on each feature space axis that represents a spatially correlated component of FCD, it performs an accurate imputation and enlarges the information coverage area. However, correlation differences among multiple road-links sometimes cause a trade-off problem between accuracy and coverage. Therefore, we developed an additional function in order to filter out the road-links that have low correlation with the others. The function uses spectral factorization as its filtering index, which is suitable for evaluating correlation on the multidimensional feature space. Combined use of the imputation method and the filtering function decreases the maximum estimation error-rate from 0.39 to 0.24, keeping 60% coverage area against sparse FCD with 15% observations.
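
    The three processes map naturally onto a PCA-style computation. The sketch below is one plausible reading under simplifying assumptions of ours: the feature space is taken to be the leading principal components of historical FCD, and the projection of a partially observed snapshot is obtained by least squares on the observed road-links.

    ```python
    import numpy as np

    def impute_fcd(history, current, observed, n_axes=5):
        """Estimate missing link speeds by projection onto historical feature axes."""
        mean = history.mean(axis=0)
        # (i) feature space from past FCD history
        U, s, Vt = np.linalg.svd(history - mean, full_matrices=False)
        B = Vt[:n_axes].T                      # feature axes: links x axes
        # (ii) project current FCD using only the observed road-links
        coeff, *_ = np.linalg.lstsq(B[observed], current[observed] - mean[observed],
                                    rcond=None)
        # (iii) inverse projection fills the unobserved links
        estimate = mean + B @ coeff
        return np.where(observed, current, estimate)

    rng = np.random.default_rng(2)
    history = rng.normal(40, 8, size=(200, 60))   # past FCD: 200 periods x 60 links
    current = rng.normal(40, 8, size=60)
    observed = rng.random(60) > 0.85              # sparse: ~15% of links observed
    speeds = impute_fcd(history, current, observed)
    ```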

  19. Once is not enough: Establishing reliability criteria for teacher evaluation based on classroom observations

    NARCIS (Netherlands)

    van der Lans, Rikkert; van de Grift, Wim; van Veen, Klaas

    2016-01-01

    Classroom observation is the most widely implemented method to evaluate teaching. To ensure reliability, researchers often train observers extensively. However, schools have limited resources to train observers, and lesson observation is often performed by minimally trained or untrained colleagues. In this

  20. BUILDING MODEL ANALYSIS APPLICATIONS WITH THE JOINT UNIVERSAL PARAMETER IDENTIFICATION AND EVALUATION OF RELIABILITY (JUPITER) API

    Science.gov (United States)

    The open-source, public domain JUPITER (Joint Universal Parameter IdenTification and Evaluation of Reliability) API (Application Programming Interface) provides conventions and Fortran-90 modules to develop applications (computer programs) for analyzing process models. The input ...

  1. How to Improve Postgenomic Knowledge Discovery Using Imputation

    Directory of Open Access Journals (Sweden)

    Coppel Ross

    2009-01-01

    Full Text Available While microarrays make it feasible to rapidly investigate many complex biological problems, their multistep fabrication has the proclivity for error at every stage. The standard tactic has been to either ignore or regard erroneous gene readings as missing values, though this assumption can exert a major influence upon postgenomic knowledge discovery methods like gene selection and gene regulatory network (GRN) reconstruction. This has been the catalyst for a raft of new flexible imputation algorithms including local least square impute and the recent heuristic collateral missing value imputation, which exploit the biological transactional behaviour of functionally correlated genes to afford accurate missing value estimation. This paper examines the influence of missing value imputation techniques upon postgenomic knowledge inference methods with results for various algorithms consistently corroborating that instead of ignoring missing values, recycling microarray data by flexible and robust imputation can provide substantial performance benefits for subsequent downstream procedures.

  3. A Review On Missing Value Estimation Using Imputation Algorithm

    Science.gov (United States)

    Armina, Roslan; Zain, Azlan Mohd; Azizah Ali, Nor; Sallehuddin, Roselina

    2017-09-01

    The presence of missing values in a data set has always been a major problem for precise prediction. Methods for imputing missing values need to minimize the effect of incomplete data sets on the prediction model. Many algorithms have been proposed as countermeasures to the missing value problem. In this review, we provide a comprehensive analysis of existing imputation algorithms, focusing on the techniques used and on whether global or local information in the data set is exploited for missing value estimation. In addition, validation methods for imputation results and ways to measure the performance of imputation algorithms are described. The objective of this review is to highlight possible improvements to existing methods, and it is hoped that it gives the reader a better understanding of trends in imputation methods.
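
    The NRMSE mentioned as a validation measure elsewhere in these records has several variants; one common convention in the imputation literature normalizes the RMSE of the imputed entries by the standard deviation of the true values. A minimal sketch under that assumption:

    ```python
    import numpy as np

    def nrmse(y_true: np.ndarray, y_imputed: np.ndarray) -> float:
        """Normalized RMSE over the entries that were imputed."""
        rmse = np.sqrt(np.mean((y_true - y_imputed) ** 2))
        return rmse / np.std(y_true)   # 0 = perfect; ~1 = no better than the mean
    ```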

  4. The new features of the ExaMe evaluation system and reliability of its fixed tests.

    Science.gov (United States)

    Martinková, P; Zvára, K; Zvárová, J; Zvára, K

    2006-01-01

    The ExaMe system for the evaluation of targeted knowledge has been in development since 1998. The new features of the ExaMe system are introduced in this paper; in particular, the new three-layer architecture is described. Besides the system itself, the properties of fixed tests in the ExaMe system are studied, and the reliability of the fixed tests is discussed in special detail. The theoretical background is explained and some limitations of reliability estimation are pointed out. Three characteristics used for estimating the reliability of educational tests are discussed: Cronbach's alpha, standardized item alpha and the split-half coefficient. The relations between these characteristics and reliability, and among the characteristics themselves, are investigated. The properties of Cronbach's alpha, the characteristic most commonly used for estimating reliability, are discussed in more detail, and a confidence interval is introduced for it. Since 2000, the serviceability of the ExaMe evaluation system as a supporting evaluation tool has been repeatedly shown in the Ph.D. courses in biomedical informatics at Charles University in Prague. The ExaMe system also opens new possibilities for self-evaluation and distance learning, especially when connected with electronic books on the Internet. The estimation of the reliability of tests has some limitations; keeping them in mind, we can still obtain information about the quality of certain educational tests. Therefore, the estimation of the reliability of the fixed tests is implemented in the ExaMe system.
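
    The reliability characteristics named here can be computed directly from an examinees × items score matrix. A sketch of Cronbach's alpha and an odd-even split-half coefficient with the Spearman-Brown correction, using the textbook formulas rather than the ExaMe system's code:

    ```python
    import numpy as np

    def cronbach_alpha(scores: np.ndarray) -> float:
        """scores: examinees x items matrix of item scores."""
        k = scores.shape[1]
        item_vars = scores.var(axis=0, ddof=1).sum()
        total_var = scores.sum(axis=1).var(ddof=1)
        return k / (k - 1) * (1 - item_vars / total_var)

    def split_half(scores: np.ndarray) -> float:
        """Odd-even split with the Spearman-Brown correction."""
        half1 = scores[:, ::2].sum(axis=1)
        half2 = scores[:, 1::2].sum(axis=1)
        r = np.corrcoef(half1, half2)[0, 1]
        return 2 * r / (1 + r)
    ```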

  5. System Reliability Evaluation Based on Convex Combination Considering Operation and Maintenance Strategy

    Directory of Open Access Journals (Sweden)

    Lijie Li

    2015-01-01

    Full Text Available The approaches to system reliability evaluation for the cases where the components are independent or have interactive relationships within the system are proposed in this paper. Starting from the higher requirements on system operational safety and economy, reliability-focused optimal models of multiobjective maintenance strategies are built. For safety-critical systems, pessimistic maintenance strategies are usually taken, and, in these cases, the system reliability evaluation also has to be tackled pessimistically. For safety-uncritical systems, optimistic maintenance strategies are usually taken, and, in these circumstances, the system reliability evaluation also has to be tackled optimistically. Besides, reasonable maintenance strategies and their corresponding reliability evaluations can be obtained through the convex combination of the above two cases. With a high-speed train system as the example background, the proposed method is verified by combining actual failure data with maintenance data. Results demonstrate that the proposed study can provide a new system reliability calculation method and a solution for selecting and optimizing multiobjective operational strategies with consideration of system safety and economy requirements. A theoretical basis is also provided for scientifically estimating the reliability of a high-speed train system and formulating reasonable maintenance strategies.
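
    The convex combination itself is a one-line computation: with λ ∈ [0, 1] weighting the pessimistic evaluation, a hedged sketch (the reliability values are invented) reads:

    ```python
    def combined_reliability(r_pessimistic: float, r_optimistic: float, lam: float) -> float:
        """Convex combination of the two limiting reliability evaluations."""
        assert 0.0 <= lam <= 1.0
        return lam * r_pessimistic + (1.0 - lam) * r_optimistic

    # lam -> 1 for safety-critical systems, lam -> 0 for safety-uncritical ones.
    print(combined_reliability(0.90, 0.98, lam=0.7))   # 0.924
    ```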

  6. Developing a Comprehensive Teaching Evaluation System for Foundation Courses with Enhanced Validity and Reliability

    Science.gov (United States)

    Xu, Yueyu

    2012-01-01

    This study aims at developing a comprehensive teaching evaluation system, a more useful educational technology, for achieving relatively reliable and valid results that can be well acknowledged by instructors, students, and by administrators. Adopting multi-method approaches, the study integrates student evaluation, expert evaluation and regular…

  7. Handling missing rows in multi-omics data integration: multiple imputation in multiple factor analysis framework.

    Science.gov (United States)

    Voillet, Valentin; Besse, Philippe; Liaubet, Laurence; San Cristobal, Magali; González, Ignacio

    2016-10-03

    configuration even when many individuals were missing in several data tables. This method takes into account the uncertainty of MI-MFA configurations induced by the missing rows, thereby allowing the reliability of the results to be evaluated.

  8. Reliable and valid tools for measuring surgeons' teaching performance: residents' vs. self evaluation.

    Science.gov (United States)

    Boerebach, Benjamin C M; Arah, Onyebuchi A; Busch, Olivier R C; Lombarts, Kiki M J M H

    2012-01-01

    In surgical education, there is a need for educational performance evaluation tools that yield reliable and valid data. This paper describes the development and validation of robust evaluation tools that provide surgeons with insight into their clinical teaching performance. We investigated (1) the reliability and validity of 2 tools for evaluating the teaching performance of attending surgeons in residency training programs, and (2) whether surgeons' self evaluation correlated with the residents' evaluation of those surgeons. We surveyed 343 surgeons and 320 residents as part of a multicenter prospective cohort study of faculty teaching performance in residency training programs. The reliability and validity of the SETQ (System for Evaluation Teaching Qualities) tools were studied using standard psychometric techniques. We then estimated the correlations between residents' and surgeons' evaluations. The response rate was 87% among surgeons and 84% among residents, yielding 2625 residents' evaluations and 302 self evaluations. The SETQ tools yielded reliable and valid data on 5 domains of surgical teaching performance, namely, learning climate, professional attitude towards residents, communication of goals, evaluation of residents, and feedback. The correlations between surgeons' self and residents' evaluations were low, with coefficients ranging from 0.03 for evaluation of residents to 0.18 for communication of goals. The SETQ tools for the evaluation of surgeons' teaching performance appear to yield reliable and valid data. The lack of strong correlations between surgeons' self and residents' evaluations suggest the need for using external feedback sources in informed self evaluation of surgeons. Copyright © 2012 Association of Program Directors in Surgery. Published by Elsevier Inc. All rights reserved.

  9. Bulk electric system reliability evaluation incorporating wind power and demand side management

    Science.gov (United States)

    Huang, Dange

    Electric power systems are experiencing dramatic changes with respect to structure, operation and regulation and are facing increasing pressure due to environmental and societal constraints. Bulk electric system reliability is an important consideration in power system planning, design and operation, particularly in the new competitive environment. A wide range of methods have been developed to perform bulk electric system reliability evaluation. Theoretically, sequential Monte Carlo simulation can include all aspects and contingencies in a power system and can be used to produce an informative set of reliability indices. With the growth of computing power, it has become a practical and viable tool for large-system reliability assessment and is used in the studies described in this thesis. The well-being approach used in this research provides the opportunity to integrate an accepted deterministic criterion into a probabilistic framework. This research work includes the investigation of important factors that impact bulk electric system adequacy evaluation and security-constrained adequacy assessment using the well-being analysis framework. Load forecast uncertainty is an important consideration in an electrical power system. This research includes load forecast uncertainty considerations in bulk electric system reliability assessment, and the effects on system, load point and well-being indices and on reliability index probability distributions are examined. There has been increasing worldwide interest in the utilization of wind power as a renewable energy source over the last two decades due to enhanced public awareness of the environment. Increasing penetration of wind power has significant impacts on power system reliability, and security analyses become more uncertain due to the unpredictable nature of wind power. The effects of wind power additions in generating and bulk electric system reliability assessment considering site wind speed

  10. Research on Control Method Based on Real-Time Operational Reliability Evaluation for Space Manipulator

    Directory of Open Access Journals (Sweden)

    Yifan Wang

    2014-05-01

    Full Text Available A control method based on real-time operational reliability evaluation for a space manipulator is presented for improving the success rate of the manipulator during the execution of a task. In this paper, a method for quantitative analysis of operational reliability is given for when the manipulator is executing a specified task; then a control model which can regulate the quantitative operational reliability is built. First, the control process is described by using a state space equation. Second, process parameters are estimated in real time using the Bayesian method. Third, the expression of the system's real-time operational reliability is deduced based on the state space equation and the process parameters estimated using the Bayesian method. Finally, a control variable regulation strategy which considers the cost of control is given based on the theory of statistical process control. It is shown via simulations that this method effectively improves the operational reliability of the space manipulator control system.
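
    As a toy stand-in for the real-time Bayesian estimation step (the paper estimates state-space process parameters; here we only track a success probability with a conjugate Beta prior, purely to illustrate recursive updating on streaming outcomes):

    ```python
    # Hypothetical illustration: recursive Beta-Bernoulli update of the
    # probability that a commanded manipulator motion succeeds.
    alpha, beta = 1.0, 1.0                # uninformative prior
    for success in [1, 1, 0, 1, 1, 1]:    # streaming task outcomes
        alpha += success
        beta += 1 - success
        reliability = alpha / (alpha + beta)   # posterior mean, updated in real time
        print(f"operational reliability estimate: {reliability:.3f}")
    ```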

  11. Evaluation of Soft Tissue Landmark Reliability between Manual and Computerized Plotting Methods.

    Science.gov (United States)

    Kasinathan, Geetha; Kommi, Pradeep B; Kumar, Senthil M; Yashwant, Aniruddh; Arani, Nandakumar; Sabapathy, Senkutvan

    2017-04-01

    The aim of the study is to evaluate the reliability of soft tissue landmark identification between manual and digital plottings in both X and Y axes. A total of 50 pretreatment lateral cephalograms were selected from patients who reported for orthodontic treatment. The digital images of each cephalogram were imported directly into Dolphin software for onscreen digitalization, while for manual tracing, images were printed using a compatible X-ray printer. After the images were standardized, 10 commonly used soft tissue landmarks were plotted on each cephalogram by six different professional observers, and the values obtained were plotted in the X and Y axes. The intraclass correlation coefficient was used to determine the intrarater reliability for repeated landmark plotting obtained by both methods. The evaluation of the reliability of soft tissue landmark plottings in both manual and digital methods, after being subjected to interclass correlation, showed good reliability nearing complete homogeneity in both X and Y axes, except for the Y axis of the throat point in manual plotting, which showed moderate reliability as a cephalometric variable. Intraclass correlation of soft tissue nasion had moderate reliability along the X axis. Soft tissue pogonion showed moderate reliability in the Y axis. The throat point exhibited moderate reliability in the X axis. The interclass correlation in the X and Y axes shows high reliability in both hard tissue and soft tissue, except for the throat point in the Y axis when plotted manually. The intraclass correlation is more consistent and highly reliable for soft tissue landmarks, and hard tissue landmark identification is also consistent. The results obtained for the manual and digital methods were almost similar, but digital landmark plotting has added advantages in archiving, retrieval and transmission, and can be enhanced during plotting of lateral cephalograms. Hence, the digital method of landmark plotting could be preferred for both daily use and

  12. Imputing Gene Expression in Uncollected Tissues Within and Beyond GTEx

    Science.gov (United States)

    Wang, Jiebiao; Gamazon, Eric R.; Pierce, Brandon L.; Stranger, Barbara E.; Im, Hae Kyung; Gibbons, Robert D.; Cox, Nancy J.; Nicolae, Dan L.; Chen, Lin S.

    2016-01-01

    Gene expression and its regulation can vary substantially across tissue types. In order to generate knowledge about gene expression in human tissues, the Genotype-Tissue Expression (GTEx) program has collected transcriptome data in a wide variety of tissue types from post-mortem donors. However, many tissue types are difficult to access and are not collected in every GTEx individual. Furthermore, in non-GTEx studies, the accessibility of certain tissue types greatly limits the feasibility and scale of studies of multi-tissue expression. In this work, we developed multi-tissue imputation methods to impute gene expression in uncollected or inaccessible tissues. Via simulation studies, we showed that the proposed methods outperform existing imputation methods in multi-tissue expression imputation and that incorporating imputed expression data can improve power to detect phenotype-expression correlations. By analyzing data from nine selected tissue types in the GTEx pilot project, we demonstrated that harnessing expression quantitative trait loci (eQTLs) and tissue-tissue expression-level correlations can aid imputation of transcriptome data from uncollected GTEx tissues. More importantly, we showed that by using GTEx data as a reference, one can impute expression levels in inaccessible tissues in non-GTEx expression studies. PMID:27040689

  13. Meta-analytic guidelines for evaluating single-item reliabilities of personality instruments.

    Science.gov (United States)

    Spörrle, Matthias; Bekk, Magdalena

    2014-06-01

    Personality is an important predictor of various outcomes in many social science disciplines. However, when personality traits are not the principal focus of research, for example, in global comparative surveys, it is often not possible to assess them extensively. In this article, we first provide an overview of the advantages and challenges of single-item measures of personality, a rationale for their construction, and a summary of alternative ways of assessing their reliability. Second, using seven diverse samples (Ntotal = 4,263) we develop the SIMP-G, the German adaptation of the Single-Item Measures of Personality, an instrument assessing the Big Five with one item per trait, and evaluate its validity and reliability. Third, we integrate previous research and our data into a first meta-analysis of single-item reliabilities of personality measures, and provide researchers with guidelines and recommendations for the evaluation of single-item reliabilities. © The Author(s) 2013.

  14. Photoneutron reaction cross sections from various experiments - analysis and evaluation using physical criteria of data reliability

    Science.gov (United States)

    Varlamov, Vladimir; Ishkhanov, Boris; Orlin, Vadim; Peskov, Nikolai; Stepanov, Mikhail

    2017-09-01

    The majority of photonuclear reaction cross sections important for many fields of science and technology, and the various data files (EXFOR, RIPL, ENDF, etc.) supported by the IAEA, were obtained in experiments using quasimonoenergetic annihilation photons. There are well-known systematic discrepancies between the partial photoneutron reactions (γ, 1n), (γ, 2n) and (γ, 3n). For analyzing data reliability, objective physical criteria were proposed. It was found that the experimental data for many nuclei are not reliable because of large systematic uncertainties of the neutron multiplicity sorting method used. An experimental-theoretical method was proposed for evaluating reaction cross-section data that satisfy the reliability criteria. The partial and total reaction cross sections were evaluated for many nuclei. In many cases the evaluated data differ noticeably from both the experimental data and the data evaluated before for the IAEA Photonuclear Data Library. Therefore it became evident that the IAEA Library needs to be revised and updated.

  15. Reliability Evaluation for the Running State of the Manufacturing System Based on Poor Information

    Directory of Open Access Journals (Sweden)

    Xintao Xia

    2016-01-01

    Full Text Available The output performance of a manufacturing system has a direct impact on mechanical product quality. To guarantee product quality and production cost, many firms research the crucial issues of manufacturing system reliability with small sample data, to evaluate whether the manufacturing system is capable or not. Existing reliability methods depend on a known probability distribution or vast test data. However, the population performance of complex systems becomes uncertain as processing time elapses; namely, their probability distributions are unknown, and the existing methods are then ineffective. This paper proposes a novel evaluation method based on poor information to settle the problem of evaluating the reliability of the running state of a manufacturing system under the condition of small sample sizes with a known or unknown probability distribution. Via the grey bootstrap method, the maximum entropy principle and the Poisson process, the experimental investigation on reliability evaluation for the running state of the manufacturing system shows that, under the best confidence level P = 0.95, if the reliability degree of achieving running quality is r > 0.65, the intersection area between the inspection data and the intrinsic data is A(T) > 0.3 and the variation probability of the inspection data is PB(T) ≤ 0.7, then the running state of the manufacturing system is reliable; otherwise, it is not reliable. A sensitivity analysis regarding sample size shows that the size of the samples has no effect on the evaluation results obtained by the method. The evaluation method proposed provides scientific decisions and suggestions for judging the running state of the manufacturing system reasonably, which is efficient, profitable, and organized.

  17. Freedom of the Will and Legal Imputability in Schopenhauer

    Directory of Open Access Journals (Sweden)

    Renato César Cardoso

    2015-12-01

    Full Text Available The present article aims to analyze Arthur Schopenhauer's criticism of the postulation that freedom of the will is the condition of possibility of legal imputability. According to the philosopher, an intellectually determinable will, not an unconditioned will, is what truly enables state imputability. In conclusion, we argue that it is with the agent's potential for change, and not with culpability, that society and the state should be concerned. This means that, according to Schopenhauer, an alternative, deterministic conception like his, contrary to what is often said, does not compromise but rather strengthens imputability, which is why there is nothing to fear.

  18. Recovery of information from multiple imputation: a simulation study

    Directory of Open Access Journals (Sweden)

    Lee Katherine J

    2012-06-01

    Full Text Available Abstract Background Multiple imputation is becoming increasingly popular for handling missing data. However, it is often implemented without adequate consideration of whether it offers any advantage over complete case analysis for the research question of interest, or whether potential gains may be offset by bias from a poorly fitting imputation model, particularly as the amount of missing data increases. Methods Simulated datasets (n = 1000) drawn from a synthetic population were used to explore information recovery from multiple imputation in estimating the coefficient of a binary exposure variable when various proportions of data (10-90%) were set missing at random in a highly-skewed continuous covariate or in the binary exposure. Imputation was performed using multivariate normal imputation (MVNI), with a simple or zero-skewness log transformation to manage non-normality. Bias, precision, mean-squared error and coverage for a set of regression parameter estimates were compared between multiple imputation and complete case analyses. Results For missingness in the continuous covariate, multiple imputation produced less bias and greater precision for the effect of the binary exposure variable, compared with complete case analysis, with larger gains in precision with more missing data. However, even with only moderate missingness, large bias and substantial under-coverage were apparent in estimating the continuous covariate's effect when skewness was not adequately addressed. For missingness in the binary covariate, all estimates had negligible bias but gains in precision from multiple imputation were minimal, particularly for the coefficient of the binary exposure. Conclusions Although multiple imputation can be useful if covariates required for confounding adjustment are missing, benefits are likely to be minimal when data are missing in the exposure variable of interest. Furthermore, when there are large amounts of missingness, multiple

  19. Multiple Imputation by Chained Equations (MICE): Implementation in Stata

    Directory of Open Access Journals (Sweden)

    Patrick Royston

    2011-12-01

    Full Text Available Missing data are a common occurrence in real datasets. For epidemiological and prognostic factors studies in medicine, multiple imputation is becoming the standard route to estimating models with missing covariate data under a missing-at-random assumption. We describe ice, an implementation in Stata of the MICE approach to multiple imputation. Real data from an observational study in ovarian cancer are used to illustrate the most important of the many options available with ice. We remark briefly on the new database architecture and procedures for multiple imputation introduced in releases 11 and 12 of Stata.
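
    Stata's ice command is described above; for readers working in Python, a broadly comparable chained-equations workflow (scikit-learn's IterativeImputer, shown only as an analogue rather than a re-implementation of ice) might look like this:

    ```python
    import numpy as np
    from sklearn.experimental import enable_iterative_imputer  # noqa: F401
    from sklearn.impute import IterativeImputer

    rng = np.random.default_rng(3)
    X = rng.normal(size=(500, 4))
    X[rng.random(X.shape) < 0.2] = np.nan        # 20% missing at random

    # Draw m = 5 completed datasets, as in multiple imputation.
    imputed = [IterativeImputer(sample_posterior=True, random_state=m).fit_transform(X)
               for m in range(5)]
    # Analyses are then run on each completed dataset and pooled with Rubin's rules.
    ```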

  20. Cost-effective and accurate method of measuring fetal fraction using SNP imputation.

    Science.gov (United States)

    Kim, Minjeong; Kim, Jai-Hoon; Kim, Kangseok; Kim, Sunshin

    2017-11-08

    With the discovery of cell-free fetal DNA in maternal blood, the demand for non-invasive prenatal testing (NIPT) has been increasing. To obtain reliable NIPT results, it is important to accurately estimate the fetal fraction. In this study, we propose an accurate and cost-effective method for measuring fetal fractions using single-nucleotide polymorphisms (SNPs). A total of 84 samples were sequenced via semiconductor sequencing at 0.3× sequencing coverage. SNPs were genotyped to estimate the fetal fraction. Approximately 900,000 SNPs were genotyped, and 250,000 of these SNPs matched the semiconductor sequencing results. We performed SNP imputation (1000Genome phase3 and HRC v1.1 reference panels) to increase the number of SNPs. The correlation coefficients (R2) of the fetal fraction estimated using the ratio of non-maternal alleles when coverage was reduced to 0.01 following SNP imputation were 0.93 (HRC v1.1 reference panel) and 0.90 (1000GP3 reference panel). An R2 of 0.72 was found at 0.01× sequencing coverage with no imputation performed. We developed an accurate method to measure fetal fraction using SNP imputation, showing cost-effectiveness by using different commercially available SNP chips and lowering the coverage. We also showed that semiconductor sequencing, which is an inexpensive option, was useful for measuring fetal fraction. Availability: Python source code and guidelines can be found at https://github.com/KMJ403/fetalfraction-SNPimpute. Contact: kangskim@ajou.ac.kr, sunshinkim3@gmail.com. Supplementary data are available at Bioinformatics online.
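
    The non-maternal allele ratio translates into a fetal fraction through a simple relationship: at SNPs where the mother is homozygous reference and the fetus inherits a paternal alternate allele, the expected alternate-read fraction in cell-free DNA is FF/2. A hedged sketch that glosses over informative-SNP selection and sequencing error; all values are simulated:

    ```python
    import numpy as np

    def fetal_fraction(alt_reads: np.ndarray, depths: np.ndarray) -> float:
        """FF from SNPs where the mother is hom-ref and the fetus is het."""
        alt_frac = alt_reads / depths      # expected value is FF / 2 at such SNPs
        return 2.0 * alt_frac.mean()

    rng = np.random.default_rng(6)
    true_ff = 0.10
    depth = rng.poisson(30, size=5000).clip(min=1)
    alt = rng.binomial(depth, true_ff / 2)
    print(f"estimated fetal fraction: {fetal_fraction(alt, depth):.3f}")
    ```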

  1. The Turkish Version of Web-Based Learning Platform Evaluation Scale: Reliability and Validity Study

    Science.gov (United States)

    Dag, Funda

    2016-01-01

    The purpose of this study is to determine the language equivalence and the validity and reliability of the Turkish version of the "Web-Based Learning Platform Evaluation Scale" ("Web Tabanli Ögrenme Ortami Degerlendirme Ölçegi" [WTÖODÖ]) used in the selection and evaluation of web-based learning environments. Within this scope,…

  2. Test-retest reliability of lifting and carrying in a 2-day functional capacity evaluation

    NARCIS (Netherlands)

    Reneman, MF; Dijkstra, PU; Westmaas, M; Goeken, LNH; Göeken, L.N.H.

    2002-01-01

    The objectives of this study were to establish test-retest reliability of lifting and carrying of a functional capacity evaluation (FCE) on two consecutive days and to verify the need for a 2-day protocol. A cohort of 50 patients (39 men, 11 women) with nonspecific low back pain were evaluated using

  4. Reading for Reliability: Preservice Teachers Evaluate Web Sources about Climate Change

    Science.gov (United States)

    Damico, James S.; Panos, Alexandra

    2016-01-01

    This study examined what happened when 65 undergraduate prospective secondary level teachers across content areas evaluated the reliability of four online sources about climate change: an oil company webpage, a news report, and two climate change organizations with competing views on climate change. The students evaluated the sources at three time…

  5. Test-retest and interrater reliability of the functional lower extremity evaluation.

    Science.gov (United States)

    Haitz, Karyn; Shultz, Rebecca; Hodgins, Melissa; Matheson, Gordon O

    2014-12-01

    Repeated-measures clinical measurement reliability study. To establish the reliability and face validity of the Functional Lower Extremity Evaluation (FLEE). The FLEE is a 45-minute battery of 8 standardized functional performance tests that measures 3 components of lower extremity function: control, power, and endurance. The reliability and normative values for the FLEE in healthy athletes are unknown. A face validity survey for the FLEE was sent to sports medicine personnel to evaluate the level of importance and frequency of clinical usage of each test included in the FLEE. The FLEE was then administered and rated for 40 uninjured athletes. To assess test-retest reliability, each athlete was tested twice, 1 week apart, by the same rater. To assess interrater reliability, 3 raters scored each athlete during 1 of the testing sessions. Intraclass correlation coefficients were used to assess the test-retest and interrater reliability of each of the FLEE tests. In the face validity survey, the FLEE tests were rated as highly important by 58% to 71% of respondents but frequently used by only 26% to 45% of respondents. Interrater reliability intraclass correlation coefficients ranged from 0.83 to 1.00, and test-retest reliability ranged from 0.71 to 0.95. The FLEE tests are considered clinically important for assessing lower extremity function by sports medicine personnel but are underused. The FLEE also is a reliable assessment tool. Future studies are required to determine if use of the FLEE to make return-to-play decisions may reduce reinjury rates.
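
    Intraclass correlation coefficients of the kind reported here are computed from a subjects × raters score matrix. A sketch of the two-way random-effects ICC(2,1) of Shrout and Fleiss, checked against their classic example data; we are assuming this ICC form, as the abstract does not state which variant was used:

    ```python
    import numpy as np

    def icc_2_1(x: np.ndarray) -> float:
        """Two-way random-effects ICC(2,1); x is subjects x raters."""
        n, k = x.shape
        grand = x.mean()
        ssr = k * ((x.mean(axis=1) - grand) ** 2).sum()   # between subjects
        ssc = n * ((x.mean(axis=0) - grand) ** 2).sum()   # between raters
        sse = ((x - grand) ** 2).sum() - ssr - ssc        # residual
        msr, msc = ssr / (n - 1), ssc / (k - 1)
        mse = sse / ((n - 1) * (k - 1))
        return (msr - mse) / (msr + (k - 1) * mse + k * (msc - mse) / n)

    ratings = np.array([[9, 2, 5, 8],
                        [6, 1, 3, 2],
                        [8, 4, 6, 8],
                        [7, 1, 2, 6],
                        [10, 5, 6, 9],
                        [6, 2, 4, 7]], dtype=float)   # Shrout & Fleiss (1979) data
    print(round(icc_2_1(ratings), 2))                 # ~0.29
    ```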

  6. Methods and Strategies to Impute Missing Genotypes for Improving Genomic Prediction

    DEFF Research Database (Denmark)

    Ma, Peipei

    for improving genomic prediction. The results indicate that IMPUTE2 and Beagle are accurate imputation methods, while FImpute is a good alternative for routine imputation with large data sets. Genotypes of non-genotyped animals can be accurately imputed if they have genotyped progenies. A combined reference...

  7. Integrated Evaluation of Reliability and Power Consumption of Wireless Sensor Networks

    Directory of Open Access Journals (Sweden)

    Antônio Dâmaso

    2017-11-01

    Full Text Available Power consumption is a primary interest in Wireless Sensor Networks (WSNs), and a large number of strategies have been proposed to evaluate it. However, those approaches usually consider neither reliability issues nor the power consumption of applications executing in the network. A central concern is the lack of consolidated solutions that enable us to evaluate the power consumption of applications and of the network stack while also considering their reliability. To solve this problem, we introduce a fully automatic solution for designing power-consumption-aware WSN applications and communication protocols. The solution presented in this paper comprises a methodology to evaluate power consumption based on the integration of formal models, a set of power consumption and reliability models, a sensitivity analysis strategy to select WSN configurations, and a toolbox named EDEN to fully support the proposed methodology. This solution allows accurately estimating the power consumption of WSN applications and the network stack in an automated way.

  8. EVALUATION OF HUMAN RELIABILITY IN SELECTED ACTIVITIES IN THE RAILWAY INDUSTRY

    Directory of Open Access Journals (Sweden)

    Erika SUJOVÁ

    2016-07-01

    Full Text Available The article focuses on the evaluation of human reliability in the human-machine system in the railway industry. Based on a survey of a train dispatcher and of selected activities, we identified risk factors affecting the dispatcher's work and evaluated the level of their influence on the reliability and safety of the performed activities. The research took place at the authors' workplace between 2012 and 2013. A survey method was used; with its help, the authors identified, for selected work activities of the train dispatcher, the risk factors that affect his/her work, and evaluated the seriousness of their influence on the reliability and safety of the performed activities. Among the most important findings are unclear and complicated internal regulations and work processes, a feeling of being overworked, and fear for one's safety at small, insufficiently protected stations.

  9. Creation of reliable relevance judgments in information retrieval systems evaluation experimentation through crowdsourcing: a review.

    Science.gov (United States)

    Samimi, Parnia; Ravana, Sri Devi

    2014-01-01

    Test collections are used to evaluate information retrieval systems in laboratory-based evaluation experimentation. In a classic setting, generating relevance judgments involves human assessors and is a costly and time-consuming task. Researchers and practitioners are still challenged to perform reliable and low-cost evaluation of retrieval systems. Crowdsourcing, as a novel method of data acquisition, is broadly used in many research fields. It has been proven that crowdsourcing is an inexpensive and quick solution, as well as a reliable alternative, for creating relevance judgments. One of the applications of crowdsourcing in IR is judging the relevancy of query-document pairs. In order to have a successful crowdsourcing experiment, the relevance judgment tasks should be designed precisely, with an emphasis on quality control. This paper explores the different factors that influence the accuracy of relevance judgments produced by workers, and how to improve the reliability of judgments in crowdsourcing experiments.

  10. Operational reliability evaluation of restructured power systems with wind power penetration utilizing reliability network equivalent and time-sequential simulation approaches

    DEFF Research Database (Denmark)

    Ding, Yi; Cheng, Lin; Zhang, Yonghong

    2014-01-01

    The conventional long-term reliability evaluation techniques have been well developed, but have been focused more on planning and expansion than on the operation of power systems. This paper proposes a new technique for evaluating operational reliabilities of restructured power systems with high wind power penetration. The proposed technique is based on the combination of the reliability network equivalent and time-sequential simulation approaches. The operational reliability network equivalents are developed to represent reliability models of wind farms, conventional generation and reserve providers, fast reserve providers and the transmission network in restructured power systems. A contingency management schema for real-time operation considering its coupling with the day-ahead market is proposed. The time-sequential Monte Carlo simulation is used to model the chronological...

  11. Reliability assessment of a peer evaluation instrument in a team-based learning course

    Directory of Open Access Journals (Sweden)

    Wahawisan J

    2016-03-01

    Full Text Available Objective: To evaluate the reliability of a peer evaluation instrument in a longitudinal team-based learning setting. Methods: Student pharmacists were instructed to evaluate the contributions of their peers. Evaluations were analyzed for the variance of the scores by identifying low, medium, and high scores. Agreement between performance ratings within each group of students was assessed via the intra-class correlation coefficient (ICC). Results: We found little variation in the standard deviation (SD) based on the score means among the high, medium, and low scores within each group. The lack of variation in SD of results between groups suggests that the peer evaluation instrument produces precise results. The ICC showed strong concordance among raters. Conclusions: Findings suggest that our student peer evaluation instrument provides a reliable method for peer assessment in team-based learning settings.

  12. Inter-rater reliability of the evaluation of muscular chains associated with posture alterations in scoliosis

    Science.gov (United States)

    2012-01-01

    Background In the Global postural re-education (GPR) evaluation, posture alterations are associated with anterior or posterior muscular chain impairments. Our goal was to assess the reliability of the GPR muscular chain evaluation. Methods Design: Inter-rater reliability study. Fifty physical therapists (PTs) and two experts trained in GPR assessed the standing posture from photographs of five youths with idiopathic scoliosis using a posture analysis grid with 23 posture indices (PI). The PTs and experts indicated the muscular chain associated with posture alterations. The PTs were also divided into three groups according to their experience in GPR. Experts’ results (after consensus) were used to verify agreement between PTs and experts for muscular chain and posture assessments. We used Kappa coefficients (K) and the percentage of agreement (%A) to assess inter-rater reliability and intra-class coefficients (ICC) for determining agreement between PTs and experts. Results For the muscular chain evaluation, reliability was moderate to substantial for 12 PI for the PTs (%A: 56 to 82; K: 0.42 to 0.76) and perfect for 19 PI for the experts. For posture assessment, reliability was moderate to substantial for 12 PI for the PTs (%A > 60%; K: 0.42 to 0.75) and moderate to perfect for 18 PI for the experts (%A: 80 to 100; K: 0.55 to 1.00). The agreement between PTs and experts was good for most muscular chain evaluations (18 PI; ICC: 0.82 to 0.99) and PI (19 PI; ICC: 0.78 to 1.00). Conclusions The GPR muscular chain evaluation has good reliability for most posture indices. GPR evaluation should help guide physical therapists in targeting affected muscles for treatment of abnormal posture patterns. PMID:22639838
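
    For reference, Cohen's weighted kappa and the percentage of agreement for one pair of raters can be computed as below; the ordinal ratings are invented, and scikit-learn's cohen_kappa_score is used for the linear-weighted kappa:

    ```python
    import numpy as np
    from sklearn.metrics import cohen_kappa_score

    rater_a = [2, 3, 1, 2, 3, 3, 1, 2, 2, 3]   # hypothetical ordinal posture scores
    rater_b = [2, 3, 2, 2, 3, 2, 1, 2, 3, 3]

    kappa_w = cohen_kappa_score(rater_a, rater_b, weights="linear")
    agreement = np.mean(np.array(rater_a) == np.array(rater_b)) * 100
    print(f"weighted kappa = {kappa_w:.2f}, agreement = {agreement:.0f}%")
    ```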

  14. Considering wind speed correlation of WECS in reliability evaluation using the time-shifting technique

    Energy Technology Data Exchange (ETDEWEB)

    Xie, Kaigui [State Key Laboratory of Power Transmission Equipment and System Security and New Technology, Chongqing University, Chongqing 400044 (China); Billinton, Roy [Power System Research Group, University of Saskatchewan, Saskatoon (Canada)

    2009-04-15

    Wind power is a very useful renewable energy source that is attracting consideration around the world due to its non-exhaustive nature and its environmental and social benefits. The correlation between multiple wind sites has a significant impact on the reliability of power systems containing wind energy conversion systems (WECS). Conventional methods cannot be directly applied to evaluate the reliability of WECS in the absence of comprehensive modeling techniques that recognize the correlation of the wind speeds at different wind farm locations. This paper proposes a model for power system reliability assessment that can consider the wind speed correlation and preserve the statistical characteristics of wind speeds, such as the mean and deviation of the wind speed time series (WSTS). A time-shifting technique is used to produce a new WSTS for a given correlation between two wind sites. The optimal shifted time at the two sites is determined using a linear interpolation technique. The probability distributions of the generated power and the system reliability indices with different degrees of correlation between the two sites are compared using two reliability test systems, i.e. the RBTS and the IEEE-RTS. The results show that the proposed method is useful in evaluating the reliability of WECS with correlation between two wind sites. (author)
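
    A hedged sketch of the time-shifting idea: slide one site's wind speed time series (WSTS) against the other and interpolate to the shift that yields a desired correlation. Shifting preserves the series' mean and standard deviation, as the paper requires; the AR(1) series and all parameter values below are invented stand-ins for measured wind speeds:

    ```python
    import numpy as np

    def shift_for_correlation(wsts_a, wsts_b, target_rho, max_shift=48):
        """Find the circular shift of series B giving the target correlation with A."""
        shifts = np.arange(max_shift + 1)
        rhos = np.array([np.corrcoef(wsts_a, np.roll(wsts_b, s))[0, 1]
                         for s in shifts])
        order = np.argsort(rhos)                 # np.interp needs ascending x values
        return float(np.interp(target_rho, rhos[order], shifts[order]))

    rng = np.random.default_rng(4)
    e = rng.normal(size=8760)
    base = np.empty(8760)
    base[0] = e[0]
    for t in range(1, 8760):
        base[t] = 0.9 * base[t - 1] + e[t]       # AR(1) stand-in for hourly wind speed

    # Shift of ~2 h gives two "sites" a correlation of about 0.8 (0.9**2 = 0.81).
    print(shift_for_correlation(base, base.copy(), target_rho=0.8))
    ```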

  15. Digital System Reliability Test for the Evaluation of safety Critical Software of Digital Reactor Protection System

    Directory of Open Access Journals (Sweden)

    Hyun-Kook Shin

    2006-08-01

    Full Text Available A new Digital Reactor Protection System (DRPS) based on a VME-bus single board computer has been developed by KOPEC to prevent software common mode failure (CMF) inside the digital system. The new DRPS has been proved to be an effective digital safety system for preventing CMF by a Defense-in-Depth and Diversity (DID&D) analysis. However, for practical use in nuclear power plants, performance tests and reliability tests are essential for digital system qualification. In this study, a single channel of a DRPS prototype has been manufactured for the evaluation of DRPS capabilities. The integrated functional tests are performed and the system reliability is analyzed and tested. The results of the reliability test show that the application software of the DRPS has a very high reliability compared with analog reactor protection systems.

  16. A Novel OBDD-Based Reliability Evaluation Algorithm for Wireless Sensor Networks on the Multicast Model

    Directory of Open Access Journals (Sweden)

    Zongshuai Yan

    2015-01-01

    Full Text Available The two-terminal reliability calculation for wireless sensor networks (WSNs) is a #P-hard problem. The reliability calculation of WSNs on the multicast model provides an even worse combinatorial explosion of node states with respect to the calculation of WSNs on the unicast model; many real WSNs require the multicast model to deliver information. This research first provides a formal definition for the WSN on the multicast model. Next, a symbolic OBDD_Multicast algorithm is proposed to evaluate the reliability of WSNs on the multicast model. Furthermore, our research on OBDD_Multicast construction avoids the problem of invalid expansion, which reduces the number of subnetworks by identifying the redundant paths of two adjacent nodes and s-t unconnected paths. Experiments show that OBDD_Multicast both reduces the complexity of the WSN reliability analysis and has a lower running time than Xing's OBDD (ordered binary decision diagram)-based algorithm.

  17. Reliability Evaluation of Distribution System Considering Sequential Characteristics of Distributed Generation

    Directory of Open Access Journals (Sweden)

    Sheng Wanxing

    2016-01-01

    Full Text Available To address the randomness of the output power of distributed generation (DG), a reliability evaluation model based on sequential Monte Carlo simulation (SMCS) for distribution systems with DG is proposed. Operating states of the distribution system can be sampled by SMCS in chronological order, and the corresponding output power of DG can thus be generated. The proposed method has been tested on feeder F4 of IEEE-RBTS Bus 6. The results show that reliability evaluation of a distribution system considering the uncertainty of the output power of DG can be effectively implemented by SMCS.
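
    The chronological sampling at the heart of SMCS can be shown with a single two-state component: draw exponential times to failure and repair in sequence and accumulate downtime. The failure and repair rates are hypothetical; a real study would layer DG output sampling and network analysis on top of this loop:

    ```python
    import numpy as np

    rng = np.random.default_rng(5)
    LAMBDA, MU = 0.5, 50.0          # failures/yr and repairs/yr (hypothetical)
    YEARS = 10000

    down_time = 0.0
    t = 0.0
    while t < YEARS:
        t += rng.exponential(1.0 / LAMBDA)        # time to next failure
        repair = rng.exponential(1.0 / MU)        # repair duration
        down_time += min(repair, max(YEARS - t, 0.0))
        t += repair

    # Theory: unavailability = lambda / (lambda + mu) ≈ 0.0099
    print(f"unavailability ≈ {down_time / YEARS:.5f}")
    ```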

  18. Imputation by PLS regression for generalized linear mixed models

    OpenAIRE

    Guyon, Emilie; Pommeret, Denys

    2011-01-01

    The problem of handling missing data in generalized linear mixed models with correlated covariates is considered when the missing mechanism concerns both the response variable and the covariates. An imputation algorithm combining multiple imputation and Partial Least Squares (PLS) regression is proposed. The method relies on two steps. In a first step, using a linearization technique, the generalized linear mixed model is approximated by a linear mixed model. A latent variable is introduced a...

  19. Partial F-tests with multiply imputed data in the linear regression framework via coefficient of determination.

    Science.gov (United States)

    Chaurasia, Ashok; Harel, Ofer

    2015-02-10

    Tests for regression coefficients such as global, local, and partial F-tests are common in applied research. In the framework of multiple imputation, there are several papers addressing tests for regression coefficients. However, for simultaneous hypothesis testing, the existing methods are computationally intensive because they involve calculation with vectors and (inversion of) matrices. In this paper, we propose a simple method based on the scalar entity, coefficient of determination, to perform (global, local, and partial) F-tests with multiply imputed data. The proposed method is evaluated using simulated data and applied to suicide prevention data. Copyright © 2014 John Wiley & Sons, Ltd.
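
    For a single complete dataset, the partial F-test can be written entirely in terms of the two models' coefficients of determination; the paper's contribution is pooling this scalar across multiply imputed datasets, which this minimal complete-data sketch (with invented R² values) does not attempt:

    ```python
    from scipy.stats import f as f_dist

    def partial_f_test(r2_full, r2_reduced, n, p_full, q):
        """F-test for q coefficients dropped from a model with p_full predictors."""
        df2 = n - p_full - 1
        f_stat = ((r2_full - r2_reduced) / q) / ((1 - r2_full) / df2)
        p_value = f_dist.sf(f_stat, q, df2)
        return f_stat, p_value

    print(partial_f_test(r2_full=0.42, r2_reduced=0.35, n=200, p_full=6, q=2))
    ```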

  20. Application of a novel hybrid method for spatiotemporal data imputation: A case study of the Minqin County groundwater level

    Science.gov (United States)

    Zhang, Zhongrong; Yang, Xuan; Li, Hao; Li, Weide; Yan, Haowen; Shi, Fei

    2017-10-01

    The techniques for data analysis have been widely developed in past years; however, missing data still represent a ubiquitous problem in many scientific fields. In particular, dealing with missing spatiotemporal data presents an enormous challenge. Nonetheless, in recent years, a considerable amount of research has focused on spatiotemporal problems, making spatiotemporal missing data imputation methods increasingly indispensable. In this paper, a novel spatiotemporal hybrid method is proposed to verify and impute spatiotemporal missing values. This new method, termed SOM-FLSSVM, flexibly combines three advanced techniques: self-organizing feature map (SOM) clustering, the fruit fly optimization algorithm (FOA) and the least squares support vector machine (LSSVM). We employ a cross-validation (CV) procedure and the FOA swarm intelligence optimization strategy, which can search the available parameters and determine the optimal imputation model. The spatiotemporal groundwater data for Minqin County, China, were selected to test the reliability and imputation ability of SOM-FLSSVM. We carried out a validation experiment and compared three well-studied models with SOM-FLSSVM using missing-data ratios from 0.1 to 0.8 in the same data set. The results demonstrate that the new hybrid method performs well in terms of both robustness and accuracy for spatiotemporal missing data.

  1. Inter-rater reliability of the Sødring Motor Evaluation of Stroke patients (SMES).

    Science.gov (United States)

    Halsaa, K E; Sødring, K M; Bjelland, E; Finsrud, K; Bautz-Holter, E

    1999-12-01

    The Sødring Motor Evaluation of Stroke patients is an instrument for physiotherapists to evaluate motor function and activities in stroke patients. The rating reflects quality as well as quantity of the patient's unassisted performance within three domains: leg, arm and gross function. The inter-rater reliability of the method was studied in a sample of 30 patients admitted to a stroke rehabilitation unit. Three therapists were involved in the study; two therapists assessed the same patient on two consecutive days in a balanced design. Cohen's weighted kappa and McNemar's test of symmetry were used as measures of item reliability, and the intraclass correlation coefficient was used to express the reliability of the sumscores. For 24 out of 32 items the weighted kappa statistic was excellent (0.75-0.98), while 7 items had a kappa statistic within the range 0.53-0.74 (fair to good). The reliability of one item was poor (0.13). The intraclass correlation coefficient for the three sumscores was 0.97, 0.91 and 0.97. We conclude that the Sødring Motor Evaluation of Stroke patients is a reliable measure of motor function in stroke patients undergoing rehabilitation.

  2. Validity and rater reliability of Persian version of the Consensus Auditory Perceptual Evaluation of Voice

    Directory of Open Access Journals (Sweden)

    Nazila Salary Majd

    2014-08-01

    Full Text Available Background and Aim: Auditory-perceptual assessment of voice is a main approach in the diagnosis of voice disorders and in documenting improvement with therapy. Nevertheless, there are few Iranian studies on auditory-perceptual assessment of voice. The aim of the present study was to develop a Persian version of the Consensus Auditory-Perceptual Evaluation of Voice (CAPE-V) and to determine its validity and rater reliability. Methods: Qualitative content validity was established by collecting 10 questionnaires from 9 experienced speech and language pathologists and a linguist. For reliability purposes, voice samples of 40 dysphonic adults (20-45 years of age; neurogenic or functional, with and without laryngeal lesions) and 10 normal healthy speakers were recorded. The samples included sustained vowels and readings of the 6 sentences of the Persian version of the Consensus Auditory-Perceptual Evaluation of Voice, called the ATSHA. Results: The qualitative content validity of the developed Persian version was confirmed. Cronbach's alpha was high (0.95). Intra-rater reliability coefficients ranged from 0.86 for overall severity to 0.42 for pitch; inter-rater reliability ranged from 0.85 for overall severity to 0.32 for pitch (p<0.05). Conclusion: The ATSHA can be used as a valid and reliable Persian scale for the auditory-perceptual assessment of voice in adults.

  3. Haplotype variation and genotype imputation in African populations

    Science.gov (United States)

    Huang, Lucy; Jakobsson, Mattias; Pemberton, Trevor J.; Ibrahim, Muntaser; Nyambo, Thomas; Omar, Sabah; Pritchard, Jonathan K.; Tishkoff, Sarah A.; Rosenberg, Noah A.

    2013-01-01

    Sub-Saharan Africa has been identified as the part of the world with the greatest human genetic diversity. This high level of diversity causes difficulties for genome-wide association (GWA) studies in African populations—for example, by reducing the accuracy of genotype imputation in African populations compared to non-African populations. Here, we investigate haplotype variation and imputation in Africa, using 253 unrelated individuals from 15 Sub-Saharan African populations. We identify the populations that provide the greatest potential for serving as reference panels for imputing genotypes in the remaining groups. Considering reference panels comprising samples of recent African descent in Phase 3 of the HapMap Project, we identify mixtures of reference groups that produce the maximal imputation accuracy in each of the sampled populations. We find that optimal HapMap mixtures and maximal imputation accuracies identified in detailed tests of imputation procedures can instead be predicted by using simple summary statistics that measure relationships between the pattern of genetic variation in a target population and the patterns in potential reference panels. Our results provide an empirical basis for facilitating the selection of reference panels in GWA studies of diverse human populations, especially those of African ancestry. Genet. Epidemiol. 35:766–780, 2011. PMID:22125220

  4. A web-based approach to data imputation

    KAUST Repository

    Li, Zhixu

    2013-10-24

    In this paper, we present WebPut, a prototype system that adopts a novel web-based approach to the data imputation problem. Towards this, WebPut utilizes the available information in an incomplete database in conjunction with the data consistency principle. Moreover, WebPut extends effective Information Extraction (IE) methods for the purpose of formulating web search queries that are capable of effectively retrieving missing values with high accuracy. WebPut employs a confidence-based scheme that efficiently leverages our suite of data imputation queries to automatically select the most effective imputation query for each missing value. A greedy iterative algorithm is proposed to schedule the imputation order of the different missing values in a database, and in turn the issuing of their corresponding imputation queries, for improving the accuracy and efficiency of WebPut. Moreover, several optimization techniques are also proposed to reduce the cost of estimating the confidence of imputation queries at both the tuple-level and the database-level. Experiments based on several real-world data collections demonstrate not only the effectiveness of WebPut compared to existing approaches, but also the efficiency of our proposed algorithms and optimization techniques. © 2013 Springer Science+Business Media New York.

  5. A SPATIOTEMPORAL APPROACH FOR HIGH RESOLUTION TRAFFIC FLOW IMPUTATION

    Energy Technology Data Exchange (ETDEWEB)

    Han, Lee [University of Tennessee, Knoxville (UTK); Chin, Shih-Miao [ORNL; Hwang, Ho-Ling [ORNL

    2016-01-01

    Along with the rapid development of Intelligent Transportation Systems (ITS), traffic data collection technologies have been evolving dramatically. The emergence of innovative data collection technologies such as the Remote Traffic Microwave Sensor (RTMS), Bluetooth sensors, the GPS-based floating car method, and automated license plate recognition (ALPR) (1) creates an explosion of traffic data, which brings transportation engineering into the new era of Big Data. However, despite the advance of these technologies, the missing data issue is still inevitable and has posed great challenges for research such as traffic forecasting, real-time incident detection and management, dynamic route guidance, and massive evacuation optimization, because the degree of success of these endeavors depends on the timely availability of relatively complete and reasonably accurate traffic data. A thorough literature review suggests that most current imputation models, if not all, focus largely on the temporal nature of the traffic data and fail to consider that traffic stream characteristics at a certain location are closely related to those at neighboring locations, and thus fail to utilize these correlations for data imputation. To this end, this paper presents a Kriging-based spatiotemporal data imputation approach that is able to fully utilize the spatiotemporal information underlying traffic data. The imputation performance of the proposed approach was tested using simulated scenarios and achieved stable imputation accuracy. Moreover, the proposed Kriging imputation model is more flexible than current models.
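
    Kriging with a Gaussian covariance is equivalent to Gaussian-process regression, so a minimal spatiotemporal version can be sketched with scikit-learn by treating each reading as a point in (x, y, t); the paper's variogram model and neighborhood scheme are likely more elaborate than this.

      import numpy as np
      from sklearn.gaussian_process import GaussianProcessRegressor
      from sklearn.gaussian_process.kernels import RBF, WhiteKernel

      # Hedged sketch: ordinary-Kriging-style imputation as GP regression in
      # (x, y, t). The paper's variogram and neighborhood choices may differ.
      def krige_impute(coords_obs, values_obs, coords_missing):
          kernel = 1.0 * RBF(length_scale=[1.0, 1.0, 1.0]) + WhiteKernel()
          gp = GaussianProcessRegressor(kernel=kernel, normalize_y=True)
          gp.fit(coords_obs, values_obs)        # rows: (x, y, t) per reading
          return gp.predict(coords_missing)     # Kriging predictor = GP mean

      # Toy usage: 50 observed readings, 5 gaps to fill
      rng = np.random.default_rng(0)
      obs = rng.random((50, 3))
      vals = np.sin(obs @ np.array([3.0, 2.0, 1.0]))
      print(krige_impute(obs, vals, rng.random((5, 3))))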

  6. The Relative Impacts of Design Effects and Multiple Imputation on Variance Estimates: A Case Study with the 2008 National Ambulatory Medical Care Survey

    Directory of Open Access Journals (Sweden)

    Lewis Taylor

    2014-03-01

    Full Text Available The National Ambulatory Medical Care Survey collects data on office-based physician care from a nationally representative, multistage sampling scheme where the ultimate unit of analysis is a patient-doctor encounter. Patient race, a commonly analyzed demographic, has been subject to a steadily increasing item nonresponse rate. In 1999, race was missing for 17 percent of cases; by 2008, that figure had risen to 33 percent. Over this entire period, single imputation has been the compensation method employed. Recent research at the National Center for Health Statistics evaluated multiply imputing race to better represent the missing-data uncertainty. Given item nonresponse rates of 30 percent or greater, we were surprised to find that, for many estimates, the ratio of the multiple-imputation to the single-imputation estimated standard error was close to 1. A likely explanation is that the design effects attributable to the complex sample design largely outweigh any increase in variance attributable to missing-data uncertainty.
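
    The multiple-imputation standard errors compared above are conventionally obtained with Rubin's rules, which combine the average within-imputation variance with the between-imputation spread; a minimal sketch:

      import numpy as np

      # Rubin's rules: total MI variance = mean within-imputation variance
      # plus (1 + 1/m) times the between-imputation variance.
      def rubins_rules(estimates, variances):
          estimates, variances = np.asarray(estimates), np.asarray(variances)
          m = len(estimates)
          qbar = estimates.mean()                 # pooled point estimate
          t = variances.mean() + (1 + 1 / m) * estimates.var(ddof=1)
          return qbar, np.sqrt(t)

      # Ratio of MI to single-imputation SE, the quantity examined above
      # (all numbers illustrative)
      qbar, se_mi = rubins_rules([0.32, 0.35, 0.31, 0.34, 0.33],
                                 [0.004, 0.004, 0.005, 0.004, 0.004])
      print(se_mi / np.sqrt(0.004))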

  7. Creation, validation, and reliability of a shooting simulator instrument for reaction time evaluation

    Directory of Open Access Journals (Sweden)

    Ellen dos Santos Soares

    Full Text Available Abstract: The aim of this study was to develop, validate, and verify the reliability of a shooting simulator instrument for reaction time evaluation. Ninety Santa Maria Air Base military personnel participated in the study. Software was developed for use with an electronic gun, with which participants performed two shooting task tests: simple reaction time and choice reaction time. The results for concurrent validity were satisfactory: no significant differences were found between the two instruments and good agreement was observed. The reliability results were significant in both tests. The instrument can be used for research purposes, or for military training, as a simple, low-cost tool involving speed, accuracy, and decision-making in shooting tasks.

  8. Reliability Assessment and Energy Loss Evaluation for Modern Wind Turbine Systems

    DEFF Research Database (Denmark)

    Zhou, Dao

    of the DFIG system and the PMSG system. The design of the back-to-back power converters and the loss model of the power semiconductor device are discussed and established in Chapter 2. Then, Chapter 3 and Chapter 4 are dedicated to the assessment of the wind power converter in terms of reliability....... Specifically, Chapter 4 estimates and compares the lifespan of the back-to-back power converters based on the thermal stress analyzed in Chapter 3. In accordance with the grid codes, Chapter 4 further evaluates the cost on reliability with various types of reactive power injection for both the configurations...... are explored in Chapter 6. The main contribution of this project is in developing a universal approach to evaluate and estimate the reliability and the cost of energy for modern wind turbine systems. Furthermore, simulation and experimental results validates the feasibility of an enhanced lifespan of the power...

  9. ASPECTS DEFINITION OF RELIABILITY EVALUATION FACADE SYSTEMS FROM THE VIEW POINT OF EUROCODE

    Directory of Open Access Journals (Sweden)

    A. V. Radkevych

    2015-08-01

    Full Text Available Purpose. This paper defines the most rational technique for evaluating the reliability of facade systems of multistoried residential buildings, drawing on experience with building construction and operation abroad. It also focuses on defining the parameters of materials and facade systems whose improvement can increase the reliability and durability of facade systems of multistoried residential buildings and cut the cost of their operation. Methodology. A comparative analysis of operating experience with various types of facade systems in Ukraine and abroad, based on data from different authors, was conducted. The impact of external factors on facade systems was analyzed, methods for assessing the reliability of facades were compared against the criteria stated in the Eurocode, and the parameters that determine the reliability and durability of facade systems were selected. Findings. The authors investigated methods for evaluating the organizational-technological reliability and durability of modern facade systems. The causes of failure of facade systems were identified. Ways of improving the materials of facade systems, as well as constructional and organizational-technological decisions on their structure, are offered. Methods for increasing the reliability and durability of facade systems were defined. Originality. The most rational technique for evaluating the reliability of facade systems, considering the Eurocode requirements for structural design, was defined. Practical value. Improved methods for evaluating the organizational-technological reliability of facade systems of multistoried residential buildings will allow more accurate prediction of the lifetime of enclosures. Using the methods described in the Eurocodes to determine the reliability and durability of facade systems will provide the general criteria for the

  10. System Reliability Evaluation in Water Distribution Networks with the Impact of Valves Experiencing Cascading Failures

    Directory of Open Access Journals (Sweden)

    Qing Shuang

    2017-06-01

    Full Text Available Water distribution networks (WDNs) represent a class of critical infrastructure networks. When a disaster occurs, component failures in a WDN may trigger system failures that result in larger-scale reactions. The aim of the paper is to evaluate the evolution of system reliability and failure propagation time for a WDN experiencing cascading failures, and to find the critical pipes whose failure may reduce system reliability dramatically. Multiple factors are considered in the method, such as network topology, the balance of water supply and demand, demand multiplier, and pipe break isolation. A pipe-based attack with multiple failure scenarios is simulated in the paper, and a case WDN is used to illustrate the method. The results show that when a WDN is short of supply, the lowest capacity becomes the dominant factor that decides the evolution of system reliability and failure propagation time. The valve ratio (VR) and system reliability present a flattened S-curve relationship, and there are two turning points in VR. The critical pipes can be identified. With a fixed 5% of valves, a WDN can improve system reliability and resist cascading failures effectively. The findings provide insights into system reliability and failure propagation time for WDNs experiencing cascading failures, and are useful for future studies focused on the operation and management of water services.
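
    A crude connectivity proxy for such simulations can be sketched with networkx: pipes (edges) fail at random and reliability is estimated as the probability that demand nodes remain connected to the source. The paper's hydraulic balance and cascading dynamics are far richer than this sketch.

      import networkx as nx
      import numpy as np

      rng = np.random.default_rng(7)

      # Fail pipes (edges) at random, then check whether demand nodes still
      # reach the source; repeat to estimate a connectivity reliability.
      def connectivity_reliability(G, source, demands, fail_p=0.05, trials=2000):
          ok = 0
          for _ in range(trials):
              H = G.copy()
              H.remove_edges_from([e for e in G.edges if rng.random() < fail_p])
              ok += all(nx.has_path(H, source, d) for d in demands)
          return ok / trials

      G = nx.grid_2d_graph(4, 4)                     # toy looped network
      print(connectivity_reliability(G, (0, 0), [(3, 3), (0, 3)]))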

  11. Balance Assessment in Sports-Related Concussion: Evaluating Test-Retest Reliability of the Equilibrate System.

    Science.gov (United States)

    Odom, Mitchell J; Lee, Young M; Zuckerman, Scott L; Apple, Rachel P; Germanos, Theodore; Solomon, Gary S; Sills, Allen K

    2016-01-01

    This study evaluated the test-retest reliability of a novel computer-based, portable balance assessment tool, the Equilibrate System (ES), used to diagnose sports-related concussion. Twenty-seven students participated in ES testing consisting of three sessions over 4 weeks. The modified Balance Error Scoring System was performed. For each participant, test-retest reliability was established using the intraclass correlation coefficient (ICC). The ES test-retest reliability from baseline to week 2 produced an ICC value of 0.495 (95% CI, 0.123-0.745). Week 2 testing produced ICC values of 0.602 (95% CI, 0.279-0.803) and 0.610 (95% CI, 0.299-0.804), respectively. All other single measures test-retest reliability values produced poor ICC values. Same-day ES testing showed fair to good test-retest reliability while interweek measures displayed poor to fair test-retest reliability. Testing conditions should be controlled when using computerized balance assessment methods. ES testing should only be used as a part of a comprehensive assessment.
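
    A common form of the test-retest statistic used here, the single-measures two-way random-effects ICC(2,1), can be computed directly from a subjects-by-sessions score matrix; a minimal sketch (illustrative scores below):

      import numpy as np

      # Two-way random-effects, single-measures ICC(2,1) for a test-retest
      # design: rows are subjects, columns are sessions (Shrout & Fleiss).
      def icc_2_1(x):
          x = np.asarray(x, dtype=float)
          n, k = x.shape
          grand = x.mean()
          ss_rows = k * ((x.mean(axis=1) - grand) ** 2).sum()
          ss_cols = n * ((x.mean(axis=0) - grand) ** 2).sum()
          ss_err = ((x - grand) ** 2).sum() - ss_rows - ss_cols
          msr = ss_rows / (n - 1)                  # between-subjects MS
          msc = ss_cols / (k - 1)                  # between-sessions MS
          mse = ss_err / ((n - 1) * (k - 1))       # residual MS
          return (msr - mse) / (msr + (k - 1) * mse + k * (msc - mse) / n)

      # Illustrative balance-error scores: 6 subjects, baseline and week 2
      scores = [[12, 14], [20, 19], [8, 11], [15, 15], [22, 25], [10, 9]]
      print(icc_2_1(scores))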

  12. Critically re-evaluating a common technique: Accuracy, reliability, and confirmation bias of EMG.

    Science.gov (United States)

    Narayanaswami, Pushpa; Geisbush, Thomas; Jones, Lyell; Weiss, Michael; Mozaffar, Tahseen; Gronseth, Gary; Rutkove, Seward B

    2016-01-19

    (1) To assess the diagnostic accuracy of EMG in radiculopathy. (2) To evaluate the intrarater reliability and interrater reliability of EMG in radiculopathy. (3) To assess the presence of confirmation bias in EMG. Three experienced academic electromyographers interpreted 3 compact discs with 20 EMG videos (10 normal, 10 radiculopathy) in a blinded, standardized fashion without information regarding the nature of the study. The EMGs were interpreted 3 times (discs A, B, C) 1 month apart. Clinical information was provided only with disc C. Intrarater reliability was calculated by comparing interpretations in discs A and B, interrater reliability by comparing interpretation between reviewers. Confirmation bias was estimated by the difference in correct interpretations when clinical information was provided. Sensitivity was similar to previous reports (77%, confidence interval [CI] 63%-90%); specificity was 71%, CI 56%-85%. Intrarater reliability was good (κ 0.61, 95% CI 0.41-0.81); interrater reliability was lower (κ 0.53, CI 0.35-0.71). There was no substantial confirmation bias when clinical information was provided (absolute difference in correct responses 2.2%, CI -13.3% to 17.7%); the study lacked precision to exclude moderate confirmation bias. This study supports that (1) serial EMG studies should be performed by the same electromyographer since intrarater reliability is better than interrater reliability; (2) knowledge of clinical information does not bias EMG interpretation substantially; (3) EMG has moderate diagnostic accuracy for radiculopathy with modest specificity and electromyographers should exercise caution interpreting mild abnormalities. This study provides Class III evidence that EMG has moderate diagnostic accuracy and specificity for radiculopathy. © 2015 American Academy of Neurology.
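
    The chance-corrected agreement statistics reported above are straightforward to reproduce; for example, scikit-learn's cohen_kappa_score gives plain kappa for two raters' normal/radiculopathy calls, and weights="linear" or "quadratic" yields the weighted variant for ordinal scales. All rating data below are illustrative.

      from sklearn.metrics import cohen_kappa_score

      # Illustrative two-rater agreement: 1 = radiculopathy, 0 = normal.
      rater_a = [1, 0, 1, 1, 0, 1, 0, 0, 1, 1]
      rater_b = [1, 0, 1, 0, 0, 1, 0, 1, 1, 1]
      print(cohen_kappa_score(rater_a, rater_b))        # plain kappa

      # For ordinal scales, a weighted variant penalizes near-misses less:
      sev_a = [0, 1, 2, 2, 3, 1, 0, 2]
      sev_b = [0, 1, 1, 2, 3, 2, 0, 2]
      print(cohen_kappa_score(sev_a, sev_b, weights="linear"))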

  13. A national drug related problems database: evaluation of use in practice, reliability and reproducibility

    DEFF Research Database (Denmark)

    Kjeldsen, Lene Juel; Birkholm, Trine; Fischer, Hanne Lis

    2014-01-01

    Danish hospital pharmacies. Methods Practice use of the DRP-database was explored by an electronic questionnaire distributed to hospital pharmacies, and consisted of questions regarding current and previous use of the DRP-database. The reliability was evaluated by comparing the categorization of 24 cases...

  14. Reliability-Related Issues in the Context of Student Evaluations of Teaching in Higher Education

    Science.gov (United States)

    Kalender, Ilker

    2015-01-01

    Student evaluations of teaching (SET) have been the principal instrument to elicit students' opinions in higher education institutions. Many decisions, including high-stake ones, are made based on SET scores reported by students. In this respect, reliability of SET scores is of considerable importance. This paper has an argument that there are…

  15. Evaluation of the reliability of Levine method of wound swab for ...

    African Journals Online (AJOL)

    The aim of this paper is to evaluate the reliability of the Levine swab in accurately identifying the microorganisms present in a wound, and to identify the need for further studies in this regard. Methods: A semi-structured questionnaire was administered and physical examination was performed on patients with chronic wounds ...

  16. Test-retest reliability of the isernhagen work systems functional capacity evaluation in healthy adults

    NARCIS (Netherlands)

    Reneman, MF; Brouwer, S; Meinema, A; Dijkstra, PU; Geertzen, JHB; Groothoff, JW

    2004-01-01

    Aim of this study was to investigate test-retest reliability of the Isernhagen Work System Functional Capacity Evaluation (IWS FCE) in healthy subjects. The IWS FCE consists of 28 tests that reflect work-related activities such as lifting, carrying, bending, etc. A convenience sample of 26 healthy

  17. A method to evaluate performance reliability of individual subjects in laboratory research applied to work settings.

    Science.gov (United States)

    1978-10-01

    This report presents a method that may be used to evaluate the reliability of performance of individual subjects, particularly in applied laboratory research. The method is based on analysis of variance of a tasks-by-subjects data matrix, with all sc...

  18. An evaluation of ventilator reliability: a multivariate, failure time analysis of 5 common ventilator brands.

    Science.gov (United States)

    Blanch, P B

    2001-08-01

    -Mantel test. In 2,567,365 hours of ventilator operation, 290 observations were recorded (226 failures and 64 censored observations). Two of the 7 covariates were judged time-dependent, excluded from the Cox model, and evaluated using other techniques. Of the 5 remaining covariates, 2 were significantly related to reliability, both indirectly. There was no difference in reliability, regardless of how many times a ventilator had been previously repaired, but hospital environment did significantly affect reliability. Ventilator reliability depends on a number of factors. This study indicates that, on average, ventilator reliability improves the more a ventilator is used and the longer the brand has been commercially available. The number of previous ventilator repairs did not affect reliability, but the hospital environment did. These data, if validated, should help to enhance our understanding of ventilator reliability and could eventually have profound economic and safety implications as well.

  19. In vitro model to evaluate reliability and accuracy of a dental shade-matching instrument.

    Science.gov (United States)

    Kim-Pusateri, Seungyee; Brewer, Jane D; Dunford, Robert G; Wee, Alvin G

    2007-11-01

    There are several electronic shade-matching instruments available for clinical use; unfortunately, there are limited acceptable in vitro models to evaluate their reliability and accuracy. The purpose of this in vitro study was to evaluate the reliability and accuracy of a dental clinical shade-matching instrument. Using the shade-matching instrument (ShadeScan), color measurements were made of 3 commercial shade guides (VITA Classical, VITA 3D-Master, and Chromascop). Shade tabs were selected and placed in the middle of a gingival matrix (Shofu Gummy), with tabs of the same nominal shade from additional shade guides placed on both sides. Measurements were made of the central region of the shade tab inside a black box. For the reliability assessment, each shade tab from each of the 3 shade guide types was measured 10 times. For the accuracy assessment, each shade tab from 10 guides of each of the 3 types evaluated was measured once. Reliability, accuracy, and 95% confidence intervals were calculated for each shade tab. Differences were determined by 1-way ANOVA followed by the Bonferroni multiple comparison procedure. Reliability of ShadeScan was as follows: VITA Classical = 95.0%, VITA 3D-Master = 91.2%, and Chromascop = 76.5%. Accuracy of ShadeScan was as follows: VITA Classical = 65.0%, VITA 3D-Master = 54.2%, Chromascop = 84.5%. This in vitro study showed a varying degree of reliability and accuracy for ShadeScan, depending on the type of shade guide system used.

  20. Education Research: Bias and poor interrater reliability in evaluating the neurology clinical skills examination

    Science.gov (United States)

    Schuh, L A.; London, Z; Neel, R; Brock, C; Kissela, B M.; Schultz, L; Gelb, D J.

    2009-01-01

    Objective: The American Board of Psychiatry and Neurology (ABPN) has recently replaced the traditional, centralized oral examination with the locally administered Neurology Clinical Skills Examination (NEX). The ABPN postulated the experience with the NEX would be similar to the Mini-Clinical Evaluation Exercise, a reliable and valid assessment tool. The reliability and validity of the NEX has not been established. Methods: NEX encounters were videotaped at 4 neurology programs. Local faculty and ABPN examiners graded the encounters using 2 different evaluation forms: an ABPN form and one with a contracted rating scale. Some NEX encounters were purposely failed by residents. Cohen’s kappa and intraclass correlation coefficients (ICC) were calculated for local vs ABPN examiners. Results: Ninety-eight videotaped NEX encounters of 32 residents were evaluated by 20 local faculty evaluators and 18 ABPN examiners. The interrater reliability for a determination of pass vs fail for each encounter was poor (kappa 0.32; 95% confidence interval [CI] = 0.11, 0.53). ICC between local faculty and ABPN examiners for each performance rating on the ABPN NEX form was poor to moderate (ICC range 0.14-0.44), and did not improve with the contracted rating form (ICC range 0.09-0.36). ABPN examiners were more likely than local examiners to fail residents. Conclusions: There is poor interrater reliability between local faculty and American Board of Psychiatry and Neurology examiners. A bias was detected for favorable assessment locally, which is concerning for the validity of the examination. Further study is needed to assess whether training can improve interrater reliability and offset bias. GLOSSARY ABIM = American Board of Internal Medicine; ABPN = American Board of Psychiatry and Neurology; CI = confidence interval; HFH = Henry Ford Hospital; ICC = intraclass correlation coefficients; IM = internal medicine; mini-CEX = Mini-Clinical Evaluation Exercise; NEX = Neurology Clinical

  1. Standards and reliability in evaluation: when rules of thumb don't apply.

    Science.gov (United States)

    Norcini, J J

    1999-10-01

    The purpose of this paper is to identify situations in which two rules of thumb in evaluation do not apply. The first rule is that all standards should be absolute. When selection decisions are being made or when classroom tests are given, however, relative standards may be better. The second rule of thumb is that every test should have a reliability of .80 or better. Depending on the circumstances, though, the standard error of measurement, the consistency of pass/fail classifications, and the domain-referenced reliability coefficients may be better indicators of reproducibility.

  2. Reliability evaluation of CIF (chip-in-flex) and COF (chip-on-flex) packages

    Science.gov (United States)

    Jang, Jae-Won; Suk, Kyoung-Lim; Paik, Kyung-Wook; Lee, Soon-Bok

    2010-03-01

    CIF (chip-in-flex) and COF (chip-on-flex) packages have the advantages of fine-pitch capability and flexibility. Anisotropic conductive films (ACFs) are used for the interconnection between chip and substrate. The display, mobile device, and semiconductor industries require smaller and more highly integrated packages, and both CIF and COF packages are alternatives that can meet these demands. However, there are reliability concerns for the interconnection between the chip and substrate because the packages are subjected to various loading conditions, which may degrade the functionality of the packages. Therefore, reliability assessment of both packages is necessary. In this study, experimental tests were performed to evaluate the reliability of the chip-substrate interconnection in CIF and COF packages. Thermal cycling tests were performed to evaluate resistance to thermal fatigue. The shape and warpage of the chip in CIF and COF packages were observed using optical methods (e.g., shadow Moiré and Twyman/Green interferometry); these optical Moiré techniques are widely used for measuring small deformations in microelectronic packages. The stress distribution around the chip was evaluated through FEA (finite element analysis). In addition, we suggest modified design parameters for CIF packages to enhance reliability.

  3. An evaluation tool for myofascial adhesions in patients after breast cancer (MAP-BC evaluation tool): Development and interrater reliability.

    Science.gov (United States)

    De Groef, An; Van Kampen, Marijke; Vervloesem, Nele; De Geyter, Sophie; Dieltjens, Evi; Christiaens, Marie-Rose; Neven, Patrick; Geraerts, Inge; Devoogdt, Nele

    2017-01-01

    To develop a tool to evaluate myofascial adhesions objectively in patients with breast cancer and to investigate its interrater reliability. 1) Development of the evaluation tool: the literature was searched, experts in the field of myofascial therapy were consulted, and pilot testing was performed. 2) Thirty patients (63% had a mastectomy, 37% breast-conserving surgery and 97% radiotherapy) with myofascial adhesions were evaluated with the developed tool by 2 independent raters. The weighted kappa (WK) and the intra-class correlation coefficient (ICC) were calculated. 1) The evaluation tool for Myofascial Adhesions in Patients with Breast Cancer (MAP-BC evaluation tool) consists of the assessment of myofascial adhesions at 7 locations: axillary and breast region scars, musculi pectorales region, axilla, frontal chest wall, lateral chest wall and the inframammary fold. At each location the degree of myofascial adhesion is scored at three levels (skin, superficial and deep) on a 4-point scale (from no adhesions to very stiff adhesions). Additionally, a total score (0-9) is calculated per location, i.e. the sum of its three levels. 2) Interrater agreement for the separate levels was moderate for the axillary and mastectomy scars (WK 0.62-0.73) and good for the scar on the breast (WK >0.75). Moderate agreement was reached for almost all levels of the non-scar locations. Interrater reliability of the total scores was highest for the scars (ICC 0.82-0.99). At non-scar locations good interrater reliability was reached, except for the inframammary fold (ICC = 0.71). The total scores of all locations of the MAP-BC evaluation tool had good to excellent interrater reliability, except for the inframammary fold, which reached only moderate reliability.

  4. Assessing Assessment: Evaluating Outcomes and Reliabilities of Grammar, Math, and Writing Skill Measures in an Introductory Journalism Course

    Science.gov (United States)

    Farwell, Tricia M.; Alligood, Leon; Fitzgerald, Sharon; Blake, Ken

    2016-01-01

    This article introduces an objective grammar and math assessment and evaluates the assessment's outcome and reliability when fielded among eighty-one students in media writing courses. In addition, the article proposes a rubric for grading straight news leads and compares the rubric's reliability with the reliability of rating straight news leads…

  5. Evaluation and Design Tools for the Reliability of Wind Power Converter System

    DEFF Research Database (Denmark)

    Ma, Ke; Zhou, Dao; Blaabjerg, Frede

    2015-01-01

    As a key part of the wind turbine system, the power electronic converter is proven to have high failure rates. At the same time, failure of the wind power converter is becoming increasingly unacceptable because of the quick growth in capacity, the remoteness of the locations to reach, and the strong impact on the power...... grid. As a result, correct assessment of the reliable performance of power electronics is a crucial and emerging need; the assessment is essential for design improvement, as well as for the extension of converter lifetime and reduction of energy cost. Unfortunately, there still exists a lack...... of suitable physics-of-failure based evaluation tools for a reliability assessment in power electronics. In this paper, an advanced tool structure which can acquire various reliability metrics of the wind power converter is proposed. The tool is based on failure mechanisms in critical components of the system...

  6. Missing Data Imputation of Solar Radiation Data under Different Atmospheric Conditions

    Directory of Open Access Journals (Sweden)

    Concepción Crespo Turrado

    2014-10-01

    Full Text Available Global solar broadband irradiance on a planar surface is measured at weather stations by pyranometers. In the case of the present research, solar radiation values from nine meteorological stations of the MeteoGalicia real-time observational network, captured and stored every ten minutes, are considered. In this kind of record, the lack of data and/or the presence of wrong values adversely affects any time series study. Consequently, when this occurs, a data imputation process must be performed in order to replace missing data with estimated values. This paper aims to evaluate the multivariate imputation of ten-minute scale data by means of the chained equations method (MICE). This method allows the network itself to impute the missing or wrong data of a solar radiation sensor, by using either all or just a group of the measurements of the remaining sensors. Very good results have been obtained with the MICE method in comparison with other methods employed in this field, such as Inverse Distance Weighting (IDW) and Multiple Linear Regression (MLR). The average RMSE value of the predictions for the MICE algorithm was 13.37%, while that for MLR was 28.19% and that for IDW was 31.68%.
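
    The chained-equations idea, each sensor's series regressed in turn on the remaining sensors, can be sketched with scikit-learn's IterativeImputer, used here as a stand-in for the MICE implementation the authors worked with; the readings below are illustrative.

      import numpy as np
      from sklearn.experimental import enable_iterative_imputer  # noqa: F401
      from sklearn.impute import IterativeImputer

      # MICE-style imputation: each column (sensor) is regressed on the
      # remaining columns in round-robin fashion, so the station network
      # effectively "imputes itself", much as described above.
      readings = np.array([[520.0, 515.0, np.nan, 498.0],
                           [610.0, np.nan, 605.0, 590.0],
                           [450.0, 447.0, 452.0, np.nan],
                           [530.0, 524.0, 531.0, 510.0]])  # rows: times
      imputer = IterativeImputer(max_iter=10, random_state=0)
      print(imputer.fit_transform(readings))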

  7. [Medical expert reports in chest disease; the question of imputability of death].

    Science.gov (United States)

    Martinet, Y

    2011-05-01

    In the course of an investigation, judicial or not, the expert opinion encompasses several questions of a different nature, including the following: "did the patient die of a disease he/she was supposed to suffer from at the time of death?" Based on personal experience over one year in 2008, the goal of this paper is to tackle this question of imputability, which was asked in respect of 12 investigations, including ten of occupational diseases, one of nosocomial infection and one of an iatrogenic accident. Only two autopsies were carried out; one autopsy refusal was reported. In five out of 12 cases, the imputability of death to an occupational disease or an iatrogenic accident was considered by the expert to be certain in one case, very probable in two cases, and possible in two cases; in seven out of 12 cases, imputability of death was unlikely, since the cause of death was unknown in two cases, or was not the suggested cause in five cases. The discussion considers several arguments that can help answer this question: evaluation of the vital prognosis of the disease, the importance of the quality of medical records, the contributions and limits of autopsy findings, deaths that result from multiple causes, and the concept of aggravating circumstances. Copyright © 2011 SPLF. Published by Elsevier Masson SAS. All rights reserved.

  8. Which missing value imputation method to use in expression profiles: a comparative study and two selection schemes

    Directory of Open Access Journals (Sweden)

    Lotz Meredith J

    2008-01-01

    Full Text Available Abstract Background Gene expression data frequently contain missing values; however, most downstream analyses for microarray experiments require complete data. In the literature many methods have been proposed to estimate missing values via information of the correlation patterns within the gene expression matrix. Each method has its own advantages, but the specific conditions for which each method is preferred remain largely unclear. In this report we describe an extensive evaluation of eight current imputation methods on multiple types of microarray experiments, including time series, multiple exposures, and multiple exposures × time series data. We then introduce two complementary selection schemes for determining the most appropriate imputation method for any given data set. Results We found that the optimal imputation algorithms (LSA, LLS, and BPCA) are all highly competitive with each other, and that no method is uniformly superior in all the data sets we examined. The success of each method can also depend on the underlying "complexity" of the expression data, where we take complexity to indicate the difficulty in mapping the gene expression matrix to a lower-dimensional subspace. We developed an entropy measure to quantify the complexity of expression matrices and found that, by incorporating this information, the entropy-based selection (EBS) scheme is useful for selecting an appropriate imputation algorithm. We further propose a simulation-based self-training selection (STS) scheme. This technique has been used previously for microarray data imputation, but for different purposes. The scheme selects the optimal or near-optimal method with high accuracy but at an increased computational cost. Conclusion Our findings provide insight into the problem of which imputation method is optimal for a given data set. Three top-performing methods (LSA, LLS and BPCA) are competitive with each other. Global-based imputation methods (PLS, SVD, BPCA
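
    One common way to make such an entropy measure concrete is the SVD-based entropy of the normalized singular-value spectrum, which is low when the matrix maps well onto a low-dimensional subspace; the paper's exact definition may differ in normalization.

      import numpy as np

      # Hedged sketch: Shannon entropy of the normalized singular-value
      # "energy" spectrum as a complexity score in [0, 1]; low values mean
      # the matrix is well captured by a low-dimensional subspace.
      def svd_entropy(expr):
          s = np.linalg.svd(np.asarray(expr, dtype=float), compute_uv=False)
          p = (s ** 2) / (s ** 2).sum()       # relative energy per component
          p = p[p > 0]
          return -(p * np.log(p)).sum() / np.log(len(s))

      rng = np.random.default_rng(0)
      low_rank = rng.random((100, 3)) @ rng.random((3, 40))  # "simple" matrix
      noisy = rng.random((100, 40))                          # "complex" matrix
      print(svd_entropy(low_rank), svd_entropy(noisy))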

  9. Differential Evolution Based Intelligent System State Search Method for Composite Power System Reliability Evaluation

    Science.gov (United States)

    Bakkiyaraj, Ashok; Kumarappan, N.

    2015-09-01

    This paper presents a new approach for evaluating the reliability indices of a composite power system that adopts the binary differential evolution (BDE) algorithm in the search mechanism used to select system states. These states, also called dominant states, have large state probabilities and the higher load curtailments necessary to maintain the real power balance. A chromosome of the BDE algorithm represents a system state. BDE is not applied in its traditional role of optimizing a nonlinear objective function, but is used as a tool for exploring a larger number of dominant states by producing new chromosomes, mutant vectors and trial vectors based on the fitness function. The searched system states are used to evaluate annualized system and load-point reliability indices. The proposed search methodology is applied to the RBTS and IEEE-RTS test systems and the results are compared with other approaches. This approach evaluates indices similar to those of existing methods while analyzing a smaller number of system states.
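
    A minimal sketch of the binary-DE state search is given below: chromosomes are component up/down vectors, a sigmoid maps the usual continuous DE mutant into bit probabilities, and a placeholder fitness (not the paper's definition) steers the search toward probable, severe states.

      import numpy as np

      rng = np.random.default_rng(0)

      # Hedged sketch of a binary-DE state search: 1 = component failed.
      def bde_search(fitness, n_comp=10, pop_size=20, n_gen=50, f=0.8, cr=0.5):
          pop = (rng.random((pop_size, n_comp)) < 0.1).astype(float)
          visited = set()
          for _ in range(n_gen):
              for i in range(pop_size):
                  a, b, c = pop[rng.choice(pop_size, size=3, replace=False)]
                  prob = 1.0 / (1.0 + np.exp(-(a + f * (b - c))))  # sigmoid
                  mutant = (rng.random(n_comp) < prob).astype(float)
                  trial = np.where(rng.random(n_comp) < cr, mutant, pop[i])
                  if fitness(trial) >= fitness(pop[i]):
                      pop[i] = trial
                  visited.add(tuple(trial.astype(int)))
          return visited                 # candidate states for the indices

      # Placeholder fitness: reward states that are probable *and* severe
      p_fail = np.full(10, 0.05)
      def toy_fitness(state):
          prob = np.prod(np.where(state == 1, p_fail, 1 - p_fail))
          return prob * state.sum()      # state.sum() proxies curtailment

      print(len(bde_search(toy_fitness)))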

  10. Reliability of a Scoring System for Qualitative Evaluation of Lymphoscintigraphy of the Lower Extremities.

    Science.gov (United States)

    Ebrahim, Mojgan; Savitcheva, Irina; Axelsson, Rimma

    2017-09-01

    Lymphoscintigraphy is an imaging technique to diagnose and characterize the severity of edema in the upper and lower extremities. In lymphoscintigraphy, a scoring system can increase the ability to differentiate between diagnoses, but the use of any scoring system requires sufficient reliability. Our aim was to determine the inter- and intraobserver reliability of a proposed scoring system for visual interpretation of lymphoscintigrams of the lower extremities. Methods: The lymphoscintigrams of 81 persons were randomly selected from our database for retrospective evaluation. Two nuclear medicine physicians scored these scans according to the 8 criteria of a proposed scoring system for visual interpretation of lymphoscintigrams of the lower extremities. Each scan was scored twice, 3 mo apart. The total score was the sum of the scores for all criteria, with a potential range of 0 (normal lymphatic drainage) to 58 (severe lymphatic impairment). The intra- and interobserver reliability of the scoring system was determined using the Wilcoxon signed-rank test, percentage of agreement, weighted κ, and the intraclass correlation coefficient with 95% confidence interval. In addition, differences in total scores between and within observers were determined for 7 categories. Results: Only nonsignificant differences were found between observers. Percentage agreement was high or very high, at 82.7%-99.4% between observers and 84.6%-99.4% within observers. For each criterion of the scoring system, the κ-correlations showed moderate to very good inter- or intraobserver reliability. The total scores for all criteria had good inter- and intraobserver reliability. In the interobserver comparison, 66% and 64% of the differences in total scores were within ±1 scale point (-1, +1); in the intraobserver comparison, 68% and 72% were within ±1 scale point. Conclusion: The proposed scoring system is a reliable tool for visual qualitative

  11. Student Practice Evaluation Form-Revised Edition online comment bank: development and reliability analysis.

    Science.gov (United States)

    Rodger, Sylvia; Turpin, Merrill; Copley, Jodie; Coleman, Allison; Chien, Chi-Wen; Caine, Anne-Maree; Brown, Ted

    2014-08-01

    The reliable evaluation of occupational therapy students completing practice education placements along with provision of appropriate feedback is critical for both students and for universities from a quality assurance perspective. This study describes the development of a comment bank for use with an online version of the Student Practice Evaluation Form-Revised Edition (SPEF-R Online) and investigates its reliability. A preliminary bank of 109 individual comments (based on previous students' placement performance) was developed via five stages. These comments reflected all 11 SPEF-R domains. A purpose-designed online survey was used to examine the reliability of the comment bank. A total of 37 practice educators returned surveys, 31 of which were fully completed. Participants were asked to rate each individual comment using the five-point SPEF-R rating scale. One hundred and two of 109 comments demonstrated satisfactory agreement with their respective default ratings that were determined by the development team. At each domain level, the intra-class correlation coefficients (ranging between 0.86 and 0.96) also demonstrated good to excellent inter-rater reliability. There were only seven items that required rewording prior to inclusion in the final SPEF-R Online comment bank. The development of the SPEF-R Online comment bank offers a source of reliable comments (consistent with the SPEF-R rating scale across different domains) and aims to assist practice educators in providing reliable and timely feedback to students in a user-friendly manner. © 2014 Occupational Therapy Australia.

  12. The Utility of Nonparametric Transformations for Imputation of Survey Data

    Directory of Open Access Journals (Sweden)

    Robbins Michael W.

    2014-12-01

    Full Text Available Missing values present a prevalent problem in the analysis of establishment survey data. Multivariate imputation algorithms (which are used to fill in missing observations) tend to have the common limitation that imputations for continuous variables are sampled from Gaussian distributions. This limitation is addressed here through the use of robust marginal transformations. Specifically, kernel-density and empirical distribution-type transformations are discussed and are shown to have favorable properties when used for imputation of complex survey data. Although such techniques have wide applicability (i.e., they may be easily applied in conjunction with a wide array of imputation techniques), the proposed methodology is applied here with an algorithm for imputation in the USDA’s Agricultural Resource Management Survey. Data analysis and simulation results are used to illustrate the specific advantages of the robust methods when compared to the fully parametric techniques and to other relevant techniques such as predictive mean matching. To summarize, transformations based upon parametric densities are shown to distort several data characteristics in circumstances where the parametric model is ill fit; however, no circumstances are found in which the transformations based upon parametric models outperform the nonparametric transformations. As a result, the transformation based upon the empirical distribution (which is the most computationally efficient) is recommended over the other transformation procedures in practice.
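
    Under one plausible reading, the empirical-distribution transformation amounts to a rank-based normal-scores mapping: each margin is sent to Gaussian scores before a Gaussian-based imputer runs, then mapped back through the empirical quantile function; a minimal sketch:

      import numpy as np
      from scipy import stats

      # Send a margin to Gaussian scores via ranks, impute on that scale,
      # then map back through the empirical quantile function.
      def to_normal_scores(x):
          u = stats.rankdata(x) / (len(x) + 1)     # empirical CDF in (0, 1)
          return stats.norm.ppf(u)

      def from_normal_scores(z, x_ref):
          return np.quantile(x_ref, stats.norm.cdf(z))  # empirical inverse

      skewed = np.random.default_rng(1).lognormal(size=500)
      z = to_normal_scores(skewed)          # roughly standard normal margin
      back = from_normal_scores(z, skewed)  # returns to the original scale
      print(round(z.mean(), 3), round(z.std(), 3))      # ~0 and ~1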

  13. Imputation of genotypes in Danish two-way crossbred pigs using low density panels

    DEFF Research Database (Denmark)

    Xiang, Tao; Christensen, Ole Fredslund; Legarra, Andres

    of imputation from 5K SNPs to 7K SNPs on Danish Landrace, Yorkshire, and crossbred Landrace-Yorkshire were compared. In conclusion, genotype imputation on crossbreds performs equally well as in purebreds, when parental breeds are used as the reference panel. When the size of reference is considerably large......, it is redundant to use a combined reference to impute the purebred because a within breed reference can already ensure an outstanding imputation accuracy, but in crossbreds, using a combined reference increased the imputation accuracy greatly. Highly accurate imputed 60K crossbred genotypes were achieved from 7K...

  14. Aerosol Optical Depth as a Measure of Particulate Exposure Using Imputed Censored Data, and Relationship with Childhood Asthma Hospital Admissions for 2004 in Athens, Greece

    Directory of Open Access Journals (Sweden)

    Gary Higgs

    2015-01-01

    Full Text Available An understanding of human health implications from atmosphere exposure is a priority in both the geographic and the public health domains. The unique properties of geographic tools for remote sensing of the atmosphere offer a distinct ability to characterize and model aerosols in the urban atmosphere for evaluation of impacts on health. Asthma, as a manifestation of upper respiratory disease prevalence, is a good example of the potential interface of geographic and public health interests. The current study focused on Athens, Greece during the year of 2004 and (1 demonstrates a systemized process for aligning data obtained from satellite aerosol optical depth (AOD with geographic location and time, (2 evaluates the ability to apply imputation methods to censored data, and (3 explores whether AOD data can be used satisfactorily to investigate the association between AOD and health impacts using an example of hospital admission for childhood asthma. This work demonstrates the ability to apply remote sensing data in the evaluation of health outcomes, that the alignment process for remote sensing data is readily feasible, and that missing data can be imputed with a sufficient degree of reliability to develop complete datasets. Individual variables demonstrated small but significant effect levels on hospital admission of children for AOD, nitrogen oxides (NO x , relative humidity (rH, temperature, smoke, and inversely for ozone. However, when applying a multivari-able model, an association with asthma hospital admissions and air quality could not be demonstrated. This work is promising and will be expanded to include additional years.

  15. Reliability and validity of the Balance Evaluation Systems Test (BESTest) in people with subacute stroke.

    Science.gov (United States)

    Chinsongkram, Butsara; Chaikeeree, Nithinun; Saengsirisuwan, Vitoon; Viriyatharakij, Nitaya; Horak, Fay B; Boonsinsukh, Rumpa

    2014-11-01

    The Balance Evaluation Systems Test (BESTest) is a new clinical balance assessment tool, but it has never been validated in patients with subacute stroke. The purpose of this study was to examine the reliability and validity of the BESTest in patients with subacute stroke. This was an observational reliability and validity study. Twelve patients participated in the interrater and intrarater reliability study. Convergent validity was investigated in 70 patients using the Berg Balance Scale (BBS), Postural Assessment Scale for Stroke (PASS), Community Balance and Mobility Scale (CB&M), and Mini-BESTest. The receiver operating characteristic curve was used to calculate the sensitivity, specificity, and accuracy of the BESTest, Mini-BESTest, and BBS in classifying participants into low functional ability (LFA) and high functional ability (HFA) groups based on Fugl-Meyer Assessment motor subscale scores. The BESTest showed excellent intrarater reliability and interrater reliability (intraclass correlation coefficient=.99) and was highly correlated with the BBS (Spearman r=.96), PASS (r=.96), CB&M (r=.91), and Mini-BESTest (r=.96), indicating excellent convergent validity. No floor or ceiling effects were observed with the BESTest. In contrast, the Mini-BESTest and CB&M had a floor effect in the LFA group, and the BBS and PASS demonstrated ceiling effects in the HFA group. In addition, the BESTest was as accurate as the BBS and Mini-BESTest in separating participants into HFA and LFA groups. Whether the results are generalizable to patients with chronic stroke is unknown. The BESTest is reliable, valid, sensitive, and specific in assessing balance in people with subacute stroke across all levels of functional disability. © 2014 American Physical Therapy Association.

  16. TRIP: An interactive retrieving-inferring data imputation approach

    KAUST Repository

    Li, Zhixu

    2016-06-25

    Data imputation aims at filling in missing attribute values in databases. Existing imputation approaches for nonquantitative string data can be roughly put into two categories: (1) inferring-based approaches [2], and (2) retrieving-based approaches [1]. Specifically, the inferring-based approaches find substitutes or estimations for the missing values from the complete part of the data set. However, they typically fall short in filling in unique missing attribute values which do not exist in the complete part of the data set [1]. The retrieving-based approaches resort to external resources for help, formulating proper web search queries to retrieve web pages containing the missing values from the Web, and then extracting the missing values from the retrieved web pages [1]. This web-based retrieving approach reaches high imputation precision and recall, but on the other hand issues a large number of web search queries, which brings a large overhead [1]. © 2016 IEEE.

  17. Development of Probabilistic Reliability Models of Photovoltaic System Topologies for System Adequacy Evaluation

    Directory of Open Access Journals (Sweden)

    Ahmad Alferidi

    2017-02-01

    Full Text Available The contribution of solar power in electric power systems has been increasing rapidly due to its environmentally friendly nature. Photovoltaic (PV systems contain solar cell panels, power electronic converters, high power switching and often transformers. These components collectively play an important role in shaping the reliability of PV systems. Moreover, the power output of PV systems is variable, so it cannot be controlled as easily as conventional generation due to the unpredictable nature of weather conditions. Therefore, solar power has a different influence on generating system reliability compared to conventional power sources. Recently, different PV system designs have been constructed to maximize the output power of PV systems. These different designs are commonly adopted based on the scale of a PV system. Large-scale grid-connected PV systems are generally connected in a centralized or a string structure. Central and string PV schemes are different in terms of connecting the inverter to PV arrays. Micro-inverter systems are recognized as a third PV system topology. It is therefore important to evaluate the reliability contribution of PV systems under these topologies. This work utilizes a probabilistic technique to develop a power output model for a PV generation system. A reliability model is then developed for a PV integrated power system in order to assess the reliability and energy contribution of the solar system to meet overall system demand. The developed model is applied to a small isolated power unit to evaluate system adequacy and capacity level of a PV system considering the three topologies.
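
    The adequacy assessment can be caricatured with a small Monte Carlo: sample inverter/string availability and an irradiance factor, compare the available PV output with demand, and estimate a loss-of-load probability. All component names and numbers below are illustrative assumptions, not the paper's model.

      import numpy as np

      rng = np.random.default_rng(42)

      # Hedged sketch: loss-of-load probability for a small isolated unit
      # served by a string-structured PV system with imperfect inverters.
      def lolp(n_trials=100_000, n_strings=4, string_kw=25.0,
               inverter_avail=0.98, demand_kw=60.0):
          shortfalls = 0
          for _ in range(n_trials):
              up = rng.random(n_strings) < inverter_avail   # string states
              irr = np.clip(rng.normal(0.6, 0.25), 0.0, 1.0)  # irradiance
              shortfalls += string_kw * irr * up.sum() < demand_kw
          return shortfalls / n_trials

      print(lolp())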

  18. Utilizing genotype imputation for the augmentation of sequence data.

    Directory of Open Access Journals (Sweden)

    Brooke L Fridley

    2010-06-01

    Full Text Available In recent years, capabilities for genotyping large sets of single nucleotide polymorphisms (SNPs has increased considerably with the ability to genotype over 1 million SNP markers across the genome. This advancement in technology has led to an increase in the number of genome-wide association studies (GWAS for various complex traits. These GWAS have resulted in the implication of over 1500 SNPs associated with disease traits. However, the SNPs identified from these GWAS are not necessarily the functional variants. Therefore, the next phase in GWAS will involve the refining of these putative loci.A next step for GWAS would be to catalog all variants, especially rarer variants, within the detected loci, followed by the association analysis of the detected variants with the disease trait. However, sequencing a locus in a large number of subjects is still relatively expensive. A more cost effective approach would be to sequence a portion of the individuals, followed by the application of genotype imputation methods for imputing markers in the remaining individuals. A potentially attractive alternative option would be to impute based on the 1000 Genomes Project; however, this has the drawbacks of using a reference population that does not necessarily match the disease status and LD pattern of the study population. We explored a variety of approaches for carrying out the imputation using a reference panel consisting of sequence data for a fraction of the study participants using data from both a candidate gene sequencing study and the 1000 Genomes Project.Imputation of genetic variation based on a proportion of sequenced samples is feasible. Our results indicate the following sequencing study design guidelines which take advantage of the recent advances in genotype imputation methodology: Select the largest and most diverse reference panel for sequencing and genotype as many "anchor" markers as possible.

  19. Evaluation of validity and reliability of the Persian version of the functional index of hand osteoarthritis.

    Science.gov (United States)

    Kordi Yoosefinejad, Amin; Motealleh, Alireza; Babakhani, Mohammad

    2017-05-01

    The Functional index of hand osteoarthritis (FIHOA) is a commonly used patient-reported outcome questionnaire designed to measure function in patients with hand osteoarthritis. The objective of this study was to evaluate the validity and reliability of the Persian version of the FIHOA. The Persian-translated version of FIHOA was administered to 72 native Persian-speaking patients in Iran with hand osteoarthritis. Thirty-six of the patients completed the questionnaire on two occasions 1 week apart. The physical component of the SF-36 and a numerical rating scale were used to evaluate the construct validity of the Persian version of FIHOA. Internal consistency was high (Cronbach's alpha = 0.89). Test-retest reliability for the total score was excellent (weighted kappa = 0.89, 95% CI 0.79-0.94). A significant positive correlation between total FIHOA score and numerical rating scale (r = 0.70) and a significant negative correlation between total FIHOA score and the physical component scale of the SF-36 (r = -0.76) were observed. The Persian version of the FIHOA showed adequate validity and reliability to evaluate functional disability in Persian-speaking patients with hand osteoarthritis.

  20. Reproducibility, Reliability, and Validity of Fuchsin-Based Beads for the Evaluation of Masticatory Performance.

    Science.gov (United States)

    Sánchez-Ayala, Alfonso; Farias-Neto, Arcelino; Vilanova, Larissa Soares Reis; Costa, Marina Abrantes; Paiva, Ana Clara Soares; Carreiro, Adriana da Fonte Porto; Mestriner-Junior, Wilson

    2016-08-01

    Rehabilitation of masticatory function is inherent to prosthodontics; however, despite the various techniques for evaluating oral comminution, the methodological suitability of these has not been completely studied. The aim of this study was to determine the reproducibility, reliability, and validity of a test food based on fuchsin beads for masticatory function assessment. Masticatory performance was evaluated in 20 dentate subjects (mean age, 23.3 years) using two kinds of test foods and methods: fuchsin beads with ultraviolet-visible spectrophotometry, and silicone cubes with multiple sieving as the gold standard. Three examiners conducted five masticatory performance trials with each test food. Reproducibility of the results from both test foods was separately assessed using the intraclass correlation coefficient (ICC). Reliability and validity of the fuchsin bead data were measured by comparing the average mean of absolute differences and the measurement means, respectively, against the silicone cube data using the paired Student's t-test (α = 0.05). Intraexaminer and interexaminer ICCs for the fuchsin bead values were 0.65 and 0.76, indicating good and excellent reproducibility of masticatory performance, respectively; however, the reliability and validity were low, because fuchsin beads do not measure the grinding capacity of masticatory function as silicone cubes do; instead, this test food describes the crushing potential of the teeth. Thus, the two kinds of test foods evaluate different properties of masticatory capacity, confirming fuchsin beads as a useful tool for this purpose. © 2015 by the American College of Prosthodontists.

  1. Development and Reliability Evaluation of the Movement Rating Instrument for Virtual Reality Video Game Play.

    Science.gov (United States)

    Levac, Danielle; Nawrotek, Joanna; Deschenes, Emilie; Giguere, Tia; Serafin, Julie; Bilodeau, Martin; Sveistrup, Heidi

    2016-06-01

    Virtual reality active video games are increasingly popular physical therapy interventions for children with cerebral palsy. However, physical therapists require educational resources to support decision making about game selection to match individual patient goals. Quantifying the movements elicited during virtual reality active video game play can inform individualized game selection in pediatric rehabilitation. The objectives of this study were to develop and evaluate the feasibility and reliability of the Movement Rating Instrument for Virtual Reality Game Play (MRI-VRGP). Item generation occurred through an iterative process of literature review and sample videotape viewing. The MRI-VRGP includes 25 items quantifying upper extremity, lower extremity, and total body movements. A total of 176 videotaped 90-second game play sessions involving 7 typically developing children and 4 children with cerebral palsy were rated by 3 raters trained in MRI-VRGP use. Children played 8 games on 2 virtual reality and active video game systems. Intraclass correlation coefficients (ICCs) determined intrarater and interrater reliability. Excellent intrarater reliability was evidenced by ICCs of >0.75 for 17 of the 25 items across the 3 raters. Interrater reliability estimates were less precise. Excellent interrater reliability was achieved for far-reach upper extremity movements (ICC=0.92 for right and ICC=0.90 for left) and for squat (ICC=0.80) and jump items (ICC=0.99), with 9 items achieving ICCs of >0.70, 12 items achieving ICCs of between 0.40 and 0.70, and 4 items achieving poor reliability (close-reach upper extremity: ICC=0.14 for right and ICC=0.07 for left; single-leg stance: ICC=0.55 for right and ICC=0.27 for left). Poor video quality, differing item interpretations between raters, and difficulty quantifying the high-speed movements involved in game play affected reliability. With item definition clarification and further psychometric property evaluation, the MRI-VRGP holds promise as a reliable instrument for quantifying movement during virtual reality and active video game play.

  2. Dynamic Reliability Evaluation of Road Vehicle Subjected to Turbulent Crosswinds Based on Monte Carlo Simulation

    Directory of Open Access Journals (Sweden)

    Bin Wang

    2017-01-01

    Full Text Available As a vehicle moves on roads, a complex vibration system is formed under the collective excitations of random crosswinds and road surface roughness, together with handling by the driver. Several deterministic numerical models have been proposed to assess the safety of running road vehicles under crosswinds. In reality, however, natural wind is a random process in the time domain due to turbulence, and the surface roughness of a road is a random process in the spatial domain. The safety of a running vehicle is therefore a problem of dynamic reliability under random-process excitation. This study explores the dynamic reliability of a road vehicle subjected to turbulent crosswinds. Based on a nonlinear vibration system, the dynamic responses of a road vehicle are simulated to obtain its dynamic reliability. Monte Carlo Simulation with Latin Hypercube Sampling is then applied to the relevant random variables, including the vehicle weight, road friction coefficient, and driver parameter, to examine their effects. Finally, a distribution model of the dynamic reliability and a corresponding index for wind-induced vehicle accidents considering these random processes and variables are proposed and employed to evaluate the safety of the running vehicle.
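
    As a rough illustration of the sampling step described above, the sketch below draws a Latin Hypercube sample over the three random variables named in the abstract and pushes it through a placeholder limit-state function. The distributions and the function g() are assumptions for demonstration, not the paper's vehicle model.

    ```python
    # Monte Carlo simulation with Latin Hypercube Sampling (illustrative only).
    import numpy as np
    from scipy.stats import qmc, norm

    sampler = qmc.LatinHypercube(d=3, seed=1)
    u = sampler.random(n=10_000)                             # uniform samples in [0, 1)^3

    # Map to physical variables (assumed distributions for illustration).
    weight   = norm(loc=1800.0, scale=150.0).ppf(u[:, 0])    # vehicle weight [kg]
    friction = norm(loc=0.8,    scale=0.05).ppf(u[:, 1])     # road friction coefficient
    driver   = norm(loc=1.0,    scale=0.1).ppf(u[:, 2])      # driver parameter

    def g(w, mu, d):
        # Hypothetical safety margin: positive = safe, negative = accident.
        return mu * w * d - 0.75 * w

    p_fail = np.mean(g(weight, friction, driver) < 0)
    print(f"estimated accident probability = {p_fail:.4f}")
    ```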

  3. Evaluation of potential emission spectra for the reliable classification of fluorescently coded materials

    Science.gov (United States)

    Brunner, Siegfried; Kargel, Christian

    2011-06-01

    The conservation and efficient use of natural and especially strategic resources like oil and water have become global issues, which increasingly initiate environmental and political activities for comprehensive recycling programs. To effectively reutilize oil-based materials necessary in many industrial fields (e.g. chemical and pharmaceutical industry, automotive, packaging), appropriate methods for a fast and highly reliable automated material identification are required. One non-contacting, color- and shape-independent new technique that eliminates the shortcomings of existing methods is to label materials like plastics with certain combinations of fluorescent markers ("optical codes", "optical fingerprints") incorporated during manufacture. Since time-resolved measurements are complex (and expensive), fluorescent markers must be designed that possess unique spectral signatures. The number of identifiable materials increases with the number of fluorescent markers that can be reliably distinguished within the limited wavelength band available. In this article we shall investigate the reliable detection and classification of fluorescent markers with specific fluorescence emission spectra. These simulated spectra are modeled based on realistic fluorescence spectra acquired from material samples using a modern VNIR spectral imaging system. In order to maximize the number of materials that can be reliably identified, we evaluate the performance of 8 classification algorithms based on different spectral similarity measures. The results help guide the design of appropriate fluorescent markers, optical sensors and the overall measurement system.
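
    One spectral similarity measure commonly used for this kind of classification is the spectral angle; the sketch below classifies a noisy synthetic emission spectrum against three made-up marker signatures. The marker spectra and noise level are illustrative assumptions, not the article's data.

    ```python
    # Nearest-reference classification by spectral angle (synthetic spectra).
    import numpy as np

    wl = np.linspace(400, 800, 200)                       # wavelength grid [nm]
    gauss = lambda mu, s: np.exp(-0.5 * ((wl - mu) / s) ** 2)

    references = {
        "marker_A": gauss(450, 20) + 0.4 * gauss(520, 25),
        "marker_B": gauss(480, 20),
        "marker_C": gauss(550, 30) + 0.2 * gauss(620, 25),
    }

    def spectral_angle(a, b):
        # Angle between spectra viewed as vectors; smaller = more similar.
        cos = np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))
        return np.arccos(np.clip(cos, -1.0, 1.0))

    measured = references["marker_B"] + np.random.default_rng(2).normal(0, 0.05, wl.size)
    best = min(references, key=lambda k: spectral_angle(measured, references[k]))
    print("classified as:", best)
    ```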

  4. Reliability and Agreement of Neck Functional Capacity Evaluation Tests in Patients With Chronic Multifactorial Neck Pain.

    Science.gov (United States)

    Reneman, M F; Roelofs, M; Schiphorst Preuper, H R

    2017-07-01

    To analyze test-retest reliability and agreement, and to explore the safety of neck functional capacity evaluation (Neck-FCE) tests in patients with chronic multifactorial neck pain. Test-retest; 2 FCE sessions were held with a 2-week interval. University-based outpatient rehabilitation center. Individuals (N=18; 14 women) with a mean age of 34 years. Not applicable. The Neck-FCE protocol consists of 6 tests: lifting waist to overhead (kg), 2-handed carrying (kg), overhead working (s), bending and overhead reaching (s), and repetitive side reaching (left and right) (s). Intraclass correlation coefficients (ICCs) and limits of agreement (LoA) were calculated. ICC point estimates between .75 and .90 were considered as good, and >.90 were considered as excellent reliability. ICC point estimates ranged between .39 and .96. Ratios of the LoA ranged between 32.0% and 56.5%. Mean ± SD numeric rating scale pain scores in the neck and shoulder 24 hours after the test were 6.7±2.6 and 6.3±3.0, respectively. Based on ICC point estimates and 95% confidence intervals, 3 tests had excellent reliability and 3 had poor reliability. LoA were substantial in all 6 tests. Safety was confirmed. Copyright © 2016 American Congress of Rehabilitation Medicine. Published by Elsevier Inc. All rights reserved.
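
    The agreement statistics reported here (limits of agreement and their ratio) can be computed in a few lines. The sketch below uses hypothetical two-session lifting scores, and the percentage "ratio of the LoA" is one plausible definition, since the abstract does not spell out the formula.

    ```python
    # Bland-Altman limits of agreement for two FCE sessions (hypothetical data).
    import numpy as np

    rng = np.random.default_rng(3)
    session1 = rng.normal(20.0, 5.0, 18)            # lifting test, session 1 [kg]
    session2 = session1 + rng.normal(0.0, 2.0, 18)  # session 2, two weeks later

    diff = session2 - session1
    loa_low = diff.mean() - 1.96 * diff.std(ddof=1)
    loa_high = diff.mean() + 1.96 * diff.std(ddof=1)

    # "Ratio of the LoA": LoA width expressed relative to the grand mean,
    # one way such a percentage could be defined.
    ratio = (loa_high - loa_low) / np.mean((session1 + session2) / 2) * 100
    print(f"LoA = [{loa_low:.2f}, {loa_high:.2f}] kg, ratio = {ratio:.1f}%")
    ```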

  5. A Human Reliability Based Usability Evaluation Method for Safety-Critical Software

    Energy Technology Data Exchange (ETDEWEB)

    Phillippe Palanque; Regina Bernhaupt; Ronald Boring; Chris Johnson

    2006-04-01

    Recent years have seen an increasing use of sophisticated interaction techniques, including in the field of safety-critical interactive software [8]. The use of such techniques has been required in order to increase the bandwidth between users and systems and thus to help users deal efficiently with increasingly complex systems. These techniques come from research and innovation in the field of human-computer interaction (HCI). A significant effort is currently being undertaken by the HCI community to apply and extend current usability evaluation techniques to these new kinds of interaction techniques. However, very little has been done to improve the reliability of software offering these kinds of interaction techniques. Even testing basic graphical user interfaces remains a challenge that has rarely been addressed in the field of software engineering [9]. Yet the non-reliability of interactive software can jeopardize usability evaluation by producing unexpected or undesired behaviors. The aim of this SIG is to provide a forum for both researchers and practitioners interested in testing interactive software. Our goal is to define a roadmap of activities to cross-fertilize usability and reliability testing of these kinds of systems and to minimize duplicated effort in both communities.

  6. Reliability Evaluation of a Distribution Network with Microgrid Based on a Combined Power Generation System

    Directory of Open Access Journals (Sweden)

    Hao Bai

    2015-02-01

    Full Text Available Distributed generation (DG), battery storage (BS), and electric vehicles (EVs) in a microgrid constitute a combined power generation system (CPGS). A CPGS can be applied to the reliability evaluation of a distribution network with microgrids. To model charging load and discharging capacity, respectively, the EVs in a CPGS can be divided into regular EVs and ruleless EVs according to their driving behavior. Based on statistical data for gasoline-fueled vehicles and the probability distributions of charging start instant and charging time, a statistical model can be built to describe the charging load and discharging capacity of ruleless EVs. The charge and discharge curves of regular EVs can be drawn on the basis of a daily dispatch table. The CPGS takes the charge and discharge curves of EVs, daily load, and DG power generation into consideration to calculate its power supply time during islanding. Combined with fault duration, the power supply time during islanding is used to analyze and determine the interruption times and interruption durations of loads in islands. The Sequential Monte Carlo method is then applied to complete the reliability evaluation of the distribution system. The RBTS Bus 4 test system is utilized to illustrate the proposed technique. The effects on system reliability of BS capacity, V2G technology, driving behavior, recharging mode, and penetration of EVs are all investigated.
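
    A minimal sequential Monte Carlo loop of the kind underlying such evaluations is sketched below for a single load point with exponential failure and repair times; the paper's CPGS islanding logic, EV charge/discharge curves, and RBTS Bus 4 network are not reproduced.

    ```python
    # Sequential Monte Carlo sketch for load-point reliability indices.
    import numpy as np

    rng = np.random.default_rng(4)
    lam, mu = 0.5, 87.6          # failures/yr and repairs/yr (MTTR = 100 h)
    YEARS = 10_000

    interruptions, outage_hours = 0, 0.0
    for _ in range(YEARS):
        t = 0.0
        while True:
            t += rng.exponential(1.0 / lam)          # time to next failure [yr]
            if t >= 1.0:
                break
            repair = rng.exponential(8760.0 / mu)    # repair duration [h]
            interruptions += 1
            outage_hours += repair
            t += repair / 8760.0

    print(f"failure frequency = {interruptions / YEARS:.3f} /yr")
    print(f"annual outage time = {outage_hours / YEARS:.2f} h/yr")
    ```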

  7. Validity and reliability of the Mastication Observation and Evaluation (MOE) instrument.

    Science.gov (United States)

    Remijn, Lianne; Speyer, Renée; Groen, Brenda E; van Limbeek, Jacques; Nijhuis-van der Sanden, Maria W G

    2014-07-01

    The Mastication Observation and Evaluation (MOE) instrument was developed to allow objective assessment of a child's mastication process. It contains 14 items and was developed over three Delphi rounds. The present study concerns the further development of the MOE using the COSMIN (Consensus-based Standards for the Selection of Measurement Instruments) and investigated the instrument's internal consistency, inter-observer reliability, construct validity, and floor and ceiling effects. Consumption of three bites of bread and biscuit was evaluated using the MOE. Data from 59 healthy children (6-48 months) and from children with cerebral palsy (24-72 months; 38 for bread, 37 for biscuit) were used. Four items were excluded before analysis due to zero variance. Principal Components Analysis showed one factor with 8 items. Internal consistency was >0.70 (Cronbach's alpha) for both food consistencies and for both groups of children. Inter-observer reliability varied from 0.51 to 0.98 (weighted Gwet's agreement coefficient). The total MOE scores for both groups showed normal distribution for the population. There were no floor or ceiling effects. The revised MOE now contains 8 items that (a) have a consistent concept for mastication and can be scored on a 4-point scale with sufficient reliability and (b) are sensitive to stages of chewing development in young children. The removed items are retained as part of a criterion-referenced list within the MOE. Copyright © 2014 Elsevier Ltd. All rights reserved.

  8. Enhancing Transition to Practice Using a Valid and Reliable Evaluation Tool: Progressive Orientation Level Evaluation (POLE) Tool.

    Science.gov (United States)

    Acuna, Gail K; Yoder, Lina H; Madrigal-Gonzalez, Lizely; Yoder-Wise, Patricia S

    2017-03-01

    Numerous evaluation tools are used to verify nursing competencies, job satisfaction, and leadership qualities separately. One comprehensive tool, the Progressive Orientation Level Evaluation (POLE) tool, measures the increasing complexity of competencies from novice through competent levels. The tool also measures employee satisfaction and engagement to identify potential challenges, stressors, or barriers to transition prior to completion of the post-hiring orientation (on-boarding) process. A prospective cohort design was used to determine reliability of the POLE tool examining new graduate nurses' (NGNs) learning needs using objectives to individualize the length of training as needed. Cronbach's alpha was used to determine internal consistency. The reliability of the instrument was established showing high internal consistency (Cronbach's alpha = .90 to .99). Using a valid and reliable tool provides NGNs and organizations with a way to evaluate the success of stated goals and outcomes from a residency program in a standardized and consistent manner. J Contin Educ Nurs. 2017;48(3):123-128. Copyright 2017, SLACK Incorporated.

  9. A prospective study assessing agreement and reliability of a geriatric evaluation.

    Science.gov (United States)

    Locatelli, Isabella; Monod, Stéfanie; Cornuz, Jacques; Büla, Christophe J; Senn, Nicolas

    2017-07-19

    The present study takes place within a geriatric program aiming at improving the diagnosis and management of geriatric syndromes in primary care. Within this program it was of prime importance to be able to rely on a robust and reproducible geriatric consultation to use as a gold standard for evaluating a primary care brief assessment tool. The specific objective of the present study was thus to assess the agreement and reliability of a comprehensive geriatric consultation. The study was conducted at the outpatient clinic of the Service of Geriatric Medicine, University of Lausanne, Switzerland. All community-dwelling older persons aged 70 years and above were eligible. Patients were excluded if they did not have a primary care physician, were unable to speak French, or had already been assessed by a geriatrician within the last 12 months. A set of 9 geriatricians evaluated 20 patients. Each patient was assessed twice within a 2-month interval. Geriatric consultations were based on a structured evaluation process leading to a rating of the following geriatric conditions: functional, cognitive, visual, and hearing impairment; mood disorders; risk of fall; osteoporosis; malnutrition; and urinary incontinence. Reliability and agreement estimates for each of these items were obtained using a three-way Intraclass Correlation and a three-way Observed Disagreement index. The latter allowed a decomposition of overall disagreement into disagreements due to each source of error variability (visit, rater, and random). Agreement ranged between 0.62 and 0.85. For most domains, geriatrician-related error variability explained an important proportion of disagreement. Reliability ranged between 0 and 0.8. It was poor to moderate for visual impairment, malnutrition, and risk of fall, and good to excellent for functional, cognitive, and hearing impairment, osteoporosis, incontinence, and mood disorders. Six out of nine items of the geriatric consultation described in this study thus showed good to excellent reliability.

  10. An Evaluation of the Reliability, Construct Validity, and Factor Structure of the Static-2002R.

    Science.gov (United States)

    Jung, Sandy; Ennis, Liam; Hermann, Chantal A; Pham, Anna T; Choy, Alberto L; Corabian, Gabriela; Hook, Tarah

    2017-03-01

    The fundamental psychometric properties of the subscales found in the Static-2002R, an actuarial measure of sexual recidivism risk, were evaluated in the current study. Namely, the reliability, concurrent and construct validity, and factor structure of the Static-2002R subscales were examined with a sample of 372 adult male sex offenders. In addition to using validated measures of sexual violence risk to examine concurrent validity, construct-related measures taken from extant risk measures and psychometric tests were correlated with three of the subscales to assess overall construct validity. Moderate support was found for the reliability of the Static-2002R. The concurrent and construct validity of the General Criminality, Persistence of Sexual Offending, and Deviant Sexual Interest subscales were supported. Generally, these findings further support the Static-2002R as a valid sex offender risk appraisal instrument that encompasses multiple distinct, clinically relevant, risk domains.

  11. Evaluating seismic reliability of Reinforced Concrete Bridge in view of their rehabilitation

    Directory of Open Access Journals (Sweden)

    Boubel Hasnae

    2018-01-01

    Full Text Available In this work, a simplified methodology is proposed to evaluate the seismic vulnerability of reinforced concrete bridges through reliability assessment of a stress limit state, with the material stresses and the applied loading assumed to be random variables. It is assumed that only their means and standard deviations are known, while no information is available about their probability densities. The First Order Reliability Method is applied to a response surface representation of the stress limit state, obtained through quadratic polynomial regression of finite element results. A parametric study is then performed regarding the influence of the probability distributions chosen to model the problem uncertainties for the reinforced concrete bridge. It is shown that the probability of failure depends largely on the chosen probability densities, mainly in the useful domain of small failure probabilities.
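
    For the simplest case, a linear limit state g = R - S with independent normal resistance and load, the First Order Reliability Method reduces to a closed form, sketched below with illustrative numbers; the paper itself first fits a quadratic response surface to finite element results before applying FORM.

    ```python
    # Toy first-order reliability computation for a linear stress limit state.
    # Values are illustrative, not taken from the bridge study.
    from scipy.stats import norm

    mu_R, sd_R = 240.0, 24.0    # stress capacity [MPa]
    mu_S, sd_S = 160.0, 32.0    # applied stress  [MPa]

    # Hasofer-Lind reliability index for g = R - S with independent normals.
    beta = (mu_R - mu_S) / (sd_R**2 + sd_S**2) ** 0.5
    p_f = norm.cdf(-beta)
    print(f"reliability index beta = {beta:.2f}, P(failure) = {p_f:.2e}")
    ```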

  12. Multiple Imputation of Item Scores in Test and Questionnaire Data, and Influence on Psychometric Results

    Science.gov (United States)

    van Ginkel, Joost R.; van der Ark, L. Andries; Sijtsma, Klaas

    2007-01-01

    The performance of five simple multiple imputation methods for dealing with missing data were compared. In addition, random imputation and multivariate normal imputation were used as lower and upper benchmark, respectively. Test data were simulated and item scores were deleted such that they were either missing completely at random, missing at…

  13. The reliability and validity of the clinical competence evaluation scale in physical therapy.

    Science.gov (United States)

    Yoshino, Jun; Usuda, Shigeru

    2013-12-01

    [Purpose] To examine the internal consistency, criterion-related validity, factorial validity, and content validity of the Clinical Competence Evaluation Scale in Physical Therapy (CEPT). [Subjects] The subjects were 278 novice physical therapy trainees and 119 tutors from 21 medical facilities. [Methods] The trainees self-evaluated their clinical competences and the tutors evaluated trainee competences using the CEPT. Overall trainee autonomy was evaluated using a visual analog scale (VAS) for self-evaluation and the trainees were also evaluated by their tutors. The content validity of the CEPT was examined by asking if the CEPT could evaluate the competence of novice physical therapists on a four-point scale. [Results] Cronbach's alpha of the CEPT was 0.96 for the trainees and 0.97 for the tutors. The correlation coefficient between the total score of the CEPT and whole competence by VAS was 0.83 for the trainees and 0.87 for the tutors. Factor analysis identified two factors, "the specialty of the physical therapist" and "the essential competence of a health professional". Ninety percent or more of the trainees and the tutors answered that the CEPT could sufficiently evaluate the competence of novice physical therapists. [Conclusion] The CEPT is a reliable and valid scale for clinical competence evaluation of novice physical therapists.

  14. Comparing methodologies for imputing ethnicity in an urban ophthalmology clinic.

    Science.gov (United States)

    Storey, Philip; Murchison, Ann P; Dai, Yang; Hark, Lisa; Pizzi, Laura T; Leiby, Benjamin E; Haller, Julia A

    2014-04-01

    To compare methodologies for imputing ethnicity in an urban ophthalmology clinic. Using data from 19,165 patients with self-reported ethnicity, surname, and home address, we compared the accuracy of three methodologies for imputing ethnicity: (1) a surname method based on tabulation from the 2000 US Census; (2) a geocoding method based on tract data from the 2010 US Census; and (3) a combined surname geocoding method using Bayes' theorem. The combined surname geocoding model had the highest accuracy of the three methodologies, imputing black ethnicity with a sensitivity of 84% and positive predictive value (PPV) of 94%, white ethnicity with a sensitivity of 92% and PPV of 82%, Hispanic ethnicity with a sensitivity of 77% and PPV of 71%, and Asian ethnicity with a sensitivity of 83% and PPV of 79%. Overall agreement of imputed and self-reported ethnicity was fair for the surname method (κ 0.23), moderate for the geocoding method (κ 0.58), and strong for the combined method (κ 0.76). A methodology combining surname analysis and Census tract data using Bayes' theorem to determine ethnicity is superior to other methods tested and is ideally suited for research purposes of clinical and administrative data.
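
    The Bayesian combination step can be illustrated in a few lines: treat the Census-tract ethnicity proportions as a prior and update with surname likelihoods. All probabilities below are invented for demonstration and are not the study's tabulated values.

    ```python
    # Bayes' theorem: P(group | surname, tract) ∝ P(surname | group) * P(group | tract).
    groups = ["black", "white", "hispanic", "asian"]

    tract_prior = {"black": 0.55, "white": 0.30, "hispanic": 0.10, "asian": 0.05}
    # P(surname | ethnicity), e.g., from a Census surname tabulation (made up here).
    p_surname_given_group = {"black": 0.002, "white": 0.004,
                             "hispanic": 0.0001, "asian": 0.0002}

    unnorm = {g: p_surname_given_group[g] * tract_prior[g] for g in groups}
    total = sum(unnorm.values())
    posterior = {g: v / total for g, v in unnorm.items()}

    imputed = max(posterior, key=posterior.get)   # assign the most probable group
    print(posterior, "->", imputed)
    ```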

  15. Multiple imputation for cure rate quantile regression with censored data.

    Science.gov (United States)

    Wu, Yuanshan; Yin, Guosheng

    2017-03-01

    The main challenge in the context of cure rate analysis is that one never knows whether censored subjects are cured or uncured, or whether they are susceptible or insusceptible to the event of interest. Considering the susceptible indicator as missing data, we propose a multiple imputation approach to cure rate quantile regression for censored data with a survival fraction. We develop an iterative algorithm to estimate the conditionally uncured probability for each subject. By utilizing this estimated probability and Bernoulli sample imputation, we can classify each subject as cured or uncured, and then employ the locally weighted method to estimate the quantile regression coefficients with only the uncured subjects. Repeating the imputation procedure multiple times and taking an average over the resultant estimators, we obtain consistent estimators for the quantile regression coefficients. Our approach relaxes the usual global linearity assumption, so that we can apply quantile regression to any particular quantile of interest. We establish asymptotic properties for the proposed estimators, including both consistency and asymptotic normality. We conduct simulation studies to assess the finite-sample performance of the proposed multiple imputation method and apply it to a lung cancer study as an illustration. © 2016, The International Biometric Society.

  16. A Distribution-Based Multiple Imputation Method for Handling Bivariate Pesticide Data with Values below the Limit of Detection

    Science.gov (United States)

    Chen, Haiying; Quandt, Sara A.; Grzywacz, Joseph G.; Arcury, Thomas A.

    2011-01-01

    Background: Environmental and biomedical researchers frequently encounter laboratory data constrained by a lower limit of detection (LOD). Commonly used methods to address these left-censored data, such as simple substitution of a constant for all values below the LOD, may bias parameter estimation. In contrast, multiple imputation (MI) methods yield valid and robust parameter estimates and explicit imputed values for variables that can be analyzed as outcomes or predictors. Objective: In this article we expand distribution-based MI methods for left-censored data to a bivariate setting, specifically, a longitudinal study with biological measures at two points in time. Methods: We present the likelihood function for a bivariate normal distribution taking into account values below the LOD as well as missing data assumed missing at random, and we use the estimated distributional parameters to impute values below the LOD and to generate multiple plausible data sets for analysis by standard statistical methods. We conducted a simulation study to evaluate the sampling properties of the estimators, and we illustrate a practical application using data from the Community Participatory Approach to Measuring Farmworker Pesticide Exposure (PACE3) study to estimate associations between urinary acephate (APE) concentrations (indicating pesticide exposure) at two points in time and self-reported symptoms. Results: Simulation study results demonstrated that imputed and observed values together were consistent with the assumed and estimated underlying distribution. Our analysis of PACE3 data using MI to impute APE values below the LOD showed that urinary APE concentration was significantly associated with potential pesticide poisoning symptoms. Results based on simple substitution methods were substantially different from those based on the MI method. Conclusions: The distribution-based MI method is a valid and feasible approach to analyze bivariate data with values below the LOD, especially when explicit values for the censored observations are needed for analysis.
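
    A univariate simplification of the distribution-based MI idea is sketched below: fit a normal model on the log scale and draw each censored value from the fitted distribution truncated above at the LOD. The paper's bivariate likelihood and its handling of missing-at-random data are not reproduced, and fitting on the observed values alone (as done here for brevity) is a crude stand-in for the maximum-likelihood step.

    ```python
    # Multiple imputation of values below a limit of detection (toy example).
    import numpy as np
    from scipy.stats import truncnorm

    rng = np.random.default_rng(5)
    lod = 1.0
    observed = rng.lognormal(0.5, 0.6, 200)        # synthetic concentrations
    is_censored = observed < lod

    # Assume log-concentrations are normal; fit on observed values only.
    log_obs = np.log(observed[~is_censored])
    mu, sigma = log_obs.mean(), log_obs.std(ddof=1)

    def impute_once():
        data = np.log(observed.copy())
        b = (np.log(lod) - mu) / sigma             # standardized upper bound
        draws = truncnorm.rvs(-np.inf, b, loc=mu, scale=sigma,
                              size=is_censored.sum(), random_state=rng)
        data[is_censored] = draws                  # plausible below-LOD values
        return np.exp(data)

    completed_datasets = [impute_once() for _ in range(5)]   # M = 5 imputations
    print([np.round(d.mean(), 3) for d in completed_datasets])
    ```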

  17. Sensitivity analysis in multiple imputation in effectiveness studies of psychotherapy.

    Directory of Open Access Journals (Sweden)

    Aureliano eCrameri

    2015-07-01

    Full Text Available The importance of preventing and treating incomplete data in effectiveness studies is nowadays emphasized. However, most of the publications focus on randomized clinical trials. One flexible technique for statistical inference with missing data is multiple imputation (MI). Since methods such as MI rely on the assumption of missing data being at random (MAR), a sensitivity analysis for testing the robustness against departures from this assumption is required. In this paper we present a sensitivity analysis technique based on posterior predictive checking, which takes into consideration the concept of clinical significance used in the evaluation of intra-individual changes. We demonstrate the possibilities this technique can offer with the example of irregular longitudinal data collected with the Outcome Questionnaire-45 (OQ-45) and the Helping Alliance Questionnaire (HAQ) in a sample of 260 outpatients. The sensitivity analysis can be used to (1) quantify the degree of bias introduced by missing not at random (MNAR) data in a worst reasonable case scenario, (2) compare the performance of different analysis methods for dealing with missing data, or (3) detect the influence of possible violations of the model assumptions (e.g., lack of normality). Moreover, our analysis showed that ratings from the patient's and therapist's version of the HAQ could significantly improve the predictive value of the routine outcome monitoring based on the OQ-45. Since analysis dropouts always occur, repeated measurements with the OQ-45 and the HAQ analyzed with MI are useful to improve the accuracy of outcome estimates in quality assurance assessments and nonrandomized effectiveness studies in the field of outpatient psychotherapy.

  18. Sequence imputation of HPV16 genomes for genetic association studies.

    Directory of Open Access Journals (Sweden)

    Benjamin Smith

    Full Text Available Human Papillomavirus type 16 (HPV16) causes over half of all cervical cancer, and some HPV16 variants are more oncogenic than others. The genetic basis for the extraordinary oncogenic properties of HPV16 compared to other HPVs is unknown. In addition, we neither know which nucleotides vary across and within HPV types and lineages, nor which of the single nucleotide polymorphisms (SNPs) determine oncogenicity. A reference set of 62 HPV16 complete genome sequences was established and used to examine patterns of evolutionary relatedness amongst variants using a pairwise identity heatmap and HPV16 phylogeny. A BLAST-based algorithm was developed to impute complete genome data from partial sequence information using the reference database. To interrogate the oncogenic risk of determined and imputed HPV16 SNPs, odds ratios for each SNP were calculated in a case-control viral genome-wide association study (VWAS) using biopsy-confirmed high-grade cervix neoplasia and self-limited HPV16 infections from Guanacaste, Costa Rica. HPV16 variants display evolutionarily stable lineages that contain conserved diagnostic SNPs. The imputation algorithm indicated that an average of 97.5±1.03% of SNPs could be accurately imputed. The VWAS revealed specific HPV16 viral SNPs associated with variant lineages and elevated odds ratios; however, individual causal SNPs could not be distinguished with certainty due to the nature of HPV evolution. Conserved and lineage-specific SNPs can be imputed with a high degree of accuracy from limited viral polymorphic data due to the lack of recombination and the stochastic mechanism of variation accumulation in the HPV genome. However, determining the role of novel variants or non-lineage-specific SNPs by VWAS will require direct sequence analysis. The investigation of patterns of genetic variation and the identification of diagnostic SNPs for lineages of HPV16 variants provides a valuable resource for future studies of HPV16.

  19. Reliability and validity of the performance index evaluation among men's and women's college basketball players.

    Science.gov (United States)

    Barfield, Jean-Paul; Johnson, Robert J; Russo, Paul; Cobler, Dennis C

    2007-05-01

    The Performance Index Evaluation (PIE) is a basketball-specific assessment of physical performance. The battery consists of items typically included in sport assessments, such as agility and power, but also addresses an often-overlooked performance component, namely, core strength. The purpose of this study was to examine the reliability (test-retest, interrater), validity (criterion-related, construct-related), and practice effect of the PIE among men's and women's college basketball players. Test-retest estimates were moderate for men (intraclass correlation coefficient [ICC] = 0.79) and poor for women (ICC = 0.35), but interrater reliability was high (ICC = 0.95). Criterion-related validity evidence (i.e., relationship between PIE and playing time) was weak, but construct-related evidence was acceptable (i.e., college players had higher scores than high school players). A practice effect was also demonstrated among men. In conclusion, reliability of the battery should be improved before its use is recommended among college basketball players. Additionally, the battery does not appear to be a predictor of performance but does appear to distinguish between skill levels.

  20. Ischiofemoral impingement: evaluation with new MRI parameters and assessment of their reliability

    Energy Technology Data Exchange (ETDEWEB)

    Tosun, Ozgur; Algin, Oktay; Cay, Nurdan; Karaoglanoglu, Mustafa [Ankara Ataturk Education and Research Hospital, Department of Radiology, Ankara (Turkey); Yalcin, Nadir [University of California, Department of Orthopaedic Surgery, San Francisco, CA (United States); Ocakoglu, Gokhan [Uludag University Medical Faculty, Biostatistics Department, Bursa (Turkey)

    2012-05-15

    The aim of this study was to describe the magnetic resonance imaging (MRI) findings in patients with ischiofemoral impingement (IFI) and to evaluate the reliability of these MRI findings. Seventy hips of 50 patients with hip pain and quadratus femoris muscle (QFM) edema and 38 hips of 30 control cases were included in the study. The QFM edema and fatty replacement were assessed visually. Ischiofemoral space (IFS), quadratus femoris space (QFS), inclination angle (IA), hamstring tendon area (HTA), and total quadratus femoris muscle volume (TQFMV) measurements were performed independently by two musculoskeletal radiologists. The intra- and interobserver reliabilities were obtained for quantitative variables. IFS, QFS, and TQFMV values of the patient group were significantly lower than those of controls (P < 0.001). HTA and IA measurements of the patient group were also significantly higher than in controls (P < 0.05). The QFM fatty replacement grades were significantly higher in the patient group than in the control group (P < 0.001). Inter- and intra-observer reliabilities were strong for all continuous variables. Clinicians and radiologists should be aware of IFI in patients with hip or groin pain, and MRI should be obtained for the presence of the QFM edema/fatty replacement, narrowing of the IFS-QFS, and other features that may help in the clinical diagnosis of IFI for the proper diagnosis and treatment of the disease. (orig.)

  1. Reliability evaluation of isolated solar-diesel power systems using a Monte Carlo simulation approach

    Energy Technology Data Exchange (ETDEWEB)

    Billinton, R.; Karki, R. [Saskatchewan Univ., Saskatoon, SK (Canada). Power Systems Research Group]

    2003-08-01

    This paper presents a Monte Carlo simulation technique for determining the reliability of an isolated solar-diesel power system (ISDPS). ISDPS is commonly used around the world to generate electricity in remote areas. The reliability of solar-hybrid systems differs from conventional generating sources because of the extremely variable nature of solar irradiation at any given location, energy conversion by the PV array, energy storage capability, and solar energy input. The proposed method is based on the use of an hourly simulation that imitates the operation of a generating system with energy storage. The simulation considers the stochastic nature of solar radiation along with other system factors. The newly developed method has been used in a series of analyses on hypothetical power systems using data from two different sites in Canada. Results indicate that reliability depends greatly on the actual site location. Therefore, accurate and detailed data for a particular location is a crucial requirement for evaluating ISDPS. 11 refs., 3 tabs., 10 figs.

  2. Is Near-Infrared Spectroscopy a Reliable Method to Evaluate Clamping Ischemia during Carotid Surgery?

    Directory of Open Access Journals (Sweden)

    Luciano Pedrini

    2012-01-01

    Full Text Available Guidelines do not include cerebral oximetry among monitoring for carotid endarterectomy (CEA). The purpose of this study was to evaluate the reliability of near-infrared spectroscopy (NIRS) in the detection of clamping ischemia and in the prevention of clamping-related neurologic deficits using, as a cutoff for shunting, a 20% regional cerebral oxygen saturation (rSO2) decrease if persistent for more than 4 minutes, otherwise a 25% rSO2 decrease. Bilateral rSO2 was monitored continuously in patients undergoing CEA under general anesthesia (GA). Data were recorded after clamping, after declamping, during shunting, and at the lowest values achieved. Preoperative neurologic, CT-scan, and vascular lesions were recorded. We reviewed 473 cases: 305 males (64.5%), mean age 73.3±7.3. Three patients presented transient ischemic deficits at awakening, with no perioperative stroke or death; 41 (8.7%) required shunting: 30 based on the initial rSO2 value and 11 due to a decrease during surgery. Using ROC curve analysis we found, for a >25% reduction from the baseline value, a sensitivity of 100% and a specificity of 90.6%. Reliability, PPV, and NPV were 95.38%, 9%, and 100%, respectively. In conclusion, this study indicates the potential reliability of NIRS monitoring during CEA under GA, using a cutoff of 25%, or a cutoff of 20% for prolonged hypoperfusion.
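
    The kind of ROC analysis behind the reported sensitivity/specificity figures is sketched below on synthetic data: a relative rSO2 drop is scored against a binary ischemia label and the operating point nearest the 25% cutoff is read off. Event rates and distributions are invented for illustration.

    ```python
    # ROC evaluation of a percentage-drop cutoff (synthetic data).
    import numpy as np
    from sklearn.metrics import roc_curve

    rng = np.random.default_rng(6)
    ischemia = np.zeros(473, dtype=bool)
    ischemia[:12] = True                                   # a few true events
    # Relative rSO2 drop (%) after clamping; events drop more on average.
    drop = np.where(ischemia, rng.normal(35, 5, 473), rng.normal(12, 6, 473))

    fpr, tpr, thresholds = roc_curve(ischemia, drop)
    at_25 = np.argmin(np.abs(thresholds - 25.0))           # point nearest 25% cutoff
    print(f"cutoff = {thresholds[at_25]:.1f}%: sensitivity = {tpr[at_25]:.2f}, "
          f"specificity = {1 - fpr[at_25]:.2f}")
    ```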

  3. Reliability and validity of the photogrammetry for scoliosis evaluation: a cross-sectional prospective study.

    Science.gov (United States)

    Saad, Karen Ruggeri; Colombo, Alexandra S; João, Silvia M Amado

    2009-01-01

    The purpose of this study was to investigate the reliability and validity of photogrammetry in measuring lateral spinal inclination angles. Forty subjects (32 females and 8 males) with a mean age of 23.4 +/- 11.2 years had their scoliosis evaluated by trunk radiographs, using the Cobb angle method, and by photogrammetry. The statistical methods included Cronbach's alpha, Pearson/Spearman correlation coefficients, and regression analyses. The Cronbach alpha values indicated that the photogrammetric measures had high internal consistency, suggesting that the sample was bias free. The radiographic method proved more precise, with intrarater reliabilities of 0.936, 0.975, and 0.945 for the thoracic, lumbar, and thoracolumbar curves, respectively, and interrater reliabilities of 0.942 and 0.879 for the angular measures of the thoracic and thoracolumbar segments, respectively. The regression analyses revealed a high determination coefficient, although limited to the adjusted linear model between the radiographic and photographic measures. For more severe scoliosis, the correlations between the lateral curve measures obtained with photogrammetry and the radiographic measures were R = 0.619 and 0.551 for the thoracic and lumbar regions, respectively. The photogrammetric measures were found to be reproducible in this study and could be used as supplementary information to decrease the number of radiographs necessary for the monitoring of scoliosis.

  4. Systems Analysis Programs for Hands-on Integrated Reliability Evaluations (SAPHIRE) Quality Assurance Manual

    Energy Technology Data Exchange (ETDEWEB)

    C. L. Smith; R. Nims; K. J. Kvarfordt; C. Wharton

    2008-08-01

    The Systems Analysis Programs for Hands-on Integrated Reliability Evaluations (SAPHIRE) is a software application developed for performing a complete probabilistic risk assessment using a personal computer running the Microsoft Windows operating system. SAPHIRE is primarily funded by the U.S. Nuclear Regulatory Commission (NRC). The role of the INL in this project is that of software developer and tester. This development takes place using formal software development procedures and is subject to quality assurance (QA) processes. The purpose of this document is to describe how the SAPHIRE software QA is performed for Versions 6 and 7, what constitutes its parts, and the limitations of those processes.

  5. Evaluation of the reliability of maize reference assays for GMO quantification.

    Science.gov (United States)

    Papazova, Nina; Zhang, David; Gruden, Kristina; Vojvoda, Jana; Yang, Litao; Buh Gasparic, Meti; Blejec, Andrej; Fouilloux, Stephane; De Loose, Marc; Taverniers, Isabel

    2010-03-01

    A reliable PCR reference assay for relative genetically modified organism (GMO) quantification must be specific for the target taxon and amplify uniformly across the commercialised varieties within the considered taxon. Different reference assays for maize (Zea mays L.) are used in official methods for GMO quantification. In this study, we evaluated the reliability of eight existing maize reference assays, four of which are used in combination with an event-specific polymerase chain reaction (PCR) assay validated and published by the Community Reference Laboratory (CRL). We analysed the nucleotide sequence variation in the target genomic regions in a broad range of transgenic and conventional varieties and lines: MON 810 varieties cultivated in Spain and conventional varieties from various geographical origins and breeding histories. In addition, the reliability of the assays was evaluated based on their PCR amplification performance. A single base pair substitution, corresponding to a single nucleotide polymorphism (SNP) reported in an earlier study, was observed in the forward primer of one of the studied alcohol dehydrogenase 1 (Adh1) (70) assays in a large number of varieties. The SNP presence is consistent with the poor PCR performance observed for this assay across the tested varieties. The obtained data show that the Adh1 (70) assay used in the official CRL NK603 assay is unreliable. Based on our results from both the nucleotide stability study and the PCR performance test, we can conclude that the Adh1 (136) reference assay (T25 and Bt11 assays) as well as the tested high mobility group protein gene assay, which also form parts of CRL methods for quantification, are highly reliable. Despite the observed uniformity in the nucleotide sequence of the invertase gene assay, the PCR performance test reveals that this target sequence might occur in more than one copy. Finally, although currently not forming part of official quantification methods, the zein and SSIIb assays were also assessed.

  6. Uncertainty evaluation of reliability of shutdown system of a medium size fast breeder reactor

    Energy Technology Data Exchange (ETDEWEB)

    Zeliang, Chireuding; Singh, Om Pal, E-mail: singhop@iitk.ac.in; Munshi, Prabhat

    2016-11-15

    Highlights: • Uncertainty analysis of the reliability of the Shutdown System is carried out. • The Monte Carlo method of sampling is used. • The effects of various reliability improvement measures of the SDS are accounted for. - Abstract: In this paper, results are presented on the uncertainty evaluation of the reliability of the Shutdown System (SDS) of a Medium Size Fast Breeder Reactor (MSFBR). The reliability analysis results are taken from Kumar et al. (2005). The failure rates of the SDS components are taken from the international literature, and it is assumed that they follow a log-normal distribution. The fault tree method is employed to propagate the uncertainty in failure rate from the component level to the shutdown system level. The beta factor model is used to account for different extents of diversity. The Monte Carlo sampling technique is used for the analysis. The results of the uncertainty analysis are presented in terms of the probability density function, cumulative distribution function, mean, variance, percentile values, confidence intervals, etc. It is observed that the spread in the probability distribution of the SDS failure rate is less than that of the SDS component failure rates, and ninety percent of the SDS failure rate values fall below the target value. As generic values of failure rates are used, sensitivity analysis is performed with respect to the failure rate of the control and safety rods and the beta factor. It is found that a large increase in the failure rate of SDS rods does not carry over proportionately to SDS system failure. The failure rate of the SDS is very sensitive to the beta factor of common cause failure between the two systems of the SDS. The results of the study provide insight into the propagation of uncertainty from the failure rates of SDS components to the failure rate of the shutdown system.
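
    The uncertainty-propagation step can be illustrated with a toy model: sample log-normal failure probabilities for two redundant shutdown systems and combine them with a beta-factor common-cause term. The expression for p_sds below is a schematic stand-in for the paper's fault tree, and the median, error factor, and beta are assumed values.

    ```python
    # Monte Carlo propagation of log-normal failure-rate uncertainty (schematic).
    import numpy as np

    rng = np.random.default_rng(7)
    N = 100_000
    beta = 0.1                                   # common-cause fraction (assumed)

    # Log-normal with median 1e-4/demand and error factor 3 (EF = 95th/50th pct).
    median, ef = 1e-4, 3.0
    sigma = np.log(ef) / 1.645
    q1 = rng.lognormal(np.log(median), sigma, N)     # system 1 failure probability
    q2 = rng.lognormal(np.log(median), sigma, N)     # system 2 failure probability

    # Independent failure of both systems plus a common-cause contribution.
    p_sds = (1 - beta) ** 2 * q1 * q2 + beta * np.sqrt(q1 * q2)

    for q in (5, 50, 95):
        print(f"{q}th percentile: {np.percentile(p_sds, q):.2e}")
    ```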

  7. Short-Term and Medium-Term Reliability Evaluation for Power Systems With High Penetration of Wind Power

    DEFF Research Database (Denmark)

    Ding, Yi; Singh, Chanan; Goel, Lalit

    2014-01-01

    The expanding share of fluctuating and less predictable wind power generation can introduce complexities in power system reliability evaluation and management. This entails a need for the system operator to assess the system status more accurately for securing real-time balancing. The existing reliability evaluation techniques for power systems are well developed. These techniques are more focused on steady-state (time-independent) reliability evaluation and have been successfully applied in power system planning and expansion. In the operational phase, however, they may be too rough an approximation of the time-varying behavior of power systems with high penetration of wind power. This paper proposes a time-varying reliability assessment technique. Time-varying reliability models for wind farms, conventional generating units, and rapid start-up generating units are developed.

  8. Universal generating function based recursive algorithms for reliability evaluation of multi-state weighted k-out-of-n systems

    DEFF Research Database (Denmark)

    Yuan, Yan; Ding, Yi

    2012-01-01

    A multi-state k-out-of-n system model provides a flexible tool for evaluating the vulnerability and reliability of critical infrastructures such as electric power systems. The multi-state weighted k-out-of-n system model is a generalization of the multi-state k-out-of-n system model, in which component i in state j carries a certain utility contributing to the system's performance. However, computational efficiency has become the crucial factor for reliability evaluation of large-scale multi-state k-out-of-n systems. Li et al. proposed recursive algorithms for reliability evaluation of such systems.
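
    A minimal universal generating function (UGF) implementation is sketched below for a multi-state weighted system: each component's UGF maps utility to probability, composition convolves them over the sum of utilities, and reliability is the probability that total utility meets a demand. The state data are made up, and the recursive algorithms of Li et al. are not reproduced.

    ```python
    # Universal generating function sketch for a multi-state weighted system.
    from collections import defaultdict
    from functools import reduce

    def compose(u1, u2):
        # UGFs are dicts {performance: probability}; composition sums
        # performances and multiplies probabilities.
        out = defaultdict(float)
        for g1, p1 in u1.items():
            for g2, p2 in u2.items():
                out[g1 + g2] += p1 * p2
        return dict(out)

    # Three components, each with {utility: probability} over its states.
    components = [
        {0: 0.05, 2: 0.15, 4: 0.80},
        {0: 0.10, 3: 0.90},
        {0: 0.02, 1: 0.18, 5: 0.80},
    ]

    system_ugf = reduce(compose, components)
    demand = 7
    reliability = sum(p for g, p in system_ugf.items() if g >= demand)
    print(f"Pr(total utility >= {demand}) = {reliability:.4f}")
    ```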

  9. Improved imputation accuracy of rare and low-frequency variants using population-specific high-coverage WGS-based imputation reference panel.

    Science.gov (United States)

    Mitt, Mario; Kals, Mart; Pärn, Kalle; Gabriel, Stacey B; Lander, Eric S; Palotie, Aarno; Ripatti, Samuli; Morris, Andrew P; Metspalu, Andres; Esko, Tõnu; Mägi, Reedik; Palta, Priit

    2017-06-01

    Genetic imputation is a cost-efficient way to improve the power and resolution of genome-wide association (GWA) studies. Current publicly accessible imputation reference panels accurately predict genotypes for common variants with minor allele frequency (MAF) ≥ 5% and low-frequency variants (0.5% ≤ MAF < 5%), but are less accurate for rare variants (MAF < 0.5%). We evaluated a population-specific, high-coverage whole-genome sequencing (WGS) based reference panel, comprising 2244 Estonian individuals (0.25% of adult Estonians). Although the Estonian-specific panel contains fewer haplotypes and variants, the imputation confidence and accuracy of imputed low-frequency and rare variants were significantly higher. The results indicate the utility of population-specific reference panels for human genetic studies.

  10. Reliability and construct validity of a new Danish translation of the Prosthesis Evaluation Questionnaire in a population of Danish amputees

    DEFF Research Database (Denmark)

    Christensen, Jan; Doherty, Patrick; Bjorner, Jakob Bue

    2017-01-01

    Estimates for standard error of measurement (SEM) were calculated based on the reliability estimates. Construct validity was evaluated by hypothesis testing. Results: Reliability estimates (ICC/Cronbach's alpha) for the nine subscales were: Social Burden (0.85/0.76), Appearance (0…

  11. Reliability of the Matson Evaluation of Social Skills with Youngsters (MESSY) for Children with Autism Spectrum Disorders

    Science.gov (United States)

    Matson, Johnny L.; Horovitz, Max; Mahan, Sara; Fodstad, Jill

    2013-01-01

    The purpose of this paper was to update the psychometrics of the "Matson Evaluation of Social Skills for Youngsters" ("MESSY") with children with Autism Spectrum Disorders (ASD), specifically with respect to internal consistency, split-half reliability, and inter-rater reliability. In Study 1, 114 children with ASD (Autistic Disorder, Asperger's…

  12. A Reliability and Validity of an Instrument to Evaluate the School-Based Assessment System: A Pilot Study

    Science.gov (United States)

    Ghazali, Nor Hasnida Md

    2016-01-01

    A valid, reliable and practical instrument is needed to evaluate the implementation of the school-based assessment (SBA) system. The aim of this study is to develop and assess the validity and reliability of an instrument to measure the perception of teachers towards the SBA implementation in schools. The instrument is developed based on a…

  13. Multi-state time-varying reliability evaluation of smart grid with flexible demand resources utilizing Lz transform

    Science.gov (United States)

    Jia, Heping; Jin, Wende; Ding, Yi; Song, Yonghua; Yu, Dezhao

    2017-01-01

    With the expanding proportion of renewable energy generation and development of smart grid technologies, flexible demand resources (FDRs) have been utilized as an approach to accommodating renewable energies. However, multiple uncertainties of FDRs may influence reliable and secure operation of smart grid. Multi-state reliability models for a single FDR and aggregating FDRs have been proposed in this paper with regard to responsive abilities for FDRs and random failures for both FDR devices and information system. The proposed reliability evaluation technique is based on Lz transform method which can formulate time-varying reliability indices. A modified IEEE-RTS has been utilized as an illustration of the proposed technique.

  14. The prosthesis evaluation questionnaire: reliability and cross-validation of the Turkish version

    Science.gov (United States)

    Safer, Vildan Binay; Yavuzer, Gunes; Demir, Sibel Ozbudak; Yanikoglu, Inci; Guneri, Fulya Demircioglu

    2015-01-01

    [Purpose] Currently, there are a limited number of amputee-specific instruments for measuring prosthesis-related quality of life with good psychometric properties in Turkey. This study translated the Prosthetic Evaluation Questionnaire to Turkish and analyzed as well as discussed its construct validity and internal consistency. [Subjects and Methods] The Prosthetic Evaluation Questionnaire was adapted for use in Turkish by forward/backward translation. The final Turkish version of this questionnaire was administered to 90 unilateral amputee patients. Second evaluation was possible in 83 participants within a median 28 day time period. [Results] Point estimates for the intraclass correlation coefficient ranged from 0.69 to 0.89 for all 9 Prosthetic Evaluation Questionnaire scales, indicating good correlation. Overall Cronbach’s alpha coefficients ranged from 0.64 to 0.92, except for the perceived response subscale of 0.39. The ambulation subscale was correlated with the physical functioning subscales of Short Form-36 (SF-36) (r=0.48). The social burden subscale score of the Prosthetic Evaluation Questionnaire was correlated with social functioning subscales of SF-36 (r= 0.63). [Conclusion] The Turkish version of the Prosthetic Evaluation Questionnaire is a valid and reliable tool for implementation in the Turkish unilateral amputee population. PMID:26180296

  15. Gross intraoperative evaluation (GIE): a reliable method for the evaluation of surgical margins at partial nephrectomy.

    Science.gov (United States)

    Yilmaz, Hasan; Ciftci, Seyfettin; Ozkan, Levend; Saribacak, Ali; Yildiz, Kursat; Dillioglugil, Ozdal

    2014-01-01

    To determine the efficacy of a new method, which we call "gross intraoperative evaluation" (GIE), for the assessment of surgical margin (SM) status. A total of 26 consecutive patients operated on for cT1a-b renal tumors at a single center were included in this study. After excision, the tumors were uniformly divided into two halves along the longitudinal axis ex vivo. In this way, margins were exposed for GIE to evaluate the safety of the SMs. Findings of GIE were compared with permanent section analysis in terms of SM status. Mean patient age, tumor size, and margin thickness were 59 (38-79) years, 3.1 (1.5-6) cm, and 3.7 (0.1-12) mm, respectively. In all patients, GIE showed intact margins, and none of the patients had a positive SM in the final pathological examination. There was no evidence of local recurrence or distant metastasis over a mean follow-up of 25 (4-104) months. All patients are alive. GIE of the resected specimen without frozen section (FS) analysis is a safe and effective method for the evaluation of SMs in partial nephrectomy patients.

  16. Application of imputation methods to genomic selection in Chinese Holstein cattle

    Directory of Open Access Journals (Sweden)

    Weng Ziqing

    2012-02-01

    Full Text Available Abstract Missing genotypes are a common feature of high-density SNP datasets obtained using SNP chip technology, and this is likely to decrease the accuracy of genomic selection. This problem can be circumvented by imputing the missing genotypes with estimated genotypes. When implementing imputation, the criteria used for SNP data quality control and whether to perform imputation before or after data quality control need to be considered. In this paper, we compared six strategies of imputation and quality control using different imputation methods, different quality control criteria, and changing the order of imputation and quality control, against a real dataset of milk production traits in Chinese Holstein cattle. The results demonstrated that, no matter what imputation method and quality control criteria were used, strategies with imputation before quality control performed better than strategies with imputation after quality control in terms of accuracy of genomic selection. The different imputation methods and quality control criteria did not significantly influence the accuracy of genomic selection. We conclude that performing imputation before quality control can increase the accuracy of genomic selection, especially when the rate of missing genotypes is high and the reference population is small.

  17. Development of an Occupational Cognitive Failure Questionnaire (OCFQ): Evaluation of Validity and Reliability

    Directory of Open Access Journals (Sweden)

    M. Saremi

    2012-05-01

    Full Text Available Background and aims: Accident investigations show that more than 90 percent of accidents are caused by human error. Human errors are often due to cognitive failures, which can be defined as cognition-based errors on simple tasks that a person is normally able to complete without fault; such errors include problems with memory, attention, or action. The present study was designed to develop a measurement tool for the estimation of cognitive failures in industrial workplaces. Methods: In the present analytical-descriptive study, an Occupational Cognitive Failure Questionnaire (OCFQ) was developed. For the evaluation of validity, internal consistency, and repeatability of the OCFQ, the content validity, Cronbach's α coefficient, and test-retest methods were used, respectively. Results: A draft 35-item questionnaire was created and, following the evaluation of validity, five items were rejected. The new measuring instrument with 30 items was developed. The CVI for the final OCFQ was found to be acceptable (CVI = 0.7). Results showed that the final OCFQ was internally consistent (α = 0.96) and repeatable (ICC = 0.996, P < 0.001). Conclusion: For the measurement of cognitive failure in industrial workplaces, a valid and reliable instrument is required. Based on the obtained results, it can be concluded that the developed questionnaire (OCFQ) is a valid and reliable tool for the estimation of cognitive failures in industrial workplaces.

  18. Evaluation of seismic reliability of steel moment resisting frames rehabilitated by concentric braces with probabilistic models

    Directory of Open Access Journals (Sweden)

    Fateme Rezaei

    2017-08-01

    Full Text Available The failure probability of a structure designed by deterministic methods can be higher than that of a structure designed in a similar situation using probabilistic methods and models that account for uncertainties. The main purpose of this research was to evaluate the seismic reliability of steel moment resisting frames rehabilitated with concentric braces by means of probabilistic models. To do so, three-story and nine-story steel moment resisting frames were designed based on the resistance criteria of the Iranian code and then rehabilitated with concentric braces based on drift limitations. The probability of frame failure was evaluated using probabilistic models of earthquake magnitude and location, ground shaking intensity in the area of the structure, a probabilistic model of building response (based on maximum lateral roof displacement), and probabilistic methods. These frames were analyzed under a subcrustal source by the sampling probabilistic method "Risk Tools" (RT). Comparing the exceedance probability curves of building response (or selected points on them) for the three-story and nine-story model frames before and after rehabilitation, the seismic response of the rehabilitated frames was reduced and their reliability was improved. The main variables affecting the probability of frame failure were also determined using sensitivity analysis with the FORM probabilistic method. The variables most effective in reducing the probability of frame failure are the magnitude model, the ground shaking intensity model error, and the magnitude model error.

  19. Reliability Evaluation of Power System Considering Voltage Stability and Continuation Power Flow

    Directory of Open Access Journals (Sweden)

    R. K. Saket

    2007-06-01

    Full Text Available This article describes a methodology for evaluating the reliability of a composite electrical power system considering voltage stability and continuation power flow, which takes into account the peak load and the steady-state stability limit. Voltage stability is obtained for the probable outage of transmission lines and removal of generators, along with the combined state probabilities. The loss of load probability (LOLP) index is evaluated by merging the capacity outage probabilities with the load model. The state space is truncated by assuming limits on the total numbers of outages of generators and transmission lines. A prediction-correction technique has been used along with a one-dimensional search method to obtain the optimized stability limit for each outage state. The algorithm has been implemented on a six-bus test system.
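
    The LOLP step described above can be illustrated by convolving two-state generating units into a capacity outage probability table and merging it with a discrete load model, as sketched below; unit sizes, forced outage rates, and load levels are invented for demonstration.

    ```python
    # LOLP from a capacity outage probability table and a discrete load model.
    from collections import defaultdict

    units = [(40, 0.05), (40, 0.05), (60, 0.08)]   # (capacity MW, forced outage rate)

    # Build the capacity outage probability table by successive convolution.
    table = {0.0: 1.0}
    for cap, q in units:
        new = defaultdict(float)
        for out, p in table.items():
            new[out] += p * (1 - q)        # unit available
            new[out + cap] += p * q        # unit on outage
        table = dict(new)

    total_cap = sum(c for c, _ in units)
    load_levels = [(100, 0.2), (80, 0.5), (60, 0.3)]   # (MW, fraction of time)

    # LOLP = sum over load levels of P(available capacity < load).
    lolp = sum(p_load * sum(p for out, p in table.items() if total_cap - out < load)
               for load, p_load in load_levels)
    print(f"LOLP = {lolp:.5f}")
    ```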

  20. Applying fuzzy GERT with approximate fuzzy arithmetic based on the weakest t-norm operations to evaluate repairable reliability

    National Research Council Canada - National Science Library

    Lin, Kuo-Ping; Wen, Wu; Chou, Chang-Chien; Jen, Chih-Hung; Hung, Kuo-Chen

    2011-01-01

    ...) to evaluate fuzzy reliability models based on fuzzy GERT simulation technology. The approximate fuzzy arithmetic operations employ principle of interval arithmetic under the weakest t-norm arithmetic operations...

  1. Evaluation of neck muscle size: long-term reliability and comparison of methods.

    Science.gov (United States)

    Belavý, D L; Miokovic, T; Armbrecht, G; Felsenberg, D

    2015-03-01

    Although it is important for prospective studies, the reliability of quantitative measures of cervical muscle size on magnetic resonance imaging is not well established. The aim of the current work was to assess the long-term reliability of measurements of cervical muscle size. In addition, we examined the utility of selecting specific sub-regions of muscles at each vertebral level, averaging between sides of the body, and pooling muscles into larger groups. Axial scans from the base of the skull to the third thoracic vertebra were performed in 20 healthy male subjects at baseline and 1.5 years later. We evaluated the semispinalis capitis, splenius capitis, spinalis cervicis, longus capitis, longus colli, levator scapulae, sternocleidomastoid, anterior scalenes and middle with posterior scalenes. Bland-Altman analysis showed all measurements to be repeatable between testing days. Reliability was typically best when entire muscle volume was measured (coefficients of variation (CVs): 3.3-8.1% depending on muscle). However, when the size of the muscle was assessed at specific vertebral levels, similar measurement precision was achieved (CVs: 2.7-7.6%). A median of 4-6 images was measured at the specific vertebral levels versus 18-37 images for entire muscle volume, which represents a considerable time saving. Based on the findings we also recommend measuring both sides of the body and calculating an average value. Pooling specific muscles into the deep neck flexors (CV: 3.5%) and neck extensors (CV: 2.7%) can reduce variability further. The results of the current study help to establish outcome measures for interventional studies and for sample size estimation.

  2. Validity, Reliability, and Potential Bias of Short Forms of Students' Evaluation of Teaching: The Case of UAE University

    Science.gov (United States)

    Dodeen, Hamzeh

    2013-01-01

    Students' opinions continue to be a significant factor in the evaluation of teaching in higher education institutions. The purpose of this study was to psychometrically assess short students evaluation of teaching (SET) forms using the UAE University form as a model. The study evaluated the form validity, reliability, the overall question, and…

  3. Reliability of smartphone-based teleradiology for evaluating thoracolumbar spine fractures.

    Science.gov (United States)

    Stahl, Ido; Dreyfuss, Daniel; Ofir, Dror; Merom, Lior; Raichel, Michael; Hous, Nir; Norman, Doron; Haddad, Elias

    2017-02-01

    Timely interpretation of computed tomography (CT) scans is of paramount importance in diagnosing and managing spinal column fractures, which can be devastating. Out-of-hospital, on-call spine surgeons are often asked to evaluate CT scans of patients who have sustained trauma to the thoracolumbar spine in order to make a diagnosis and determine the appropriate course of urgent treatment. Capturing radiographic scans and video clips from computer screens and sending them as instant messages have become common means of communication between physicians, aiding triage and transfer decision-making in orthopedic and neurosurgical emergencies. The present study aimed to compare the reliability of CT scan interpretation by orthopedic surgeons for diagnosing, classifying, and planning treatment of thoracolumbar spine fractures under two viewing conditions: (1) captured as video clips from a standard workstation-based picture archiving and communication system (PACS) and sent via a smartphone-based instant messaging application for viewing on a smartphone; and (2) viewed directly on a PACS. Reliability and agreement study. Thirty adults with thoracolumbar spine fractures who had been consecutively admitted to the Division of Orthopedic Surgery of a Level I trauma center during 2014. Intraobserver agreement. CT scans were captured with an iPhone 6 smartphone from a computer screen displaying PACS. Then, using the WhatsApp instant messaging application, video clips of the scans were sent to the personal smartphones of five spine surgeons. These evaluators were asked to diagnose, classify, and determine the course of treatment for each case. Evaluation of the cases was repeated 4 weeks later, this time using the standard method of workstation-based PACS. Intraobserver agreement was interpreted based on the value of Cohen's kappa statistic. The study did not receive any outside funding. Intraobserver agreement for determining fracture level was near perfect (κ=0.94).
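    For readers unfamiliar with the agreement statistic used here, Cohen's kappa can be computed in a few lines; the paired fracture-level readings below are invented stand-ins, not the study's data:

        from sklearn.metrics import cohen_kappa_score

        # Hypothetical readings by the same surgeon: smartphone video clips
        # (first pass) versus workstation PACS (second pass)
        smartphone  = ["L1", "T12", "L2", "L1", "T11", "L3", "L1", "T12"]
        workstation = ["L1", "T12", "L2", "L1", "T12", "L3", "L1", "T12"]
        print(f"kappa = {cohen_kappa_score(smartphone, workstation):.2f}")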

  4. Reliability evaluation of nano-bi/silver paste sensor electrode for detecting trace metals.

    Science.gov (United States)

    Lee, Gyoung-Ja; Kim, Chang Kyu; Lee, Min Ku; Rhee, Chang Kyu

    2012-07-01

    The reliability of the sensor characteristics of a nano-bismuth (Bi)/silver (Ag) paste electrode has been investigated by comparison with Hg/Bi film electrodes in terms of accuracy and precision. Using Ag paste instead of carbon paste as the conducting layer, the sensitivity and detection limit of the sensor electrode were further enhanced owing to the higher electrical conductivity of Ag. To evaluate detection capability, the Zn, Cd, and Pb ion concentrations of prepared standard solutions were measured experimentally on Hg film, Bi film, and nano-Bi electrodes using anodic stripping voltammetry. The nano-Bi electrode can detect Zn, Cd, and Pb ions at 0.1 ppb with higher precision and accuracy than the Hg film and Bi film electrodes. From trace analyses of Zn, Cd, and Pb ions in commercial drinking water and river water using the nano-Bi electrode and the inductively coupled plasma (ICP) technique, it was concluded that the nano-Bi electrode exhibits excellent sensing characteristics with high reliability, and can detect even traces of Zn, Cd, and Pb ions that were not detected by the ICP method.

  5. Standardised evaluation of medicine acceptability in paediatric population: reliability of a model.

    Science.gov (United States)

    Vallet, Thibault; Ruiz, Fabrice; Lavarde, Marc; Pensé-Lhéritier, Anne-Marie; Aoussat, Ameziane

    2017-10-26

    Our novel tool to standardise the evaluation of medicine acceptability was developed using observational data on medicines use measured in a paediatric population included for this purpose (0-14 years). Using this tool, any medicine may be positioned on a map and assigned to an acceptability profile. The present exploration aimed to verify its statistical reliability. Permutation test has been used to verify the significance of the relationships among measures highlighted by the acceptability map. Bootstrapping has been used to demonstrate the accuracy of the model (map, profiles and scores of acceptability) regardless of variations in the data. Lastly, simulations of enlarged data sets (×2; ×5; ×10) have been built to study the model's consistency. Permutation test established the significance of the meaningful pattern identified in the data and summarised in the map. Bootstrapping attested the accuracy of the model: high RV coefficients (mean value: 0.930) verified the mapping stability, significant Adjusted Rand Indexes and Jaccard coefficients supported clustering validity (with either two or four profiles), and agreement between acceptability scores demonstrated scoring relevancy. Regarding enlarged data sets, these indicators reflected a very high consistency of the model. These results highlighted the reliability of the model that will permit its use to standardise medicine acceptability assessments. © 2017 Royal Pharmaceutical Society.
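    The permutation-test idea used in this validation can be sketched in a few lines. The paired measures below are simulated stand-ins, not the acceptability data:

        import numpy as np

        rng = np.random.default_rng(0)

        def permutation_pvalue(x, y, n_perm=10_000):
            """Two-sided permutation test for a correlation between paired measures."""
            observed = abs(np.corrcoef(x, y)[0, 1])
            count = 0
            for _ in range(n_perm):
                # break the pairing by permuting x; recompute the statistic
                if abs(np.corrcoef(rng.permutation(x), y)[0, 1]) >= observed:
                    count += 1
            return (count + 1) / (n_perm + 1)

        # Hypothetical paired acceptability measures for 30 medicines
        x = rng.normal(size=30)
        y = 0.6 * x + rng.normal(scale=0.8, size=30)
        print(permutation_pvalue(x, y))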

  6. Validity and reliability of a new tool to evaluate handwriting difficulties in Parkinson's disease.

    Directory of Open Access Journals (Sweden)

    Evelien Nackaerts

    Full Text Available Handwriting in Parkinson's disease (PD) features specific abnormalities which are difficult to assess in clinical practice, since no specific tool for evaluation of spontaneous movement is currently available. This study aims to validate the 'Systematic Screening of Handwriting Difficulties' (SOS-test) in patients with PD. Handwriting performance of 87 patients and 26 healthy age-matched controls was examined using the SOS-test. Sixty-seven patients were tested a second time within a period of one month. Participants were asked to copy as much as possible of a text within 5 minutes with the instruction to write as neatly and quickly as in daily life. Writing speed (letters in 5 minutes), size (mm) and quality of handwriting were compared. Correlation analysis was performed between SOS outcomes and other fine motor skill measurements and disease characteristics. Intrarater, interrater and test-retest reliability were assessed using the intraclass correlation coefficient (ICC) and the Spearman correlation coefficient. Patients with PD wrote smaller (p = 0.043) and more slowly than controls, and reliability coefficients were > 0.769 for both groups. The SOS-test is a short and effective tool to detect handwriting problems in PD with excellent reliability. It can therefore be recommended as a clinical instrument for standardized screening of handwriting deficits in PD.

  7. Reliability of Prognostic and Predictive Factors Evaluated by Needle Core Biopsies of Large Breast Invasive Tumors.

    Science.gov (United States)

    Petrau, Camille; Clatot, Florian; Cornic, Marie; Berghian, Anca; Veresezan, Liana; Callonnec, Françoise; Baron, Marc; Veyret, Corinne; Laberge, Sophie; Thery, Jean-Christophe; Picquenot, Jean-Michel

    2015-10-01

    Preoperative biopsy of breast cancer allows for prognostic/predictive marker assessment. However, large tumors, which are the main candidates for preoperative chemotherapy, are potentially more heterogeneous than smaller ones, which questions the reliability of histologic analyses of needle core biopsy (NCB) specimens compared with whole surgical specimens (WSS). We studied the histologic concordance between NCB specimens and WSS in tumors larger than 2 cm. Early pT2 or higher breast cancers diagnosed between 2008 and 2011 in our center, with no preoperative treatments, were retrospectively screened. We assessed the main prognostic and predictive validated parameters. Comparisons were performed using the κ test. In total, 163 matched NCB specimens and WSS were analyzed. The correlation was excellent for ER and HER2 (κ = 0.94 and 0.91, respectively), moderate for PR (κ = 0.79) and histologic type (κ = 0.74), weak for Ki-67 (κ = 0.55), and minimal for SBR grade (κ = 0.29). Three of the 21 HER2-positive cases (14% of HER2-positive patients or 1.8% of all patients), by WSS analysis, were initially negative on NCB specimens even after chromogenic in situ hybridization. NCB for large breast tumors allowed reliable determination of ER/PR expression. However, the SBR grade may be deeply underestimated, and false-negative evaluation of the HER2 status would have led to a detrimental lack of trastuzumab administration. Copyright© by the American Society for Clinical Pathology.

  8. Gearbox Reliability Collaborative Analytic Formulation for the Evaluation of Spline Couplings

    Energy Technology Data Exchange (ETDEWEB)

    Guo, Yi [National Renewable Energy Lab. (NREL), Golden, CO (United States); Keller, Jonathan [National Renewable Energy Lab. (NREL), Golden, CO (United States); Errichello, Robert [GEARTECH, Houston, TX (United States); Halse, Chris [Romax Technology, Nottingham (United Kingdom)

    2013-12-01

    Gearboxes in wind turbines have not been achieving their expected design life; however, they commonly meet and exceed the design criteria specified in current standards in the gear, bearing, and wind turbine industry as well as third-party certification criteria. The cost of gearbox replacements and rebuilds, as well as the down time associated with these failures, has elevated the cost of wind energy. The National Renewable Energy Laboratory (NREL) Gearbox Reliability Collaborative (GRC) was established by the U.S. Department of Energy in 2006; its key goal is to understand the root causes of premature gearbox failures and improve their reliability using a combined approach of dynamometer testing, field testing, and modeling. As part of the GRC program, this paper investigates the design of the spline coupling often used in modern wind turbine gearboxes to connect the planetary and helical gear stages. Aside from transmitting the driving torque, another common function of the spline coupling is to allow the sun to float between the planets. The amount the sun can float is determined by the spline design and the sun shaft flexibility subject to the operational loads. Current standards address spline coupling design requirements in varying detail. This report provides additional insight beyond these current standards to quickly evaluate spline coupling designs.

  9. A comparison of selected parametric and imputation methods for estimating snag density and snag quality attributes

    Science.gov (United States)

    Eskelson, Bianca N.I.; Hagar, Joan; Temesgen, Hailemariam

    2012-01-01

    Snags (standing dead trees) are an essential structural component of forests. Because wildlife use of snags depends on size and decay stage, snag density estimation without any information about snag quality attributes is of little value for wildlife management decision makers. Little work has been done to develop models that allow multivariate estimation of snag density by snag quality class. Using climate, topography, Landsat TM data, stand age and forest type collected for 2356 forested Forest Inventory and Analysis plots in western Washington and western Oregon, we evaluated two multivariate techniques for their abilities to estimate density of snags by three decay classes. The density of live trees and snags in three decay classes (D1: recently dead, little decay; D2: decay, without top, some branches and bark missing; D3: extensive decay, missing bark and most branches) with diameter at breast height (DBH) ≥ 12.7 cm was estimated using a nonparametric random forest nearest neighbor imputation technique (RF) and a parametric two-stage model (QPORD), for which the number of trees per hectare was estimated with a Quasipoisson model in the first stage and the probability of belonging to a tree status class (live, D1, D2, D3) was estimated with an ordinal regression model in the second stage. The presence of large snags with DBH ≥ 50 cm was predicted using a logistic regression and RF imputation. Because of the more homogenous conditions on private forest lands, snag density by decay class was predicted with higher accuracies on private forest lands than on public lands, while presence of large snags was more accurately predicted on public lands, owing to the higher prevalence of large snags on public lands. RF outperformed the QPORD model in terms of percent accurate predictions, while QPORD provided smaller root mean square errors in predicting snag density by decay class. The logistic regression model achieved more accurate presence/absence classification
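    A highly simplified analogue of the RF step can be sketched as below: a multi-output random forest predicting snag density by decay class from plot-level auxiliaries. Note the authors' actual method was a nearest-neighbor imputation (plus the two-stage QPORD model), and all predictors, coefficients and sample sizes here are invented:

        import numpy as np
        from sklearn.ensemble import RandomForestRegressor

        rng = np.random.default_rng(1)
        # Hypothetical plot-level auxiliaries: climate, elevation, Landsat band, stand age
        X = rng.normal(size=(500, 4))
        # Hypothetical snag densities (trees/ha) for decay classes D1-D3
        Y = np.column_stack([
            np.exp(0.5 * X[:, 0] + rng.normal(scale=0.3, size=500)),
            np.exp(0.3 * X[:, 1] + rng.normal(scale=0.3, size=500)),
            np.exp(0.2 * X[:, 3] + rng.normal(scale=0.3, size=500)),
        ])
        model = RandomForestRegressor(n_estimators=200, random_state=0).fit(X[:400], Y[:400])
        pred = model.predict(X[400:])            # predict densities for unsampled plots
        rmse = np.sqrt(((pred - Y[400:]) ** 2).mean(axis=0))
        print(dict(zip(["D1", "D2", "D3"], rmse.round(2))))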

  10. Genomic evaluations with many more genotypes

    Directory of Open Access Journals (Sweden)

    Wiggans George R

    2011-03-01

    Full Text Available Abstract Background Genomic evaluations in Holstein dairy cattle have quickly become more reliable over the last two years in many countries as more animals have been genotyped for 50,000 markers. Evaluations can also include animals genotyped with more or fewer markers using new tools such as the 777,000 or 2,900 marker chips recently introduced for cattle. Gains from more markers can be predicted using simulation, whereas strategies to use fewer markers have been compared using subsets of actual genotypes. The overall cost of selection is reduced by genotyping most animals at less than the highest density and imputing their missing genotypes using haplotypes. Algorithms to combine different densities need to be efficient because numbers of genotyped animals and markers may continue to grow quickly. Methods Genotypes for 500,000 markers were simulated for the 33,414 Holsteins that had 50,000 marker genotypes in the North American database. Another 86,465 non-genotyped ancestors were included in the pedigree file, and linkage disequilibrium was generated directly in the base population. Mixed density datasets were created by keeping 50,000 (every tenth) of the markers for most animals. Missing genotypes were imputed using a combination of population haplotyping and pedigree haplotyping. Reliabilities of genomic evaluations using linear and nonlinear methods were compared. Results Differing marker sets for a large population were combined with just a few hours of computation. About 95% of paternal alleles were determined correctly, and > 95% of missing genotypes were called correctly. Reliability of breeding values was already high (84.4%) with 50,000 simulated markers. The gain in reliability from increasing the number of markers to 500,000 was only 1.6%, but more than half of that gain resulted from genotyping just 1,406 young bulls at higher density. Linear genomic evaluations had reliabilities 1.5% lower than the nonlinear evaluations with 50,000 markers.

  11. Missing Data and Multiple Imputation: An Unbiased Approach

    Science.gov (United States)

    Foy, M.; VanBaalen, M.; Wear, M.; Mendez, C.; Mason, S.; Meyers, V.; Alexander, D.; Law, J.

    2014-01-01

    The default method of dealing with missing data in statistical analyses is to only use the complete observations (complete case analysis), which can lead to unexpected bias when data do not meet the assumption of missing completely at random (MCAR). For the assumption of MCAR to be met, missingness cannot be related to either the observed or unobserved variables. A less stringent assumption, missing at random (MAR), requires that missingness not be associated with the value of the missing variable itself, but can be associated with the other observed variables. When data are truly MAR as opposed to MCAR, the default complete case analysis method can lead to biased results. There are statistical options available to adjust for data that are MAR, including multiple imputation (MI) which is consistent and efficient at estimating effects. Multiple imputation uses informing variables to determine statistical distributions for each piece of missing data. Then multiple datasets are created by randomly drawing on the distributions for each piece of missing data. Since MI is efficient, only a limited number, usually less than 20, of imputed datasets are required to get stable estimates. Each imputed dataset is analyzed using standard statistical techniques, and then results are combined to get overall estimates of effect. A simulation study will be demonstrated to show the results of using the default complete case analysis, and MI in a linear regression of MCAR and MAR simulated data. Further, MI was successfully applied to the association study of CO2 levels and headaches when initial analysis showed there may be an underlying association between missing CO2 levels and reported headaches. Through MI, we were able to show that there is a strong association between average CO2 levels and the risk of headaches. Each unit increase in CO2 (mmHg) resulted in a doubling in the odds of reported headaches.
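    The workflow described above (simulate MAR data, impute m times, analyze each completed dataset, pool estimates) can be sketched compactly. All numbers are simulated, and sklearn's IterativeImputer with posterior sampling stands in for a full multiple-imputation engine; pooling here is the point-estimate part of Rubin's rules (averaging):

        import numpy as np
        from sklearn.experimental import enable_iterative_imputer  # noqa: F401
        from sklearn.impute import IterativeImputer
        from sklearn.linear_model import LinearRegression

        rng = np.random.default_rng(2)
        n = 300
        co2 = rng.normal(3.5, 0.8, n)                 # hypothetical CO2 (mmHg)
        y = 1.0 + 0.5 * co2 + rng.normal(size=n)      # hypothetical outcome
        co2_mis = co2.copy()
        # MAR: missingness in CO2 depends on the observed outcome, not on CO2 itself
        co2_mis[rng.random(n) < 0.3 * (y > y.mean())] = np.nan

        estimates = []
        for i in range(20):                           # m = 20 imputed datasets
            imp = IterativeImputer(sample_posterior=True, random_state=i)
            filled = imp.fit_transform(np.column_stack([co2_mis, y]))
            estimates.append(LinearRegression().fit(filled[:, [0]], y).coef_[0])
        print(f"pooled slope = {np.mean(estimates):.3f}")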

  12. Reliability of clinical tests to evaluate nerve function and mechanosensitivity of the upper limb peripheral nervous system

    Directory of Open Access Journals (Sweden)

    Bachmann Lucas M

    2009-01-01

    Full Text Available Abstract Background Clinical tests to assess peripheral nerve disorders can be classified into two categories: tests for afferent/efferent nerve function such as nerve conduction (bedside neurological examination) and tests for increased mechanosensitivity (e.g. upper limb neurodynamic tests (ULNTs) and nerve palpation). Reliability reports of nerve palpation and the interpretation of neurodynamic tests are scarce. This study therefore investigated the intertester reliability of nerve palpation and ULNTs. ULNTs were interpreted based on symptom reproduction and structural differentiation. To put the reliability of these tests in perspective, a comparison with the reliability of clinical tests for nerve function was made. Methods Two experienced clinicians examined 31 patients with unilateral arm and/or neck pain. The examination included clinical tests for nerve function (sensory testing, reflexes and manual muscle testing (MMT)) and mechanosensitivity (ULNTs and palpation of the median, radial and ulnar nerves). Kappa statistics were calculated to evaluate intertester reliability. A meta-analysis determined an overall kappa for the domains with multiple kappa values (MMT, ULNT, palpation). We then compared the difference in reliability between the tests of mechanosensitivity and nerve function using a one-sample t-test. Results We observed moderate to substantial reliability for the tests for afferent/efferent nerve function (sensory testing: kappa = 0.53; MMT: kappa = 0.68; no kappa was calculated for reflexes due to a lack of variation). Tests to investigate mechanosensitivity demonstrated moderate reliability (ULNT: kappa = 0.45; palpation: kappa = 0.59). When compared statistically, there was no difference in reliability between tests for nerve function and mechanosensitivity (p = 0.06). Conclusion This study demonstrates that clinical tests which evaluate increased nerve mechanosensitivity and afferent/efferent nerve function have comparable moderate to substantial reliability.

  13. A framework for multiple imputation in cluster analysis.

    Science.gov (United States)

    Basagaña, Xavier; Barrera-Gómez, Jose; Benet, Marta; Antó, Josep M; Garcia-Aymerich, Judith

    2013-04-01

    Multiple imputation is a common technique for dealing with missing values and is mostly applied in regression settings. Its application in cluster analysis problems, where the main objective is to classify individuals into homogenous groups, involves several difficulties which are not well characterized in the current literature. In this paper, we propose a framework for applying multiple imputation to cluster analysis when the original data contain missing values. The proposed framework incorporates the selection of the final number of clusters and a variable reduction procedure, which may be needed in data sets where the ratio of the number of persons to the number of variables is small. We suggest some ways to report how the uncertainty due to multiple imputation of missing data affects the cluster analysis outcomes-namely the final number of clusters, the results of a variable selection procedure (if applied), and the assignment of individuals to clusters. The proposed framework is illustrated with data from the Phenotype and Course of Chronic Obstructive Pulmonary Disease (PAC-COPD) Study (Spain, 2004-2008), which aimed to classify patients with chronic obstructive pulmonary disease into different disease subtypes.
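    One way to report how imputation uncertainty propagates into cluster assignments, in the spirit of the framework described, is to cluster each imputed dataset and compare the resulting partitions. A toy sketch on simulated data (all parameters invented, not the PAC-COPD analysis):

        import numpy as np
        from itertools import combinations
        from sklearn.experimental import enable_iterative_imputer  # noqa: F401
        from sklearn.impute import IterativeImputer
        from sklearn.cluster import KMeans
        from sklearn.metrics import adjusted_rand_score

        rng = np.random.default_rng(8)
        # Two well-separated hypothetical groups, then 15% missing values
        X = np.vstack([rng.normal(0, 1, (100, 3)), rng.normal(4, 1, (100, 3))])
        X[rng.random(X.shape) < 0.15] = np.nan

        labels = []
        for i in range(5):                             # m = 5 imputed datasets
            filled = IterativeImputer(sample_posterior=True, random_state=i).fit_transform(X)
            labels.append(KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(filled))

        # Agreement of cluster assignments across imputations (1.0 = identical partitions)
        aris = [adjusted_rand_score(a, b) for a, b in combinations(labels, 2)]
        print(f"mean pairwise ARI = {np.mean(aris):.2f}")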

  14. Reliability Calculations

    DEFF Research Database (Denmark)

    Petersen, Kurt Erling

    1986-01-01

    Risk and reliability analysis is increasingly being used in evaluations of plant safety and plant reliability. The analysis can be performed either during the design process or during operation, with the purpose of improving the safety or the reliability. Due to plant complexity and to safety and availability requirements, sophisticated tools, which are flexible and efficient, are needed. Such tools have been developed over the last 20 years and have to be continuously refined to meet growing requirements. Two different areas of application were analysed. In structural reliability, probabilistic approaches have been introduced in some cases for the calculation of the reliability of structures or components; a new computer program has been developed based upon numerical integration in several variables. In systems reliability, Monte Carlo simulation programs are used especially in the analysis of very ...
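    A minimal Monte Carlo sketch of the structural-reliability calculation mentioned here, for an invented resistance-minus-load limit state g = R - S (distribution parameters are illustrative only):

        import numpy as np

        rng = np.random.default_rng(3)
        n = 1_000_000
        R = rng.normal(loc=12.0, scale=1.5, size=n)   # hypothetical resistance
        S = rng.normal(loc=8.0, scale=2.0, size=n)    # hypothetical load
        pf = np.mean(R - S < 0.0)                     # probability of failure, g < 0
        print(f"P(failure) ~= {pf:.2e}")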

  15. Evaluation of neck muscle strength with a modified sphygmomanometer dynamometer: reliability and validity.

    Science.gov (United States)

    Vernon, H T; Aker, P; Aramenko, M; Battershill, D; Alepin, A; Penner, T

    1992-01-01

    flexion/extension ratio for whiplash subjects was 0.25:1.00, which is half of that of normal subjects. The MSD has been found to be a reliable instrument for the evaluation of isometric muscle strength in the neck in normal and symptomatic subjects. Normative values for absolute test levels, bilateral symmetry and flexion/extension ratios have been determined. A symptomatic group demonstrated significant deviations from these norms in the form of reduced strength levels and reduced flexion/extension ratios, while still maintaining very high levels of test-retest consistency and bilateral symmetry. The MSD appears very promising in the evaluation of neck-injured patients.

  16. Measurement of mandible movements using a vernier caliper--an evaluation of the intrasession-, intersession- and interobserver reliability.

    Science.gov (United States)

    Best, Norman; Best, Stefanie; Loudovici-Krug, Dana; Smolenski, Ulrich C

    2013-07-01

    The aim of this study was to evaluate the intrasession, intersession and interrater reliability of vernier caliper measurements of mandible movements. The authors calculated the intrasession, intersession and interrater reliability of a plastic caliper for important mandibular parameters. All intraclass correlation coefficients (ICCs) were at least moderately accurate; the values for intrasession and intersession reliability in particular were excellent. Only the interrater reliability showed greater fluctuations. Whereas mouth opening, protrusion and the tooth positions were measured reliably, the same did not apply to the side movements: the lateral movement measurements were highly variable, while the other movements were not. Patient compliance is important, along with a differing mouth-opening angle, since it is possible to generate a variable laterotrusion to both sides. The caliper investigated is a fast, simple, and inexpensive tool for daily work.

  17. Systematic evaluation of the teaching qualities of Obstetrics and Gynecology faculty: reliability and validity of the SETQ tools.

    Directory of Open Access Journals (Sweden)

    Renée van der Leeuw

    Full Text Available BACKGROUND: The importance of effective clinical teaching for the quality of future patient care is globally understood. Due to recent changes in graduate medical education, new tools are needed to provide faculty with reliable and individualized feedback on their teaching qualities. This study validates two instruments underlying the System for Evaluation of Teaching Qualities (SETQ) aimed at measuring and improving the teaching qualities of obstetrics and gynecology faculty. METHODS AND FINDINGS: This cross-sectional multi-center questionnaire study was set in seven general teaching hospitals and two academic medical centers in the Netherlands. Seventy-seven residents and 114 faculty were invited to complete the SETQ instruments in the duration of one month from September 2008 to September 2009. To assess reliability and validity of the instruments, we used exploratory factor analysis, inter-item correlation, reliability coefficient alpha and inter-scale correlations. We also compared composite scales from factor analysis to global ratings. Finally, the number of residents' evaluations needed per faculty for reliable assessments was calculated. A total of 613 evaluations were completed by 66 residents (85.7% response rate). 99 faculty (86.8% response rate) participated in self-evaluation. Factor analysis yielded five scales with high reliability (Cronbach's alpha for residents' and faculty): learning climate (0.86 and 0.75), professional attitude (0.89 and 0.81), communication of learning goals (0.89 and 0.82), evaluation of residents (0.87 and 0.79) and feedback (0.87 and 0.86). Item-total, inter-scale and scale-global rating correlation coefficients were significant (P<0.01). Four to six residents' evaluations are needed per faculty (reliability coefficient 0.60-0.80). CONCLUSIONS: Both SETQ instruments were found reliable and valid for evaluating teaching qualities of obstetrics and gynecology faculty. Future research should examine improvement of teaching qualities when using SETQ.

  18. FORENSIC-CLINICAL INTERVIEW: RELIABILITY AND VALIDITY FOR THE EVALUATION OF PSYCHOLOGICAL INJURY

    Directory of Open Access Journals (Sweden)

    Francisca Fariña

    2013-01-01

    Full Text Available Forensic evaluation of psychological injury involves a multimethod approach, i.e., a psychometric instrument, normally the MMPI-2, and a clinical interview. As for the clinical interview, the traditional clinical interview (e.g., the SCID) is not valid for forensic settings, as it does not fulfil the triple objective of forensic evaluation: diagnosis of psychological injury in terms of Post-Traumatic Stress Disorder (PTSD), a differential diagnosis of feigning, and establishment of a causal relationship between the allegations of intimate partner violence (IPV) and the psychological injury. To meet this requirement, Arce and Fariña (2001) created the forensic-clinical interview, based on two techniques that do not contaminate the contents, i.e., reinstating the contexts and free recall, and a methodical categorical system of content analysis for the diagnosis of psychological injury and a differential diagnosis of feigning. The reliability and validity of the forensic-clinical interview designed for the forensic evaluation of psychological injury were assessed in 51 genuine cases of IPV and 54 mock victims of IPV, who were evaluated using a forensic-clinical interview and the MMPI-2. The results revealed that the forensic-clinical interview was a reliable instrument (α = .85 for diagnostic criteria of psychological injury, and α = .744 for feigning strategies). Moreover, the results corroborated the predictive validity (the diagnosis of PTSD was similar to the expected rate), the convergent validity (the diagnosis of PTSD in the interview correlated strongly with the Pk Scale of the MMPI-2), and the discriminant validity (the diagnosis of PTSD in the interview did not correlate with the Pk Scale in feigners). The feigning strategies (differential diagnosis) also showed convergent validity (a high correlation with the scales and indices of the MMPI-2 for the measure of feigning) and discriminant validity (no genuine victim was classified as a feigner).

  19. Validity and reliability of a health care service evaluation instrument for tuberculosis.

    Science.gov (United States)

    Scatena, Lucia Marina; Wysocki, Anneliese Domingues; Beraldo, Aline Ale; Magnabosco, Gabriela Tavares; Brunello, Maria Eugênia Firmino; Netto Ruffino, Antonio; Nogueira, Jordana de Almeida; Silva Sobrinho, Reinaldo Antonio; Brito, Ewerton William Gomes; Alexandre, Patricia Borges Dias; Monroe, Aline Aparecida; Villa, Tereza Cristina Scatena

    2015-01-01

    OBJECTIVE To evaluate the validity and reliability of an instrument that evaluates the structure of primary health care units for the treatment of tuberculosis. METHODS This cross-sectional study used simple random sampling and evaluated 1,037 health care professionals from five Brazilian municipalities (Natal, state of Rio Grande do Norte; Cabedelo, state of Paraíba; Foz do Iguaçu, state of Parana; Sao José do Rio Preto, state of Sao Paulo, and Uberaba, state of Minas Gerais) in 2011. Structural indicators were identified and validated, considering different methods of organization of the health care system in the municipalities of different population sizes. Each structure represented the organization of health care services and contained the resources available for the execution of health care services: physical resources (equipment, consumables, and facilities); human resources (number and qualification); and resources for maintenance of the existing infrastructure and technology (deemed as the organization of health care services). The statistical analyses used in the validation process included reliability analysis, exploratory factor analysis, and confirmatory factor analysis. RESULTS The validation process indicated the retention of five factors, with 85.9% of the total variance explained, internal consistency between 0.6460 and 0.7802, and quality of fit of the confirmatory factor analysis of 0.995 using the goodness-of-fit index. The retained factors comprised five structural indicators: professionals involved in the care of tuberculosis patients, training, access to recording instruments, availability of supplies, and coordination of health care services with other levels of care. Availability of supplies had the best performance and the lowest coefficient of variation among the services evaluated. The indicators of assessment of human resources and coordination with other levels of care had satisfactory performance, but the latter showed the highest

  20. Validity and reliability of a health care service evaluation instrument for tuberculosis

    Directory of Open Access Journals (Sweden)

    Lucia Marina Scatena

    2015-01-01

    Full Text Available OBJECTIVE To evaluate the validity and reliability of an instrument that evaluates the structure of primary health care units for the treatment of tuberculosis. METHODS This cross-sectional study used simple random sampling and evaluated 1,037 health care professionals from five Brazilian municipalities (Natal, state of Rio Grande do Norte; Cabedelo, state of Paraíba; Foz do Iguaçu, state of Parana; Sao José do Rio Preto, state of Sao Paulo; and Uberaba, state of Minas Gerais) in 2011. Structural indicators were identified and validated, considering different methods of organization of the health care system in municipalities of different population sizes. Each structure represented the organization of health care services and contained the resources available for the execution of health care services: physical resources (equipment, consumables, and facilities); human resources (number and qualification); and resources for maintenance of the existing infrastructure and technology (deemed as the organization of health care services). The statistical analyses used in the validation process included reliability analysis, exploratory factor analysis, and confirmatory factor analysis. RESULTS The validation process indicated the retention of five factors, with 85.9% of the total variance explained, internal consistency between 0.6460 and 0.7802, and quality of fit of the confirmatory factor analysis of 0.995 using the goodness-of-fit index. The retained factors comprised five structural indicators: professionals involved in the care of tuberculosis patients, training, access to recording instruments, availability of supplies, and coordination of health care services with other levels of care. Availability of supplies had the best performance and the lowest coefficient of variation among the services evaluated. The indicators of assessment of human resources and coordination with other levels of care had satisfactory performance, but the latter

  1. Evaluating North American Electric Grid Reliability Using the Barabasi-Albert Network Model

    Energy Technology Data Exchange (ETDEWEB)

    Chassin, David P.; Posse, Christian

    2005-09-15

    The reliability of electric transmission systems is examined using a scale-free model of network topology and failure propagation. The topologies of the North American eastern and western electric grids are analyzed to estimate their reliability based on the Barabasi-Albert network model. A commonly used power system reliability index is computed using a simple failure propagation model. The results are compared to the values of power system reliability indices previously obtained using standard power engineering methods, and they suggest that scale-free network models are usable to estimate aggregate electric grid reliability.
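    For a feel of the network model involved, here is a toy sketch that builds a Barabási-Albert graph with networkx and applies a crude single-hub failure; the graph size and parameters are illustrative, not the actual grid topology, and the removal step is only a stand-in for the paper's failure propagation model:

        import networkx as nx

        # Hypothetical scale-free topology (n and m are invented)
        G = nx.barabasi_albert_graph(n=5000, m=2, seed=42)

        # Crude failure proxy: remove the highest-degree node and measure
        # how much of the network remains in the giant component
        hub = max(G.degree, key=lambda kv: kv[1])[0]
        G.remove_node(hub)
        giant = max(nx.connected_components(G), key=len)
        print(f"fraction still connected: {len(giant) / 5000:.3f}")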

  2. Evaluating North American Electric Grid Reliability Using the Barabasi-Albert Network Model

    Energy Technology Data Exchange (ETDEWEB)

    Chassin, David P.; Posse, Christian

    2005-09-15

    The reliability of electric transmission systems is examined using a scale-free model of network topology and failure propagation. The topologies of the North American eastern and western electric grids are analyzed to estimate their reliability based on the Barabási-Albert network model. A commonly used power system reliability index is computed using a simple failure propagation model. The results are compared to the values of power system reliability indices previously obtained using other methods and they suggest that scale-free network models are usable to estimate aggregate electric grid reliability.

  3. Landscape-scale parameterization of a tree-level forest growth model: a k-nearest neighbor imputation approach incorporating LiDAR data

    Science.gov (United States)

    Michael J. Falkowski; Andrew T. Hudak; Nicholas L. Crookston; Paul E. Gessler; Edward H. Uebler; Alistair M. S. Smith

    2010-01-01

    Sustainable forest management requires timely, detailed forest inventory data across large areas, which is difficult to obtain via traditional forest inventory techniques. This study evaluated k-nearest neighbor imputation models incorporating LiDAR data to predict tree-level inventory data (individual tree height, diameter at breast height, and...
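    A minimal stand-in for the k-NN imputation idea, using sklearn's KNNImputer to fill tree attributes from invented LiDAR-style predictors. This is the generic technique, not the authors' model, and all columns and distributions are hypothetical:

        import numpy as np
        from sklearn.impute import KNNImputer

        rng = np.random.default_rng(4)
        # Columns: two hypothetical LiDAR metrics, tree height (m), DBH (cm)
        data = rng.normal(loc=[20, 5, 25, 40], scale=[4, 1, 5, 8], size=(200, 4))
        data[rng.random(200) < 0.5, 2:] = np.nan   # field attributes unmeasured

        imputer = KNNImputer(n_neighbors=5)         # k-NN donors found in LiDAR space
        completed = imputer.fit_transform(data)
        print(completed[:3].round(1))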

  4. High intertester reliability of the cumulated ambulation score for the evaluation of basic mobility in patients with hip fracture

    DEFF Research Database (Denmark)

    Kristensen, Morten Tange; Andersen, Lene; Bech-Jensen, Rie

    2009-01-01

    OBJECTIVE: To examine the intertester reliability of the three activities of the Cumulated Ambulation Score (CAS) and the total CAS, and to define limits for the smallest change in basic mobility that indicates a real change in patients with hip fracture. DESIGN: An intertester reliability study. ... independent ambulation. MAIN MEASURES: Reliability was evaluated using weighted kappa statistics, the standard error of measurement (SEM) and the smallest real difference (SRD). RESULTS: The kappa coefficient, the SEM and the SRD in the three activities and the total CAS were ≥ 0.92, ...
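    The SEM and SRD reported here follow standard formulas, SEM = SD·√(1 − ICC) and SRD = 1.96·√2·SEM. A small sketch with invented CAS numbers (not the study's values):

        import numpy as np

        def sem_srd(sd, icc):
            """Standard error of measurement and smallest real difference (95%)."""
            sem = sd * np.sqrt(1.0 - icc)
            srd = 1.96 * np.sqrt(2.0) * sem
            return sem, srd

        # Hypothetical: between-subject SD of 1.1 CAS points, ICC of 0.92
        sem, srd = sem_srd(1.1, 0.92)
        print(f"SEM = {sem:.2f}, SRD = {srd:.2f} points")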

  5. The water balance questionnaire: design, reliability and validity of a questionnaire to evaluate water balance in the general population.

    Science.gov (United States)

    Malisova, Olga; Bountziouka, Vassiliki; Panagiotakos, Demosthenes B; Zampelas, Antonis; Kapsokefalou, Maria

    2012-03-01

    There is a need to develop a questionnaire as a research tool for the evaluation of water balance in the general population. The water balance questionnaire (WBQ) was designed to evaluate water intake from fluid and solid foods and drinking water, and water loss via urine, faeces and sweat under sedentary conditions and during physical activity. For validation purposes, the WBQ was administered to 40 apparently healthy participants aged 22-57 years (37.5% males). Hydration indices in urine (24 h volume, osmolality, specific gravity, pH, colour) were measured through established procedures. Furthermore, the questionnaire was administered twice to 175 subjects to evaluate its reliability. Kendall's τ-b and the Bland and Altman method were used to assess the questionnaire's validity and reliability. The proposed WBQ to assess water balance in healthy individuals was found to be valid and reliable, and could thus be a useful tool in future projects that aim to evaluate water balance.

  6. Systematic evaluation of the teaching qualities of Obstetrics and Gynecology faculty: reliability and validity of the SETQ tools.

    Science.gov (United States)

    van der Leeuw, Renée; Lombarts, Kiki; Heineman, Maas Jan; Arah, Onyebuchi

    2011-05-03

    The importance of effective clinical teaching for the quality of future patient care is globally understood. Due to recent changes in graduate medical education, new tools are needed to provide faculty with reliable and individualized feedback on their teaching qualities. This study validates two instruments underlying the System for Evaluation of Teaching Qualities (SETQ) aimed at measuring and improving the teaching qualities of obstetrics and gynecology faculty. This cross-sectional multi-center questionnaire study was set in seven general teaching hospitals and two academic medical centers in the Netherlands. Seventy-seven residents and 114 faculty were invited to complete the SETQ instruments in the duration of one month from September 2008 to September 2009. To assess reliability and validity of the instruments, we used exploratory factor analysis, inter-item correlation, reliability coefficient alpha and inter-scale correlations. We also compared composite scales from factor analysis to global ratings. Finally, the number of residents' evaluations needed per faculty for reliable assessments was calculated. A total of 613 evaluations were completed by 66 residents (85.7% response rate). 99 faculty (86.8% response rate) participated in self-evaluation. Factor analysis yielded five scales with high reliability (Cronbach's alpha for residents' and faculty): learning climate (0.86 and 0.75), professional attitude (0.89 and 0.81), communication of learning goals (0.89 and 0.82), evaluation of residents (0.87 and 0.79) and feedback (0.87 and 0.86). Item-total, inter-scale and scale-global rating correlation coefficients were significant (P<0.01). Four to six residents' evaluations are needed per faculty (reliability coefficient 0.60-0.80). Both SETQ instruments were found reliable and valid for evaluating teaching qualities of obstetrics and gynecology faculty. Future research should examine improvement of teaching qualities when using SETQ.

  7. Fitting additive hazards models for case-cohort studies: a multiple imputation approach.

    Science.gov (United States)

    Jung, Jinhyouk; Harel, Ofer; Kang, Sangwook

    2016-07-30

    In this paper, we consider fitting semiparametric additive hazards models for case-cohort studies using a multiple imputation approach. In a case-cohort study, main exposure variables are measured only on some selected subjects, but other covariates are often available for the whole cohort. We consider this as a special case of a missing covariate by design. We propose to employ a popular incomplete data method, multiple imputation, for estimation of the regression parameters in additive hazards models. For imputation models, an imputation modeling procedure based on a rejection sampling is developed. A simple imputation modeling that can naturally be applied to a general missing-at-random situation is also considered and compared with the rejection sampling method via extensive simulation studies. In addition, a misspecification aspect in imputation modeling is investigated. The proposed procedures are illustrated using a cancer data example. Copyright © 2015 John Wiley & Sons, Ltd.

  8. Evaluating the reliability of ultrasonographic parameters in differentiating benign from malignant superficial lymphadenopathy

    Directory of Open Access Journals (Sweden)

    Sarir Nazemi

    2016-11-01

    Full Text Available Diagnosis of malignant lymphadenopathy is of particular importance for treatment planning, pre-treatment staging and prognosis determination. Currently, various diagnostic procedures are used to differentiate benign from malignant lymphadenopathy, and these are invasive and costly. Ultrasonography is proposed as a noninvasive, low-cost and accessible alternative. The aim of this study was to evaluate the reliability of some ultrasonographic parameters in differentiating malignant from benign superficial lymphadenopathies. Ultrasonography was performed on the lymph nodes of 100 patients who were eligible for pathological evaluation of superficial lymphadenopathy. The most accessible lymph nodes were marked and biopsied, and the sonographic and pathologic results were compared. The sensitivity and specificity of the test and the appropriate cutoff point were determined based on the Receiver Operating Characteristic (ROC) curve using SPSS Ver. 17. Of the 100 evaluated lymph nodes, 55 were benign and 45 were malignant. There was no significant difference between malignant and benign lymph nodes in terms of cortical and medullary thickness (p = 0.055), but there was a significant difference in terms of blood supply pattern and mean Pulsatility Index (PI) (p = 0.007) and Resistive Index (RI) (p < 0.001). A cortex thickness of 7.95 mm, with 62.2% sensitivity, 72.7% specificity and 70% accuracy, was the appropriate cutoff point for differentiating malignant from benign lymphadenopathy. The color Doppler criteria in combination with gray-scale ultrasonography can help select patients for biopsy or Fine Needle Aspiration (FNA), but cannot fully replace pathological evaluation.
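    The ROC-based cutoff selection can be sketched as follows, using Youden's J on simulated cortical-thickness values (the distributions are invented, not the patient data):

        import numpy as np
        from sklearn.metrics import roc_curve

        rng = np.random.default_rng(5)
        # Hypothetical cortical thickness (mm): 55 benign vs 45 malignant nodes
        thickness = np.concatenate([rng.normal(6.5, 1.5, 55), rng.normal(9.0, 1.8, 45)])
        malignant = np.concatenate([np.zeros(55), np.ones(45)])

        fpr, tpr, thresholds = roc_curve(malignant, thickness)
        best = np.argmax(tpr - fpr)        # Youden's J = sensitivity + specificity - 1
        print(f"cutoff = {thresholds[best]:.2f} mm, "
              f"sensitivity = {tpr[best]:.2f}, specificity = {1 - fpr[best]:.2f}")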

  9. Test-Retest Reliability and Practice Effects of the Stability Evaluation Test.

    Science.gov (United States)

    Williams, Richelle M; Corvo, Matthew A; Lam, Kenneth C; Williams, Travis A; Gilmer, Lesley K; McLeod, Tamara C Valovich

    2017-01-17

    Postural control plays an essential role in concussion evaluation. The Stability Evaluation Test (SET) aims to objectively analyze postural control by measuring sway velocity on the NeuroCom VSR portable force platform (Natus, San Carlos, CA). To assess the test-retest reliability and practice effects of the SET protocol. Cohort. Research Laboratory. Fifty healthy adults (males=20, females=30, age=25.30±3.60 years, height=166.60±12.80 cm, mass=68.80±13.90 kg). All participants completed four trials of the SET. Each trial consisted of six 20-second balance tests with eyes closed, under the following conditions: double-leg firm (DFi), single-leg firm (SFi), tandem firm (TFi), double-leg foam (DFo), single-leg foam (SFo), and tandem foam (TFo). Each trial was separated by a 5-minute seated rest period. The dependent variable was sway velocity (deg/sec), with lower values indicating better balance. Sway velocity was recorded for each of the six conditions as well as a composite score for each trial. Test-retest reliability was analyzed across the four trials with intraclass correlation coefficients. Practice effects were analyzed with repeated measures analysis of variance, followed by Tukey post-hoc comparisons for any significant main effects (p < 0.05). Test-retest reliability values were good to excellent: DFi (ICC=0.88;95%CI:0.81,0.92), SFi (ICC=0.75;95%CI:0.61,0.85), TFi (ICC=0.84;95%CI:0.75,0.90), DFo (ICC=0.83;95%CI:0.74,0.90), SFo (ICC=0.82;95%CI:0.72,0.89), TFo (ICC=0.81;95%CI:0.69,0.88), and composite score (ICC=0.93;95%CI:0.88,0.95). Significant practice effects (p < 0.05) were observed; overall, the SET demonstrated good to excellent test-retest reliability for the assessment of postural control in healthy adults. Due to the practice effects noted, a familiarization session is recommended (i.e., all 6 conditions) prior to recording the data. Future studies should evaluate injured patients to determine meaningful change scores during various injuries.

  10. Reliability and Discriminative Ability of a New Method for Soccer Kicking Evaluation.

    Directory of Open Access Journals (Sweden)

    Ivan Radman

    Full Text Available The study aimed to evaluate the test-retest reliability of the newly developed 356 Soccer Shooting Test (356-SST), and the discriminative ability of this test with respect to the soccer players' proficiency level and leg dominance. Sixty-six male soccer players, divided into three groups based on their proficiency level (amateur, n = 24; novice semi-professional, n = 18; and experienced semi-professional players, n = 24), performed 10 kicks following a two-step run up. Forty-eight of them repeated the test on a separate day. The following shooting variables were derived: ball velocity (BV; measured via radar gun), shooting accuracy (SA; average distance from the ball-entry point to the goal centre), and shooting quality (SQ; shooting accuracy divided by the time elapsed from hitting the ball to the point of entry). No systematic bias was evident in the selected shooting variables (SA: 1.98±0.65 vs. 2.00±0.63 m; BV: 24.6±2.3 vs. 24.5±1.9 m s-1; SQ: 2.92±1.0 vs. 2.93±1.0 m s-1; all p>0.05). The intra-class correlation coefficients were high (ICC = 0.70-0.88), and the coefficients of variation were low (CV = 5.3-5.4%). Finally, all three 356-SST variables identify, with adequate sensitivity, differences in soccer shooting ability with respect to the players' proficiency and leg dominance. The results suggest that the 356-SST is a reliable and sensitive test of specific shooting ability in men's soccer. Future studies should test the validity of these findings in a fatigued state, as well as in other populations.

  11. The reliability evaluation of reclaimed water reused in power plant project

    Science.gov (United States)

    Yang, Jie; Jia, Ru-sheng; Gao, Yu-lan; Wang, Wan-fen; Cao, Peng-qiang

    2017-12-01

    The reuse of reclaimed water has become one of the important measures for addressing the shortage of water resources in many cities, but there is no unified way to evaluate such projects. Taking the Wanneng power plant project in Huai city as an example, this study analyzed the reliability of wastewater reuse in terms of reclaimed water quality, the water quality of the sewage plant, the city's present sewage quantity and the forecast of reclaimed water yield; in particular, it was necessary to correct the actual operating flow rate of the sewage plant. The results showed that, despite fluctuations in inlet water quality, the outlet water quality of the sewage treatment plant is basically stable and can meet the requirements for circulating cooling water, but suspended solids (SS) and total hardness in boiler water exceed the limits, so advanced treatment should be carried out. In addition, the total sewage discharge will reach 13.91×10⁴ m³/d and 14.21×10⁴ m³/d respectively in the two planning-level years of the project. These values are greater than the normal collection capacity of the sewage system, 12.0×10⁴ m³/d, and the reclaimed water yield can reach 10.74×10⁴ m³/d, which is greater than the 8.25×10⁴ m³/d actually needed by the power plant, so wastewater reuse from this sewage plant is feasible and reliable for the power plant from an engineering point of view.

  12. Validity and Reliability of the Korean Version of the Utrecht Scale for Evaluation of Rehabilitation-Participation

    NARCIS (Netherlands)

    Lee, Joo-Hyun; Park, Ji-Hyuk; Kim, Yeong Jo; Lee, Sang Heon; Post, Marcel W. M.; Park, Hae Yean

    2017-01-01

    This study investigated the reliability and validity of the Korean version of the Utrecht Scale for Evaluation of Rehabilitation-Participation (K-USER-P) in patients with stroke. Stroke patients participated in this study. The Utrecht Scale for Evaluation of Rehabilitation-Participation was

  13. Multiple imputation of continuous data via a semiparametric probability integral transformation.

    Science.gov (United States)

    Helenowski, Irene B; Demirtas, Hakan

    2014-01-01

    We propose a semiparametric approach incorporating principles of multiple imputation under the normality assumption, multivariate number generation, and computation of empirical cumulative distribution function (eCDF) values to impute continuous data with variables following any marginal distribution. This method involves mapping the data to normally distributed values, imputing these values, and back-transforming the data onto the scale of the original data. The transformations associated with eCDF computations constitute the nonparametric portion of our algorithm, while imputation under the normality assumption constitutes the parametric portion. Application of this method to simulated and real data leads to promising results.
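    The transformation pipeline described (eCDF values → normal scores → impute under normality → back-transform by empirical quantiles) can be sketched as below. The imputed draws are stand-ins: a real application would replace them with draws from a normal-theory multiple-imputation step:

        import numpy as np
        from scipy import stats

        rng = np.random.default_rng(6)
        x = rng.gamma(2.0, 2.0, size=500)          # skewed variable to be imputed

        # Forward: eCDF values -> normal scores (the semiparametric transformation)
        ranks = stats.rankdata(x) / (len(x) + 1)   # eCDF values in (0, 1)
        z = stats.norm.ppf(ranks)                  # now approximately N(0, 1)

        # ... impute missing z-values under the normality assumption here ...
        z_imputed = rng.standard_normal(10)        # stand-in for imputed draws

        # Back: normal scores -> original scale via empirical quantiles
        u = stats.norm.cdf(z_imputed)
        back = np.quantile(x, u)
        print(back.round(2))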

  14. Assessing and comparison of different machine learning methods in parent-offspring trios for genotype imputation.

    Science.gov (United States)

    Mikhchi, Abbas; Honarvar, Mahmood; Kashan, Nasser Emam Jomeh; Aminafshar, Mehdi

    2016-06-21

    Genotype imputation is an important tool for the prediction of unknown genotypes for both unrelated individuals and parent-offspring trios. Several imputation methods are available; they can either employ universal machine learning methods or deploy algorithms dedicated to inferring missing genotypes. In this research, the performance of eight machine learning methods (Support Vector Machine, K-Nearest Neighbors, Extreme Learning Machine, Radial Basis Function, Random Forest, AdaBoost, LogitBoost, and TotalBoost) was compared in terms of imputation accuracy, computation time and the factors affecting imputation accuracy. The methods were applied to real and simulated datasets to impute the un-typed SNPs in parent-offspring trios. The tested methods show that imputation of parent-offspring trios can be accurate. Random Forest and Support Vector Machine were more accurate than the other machine learning methods, while TotalBoost performed slightly worse than the others. The running times differed between methods: the ELM was always the fastest algorithm, whereas with increasing sample size the RBF required a long imputation time. The methods tested in this research can be an alternative for imputation of un-typed SNPs at low rates of missing data. However, it is recommended that other machine learning methods also be explored for imputation. Copyright © 2016 Elsevier Ltd. All rights reserved.

  15. An Appropriate Wind Model for Wind Integrated Power Systems Reliability Evaluation Considering Wind Speed Correlations

    Directory of Open Access Journals (Sweden)

    Rajesh Karki

    2013-02-01

    Full Text Available Adverse environmental impacts of carbon emissions are causing increasing concerns to the general public throughout the world. Electric energy generation from conventional energy sources is considered to be a major contributor to these harmful emissions. High emphasis is therefore being given to green alternatives of energy, such as wind and solar. Wind energy is being perceived as a promising alternative. This source of energy technology and its applications have undergone significant research and development over the past decade. As a result, many modern power systems include a significant portion of power generation from wind energy sources. The impact of wind generation on the overall system performance increases substantially as wind penetration in power systems continues to increase to relatively high levels. It becomes increasingly important to accurately model the wind behavior, the interaction with other wind sources and conventional sources, and incorporate the characteristics of the energy demand in order to carry out a realistic evaluation of system reliability. Power systems with high wind penetrations are often connected to multiple wind farms at different geographic locations. Wind speed correlations between the different wind farms largely affect the total wind power generation characteristics of such systems, and therefore should be an important parameter in the wind modeling process. This paper evaluates the effect of the correlation between multiple wind farms on the adequacy indices of wind-integrated systems. The paper also proposes a simple and appropriate probabilistic analytical model that incorporates wind correlations, and can be used for adequacy evaluation of multiple wind-integrated systems.
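    One common way to incorporate wind speed correlation between farms in such adequacy models is a Gaussian copula over Weibull marginals; whether this matches the paper's proposed analytical model is not stated, so treat it as a generic sketch with invented parameters:

        import numpy as np
        from scipy import stats

        rng = np.random.default_rng(7)
        rho = 0.7                                   # hypothetical site-to-site correlation
        cov = np.array([[1.0, rho], [rho, 1.0]])
        z = rng.multivariate_normal([0.0, 0.0], cov, size=8760)   # hourly, one year

        # Map correlated normal scores to Weibull wind-speed marginals
        # (scipy's c is the Weibull shape; shape 2.0, scale 8 m/s assumed)
        u = stats.norm.cdf(z)
        wind = stats.weibull_min.ppf(u, c=2.0, scale=8.0)
        print(np.corrcoef(wind.T)[0, 1].round(2))   # correlation largely preserved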

  16. A questionnaire to evaluate the impact of chronic diseases: validated translation and Illness Effects Questionnaire (IEQ) reliability study

    Directory of Open Access Journals (Sweden)

    Patrícia Pinto Fonseca

    2012-01-01

    Full Text Available INTRODUCTION: Patients' perception of their health condition, particularly in chronic diseases, has been investigated in many studies and has been associated with depression, compliance with treatment, quality of life and prognosis. The Illness Effects Questionnaire (IEQ) is a tool that makes possible the standardized evaluation of patients' perception of their illness, while remaining brief and accessible to different clinical settings. This work aims to begin the transcultural adaptation of the IEQ to Brazil through a validated translation and a reliability study. METHODS: The back-translation method and the test-retest reliability study were used in a sample of 30 adult patients under chronic hemodialysis. The reliability indexes were estimated using the Pearson, Spearman, weighted kappa and Cronbach's alpha coefficients. RESULTS: Semantic equivalence was reached through the validated translation. In this study, the reliability indexes obtained were, respectively: 0.85 and 0.75 (p < 0.001); 0.68 and 0.92 (p < 0.0001). DISCUSSION: The reliability indexes obtained attest to the stability of responses in both evaluations. Additional procedures are necessary for the transcultural adaptation of the IEQ to be complete. CONCLUSION: The results indicate the validity of the translation and the reliability of the Brazilian version of the IEQ for the sample studied.

  17. Evaluation of piping reliability and failure data for use in risk-based inspections of nuclear power plants

    Energy Technology Data Exchange (ETDEWEB)

    Vasconcelos, V. de; Soares, W.A.; Costa, A.C.L. da; Rabello, E.G.; Marques, R.O., E-mail: vasconv@cdtn.br [Centro de Desenvolvimento da Tecnologia Nuclear (CDTN/CNEN-MG), Belo Horizonte, MG (Brazil)

    2016-07-01

    During operation of industrial facilities, components and systems can deteriorate over time, thus increasing the possibility of accidents. Risk-Based Inspection (RBI) involves inspection planning based on information about risks, through assessment of the probability and consequence of failures. In-service inspections are used in nuclear power plants in order to ensure reliable and safe operation. Traditional deterministic inspection approaches investigate generic degradation mechanisms on all systems. However, operating experience indicates that degradation occurs where there are favorable conditions for developing a specific mechanism. Inspections should be prioritized at these places. Risk-Informed In-service Inspections (RI-ISI) are a type of RBI that uses Probabilistic Safety Assessment results, increasing reliability and plant safety, and reducing radiation exposure. These assessments use available generic reliability and failure data as well as plant-specific information. This paper proposes a method for evaluating piping reliability and failure data important for RI-ISI programs, as well as the techniques involved. (author)

  18. Utilisation, Reliability and Validity of Clinical Evaluation Exercise in Otolaryngology Training.

    Science.gov (United States)

    Awad, Z; Hayden, L; Muthuswamy, K; Tolley, N S

    2015-10-01

    To investigate the utilisation, reliability and validity of the clinical evaluation exercise (CEX) in otolaryngology training. Retrospective database analysis. Online assessment database. We analysed all CEXs submitted by north London core (CT) and speciality trainees (ST) in otolaryngology from 2010 to 2013. Internal consistency of the 7 CEX items rated as either O: outstanding, S: satisfactory or D: development required. Overall performance rating (pS) of 1-4 assessed against completion of training level. Receiver operating characteristic was used to describe CEX sensitivity and specificity. Overall score (cS), pS and the number of 'D'-rated items were used to investigate construct validity. One thousand one hundred and sixty CEXs from 45 trainees were included. CEX showed good internal consistency (Cronbach's alpha = 0.85). CEX was highly sensitive (99%), yet not specific (6%). cS and pS for ST were higher than for CT (99.1% ± 0.4 versus 96.6% ± 0.8 and 3.06 ± 0.05 versus 1.92 ± 0.04, respectively, P otolaryngology trainees in clinical examination, but not at higher level. It has the potential to be used in a summative capacity in selecting trainees for ST positions. This would also encourage trainees to master all domains of otolaryngology clinical examination by the end of CT. © 2015 John Wiley & Sons Ltd.

  19. Mitogenomic evaluation of the historical biogeography of cichlids toward reliable dating of teleostean divergences

    Directory of Open Access Journals (Sweden)

    Miya Masaki

    2008-07-01

    Full Text Available Abstract Background Recent advances in DNA sequencing and computation offer the opportunity for reliable estimates of divergence times between organisms based on molecular data. Bayesian estimations of divergence times that do not assume the molecular clock use time constraints at multiple nodes, usually based on the fossil records, as major boundary conditions. However, the fossil records of bony fishes may not adequately provide effective time constraints at multiple nodes. We explored an alternative source of time constraints in teleostean phylogeny by evaluating a biogeographic hypothesis concerning freshwater fishes from the family Cichlidae (Perciformes: Labroidei). Results We added new mitogenomic sequence data from six cichlid species and conducted phylogenetic analyses using a large mitogenomic data set. We found a reciprocal monophyly of African and Neotropical cichlids and their sister group relationship to some Malagasy taxa (Ptychochrominae sensu Sparks and Smith). All of these taxa clustered with a Malagasy + Indo/Sri Lankan clade (Etroplinae sensu Sparks and Smith). The results of the phylogenetic analyses and divergence time estimations between continental cichlid clades were much more congruent with Gondwanaland origin and Cretaceous vicariant divergences than with Cenozoic transmarine dispersal between major continents. Conclusion We propose to add the biogeographic assumption of cichlid divergences by continental fragmentation as effective time constraints in dating teleostean divergence times. We conducted divergence time estimations among teleosts by incorporating these additional time constraints and achieved a considerable reduction in credibility intervals in the estimated divergence times.

  20. A Correlated Model for Evaluating Performance and Energy of Cloud System Given System Reliability

    Directory of Open Access Journals (Sweden)

    Hongli Zhang

    2015-01-01

    Full Text Available The serious issue of energy consumption for high performance computing systems has attracted much attention. Performance and energy-saving have become important measures of a computing system. In the cloud computing environment, the systems usually allocate various resources (such as CPU, memory, storage, etc.) on multiple virtual machines (VMs) for executing tasks. Therefore, the problem of resource allocation for running VMs should have significant influence on both system performance and energy consumption. For different processor utilizations assigned to the VM, there exists a tradeoff between energy consumption and task completion time when a given task is executed by the VMs. Moreover, the hardware failure, software failure and restoration characteristics also have obvious influences on overall performance and energy. In this paper, a correlated model is built to analyze both performance and energy in the VM execution environment given the reliability restriction, and an optimization model is presented to derive the most effective solution of processor utilization for the VM. Then, the tradeoff between energy-saving and task completion time is studied and balanced when the VMs execute given tasks. Numerical examples are illustrated to build the performance-energy correlated model and evaluate the expected values of task completion time and consumed energy.
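
    The time-energy tradeoff the authors study can be made concrete with a toy calculation. The sketch below is a simplified stand-in for the paper's correlated model: it assumes a linear power-versus-utilization curve and a fixed task size (all made-up values) and grid-searches the utilization that minimizes a weighted time-energy objective.

    ```python
    # Minimal sketch: choosing a VM processor utilization that balances
    # completion time against energy. All constants are illustrative.
    import numpy as np

    W = 3.0e12                     # task size in CPU cycles (assumed)
    f = 3.0e9                      # processor frequency, cycles/s (assumed)
    P_idle, P_max = 70.0, 200.0    # watts, assumed linear power model

    u = np.linspace(0.05, 1.0, 200)        # candidate utilizations
    t = W / (u * f)                        # completion time, seconds
    P = P_idle + (P_max - P_idle) * u      # power draw at utilization u
    E = P * t                              # energy consumed, joules

    w_t, w_e = 0.5, 0.5                    # tradeoff weights (assumed)
    score = w_t * t / t.max() + w_e * E / E.max()
    print(f"utilization minimizing the weighted objective: {u[np.argmin(score)]:.2f}")
    ```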

  1. Systems Analysis Programs for Hands-On Integrated Reliability Evaluations (SAPHIRE) Technical Reference

    Energy Technology Data Exchange (ETDEWEB)

    C. L. Smith; W. J. Galyean; S. T. Beck

    2008-08-01

    The Systems Analysis Programs for Hands-on Integrated Reliability Evaluations (SAPHIRE) is a software application developed for performing a complete probabilistic risk assessment (PRA) using a personal computer (PC) running the Microsoft Windows operating system. Herein information is provided on the principles used in the construction and operation of Version 6.0 and 7.0 of the SAPHIRE system. This report summarizes the fundamental mathematical concepts of sets and logic, fault trees, and probability. This volume then describes the algorithms used to construct a fault tree and to obtain the minimal cut sets. It gives the formulas used to obtain the probability of the top event from the minimal cut sets, and the formulas for probabilities that apply for various assumptions concerning reparability and mission time. It defines the measures of basic event importance that SAPHIRE can calculate. This volume gives an overview of uncertainty analysis using simple Monte Carlo sampling or Latin Hypercube sampling, and states the algorithms used by this program to generate random basic event probabilities from various distributions. Also covered are enhanced capabilities such as seismic analysis, cut set "recovery," end state manipulation, and use of "compound events."
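
    The top-event computation mentioned here can be sketched in a few lines. The snippet below implements the textbook minimal cut set upper bound, not SAPHIRE's actual code; the basic-event probabilities and cut sets are invented for illustration.

    ```python
    # Minimal sketch: approximating the top-event probability from minimal
    # cut sets with the standard "min cut upper bound". Values are made up.
    from math import prod

    basic = {"A": 1e-3, "B": 5e-4, "C": 2e-3}   # basic-event probabilities
    cut_sets = [("A", "B"), ("C",)]             # minimal cut sets (assumed)

    def cut_prob(cs):
        # independence assumed among basic events within a cut set
        return prod(basic[e] for e in cs)

    # P(top) <= 1 - prod(1 - P(Ci)); exact only for disjoint cut sets
    p_top = 1.0 - prod(1.0 - cut_prob(cs) for cs in cut_sets)
    print(f"min-cut-set upper bound on the top event: {p_top:.3e}")
    ```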

  2. Systems Analysis Programs for Hands-On Integrated Reliability Evaluations (SAPHIRE) Technical Reference Manual

    Energy Technology Data Exchange (ETDEWEB)

    C. L. Smith; W. J. Galyean; S. T. Beck

    2006-07-01

    The Systems Analysis Programs for Hands-on Integrated Reliability Evaluations (SAPHIRE) is a software application developed for performing a complete probabilistic risk assessment (PRA) using a personal computer (PC) running the Microsoft Windows operating system. Herein information is provided on the principles used in the construction and operation of Version 6.0 and 7.0 of the SAPHIRE system. This report summarizes the fundamental mathematical concepts of sets and logic, fault trees, and probability. This volume then describes the algorithms used to construct a fault tree and to obtain the minimal cut sets. It gives the formulas used to obtain the probability of the top event from the minimal cut sets, and the formulas for probabilities that apply for various assumptions concerning reparability and mission time. It defines the measures of basic event importance that SAPHIRE can calculate. This volume gives an overview of uncertainty analysis using simple Monte Carlo sampling or Latin Hypercube sampling, and states the algorithms used by this program to generate random basic event probabilities from various distributions. Also covered are enhanced capabilities such as seismic analysis, cut set "recovery," end state manipulation, and use of "compound events."

  3. Systems Analysis Programs for Hands-on Integrated Reliability Evaluations (SAPHIRE) Data Loading Manual

    Energy Technology Data Exchange (ETDEWEB)

    C. L. Smith; K. J. Kvarfordt; S. T. Wood

    2006-07-01

    The Systems Analysis Programs for Hands-on Integrated Reliability Evaluations (SAPHIRE) is a software application developed for performing a complete probabilistic risk assessment (PRA) using a personal computer. SAPHIRE is primarily funded by the U.S. Nuclear Regulatory Commission (NRC) and developed by the Idaho National Laboratory. This report is intended to assist the user to enter PRA data into the SAPHIRE program using the built-in MAR-D ASCII-text file data transfer process. Towards this end, a small sample database is constructed and utilized for demonstration. Where applicable, the discussion includes how the data processes for loading the sample database relate to the actual processes used to load larger PRA models. The procedures described herein were developed for use with SAPHIRE Version 6.0 and Version 7.0. In general, the data transfer procedures for version 6 and 7 are the same, but where deviations exist, the differences are noted. The guidance specified in this document will allow a user to have sufficient knowledge to both understand the data format used by SAPHIRE and to carry out the transfer of data between different PRA projects.

  4. A reliable in vitro fruiting system for armillaria mellea for evaluation of agrobacterium tumefaciens transformation vectors

    Science.gov (United States)

    Armillaria mellea is a serious pathogen of horticultural and agricultural systems in Europe and North America. The lack of a reliable in vitro fruiting system has hindered research, and necessitated dependence on intermittently available wild-collected basidiospores. Here we describe a reliable, rep...

  5. Reliability of candida skin test in the evaluation of T-cell function in ...

    African Journals Online (AJOL)

    Background: Both standardized and non-standardized candida skin tests are used in clinical practice for functional in-vivo assessment of cellular immunity with variable results and are considered not reliable under the age of 1 year. We sought to investigate the reliability of using manually prepared candida intradermal test ...

  6. Evaluation of the reliability and validity of a students' satisfaction questionnaire for training chairs

    Directory of Open Access Journals (Sweden)

    Samira Ansari

    2017-09-01

    conclusion: According to this study's results, the validity and reliability indices of the researcher-developed questionnaire show that it has adequate validity and reliability for assessing students' satisfaction with training chairs, and it can be used to assess any office and student chairs.

  7. Reliability Evaluation of a Single-phase H-bridge Inverter with Integrated Active Power Decoupling

    DEFF Research Database (Denmark)

    Tang, Junchaojie; Wang, Haoran; Ma, Siyuan

    2016-01-01

    Various power decoupling methods have been proposed recently to replace the DC-link Electrolytic Capacitors (E-caps) in single-phase conversion system, in order to extend the lifetime and improve the reliability of the DC-link. However, it is still an open question whether the converter level...... reliability becomes better or not, since additional components are introduced and the loading of the existing components may be changed. This paper aims to study the converter level reliability of a single-phase full-bridge inverter with two kinds of active power decoupling module and to compare...... it with the traditional passive DC-link solution. The converter level reliability is obtained by component level electro-thermal stress modeling, lifetime model, Weibull distribution, and Reliability Block Diagram (RBD) method. The results are demonstrated by a 2 kW single-phase inverter application....

  8. Reliability Engineering

    CERN Document Server

    Lazzaroni, Massimo

    2012-01-01

    This book gives a practical guide for designers and users in Information and Communication Technology context. In particular, in the first Section, the definition of the fundamental terms according to the international standards are given. Then, some theoretical concepts and reliability models are presented in Chapters 2 and 3: the aim is to evaluate performance for components and systems and reliability growth. Chapter 4, by introducing the laboratory tests, puts in evidence the reliability concept from the experimental point of view. In ICT context, the failure rate for a given system can be

  9. Systems Analysis Programs for Hands-on Integrated Reliability Evaluations (SAPHIRE) Code Reference Manual

    Energy Technology Data Exchange (ETDEWEB)

    C. L. Smith; K. J. Kvarfordt; S. T. Wood

    2006-07-01

    The Systems Analysis Programs for Hands-on Integrated Reliability Evaluations (SAPHIRE) is a software application developed for performing a complete probabilistic risk assessment (PRA) using a personal computer. SAPHIRE is funded by the U.S. Nuclear Regulatory Commission (NRC) and developed by the Idaho National Laboratory (INL). The INL's primary role in this project is that of software developer. However, the INL also plays an important role in technology transfer by interfacing and supporting SAPHIRE users comprised of a wide range of PRA practitioners from the NRC, national laboratories, the private sector, and foreign countries. SAPHIRE can be used to model a complex system’s response to initiating events, quantify associated damage outcome frequencies, and identify important contributors to this damage (Level 1 PRA) and to analyze containment performance during a severe accident and quantify radioactive releases (Level 2 PRA). It can be used for a PRA evaluating a variety of operating conditions, for example, for a nuclear reactor at full power, low power, or at shutdown conditions. Furthermore, SAPHIRE can be used to analyze both internal and external initiating events and has special features for transforming models built for internal event analysis to models for external event analysis. It can also be used in a limited manner to quantify risk in terms of release consequences to both the public and the environment (Level 3 PRA). SAPHIRE includes a separate module called the Graphical Evaluation Module (GEM). GEM provides a highly specialized user interface with SAPHIRE that automates SAPHIRE process steps for evaluating operational events at commercial nuclear power plants. Using GEM, an analyst can estimate the risk associated with operational events in a very efficient and expeditious manner. This reference guide will introduce the SAPHIRE Version 7.0 software. A brief discussion of the purpose and history of the software is included along with

  10. Systems Analysis Programs for Hands-on Integrated Reliability Evaluations (SAPHIRE) Code Reference Manual

    Energy Technology Data Exchange (ETDEWEB)

    C. L. Smith; K. J. Kvarfordt; S. T. Wood

    2008-08-01

    The Systems Analysis Programs for Hands-on Integrated Reliability Evaluations (SAPHIRE) is a software application developed for performing a complete probabilistic risk assessment (PRA) using a personal computer. SAPHIRE is funded by the U.S. Nuclear Regulatory Commission (NRC) and developed by the Idaho National Laboratory (INL). The INL's primary role in this project is that of software developer. However, the INL also plays an important role in technology transfer by interfacing and supporting SAPHIRE users comprised of a wide range of PRA practitioners from the NRC, national laboratories, the private sector, and foreign countries. SAPHIRE can be used to model a complex system’s response to initiating events, quantify associated damage outcome frequencies, and identify important contributors to this damage (Level 1 PRA) and to analyze containment performance during a severe accident and quantify radioactive releases (Level 2 PRA). It can be used for a PRA evaluating a variety of operating conditions, for example, for a nuclear reactor at full power, low power, or at shutdown conditions. Furthermore, SAPHIRE can be used to analyze both internal and external initiating events and has special features for transforming models built for internal event analysis to models for external event analysis. It can also be used in a limited manner to quantify risk in terms of release consequences to both the public and the environment (Level 3 PRA). SAPHIRE includes a separate module called the Graphical Evaluation Module (GEM). GEM provides a highly specialized user interface with SAPHIRE that automates SAPHIRE process steps for evaluating operational events at commercial nuclear power plants. Using GEM, an analyst can estimate the risk associated with operational events in a very efficient and expeditious manner. This reference guide will introduce the SAPHIRE Version 7.0 software. A brief discussion of the purpose and history of the software is included along with

  11. Reliability, Resilience, and Vulnerability criteria for the evaluation of Human Health Risks

    Science.gov (United States)

    Rodak, C. M.; Silliman, S. E.; Bolster, D.

    2011-12-01

    Understanding the impact of water quality on the health of a general population is challenging due to high degrees of uncertainty and variability in hydrological, toxicological and human aspects of the system. Assessment of the impact of changes in water quality of a public water supply is critical to management of that water supply. We propose the use of three different system evaluation criteria: Reliability, Resilience and Vulnerability (RRV) as a tool for assessing the impact of uncertainty in the arrival of contaminant mass through time with respect to human health risks on a variable population. These criteria were first introduced to the water resources community by Hashimoto et al. (1982). Most simply, one can understand these criteria as the following: Reliability is the likelihood of the system being in a state of success; Resilience is the probability that the system will return to a state of success at t+1 if it is in failure at time step t; and Vulnerability is the severity of failure, which here is defined as the maximum health risk. These concepts are applied to a theoretical example where the water quality at a water supply well varies over time: health impact is considered based on sliding, 30-year windows of exposure to water derived from the well. We apply the methodology, in terms of uncertainty in water quality deviations, to eight simulated breakthrough curves of a contaminant at the well: each curve represents equal mass of contaminant arriving at the well over a 70-year lifetime of the well, but different mass distributions over time. These curves are used to investigate the impact of uncertainty in the distribution through time of the contaminant mass at the well, as well as the initial arrival of the contaminant over the 70-year lifetime of the well. In addition to extending the health risk through time with uncertainty in mass distribution, we incorporate variability in the human population to examine the evolution of the three criteria within
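
    A compact way to see how the three criteria interact is to compute them on a toy exposure series. The sketch below follows the Hashimoto-style definitions paraphrased above; the hazard series and the success threshold are invented for illustration.

    ```python
    # Minimal sketch: Reliability, Resilience, Vulnerability on a toy
    # time series of a hazard index. Data and threshold are made up.
    import numpy as np

    risk = np.array([0.2, 0.4, 1.3, 1.1, 0.6, 0.3, 1.5, 0.8])  # hazard index
    threshold = 1.0
    ok = risk <= threshold                   # success indicator per time step

    reliability = ok.mean()                  # fraction of time in success
    fail_then = ~ok[:-1]                     # in failure at step t
    recovered = ok[1:] & fail_then           # back to success at step t+1
    resilience = recovered.sum() / max(fail_then.sum(), 1)
    vulnerability = risk[~ok].max() if (~ok).any() else 0.0  # worst failure

    print(reliability, resilience, vulnerability)
    ```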

  12. Reliability/Cost Evaluation on Power System connected with Wind Power for the Reserve Estimation

    DEFF Research Database (Denmark)

    Lee, Go-Eun; Cha, Seung-Tae; Shin, Je-Seok

    2012-01-01

    Wind power is ideally a renewable energy with no fuel cost, but it carries a risk of reducing the reliability of the whole system because of the uncertainty of its output. If the reserve of the system is increased, the reliability of the system may be improved. However, the cost would be increased. Therefore...... the reserve needs to be estimated considering the trade-off between reliability and economic aspects. This paper suggests a methodology to estimate the appropriate reserve, when wind power is connected to the power system. As a case study, when wind power is connected to power system of Korea, the effects...

  13. Evaluating Written Patient Information for Eczema in German: Comparing the Reliability of Two Instruments, DISCERN and EQIP.

    Directory of Open Access Journals (Sweden)

    Megan E McCool

    Full Text Available Patients actively seek information about how to cope with their health problems, but the quality of the information available varies. A number of instruments have been developed to assess the quality of patient information, though primarily in English. Little is known about the reliability of these instruments when applied to patient information in German. The objective of our study was to investigate and compare the reliability of two validated instruments, DISCERN and EQIP, in order to determine which of these instruments is better suited for a further study pertaining to the quality of information available to German patients with eczema. Two independent raters evaluated a random sample of 20 informational brochures in German. All the brochures addressed eczema as a disorder and/or therapy options and care. Intra-rater and inter-rater reliability were assessed by calculating intra-class correlation coefficients, agreement was tested with weighted kappas, and the correlation of the raters' scores for each instrument was measured with Pearson's correlation coefficient. DISCERN demonstrated substantial intra- and inter-rater reliability. It also showed slightly better agreement than EQIP. There was a strong correlation of the raters' scores for both instruments. The findings of this study support the reliability of both DISCERN and EQIP. However, based on the results of the inter-rater reliability, agreement and correlation analyses, we consider DISCERN to be the more precise tool for our project on patient information concerning the treatment and care of eczema.
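
    For readers unfamiliar with the statistics used in studies like this one, the sketch below computes a weighted kappa and a Pearson correlation for two hypothetical raters scoring 20 brochures on a 1-5 scale; the ratings are simulated, not the study's data.

    ```python
    # Minimal sketch: weighted kappa and Pearson r for two raters.
    # Ratings are simulated so the second rater mostly agrees with the first.
    import numpy as np
    from sklearn.metrics import cohen_kappa_score

    rng = np.random.default_rng(0)
    rater1 = rng.integers(1, 6, size=20)                            # 1..5
    rater2 = np.clip(rater1 + rng.integers(-1, 2, size=20), 1, 5)   # near-agreement

    kappa = cohen_kappa_score(rater1, rater2, weights="quadratic")
    r = np.corrcoef(rater1, rater2)[0, 1]
    print(f"weighted kappa = {kappa:.2f}, Pearson r = {r:.2f}")
    ```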

  14. Improving accuracy of genomic prediction in Brangus cattle by adding animals with imputed low-density SNP genotypes.

    Science.gov (United States)

    Lopes, F B; Wu, X-L; Li, H; Xu, J; Perkins, T; Genho, J; Ferretti, R; Tait, R G; Bauck, S; Rosa, G J M

    2018-02-01

    Reliable genomic prediction of breeding values for quantitative traits requires the availability of a sufficient number of animals with genotypes and phenotypes in the training set. As of 31 October 2016, there were 3,797 Brangus animals with genotypes and phenotypes. These Brangus animals were genotyped using different commercial SNP chips. Of them, the largest group consisted of 1,535 animals genotyped by the GGP-LDV4 SNP chip. The remaining 2,262 genotypes were imputed to the SNP content of the GGP-LDV4 chip, so that the number of animals available for training the genomic prediction models was more than doubled. The present study showed that the pooling of animals with both original or imputed 40K SNP genotypes substantially increased genomic prediction accuracies on the ten traits. By supplementing imputed genotypes, the relative gains in genomic prediction accuracies on estimated breeding values (EBV) ranged from 12.60% to 31.27%, while the relative gain in genomic prediction accuracies on de-regressed EBV was smaller (0.87% to 18.75%). The present study also compared the performance of five genomic prediction models and two cross-validation methods. The five genomic models predicted EBV and de-regressed EBV of the ten traits similarly well. Of the two cross-validation methods, leave-one-out cross-validation maximized the number of animals at the stage of training for genomic prediction. Genomic prediction accuracy (GPA) on the ten quantitative traits was validated in 1,106 newly genotyped Brangus animals based on the SNP effects estimated in the previous set of 3,797 Brangus animals, and they were slightly lower than GPA in the original data. The present study was the first to leverage currently available genotype and phenotype resources in order to harness genomic prediction in Brangus beef cattle. © 2018 Blackwell Verlag GmbH.

  15. Cross-cultural adaptation and reliability and validity of the Dutch Patient-Rated Tennis Elbow Evaluation (PRTEE-D)

    NARCIS (Netherlands)

    van Ark, Mathijs; Zwerver, Johannes; Diercks, Ronald L; van den Akker-Scheek, Inge

    2014-01-01

    Background: Lateral Epicondylalgia (LE) is a common injury for which no reliable and valid Dutch-language measure exists to determine severity. The Patient-Rated Tennis Elbow Evaluation (PRTEE) is the first questionnaire specifically designed for LE, but it is in English. The aim of this study was

  16. Evaluation of the Influence of the Logistic Operations Reliability on the Total Costs of a Supply Chain

    Directory of Open Access Journals (Sweden)

    Lukinskiy Valery

    2016-12-01

    Full Text Available Integral processes between the material and related flows in supply chains are becoming increasingly developed in logistics. However, despite the increasing volume of statistical data reflecting these integral processes, the question of how the reliability indexes of logistic operations influence total logistics costs remains open and requires corresponding research.

  17. Assessing Reliability and Validity of the "GroPromo" Audit Tool for Evaluation of Grocery Store Marketing and Promotional Environments

    Science.gov (United States)

    Kerr, Jacqueline; Sallis, James F.; Bromby, Erica; Glanz, Karen

    2012-01-01

    Objective: To evaluate reliability and validity of a new tool for assessing the placement and promotional environment in grocery stores. Methods: Trained observers used the "GroPromo" instrument in 40 stores to code the placement of 7 products in 9 locations within a store, along with other promotional characteristics. To test construct validity,…

  18. Reliability of computerized image analysis for the evaluation of serial synovial biopsies in randomized controlled trials in rheumatoid arthritis

    NARCIS (Netherlands)

    Haringman, J.J.; Vinkenoog, M.; Gerlag, D.M.; Smeets, T.J.M.; Zwinderman, A.H.; Tak, P.P.

    2005-01-01

    Analysis of biomarkers in synovial tissue is increasingly used in the evaluation of new targeted therapies for patients with rheumatoid arthritis (RA). This study determined the intrarater and inter-rater reliability of digital image analysis (DIA) of synovial biopsies from RA patients participating

  19. Reliability of computerized image analysis for the evaluation of serial synovial biopsies in randomized controlled trials in rheumatoid arthritis

    NARCIS (Netherlands)

    Haringman, Jasper J.; Vinkenoog, Marjolein; Gerlag, Danielle M.; Smeets, Tom J. M.; Zwinderman, Aeilko H.; Tak, Paul P.

    2005-01-01

    Analysis of biomarkers in synovial tissue is increasingly used in the evaluation of new targeted therapies for patients with rheumatoid arthritis ( RA). This study determined the intrarater and inter-rater reliability of digital image analysis (DIA) of synovial biopsies from RA patients

  20. Reliability-based evaluation of bridge components for consistent safety margins.

    Science.gov (United States)

    2010-10-01

    The Load and Resistance Factor Design (LRFD) approach is based on the concept of structural reliability. The approach is more rational than former design approaches such as Load Factor Design or Allowable Stress Design. The LRFD Specification fo...

  1. Reliability of a method for evaluating porosity in denture base resins.

    Science.gov (United States)

    Pero, Ana Carolina; Marra, Juliê; Paleari, André Gustavo; de Souza, Raphael Freitas; Ruvolo-Filho, Adhemar; Compagnoni, Marco Antonio

    2011-06-01

    The method of porosity analysis by water absorption has been carried out by the storage of the specimens in pure water, but it does not exclude the potential plasticising effect of the water, generating unrealistic porosity values. The present study evaluated the reliability of this method of porosity analysis in polymethylmethacrylate denture base resins by the determination of the most satisfactory solution for storage (S), where the plasticising effect was excluded. Two specimen shapes (rectangular and maxillary denture base) and two denture base resins, water bath-polymerised (Classico) and microwave-polymerised (Acron MC), were used. Saturated anhydrous calcium chloride solutions (25%, 50%, 75%) and distilled water were used for specimen storage. Sorption isotherms were used to determine S. Porosity factor (PF) and diffusion coefficient (D) were calculated within S and for the groups stored in distilled water. ANOVA and Tukey tests were performed to identify significant differences in PF results, and the Kruskal-Wallis test and Dunn multiple comparison post hoc test for D results (α=0.05). For the Acron MC denture base shape, PF results were 0.24% (S 50%) and 1.37% (distilled water); for the rectangular shape PF was 0.35% (S 75%) and 0.19% (distilled water). For the Classico denture base shape, PF results were 0.54% (S 75%) and 1.21% (distilled water); for the rectangular shape PF was 0.7% (S 50%) and 1.32% (distilled water). PF results were similar in S and distilled water only for the Acron MC rectangular shape (p>0.05). D results in distilled water were statistically higher than S for all groups. The results of the study suggest that an adequate solution for storing specimens must be used to measure porosity by water absorption, based on excluding the plasticising effect. © 2009 The Gerodontology Society and John Wiley & Sons A/S.

  2. Power System Reliability Evaluation Using Fault Tree Approach Based on Generalized Fuzzy Number

    OpenAIRE

    Yaduvir Singh; Amit Kumar; Manjit Verma

    2012-01-01

    This paper describes a fault tree technique based on generalized fuzzy numbers for obtaining a possibility distribution of reliability indices for power systems. Due to uncertainty in the collected data, all the failure probabilities are represented by generalized trapezoidal fuzzy numbers. In this paper, the fault tree incorporated with generalized trapezoidal fuzzy numbers and the minimal cut sets approach is used for reliability assessment of power systems. An example of a gas power plant is given to dem...
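
    The general mechanics of fuzzy fault-tree evaluation can be sketched with alpha-cut interval arithmetic. The example below is a generic illustration, not the paper's formulation: trapezoidal fuzzy failure probabilities (invented values) are propagated through one AND gate and one OR gate.

    ```python
    # Minimal sketch: fuzzy top-event probability via alpha-cuts of
    # trapezoidal fuzzy numbers. All fuzzy parameters are illustrative.
    import numpy as np

    def alpha_cut(trap, a):
        # trap = (l, m1, m2, r): support [l, r], core [m1, m2]
        l, m1, m2, r = trap
        return np.array([l + a * (m1 - l), r - a * (r - m2)])

    def and_gate(cuts):   # interval product (independent basic events)
        return np.array([np.prod([c[0] for c in cuts]),
                         np.prod([c[1] for c in cuts])])

    def or_gate(cuts):    # 1 - prod(1 - p); monotone, so endpoints suffice
        return np.array([1 - np.prod([1 - c[0] for c in cuts]),
                         1 - np.prod([1 - c[1] for c in cuts])])

    A = (0.001, 0.002, 0.003, 0.004)   # fuzzy basic-event probabilities
    B = (0.010, 0.015, 0.015, 0.020)
    C = (0.002, 0.003, 0.003, 0.005)

    for a in (0.0, 0.5, 1.0):
        top = or_gate([and_gate([alpha_cut(A, a), alpha_cut(B, a)]),
                       alpha_cut(C, a)])
        print(f"alpha={a:.1f}: top event in [{top[0]:.2e}, {top[1]:.2e}]")
    ```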

  3. 48 CFR 1830.7002-4 - Determining imputed cost of money.

    Science.gov (United States)

    2010-10-01

    ... money. 1830.7002-4 Section 1830.7002-4 Federal Acquisition Regulations System NATIONAL AERONAUTICS AND... Determining imputed cost of money. (a) Determine the imputed cost of money for an asset under construction, fabrication, or development by applying a cost of money rate (see 1830.7002-2) to the representative...

  4. Consequences of Splitting Sequencing Effort over Multiple Breeds on Imputation Accuracy

    NARCIS (Netherlands)

    Bouwman, A.C.; Veerkamp, R.F.

    2014-01-01

    Imputation from a high-density SNP panel (777k) to whole-genome sequence with a reference population of 20 Holstein resulted in an average imputation accuracy of 0.70, and increased to 0.83 when the reference population was increased by including 3 other dairy breeds with 20 animals each. When the

  5. Mapping gradients of community composition with nearest-neighbour imputation: extending plot data for landscape analysis

    Science.gov (United States)

    Janet L. Ohmann; Matthew J. Gregory; Emilie B. Henderson; Heather M. Roberts

    2011-01-01

    Question: How can nearest-neighbour (NN) imputation be used to develop maps of multiple species and plant communities? Location: Western and central Oregon, USA, but methods are applicable anywhere. Methods: We demonstrate NN imputation by mapping woody plant communities for >100 000 km2 of diverse forests and woodlands. Species abundances on...

  6. Multiple imputation of discrete and continuous data by fully conditional specification

    NARCIS (Netherlands)

    Buuren, S. van

    2010-01-01

    The goal of multiple imputation is to provide valid inferences for statistical estimates from incomplete data. To achieve that goal, imputed values should preserve the structure in the data, as well as the uncertainty about this structure, and include any knowledge about the process that generated

  7. Multiple imputation of covariates by fully conditional specification: Accommodating the substantive model.

    Science.gov (United States)

    Bartlett, Jonathan W; Seaman, Shaun R; White, Ian R; Carpenter, James R

    2015-08-01

    Missing covariate data commonly occur in epidemiological and clinical research, and are often dealt with using multiple imputation. Imputation of partially observed covariates is complicated if the substantive model is non-linear (e.g. Cox proportional hazards model), or contains non-linear (e.g. squared) or interaction terms, and standard software implementations of multiple imputation may impute covariates from models that are incompatible with such substantive models. We show how imputation by fully conditional specification, a popular approach for performing multiple imputation, can be modified so that covariates are imputed from models which are compatible with the substantive model. We investigate through simulation the performance of this proposal, and compare it with existing approaches. Simulation results suggest our proposal gives consistent estimates for a range of common substantive models, including models which contain non-linear covariate effects or interactions, provided data are missing at random and the assumed imputation models are correctly specified and mutually compatible. Stata software implementing the approach is freely available. © The Author(s) 2014.
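
    As a point of reference, a generic fully-conditional-specification imputation can be run with scikit-learn's IterativeImputer, shown below on simulated data. Note this is plain MICE-style imputation, not the substantive-model-compatible variant the paper proposes.

    ```python
    # Minimal sketch: chained-equations imputation on simulated data with
    # 15% of values missing completely at random.
    import numpy as np
    from sklearn.experimental import enable_iterative_imputer  # noqa: F401
    from sklearn.impute import IterativeImputer

    rng = np.random.default_rng(1)
    X = rng.normal(size=(200, 3))
    X[:, 2] += 0.8 * X[:, 0]                 # correlated covariates
    mask = rng.random(X.shape) < 0.15
    X_miss = X.copy(); X_miss[mask] = np.nan

    # sample_posterior=True draws imputations from the predictive
    # distribution, as needed when generating multiple imputed datasets
    imputations = [
        IterativeImputer(sample_posterior=True, random_state=m).fit_transform(X_miss)
        for m in range(5)                    # m = 5 imputed datasets
    ]
    print(np.mean([Xi[mask].mean() for Xi in imputations]))
    ```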

  8. An imputed forest composition map for New England screened by species range boundaries

    Science.gov (United States)

    Matthew J. Duveneck; Jonathan R. Thompson; B. Tyler. Wilson

    2015-01-01

    Initializing forest landscape models (FLMs) to simulate changes in tree species composition requires accurate fine-scale forest attribute information mapped continuously over large areas. Nearest-neighbor imputation maps, maps developed from multivariate imputation of field plots, have high potential for use as the initial condition within FLMs, but the tendency for...

  9. Whole-Genome Sequencing Coupled to Imputation Discovers Genetic Signals for Anthropometric Traits

    DEFF Research Database (Denmark)

    Tachmazidou, Ioanna; Süveges, Dániel; Min, Josine L

    2017-01-01

    Deep sequence-based imputation can enhance the discovery power of genome-wide association studies by assessing previously unexplored variation across the common- and low-frequency spectra. We applied a hybrid whole-genome sequencing (WGS) and deep imputation approach to examine the broader alleli...

  10. Reliability of measures used in radiographic evaluation of the adult hip

    Energy Technology Data Exchange (ETDEWEB)

    Bjarnason, J.A.; Reikeras, O. [Oslo University Hospital, Department of Orthopedics, Oslo (Norway); Pripp, A.H. [Oslo University Hospital, Department of Biostatistics, Oslo (Norway)

    2015-02-20

    The reliability of radiographic measurements has been studied in pediatric hips, but less has been published on the adult hip, and none have examined the reliability of measurements for the location of the center of rotation (COR) of the hip joint. We have investigated the reliability of various radiographic variables with a focus on the COR. The study was carried out on a standardized format for anterior-posterior radiographs of the pelvis. The measured variables were; (A) the distance from a sagittal reference line to the COR, (B) the distance from the sagittal reference line to the proximal end of the lateral cortical line of the femur, (C) the distance from the sagittal reference line to the medial rim of the acetabulum, (D) the distance from the horizontal reference line to the roof of the acetabulum, and (E) the distance from the horizontal reference line to the COR. One observer (JAB) conducted the measurements twice separated by a time interval of 45-60 days to assess intra-observer reliability, and the first measurements of JAB were compared to those performed by another observer (OR) to assess inter-observer reliability. Intraclass correlation coefficients were above 0.98 for all measurements, and the minimum and maximum values that statistically include 95 % of the observer differences were all within -3 to +3 mm. These measurements proved to have high reliability and agreement of both within the same observer and between two observers. They should therefore be reproducible in a clinical setting. (orig.)

  11. Comparison of different methods for imputing genome-wide marker genotypes in Swedish and Finnish Red Cattle

    DEFF Research Database (Denmark)

    Ma, Peipei; Brøndum, Rasmus Froberg; Qin, Zahng

    2013-01-01

    This study investigated the imputation accuracy of different methods, considering both the minor allele frequency and relatedness between individuals in the reference and test data sets. Two data sets from the combined population of Swedish and Finnish Red Cattle were used to test the influence...... of these factors on the accuracy of imputation. Data set 1 consisted of 2,931 reference bulls and 971 test bulls, and was used for validation of imputation from 3,000 markers (3K) to 54,000 markers (54K). Data set 2 contained 341 bulls in the reference set and 117 in the test set, and was used for validation...... of imputation from 54K to high density [777,000 markers (777K)]. Both test sets were divided into 4 groups according to their relationship to the reference population. Five imputation methods (Beagle, IMPUTE2, findhap, AlphaImpute, and FImpute) were used in this study. Imputation accuracy was measured...
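
    The accuracy measures used in studies like this one are straightforward to compute once masked genotypes are available. The sketch below scores genotype concordance and dosage R-squared on simulated 0/1/2 genotypes with a made-up 5% error rate.

    ```python
    # Minimal sketch: scoring imputation accuracy at masked markers.
    # True and imputed genotypes here are simulated, not study data.
    import numpy as np

    rng = np.random.default_rng(7)
    true = rng.integers(0, 3, size=1000)          # genotypes coded 0/1/2
    imputed = true.copy()
    flip = rng.random(1000) < 0.05                # 5% of calls imputed wrongly
    imputed[flip] = rng.integers(0, 3, size=flip.sum())

    concordance = (true == imputed).mean()
    r2 = np.corrcoef(true, imputed)[0, 1] ** 2    # allelic/dosage R-squared
    print(f"concordance = {concordance:.3f}, r^2 = {r2:.3f}")
    ```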

  12. Systematic Evaluation of the Teaching Qualities of Obstetrics and Gynecology Faculty: Reliability and Validity of the SETQ Tools

    Science.gov (United States)

    van der Leeuw, Renée; Lombarts, Kiki; Heineman, Maas Jan; Arah, Onyebuchi

    2011-01-01

    Background The importance of effective clinical teaching for the quality of future patient care is globally understood. Due to recent changes in graduate medical education, new tools are needed to provide faculty with reliable and individualized feedback on their teaching qualities. This study validates two instruments underlying the System for Evaluation of Teaching Qualities (SETQ) aimed at measuring and improving the teaching qualities of obstetrics and gynecology faculty. Methods and Findings This cross-sectional multi-center questionnaire study was set in seven general teaching hospitals and two academic medical centers in the Netherlands. Seventy-seven residents and 114 faculty were invited to complete the SETQ instruments in the duration of one month from September 2008 to September 2009. To assess reliability and validity of the instruments, we used exploratory factor analysis, inter-item correlation, reliability coefficient alpha and inter-scale correlations. We also compared composite scales from factor analysis to global ratings. Finally, the number of residents' evaluations needed per faculty for reliable assessments was calculated. A total of 613 evaluations were completed by 66 residents (85.7% response rate). 99 faculty (86.8% response rate) participated in self-evaluation. Factor analysis yielded five scales with high reliability (Cronbach's alpha for residents and faculty, respectively): learning climate (0.86 and 0.75), professional attitude (0.89 and 0.81), communication of learning goals (0.89 and 0.82), evaluation of residents (0.87 and 0.79) and feedback (0.87 and 0.86). Item-total, inter-scale and scale-global rating correlation coefficients were significant (Pgynecology faculty. Future research should examine improvement of teaching qualities when using SETQ. PMID:21559275

  13. Evaluating the reliability of an injury prevention screening tool: Test-retest study.

    Science.gov (United States)

    Gittelman, Michael A; Kincaid, Madeline; Denny, Sarah; Wervey Arnold, Melissa; FitzGerald, Michael; Carle, Adam C; Mara, Constance A

    2016-10-01

    A standardized injury prevention (IP) screening tool can identify family risks and allow pediatricians to address behaviors. To assess behavior changes on later screens, the tool must be reliable for an individual and ideally between household members. Little research has examined the reliability of safety screening tool questions. This study assessed the test-retest reliability of parent responses on an existing IP questionnaire and compared responses between parents in the same household. Investigators recruited parents of children 0 to 1 year of age during admission to a tertiary care children's hospital. When both parents were present, one was chosen as the "primary" respondent. Primary respondents completed the 30-question IP screening tool after consent, and they were re-screened approximately 4 hours later to test individual reliability. The "second" parent, when present, only completed the tool once. All participants received a 10-dollar gift card. Cohen's kappa was used to estimate test-retest reliability and inter-rater agreement. Standard test-retest criteria consider kappa values: 0.0 to 0.40 poor to fair, 0.41 to 0.60 moderate, 0.61 to 0.80 substantial, and 0.81 to 1.00 almost perfect reliability. One hundred five families participated, with five lost to follow-up. Thirty-two (30.5%) parent dyads completed the tool. Primary respondents were generally mothers (88%) and Caucasian (72%). Test-retest of the primary respondents showed their responses to be almost perfect; average 0.82 (SD = 0.13, range 0.49-1.00). Seventeen questions had almost perfect test-retest reliability and 11 had substantial reliability. However, inter-rater agreement between household members for 12 objective questions showed little agreement between responses; inter-rater agreement averaged 0.35 (SD = 0.34, range -0.19-1.00). One question had almost perfect inter-rater agreement and two had substantial inter-rater agreement. The IP screening tool used by a single individual had excellent

  14. [Imputing missing data in public health: general concepts and application to dichotomous variables].

    Science.gov (United States)

    Hernández, Gilma; Moriña, David; Navarro, Albert

    The presence of missing data in collected variables is common in health surveys, but the subsequent imputation thereof at the time of analysis is not. Working with imputed data may have certain benefits regarding the precision of the estimators and the unbiased identification of associations between variables. The imputation process is probably still little understood by many non-statisticians, who view this process as highly complex and with an uncertain goal. To clarify these questions, this note aims to provide a straightforward, non-exhaustive overview of the imputation process to enable public health researchers to ascertain its strengths. All this is in the context of dichotomous variables, which are commonplace in public health. To illustrate these concepts, an example in which missing data is handled by means of simple and multiple imputation is introduced. Copyright © 2017 SESPAS. Published by Elsevier España, S.L.U. All rights reserved.
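
    To make the simple-versus-multiple distinction concrete for a dichotomous variable, the sketch below contrasts mode imputation with a crude multiple-imputation scheme on simulated data. Drawing from a Bernoulli with a fixed estimated p understates uncertainty relative to proper model-based imputation, so treat it only as a teaching device.

    ```python
    # Minimal sketch: single (mode) vs. crude multiple imputation of a
    # dichotomous variable with 20% missingness. Data are simulated.
    import numpy as np

    rng = np.random.default_rng(3)
    y = rng.binomial(1, 0.3, size=500).astype(float)
    y[rng.random(500) < 0.2] = np.nan            # 20% missing
    obs = y[~np.isnan(y)]

    # single imputation: fill every gap with the observed mode
    y_single = np.where(np.isnan(y), obs.mean().round(), y)

    # crude multiple imputation: draw fills from Bernoulli(p_hat)
    p_hat = obs.mean()
    estimates = []
    for m in range(20):                          # 20 imputed datasets
        y_m = y.copy()
        y_m[np.isnan(y_m)] = rng.binomial(1, p_hat, size=np.isnan(y).sum())
        estimates.append(y_m.mean())

    print(y_single.mean(), np.mean(estimates), np.std(estimates))
    ```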

  15. A feasible, aesthetic quality evaluation of implant-supported single crowns: an analysis of validity and reliability

    DEFF Research Database (Denmark)

    Hosseini, Mandana; Gotfredsen, Klaus

    2012-01-01

    OBJECTIVES: To test the reliability and validity of six aesthetic parameters and to compare the professional- and patient-reported aesthetic outcomes. MATERIAL AND METHODS: Thirty-four patients with 66 implant-supported premolar crowns were included. Two prosthodontists and 11 dental students......,24) were found between patient and professional evaluations. CONCLUSIONS: The feasibility, reliability and validity of the CIS make the parameters useful for quality control of implant-supported restorations. The professional- and patient-reported aesthetic outcomes had no significant correlation....

  16. The Validity and Reliability of Scales for the Evaluation of End-of-Life Care in Advanced Dementia

    OpenAIRE

    Kiely, Dan K.; Volicer, Ladislav; Teno, Joan; Jones, Richard N.; Prigerson, Holly G.; Mitchell, Susan L.

    2006-01-01

    The lack of valid and reliable instruments designed to measure the experiences of older persons with advanced dementia and those of their health care proxies has limited palliative care research for this condition. This study evaluated the reliability and validity of 3 End-of-Life in Dementia (EOLD) scales that measure the following outcomes: (1) satisfaction with the terminal care (SWC-EOLD), (2) symptom management (SM-EOLD), and (3) comfort during the last 7 days of life (CAD-EOLD). Data we...

  17. Evaluation of clinical and radiographic measures and reliability of the quadriceps angle measurement in elderly women with knee osteoarthritis

    Directory of Open Access Journals (Sweden)

    Mateus Ramos Amorim

    Full Text Available Introduction Knee osteoarthritis (OA) is a complex degenerative disease with intra-articular changes affecting the amplitude of the quadriceps (Q) angle. To measure this variable, it is necessary to use reliable protocols aiming at methodological reproducibility. The objective was to evaluate the intra-examiner and inter-examiner reliability of clinical and radiographic measures of the Q angle and to investigate the relationship between the degree of OA and the magnitude of this angle in the elderly. Materials and methods 23 volunteers had the Q angle measured by two evaluators at a 48-h interval. Clinical measurements were collected by using the universal goniometer in the same position adopted in the radiographic examination. Results The intra-examiner reliability was good (0.722 to 0.763) for radiographic measurements and low (0.518 to 0.574) for clinical assessment, while inter-examiner reliability was moderate (0.634) for radiographic measurements and low (0.499) for clinical ones. The correlation analysis between the radiographic values and the OA classification showed no correlation between them (p = 0.824 and r = -0.024). Conclusion Clinically, it is suggested that the radiographic examination is preferable to evaluate the Q angle of elderly women with knee osteoarthritis. Moreover, the magnitude of this angle did not correlate with the degree of impairment of OA in this population.

  18. Genotype imputation in a coalescent model with infinitely-many-sites mutation.

    Science.gov (United States)

    Huang, Lucy; Buzbas, Erkan O; Rosenberg, Noah A

    2013-08-01

    Empirical studies have identified population-genetic factors as important determinants of the properties of genotype-imputation accuracy in imputation-based disease association studies. Here, we develop a simple coalescent model of three sequences that we use to explore the theoretical basis for the influence of these factors on genotype-imputation accuracy, under the assumption of infinitely-many-sites mutation. Employing a demographic model in which two populations diverged at a given time in the past, we derive the approximate expectation and variance of imputation accuracy in a study sequence sampled from one of the two populations, choosing between two reference sequences, one sampled from the same population as the study sequence and the other sampled from the other population. We show that, under this model, imputation accuracy (as measured by the proportion of polymorphic sites that are imputed correctly in the study sequence) increases in expectation with the mutation rate, the proportion of the markers in a chromosomal region that are genotyped, and the time to divergence between the study and reference populations. Each of these effects derives largely from an increase in information available for determining the reference sequence that is genetically most similar to the sequence targeted for imputation. We analyze as a function of divergence time the expected gain in imputation accuracy in the target using a reference sequence from the same population as the target rather than from the other population. Together with a growing body of empirical investigations of genotype imputation in diverse human populations, our modeling framework lays a foundation for extending imputation techniques to novel populations that have not yet been extensively examined. Copyright © 2012 Elsevier Inc. All rights reserved.

  19. A new approach for efficient genotype imputation using information from relatives.

    Science.gov (United States)

    Sargolzaei, Mehdi; Chesnais, Jacques P; Schenkel, Flavio S

    2014-06-17

    Genotype imputation can help reduce genotyping costs particularly for implementation of genomic selection. In applications entailing large populations, recovering the genotypes of untyped loci using information from reference individuals that were genotyped with a higher density panel is computationally challenging. Popular imputation methods are based upon the Hidden Markov model and have computational constraints due to an intensive sampling process. A fast, deterministic approach, which makes use of both family and population information, is presented here. All individuals are related and, therefore, share haplotypes which may differ in length and frequency based on their relationships. The method starts with family imputation if pedigree information is available, and then exploits close relationships by searching for long haplotype matches in the reference group using overlapping sliding windows. The search continues as the window size is shrunk in each chromosome sweep in order to capture more distant relationships. The proposed method gave higher or similar imputation accuracy than Beagle and Impute2 in cattle data sets when all available information was used. When close relatives of target individuals were present in the reference group, the method resulted in higher accuracy compared to the other two methods even when the pedigree was not used. Rare variants were also imputed with higher accuracy. Finally, computing requirements were considerably lower than those of Beagle and Impute2. The presented method took 28 minutes to impute from 6 k to 50 k genotypes for 2,000 individuals with a reference size of 64,429 individuals. The proposed method efficiently makes use of information from close and distant relatives for accurate genotype imputation. In addition to its high imputation accuracy, the method is fast, owing to its deterministic nature and, therefore, it can easily be used in large data sets where the use of other methods is impractical.
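
    The core matching idea, searching the reference for a haplotype that agrees at all typed loci inside a window and copying its alleles at the untyped loci, can be sketched briefly. The code below is a heavily simplified illustration with invented haplotypes, not the authors' implementation.

    ```python
    # Minimal sketch: fill missing genotypes in a window by finding a
    # reference haplotype that matches the target at every typed locus.
    import numpy as np

    def impute_window(target, reference, start, width):
        seg = slice(start, start + width)
        typed = ~np.isnan(target[seg])              # loci genotyped in target
        for hap in reference:                       # first exact match wins
            if np.all(hap[seg][typed] == target[seg][typed]):
                out = target[seg].copy()
                out[~typed] = hap[seg][~typed]      # copy over untyped loci
                return out
        return target[seg]                          # no match: leave missing

    reference = np.array([[0, 1, 1, 0, 0, 1, 0, 1],
                          [1, 1, 0, 0, 1, 1, 0, 0]], dtype=float)
    target = np.array([0, np.nan, 1, np.nan, 0, 1, np.nan, 1])
    print(impute_window(target, reference, 0, 8))
    ```

    A full method along these lines would slide the window along the chromosome and shrink it on successive sweeps to pick up progressively more distant relatives, as the abstract describes.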

  20. Impact of whole-genome amplification on the reliability of pre-transfer cattle embryo breeding value estimates.

    Science.gov (United States)

    Shojaei Saadi, Habib A; Vigneault, Christian; Sargolzaei, Mehdi; Gagné, Dominic; Fournier, Éric; de Montera, Béatrice; Chesnais, Jacques; Blondin, Patrick; Robert, Claude

    2014-10-12

    Genome-wide profiling of single-nucleotide polymorphisms is receiving increasing attention as a method of pre-implantation genetic diagnosis in humans and of commercial genotyping of pre-transfer embryos in cattle. However, the very small quantity of genomic DNA in biopsy material from early embryos poses daunting technical challenges. A reliable whole-genome amplification (WGA) procedure would greatly facilitate the procedure. Several PCR-based and non-PCR based WGA technologies, namely multiple displacement amplification, quasi-random primed library synthesis followed by PCR, ligation-mediated PCR, and single-primer isothermal amplification were tested in combination with different DNA extractions protocols for various quantities of genomic DNA inputs. The efficiency of each method was evaluated by comparing the genotypes obtained from 15 cultured cells (representative of an embryonic biopsy) to unamplified reference gDNA. The gDNA input, gDNA extraction method and amplification technology were all found to be critical for successful genome-wide genotyping. The selected WGA platform was then tested on embryo biopsies (n = 226), comparing their results to that of biopsies collected after birth. Although WGA inevitably leads to a random loss of information and to the introduction of erroneous genotypes, following genomic imputation the resulting genetic index of both sources of DNA were highly correlated (r = 0.99, P<0.001). It is possible to generate high-quality DNA in sufficient quantities for successful genome-wide genotyping starting from an early embryo biopsy. However, imputation from parental and population genotypes is a requirement for completing and correcting genotypic data. Judicious selection of the WGA platform, careful handling of the samples and genomic imputation together, make it possible to perform extremely reliable genomic evaluations for pre-transfer embryos.

  1. Principles of performance and reliability modeling and evaluation essays in honor of Kishor Trivedi on his 70th birthday

    CERN Document Server

    Puliafito, Antonio

    2016-01-01

    This book presents the latest key research into the performance and reliability aspects of dependable fault-tolerant systems and features commentary on the fields studied by Prof. Kishor S. Trivedi during his distinguished career. Analyzing system evaluation as a fundamental tenet in the design of modern systems, this book uses performance and dependability as common measures and covers novel ideas, methods, algorithms, techniques, and tools for the in-depth study of the performance and reliability aspects of dependable fault-tolerant systems. It identifies the current challenges that designers and practitioners must face in order to ensure the reliability, availability, and performance of systems, with special focus on their dynamic behaviors and dependencies, and provides system researchers, performance analysts, and practitioners with the tools to address these challenges in their work. With contributions from Prof. Trivedi's former PhD students and collaborators, many of whom are internationally recognize...

  2. Systems Analysis Programs for Hands-on Integrated Reliability Evaluations (SAPHIRE) Summary Manual

    Energy Technology Data Exchange (ETDEWEB)

    C. L. Smith

    2008-08-01

    The Systems Analysis Programs for Hands-on Integrated Reliability Evaluations (SAPHIRE) is a software application developed for performing a complete probabilistic risk assessment (PRA) using a personal computer (PC) running the Microsoft Windows operating system. SAPHIRE is primarily funded by the U.S. Nuclear Regulatory Commission (NRC) and developed by the Idaho National Laboratory (INL). INL's primary role in this project is that of software developer and tester. However, INL also plays an important role in technology transfer by interfacing and supporting SAPHIRE users, who constitute a wide range of PRA practitioners from the NRC, national laboratories, the private sector, and foreign countries. SAPHIRE can be used to model a complex system’s response to initiating events and quantify associated consequential outcome frequencies. Specifically, for nuclear power plant applications, SAPHIRE can identify important contributors to core damage (Level 1 PRA) and containment failure during a severe accident which lead to releases (Level 2 PRA). It can be used for a PRA where the reactor is at full power, low power, or at shutdown conditions. Furthermore, it can be used to analyze both internal and external initiating events and has special features for transforming an internal events model to a model for external events, such as flooding and fire analysis. It can also be used in a limited manner to quantify risk in terms of release consequences to the public and environment (Level 3 PRA). SAPHIRE also includes a separate module called the Graphical Evaluation Module (GEM). GEM is a special user interface linked to SAPHIRE that automates the SAPHIRE process steps for evaluating operational events at commercial nuclear power plants. Using GEM, an analyst can estimate the risk associated with operational events (for example, to calculate a conditional core damage probability) very efficiently and expeditiously. This report provides an overview of the functions

  3. Evaluation of reliability and validity of three dental color-matching devices.

    Science.gov (United States)

    Tsiliagkou, Aikaterini; Diamantopoulou, Sofia; Papazoglou, Efstratios; Kakaboura, Afrodite

    2016-01-01

    To assess the repeatability and accuracy of three dental color-matching devices under standardized and freehand measurement conditions. Two shade guides (Vita Classical A1-D4, Vita; and Vita Toothguide 3D-Master, Vita) and three color-matching devices (Easyshade, Vita; SpectroShade, MHT Optic Research; and ShadeVision, X-Rite) were used. Five shade tabs were selected from the Vita Classical A1-D4 (A2, A3.5, B1, C4, D3), and five from the Vita Toothguide 3D-Master (1M1, 2R1.5, 3M2, 4L2.5, 5M3) shade guides. Each shade tab was recorded 15 consecutive times with each device under two different measurement conditions (standardized and freehand). Both qualitative (color shade) and quantitative (L, a, and b) color characteristics were recorded. The color difference (ΔE) between each recorded value and the known values of the shade tab was calculated. The repeatability of each device was evaluated by the coefficient of variation. The accuracy of each device was determined by comparing the recorded values with the known values of the reference shade tab (one-sample t test; α = 0.05). The agreement between the recorded shade and the reference shade tab was calculated. The influence of the parameters (devices and conditions) on ΔE was investigated (two-way ANOVA). Comparison of the devices was performed with Bonferroni pairwise post-hoc analysis. Under standardized conditions, the repeatability of all three devices was very good, except for ShadeVision with Vita Classical A1-D4. Accuracy ranged from good to fair, depending on the device and the shade guide. Under freehand conditions, repeatability and accuracy for Easyshade and ShadeVision were negatively influenced, but not for SpectroShade, regardless of the shade guide. Based on all the color parameters assessed per device, SpectroShade was the most reliable of the three color-matching devices studied.

  4. RELIABILITY EVALUATION OF THE ACTIVATION MACHINE FOR THE ELECTRIC DETONATING CAPS-EKA 350

    Directory of Open Access Journals (Sweden)

    Ljubinka Radosavljević

    2007-09-01

    Full Text Available The EKA 350 machine is designed for the activation of serially or mixed-connected EK-40-69 electric detonating caps in explosive charges during mining and demolition. For the reliability analysis it is important that the machine operates in three functional regimes: LOAD, FIRE, and EMPTY. Reliability modeling was carried out for each of these regimes of the EKA 350 machine. The machine incorporates components intended for professional use that satisfy MIL standards. The machine is treated as operating in a single-stage mission lasting 20 seconds.

  5. Reliability database development for use with an object-oriented fault tree evaluation program

    Science.gov (United States)

    Heger, A. Sharif; Harringtton, Robert J.; Koen, Billy V.; Patterson-Hine, F. Ann

    1989-01-01

    A description is given of the development of a fault-tree analysis method using object-oriented programming. In addition, the authors discuss the programs that have been developed or are under development to connect a fault-tree analysis routine to a reliability database. To assess the performance of the routines, a relational database simulating one of the nuclear power industry databases has been constructed. For a realistic assessment of the results of this project, the use of one of the existing nuclear power reliability databases is planned.

  6. High-density marker imputation accuracy in sixteen French cattle breeds.

    Science.gov (United States)

    Hozé, Chris; Fouilloux, Marie-Noëlle; Venot, Eric; Guillaume, François; Dassonneville, Romain; Fritz, Sébastien; Ducrocq, Vincent; Phocas, Florence; Boichard, Didier; Croiseau, Pascal

    2013-09-03

    Genotyping with the medium-density Bovine SNP50 BeadChip® (50K) is now standard in cattle. The high-density BovineHD BeadChip®, which contains 777,609 single nucleotide polymorphisms (SNPs), was developed in 2010. Increasing marker density increases the level of linkage disequilibrium between quantitative trait loci (QTL) and SNPs and the accuracy of QTL localization and genomic selection. However, re-genotyping all animals with the high-density chip is not economically feasible. An alternative strategy is to genotype part of the animals with the high-density chip and to impute high-density genotypes for animals already genotyped with the 50K chip. Thus, it is necessary to investigate the error rate when imputing from the 50K to the high-density chip. Five thousand one hundred and fifty-three animals from 16 breeds (89 to 788 per breed) were genotyped with the high-density chip. Imputation error rates from the 50K to the high-density chip were computed for each breed with a validation set that included the 20% youngest animals. Marker genotypes were masked for animals in the validation population in order to mimic 50K genotypes. Imputation was carried out using the Beagle 3.3.0 software. Mean allele imputation error rates ranged from 0.31% to 2.41% depending on the breed. In total, 1980 SNPs had high imputation error rates in several breeds, which is probably due to genome assembly errors, and we recommend discarding these in future studies. Differences in imputation accuracy between breeds were related to the high-density-genotyped sample size and to the genetic relationship between reference and validation populations, whereas differences in effective population size and level of linkage disequilibrium showed limited effects. Accordingly, imputation accuracy was higher in breeds with large populations and in dairy breeds than in beef breeds. More than 99% of the alleles were correctly imputed if more than 300 animals were genotyped at high-density. No
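
    The masking procedure described above is straightforward to reproduce in outline. The following sketch uses hypothetical genotype arrays and replaces the actual Beagle imputation step with a synthetic stand-in; only the allele error-rate bookkeeping reflects the evaluation described in the abstract:

```python
# Minimal sketch of a masking-based imputation evaluation: genotypes coded
# as 0/1/2 minor-allele counts; "imputed" values would in practice come from
# a tool such as Beagle. All arrays here are hypothetical stand-ins.
import numpy as np

rng = np.random.default_rng(0)
true_geno = rng.integers(0, 3, size=(500, 1000))      # animals x SNPs
mask = rng.random(true_geno.shape) < 0.20             # mask 20% of genotypes

# Pretend imputation: copy the truth, then corrupt 1% of masked entries.
imputed = true_geno.copy()
flip = mask & (rng.random(true_geno.shape) < 0.01)
imputed[flip] = (imputed[flip] + 1) % 3

def allele_error_rate(true_g, imp_g, masked):
    """Allele (not genotype) error rate: each genotype carries two alleles,
    so a 0<->2 discordance counts as two allele errors."""
    diff = np.abs(true_g[masked] - imp_g[masked])     # 0, 1, or 2 allele errors
    return diff.sum() / (2 * masked.sum())

print(f"allele imputation error rate: {allele_error_rate(true_geno, imputed, mask):.4%}")
```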

  7. Validity and reliability of a performance evaluation tool based on the modified Barthel Index for stroke patients

    Directory of Open Access Journals (Sweden)

    Tomoko Ohura

    2017-08-01

    Full Text Available Abstract Background The Barthel Index (BI) is a measure of independence in activities of daily living (ADL). In the modified Barthel Index (MBI), a five-point system replaced the original two-, three-, or four-point rating system. Based on this modified measure, the performance evaluation tool MBI (PET-MBI) was developed in Japan. Although the reliability and validity of PET-MBI have been verified for older people, the use of this tool in stroke patients has not been evaluated. This study investigated the validity and reliability of PET-MBI for stroke patients. Methods Ten raters independently determined the BI and PET-MBI scores of stroke patients by direct observation. These patients' ADL were videotaped, and 10 other raters then evaluated the videos privately and assigned PET-MBI scores twice, one month apart. The criterion-related validity of the PET-MBI against the BI was evaluated using the correlation coefficients for their total scores. Furthermore, to assess inter- and intra-rater reliabilities from the results of the first and second sessions, Fleiss' intraclass correlation coefficients (ICCs) were calculated for the total scores, with the lower limits of the 95% confidence interval (95% CI), along with weighted kappa (κw) coefficients for agreement in individual tasks of this evaluation tool. ICC and κw coefficients of 0.81–1.00 were considered to be "almost perfect" agreement. Results The mean age of the 30 patients (23 men, 7 women) was 71.9 (standard deviation 10.5) years. One patient had diplegia, 14 had right hemiplegia, and 15 had left hemiplegia. For the total scores obtained by direct evaluation, Pearson's and Spearman's correlation coefficients of the BI versus the PET-MBI were both 0.95 (lower limit of the 95% CI, 0.90). The ICC representing inter-rater reliability for the first session was 0.99 (lower limit of the 95% CI, 0.98). For intra-rater reliability, the mean value of the ICCs was 0.99 (range, 0.99–1.00). For

  8. Low field magnetic resonance imaging of the lumbar spine: Reliability of qualitative evaluation of disc and muscle parameters

    DEFF Research Database (Denmark)

    Sørensen, Joan Solgaard; Kjaer, Per; Jensen, Tue Secher

    2006-01-01

    PURPOSE: To determine the intra- and interobserver reliability in grading disc and muscle parameters using low-field magnetic resonance imaging (MRI). MATERIAL AND METHODS: MRI scans of 100 subjects representative of the general population were evaluated blindly by two radiologists. Criteria for grading lumbar discs were based on the spinal nomenclature of the Combined Task Force and the literature. Consensus in rating was achieved by evaluating 50 MRI examinations in tandem. The remaining 50 examinations were evaluated independently by the observers to determine interobserver agreement and re...

  9. A Quantitative Risk Analysis Framework for Evaluating and Monitoring Operational Reliability of Cloud Computing

    Science.gov (United States)

    Islam, Muhammad Faysal

    2013-01-01

    Cloud computing offers the advantage of on-demand, reliable and cost efficient computing solutions without the capital investment and management resources to build and maintain in-house data centers and network infrastructures. Scalability of cloud solutions enable consumers to upgrade or downsize their services as needed. In a cloud environment,…

  10. ELECTRICAL SUBSTATION RELIABILITY EVALUATION WITH EMPHASIS ON EVOLVING INTERDEPENDENCE ON COMMUNICATION INFRASTRUCTURE.

    Energy Technology Data Exchange (ETDEWEB)

    AZARM,M.A.; BARI,R.; YUE,M.; MUSICKI,Z.

    2004-09-12

    This study developed a probabilistic methodology for assessment of the reliability and security of electrical energy distribution networks. This included consideration of the future grid system, which will rely heavily on the existing digitally based communication infrastructure for monitoring and protection. Event tree and fault tree methods were utilized. The approach extensively modeled the types of faults that a grid could potentially experience, the response of the grid, and the specific design of the protection schemes. We demonstrated the methods by applying them to a small sub-section of a hypothetical grid based on an existing electrical grid system of a metropolitan area. The results showed that for a typical design that relies on a communication network for protection, the communication network reliability could contribute significantly to the frequency of loss of electrical power. The reliability of the communication network could become a more important contributor to electrical grid reliability as utilization of the communication network increases significantly in the near future to support "smart" transmission and/or distributed generation.

  11. Evaluation of conventional electric power generating industry quality assurance and reliability practices

    Energy Technology Data Exchange (ETDEWEB)

    Anderson, R.T.; Lauffenburger, H.A.

    1981-03-01

    The techniques and practices utilized in an allied industry (electric power generation) that might serve as a baseline for formulating Quality Assurance and Reliability (QA and R) procedures for photovoltaic solar energy systems were studied. The study results provide direct near-term input for establishing validation methods as part of the SERI performance criteria and test standards development task.

  12. An Evaluation of the Reliability of the Food Label Literacy Questionnaire in Russian

    Science.gov (United States)

    Gurevich, Konstantin G.; Reynolds, Jesse; Bifulco, Lauren; Doughty, Kimberly; Njike, Valentine; Katz, David L.

    2016-01-01

    Objective: School-based nutrition education can promote the development of skills, such as food label reading, that can contribute to making healthier food choices. The purpose of this study was to assess the reliability of a Russian language version of the previously validated Food Label Literacy for Applied Nutrition Knowledge (FLLANK)…

  13. Reliability among central readers in the evaluation of endoscopic findings from patients with Crohn's disease

    NARCIS (Netherlands)

    Khanna, Reena; Zou, Guangyong; D'Haens, Geert; Rutgeerts, Paul; McDonald, J. W. D.; Daperno, Marco; Feagan, Brian G.; Sandborn, William J.; Dubcenco, Elena; Stitt, Larry; Vandervoort, Margaret K.; Donner, Allan; Luo, Allison; Levesque, Barrett G.

    2016-01-01

    The Crohn's Disease Endoscopic Index of Severity (CDEIS) and Simple Endoscopic Score for Crohn's Disease (SES-CD) are commonly used to assess Crohn's disease (CD) activity; however, neither instrument has been fully validated. We assessed intra-rater and inter-rater reliability of these indices.

  14. Reliability and Validity of SERVQUAL Scores Used To Evaluate Perceptions of Library Service Quality.

    Science.gov (United States)

    Thompson, Bruce; Cook, Colleen

    Research libraries are increasingly supplementing collection counts with perceptions of service quality as indices of status and productivity. The present study was undertaken to explore the reliability and validity of scores from the SERVQUAL measurement protocol (A. Parasuraman and others, 1991), which has previously been used in this type of…

  15. The interval shuttle run test for intermittent sport players : evaluation of reliability

    NARCIS (Netherlands)

    Lemmink, K.A.P.M.; Visscher, C.; Lambert, M.I.; Lamberts, R.P.

    2004-01-01

    The reliability of the interval shuttle run test (ISRT) as a submaximal and maximal field test to measure intermittent endurance capacity was examined. During the ISRT, participants alternately run for 30 seconds and walk for 15 seconds. The running speed is increased from 10 km·h⁻¹ every 90

  16. The interval shuttle run test for intermittent sport players: evaluation of reliability.

    Science.gov (United States)

    Lemmink, Koen A P M; Visscher, Chris; Lambert, Michael I; Lamberts, Robert P

    2004-11-01

    The reliability of the interval shuttle run test (ISRT) as a submaximal and maximal field test to measure intermittent endurance capacity was examined. During the ISRT, participants alternately run for 30 seconds and walk for 15 seconds. The running speed is increased from 10 km·h⁻¹ every 90 seconds until exhaustion. Within a 2-week period, 17 intermittent sport players (i.e., 10 men and 7 women) performed the ISRT twice in a sports hall under well-standardized conditions. Heart rates per speed and total number of runs were assessed as submaximal and maximal performance measures. With the exception of the heart rates at 10.0 km·h⁻¹ for men and 10.0, 12.0, and 13.5 km·h⁻¹ for women, zero lay within the 95% confidence interval of the mean differences, indicating that no bias existed between the outcome measures at the 2 test sessions (absolute reliability). The results illustrate that it is important to control for heart rate before the start of the ISRT. Relative reliability was high (intraclass correlation coefficient ≥ 0.86). We conclude that the reliability of the ISRT as a submaximal and maximal field test for intermittent sport players is supported by the results.

  17. Reliability Of Kraus-Weber Exercise Test As An Evaluation Tool In ...

    African Journals Online (AJOL)

    The purpose of this study was to determine the strength and flexibility of the spinal and hamstring muscles among University of Ibadan students and the reliability of the Kraus-Weber (K-W) exercise test. The Kraus-Weber test involves a series of exercises that measure minimum strength and flexibility of the back, abdominal, psoas ...

  18. Evaluation of reproducibility and reliability of 3D soft tissue analysis using 3D stereophotogrammetry.

    NARCIS (Netherlands)

    Plooij, J.M.; Swennen, G.R.; Rangel, F.A.; Maal, T.J.J.; Schutyser, F.A.C.; Bronkhorst, E.M.; Kuijpers-Jagtman, A.M.; Berge, S.J.

    2009-01-01

    In 3D photographs the bony structures are neither available nor palpable; therefore, bone-related landmarks, such as the soft tissue gonion, need to be redefined. The purpose of this study was to determine the reproducibility and reliability of 49 soft tissue landmarks, including newly defined

  19. Brazilian Version of the Functional Assessment Measure: Cross-Cultural Adaptation and Reliability Evaluation

    Science.gov (United States)

    Lourenco Jorge, Liliana; Garcia Marchi, Flavia Helena; Portela Hara, Ana Clara; Battistella, Linamara R.

    2011-01-01

    The objective of this prospective study was to perform a cross-cultural adaptation of the Functional Assessment Measure (FAM) into Brazilian Portuguese, and to assess the test-retest reliability. The instrument was translated, back-translated, pretested, and reviewed by a committee. The Brazilian version was assessed in 61 brain-injury patients.…

  20. A Validation and Reliability Study of Community Service Activities Scale in Turkey: A Social Evaluation

    Science.gov (United States)

    Demir, Özden; Kaya, Halil Ibrahim; Tasdan, Murat

    2014-01-01

    The purpose of this study is to test the reliability and validity of Community Service Activities Scale (CSAS) developed by Demir, Kaya and Tasdan (2012) with a view to identify perceptions of Faculty of Education students regarding community service activities. The participants of the study are 313 randomly chosen students who attend six…

  1. The Reliability, Validity, and Evaluation of the Objective Structured Clinical Examination in Podiatry (Chiropody).

    Science.gov (United States)

    Woodburn, Jim; Sutcliffe, Nick

    1996-01-01

    The Objective Structured Clinical Examination (OSCE), initially developed for undergraduate medical education, has been adapted for assessment of clinical skills in podiatry students. A 12-month pilot study found the test had relatively low levels of reliability, high construct and criterion validity, and good stability of performance over time.…

  2. Reliability of FAMACHA chart for the evaluation of anaemia in goats ...

    African Journals Online (AJOL)

    ADEYEYE

    2014-07-25

    The sensitivity of FS was high (64%) when FS 5 and PCV ≤ 19% were used to determine anaemia, but when FS 4&5 and PCV ≤ 15% were ... gave the most reliable indicator of anaemia in goats, coinciding with PCV values of ≤ 19%. A high ... examined and classified into one of the five categories ...

  3. BACHSCORE. A tool for evaluating efficiently and reliably the quality of large sets of protein structures

    Science.gov (United States)

    Sarti, E.; Zamuner, S.; Cossio, P.; Laio, A.; Seno, F.; Trovato, A.

    2013-12-01

    In protein structure prediction it is of crucial importance, especially at the refinement stage, to score efficiently large sets of models by selecting the ones that are closest to the native state. We here present a new computational tool, BACHSCORE, that allows its users to rank different structural models of the same protein according to their quality, evaluated by using the BACH++ (Bayesian Analysis Conformation Hunt) scoring function. The original BACH statistical potential was already shown to discriminate with very good reliability the protein native state in large sets of misfolded models of the same protein. BACH++ features a novel upgrade in the solvation potential of the scoring function, now computed by adapting the LCPO (Linear Combination of Pairwise Orbitals) algorithm. This change further enhances the already good performance of the scoring function. BACHSCORE can be accessed directly through the web server: bachserver.pd.infn.it. Catalogue identifier: AEQD_v1_0 Program summary URL:http://cpc.cs.qub.ac.uk/summaries/AEQD_v1_0.html Program obtainable from: CPC Program Library, Queen’s University, Belfast, N. Ireland Licensing provisions: GNU General Public License version 3 No. of lines in distributed program, including test data, etc.: 130159 No. of bytes in distributed program, including test data, etc.: 24 687 455 Distribution format: tar.gz Programming language: C++. Computer: Any computer capable of running an executable produced by a g++ compiler (4.6.3 version). Operating system: Linux, Unix OS-es. RAM: 1 073 741 824 bytes Classification: 3. Nature of problem: Evaluate the quality of a protein structural model, taking into account the possible “a priori” knowledge of a reference primary sequence that may be different from the amino-acid sequence of the model; the native protein structure should be recognized as the best model. Solution method: The contact potential scores the occurrence of any given type of residue pair in 5 possible

  4. Differential network analysis with multiply imputed lipidomic data.

    Directory of Open Access Journals (Sweden)

    Maiju Kujala

    Full Text Available The importance of lipids for cell function and health has been widely recognized; e.g., a disorder in the lipid composition of cells has been related to atherosclerosis-caused cardiovascular disease (CVD). Lipidomics analyses are characterized by a large, yet not huge, number of mutually correlated variables and associations to outcomes that are potentially of a complex nature. Differential network analysis provides a formal statistical method capable of inferential analysis to examine differences in network structures of the lipids under two biological conditions. It also guides us to identify potential relationships requiring further biological investigation. We provide a recipe for conducting a permutation test on association scores resulting from partial least squares regression with multiply imputed lipidomic data from the LUdwigshafen RIsk and Cardiovascular Health (LURIC) study, paying particular attention to the left-censored missing values typical for a wide range of data sets in the life sciences. Left-censored missing values are low-level concentrations that are known to exist somewhere between zero and a lower limit of quantification. To make full use of the LURIC data with the missing values, we utilize state-of-the-art multiple imputation techniques and propose solutions to the challenges that incomplete data sets bring to differential network analysis. The customized network analysis helps us to understand the complexities of the underlying biological processes by identifying lipids and lipid classes that interact with each other, and by recognizing the most important differentially expressed lipids between two subgroups of coronary artery disease (CAD) patients: the patients who had a fatal CVD event and the ones who remained stable during a two-year follow-up.
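
    The general recipe (multiply impute left-censored values, compute an association score per completed data set, pool, and compare against a permutation null) can be sketched as follows. The data are simulated, the imputation model is a deliberately simple Uniform(0, LOQ) draw, and a plain correlation stands in for the study's PLS-based score; none of this is the authors' code:

```python
# Hedged sketch of a permutation test with multiply imputed, left-censored
# data. Values below the limit of quantification (LOQ) are imputed m times,
# a score is computed per completed data set and averaged, and the same
# procedure builds a permutation null. Data and score are illustrative.
import numpy as np

rng = np.random.default_rng(1)
n, loq = 80, 0.5
lipid = np.exp(rng.normal(0, 1, n))            # hypothetical concentrations
outcome = 0.4 * np.log(lipid) + rng.normal(0, 1, n)
censored = lipid < loq
lipid_obs = np.where(censored, np.nan, lipid)  # below-LOQ values are missing

def impute_once(x, censored, loq, rng):
    """Fill left-censored entries with draws from Uniform(0, LOQ)."""
    out = x.copy()
    out[censored] = rng.uniform(0, loq, censored.sum())
    return out

def pooled_score(x, y, censored, loq, rng, m=20):
    """Average the association score over m imputations (pooling only the
    point estimate, for brevity)."""
    scores = [np.corrcoef(np.log(impute_once(x, censored, loq, rng)), y)[0, 1]
              for _ in range(m)]
    return np.mean(scores)

observed = pooled_score(lipid_obs, outcome, censored, loq, rng)
null = np.array([pooled_score(lipid_obs, rng.permutation(outcome),
                              censored, loq, rng) for _ in range(500)])
p_value = (np.sum(np.abs(null) >= abs(observed)) + 1) / (len(null) + 1)
print(f"pooled score = {observed:.3f}, permutation p = {p_value:.3f}")
```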

  5. Reliability and validity of the 30-s continuous jump test for anaerobic fitness evaluation.

    Science.gov (United States)

    Dal Pupo, Juliano; Gheller, Rodrigo G; Dias, Jonathan A; Rodacki, André L F; Moro, Antônio R P; Santos, Saray G

    2014-11-01

    To determine the test-retest reliability and concurrent validity of the 30-s continuous jump (CJ30) test using the Wingate test as a reference. Descriptive validity study. Twenty-one male volleyball players (23.8 ± 3.8 years; 82.5 ± 9.1 kg; 185 ± 4.7 cm) were tested in three separate sessions. The first and second sessions were used to assess the reliability of the CJ30 while in the third session the Wingate test was performed. In the continuous jump test, consisting of maximal continuous jumps performed for 30s, jump height was determined by video kinematic analysis. Blood samples were collected after each test to determine lactate concentration. The CJ30 showed excellent test-retest reliability for the maximal jump height (ICC = 0.94), mean vertical jump height (ICC = 0.98) and fatigue index (ICC = 0.87). Peak lactate showed moderate reliability (ICC = 0.45). Large correlations were found between the mean height of the first four jumps of CJ30 and the peak power of the Wingate (r = 0.57), between the mean vertical jump height of CJ30 and the mean power of the Wingate (r = 0.70) and between the lactate peak of CJ30 and Wingate (r = 0.51). A moderate correlation of fatigue index between CJ30 and the Wingate was found (r = 0.43). The continuous jump is a reliable test and measures some of the same anaerobic properties as WAnT. The correlations observed in terms of anaerobic indices between the tests provide evidence that the CJ30 may adequately assess anaerobic performance level. Copyright © 2013 Sports Medicine Australia. Published by Elsevier Ltd. All rights reserved.
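
    Test-retest reliability of the kind reported here is typically summarized with an intraclass correlation coefficient. Below is a minimal sketch of ICC(2,1) (two-way random effects, absolute agreement, per Shrout and Fleiss) on hypothetical jump heights; it is not the study's own analysis:

```python
# Minimal ICC(2,1) sketch: rows = athletes, columns = the two test sessions.
import numpy as np

def icc_2_1(x):
    """ICC(2,1): two-way random effects, absolute agreement, single rating
    (Shrout & Fleiss), from the classical ANOVA mean squares."""
    x = np.asarray(x, dtype=float)
    n, k = x.shape
    grand = x.mean()
    ss_rows = k * np.sum((x.mean(axis=1) - grand) ** 2)
    ss_cols = n * np.sum((x.mean(axis=0) - grand) ** 2)
    ss_total = np.sum((x - grand) ** 2)
    ms_rows = ss_rows / (n - 1)
    ms_cols = ss_cols / (k - 1)
    ms_err = (ss_total - ss_rows - ss_cols) / ((n - 1) * (k - 1))
    return (ms_rows - ms_err) / (
        ms_rows + (k - 1) * ms_err + k * (ms_cols - ms_err) / n)

# Hypothetical jump heights (cm) for six athletes over two sessions.
jumps = np.array([[38.1, 37.9], [42.5, 43.0], [35.2, 34.8],
                  [40.0, 40.6], [44.3, 44.1], [36.7, 37.2]])
print(f"test-retest ICC(2,1) = {icc_2_1(jumps):.3f}")
```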

  6. A Step by Step Approach for Evaluating the Reliability of the Main Engine Lube Oil System for a Ship's Propulsion System

    Directory of Open Access Journals (Sweden)

    Mohan Anantharaman

    2014-09-01

    Full Text Available Effective and efficient maintenance is essential to ensure the reliability of a ship's main propulsion system, which in turn is interdependent with the reliability of a number of associated subsystems. A primary step in evaluating the reliability of the ship's propulsion system is to evaluate the reliability of each subsystem. This paper discusses the methodology adopted to quantify the reliability of one of the vital subsystems, viz. the lubricating oil system, and the development of a model based on Markov analysis. Having developed the model, means to improve the reliability of the system should be considered. The cost of the incremental reliability should be measured to evaluate cost benefits. A maintenance plan can then be devised to achieve the higher level of reliability. A similar approach could be used to evaluate the reliability of all other subsystems. This will finally lead to the development of a model to evaluate and improve the reliability of the main propulsion system.
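
    A minimal illustration of the Markov approach mentioned above: a lubricating oil system idealized as a three-state continuous-time Markov chain (healthy, degraded, failed) with assumed constant transition rates, solved for steady-state availability. The states and rates are hypothetical, not taken from the paper:

```python
# Hedged three-state Markov availability sketch. Steady-state probabilities
# solve pi @ Q = 0 with sum(pi) = 1. Rates (per hour) are invented.
import numpy as np

# States: 0 = healthy, 1 = degraded, 2 = failed.
lam1, lam2, mu1, mu2 = 1e-3, 5e-3, 2e-2, 1e-2   # assumed transition rates (1/h)
Q = np.array([
    [-lam1,         lam1,        0.0 ],
    [  mu1, -(mu1 + lam2),       lam2],
    [  mu2,          0.0,       -mu2 ],
])

# Replace one balance equation with the normalization constraint.
A = np.vstack([Q.T[:-1], np.ones(3)])
b = np.array([0.0, 0.0, 1.0])
pi = np.linalg.lstsq(A, b, rcond=None)[0]

availability = pi[0] + pi[1]     # system functions in healthy or degraded state
print(f"steady-state probabilities: {np.round(pi, 4)}")
print(f"availability (non-failed):  {availability:.5f}")
```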

  7. Evaluating validity and test-retest reliability in four drive for muscularity questionnaires.

    Science.gov (United States)

    Tod, David; Morrison, Todd G; Edwards, Christian

    2012-06-01

    The current study assessed relationships among four commonly used drive for muscularity questionnaires, along with their 7- and 14-day test-retest reliability. Sample 1 comprised young British adult males (N=272; M(AGE)=20.3) who completed the questionnaires once. Sample 2, a group of young British adult males (N=54, M(AGE)=19.3), completed the questionnaires three times, spaced 7 and 14 days apart. Correlations in Sample 1 ranged from .20 to .82, providing evidence for concurrent and discriminant validity. Evidence for test-retest reliability emerged, with intraclass correlations ranging from .78 to .95 (p < .05). Overall, the data support the psychometric properties of the drive for muscularity inventories; however, the shared variance (35-67%) hints that refinement is possible. Crown Copyright © 2012. Published by Elsevier Ltd. All rights reserved.

  8. Utilisation of quantitative reliability concepts in evaluating the marginal outage costs of electric generating systems

    Energy Technology Data Exchange (ETDEWEB)

    Ghajar, Raymond; Billinton, Roy [Saskatchewan Univ., Saskatoon, SK (Canada). Dept. of Electrical Engineering

    1994-12-31

    Marginal outage costs are an important component of electricity spot prices. This paper describes a methodology based on quantitative power system reliability concepts for calculating these costs in electric generating systems. The proposed method involves the calculation of the incremental expected unserved energy at a given operating reserve level and lead time and the multiplication of this value by the average cost of unserved energy of the generating system. An extension of the proposed method is applied to interconnected generating systems in order to calculate the impact of assistance from neighbouring systems on the marginal outage cost profile of the assisted system. This method is based on the equivalent assisting unit approach. The methods discussed in this paper are illustrated by calculating the marginal outage cost profile of a small educational test system and by examining the effect of selected modelling assumptions and parameters to see how simplified representations can be used to approximate the results obtained using more detailed reliability models. (author)
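
    The core calculation (incremental expected unserved energy at a given load, priced at the average cost of unserved energy) can be sketched numerically. Unit capacities, forced outage rates, lead time, and the value of lost load below are all invented, and the method is a plain enumeration of a capacity-outage probability table, not the authors' equivalent-assisting-unit extension:

```python
# Hedged sketch: marginal outage cost from the incremental expected unserved
# energy (EUE) of a tiny generating system. All numbers are hypothetical.
from itertools import product

# (capacity in MW, forced outage rate) per generating unit.
units = [(50, 0.02), (50, 0.02), (30, 0.04)]

def expected_unserved_power(load_mw):
    """Expected MW of load not served, by enumerating unit up/down states
    (a brute-force capacity-outage probability table)."""
    eup = 0.0
    for states in product((0, 1), repeat=len(units)):   # 1 = unit available
        prob, cap = 1.0, 0.0
        for (c, outage_rate), up in zip(units, states):
            prob *= (1 - outage_rate) if up else outage_rate
            cap += c * up
        eup += prob * max(load_mw - cap, 0.0)
    return eup

load, lead_time_h, voll = 100.0, 1.0, 5000.0   # MW, hours, $/MWh (invented)
incremental_eue = (expected_unserved_power(load + 1.0)
                   - expected_unserved_power(load)) * lead_time_h
print(f"marginal outage cost ~ ${incremental_eue * voll:.2f} per extra MW over the lead time")
```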

  9. ELECTRICAL SUBSTATION RELIABILITY EVALUATION WITH EMPHASIS ON EVOLVING INTERDEPENDENCE ON COMMUNICATION INFRASTRUCTURE.

    Energy Technology Data Exchange (ETDEWEB)

    AZARM, M.A.; BARI, R.A.; MUSICKI, Z.

    2004-01-15

    The objective of this study is to develop a methodology for a probabilistic assessment of the reliability and security of electrical energy distribution networks. This includes consideration of the future grid system, which will rely heavily on the existing digitally based communication infrastructure for monitoring and protection. Another important objective of this study is to provide information and insights from this research to Consolidated Edison Company (Con Edison) that could be useful in the design of the new network segment to be installed in the area of the World Trade Center in lower Manhattan. Our method is microscopic in nature and relies heavily on the specific design of the portion of the grid being analyzed. It extensively models the types of faults that a grid could potentially experience, the response of the grid, and the specific design of the protection schemes. We demonstrate that the existing technology can be extended and applied to the electrical grid and to the supporting communication network. A small subsection of a hypothetical grid based on the existing New York City electrical grid system of Con Edison is used to demonstrate the methods. Sensitivity studies show that in the current design the frequency for the loss of the main station is sensitive to the communication network reliability. The reliability of the communication network could become a more important contributor to the electrical grid reliability as the utilization of the communication network significantly increases in the near future to support "smart" transmission and/or distributed generation. The identification of potential failure modes and their likelihood can support decisions on potential modifications to the network including hardware, monitoring instrumentation, and protection systems.

  10. A Probabilistic Approach for Reliability and Life Prediction of Electronics in Drilling and Evaluation Tools

    Science.gov (United States)

    2014-12-23

    … the motor continues to turn the drill string. When the bit is free, the torsional energy stored in the drill string is released, causing the BHA to spin. … The average value of operational temperature and vibration over all the previous runs is calculated … High-temperature, high-reliability electronics will alter geothermal exploration. Proceedings World Geothermal Congress, Antalya, Turkey. Osterman

  11. INNOVATIVE METHODS TO EVALUATE THE RELIABILITY OF INFORMATION CONSOLIDATED FINANCIAL STATEMENTS

    Directory of Open Access Journals (Sweden)

    Irina P. Kurochkina

    2014-01-01

    Full Text Available The article explores the possibility of using foreign innovative methods to assess the reliability of information in the consolidated financial statements of Russian companies. Recommendations are made for their adaptation and application in commercial organizations. Banish method indicators are implemented in one of the world's largest vertically integrated steel and mining companies. It is proposed that audit firms use these methods of assessing the reliability of information in the practical application of ISA.

  12. Re-Evaluating the Netflix Prize - Human Uncertainty and its Impact on Reliability

    OpenAIRE

    Jasberg, Kevin; Sizov, Sergej

    2017-01-01

    In this paper, we examine the statistical soundness of comparative assessments within the field of recommender systems in terms of reliability and human uncertainty. From a controlled experiment, we get the insight that users provide different ratings on the same items when repeatedly asked. This volatility of user ratings justifies the assumption of using probability densities instead of single rating scores. As a consequence, the well-known accuracy metrics (e.g. MAE, MSE, RMSE) yield a density...

  13. Validity and Reliability of the Korean Version of the Utrecht Scale for Evaluation of Rehabilitation-Participation

    Directory of Open Access Journals (Sweden)

    Joo-Hyun Lee

    2017-01-01

    Full Text Available This study investigated the reliability and validity of the Korean version of the Utrecht Scale for Evaluation of Rehabilitation-Participation (K-USER-P) in patients with stroke. Stroke patients participated in this study. The Utrecht Scale for Evaluation of Rehabilitation-Participation was translated from English into Korean. A total of 120 questionnaires involving the K-USER-P were distributed to rehabilitation hospitals and centers by mail. Of those, 100 questionnaires were returned and 67 were included in the final analysis after exclusion of questionnaires with insufficient responses. We analyzed the questionnaires for internal consistency, test-retest reliability, and construct validity. The results indicated that the internal consistency coefficients of the frequency, restriction, and satisfaction domains were 0.69, 0.66, and 0.67, respectively. Test-retest reliability was 0.63, 0.45, and 0.71 for the three domains, respectively. Intercorrelations between the SF-12 and the London Handicap Scale were generally moderate to good. The Korean version of the Utrecht Scale for Evaluation of Rehabilitation-Participation can be used as a measure of the participation level of stroke patients in clinical practice and the local community.

  14. Reliability and Seasonal Changes of Submaximal Variables to Evaluate Professional Cyclists.

    Science.gov (United States)

    Rodríguez-Marroyo, Jose A; Pernía, Raúl; Villa, José G; Foster, Carl

    2017-11-01

    The aim of this study was to determine the reliability and validity of several submaximal variables that can be easily obtained by monitoring cyclists' performances. Eighteen professional cyclists participated in this study. In the first part (n = 15), the test-retest reliability of heart rate (HR) and rating of perceived exertion (RPE) during a progressive maximal test was measured. Derived submaximal variables based on HR, RPE, and power output (PO) responses were analyzed. In the second part (n = 7), the pattern of the submaximal variables according to the cyclists' training status was analyzed. Cyclists were assessed 3 times during the season: at the beginning of the season, before the Vuelta a España, and the day after this Grand Tour. Part 1: No significant differences in maximal and submaximal variables between test and retest were found. Excellent ICCs (0.81-0.98) were obtained for all variables. Part 2: The HR and RPE showed a rightward shift from early to peak season. In addition, RPE showed a leftward shift after the Vuelta a España. Submaximal variables based on RPE had the best relationship with both performance and changes in performance. The present study showed the reliability of different maximal and submaximal variables used to assess cyclists' performances. Submaximal variables based on RPE seem to be the best for monitoring changes in training status over a season.

  15. Reliability of photogrammetry in the evaluation of the postural aspects of individuals with structural scoliosis.

    Science.gov (United States)

    Saad, Karen Ruggeri; Colombo, Alexandra Siqueira; Ribeiro, Ana Paula; João, Sílvia Maria Amado

    2012-04-01

    The purpose of this study was to investigate the reliability of photogrammetry in the measurement of postural deviations in individuals with idiopathic scoliosis. Twenty participants with scoliosis (17 women and three men), with a mean age of 23.1 ± 9 yrs, were photographed from the posterior and lateral views. The postural aspects were measured with CorelDRAW software. High inter-rater and test-retest reliability indices were found. It was observed that the more severe the scoliosis, the greater the variation between the thoracic kyphosis and lumbar lordosis measures obtained by the same examiner from the left lateral view photographs. A greater body mass index (BMI) was associated with greater variability of the trunk rotation measures obtained by two independent examiners from the right lateral view (r = 0.656; p = 0.002). The severity of scoliosis was also associated with greater inter-rater variability of the trunk rotation measures obtained from the left lateral view (r = 0.483; p = 0.036). Photogrammetry was shown to be a reliable method for the measurement of postural deviations from the posterior and lateral views in individuals with idiopathic scoliosis and could be employed as a complement to assessment procedures, which could reduce the number of X-rays used in the follow-up of these individuals. Copyright © 2011 Elsevier Ltd. All rights reserved.

  16. Construct validation and test-retest reliability of the seniors in the community: risk evaluation for eating and nutrition questionnaire.

    Science.gov (United States)

    Keller, H H; McKenzie, J D; Goy, R E

    2001-09-01

    We performed two studies. Study 1 was a construct validation of Seniors in the Community: Risk Evaluation for Eating and Nutrition (SCREEN), a 15-item questionnaire for assessing nutritional risk. In Study 2, we examined the test-retest reliability of SCREEN. Study 1 was a cross-sectional study, and Study 2 was a cohort study. For Study 1, ten diverse community sites were used to recruit participants. A total of 128 older adults attended a clinic to provide medical and nutritional history and anthropometric measurements. A dietitian interviewed each participant. Dietitians used clinical judgment to rate the probability of nutritional risk from 1 (low risk) to 10 (high risk). Spearman's rho correlation and receiver operating characteristic curves were completed. An abbreviated SCREEN was developed through multiple linear regression analysis. In Study 2, SCREEN was randomly distributed to members of a seniors' recreation center where a self-selected sample (n = 124) completed two mailed SCREENs, 4 weeks apart. The test-retest reliability was estimated through paired correlations of total scores and individual items. In Study 1, total and abbreviated SCREEN scores were significantly associated with the dietitian nutritional risk rating (rho = -.47 and rho = -.60, respectively). Study 2 revealed that the test-retest reliability of SCREEN was adequate. SCREEN appears to be a valid and reliable tool for identifying community-dwelling older adults at risk for impaired nutritional states.

  17. The Validity and Reliability of Scales for the Evaluation of End-of-Life Care in Advanced Dementia

    Science.gov (United States)

    Kiely, Dan K.; Volicer, Ladislav; Teno, Joan; Jones, Richard N.; Prigerson, Holly G.; Mitchell, Susan L.

    2009-01-01

    The lack of valid and reliable instruments designed to measure the experiences of older persons with advanced dementia and those of their health care proxies has limited palliative care research for this condition. This study evaluated the reliability and validity of 3 End-of-Life in Dementia (EOLD) scales that measure the following outcomes: (1) satisfaction with the terminal care (SWC-EOLD), (2) symptom management (SM-EOLD), and (3) comfort during the last 7 days of life (CAD-EOLD). Data were derived from interviews with the health care proxies (SWC-EOLD) and primary care nurses (SM-EOLD, CAD-EOLD) for 189 nursing home residents with advanced dementia living in 15 Boston-area facilities. The scales demonstrated satisfactory to good reliability: SM-EOLD (α = 0.68), SWC-EOLD (α = 0.83), and CAD-EOLD (α = 0.82). The convergent validity of these scales, as measured against other established instruments assessing similar constructs, was good (correlation coefficients ranged from 0.50 to 0.81). The results of this study show that the 3 EOLD scales have "internal consistency" reliability and convergent validity, and they further establish the scales' utility in palliative care dementia research. PMID:16917188
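
    The internal-consistency figures quoted above (α) are Cronbach's alpha. A minimal sketch of the computation on an invented item-by-respondent rating matrix, not the study's data:

```python
# Cronbach's alpha: internal consistency of a multi-item scale.
import numpy as np

def cronbach_alpha(items):
    """items: 2-D array, rows = respondents, columns = scale items."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)         # per-item variances
    total_var = items.sum(axis=1).var(ddof=1)     # variance of the sum score
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

# Hypothetical 5-point ratings: six respondents, four items.
ratings = np.array([[3, 4, 3, 4], [2, 2, 3, 2], [4, 5, 4, 4],
                    [1, 2, 1, 2], [3, 3, 4, 3], [5, 4, 5, 5]])
print(f"Cronbach's alpha = {cronbach_alpha(ratings):.2f}")
```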

  18. An independent interobserver reliability and intraobserver reproducibility evaluation of the new AOSpine Thoracolumbar Spine Injury Classification System.

    Science.gov (United States)

    Urrutia, Julio; Zamora, Tomas; Yurac, Ratko; Campos, Mauricio; Palma, Joaquin; Mobarec, Sebastian; Prada, Carlos

    2015-01-01

    Agreement study. To perform an independent interobserver and intraobserver agreement evaluation of the new AOSpine Thoracolumbar Spine Injury Classification System. The new AOSpine Thoracolumbar Spine Injury Classification System was recently published. It showed substantial reliability and reproducibility among the surgeons who developed it; however, an independent evaluation has not been performed. Anteroposterior and lateral radiographs, and computed tomographic scans of 70 patients with acute traumatic thoracolumbar injuries were selected and classified using the morphological grading of the new AOSpine Thoracolumbar Spine Injury Classification System by 6 evaluators (3 spine surgeons and 3 orthopedic surgery residents). After a 6-week interval, the 70 cases were presented in a random sequence to the same evaluators for repeat evaluation. The Kappa coefficient (κ) was used to determine the interobserver and intraobserver agreement. The interobserver reliability was substantial when considering the fracture type (A, B, or C), with a κ= 0.62 (0.57-0.66). The interobserver agreement when considering the subtypes was moderate; κ= 0.55 (0.52-0.57). The intraobserver reproducibility was also substantial, with 85.95% full intraobserver reproducibility considering the fracture type, with κ= 0.77 (0.72-0.83), and was also substantial when considering subtypes with 75.71% full agreement and κ= 0.71 (0.67-0.76). No significant differences were observed between spine surgeons and orthopedic residents in the overall interobserver reliability and intraobserver reproducibility, or in the inter- and intraobserver agreement of specific A, B, or C types of injuries. This classification allows adequate agreement among different observers and by the same observer on separate occasions. Future prospective studies should evaluate whether this classification improves clinical decision making.
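
    The agreement statistic used throughout this record is Cohen's kappa, which corrects raw agreement for the agreement expected by chance. A minimal sketch on invented A/B/C fracture-type labels (the unweighted form; the study reports κ with confidence intervals):

```python
# Cohen's kappa for two raters assigning categorical labels.
from collections import Counter

def cohens_kappa(rater1, rater2):
    n = len(rater1)
    observed = sum(a == b for a, b in zip(rater1, rater2)) / n
    c1, c2 = Counter(rater1), Counter(rater2)
    # Chance agreement from the raters' marginal label frequencies.
    expected = sum(c1[k] * c2[k] for k in set(c1) | set(c2)) / n ** 2
    return (observed - expected) / (1 - expected)

# Hypothetical AOSpine type (A/B/C) calls for 20 cases by two raters.
r1 = list("AABBCABACBAACCBABACA")
r2 = list("AABBCBBACBAACCAABACA")
print(f"kappa = {cohens_kappa(r1, r2):.2f}")
```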

  19. Features of applying systems approach for evaluating the reliability of cryogenic systems for special purposes

    Directory of Open Access Journals (Sweden)

    E. D. Chertov

    2016-01-01

    Full Text Available Summary. The analysis of cryogenic installations confirms an objective trend: the number of tasks addressed by special-purpose systems is increasing. One of the most important directions in the development of cryogenics is the creation of installations for producing air-separation products, namely oxygen and nitrogen. Modern aviation complexes require these gases in large quantities, both in the gaseous and in the liquid state. The onboard gas systems used in aircraft of the Russian Federation are subdivided into: the oxygen system; the air (nitrogen) system; the neutral-gas system; and the fire-protection system. The technological schemes of air-separation installations (ADI) are largely determined by the pressure of the compressed air or, in a general sense, by the refrigerating cycle. For the majority of ADI, the working body of the refrigerating cycle is the separated air itself; that is, the technological and refrigerating cycles in the installation are integrated. By this principle, installations are classified as: low pressure; medium and high pressure; with expander; and with preliminary chilling. There is also a small number of ADI types in which the refrigerating and technological cycles are separated; these are installations with external chilling. To solve the tasks of monitoring the technical condition of the BRV hardware in real time and estimating reliability indicators, it is proposed to use multi-agent technologies. The multi-agent approach is the most suitable for building a decision-support system for reliability assessment because it allows: redistributing information processing across the elements of the system, which increases overall performance; solving the problem of accumulating, storing, and reusing knowledge, which significantly increases the efficiency of reliability-assessment tasks; and considerably reducing human intervention in the functioning of the system, which saves the time of the decision maker and does not require special skills in working with it.

  20. Reliability evaluation and analysis of sugarcane 7000 series harvesters in sugarcane harvesting

    Directory of Open Access Journals (Sweden)

    P Najafi

    2015-09-01

    Full Text Available Introduction: The performance of agricultural machines depends on the reliability of the equipment used, maintenance efficiency, the operation process, the technical expertise of workers, etc. As the size and complexity of agricultural equipment continue to increase, the implications of equipment failure become even more critical. Machine failure probability is (1 − R), where R is machine reliability (Vafaei et al., 2010). Moreover, system reliability is the probability that an item will perform a required function without failure, under stated conditions, for a stated period of time (Billinton and Allan, 1992). Therefore, we must be able to strike an appropriate compromise between maintenance methods and acceptable reliability levels. Precise gathering of failure data on a farm is worthwhile, because such data can represent a good estimate of machine reliability, combining the effects of machine loading, environmental conditions, and incorrect repair and maintenance. Each machine, based on its working conditions, combination of parts, and manufacturing process, follows a failure distribution function that depends on the environment in which the machine works and on the machine's specifications (Meeker and Escobar, 1998). Common failure distributions for continuous data are the normal, log-normal, exponential, and Weibull (Shirmohamadi, 2002). Each machine can exhibit behavior consistent with these functions over the short or long term. Materials and methods: The study area was the Hakim Farabi agro-industry Company, located 35 kilometers south of Ahvaz in Iran. The arable lands of this company lie between 31° and 31°10′ N latitude and 45° and 48°36′ E longitude. The region has a dry and warm climate. A total of 24 Austoft 7000 sugarcane chopper harvesters are used by the company. The cane harvesters were divided into 3 groups: old, middle-aged, and new. From each group, one machine was chosen. Data from maintenance reports of harvesters recorded within 400
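
    The distribution-fitting step described above can be sketched with a two-parameter Weibull: estimate the shape (β) and scale (η) from times between failures and evaluate the reliability function R(t) = exp(-(t/η)^β). The failure times below are invented, and the fit uses SciPy's maximum-likelihood estimator with the location fixed at zero; this is an illustration, not the study's analysis:

```python
# Hedged sketch: fit a two-parameter Weibull to hypothetical times between
# failures (hours) and evaluate the reliability function.
import numpy as np
from scipy import stats

tbf = np.array([12, 35, 48, 60, 72, 90, 105, 130, 150, 200], dtype=float)

# weibull_min.fit returns (shape, loc, scale); loc is fixed at 0 here.
beta, _, eta = stats.weibull_min.fit(tbf, floc=0)

def reliability(t):
    """R(t) = exp(-(t/eta)**beta): probability of surviving beyond t."""
    return np.exp(-((t / eta) ** beta))

print(f"beta (shape) = {beta:.2f}, eta (scale) = {eta:.1f} h")
print(f"R(50 h) = {reliability(50.0):.3f}")
```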

  1. Microcircuit Device Reliability. Digital Evaluation and Failure Analysis Data. Parts 1 and 2, Summer 1980

    Science.gov (United States)

    1980-01-01

    [Scanned document; the digitized text consists of unrecoverable OCR fragments of Reliability Analysis Center evaluation and failure analysis data tables for bipolar TTL operational-type microcircuits, listing manufacturer, package/screen class, date, test stress, and specification fields.]

  2. Evaluating Proposed Investments in Power System Reliability and Resilience: Preliminary Results from Interviews with Public Utility Commission Staff

    Energy Technology Data Exchange (ETDEWEB)

    LaCommare, Kristina [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Larsen, Peter [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Eto, Joseph [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States)

    2017-01-01

    Policymakers and regulatory agencies are expressing renewed interest in the reliability and resilience of the U.S. electric power system in large part due to growing recognition of the challenges posed by climate change, extreme weather events, and other emerging threats. Unfortunately, there has been little or no consolidated information in the public domain describing how public utility/service commission (PUC) staff evaluate the economics of proposed investments in the resilience of the power system. Having more consolidated information would give policymakers a better understanding of how different state regulatory entities across the U.S. make economic decisions pertaining to reliability/resiliency. To help address this, Lawrence Berkeley National Laboratory (LBNL) was tasked by the U.S. Department of Energy Office of Energy Policy and Systems Analysis (EPSA) to conduct an initial set of interviews with PUC staff to learn more about how proposed utility investments in reliability/resilience are being evaluated from an economics perspective. LBNL conducted structured interviews in late May-early June 2016 with staff from the following PUCs: Washington D.C. (DCPSC), Florida (FPSC), and California (CPUC).

  3. Intertester reliability of the McKenzie evaluation in assessing patients with mechanical low-back pain.

    Science.gov (United States)

    Razmjou, H; Kramer, J F; Yamada, R

    2000-07-01

    Prospective intertester reliability study investigating the ability of 2 therapists to agree on a low back pain diagnosis using examination techniques and the classification system described by McKenzie. To investigate intertester agreement in determining McKenzie diagnostic syndromes, subsyndromes, and the presence and relevance of spinal deformities. The reliability of the McKenzie approach for determining diagnostic categories is unclear. Previous studies have been characterized by inconsistency of test protocols, criterion measures, and level of training of the examiners, which confounds the interpretation of results. Patients were assessed simultaneously by 2 physical therapists trained in the McKenzie evaluation system. The therapists were randomly assigned as examiner and observer. Agreement was estimated by Kappa statistics. Forty-five subjects (47 +/- 14 years), composed of 25 women and 20 men with acute, subacute, or chronic low back pain, were examined. The agreement between raters for selection of the McKenzie syndromes was kappa = 0.70, and for the derangement subsyndromes was kappa = 0.96. Interrater agreement for presence of lateral shift, relevance of lateral shift, relevance of lateral component, and deformity in the sagittal plane was kappa = 0.52, 0.85, 0.95, and 1.00, respectively. Intertester agreement on syndrome categories in 17 patients under 55 years of age was excellent, with kappa = 1.00. A form of low back evaluation, using patterns of pain response to repeated end-range spinal test movements, was highly reliable when performed by 2 properly trained physical therapists.

  4. Menstrual cycle corrupts reliable and valid assessment of language dominance: Consequences for presurgical evaluation of patients with epilepsy.

    Science.gov (United States)

    Helmstaedter, Christoph; Jockwitz, Christiane; Witt, Juri-Alexander

    2015-05-01

    Functional transcranial Doppler sonography (fTCD) is a valid and non-invasive tool for determining language dominance, e.g. in the context of presurgical evaluations. Beyond this, fTCD might be an ideal tool to study dynamics in language dominance over time. However, an essential prerequisite would be a high test-retest reliability. This was addressed in the present study. Test-retest reliability of hemispheric hemodynamics during open speech was determined in 11 male and 11 female healthy volunteers using the Animation Description Paradigm. Expressive language dominance was assessed weekly over an interval of 4-5 weeks. Internal consistency of the four measurements was excellent (split-half reliability 0.85-0.95), but test-retest reliability of the lateralization index was poor to moderate (rtt = 0.37-0.74). Controlling for gender, test-retest reliabilities were better in men (rtt = 0.67-0.78) than in women (rtt = 0.04-0.70). When arranging the assessments in women around day one of menstruation - all were on contraceptives - a significant shift from left-hemisphere dominance toward bilaterality (t = 2.2, p = 0.04) was evident around menstruation, with a significant reversal afterwards (t = -3.4, p = 0.005). A high intraindividual variability of language dominance patterns is indicated in women when assessed repeatedly by fTCD. The menstrual cycle appeared to be the source of the inconsistency. This finding challenges the use of non-deactivating methods for language dominance assessment in epilepsy. Support for this is demonstrated with a female patient with epilepsy in whom language dominance assessed by repeated fMRI and fTCD varied concordantly with cycle, but not the repeated intracarotid amobarbital test. Copyright © 2015 British Epilepsy Association. Published by Elsevier Ltd. All rights reserved.

  5. Regression analysis of the structure function for reliability evaluation of continuous-state system

    Energy Technology Data Exchange (ETDEWEB)

    Gamiz, M.L., E-mail: mgamiz@ugr.e [Departamento de Estadistica e I.O., Facultad de Ciencias, Universidad de Granada, Granada 18071 (Spain); Martinez Miranda, M.D. [Departamento de Estadistica e I.O., Facultad de Ciencias, Universidad de Granada, Granada 18071 (Spain)

    2010-02-15

    Technical systems are designed to perform an intended task with an admissible range of efficiency. According to this idea, it is permissible that the system runs among different levels of performance, in addition to complete failure and perfect functioning. As a consequence, reliability theory has evolved from binary-state systems to the most general case of the continuous-state system, in which the state of the system changes over time through some interval on the real number line. In this context, obtaining an expression for the structure function becomes difficult compared with the discrete case, with the difficulty increasing as the number of components of the system increases. In this work, we propose a method to build a structure function for a continuous-state system by using multivariate nonparametric regression techniques, in which certain analytical restrictions on the variable of interest must be taken into account. Once the structure function is obtained, some reliability indices of the system are estimated. We illustrate our method via several numerical examples.
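
    A toy version of the proposal, under stated assumptions: observe continuous component states together with a continuous system state, estimate the structure function φ(x) = E[y | x] by a generic nonparametric regression (Nadaraya-Watson kernel smoothing here, which is one choice among many), and read a reliability index off the fitted surface. The simulated "series-like" system and the bandwidth are invented for illustration:

```python
# Hedged toy sketch: estimate a continuous-state structure function by
# kernel regression and derive a simple reliability index from it.
import numpy as np

rng = np.random.default_rng(2)
x = rng.uniform(0, 1, size=(400, 2))   # component performance levels in [0,1]
y = np.minimum(x[:, 0], x[:, 1]) + rng.normal(0, 0.05, 400)  # "series"-like system

def nadaraya_watson(x_train, y_train, x0, h=0.1):
    """Gaussian-kernel local average estimate of E[y | x = x0]."""
    d2 = np.sum((x_train - x0) ** 2, axis=1)
    w = np.exp(-d2 / (2 * h ** 2))
    return np.sum(w * y_train) / np.sum(w)

phi_hat = nadaraya_watson(x, y, np.array([0.8, 0.6]))
print(f"estimated structure function at (0.8, 0.6): {phi_hat:.3f}")  # ~0.6

# A simple reliability index: probability the system performs above 0.5
# when component states are uniform on [0,1]^2, via the fitted surface.
grid = rng.uniform(0, 1, size=(2000, 2))
levels = np.array([nadaraya_watson(x, y, g) for g in grid])
print(f"P(system state > 0.5) ~ {np.mean(levels > 0.5):.3f}")
```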

  6. Evaluation of S1 motor block to determine a safe, reliable test dose for epidural analgesia.

    Science.gov (United States)

    Daoud, Z; Collis, R E; Ateleanu, B; Mapleson, W W

    2002-09-01

    Accidental intrathecal injection of bupivacaine during epidural analgesia in labour remains a hazard, with the potential to cause total spinal anaesthesia and maternal collapse. Sacral block appears early after intrathecal injections compared with epidural ones, and we therefore used S1 motor block to determine a safe and reliable test dose for epidural catheter misplacement. Mothers booked for elective Caesarean section were given various intrathecal doses of bupivacaine with fentanyl during routine combined spinal-epidural anaesthesia. Using sequential allocation we found that the ED50 for S1 motor block 10 min after intrathecal injection was bupivacaine 7 mg with fentanyl 14 micrograms (95% CI, 6.2-7.8 mg). We then used intrathecal bupivacaine 13 mg to look for the ED95. We found the calculated ED97.5 to be bupivacaine 9.7 mg with fentanyl 19.4 micrograms (95% CI, 8.7-11.4). We conclude that testing for S1 motor block 10 min after epidural injection of bupivacaine 10 mg is a reliable test to detect accidental intrathecal injection in the obstetric population.

  7. Increasing imputation and prediction accuracy for Chinese Holsteins using joint Chinese-Nordic reference population

    DEFF Research Database (Denmark)

    Ma, Peipei; Lund, Mogens Sandø; Ding, X

    2015-01-01

    This study investigated the effect of including Nordic Holsteins in the reference population on the imputation accuracy and prediction accuracy for Chinese Holsteins. The data used in this study include 85 Chinese Holstein bulls genotyped with both the 54K chip and the 777K (HD) chip, and 2862 Chinese cows genotyped … in Chinese Holstein were assessed. The allele correct rate increased around 2.7 and 1.7% in imputation from the 54K to the HD marker data for Chinese Holstein bulls and cows, respectively, when the Nordic HD-genotyped bulls were included in the reference data for imputation. However, the prediction accuracy … to increase the reference population rather than increasing marker density …

  8. The evaluation of pelvic floor muscle strength in women with pelvic floor dysfunction: A reliability and correlation study.

    Science.gov (United States)

    Navarro Brazález, Beatriz; Torres Lacomba, María; de la Villa, Pedro; Sánchez Sánchez, Beatriz; Prieto Gómez, Virginia; Asúnsolo Del Barco, Ángel; McLean, Linda

    2017-04-28

    The purposes of this study were: (i) to evaluate the reliability of vaginal palpation, vaginal manometry, vaginal dynamometry, and surface (transperineal) electromyography (sEMG) when evaluating pelvic floor muscle (PFM) strength and/or activation; and (ii) to determine the associations among PFM strength measured using these assessments. One hundred and fifty women with pelvic floor disorders participated on one occasion, and 20 women returned for the same investigations by two different raters on 3 different days. At each session, PFM strength was assessed using palpation (both the modified Oxford Grading Scale and the Levator ani testing), manometry, and dynamometry; PFM activation was assessed using sEMG. The interrater reliability of manometry, dynamometry, and sEMG (both root-mean-square [RMS] and integral average) was high (Lin's Concordance Correlation Coefficient [CCC] = 0.95, 0.93, 0.91, and 0.86, respectively), whereas the interrater reliability of both palpation grading scales was low (Cohen's kappa [κ] = 0.27-0.38). The intrarater reliability of manometry (CCC = 0.96) and dynamometry (CCC = 0.96) was high, whereas the intrarater reliability of both palpation scales (κ = 0.78 for both) and of sEMG (CCC = 0.79 vs 0.80 for RMS vs integral average) was moderate. Bland-Altman plots showed good inter- and intrarater agreement, with little random variability for all instruments. The correlations among palpation, manometry, and dynamometry were moderate (coefficient of determination [r²] ranged from 0.52 to 0.75); however, transperineal sEMG amplitude was only weakly correlated with all measures of strength (r² = 0.23-0.30). Manometry and dynamometry are more reliable tools than vaginal palpation for the assessment of PFM strength in women with pelvic floor disorders, especially when different raters are involved. The different PFM strength measures used clinically are moderately correlated, whereas PFM activation recorded

  9. A systematic review of methods to evaluate sentence production deficits in agrammatic aphasia patients: validity and reliability issues

    Directory of Open Access Journals (Sweden)

    Azar Mehri

    2014-01-01

    Background: The assessment of grammar in aphasia has been carried out with a few standardized tests, but these tests cannot precisely evaluate sentence production in agrammatic patients. In this study, we review the structure and content of tests or tasks designed for assessing sentence production ability in aphasic patients, to identify the most frequently used methods. Materials and Methods: We searched the Cochrane Library, Medline via PubMed, Science Direct, Scopus, and Google Scholar from 1980 to October 1, 2013 and evaluated all existing tests or tasks included in the articles and systematic reviews. Sentence production has been studied with three approaches: sentence production in spontaneous speech, designed tasks, and a combination of both. The quality of studies was assessed using the Critical Appraisal Skills Program. Results: 160 articles were reviewed and 38 articles were included according to the inclusion and exclusion criteria. They were classified into three categories based on the assessment method for sentence production. In 39.5% of the studies researchers used designed tasks, 7.9% used spontaneous speech, and 52.6% used both methods. Inter-rater reliability was between 90% and 100% and intra-rater reliability was between 96% and 98% across the studies. Conclusion: Agrammatic aphasia involves syntactic disorders, especially in sentence production. Most researchers and clinicians used both methods to evaluate production.

  10. Reliability and Validity of the Turkish Version of the Consensus Auditory-Perceptual Evaluation of Voice (CAPE-V).

    Science.gov (United States)

    Özcebe, Esra; Aydinli, Fatma Esen; Tiğrak, Tuğçe Karahan; İncebay, Önal; Yilmaz, Taner

    2018-01-11

    The main purpose of this study was to culturally adapt the Consensus Auditory-Perceptual Evaluation of Voice (CAPE-V) to Turkish and to evaluate its internal consistency, validity, and reliability. The Turkish version of the CAPE-V was developed, and with the use of a prospective case-control design, voice recordings of 130 participants were collected according to the CAPE-V protocol. Auditory-perceptual evaluation was conducted according to the CAPE-V and the Grade, Roughness, Breathiness, Asthenia, and Strain (GRBAS) scale by two ear, nose, and throat specialists and two speech and language therapists. The different types of voice disorders, classified as organic and functional disorders, were compared in terms of their CAPE-V scores. The overall severity parameter had the highest intrarater and interrater reliability values for all the participants. For all four raters, the differences in the six CAPE-V parameters between the study and control groups were statistically significant. Among the correlations for the comparable parameters of the CAPE-V and GRBAS scales, the highest correlation was found between the overall severity and grade parameters. No difference was found between the organic and functional voice disorders in terms of CAPE-V scores. The Turkish version of the CAPE-V has been shown to be a reliable and valid instrument for the auditory-perceptual evaluation of voice. For future work, it would be important to investigate whether cepstral measures correlate with auditory-perceptual judgments of dysphonia severity collected with the Turkish version of the CAPE-V.

  11. Reliability and cross-cultural adaptation of the Turkish version of the Spinal Cord Injury Spasticity Evaluation Tool.

    Science.gov (United States)

    Akpinar, Pinar; Atici, Arzu; Kurt, Kubra N; Ozkan, Feyza U; Aktas, Ilknur; Kulcu, Duygu G

    2017-06-01

    The Spinal Cord Injury Spasticity Evaluation Tool (SCI-SETT) is a 7-day recall self-reported questionnaire that assesses the problematic and useful effects of spasticity on daily life in patients with spinal cord injury (SCI). We aimed to determine the reliability and cross-cultural validity of the Turkish translation of the tool. After translation and back-translation, 66 patients with SCI between the ages of 18 and 88 years, with American Spinal Injury Association impairment scale grades from A to D, with spasticity, and at least 6 months after injury were assessed. Participants rated the SCI-SETT at the same time of day, 1 week apart, and test-retest agreement was investigated. The Penn Spasm Frequency Scale, self-assessment of spasticity severity, self-assessment of spasticity impact, the Functional Independence Measure motor subscale, and the 36-Item Short Form Health Survey were also assessed for the evaluation of convergent validity. There were 45 participants with tetraplegia and 21 with paraplegia. The test-retest reliability of the SCI-SETT was good: the intraclass correlation coefficient was 0.80 at the 95% confidence level. There were no significant correlations between the SCI-SETT scores and the Functional Independence Measure motor subscale and Penn Spasm Frequency Scale scores. There was a significant correlation between the SCI-SETT scores and the vitality scores of the 36-Item Short Form Health Survey. The SCI-SETT showed statistically significant correlations with other measures, including self-assessed spasticity severity and self-assessed spasticity impact (P<0.05). The SCI-SETT is a reliable self-rating tool for assessing spasticity in patients with SCI in the Turkish population.

  12. Development and Reliability of the Functional Evaluation Scale for Duchenne Muscular Dystrophy, Gait Domain: A Pilot Study.

    Science.gov (United States)

    de Carvalho, Eduardo Vital; Hukuda, Michele Emy; Escorcio, Renata; Voos, Mariana Callil; Caromano, Fátima Aparecida

    2015-09-01

    The progression of Duchenne muscular dystrophy (DMD) results in the emergence of multiple and varied synergies to compensate for muscle weakness and to deal with the demands of functional tasks (e.g. gait). No functional evaluation instrument for individuals with DMD allows detailed description (subjective qualitative evaluation) and compensatory movement scoring (objective quantitative evaluation) of gait exclusively. For this reason, clinicians and therapists face difficulties in the assessment of, and decision-making around, this functional activity. This study aimed to elaborate the gait domain of the Functional Evaluation Scale for DMD (FES-DMD-GD) and test its intra-rater and inter-rater reliabilities and its relationship with age and timed motor performance. We listed all the compensatory movements observed in 102 10-m gait videos of 51 children with DMD. Based on this report, the FES-DMD-GD was created and submitted to the review of 10 experts. After incorporating the experts' suggestions, three examiners scored the videos using the FES-DMD-GD. The intra-rater and inter-rater reliabilities were calculated, and Spearman correlation tests investigated the relationships between the FES-DMD-GD and age and timed motor performance (p < 0.05). The FES-DMD-GD was composed of three phases and had 14 items to quantify compensatory movements in gait. Intra-class correlation coefficients ranged from acceptable (0.74) to excellent (0.99). The FES-DMD-GD correlated with age and timed motor performance. This pilot version of the FES-DMD-GD showed reliability and correlated with age and timed motor performance.

  13. Missing value imputation for microarray gene expression data using histone acetylation information

    Directory of Open Access Journals (Sweden)

    Feng Jihua

    2008-05-01

    Background: Accurately estimating missing values in microarray data is an important pre-processing step, because complete datasets are required for numerous expression profile analyses in bioinformatics. Although several methods have been suggested, their performance is not satisfactory for datasets with high missing percentages. Results: The paper explores the feasibility of imputing missing values with the help of gene regulatory mechanisms. An imputation framework called the histone acetylation information aided imputation method (HAIimpute) is presented. It incorporates histone acetylation information into the conventional KNN (k-nearest neighbor) and LLS (local least squares) imputation algorithms for final prediction of the missing values. The experimental results indicated that the use of acetylation information can provide significant improvements in microarray imputation accuracy. The HAIimpute methods consistently improve on the widely used KNN and LLS methods in terms of normalized root mean squared error (NRMSE). Meanwhile, the genes imputed by the HAIimpute methods are more correlated with the original complete genes in terms of Pearson correlation coefficients. Furthermore, the proposed methods also outperform GOimpute, one of the existing related methods that uses functional similarity as the external information. Conclusion: We demonstrated that the use of histone acetylation information can greatly improve the performance of imputation, especially at high missing percentages. This idea can be generalized to various imputation methods to improve their performance. Moreover, as more knowledge accumulates on gene regulatory mechanisms beyond histone acetylation, the performance of our approach can be further improved and verified.
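
    As a concrete reference point for the baseline methods named above, here is a minimal sketch of plain KNN imputation scored by NRMSE, the metric used in the paper. It uses scikit-learn's KNNImputer on synthetic data; the HAIimpute method itself (which adds histone acetylation information) is not reproduced.

        # Sketch: hide 10% of a complete matrix, impute with KNN, score by NRMSE.
        # X_true and mask are hypothetical names, not from the paper.
        import numpy as np
        from sklearn.impute import KNNImputer

        def nrmse(x_true, x_imputed, mask):
            diff = (x_true - x_imputed)[mask]
            return np.sqrt(np.mean(diff ** 2)) / np.std(x_true[mask])

        rng = np.random.default_rng(0)
        X_true = rng.normal(size=(200, 30))        # genes x arrays
        mask = rng.random(X_true.shape) < 0.10     # entries hidden before imputation
        X_missing = np.where(mask, np.nan, X_true)

        X_hat = KNNImputer(n_neighbors=10).fit_transform(X_missing)
        print("NRMSE:", nrmse(X_true, X_hat, mask))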

  14. Predictors of clinical outcome in pediatric oligodendroglioma: meta-analysis of individual patient data and multiple imputation.

    Science.gov (United States)

    Wang, Kevin Yuqi; Vankov, Emilian R; Lin, Doris Da May

    2017-12-01

    OBJECTIVE Oligodendroglioma is a rare primary CNS neoplasm in the pediatric population, and only a limited number of studies in the literature have characterized this entity. Existing studies are limited by small sample sizes and discrepant interstudy findings in identified prognostic factors. In the present study, the authors aimed to increase the statistical power in evaluating potential prognostic factors of pediatric oligodendrogliomas and sought to reconcile the discrepant findings among existing studies by performing an individual-patient-data (IPD) meta-analysis and using multiple imputation to address data not directly available from existing studies. METHODS A systematic search was performed, and all studies found to be related to pediatric oligodendrogliomas and associated outcomes were screened for inclusion. Each study was searched for specific demographic and clinical characteristics of each patient and the duration of event-free survival (EFS) and overall survival (OS). Given that certain demographic and clinical information was not available within all studies, a multivariable imputation via chained equations model was used to impute missing data after the mechanism of missing data was determined. The primary end points of interest were hazard ratios for EFS and OS, as calculated by the Cox proportional-hazards model. Both univariate and multivariate analyses were performed. The multivariate model was adjusted for age, sex, tumor grade, mixed pathologies, extent of resection, chemotherapy, radiation therapy, tumor location, and initial presentation. A p value of less than 0.05 was considered statistically significant. RESULTS A systematic search identified 24 studies with both time-to-event and IPD characteristics available, and a total of 237 individual cases were available for analysis. A median of 19.4% of the values among clinical, demographic, and outcome variables in the compiled 237 cases were missing. Multivariate
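
    The imputation step described above can be illustrated with a small sketch of chained-equations imputation producing several completed datasets. Scikit-learn's IterativeImputer stands in for the authors' multivariable imputation model; the data and settings are assumptions, not the study's.

        # Sketch: generate m completed copies of a matrix with missing values,
        # one per random seed, via chained-equations-style imputation.
        import numpy as np
        from sklearn.experimental import enable_iterative_imputer  # noqa: F401
        from sklearn.impute import IterativeImputer

        def impute_m_times(X, m=5):
            datasets = []
            for seed in range(m):
                imp = IterativeImputer(sample_posterior=True, random_state=seed)
                datasets.append(imp.fit_transform(X))
            return datasets

        rng = np.random.default_rng(0)
        X = rng.normal(size=(100, 4))
        X[rng.random(X.shape) < 0.15] = np.nan     # 15% missing, illustrative
        completed = impute_m_times(X, m=5)
        print(len(completed), completed[0].shape)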

  15. Familiarization, reliability, and evaluation of a multiple sprint running test using self-selected recovery periods.

    Science.gov (United States)

    Glaister, Mark; Witmer, Chad; Clarke, Dustin W; Guers, John J; Heller, Justin L; Moir, Gavin L

    2010-12-01

    The aims of the present study were to investigate the process of self-selected recovery in a multiple sprint test, with a view to using self-selected recovery time as a means of reliably quantifying an individual's ability to resist fatigue in this type of exercise. Twenty physically active exercise science students (means ± SD for age, height, body mass, body fat, and VO2max of 21 ± 2 yr, 1.79 ± 0.09 m, 83.7 ± 10.8 kg, 16.6 ± 3.9%, and 52.7 ± 7.2 ml/kg/min, respectively) completed 4 trials of a 12 × 30 m multiple sprint running test under the instruction that they should allow sufficient recovery time between sprints to enable maximal sprint performance to be maintained throughout each trial. Mean recovery times across the 4 trials were 73.9 ± 24.7, 82.3 ± 23.8, 77.6 ± 19.1, and 77.5 ± 13.9 seconds, respectively, with variability across the first 3 trials considered evidence of learning effects. Test-retest reliability across trials 3 to 4 revealed a good level of reliability, as evidenced by a coefficient of variation of 11.1% (95% likely range: 8.0-18.1%) and an intraclass correlation coefficient of 0.76 (95% likely range: 0.40-0.91). Despite no change in sprint performance throughout the trials, ratings of perceived exertion increased progressively and significantly (p < 0.001) from a value of 10 ± 2 after sprint 3 to 14 ± 2 after sprint 12. The correlation between relative VO2max and mean recovery time was 0.14 (95% likely range: -0.37 to 0.58). The results of the present study show that, after the completion of 2 familiarization trials, the ability to maintain sprinting performance in a series of repeated sprints can be self-regulated by an athlete to a high degree of accuracy without the need for external timepieces.

  16. Reliability and validity of the Microsoft Kinect for evaluating static foot posture

    National Research Council Canada - National Science Library

    Mentiplay, Benjamin F; Clark, Ross A; Mullins, Alexandra; Bryant, Adam L; Bartold, Simon; Paterson, Kade

    2013-01-01

    An inexpensive and widely available imaging system, the Microsoft Kinect™, may possess the characteristics to objectively evaluate static foot posture in a clinical setting with high accuracy...

  17. The Reliability and Validity of the Clinical Competence Evaluation Scale in Physical Therapy

    National Research Council Canada - National Science Library

    Yoshino, Jun; Usuda, Shigeru

    2013-01-01

    [Purpose] To examine the internal consistency, criterion-related validity, factorial validity, and content validity of the Clinical Competence Evaluation Scale in Physical Therapy (CEPT). [Subjects...

  18. Puget Sound Area Electric Reliability Plan. Appendix B : Local Generation Evaluation : Draft Environmental Impact Statement.

    Energy Technology Data Exchange (ETDEWEB)

    United States. Bonneville Power Administration.

    1991-09-01

    The information and data contained in this Appendix were extracted from numerous sources. The principal sources of technical data were Bonneville Power Administration's 1990 Resource Program, along with its technical appendix, and Chapter 8 of the Draft 1991 Northwest Conservation and Electric Power Plan. All cost data are reported in 1988 dollars unless otherwise noted. This information was supplemented by other data developed by Puget Sound utilities who participated on the Local Generation Team. Identifying generating resources available to the Puget Sound area involved a five-step process: (1) listing all possible resources that might contribute power to the Puget Sound area, (2) characterizing the technology/resource status, cost and operating characteristics of these resources, (3) identifying exclusion criteria based on the needs of the overall Puget Sound Electric Reliability Plan study, (4) applying these criteria to the list of resources, and (5) summarizing the costs and characteristics of the final list of resources. 15 refs., 20 tabs.

  19. Adaptation of the Oswestry Disability Index to Kannada Language and Evaluation of Its Validity and Reliability.

    Science.gov (United States)

    Mohan, Venkatdeep; G S, Prashanth; Meravanigi, Gururaja; N, Rajagopalan; Yerramshetty, Janardhan

    2016-06-01

    A translation, cross-cultural adaptation, and validation study. The aim of this study was to translate, adapt cross-culturally, and validate the Kannada version of the Oswestry Disability Index (ODI). Low back pain is recognized as an important public health problem. Self-administered condition-specific questionnaires are important tools for assessing a patient; for low back pain, the ODI is widely used. The preferred language of a region can affect the interpretation of questions and thus scoring. A search of the literature showed no previously validated Kannada version of the ODI. Cross-cultural adaptation and translation were carried out according to previously established guidelines. Patients were recruited from the orthopedic outpatient department. They filled out a booklet containing the Kannada version of the ODI, the Kannada version of the Roland Morris Disability Questionnaire (RMDQ), and a 10-point visual analog scale for pain (VASpain). The Kannada ODI was answered by 91 patients and retested in 35 patients. After removing questionnaires with stray or ambiguous markings that prevented computation of scores, 76 test questionnaires and 32 retest questionnaires were available for statistical analysis. The Kannada version showed excellent internal consistency (Cronbach's alpha = 0.92). It showed good correlation with the RMDQ (r = 0.72) and moderate correlation with VASpain (r = 0.58). It also showed excellent test-retest reliability (ICC = 0.96). The standard error of measurement (SEM) was low (4.08), and a difference of 11 points is the minimum detectable change (MDC). The Kannada version of the ODI showed consistency and reliability. It can be used for the assessment of low back pain and treatment outcomes in Kannada-speaking populations. However, in view of the smaller sample size, it will benefit from verification at multiple centers and with more patients.
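
    The reported SEM and MDC hang together under the standard definitions; a quick check, with the SEM value taken from the abstract and the usual formulas (not quoted from the paper):

        % Consistency check of the reported SEM and MDC. The baseline SD is not
        % reported, so SEM is taken from the abstract rather than recomputed.
        \[
        \mathrm{SEM} = s_{\mathrm{baseline}}\sqrt{1-\mathrm{ICC}}, \qquad
        \mathrm{MDC}_{95} = 1.96\sqrt{2}\,\mathrm{SEM}
          \approx 1.96 \times 1.414 \times 4.08 \approx 11.3 .
        \]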

  20. Reliability of radiologic evaluation of abdominal aortic calcification using the 24-point scale.

    Science.gov (United States)

    Pariente-Rodrigo, E; Sgaramella, G Alessia; García-Velasco, P; Hernández-Hernández, J L; Landeras-Alvaro, R; Olmos-Martínez, J Manuel

    2016-01-01

    Calcification of the abdominal aorta is associated with increased cardiovascular morbidity, so a reliable method to quantify it is clinically important. The 24-point scale (AAC-24) is the standard method for assessing abdominal aortic calcification on lateral plain films of the lumbar spine. The aim of this study was to determine the intraobserver and interobserver agreement for the AAC-24, taking into account the heterogeneity of the distribution of the calcifications in the design of the statistical analysis. We analyzed the intraobserver agreement (in plain films from 81 patients, with a four-year separation between observations) and the interobserver agreement (in plain films from 100 patients, with three observers), using both intraclass correlation and Bland-Altman plots. The intraobserver intraclass correlation coefficient was 0.93 (95% confidence interval [CI95%]: 0.6-0.9), and the interobserver intraclass correlation coefficient was 0.91 (CI95%: 0.8-0.9), with an increase in the coefficient in the tercile with the greatest discrepancy. The difference in means ranged from 0.3 to 1.2 points, and the distance between the limits of agreement ranged from 4.7 to 9.4 points. These differences increased significantly as the calcification progressed. Using the AAC-24 on lateral plain films of the lumbar spine is a reliable and reproducible method of assessing calcification of the abdominal aorta; both intraobserver and interobserver agreement are higher during the initial phases of calcification.
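
    For readers unfamiliar with the Bland-Altman quantities cited above, a minimal sketch of the computation follows; the arrays are synthetic stand-ins for paired AAC-24 readings, not the study's data.

        # Sketch: bias (mean difference) and 95% limits of agreement between
        # two readers' scores, the ingredients of a Bland-Altman plot.
        import numpy as np

        def bland_altman(scores_a, scores_b):
            diff = scores_a - scores_b
            bias = diff.mean()                 # mean difference between readers
            half = 1.96 * diff.std(ddof=1)     # half-width of limits of agreement
            return bias, (bias - half, bias + half)

        a = np.array([3, 5, 8, 12, 6, 9, 15, 4], dtype=float)
        b = np.array([4, 5, 7, 14, 6, 10, 13, 5], dtype=float)
        print(bland_altman(a, b))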

  1. Reliability of optic nerve ultrasound for the evaluation of patients with spontaneous intracranial hemorrhage.

    Science.gov (United States)

    Moretti, Riccardo; Pizzi, Barbara; Cassini, Fabrizio; Vivaldi, Nicoletta

    2009-12-01

    The aim of our study is to confirm the reliability of optic nerve ultrasound as a method to detect intracranial hypertension in patients with spontaneous intracranial hemorrhage, to assess the reproducibility of the measurement of the optic nerve sheath diameter (ONSD), and to verify that ONSD changes concurrently with intracranial pressure (ICP) variations. Sixty-three adult patients with subarachnoid hemorrhage (n = 34) or primary intracerebral hemorrhage (n = 29) requiring sedation and invasive ICP monitoring were enrolled in a 10-bed multivalent ICU. ONSD was measured 3 mm behind the globe through a 7.5-MHz ultrasound probe. Mean binocular ONSD was used for statistical analysis. ICP values were registered simultaneously with ultrasonography. Twenty-eight ONSDs were measured consecutively by two different observers, and interobserver differences were calculated. Twelve coupled measurements were taken before and within 1 min after cerebrospinal fluid (CSF) drainage to control elevated ICP. Ninety-four ONSD measurements were analyzed. A cut-off of 5.2 mm proved optimal for predicting raised ICP (>20 mmHg), with 93.1% sensitivity (95% CI: 77.2-99%) and 73.85% specificity (95% CI: 61.5-84%). The ONSD-ICP correlation coefficient was 0.7042 (95% CI for r = 0.5850-0.7936). The median interobserver ONSD difference was 0.25 mm. CSF drainage to control elevated ICP caused a rapid and significant reduction of ONSD (from 5.89 ± 0.61 to 5.00 ± 0.33 mm, P < 0.01). Our investigation confirms the reliability of optic nerve ultrasound as a non-invasive method to detect elevated ICP in intracranial hemorrhage patients. ONSD measurements proved to have good reproducibility. ONSD changes almost concurrently with CSF pressure variations.
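
    A cut-off like the 5.2 mm above is typically found by scanning candidate thresholds. The sketch below uses Youden's J (sensitivity + specificity - 1) as the selection rule, which is a common choice but an assumption here, since the abstract does not state the criterion; the data are synthetic.

        # Sketch: scan candidate ONSD cut-offs and pick the one maximizing
        # Youden's J, reporting sensitivity and specificity at that threshold.
        import numpy as np

        def best_cutoff(onsd, raised_icp):
            best = None
            for c in np.unique(onsd):
                pred = onsd >= c
                sens = np.mean(pred[raised_icp])       # true-positive rate
                spec = np.mean(~pred[~raised_icp])     # true-negative rate
                j = sens + spec - 1
                if best is None or j > best[0]:
                    best = (j, c, sens, spec)
            return best

        rng = np.random.default_rng(1)
        icp_high = rng.random(94) < 0.45
        onsd = np.where(icp_high, rng.normal(5.8, 0.5, 94),
                        rng.normal(4.9, 0.4, 94))
        print(best_cutoff(onsd, icp_high))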

  2. Imputation-based strategies for clinical trial longitudinal data with nonignorable missing values

    Science.gov (United States)

    Yang, Xiaowei; Li, Jinhui; Shoptaw, Steven

    2011-01-01

    Biomedical research is plagued with problems of missing data, especially in clinical trials of medical and behavioral therapies adopting longitudinal design. After a literature review on modeling incomplete longitudinal data based on full-likelihood functions, this paper proposes a set of imputation-based strategies for implementing selection, pattern-mixture, and shared-parameter models for handling intermittent missing values and dropouts that are potentially nonignorable according to various criteria. Within the framework of multiple partial imputation, intermittent missing values are first imputed several times; then, each partially imputed data set is analyzed to deal with dropouts with or without further imputation. Depending on the choice of imputation model or measurement model, there exist various strategies that can be jointly applied to the same set of data to study the effect of treatment or intervention from multi-faceted perspectives. For illustration, the strategies were applied to a data set with continuous repeated measures from a smoking cessation clinical trial. PMID:18205247

  3. First Use of Multiple Imputation with the National Tuberculosis Surveillance System

    Directory of Open Access Journals (Sweden)

    Christopher Vinnard

    2013-01-01

    Aims. The purpose of this study was to compare methods for handling missing data in analysis of the National Tuberculosis Surveillance System of the Centers for Disease Control and Prevention. Because of the high rate of missing human immunodeficiency virus (HIV) infection status in this dataset, we used multiple imputation methods to minimize the bias that may result from less sophisticated methods. Methods. We compared analysis based on multiple imputation methods with analysis based on deleting subjects with missing covariate data from regression analysis (case exclusion), and determined whether the use of increasing numbers of imputed datasets would lead to changes in the estimated association between isoniazid resistance and death. Results. Following multiple imputation, the odds ratio for initial isoniazid resistance and death was 2.07 (95% CI 1.30, 3.29); with case exclusion, this odds ratio decreased to 1.53 (95% CI 0.83, 2.83). The use of more than 5 imputed datasets did not substantively change the results. Conclusions. Our experience with the National Tuberculosis Surveillance System dataset supports the use of multiple imputation methods in epidemiologic analysis, but also demonstrates that close attention should be paid to the potential impact of missing covariates at each step of the analysis.
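
    Combining estimates across imputed datasets, as in the analysis above, is conventionally done with Rubin's rules. A minimal sketch follows, with illustrative per-dataset values chosen to land near the pooled odds ratio reported in the abstract; they are not the study's numbers.

        # Sketch: Rubin's rules for pooling a log-odds ratio across m imputed
        # datasets (point estimate, within/between variance, total variance).
        import numpy as np

        def pool_rubin(estimates, variances):
            m = len(estimates)
            qbar = np.mean(estimates)            # pooled point estimate
            ubar = np.mean(variances)            # within-imputation variance
            b = np.var(estimates, ddof=1)        # between-imputation variance
            t = ubar + (1 + 1 / m) * b           # total variance
            return qbar, np.sqrt(t)

        log_or = np.array([0.74, 0.70, 0.76, 0.71, 0.73])   # per-dataset values
        var = np.array([0.055, 0.049, 0.060, 0.052, 0.057])
        est, se = pool_rubin(log_or, var)
        print("pooled OR:", np.exp(est), "95% CI:",
              np.exp(est - 1.96 * se), np.exp(est + 1.96 * se))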

  4. Variable Selection in the Presence of Missing Data: Imputation-based Methods.

    Science.gov (United States)

    Zhao, Yize; Long, Qi

    2017-01-01

    Variable selection plays an essential role in regression analysis, as it identifies important variables that are associated with outcomes and is known to improve the predictive accuracy of resulting models. Variable selection methods have been widely investigated for fully observed data. However, in the presence of missing data, methods for variable selection need to be carefully designed to account for missing data mechanisms and the statistical techniques used for handling missing data. Since imputation is arguably the most popular method for handling missing data due to its ease of use, statistical methods for variable selection that are combined with imputation are of particular interest. These methods, valid under the assumptions of missing at random (MAR) and missing completely at random (MCAR), largely fall into three general strategies. The first strategy applies existing variable selection methods to each imputed dataset and then combines the variable selection results across all imputed datasets. The second strategy applies existing variable selection methods to stacked imputed datasets. The third strategy combines resampling techniques such as the bootstrap with imputation. Despite recent advances, this area remains under-developed and offers fertile ground for further research.
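
    A minimal sketch of the first strategy: apply a selection method (here a cross-validated lasso, an illustrative choice) to each imputed dataset and keep variables selected in a majority of them. Names and data are hypothetical.

        # Sketch: majority-vote variable selection across imputed datasets.
        import numpy as np
        from sklearn.linear_model import LassoCV

        def select_across_imputations(imputed_datasets, y, threshold=0.5):
            votes = None
            for X in imputed_datasets:
                coef = LassoCV(cv=5).fit(X, y).coef_
                picked = (coef != 0).astype(int)
                votes = picked if votes is None else votes + picked
            return np.where(votes / len(imputed_datasets) >= threshold)[0]

        rng = np.random.default_rng(5)
        y = rng.normal(size=200)

        def make_imputed():
            X = rng.normal(size=(200, 8))
            X[:, 0] += y                 # only feature 0 is truly associated
            return X

        datasets = [make_imputed() for _ in range(5)]
        print(select_across_imputations(datasets, y))   # expect array([0])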

  5. Toward a document evaluation methodology: What does research tell us about the validity and reliability of evaluation methods?

    NARCIS (Netherlands)

    de Jong, Menno D.T.; Schellens, P.J.

    2000-01-01

    Although the usefulness of evaluating documents has become generally accepted among communication professionals, the supporting research that puts evaluation practices empirically to the test is only beginning to emerge. This article presents an overview of the available research on troubleshooting

  6. Reliable tool life measurements in turning - an application to cutting fluid efficiency evaluation

    DEFF Research Database (Denmark)

    Axinte, Dragos A.; Belluco, Walter; De Chiffre, Leonardo

    2001-01-01

    ... provides efficiency evaluation. Six cutting oils, five of which were formulated from vegetable basestock, were evaluated in turning. Experiments were run over a range of cutting parameters, according to a 2^(3-1) fractional factorial design, machining AISI 316L stainless steel with coated carbide tools. Tool life...

  7. Methodology for the comprehensive evaluation of the effectiveness and reliability of production lines for the preparation of sea water for the cultivation of aquatic organisms

    OpenAIRE

    S. D. Ugryumova; A. I. Krikun

    2016-01-01

    The factors affecting the efficiency and reliability of technical systems are considered. The stages of development and modernization of production lines are identified, corresponding to specific stages of evaluating effectiveness and reliability. Several methods for determining indicators of the efficiency and reliability of equipment in technological lines of the fisheries sector are considered: forecasting methods, structural methods, physical methods, the logical-probability method (the method of I.A. Ryabinin) an...

  8. [Evaluation of the Chinese version of the adolescent fat intake behavior psychological measurement scale and its reliability and validity].

    Science.gov (United States)

    Fang, Mingzhu; Zhang, Jie; Huang, Xianhong; Wu, Xian; Gu, Fang; Qu, Xuping; Xu, Liangwen

    2014-03-01

    To develop a suitable psychological measurement scale for fat intake behavior in Chinese adolescents and to evaluate its validity and reliability. Following a multi-stage stratified cluster sampling design, a total of 3 600 junior students were recruited from classes in 12 selected high schools in Hangzhou, Wuhan and Xi'an from March to May, 2012. Based on the translation of the original scale from abroad, the Chinese version of the adolescent fat intake behavior psychological measurement scale was used in field investigations. Reliability was assessed using Cronbach's α and split-half reliability, while exploratory factor analysis was used to test validity; item-dimension correlation coefficients (IIC), correlations between dimension scores, and correlations among dimensions were used to test content validity. Valid responses were obtained from 3 448 subjects (52.4% male (1 806/3 448) and 47.6% female (1 642/3 448)), with a mean age of (14.85 ± 1.46) years. The internal consistency (Cronbach's α) for the total scale and the four domains was 0.922, 0.933, 0.660, 0.773 and 0.869, respectively, and the corresponding split-half reliabilities were 0.927, 0.933, 0.790, 0.624 and 0.889. Exploratory factor analysis showed that the entries were all inclusive, with cumulative contribution rates of 59.453%, 56.062% and 52.668%, respectively. The IIC results showed that within the four dimensions, the Spearman correlation coefficients between the entries and their dimensions were statistically significant, with r values ranging over 0.584-0.793, 0.665-0.818, 0.654-0.765 and 0.622-0.747, respectively, while correlations with other dimensions were weak to moderate (r = -0.028 to 0.614). The reliability and validity of the Chinese version of the adolescent fat intake behavior psychological measurement scale were good, and it can be used to measure the fat intake behavior of
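
    The two reliability statistics reported above can be computed as follows; a minimal sketch on synthetic item-response data, using the standard Cronbach's α and Spearman-Brown-corrected split-half formulas.

        # Sketch: Cronbach's alpha and split-half reliability for a
        # respondents-by-items score matrix (synthetic data).
        import numpy as np

        def cronbach_alpha(items):
            k = items.shape[1]
            item_var = items.var(axis=0, ddof=1).sum()
            total_var = items.sum(axis=1).var(ddof=1)
            return k / (k - 1) * (1 - item_var / total_var)

        def split_half(items):
            half1 = items[:, 0::2].sum(axis=1)   # odd-numbered items
            half2 = items[:, 1::2].sum(axis=1)   # even-numbered items
            r = np.corrcoef(half1, half2)[0, 1]
            return 2 * r / (1 + r)               # Spearman-Brown correction

        rng = np.random.default_rng(2)
        latent = rng.normal(size=(300, 1))
        data = latent + rng.normal(scale=0.8, size=(300, 12))
        print(cronbach_alpha(data), split_half(data))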

  9. Reliability of a tool for measuring theory of planned behaviour constructs for use in evaluating research use in policymaking

    Directory of Open Access Journals (Sweden)

    Dobbins Maureen

    2011-06-01

    Background: Although measures of knowledge translation and exchange (KTE) effectiveness based on the theory of planned behavior (TPB) have been used among patients and providers, no measure has been developed for use among health system policymakers and stakeholders. A tool that measures the intention to use research evidence in policymaking could assist researchers in evaluating the effectiveness of KTE strategies that aim to support evidence-informed health system decision-making. Therefore, we developed a 15-item tool to measure four TPB constructs (intention, attitude, subjective norm and perceived control) and assessed its face validity through key informant interviews. Methods: We carried out a reliability study to assess the tool's internal consistency and test-retest reliability. Our study sample consisted of 62 policymakers and stakeholders who participated in deliberative dialogues. We assessed internal consistency using Cronbach's alpha and generalizability (G) coefficients, and we assessed test-retest reliability by calculating Pearson correlation coefficients (r) and G coefficients for each construct and the tool overall. Results: The internal consistency of items within each construct was good, with alpha ranging from 0.68 to 0.89. G-coefficients were lower for a single administration (G = 0.34 to G = 0.73) than for the average of two administrations (G = 0.79 to G = 0.89). Test-retest reliability coefficients for the constructs ranged from r = 0.26 to r = 0.77 and from G = 0.31 to G = 0.62 for a single administration, and from G = 0.47 to G = 0.86 for the average of two administrations. Test-retest reliability of the tool using G theory was moderate (G = 0.5) when we generalized across a single observation, but became strong (G = 0.9) when we averaged across both administrations. Conclusion: This study provides preliminary evidence for the reliability of a tool that can be used to measure TPB constructs in relation to research

  10. New reliable scoring system, Toyama mouse score, to evaluate locomotor function following spinal cord injury in mice.

    Science.gov (United States)

    Shigyo, Michiko; Tanabe, Norio; Kuboyama, Tomoharu; Choi, Song-Hyen; Tohda, Chihiro

    2014-06-03

    Among the variety of methods used to evaluate locomotor function following spinal cord injury (SCI), the Basso Mouse Scale (BMS) score has been widely used for mice. However, the BMS mainly focuses on hindlimb movement rather than on graded changes in body support ability. In addition, some of the scoring levels combine double or triple criteria within a single score, which likely increases the variance of the data. We therefore aimed to establish a new scoring method that is reliable and easy to perform in mice with SCI. Our Toyama Mouse Score (TMS) was established by rearranging and simplifying the BMS score and combining it with the Body Support Scale (BSS) score. The TMS reflects changes in both body support ability and hindlimb movement. In the BMS, a single score is defined by combining multiple criteria; this ambiguity is reduced in the TMS. Using contusive SCI mice, hindlimb function was measured with the TMS, BMS and BSS systems. The TMS could distinguish changes in hindlimb movements that were scored identically by the BMS. An analysis of the coefficient of variation (CV) of score points recorded for 11 days revealed that the CV for the TMS was significantly lower than that obtained using the BMS. Intra-evaluator variation was also lower for the TMS than for the BMS. These results suggest that the TMS may be useful as a new reliable method for scoring locomotor function in SCI models.

  11. Radiographic Evaluation of the Reliability of Neck Anatomic Structures as Anterior Cervical Surgical Landmarks.

    Science.gov (United States)

    Liu, Jia-Ming; Du, Liu-Xue; Xiong, Xu; Chen, Xuan-Yin; Zhou, Yang; Long, Xin-Hua; Huang, Shan-Hu; Liu, Zhi-Li

    2017-07-01

    Accurate location of the skin incision helps decrease technical difficulty and save operative time in anterior cervical spine surgery. Spine surgeons usually use traditional neck anatomic structures (the hyoid bone, thyroid cartilage, and cricoid cartilage) as landmarks during surgery. However, the reliability of these landmarks has not been validated in actual practice. This study aimed to find out which landmark is the most accurate for identifying the cervical levels in anterior cervical spine surgery. Lateral flexion and extension radiographs of the cervical spine in the standing position were obtained from 30 consecutive patients from January 2015 to February 2015. The cervical vertebral bodies from C2 to C7 were divided equally into 2 segments. The cervical segments corresponding to each of the surface landmarks were recorded on the flexion and extension radiographs, respectively, and the displacement of corresponding cervical segments from the flexion to extension radiographs for each landmark was calculated. Based on the measurements, the main corresponding cervical levels for the mandibular angle were C2 on both the flexion and extension films; for the hyoid bone, the C3-C4 interspace on the flexion film and C3 on the extension film; for the thyroid cartilage, C5 on both the flexion and extension films; and for the cricoid cartilage, C6 on the flexion film and the C5-C6 interspace on the extension film. The ratios of displacement within 2 segments from flexion to extension were 83.3% (25/30) for the mandibular angle, 56.7% (17/30) for the hyoid bone, 66.7% (20/30) for the thyroid cartilage, and 56.7% (17/30) for the cricoid cartilage, respectively. The mean displacement from flexion to extension films was significantly less than 2 cervical segments for the mandibular angle but greater than 2 segments for the other landmarks. Significant differences were found between the mandibular angle and the other 3 landmarks for the displacement from flexion to extension. The angle of

  12. Reliability evaluation of I-123 ADAM SPECT imaging using SPM software and AAL ROI methods

    Energy Technology Data Exchange (ETDEWEB)

    Yang, Bang-Hung [Department of Biomedical Imaging and Radiological Sciences, National Yang-Ming University, Taipei, Taiwan (China); Department of Nuclear Medicine, Taipei Veterans General Hospital, Taiwan (China); Tsai, Sung-Yi [Department of Biomedical Imaging and Radiological Sciences, National Yang-Ming University, Taipei, Taiwan (China); Department of Imaging Medical, St.Martin De Porres Hospital, Chia-Yi, Taiwan (China); Wang, Shyh-Jen [Department of Biomedical Imaging and Radiological Sciences, National Yang-Ming University, Taipei, Taiwan (China); Department of Nuclear Medicine, Taipei Veterans General Hospital, Taiwan (China); Su, Tung-Ping; Chou, Yuan-Hwa [Department of Psychiatry, Taipei Veterans General Hospital, Taipei, Taiwan (China); Chen, Chia-Chieh [Institute of Nuclear Energy Research, Longtan, Taiwan (China); Chen, Jyh-Cheng, E-mail: jcchen@ym.edu.tw [Department of Biomedical Imaging and Radiological Sciences, National Yang-Ming University, Taipei, Taiwan (China)

    2011-08-21

    Serotonin levels are regulated by the serotonin transporter (SERT), a decisive protein in the serotonin neurotransmission system. Many psychiatric disorders and therapies are also related to the concentration of cerebral serotonin. I-123 ADAM is a novel radiopharmaceutical for imaging SERT in the brain. The aim of this study was to measure the reliability of SERT densities in healthy volunteers by the automated anatomical labeling (AAL) method. Furthermore, we also used statistical parametric mapping (SPM) in a voxel-by-voxel analysis to find differences in the cortex between test and retest I-123 ADAM single photon emission computed tomography (SPECT) images. Twenty-one healthy volunteers were scanned twice with SPECT at 4 h after intravenous administration of 185 MBq of 123I-ADAM. The image matrix size was 128 x 128 and the pixel size was 3.9 mm. All images were obtained through a filtered back-projection (FBP) reconstruction algorithm. Region of interest (ROI) definition was performed based on the AAL brain template in the PMOD version 2.95 software package. ROI demarcations were placed on the midbrain, pons, striatum, and cerebellum. All images were spatially normalized to the SPECT MNI (Montreal Neurological Institute) templates supplied with SPM2, and each image was transformed into standard stereotactic space, matched to the Talairach and Tournoux atlas. Differences across scans were then statistically estimated on a voxel-by-voxel basis using a paired t-test (population main effect: 2 cond's, 1 scan/cond.), which was applied to compare the concentration of SERT between the test and retest cerebral scans. The average specific uptake ratio (SUR: target/cerebellum - 1) of 123I-ADAM binding to SERT was 1.78 ± 0.27 in the midbrain, 1.21 ± 0.53 in the pons, and 0.79 ± 0.13 in the striatum. The Cronbach's α intra-class correlation coefficient (ICC) was 0.92. Besides, there was also no significant statistical finding in the cerebral area using SPM2

  13. Validation and reliability of the Baecke questionnaire for the evaluation of habitual physical activity in adult men

    Directory of Open Access Journals (Sweden)

    Alex Antonio Florindo

    2003-06-01

    The aim of this study was to verify the validity and reliability of the scores for physical exercise in leisure (PEL), leisure and locomotion activities (LLA), and the total score (TS) of the Baecke habitual physical activity questionnaire in adult males. Twenty-one Physical Education students were evaluated. For validation, the maximum oxygen uptake (VO2max) and the percentage decrease of heart rate (%DHR) were measured through Cooper's 12-minute walk/run test, together with an annual index of physical exercise (IPE) and a weekly index of locomotion activities (ILA). Reliability was verified through test-retest with an interval of 45 days. The Pearson correlation coefficient and partial correlation adjusted for age and body mass index were used for validation; intraclass correlation and the paired t-test were used for reliability. The results indicated that %DHR was correlated with LLA and TS (r = 0.47, p = 0.030; r = 0.48, p = 0.027, respectively). IPE was correlated with PEL and TS (r = 0.56, p = 0.008; r = 0.46, p = 0.036, respectively). ILA was correlated with LLA and TS (r = 0.64, p = 0.002; r = 0.51, p = 0.017, respectively). There was no significant difference in PEL, LLA and TS means in test-retest. The intraclass correlations were r = 0.69, r = 0.80 and r = 0.77 for PEL, LLA and TS, respectively. In conclusion, the Baecke questionnaire is valid and reliable for measuring habitual physical activity in Brazilian adult men.

  14. EVALUATION OF GRAY INTENSITY VALUE FOR RELIABLE DIGITIZATION OF DIGITAL RADIOGRAPHY IN DEFECT DETECTION

    OpenAIRE

    Chitra, P.; B.Sheela Rani; Venkatraman, B; Baldev Raj

    2011-01-01

    Radiography is one of the oldest NDT techniques used for the evaluation of weld defects in metal. Radiographic defects are classified based on shape, location, orientation, depth, width, etc. Once a radiograph of a weld is taken, the radiographer examines it to identify defects and to evaluate them quantitatively based on codes and specifications. In recent years digital imaging has superseded conventional imaging, which has led to a profound change in the interpretation of radiogra...

  15. Capacity Expansion and Reliability Evaluation on the Networks Flows with Continuous Stochastic Functional Capacity

    Directory of Open Access Journals (Sweden)

    F. Hamzezadeh

    2014-01-01

    In many systems, such as computer networks, fuel distribution, and transportation systems, it is necessary to change the capacity of some arcs in order to increase the maximum flow value from source s to sink t, while the capacity change incurs minimum cost. In real-time networks, some factors cause loss of flow on an arc. For example, in some flow distribution systems, evaporation, erosion, or sediment in pipes wastes flow. Here we define a real capacity, the so-called functional capacity, which is the operational capacity of an arc; in other words, the functional capacity of an arc equals the maximum possible flow that may pass through it. Increasing the functional capacities of arcs incurs some cost, and a certain resource is available to cover the costs. First, we construct a mathematical model to minimize the total cost of expanding the functional capacities to the required levels. Then, we consider the loss of flow on each arc as a stochastic variable and compute the system reliability.

  16. Reliability evaluation of fiber optic sensors exposed to cyclic thermal load

    Energy Technology Data Exchange (ETDEWEB)

    Kim, Heon Young; Kim, Dong Hoon [Advanced Materials Research Team, Korea Railroad Research Institute, Uiwang (Korea, Republic of); Kim, Dae Hyun [Dept. of Mechanical and Automotive Engineering, Seoul National University of Science and Technology, Seoul (Korea, Republic of)

    2016-06-15

    Fiber Bragg grating (FBG) sensors are currently the most prevalent sensors because of their unique advantages such as ease of multiplexing and capability of performing absolute measurements. They are applied to various structures for structural health monitoring (SHM). The signal characteristics of FBG sensors under thermal loading should be investigated to enhance the reliability of these sensors, because they are exposed to certain cyclic thermal loads due to temperature changes resulting from change of seasons, when they are applied to structures for SHM. In this study, tests on specimens are conducted in a thermal chamber with temperature changes from - to for 300 cycles. For the specimens, two types of base materials and adhesives that are normally used in the manufacture of packaged FBG sensors are selected. From the test results, it is confirmed that the FBG sensors undergo some degree of compressive strain under cyclic thermal load; this can lead to measurement errors. Hence, a pre-calibration is necessary before applying these sensors to structures for long-term SHM.

  17. Transcutaneous bilirubinometry. Evaluation of accuracy and reliability in a large population.

    Science.gov (United States)

    Yamauchi, Y; Yamanouchi, I

    1988-11-01

    A total of 576 transcutaneous bilirubin measurements were performed on 336 Japanese full-term breast-fed newborn infants during the first twelve days of life. Our present study revealed that transcutaneous bilirubin measurements obtained from the forehead, chest, and sternum correlated well with serum bilirubin concentrations measured by an AO bilirubinometer (r = 0.910-0.922, p < 0.001, n = 576). The 95% confidence limits were ±3.04 mg/dl for the forehead, ±2.85 mg/dl for the chest, and ±2.84 mg/dl for the sternum readings. The overall mean of values from the forehead, chest, and sternum, when compared with individual means, was found to correlate better with serum bilirubin concentrations (r = 0.930, p < 0.001, n = 576) and improve the 95% confidence limits to ±2.68 mg/dl. These results demonstrated that the accuracy and reliability of TcB measurement could be increased further with multiple-site measurement. The study clearly indicates that transcutaneous bilirubinometry is useful for clinical screening of serum bilirubin levels in Japanese full-term newborn infants.

  18. Evaluating Reliability Index and Determining the Allocation Levels of Water Resources in Water User Association of Alborz Scheme

    Directory of Open Access Journals (Sweden)

    S.F. Hashemi

    2017-01-01

    Introduction: Water allocation management should be performed in a way that keeps the various practical irrigation parts and drainage networks stable. Thus, irrigation management transfer and participatory irrigation management have been proposed in more than 57 countries. This issue, along with institutional mechanisms for participation, strongly emphasizes a new adjustable organization to transfer investment from public resources to non-governmental sources, thus granting and handling the burden on public WUAs. In this study, the irrigation reliability indicator was used to evaluate the general irrigation planning performance of 20 WUAs in areas of the Alborz Integrated Water and Land Management Project in Mazandaran province. Materials and Methods: The overall project area encompasses the watersheds of the Babol, Talar and Siah Rivers of Mazandaran Province, Iran. The Alborz irrigation and drainage network is located in the lower catchment between the Babol and Siah Rivers (the western and eastern boundaries, respectively), with the Caspian Sea to the north. The site lies between 36° 15′ N and 36° 46′ N latitude and 52° 35′ E and 53° E longitude and covers 90,520 ha. Downstream of the Alborz reservoir, two diversion dams, Raiskola and Ganjafroz, are located, and two irrigation channels depending on these dams have been constructed. Organizing the WUAs is also important in other respects, so that the source and utilization areas are limited to 2,000 to 6,000 hectares instead of 10,000 to 30,000 hectares, which increases the simulation accuracy in a small-scale model. WUAs are classified based on the following principles: • adaptation of hydrological and water boundaries, • land use and cropping pattern, • location of the main and secondary irrigation and drainage channels, • ensuring financial stability and independence, • considering the cultural needs, local farmers' roles and social studies in the

  19. Translation of the Oswestry Disability Index into Tamil with cross-cultural adaptation and evaluation of reliability and validity.

    Science.gov (United States)

    Vincent, Joshua Israel; Macdermid, Joy Christine; Grewal, Ruby; Sekar, Vincent Prabhakaran; Balachandran, Dinesh

    2014-01-01

    Prospective longitudinal validation study. To translate and cross-culturally adapt the Oswestry Disability Index (ODI) to the Tamil language (ODI-T), and to evaluate its reliability and construct validity. The ODI is widely used as a disease-specific questionnaire in back pain patients to evaluate pain and disability. A thorough literature search revealed that a Tamil version of the ODI has not been previously published. The ODI was translated and cross-culturally adapted to the Tamil language according to established guidelines. 30 subjects (16 women and 14 men) with a mean age of 42.7 years (SD 13.6; range 22-69) with low back pain were recruited to assess the psychometric properties of the ODI-T questionnaire. Patients completed the ODI-T, the Roland-Morris Disability Questionnaire (RMDQ), VAS-pain and VAS-disability at baseline and 24-72 hours from the baseline visit. The ODI-T displayed a high degree of internal consistency, with a Cronbach's alpha of 0.92. The test-retest reliability was high (n=30), with an ICC of 0.92 (95% CI, 0.84 to 0.96) and a mean difference of 2.6 points lower on re-test. The ODI-T scores exhibited a strong correlation with the RMDQ scores (r = 0.82), and the hypotheses set a priori were supported. The Tamil version of the ODI questionnaire is a valid and reliable tool that can be used to measure subjective outcomes of pain and disability in Tamil-speaking patients with low back pain.

  20. Imputation of genotypes in Danish purebred and two-way crossbred pigs using low-density panels

    DEFF Research Database (Denmark)

    Xiang, Tao; Ma, Peipei; Ostersen, Tage

    2015-01-01

    ... that could explain the differences observed. Results: Genotype imputation performs as well in crossbred animals as in purebred animals when both parental breeds are included in the reference population. When the size of the reference population is very large, it is not necessary to use a reference population that combines the two breeds to impute the genotypes of purebred animals, because a within-breed reference population can provide a very high level of imputation accuracy (correct rate ≥ 0.99, correlation ≥ 0.95). However, to ensure that similar imputation accuracies are obtained for crossbred animals, a reference population that combines both parental purebred animals is required. Imputation accuracies are higher when a larger proportion of haplotypes are shared between the reference population and the validation (imputed) populations. Conclusions: The results from both real data and pedigree ...
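
    The two accuracy measures quoted here, allele correct rate and the correlation between true and imputed genotypes, reduce to simple array operations for genotypes coded 0/1/2. A minimal sketch on synthetic arrays (not the study's data):

        # Sketch: allele correct rate and genotype correlation for 0/1/2 codes.
        import numpy as np

        def allele_correct_rate(true_g, imp_g):
            # each genotype carries two alleles; count allele-level agreement
            return 1 - np.abs(true_g - imp_g).sum() / (2 * true_g.size)

        rng = np.random.default_rng(4)
        true_g = rng.integers(0, 3, 10_000)
        imp_g = np.where(rng.random(10_000) < 0.97,       # 3% imputation errors
                         true_g, rng.integers(0, 3, 10_000))
        print(allele_correct_rate(true_g, imp_g),
              np.corrcoef(true_g, imp_g)[0, 1])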

  1. Binary variable multiple-model multiple imputation to address missing data mechanism uncertainty: application to a smoking cessation trial.

    Science.gov (United States)

    Siddique, Juned; Harel, Ofer; Crespi, Catherine M; Hedeker, Donald

    2014-07-30

    The true missing data mechanism is never known in practice. We present a method for generating multiple imputations for binary variables, which formally incorporates missing data mechanism uncertainty. Imputations are generated from a distribution of imputation models rather than a single model, with the distribution reflecting subjective notions of missing data mechanism uncertainty. Parameter estimates and standard errors are obtained using rules for nested multiple imputation. Using simulation, we investigate the impact of missing data mechanism uncertainty on post-imputation inferences and show that incorporating this uncertainty can increase the coverage of parameter estimates. We apply our method to a longitudinal smoking cessation trial where nonignorably missing data were a concern. Our method provides a simple approach for formalizing subjective notions regarding nonresponse and can be implemented using existing imputation software.
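
    One way to realize a "distribution of imputation models" for a binary outcome is to perturb the imputation model's linear predictor with a random offset per imputation; the sketch below does this with a normal prior on the offset. This illustrates the general idea only: the offset prior, the logistic base model, and all names are assumptions, not the authors' specification.

        # Sketch: each imputation draws its own MNAR offset delta, shifting the
        # logistic linear predictor before binary values are drawn.
        import numpy as np
        from sklearn.linear_model import LogisticRegression

        def impute_binary_mnar(X, y, missing, m=10, delta_sd=0.5, seed=0):
            rng = np.random.default_rng(seed)
            model = LogisticRegression().fit(X[~missing], y[~missing])
            imputations = []
            for _ in range(m):
                delta = rng.normal(0.0, delta_sd)     # one offset per model
                eta = X[missing] @ model.coef_[0] + model.intercept_[0] + delta
                p = 1 / (1 + np.exp(-eta))
                y_imp = y.copy()
                y_imp[missing] = rng.random(missing.sum()) < p
                imputations.append(y_imp)
            return imputations

        rng = np.random.default_rng(6)
        X = rng.normal(size=(300, 3))
        p_true = 1 / (1 + np.exp(-(X @ np.array([1.0, -0.5, 0.2]))))
        y = (rng.random(300) < p_true).astype(float)
        missing = rng.random(300) < 0.25
        y[missing] = np.nan                           # outcome unobserved here
        imps = impute_binary_mnar(X, y, missing)
        print(np.mean([yi[missing].mean() for yi in imps]))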

  2. Evaluation of MCF10A as a Reliable Model for Normal Human Mammary Epithelial Cells.

    Directory of Open Access Journals (Sweden)

    Ying Qu

    Breast cancer is the most common cancer in women and a leading cause of cancer-related deaths for women worldwide. Various cell models have been developed to study breast cancer tumorigenesis, metastasis, and drug sensitivity. The MCF10A human mammary epithelial cell line is a widely used in vitro model for studying normal breast cell function and transformation. However, there is limited knowledge about whether MCF10A cells reliably represent normal human mammary cells. MCF10A cells were grown in monolayer, suspension (mammosphere) culture, three-dimensional (3D) "on-top" Matrigel, 3D "cell-embedded" Matrigel, or mixed Matrigel/collagen I gel. Suspension culture was performed with the MammoCult medium and low-attachment culture plates. Cells grown in 3D culture were fixed and subjected to either immunofluorescence staining or embedding and sectioning followed by immunohistochemistry and immunofluorescence staining. Cells or slides were stained for protein markers commonly used to identify mammary progenitor and epithelial cells. MCF10A cells expressed markers representing luminal, basal, and progenitor phenotypes in two-dimensional (2D) culture. When grown in suspension culture, MCF10A cells showed low mammosphere-forming ability. Cells in mammospheres and 3D culture expressed both luminal and basal markers. Surprisingly, the acinar structures formed by MCF10A cells in 3D culture were positive for both basal markers and the milk proteins β-casein and α-lactalbumin. MCF10A cells exhibit a unique differentiated phenotype in 3D culture which may not exist, or be rare, in normal human breast tissue. Our results raise the question of whether the commonly used MCF10A cell line is a suitable model for human mammary cell studies.

  3. Validity and reliability of the Mastication Observation and Evaluation (MOE) instrument

    NARCIS (Netherlands)

    Remijn, L.; Speyer, R.; Groen, B.E.; Limbeek, J. van; Nijhuis-Van der Sanden, M.W.

    2014-01-01

    The Mastication Observation and Evaluation (MOE) instrument was developed to allow objective assessment of a child's mastication process. It contains 14 items and was developed over three Delphi rounds. The present study concerns the further development of the MOE using the COSMIN (Consensus based…

  4. Is Wikipedia a Reliable Learning Resource for Medical Students? Evaluating Respiratory Topics

    Science.gov (United States)

    Azer, Samy A.

    2015-01-01

    The aim of the present study was to critically evaluate the accuracy and readability of English Wikipedia articles on the respiratory system and its disorders and whether they can be a suitable resource for medical students. On April 27, 2014, English Wikipedia was searched for articles on respiratory topics. Using a modified DISCERN instrument,…

  5. Sexual Abuse Evaluations in the Emergency Department: Is the History Reliable?

    Science.gov (United States)

    Gordon, Stacy; Jaudes, Paula K.

    1996-01-01

    Review of charts for 141 children who had undergone both a screening interview by an emergency department physician and an investigative interview for child sexual abuse evaluation found that perpetrator identification obtained during emergency department screening interviews usually agreed with information obtained at the subsequent investigative…

  6. Reliability of Professional Judgments in Forensic Child Sexual Abuse Evaluations: Unsettled or Unsettling Science?

    Science.gov (United States)

    Everson, Mark D.; Sandoval, Jose Miguel; Berson, Nancy; Crowson, Mary; Robinson, Harriet

    2012-01-01

    In the absence of photographic or DNA evidence, a credible eyewitness, or perpetrator confession, forensic evaluators in cases of alleged child sexual abuse must rely on psychosocial or "soft" evidence, often requiring substantial professional judgment for case determination. This article offers a three-part rebuttal to Herman's (2009) argument…

  7. A rating scale for tutor evaluation in a problem-based curriculum: Validity and reliability

    NARCIS (Netherlands)

    D.H.J.M. Dolmans (Diana); I.H.A.P. Wolfhagen (Ineke); H.G. Schmidt (Henk); C.P.M. van der Vleuten (Cees)

    1994-01-01

    An instrument has been developed to assess tutor performance in problem-based tutorial groups. This tutor evaluation questionnaire consists of 13 statements reflecting the tutor's behaviour. The statements are based on a description of the tasks set for the tutor. This study reports…

  8. Validity and reliability of a Turkish Brief Pain Inventory Short Form when used to evaluate musculoskeletal pain.

    Science.gov (United States)

    Celik, Evrim Coskun; Yalcinkaya, Ebru Yilmaz; Atamaz, Funda; Karatas, Metin; Ones, Kadriye; Sezer, Tezgul; Eren, Imran; Paker, Nurdan; Gning, Ibrahima; Mendoza, Tito; Cleeland, Charles S

    2017-01-01

    The Brief Pain Inventory (BPI) is both a questionnaire and an outcome measure that is used widely in clinical trials to assess pain associated with many conditions. The BPI Short Form has been extensively translated into foreign languages. The aim of this study was to assess the validity and reliability of a Turkish Brief Pain Inventory Short Form (BPI-TR) to evaluate musculoskeletal pain. In total, 297 patients with musculoskeletal pain participated in the study. Demographic characteristics and brief medical histories were recorded. Pain intensity was assessed using a visual analogue scale (VAS) and quality of life was assessed using the Short Form 36 (SF-36). Pain was evaluated using the BPI-TR in all patients. Internal consistency and test-retest analysis were used to assess reliability. The internal consistency of the scale items was assessed by calculating Cronbach's α value, which was expected to be > 0.7. The criterion validity of the BPI-TR was assessed by correlation with VAS scores. The pain intensity, pain interference, and other components of the Turkish version supported its validity. Cronbach's α was 0.84 for pain intensity and 0.89 for pain interference. The correlation between BPI-TR and VAS scores was statistically significant. The BPI-TR may be used for assessment of musculoskeletal pain.
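
    For readers unfamiliar with the internal-consistency statistic reported here, a generic Cronbach's alpha computation (not tied to the BPI-TR data):

```python
import numpy as np

def cronbach_alpha(items):
    """Cronbach's alpha for an (n_respondents x n_items) score matrix;
    values above 0.7 are conventionally taken as acceptable."""
    k = items.shape[1]
    item_variances = items.var(axis=0, ddof=1).sum()
    total_variance = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_variances / total_variance)
```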

  9. Cross-cultural adaptation and reliability and validity of the Dutch Patient-Rated Tennis Elbow Evaluation (PRTEE-D).

    Science.gov (United States)

    van Ark, Mathijs; Zwerver, Johannes; Diercks, Ronald L; van den Akker-Scheek, Inge

    2014-08-11

    Lateral Epicondylalgia (LE) is a common injury for which no reliable and valid measure exists to determine severity in the Dutch language. The Patient-Rated Tennis Elbow Evaluation (PRTEE) is the first questionnaire specifically designed for LE, but in English. The aim of this study was to translate the PRTEE into Dutch, cross-culturally adapt it, and determine the reliability and validity of the PRTEE-D (Dutch version). The PRTEE was cross-culturally adapted according to international guidelines. Participants (n = 122) were asked to fill out the PRTEE-D twice with a one-week interval to assess test-retest reliability. Internal consistency of the PRTEE-D was determined by calculating Cronbach's alphas for the questionnaire and subscales. Intraclass Correlation Coefficients (ICC) were calculated for the overall PRTEE-D score, the pain and function subscales, and individual questions to determine test-retest reliability. Additionally, the Disabilities of the Arm, Shoulder and Hand questionnaire (DASH) and Visual Analogue Scale (VAS) pain scores were obtained from 30 patients to assess construct validity; Spearman's correlation coefficients were calculated between the PRTEE-D (subscales) and DASH and VAS-pain scores. The PRTEE was successfully cross-culturally adapted into Dutch (PRTEE-D). Cronbach's alpha for the first assessment of the PRTEE-D was 0.98; Cronbach's alpha was 0.93 for the pain subscale and 0.97 for the function subscale. The ICC for the PRTEE-D was 0.98; the subscales also showed excellent ICC values (pain scale 0.97 and function scale 0.97). A significant moderate correlation exists between the PRTEE-D and DASH (0.65) and the PRTEE-D and VAS pain (0.68). The PRTEE was successfully cross-culturally adapted, and this study showed that the PRTEE-D is reliable and valid for obtaining an indication of the severity of LE. An easy-to-use instrument for practitioners is now available, and this facilitates comparison of Dutch and international research data.

  10. GeneImp: Fast Imputation to Large Reference Panels Using Genotype Likelihoods from Ultralow Coverage Sequencing.

    Science.gov (United States)

    Spiliopoulou, Athina; Colombo, Marco; Orchard, Peter; Agakov, Felix; McKeigue, Paul

    2017-05-01

    We address the task of genotype imputation to a dense reference panel given genotype likelihoods computed from ultralow coverage sequencing as inputs. In this setting, the data have a high level of missingness or uncertainty, and are thus more amenable to a probabilistic representation. Most existing imputation algorithms are not well suited for this situation, as they rely on prephasing for computational efficiency, and, without definite genotype calls, the prephasing task becomes computationally expensive. We describe GeneImp, a program for genotype imputation that does not require prephasing and is computationally tractable for whole-genome imputation. GeneImp does not explicitly model recombination; instead, it capitalizes on the existence of large reference panels, comprising thousands of reference haplotypes, and assumes that the reference haplotypes can adequately represent the target haplotypes over short regions unaltered. We validate GeneImp based on data from ultralow coverage sequencing (0.5×), and compare its performance to the most recent version of BEAGLE that can perform this task. We show that GeneImp achieves imputation quality very close to that of BEAGLE, using one to two orders of magnitude less time, without an increase in memory complexity. Therefore, GeneImp is the first practical choice for whole-genome imputation to a dense reference panel when prephasing cannot be applied, for instance, in datasets produced via ultralow coverage sequencing. A related future application for GeneImp is whole-genome imputation based on the off-target reads from deep whole-exome sequencing. Copyright © 2017 by the Genetics Society of America.

  11. Effects of imputation on correlation: implications for analysis of mass spectrometry data from multiple biological matrices.

    Science.gov (United States)

    Taylor, Sandra L; Ruhaak, L Renee; Kelly, Karen; Weiss, Robert H; Kim, Kyoungmi

    2017-03-01

    With expanded access to, and decreased costs of, mass spectrometry, investigators are collecting and analyzing multiple biological matrices from the same subject such as serum, plasma, tissue and urine to enhance biomarker discoveries, understanding of disease processes and identification of therapeutic targets. Commonly, each biological matrix is analyzed separately, but multivariate methods such as MANOVAs that combine information from multiple biological matrices are potentially more powerful. However, mass spectrometric data typically contain large amounts of missing values, and imputation is often used to create complete data sets for analysis. The effects of imputation on multiple biological matrix analyses have not been studied. We investigated the effects of seven imputation methods (half minimum substitution, mean substitution, k-nearest neighbors, local least squares regression, Bayesian principal components analysis, singular value decomposition and random forest), on the within-subject correlation of compounds between biological matrices and its consequences on MANOVA results. Through analysis of three real omics data sets and simulation studies, we found the amount of missing data and imputation method to substantially change the between-matrix correlation structure. The magnitude of the correlations was generally reduced in imputed data sets, and this effect increased with the amount of missing data. Significant results from MANOVA testing also were substantially affected. In particular, the number of false positives increased with the level of missing data for all imputation methods. No one imputation method was universally the best, but the simple substitution methods (Half Minimum and Mean) consistently performed poorly. © The Author 2016. Published by Oxford University Press. For Permissions, please email: journals.permissions@oup.com.
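
    The attenuation effect described above is easy to reproduce with two of the seven compared methods (half-minimum and mean substitution) on simulated left-censored data; the data, detection limits, and missingness pattern below are invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(1)

# Simulate one compound measured in two matrices (e.g. serum and urine)
# with true within-subject correlation ~0.7; values below a detection
# limit are set missing, mimicking left-censored mass-spec data.
n = 200
serum = rng.normal(10.0, 2.0, n)
urine = 0.7 * serum + rng.normal(0.0, 1.5, n)
data = np.column_stack([serum, urine])
lod = np.quantile(data, 0.3, axis=0)           # ~30% missing per matrix
masked = np.where(data < lod, np.nan, data)

def half_min_impute(x):
    """Substitute half the observed minimum, per column."""
    x = x.copy()
    for j in range(x.shape[1]):
        col = x[:, j]
        col[np.isnan(col)] = np.nanmin(col) / 2.0
    return x

def mean_impute(x):
    """Substitute the observed column mean."""
    x = x.copy()
    for j in range(x.shape[1]):
        col = x[:, j]
        col[np.isnan(col)] = np.nanmean(col)
    return x

true_r = np.corrcoef(data.T)[0, 1]
for name, f in [("half-min", half_min_impute), ("mean", mean_impute)]:
    r = np.corrcoef(f(masked).T)[0, 1]
    print(f"{name}: r = {r:.2f} (complete data r = {true_r:.2f})")
```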

  12. Imputation of low-frequency variants using the HapMap3 benefits from large, diverse reference sets

    OpenAIRE

    Jostins, Luke; Morley, Katherine I.; Barrett, Jeffrey C.

    2011-01-01

    Imputation allows the inference of unobserved genotypes in low-density data sets, and is often used to test for disease association at variants that are poorly captured by standard genotyping chips (such as low-frequency variants). Although much effort has gone into developing the best imputation algorithms, less is known about the effects of reference set choice on imputation accuracy. We assess the improvements afforded by increases in reference size and diversity, specifically comparing th...

  13. Strategies for imputation to whole genome sequence using a single or multi-breed reference population in cattle

    DEFF Research Database (Denmark)

    Brøndum, Rasmus Froberg; Guldbrandtsen, Bernt; Sahana, Goutam

    2014-01-01

    autosome 29 using 387,436 bi-allelic variants and 13,612 SNP markers from the bovine HD panel. Results A combined breed reference population led to higher imputation accuracies than did a single breed reference. The highest accuracy of imputation for all three test breeds was achieved when using BEAGLE...... with un-phased reference data (mean genotype correlations of 0.90, 0.89 and 0.87 for Holstein, Jersey and Nordic Red respectively) but IMPUTE2 with un-phased reference data gave similar accuracies for Holsteins and Nordic Red. Pre-phasing the reference data only led to a minor decrease in the imputation...

  14. Multiple regression based imputation for individualizing template human model from a small number of measured dimensions.

    Science.gov (United States)

    Nohara, Ryuki; Endo, Yui; Murai, Akihiko; Takemura, Hiroshi; Kouchi, Makiko; Tada, Mitsunori

    2016-08-01

    Individual human models are usually created by direct 3D scanning or deforming a template model according to the measured dimensions. In this paper, we propose a method to estimate all the necessary dimensions (full set) for human model individualization from a small number of measured dimensions (subset) and a human dimension database. For this purpose, we solved a multiple regression equation from the dimension database, with the full set dimensions as the objective variable and the subset dimensions as the explanatory variables. Thus, the full set dimensions are obtained by simply multiplying the subset dimensions by the coefficient matrix of the regression equation. We verified the accuracy of our method by imputing hand, foot, and whole-body dimensions from their dimension databases. Leave-one-out cross-validation was employed in this evaluation. The mean absolute errors (MAE) between the measured and the estimated dimensions were computed from 4 dimensions (hand length, breadth, middle finger breadth at proximal, and middle finger depth at proximal) in the hand, 3 dimensions (foot length, breadth, and lateral malleolus height) in the foot, and height and weight in the whole body. The average MAE of non-measured dimensions was 4.58% in the hand, 4.42% in the foot, and 3.54% in the whole body, while that of measured dimensions was 0.00%.
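
    A sketch of the regression imputation and its leave-one-out evaluation, assuming the dimension database is a plain (subjects x dimensions) array; the function names are ours:

```python
import numpy as np

def fit_dimension_regression(D_full, subset_idx):
    """Least-squares mapping from the measured subset dimensions (plus
    intercept) to the full dimension vector, as in template-model
    individualization.

    D_full : (n_subjects, n_dims) dimension database
    subset_idx : column indices of the few measured dimensions
    Returns B such that full ≈ [1, subset] @ B.
    """
    X = np.column_stack([np.ones(len(D_full)), D_full[:, subset_idx]])
    B, *_ = np.linalg.lstsq(X, D_full, rcond=None)
    return B

def loocv_mae_percent(D_full, subset_idx):
    """Leave-one-out mean absolute percentage error per dimension."""
    n = len(D_full)
    err = np.zeros_like(D_full)
    for i in range(n):
        train = np.delete(np.arange(n), i)
        B = fit_dimension_regression(D_full[train], subset_idx)
        x = np.concatenate([[1.0], D_full[i, subset_idx]])
        err[i] = np.abs(x @ B - D_full[i]) / D_full[i]
    return 100.0 * err.mean(axis=0)    # MAE in percent, per dimension
```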

  15. Dealing with missing data in a multi-question depression scale: a comparison of imputation methods

    Directory of Open Access Journals (Sweden)

    Stuart Heather

    2006-12-01

    Full Text Available Abstract Background Missing data present a challenge to many research projects. The problem is often pronounced in studies utilizing self-report scales, and literature addressing different strategies for dealing with missing data in such circumstances is scarce. The objective of this study was to compare six different imputation techniques for dealing with missing data in the Zung Self-reported Depression Scale (SDS). Methods 1580 participants from a surgical outcomes study completed the SDS. The SDS is a 20-question scale that respondents complete by circling a value of 1 to 4 for each question. The sum of the responses is calculated and respondents are classified as exhibiting depressive symptoms when their total score is over 40. Missing values were simulated by randomly selecting questions whose values were then deleted (a missing-completely-at-random simulation). Additionally, missing-at-random and missing-not-at-random simulations were completed. Six imputation methods were then considered: (1) multiple imputation, (2) single regression, (3) individual mean, (4) overall mean, (5) participant's preceding response, and (6) random selection of a value from 1 to 4. For each method, the imputed mean SDS score and standard deviation were compared to the population statistics. The Spearman correlation coefficient, percent misclassified and the Kappa statistic were also calculated. Results When 10% of values are missing, all the imputation methods except random selection produce Kappa statistics greater than 0.80, indicating 'near perfect' agreement. MI produces the most valid imputed values with a high Kappa statistic (0.89), although both single regression and individual mean imputation also produced favorable results. As the percent of missing information increased to 30%, or when unbalanced missing data were introduced, MI maintained a high Kappa statistic. The individual mean and single regression methods produced Kappas in the 'substantial agreement' range…
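
    One of the compared strategies, individual mean imputation, together with the kappa-based agreement check, can be sketched as follows; the simulated item scores are uniform over 1-4, unlike real SDS responses, so this only illustrates the mechanics:

```python
import numpy as np
from sklearn.metrics import cohen_kappa_score

rng = np.random.default_rng(2)

def individual_mean_impute(scores):
    """Replace each respondent's missing items (np.nan) with the mean of
    that respondent's observed items, rounded back to the 1-4 scale."""
    out = scores.copy()
    for row in out:
        m = np.isnan(row)
        if m.any():
            row[m] = np.clip(np.round(np.nanmean(row)), 1, 4)
    return out

# Toy SDS-like data: 1580 respondents x 20 items on a 1-4 scale, with
# 10% of items deleted completely at random.
true = rng.integers(1, 5, size=(1580, 20)).astype(float)
masked = true.copy()
masked[rng.random(true.shape) < 0.10] = np.nan

depressed_true = true.sum(axis=1) > 40
depressed_imp = individual_mean_impute(masked).sum(axis=1) > 40
print("kappa:", cohen_kappa_score(depressed_true, depressed_imp))
```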

  16. Power system reliability

    Energy Technology Data Exchange (ETDEWEB)

    Allan, R.; Billinton, Roy (Manchester Univ. (United Kingdom). Inst. of Science and Technology Saskatchewan Univ., Saskatoon, SK (Canada))

    1994-01-01

    The function of an electric power system is to satisfy the system load as economically as possible and with a reasonable assurance of continuity or reliability. The application of quantitative reliability techniques in planning and operation has increased considerably in the past few years. Reliability evaluation is now becoming an integral part of the economic comparison of alternatives (6 figures, 17 references) (Author)

  17. SparRec: An effective matrix completion framework of missing data imputation for GWAS

    Science.gov (United States)

    Jiang, Bo; Ma, Shiqian; Causey, Jason; Qiao, Linbo; Hardin, Matthew Price; Bitts, Ian; Johnson, Daniel; Zhang, Shuzhong; Huang, Xiuzhen

    2016-10-01

    Genome-wide association studies present computational challenges for missing data imputation, while advances in genotyping technologies are generating datasets of large sample sizes with sample sets genotyped on multiple SNP chips. We present a new framework, SparRec (Sparse Recovery), for imputation, with the following properties: (1) The optimization models of SparRec, based on low rank and a low number of co-clusters of matrices, differ from current statistical methods. While our low-rank matrix completion (LRMC) model is similar to Mendel-Impute, our matrix co-clustering factorization (MCCF) model is completely new. (2) SparRec, like other matrix completion methods, can be flexibly applied to missing data imputation for large meta-analyses with different cohorts genotyped on different sets of SNPs, even when there is no reference panel. This kind of meta-analysis is very challenging for current statistics-based methods. (3) SparRec has consistent performance and achieves high recovery accuracy even when the missing data rate is as high as 90%. Compared with Mendel-Impute, our low-rank based method achieves similar accuracy and efficiency, while the co-clustering based method has advantages in running time. The testing results show that SparRec has significant advantages and competitive performance over other state-of-the-art statistical methods, including Beagle and fastPhase.
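
    A generic low-rank matrix completion routine in the spirit of SparRec's LRMC model, via iterative singular-value soft-thresholding (a textbook SoftImpute-style sketch, not the SparRec optimizer):

```python
import numpy as np

def soft_impute(M, rank_penalty=1.0, n_iter=100):
    """Minimal low-rank matrix completion by iterative SVD soft-
    thresholding; np.nan marks missing genotypes."""
    mask = ~np.isnan(M)
    X = np.where(mask, M, 0.0)                   # start missing entries at 0
    for _ in range(n_iter):
        U, s, Vt = np.linalg.svd(X, full_matrices=False)
        s = np.maximum(s - rank_penalty, 0.0)    # shrink singular values
        Z = (U * s) @ Vt                         # low-rank reconstruction
        X = np.where(mask, M, Z)                 # keep observed entries fixed
    return X

# Usage: genotypes coded 0/1/2 with missing entries as np.nan; round the
# completed matrix back to {0, 1, 2} for hard genotype calls.
```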

  18. An approach to evaluating system well-being in engineering reliability applications

    Energy Technology Data Exchange (ETDEWEB)

    Billinton, Roy; Fotuhi-Firuzabad, Mahmud; Aboreshaid, Saleh

    1995-07-01

    This paper presents an approach to evaluating the degree of system well-being of an engineering system. The functionality of the system is identified by healthy, marginal and risk states. The state definitions permit the inclusion of deterministic considerations in the probabilistic indices used to monitor the system well-being. A technique is developed to determine the three operating state probabilities based on minimal path concepts. The identified indices provide system engineers with additional information on the degree of system well-being in the form of system health and margin state probabilities. A basic planning objective should be to design a system such that the probabilities of the health and risk states are acceptable. The application of the technique is illustrated in this paper using a relatively simple network.
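
    A small state-enumeration sketch of the healthy/marginal/risk classification, taking "healthy" to mean (as one possible deterministic criterion) that the system still functions after any further single-component failure; the topology and availabilities are invented:

```python
from itertools import product

def system_ok(state, minimal_paths):
    """The system functions if every component of some minimal path works."""
    return any(all(state[c] for c in path) for path in minimal_paths)

def well_being(p, minimal_paths):
    """Enumerate component states and classify each as healthy (works and
    survives any further single-component loss), marginal (works, but some
    single loss would bring it down) or risk (does not work)."""
    n = len(p)
    healthy = marginal = risk = 0.0
    for state in product([0, 1], repeat=n):
        pr = 1.0
        for avail, s in zip(p, state):
            pr *= avail if s else 1.0 - avail
        if not system_ok(state, minimal_paths):
            risk += pr
            continue
        contingencies = (state[:i] + (0,) + state[i + 1:]
                         for i in range(n) if state[i])
        if all(system_ok(c, minimal_paths) for c in contingencies):
            healthy += pr
        else:
            marginal += pr
    return healthy, marginal, risk

# Two fully redundant parallel components, minimal paths {0} and {1}:
print(well_being([0.9, 0.9], [(0,), (1,)]))   # -> (0.81, 0.18, 0.01)
```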

  19. The reliability of evaluation of hip muscle strength in rehabilitation robot walking training.

    Science.gov (United States)

    Huang, Qiuchen; Zhou, Yue; Yu, Lili; Gu, Rui; Cui, Yao; Hu, Chunying

    2015-10-01

    [Purpose] The primary purpose of this study was to evaluate the intraclass correlation coefficient in obtaining the torque of the hip muscle strength during a robot-assisted rehabilitation treatment. [Subjects] Twenty-four patients (15 males, 9 females) with spinal cord injury participated in the study. [Methods] The subjects were asked to walk during robot-assisted rehabilitation, and the torque of the hip muscle strength was measured at hip joint flexion angles of -15, -10, -5, 0, 5, 10, 15, 20, 25, and 30 degrees. [Results] The intraclass correlation coefficient of the torque of the hip muscle strength measured by the rehabilitation training robot was excellent. [Conclusion] Our results show that measurement of torque can be used as an objective assessment of treatment with robot-assisted training (RAT).
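
    For reference, a one-way random-effects ICC(1,1), the type of agreement coefficient such reliability studies report; the toy torque data are simulated, not the study's measurements:

```python
import numpy as np

def icc_oneway(ratings):
    """One-way random-effects ICC(1,1) for a (targets x trials) matrix."""
    n, k = ratings.shape
    grand = ratings.mean()
    ms_between = k * ((ratings.mean(axis=1) - grand) ** 2).sum() / (n - 1)
    ms_within = (((ratings - ratings.mean(axis=1, keepdims=True)) ** 2).sum()
                 / (n * (k - 1)))
    return (ms_between - ms_within) / (ms_between + (k - 1) * ms_within)

# Toy example: torque measured twice for each of 24 patients.
rng = np.random.default_rng(5)
true_torque = rng.normal(30.0, 8.0, size=(24, 1))
trials = true_torque + rng.normal(0.0, 2.0, size=(24, 2))
print(icc_oneway(trials))   # close to 1 when repeated trials agree
```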

  20. A self-healing evaluation model for dedicated protection optical fiber sensor networks using All-terminal reliability function

    Science.gov (United States)

    Jia, Dagong; Chen, Jing; Yan, Yingzhan; Zhang, Hongxia; Liu, Tiegen; Zhang, Yimo

    2017-12-01

    With the increase in the number of sensors and nodes, the existing self-healing evaluation models for large-scale optical fiber sensor networks (OFSNs) have become insufficient to evaluate their self-healing capability. Here, we propose a self-healing evaluation model using the All-terminal reliability function for OFSNs. On the basis of this model, we establish equations for the loop topology using the state enumeration method, and for the more complicated star-ring and double-loop topologies using both the state enumeration method and the Monte-Carlo method. In our self-healing evaluation model, the self-healing capability is a function of the number of sensors (N) and the working probability of link fibers (p). We have conducted a comparative study on the effects of these two factors on the self-healing capability among these network topologies. The results show that with the increase of N or the decrease of p, the self-healing capability of all the topologies declines, and the star-ring topology displays the best self-healing capability.
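
    Both evaluation routes the authors combine can be sketched for the loop topology: state enumeration gives the exact expression (a ring stays connected with at most one broken link), and Monte Carlo sampling approximates it; the union-find connectivity check and parameters below are illustrative:

```python
import random

def connected(n_nodes, edges):
    """Union-find check that the surviving links connect every node."""
    parent = list(range(n_nodes))
    def find(a):
        while parent[a] != a:
            parent[a] = parent[parent[a]]
            a = parent[a]
        return a
    for u, v in edges:
        parent[find(u)] = find(v)
    return len({find(i) for i in range(n_nodes)}) == 1

def all_terminal_reliability_mc(n_nodes, edges, p, trials=100_000):
    """Monte Carlo estimate of the probability that all nodes remain
    connected when each link fibre works independently with probability p."""
    ok = sum(connected(n_nodes, [e for e in edges if random.random() < p])
             for _ in range(trials))
    return ok / trials

# Loop (ring) of N sensor nodes: the exact all-terminal reliability is
# p**N + N * p**(N - 1) * (1 - p), since a ring survives one broken link.
N, p = 8, 0.95
ring = [(i, (i + 1) % N) for i in range(N)]
print(all_terminal_reliability_mc(N, ring, p))
print(p**N + N * p**(N - 1) * (1 - p))
```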

  1. Dynamic Pruritus Score: Evaluation of the Validity and Reliability of a New Instrument to Assess the Course of Pruritus.

    Science.gov (United States)

    Ständer, Sonja; Blome, Christine; Anastasiadou, Zografia; Zeidler, Claudia; Jung, Katharina Anna; Tsianakas, Athanasios; Neufang, Gitta; Augustin, Matthias

    2017-02-08

    Currently valid itch intensity scales, such as the visual analogue scale (VAS), are indispensable, but they can be influenced by the patient's overall health status. The aim of this study was to evaluate the reliability and validity of the Dynamic Pruritus Score (DPS), a new instrument comparing reduction in current pruritus with a defined earlier time-point. Eighty-one randomly selected adults (50 females, mean age 53.9 years) recorded their pruritus at visit 1 and repeatedly at visit 2 on the DPS, VAS, numerical rating scale, and on health status questionnaires (EuroQol; EQ-5D), skin-related quality of life (Dermatology Life Quality Index; DLQI), anxiety and depression (Hospital Anxiety and Depression Scale; HADS) and patient benefit (Patient Benefit Index; PBI). Intraclass correlation showed high reliability for both the DPS and the VAS, and the DPS demonstrated good convergent validity (r DPS to PBI = 0.570), supporting the DPS as a valid complement to the VAS for assessment of pruritus in adults. Further research is needed to confirm these results with a more representative sample size.

  2. Evaluation of a proposal for reliable low-cost grid power with 100% wind, water, and solar.

    Science.gov (United States)

    Clack, Christopher T M; Qvist, Staffan A; Apt, Jay; Bazilian, Morgan; Brandt, Adam R; Caldeira, Ken; Davis, Steven J; Diakov, Victor; Handschy, Mark A; Hines, Paul D H; Jaramillo, Paulina; Kammen, Daniel M; Long, Jane C S; Morgan, M Granger; Reed, Adam; Sivaram, Varun; Sweeney, James; Tynan, George R; Victor, David G; Weyant, John P; Whitacre, Jay F

    2017-06-27

    A number of analyses, meta-analyses, and assessments, including those performed by the Intergovernmental Panel on Climate Change, the National Oceanic and Atmospheric Administration, the National Renewable Energy Laboratory, and the International Energy Agency, have concluded that deployment of a diverse portfolio of clean energy technologies makes a transition to a low-carbon-emission energy system both more feasible and less costly than other pathways. In contrast, Jacobson et al. [Jacobson MZ, Delucchi MA, Cameron MA, Frew BA (2015) Proc Natl Acad Sci USA 112(49):15060-15065] argue that it is feasible to provide "low-cost solutions to the grid reliability problem with 100% penetration of WWS [wind, water and solar power] across all energy sectors in the continental United States between 2050 and 2055", with only electricity and hydrogen as energy carriers. In this paper, we evaluate that study and find significant shortcomings in the analysis. In particular, we point out that this work used invalid modeling tools, contained modeling errors, and made implausible and inadequately supported assumptions. Policy makers should treat with caution any visions of a rapid, reliable, and low-cost transition to entire energy systems that relies almost exclusively on wind, solar, and hydroelectric power.

  3. Assessing reliability and validity of the GroPromo audit tool for evaluation of grocery store marketing and promotional environments.

    Science.gov (United States)

    Kerr, Jacqueline; Sallis, James F; Bromby, Erica; Glanz, Karen

    2012-01-01

    To evaluate reliability and validity of a new tool for assessing the placement and promotional environment in grocery stores. Trained observers used the GroPromo instrument in 40 stores to code the placement of 7 products in 9 locations within a store, along with other promotional characteristics. To test construct validity, customers' receipts were coded for percentage of food purchases in each of the categories. Of the 22 categories tested, 21 demonstrated moderate to high interrater reliability (intraclass correlation ≥ 0.61). When more unhealthy items were placed in prominent locations, a higher percentage of money was spent on less-healthy items, and a lower percentage of food dollars were spent on fruits and vegetables. The prominence of locations was more important than the number of locations. The GroPromo tool can be used to assess promotional practices in stores. Data may help advocates campaign for more healthy food items in key promotional locations. Copyright © 2012 Society for Nutrition Education and Behavior. Published by Elsevier Inc. All rights reserved.

  4. Reliability of chemotherapy preparation processes: Evaluating independent double-checking and computer-assisted gravimetric control.

    Science.gov (United States)

    Carrez, Laurent; Bouchoud, Lucie; Fleury-Souverain, Sandrine; Combescure, Christophe; Falaschi, Ludivine; Sadeghipour, Farshid; Bonnabry, Pascal

    2017-03-01

    Background and objectives Centralized chemotherapy preparation units have established systematic strategies to avoid errors. Our work aimed to evaluate the accuracy of manual preparations associated with different control methods. Method A simulation study in an operational setting used phenylephrine and lidocaine as markers. Each operator prepared syringes that were controlled using a different method during each of three sessions (no control, visual double-checking, and gravimetric control). Eight reconstitutions and dilutions were prepared in each session, with variable doses and volumes, using different concentrations of stock solutions. Results were analyzed according to qualitative criteria (choice of stock solution) and quantitative criteria (dose deviation, with preparations classed from accurate to inaccurate at more than 30% deviation). Results Eleven operators carried out 19 sessions. No final preparation (n = 438) contained a wrong drug. The protocol involving no control failed to detect 1 of 3 dose errors made and double-checking failed to detect 3 of 7 dose errors. The gravimetric control method detected all 5 of 5 dose errors. The accuracy of the doses measured was equivalent across the control methods (p = 0.63, Kruskal-Wallis). The final preparations were 58% to 60% accurate, 25% to 27% weakly accurate, 14% to 17% inaccurate and 0.9% wrong. High variability was observed between operators. Discussion Gravimetric control was the only method able to detect all dose errors, but it did not improve dose accuracy. A dose accuracy with <5% deviation cannot always be guaranteed using manual production. Automation should be considered in the future.

  5. Evaluation of automated reticulocyte counts and their reliability in the presence of Howell-Jolly bodies.

    Science.gov (United States)

    Lofsness, K G; Kohnke, M L; Geier, N A

    1994-01-01

    An automated reticulocyte procedure using a flow cytometer and the fluorescent dye thiazole orange was evaluated for clinical use. The mean reticulocyte count on 118 hematologically healthy adults was 1.56% (standard deviation [SD] 0.54), with virtually no difference in percentage between women and men. The mean absolute values were 68.4 x 10(9)/L (SD 24.6) and 75.7 x 10(9)/L (SD 27.2), respectively. When compared with the standard microscopic technique, the automated method showed excellent correlation (r = 0.98) and greatly improved precision (coefficient of variation [CV] 4.1%) over the manual method (CV 22.8%). Preanalytic storage of blood samples at 4 degrees C for up to 48 hours did not significantly affect results, nor did varying the incubation time of diluted samples from 1/2 to 2 hours. In a group of patients with appreciable numbers of Howell-Jolly bodies, automated reticulocyte counts were spuriously elevated. The difference between the manual and automated counts on these patients approximated the percentage of Howell-Jolly bodies observed on their Wright-Giemsa stained blood smears.

  6. Reliability of statistic evaluation of microscopic pictures taken from knitted fabrics

    Science.gov (United States)

    Ehrmann, A.; Blachowicz, T.; Zghidi, H.; Weber, M. O.

    2015-09-01

    One of the techniques that can be used to evaluate images quantitatively is the so-called random-walk approach. The resulting Hurst exponent is a measure of the complexity of the picture. Especially long, fine elements in the image, such as fibres, influence the Hurst exponent significantly. Thus, determination of the Hurst exponent has been suggested as a new method to measure the hairiness of yarns or knitted fabrics, since existing hairiness measurement instruments are based on different measurement principles whose results are not comparable. While the usability of this method for hairiness detection has been shown in principle in former projects, the absolute value of the calculated Hurst exponents depends on the technique used to take the photographic image of a sample and to transfer it into a monochrome picture, and on possible image processing steps. This article gives an overview of edge detection filters, possible definitions of the threshold value between black and white for the transformation into a monochrome image, etc. It shows how these parameters should be chosen for typical textile samples and relates the challenges of this novel method to well-known problems of common techniques for measuring yarn and fabric hairiness.

  7. The Revision of Perceived Classroom Goal Structure Scale and the Evaluation of Its Reliability and Validity

    Directory of Open Access Journals (Sweden)

    Chung-Chin Wu

    2015-12-01

    Full Text Available The measurement of perceptions of classroom goal structure, rooted in achievement goal theory, argues that students can perceive contextual goals shaped by their teachers. Achievement goal theory has recently drawn criticism over issues of measurement and theoretical re-conceptualization. However, revisions of perceived classroom goal structure scales based on achievement goal theory have rarely been investigated in terms of their utility in classroom settings. The purposes of the present study were to: (1) revise the scale of perceived classroom goal structure according to the problems highlighted by achievement goal researchers; (2) examine the extent to which the revised model fits the data; (3) investigate the measurement stability of the theoretical model. To achieve these purposes, the present study adopted a three-step sampling procedure involving 373, 740, and 767 eighth graders in the pilot study, model verification, and cross-validation test, respectively. Results showed: (1) The perception of classroom goal structure with six dimensions held good internal and external validity. However, high correlations among the six first-order factors indicated that there were two second-order factors, classroom mastery goal and classroom performance goal, behind the first-order ones. (2) The perceived classroom goal structure scale performed well in cross-validation, indicating that the measurement of classroom goal was fairly stable. In practice, the six aspects, composed of task, authority, recognition, grouping, evaluation, and time, could be adopted to form a mastery-oriented classroom.

  8. Evaluating the reliability of equilibrium dissolution assumption from residual gasoline in contact with water saturated sands

    Science.gov (United States)

    Lekmine, Greg; Sookhak Lari, Kaveh; Johnston, Colin D.; Bastow, Trevor P.; Rayner, John L.; Davis, Greg B.

    2017-01-01

    Understanding dissolution dynamics of hazardous compounds from complex gasoline mixtures is a key to long-term predictions of groundwater risks. The aim of this study was to investigate if the local equilibrium assumption for BTEX and TMBs (trimethylbenzenes) dissolution was valid under variable saturation in two dimensional flow conditions and evaluate the impact of local heterogeneities when equilibrium is verified at the scale of investigation. An initial residual gasoline saturation was established over the upper two-thirds of a water saturated sand pack. A constant horizontal pore velocity was maintained and water samples were recovered across 38 sampling ports over 141 days. Inside the residual NAPL zone, BTEX and TMBs dissolution curves were in agreement with the TMVOC model based on the local equilibrium assumption. Results compared to previous numerical studies suggest the presence of small scale dissolution fingering created perpendicular to the horizontal dissolution front, mainly triggered by heterogeneities in the medium structure and the local NAPL residual saturation. In the transition zone, TMVOC was able to represent a range of behaviours exhibited by the data, confirming equilibrium or near-equilibrium dissolution at the scale of investigation. The model locally showed discrepancies with the most soluble compounds, i.e. benzene and toluene, due to local heterogeneities exhibiting that at lower scale flow bypassing and channelling may have occurred. In these conditions mass transfer rates were still high enough to fall under the equilibrium assumption in TMVOC at the scale of investigation. Comparisons with other models involving upscaled mass transfer rates demonstrated that such approximations with TMVOC could lead to overestimate BTEX dissolution rates and underestimate the total remediation time.

  9. Evaluating the reliability of equilibrium dissolution assumption from residual gasoline in contact with water saturated sands.

    Science.gov (United States)

    Lekmine, Greg; Sookhak Lari, Kaveh; Johnston, Colin D; Bastow, Trevor P; Rayner, John L; Davis, Greg B

    2017-01-01

    Understanding dissolution dynamics of hazardous compounds from complex gasoline mixtures is a key to long-term predictions of groundwater risks. The aim of this study was to investigate if the local equilibrium assumption for BTEX and TMBs (trimethylbenzenes) dissolution was valid under variable saturation in two dimensional flow conditions and evaluate the impact of local heterogeneities when equilibrium is verified at the scale of investigation. An initial residual gasoline saturation was established over the upper two-thirds of a water saturated sand pack. A constant horizontal pore velocity was maintained and water samples were recovered across 38 sampling ports over 141 days. Inside the residual NAPL zone, BTEX and TMBs dissolution curves were in agreement with the TMVOC model based on the local equilibrium assumption. Results compared to previous numerical studies suggest the presence of small scale dissolution fingering created perpendicular to the horizontal dissolution front, mainly triggered by heterogeneities in the medium structure and the local NAPL residual saturation. In the transition zone, TMVOC was able to represent a range of behaviours exhibited by the data, confirming equilibrium or near-equilibrium dissolution at the scale of investigation. The model locally showed discrepancies with the most soluble compounds, i.e. benzene and toluene, due to local heterogeneities exhibiting that at lower scale flow bypassing and channelling may have occurred. In these conditions mass transfer rates were still high enough to fall under the equilibrium assumption in TMVOC at the scale of investigation. Comparisons with other models involving upscaled mass transfer rates demonstrated that such approximations with TMVOC could lead to overestimate BTEX dissolution rates and underestimate the total remediation time. Copyright © 2016. Published by Elsevier B.V.

  10. Evaluation of an automated protocol for efficient and reliable DNA extraction of dietary samples.

    Science.gov (United States)

    Wallinger, Corinna; Staudacher, Karin; Sint, Daniela; Thalinger, Bettina; Oehm, Johannes; Juen, Anita; Traugott, Michael

    2017-08-01

    Molecular techniques have become an important tool to empirically assess feeding interactions. The increased usage of next-generation sequencing approaches has stressed the need for fast DNA extraction that does not compromise DNA quality. Dietary samples here pose a particular challenge, as these demand high-quality DNA extraction procedures for obtaining the minute quantities of short-fragmented food DNA. Automatic high-throughput procedures significantly decrease time and costs and allow for standardization of extracting total DNA. However, these approaches have not yet been evaluated for dietary samples. We tested the efficiency of an automatic DNA extraction platform and a traditional CTAB protocol, employing a variety of dietary samples including invertebrate whole-body extracts as well as invertebrate and vertebrate gut content samples and feces. Extraction efficacy was quantified using the proportions of successful PCR amplifications of both total and prey DNA, and cost was estimated in terms of time and material expense. For extraction of total DNA, the automated platform performed better for both invertebrate and vertebrate samples. This was also true for prey detection in vertebrate samples. For the dietary analysis in invertebrates, there is still room for improvement when using the high-throughput system for optimal DNA yields. Overall, the automated DNA extraction system turned out to be a promising alternative to labor-intensive, low-throughput manual extraction methods such as CTAB. It opens up the opportunity for extensive use of this cost-efficient and innovative methodology at low contamination risk, also in trophic ecology.

  11. A feasible, aesthetic quality evaluation of implant-supported single crowns: an analysis of validity and reliability.

    Science.gov (United States)

    Hosseini, Mandana; Gotfredsen, Klaus

    2012-04-01

    To test the reliability and validity of six aesthetic parameters and to compare the professional- and patient-reported aesthetic outcomes. Thirty-four patients with 66 implant-supported premolar crowns were included. Two prosthodontists and 11 dental students evaluated six aesthetic parameters, the Copenhagen Index Score (CIS): (i) crown morphology score, (ii) crown colour match score, (iii) symmetry/harmony score, (iv) mucosal discolouration score, (v) papilla index score, mesially and (vi) papilla index score, distally. The intra- and inter-observer agreement and the internal consistency were analysed by Cohen's κ and Cronbach's α, respectively. The validity of the CIS parameters was tested against the corresponding Visual Analogue Scale (VAS) scores. The Spearman correlation coefficients were used. Six aesthetic Oral Health Impact Profile (OHIP) questions were correlated to the CIS and the overall VAS scores. The intra-observer agreement was >70% in 2/3 and >50% in all observations. The inter-observer agreement was >50% in 4/5 of all observations. The mucosal discolouration score had the overall highest observed agreement followed by the papilla index scores. The crown morphology and the symmetry/harmony scores had the overall lowest agreement. The Cronbach α value was over 0.8 for all observers. All CIS scores demonstrated significant correlations with the corresponding VAS scores. Low correlation coefficients (CIS/OHIP and VAS/OHIP: r(s) > -0.24) were found between patient and professional evaluations. The feasibility, reliability and validity of the CIS make the parameters useful for quality control of implant-supported restorations. The professional- and patient-reported aesthetic outcomes had no significant correlation. © 2011 John Wiley & Sons A/S.

  12. A New Missing Data Imputation Algorithm Applied to Electrical Data Loggers

    Directory of Open Access Journals (Sweden)

    Concepción Crespo Turrado

    2015-12-01

    Full Text Available Nowadays, data collection is a key process in the study of electrical power networks when searching for harmonics and a lack of balance among phases. In this context, the lack of data on any of the main electrical variables (phase-to-neutral voltage, phase-to-phase voltage, current in each phase, and power factor) adversely affects any time series study performed. When this occurs, a data imputation process must be accomplished in order to substitute the missing data with estimated values. This paper presents a novel missing data imputation method based on multivariate adaptive regression splines (MARS) and compares it with the well-known technique called multivariate imputation by chained equations (MICE). The results obtained demonstrate how the proposed method outperforms the MICE algorithm.
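
    The comparison's setting can be approximated with scikit-learn's chained-equations imputer (a MICE-style routine). MARS itself is not in scikit-learn, so a random forest stands in where a MARS regressor would be plugged in; the logger data are simulated:

```python
import numpy as np
from sklearn.experimental import enable_iterative_imputer  # noqa: F401
from sklearn.impute import IterativeImputer
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(3)

# Toy logger data: correlated voltage, current and power-factor channels,
# with 15% of the readings lost at random.
n = 500
volts = rng.normal(230.0, 5.0, n)
amps = 0.4 * volts + rng.normal(0.0, 2.0, n)
pf = np.clip(0.9 + 0.001 * (volts - 230.0) + rng.normal(0.0, 0.02, n), 0, 1)
X = np.column_stack([volts, amps, pf])
X_missing = X.copy()
X_missing[rng.random(X.shape) < 0.15] = np.nan

# Chained-equations imputation; a MARS implementation could be supplied
# as `estimator` to mimic the paper's method.
imputer = IterativeImputer(estimator=RandomForestRegressor(n_estimators=50),
                           max_iter=10, random_state=0)
X_filled = imputer.fit_transform(X_missing)
print("RMSE:", np.sqrt(np.mean((X_filled - X) ** 2)))
```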

  13. An Improved Generalized-Trend-Diffusion-Based Data Imputation for Steel Industry

    Directory of Open Access Journals (Sweden)

    Ying Liu

    2013-01-01

    Full Text Available Integrality and validity of industrial data are fundamental factors in the domain of data-driven modeling. Aiming at the missing data problem for gas flow in the steel industry, an improved Generalized-Trend-Diffusion (iGTD) algorithm is proposed in this study; it particularly targets data that are consecutively missing and drawn from small samples. The imputation accuracy can be greatly increased by the proposed Gaussian-membership-based GTD, which expands the useful knowledge of the data samples. In addition, the imputation order is further discussed to enhance the sequential forecasting accuracy of gas flow. To verify the effectiveness of the proposed method, a series of experiments covering three categories of data features in the gas system is presented, and the results indicate that this method is comprehensively better for the imputation of periodical-like data and time-series-like data.

  14. Multiple imputation for item scores when test data are factorially complex.

    Science.gov (United States)

    van Ginkel, Joost R; van der Ark, L Andries; Sijtsma, Klaas

    2007-11-01

    Multiple imputation under a two-way model with error is a simple and effective method that has been used to handle missing item scores in unidimensional test and questionnaire data. Extensions of this method to multidimensional data are proposed. A simulation study is used to investigate whether these extensions produce biased estimates of important statistics in multidimensional data, and to compare them with the lower benchmark of listwise deletion, two-way with error, and multivariate normal imputation. The new methods produce smaller bias in several psychometrically interesting statistics than the existing methods of two-way with error and multivariate normal imputation. One of these new methods is clearly preferable for handling missing item scores in multidimensional test data.
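
    The baseline the record extends, two-way imputation with error, takes only a few lines; the normal residual-noise model and function name are our assumptions:

```python
import numpy as np

def two_way_with_error(scores, rng=None):
    """Two-way imputation with error for an (n_persons x n_items) matrix
    of item scores with np.nan for missing entries: impute
    person mean + item mean - overall mean, plus residual noise."""
    rng = rng or np.random.default_rng(4)
    person_means = np.nanmean(scores, axis=1, keepdims=True)
    item_means = np.nanmean(scores, axis=0, keepdims=True)
    overall_mean = np.nanmean(scores)
    fitted = person_means + item_means - overall_mean
    resid_sd = np.nanstd(scores - fitted)
    out = scores.copy()
    miss = np.isnan(out)
    out[miss] = (fitted + rng.normal(0.0, resid_sd, scores.shape))[miss]
    return out
```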

  15. Evaluating odour control technologies using reliability and sustainability criteria--a case study for water treatment plants.

    Science.gov (United States)

    Kraakman, N J R; Estrada, J M; Lebrero, R; Cesca, J; Muñoz, R

    2014-01-01

    Technologies for odour control have been widely reviewed and their optimal range of application and performance has been clearly established. Selection criteria, mainly driven by process economics, are usually based on the air flow volume, the inlet concentrations and the required removal efficiency. However, these criteria are shifting, with social and environmental issues becoming as important as process economics. A methodology is illustrated to quantify the sustainability and robustness of odour control technology in the context of odour control at wastewater treatment or water recycling plants. The most commonly used odour abatement techniques (biofiltration, biotrickling filtration, activated carbon adsorption, chemical scrubbing, activated sludge diffusion and biotrickling filtration coupled with activated carbon adsorption) are evaluated in terms of: (1) sustainability, with quantification of process economics, environmental performance and social impact using the sustainability metrics of the Institution of Chemical Engineers; (2) sensitivity towards design and operating parameters such as utility prices (energy and labour), inlet odour concentration (H2S) and design safety (gas contact time); (3) robustness, with quantification of operating reliability and recommendations to improve reliability over the operational lifespan. The results show that the odour treatment technologies with the highest investments presented the lowest operating costs, which means that the net present value (NPV) should be used as a selection criterion rather than investment costs. Economies of scale matter more for biotechniques (biofiltration and biotrickling filtration): at increased airflows, their reduction in overall costs over 20 years (NPV20) is greater than that of the physical/chemical technologies (chemical scrubbing and activated carbon filtration). Due to their low NPV and their low environmental impact, activated sludge diffusion and biotrickling…
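
    The NPV argument is easy to make concrete: once yearly operating costs are discounted over 20 years, a high-capex/low-opex option can beat a cheap-to-build one. The figures below are invented for illustration:

```python
def npv20(capex, annual_opex, years=20, discount=0.05):
    """Net present value of total cost of ownership: up-front investment
    plus discounted yearly operating costs over the planning horizon."""
    return capex + sum(annual_opex / (1 + discount) ** t
                       for t in range(1, years + 1))

# Invented figures: a biotrickling filter (high capex, low opex) vs. a
# chemical scrubber (low capex, high opex).
print("biotrickling filter NPV20:", round(npv20(400_000, 15_000)))
print("chemical scrubber   NPV20:", round(npv20(150_000, 45_000)))
```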

  16. A suggested approach for imputation of missing dietary data for young children in daycare

    Science.gov (United States)

    Stevens, June; Ou, Fang-Shu; Truesdale, Kimberly P.; Zeng, Donglin; Vaughn, Amber E.; Pratt, Charlotte; Ward, Dianne S.

    2015-01-01

    Background Parent-reported 24-h diet recalls are an accepted method of estimating intake in young children. However, many children eat while at childcare making accurate proxy reports by parents difficult. Objective The goal of this study was to demonstrate a method to impute missing weekday lunch and daytime snack nutrient data for daycare children and to explore the concurrent predictive and criterion validity of the method. Design Data were from children aged 2-5 years in the My Parenting SOS project (n=308; 870 24-h diet recalls). Mixed models were used to simultaneously predict breakfast, dinner, and evening snacks (B+D+ES); lunch; and daytime snacks for all children after adjusting for age, sex, and body mass index (BMI). From these models, we imputed the missing weekday daycare lunches by interpolation using the mean lunch to B+D+ES [L/(B+D+ES)] ratio among non-daycare children on weekdays and the L/(B+D+ES) ratio for all children on weekends. Daytime snack data were used to impute snacks. Results The reported mean (± standard deviation) weekday intake was lower for daycare children [725 (±324) kcal] compared to non-daycare children [1,048 (±463) kcal]. Weekend intake for all children was 1,173 (±427) kcal. After imputation, weekday caloric intake for daycare children was 1,230 (±409) kcal. Daily intakes that included imputed data were associated with age and sex but not with BMI. Conclusion This work indicates that imputation is a promising method for improving the precision of daily nutrient data from young children. PMID:26689313

  17. A suggested approach for imputation of missing dietary data for young children in daycare

    Directory of Open Access Journals (Sweden)

    June Stevens

    2015-12-01

    Full Text Available Background: Parent-reported 24-h diet recalls are an accepted method of estimating intake in young children. However, many children eat while at childcare, making accurate proxy reports by parents difficult. Objective: The goal of this study was to demonstrate a method to impute missing weekday lunch and daytime snack nutrient data for daycare children and to explore the concurrent predictive and criterion validity of the method. Design: Data were from children aged 2-5 years in the My Parenting SOS project (n=308; 870 24-h diet recalls). Mixed models were used to simultaneously predict breakfast, dinner, and evening snacks (B+D+ES); lunch; and daytime snacks for all children after adjusting for age, sex, and body mass index (BMI). From these models, we imputed the missing weekday daycare lunches by interpolation using the mean lunch to B+D+ES [L/(B+D+ES)] ratio among non-daycare children on weekdays and the L/(B+D+ES) ratio for all children on weekends. Daytime snack data were used to impute snacks. Results: The reported mean (± standard deviation) weekday intake was lower for daycare children [725 (±324) kcal] compared to non-daycare children [1,048 (±463) kcal]. Weekend intake for all children was 1,173 (±427) kcal. After imputation, weekday caloric intake for daycare children was 1,230 (±409) kcal. Daily intakes that included imputed data were associated with age and sex but not with BMI. Conclusion: This work indicates that imputation is a promising method for improving the precision of daily nutrient data from young children.

  18. A suggested approach for imputation of missing dietary data for young children in daycare.

    Science.gov (United States)

    Stevens, June; Ou, Fang-Shu; Truesdale, Kimberly P; Zeng, Donglin; Vaughn, Amber E; Pratt, Charlotte; Ward, Dianne S

    2015-01-01

    Parent-reported 24-h diet recalls are an accepted method of estimating intake in young children. However, many children eat while at childcare making accurate proxy reports by parents difficult. The goal of this study was to demonstrate a method to impute missing weekday lunch and daytime snack nutrient data for daycare children and to explore the concurrent predictive and criterion validity of the method. Data were from children aged 2-5 years in the My Parenting SOS project (n=308; 870 24-h diet recalls). Mixed models were used to simultaneously predict breakfast, dinner, and evening snacks (B+D+ES); lunch; and daytime snacks for all children after adjusting for age, sex, and body mass index (BMI). From these models, we imputed the missing weekday daycare lunches by interpolation using the mean lunch to B+D+ES [L/(B+D+ES)] ratio among non-daycare children on weekdays and the L/(B+D+ES) ratio for all children on weekends. Daytime snack data were used to impute snacks. The reported mean (± standard deviation) weekday intake was lower for daycare children [725 (±324) kcal] compared to non-daycare children [1,048 (±463) kcal]. Weekend intake for all children was 1,173 (±427) kcal. After imputation, weekday caloric intake for daycare children was 1,230 (±409) kcal. Daily intakes that included imputed data were associated with age and sex but not with BMI. This work indicates that imputation is a promising method for improving the precision of daily nutrient data from young children.
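
    A minimal sketch of the ratio interpolation these three records describe; pooling the two reference ratios with a plain average is an illustrative simplification, though the example lands near the reported magnitudes:

```python
def impute_daycare_day(b_d_es_kcal, ratio_weekend, ratio_nondaycare_weekday):
    """Impute a daycare child's weekday lunch-plus-daytime-snack energy
    from the meals the parent did observe (breakfast, dinner, evening
    snacks), using an L/(B+D+ES)-style ratio pooled from the ratio for
    all children on weekends and for non-daycare children on weekdays."""
    ratio = (ratio_weekend + ratio_nondaycare_weekday) / 2.0
    return b_d_es_kcal * (1.0 + ratio)

# A child with 725 kcal reported outside daycare hours and a pooled ratio
# near 0.70 lands close to the ~1,230 kcal the authors report:
print(impute_daycare_day(725.0, 0.72, 0.68))   # -> 1232.5
```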

  19. Is missing geographic positioning system data in accelerometry studies a problem, and is imputation the solution?

    Directory of Open Access Journals (Sweden)

    Kristin Meseck

    2016-05-01

    Full Text Available The main purpose of the present study was to assess the impact of global positioning system (GPS) signal lapses on physical activity analyses, discover any existing associations between missing GPS data and environmental and demographic attributes, and determine whether imputation is an accurate and viable method for correcting GPS data loss. Accelerometer and GPS data of 782 participants from 8 studies were pooled to represent a range of lifestyles and interactions with the built environment. Periods of GPS signal lapse were identified and extracted. Generalised linear mixed models were run with the number of lapses and the length of lapses as outcomes. The signal lapses were imputed using a simple ruleset, and imputation was validated against person-worn camera imagery. A final generalised linear mixed model was used to identify the difference between the amount of GPS minutes pre- and post-imputation for the activity categories of sedentary, light, and moderate-to-vigorous physical activity. Over 17% of the dataset was comprised of GPS data lapses. No strong associations were found between increasing lapse length and number of lapses and the demographic and built environment variables. A significant difference was found between the pre- and post-imputation minutes for each activity category. No demographic or environmental bias was found for length or number of lapses, but imputation of GPS data may make a significant difference for inclusion of physical activity data that occurred during a lapse. Imputing GPS data lapses is a viable technique for returning spatial context to accelerometer data and improving the completeness of the dataset.

  20. FISH: fast and accurate diploid genotype imputation via segmental hidden Markov model.

    Science.gov (United States)

    Zhang, Lei; Pei, Yu-Fang; Fu, Xiaoying; Lin, Yong; Wang, Yu-Ping; Deng, Hong-Wen

    2014-07-01

    Fast and accurate genotype imputation is necessary for facilitating gene-mapping studies, especially with the ever-increasing numbers of both common and rare variants generated by high-throughput sequencing experiments. However, most existing imputation approaches suffer from either inaccurate results or heavy computational demand. In this article, aiming to perform fast and accurate genotype-imputation analysis, we propose a novel, fast and yet accurate method to impute diploid genotypes. Specifically, we extend a hidden Markov model that is widely used to describe haplotype structures. But we model hidden states onto single reference haplotypes rather than onto pairs of haplotypes. Consequently, the computational complexity is linear in the size of the reference haplotypes. We further develop an algorithm, 'merge-and-recover' (MAR), to speed up the calculation. Working on a compact representation of segmental reference haplotypes, the MAR algorithm always calculates an exact form of transition probabilities regardless of the partition of segments. Both simulation studies and real-data analyses demonstrated that our proposed method was comparable to most of the existing popular methods in terms of imputation accuracy, but was much more efficient in terms of computation. The MAR algorithm can further speed up the calculation by several folds without loss of accuracy. The proposed method will be useful in large-scale imputation studies with a large number of reference subjects. The implemented multi-threading software FISH is freely available for academic use at https://sites.google.com/site/lzhanghomepage/FISH. © The Author 2014. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com.
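
    A stripped-down haploid forward filter over single reference haplotypes shows why a state space linear in the number of haplotypes (rather than in haplotype pairs) keeps the cost down; real methods add a backward pass, diploid emissions, and the segmental machinery, so this is a conceptual sketch only:

```python
import numpy as np

def forward_haploid_dosage(obs, H, theta=0.01, rho=0.05):
    """Forward filter of a haploid Li-Stephens-style HMM whose hidden
    states are single reference haplotypes, so the per-site cost is
    linear in the number of haplotypes K rather than quadratic.

    obs   : length-L array of observed alleles (0/1, -1 = missing)
    H     : (K, L) matrix of reference haplotypes
    theta : per-site mismatch (mutation) probability
    rho   : per-step switch (recombination) probability
    Returns the filtered allele-1 dosage at every site.
    """
    K, L = H.shape
    f = np.full(K, 1.0 / K)              # uniform prior over haplotypes
    dosage = np.zeros(L)
    for l in range(L):
        if obs[l] != -1:                 # emission update at typed sites
            f = f * np.where(H[:, l] == obs[l], 1.0 - theta, theta)
        f = f / f.sum()
        dosage[l] = f @ H[:, l]          # imputed expected allele
        f = (1.0 - rho) * f + rho / K    # switch step between sites
    return dosage
```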

  1. A reference panel of 64,976 haplotypes for genotype imputation

    Science.gov (United States)

    McCarthy, Shane; Das, Sayantan; Kretzschmar, Warren; Delaneau, Olivier; Wood, Andrew R.; Teumer, Alexander; Kang, Hyun Min; Fuchsberger, Christian; Danecek, Petr; Sharp, Kevin; Luo, Yang; Sidore, Carlo; Kwong, Alan; Timpson, Nicholas; Koskinen, Seppo; Vrieze, Scott; Scott, Laura J.; Zhang, He; Mahajan, Anubha; Veldink, Jan; Peters, Ulrike; Pato, Carlos; van Duijn, Cornelia M.; Gillies, Christopher E.; Gandin, Ilaria; Mezzavilla, Massimo; Gilly, Arthur; Cocca, Massimiliano; Traglia, Michela; Angius, Andrea; Barrett, Jeffrey; Boomsma, Dorret I.; Branham, Kari; Breen, Gerome; Brummet, Chad; Busonero, Fabio; Campbell, Harry; Chan, Andrew; Chen, Sai; Chew, Emily; Collins, Francis S.; Corbin, Laura; Davey Smith, George; Dedoussis, George; Dorr, Marcus; Farmaki, Aliki-Eleni; Ferrucci, Luigi; Forer, Lukas; Fraser, Ross M.; Gabriel, Stacey; Levy, Shawn; Groop, Leif; Harrison, Tabitha; Hattersley, Andrew; Holmen, Oddgeir L.; Hveem, Kristian; Kretzler, Matthias; Lee, James; McGue, Matt; Meitinger, Thomas; Melzer, David; Min, Josine; Mohlke, Karen L.; Vincent, John; Nauck, Matthias; Nickerson, Deborah; Palotie, Aarno; Pato, Michele; Pirastu, Nicola; McInnis, Melvin; Richards, Brent; Sala, Cinzia; Salomaa, Veikko; Schlessinger, David; Schoenheer, Sebastian; Slagboom, P Eline; Small, Kerrin; Spector, Timothy; Stambolian, Dwight; Tuke, Marcus; Tuomilehto, Jaakko; Van den Berg, Leonard; Van Rheenen, Wouter; Volker, Uwe; Wijmenga, Cisca; Toniolo, Daniela; Zeggini, Eleftheria; Gasparini, Paolo; Sampson, Matthew G.; Wilson, James F.; Frayling, Timothy; de Bakker, Paul; Swertz, Morris A.; McCarroll, Steven; Kooperberg, Charles; Dekker, Annelot; Altshuler, David; Willer, Cristen; Iacono, William; Ripatti, Samuli; Soranzo, Nicole; Walter, Klaudia; Swaroop, Anand; Cucca, Francesco; Anderson, Carl; Boehnke, Michael; McCarthy, Mark I.; Durbin, Richard; Abecasis, Gonçalo; Marchini, Jonathan

    2017-01-01

    We describe a reference panel of 64,976 human haplotypes at 39,235,157 SNPs constructed using whole genome sequence data from 20 studies of predominantly European ancestry. Using this resource leads to accurate genotype imputation at minor allele frequencies as low as 0.1% and a large increase in the number of SNPs tested in association studies, and it can help to discover and refine causal loci. We describe remote server resources that allow researchers to carry out imputation and phasing consistently and efficiently. PMID:27548312

  2. A flexible and accurate genotype imputation method for the next generation of genome-wide association studies.

    Directory of Open Access Journals (Sweden)

    Bryan N Howie

    2009-06-01

    Full Text Available Genotype imputation methods are now being widely used in the analysis of genome-wide association studies. Most imputation analyses to date have used the HapMap as a reference dataset, but new reference panels (such as controls genotyped on multiple SNP chips and densely typed samples from the 1,000 Genomes Project will soon allow a broader range of SNPs to be imputed with higher accuracy, thereby increasing power. We describe a genotype imputation method (IMPUTE version 2 that is designed to address the challenges presented by these new datasets. The main innovation of our approach is a flexible modelling framework that increases accuracy and combines information across multiple reference panels while remaining computationally feasible. We find that IMPUTE v2 attains higher accuracy than other methods when the HapMap provides the sole reference panel, but that the size of the panel constrains the improvements that can be made. We also find that imputation accuracy can be greatly enhanced by expanding the reference panel to contain thousands of chromosomes and that IMPUTE v2 outperforms other methods in this setting at both rare and common SNPs, with overall error rates that are 15%-20% lower than those of the closest competing method. One particularly challenging aspect of next-generation association studies is to integrate information across multiple reference panels genotyped on different sets of SNPs; we show that our approach to this problem has practical advantages over other suggested solutions.

  3. Methods for significance testing of categorical covariates in logistic regression models after multiple imputation: power and applicability analysis

    NARCIS (Netherlands)

    Eekhout, I.; Wiel, M.A. van de; Heymans, M.W.

    2017-01-01

    Background. Multiple imputation is a recommended method to handle missing data. For significance testing after multiple imputation, Rubin’s Rules (RR) are easily applied to pool parameter estimates. In a logistic regression model, to consider whether a categorical covariate with more than two levels
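
    As background for the pooling step mentioned here, the helper below applies Rubin's Rules to a single scalar coefficient estimated on m imputed datasets (pooled estimate, within- and between-imputation variance, Wald test). It is a generic textbook sketch with names of our choosing, not the paper's procedure, whose focus is the harder multivariate problem of testing a categorical covariate with more than two levels.

```python
import numpy as np
from scipy import stats

def pool_rubins_rules(estimates, variances):
    """Pool one scalar parameter across m multiply imputed datasets.

    estimates : per-imputation point estimates (length m)
    variances : per-imputation squared standard errors (length m)
    Returns (pooled estimate, pooled SE, two-sided p-value for H0: 0).
    """
    q = np.asarray(estimates, dtype=float)
    u = np.asarray(variances, dtype=float)
    m = len(q)
    qbar = q.mean()                     # pooled point estimate
    ubar = u.mean()                     # within-imputation variance
    b = q.var(ddof=1)                   # between-imputation variance
    t = ubar + (1 + 1 / m) * b          # total variance
    # Rubin (1987) degrees of freedom; assumes b > 0.
    df = (m - 1) * (1 + ubar / ((1 + 1 / m) * b)) ** 2
    wald = qbar / np.sqrt(t)
    p = 2 * stats.t.sf(abs(wald), df)
    return qbar, np.sqrt(t), p

# toy usage with five imputed-dataset results
print(pool_rubins_rules([0.52, 0.48, 0.55, 0.50, 0.47],
                        [0.04, 0.05, 0.04, 0.06, 0.05]))
```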

  4. Influence of Imputation and EM Methods on Factor Analysis When Item Nonresponse in Questionnaire Data Is Nonignorable.

    Science.gov (United States)

    Bernaards, Coen A.; Sijtsma, Klaas

    2000-01-01

    Using simulation, studied the influence of each of 12 imputation methods and 2 methods using the EM algorithm on the results of maximum likelihood factor analysis as compared with results from the complete data factor analysis (no missing scores). Discusses why EM methods recovered complete data factor loadings better than imputation methods. (SLD)

  5. Structural Reliability Methods

    DEFF Research Database (Denmark)

    Ditlevsen, Ove Dalager; Madsen, H. O.

    The structural reliability methods quantitatively treat the uncertainty of predicting the behaviour and properties of a structure given the uncertain properties of its geometry, materials, and the actions it is supposed to withstand. This book addresses the probabilistic methods for evaluation...... of structural reliability, including the theoretical basis for these methods. Partial safety factor codes under current practice are briefly introduced and discussed. A probabilistic code format for obtaining a formal reliability evaluation system that catches the most essential features of the nature...

  6. Towards objective evaluation of balance in the elderly: validity and reliability of a measurement instrument applied to the Tinetti test.

    Science.gov (United States)

    Panella, Lorenzo; Tinelli, Carmine; Buizza, Angelo; Lombardi, Remo; Gandolfi, Roberto

    2008-03-01

    The aim of the present study was the validation of an instrument for evaluating balance, applied to the Tinetti test. Trunk inclination was measured by inclinometers during the Tinetti test in 163 healthy participants scoring 28/28 in the Tinetti scale (controls: 92 women, 71 men; age 19-85 years), and 111 residents in old people's homes, able to autonomously perform the test, but scoring less than 28/28 (test group: 78 women, 33 men; age 55-96 years). Trunk inclination was quantified by 20 parameters, whose standardized values were summed and provided an overall performance index (PTOT). PTOT reliability was evaluated by Cronbach's alpha, and its validity by item scale correlation, discriminant validity and concurrent validity. Influence of age and sex was assessed by a logistic regression model. Repeatable and consistent measurements were obtained (Cronbach's alpha=0.88). Parameter distribution was significantly different in controls and patients. PTOT correlated with the Tinetti scale score, its partial, balance-related score and Barthel's Index, but not with the Mini Mental State score. PTOT correlated with age and level of performance but not with sex; correlation with age did not prevent the possibility of discriminating between different levels of performance and between normal and abnormal performance. The instrument provided objective discrimination between different performance levels, in particular, between normal and altered performance.

  7. A Performance Evaluation of NACK-Oriented Protocols as the Foundation of Reliable Delay- Tolerant Networking Convergence Layers

    Science.gov (United States)

    Iannicca, Dennis; Hylton, Alan; Ishac, Joseph

    2012-01-01

    Delay-Tolerant Networking (DTN) is an active area of research in the space communications community. DTN uses a standard layered approach with the Bundle Protocol operating on top of transport layer protocols known as convergence layers that actually transmit the data between nodes. Several different common transport layer protocols have been implemented as convergence layers in DTN implementations including User Datagram Protocol (UDP), Transmission Control Protocol (TCP), and Licklider Transmission Protocol (LTP). The purpose of this paper is to evaluate several stand-alone implementations of negative-acknowledgment based transport layer protocols to determine how they perform in a variety of different link conditions. The transport protocols chosen for this evaluation include Consultative Committee for Space Data Systems (CCSDS) File Delivery Protocol (CFDP), Licklider Transmission Protocol (LTP), NACK-Oriented Reliable Multicast (NORM), and Saratoga. The test parameters that the protocols were subjected to are characteristic of common communications links ranging from terrestrial to cis-lunar and apply different levels of delay, line rate, and error.

  8. Helicopter Reliability Growth Evaluation

    Science.gov (United States)

    1976-04-01


  9. MEMS reliability

    CERN Document Server

    Hartzell, Allyson L; Shea, Herbert R

    2010-01-01

    This book focuses on the reliability and manufacturability of MEMS at a fundamental level. It demonstrates how to design MEMS for reliability and provides detailed information on the different types of failure modes and how to avoid them.

  10. The Urostomy Education Scale: a reliable and valid tool to evaluate urostomy self-care skills among cystectomy patients.

    Science.gov (United States)

    Kristensen, Susanne Ammitzbøll; Laustsen, Sussie; Kiesbye, Berit; Jensen, Bente Thoft

    2013-01-01

    The purpose of this study was to validate a quantitative scale for nurses to evaluate self-care skills among patients undergoing cystectomy with creation of a urostomy. Twelve patients undergoing cystectomy with formation of a urostomy participated in the research. The study took place at Aarhus University Hospital, Denmark, a bladder cancer center performing approximately 100 cystectomies annually. The Urostomy Education Scale was developed in 2010 based on a review of the stoma care literature. Areas recognized as standard procedure in urostomy care were identified and categorized into 7 self-care skills necessary for changing the pouching system. The 7 skills were reaction to the stoma, removing the pouching system, measuring the stoma diameter, adjusting the size of the urostomy diameter in a new stoma appliance, skin care, fitting a new stoma appliance, and the emptying procedure. Each skill is rated on a 4-point scale according to the patient's need of assistance from the nurse. Higher scores indicate a higher level of patient self-care skills related to changing a urostomy pouching system. Content, criterion, and construct validity were evaluated by a panel of experts using the Delphi method in 2010. To test interrater reliability and criterion validity, 4 nurses attended 12 patient training sessions on different postoperative days. Each patient was taught how to change a urostomy appliance using a standardized approach. One experienced enterostomal therapy nurse acted as the instructor and 3 other nurses observed and scored the patient's self-care skills. The 3 nurses' scores were analyzed using Bland-Altman plots with limits of agreement. To test construct validity, patients were categorized into 3 groups. The mean score in each group was used to analyze differences between groups using one-way analysis of variance. Analysis revealed that the Urostomy Education Scale distinguished urostomy self-care skills practiced by beginners versus experienced patients (P = .01

  11. The Threat of Uncertainty: Why Using Traditional Approaches for Evaluating Spacecraft Reliability are Insufficient for Future Human Mars Missions

    Science.gov (United States)

    Stromgren, Chel; Goodliff, Kandyce; Cirillo, William; Owens, Andrew

    2016-01-01

    Through the Evolvable Mars Campaign (EMC) study, the National Aeronautics and Space Administration (NASA) continues to evaluate potential approaches for sending humans beyond low Earth orbit (LEO). A key aspect of these missions is the strategy that is employed to maintain and repair the spacecraft systems, ensuring that they continue to function and support the crew. Long-duration missions beyond LEO present unique and severe maintainability challenges due to a variety of factors, including: limited to no opportunities for resupply, the distance from Earth, mass and volume constraints of spacecraft, high sensitivity of transportation element designs to variation in mass, the lack of abort opportunities to Earth, limited hardware heritage information, and the operation of human-rated systems in a radiation environment with little to no experience. The current approach to maintainability, as implemented on ISS, which includes a large number of spares pre-positioned on ISS, a larger supply sitting on Earth waiting to be flown to ISS, and on-demand delivery of logistics from Earth, is not feasible for future deep space human missions. For missions beyond LEO, significant modifications to the maintainability approach will be required. Through the EMC evaluations, several key findings related to the reliability and safety of the Mars spacecraft have been made. The nature of random and induced failures presents significant issues for deep space missions. Because spare parts cannot be flown as needed for Mars missions, all required spares must be flown with the mission or pre-positioned. These spares must cover all anticipated failure modes and provide a level of overall reliability and safety that is satisfactory for human missions. This will require a large amount of mass and volume to be dedicated to storage and transport of spares for the mission. Further, there is, and will continue to be, a significant amount of uncertainty regarding failure rates for spacecraft

  12. Accuracy of genome-wide imputation of untyped markers and impacts on statistical power for association studies

    Directory of Open Access Journals (Sweden)

    McElwee Joshua

    2009-06-01

    Full Text Available Abstract Background Although high-throughput genotyping arrays have made whole-genome association studies (WGAS) feasible, only a small proportion of SNPs in the human genome are actually surveyed in such studies. In addition, various SNP arrays assay different sets of SNPs, which leads to challenges in comparing results and merging data for meta-analyses. Genome-wide imputation of untyped markers allows us to address these issues in a direct fashion. Methods 384 Caucasian American liver donors were genotyped using Illumina 650Y (Ilmn650Y) arrays, from which we also derived genotypes for the Ilmn317K array. On these data, we compared two imputation methods: MACH and BEAGLE. We imputed 2.5 million HapMap Release 22 SNPs, and conducted GWAS on ~40,000 liver mRNA expression traits (eQTL analysis). In addition, 200 Caucasian American and 200 African American subjects were genotyped using the Affymetrix 500K array plus a custom 164K fill-in chip. We then imputed the HapMap SNPs and quantified the accuracy by randomly masking observed SNPs. Results MACH and BEAGLE perform similarly with respect to imputation accuracy. The Ilmn650Y array yields excellent imputation performance and outperforms the Affx500K or Ilmn317K sets. For Caucasian Americans, 90% of the HapMap SNPs were imputed at 98% accuracy. As expected, imputation of poorly tagged SNPs (untyped SNPs in weak LD with typed markers) was not as successful. It was more challenging to impute genotypes in the African American population, given (1) shorter LD blocks and (2) admixture with Caucasian populations in this population. To address issue (2), we pooled HapMap CEU and YRI data as an imputation reference set, which greatly improved overall performance. The approximate 40,000 phenotypes scored in these populations provide a path to determine empirically how the power to detect associations is affected by the imputation procedures. That is, at a fixed false discovery rate, the number of cis
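
    The masking experiment used in this study to quantify accuracy can be written down generically: hide a random fraction of known genotype calls, re-impute them, and score concordance on the hidden entries. In the sketch below the imputer is a deliberately naive per-SNP mode filler standing in for MACH or BEAGLE, and all names and defaults are illustrative.

```python
import numpy as np

def masked_concordance(genotypes, impute_fn, mask_frac=0.1, seed=1):
    """Hide a random fraction of genotype calls, re-impute, and return
    the concordance on the hidden entries.

    genotypes : (n_samples, n_snps) array coded 0/1/2, no missing values
    impute_fn : callable that fills the NaNs of a genotype matrix
    """
    rng = np.random.default_rng(seed)
    g = genotypes.astype(float)
    mask = rng.random(g.shape) < mask_frac
    hidden = g.copy()
    hidden[mask] = np.nan
    imputed = impute_fn(hidden)
    return np.mean(imputed[mask] == g[mask])   # concordance on masked calls

def mode_impute(g):
    """Naive baseline: fill each SNP's missing calls with its mode."""
    out = g.copy()
    for j in range(out.shape[1]):
        col = out[:, j]
        obs = col[~np.isnan(col)]
        vals, counts = np.unique(obs, return_counts=True)
        col[np.isnan(col)] = vals[np.argmax(counts)]
    return out

# example: 500 samples x 200 SNPs with allele frequencies ~U(0.05, 0.5)
rng = np.random.default_rng(7)
freq = rng.uniform(0.05, 0.5, size=200)
g = rng.binomial(2, freq, size=(500, 200))
print(masked_concordance(g, mode_impute))
```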

  13. Multiple imputation for model checking: completed-data plots with missing and latent data.

    Science.gov (United States)

    Gelman, Andrew; Van Mechelen, Iven; Verbeke, Geert; Heitjan, Daniel F; Meulders, Michel

    2005-03-01

    In problems with missing or latent data, a standard approach is to first impute the unobserved data, then perform all statistical analyses on the completed dataset--corresponding to the observed data and imputed unobserved data--using standard procedures for complete-data inference. Here, we extend this approach to model checking by demonstrating the advantages of the use of completed-data model diagnostics on imputed completed datasets. The approach is set in the theoretical framework of Bayesian posterior predictive checks (but, as with missing-data imputation, our methods of missing-data model checking can also be interpreted as "predictive inference" in a non-Bayesian context). We consider the graphical diagnostics within this framework. Advantages of the completed-data approach include: (1) One can often check model fit in terms of quantities that are of key substantive interest in a natural way, which is not always possible using observed data alone. (2) In problems with missing data, checks may be devised that do not require modeling the missingness or inclusion mechanism; this is useful for the analysis of ignorable but unknown data collection mechanisms, such as are often assumed in the analysis of sample surveys and observational studies. (3) In many problems with latent data, it is possible to check qualitative features of the model (for example, independence of two variables) that can be naturally formalized with the help of the latent data. We illustrate with several applied examples.
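
    A minimal version of the completed-data diagnostic idea, under an assumed normal model with ignorable missingness (not one of the paper's applied examples): impute the missing values several times and draw the same plot on each completed dataset, so that model misfit shows up consistently across imputations rather than in any single fill-in.

```python
import numpy as np
import matplotlib.pyplot as plt

rng = np.random.default_rng(42)
y = rng.normal(10, 2, size=200)
y[rng.random(200) < 0.25] = np.nan          # ignorable missingness
obs = y[~np.isnan(y)]
n_mis = int(np.isnan(y).sum())

fig, axes = plt.subplots(1, 4, figsize=(12, 3), sharey=True)
for ax in axes:
    completed = y.copy()
    # Draw imputations from an (approximate, plug-in) predictive
    # distribution of a normal model fitted to the observed values.
    completed[np.isnan(y)] = rng.normal(obs.mean(), obs.std(ddof=1), n_mis)
    ax.hist(completed, bins=20)              # completed-data diagnostic
    ax.set_title("completed dataset")
plt.tight_layout()
plt.show()
```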

  14. Sixteen new lung function signals identified through 1000 Genomes Project reference panel imputation

    NARCIS (Netherlands)

    Artigas, Maria Soler; Wain, Louise V.; Miller, Suzanne; Kheirallah, Abdul Kader; Huffman, Jennifer E.; Ntalla, Ioanna; Shrine, Nick; Obeidat, Ma'en; Trochet, Holly; McArdle, Wendy L.; Alves, Alexessander Couto; Hui, Jennie; Zhao, Jing Hua; Joshi, Peter K.; Teumer, Alexander; Albrecht, Eva; Imboden, Medea; Rawal, Rajesh; Lopez, Lorna M.; Marten, Jonathan; Enroth, Stefan; Surakka, Ida; Polasek, Ozren; Lyytikainen, Leo-Pekka; Granell, Raquel; Hysi, Pirro G.; Flexeder, Claudia; Mahajan, Anubha; Beilby, John; Bosse, Yohan; Brandsma, Corry-Anke; Campbell, Harry; Gieger, Christian; Glaeser, Sven; Gonzalez, Juan R.; Grallert, Harald; Hammond, Chris J.; Harris, Sarah E.; Hartikainen, Anna-Liisa; Heliovaara, Markku; Henderson, John; Hocking, Lynne; Horikoshi, Momoko; Hutri-Kahonen, Nina; Ingelsson, Erik; Johansson, Asa; Kemp, John P.; Kolcic, Ivana; Kumar, Ashish; Lind, Lars; Melen, Erik; Musk, Arthur W.; Navarro, Pau; Nickle, David C.; Padmanabhan, Sandosh; Raitakari, Olli T.; Ried, Janina S.; Ripatti, Samuli; Schulz, Holger; Scott, Robert A.; Sin, Don D.; Starr, John M.; Vinuela, Ana; Voelzke, Henry; Wild, Sarah H.; Wright, Alan F.; Zemunik, Tatijana; Jarvis, Deborah L.; Spector, Tim D.; Evans, David M.; Lehtimaki, Terho; Vitart, Veronique; Kahonen, Mika; Gyllensten, Ulf; Rudan, Igor; Deary, Ian J.; Karrasch, Stefan; Probst-Hensch, Nicole M.; Heinrich, Joachim; Stubbe, Beate; Wilson, James F.; Wareham, Nicholas J.; James, Alan L.; Morris, Andrew P.; Jarvelin, Marjo-Riitta; Hayward, Caroline; Sayers, Ian; Strachan, David P.; Hall, Ian P.; Tobin, Martin D.; Deloukas, Panos; Hansell, Anna L.; Hubbard, Richard; Jackson, Victoria E.; Marchini, Jonathan; Pavord, Ian; Thomson, Neil C.; Zeggini, Eleftheria

    2015-01-01

    Lung function measures are used in the diagnosis of chronic obstructive pulmonary disease. In 38,199 European ancestry individuals, we studied genome-wide association of forced expiratory volume in 1 s (FEV1), forced vital capacity (FVC) and FEV1/FVC with 1000 Genomes Project (phase 1)-imputed

  15. Is missing geographic positioning system data in accelerometry studies a problem, and is imputation the solution?

    DEFF Research Database (Denmark)

    Meseck, Kristin; Jankowska, Marta M; Schipperijn, Jasper

    2016-01-01

    The main purpose of the present study was to assess the impact of global positioning system (GPS) signal lapse on physical activity analyses, discover any existing associations between missing GPS data and environmental and demographics attributes, and to determine whether imputation is an accura...

  16. Accuracy of imputation to whole-genome sequence data in Holstein Friesian cattle

    NARCIS (Netherlands)

    Binsbergen, van R.; Bink, M.C.A.M.; Calus, M.P.L.; Eeuwijk, van F.A.; Hayes, B.J.; Hulsegge, B.; Veerkamp, R.F.

    2014-01-01

    Background The use of whole-genome sequence data can lead to higher accuracy in genome-wide association studies and genomic predictions. However, to benefit from whole-genome sequence data, a large dataset of sequenced individuals is needed. Imputation from SNP panels, such as the Illumina

  17. Consequences of splitting whole-genome sequencing effort over multiple breeds on imputation accuracy

    NARCIS (Netherlands)

    Bouwman, A.C.; Veerkamp, R.F.

    2014-01-01

    The aim of this study was to determine the consequences of splitting sequencing effort over multiple breeds for imputation accuracy from a high-density SNP chip towards whole-genome sequence. Such information would assist for instance numerical smaller cattle breeds, but also pig and chicken

  18. Limitations in Using Multiple Imputation to Harmonize Individual Participant Data for Meta-Analysis.

    Science.gov (United States)

    Siddique, Juned; de Chavez, Peter J; Howe, George; Cruden, Gracelyn; Brown, C Hendricks

    2017-02-27

    Individual participant data (IPD) meta-analysis is a meta-analysis in which the individual-level data for each study are obtained and used for synthesis. A common challenge in IPD meta-analysis is when variables of interest are measured differently in different studies. The term harmonization has been coined to describe the procedure of placing variables on the same scale in order to permit pooling of data from a large number of studies. Using data from an IPD meta-analysis of 19 adolescent depression trials, we describe a multiple imputation approach for harmonizing 10 depression measures across the 19 trials by treating those depression measures that were not used in a study as missing data. We then apply diagnostics to address the fit of our imputation model. Even after reducing the scale of our application, we were still unable to produce accurate imputations of the missing values. We describe those features of the data that made it difficult to harmonize the depression measures and provide some guidelines for using multiple imputation for harmonization in IPD meta-analysis.

  19. The search for stable prognostic models in multiple imputed data sets

    NARCIS (Netherlands)

    Vergouw, D.; Heijmans, M.W.; Peat, G.M.; Kuijpers, T.; Croft, P.R.; de Vet, H.C.W.; van der Horst, H.E.; van der Windt, D.A.W.M.

    2010-01-01

    Background: In prognostic studies, model instability and missing data can be troubling factors. Proposed methods for handling these situations are bootstrapping (B) and multiple imputation (MI). The authors examined the influence of these methods on model composition. Methods: Models were constructed

  20. Learning-Based Adaptive Imputation Method with kNN Algorithm for Missing Power Data

    Directory of Open Access Journals (Sweden)

    Minkyung Kim

    2017-10-01

    Full Text Available This paper proposes a learning-based adaptive imputation method (LAI) for imputing missing power data in an energy system. The method estimates missing power data by using patterns that appear in the collected data. In order to capture the patterns in past power data, we model a feature vector using past data and its variations. The proposed LAI then learns the optimal length of the feature vector and the optimal historical length, which are significant hyperparameters of the method, by utilizing intentionally masked data. Based on a weighted distance between feature vectors representing a missing situation and past situations, missing power data are estimated by referring to the k most similar past situations within the optimal historical length. We further extend the proposed LAI to alleviate the effect of unexpected variation in power data and refer to this new approach as the extended LAI method (eLAI). The eLAI selects between linear interpolation (LI) and the proposed LAI to improve accuracy under unexpected variations. Finally, from a simulation under various energy consumption profiles, we verify that the proposed eLAI achieves about a 74% reduction in average imputation error in an energy system, compared to existing imputation methods.
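
    A stripped-down sketch of the kNN step described here: impute a missing point from the k past windows most similar to the window preceding it, with inverse-distance weighting. The feature construction from variations, the learned hyperparameters, and eLAI's selection between LI and LAI are omitted, and all names and defaults are illustrative assumptions.

```python
import numpy as np

def knn_impute_point(series, t, feat_len=4, k=3):
    """Impute series[t] from the k past windows most similar to the
    window immediately preceding t.

    series : 1-D array with np.nan at missing points
    t      : index to impute (assumes feat_len clean points before it)
    """
    query = series[t - feat_len:t]
    cands = []
    for s in range(feat_len, t):            # past situations only
        window, nxt = series[s - feat_len:s], series[s]
        if np.isnan(window).any() or np.isnan(nxt):
            continue
        cands.append((np.linalg.norm(window - query), nxt))
    cands.sort(key=lambda c: c[0])
    top = cands[:k]
    # Inverse-distance weighting of the k nearest "next values".
    w = np.array([1.0 / (d + 1e-9) for d, _ in top])
    v = np.array([nxt for _, nxt in top])
    return float(np.sum(w * v) / np.sum(w))

# toy usage: a daily-ish periodic load curve with one missing reading
x = np.sin(np.arange(200) * 2 * np.pi / 24) + 5.0
x[150] = np.nan
print(knn_impute_point(x, 150))
```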

  1. A reference panel of 64,976 haplotypes for genotype imputation

    NARCIS (Netherlands)

    McCarthy, Shane; Das, Sayantan; Kretzschmar, Warren; Delaneau, Olivier; Wood, Andrew R.; Teumer, Alexander; Kang, Hyun Min; Fuchsberger, Christian; Danecek, Petr; Sharp, Kevin; Luo, Yang; Sidorel, Carlo; Kwong, Alan; Timpson, Nicholas; Koskinen, Seppo; Vrieze, Scott; Scott, Laura J.; Zhang, He; Mahajan, Anubha; Veldink, Jan; Peters, Ulrike; Pato, Carlos; van Duijn, Cornelia M.; Gillies, Christopher E.; Gandin, Ilaria; Mezzavilla, Massimo; Gilly, Arthur; Cocca, Massimiliano; Traglia, Michela; Angius, Andrea; Barrett, Jeffrey C.; Boomsma, Dorrett; Branham, Kari; Breen, Gerome; Brummett, Chad M.; Busonero, Fabio; Campbell, Harry; Chan, Andrew; Che, Sai; Chew, Emily; Collins, Francis S.; Corbin, Laura J.; Smith, George Davey; Dedoussis, George; Dorr, Marcus; Farmaki, Aliki-Eleni; Ferrucci, Luigi; Forer, Lukas; Fraser, Ross M.; Gabriel, Stacey; Levy, Shawn; Groop, Leif; Harrison, Tabitha; Hattersley, Andrew; Holmen, Oddgeir L.; Hveem, Kristian; Kretzler, Matthias; Lee, James C.; McGue, Matt; Meitinger, Thomas; Melzer, David; Min, Josine L.; Mohlke, Karen L.; Vincent, John B.; Nauck, Matthias; Nickerson, Deborah; Palotie, Aarno; Pato, Michele; Pirastu, Nicola; McInnis, Melvin; Richards, J. Brent; Sala, Cinzia; Salomaa, Veikko; Schlessinger, David; Schoenherr, Sebastian; Slagboom, P. Eline; Small, Kerrin; Spector, Timothy; Stambolian, Dwight; Tuke, Marcus; Tuomilehto, Jaakko; van den Berg, Leonard H.; Van Rheenen, Wouter; Volker, Uwe; Wijmenga, Cisca; Toniolo, Daniela; Zeggini, Eleftheria; Gasparini, Paolo; Sampson, Matthew G.; Wilson, James F.; Frayling, Timothy; de Bakker, Paul I. W.; Swertz, Morris A.; McCarroll, Steven; Kooperberg, Charles; Dekker, Annelot; Altshuler, David; Willer, Cristen; Iacono, William; Ripatti, Samuli; Soranzo, Nicole; Walter, Klaudia; Swaroop, Anand; Cucca, Francesco; Anderson, Carl A.; Myers, Richard M.; Boehnke, Michael; McCarthy, Mark I.; Durbin, Richard; Abecasis, Goncalo; Marchini, Jonathan

    2016-01-01

    We describe a reference panel of 64,976 human haplotypes at 39,235,157 SNPs constructed using whole-genome sequence data from 20 studies of predominantly European ancestry. Using this resource leads to accurate genotype imputation at minor allele frequencies as low as 0.1% and a large increase in

  2. Statistical Analysis of a Class: Monte Carlo and Multiple Imputation Spreadsheet Methods for Estimation and Extrapolation

    Science.gov (United States)

    Fish, Laurel J.; Halcoussis, Dennis; Phillips, G. Michael

    2017-01-01

    The Monte Carlo method and related multiple imputation methods are traditionally used in math, physics and science to estimate and analyze data and are now becoming standard tools in analyzing business and financial problems. However, few sources explain the application of the Monte Carlo method for individuals and business professionals who are…

  3. Estimation of Tree Lists from Airborne Laser Scanning Using Tree Model Clustering and k-MSN Imputation

    Directory of Open Access Journals (Sweden)

    Jörgen Wallerman

    2013-04-01

    Full Text Available Individual tree crowns may be delineated from airborne laser scanning (ALS) data by segmentation of surface models or by 3D analysis. Segmentation of surface models benefits from using a priori knowledge about the proportions of tree crowns, which has not yet been utilized for 3D analysis to any great extent. In this study, an existing surface segmentation method was used as a basis for a new tree model 3D clustering method applied to ALS returns in 104 circular field plots with 12 m radius in pine-dominated boreal forest (64°14'N, 19°50'E). For each cluster below the tallest canopy layer, a parabolic surface was fitted to model a tree crown. The tree model clustering identified more trees than segmentation of the surface model, especially smaller trees below the tallest canopy layer. Stem attributes were estimated with k-Most Similar Neighbours (k-MSN) imputation of the clusters based on field-measured trees. The accuracy at plot level from the k-MSN imputation (stem density root mean square error or RMSE 32.7%; stem volume RMSE 28.3%) was similar to the corresponding results from the surface model (stem density RMSE 33.6%; stem volume RMSE 26.1%) with leave-one-out cross-validation for one field plot at a time. Three-dimensional analysis of ALS data should also be evaluated in multi-layered forests since it identified a larger number of small trees below the tallest canopy layer.

  4. Development of Reliable and Validated Tools to Evaluate Technical Resuscitation Skills in a Pediatric Simulation Setting: Resuscitation and Emergency Simulation Checklist for Assessment in Pediatrics.

    Science.gov (United States)

    Faudeux, Camille; Tran, Antoine; Dupont, Audrey; Desmontils, Jonathan; Montaudié, Isabelle; Bréaud, Jean; Braun, Marc; Fournier, Jean-Paul; Bérard, Etienne; Berlengi, Noémie; Schweitzer, Cyril; Haas, Hervé; Caci, Hervé; Gatin, Amélie; Giovannini-Chami, Lisa

    2017-09-01

    To develop a reliable and validated tool to evaluate technical resuscitation skills in a pediatric simulation setting. Four Resuscitation and Emergency Simulation Checklist for Assessment in Pediatrics (RESCAPE) evaluation tools were created, following international guidelines: intraosseous needle insertion, bag mask ventilation, endotracheal intubation, and cardiac massage. A modified Delphi methodology was applied to evaluate the binary rating items. Reliability was assessed by comparing the ratings of 2 observers (1 in real time and 1 after a video-recorded review). The tools were assessed for content, construct, and criterion validity, and for sensitivity to change. Inter-rater reliability, evaluated with Cohen's kappa coefficients, was perfect or near-perfect (>0.8) for 92.5% of items and each Cronbach alpha coefficient was ≥0.91. Principal component analyses showed that all 4 tools were unidimensional. Significant increases in median scores with increasing levels of medical expertise were demonstrated for RESCAPE-intraosseous needle insertion (P = .0002), RESCAPE-bag mask ventilation (P = .0002), RESCAPE-endotracheal intubation (P = .0001), and RESCAPE-cardiac massage (P = .0037). Significantly increased median scores over time were also demonstrated during a simulation-based educational program. RESCAPE tools are reliable and validated tools for the evaluation of technical resuscitation skills in pediatric settings during simulation-based educational programs. They might also be used for medical practice performance evaluations. Copyright © 2017 Elsevier Inc. All rights reserved.
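
    The two reliability statistics reported here are easy to compute directly; the helpers below are generic implementations of Cohen's kappa (two raters, binary items) and Cronbach's alpha, given as illustration rather than taken from the RESCAPE analysis.

```python
import numpy as np

def cohens_kappa(r1, r2):
    """Chance-corrected agreement for two raters on binary (0/1) items."""
    r1, r2 = np.asarray(r1), np.asarray(r2)
    po = np.mean(r1 == r2)                          # observed agreement
    pe = (np.mean(r1) * np.mean(r2)                 # chance agreement
          + np.mean(1 - r1) * np.mean(1 - r2))
    return (po - pe) / (1 - pe)

def cronbach_alpha(items):
    """Internal consistency; items is (n_subjects, n_items)."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_var = items.var(axis=0, ddof=1).sum()      # sum of item variances
    total_var = items.sum(axis=1).var(ddof=1)       # variance of sum scores
    return k / (k - 1) * (1 - item_var / total_var)

# toy usage
print(cohens_kappa([1, 0, 1, 1, 0, 1], [1, 0, 1, 0, 0, 1]))
rng = np.random.default_rng(3)
base = rng.normal(size=(50, 1))
print(cronbach_alpha(base + 0.5 * rng.normal(size=(50, 7))))
```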

  5. Software reliability

    CERN Document Server

    Bendell, A

    1986-01-01

    Software Reliability reviews some fundamental issues of software reliability as well as the techniques, models, and metrics used to predict the reliability of software. Topics covered include fault avoidance, fault removal, and fault tolerance, along with statistical methods for the objective assessment of predictive accuracy. Development cost models and life-cycle cost models are also discussed. This book is divided into eight sections and begins with a chapter on adaptive modeling used to predict software reliability, followed by a discussion on failure rate in software reliability growth mo

  6. Building model analysis applications with the Joint Universal Parameter IdenTification and Evaluation of Reliability (JUPITER) API

    Science.gov (United States)

    Banta, E.R.; Hill, M.C.; Poeter, E.; Doherty, J.E.; Babendreier, J.

    2008-01-01

    The open-source, public domain JUPITER (Joint Universal Parameter IdenTification and Evaluation of Reliability) API (Application Programming Interface) provides conventions and Fortran-90 modules to develop applications (computer programs) for analyzing process models. The input and output conventions allow application users to access various applications and the analysis methods they embody with a minimum of time and effort. Process models simulate, for example, physical, chemical, and (or) biological systems of interest using phenomenological, theoretical, or heuristic approaches. The types of model analyses supported by the JUPITER API include, but are not limited to, sensitivity analysis, data needs assessment, calibration, uncertainty analysis, model discrimination, and optimization. The advantages provided by the JUPITER API for users and programmers allow for rapid programming and testing of new ideas. Application-specific coding can be in languages other than the Fortran-90 of the API. This article briefly describes the capabilities and utility of the JUPITER API, lists existing applications, and uses UCODE_2005 as an example.

  7. Evaluation and Reliability Assessment of GaN-on-Si MIS-HEMT for Power Switching Applications

    Directory of Open Access Journals (Sweden)

    Po-Chien Chou

    2017-02-01

    Full Text Available This paper reports an extensive analysis of the physical mechanisms responsible for the failure of GaN-based metal–insulator–semiconductor (MIS) high electron mobility transistors (HEMTs). When stressed under high applied electric fields, the traps at the dielectric/III-N barrier interface and inside the III-N barrier cause an increase in dynamic on-resistance and a shift of threshold voltage, which might affect the long-term stability of these devices. More detailed investigations are needed to identify epitaxy- or process-related degradation mechanisms and to understand their impact on electrical properties. The present paper proposes a suitable methodology to characterize the degradation and failure mechanisms of GaN MIS-HEMTs subjected to stress under various off-state conditions. There are three major stress conditions: VDS = 0 V, off, and off (cascode-connection) states. Changes of direct current (DC) figures of merit in voltage step-stress experiments are measured, statistics are studied, and correlations are investigated. Hot electron stress produces permanent change which can be attributed to charge trapping phenomena and the generation of deep levels or interface states. The simultaneous generation of interface (and/or bulk) and buffer traps can account for the observed degradation modes and mechanisms. These findings provide several critical characteristics to evaluate the electrical reliability of GaN MIS-HEMTs which are borne out by step-stress experiments.

  8. Hap-seq: an optimal algorithm for haplotype phasing with imputation using sequencing data.

    Science.gov (United States)

    He, Dan; Han, Buhm; Eskin, Eleazar

    2013-02-01

    Inference of haplotypes, or the sequence of alleles along each chromosome, is a fundamental problem in genetics and is important for many analyses, including admixture mapping, identifying regions of identity by descent, and imputation. Traditionally, haplotypes are inferred from genotype data obtained from microarrays using information on population haplotype frequencies inferred from either a large sample of genotyped individuals or a reference dataset such as the HapMap. With the availability of large reference datasets, modern approaches to haplotype phasing along these lines have become closely related to imputation methods. When applied to data obtained from sequencing studies, a straightforward way to obtain haplotypes is to first infer genotypes from the sequence data and then apply an imputation method. However, this approach does not take into account that alleles on the same sequence read originate from the same chromosome. Haplotype assembly approaches take advantage of this insight and predict haplotypes by assigning the reads to chromosomes in such a way that minimizes the number of conflicts between the reads and the predicted haplotypes. Unfortunately, assembly approaches require very high sequencing coverage and are usually not able to fully reconstruct the haplotypes. In this work, we present a novel approach, Hap-seq, which is simultaneously an imputation and assembly method that combines information from a reference dataset with the information from the reads using a likelihood framework. Our method applies a dynamic programming algorithm to identify the predicted haplotype, which maximizes the joint likelihood of the haplotype with respect to the reference dataset and the haplotype with respect to the observed reads. We show that our method requires only low sequencing coverage and can reconstruct haplotypes containing both common and rare alleles with higher accuracy compared to the state-of-the-art imputation methods.
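
    The objective combined here — a reference-panel prior plus a read-consistency likelihood — can be shown in miniature. The brute-force toy below enumerates all haplotype pairs over a handful of sites and scores each pair by log prior plus log read likelihood; everything about it (the independent-site stand-in for the reference prior, the read encoding, the error rate) is an illustrative assumption, and the real Hap-seq replaces the enumeration with a dynamic program over an HMM of the reference panel.

```python
import itertools
import numpy as np

def hapseq_toy(reads, ref_freq, err=0.01):
    """Pick the haplotype pair maximizing
    log P(pair | reference) + log P(reads | pair)  over a tiny region.

    reads    : list of dicts {site: allele} from single fragments
    ref_freq : (L,) population frequency of allele 1 at each site
    """
    L = len(ref_freq)
    best, best_score = None, -np.inf
    for h1 in itertools.product((0, 1), repeat=L):
        for h2 in itertools.product((0, 1), repeat=L):
            # Independent-site (linkage-free) stand-in for the panel prior.
            prior = sum(np.log(f if a else 1 - f)
                        for h in (h1, h2) for a, f in zip(h, ref_freq))
            # Each read comes from one of the two chromosomes with equal
            # probability; sum the two per-read origin hypotheses.
            like = 0.0
            for read in reads:
                ps = []
                for h in (h1, h2):
                    p = 0.5
                    for site, allele in read.items():
                        p *= (1 - err) if h[site] == allele else err
                    ps.append(p)
                like += np.log(ps[0] + ps[1])
            if prior + like > best_score:
                best, best_score = (h1, h2), prior + like
    return best

# tiny example: reads linking sites 0-1 and 1-2 phase three sites
reads = [{0: 1, 1: 1}, {1: 1, 2: 0}, {0: 0, 1: 0}, {1: 0, 2: 1}]
print(hapseq_toy(reads, ref_freq=np.array([0.5, 0.5, 0.5])))
```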

  9. Evaluation of loop-mediated isothermal amplification for the rapid, reliable, and robust detection of Salmonella in produce.

    Science.gov (United States)

    Yang, Qianru; Wang, Fei; Jones, Kelly L; Meng, Jianghong; Prinyawiwatkul, Witoon; Ge, Beilei

    2015-04-01

    Rapid, reliable, and robust detection of Salmonella in produce remains a challenge. In this study, loop-mediated isothermal amplification (LAMP) was comprehensively evaluated against real-time quantitative PCR (qPCR) for detecting diverse Salmonella serovars in various produce items (cantaloupe, pepper, and several varieties of lettuce, sprouts, and tomato). To mimic real-world contamination events, produce samples were surface-inoculated with low concentrations (1.1-2.9 CFU/25 g) of individual Salmonella strains representing ten serovars and tested after aging at 4 °C for 48 h. Four DNA extraction methods were also compared using produce enrichment broths. False-positive or false-negative results were not observed among 178 strains (151 Salmonella and 27 non-Salmonella) used to evaluate assay specificity. The detection limits for LAMP were 1.8-4 CFU per reaction in pure culture and 10^4-10^6 CFU per 25 g (i.e., 10^2-10^4 CFU per g) in produce without enrichment, comparable to those obtained by qPCR. After 6-8 h of enrichment, both LAMP and qPCR consistently detected these low concentrations of Salmonella of diverse serovars in all produce items except sprouts. The PrepMan Ultra sample preparation reagent yielded the best results among the four DNA extraction methods. Upon further validation, LAMP may be a valuable tool for routine Salmonella testing in produce. The difficulty of detecting Salmonella in sprouts, whether using LAMP or qPCR, warrants further study. Published by Elsevier Ltd.

  10. Designing and Evaluation of Reliability and Validity of Visual Cue-Induced Craving Assessment Task for Methamphetamine Smokers

    Directory of Open Access Journals (Sweden)

    Hamed Ekhtiari

    2010-08-01

    Full Text Available A B S T R A C TIntroduction: Craving to methamphetamine is a significant health concern and exposure to methamphetamine cues in laboratory can induce craving. In this study, a task designing procedure for evaluating methamphetamine cue-induced craving in laboratory conditions is examined. Methods: First a series of visual cues which could induce craving was identified by 5 discussion sessions between expert clinicians and 10 methamphetamine smokers. Cues were categorized in 4 main clusters and photos were taken for each cue in studio, then 60 most evocative photos were selected and 10 neutral photos were added. In this phase, 50 subjects with methamphetamine dependence, had exposure to cues and rated craving intensity induced by the 72 cues (60 active evocative photos + 10 neutral photos on self report Visual Analogue Scale (ranging from 0-100. In this way, 50 photos with high levels of evocative potency (CICT 50 and 10 photos with the most evocative potency (CICT 10 were obtained and subsequently, the task was designed. Results: The task reliability (internal consistency was measured by Cronbach’s alpha which was 91% for (CICT 50 and 71% for (CICT 10. The most craving induced was reported for category Drug use procedure (66.27±30.32 and least report for category Cues associated with drug use (31.38±32.96. Difference in cue-induced craving in (CICT 50 and (CICT 10 were not associated with age, education, income, marital status, employment and sexual activity in the past 30 days prior to study entry. Family living condition was marginally correlated with higher scores in (CICT 50. Age of onset for (opioids, cocaine and methamphetamine was negatively correlated with (CICT 50 and (CICT 10 and age of first opiate use was negatively correlated with (CICT 50. Discussion: Cue-induced craving for methamphetamine may be reliably measured by tasks designed in laboratory and designed assessment tasks can be used in cue reactivity paradigm, and

  11. Reliability-Based and Cost-Oriented Product Optimization Integrating Fuzzy Reasoning Petri Nets, Interval Expert Evaluation and Cultural-Based DMOPSO Using Crowding Distance Sorting

    Directory of Open Access Journals (Sweden)

    Zhaoxi Hong

    2017-08-01

    Full Text Available In reliability-based and cost-oriented product optimization, the target product reliability is apportioned to subsystems or components to achieve the maximum reliability and minimum cost. The main challenges in conducting such optimization design lie in how to simultaneously consider subsystem division, uncertain evaluation provided by experts for essential factors, and dynamic propagation of product failure. To overcome these problems, a reliability-based and cost-oriented product optimization method integrating fuzzy reasoning Petri nets (FRPN), interval expert evaluation and cultural-based dynamic multi-objective particle swarm optimization (DMOPSO) using crowding distance sorting is proposed in this paper. Subsystem division is performed based on failure decoupling, and subsystem weights are then calculated with FRPN reflecting dynamic and uncertain failure propagation, as well as interval expert evaluation considering six essential factors. A mathematical model of reliability-based and cost-oriented product optimization is established, and the cultural-based DMOPSO with crowding distance sorting is utilized to obtain the optimized design scheme. The efficiency and effectiveness of the proposed method are demonstrated by a numerical example of the optimization design for a computer numerically controlled (CNC) machine tool.

  12. Multi-task Gaussian process for imputing missing data in multi-trait and multi-environment trials.

    Science.gov (United States)

    Hori, Tomoaki; Montcho, David; Agbangla, Clement; Ebana, Kaworu; Futakuchi, Koichi; Iwata, Hiroyoshi

    2016-11-01

    A method based on a multi-task Gaussian process using self-measuring similarity gave increased accuracy for imputing missing phenotypic data in multi-trait and multi-environment trials. Multi-environmental trial (MET) data often encounter the problem of missing data. Accurate imputation of missing data makes subsequent analysis more effective and the results easier to understand. Moreover, accurate imputation may help to reduce the cost of phenotyping for thinned-out lines tested in METs. METs are generally performed for multiple traits that are correlated to each other. Correlation among traits can be useful information for imputation, but single-trait-based methods cannot utilize information shared by traits that are correlated. In this paper, we propose imputation methods based on a multi-task Gaussian process (MTGP) using self-measuring similarity kernels reflecting relationships among traits, genotypes, and environments. This framework allows us to use genetic correlation among multi-trait multi-environment data and also to combine MET data and marker genotype data. We compared the accuracy of three MTGP methods and iterative regularized PCA using rice MET data. Two scenarios for the generation of missing data at various missing rates were considered. The MTGP achieved better imputation accuracy than regularized PCA, especially at high missing rates. Under the 'uniform' scenario, in which missing data arise randomly, inclusion of marker genotype data in the imputation increased the imputation accuracy at high missing rates. Under the 'fiber' scenario, in which missing data arise in all traits for some combinations between genotypes and environments, the inclusion of marker genotype data decreased the imputation accuracy for most traits while increasing the accuracy in a few traits remarkably. The proposed methods will be useful for solving the missing data problem in MET data.
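
    As a compact illustration of the kernel idea, the sketch below imputes missing cells of a genotype-by-trait matrix with the posterior mean of a Gaussian process whose covariance is a Kronecker product of a genotype kernel and a trait kernel. It is a deliberately reduced two-factor version under assumed inputs; the paper's method also includes an environment axis, "self-measuring" similarity kernels, and proper hyperparameter learning.

```python
import numpy as np

def mtgp_impute(Y, K_geno, K_trait, noise=0.1):
    """GP-mean imputation of missing cells in a genotype-by-trait matrix
    using a Kronecker-product covariance over all cells.

    Y       : (n_geno, n_trait) phenotypes with np.nan for missing
    K_geno  : (n_geno, n_geno) genotype similarity (e.g. from markers)
    K_trait : (n_trait, n_trait) trait similarity
    """
    y = Y.reshape(-1)                       # vectorize, trait index fastest
    K = np.kron(K_geno, K_trait)            # covariance between all cells
    obs, mis = ~np.isnan(y), np.isnan(y)
    mu = y[obs].mean()                      # crude constant mean function
    A = K[np.ix_(obs, obs)] + noise * np.eye(int(obs.sum()))
    w = np.linalg.solve(A, y[obs] - mu)     # GP regression weights
    y_full = y.copy()
    y_full[mis] = mu + K[np.ix_(mis, obs)] @ w
    return y_full.reshape(Y.shape)

# toy usage with made-up similarity kernels
rng = np.random.default_rng(5)
G = rng.normal(size=(30, 30)); K_g = np.eye(30) + 0.5 * (G @ G.T) / 30
T = rng.normal(size=(4, 4));   K_t = np.eye(4) + 0.5 * (T @ T.T) / 4
Y = rng.normal(size=(30, 4)); Y[rng.random(Y.shape) < 0.2] = np.nan
print(mtgp_impute(Y, K_g, K_t)[:3])
```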

  13. The reliability of workplace-based assessment in postgraduate medical education and training: a national evaluation in general practice in the United Kingdom.

    Science.gov (United States)

    Murphy, Douglas J; Bruce, David A; Mercer, Stewart W; Eva, Kevin W

    2009-05-01

    To investigate the reliability and feasibility of six potential workplace-based assessment methods in general practice training: criterion audit, multi-source feedback from clinical and non-clinical colleagues, patient feedback (the CARE Measure), referral letters, significant event analysis, and video analysis of consultations. Performance of GP registrars (trainees) was evaluated with each tool to assess the reliabilities of the tools and feasibility, given raters and number of assessments needed. Participant experience of process determined by questionnaire. 171 GP registrars and their trainers, drawn from nine deaneries (representing all four countries in the UK), participated. The ability of each tool to differentiate between doctors (reliability) was assessed using generalisability theory. Decision studies were then conducted to determine the number of observations required to achieve an acceptably high reliability for "high-stakes assessment" using each instrument. Finally, descriptive statistics were used to summarise participants' ratings of their experience using these tools. Multi-source feedback from colleagues and patient feedback on consultations emerged as the two methods most likely to offer a reliable and feasible opinion of workplace performance. Reliability co-efficients of 0.8 were attainable with 41 CARE Measure patient questionnaires and six clinical and/or five non-clinical colleagues per doctor when assessed on two occasions. For the other four methods tested, 10 or more assessors were required per doctor in order to achieve a reliable assessment, making the feasibility of their use in high-stakes assessment extremely low. Participant feedback did not raise any major concerns regarding the acceptability, feasibility, or educational impact of the tools. The combination of patient and colleague views of doctors' performance, coupled with reliable competence measures, may offer a suitable evidence-base on which to monitor progress and

  14. Reliability of the modified Paediatric Evaluation of Disability Inventory, Dutch version (PEDI-NL) for children with cerebral palsy and cerebral visual impairment

    NARCIS (Netherlands)

    Salavati, M.; Waninge, A.; Rameckers, E.A.A.; Blecourt, A.C. de; Krijnen, W.P.; Steenbergen, B.; Schans, C.P. van der

    2015-01-01

    PURPOSE: The aims of this study were to adapt the Paediatric Evaluation of Disability Inventory, Dutch version (PEDI-NL) for children with cerebral visual impairment (CVI) and cerebral palsy (CP) and determine test-retest and inter-respondent reliability. METHOD: The Delphi method was used to gain

  15. Reliability Evaluation of a Concrete Crown Wall on a Rubble Mound Breakwater considering Sliding Failure, Overturning and Rupture Failure of the Foundation

    DEFF Research Database (Denmark)

    Christiani, E.; Sørensen, Jørgen S.; Burcharth, Hans F.

    1994-01-01

    Wave breaking forces on a crown wall will be determined from Burcharth's wave force formula. Based on these formulae a deterministic design is found. A reliability evaluation of the same structure is then performed using a level II FORM analysis. In this, only the failure modes sliding, overturning
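
    For readers unfamiliar with level II reliability analysis, the snippet below computes the Hasofer-Lind reliability index for a deliberately simple linear limit state g = R - S with independent normal resistance and load. The numbers are invented placeholders; the actual study treats sliding, overturning and foundation-rupture modes driven by Burcharth's wave force formula.

```python
import numpy as np
from scipy import stats

# Assumed (illustrative) first/second moments of resistance and load.
mu_R, sd_R = 1200.0, 150.0    # sliding resistance (kN)
mu_S, sd_S = 800.0, 200.0     # wave-induced load (kN)

# For g = R - S with independent normals, the Hasofer-Lind index has a
# closed form; FORM generalizes this to nonlinear, correlated cases.
beta = (mu_R - mu_S) / np.hypot(sd_R, sd_S)
pf = stats.norm.cdf(-beta)                    # failure probability
print(f"reliability index beta = {beta:.2f}, P_f = {pf:.2e}")
```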

  16. Reliability of the modified Paediatric Evaluation of Disability Inventory, Dutch version (PEDI-NL) for children with cerebral palsy and cerebral visual impairment

    NARCIS (Netherlands)

    Salavati, M.; Waninge, A.; Rameckers, E. A. A.; de Blecourt, A. C. E.; Krijnen, W. P.; Steenbergen, B.; van der Schans, C. P.

    Purpose: The aims of this study were to adapt the Paediatric Evaluation of Disability Inventory, Dutch version (PEDI-NL) for children with cerebral visual impairment (CVI) and cerebral palsy (CP) and determine test-retest and inter-respondent reliability. Method: The Delphi method was used to gain

  17. Zero crossings properties of a narrow band process to determine the reliability of a two frequency encoding - Application to evaluating its autocorrelation envelope

    Science.gov (United States)

    Hay, J.

    1980-09-01

    The development of an appropriate mathematical model of narrow-band Gaussian noise zero crossings was required in order to facilitate the calibration of electronic equipment detecting two-frequency codes used on single-track tape recordings with a wide dynamic range, and to determine its decoding reliability. The possibility of evaluating the autocorrelation envelope is also mentioned.

  18. Zero crossings properties of a narrow-band process to determine the reliability of a two-frequency encoding - Application to evaluating its autocorrelation envelope

    Science.gov (United States)

    Hay, J.

    1980-08-01

    The development of an appropriate mathematical model of narrow-band Gaussian noise zero crossings was required in order to facilitate the calibration of electronic equipment detecting two-frequency codes used on single-track tape recordings with a wide dynamic range, and to determine its decoding reliability. The possibility of evaluating the autocorrelation envelope is also mentioned.

  19. A Pilot Evaluation of the Test-Retest Score Reliability of the Dimensions of Mastery Questionnaire in Preschool-Aged Children

    Science.gov (United States)

    Igoe, Deirdre; Peralta, Christopher; Jean, Lindsey; Vo, Sandra; Yep, Linda Ngan; Zabjek, Karl; Wright, F. Virginia

    2011-01-01

    Preschool-aged children continually learn new skills and perfect existing ones. "Mastery motivation" is theorized to be a personality trait linked to skill learning. The Dimensions of Mastery Questionnaire (DMQ) quantifies mastery motivation. This pilot study evaluated DMQ test-retest score reliability (preschool-version) and included…

  20. Reliability and validity of Functional Capacity Evaluation methods: a systematic review with reference to Blankenship system, Ergos work simulator, Ergo-Kit and Isernhagen work system

    NARCIS (Netherlands)

    Gouttebarge, Vincent; Wind, Haije; Kuijer, P. Paul F. M.; Frings-Dresen, Monique H. W.

    2004-01-01

    Objectives: Functional Capacity Evaluation methods (FCE) claim to measure the functional physical ability of a person to perform work-related tasks. The purpose of the present study was to systematically review the literature on the reliability and validity of four FCEs: the Blankenship system (BS),