WorldWideScience

Sample records for evaluate imputation reliability

  1. Effect of imputing markers from a low-density chip on the reliability of genomic breeding values in Holstein populations

    DEFF Research Database (Denmark)

    Dassonneville, R; Brøndum, Rasmus Froberg; Druet, T

    2011-01-01

    The purpose of this study was to investigate the imputation error and loss of reliability of direct genomic values (DGV) or genomically enhanced breeding values (GEBV) when using genotypes imputed from a 3,000-marker single nucleotide polymorphism (SNP) panel to a 50,000-marker SNP panel. Data ... of missing markers and prediction of breeding values were performed using 2 different reference populations in each country: either a national reference population or a combined EuroGenomics reference population. Validation for accuracy of imputation and genomic prediction was done based on national test ... with a national reference data set gave an absolute loss of 0.05 in mean reliability of GEBV in the French study, whereas a loss of 0.03 was obtained for reliability of DGV in the Nordic study. When genotypes were imputed using the EuroGenomics reference, a loss of 0.02 in mean reliability of GEBV was detected ...

  2. An Overview and Evaluation of Recent Machine Learning Imputation Methods Using Cardiac Imaging Data.

    Science.gov (United States)

    Liu, Yuzhe; Gopalakrishnan, Vanathi

    2017-03-01

    Many clinical research datasets have a large percentage of missing values that directly impacts their usefulness in yielding high accuracy classifiers when used for training in supervised machine learning. While missing value imputation methods have been shown to work well with smaller percentages of missing values, their ability to impute sparse clinical research data can be problem specific. We previously attempted to learn quantitative guidelines for ordering cardiac magnetic resonance imaging during the evaluation for pediatric cardiomyopathy, but missing data significantly reduced our usable sample size. In this work, we sought to determine if increasing the usable sample size through imputation would allow us to learn better guidelines. We first review several machine learning methods for estimating missing data. Then, we apply four popular methods (mean imputation, decision tree, k-nearest neighbors, and self-organizing maps) to a clinical research dataset of pediatric patients undergoing evaluation for cardiomyopathy. Using Bayesian Rule Learning (BRL) to learn ruleset models, we compared the performance of imputation-augmented models versus unaugmented models. We found that all four imputation-augmented models performed similarly to unaugmented models. While imputation did not improve performance, it did provide evidence for the robustness of our learned models.
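
    A minimal sketch of this kind of comparison, assuming synthetic data in place of the cardiac dataset and logistic regression in place of Bayesian Rule Learning: impute with two of the four methods mentioned (mean and k-nearest neighbors) and compare cross-validated performance.

      # Compare classifiers trained on data completed by two imputation methods.
      import numpy as np
      from sklearn.datasets import make_classification
      from sklearn.impute import SimpleImputer, KNNImputer
      from sklearn.linear_model import LogisticRegression
      from sklearn.model_selection import cross_val_score
      from sklearn.pipeline import make_pipeline

      rng = np.random.default_rng(0)
      X, y = make_classification(n_samples=300, n_features=10, random_state=0)
      X[rng.random(X.shape) < 0.2] = np.nan   # knock out ~20% of entries

      for name, imputer in [("mean", SimpleImputer(strategy="mean")),
                            ("kNN", KNNImputer(n_neighbors=5))]:
          model = make_pipeline(imputer, LogisticRegression(max_iter=1000))
          scores = cross_val_score(model, X, y, cv=5)
          print(f"{name} imputation: accuracy {scores.mean():.3f}")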

  3. A comprehensive evaluation of popular proteomics software workflows for label-free proteome quantification and imputation.

    Science.gov (United States)

    Välikangas, Tommi; Suomi, Tomi; Elo, Laura L

    2017-05-31

    Label-free mass spectrometry (MS) has developed into an important tool applied in various fields of biological and life sciences. Several software packages exist to process the raw MS data into quantified protein abundances, including open source and commercial solutions. Each package includes a set of unique algorithms for different tasks of the MS data processing workflow. While many of these algorithms have been compared separately, a thorough and systematic evaluation of their overall performance is missing. Moreover, systematic information is lacking about the amount of missing values produced by the different proteomics software packages and the capabilities of different data imputation methods to account for them. In this study, we evaluated the performance of five popular quantitative label-free proteomics software workflows using four different spike-in data sets. Our extensive testing included the number of proteins quantified and the number of missing values produced by each workflow, the accuracy of detecting differential expression and logarithmic fold change, and the effect of different imputation and filtering methods on the differential expression results. We found that the Progenesis software performed consistently well in the differential expression analysis and produced few missing values. The missing values produced by the other software packages decreased their performance, but this difference could be mitigated using proper data filtering or imputation methods. Among the imputation methods, we found that the local least squares (lls) regression imputation consistently increased the performance of the software in the differential expression analysis, and a combination of both data filtering and local least squares imputation increased performance the most in the tested data sets. © The Author 2017. Published by Oxford University Press.

  4. Multiple imputation strategies for zero-inflated cost data in economic evaluations : which method works best?

    NARCIS (Netherlands)

    MacNeil Vroomen, Janet; Eekhout, Iris; Dijkgraaf, Marcel G; van Hout, Hein; de Rooij, Sophia E; Heymans, Martijn W; Bosmans, Judith E

    2016-01-01

    Cost and effect data often contain missing values because economic evaluations are frequently added onto clinical studies in which cost data are rarely the primary outcome. The objective of this article was to investigate which multiple imputation strategy is most appropriate to use for missing ...

  5. Evaluation and application of summary statistic imputation to discover new height-associated loci.

    Science.gov (United States)

    Rüeger, Sina; McDaid, Aaron; Kutalik, Zoltán

    2018-05-01

    As most of the heritability of complex traits is attributed to common and low-frequency genetic variants, imputing them by combining genotyping chips and large sequenced reference panels is the most cost-effective approach to discover the genetic basis of these traits. Association summary statistics from genome-wide meta-analyses are available for hundreds of traits. Updating these to ever-increasing reference panels is very cumbersome as it requires reimputation of the genetic data, rerunning the association scan, and meta-analysing the results. A much more efficient method is to directly impute the summary statistics, termed summary statistics imputation, which we improved to accommodate variable sample size across SNVs. Its performance relative to genotype imputation and its practical utility have not yet been fully investigated. To this end, we compared the two approaches on real (genotyped and imputed) data from 120K samples from the UK Biobank and show that genotype imputation boasts a 3- to 5-fold lower root-mean-square error and better distinguishes true associations from null ones: we observed the largest differences in power for variants with low minor allele frequency and low imputation quality. For fixed false positive rates of 0.001, 0.01 and 0.05, using summary statistics imputation yielded a decrease in statistical power by 9, 43 and 35%, respectively. To test its capacity to discover novel associations, we applied summary statistics imputation to the GIANT height meta-analysis summary statistics covering HapMap variants, and identified 34 novel loci, 19 of which replicated using data in the UK Biobank. Additionally, we successfully replicated 55 out of the 111 variants published in an exome chip study. Our study demonstrates that summary statistics imputation is a very efficient and cost-effective way to identify and fine-map trait-associated loci. Moreover, the ability to impute summary statistics is important for follow-up analyses, such as Mendelian ...
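
    A minimal numpy sketch of summary statistics imputation in its basic conditional-expectation form, where the z-score of an untyped variant is predicted from typed z-scores and the local LD (correlation) matrix. The LD values and z-scores are toy numbers, and the variable-sample-size refinement described above is omitted.

      import numpy as np

      # LD correlations among three typed SNVs (C_tt) and between the
      # untyped target and the typed ones (c_ut).
      C_tt = np.array([[1.0, 0.6, 0.3],
                       [0.6, 1.0, 0.5],
                       [0.3, 0.5, 1.0]])
      c_ut = np.array([0.7, 0.5, 0.2])
      z_t = np.array([4.1, 3.2, 1.0])          # observed association z-scores

      lam = 0.1                                # ridge term for numerical stability
      z_hat = c_ut @ np.linalg.solve(C_tt + lam * np.eye(3), z_t)
      r2_imp = c_ut @ np.linalg.solve(C_tt + lam * np.eye(3), c_ut)  # imputation quality
      print(f"imputed z = {z_hat:.2f}, expected r^2 = {r2_imp:.2f}")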

  6. Evaluating Imputation Algorithms for Low-Depth Genotyping-By-Sequencing (GBS) Data.

    Directory of Open Access Journals (Sweden)

    Ariel W Chan

    Well-powered genomic studies require genome-wide marker coverage across many individuals. For non-model species with few genomic resources, high-throughput sequencing (HTS) methods, such as Genotyping-By-Sequencing (GBS), offer an inexpensive alternative to array-based genotyping. Although affordable, datasets derived from HTS methods suffer from sequencing error, alignment errors, and missing data, all of which introduce noise and uncertainty to variant discovery and genotype calling. Under such circumstances, meaningful analysis of the data is difficult. Our primary interest lies in the issue of how one can accurately infer or impute missing genotypes in HTS-derived datasets. Many of the existing genotype imputation algorithms and software packages were primarily developed by and optimized for the human genetics community, a field where a complete and accurate reference genome has been constructed and SNP arrays have, in large part, been the common genotyping platform. We set out to answer two questions: 1) can we use existing imputation methods developed by the human genetics community to impute missing genotypes in datasets derived from non-human species and 2) are these methods, which were developed and optimized to impute ascertained variants, amenable for imputation of missing genotypes at HTS-derived variants? We selected Beagle v.4, a widely used algorithm within the human genetics community with reportedly high accuracy, to serve as our imputation contender. We performed a series of cross-validation experiments, using GBS data collected from the species Manihot esculenta by the Next Generation (NEXTGEN) Cassava Breeding Project. NEXTGEN currently imputes missing genotypes in their datasets using a LASSO-penalized, linear regression method (denoted 'glmnet'). We selected glmnet to serve as a benchmark imputation method for this reason. We obtained estimates of imputation accuracy by masking a subset of observed genotypes, imputing, and ...

  7. Evaluating Imputation Algorithms for Low-Depth Genotyping-By-Sequencing (GBS) Data.

    Science.gov (United States)

    Chan, Ariel W; Hamblin, Martha T; Jannink, Jean-Luc

    2016-01-01

    Well-powered genomic studies require genome-wide marker coverage across many individuals. For non-model species with few genomic resources, high-throughput sequencing (HTS) methods, such as Genotyping-By-Sequencing (GBS), offer an inexpensive alternative to array-based genotyping. Although affordable, datasets derived from HTS methods suffer from sequencing error, alignment errors, and missing data, all of which introduce noise and uncertainty to variant discovery and genotype calling. Under such circumstances, meaningful analysis of the data is difficult. Our primary interest lies in the issue of how one can accurately infer or impute missing genotypes in HTS-derived datasets. Many of the existing genotype imputation algorithms and software packages were primarily developed by and optimized for the human genetics community, a field where a complete and accurate reference genome has been constructed and SNP arrays have, in large part, been the common genotyping platform. We set out to answer two questions: 1) can we use existing imputation methods developed by the human genetics community to impute missing genotypes in datasets derived from non-human species and 2) are these methods, which were developed and optimized to impute ascertained variants, amenable for imputation of missing genotypes at HTS-derived variants? We selected Beagle v.4, a widely used algorithm within the human genetics community with reportedly high accuracy, to serve as our imputation contender. We performed a series of cross-validation experiments, using GBS data collected from the species Manihot esculenta by the Next Generation (NEXTGEN) Cassava Breeding Project. NEXTGEN currently imputes missing genotypes in their datasets using a LASSO-penalized, linear regression method (denoted 'glmnet'). We selected glmnet to serve as a benchmark imputation method for this reason. We obtained estimates of imputation accuracy by masking a subset of observed genotypes, imputing, and calculating the ...
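
    A minimal sketch of the masking procedure used to estimate imputation accuracy: hide a random subset of observed genotypes, impute them, and measure concordance with the hidden truth. Per-marker mode imputation stands in here for a real engine such as Beagle or glmnet.

      import numpy as np

      rng = np.random.default_rng(1)
      G = rng.choice([0, 1, 2], size=(200, 50), p=[0.5, 0.3, 0.2])  # toy genotypes

      mask = rng.random(G.shape) < 0.10        # hide 10% of observed calls
      G_obs = G.astype(float)
      G_obs[mask] = np.nan

      G_imp = G_obs.copy()
      for j in range(G.shape[1]):              # impute marker by marker
          col = G_obs[:, j]
          vals, counts = np.unique(col[~np.isnan(col)], return_counts=True)
          G_imp[np.isnan(col), j] = vals[np.argmax(counts)]   # per-marker mode

      accuracy = (G_imp[mask] == G[mask]).mean()
      print(f"concordance at masked genotypes: {accuracy:.3f}")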

  8. Evaluating geographic imputation approaches for zip code level data: an application to a study of pediatric diabetes

    Directory of Open Access Journals (Sweden)

    Puett Robin C

    2009-10-01

    Background: There is increasing interest in the study of place effects on health, facilitated in part by geographic information systems. Incomplete or missing address information reduces geocoding success. Several geographic imputation methods have been suggested to overcome this limitation. Accuracy evaluation of these methods can be focused at the level of individuals and at higher group levels (e.g., spatial distribution). Methods: We evaluated the accuracy of eight geo-imputation methods for address allocation from ZIP codes to census tracts at the individual and group level. The spatial apportioning approaches underlying the imputation methods included four fixed (deterministic) and four random (stochastic) allocation methods using land area, total population, population under age 20, and race/ethnicity as weighting factors. Data included more than 2,000 geocoded cases of diabetes mellitus among youth aged 0-19 in four U.S. regions. The imputed distribution of cases across tracts was compared to the true distribution using a chi-squared statistic. Results: At the individual level, population-weighted (total or under age 20) fixed allocation showed the greatest level of accuracy, with correct census tract assignments averaging 30.01% across all regions, followed by the race/ethnicity-weighted random method (23.83%). The true distribution of cases across census tracts was that 58.2% of tracts exhibited no cases, 26.2% had one case, 9.5% had two cases, and less than 3% had three or more. This distribution was best captured by random allocation methods, with no significant differences (p-value > 0.90). However, significant differences in distributions based on fixed allocation methods were found (p-value ...). Conclusion: Fixed imputation methods seemed to yield the greatest accuracy at the individual level, suggesting their use for studies of area-level environmental exposures. However, fixed methods result in artificial clusters in single census tracts. For studies ...
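
    A minimal sketch of the two allocation families compared above, assuming illustrative tract populations within a single ZIP code: a fixed (deterministic) rule always assigns a case to the highest-weight tract, while a random (stochastic) rule draws the tract in proportion to the weights.

      import numpy as np

      rng = np.random.default_rng(2)
      pop = np.array([1200, 400, 2400, 800])   # tract populations within one ZIP
      w = pop / pop.sum()                      # population weights

      fixed_tract = int(np.argmax(w))          # fixed: always the highest-weight tract
      random_tracts = rng.choice(len(w), size=1000, p=w)  # random: tract ~ weights

      print("fixed allocation -> tract", fixed_tract)
      print("random allocation frequencies:",
            np.bincount(random_tracts, minlength=len(w)) / 1000)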

  9. Defining, evaluating, and removing bias induced by linear imputation in longitudinal clinical trials with MNAR missing data.

    Science.gov (United States)

    Helms, Ronald W; Reece, Laura Helms; Helms, Russell W; Helms, Mary W

    2011-03-01

    Missing not at random (MNAR) post-dropout missing data from a longitudinal clinical trial result in the collection of "biased data," which leads to biased estimators and tests of corrupted hypotheses. In a full rank linear model analysis the model equation, E[Y] = Xβ, leads to the definition of the primary parameter β = (X'X)^(-1) X' E[Y], and the definition of linear secondary parameters of the form θ = Lβ = L (X'X)^(-1) X' E[Y], including, for example, a parameter representing a "treatment effect." These parameters depend explicitly on E[Y], which raises the questions: What is E[Y] when some elements of the incomplete random vector Y are not observed and MNAR, or when such a Y is "completed" via imputation? We develop a rigorous, readily interpretable definition of E[Y] in this context that leads directly to definitions of the estimator β̂, the biases Bias(β̂) = E[β̂] - β and Bias(θ̂) = E[θ̂] - Lβ, and the extent of hypothesis corruption. These definitions provide a basis for evaluating, comparing, and removing biases induced by various linear imputation methods for MNAR incomplete data from longitudinal clinical trials. Linear imputation methods use earlier data from a subject to impute values for post-dropout missing values and include "Last Observation Carried Forward" (LOCF) and "Baseline Observation Carried Forward" (BOCF), among others. We illustrate the methods of evaluating, comparing, and removing biases and the effects of testing corresponding corrupted hypotheses via a hypothetical but very realistic longitudinal analgesic clinical trial.
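
    A minimal simulation sketch, assuming an illustrative declining pain score, of how LOCF can bias an endpoint estimate when dropout is MNAR: subjects whose current (unobserved) value is high drop out and their last value is carried forward.

      import numpy as np

      rng = np.random.default_rng(3)
      n, visits = 5000, 5
      base = rng.normal(60, 10, n)
      Y = base[:, None] - 5 * np.arange(visits) + rng.normal(0, 5, (n, visits))

      locf = Y.copy()
      for t in range(1, visits):
          dropped = Y[:, t] > 65               # MNAR: high current value -> dropout
          locf[dropped, t:] = locf[dropped, t - 1][:, None]   # carry forward

      print(f"true endpoint mean: {Y[:, -1].mean():.1f}")
      print(f"LOCF endpoint mean: {locf[:, -1].mean():.1f} "
            f"(bias = {locf[:, -1].mean() - Y[:, -1].mean():+.1f})")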

  10. Reliability evaluation of power systems

    CERN Document Server

    Billinton, Roy

    1996-01-01

    The Second Edition of this well-received textbook presents over a decade of new research in power system reliability, while maintaining the general concept, structure, and style of the original volume. This edition features new chapters on the growing areas of Monte Carlo simulation and reliability economics. In addition, chapters cover the latest developments in techniques and their application to real problems. The text also explores the progress occurring in the structure, planning, and operation of real power systems due to changing ownership, regulation, and access. This work serves as a companion volume to Reliability Evaluation of Engineering Systems: Second Edition (1992).

  11. Missing data imputation: focusing on single imputation.

    Science.gov (United States)

    Zhang, Zhongheng

    2016-01-01

    Complete case analysis is widely used for handling missing data, and it is the default method in many statistical packages. However, this method may introduce bias, and some useful information will be omitted from analysis. Therefore, many imputation methods have been developed to fill the gap. The present article focuses on single imputation. Imputation with the mean, median or mode is simple but, like complete case analysis, can bias estimates of the mean and standard deviation; furthermore, it ignores relationships with other variables. Regression imputation can preserve the relationship between the variable with missing values and other variables. Many sophisticated methods exist to handle missing values in longitudinal data. This article focuses primarily on how to implement R code to perform single imputation, while avoiding complex mathematical calculations.
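
    The article implements these methods in R; a minimal sketch of the same single-imputation ideas in Python, with toy data, mean/median/mode filling and a regression imputation of one variable from another:

      import numpy as np
      import pandas as pd
      from sklearn.linear_model import LinearRegression

      df = pd.DataFrame({"age": [25, 30, np.nan, 40, 35, np.nan],
                         "weight": [60, 72, 80, np.nan, 68, 75]})

      mean_filled = df.fillna(df.mean())         # simple; shrinks the variance
      median_filled = df.fillna(df.median())
      mode_filled = df.fillna(df.mode().iloc[0])

      # Regression imputation: predict missing weight from observed age.
      known = df.dropna()
      reg = LinearRegression().fit(known[["age"]], known["weight"])
      to_fill = df["weight"].isna() & df["age"].notna()
      df.loc[to_fill, "weight"] = reg.predict(df.loc[to_fill, ["age"]])
      print(df)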

  12. Reliability evaluation programmable logic devices

    International Nuclear Information System (INIS)

    Srivani, L.; Murali, N.; Thirugnana Murthy, D.; Satya Murty, S.A.V.

    2014-01-01

    Programmable Logic Devices (PLDs) are widely used as basic building modules in high integrity systems, considering their robust features such as gate density, performance, speed etc. PLDs are used to implement digital designs such as bus interface logic, control logic, sequencing logic, glue logic etc. Due to semiconductor evolution, new PLDs with state-of-the-art features keep arriving on the market. Since these devices are reliable as per the manufacturer's specification, they have been used in the design of safety systems. However, due to their short market life, the availability of performance data is limited, so evaluating a PLD before deploying it in a safety system is very important. This paper presents a survey on the use of PLDs in the nuclear domain and the steps involved in the evaluation of PLDs using Quantitative Accelerated Life Testing. (author)

  13. Interim reliability evaluation program (IREP)

    International Nuclear Information System (INIS)

    Carlson, D.D.; Murphy, J.A.

    1981-01-01

    The Interim Reliability Evaluation Program (IREP), sponsored by the Office of Nuclear Regulatory Research of the US Nuclear Regulatory Commission, is currently applying probabilistic risk analysis techniques to two PWR and two BWR type power plants. Emphasis was placed on the systems analysis portion of the risk assessment, as opposed to accident phenomenology or consequence analysis, since the identification of risk significant plant features was of primary interest. Traditional event tree/fault tree modeling was used for the analysis. However, the study involved a more thorough investigation of transient initiators and of support system faults than studies in the past and substantially improved techniques were used to quantify accident sequence frequencies. This study also attempted to quantify the potential for operator recovery actions in the course of each significant accident

  14. Evaluation of MHTGR fuel reliability

    International Nuclear Information System (INIS)

    Wichner, R.P.; Barthold, W.P.

    1992-07-01

    Modular High-Temperature Gas-Cooled Reactor (MHTGR) concepts that house the reactor vessel in a tight but unsealed reactor building place heightened importance on the reliability of the fuel particle coatings as fission product barriers. Though accident consequence analyses continue to show favorable results, the increased dependence on one type of barrier, in addition to a number of other factors, has caused the Nuclear Regulatory Commission (NRC) to consider conservative assumptions regarding fuel behavior. For this purpose, the concept termed "weak fuel" has been proposed on an interim basis. "Weak fuel" is a penalty imposed on consequence analyses whereby the fuel is assumed to respond less favorably to environmental conditions than predicted by behavioral models. The rationale for adopting this penalty, as well as conditions that would permit its reduction or elimination, are examined in this report. The evaluation includes an examination of possible fuel-manufacturing defects, quality-control procedures for defect detection, and the mechanisms by which fuel defects may lead to failure.

  15. Multiple Imputation of Groundwater Data to Evaluate Spatial and Temporal Anthropogenic Influences on Subsurface Water Fluxes in Los Angeles, CA

    Science.gov (United States)

    Manago, K. F.; Hogue, T. S.; Hering, A. S.

    2014-12-01

    In the City of Los Angeles, groundwater accounts for 11% of the total water supply on average, and 30% during drought years. Due to ongoing drought in California, increased reliance on local water supply highlights the need for better understanding of regional groundwater dynamics and estimating sustainable groundwater supply. However, in an urban setting, such as Los Angeles, understanding or modeling groundwater levels is extremely complicated due to various anthropogenic influences such as groundwater pumping, artificial recharge, landscape irrigation, leaking infrastructure, seawater intrusion, and extensive impervious surfaces. This study analyzes anthropogenic effects on groundwater levels using groundwater monitoring well data from the County of Los Angeles Department of Public Works. The groundwater data is irregularly sampled with large gaps between samples, resulting in a sparsely populated dataset. A multiple imputation method is used to fill the missing data, allowing for multiple ensembles and improved error estimates. The filled data is interpolated to create spatial groundwater maps utilizing information from all wells. The groundwater data is evaluated at a monthly time step over the last several decades to analyze the effect of land cover and identify other influencing factors on groundwater levels spatially and temporally. Preliminary results show irrigated parks have the largest influence on groundwater fluctuations, resulting in large seasonal changes, exceeding changes in spreading grounds. It is assumed that these fluctuations are caused by watering practices required to sustain non-native vegetation. Conversely, high intensity urbanized areas resulted in muted groundwater fluctuations and behavior decoupling from climate patterns. These results provide improved understanding of anthropogenic effects on groundwater levels, in addition to providing high-quality datasets for validation of regional groundwater models.

  16. Reliability evaluation for offshore wind farms

    DEFF Research Database (Denmark)

    Zhao, Menghua; Blåbjerg, Frede; Chen, Zhe

    2005-01-01

    In this paper, a new reliability index - Loss Of Generation Ratio Probability (LOGRP) - is proposed for evaluating the reliability of an electrical system for offshore wind farms, which emphasizes the design of wind farms rather than the adequacy for specific load demand. A practical method ... to calculate LOGRP of offshore wind farms is proposed and evaluated ...

  17. Avoid Filling Swiss Cheese with Whipped Cream; Imputation Techniques and Evaluation Procedures for Cross-Country Time Series

    OpenAIRE

    Michael Weber; Michaela Denk

    2011-01-01

    International organizations collect data from national authorities to create multivariate cross-sectional time series for their analyses. As data from countries with not yet well-established statistical systems may be incomplete, the bridging of data gaps is a crucial challenge. This paper investigates data structures and missing data patterns in the cross-sectional time series framework, reviews missing value imputation techniques used for micro data in official statistics, and discusses the...

  18. Classifier Fusion With Contextual Reliability Evaluation.

    Science.gov (United States)

    Liu, Zhunga; Pan, Quan; Dezert, Jean; Han, Jun-Wei; He, You

    2018-05-01

    Classifier fusion is an efficient strategy to improve the classification performance for complex pattern recognition problems. In practice, the multiple classifiers to combine can have different reliabilities, and proper reliability evaluation plays an important role in the fusion process for getting the best classification performance. We propose a new method for classifier fusion with contextual reliability evaluation (CF-CRE) based on inner reliability and relative reliability concepts. The inner reliability, represented by a matrix, characterizes the probability of the object belonging to one class when it is classified to another class. The elements of this matrix are estimated from the k-nearest neighbors of the object. A cautious discounting rule is developed under the belief functions framework to revise the classification result according to the inner reliability. The relative reliability is evaluated based on a new incompatibility measure which allows the level of conflict between the classifiers to be reduced by applying the classical evidence discounting rule to each classifier before their combination. The inner reliability and relative reliability capture different aspects of the classification reliability. The discounted classification results are combined with Dempster-Shafer's rule for the final class decision-making support. The performance of CF-CRE has been evaluated and compared with those of the main classical fusion methods using real data sets. The experimental results show that CF-CRE can produce substantially higher accuracy than other fusion methods in general. Moreover, CF-CRE is robust to changes in the number of nearest neighbors chosen for estimating the reliability matrix, which is appealing for applications.
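
    A minimal sketch of the two evidence-theoretic building blocks the method relies on, classical discounting and Dempster's rule, over a two-class frame. The mass values and discount factors are illustrative assumptions, not the inner/relative reliabilities CF-CRE actually estimates, and the paper's cautious discounting rule is not reproduced here.

      from itertools import product

      THETA = frozenset({"a", "b"})

      def discount(m, alpha):
          """Classical discounting: scale masses by alpha, move the rest to THETA."""
          out = {s: alpha * v for s, v in m.items()}
          out[THETA] = out.get(THETA, 0.0) + (1.0 - alpha)
          return out

      def dempster(m1, m2):
          """Dempster's rule of combination with conflict normalization."""
          combined, conflict = {}, 0.0
          for (s1, v1), (s2, v2) in product(m1.items(), m2.items()):
              inter = s1 & s2
              if inter:
                  combined[inter] = combined.get(inter, 0.0) + v1 * v2
              else:
                  conflict += v1 * v2
          return {s: v / (1.0 - conflict) for s, v in combined.items()}

      # Two classifiers' outputs as mass functions, with different reliabilities.
      m1 = {frozenset({"a"}): 0.8, frozenset({"b"}): 0.2}
      m2 = {frozenset({"b"}): 0.7, frozenset({"a"}): 0.3}
      fused = dempster(discount(m1, 0.9), discount(m2, 0.5))
      print({tuple(sorted(s)): round(v, 3) for s, v in fused.items()})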

  19. Multi-generational imputation of single nucleotide polymorphism marker genotypes and accuracy of genomic selection.

    Science.gov (United States)

    Toghiani, S; Aggrey, S E; Rekaya, R

    2016-07-01

    Availability of high-density single nucleotide polymorphism (SNP) genotyping platforms provided unprecedented opportunities to enhance breeding programmes in livestock, poultry and plant species, and to better understand the genetic basis of complex traits. Using this genomic information, genomic breeding values (GEBVs) can be calculated, and these are more accurate than conventional breeding values. The superiority of genomic selection is possible only when high-density SNP panels are used to track genes and QTLs affecting the trait. Unfortunately, even with the continuous decrease in genotyping costs, only a small fraction of the population has been genotyped with these high-density panels. It is often the case that a larger portion of the population is genotyped with low-density and low-cost SNP panels and then imputed to a higher density. Accuracy of SNP genotype imputation tends to be high when minimum requirements are met. Nevertheless, a certain rate of genotype imputation errors is unavoidable. Thus, it is reasonable to assume that the accuracy of GEBVs will be affected by imputation errors, especially their cumulative effects over time. To evaluate the impact of multi-generational selection on the accuracy of SNP genotype imputation and the reliability of resulting GEBVs, a simulation was carried out under varying schemes for updating the reference population, distance between the reference and testing sets, and the approach used for the estimation of GEBVs. Using fixed reference populations, imputation accuracy decayed by about 0.5% per generation. In fact, after 25 generations, the accuracy was only 7% lower than in the first generation. When the reference population was updated by either 1% or 5% of the top animals in the previous generations, the decay of imputation accuracy was substantially reduced. These results indicate that low-density panels are useful, especially when the generational interval between reference and testing population is small. As the generational interval ...

  20. Scale Reliability Evaluation with Heterogeneous Populations

    Science.gov (United States)

    Raykov, Tenko; Marcoulides, George A.

    2015-01-01

    A latent variable modeling approach for scale reliability evaluation in heterogeneous populations is discussed. The method can be used for point and interval estimation of reliability of multicomponent measuring instruments in populations representing mixtures of an unknown number of latent classes or subpopulations. The procedure is helpful also…

  1. Gaussian mixture clustering and imputation of microarray data.

    Science.gov (United States)

    Ouyang, Ming; Welsh, William J; Georgopoulos, Panos

    2004-04-12

    In microarray experiments, missing entries arise from blemishes on the chips. In large-scale studies, virtually every chip contains some missing entries and more than 90% of the genes are affected. Many analysis methods require a full set of data. Either those genes with missing entries are excluded, or the missing entries are filled with estimates prior to the analyses. This study compares methods of missing value estimation. Two evaluation metrics of imputation accuracy are employed. First, the root mean squared error measures the difference between the true values and the imputed values. Second, the number of mis-clustered genes measures the difference between clustering with true values and that with imputed values; it examines the bias introduced by imputation to clustering. The Gaussian mixture clustering with model averaging imputation is superior to all other imputation methods, according to both evaluation metrics, on both time-series (correlated) and non-time series (uncorrelated) data sets.

  2. MOV reliability evaluation and periodic verification scheduling

    Energy Technology Data Exchange (ETDEWEB)

    Bunte, B.D.

    1996-12-01

    The purpose of this paper is to establish a periodic verification testing schedule based on the expected long-term reliability of gate or globe motor-operated valves (MOVs). The methodology in this position paper determines the nominal (best estimate) design margin for any MOV based on the best available information pertaining to the MOV's design requirements, design parameters, existing hardware design, and present setup. The uncertainty in this margin is then determined using statistical means. By comparing the nominal margin to the uncertainty, the reliability of the MOV is estimated. The methodology is appropriate for evaluating the reliability of MOVs in the GL 89-10 program. It may be used following periodic testing to evaluate and trend MOV performance and reliability. It may also be used to evaluate the impact of proposed modifications and maintenance activities such as packing adjustments. In addition, it may be used to assess the impact of new generic information which affects safety-related MOVs.

  3. MOV reliability evaluation and periodic verification scheduling

    International Nuclear Information System (INIS)

    Bunte, B.D.

    1996-01-01

    The purpose of this paper is to establish a periodic verification testing schedule based on the expected long-term reliability of gate or globe motor-operated valves (MOVs). The methodology in this position paper determines the nominal (best estimate) design margin for any MOV based on the best available information pertaining to the MOV's design requirements, design parameters, existing hardware design, and present setup. The uncertainty in this margin is then determined using statistical means. By comparing the nominal margin to the uncertainty, the reliability of the MOV is estimated. The methodology is appropriate for evaluating the reliability of MOVs in the GL 89-10 program. It may be used following periodic testing to evaluate and trend MOV performance and reliability. It may also be used to evaluate the impact of proposed modifications and maintenance activities such as packing adjustments. In addition, it may be used to assess the impact of new generic information which affects safety-related MOVs.

  4. Evaluation criteria of structural steel reliability

    International Nuclear Information System (INIS)

    Zav'yalov, A.S.

    1980-01-01

    Different low-carbon and medium-carbon structural steels are investigated. It is stated that the criteria for evaluating steel reliability depend on the fracture mode, since steel suffers brittle fracture under the influence of the stresses (despite their great variety) arising in articles during production and operation. A fibrous steel fracture at the given temperature and article thickness indicates high ductility and toughness, so that brittle fractures are impossible. Brittle fractures take place in the case of a crystalline or mixed fracture with a predominant crystalline component. Methods that evaluate the structural strength of articles or samples differing greatly from real articles in thickness (diameter), or that are used at temperatures higher than possible operation temperatures, cannot serve as reliability evaluation criteria, because at a greater thickness (diameter) and lower operation temperatures the steel fracture and strain mode can change, resulting in a sharp degradation of reliability.

  5. Composite reliability evaluation for transmission network planning

    Directory of Open Access Journals (Sweden)

    Jiashen Teh

    2018-01-01

    As the penetration of wind power into the power system increases, the ability to assess the reliability impact of such interaction becomes more important. Composite reliability evaluations involving wind energy provide ample opportunities for assessing the benefits of different wind farm connection points. A connection to a weak area of the transmission network will require network reinforcement for absorbing the additional wind energy. Traditionally, reinforcements are performed by constructing new transmission corridors. However, a new state-of-the-art technology such as the dynamic thermal rating (DTR) system provides a new reinforcement strategy, and this requires a new reliability assessment method. This paper demonstrates a methodology for assessing the cost and the reliability of network reinforcement strategies by considering DTR systems when large-scale wind farms are connected to the existing power network. Sequential Monte Carlo simulations were performed, and all DTRs and wind speed were simulated using the auto-regressive moving average (ARMA) model. Various reinforcement strategies were assessed from their cost and reliability aspects. Practical industrial standards are used as guidelines when assessing costs. Due to this, the proposed methodology is able to determine the optimal reinforcement strategies when both cost and reliability requirements are considered.
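
    A minimal sketch of one ingredient of such a study: generating a synthetic hourly wind-speed series from an ARMA-type recursion for use inside sequential Monte Carlo sampling. The coefficients, mean, and scale are illustrative assumptions, not fitted values.

      import numpy as np

      rng = np.random.default_rng(4)
      phi, theta, sigma = 0.9, 0.3, 1.0        # AR, MA coefficients, noise std
      mu, scale = 8.0, 2.0                     # mean wind speed (m/s) and spread

      n = 8760                                 # one simulated year, hourly
      e = rng.normal(0.0, sigma, n)
      x = np.zeros(n)
      for t in range(1, n):                    # ARMA(1,1) recursion
          x[t] = phi * x[t - 1] + e[t] + theta * e[t - 1]

      wind = np.clip(mu + scale * x / x.std(), 0.0, None)   # rescale, no negatives
      print(f"mean {wind.mean():.2f} m/s, max {wind.max():.2f} m/s")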

  6. Public Undertakings and Imputability

    DEFF Research Database (Denmark)

    Ølykke, Grith Skovgaard

    2013-01-01

    In this article, the issue of imputability to the State of public undertakings' decision-making is analysed and discussed in the context of the DSBFirst case. DSBFirst is owned by the independent public undertaking DSB and the private undertaking FirstGroup plc and won the contracts in the 2008 ... Oeresund tender for the provision of passenger transport by railway. From the start, the services were provided at a loss, and in the end a part of DSBFirst was wound up. In order to frame the problems illustrated by this case, the jurisprudence-based imputability requirement in the definition of State aid ... in Article 107(1) TFEU is analysed. It is concluded that where the public undertaking transgresses the control system put in place by the State, conditions for imputability are not fulfilled, and it is argued that in the current state of law, there is no conditional link between the level of control ...

  7. Estimating the accuracy of geographical imputation

    Directory of Open Access Journals (Sweden)

    Boscoe Francis P

    2008-01-01

    Background: To reduce the number of non-geocoded cases, researchers and organizations sometimes include cases geocoded to postal code centroids along with cases geocoded with the greater precision of a full street address. Some analysts then use the postal code to assign information to the cases from finer-level geographies such as a census tract. Assignment is commonly completed using either a postal centroid or a geographical imputation method which assigns a location by using both the demographic characteristics of the case and the population characteristics of the postal delivery area. To date no systematic evaluation of geographical imputation methods ("geo-imputation") has been completed. The objective of this study was to determine the accuracy of census tract assignment using geo-imputation. Methods: Using a large dataset of breast, prostate and colorectal cancer cases reported to the New Jersey Cancer Registry, we determined how often cases were assigned to the correct census tract using alternate strategies of demographic-based geo-imputation, and using assignments obtained from postal code centroids. Assignment accuracy was measured by comparing the tract assigned with the tract originally identified from the full street address. Results: Assigning cases to census tracts using the race/ethnicity population distribution within a postal code resulted in more correctly assigned cases than using postal code centroids. The addition of age characteristics increased the match rates even further. Match rates were highly dependent on both the geographic distribution of race/ethnicity groups and population density. Conclusion: Geo-imputation appears to offer some advantages and no serious drawbacks as compared with the alternative of assigning cases to census tracts based on postal code centroids. For a specific analysis, researchers will still need to consider the potential impact of geocoding quality on their results and evaluate ...

  8. Reliability evaluation of a natural circulation system

    International Nuclear Information System (INIS)

    Jafari, Jalil; D'Auria, Francesco; Kazeminejad, Hossein; Davilu, Hadi

    2003-01-01

    This paper discusses a reliability study performed with reference to a passive thermohydraulic natural circulation (NC) system, named TTL-1. A methodology based on probabilistic techniques has been applied with the main purpose of optimizing the system design. The obtained results have been adopted to estimate the thermal-hydraulic reliability (TH-R) of the same system. A total of 29 relevant parameters (including nominal values and plausible ranges of variation) affecting the design and the NC performance of the TTL-1 loop were identified, and a probability of occurrence was assigned to each value based on expert judgment. Following procedures established for the uncertainty evaluation of thermal-hydraulic system code results, 137 system configurations were selected and each configuration was analyzed via the Relap5 best-estimate code. The reference system configuration and the failure criteria derived from the 'mission' of the passive system were adopted for the evaluation of the system TH-R. Four different definitions of a less-than-unity 'reliability value' (where unity represents the maximum achievable reliability) are proposed for the performance of the selected passive system, which is normally considered fully reliable, i.e. a reliability value equal to one, in typical Probabilistic Safety Assessment (PSA) applications in nuclear reactor safety. The two 'point' TH-R values for the considered NC system were found to be 0.70 and 0.85, i.e. values comparable with the reliability of a pump installed in an 'equivalent' forced circulation (active) system having the same 'mission'. The design optimization study was completed by a regression analysis addressing the output of the 137 calculations: heat losses, undetected leakage, loop length, riser diameter, and equivalent diameter of the test section were found to be the most important parameters leading to the optimal system design and affecting the TH-R. As an added value of this work, the comparison has ...

  9. Cost reduction for web-based data imputation

    KAUST Repository

    Li, Zhixu

    2014-01-01

    Web-based Data Imputation enables the completion of incomplete data sets by retrieving absent field values from the Web. In particular, complete fields can be used as keywords in imputation queries for absent fields. However, due to the ambiguity of these keywords and the data complexity on the Web, different queries may retrieve different answers to the same absent field value. To decide the most probable right answer to each absent field value, existing methods issue quite a few available imputation queries for each absent value and then vote to decide the most probable right answer. As a result, a large number of imputation queries must be issued to fill all absent values in an incomplete data set, which brings a large overhead. In this paper, we work on reducing the cost of Web-based Data Imputation in two aspects: First, we propose a query execution scheme which can secure the most probable right answer to an absent field value by issuing as few imputation queries as possible. Second, we recognize and prune queries that probably will fail to return any answers a priori. Our extensive experimental evaluation shows that our proposed techniques substantially reduce the cost of Web-based Imputation without hurting its high imputation accuracy. © 2014 Springer International Publishing Switzerland.
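
    A minimal sketch of the early-termination idea behind such a cost-reducing query execution scheme, assuming simulated query answers: stop issuing imputation queries for an absent value once the leading answer's margin exceeds the number of queries that remain, since the vote can no longer change.

      import random
      from collections import Counter

      random.seed(5)
      pending = ["Paris"] * 6 + ["Lyon"] * 3 + [None]   # simulated query answers
      random.shuffle(pending)

      votes, issued = Counter(), 0
      while pending:
          answer = pending.pop()               # issue one more imputation query
          issued += 1
          if answer is not None:
              votes[answer] += 1
          if votes:
              leader, top = votes.most_common(1)[0]
              runner_up = max((v for a, v in votes.items() if a != leader), default=0)
              if top - runner_up > len(pending):   # lead exceeds remaining queries
                  print(f"accepted '{leader}' after {issued} of 10 queries")
                  break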

  10. Missing in space: an evaluation of imputation methods for missing data in spatial analysis of risk factors for type II diabetes.

    Science.gov (United States)

    Baker, Jannah; White, Nicole; Mengersen, Kerrie

    2014-11-20

    Spatial analysis is increasingly important for identifying modifiable geographic risk factors for disease. However, spatial health data from surveys are often incomplete, ranging from missing data for only a few variables to missing data for many variables. For spatial analyses of health outcomes, selection of an appropriate imputation method is critical in order to produce the most accurate inferences. We present a cross-validation approach to select between three imputation methods for health survey data with correlated lifestyle covariates, using as a case study type II diabetes mellitus (DM II) risk across 71 Queensland Local Government Areas (LGAs). We compare the accuracy of mean imputation to imputation using multivariate normal and conditional autoregressive prior distributions. The choice of imputation method depends upon the application, and the best-performing method is not necessarily the most complex. Mean imputation was selected as the most accurate method in this application. Selecting an appropriate imputation method for health survey data, after accounting for spatial correlation and correlation between covariates, allows more complete analysis of geographic risk factors for disease, with more confidence in the results to inform public policy decision-making.
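
    A minimal sketch of the cross-validation idea, assuming synthetic correlated covariates: mask values that are actually known, impute them with each candidate method, and keep the method with the lowest error. Mean imputation and scikit-learn's IterativeImputer stand in here for the paper's candidates, which included multivariate normal and conditional autoregressive models.

      import numpy as np
      from sklearn.experimental import enable_iterative_imputer  # noqa: F401
      from sklearn.impute import SimpleImputer, IterativeImputer

      rng = np.random.default_rng(6)
      n = 500
      x1 = rng.normal(0, 1, n)
      X = np.column_stack([x1, 0.8 * x1 + rng.normal(0, 0.6, n)])  # correlated

      mask = rng.random(X.shape) < 0.15        # hold out known values
      X_miss = X.copy()
      X_miss[mask] = np.nan

      for name, imp in [("mean", SimpleImputer(strategy="mean")),
                        ("multivariate", IterativeImputer(random_state=0))]:
          rmse = np.sqrt(((imp.fit_transform(X_miss) - X)[mask] ** 2).mean())
          print(f"{name}: RMSE {rmse:.3f}")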

  11. A quantitative calculation for software reliability evaluation

    Energy Technology Data Exchange (ETDEWEB)

    Lee, Young-Jun; Lee, Jang-Soo [Korea Atomic Energy Research Institute, Daejeon (Korea, Republic of)

    2016-10-15

    To meet regulatory requirements, the software used in the nuclear safety field has been assured through development, validation, safety analysis, and quality assurance activities throughout the entire process life cycle, from the planning phase to the installation phase. A variety of activities, such as quality assurance activities, are also required to improve the quality of the software. However, there are limits to how far such activities can ensure that quality is sufficiently improved. Therefore, efforts to calculate the reliability of software continue, aiming at a quantitative evaluation instead of a qualitative one. In this paper, we propose a quantitative calculation method for software to be used for a specific operation of a digital controller in an NPP. After injecting random faults into the internal space of a developed controller and calculating the ability to detect the injected faults using diagnostic software, we can evaluate the software reliability of a digital controller in an NPP. We calculated the software reliability of the controller using a method that differs from the traditional approach: it calculates the fault detection coverage after injecting faults into the software memory space, rather than assessing activities throughout the life cycle process. We attempt differentiation by creating a new definition of the fault, imitating software faults using hardware, and assigning considerations and weights to the injected faults.

  12. Human Reliability Data Bank: evaluation results

    International Nuclear Information System (INIS)

    Comer, M.K.; Donovan, M.D.; Gaddy, C.D.

    1985-01-01

    The US Nuclear Regulatory Commission (NRC), Sandia National Laboratories (SNL), and General Physics Corporation are conducting a research program to determine the practicality, acceptability, and usefulness of a Human Reliability Data Bank for nuclear power industry probabilistic risk assessment (PRA). As part of this program, a survey was conducted of existing human reliability data banks from other industries, and a detailed concept of a Data Bank for the nuclear industry was developed. Subsequently, a detailed specification for implementing the Data Bank was developed. An evaluation of this specification was conducted and is described in this report. The evaluation tested data treatment, storage, and retrieval using the Data Bank structure, as modified from NUREG/CR-2744, and detailed procedures for data processing and retrieval, developed prior to this evaluation and documented in the test specification. The evaluation consisted of an Operability Demonstration and Evaluation of the data processing procedures, a Data Retrieval Demonstration and Evaluation, a Retrospective Analysis that included a survey of organizations currently operating data banks for the nuclear power industry, and an Internal Analysis of the current Data Bank System

  13. Missing value imputation for epistatic MAPs

    LENUS (Irish Health Repository)

    Ryan, Colm

    2010-04-20

    Background: Epistatic miniarray profiling (E-MAP) is a high-throughput approach capable of quantifying aggravating or alleviating genetic interactions between gene pairs. The datasets resulting from E-MAP experiments typically take the form of a symmetric pairwise matrix of interaction scores. These datasets have a significant number of missing values - up to 35% - that can reduce the effectiveness of some data analysis techniques and prevent the use of others. An effective method for imputing interactions would therefore increase the types of possible analysis, as well as increase the potential to identify novel functional interactions between gene pairs. Several methods have been developed to handle missing values in microarray data, but it is unclear how applicable these methods are to E-MAP data because of their pairwise nature and the significantly larger number of missing values. Here we evaluate four alternative imputation strategies, three local (nearest neighbor-based) and one global (PCA-based), that have been modified to work with symmetric pairwise data. Results: We identify different categories for the missing data based on their underlying cause, and show that values from the largest category can be imputed effectively. We compare local and global imputation approaches across a variety of distinct E-MAP datasets, showing that both are competitive and preferable to filling in with zeros. In addition we show that these methods are effective in an E-MAP from a different species, suggesting that pairwise imputation techniques will be increasingly useful as analogous epistasis mapping techniques are developed in different species. We show that strongly alleviating interactions are significantly more difficult to predict than strongly aggravating interactions. Finally we show that imputed interactions, generated using nearest neighbor methods, are enriched for annotations in the same manner as measured interactions. Therefore our method potentially ...
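
    A minimal sketch of a local, neighbor-based imputation adapted to a symmetric pairwise matrix, in the spirit of the nearest-neighbor strategies evaluated above: a missing score M[i, j] is predicted from the scores between j and the rows most similar to i, and symmetry is enforced afterwards. The similarity measure and k are illustrative choices.

      import numpy as np

      def impute_symmetric(M, k=3):
          filled = M.copy()
          for i, j in zip(*np.where(np.isnan(M))):
              sims = []                         # (similarity, candidate value) pairs
              for r in range(M.shape[0]):
                  if r == i:
                      continue
                  both = ~np.isnan(M[i]) & ~np.isnan(M[r])   # shared observed pairs
                  if both.sum() >= 2 and not np.isnan(M[r, j]):
                      sims.append((np.corrcoef(M[i, both], M[r, both])[0, 1],
                                   M[r, j]))
              if sims:
                  top = sorted(sims, reverse=True)[:k]       # k most similar rows
                  filled[i, j] = np.mean([v for _, v in top])
          return (filled + filled.T) / 2        # enforce symmetry

      rng = np.random.default_rng(7)
      A = rng.normal(0, 1, (8, 8)); A = (A + A.T) / 2
      A_miss = A.copy()
      A_miss[2, 5] = A_miss[5, 2] = np.nan
      print(impute_symmetric(A_miss)[2, 5], "vs true", A[2, 5])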

  14. Validation of Land Cover Products Using Reliability Evaluation Methods

    Directory of Open Access Journals (Sweden)

    Wenzhong Shi

    2015-06-01

    Validation of land cover products is a fundamental task prior to data applications. Current validation schemes and methods are, however, suited only for assessing classification accuracy and disregard the reliability of land cover products. The reliability evaluation of land cover products should be undertaken to provide reliable land cover information. In addition, the lack of high-quality reference data often constrains validation and affects the reliability results of land cover products. This study proposes a validation schema to evaluate the reliability of land cover products, including two methods, namely, result reliability evaluation and process reliability evaluation. Result reliability evaluation computes the reliability of land cover products using seven reliability indicators. Process reliability evaluation analyzes the reliability propagation in the data production process to obtain the reliability of land cover products. Fuzzy fault tree analysis is introduced and improved in the reliability analysis of a data production process. Research results show that the proposed reliability evaluation scheme is reasonable and can be applied to validate land cover products. Through the analysis of the seven indicators of result reliability evaluation, more information on land cover can be obtained for strategic decision-making and planning, compared with traditional accuracy assessment methods. Process reliability evaluation without the need for reference data can facilitate the validation and reflect the change trends of reliabilities to some extent.

  15. Double sampling with multiple imputation to answer large sample meta-research questions: Introduction and illustration by evaluating adherence to two simple CONSORT guidelines

    Directory of Open Access Journals (Sweden)

    Patrice L. Capers

    2015-03-01

    BACKGROUND: Meta-research can involve manual retrieval and evaluation of research, which is resource intensive. Creation of high-throughput methods (e.g., search heuristics, crowdsourcing) has improved the feasibility of large meta-research questions, but possibly at the cost of accuracy. OBJECTIVE: To evaluate the use of double sampling combined with multiple imputation (DS+MI) to address meta-research questions, using as an example adherence of PubMed entries to two simple Consolidated Standards of Reporting Trials (CONSORT) guidelines for titles and abstracts. METHODS: For the DS large sample, we retrieved all PubMed entries satisfying the filters: RCT; human; abstract available; and English language (n=322,107). For the DS subsample, we randomly sampled 500 entries from the large sample. The large sample was evaluated with a lower rigor, higher throughput (RLOTHI) method using search heuristics, while the subsample was evaluated using a higher rigor, lower throughput (RHITLO) human rating method. Multiple imputation of the missing-completely-at-random RHITLO data for the large sample was informed by: RHITLO data from the subsample; RLOTHI data from the large sample; whether a study was an RCT; and country and year of publication. RESULTS: The RHITLO and RLOTHI methods in the subsample largely agreed (phi coefficients: title=1.00, abstract=0.92). Compliance with abstract and title criteria has increased over time, with non-US countries improving more rapidly. DS+MI logistic regression estimates were more precise than subsample estimates (e.g., 95% CI for change in title and abstract compliance by year: subsample RHITLO 1.050-1.174 vs. DS+MI 1.082-1.151). As evidence of improved accuracy, DS+MI coefficient estimates were closer to RHITLO than the large sample RLOTHI. CONCLUSIONS: Our results support our hypothesis that DS+MI would result in improved precision and accuracy. This method is flexible and may provide a practical way to examine large corpora of ...
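
    A minimal sketch of the pooling step that multiple imputation designs such as DS+MI rely on, using Rubin's rules with illustrative per-imputation estimates and a normal approximation for the interval.

      import numpy as np

      # Coefficient and variance estimates from m analyses of m imputed data sets.
      q = np.array([0.110, 0.098, 0.121, 0.105, 0.114])      # per-imputation estimates
      u = np.array([0.0004, 0.0005, 0.0004, 0.0005, 0.0004]) # per-imputation variances

      m = len(q)
      q_bar = q.mean()                    # pooled point estimate
      W = u.mean()                        # within-imputation variance
      B = q.var(ddof=1)                   # between-imputation variance
      T = W + (1 + 1 / m) * B             # total variance (Rubin's rules)

      se = np.sqrt(T)
      print(f"pooled estimate {q_bar:.3f}, 95% CI {q_bar - 1.96*se:.3f} "
            f"to {q_bar + 1.96*se:.3f}")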

  16. Advancing Usability Evaluation through Human Reliability Analysis

    International Nuclear Information System (INIS)

    Ronald L. Boring; David I. Gertman

    2005-01-01

    This paper introduces a novel augmentation to the current heuristic usability evaluation methodology. The SPAR-H human reliability analysis method was developed for categorizing human performance in nuclear power plants. Despite the specialized use of SPAR-H for safety critical scenarios, the method also holds promise for use in commercial off-the-shelf software usability evaluations. The SPAR-H method shares task analysis underpinnings with human-computer interaction, and it can be easily adapted to incorporate usability heuristics as performance shaping factors. By assigning probabilistic modifiers to heuristics, it is possible to arrive at the usability error probability (UEP). This UEP is not a literal probability of error but nonetheless provides a quantitative basis for heuristic evaluation. When combined with a consequence matrix for usability errors, this method affords ready prioritization of usability issues
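
    A minimal sketch of the arithmetic this approach implies, under assumed numbers: a nominal error probability is multiplied by the weights assigned to violated heuristics, treated as performance shaping factors, to give a usability error probability. The heuristics and weights below are hypothetical, and SPAR-H's adjustment for large PSF products is omitted.

      # Hypothetical nominal error probability and heuristic PSF weights.
      nominal_hep = 0.001
      psf_weights = {"visibility of system status": 2.0,
                     "match with the real world": 1.0,   # not violated -> neutral
                     "error prevention": 5.0}

      uep = nominal_hep
      for heuristic, weight in psf_weights.items():
          uep *= weight                    # aggravating factors multiply the HEP

      print(f"usability error probability = {uep:.4f}")  # 0.001 * 2 * 1 * 5 = 0.01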

  17. Clustering with Missing Values: No Imputation Required

    Science.gov (United States)

    Wagstaff, Kiri

    2004-01-01

    Clustering algorithms can identify groups in large data sets, such as star catalogs and hyperspectral images. In general, clustering methods cannot analyze items that have missing data values. Common solutions either fill in the missing values (imputation) or ignore the missing data (marginalization). Imputed values are treated as just as reliable as the truly observed data, but they are only as good as the assumptions used to create them. In contrast, we present a method for encoding partially observed features as a set of supplemental soft constraints and introduce the KSC algorithm, which incorporates constraints into the clustering process. In experiments on artificial data and data from the Sloan Digital Sky Survey, we show that soft constraints are an effective way to enable clustering with missing values.

  18. Interim Reliability Evaluation Program procedures guide

    International Nuclear Information System (INIS)

    Carlson, D.D.; Gallup, D.R.; Kolaczkowski, A.M.; Kolb, G.J.; Stack, D.W.; Lofgren, E.; Horton, W.H.; Lobner, P.R.

    1983-01-01

    This document presents procedures for conducting analyses of a scope similar to those performed in Phase II of the Interim Reliability Evaluation Program (IREP). It documents the current state of the art in performing the plant systems analysis portion of a probabilistic risk assessment. Insights gained into managing such an analysis are discussed. Step-by-step procedures and methodological guidance constitute the major portion of the document. While not to be viewed as a cookbook, the procedures set forth the principal steps in performing an IREP analysis. Guidance for resolving the problems encountered in previous analyses is offered. Numerous examples and representative products from previous analyses clarify the discussion

  19. An Evaluation Method of Equipment Reliability Configuration Management

    Science.gov (United States)

    Wang, Wei; Feng, Weijia; Zhang, Wei; Li, Yuan

    2018-01-01

    At present, many equipment development companies are aware of the great significance of reliability in equipment development. However, due to the lack of an effective management evaluation method, it is very difficult for an equipment development company to manage its own reliability work. The purpose of an evaluation method for equipment reliability configuration management is to determine the reliability management capabilities of an equipment development company. Reliability is achieved not only through design but also through management. This paper evaluates reliability management capabilities using a reliability configuration capability maturity model (RCM-CMM) evaluation method.

  20. Novel approach for evaluation of service reliability for electricity customers

    Institute of Scientific and Technical Information of China (English)

    JIANG; John; N

    2009-01-01

    Understanding the value of reliability for electricity customers is important to market-based reliability management. This paper proposes a novel approach to evaluating reliability for electricity customers by using an indifference curve between economic compensation for power interruption and the service reliability of electricity. The indifference curve is formed by calculating different planning schemes of network expansion for different reliability requirements of customers, which reveals information about the economic value of different reliability levels for electricity customers, so that reliability based on a market supply-demand mechanism can be established and economic signals can be provided for reliability management and enhancement.

  1. Improved imputation accuracy of rare and low-frequency variants using population-specific high-coverage WGS-based imputation reference panel.

    Science.gov (United States)

    Mitt, Mario; Kals, Mart; Pärn, Kalle; Gabriel, Stacey B; Lander, Eric S; Palotie, Aarno; Ripatti, Samuli; Morris, Andrew P; Metspalu, Andres; Esko, Tõnu; Mägi, Reedik; Palta, Priit

    2017-06-01

    Genetic imputation is a cost-efficient way to improve the power and resolution of genome-wide association (GWA) studies. Current publicly accessible imputation reference panels accurately predict genotypes for common variants with minor allele frequency (MAF)≥5% and low-frequency variants (0.5≤MAF<5%) across diverse populations, but the imputation of rare variation (MAF<0.5%) is still rather limited. In the current study, we compare the imputation accuracy achieved with reference panels from diverse populations against a population-specific high-coverage (30×) whole-genome sequencing (WGS) based reference panel, comprising 2244 Estonian individuals (0.25% of adult Estonians). Although the Estonian-specific panel contains fewer haplotypes and variants, the imputation confidence and accuracy of imputed low-frequency and rare variants were significantly higher. The results indicate the utility of population-specific reference panels for human genetic studies.

  2. Reliability evaluation of smart distribution grids

    OpenAIRE

    Kazemi, Shahram

    2011-01-01

    The term "Smart Grid" generally refers to a power grid equipped with the advanced technologies dedicated for purposes such as reliability improvement, ease of control and management, integrating of distributed energy resources and electricity market operations. Improving the reliability of electric power delivered to the end users is one of the main targets of employing smart grid technologies. The smart grid investments targeted for reliability improvement can be directed toward the generati...

  3. Multiply-Imputed Synthetic Data: Advice to the Imputer

    Directory of Open Access Journals (Sweden)

    Loong Bronwyn

    2017-12-01

    Full Text Available Several statistical agencies have started to use multiply-imputed synthetic microdata to create public-use data in major surveys. The purpose of doing this is to protect the confidentiality of respondents’ identities and sensitive attributes, while allowing standard complete-data analyses of microdata. A key challenge, faced by advocates of synthetic data, is demonstrating that valid statistical inferences can be obtained from such synthetic data for non-confidential questions. Large discrepancies between observed-data and synthetic-data analytic results for such questions may arise because of uncongeniality; that is, differences in the types of inputs available to the imputer, who has access to the actual data, and to the analyst, who has access only to the synthetic data. Here, we discuss a simple, but possibly canonical, example of uncongeniality when using multiple imputation to create synthetic data, which specifically addresses the choices made by the imputer. An initial, unanticipated but not surprising, conclusion is that non-confidential design information used to impute synthetic data should be released with the confidential synthetic data to allow users of synthetic data to avoid possible grossly conservative inferences.

  4. A reliability evaluation method for NPP safety DCS application software

    International Nuclear Information System (INIS)

    Li Yunjian; Zhang Lei; Liu Yuan

    2014-01-01

    In the field of nuclear power plant (NPP) digital I&C applications, reliability evaluation of safety DCS application software is a key obstacle to be removed. In order to quantitatively evaluate the reliability of NPP safety DCS application software, this paper proposes an evaluation method based on the V&V defect-density characteristics of each stage of the software development life cycle, by which the operating reliability level of the software can be predicted before its delivery; this helps to improve the reliability of safety-important NPP software. (authors)

  5. Evaluation of structural reliability using simulation methods

    Directory of Open Access Journals (Sweden)

    Baballëku Markel

    2015-01-01

    Full Text Available Eurocode describes the 'index of reliability' as a measure of structural reliability, related to the 'probability of failure'. This paper is focused on the assessment of this index for a reinforced concrete bridge pier. It is rare to explicitly use reliability concepts for design of structures, but the problems of structural engineering are better known through them. Some of the main methods for the estimation of the probability of failure are the exact analytical integration, numerical integration, approximate analytical methods and simulation methods. Monte Carlo Simulation is used in this paper, because it offers a very good tool for the estimation of probability in multivariate functions. Complicated probability and statistics problems are solved through computer aided simulations of a large number of tests. The procedures of structural reliability assessment for the bridge pier and the comparison with the partial factor method of the Eurocodes have been demonstrated in this paper.
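
    As a toy illustration of the simulation approach, the following sketch estimates a failure probability for a resistance-minus-load limit state by crude Monte Carlo and converts it to a Eurocode-style reliability index. The distributions and numbers are assumed for the example, not taken from the paper.

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(42)
n = 1_000_000

# Illustrative limit state for a pier section: g = R - S (capacity minus demand).
R = rng.lognormal(mean=np.log(300.0), sigma=0.10, size=n)   # resistance [kNm]
S = rng.normal(loc=200.0, scale=30.0, size=n)               # load effect [kNm]

pf = np.mean(R - S <= 0.0)          # Monte Carlo estimate of failure probability
beta = -norm.ppf(pf)                # reliability index implied by pf
print(f"Pf ≈ {pf:.2e}, beta ≈ {beta:.2f}")
```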

  6. Are Validity and Reliability "Relevant" in Qualitative Evaluation Research?

    Science.gov (United States)

    Goodwin, Laura D.; Goodwin, William L.

    1984-01-01

    The views of prominent qualitative methodologists on the appropriateness of validity and reliability estimation for the measurement strategies employed in qualitative evaluations are summarized. A case is made for the relevance of validity and reliability estimation. Definitions of validity and reliability for qualitative measurement are presented…

  7. Evaluation for nuclear safety-critical software reliability of DCS

    International Nuclear Information System (INIS)

    Liu Ying

    2015-01-01

    With the development of control and information technology at NPPs, software reliability is important because software failure is usually considered one form of common cause failure in Digital I&C Systems (DCS). The reliability analysis of DCS, particularly the qualitative and quantitative evaluation of nuclear safety-critical software reliability, poses a great challenge. To solve this problem, a comprehensive evaluation model and stage evaluation models are built in this paper, and prediction and sensitivity analysis are given for the models. This can provide a basis for evaluating the reliability and safety of DCS. (author)

  8. Accounting for one-channel depletion improves missing value imputation in 2-dye microarray data.

    Science.gov (United States)

    Ritz, Cecilia; Edén, Patrik

    2008-01-19

    For 2-dye microarray platforms, some missing values may arise from an un-measurably low RNA expression in one channel only. Information on such "one-channel depletion" is so far not included in algorithms for imputation of missing values. Calculating the mean deviation between imputed values and duplicate controls in five datasets, we show that KNN-based imputation gives a systematic bias of the imputed expression values of one-channel depleted spots. Evaluating the correction of this bias by cross-validation showed that the mean square deviation between imputed values and duplicates was reduced by up to 51%, depending on dataset. By including more information in the imputation step, we more accurately estimate missing expression values.

  9. Multiple imputation and its application

    CERN Document Server

    Carpenter, James

    2013-01-01

    A practical guide to analysing partially observed data. Collecting, analysing and drawing inferences from data is central to research in the medical and social sciences. Unfortunately, it is rarely possible to collect all the intended data. The literature on inference from the resulting incomplete data is now huge, and continues to grow both as methods are developed for large and complex data structures, and as increasing computer power and suitable software enable researchers to apply these methods. This book focuses on a particular statistical method for analysing and drawing inferences from incomplete data, called Multiple Imputation (MI). MI is attractive because it is both practical and widely applicable. The authors' aim is to clarify the issues raised by missing data, describing the rationale for MI, the relationship between the various imputation models and associated algorithms and its application to increasingly complex data structures. Multiple Imputation and its Application: Discusses the issues ...
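
    The heart of MI inference is how the m completed-data analyses are combined. A minimal sketch of Rubin's rules, the standard pooling procedure; the example estimates and variances are made up.

```python
import numpy as np

def pool_rubin(estimates, variances):
    """Pool m completed-data analyses with Rubin's rules.

    estimates, variances: length-m arrays of the point estimate and its
    squared standard error from each imputed dataset.
    """
    estimates = np.asarray(estimates, float)
    variances = np.asarray(variances, float)
    m = len(estimates)
    qbar = estimates.mean()                      # pooled point estimate
    w = variances.mean()                         # within-imputation variance
    b = estimates.var(ddof=1)                    # between-imputation variance
    t = w + (1 + 1 / m) * b                      # total variance
    return qbar, t

# e.g. a regression coefficient estimated on m = 5 imputed datasets:
qbar, t = pool_rubin([0.42, 0.45, 0.40, 0.47, 0.43],
                     [0.010, 0.012, 0.011, 0.009, 0.010])
print(f"pooled estimate {qbar:.3f}, std. error {t**0.5:.3f}")
```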

  10. Flexible Imputation of Missing Data

    CERN Document Server

    van Buuren, Stef

    2012-01-01

    Missing data form a problem in every scientific discipline, yet the techniques required to handle them are complicated and often lacking. One of the great ideas in statistical science--multiple imputation--fills gaps in the data with plausible values, the uncertainty of which is coded in the data itself. It also solves other problems, many of which are missing data problems in disguise. Flexible Imputation of Missing Data is supported by many examples using real data taken from the author's vast experience of collaborative research, and presents a practical guide for handling missing data unde

  11. System evaluations by means of reliability analyses

    International Nuclear Information System (INIS)

    Breiling, G.

    1976-01-01

    The objective of this study is to show which analysis requirements are associated with the claim that a reliability analysis, as practised at present, can provide a quantitative risk assessment in absolute terms. The question arises of whether this claim can be substantiated without direct access to the specialist technical departments of a manufacturer and to the multifarious detail information available in these departments. The individual problems arising in the course of such an analysis are discussed on the example of a reliability analysis of a core flooding system. The questions discussed relate to analysis organisation, sequence analysis, fault-tree analysis, and the treatment of operational processes superimposed on the failure and repair processes. (orig.) [de

  12. Reliability and Robustness Evaluation of Timber Structures

    DEFF Research Database (Denmark)

    Cizmar, Dean; Sørensen, John Dalsgaard; Kirkegaard, Poul Henning

    In the last few decades there has been intense research concerning the reliability of timber structures, primarily because of society's increased focus on sustainability and environmental aspects. Modern timber as a building material is also competitive compared to concrete... and steel. However, reliability models applied to timber have always related to individual components, not systems. As any real structure is a complex system, system behaviour must be of particular interest. In chapter 1 of this document an overview of stochastic models for strength and loads... (deterministic, probabilistic and risk based approaches) of the robustness are given. Chapter 3 deals in more detail with the robustness of timber structures....

  13. Imputation and quality control steps for combining multiple genome-wide datasets

    Directory of Open Access Journals (Sweden)

    Shefali S Verma

    2014-12-01

    Full Text Available The electronic MEdical Records and GEnomics (eMERGE) network brings together DNA biobanks linked to electronic health records (EHRs) from multiple institutions. Approximately 52,000 DNA samples from distinct individuals have been genotyped using genome-wide SNP arrays across the nine sites of the network. The eMERGE Coordinating Center and the Genomics Workgroup developed a pipeline to impute and merge genomic data across the different SNP arrays to maximize sample size and power to detect associations with a variety of clinical endpoints. The 1000 Genomes cosmopolitan reference panel was used for imputation. Imputation results were evaluated using the following metrics: accuracy of imputation, allelic R2 (estimated correlation between the imputed and true genotypes), and the relationship between allelic R2 and minor allele frequency. Computation time and memory resources required by two different software packages (BEAGLE and IMPUTE2) were also evaluated. A number of challenges were encountered due to the complexity of using two different imputation software packages, multiple ancestral populations, and many different genotyping platforms. We present lessons learned and describe the pipeline implemented here to impute and merge genomic data sets. The eMERGE imputed dataset will serve as a valuable resource for discovery, leveraging the clinical data that can be mined from the EHR.
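
    For masked-genotype validation, allelic R2 is simply the squared correlation between the imputed dosage and the true genotype at a variant. A minimal sketch with made-up values:

```python
import numpy as np

def allelic_r2(true_genotypes, imputed_dosages):
    """Squared Pearson correlation between imputed dosages (0..2) and
    true genotypes (0/1/2) at one variant, as used to grade imputation."""
    r = np.corrcoef(true_genotypes, imputed_dosages)[0, 1]
    return r ** 2

truth  = np.array([0, 1, 2, 1, 0, 0, 2, 1])
dosage = np.array([0.1, 0.9, 1.8, 1.2, 0.0, 0.3, 2.0, 0.8])
print(f"allelic R2 = {allelic_r2(truth, dosage):.3f}")
```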

  14. R package imputeTestbench to compare imputations methods for univariate time series

    OpenAIRE

    Bokde, Neeraj; Kulat, Kishore; Beck, Marcus W; Asencio-Cortés, Gualberto

    2016-01-01

    This paper describes the R package imputeTestbench that provides a testbench for comparing imputation methods for missing data in univariate time series. The imputeTestbench package can be used to simulate the amount and type of missing data in a complete dataset and compare filled data using different imputation methods. The user has the option to simulate missing data by removing observations completely at random or in blocks of different sizes. Several default imputation methods are includ...

  15. Reliability evaluation of nuclear power plants

    International Nuclear Information System (INIS)

    Rondiris, I.L.

    1978-10-01

    The research described in this thesis is concerned with the reliability/safety analysis of complex systems, such as nuclear power stations, basically using the event tree methodology. The thesis introduces and assesses a computational technique which applies the methodology to complex systems by simulating their topology and operational logic. The technique develops the system event tree and relates each branch of this tree to its qualitative and quantitative impact on specified system outcomes following an abnormal operating condition. Then, the thesis aims at deducing the critical failure modes of complex systems. This is achieved by a new technique for deducing the minimal cut or tie sets of various system outcomes. The technique is, furthermore, expanded to identify potential common mode failures and cut or tie sets containing dependent failures of some components. After dealing with the qualitative part of a reliability study, the thesis introduces two methods for calculating the probability of a component being either in the failure or in the partial failure state. The first method deals with revealed faults and makes use of the concept of Markov processes. The second one deals with unrevealed faults and can be used to calculate the relevant probability of component taking into account its inspection and replacement process. (author)
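
    The Markov treatment of revealed faults and the inspection-based treatment of unrevealed faults reduce, in the simplest two-state case, to standard closed forms. A sketch of those textbook results (not the thesis's own derivations), with illustrative rates:

```python
# Steady-state unavailability of a repairable component (revealed faults)
# and of a periodically inspected component (unrevealed faults).
# Standard two-state Markov results; all numbers are illustrative.

def unavailability_revealed(lam, mu):
    """lam: failure rate [1/h], mu: repair rate [1/h]."""
    return lam / (lam + mu)

def unavailability_unrevealed(lam, tau):
    """Mean fractional dead time for inspection interval tau [h],
    valid when lam * tau << 1."""
    return lam * tau / 2.0

print(unavailability_revealed(1e-4, 0.1))     # ~1.0e-3
print(unavailability_unrevealed(1e-5, 720.0)) # ~3.6e-3 for monthly tests
```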

  16. Highly accurate sequence imputation enables precise QTL mapping in Brown Swiss cattle.

    Science.gov (United States)

    Frischknecht, Mirjam; Pausch, Hubert; Bapst, Beat; Signer-Hasler, Heidi; Flury, Christine; Garrick, Dorian; Stricker, Christian; Fries, Ruedi; Gredler-Grandl, Birgit

    2017-12-29

    Within the last few years a large amount of genomic information has become available in cattle. Densities of genomic information vary from a few thousand variants up to whole genome sequence information. In order to combine genomic information from different sources and infer genotypes for a common set of variants, genotype imputation is required. In this study we evaluated the accuracy of imputation from high density chips to whole genome sequence data in Brown Swiss cattle. Using four popular imputation programs (Beagle, FImpute, Impute2, Minimac) and various compositions of reference panels, the accuracy of the imputed sequence variant genotypes was high and differences between the programs and scenarios were small. We imputed sequence variant genotypes for more than 1600 Brown Swiss bulls and performed genome-wide association studies for milk fat percentage at two stages of lactation. We found one and three quantitative trait loci for early and late lactation fat content, respectively. Known causal variants that were imputed from the sequenced reference panel were among the most significantly associated variants of the genome-wide association study. Our study demonstrates that whole-genome sequence information can be imputed at high accuracy in cattle populations. Using imputed sequence variant genotypes in genome-wide association studies may facilitate causal variant detection.

  17. The Ability of Different Imputation Methods to Preserve the Significant Genes and Pathways in Cancer

    Directory of Open Access Journals (Sweden)

    Rosa Aghdam

    2017-12-01

    Full Text Available Deciphering important genes and pathways from incomplete gene expression data could facilitate a better understanding of cancer. Different imputation methods can be applied to estimate the missing values. In our study, we evaluated various imputation methods for their performance in preserving significant genes and pathways. In the first step, 5% of genes are selected at random for two types of ignorable and non-ignorable missingness mechanisms with various missing rates. Next, 10 well-known imputation methods were applied to the complete datasets. The significance analysis of microarrays (SAM) method was applied to detect the significant genes in rectal and lung cancers to showcase the utility of imputation approaches in preserving significant genes. To determine the impact of different imputation methods on the identification of important genes, the chi-squared test was used to compare the proportions of overlaps between significant genes detected from original data and those detected from the imputed datasets. Additionally, the significant genes are tested for their enrichment in important pathways, using the ConsensusPathDB. Our results showed that almost all the significant genes and pathways of the original dataset can be detected in all imputed datasets, indicating that there is no significant difference in the performance of various imputation methods tested. The source code and selected datasets are available on http://profiles.bs.ipm.ir/softwares/imputation_methods/.

  19. Outlier Removal in Model-Based Missing Value Imputation for Medical Datasets

    Directory of Open Access Journals (Sweden)

    Min-Wei Huang

    2018-01-01

    Full Text Available Many real-world medical datasets contain some proportion of missing (attribute) values. In general, missing value imputation can be performed to solve this problem, which is to provide estimations for the missing values by a reasoning process based on the (complete) observed data. However, if the observed data contain some noisy information or outliers, the estimations of the missing values may not be reliable or may even be quite different from the real values. The aim of this paper is to examine whether a combination of instance selection from the observed data and missing value imputation offers better performance than performing missing value imputation alone. In particular, three instance selection algorithms, DROP3, GA, and IB3, and three imputation algorithms, KNNI, MLP, and SVM, are used in order to find out the best combination. The experimental results show that performing instance selection can have a positive impact on missing value imputation over the numerical data type of medical datasets, and specific combinations of instance selection and imputation methods can improve the imputation results over the mixed data type of medical datasets. However, instance selection does not have a definitely positive impact on the imputation result for categorical medical datasets.

  20. Fast Monte Carlo reliability evaluation using support vector machine

    International Nuclear Information System (INIS)

    Rocco, Claudio M.; Moreno, Jose Ali

    2002-01-01

    This paper deals with the feasibility of using support vector machine (SVM) to build empirical models for use in reliability evaluation. The approach takes advantage of the speed of SVM in the numerous model calculations typically required to perform a Monte Carlo reliability evaluation. The main idea is to develop an estimation algorithm, by training a model on a restricted data set, and replace system performance evaluation by a simpler calculation, which provides reasonably accurate model outputs. The proposed approach is illustrated by several examples. Excellent system reliability results are obtained by training a SVM with a small amount of information
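
    A minimal sketch of the surrogate idea: label a small sample of system states with an expensive evaluator, train an SVM on it, and let the surrogate stand in for the evaluator during the bulk of the Monte Carlo run. The toy "system" and all numbers are invented for the example.

```python
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)

# Hypothetical system: 5 component capacities; "failure" if total < demand.
def expensive_system_eval(states):          # stands in for a costly solver
    return (states.sum(axis=1) < 2.5).astype(int)

# Train the SVM surrogate on a small labelled sample ...
X_train = rng.random((200, 5))
y_train = expensive_system_eval(X_train)
surrogate = SVC(kernel="rbf", C=10.0).fit(X_train, y_train)

# ... then run the bulk of the Monte Carlo through the cheap surrogate.
X_mc = rng.random((100_000, 5))
failure_prob = surrogate.predict(X_mc).mean()
print(f"estimated unreliability ≈ {failure_prob:.4f}")
```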

  1. Reliability evaluation of an impedance-source PV microconverter

    DEFF Research Database (Denmark)

    Shen, Yanfeng; Liivik, Elizaveta; Blaabjerg, Frede

    2018-01-01

    The reliability of an impedance-source PV microconverter is evaluated based on the real-field mission profile. As part of a PV microinverter, the dc-dc microconverter is firstly described. Then the electro-thermal and lifetime models are built for the most reliability-critical components, i...

  2. Reliability evaluation of the ECCS of LWR No.2

    International Nuclear Information System (INIS)

    Tsujimura, Yasuhiro; Suzuki, Eiji

    1987-01-01

    In this paper, a new characteristic function of probability importance is proposed and discussed. The function represents overall characteristics of the system reliability in relation to the failure probability of each system component. Further, results obtained by applying the method to practical system reliability design are shown. (author)

  3. Using imputation to provide location information for nongeocoded addresses.

    Directory of Open Access Journals (Sweden)

    Frank C Curriero

    2010-02-01

    Full Text Available The importance of geography as a source of variation in health research continues to receive sustained attention in the literature. The inclusion of geographic information in such research often begins by adding data to a map which is predicated by some knowledge of location. A precise level of spatial information is conventionally achieved through geocoding, the geographic information system (GIS) process of translating mailing address information to coordinates on a map. The geocoding process is not without its limitations, though, since there is always a percentage of addresses which cannot be converted successfully (nongeocodable). This raises concerns regarding bias since traditionally the practice has been to exclude nongeocoded data records from analysis. In this manuscript we develop and evaluate a set of imputation strategies for dealing with missing spatial information from nongeocoded addresses. The strategies are developed assuming a known zip code with increasing use of collateral information, namely the spatial distribution of the population at risk. Strategies are evaluated using prostate cancer data obtained from the Maryland Cancer Registry. We consider total case enumerations at the Census county, tract, and block group level as the outcome of interest when applying and evaluating the methods. Multiple imputation is used to provide estimated total case counts based on complete data (geocodes plus imputed nongeocodes) with a measure of uncertainty. Results indicate that the imputation strategy based on using available population-based age, gender, and race information performed the best overall at the county, tract, and block group levels. The procedure allows for the potentially biased and likely under reported outcome, case enumerations based on only the geocoded records, to be presented with a statistically adjusted count (imputed count) with a measure of uncertainty that are based on all the case data, the geocodes and imputed
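
    The core of the zip-code-plus-population strategy can be sketched as weighted sampling of census units within the known zip code. The block-group names and populations below are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(7)

# Hypothetical block groups within one known ZIP code, with the
# population at risk used as sampling weights (names are illustrative).
block_groups = ["BG-001", "BG-002", "BG-003"]
population_at_risk = np.array([1200, 300, 4500], dtype=float)
weights = population_at_risk / population_at_risk.sum()

def impute_block_group(n_imputations=10):
    """One nongeocodable address -> multiple imputed block groups."""
    return rng.choice(block_groups, size=n_imputations, p=weights)

print(impute_block_group())  # mostly 'BG-003', mirroring its weight
```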

  4. Peer Evaluation Can Reliably Measure Local Knowledge

    Science.gov (United States)

    Reyes-García, Victoria; Díaz-Reviriego, Isabel; Duda, Romain; Fernández-Llamazares, Álvaro; Gallois, Sandrine; Guèze, Maximilien; Napitupulu, Lucentezza; Pyhälä, Aili

    2016-01-01

    We assess the consistency of measures of individual local ecological knowledge obtained through peer evaluation against three standard measures: identification tasks, structured questionnaires, and self-reported skills questionnaires. We collected ethnographic information among the Baka (Congo), the Punan (Borneo), and the Tsimane' (Amazon) to…

  5. Missing value imputation: with application to handwriting data

    Science.gov (United States)

    Xu, Zhen; Srihari, Sargur N.

    2015-01-01

    Missing values make pattern analysis difficult, particularly with limited available data. In longitudinal research, missing values accumulate, thereby aggravating the problem. Here we consider how to deal with temporal data with missing values in handwriting analysis. In the task of studying development of individuality of handwriting, we encountered the fact that feature values are missing for several individuals at several time instances. Six algorithms, i.e., random imputation, mean imputation, most likely independent value imputation, and three methods based on Bayesian network (static Bayesian network, parameter EM, and structural EM), are compared with children's handwriting data. We evaluate the accuracy and robustness of the algorithms under different ratios of missing data and missing values, and useful conclusions are given. Specifically, static Bayesian network is used for our data which contain around 5% missing data to provide adequate accuracy and low computational cost.

  6. JUPITER PROJECT - JOINT UNIVERSAL PARAMETER IDENTIFICATION AND EVALUATION OF RELIABILITY

    Science.gov (United States)

    The JUPITER (Joint Universal Parameter IdenTification and Evaluation of Reliability) project builds on the technology of two widely used codes for sensitivity analysis, data assessment, calibration, and uncertainty analysis of environmental models: PEST and UCODE.

  7. Thermal power stations' reliability evaluation in a hydrothermal system

    African Journals Online (AJOL)

    Dr Obe

    A quantitative tool for the evaluation of thermal power stations reliability in a hydrothermal system is presented. ... (solar power); wind (wind power) and the rest, thermal power and ... probability of a system performing its function adequately for ...

  8. Reliability evaluation of deregulated electric power systems for planning applications

    International Nuclear Information System (INIS)

    Ehsani, A.; Ranjbar, A.M.; Jafari, A.; Fotuhi-Firuzabad, M.

    2008-01-01

    In a deregulated electric power utility industry in which a competitive electricity market can influence system reliability, market risks cannot be ignored. This paper (1) proposes an analytical probabilistic model for reliability evaluation of competitive electricity markets and (2) develops a methodology for incorporating the market reliability problem into HLII reliability studies. A Markov state space diagram is employed to evaluate the market reliability. Since the market is a continuously operated system, the concept of absorbing states is applied to it in order to evaluate the reliability. The market states are identified by using market performance indices and the transition rates are calculated by using historical data. The key point in the proposed method is the concept that the reliability level of a restructured electric power system can be calculated using the availability of the composite power system (HLII) and the reliability of the electricity market. Two case studies are carried out over Roy Billinton Test System (RBTS) to illustrate interesting features of the proposed methodology

  9. Automation of reliability evaluation procedures through CARE - The computer-aided reliability estimation program.

    Science.gov (United States)

    Mathur, F. P.

    1972-01-01

    Description of an on-line interactive computer program called CARE (Computer-Aided Reliability Estimation) which can model self-repair and fault-tolerant organizations and perform certain other functions. Essentially CARE consists of a repository of mathematical equations defining the various basic redundancy schemes. These equations, under program control, are then interrelated to generate the desired mathematical model to fit the architecture of the system under evaluation. The mathematical model is then supplied with ground instances of its variables and is then evaluated to generate values for the reliability-theoretic functions applied to the model.

  10. Data imputation analysis for Cosmic Rays time series

    Science.gov (United States)

    Fernandes, R. C.; Lucio, P. S.; Fernandez, J. H.

    2017-05-01

    The occurrence of missing data in Galactic Cosmic Ray (GCR) time series is inevitable, since loss of data is due to mechanical and human failure or technical problems and to different periods of operation of GCR stations. The aim of this study was to perform multiple dataset imputation in order to depict the observational dataset. The study used the monthly time series of GCR Climax (CLMX) and Roma (ROME) from 1960 to 2004 to simulate scenarios of 10%, 20%, 30%, 40%, 50%, 60%, 70%, 80% and 90% of missing data compared to the observed ROME series, with 50 replicates. The CLMX station was then used as a proxy for allocation of these scenarios. Three different methods for monthly dataset imputation were selected: Amelia II, which runs a bootstrap Expectation Maximization algorithm; MICE, which runs an algorithm via Multivariate Imputation by Chained Equations; and MTSDI, an Expectation Maximization algorithm-based method for imputation of missing values in multivariate normal time series. The synthetic time series compared with the observed ROME series were also evaluated using several skill measures, such as RMSE, NRMSE, Agreement Index, R, R2, F-test and t-test. The results showed that for CLMX and ROME, the R2 and R statistics were equal to 0.98 and 0.96, respectively. It was observed that increases in the number of gaps generate loss of quality of the time series. Data imputation was most efficient with the MTSDI method, with negligible errors and the best skill coefficients. The results suggest a limit of about 60% of missing data for imputation of monthly averages. It is noteworthy that the CLMX, ROME and KIEL stations present no missing data in the target period. This methodology allowed reconstructing 43 time series.
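
    Several of the skill measures used here are one-liners. A sketch computing RMSE, NRMSE, Pearson R and Willmott's agreement index d for a filled-in series against the observed one; the sample counts are made up.

```python
import numpy as np

def imputation_skill(observed, imputed):
    """Skill measures for comparing a filled-in series against the
    observed series: RMSE, NRMSE, Pearson R, Willmott's agreement index d."""
    o, p = np.asarray(observed, float), np.asarray(imputed, float)
    err = p - o
    rmse = np.sqrt(np.mean(err ** 2))
    nrmse = rmse / (o.max() - o.min())
    r = np.corrcoef(o, p)[0, 1]
    d = 1 - np.sum(err ** 2) / np.sum((np.abs(p - o.mean()) + np.abs(o - o.mean())) ** 2)
    return {"RMSE": rmse, "NRMSE": nrmse, "R": r, "d": d}

obs = np.array([4100, 4150, 4075, 4200, 4180], float)  # made-up GCR counts
imp = np.array([4110, 4140, 4090, 4190, 4170], float)
print(imputation_skill(obs, imp))
```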

  11. A novel reliability evaluation method for large engineering systems

    Directory of Open Access Journals (Sweden)

    Reda Farag

    2016-06-01

    Full Text Available A novel reliability evaluation method for large nonlinear engineering systems excited by dynamic loading applied in the time domain is presented. For this class of problems, the performance functions are expected to be functions of time and implicit in nature. Available first- or second-order reliability methods (FORM/SORM) are challenging to apply when estimating the reliability of such systems. Because of its inefficiency, the classical Monte Carlo simulation (MCS) method also cannot be used for large nonlinear dynamic systems. In the proposed approach, only tens instead of hundreds or thousands of deterministic evaluations at intelligently selected points are used to extract the reliability information. A hybrid approach, consisting of the stochastic finite element method (SFEM) developed by the author and his research team using FORM, the response surface method (RSM), an interpolation scheme, and advanced factorial schemes, is proposed. The method is clarified with the help of several numerical examples.

  12. Reliability evaluation of microgrid considering incentive-based demand response

    Science.gov (United States)

    Huang, Ting-Cheng; Zhang, Yong-Jun

    2017-07-01

    Incentive-based demand response (IBDR) can guide customers to adjust their electricity usage behaviour and actively curtail load. Meanwhile, distributed generation (DG) and energy storage systems (ESS) can provide time for the implementation of IBDR. The paper focuses on the reliability evaluation of a microgrid considering IBDR. Firstly, the mechanism of IBDR and its impact on power supply reliability are analysed. Secondly, the IBDR dispatch model considering the customer's comprehensive assessment and the customer response model are developed. Thirdly, a reliability evaluation method considering IBDR based on Monte Carlo simulation is proposed. Finally, the validity of the above models and method is studied through numerical tests on a modified RBTS Bus6 test system. Simulation results demonstrate that IBDR can improve the reliability of a microgrid.

  13. Evaluation of mobile ad hoc network reliability using propagation-based link reliability model

    International Nuclear Information System (INIS)

    Padmavathy, N.; Chaturvedi, Sanjay K.

    2013-01-01

    A wireless mobile ad hoc network (MANET) is a collection of solely independent nodes (that can move randomly around the area of deployment), making the topology highly dynamic; nodes communicate with each other by forming a single-hop/multi-hop network and maintain connectivity in a decentralized manner. A MANET is modelled using geometric random graphs rather than random graphs because link existence in a MANET is a function of the geometric distance between the nodes and the transmission range of the nodes. Among the many factors that contribute to MANET reliability is the robustness of the links between the mobile nodes of the network. Recently, the reliability of such networks has been evaluated for imperfect nodes (transceivers) with a binary model of communication links based on the transmission range of the mobile nodes and the distance between them. However, in reality, the probability of successful communication decreases as the signal strength deteriorates due to noise, fading or interference effects, even within the nodes' transmission range. Hence, this paper proposes evaluating the network reliability (2TRm, ATRm and AoTRm) of a MANET through Monte Carlo simulation using a propagation-based link reliability model, rather than a binary model, with nodes following a known failure distribution. The method is illustrated with an application and some imperative results are also presented.
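
    A sketch of the simulation idea: scatter nodes at random, form each link with a probability that decays with distance, and count the trials in which a source reaches a sink. The logistic decay model and all parameters are illustrative assumptions, not the paper's exact propagation model.

```python
import numpy as np

rng = np.random.default_rng(1)

def link_prob(d, r=250.0, sigma=50.0):
    """Propagation-style link reliability: near 1 well inside the nominal
    range r, decaying smoothly around it (illustrative model)."""
    return 1.0 / (1.0 + np.exp((d - r) / sigma))

def two_terminal_reliability(n_nodes=30, area=1000.0, trials=2000):
    hits = 0
    for _ in range(trials):
        pos = rng.random((n_nodes, 2)) * area
        d = np.linalg.norm(pos[:, None] - pos[None, :], axis=-1)
        adj = rng.random((n_nodes, n_nodes)) < link_prob(d)
        adj = np.triu(adj, 1)
        adj = adj | adj.T
        # depth-first search from node 0; success if node n-1 is reached
        seen, stack = {0}, [0]
        while stack:
            u = stack.pop()
            for v in np.flatnonzero(adj[u]):
                if v not in seen:
                    seen.add(v)
                    stack.append(v)
        hits += (n_nodes - 1) in seen
    return hits / trials

print(f"2TR ≈ {two_terminal_reliability():.3f}")
```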

  14. Evaluation of Stock Management Strategies Reliability at Dependent Demand

    Directory of Open Access Journals (Sweden)

    Lukinskiy Valery

    2017-03-01

    Full Text Available To increase the efficiency of logistic systems, specialists' attention has to be directed to reducing costs and increasing supply chain reliability. Considerable attention has already been paid to cost reduction, so it can be stated that significant progress has been made in this direction. But the problem of reliability evaluation is still insufficiently explored, particularly in such an important sphere as inventory management under dependent demand.

  15. 3D-MICE: integration of cross-sectional and longitudinal imputation for multi-analyte longitudinal clinical data.

    Science.gov (United States)

    Luo, Yuan; Szolovits, Peter; Dighe, Anand S; Baron, Jason M

    2018-06-01

    A key challenge in clinical data mining is that most clinical datasets contain missing data. Since many commonly used machine learning algorithms require complete datasets (no missing data), clinical analytic approaches often entail an imputation procedure to "fill in" missing data. However, although most clinical datasets contain a temporal component, most commonly used imputation methods do not adequately accommodate longitudinal time-based data. We sought to develop a new imputation algorithm, 3-dimensional multiple imputation with chained equations (3D-MICE), that can perform accurate imputation of missing clinical time series data. We extracted clinical laboratory test results for 13 commonly measured analytes (clinical laboratory tests). We imputed missing test results for the 13 analytes using 3 imputation methods: multiple imputation with chained equations (MICE), Gaussian process (GP), and 3D-MICE. 3D-MICE utilizes both MICE and GP imputation to integrate cross-sectional and longitudinal information. To evaluate imputation method performance, we randomly masked selected test results and imputed these masked results alongside results missing from our original data. We compared predicted results to measured results for masked data points. 3D-MICE performed significantly better than MICE and GP-based imputation in a composite of all 13 analytes, predicting missing results with a normalized root-mean-square error of 0.342, compared to 0.373 for MICE alone and 0.358 for GP alone. 3D-MICE offers a novel and practical approach to imputing clinical laboratory time series data. 3D-MICE may provide an additional tool for use as a foundation in clinical predictive analytics and intelligent clinical decision support.
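
    The cross-sectional half of the method is standard chained-equations imputation. A minimal sketch using scikit-learn's IterativeImputer on a toy lab-test matrix; 3D-MICE itself additionally blends in a longitudinal Gaussian-process estimate, which is not shown here.

```python
import numpy as np
from sklearn.experimental import enable_iterative_imputer  # noqa: F401
from sklearn.impute import IterativeImputer

# Toy stand-in for a lab-test matrix: rows = patient-days, columns = analytes.
X = np.array([[140.0, 4.1, np.nan],
              [138.0, np.nan, 1.0],
              [np.nan, 3.9, 0.9],
              [142.0, 4.4, 1.2]])

# Chained-equations imputation: each column is regressed on the others,
# cycling until the filled-in values stabilise (the MICE building block).
imputer = IterativeImputer(max_iter=10, random_state=0, sample_posterior=True)
print(imputer.fit_transform(X))
```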

  16. Reliability

    OpenAIRE

    Condon, David; Revelle, William

    2017-01-01

    Separating the signal in a test from the irrelevant noise is a challenge for all measurement. Low test reliability limits test validity, attenuates important relationships, and can lead to regression artifacts. Multiple approaches to the assessment and improvement of reliability are discussed. The advantages and disadvantages of several different approaches to reliability are considered. Practical advice on how to assess reliability using open source software is provided.

  17. Missing data imputation using statistical and machine learning methods in a real breast cancer problem.

    Science.gov (United States)

    Jerez, José M; Molina, Ignacio; García-Laencina, Pedro J; Alba, Emilio; Ribelles, Nuria; Martín, Miguel; Franco, Leonardo

    2010-10-01

    Missing data imputation is an important task in cases where it is crucial to use all available data and not discard records with missing values. This work evaluates the performance of several statistical and machine learning imputation methods that were used to predict recurrence in patients in an extensive real breast cancer data set. Imputation methods based on statistical techniques, e.g., mean, hot-deck and multiple imputation, and machine learning techniques, e.g., multi-layer perceptron (MLP), self-organisation maps (SOM) and k-nearest neighbour (KNN), were applied to data collected through the "El Álamo-I" project, and the results were then compared to those obtained from the listwise deletion (LD) imputation method. The database includes demographic, therapeutic and recurrence-survival information from 3679 women with operable invasive breast cancer diagnosed in 32 different hospitals belonging to the Spanish Breast Cancer Research Group (GEICAM). The accuracies of predictions on early cancer relapse were measured using artificial neural networks (ANNs), in which different ANNs were estimated using the data sets with imputed missing values. The imputation methods based on machine learning algorithms outperformed imputation statistical methods in the prediction of patient outcome. Friedman's test revealed a significant difference (p=0.0091) in the observed area under the ROC curve (AUC) values, and the pairwise comparison test showed that the AUCs for MLP, KNN and SOM were significantly higher (p=0.0053, p=0.0048 and p=0.0071, respectively) than the AUC from the LD-based prognosis model. The methods based on machine learning techniques were the most suited for the imputation of missing values and led to a significant enhancement of prognosis accuracy compared to imputation methods based on statistical procedures. Copyright © 2010 Elsevier B.V. All rights reserved.
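
    The evaluation design, impute first and then score a downstream classifier, is easy to reproduce on public data. A sketch comparing mean and kNN imputation by cross-validated AUC, using scikit-learn's bundled breast cancer dataset as a stand-in for the GEICAM registry, with 20% of entries knocked out at random.

```python
import numpy as np
from sklearn.datasets import load_breast_cancer
from sklearn.impute import KNNImputer, SimpleImputer
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# Public stand-in data (not the GEICAM registry); knock out 20% of entries.
X, y = load_breast_cancer(return_X_y=True)
rng = np.random.default_rng(0)
X = X.copy()
X[rng.random(X.shape) < 0.20] = np.nan

for name, imputer in [("mean", SimpleImputer(strategy="mean")),
                      ("kNN", KNNImputer(n_neighbors=5))]:
    model = make_pipeline(imputer, StandardScaler(),
                          LogisticRegression(max_iter=1000))
    auc = cross_val_score(model, X, y, cv=5, scoring="roc_auc").mean()
    print(f"{name:>4} imputation: AUC = {auc:.3f}")
```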

  18. Incorporating Cyber Layer Failures in Composite Power System Reliability Evaluations

    Directory of Open Access Journals (Sweden)

    Yuqi Han

    2015-08-01

    Full Text Available This paper proposes a novel approach to analyze the impacts of cyber layer failures (i.e., protection failures and monitoring failures) on the reliability evaluation of composite power systems. The reliability and availability of the cyber layer and its protection and monitoring functions with various topologies are derived based on a reliability block diagram method. The availability of the physical layer components is modified via a multi-state Markov chain model, in which the component protection and monitoring strategies, as well as the cyber layer topology, are simultaneously considered. Reliability indices of composite power systems are calculated through non-sequential Monte-Carlo simulation. Case studies demonstrate that operational reliability degrades when cyber layer functions fail. Moreover, protection function failures have a more significant impact on the degraded reliability than monitoring function failures do, and the reliability indices are especially sensitive to changes in cyber layer function availability in the range from 0.95 to 1.
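
    The reliability block diagram reductions used for the cyber layer come down to two rules, sketched below with invented availabilities: two redundant monitoring servers in parallel, in series with one protection unit.

```python
import numpy as np

def series(avails):
    """All blocks needed: availability is the product."""
    return float(np.prod(avails))

def parallel(avails):
    """Any block suffices: 1 minus the product of unavailabilities."""
    return 1.0 - float(np.prod(1.0 - np.asarray(avails)))

# Hypothetical cyber layer: two redundant monitoring servers in parallel,
# in series with a single protection unit (numbers are illustrative).
monitoring = parallel([0.98, 0.98])
cyber_layer = series([monitoring, 0.995])
print(f"cyber-layer availability ≈ {cyber_layer:.5f}")  # ≈ 0.99460
```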

  19. Missing Value Imputation Based on Gaussian Mixture Model for the Internet of Things

    Directory of Open Access Journals (Sweden)

    Xiaobo Yan

    2015-01-01

    Full Text Available This paper addresses missing value imputation for the Internet of Things (IoT). Nowadays, the IoT is used widely and commonly in a variety of domains, such as the transportation and logistics domain and the healthcare domain. However, missing values are very common in the IoT for a variety of reasons, which leaves the experimental data incomplete. As a result, some work that relies on IoT data cannot be carried out normally, and the accuracy and reliability of data analysis results are reduced. Based on the characteristics of the data itself and the features of missing data in the IoT, this paper divides missing data into three types and defines three corresponding missing value imputation problems. Then, we propose three new models to solve the corresponding problems: a model of missing value imputation based on context and linear mean (MCL), a model of missing value imputation based on binary search (MBS), and a model of missing value imputation based on Gaussian mixture model (MGI). Experimental results showed that the three models can improve the accuracy, reliability, and stability of missing value imputation greatly and effectively.
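
    A common way to impute with a Gaussian mixture is to fit the mixture and fill each missing block with its posterior-weighted conditional mean given the observed entries. The sketch below illustrates that general technique under simplifying assumptions (fitting on complete rows only); it is not necessarily the paper's MGI model.

```python
import numpy as np
from scipy.stats import multivariate_normal
from sklearn.mixture import GaussianMixture

def gmm_impute(X, n_components=3, seed=0):
    """Fill missing entries with the GMM posterior-weighted conditional mean.
    The mixture is fitted on complete rows only (a simplification; a full
    treatment would run EM over the incomplete rows too)."""
    X = np.asarray(X, float)
    complete = X[~np.isnan(X).any(axis=1)]   # needs >= n_components rows
    gmm = GaussianMixture(n_components, covariance_type="full",
                          random_state=seed).fit(complete)
    X_out = X.copy()
    for i, x in enumerate(X):
        m = np.isnan(x)
        if not m.any():
            continue
        o = ~m
        if not o.any():                      # fully missing row: overall mean
            X_out[i] = gmm.means_.T @ gmm.weights_
            continue
        # responsibility of each component given the observed block
        logp = np.array([
            multivariate_normal.logpdf(x[o], gmm.means_[k][o],
                                       gmm.covariances_[k][np.ix_(o, o)])
            for k in range(gmm.n_components)]) + np.log(gmm.weights_)
        w = np.exp(logp - logp.max())
        w /= w.sum()
        # conditional mean of the missing block under each component:
        # mu_m + Sigma_mo Sigma_oo^{-1} (x_o - mu_o)
        cond = [gmm.means_[k][m] + gmm.covariances_[k][np.ix_(m, o)]
                @ np.linalg.solve(gmm.covariances_[k][np.ix_(o, o)],
                                  x[o] - gmm.means_[k][o])
                for k in range(gmm.n_components)]
        X_out[i, m] = np.average(np.vstack(cond), axis=0, weights=w)
    return X_out
```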

  20. Reliability Evaluation of Power Capacitors in a Wind Turbine System

    DEFF Research Database (Denmark)

    Zhou, Dao; Blaabjerg, Frede

    2018-01-01

    With the increasing penetration of wind power, reliable and cost-effective wind energy production is of more and more importance. The doubly-fed induction generator based partial-scale wind power converter is still dominating in the existing wind farms. In this paper, the reliability assessment...... block diagram is used to bridge the gap between the Weibull distribution based component-level individual capacitor and the capacitor bank. A case study of a 2 MW wind power converter shows that the lifetime is significantly reduced from the individual capacitor to the capacitor bank. Besides, the dc...... of power capacitors is studied considering the annual mission profile. According to an electro-thermal stress evaluation, the time-to-failure distribution of both the dc-link capacitor and ac-side filter capacitor is investigated in detail. Aiming for the system-level reliability analysis, a reliability...
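
    The component-to-bank gap can be illustrated with the series assumption alone: if each of n capacitors has Weibull reliability R(t), the bank survives only if all do. The shape, scale and capacitor count below are illustrative, not from the paper.

```python
import numpy as np

def weibull_reliability(t, beta, eta):
    """R(t) for a single capacitor with shape beta and scale eta [hours]."""
    return np.exp(-(t / eta) ** beta)

def bank_reliability(t, beta, eta, n_caps):
    """Series assumption: the bank fails when any capacitor fails."""
    return weibull_reliability(t, beta, eta) ** n_caps

# Illustrative numbers (not from the paper): 40 dc-link capacitors.
t = np.array([1e4, 5e4, 1e5])   # hours
print(weibull_reliability(t, beta=2.0, eta=3e5))      # individual capacitor
print(bank_reliability(t, beta=2.0, eta=3e5, n_caps=40))  # much lower
```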

  1. Composite system reliability evaluation by stochastic calculation of system operation

    Energy Technology Data Exchange (ETDEWEB)

    Haubrick, H -J; Hinz, H -J; Landeck, E [Dept. of Power Systems and Power Economics (Germany)

    1994-12-31

    This report describes a newly developed probabilistic approach for steady-state composite system reliability evaluation and its exemplary application to a bulk power test system. The new computer program called PHOENIX takes into consideration transmission limitations, outages of lines and power stations and, as a central element, a highly sophisticated model of the dispatcher performing remedial actions after disturbances. The kernel of the new method is a procedure for optimal power flow calculation that has been specially adapted for use in reliability evaluations under the above mentioned conditions. (author) 11 refs., 8 figs., 1 tab.

  2. Reliability Evaluation for Clustered WSNs under Malware Propagation

    Directory of Open Access Journals (Sweden)

    Shigen Shen

    2016-06-01

    Full Text Available We consider a clustered wireless sensor network (WSN) under epidemic-malware propagation conditions and solve the problem of how to evaluate its reliability so as to ensure efficient, continuous, and dependable transmission of sensed data from sensor nodes to the sink. Facing the contradiction between malware intention and continuous-time Markov chain (CTMC) randomness, we introduce a strategic game that can predict malware infection in order to model a successful infection as a CTMC state transition. Next, we devise a novel measure to compute the Mean Time to Failure (MTTF) of a sensor node, which represents the reliability of a sensor node continuously performing tasks such as sensing, transmitting, and fusing data. Since clustered WSNs can be regarded as parallel-serial-parallel systems, the reliability of a clustered WSN can be evaluated via classical reliability theory. Numerical results show the influence of parameters such as the true positive rate and the false positive rate on a sensor node's MTTF. Furthermore, we validate the method of reliability evaluation for a clustered WSN according to the number of sensor nodes in a cluster, the number of clusters in a route, and the number of routes in the WSN.

  5. Evaluation of willingness to pay for reliable and sustainable ...

    African Journals Online (AJOL)

    Osondu

    evaluate the WTP for reliable and sustainable service delivery. The findings of the study .... In doing this, the revenue base and cost recovery will as well be enhanced in .....

  6. Relativity evaluation of reliability on operation in nuclear power plant

    International Nuclear Information System (INIS)

    Inata, Takashi

    1987-01-01

    The report presents a quantitative method for evaluating the reliability of operations conducted in nuclear power plants. The quantitative reliability evaluation method is based on the 'detailed block diagram analysis (De-BDA)'. All units of a series of operations are separately displayed for each block and combined sequentially. Then, calculation is performed to evaluate the reliability. Basically, De-BDA calculation is made for pairs of operation labels, which are connected in parallel or in series at different subordination levels. The applicability of the De-BDA method is demonstrated by carrying out calculations for three model cases: operations in the event of malfunction of the control valve in the main water supply system for a PWR, switching from an electrically-operated water supply pump to a turbine-operated water supply pump, and isolation and water removal operation for a low-pressure condensate pump. It is shown that the relative importance of each unit of a series of operations can be evaluated, making it possible to extract those units of greater importance, and that the priority among the factors which affect the reliability of operations can be determined. Results of the De-BDA calculation can serve to find important points to be considered in developing an operation manual, conducting education and training, and improving facilities. (Nogami, K.)

  7. Requirements for an evaluation infrastructure for reliable pervasive healthcare research

    DEFF Research Database (Denmark)

    Wagner, Stefan Rahr; Toftegaard, Thomas Skjødeberg; Bertelsen, Olav W.

    2012-01-01

    The need for a non-intrusive evaluation infrastructure platform to support research on reliable pervasive healthcare in the unsupervised setting is analyzed and challenges and possibilities are identified. A list of requirements is presented and a solution is suggested that would allow researchers...

  8. Evaluation of Information Requirements of Reliability Methods in Engineering Design

    DEFF Research Database (Denmark)

    Marini, Vinicius Kaster; Restrepo-Giraldo, John Dairo; Ahmed-Kristensen, Saeema

    2010-01-01

    This paper aims to characterize the information needed to perform methods for robustness and reliability, and verify their applicability to early design stages. Several methods were evaluated on their support to synthesis in engineering design. Of those methods, FMEA, FTA and HAZOP were selected...

  9. Reliability Evaluation of Primary Cells | Anyaka | Nigerian Journal of ...

    African Journals Online (AJOL)

    Evaluation of the reliability of a primary cell took place in three stages: 192 cells went through a slow-discharge test. A designed experiment was conducted on 144 cells; there were three factors in the experiment: storage temperature (three levels), thermal shock (two levels) and date code (two levels). 16 cells ...

  10. Evaluation of aileron actuator reliability with censored data

    Directory of Open Access Journals (Sweden)

    Li Huaiyuan

    2015-08-01

    Full Text Available For the purpose of enhancing the reliability of the aileron of the Airbus new-generation A350XWB, an evaluation of aileron reliability on the basis of maintenance data is presented in this paper. Practical maintenance data contain a large number of censored samples, whose information uncertainty makes it hard to evaluate the reliability of the aileron actuator. Considering that the true lifetime of a censored sample has an identical distribution to that of complete samples, if a censored sample is transformed into a complete sample, the conversion frequency of the censored sample can be estimated from the frequency of complete samples. On the one hand, standard life table estimation and the product limit method are improved on the basis of this conversion frequency, enabling accurate estimation for various censored samples. On the other hand, by taking this frequency as one of the weight factors and integrating the variance of order statistics under a standard distribution, a weighted least squares estimation is formed for accurately estimating various censored samples. Large numbers of experiments and simulations show that the reliabilities from the improved life table and improved product limit method are closer to the true value and more conservative; moreover, the weighted least squares estimate (WLSE), with the conversion frequency of censored samples and the variances of order statistics as the weights, can still estimate accurately with a high proportion of censored data in the samples. The algorithm in this paper performs well and can accurately estimate the reliability of the aileron actuator even with small samples and a high censoring rate. This research has certain significance in theory and engineering practice.
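
    As background for the product-limit method the paper improves on, here is a minimal Kaplan-Meier estimator over right-censored records; the times and censoring flags are invented.

```python
import numpy as np

def kaplan_meier(times, failed):
    """Product-limit survival estimate from right-censored data.

    times:  time on test for each unit
    failed: 1 if the unit failed at that time, 0 if it was censored
    """
    order = np.argsort(times)
    times, failed = np.asarray(times)[order], np.asarray(failed)[order]
    at_risk = len(times)
    s, curve = 1.0, []
    for t, d in zip(times, failed):
        if d:                       # failures shrink the survival estimate
            s *= (at_risk - 1) / at_risk
        curve.append((t, s))        # censored units only leave the risk set
        at_risk -= 1
    return curve

# Illustrative actuator records: censored entries have failed = 0.
times  = [120, 250, 250, 400, 410, 500, 610]
failed = [1,   1,   0,   1,   0,   0,   1]
for t, s in kaplan_meier(times, failed):
    print(f"t = {t:>3} h  R(t) ≈ {s:.3f}")
```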

  11. Interim reliability evaluation program, Browns Ferry fault trees

    International Nuclear Information System (INIS)

    Stewart, M.E.

    1981-01-01

    An abbreviated fault tree method is used to evaluate and model Browns Ferry systems in the Interim Reliability Evaluation Program, simplifying the recording and displaying of events yet maintaining the system of identifying faults. The level of investigation is not changed, and the analytical thought process inherent in the conventional method is not compromised. But the abbreviated method takes less time, and the fault modes are much more visible

  12. RELIABILITY OF CERTAIN TESTS FOR EVALUATION OF JUDO TECHNIQUES

    Directory of Open Access Journals (Sweden)

    Slavko Obadov

    2007-05-01

    Full Text Available The sample included 106 judokas. Assessment of the level of mastery of judo techniques was carried out by evaluation of five competent studies. Each subject performed a technique three times and each performance was evaluated by the judges. In order to evaluate the measurement of each technique, Cronbach's reliability coefficient (α) was calculated. During the procedure the subjects' results were also transformed to factor scores, i.e., the results of each performer on the main component of evaluation in the five studies. These factor scores could be used in a subsequent procedure of multivariate statistical analysis.
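
    For readers unfamiliar with the statistic, a minimal sketch of Cronbach's reliability coefficient (α) follows; the ratings matrix is invented, not taken from the study.

      # Cronbach's alpha for a subjects x judges matrix of ratings.
      import numpy as np

      def cronbach_alpha(scores):
          scores = np.asarray(scores, dtype=float)
          k = scores.shape[1]                         # number of judges (items)
          item_vars = scores.var(axis=0, ddof=1)      # variance of each judge's ratings
          total_var = scores.sum(axis=1).var(ddof=1)  # variance of subjects' total scores
          return (k / (k - 1)) * (1.0 - item_vars.sum() / total_var)

      ratings = [[4, 5, 4], [2, 3, 2], [5, 5, 4], [3, 3, 3], [1, 2, 2]]
      print(cronbach_alpha(ratings))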

  13. Evaluation of reliability assurance approaches to operational nuclear safety

    International Nuclear Information System (INIS)

    Mueller, C.J.; Bezella, W.A.

    1984-01-01

    This report discusses the results of research to evaluate existing and/or recommended safety/reliability assurance activities among nuclear and other high technology industries for potential nuclear industry implementation. Since the Three Mile Island (TMI) accident, there has been increased interest in the use of reliability programs (RP) to assure the performance of nuclear safety systems throughout the plant's lifetime. Recently, several Nuclear Regulatory Commission (NRC) task forces or safety issue review groups have recommended RPs for assuring the continuing safety of nuclear reactor plants. 18 references

  14. Genotype Imputation for Latinos Using the HapMap and 1000 Genomes Project Reference Panels

    Directory of Open Access Journals (Sweden)

    Xiaoyi eGao

    2012-06-01

    Full Text Available Genotype imputation is a vital tool in genome-wide association studies (GWAS) and meta-analyses of multiple GWAS results. Imputation enables researchers to increase genomic coverage and to pool data generated using different genotyping platforms. HapMap samples are often employed as the reference panel. More recently, the 1000 Genomes Project resource is becoming the primary source for reference panels. Multiple GWAS and meta-analyses are targeting Latinos, the most populous and fastest growing minority group in the US. However, genotype imputation resources for Latinos are rather limited compared to individuals of European ancestry at present, largely because of the lack of good reference data. One choice of reference panel for Latinos is one derived from the population of Mexican individuals in Los Angeles contained in the HapMap Phase 3 project and the 1000 Genomes Project. However, a detailed evaluation of the quality of the imputed genotypes derived from the public reference panels has not yet been reported. Using simulation studies, the Illumina OmniExpress GWAS data from the Los Angeles Latino Eye Study and the MACH software package, we evaluated the accuracy of genotype imputation in Latinos. Our results show that the 1000 Genomes Project AMR+CEU+YRI reference panel provides the highest imputation accuracy for Latinos, and that also including Asian samples in the panel can reduce imputation accuracy. We also provide the imputation accuracy for each autosomal chromosome using the 1000 Genomes Project panel for Latinos. Our results serve as a guide to future imputation-based analysis in Latinos.
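
    One common way to quantify imputation accuracy in this setting, similar in spirit to the r2 metric reported by imputation software such as MACH, is the squared correlation between imputed allele dosages and the true (masked) genotypes. A minimal sketch with invented values:

      import numpy as np

      true_genotypes = np.array([0, 1, 2, 1, 0, 2, 1, 0])  # masked-then-restored truth
      imputed_dosages = np.array([0.1, 0.9, 1.8, 1.2, 0.0, 2.0, 0.7, 0.3])

      r = np.corrcoef(true_genotypes, imputed_dosages)[0, 1]
      print(r ** 2)  # imputation accuracy for this marker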

  15. Environmental education curriculum evaluation questionnaire: A reliability and validity study

    Science.gov (United States)

    Minner, Daphne Diane

    The intention of this research project was to bridge the gap between social science research and application to the environmental domain through the development of a theoretically derived instrument designed to give educators a template by which to evaluate environmental education curricula. The theoretical base for instrument development was provided by several developmental theories such as Piaget's theory of cognitive development, Developmental Systems Theory, Life-span Perspective, as well as curriculum research within the area of environmental education. This theoretical base fueled the generation of a list of components which were then translated into a questionnaire with specific questions relevant to the environmental education domain. The specific research question for this project is: Can a valid assessment instrument based largely on human development and education theory be developed that reliably discriminates high, moderate, and low quality in environmental education curricula? The types of analyses conducted to answer this question were interrater reliability (percent agreement, Cohen's Kappa coefficient, Pearson's Product-Moment correlation coefficient), test-retest reliability (percent agreement, correlation), and criterion-related validity (correlation). Face validity and content validity were also assessed through thorough reviews. Overall results indicate that 29% of the questions on the questionnaire demonstrated a high level of interrater reliability and 43% of the questions demonstrated a moderate level of interrater reliability. Seventy-one percent of the questions demonstrated a high test-retest reliability and 5% a moderate level. Fifty-five percent of the questions on the questionnaire were reliable (high or moderate) both across time and raters. Only eight questions (8%) did not show either interrater or test-retest reliability. The global overall rating of high, medium, or low quality was reliable across both coders and time, indicating
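
    Of the interrater statistics named above, Cohen's kappa is the least self-explanatory; a minimal sketch follows, with two invented rating vectors standing in for the coders' quality judgements.

      # Cohen's kappa: agreement between two raters corrected for chance agreement.
      from collections import Counter

      def cohens_kappa(rater_a, rater_b):
          n = len(rater_a)
          observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n
          freq_a, freq_b = Counter(rater_a), Counter(rater_b)
          # Chance agreement: both raters independently pick the same category.
          expected = sum(freq_a[c] * freq_b[c] for c in freq_a) / (n * n)
          return (observed - expected) / (1.0 - expected)

      a = ["high", "high", "low", "moderate", "low", "high"]
      b = ["high", "moderate", "low", "moderate", "low", "high"]
      print(cohens_kappa(a, b))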

  16. The evaluation of operator reliability factors on power reactor

    International Nuclear Information System (INIS)

    Karlina, Itjeu; Supriatna, Piping; W, Suharyo; Santosa, Kussigit; Darlis; S, Bambang; Y, Sasongko

    1999-01-01

    A sophisticated technological system does not by itself assure reliability, because it contains an element of human dependence that determines whether reactor operation proceeds smoothly and safely or a failure occurs and an accident promptly follows. An evaluation of operator reliability factors on the ABWR power reactor has been carried out, consisting of the criteria of skill and workload according to NUREG/CR-2254, NUREG/CR-4016 and NUREG-0835; the reliability of reactor operation, as it concerns the operator, is a synergy of skill and workload. An employee's skill will affect the type and level of their tasks. The operator's skill depends on education and experience, position or responsibility of tasks, physical conditions (age, physical/mental

  17. Reliability evaluation of nuclear power plants by fault tree analysis

    International Nuclear Information System (INIS)

    Iwao, H.; Otsuka, T.; Fujita, I.

    1993-01-01

    As a work sponsored by the Ministry of International Trade and Industry, the Safety Information Research Center of NUPEC, using reliability data based on the operational experience of the domestic LWR plants, has implemented FTA for the standard PWRs and BWRs in Japan, with reactor scram due to system failures as the top event. Up to this point, we have obtained the FT chart and minimal cut sets for each type of system failure for qualitative evaluation, and we have estimated system unavailability, Fussell-Vesely importance and risk worth of components for quantitative evaluation. As the second stage of a series in our reliability evaluation work, another program was started to establish a support system. The aim of this system is to assist foreign and domestic plants in creating countermeasures when incidents occur, by providing them with the necessary information using the above analytical method and its results. (author)
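
    The quantitative step described above, from minimal cut sets to a system-level figure, can be sketched with the usual rare-event approximation. The component labels and failure probabilities below are hypothetical, not plant data.

      # Top event probability from minimal cut sets (rare-event approximation).
      component_failure = {"pump_A": 1e-3, "pump_B": 1e-3, "valve": 5e-4, "sensor": 2e-4}

      # Each minimal cut set is a smallest combination of failures causing the top event.
      minimal_cut_sets = [
          {"pump_A", "pump_B"},  # both redundant pumps fail
          {"valve", "sensor"},   # valve fails and the failure goes undetected
      ]

      def cut_set_probability(cut_set):
          p = 1.0
          for component in cut_set:
              p *= component_failure[component]
          return p

      # Sum of cut set probabilities upper-bounds the top event probability.
      print(sum(cut_set_probability(cs) for cs in minimal_cut_sets))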

  18. How reliable are forensic evaluations of legal sanity?

    Science.gov (United States)

    Gowensmith, W Neil; Murrie, Daniel C; Boccaccini, Marcus T

    2013-04-01

    When different clinicians evaluate the same criminal defendant's legal sanity, do they reach the same conclusion? Because Hawaii law requires multiple, independent evaluations when questions about legal sanity arise, Hawaii allows for the first contemporary study of the reliability of legal sanity opinions in routine practice in the United States. We examined 483 evaluation reports, addressing 165 criminal defendants, in which up to three forensic psychiatrists or psychologists offered independent opinions on a defendant's legal sanity. Evaluators reached unanimous agreement regarding legal sanity in only 55.1% of cases. Evaluators tended to disagree more often when a defendant was under the influence of drugs or alcohol at the time of the offense. But evaluators tended to agree more often when they agreed about diagnosing a psychotic disorder, or when the defendant had been psychiatrically hospitalized shortly before the offense. In court, judges followed the majority opinion among evaluators in 91% of cases. But when judges disagreed with the majority opinion, they usually did so to find defendants legally sane, rather than insane. Overall, this study indicates that reliability among practicing forensic evaluators addressing legal sanity may be poorer than the field has tended to assume. Although agreement appears more likely in some cases than others, the frequent disagreements suggest a need for improved training and practice.

  19. Accident Sequence Evaluation Program: Human reliability analysis procedure

    Energy Technology Data Exchange (ETDEWEB)

    Swain, A.D.

    1987-02-01

    This document presents a shortened version of the procedure, models, and data for human reliability analysis (HRA) which are presented in the Handbook of Human Reliability Analysis With emphasis on Nuclear Power Plant Applications (NUREG/CR-1278, August 1983). This shortened version was prepared and tried out as part of the Accident Sequence Evaluation Program (ASEP) funded by the US Nuclear Regulatory Commission and managed by Sandia National Laboratories. The intent of this new HRA procedure, called the ''ASEP HRA Procedure,'' is to enable systems analysts, with minimal support from experts in human reliability analysis, to make estimates of human error probabilities and other human performance characteristics which are sufficiently accurate for many probabilistic risk assessments. The ASEP HRA Procedure consists of a Pre-Accident Screening HRA, a Pre-Accident Nominal HRA, a Post-Accident Screening HRA, and a Post-Accident Nominal HRA. The procedure in this document includes changes made after tryout and evaluation of the procedure in four nuclear power plants by four different systems analysts and related personnel, including human reliability specialists. The changes consist of some additional explanatory material (including examples), and more detailed definitions of some of the terms. 42 refs.

  20. Evaluation of Network Reliability for Computer Networks with Multiple Sources

    Directory of Open Access Journals (Sweden)

    Yi-Kuei Lin

    2012-01-01

    Full Text Available Evaluating the reliability of a network with multiple sources to multiple sinks is a critical issue from the perspective of quality management. Due to the unrealistic definition of paths of network models in previous literature, existing models are not appropriate for real-world computer networks such as the Taiwan Advanced Research and Education Network (TWAREN). This paper proposes a modified stochastic-flow network model to evaluate the network reliability of a practical computer network with multiple sources where data is transmitted through several light paths (LPs). Network reliability is defined as being the probability of delivering a specified amount of data from the sources to the sink. It is taken as a performance index to measure the service level of TWAREN. This paper studies the network reliability of the international portion of TWAREN from two sources (Taipei and Hsinchu) to one sink (New York) that goes through a submarine and land surface cable between Taiwan and the United States.

  2. The Data Evaluation for Obtaining Accuracy and Reliability

    International Nuclear Information System (INIS)

    Kim, Chang Geun; Chae, Kyun Shik; Lee, Sang Tae; Bhang, Gun Woong

    2012-01-01

    Numerous scientific measurement results pour in from papers, data books, etc., with the rapid growth of the internet. We encounter many different measurement results for the same measurand, and at that moment we are faced with choosing the most reliable one. But it is not as easy to choose and use accurate and reliable data as it is to pick a flavour at an ice cream parlour. Even expert users find it difficult to distinguish accurate and reliable scientific data from the huge volume of measurement results. For this reason, data evaluation is becoming more important with the fast growth of the internet and globalization. Furthermore, the expression of measurement results is not standardized. In response to these needs, international efforts have been strengthened. As a first step, the global harmonization of terminology used in metrology and of the expression of uncertainty in measurement was published by ISO. These methods have spread widely to many areas of science to obtain accuracy and reliability in measurement. In this paper, the GUM, SRD and data evaluation on atomic collisions are introduced.

  3. An Imputation Model for Dropouts in Unemployment Data

    Directory of Open Access Journals (Sweden)

    Nilsson Petra

    2016-09-01

    Full Text Available Incomplete unemployment data is a fundamental problem when evaluating labour market policies in several countries. Many unemployment spells end for unknown reasons; in the Swedish Public Employment Service's register, as many as 20 percent. This leads to an ambiguity regarding destination states (employment, unemployment, retired, etc.). According to complete combined administrative data, the employment rate among dropouts was close to 50 percent for the years 1992 to 2006, but from 2007 the employment rate has dropped to 40 percent or less. This article explores an imputation approach. We investigate imputation models estimated both on survey data from 2005/2006 and on complete combined administrative data from 2005/2006 and 2011/2012. The models are evaluated in terms of their ability to make correct predictions. The models have relatively high predictive power.

  4. Ultrasound evaluation of the abductor hallucis muscle: Reliability study

    Directory of Open Access Journals (Sweden)

    Hing Wayne A

    2008-09-01

    Full Text Available Abstract Background The abductor hallucis muscle (AbdH) plays an integral role during gait and is often affected in pathological foot conditions. The aim of this study was to evaluate the within- and between-session intra-tester reliability, using diagnostic ultrasound, of the dorso-plantar thickness, medio-lateral width and cross-sectional area of the AbdH in asymptomatic adults. Methods The AbdH muscles of thirty asymptomatic subjects were imaged and then measured using a Philips HD11 ultrasound machine. Intraclass correlation coefficients (ICC) with 95% confidence intervals (CI) were used to calculate both within- and between-session intra-tester reliability. Results The within-session reliability results demonstrated for dorso-plantar thickness an ICC of 0.97 (95% CI: 0.99–0.99); for medio-lateral width an ICC of 0.97 (95% CI: 0.92–0.97); and for cross-sectional area an ICC of 0.98 (95% CI: 0.98–0.99). Between-session reliability results demonstrated for dorso-plantar thickness an ICC of 0.97 (95% CI: 0.95 to 0.98); for medio-lateral width an ICC of 0.94 (95% CI: 0.90 to 0.96); and for cross-sectional area an ICC of 0.79 (95% CI: 0.65 to 0.88). Conclusion Diagnostic ultrasound has the potential to be a reliable tool for evaluating the AbdH muscle in asymptomatic subjects. Subsequent studies may be conducted to provide a better understanding of AbdH function in foot and ankle pathologies.

  5. Comparison of results from different imputation techniques for missing data from an anti-obesity drug trial

    DEFF Research Database (Denmark)

    Jørgensen, Anders W.; Lundstrøm, Lars H; Wetterslev, Jørn

    2014-01-01

    BACKGROUND: In randomised trials of medical interventions, the most reliable analysis follows the intention-to-treat (ITT) principle. However, the ITT analysis requires that missing outcome data be imputed. Different imputation techniques may give different results and some may lead to bias... of handling missing data in a 60-week placebo-controlled anti-obesity drug trial on topiramate. METHODS: We compared an analysis of complete cases with datasets where missing body weight measurements had been replaced using three different imputation methods: last observation carried forward (LOCF), baseline carried forward (BOCF) and MI...
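
    For illustration, LOCF and BOCF can be sketched in a few lines of pandas; the weight values and visit labels below are invented and do not come from the trial.

      import numpy as np
      import pandas as pd

      weights = pd.DataFrame(
          {"week0": [100.0, 95.0], "week30": [97.0, np.nan], "week60": [np.nan, np.nan]},
          index=["patient_1", "patient_2"],
      )

      locf = weights.ffill(axis=1)  # carry the last observed value forward
      bocf = weights.apply(lambda row: row.fillna(row["week0"]), axis=1)  # fall back to baseline
      print(locf, bocf, sep="\n\n")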

  6. Use of reliability data for QA program evaluation

    International Nuclear Information System (INIS)

    Guarro, S.B.

    1985-01-01

    Possible analytical approaches for evaluation of the effectiveness in the operation of US commercial nuclear power plants are discussed. These approaches may be based on key plant component performance comparisons, correlation models, or comprehensive cost-benefit evaluation frameworks. As plant availability and reliability data must be used to quantify the models, the quality of these data conditions the amount of information that can ultimately be extracted. The potential impact of uncertainties in the data must be considered carefully, especially before application of the more complex models. 10 refs., 4 tabs

  7. Imputation methods for filling missing data in urban air pollution data for Malaysia

    Directory of Open Access Journals (Sweden)

    Nur Afiqah Zakaria

    2018-06-01

    Full Text Available The air quality measurement data obtained from continuous ambient air quality monitoring (CAAQM) stations usually contain missing data. The missing observations usually occur due to machine failure, routine maintenance and human error. In this study, the hourly monitoring data of CO, O3, PM10, SO2, NOx, NO2, ambient temperature and humidity were used to evaluate four imputation methods (Mean Top Bottom, Linear Regression, Multiple Imputation and Nearest Neighbour). The air pollutant observations were simulated into four percentages of simulated missing data, i.e. 5%, 10%, 15% and 20%. Performance measures, namely the Mean Absolute Error, Root Mean Squared Error, Coefficient of Determination and Index of Agreement, were used to describe the goodness of fit of the imputation methods. From the results of the performance measures, the Mean Top Bottom method was selected as the most appropriate imputation method for filling in the missing values in the air pollutant data.
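
    The four performance measures named above can be computed directly from the withheld (true) observations and the imputed values. A minimal sketch with invented numbers, using Willmott's formulation of the Index of Agreement:

      import numpy as np

      def imputation_scores(true_vals, imputed):
          true_vals, imputed = np.asarray(true_vals), np.asarray(imputed)
          err = imputed - true_vals
          mae = np.abs(err).mean()                          # Mean Absolute Error
          rmse = np.sqrt((err ** 2).mean())                 # Root Mean Squared Error
          r2 = np.corrcoef(true_vals, imputed)[0, 1] ** 2   # Coefficient of Determination
          dev = np.abs(imputed - true_vals.mean()) + np.abs(true_vals - true_vals.mean())
          ioa = 1.0 - (err ** 2).sum() / (dev ** 2).sum()   # Index of Agreement
          return mae, rmse, r2, ioa

      print(imputation_scores([12.0, 30.5, 25.0, 8.2], [11.4, 28.9, 27.1, 9.0]))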

  8. Reliability evaluation of the Savannah River reactor leak detection system

    International Nuclear Information System (INIS)

    Daugherty, W.L.; Sindelar, R.L.; Wallace, I.T.

    1991-01-01

    The Savannah River Reactors have been in operation since the mid-1950s. The primary degradation mode for the primary coolant loop piping is intergranular stress corrosion cracking. The leak-before-break (LBB) capability of the primary system piping has been demonstrated as part of an overall structural integrity evaluation. One element of the LBB analyses is a reliability evaluation of the leak detection system. The most sensitive element of the leak detection system is the airborne tritium monitors. The presence of small amounts of tritium in the heavy water coolant provides the basis for a very sensitive system of leak detection. The reliability of the tritium monitors to properly identify a crack leaking at a rate of either 50 or 300 lb/day (0.004 or 0.023 gpm, respectively) has been characterized. These leak rates correspond to action points for which specific operator actions are required. High reliability has been demonstrated using standard fault tree techniques. The probability of not detecting a leak within an assumed mission time of 24 hours is estimated to be approximately 5 × 10⁻⁵ per demand. This result is obtained for both leak rates considered. The methodology and assumptions used to obtain this result are described in this paper. 3 refs., 1 fig., 1 tab

  9. Evaluation of Reliability Parameters in Micro-grid

    Directory of Open Access Journals (Sweden)

    H. Hasanzadeh Fard

    2015-06-01

    Full Text Available Evaluation of the reliability parameters in micro-grids based on renewable energy sources is one of the main problems that are investigated in this paper. Renewable energy sources such as solar and wind energy, battery as an energy storage system and fuel cell as a backup system are used to provide power to the electrical loads of the micro-grid. Loads in the micro-grid consist of interruptible and uninterruptible loads. In addition to the reliability parameters, Forced Outage Rate of each component and also uncertainty of wind power, PV power and demand are considered for micro-grid. In this paper, the problem is formulated as a nonlinear integer minimization problem which minimizes the sum of the total capital, operational, maintenance and replacement cost of DERs. This paper proposes PSO for solving this minimization problem.

  10. Evaluation of the reliability of a passive system

    International Nuclear Information System (INIS)

    Bianchi, F.; Burgazzi, L.; D'Auria, F.; Galassi, G.M.; Ricotti, M.E.; Oriani, L.

    2001-01-01

    A passive system should in theory be more reliable than an active one: its operation is independent of any external input or energy, relying only upon natural physical laws (e.g., gravity, natural circulation, etc.) and/or 'intelligent' use of the energy inherently available in the system (e.g., chemical reaction, decay heat, etc.). Nevertheless, a passive system may fail its mission as a consequence of component failures or of deviations of the physical phenomena and the boundary and/or initial conditions from expectations. This document first describes the methodology developed by ENEA, in collaboration with the University of Pisa and the Polytechnic of Milan, for evaluating the reliability of a passive system whose operation is based on moving working fluids (types B and C, cf. IAEA). It then reports the results of an exercise performed on a system whose operation is based on natural circulation. (author)

  11. Evaluation of reliability of EC inspection of VVER SG tubes

    International Nuclear Information System (INIS)

    Stanic, D.

    2001-01-01

    Evaluation of eddy current data collected during inspection of VVER steam generators is a very complex task because of the numerous parameters which affect eddy current signals. For this reason INETEC recently started a related scientific project to evaluate the reliability of eddy current (EC) inspection of VVER steam generator (SG) tubing. In the scope of the project the following objectives will be investigated: 1. determination of the POD (probability of detection) of various types of degradation cracks, whose basic parameters (depth, length, width, orientation, number) are variables, on three different sets of tubes (clean ideal tubes, tubes with pilgering, tubes electroplated with copper); 2. sizing quality (accuracy, repeatability) on the same data sets as defined in 1; 3. the effect of fill factor on POD and sizing quality; 4. the effect of tube bends on POD and sizing quality; 5. the effect of other tube geometry variations on POD and sizing quality (tube ovality, transition zone region, expanded (rolled) part of tube, dents, dings). Investigation will start with the bobbin probe technique, which is the most used technique for general-purpose VVER tube examination. Since INETEC is the only company worldwide which has successfully developed and applied the rotating probe technique for VVER SG tubes, the scope of the project will be extended to the rotating probe technique utilizing 'pancake' and 'point' coils. Method reliability will be investigated first on a large set of EDM notches representing various defect morphologies and simulating different factors, and in the second part on sets of degradation defects obtained by artificial corrosion. In the scope of the project the measures for enhancing the method reliability have to be determined. This considers the proper definition of the parameters of the examination system, as well as the establishment of suitable analysis procedures. This article presents the preliminary results of the first part of

  12. An evaluation of the multi-state node networks reliability using the traditional binary-state networks reliability algorithm

    International Nuclear Information System (INIS)

    Yeh, W.-C.

    2003-01-01

    A system where the components and the system itself are allowed to have a number of performance levels is called a multi-state system (MSS). A multi-state node network (MNN) is a generalization of the MSS that does not satisfy the flow conservation law. Evaluating MNN reliability arises at the design and exploitation stage of many types of technical systems. Up to now, the known existing methods can only evaluate the reliability of a special MNN called the multi-state node acyclic network (MNAN), in which no cycle is allowed. However, no method exists for evaluating the general MNN reliability. The main purpose of this article is to show first that each MNN reliability problem can be solved using any traditional binary-state network (TBSN) reliability algorithm with a special code for the state probability. A simple heuristic SDP algorithm based on minimal cuts (MC) for estimating the MNN reliability is presented as an example to show how a TBSN reliability algorithm is revised to solve the MNN reliability problem. To the author's knowledge, this study is the first to discuss the relationships between MNN and TBSN and also the first to present methods to solve the exact and approximated MNN reliability. One example illustrates how the exact MNN reliability is obtained using the proposed algorithm

  13. Reliability evaluation for hinges of folder devices using ESPI

    International Nuclear Information System (INIS)

    Lee, Tae Hun; Chang, Seok Weon; Jhang, Kyung Young

    2004-01-01

    Folder type electronic devices have a hinge to support the rotational motion of the folder. This hinge is stressed by the rotational inertia moment of the folder at its maximum open position. The stress is repeated whenever the folder is opened, and it is a cause of hinge fracture. In this paper, the reliability evaluation for hinge fracture in a folder type cellular phone is discussed. For this, a durability testing machine using a crank-rocker mechanism was developed to evaluate the life cycle of the hinge, and the degradation after repeated opening and shutting is evaluated from the deformation around the hinge, where the deformation is measured by ESPI (electronic speckle pattern interferometry). Experimental results showed that ESPI was able to measure the deformation of the hinge precisely, so the change of deformation around the hinge could be monitored as the number of folder openings increased.

  14. Reliability evaluation of containments including soil-structure interaction

    International Nuclear Information System (INIS)

    Pires, J.; Hwang, H.; Reich, M.

    1985-12-01

    Soil-structure interaction effects on the reliability assessment of containment structures are examined. The probability-based method for reliability evaluation of nuclear structures developed at Brookhaven National Laboratory is extended to include soil-structure interaction effects. In this method, the reliability of structures is expressed in terms of limit state probabilities. Furthermore, random vibration theory is utilized to calculate limit state probabilities under random seismic loads. Earthquake ground motion is modeled by a segment of a zero-mean, stationary, filtered Gaussian white noise random process, represented by its power spectrum. All possible seismic hazards at a site, represented by a hazard curve, are also included in the analysis. The soil-foundation system is represented by a rigid surface foundation on an elastic halfspace. Random and other uncertainties in the strength properties of the structure and in the stiffness and internal damping of the soil are also included in the analysis. Finally, a realistic reinforced concrete containment is analyzed to demonstrate the application of the method. For this containment, the soil-structure interaction effects on: (1) limit state probabilities, (2) structural fragility curves, (3) floor response spectra with probabilistic content, and (4) correlation coefficients for total acceleration response at specified structural locations are examined in detail. 25 refs., 21 figs., 12 tabs

  15. Reliability evaluation of oil pipelines operating in aggressive environment

    Science.gov (United States)

    Magomedov, R. M.; Paizulaev, M. M.; Gebel, E. S.

    2017-08-01

    In connection with modern increased requirements for ecology and safety, the development of a complex of diagnostic services is obligatory and necessary to ensure the reliable operation of the gas transportation infrastructure. Estimation of the technical condition of oil pipelines should be carried out not only to establish the current values of the technological parameters of equipment in operation, but also to predict the dynamics of changes in the physical and mechanical characteristics of the material, the appearance of defects, etc., so as to ensure reliable and safe operation. In the paper, existing Russian and foreign methods for evaluating oil pipeline reliability are considered, taking into account one of the main factors leading to the appearance of crevices in the pipeline material and to changes in the shape of its cross-section: corrosion. Without compromising the generality of the reasoning, an assumption of uniform corrosion wear for an initial rectangular cross-section has been made. As a result, a formula for calculating the probability of failure-free operation was derived. The proposed mathematical model makes it possible to predict emergency situations, as well as to determine optimal operating conditions for oil pipelines.

  16. Non-destructive Reliability Evaluation of Electronic Device by ESPI

    International Nuclear Information System (INIS)

    Yoon, Sung Un; Kim, Koung Suk; Kang, Ki Soo; Jo, Seon Hyung

    2001-01-01

    This paper proposes electronic speckle pattern interferometry (ESPI) for the reliability evaluation of electronic devices. In particular, vibration problems in air conditioner fans, washing machine motors, etc. are an important factor in the design of such devices, but it is difficult to apply the previous method, the accelerometer, to devices with complex geometry. ESPI, a non-contact measurement technique, is applied to the vibration analysis of a commercial air conditioner fan. Vibration mode shapes, natural frequencies and the frequency range are determined and compared with those of an FEM analysis. In the mechanical design of new products, ESPI compensates for the weak points of the previous method and supplies effective design information

  17. Study of evaluation techniques of software configuration management and reliability

    Energy Technology Data Exchange (ETDEWEB)

    Youn, Cheong; Baek, Y. W.; Kim, H. C.; Han, H. C.; Choi, C. R. [Chungnam National Univ., Taejon (Korea, Republic of)

    2001-03-15

    The study of activities to assure software safety and quality must be carried out on the basis of an established software development process for digitalized nuclear plants. In particular, software testing and verification and validation (V and V) must be studied. For this purpose, methodologies and tools which can improve software quality are evaluated, and software testing, V and V and configuration management which can be applied to the software life cycle are investigated. This study establishes a guideline that can be used to assure software safety and reliability requirements in digitalized nuclear plant systems.

  18. Evaluating the reliability of predictions made using environmental transfer models

    International Nuclear Information System (INIS)

    1989-01-01

    The development and application of mathematical models for predicting the consequences of releases of radionuclides into the environment from normal operations in the nuclear fuel cycle and in hypothetical accident conditions have increased dramatically in the last two decades. This Safety Practice publication has been prepared to provide guidance on the available methods for evaluating the reliability of environmental transfer model predictions. It provides a practical introduction to the subject, and particular emphasis has been given to worked examples in the text. It is intended to supplement existing IAEA publications on environmental assessment methodology. 60 refs, 17 figs, 12 tabs

  19. Data driven estimation of imputation error-a strategy for imputation with a reject option

    DEFF Research Database (Denmark)

    Bak, Nikolaj; Hansen, Lars Kai

    2016-01-01

    Missing data is a common problem in many research fields and is a challenge that always needs careful consideration. One approach is to impute the missing values, i.e., replace missing values with estimates. When imputation is applied, it is typically applied to all records with missing values i...

  20. Improving accuracy of rare variant imputation with a two-step imputation approach

    DEFF Research Database (Denmark)

    Kreiner-Møller, Eskil; Medina-Gomez, Carolina; Uitterlinden, André G

    2015-01-01

    Genotype imputation has been the pillar of the success of genome-wide association studies (GWAS) for identifying common variants associated with common diseases. However, most GWAS have been run using only 60 HapMap samples as reference for imputation, meaning less frequent and rare variants not being comprehensively scrutinized. Next-generation arrays ensuring sufficient coverage together with new reference panels, as the 1000 Genomes panel, are emerging to facilitate imputation of low frequent single-nucleotide polymorphisms (minor allele frequency (MAF) ... reference sample genotyped on a dense array and hereafter to the 1000 Genomes reference panel. We show that mean imputation quality, measured by the r(2) using this approach, increases by 28% for variants with a MAF between 1 and 5% as compared with direct imputation to 1000 Genomes reference. Similarly...

  1. Process evaluation of the human reliability data bank

    International Nuclear Information System (INIS)

    Miller, D.P.; Comer, K.

    1985-01-01

    The US Nuclear Regulatory Commission and Sandia National Laboratories have been developing a plan for a human reliability data bank since August 1981. This research is in response to the data needs of the nuclear power industry's probabilistic risk assessment community. The three phases of the program are to: (a) develop the data bank concept, (b) develop an implementation plan and conduct a process evaluation, and (c) assist a sponsor in implementing the data bank. The program is now in Phase B. This paper describes the methods used and the results of the process evaluation. Decisions to be made in the future regarding full-scale implementation will be based, in part, on the outcome of this study

  3. On multivariate imputation and forecasting of decadal wind speed missing data.

    Science.gov (United States)

    Wesonga, Ronald

    2015-01-01

    This paper demonstrates the application of multiple imputation by chained equations and time series forecasting to wind speed data. The study was motivated by the high prevalence of missing values in historic wind speed data. Findings based on the fully conditional specification under multiple imputation by chained equations provided reliable imputations of the missing wind speed data. Further, the forecasting model shows the smoothing parameter, alpha (0.014), to be close to zero, confirming that recent past observations are more suitable for forecasting wind speeds. The maximum decadal wind speed for Entebbe International Airport was estimated to be 17.6 metres per second at a 0.05 level of significance, with a bound on the error of estimation of 10.8 metres per second. The large bound on the error of estimation confirms the dynamic tendencies of wind speed at the airport under study.
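
    The chained-equations idea can be sketched with scikit-learn's IterativeImputer, which implements the same fully conditional specification strategy (as a single imputation, not the full multiple-imputation procedure used in the paper). The small wind-speed matrix below is invented.

      import numpy as np
      from sklearn.experimental import enable_iterative_imputer  # noqa: F401
      from sklearn.impute import IterativeImputer

      # Rows are hours; columns might be wind speed, temperature and humidity.
      X = np.array([
          [5.2, 24.0, 71.0],
          [np.nan, 25.5, 68.0],
          [6.1, np.nan, 66.0],
          [4.8, 23.1, np.nan],
      ])

      # Each column with missing values is modelled conditionally on the others.
      imputer = IterativeImputer(max_iter=10, random_state=0)
      print(imputer.fit_transform(X))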

  4. A comparative evaluation of five human reliability assessment techniques

    International Nuclear Information System (INIS)

    Kirwan, B.

    1988-01-01

    A field experiment was undertaken to evaluate the accuracy, usefulness, and resource requirements of five human reliability quantification techniques: the Technique for Human Error Rate Prediction (THERP), Paired Comparisons, the Human Error Assessment and Reduction Technique (HEART), the Success Likelihood Index Method (SLIM) with Multi-Attribute Utility Decomposition (MAUD), and Absolute Probability Judgement. This was achieved by assessing technique predictions against a set of known human error probabilities, and by comparing their predictions on a set of five realistic Probabilistic Risk Assessment (PRA) human errors. On a combined measure of accuracy, THERP and Absolute Probability Judgement performed best, whilst HEART showed indications of accuracy and was lower in resource usage than the other techniques. HEART and THERP both appear to benefit from using trained assessors in order to obtain the best results. SLIM and Paired Comparisons require further research on achieving a robust calibration relationship between their scale values and absolute probabilities. (author)

  5. Reliability evaluation of thermophysical properties from first-principles calculations.

    Science.gov (United States)

    Palumbo, Mauro; Fries, Suzana G; Dal Corso, Andrea; Körmann, Fritz; Hickel, Tilmann; Neugebauer, Jörg

    2014-08-20

    Thermophysical properties, such as heat capacity, bulk modulus and thermal expansion, are of great importance for many technological applications and are traditionally determined experimentally. With the rapid development of computational methods, however, first-principles computed temperature-dependent data are nowadays accessible. We evaluate various computational realizations of such data in comparison to the experimental scatter. The work is focussed on the impact of different first-principles codes (QUANTUM ESPRESSO and VASP), pseudopotentials (ultrasoft and projector augmented wave) as well as phonon determination methods (linear response and direct force constant method) on these properties. Based on the analysis of data for two pure elements, Cr and Ni, consequences for the reliability of temperature-dependent first-principles data in computational thermodynamics are discussed.

  6. A nonparametric multiple imputation approach for missing categorical data

    Directory of Open Access Journals (Sweden)

    Muhan Zhou

    2017-06-01

    Full Text Available Abstract Background Incomplete categorical variables with more than two categories are common in public health data. However, most of the existing missing-data methods do not use the information from nonresponse (missingness) probabilities. Methods We propose a nearest-neighbour multiple imputation approach to impute a missing at random categorical outcome and to estimate the proportion of each category. The donor set for imputation is formed by measuring distances between each missing value and the non-missing values. The distance function is calculated based on a predictive score, which is derived from two working models: one fits a multinomial logistic regression for predicting the missing categorical outcome (the outcome model) and the other fits a logistic regression for predicting missingness probabilities (the missingness model). A weighting scheme is used to accommodate contributions from the two working models when generating the predictive score. A missing value is imputed by randomly selecting one of the non-missing values with the smallest distances. We conduct a simulation to evaluate the performance of the proposed method and compare it with several alternative methods. A real-data application is also presented. Results The simulation study suggests that the proposed method performs well when missingness probabilities are not extreme under some misspecifications of the working models. However, the calibration estimator, which is also based on two working models, can be highly unstable when missingness probabilities for some observations are extremely high. In this scenario, the proposed method produces more stable and better estimates. In addition, proper weights need to be chosen to balance the contributions from the two working models and achieve optimal results for the proposed method. Conclusions We conclude that the proposed multiple imputation method is a reasonable approach to dealing with missing categorical outcome data with

  7. Development on methods for evaluating structure reliability of piping components

    International Nuclear Information System (INIS)

    Schimpfke, T.; Grebner, H.; Peschke, J.; Sievers, J.

    2003-01-01

    In the frame of the German reactor safety research program of the Federal Ministry of Economics and Labour, GRS has started to develop an analysis code named PROST (PRObabilistic STructure analysis) for estimating the leak and break probabilities of piping systems in nuclear power plants. The development is based on the experience achieved with applications of the publicly available US code PRAISE 3.10 (Piping Reliability Analysis Including Seismic Events), which was supplemented by additional features regarding the statistical evaluation and the crack orientation. PROST is designed to be more flexible to changes and supplementations. Up to now it can be used for calculating fatigue problems. The paper mentions the main capabilities and theoretical background of the present PROST development and presents a parametric study on the influence of changing the method of stress intensity factor and limit load calculation, as well as the statistical evaluation options, on the leak probability of an exemplary pipe with a postulated axial crack distribution. Furthermore, the resulting leak probability of an exemplary pipe with a postulated circumferential crack distribution is compared with the results of the modified PRAISE computer program. The intention of this investigation is to show trends. Therefore the resulting absolute values for probabilities should not be considered as realistic evaluations. (author)

  8. Methods for reliability evaluation of trust and reputation systems

    Science.gov (United States)

    Janiszewski, Marek B.

    2016-09-01

    Trust and reputation systems are a systematic approach to building security on the basis of observations of nodes' behaviour. The exchange of nodes' opinions about other nodes is very useful for indicating nodes which act selfishly or maliciously. The idea behind trust and reputation systems gains significance because conventional security measures (based on cryptography) are often not sufficient. Trust and reputation systems can be used in various types of networks such as WSN, MANET, P2P and also in e-commerce applications. Trust and reputation systems bring not only benefits but could also be a threat themselves. Many attacks aimed at trust and reputation systems exist, but such attacks have still not gained enough attention from research teams. Moreover, the joint effects of many of the known attacks have been identified as a very interesting field of research. The lack of an acknowledged methodology for evaluating trust and reputation systems is a serious problem. This paper aims at presenting various approaches to the evaluation of such systems. This work also contains a description of a generalization of many trust and reputation systems which can be used to evaluate the reliability of such systems in the context of preventing various attacks.

  9. Cost reduction for web-based data imputation

    KAUST Repository

    Li, Zhixu; Shang, Shuo; Xie, Qing; Zhang, Xiangliang

    2014-01-01

    Web-based Data Imputation enables the completion of incomplete data sets by retrieving absent field values from the Web. In particular, complete fields can be used as keywords in imputation queries for absent fields. However, due to the ambiguity

  10. Evaluation and improvement of nondestructive evaluation reliability for inservice inspection of light water reactors

    International Nuclear Information System (INIS)

    Bates, D.J.; Deffenbaugh, J.D.; Good, M.S.; Heasler, P.G.; Mart, G.A.; Simonen, F.A.; Spanner, J.C.; Taylor, T.T.; Van Fleet, L.G.

    1987-01-01

    The Evaluation and Improvement of NDE Reliability for Inservice Inspection (ISI) of Light Water Reactors (NDE Reliability) Program at Pacific Northwest Laboratory (PNL) was established to determine the reliability of current ISI techniques and to develop recommendations that will ensure a suitably high inspection reliability. The objectives of this NRC program are to: (1) determine the reliability of ultrasonic ISI performed on commercial light-water reactor (LWR) primary systems; (2) using probabilistic fracture mechanics analysis, determine the impact of NDE unreliability on system safety and determine the level of inspection reliability required to ensure a suitably low failure probability; (3) evaluate the degree of reliability improvement that could be achieved using improved and advanced NDE techniques, based on material properties, service conditions, and NDE uncertainties; and (4) recommend revisions to ASME Code, Section XI, and Regulatory Requirements that will ensure suitably low failure probabilities. The scope of this program is limited to ISI of primary systems; the results and recommendations may also be applicable to Class II piping systems

  11. Distribution system reliability evaluation using credibility theory

    African Journals Online (AJOL)

    In this paper, a hybrid algorithm based on fuzzy simulation and Failure Mode and Effect Analysis (FMEA) is applied to determine fuzzy reliability indices of distribution system. This approach can obtain fuzzy expected values and their variances of reliability indices, and the credibilities of reliability indices meeting specified ...

  12. Comparison of missing value imputation methods in time series: the case of Turkish meteorological data

    Science.gov (United States)

    Yozgatligil, Ceylan; Aslan, Sipan; Iyigun, Cem; Batmaz, Inci

    2013-04-01

    This study aims to compare several imputation methods for completing the missing values of spatio-temporal meteorological time series. To this end, six imputation methods are assessed with respect to various criteria, including accuracy, robustness, precision, and efficiency, for artificially created missing data in monthly total precipitation and mean temperature series obtained from the Turkish State Meteorological Service. Of these methods, simple arithmetic average, normal ratio (NR), and NR weighted with correlations comprise the simple ones, whereas a multilayer perceptron type neural network and a multiple imputation strategy adopted by Markov chain Monte Carlo based on expectation-maximization (EM-MCMC) are the computationally intensive ones. In addition, we propose a modification of the EM-MCMC method. Besides using a conventional accuracy measure based on squared errors, we also suggest the correlation dimension (CD) technique of nonlinear dynamic time series analysis, which takes spatio-temporal dependencies into account, for evaluating imputation performances. Based on the detailed graphical and quantitative analysis, it can be said that although the computational methods, particularly the EM-MCMC method, are computationally inefficient, they seem favorable for the imputation of meteorological time series with respect to different missingness periods, considering both measures and both series studied. To conclude, using the EM-MCMC algorithm for imputing missing values before conducting any statistical analyses of meteorological data will definitely decrease the amount of uncertainty and give more robust results. Moreover, the CD measure can be suggested for the performance evaluation of missing data imputation, particularly with computational methods, since it gives more precise results in meteorological time series.

  13. Assessment of imputation methods using varying ecological information to fill the gaps in a tree functional trait database

    Science.gov (United States)

    Poyatos, Rafael; Sus, Oliver; Vilà-Cabrera, Albert; Vayreda, Jordi; Badiella, Llorenç; Mencuccini, Maurizio; Martínez-Vilalta, Jordi

    2016-04-01

    Plant functional traits are increasingly being used in ecosystem ecology thanks to the growing availability of large ecological databases. However, these databases usually contain a large fraction of missing data because measuring plant functional traits systematically is labour-intensive and because most databases are compilations of datasets with different sampling designs. As a result, within a given database, there is an inevitable variability in the number of traits available for each data entry and/or the species coverage in a given geographical area. The presence of missing data may severely bias trait-based analyses, such as the quantification of trait covariation or trait-environment relationships and may hamper efforts towards trait-based modelling of ecosystem biogeochemical cycles. Several data imputation (i.e. gap-filling) methods have been recently tested on compiled functional trait databases, but the performance of imputation methods applied to a functional trait database with a regular spatial sampling has not been thoroughly studied. Here, we assess the effects of data imputation on five tree functional traits (leaf biomass to sapwood area ratio, foliar nitrogen, maximum height, specific leaf area and wood density) in the Ecological and Forest Inventory of Catalonia, an extensive spatial database (covering 31900 km2). We tested the performance of species mean imputation, single imputation by the k-nearest neighbors algorithm (kNN) and a multiple imputation method, Multivariate Imputation with Chained Equations (MICE) at different levels of missing data (10%, 30%, 50%, and 80%). We also assessed the changes in imputation performance when additional predictors (species identity, climate, forest structure, spatial structure) were added in kNN and MICE imputations. We evaluated the imputed datasets using a battery of indexes describing departure from the complete dataset in trait distribution, in the mean prediction error, in the correlation matrix
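
    Of the methods compared above, single imputation by kNN is easy to sketch with scikit-learn's KNNImputer; the MICE comparison and the additional ecological predictors are omitted, and the trait values below are invented.

      import numpy as np
      from sklearn.impute import KNNImputer

      # Rows are trees/plots; columns might be foliar N, maximum height, wood density.
      traits = np.array([
          [1.8, 22.0, 0.55],
          [2.1, np.nan, 0.48],
          [np.nan, 18.5, 0.60],
          [1.6, 25.0, np.nan],
      ])

      # Missing entries are filled from the k most similar rows (distance-weighted).
      imputer = KNNImputer(n_neighbors=2, weights="distance")
      print(imputer.fit_transform(traits))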

  14. Fully conditional specification in multivariate imputation

    NARCIS (Netherlands)

    van Buuren, S.; Brand, J. P.L.; Groothuis-Oudshoorn, C. G.M.; Rubin, D. B.

    2006-01-01

    The use of the Gibbs sampler with fully conditionally specified models, where the distribution of each variable given the other variables is the starting point, has become a popular method to create imputations in incomplete multivariate data. The theoretical weakness of this approach is that the

  15. Saturated linkage map construction in Rubus idaeus using genotyping by sequencing and genome-independent imputation

    Directory of Open Access Journals (Sweden)

    Ward Judson A

    2013-01-01

    Full Text Available Abstract Background Rapid development of highly saturated genetic maps aids molecular breeding, which can accelerate gain per breeding cycle in woody perennial plants such as Rubus idaeus (red raspberry). Recently, robust genotyping methods based on high-throughput sequencing were developed, which provide high marker density, but result in some genotype errors and a large number of missing genotype values. Imputation can reduce the number of missing values and can correct genotyping errors, but current methods of imputation require a reference genome and thus are not an option for most species. Results Genotyping by Sequencing (GBS) was used to produce highly saturated maps for an R. idaeus pseudo-testcross progeny. While low coverage and high variance in sequencing resulted in a large number of missing values for some individuals, a novel method of imputation based on maximum likelihood marker ordering from initial marker segregation overcame the challenge of missing values, and made map construction computationally tractable. The two resulting parental maps contained 4521 and 2391 molecular markers spanning 462.7 and 376.6 cM respectively over seven linkage groups. Detection of precise genomic regions with segregation distortion was possible because of map saturation. Microsatellites (SSRs) linked these results to published maps for cross-validation and map comparison. Conclusions GBS together with genome-independent imputation provides a rapid method for genetic map construction in any pseudo-testcross progeny. Our method of imputation estimates the correct genotype call of missing values and corrects genotyping errors that lead to inflated map size and reduced precision in marker placement. Comparison of SSRs to published R. idaeus maps showed that the linkage maps constructed with GBS and our method of imputation were robust, and marker positioning reliable. The high marker density allowed identification of genomic regions with segregation

  16. Reliability model analysis and primary experimental evaluation of laser triggered pulse trigger

    International Nuclear Information System (INIS)

    Chen Debiao; Yang Xinglin; Li Yuan; Li Jin

    2012-01-01

    A high performance pulse trigger can enhance the performance and stability of the PPS. It is necessary to evaluate the reliability of the LTGS pulse trigger, so we establish a reliability analysis model of this pulse trigger based on the CARMES software; the reliability evaluation accords with the statistical results. (authors)

  17. LinkImputeR: user-guided genotype calling and imputation for non-model organisms.

    Science.gov (United States)

    Money, Daniel; Migicovsky, Zoë; Gardner, Kyle; Myles, Sean

    2017-07-10

    Genomic studies such as genome-wide association and genomic selection require genome-wide genotype data. All existing technologies used to create these data result in missing genotypes, which are often then inferred using genotype imputation software. However, existing imputation methods most often make use only of genotypes that are successfully inferred after having passed a certain read depth threshold. Because of this, any read information for genotypes that did not pass the threshold, and were thus set to missing, is ignored. Most genomic studies also choose read depth thresholds and quality filters without investigating their effects on the size and quality of the resulting genotype data. Moreover, almost all genotype imputation methods require ordered markers and are therefore of limited utility in non-model organisms. Here we introduce LinkImputeR, a software program that exploits the read count information that is normally ignored, and makes use of all available DNA sequence information for the purposes of genotype calling and imputation. It is specifically designed for non-model organisms since it requires neither ordered markers nor a reference panel of genotypes. Using next-generation DNA sequence (NGS) data from apple, cannabis and grape, we quantify the effect of varying read count and missingness thresholds on the quantity and quality of genotypes generated from LinkImputeR. We demonstrate that LinkImputeR can increase the number of genotype calls by more than an order of magnitude, can improve genotyping accuracy by several percent and can thus improve the power of downstream analyses. Moreover, we show that the effects of quality and read depth filters can differ substantially between data sets and should therefore be investigated on a per-study basis. By exploiting DNA sequence data that is normally ignored during genotype calling and imputation, LinkImputeR can significantly improve both the quantity and quality of genotype data generated from

  18. Multiple Imputation of a Randomly Censored Covariate Improves Logistic Regression Analysis.

    Science.gov (United States)

    Atem, Folefac D; Qian, Jing; Maye, Jacqueline E; Johnson, Keith A; Betensky, Rebecca A

    2016-01-01

    Randomly censored covariates arise frequently in epidemiologic studies. The most commonly used methods, including complete case and single imputation or substitution, suffer from inefficiency and bias. They make strong parametric assumptions or they consider limit of detection censoring only. We employ multiple imputation, in conjunction with semi-parametric modeling of the censored covariate, to overcome these shortcomings and to facilitate robust estimation. We develop a multiple imputation approach for randomly censored covariates within the framework of a logistic regression model. We use the non-parametric estimate of the covariate distribution or the semiparametric Cox model estimate in the presence of additional covariates in the model. We evaluate this procedure in simulations, and compare its operating characteristics to those from the complete case analysis and a survival regression approach. We apply the procedures to an Alzheimer's study of the association between amyloid positivity and maternal age of onset of dementia. Multiple imputation achieves lower standard errors and higher power than the complete case approach under heavy and moderate censoring and is comparable under light censoring. The survival regression approach achieves the highest power among all procedures, but does not produce interpretable estimates of association. Multiple imputation offers a favorable alternative to complete case analysis and ad hoc substitution methods in the presence of randomly censored covariates within the framework of logistic regression.

  19. VIGAN: Missing View Imputation with Generative Adversarial Networks.

    Science.gov (United States)

    Shang, Chao; Palmer, Aaron; Sun, Jiangwen; Chen, Ko-Shin; Lu, Jin; Bi, Jinbo

    2017-01-01

    In an era when big data are becoming the norm, there is less concern with the quantity of data and more with their quality and completeness. In many disciplines, data are collected from heterogeneous sources, resulting in multi-view or multi-modal datasets. The missing data problem has been challenging to address in multi-view data analysis. Especially, when certain samples miss an entire view of data, it creates the missing view problem. Classic multiple imputation or matrix completion methods are hardly effective here, since there is no information in the affected view on which to base the imputation for such samples. The commonly-used simple method of removing samples with a missing view can dramatically reduce sample size, thus diminishing the statistical power of a subsequent analysis. In this paper, we propose a novel approach for view imputation via generative adversarial networks (GANs), which we name VIGAN. This approach first treats each view as a separate domain and identifies domain-to-domain mappings via a GAN using randomly-sampled data from each view, and then employs a multi-modal denoising autoencoder (DAE) to reconstruct the missing view from the GAN outputs based on paired data across the views. Then, by optimizing the GAN and DAE jointly, our model enables the knowledge integration for domain mappings and view correspondences to effectively recover the missing view. Empirical results on benchmark datasets validate the VIGAN approach by comparing against the state of the art. The evaluation of VIGAN in a genetic study of substance use disorders further proves the effectiveness and usability of this approach in life science.

  20. Nonparametric autocovariance estimation from censored time series by Gaussian imputation.

    Science.gov (United States)

    Park, Jung Wook; Genton, Marc G; Ghosh, Sujit K

    2009-02-01

    One of the most frequently used methods to model the autocovariance function of a second-order stationary time series is to use the parametric framework of autoregressive and moving average models developed by Box and Jenkins. However, such parametric models, though very flexible, may not always be adequate to model autocovariance functions with sharp changes. Furthermore, if the data do not follow the parametric model and are censored at a certain value, the estimation results may not be reliable. We develop a Gaussian imputation method to estimate an autocovariance structure via nonparametric estimation of the autocovariance function in order to address both censoring and incorrect model specification. We demonstrate the effectiveness of the technique in terms of bias and efficiency with simulations under various rates of censoring and underlying models. We describe its application to a time series of silicon concentrations in the Arctic.

  1. Reliability Evaluation Of The City Transport Buses Under Actual Conditions

    Directory of Open Access Journals (Sweden)

    Rymarz Joanna

    2015-12-01

    Full Text Available The purpose of this paper was to present a reliability comparison of two types of city transport buses. A case study of two well-known brands of city buses, Solaris Urbino 12 and Mercedes-Benz 628 Conecto L, used at the Municipal Transport Company in Lublin is presented in detail. A reliability index for the most failure-prone parts and complex systems was determined from the times between failures. The analysis covered damages of the following systems: engine, electrical system, pneumatic system, brake system, driving system, central heating and air-conditioning, and doors. Reliability was analyzed based on the Weibull model. It has been demonstrated that significant reliability differences occur during operation between buses produced nowadays.
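
    A minimal sketch of a Weibull fit of this kind, using scipy on invented times between failures; the two-parameter fit with a fixed zero location is an assumption:

      from scipy import stats

      hours_between_failures = [120, 340, 95, 410, 260, 150, 500, 310, 220, 180]
      shape, loc, scale = stats.weibull_min.fit(hours_between_failures, floc=0)
      print(f"Weibull shape (beta) = {shape:.2f}, scale (eta) = {scale:.0f} h")
      # beta < 1 hints at early failures, beta ~ 1 at random failures, beta > 1 at wear-out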

  2. Reliability Evaluation for Optimizing Electricity Supply in a Developing Country

    Directory of Open Access Journals (Sweden)

    Mark Ndubuka NWOHU

    2007-09-01

    Full Text Available The reliability standards for electricity supply in a developing country, like Nigeria, have to be determined based on past engineering principles and practice. Because of the high demand for electrical power due to rapid development, industrialization and rural electrification, the economic, social and political climate in which the electric power supply industry now operates should be critically viewed to ensure that the production of electrical power is augmented and remains uninterrupted. This paper presents an economic framework that can be used to optimize electric power system reliability. Finally, the cost models are investigated to take into account the economic analysis of system reliability, which can be periodically updated to improve the overall reliability of the electric power system.

  3. BRITS: Bidirectional Recurrent Imputation for Time Series

    OpenAIRE

    Cao, Wei; Wang, Dong; Li, Jian; Zhou, Hao; Li, Lei; Li, Yitan

    2018-01-01

    Time series are widely used as signals in many classification/regression tasks. It is ubiquitous that time series contain many missing values. Given multiple correlated time series, how can one fill in the missing values and predict their class labels? Existing imputation methods often impose strong assumptions on the underlying data-generating process, such as linear dynamics in the state space. In this paper, we propose BRITS, a novel method based on recurrent neural networks for missing va...

  4. Determination of reliability of express forecasting evaluation of radiometric enriching ability of non-ferrous ores

    International Nuclear Information System (INIS)

    Kirpishchikov, S.P.

    1991-01-01

    Use of the data of nuclear-physical methods of sampling and logging makes it possible to improve the reliability of the evaluation of the radiometric enriching ability of ores, as well as to evaluate this reliability quantitatively. This problem may be solved by using some concepts of geostatistics. The presented results support the conclusion that the data of nuclear-physical methods of sampling and logging can provide high reliability of the evaluation of the radiometric enriching ability of non-ferrous ores and of their geometrization by technological types.

  5. Bootstrap inference when using multiple imputation.

    Science.gov (United States)

    Schomaker, Michael; Heumann, Christian

    2018-04-16

    Many modern estimators require bootstrapping to calculate confidence intervals because either no analytic standard error is available or the distribution of the parameter of interest is nonsymmetric. It remains however unclear how to obtain valid bootstrap inference when dealing with multiple imputation to address missing data. We present 4 methods that are intuitively appealing, easy to implement, and combine bootstrap estimation with multiple imputation. We show that 3 of the 4 approaches yield valid inference, but that the performance of the methods varies with respect to the number of imputed data sets and the extent of missingness. Simulation studies reveal the behavior of our approaches in finite samples. A topical analysis from HIV treatment research, which determines the optimal timing of antiretroviral treatment initiation in young children, demonstrates the practical implications of the 4 methods in a sophisticated and realistic setting. This analysis suffers from missing data and uses the g-formula for inference, a method for which no standard errors are available. Copyright © 2018 John Wiley & Sons, Ltd.
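
    One intuitively appealing combination (not necessarily one of the authors' four methods) is to bootstrap first and impute each resample M times before pooling; a minimal sketch, with scikit-learn's IterativeImputer standing in for a stochastic imputer and the data, statistic and counts all assumed:

      import numpy as np
      from sklearn.experimental import enable_iterative_imputer  # noqa: F401
      from sklearn.impute import IterativeImputer

      def boot_mi_ci(X, stat, B=100, M=5, alpha=0.05, seed=1):
          # percentile bootstrap CI for stat(); M stochastic imputations are
          # averaged within each bootstrap resample
          rng = np.random.default_rng(seed)
          n, boot_stats = X.shape[0], []
          for _ in range(B):
              sample = X[rng.integers(0, n, n)]        # resample rows with replacement
              ests = [stat(IterativeImputer(sample_posterior=True,
                                            random_state=m).fit_transform(sample))
                      for m in range(M)]
              boot_stats.append(np.mean(ests))
          return np.percentile(boot_stats, [100 * alpha / 2, 100 * (1 - alpha / 2)])

      # usage: lo, hi = boot_mi_ci(X, stat=lambda Z: Z[:, 0].mean())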

  6. Students' Evaluation Strategies in a Web Research Task: Are They Sensitive to Relevance and Reliability?

    Science.gov (United States)

    Rodicio, Héctor García

    2015-01-01

    When searching and using resources on the Web, students have to evaluate Web pages in terms of relevance and reliability. This evaluation can be done in a more or less systematic way, by either considering deep or superficial cues of relevance and reliability. The goal of this study was to examine how systematic students are when evaluating Web…

  7. Evaluation of tecnological reliability of wind turbine facility Gibara 2

    International Nuclear Information System (INIS)

    Torres Valle, Antonio; Martínez Martín, Erich

    2016-01-01

    Renewable energy, particularly wind energy, will occupy an important place in the coming decades, which will be marked by the depletion of fossil fuel sources. In Cuba, significant growth in the use of these energy sources is forecast. For this reason, the creation of reliable technology to ensure that future mission is important. The central objective of this paper is the reliability analysis of Wind Farm Gibara 2, starting from its representation based on the fault tree methodology, and the recommendation of some possible applications of the results. An essential step in the research is the determination of the components participating in the fault tree and the processing of the reliability database available at Wind Farm Gibara 2. The document essentially helps in identifying the main contributors to the unavailability of the facility and in optimizing the maintenance policy. (author)

  8. Study on evaluation of construction reliability for engineering project based on fuzzy language operator

    Science.gov (United States)

    Shi, Yu-Fang; Ma, Yi-Yi; Song, Ping-Ping

    2018-03-01

    System reliability theory has been a research hotspot of management science and system engineering in recent years, and construction reliability is useful for the quantitative evaluation of project management level. According to reliability theory and the target system of engineering project management, the definition of construction reliability is given. Based on fuzzy mathematics theory and language operators, the value space of construction reliability is divided into seven fuzzy subsets; correspondingly, seven membership functions and fuzzy evaluation intervals are obtained through the operation of language operators, which provides the basis of a corresponding method and parameters for the evaluation of construction reliability. This method is shown to be scientific and reasonable for construction conditions and a useful attempt at theory and method research on engineering project system reliability.

  9. Using LISREL to Evaluate Measurement Models and Scale Reliability.

    Science.gov (United States)

    Fleishman, John; Benson, Jeri

    1987-01-01

    LISREL program was used to examine measurement model assumptions and to assess reliability of Coopersmith Self-Esteem Inventory for Children, Form B. Data on 722 third-sixth graders from over 70 schools in large urban school district were used. LISREL program assessed (1) nature of basic measurement model for scale, (2) scale invariance across…

  10. Applications of Human Performance Reliability Evaluation Concepts and Demonstration Guidelines

    Science.gov (United States)

    1977-03-15

    Fragmentary excerpt: in a simulated scenario, the ship stops dead in the water and the AN/SQS-26 operator recommends a new heading (000°); at T + 14 minutes, the target ship begins a hard turn. Tabulated results include human reliability for each simulated operator (baseline run) and human and equipment availability under various simulated conditions.

  11. Reliability and performance evaluation of stainless and mild steel ...

    African Journals Online (AJOL)

    Reliability and performance of stainless and mild steel products in methanolic and aqueous sodium chloride media have been investigated. Weight-loss and pre-exposure methods were used. There was a higher rate of weight-loss of mild steels and stainless steels in 1% HCl methanolic solution than in aqueous NaCl ...

  12. Reliability of FAMACHA© chart for the evaluation of anaemia in ...

    African Journals Online (AJOL)

    The reliability of FAMACHA© chart for identifying anaemic goats was compared with Packed Cell Volume (PCV). The colour of the lower eyelids was graded with FAMACHA© chart based on FAMACHA© scores (FS) of 1-5. The animals were scored from severely anaemic (white or FS 5) through moderately anaemic (pink or ...

  13. A New Method of Reliability Evaluation Based on Wavelet Information Entropy for Equipment Condition Identification

    International Nuclear Information System (INIS)

    He, Z J; Zhang, X L; Chen, X F

    2012-01-01

    Aiming at the reliability evaluation of condition identification of mechanical equipment, it is necessary to analyze condition monitoring information. A new method of reliability evaluation, based on wavelet information entropy extracted from vibration signals of mechanical equipment, is proposed. The method is quite different from traditional reliability evaluation models that depend on probability-statistics analysis of large samples of data. The vibration signals of mechanical equipment were analyzed by means of the second generation wavelet package (SGWP). We take the relative energy in each frequency band of the decomposed signal, i.e. the fraction of the whole signal energy in that band, as a probability. A normalized information entropy (IE) is obtained from these relative energies to describe the uncertainty of a system instead of a probability. The reliability degree is then obtained by transforming the normalized wavelet information entropy. A successful application has been achieved in evaluating the assembled-quality reliability for a kind of dismountable disk-drum aero-engine. The reliability degree indicates the assembled quality satisfactorily.
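
    A minimal sketch of the band-energy entropy computation, assuming the PyWavelets package and a standard wavelet packet in place of the second generation wavelet package; the signal, wavelet, depth and the entropy-to-reliability transform are all assumptions:

      import numpy as np
      import pywt

      signal = np.random.default_rng(0).normal(size=1024)   # stand-in vibration signal
      wp = pywt.WaveletPacket(signal, wavelet="db4", maxlevel=3)
      energies = np.array([np.sum(node.data ** 2) for node in wp.get_level(3)])

      p = energies / energies.sum()                 # relative band energies as "probabilities"
      entropy = -np.sum(p * np.log(p + 1e-12))
      entropy_norm = entropy / np.log(len(p))       # normalized to [0, 1]
      reliability_degree = 1.0 - entropy_norm       # one possible transform; an assumption
      print(f"normalized wavelet entropy = {entropy_norm:.3f}")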

  14. A technical survey on issues of the quantitative evaluation of software reliability

    International Nuclear Information System (INIS)

    Park, J. K; Sung, T. Y.; Eom, H. S.; Jeong, H. S.; Park, J. H.; Kang, H. G.; Lee, K. Y.; Park, J. K.

    2000-04-01

    To develop a methodology for evaluating the reliability of the software included in digital instrumentation and control (I and C) systems, many kinds of methodologies/techniques that have been proposed in the software reliability engineering field are analyzed to identify their strong and weak points. According to the analysis results, no methodologies/techniques exist that can be directly applied to the evaluation of software reliability. Thus, additional research to combine the most appropriate of the existing methodologies/techniques would be needed to evaluate software reliability. (author)

  15. Reliability Evaluation of Concentric Butterfly Valve Using Statistical Hypothesis Test

    International Nuclear Information System (INIS)

    Chang, Mu Seong; Choi, Jong Sik; Choi, Byung Oh; Kim, Do Sik

    2015-01-01

    A butterfly valve is a type of flow-control device typically used to regulate a fluid flow. This paper presents an estimation of the shape parameter of the Weibull distribution, characteristic life, and B10 life for a concentric butterfly valve based on a statistical analysis of the reliability test data taken before and after the valve improvement. The difference in the shape and scale parameters between the existing and improved valves is reviewed using a statistical hypothesis test. The test results indicate that the shape parameter of the improved valve is similar to that of the existing valve, and that the scale parameter of the improved valve is found to have increased. These analysis results are particularly useful for a reliability qualification test and the determination of the service life cycles.

  16. Reliability Evaluation of Concentric Butterfly Valve Using Statistical Hypothesis Test

    Energy Technology Data Exchange (ETDEWEB)

    Chang, Mu Seong; Choi, Jong Sik; Choi, Byung Oh; Kim, Do Sik [Korea Institute of Machinery and Materials, Daejeon (Korea, Republic of)

    2015-12-15

    A butterfly valve is a type of flow-control device typically used to regulate a fluid flow. This paper presents an estimation of the shape parameter of the Weibull distribution, characteristic life, and B10 life for a concentric butterfly valve based on a statistical analysis of the reliability test data taken before and after the valve improvement. The difference in the shape and scale parameters between the existing and improved valves is reviewed using a statistical hypothesis test. The test results indicate that the shape parameter of the improved valve is similar to that of the existing valve, and that the scale parameter of the improved valve is found to have increased. These analysis results are particularly useful for a reliability qualification test and the determination of the service life cycles.
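
    The B10 life named above follows in closed form from the Weibull shape and scale; a minimal sketch with invented parameter values:

      import numpy as np

      beta, eta = 2.1, 18000.0                      # assumed shape and scale (cycles)
      b10 = eta * (-np.log(0.90)) ** (1.0 / beta)   # time by which 10% of valves have failed
      print(f"characteristic life = {eta:.0f} cycles, B10 = {b10:.0f} cycles")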

  17. Reliability Evaluation for Optimizing Electricity Supply in a Developing Country

    OpenAIRE

    Mark Ndubuka NWOHU

    2007-01-01

    The reliability standards for electricity supply in a developing country, like Nigeria, have to be determined on past engineering principles and practice. Because of the high demand of electrical power due to rapid development, industrialization and rural electrification; the economic, social and political climate in which the electric power supply industry now operates should be critically viewed to ensure that the production of electrical power should be augmented and remain uninterrupted. ...

  18. Human reliability: an evaluation of its understanding and prediction

    International Nuclear Information System (INIS)

    Joksimovich, V.

    1987-01-01

    This paper presents a viewpoint on the state-of-the-art in human reliability. The bases for this viewpoint are, by and large, research projects conducted by the NUS for the Electric Power Research Institute (EPRI) primarily with the objective of further enhancing the credibility of PRA methodology. The presentation is divided into the following key sections: Background and Overview, Methodology and Data Base with emphasis on the simulator data base

  19. Final report : testing and evaluation for solar hot water reliability.

    Energy Technology Data Exchange (ETDEWEB)

    Caudell, Thomas P. (University of New Mexico, Albuquerque, NM); He, Hongbo (University of New Mexico, Albuquerque, NM); Menicucci, David F. (Building Specialists, Inc., Albuquerque, NM); Mammoli, Andrea A. (University of New Mexico, Albuquerque, NM); Burch, Jay (National Renewable Energy Laboratory, Golden CO)

    2011-07-01

    Solar hot water (SHW) systems are being installed by the thousands. Tax credits and utility rebate programs are spurring this burgeoning market. However, the reliability of these systems is virtually unknown. Recent work by Sandia National Laboratories (SNL) has shown that few data exist to quantify the mean time to failure of these systems. However, there is keen interest in developing new techniques to measure SHW reliability, particularly among utilities that use ratepayer money to pay the rebates. This document reports on an effort to develop and test new, simplified techniques to directly measure the state of health of fielded SHW systems. One approach was developed by the National Renewable Energy Laboratory (NREL) and is based on the idea that the performance of the solar storage tank can reliably indicate the operational status of the SHW systems. Another approach, developed by the University of New Mexico (UNM), uses adaptive resonance theory, a type of neural network, to detect and predict failures. This method uses the same sensors that are normally used to control the SHW system. The NREL method uses two additional temperature sensors on the solar tank. The theories, development, application, and testing of both methods are described in the report. Testing was performed on the SHW Reliability Testbed at UNM, a highly instrumented SHW system developed jointly by SNL and UNM. The two methods were tested against a number of simulated failures. The results show that both methods show promise for inclusion in conventional SHW controllers, giving them advanced capability in detecting and predicting component failures.

  20. Statistical Bayesian method for reliability evaluation based on ADT data

    Science.gov (United States)

    Lu, Dawei; Wang, Lizhi; Sun, Yusheng; Wang, Xiaohong

    2018-05-01

    Accelerated degradation testing (ADT) is frequently conducted in the laboratory to predict a product's reliability under normal operating conditions. Two kinds of methods, degradation path models and stochastic process models, are used to analyze degradation data, the latter being the more popular. However, limitations remain, such as an imprecise solution process and imprecise estimation of the degradation ratio, which may affect the accuracy of the acceleration model and the extrapolated values. Moreover, the usual solution to this problem, the Bayesian method, loses key information when unifying the degradation data. In this paper, a new data processing and parameter inference method based on the Bayesian method is proposed to handle degradation data and solve the problems above. First, a Wiener process and an acceleration model are chosen. Second, the initial values of the degradation model and the parameters of the prior and posterior distributions under each stress level are calculated, with updating and iteration of the estimated values. Third, the lifetime and reliability values are estimated on the basis of the estimated parameters. Finally, a case study is provided to demonstrate the validity of the proposed method. The results illustrate that the proposed method is quite effective and accurate in estimating the lifetime and reliability of a product.
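
    A minimal sketch of the Wiener-process ingredient named above: drift and diffusion are estimated from degradation increments, and reliability follows from the inverse-Gaussian first-passage law; the data, sampling step and failure threshold are invented:

      import numpy as np
      from scipy import stats

      rng = np.random.default_rng(0)
      dt, D = 10.0, 5.0                             # sampling step (h), failure threshold
      increments = rng.normal(0.01 * dt, 0.05 * np.sqrt(dt), size=200)  # synthetic data

      mu_hat = increments.mean() / dt               # drift estimate
      sigma_hat = increments.std(ddof=1) / np.sqrt(dt)

      # first-passage time to level D is inverse Gaussian(mean D/mu, shape D^2/sigma^2)
      mean_life = D / mu_hat
      lam = D**2 / sigma_hat**2
      R = 1 - stats.invgauss.cdf(400.0, mu=mean_life / lam, scale=lam)
      print(f"estimated reliability at t = 400 h: {R:.3f}")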

  1. Dynamic reliability assessment and prediction for repairable systems with interval-censored data

    International Nuclear Information System (INIS)

    Peng, Yizhen; Wang, Yu; Zi, YanYang; Tsui, Kwok-Leung; Zhang, Chuhua

    2017-01-01

    The ‘Test, Analyze and Fix’ process is widely applied to improve the reliability of a repairable system. In this process, dynamic reliability assessment for the system has been paid a great deal of attention. Due to instrument malfunctions, staff omissions and imperfect inspection strategies, field reliability data are often subject to interval censoring, making dynamic reliability assessment a difficult task. Most traditional methods treat this kind of data as multiple normally distributed variables or treat the missing mechanism as missing at random, which may cause a large bias in parameter estimation. This paper proposes a novel method to evaluate and predict the dynamic reliability of a repairable system subject to interval censoring. First, a multiple imputation strategy, based on the assumption that the reliability growth trend follows a nonhomogeneous Poisson process, is developed to derive the distributions of missing data. Second, a new order statistic model that can transform the dependent variables into independent variables is developed to simplify the imputation procedure. The unknown parameters of the model are iteratively inferred by the Monte Carlo expectation maximization (MCEM) algorithm. Finally, to verify the effectiveness of the proposed method, a simulation and a real case study for a gas pipeline compressor system are implemented. - Highlights: • A new multiple imputation strategy was developed to derive the PDF of missing data. • A new order statistic model was developed to simplify the imputation procedure. • The parameters of the order statistic model were iteratively inferred by MCEM. • A real case study was conducted to verify the effectiveness of the proposed method.

  2. Local exome sequences facilitate imputation of less common variants and increase power of genome wide association studies.

    Directory of Open Access Journals (Sweden)

    Peter K Joshi

    Full Text Available The analysis of less common variants in genome-wide association studies promises to elucidate complex trait genetics but is hampered by low power to reliably detect association. We show that addition of population-specific exome sequence data to global reference data allows more accurate imputation, particularly of less common SNPs (minor allele frequency 1-10%), in two very different European populations. The imputation improvement corresponds to an increase in effective sample size of 28-38% for SNPs with a minor allele frequency in the range 1-3%.

  3. Evaluation of ECT reliability for axial ODSCC in steam generator tubes

    International Nuclear Information System (INIS)

    Lee, Jae Bong; Park, Jai Hak; Kim, Hong Deok; Chung, Han Sub

    2010-01-01

    The integrity of steam generator tubes is usually evaluated based on eddy current test (ECT) results. Because the detection capability of ECT is not perfect, not all of the physical flaws that actually exist in steam generator tubes can be detected by ECT inspection. Therefore it is very important to analyze ECT reliability in the integrity assessment of steam generators. The reliability of an ECT inspection system is divided into the reliability of the inspection technique and the reliability of the analyst. Likewise, the reliability of ECT results is divided into sizing reliability and detection reliability. The reliability of ECT sizing is often characterized as a linear regression model relating true flaw size data to measured flaw size data. The reliability of detection is characterized in terms of probability of detection (POD), which is expressed as a function of flaw size. In this paper the reliability of an ECT inspection system is analyzed quantitatively. The POD of the ECT inspection system for axial outside diameter stress corrosion cracks (ODSCC) in steam generator tubes is evaluated. Using a log-logistic regression model, POD is evaluated from hit (detection) and miss (no detection) binary data obtained from destructive and non-destructive inspections of cracked tubes. Crack length and crack depth are considered as variables in the multivariate log-logistic regression, and their effects on detection capability are assessed using a two-dimensional POD (2-D POD) surface. The reliability of detection is also analyzed using the POD of the inspection technique (POD_T) and the POD of the analyst (POD_A).
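
    A minimal sketch of a log-logistic POD fit of this kind (a logistic regression on log flaw size), assuming statsmodels and invented hit/miss data:

      import numpy as np
      import statsmodels.api as sm

      size_mm = np.array([0.5, 0.8, 1.0, 1.5, 2.0, 2.5, 3.0, 4.0, 5.0, 6.0])
      hit = np.array([0, 0, 1, 0, 1, 1, 1, 1, 1, 1])   # 1 = detected, 0 = missed

      X = sm.add_constant(np.log(size_mm))             # POD(a) = logistic(b0 + b1 ln a)
      fit = sm.Logit(hit, X).fit(disp=0)
      pod_2mm = fit.predict([[1.0, np.log(2.0)]])[0]
      print(f"POD at a 2 mm crack: {pod_2mm:.2f}")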

  4. Construction and Evaluation of Reliability and Validity of Reasoning Ability Test

    Science.gov (United States)

    Bhat, Mehraj A.

    2014-01-01

    This paper is based on the construction and evaluation of the reliability and validity of a reasoning ability test for secondary school students. In this paper an attempt was made to evaluate validity and reliability and to determine appropriate standards to interpret the results of the reasoning ability test. The test includes 45 items to measure six types…

  5. A Standardized Rubric for Evaluating Webquest Design: Reliability Analysis of ZUNAL Webquest Design Rubric

    Science.gov (United States)

    Unal, Zafer; Bodur, Yasar; Unal, Aslihan

    2012-01-01

    Current literature provides many examples of rubrics that are used to evaluate the quality of web-quest designs. However, reliability of these rubrics has not yet been researched. This is the first study to fully characterize and assess the reliability of a webquest evaluation rubric. The ZUNAL rubric was created to utilize the strengths of the…

  6. SPSS Macros for Assessing the Reliability and Agreement of Student Evaluations of Teaching

    Science.gov (United States)

    Morley, Donald D.

    2009-01-01

    This article reports and demonstrates two SPSS macros for calculating Krippendorff's alpha and intraclass reliability coefficients in repetitive situations where numerous coefficients are needed. Specifically, the reported SPSS macros were used to evaluate the interrater agreement and reliability of student evaluations of teaching in thousands of…

  7. Assessing the Reliability of Student Evaluations of Teaching: Choosing the Right Coefficient

    Science.gov (United States)

    Morley, Donald

    2014-01-01

    Many of the studies used to support the claim that student evaluations of teaching are reliable measures of teaching effectiveness have frequently calculated inappropriate reliability coefficients. This paper points to three coefficients that would be appropriate depending on if student evaluations were used for formative or summative purposes.…

  8. Application-Driven Reliability Measures and Evaluation Tool for Fault-Tolerant Real-Time Systems

    National Research Council Canada - National Science Library

    Krishna, C

    2001-01-01

    .... The measure combines graph-theoretic concepts in evaluating the underlying reliability of the network and other means to evaluate the ability of the network to support interprocessor traffic...

  9. Monte Carlo simulation based reliability evaluation in a multi-bilateral contracts market

    International Nuclear Information System (INIS)

    Goel, L.; Viswanath, P.A.; Wang, P.

    2004-01-01

    This paper presents a time-sequential Monte Carlo simulation technique to evaluate customer load point reliability in a multi-bilateral contracts market. The effects of bilateral transactions, reserve agreements, and the priority commitments of generating companies on customer load point reliability have been investigated. A generating company with bilateral contracts is modelled as an equivalent time-varying multi-state generation (ETMG) unit. A procedure to determine load point reliability based on the ETMG has been developed. The developed procedure is applied to a reliability test system to illustrate the technique. Representing each bilateral contract by an ETMG provides flexibility in determining the reliability at various customer load points. (authors)
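
    A minimal sketch of time-sequential Monte Carlo for a load point: simulate up/down cycles of the contracted units and count hours of unmet load. The unit data, the single aggregate load level and the exponential up/down durations are all assumptions:

      import numpy as np

      rng = np.random.default_rng(0)
      units = [(50, 1000, 50), (50, 1000, 50), (30, 800, 40)]  # (MW, MTTF h, MTTR h)
      load, horizon = 100.0, 8760.0

      def simulate_year():
          t_grid = np.arange(0.0, horizon, 1.0)
          cap = np.zeros_like(t_grid)
          for c, mttf, mttr in units:           # each unit alternates up/down states
              t, up = 0.0, True
              while t < horizon:
                  dur = rng.exponential(mttf if up else mttr)
                  if up:
                      cap[(t_grid >= t) & (t_grid < t + dur)] += c
                  t += dur
                  up = not up
          return np.mean(cap < load)            # fraction of hours with unmet load

      lole = np.mean([simulate_year() for _ in range(200)]) * 8760
      print(f"estimated LOLE ~ {lole:.1f} h/year")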

  10. Reliability Evaluation and Improvement Approach of Chemical Production Man - Machine - Environment System

    Science.gov (United States)

    Miao, Yongchun; Kang, Rongxue; Chen, Xuefeng

    2017-12-01

    In recent years, with the gradual extension of reliability research, the study of production system reliability has become a hot topic in various industries. The man-machine-environment system is a complex system composed of human factors, machinery equipment and environment. The reliability of each individual factor must be analyzed in order to gradually transition to research on three-factor reliability. Meanwhile, the dynamic relationships among man, machine and environment should be considered in order to establish an effective fuzzy evaluation mechanism that can truly and effectively analyze the reliability of such systems. In this paper, based on system engineering, fuzzy theory, reliability theory, human error, environmental impact and machinery equipment failure theory, the reliabilities of the human factor, machinery equipment and environment of a chemical production system were studied by the method of fuzzy evaluation. At last, the reliability of the man-machine-environment system was calculated to obtain the weighted result, which indicated that the reliability value of this chemical production system was 86.29. Through the given evaluation domain it can be seen that the reliability of the man-machine-environment integrated system is in good status, and effective measures for further improvement were proposed according to the fuzzy calculation results.

  11. Reliability Evaluation of Bridges Based on Nonprobabilistic Response Surface Limit Method

    Directory of Open Access Journals (Sweden)

    Xuyong Chen

    2017-01-01

    Full Text Available Due to the many uncertainties in nonprobabilistic reliability assessment of bridges, the limit state function is generally unknown. The traditional nonprobabilistic response surface method involves a lengthy and oscillating iteration process and makes it difficult to solve for the nonprobabilistic reliability index. This article proposes a nonprobabilistic response surface limit method based on the interval model. The intention of this method is to solve for the upper and lower limits of the nonprobabilistic reliability index and to narrow its range. If the range of the reliability index reduces to an acceptable accuracy, the solution is considered convergent and the nonprobabilistic reliability index is obtained. The case study indicates that the proposed method avoids an oscillating iteration process, makes the iteration process stable and convergent, reduces the number of iteration steps significantly, and improves computational efficiency and precision significantly compared with the traditional nonprobabilistic response surface method. Finally, the nonprobabilistic reliability evaluation process for bridges is illustrated by evaluating the reliability of a three-span PC continuous rigid-frame bridge using the proposed method; the approach appears to be simpler and more reliable when samples and parameters are lacking in bridge nonprobabilistic reliability evaluation.

  12. Alternate ways for automation of evaluating nuclear physical data reliability from primary literature

    International Nuclear Information System (INIS)

    Golashvili, T.V.; Tsvetaev, S.M.

    1983-01-01

    Methods, possible ways, criteria and algorithms for organizing an automated system for evaluating the reliability of nuclear physical data from the primary literature are discussed. It is noted that automation of data reliability evaluation does not substitute for a scientist dealing with data evaluation; it only releases him from hard, monotonous and tedious work not requiring erudition or profound knowledge. Computers will facilitate and accelerate the work of the expert and, hence, lead to a sharp increase in the volume of work on evaluation of data reliability.

  13. Reliability

    African Journals Online (AJOL)

    eobe

    Fragmentary record: reliability checks against values given by the code of practice; an optimization procedure over the failure domain F; reliability of concrete members based on utility theory.

  14. Analysis of dependent failures in risk assessment and reliability evaluation

    International Nuclear Information System (INIS)

    Fleming, K.N.; Mosleh, A.; Kelley, A.P. Jr. (Gas-Cooled Reactors Associates, La Jolla, CA)

    1983-01-01

    The ability to estimate the risk of potential reactor accidents is largely determined by the ability to analyze statistically dependent multiple failures. The importance of dependent failures has been indicated in recent probabilistic risk assessment (PRA) studies as well as in reports of reactor operating experiences. This article highlights the importance of several different types of dependent failures from the perspective of the risk and reliability analyst and provides references to the methods and data available for their analysis. In addition to describing the current state of the art, some recent advances, pitfalls, misconceptions, and limitations of some approaches to dependent failure analysis are addressed. A summary is included of the discourse on this subject, which is presented in the Institute of Electrical and Electronics Engineers/American Nuclear Society PRA Procedures Guide

  15. Evaluation and improvement in nondestructive examination (NDE) reliability for inservice inspection of light water reactors

    International Nuclear Information System (INIS)

    Doctor, S.R.; Andersen, E.S.; Bowey, R.E.; Diaz, A.A.; Good, M.S.; Heasler, P.G.; Hockey, R.L.; Simonen, F.A.; Spanner, J.C.; Taylor, T.T.; Vo, T.V.

    1991-01-01

    This program is intended to establish the effectiveness, reliability and adequacy of inservice inspection of reactor pressure vessels and primary piping systems and the impact of ISI reliability on system integrity. The objectives of the program include: (a) determine the effectiveness and reliability of ultrasonic inservice inspection (ISI) performed on commercial, light water reactor pressure vessels and piping; (b) recommend Code changes to the inspection procedures to improve the reliability of ISI; (c) using fracture mechanics analysis, determine the impact of NDE unreliability on system safety and determine the level of inspection reliability required to assure a suitably low failure probability; (d) evaluate the degree of reliability improvement which could be achieved using improved NDE techniques; and (e) based on importance of component to safety, material properties, service conditions, and NDE uncertainties, formulate improved inservice inspection criteria (including sampling plan, frequency, and reliability of inspection) for revisions to ASME Section XI and regulatory requirements needed to assure suitably low failure probabilities

  16. Evaluation and improvement in nondestructive examination (NDE) reliability for inservice inspection of light water reactors

    International Nuclear Information System (INIS)

    Doctor, S.R.; Deffenbaugh, J.D.; Good, M.S.; Green, E.R.; Heasler, P.G.; Simonen, F.A.; Spanner, J.C.; Taylor, T.T.

    1988-01-01

    The Evaluation and Improvement of NDE Reliability for Inservice Inspection of Light Water Reactor (NDE Reliability) program at the Pacific Northwest Laboratory was established by the NRC to determine the reliability of current inservice inspection (ISI) techniques and to develop recommendations that will ensure a suitably high inspection reliability. The objectives of this program include determining the reliability of ISI performed on the primary systems of commercial light-water reactors (LWRs); using probabilistic fracture mechanics analysis to determine the impact of NDE unreliability on system safety; and evaluating reliability improvements that can be achieved with improved and advanced technology. A final objective is to formulate recommended revisions to ASME Code and Regulatory requirements, based on material properties, service conditions, and NDE uncertainties. The program scope is limited to ISI of the primary systems including the piping, vessel, and other inspected components. This is a progress report covering the programmatic work from October 1986 through September 1987

  17. Evaluation and improvement in nondestructive examination (NDE) reliability for inservice inspection of light water reactors

    International Nuclear Information System (INIS)

    Doctor, S.R.; Deffenbaugh, J.D.; Good, M.S.; Green, E.R.; Heasler, P.G.; Simonen, F.A.; Spanner, J.C.; Taylor, T.T.

    1988-01-01

    The Evaluation and Improvement of NDE Reliability for Inservice Inspection of Light Water Reactors (NDE Reliability) program at the Pacific Northwest Laboratory was established by the NRC to determine the reliability of current inservice inspection (ISI) techniques and to develop recommendations that will ensure a suitably high inspection reliability. The objectives of this program include determining the reliability of ISI performed on the primary systems of commercial light-water reactors (LWRs); using probabilistic fracture mechanics analysis to determine the impact of NDE unreliability on system safety; and evaluating reliability improvements that can be achieved with improved and advanced technology. A final objective is to formulate recommended revisions to ASME Code and Regulatory requirements, based on material properties, service conditions and NDE uncertainties. The program scope is limited to ISI of the primary systems including the piping, vessel, and other inspected components. This is a progress report covering the programmatic work from October 1986 through September 1987. (author)

  18. Missing value imputation in DNA microarrays based on conjugate gradient method.

    Science.gov (United States)

    Dorri, Fatemeh; Azmi, Paeiz; Dorri, Faezeh

    2012-02-01

    Analysis of gene expression profiles needs a complete matrix of gene array values; consequently, imputation methods have been suggested. In this paper, an algorithm based on the conjugate gradient (CG) method is proposed to estimate missing values. The k-nearest neighbors of the missing entry are first selected based on the absolute values of their Pearson correlation coefficients. Then a subset of genes among the k-nearest neighbors is labeled as the best similar ones. The CG algorithm, with this subset as its input, is then used to estimate the missing values. Our proposed CG-based algorithm (CGimpute) is evaluated on different data sets. The results are compared with the sequential local least squares (SLLSimpute), Bayesian principal component analysis (BPCAimpute), local least squares imputation (LLSimpute), iterated local least squares imputation (ILLSimpute) and adaptive k-nearest neighbors imputation (KNNKimpute) methods. The average of the normalized root mean square error (NRMSE) and relative NRMSE in different data sets with various missing rates shows that CGimpute outperforms the other methods. Copyright © 2011 Elsevier Ltd. All rights reserved.
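
    A minimal sketch of the CGimpute idea: rank neighbours by absolute Pearson correlation, then solve a least-squares system with scipy's conjugate-gradient solver. The data, k, and the normal-equations formulation are assumptions, not the authors' exact algorithm:

      import numpy as np
      from scipy.sparse.linalg import cg

      rng = np.random.default_rng(0)
      latent = rng.normal(size=(5, 40))                # shared expression programs
      expr = rng.normal(size=(200, 5)) @ latent + 0.1 * rng.normal(size=(200, 40))
      target, missing_col = expr[0], 5                 # pretend this entry is missing
      obs = np.delete(np.arange(expr.shape[1]), missing_col)

      corr = np.array([abs(np.corrcoef(target[obs], g[obs])[0, 1]) for g in expr[1:]])
      nn = expr[1:][np.argsort(corr)[-10:]]            # k = 10 most-correlated genes

      A = nn[:, obs] @ nn[:, obs].T                    # normal equations of least squares
      b = nn[:, obs] @ target[obs]
      w, info = cg(A, b)                               # conjugate-gradient solve
      print(f"imputed {w @ nn[:, missing_col]:.3f}, true {target[missing_col]:.3f}")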

  19. Traffic Speed Data Imputation Method Based on Tensor Completion

    Directory of Open Access Journals (Sweden)

    Bin Ran

    2015-01-01

    Full Text Available Traffic speed data plays a key role in Intelligent Transportation Systems (ITS); however, missing traffic data would affect the performance of ITS as well as Advanced Traveler Information Systems (ATIS). In this paper, we handle this issue by a novel tensor-based imputation approach. Specifically, tensor pattern is adopted for modeling traffic speed data and then High accurate Low Rank Tensor Completion (HaLRTC), an efficient tensor completion method, is employed to estimate the missing traffic speed data. This proposed method is able to recover missing entries from given entries, which may be noisy, considering severe fluctuation of traffic speed data compared with traffic volume. The proposed method is evaluated on Performance Measurement System (PeMS) database, and the experimental results show the superiority of the proposed approach over state-of-the-art baseline approaches.

  20. Traffic speed data imputation method based on tensor completion.

    Science.gov (United States)

    Ran, Bin; Tan, Huachun; Feng, Jianshuai; Liu, Ying; Wang, Wuhong

    2015-01-01

    Traffic speed data plays a key role in Intelligent Transportation Systems (ITS); however, missing traffic data would affect the performance of ITS as well as Advanced Traveler Information Systems (ATIS). In this paper, we handle this issue by a novel tensor-based imputation approach. Specifically, tensor pattern is adopted for modeling traffic speed data and then High accurate Low Rank Tensor Completion (HaLRTC), an efficient tensor completion method, is employed to estimate the missing traffic speed data. This proposed method is able to recover missing entries from given entries, which may be noisy, considering severe fluctuation of traffic speed data compared with traffic volume. The proposed method is evaluated on Performance Measurement System (PeMS) database, and the experimental results show the superiority of the proposed approach over state-of-the-art baseline approaches.
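
    HaLRTC itself operates on tensors; as a minimal matrix-case sketch of the same low-rank completion idea, the observed entries can be held fixed while the singular values are repeatedly soft-thresholded. The invented "speed" matrix, threshold and iteration count are assumptions:

      import numpy as np

      rng = np.random.default_rng(0)
      true = rng.random((60, 2)) @ rng.random((2, 24))   # low-rank sensors-by-hours matrix
      mask = rng.random(true.shape) < 0.7                # 70% of entries observed
      X = np.where(mask, true, 0.0)

      for _ in range(100):
          U, s, Vt = np.linalg.svd(X, full_matrices=False)
          s = np.maximum(s - 0.1, 0.0)                   # singular-value soft threshold
          X = U @ np.diag(s) @ Vt
          X[mask] = true[mask]                           # keep observed entries fixed

      err = np.abs(X[~mask] - true[~mask]).mean()
      print(f"mean abs error on missing entries: {err:.4f}")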

  1. Customer control and evaluation of service validity and reliability

    NARCIS (Netherlands)

    van Raaij, W. Fred; Pruyn, Adriaan T.H.

    1998-01-01

    A control and attribution model of service production and evaluation is proposed. Service production consists of the stages specification (input), realization (throughput), and outcome (output). Customers may exercise control over all three stages of the service. Critical factors of service

  2. Reliability of early radiographic evaluations for canine hip dysplasia obtained from the standard ventrodorsal radiographic projection

    International Nuclear Information System (INIS)

    Corley, E.A.; Keller, G.G.; Lattimer, J.C.; Ellersieck, M.R.

    1997-01-01

    To determine reliability of preliminary evaluations for canine hip dysplasia (CHD) performed by the Orthopedic Foundation for Animals on dogs between 3 and 18 months of age. Retrospective analysis of data from the Orthopedic Foundation for Animals database. 2,332 Golden Retrievers, Labrador Retrievers, German Shepherd Dogs, and Rottweilers for which preliminary evaluation had been performed between 3 and 18 months of age and for which results of a definitive evaluation performed after 24 months of age were available. Each radiograph was evaluated, and hip joint status was graded as excellent, good, fair, or borderline phenotype or mild, moderate, or severe dysplasia. Preliminary evaluations were performed by 1 radiologist; definitive evaluations were the consensus of 3 radiologists. Reliability of preliminary evaluations was calculated as the percentage of definitive evaluations (normal vs dysplastic) that were unchanged from preliminary evaluations. Reliability of a preliminary evaluation of normal hip joint phenotype decreased significantly as the preliminary evaluation changed from excellent (100%) to good (97.9%) to fair (76.9%) phenotype. Reliability of a preliminary evaluation of CHD increased significantly as the preliminary evaluation changed from mild (84.4%) to moderate (97.4%) CHD. Reliability of preliminary evaluations increased significantly as age at the time of preliminary evaluation increased, regardless of whether dogs received a preliminary evaluation of normal phenotype or CHD. Results suggest that preliminary evaluations of hip joint status in dogs are generally reliable. However, dogs that receive a preliminary evaluation of fair phenotype or mild CHD should be reevaluated after 24 months of age.

  3. A Review on VSC-HVDC Reliability Modeling and Evaluation Techniques

    Science.gov (United States)

    Shen, L.; Tang, Q.; Li, T.; Wang, Y.; Song, F.

    2017-05-01

    With the fast development of power electronics, voltage-source converter (VSC) HVDC technology presents cost-effective ways for bulk power transmission. An increasing number of VSC-HVDC projects have been installed worldwide. Their reliability affects the profitability of the system and therefore has a major impact on potential investors. In this paper, an overview of the recent advances in the area of reliability evaluation for VSC-HVDC systems is provided. Taking into account the latest multi-level converter topology, the VSC-HVDC system is categorized into several sub-systems, and the reliability data for the key components are discussed based on sources with academic and industrial backgrounds. The development of reliability evaluation methodologies is reviewed and the issues surrounding the different computation approaches are briefly analysed. A general VSC-HVDC reliability evaluation procedure is illustrated in this paper.

  4. How Reliable Are Students' Evaluations of Teaching Quality? A Variance Components Approach

    Science.gov (United States)

    Feistauer, Daniela; Richter, Tobias

    2017-01-01

    The inter-rater reliability of university students' evaluations of teaching quality was examined with cross-classified multilevel models. Students (N = 480) evaluated lectures and seminars over three years with a standardised evaluation questionnaire, yielding 4224 data points. The total variance of these student evaluations was separated into the…

  5. Reliability Evaluation for the Surface to Air Missile Weapon Based on Cloud Model

    Directory of Open Access Journals (Sweden)

    Deng Jianjun

    2015-01-01

    Full Text Available Fuzziness and randomness are integrated by using numerical characteristics, such as expected value, entropy and hyper-entropy. A cloud model adapted to the reliability evaluation of the surface-to-air missile weapon is put forward. The cloud scale of the qualitative evaluation is constructed, and the quantitative and qualitative variables in the system reliability evaluation are put into correspondence. The practical calculation result shows that it is more effective to analyze the reliability of the surface-to-air missile weapon in this way. The practical calculation result also reflects that the model expressed by cloud theory is more consistent with the human thinking style of uncertainty.
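
    A minimal sketch of the normal cloud generator implied by these numerical characteristics: each "cloud drop" x is drawn as N(Ex, En'^2), with En' itself drawn as N(En, He^2). The parameter values are invented:

      import numpy as np

      rng = np.random.default_rng(0)
      Ex, En, He = 0.85, 0.05, 0.01           # expected value, entropy, hyper-entropy

      En_prime = rng.normal(En, He, size=1000)            # per-drop entropy draws
      drops = rng.normal(Ex, np.abs(En_prime))            # cloud drops
      certainty = np.exp(-(drops - Ex) ** 2 / (2 * En_prime ** 2))
      print(f"mean drop = {drops.mean():.3f}, mean certainty = {certainty.mean():.3f}")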

  6. Towards Reliable Evaluation of Anomaly-Based Intrusion Detection Performance

    Science.gov (United States)

    Viswanathan, Arun

    2012-01-01

    This report describes the results of research into the effects of environment-induced noise on the evaluation process for anomaly detectors in the cyber security domain. This research was conducted during a 10-week summer internship program from the 19th of August, 2012 to the 23rd of August, 2012 at the Jet Propulsion Laboratory in Pasadena, California. The research performed lies within the larger context of the Los Angeles Department of Water and Power (LADWP) Smart Grid cyber security project, a Department of Energy (DoE) funded effort involving the Jet Propulsion Laboratory, California Institute of Technology and the University of Southern California/Information Sciences Institute. The results of the present effort constitute an important contribution towards building more rigorous evaluation paradigms for anomaly-based intrusion detectors in complex cyber-physical systems such as the Smart Grid. Anomaly detection is a key strategy for cyber intrusion detection; it operates by identifying deviations from profiles of nominal behavior and is thus conceptually appealing for detecting "novel" attacks. Evaluating the performance of such a detector requires assessing: (a) how well it captures the model of nominal behavior, and (b) how well it detects attacks (deviations from normality). Current evaluation methods produce results that give insufficient insight into the operation of a detector, inevitably resulting in a significantly poor characterization of a detector's performance. In this work, we first describe a preliminary taxonomy of key evaluation constructs that are necessary for establishing rigor in the evaluation regime of an anomaly detector. We then focus on clarifying the impact of the operational environment on the manifestation of attacks in monitored data. We show how dynamic and evolving environments can introduce high variability into the data stream, perturbing detector performance. Prior research has focused on understanding the impact of this

  7. System reliability evaluation of a touch panel manufacturing system with defect rate and reworking

    International Nuclear Information System (INIS)

    Lin, Yi-Kuei; Huang, Cheng-Fu; Chang, Ping-Chen

    2013-01-01

    In recent years, portable consumer electronic products, such as cell phones, GPS units, digital cameras, tablet PCs, and notebooks, have used touch panels as an interface. As the demand for touch panels increases, performance assessment is essential for touch panel production. This paper develops a method to evaluate the system reliability of a touch panel manufacturing system (TPMS) with a defect rate at each workstation, and takes reworking actions into account. The system reliability, which evaluates the possibility of demand satisfaction, can provide managers with an understanding of the system capability and can indicate possible improvements. First, we construct a capacitated manufacturing network (CMN) for a TPMS. Second, a decomposition technique is developed to determine the input flow of each workstation based on the CMN. Finally, we generate the minimal capacity vectors that should be provided to satisfy the demand. The system reliability is subsequently evaluated in terms of the minimal capacity vectors. A further decision-making issue is discussed to decide a reliable production strategy. -- Graphical abstract: The proposed procedure to evaluate system reliability of the touch panel manufacturing system (TPMS). Highlights: • The system reliability of a touch panel manufacturing system (TPMS) is evaluated. • The reworking actions are taken into account in the TPMS. • A capacitated manufacturing network is constructed for the TPMS. • A procedure is proposed to evaluate system reliability of TPMS.

  8. CARES/PC - CERAMICS ANALYSIS AND RELIABILITY EVALUATION OF STRUCTURES

    Science.gov (United States)

    Szatmary, S. A.

    1994-01-01

    The beneficial properties of structural ceramics include their high-temperature strength, light weight, hardness, and corrosion and oxidation resistance. For advanced heat engines, ceramics have demonstrated functional abilities at temperatures well beyond the operational limits of metals. This is offset by the fact that ceramic materials tend to be brittle. When a load is applied, their lack of significant plastic deformation causes the material to crack at microscopic flaws, destroying the component. CARES/PC performs statistical analysis of data obtained from the fracture of simple, uniaxial tensile or flexural specimens and estimates the Weibull and Batdorf material parameters from this data. CARES/PC is a subset of the program CARES (COSMIC program number LEW-15168) which calculates the fast-fracture reliability or failure probability of ceramic components utilizing the Batdorf and Weibull models to describe the effects of multi-axial stress states on material strength. CARES additionally requires that the ceramic structure be modeled by a finite element program such as MSC/NASTRAN or ANSYS. The more limited CARES/PC does not perform fast-fracture reliability estimation of components. CARES/PC estimates ceramic material properties from uniaxial tensile or from three- and four-point bend bar data. In general, the parameters are obtained from the fracture stresses of many specimens (30 or more are recommended) whose geometry and loading configurations are held constant. Parameter estimation can be performed for single or multiple failure modes by using the least-squares analysis or the maximum likelihood method. Kolmogorov-Smirnov and Anderson-Darling goodness-of-fit tests measure the accuracy of the hypothesis that the fracture data comes from a population with a distribution specified by the estimated Weibull parameters. Ninety-percent confidence intervals on the Weibull parameters and the unbiased value of the shape parameter for complete samples are provided

  9. Evaluating the Reliability, Validity, and Usefulness of Education Cost Studies

    Science.gov (United States)

    Baker, Bruce D.

    2006-01-01

    Recent studies that purport to estimate the costs of constitutionally adequate education have been described as either a "gold standard" that should guide legislative school finance policy design and judicial evaluation, or as pure "alchemy." Methods for estimating the cost of constitutionally adequate education can be roughly…

  10. HTGR plant availability and reliability evaluations. Volume II. Appendices

    International Nuclear Information System (INIS)

    Cadwallader, G.J.; Hannaman, G.W.; Jacobsen, F.K.; Stokely, R.J.

    1976-12-01

    Information is presented in the following areas: methodology of identifying components and systems important for availability studies, failure modes and effects analyses, quantitative evaluations, comparison with experience, estimated cost of plant unavailability, and probabilistic use of interest formulas for rare events.

  11. Reliability-based maintenance evaluations and standard preventive maintenance programs

    International Nuclear Information System (INIS)

    Varno, M.; McKinley, M.

    1993-01-01

    Due to the recent issuance of 10CFR50.65, the U.S. Nuclear Regulatory Commission maintenance rule (Rule), and the "Industry Guideline for Monitoring the Effectiveness of Maintenance at Nuclear Power Plants" prepared by the Nuclear Management and Resources Council, many utilities are undertaking review or evaluation of current preventive maintenance (PM) programs. Although PM optimization and documentation are not specifically required by the Rule, an appropriate and effective PM program (PMP) will be the cornerstone of the successful and cost-effective implementation of the Rule. Currently, a project is being conducted at the Vermont Yankee Nuclear Power Station (VYNPS) in conjunction with Quadrex Energy Services to evaluate, optimize, and document the PMP. The project began in March 1993 and is scheduled for completion in mid-1995. The initial scope of the project is the evaluation of those structures, systems, and components that are within the scope of the Rule. Because of the number of systems to be evaluated (∼50), the desired completion schedule, and cost considerations, a streamlined approach to PM optimization and documentation is being utilized.

  12. Reliability Evaluation of Machine Center Components Based on Cascading Failure Analysis

    Science.gov (United States)

    Zhang, Ying-Zhi; Liu, Jin-Tong; Shen, Gui-Xiang; Long, Zhe; Sun, Shu-Guang

    2017-07-01

    In order to rectify the problems that, in traditional reliability evaluation of machine center components, the component reliability model exhibits deviation and the evaluation result is underestimated because failure propagation is overlooked, a new reliability evaluation method based on cascading failure analysis and failure influence degree assessment is proposed. A directed graph model of cascading failure among components is established according to cascading failure mechanism analysis and graph theory. The failure influence degrees of the system components are assessed by the adjacency matrix and its transposition, combined with the PageRank algorithm. Based on the comprehensive failure probability function and the total probability formula, the inherent failure probability function is determined to realize the reliability evaluation of the system components. Finally, the method is applied to a machine center, and the results show the following: 1) The reliability evaluation values of the proposed method are at least 2.5% higher than those of the traditional method; 2) The difference between the comprehensive and inherent reliability of a system component presents a positive correlation with the failure influence degree of that component, which provides a theoretical basis for reliability allocation of the machine center system.
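
    To make the influence-scoring step concrete, the following is a minimal sketch of a PageRank-style power iteration over a small, invented component digraph. The adjacency matrix, damping factor, and four-component layout are illustrative assumptions, not data from the paper.

    ```python
    import numpy as np

    def pagerank(adj, damping=0.85, tol=1e-9, max_iter=200):
        # adj[i, j] = 1 means a failure of component i propagates to component j.
        n = adj.shape[0]
        out_deg = adj.sum(axis=1)
        # Row-normalize, sending dangling nodes uniformly everywhere, then
        # transpose to obtain a column-stochastic transition matrix.
        trans = np.where(out_deg[:, None] > 0,
                         adj / np.maximum(out_deg, 1.0)[:, None],
                         1.0 / n).T
        rank = np.full(n, 1.0 / n)
        for _ in range(max_iter):
            new = (1.0 - damping) / n + damping * trans @ rank
            if np.abs(new - rank).sum() < tol:
                return new
            rank = new
        return rank

    # Hypothetical 4-component machine centre; edge i -> j: failure propagation.
    A = np.array([[0, 1, 1, 0],
                  [0, 0, 1, 0],
                  [0, 0, 0, 1],
                  [0, 0, 0, 0]], dtype=float)
    # Scoring on the transposed graph rewards components whose failures
    # reach many others, one reading of the "failure influence degree".
    print(pagerank(A.T).round(3))
    ```

    Running the iteration on the transposed adjacency matrix rewards components whose failures reach many others, which is one plausible reading of the paper's use of the matrix and its transposition.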

  13. Two-pass imputation algorithm for missing value estimation in gene expression time series.

    Science.gov (United States)

    Tsiporkova, Elena; Boeva, Veselka

    2007-10-01

    Gene expression microarray experiments frequently generate datasets with multiple values missing. However, most of the analysis, mining, and classification methods for gene expression data require a complete matrix of gene array values. Therefore, the accurate estimation of missing values in such datasets has been recognized as an important issue, and several imputation algorithms have already been proposed to the biological community. Most of these approaches, however, are not particularly suitable for time series expression profiles. In view of this, we propose a novel imputation algorithm, which is specially suited for the estimation of missing values in gene expression time series data. The algorithm utilizes Dynamic Time Warping (DTW) distance in order to measure the similarity between time expression profiles, and subsequently selects for each gene expression profile with missing values a dedicated set of candidate profiles for estimation. Three different DTW-based imputation (DTWimpute) algorithms have been considered: position-wise, neighborhood-wise, and two-pass imputation. These have initially been prototyped in Perl, and their accuracy has been evaluated on yeast expression time series data using several different parameter settings. The experiments have shown that the two-pass algorithm consistently outperforms, in particular for datasets with a higher level of missing entries, the neighborhood-wise and the position-wise algorithms. The performance of the two-pass DTWimpute algorithm has further been benchmarked against the weighted K-Nearest Neighbors algorithm, which is widely used in the biological community; the former algorithm has appeared superior to the latter one. Motivated by these findings, indicating clearly the added value of the DTW techniques for missing value estimation in time series data, we have built an optimized C++ implementation of the two-pass DTWimpute algorithm. The software also provides for a choice between three different
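
    As a rough sketch of the core idea (not the authors' DTWimpute implementation), one can rank complete profiles by DTW distance to the observed part of an incomplete profile and fill each gap with the average of the k closest candidates. Restricting DTW to the observed positions, the helper names, and the toy data are all simplifying assumptions.

    ```python
    import numpy as np

    def dtw_distance(a, b):
        """Classic O(len(a)*len(b)) dynamic-time-warping distance."""
        n, m = len(a), len(b)
        D = np.full((n + 1, m + 1), np.inf)
        D[0, 0] = 0.0
        for i in range(1, n + 1):
            for j in range(1, m + 1):
                cost = abs(a[i - 1] - b[j - 1])
                D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
        return D[n, m]

    def dtw_impute(profile, candidates, k=3):
        """Fill NaNs in `profile` using the k candidates closest in DTW distance.

        DTW is computed on the observed positions only -- a simplification of
        the position-wise / neighborhood-wise variants the paper compares.
        """
        observed = ~np.isnan(profile)
        dists = [dtw_distance(profile[observed], c[observed]) for c in candidates]
        best = np.argsort(dists)[:k]
        filled = profile.copy()
        filled[~observed] = np.mean([candidates[i][~observed] for i in best], axis=0)
        return filled

    rng = np.random.default_rng(0)
    complete = rng.normal(size=(10, 12)).cumsum(axis=1)  # toy expression time series
    target = complete[0].copy()
    target[[3, 7]] = np.nan                              # knock out two time points
    print(dtw_impute(target, complete[1:]))
    ```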

  14. Once is not enough : Establishing reliability criteria for teacher evaluation based on classroom observations

    NARCIS (Netherlands)

    van der Lans, Rikkert; van de Grift, Wim; van Veen, Klaas

    2016-01-01

    Classroom observation is the most implemented method to evaluate teaching. To ensure reliability, researchers often train observers extensively. However, schools have limited resources to train observers and often lesson observation is performed by limitedly trained or untrained colleagues. In this

  15. BUILDING MODEL ANALYSIS APPLICATIONS WITH THE JOINT UNIVERSAL PARAMETER IDENTIFICATION AND EVALUATION OF RELIABILITY (JUPITER) API

    Science.gov (United States)

    The open-source, public domain JUPITER (Joint Universal Parameter IdenTification and Evaluation of Reliability) API (Application Programming Interface) provides conventions and Fortran-90 modules to develop applications (computer programs) for analyzing process models. The input ...

  16. Reliability Of Kraus-Weber Exercise Test As An Evaluation Tool In ...

    African Journals Online (AJOL)

    Reliability Of Kraus-Weber Exercise Test As An Evaluation Tool In Low Back ... strength and flexibility of the back, abdominal, psoas and hamstring muscles. ... Keywords: Kraus-Weber test, low back pain, muscle flexibility, muscle strength.

  17. Efficiency evaluation of an electronic equipment: availability,reliability and maintenance

    International Nuclear Information System (INIS)

    Guyot, C.

    1966-01-01

    This concept of efficiency, often called "system effectiveness", is presented and analyzed in terms of reliability and maintenance. It makes it possible to define the availability factor of an electronic equipment. A procedure of evaluation is proposed. (A.L.B.)

  18. Reliability Evaluation on Creep Life Prediction of Alloy 617 for a Very High Temperature Reactor

    International Nuclear Information System (INIS)

    Kim, Woo-Gon; Hong, Sung-Deok; Kim, Yong-Wan; Park, Jae-Young; Kim, Seon-Jin

    2012-01-01

    This paper evaluates the reliability of creep rupture life under service conditions of Alloy 617, which is considered as one of the candidate materials for use in a very high temperature reactor (VHTR) system. A Z-parameter, which represents the deviation of creep rupture data from the master curve, was used for the reliability analysis of the creep rupture data of Alloy 617. A Service-condition Creep Rupture Interference (SCRI) model, which can consider both the scattering of the creep rupture data and the fluctuations of temperature and stress under any service conditions, was also used for evaluating the reliability of creep rupture life. The statistical analysis showed that the scattering of creep rupture data based on Z-parameter was supported by normal distribution. The values of reliability decreased rapidly with increasing amplitudes of temperature and stress fluctuations. The results established that the reliability decreased with an increasing service time.

  19. The reliability of WorkWell Systems Functional Capacity Evaluation: a systematic review

    Science.gov (United States)

    2014-01-01

    Background Functional capacity evaluation (FCE) determines a person’s ability to perform work-related tasks and is a major component of the rehabilitation process. The WorkWell Systems (WWS) FCE (formerly known as Isernhagen Work Systems FCE) is currently the most commonly used FCE tool in German rehabilitation centres. Our systematic review investigated the inter-rater, intra-rater and test-retest reliability of the WWS FCE. Methods We performed a systematic literature search of studies on the reliability of the WWS FCE and extracted item-specific measures of inter-rater, intra-rater and test-retest reliability from the identified studies. Intraclass correlation coefficients ≥ 0.75, percentages of agreement ≥ 80%, and kappa coefficients ≥ 0.60 were categorised as acceptable, otherwise they were considered non-acceptable. The extracted values were summarised for the five performance categories of the WWS FCE, and the results were classified as either consistent or inconsistent. Results From 11 identified studies, 150 item-specific reliability measures were extracted. 89% of the extracted inter-rater reliability measures, all of the intra-rater reliability measures and 96% of the test-retest reliability measures of the weight handling and strength tests had an acceptable level of reliability, compared to only 67% of the test-retest reliability measures of the posture/mobility tests and 56% of the test-retest reliability measures of the locomotion tests. Both of the extracted test-retest reliability measures of the balance test were acceptable. Conclusions Weight handling and strength tests were found to have consistently acceptable reliability. Further research is needed to explore the reliability of the other tests as inconsistent findings or a lack of data prevented definitive conclusions. PMID:24674029

  20. The new features of the ExaMe evaluation system and reliability of its fixed tests.

    Science.gov (United States)

    Martinková, P; Zvára, K; Zvárová, J; Zvára, K

    2006-01-01

    The ExaMe system for the evaluation of targeted knowledge has been in development since 1998. The new features of the ExaMe system are introduced in this paper. In particular, the new three-layer architecture is described. Besides the system itself, the properties of fixed tests in the ExaMe system are studied; in special detail, the reliability of the fixed tests is discussed. The theoretical background is explained and some limitations of the reliability are pointed out. Three characteristics used for estimating the reliability of educational tests are discussed: Cronbach's alpha, standardized item alpha and the split-half coefficient. The relation between these characteristics and reliability, and among the characteristics themselves, is investigated. In more detail, the properties of Cronbach's alpha, the characteristic most often used for the estimation of reliability, are discussed. A confidence interval is introduced for the characteristics. Since 2000, the serviceability of the ExaMe evaluation system as a supporting evaluation tool has been repeatedly shown in the Ph.D. courses in biomedical informatics at Charles University in Prague. The ExaMe system also opens new possibilities for self-evaluation and distance learning, especially when connected with electronic books on the Internet. The estimation of the reliability of tests has some limitations. Keeping them in mind, we can still obtain some information about the quality of certain educational tests. Therefore, the estimation of the reliability of the fixed tests is implemented in the ExaMe system.
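
    Since the record centres on Cronbach's alpha and the split-half coefficient, a small self-contained illustration of both may help; the item-score matrix is synthetic, and the standardized item alpha (the third statistic mentioned) would replace the covariances with correlations in the same formula.

    ```python
    import numpy as np

    def cronbach_alpha(items):
        """Cronbach's alpha for an examinees x items matrix of scores."""
        k = items.shape[1]
        item_vars = items.var(axis=0, ddof=1)
        total_var = items.sum(axis=1).var(ddof=1)
        return k / (k - 1) * (1 - item_vars.sum() / total_var)

    def split_half(items):
        """Spearman-Brown corrected correlation of odd/even half scores."""
        odd, even = items[:, ::2].sum(axis=1), items[:, 1::2].sum(axis=1)
        r = np.corrcoef(odd, even)[0, 1]
        return 2 * r / (1 + r)

    rng = np.random.default_rng(1)
    ability = rng.normal(size=(200, 1))
    scores = ability + rng.normal(scale=1.0, size=(200, 8))  # 8 parallel-ish items
    print(round(cronbach_alpha(scores), 3), round(split_half(scores), 3))
    ```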

  1. A New Tool for Nutrition App Quality Evaluation (AQEL): Development, Validation, and Reliability Testing.

    Science.gov (United States)

    DiFilippo, Kristen Nicole; Huang, Wenhao; Chapman-Novakofski, Karen M

    2017-10-27

    The extensive availability and increasing use of mobile apps for nutrition-based health interventions make evaluation of the quality of these apps crucial for integration of apps into nutritional counseling. The goal of this research was the development, validation, and reliability testing of the app quality evaluation (AQEL) tool, an instrument for evaluating apps' educational quality and technical functionality. Items for evaluating app quality were adapted from website evaluations, with additional items added to evaluate the specific characteristics of apps, resulting in 79 initial items. Expert panels of nutrition and technology professionals and app users reviewed items for face and content validation. After recommended revisions, nutrition experts completed a second AQEL review to ensure clarity. On the basis of 150 sets of responses using the revised AQEL, principal component analysis was completed, reducing AQEL to 5 factors that underwent reliability testing, including internal consistency, split-half reliability, test-retest reliability, and interrater reliability (IRR). Two additional modifiable constructs for evaluating apps based on the age and needs of the target audience as selected by the evaluator were also tested for construct reliability. IRR testing using intraclass correlations (ICC) with all 7 constructs was conducted, with 15 dietitians evaluating one app. Development and validation resulted in the 51-item AQEL. These were reduced to 25 items in 5 factors after principal component analysis, plus 9 modifiable items in two constructs that were not included in principal component analysis. Internal consistency and split-half reliability of the constructs derived from principal component analysis were good (Cronbach alpha >.80, Spearman-Brown coefficient >.80): behavior change potential, support of knowledge acquisition, app function, and skill development. App purpose split-half reliability was .65. Test-retest reliability showed no

  2. Evaluation of methodologies for remunerating wind power's reliability in Colombia

    International Nuclear Information System (INIS)

    Botero B, Sergio; Isaza C, Felipe; Valencia, Adriana

    2010-01-01

    Colombia strives to have enough firm capacity available to meet unexpected power shortages and peak demand; this is clear from mechanisms currently in place that provide monetary incentives (on the order of US$14/MWh) to power producers that can guarantee electricity provision during scarcity periods. Yet wind power in Colombia cannot currently guarantee firm power because an accepted methodology to calculate its potential firm capacity does not exist. In this paper we argue that developing such a methodology would provide an incentive for potential investors to enter into this low-carbon technology. This paper analyzes three methodologies currently used in energy markets around the world to calculate firm wind energy capacity: PJM, NYISO, and Spain. These methodologies were initially selected for their ability to fit the Colombian energy regulations. The objective of this work is to determine which of these methodologies makes the most sense from an investor's perspective, to ultimately shed light on developing a methodology to be used in Colombia. To this end, the authors developed an approach consisting of the elaboration of a wind model using Monte Carlo simulation, based on known wind behaviour statistics of a region of Colombia with adequate wind potential. The simulation returns random generation data, representing the resource's inherent variability and simulating the historical record required to evaluate the mentioned methodologies, thus providing the technology's theoretical generation data. The document concludes that the evaluated methodologies are easy to implement and that they do not require historical data (important for Colombia, where there is almost no historical wind power data). It is also found that the Spanish methodology provides a higher Capacity Value (and therefore a higher return to investors). The financial assessment results show that it is crucial that these types of incentives exist to make viable
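
    A toy version of the simulation strategy just described: draw hourly wind speeds from a Weibull model, push them through a turbine power curve, and read off statistics such as the capacity factor and a percentile-based firm output during assumed peak hours. Every parameter below (Weibull shape/scale, turbine rating, peak block) is invented for illustration.

    ```python
    import numpy as np

    rng = np.random.default_rng(42)
    P_RATED = 2.0  # MW, assumed turbine rating

    def power_curve(v, cut_in=3.5, rated=13.0, cut_out=25.0):
        """Simplified power curve (MW): cubic ramp, then flat, then cut-out."""
        ramp = P_RATED * ((v - cut_in) / (rated - cut_in)) ** 3
        p = np.where((v >= cut_in) & (v < rated), ramp, 0.0)
        return np.where((v >= rated) & (v < cut_out), P_RATED, p)

    # Weibull wind-speed model; shape and scale stand in for site statistics.
    years, hours = 200, 8760
    v = rng.weibull(2.1, size=(years, hours)) * 9.0
    gen = power_curve(v)

    capacity_factor = gen.mean() / P_RATED
    peak = gen.reshape(-1, 24)[:, 18:22].ravel()  # assume 18:00-22:00 peak block
    firm = np.quantile(peak, 0.05)                # output exceeded in 95% of peak hours
    print(f"CF = {capacity_factor:.2f}, 95%-firm output = {firm:.2f} MW")
    ```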

  3. Multiple imputation in the presence of non-normal data.

    Science.gov (United States)

    Lee, Katherine J; Carlin, John B

    2017-02-20

    Multiple imputation (MI) is becoming increasingly popular for handling missing data. Standard approaches for MI assume normality for continuous variables (conditionally on the other variables in the imputation model). However, it is unclear how to impute non-normally distributed continuous variables. Using simulation and a case study, we compared various transformations applied prior to imputation, including a novel non-parametric transformation, to imputation on the raw scale and using predictive mean matching (PMM) when imputing non-normal data. We generated data from a range of non-normal distributions, and set 50% to missing completely at random or missing at random. We then imputed missing values on the raw scale, following a zero-skewness log, Box-Cox or non-parametric transformation and using PMM with both type 1 and 2 matching. We compared inferences regarding the marginal mean of the incomplete variable and the association with a fully observed outcome. We also compared results from these approaches in the analysis of depression and anxiety symptoms in parents of very preterm compared with term-born infants. The results provide novel empirical evidence that the decision regarding how to impute a non-normal variable should be based on the nature of the relationship between the variables of interest. If the relationship is linear in the untransformed scale, transformation can introduce bias irrespective of the transformation used. However, if the relationship is non-linear, it may be important to transform the variable to accurately capture this relationship. A useful alternative is to impute the variable using PMM with type 1 matching. Copyright © 2016 John Wiley & Sons, Ltd.
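
    One of the compared strategies, transform / impute / back-transform, can be sketched as follows. A simple normal-model draw stands in for a full multiple-imputation engine here, so this illustrates the Box-Cox transformation step only, not the paper's complete procedure; all data are simulated.

    ```python
    import numpy as np
    from scipy.stats import boxcox
    from scipy.special import inv_boxcox

    rng = np.random.default_rng(7)

    x = rng.lognormal(mean=0.0, sigma=0.8, size=500)  # right-skewed variable
    miss = rng.random(500) < 0.5                      # 50% MCAR, as in the study
    x_obs = x[~miss]

    # Fit Box-Cox on the observed values (requires positive data).
    z_obs, lam = boxcox(x_obs)

    # Impute on the transformed (approximately normal) scale, then back-transform.
    # A proper MI engine would draw from a posterior; a plain normal draw is
    # used here purely for illustration.
    draws = rng.normal(z_obs.mean(), z_obs.std(ddof=1), size=miss.sum())
    x_imp = x.copy()
    x_imp[miss] = inv_boxcox(draws, lam)
    print(f"lambda = {lam:.2f}, observed mean = {x_obs.mean():.2f}, "
          f"completed mean = {x_imp.mean():.2f}")
    ```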

  4. Bulk electric system reliability evaluation incorporating wind power and demand side management

    Science.gov (United States)

    Huang, Dange

    Electric power systems are experiencing dramatic changes with respect to structure, operation and regulation and are facing increasing pressure due to environmental and societal constraints. Bulk electric system reliability is an important consideration in power system planning, design and operation, particularly in the new competitive environment. A wide range of methods have been developed to perform bulk electric system reliability evaluation. Theoretically, sequential Monte Carlo simulation can include all aspects and contingencies in a power system and can be used to produce an informative set of reliability indices. Owing to the growth of computing power, it has become a practical and viable technique for large-system reliability assessment and is used in the studies described in this thesis. The well-being approach used in this research provides the opportunity to integrate an accepted deterministic criterion into a probabilistic framework. This research work includes the investigation of important factors that impact bulk electric system adequacy evaluation and security-constrained adequacy assessment using the well-being analysis framework. Load forecast uncertainty is an important consideration in an electrical power system. This research includes load forecast uncertainty considerations in bulk electric system reliability assessment, and the effects on system, load point and well-being indices and on reliability index probability distributions are examined. There has been increasing worldwide interest in the utilization of wind power as a renewable energy source over the last two decades due to enhanced public awareness of the environment. Increasing penetration of wind power has significant impacts on power system reliability, and security analyses become more uncertain due to the unpredictable nature of wind power. The effects of wind power additions in generating and bulk electric system reliability assessment considering site wind speed

  5. A Comparison of Joint Model and Fully Conditional Specification Imputation for Multilevel Missing Data

    Science.gov (United States)

    Mistler, Stephen A.; Enders, Craig K.

    2017-01-01

    Multiple imputation methods can generally be divided into two broad frameworks: joint model (JM) imputation and fully conditional specification (FCS) imputation. JM draws missing values simultaneously for all incomplete variables using a multivariate distribution, whereas FCS imputes variables one at a time from a series of univariate conditional…
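
    FCS-style chained equations are available in several libraries; a minimal illustration with scikit-learn's IterativeImputer is shown below. Note that this is a single-imputation analogue of FCS repeated m times, without Rubin's pooling rules, and the simulated data are invented.

    ```python
    import numpy as np
    from sklearn.experimental import enable_iterative_imputer  # noqa: F401
    from sklearn.impute import IterativeImputer

    rng = np.random.default_rng(3)
    n = 300
    x1 = rng.normal(size=n)
    x2 = 0.6 * x1 + rng.normal(scale=0.8, size=n)
    X = np.column_stack([x1, x2])
    X[rng.random(n) < 0.3, 1] = np.nan  # ~30% of x2 missing, MAR given x1

    # FCS idea: each incomplete variable is regressed on the others in turn;
    # sample_posterior=True adds draw-based noise, mimicking proper imputation.
    completed = [IterativeImputer(sample_posterior=True, random_state=i)
                 .fit_transform(X) for i in range(5)]       # m = 5 datasets
    print(np.round([c[:, 1].mean() for c in completed], 3))
    ```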

  6. Research on Control Method Based on Real-Time Operational Reliability Evaluation for Space Manipulator

    Directory of Open Access Journals (Sweden)

    Yifan Wang

    2014-05-01

    A control method based on real-time operational reliability evaluation for a space manipulator is presented for improving the success rate of the manipulator during the execution of a task. In this paper, a method for quantitative analysis of operational reliability is given for a manipulator executing a specified task; then a control model which can regulate the quantitative operational reliability is built. First, the control process is described by using a state space equation. Second, process parameters are estimated in real time using a Bayesian method. Third, the expression of the system's real-time operational reliability is deduced based on the state space equation and the process parameters estimated using the Bayesian method. Finally, a control variable regulation strategy which considers the cost of control is given based on the Theory of Statistical Process Control. It is shown via simulations that this method effectively improves the operational reliability of the space manipulator control system.

  7. Long-term reliability evaluation of nuclear containments with tendon force degradation

    International Nuclear Information System (INIS)

    Kim, Sang-Hyo; Choi, Moon-Seock; Joung, Jung-Yeun; Kim, Kun-Soo

    2013-01-01

    Highlights: • A probabilistic model on long-term degradation of tendon force is developed. • By using the model, we performed reliability evaluation of nuclear containment. • The analysis is also performed for the case with the strict maintenance programme. • We showed how to satisfy the target safety in containments facing life extension. - Abstract: The long-term reliability of nuclear containment is important for operating nuclear power plants. In particular, long-term reliability should be clarified when the service life of nuclear containment is being extended. This study focuses not only on determining the reliability of nuclear containment but also on presenting the reliability improvement achieved by strengthening the containment itself or by running a strict maintenance programme. The degradation characteristics of tendon force are estimated from the data recorded during in-service inspection of containments. A reliability analysis is conducted for the limit state of through-wall cracking, which is a conservative but most crucial limit state. The results of this analysis indicate that reliability is lowest at 3/4 height of the containment wall; therefore, this location is the most vulnerable for the specific limit state considered in this analysis. Furthermore, changes in structural reliability owing to an increase in the number of inspected tendons are analysed to verify the effect of the maintenance programme's intensity on expected containment reliability. In the last part of this study, an example of obtaining the target reliability of nuclear containment by strengthening its structural resistance is presented. A case study is conducted to exemplify the effect of strengthening work on containment reliability, especially during extended service life.

  8. Evaluation of the reliability of transport networks based on the stochastic flow of moving objects

    International Nuclear Information System (INIS)

    Wu Weiwei; Ning, Angelika; Ning Xuanxi

    2008-01-01

    In transport networks, human beings are moving objects whose moving direction is stochastic in emergency situations. Based on this idea, a new model, the stochastic moving network (SMN), is proposed. It is different from binary-state networks and stochastic-flow networks. The flow of SMNs has multiple saturated states, which correspond to different flow values in each arc. In this paper, we try to evaluate the system reliability, defined as the probability that the saturated flow of the network is not less than a given demand d. Based on this new model, we obtain the flow probability distribution of every arc by simulation. An algorithm based on the blocking cutset of the SMN is proposed to evaluate the network reliability. An example is used to show how to calculate the corresponding reliabilities for different given demands of the SMN. Simulation experiments of different sizes were performed and the precision of the system reliability estimate was calculated; the precision of the simulation results is also discussed.
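
    The reliability notion used here, the probability that the saturated flow meets a demand d, can be approximated for a small network by plain Monte Carlo over random arc capacities plus a max-flow computation. The brute-force sketch below (with an invented four-node network and capacity distribution) is a baseline illustration, not the blocking-cutset algorithm of the paper.

    ```python
    import numpy as np
    import networkx as nx

    rng = np.random.default_rng(5)
    arcs = [("s", "a"), ("s", "b"), ("a", "t"), ("b", "t"), ("a", "b")]

    def sample_network():
        """Draw a random capacity state for every arc (multi-state arcs)."""
        g = nx.DiGraph()
        for u, v in arcs:
            g.add_edge(u, v,
                       capacity=int(rng.choice([0, 1, 2, 3],
                                               p=[0.05, 0.15, 0.3, 0.5])))
        return g

    def reliability(demand, trials=5000):
        """Estimate P(max s-t flow >= demand) by Monte Carlo."""
        ok = sum(nx.maximum_flow(sample_network(), "s", "t")[0] >= demand
                 for _ in range(trials))
        return ok / trials

    for d in (1, 2, 3, 4):
        print(f"d = {d}: R ~ {reliability(d):.3f}")
    ```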

  9. Evaluation of nodal reliability risk in a deregulated power system with photovoltaic power penetration

    DEFF Research Database (Denmark)

    Zhao, Qian; Wang, Peng; Goel, Lalit

    2014-01-01

    Owing to the intermittent characteristic of solar radiation, power system reliability may be affected with high photovoltaic (PV) power penetration. To reduce large variation of PV power, additional system balancing reserve would be needed. In deregulated power systems, deployment of reserves...... and customer reliability requirements are correlated with energy and reserve prices. Therefore a new method should be developed to evaluate the impacts of PV power on customer reliability and system reserve deployment in the new environment. In this study, a method based on the pseudo-sequential Monte Carlo...... simulation technique has been proposed to evaluate the reserve deployment and customers' nodal reliability with high PV power penetration. The proposed method can effectively model the chronological aspects and stochastic characteristics of PV power and system operation with high computation efficiency...

  10. Reliability and risk evaluation of a port oil pipeline transportation system in variable operation conditions

    International Nuclear Information System (INIS)

    Soszynska, Joanna

    2010-01-01

    The semi-Markov model of the system operation processes is proposed and its selected characteristics are determined. A system composed of multi-state components is considered and its reliability and risk characteristics are found. Next, the joint model of the system operation process and the system multi-state reliability is applied to the reliability and risk evaluation of the port oil pipeline transportation system. The pipeline system is described and the unknown parameters of its operation process are identified on the basis of real statistical data. The mean values of the pipeline system operation process unconditional sojourn times in particular operation states are found and applied to determining the transient probabilities of this process in these states. The different reliability structures of the piping in its various operation states are fixed, and their conditional reliability functions are approximately determined on the basis of data coming from experts. Finally, after applying the earlier estimated transient probabilities and the system conditional reliability functions in particular operation states, the unconditional reliability function, the mean values and standard deviations of the pipeline lifetimes in particular reliability states, the risk function and the moment when the risk exceeds a critical value are found.

  11. Reliability and risk evaluation of a port oil pipeline transportation system in variable operation conditions

    Energy Technology Data Exchange (ETDEWEB)

    Soszynska, Joanna, E-mail: joannas@am.gdynia.p [Department of Mathematics, Gdynia Maritime University, ul. Morska 83, 81-225 Gdynia (Poland)

    2010-02-15

    The semi-Markov model of the system operation processes is proposed and its selected characteristics are determined. A system composed of multi-state components is considered and its reliability and risk characteristics are found. Next, the joint model of the system operation process and the system multi-state reliability is applied to the reliability and risk evaluation of the port oil pipeline transportation system. The pipeline system is described and the unknown parameters of its operation process are identified on the basis of real statistical data. The mean values of the pipeline system operation process unconditional sojourn times in particular operation states are found and applied to determining the transient probabilities of this process in these states. The different reliability structures of the piping in its various operation states are fixed, and their conditional reliability functions are approximately determined on the basis of data coming from experts. Finally, after applying the earlier estimated transient probabilities and the system conditional reliability functions in particular operation states, the unconditional reliability function, the mean values and standard deviations of the pipeline lifetimes in particular reliability states, the risk function and the moment when the risk exceeds a critical value are found.

  12. Study on seismic reliability for foundation grounds and surrounding slopes of nuclear power plants. Proposal of evaluation methodology and integration of seismic reliability evaluation system

    International Nuclear Information System (INIS)

    Ohtori, Yasuki; Kanatani, Mamoru

    2006-01-01

    This paper proposes an evaluation methodology for the annual probability of failure of soil structures subjected to earthquakes and integrates an analysis system for the seismic reliability of soil structures. The method is based on margin analysis, which evaluates the ground motion level at which a structure is damaged. First, a ground motion index that is strongly correlated with damage or response of the specific structure is selected. The ultimate strength in terms of the selected ground motion index is then evaluated. Next, the variation of soil properties is taken into account in the evaluation of the seismic stability of structures. The variation of the safety factor (SF) is evaluated and then converted into the variation of the specific ground motion index. Finally, the fragility curve is developed and the annual probability of failure is evaluated by combining it with the seismic hazard curve. The system facilitates the assessment of seismic reliability: a generator of random numbers, a dynamic analysis program and a stability analysis program are incorporated into one package. Once we define a structural model, the distribution of the soil properties, input ground motions and so forth, a list of safety factors for each sliding line is obtained. Monte Carlo Simulation (MCS), Latin Hypercube Sampling (LHS), the point estimation method (PEM) and the first order second moment (FOSM) method implemented in this system are also introduced. As numerical examples, a ground foundation and a surrounding slope are assessed using the proposed method and the integrated system. (author)
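
    The final step described, combining a fragility curve with a seismic hazard curve, is commonly written as P_f = ∫ F(a) |dλ/da| da. A discretized version with an assumed lognormal fragility and a power-law hazard curve (both invented for illustration) looks like this:

    ```python
    import numpy as np
    from scipy.stats import lognorm

    # Hazard curve: annual exceedance frequency of PGA level a (assumed power law).
    a = np.linspace(0.05, 2.0, 400)       # PGA grid in g
    lam = 1e-3 * (a / 0.1) ** -2.5        # lambda(a), illustrative only

    # Fragility: P(failure | PGA = a), lognormal with median 0.6 g, beta = 0.4.
    frag = lognorm(s=0.4, scale=0.6).cdf(a)

    # Annual failure probability: sum fragility times hazard-curve increments.
    p_f = np.sum(frag[:-1] * -np.diff(lam))
    print(f"annual probability of failure ~ {p_f:.2e}")
    ```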

  13. Verification of practicability of quantitative reliability evaluation method (De-BDA) in nuclear power plants

    International Nuclear Information System (INIS)

    Takahashi, Kinshiro; Yukimachi, Takeo.

    1988-01-01

    A variety of methods have been applied to the study of reliability analysis in which human factors are included, in order to enhance the safety and availability of nuclear power plants. De-BDA (Detailed Block Diagram Analysis) is one such method, developed with the objective of creating a more comprehensive and understandable tool for quantitative analysis of reliability associated with plant operations. The practicability of this method has been verified by applying it to reliability analysis of various phases of plant operation, as well as to evaluation of an enhanced man-machine interface in the central control room. (author)

  14. Reliability evaluation of a port oil transportation system in variable operation conditions

    International Nuclear Information System (INIS)

    Soszynska, Joanna

    2006-01-01

    The semi-Markov model of the system operation processes is proposed and its selected parameters are determined. The series 'm out of k_n' multi-state system is considered and its reliability and risk characteristics are found. Next, the joint model of the system operation process and the system multi-state reliability and risk is constructed. Moreover, reliability and risk evaluation of the multi-state series 'm out of k_n' system in its operation process is applied to the port oil transportation system.

  15. Reliability evaluation of a port oil transportation system in variable operation conditions

    Energy Technology Data Exchange (ETDEWEB)

    Soszynska, Joanna [Department of Mathematics, Gdynia Maritime University, ul. Morska 83, 81-225 Gdynia (Poland)]. E-mail: joannas@am.gdynia.pl

    2006-04-15

    The semi-Markov model of the system operation processes is proposed and its selected parameters are determined. The series 'm out of k_n' multi-state system is considered and its reliability and risk characteristics are found. Next, the joint model of the system operation process and the system multi-state reliability and risk is constructed. Moreover, reliability and risk evaluation of the multi-state series 'm out of k_n' system in its operation process is applied to the port oil transportation system.

  16. Sensitivity evaluation of human factors for reliability of the containment spray system

    International Nuclear Information System (INIS)

    Tsujimura, Yasuhiro; Suzuki, Eiji

    1988-01-01

    Evaluation of human reliability is one of the most difficult problems in assessing the safety and reliability of large systems, especially the Engineered Safety Features (ESF) of a nuclear power plant. In this paper, the influences of human factors on the reliability of the Containment Spray System in the ESF were estimated using the FTA method. As a result, the adequacy of the system structure and the effects of human factors on variations of the design of the system structure were explained. (author)

  17. Multiple Imputation of Predictor Variables Using Generalized Additive Models

    NARCIS (Netherlands)

    de Jong, Roel; van Buuren, Stef; Spiess, Martin

    2016-01-01

    The sensitivity of multiple imputation methods to deviations from their distributional assumptions is investigated using simulations, where the parameters of scientific interest are the coefficients of a linear regression model, and values in predictor variables are missing at random. The

  18. Accuracy, reliability, and timing of visual evaluations of decay in fresh-cut lettuce

    Science.gov (United States)

    Visual assessments are used for evaluating the quality of food products, such as fresh-cut lettuce packaged in bags with modified atmosphere. We have compared the accuracy and the reliability of visual evaluations of decay on fresh-cut lettuce performed by experienced and inexperienced raters. In ...

  19. Reading for Reliability: Preservice Teachers Evaluate Web Sources about Climate Change

    Science.gov (United States)

    Damico, James S.; Panos, Alexandra

    2016-01-01

    This study examined what happened when 65 undergraduate prospective secondary level teachers across content areas evaluated the reliability of four online sources about climate change: an oil company webpage, a news report, and two climate change organizations with competing views on climate change. The students evaluated the sources at three time…

  20. Partial F-tests with multiply imputed data in the linear regression framework via coefficient of determination.

    Science.gov (United States)

    Chaurasia, Ashok; Harel, Ofer

    2015-02-10

    Tests for regression coefficients such as global, local, and partial F-tests are common in applied research. In the framework of multiple imputation, there are several papers addressing tests for regression coefficients. However, for simultaneous hypothesis testing, the existing methods are computationally intensive because they involve calculation with vectors and (inversion of) matrices. In this paper, we propose a simple method based on the scalar entity, coefficient of determination, to perform (global, local, and partial) F-tests with multiply imputed data. The proposed method is evaluated using simulated data and applied to suicide prevention data. Copyright © 2014 John Wiley & Sons, Ltd.
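
    The complete-data building block the paper starts from is easy to state: for a full model with p predictors and a reduced model omitting q of them, F = ((R²_f − R²_r)/q) / ((1 − R²_f)/(n − p − 1)). The sketch below computes this per completed dataset on synthetic data; the paper's actual contribution is a principled rule for pooling across the m imputations, which this naive per-dataset computation does not reproduce.

    ```python
    import numpy as np

    def r_squared(X, y):
        """R^2 of an OLS fit with intercept."""
        X1 = np.column_stack([np.ones(len(y)), X])
        beta, *_ = np.linalg.lstsq(X1, y, rcond=None)
        resid = y - X1 @ beta
        return 1 - resid @ resid / ((y - y.mean()) @ (y - y.mean()))

    def partial_f(X_full, X_reduced, y):
        """Partial F-test for the q predictors dropped from the full model."""
        n, p = X_full.shape
        q = p - X_reduced.shape[1]
        r2f, r2r = r_squared(X_full, y), r_squared(X_reduced, y)
        return ((r2f - r2r) / q) / ((1 - r2f) / (n - p - 1))

    rng = np.random.default_rng(11)
    n = 200
    X = rng.normal(size=(n, 3))
    y = X @ np.array([1.0, 0.5, 0.0]) + rng.normal(size=n)
    # Crude stand-in for m = 3 completed datasets from multiple imputation:
    # each "imputation" perturbs y slightly.
    f_stats = [partial_f(X, X[:, :1], y + rng.normal(scale=0.05, size=n))
               for _ in range(3)]
    print(np.round(f_stats, 2))
    ```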

  1. Comparison of different Methods for Univariate Time Series Imputation in R

    OpenAIRE

    Moritz, Steffen; Sardá, Alexis; Bartz-Beielstein, Thomas; Zaefferer, Martin; Stork, Jörg

    2015-01-01

    Missing values in datasets are a well-known problem and there are quite a lot of R packages offering imputation functions. But while imputation in general is well covered within R, it is hard to find functions for the imputation of univariate time series. The problem is that most standard imputation techniques cannot be applied directly. Most algorithms rely on inter-attribute correlations, while univariate time series imputation needs to exploit time dependencies. This paper provides an overview of ...
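
    For contrast with the R ecosystem surveyed there, two typical univariate techniques of the kind such an overview covers, time-aware interpolation and last observation carried forward, take one line each in pandas; the series below is synthetic.

    ```python
    import numpy as np
    import pandas as pd

    rng = np.random.default_rng(2)
    idx = pd.date_range("2015-01-01", periods=120, freq="h")
    s = pd.Series(np.sin(np.arange(120) / 6.0) + rng.normal(scale=0.1, size=120),
                  index=idx)
    s.iloc[[10, 11, 12, 50, 90]] = np.nan  # introduce gaps

    # Univariate imputation must exploit time structure, not other attributes:
    filled = s.interpolate(method="time")  # linear in time
    locf = s.ffill()                       # last observation carried forward
    print(filled.isna().sum(), locf.isna().sum())
    ```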

  2. Evaluating the reliability of multi-body mechanisms: A method considering the uncertainties of dynamic performance

    International Nuclear Information System (INIS)

    Wu, Jianing; Yan, Shaoze; Zuo, Ming J.

    2016-01-01

    Mechanism reliability is defined as the ability of a certain mechanism to maintain output accuracy under specified conditions. Mechanism reliability is generally assessed by the classical direct probability method (DPM) derived from the first order second moment (FOSM) method. The DPM relies strongly on the analytical form of the dynamic solution so it is not applicable to multi-body mechanisms that have only numerical solutions. In this paper, an indirect probability model (IPM) is proposed for mechanism reliability evaluation of multi-body mechanisms. IPM combines the dynamic equation, degradation function and Kaplan–Meier estimator to evaluate mechanism reliability comprehensively. Furthermore, to reduce the amount of computation in practical applications, the IPM is simplified into the indirect probability step model (IPSM). A case study of a crank–slider mechanism with clearance is investigated. Results show that relative errors between the theoretical and experimental results of mechanism reliability are less than 5%, demonstrating the effectiveness of the proposed method. - Highlights: • An indirect probability model (IPM) is proposed for mechanism reliability evaluation. • The dynamic equation, degradation function and Kaplan–Meier estimator are used. • Then the simplified form of indirect probability model is proposed. • The experimental results agree well with the predicted results.
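
    The Kaplan–Meier step used inside the IPM can be illustrated on its own: given failure times with right-censoring, the survival estimate is the product over distinct failure times of (1 − d_i/n_i). A compact implementation with made-up cycle counts:

    ```python
    import numpy as np

    def kaplan_meier(times, observed):
        """Return (event time, survival estimate) pairs.

        times: time (e.g., cycles) of failure or censoring for each unit.
        observed: True if the unit failed, False if right-censored.
        """
        order = np.argsort(times)
        times = np.asarray(times)[order]
        observed = np.asarray(observed, dtype=bool)[order]
        at_risk = len(times)
        s, out = 1.0, []
        for t in np.unique(times):
            mask = times == t
            deaths = int(observed[mask].sum())
            if deaths:
                s *= 1 - deaths / at_risk
                out.append((t, s))
            at_risk -= int(mask.sum())  # failures and censorings leave risk set
        return out

    # Hypothetical output-accuracy failures of a crank-slider test rig.
    times = [120, 150, 150, 200, 240, 240, 300, 360]
    observed = [1, 1, 0, 1, 1, 0, 1, 0]
    for t, s in kaplan_meier(times, observed):
        print(f"t = {t}: S = {s:.3f}")
    ```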

  3. Multiple Improvements of Multiple Imputation Likelihood Ratio Tests

    OpenAIRE

    Chan, Kin Wai; Meng, Xiao-Li

    2017-01-01

    Multiple imputation (MI) inference handles missing data by first properly imputing the missing values $m$ times, and then combining the $m$ analysis results from applying a complete-data procedure to each of the completed datasets. However, the existing method for combining likelihood ratio tests has multiple defects: (i) the combined test statistic can be negative in practice when the reference null distribution is a standard $F$ distribution; (ii) it is not invariant to re-parametrization; ...

  4. Test-retest and interrater reliability of the functional lower extremity evaluation.

    Science.gov (United States)

    Haitz, Karyn; Shultz, Rebecca; Hodgins, Melissa; Matheson, Gordon O

    2014-12-01

    Repeated-measures clinical measurement reliability study. To establish the reliability and face validity of the Functional Lower Extremity Evaluation (FLEE). The FLEE is a 45-minute battery of 8 standardized functional performance tests that measures 3 components of lower extremity function: control, power, and endurance. The reliability and normative values for the FLEE in healthy athletes are unknown. A face validity survey for the FLEE was sent to sports medicine personnel to evaluate the level of importance and frequency of clinical usage of each test included in the FLEE. The FLEE was then administered and rated for 40 uninjured athletes. To assess test-retest reliability, each athlete was tested twice, 1 week apart, by the same rater. To assess interrater reliability, 3 raters scored each athlete during 1 of the testing sessions. Intraclass correlation coefficients were used to assess the test-retest and interrater reliability of each of the FLEE tests. In the face validity survey, the FLEE tests were rated as highly important by 58% to 71% of respondents but frequently used by only 26% to 45% of respondents. Interrater reliability intraclass correlation coefficients ranged from 0.83 to 1.00, and test-retest reliability ranged from 0.71 to 0.95. The FLEE tests are considered clinically important for assessing lower extremity function by sports medicine personnel but are underused. The FLEE also is a reliable assessment tool. Future studies are required to determine if use of the FLEE to make return-to-play decisions may reduce reinjury rates.

  5. A web-based approach to data imputation

    KAUST Repository

    Li, Zhixu

    2013-10-24

    In this paper, we present WebPut, a prototype system that adopts a novel web-based approach to the data imputation problem. Towards this, WebPut utilizes the available information in an incomplete database in conjunction with the data consistency principle. Moreover, WebPut extends effective Information Extraction (IE) methods for the purpose of formulating web search queries that are capable of effectively retrieving missing values with high accuracy. WebPut employs a confidence-based scheme that efficiently leverages our suite of data imputation queries to automatically select the most effective imputation query for each missing value. A greedy iterative algorithm is proposed to schedule the imputation order of the different missing values in a database, and in turn the issuing of their corresponding imputation queries, for improving the accuracy and efficiency of WebPut. Moreover, several optimization techniques are also proposed to reduce the cost of estimating the confidence of imputation queries at both the tuple level and the database level. Experiments based on several real-world data collections demonstrate not only the effectiveness of WebPut compared to existing approaches, but also the efficiency of our proposed algorithms and optimization techniques. © 2013 Springer Science+Business Media New York.

  6. Missing data treatments matter: an analysis of multiple imputation for anterior cervical discectomy and fusion procedures.

    Science.gov (United States)

    Ondeck, Nathaniel T; Fu, Michael C; Skrip, Laura A; McLynn, Ryan P; Cui, Jonathan J; Basques, Bryce A; Albert, Todd J; Grauer, Jonathan N

    2018-04-09

    The presence of missing data is a limitation of large datasets, including the National Surgical Quality Improvement Program (NSQIP). In addressing this issue, most studies use complete case analysis, which excludes cases with missing data, thus potentially introducing selection bias. Multiple imputation, a statistically rigorous approach that approximates missing data and preserves sample size, may be an improvement over complete case analysis. The present study aims to evaluate the impact of using multiple imputation in comparison with complete case analysis for assessing the associations between preoperative laboratory values and adverse outcomes following anterior cervical discectomy and fusion (ACDF) procedures. This is a retrospective review of prospectively collected data. Patients undergoing one-level ACDF were identified in NSQIP 2012-2015. Perioperative adverse outcome variables assessed included the occurrence of any adverse event, severe adverse events, and hospital readmission. Missing preoperative albumin and hematocrit values were handled using complete case analysis and multiple imputation. These preoperative laboratory levels were then tested for associations with 30-day postoperative outcomes using logistic regression. A total of 11,999 patients were included. Of this cohort, 63.5% of patients had missing preoperative albumin and 9.9% had missing preoperative hematocrit. When using complete case analysis, only 4,311 patients were studied. The removed patients were significantly younger, healthier, of a common body mass index, and male. Logistic regression analysis failed to identify either preoperative hypoalbuminemia or preoperative anemia as significantly associated with adverse outcomes. When employing multiple imputation, all 11,999 patients were included. Preoperative hypoalbuminemia was significantly associated with the occurrence of any adverse event and severe adverse events. Preoperative anemia was significantly associated with the

  7. Creation of reliable relevance judgments in information retrieval systems evaluation experimentation through crowdsourcing: a review.

    Science.gov (United States)

    Samimi, Parnia; Ravana, Sri Devi

    2014-01-01

    Test collections are used to evaluate information retrieval systems in laboratory-based evaluation experimentation. In a classic setting, generating relevance judgments involves human assessors and is a costly and time-consuming task. Researchers and practitioners are still challenged to perform reliable and low-cost evaluation of retrieval systems. Crowdsourcing, as a novel method of data acquisition, is broadly used in many research fields. It has been proven that crowdsourcing is an inexpensive and quick solution as well as a reliable alternative for creating relevance judgments. One of the applications of crowdsourcing in IR is judging the relevancy of query-document pairs. In order to have a successful crowdsourcing experiment, the relevance judgment tasks should be designed precisely to emphasize quality control. This paper explores different factors that influence the accuracy of relevance judgments produced by workers, and how to strengthen the reliability of judgments in crowdsourcing experiments.

  8. Integrated Evaluation of Reliability and Power Consumption of Wireless Sensor Networks

    Science.gov (United States)

    Dâmaso, Antônio; Maciel, Paulo

    2017-01-01

    Power consumption is a primary interest in Wireless Sensor Networks (WSNs), and a large number of strategies have been proposed to evaluate it. However, those approaches usually neither consider reliability issues nor the power consumption of applications executing in the network. A central concern is the lack of consolidated solutions that enable us to evaluate the power consumption of applications and the network stack also considering their reliabilities. To solve this problem, we introduce a fully automatic solution to design power consumption aware WSN applications and communication protocols. The solution presented in this paper comprises a methodology to evaluate the power consumption based on the integration of formal models, a set of power consumption and reliability models, a sensitivity analysis strategy to select WSN configurations and a toolbox named EDEN to fully support the proposed methodology. This solution allows accurately estimating the power consumption of WSN applications and the network stack in an automated way. PMID:29113078

  9. Integrated Evaluation of Reliability and Power Consumption of Wireless Sensor Networks

    Directory of Open Access Journals (Sweden)

    Antônio Dâmaso

    2017-11-01

    Power consumption is a primary interest in Wireless Sensor Networks (WSNs), and a large number of strategies have been proposed to evaluate it. However, those approaches usually neither consider reliability issues nor the power consumption of applications executing in the network. A central concern is the lack of consolidated solutions that enable us to evaluate the power consumption of applications and the network stack also considering their reliabilities. To solve this problem, we introduce a fully automatic solution to design power consumption aware WSN applications and communication protocols. The solution presented in this paper comprises a methodology to evaluate the power consumption based on the integration of formal models, a set of power consumption and reliability models, a sensitivity analysis strategy to select WSN configurations and a toolbox named EDEN to fully support the proposed methodology. This solution allows accurately estimating the power consumption of WSN applications and the network stack in an automated way.

  10. EVALUATION OF HUMAN RELIABILITY IN SELECTED ACTIVITIES IN THE RAILWAY INDUSTRY

    Directory of Open Access Journals (Sweden)

    Erika SUJOVÁ

    2016-07-01

    The article focuses on evaluation of human reliability in the human–machine system in the railway industry. Based on a survey of a train dispatcher and of selected activities, we have identified risk factors affecting the dispatcher's work and evaluated the level of their influence on the reliability and safety of the performed activities. The research took place at the authors' workplace between 2012 and 2013. A survey method was used; with its help, the authors identified the risk factors that affect selected work activities of the train dispatcher and evaluated the seriousness of their influence on the reliability and safety of the performed activities. Amongst the most important findings are unclear and complicated internal regulations and work processes, a feeling of being overworked, and fear for one's safety at small, insufficiently protected stations.

  11. Reliability of the AMA Guides to the Evaluation of Permanent Impairment.

    Science.gov (United States)

    Forst, Linda; Friedman, Lee; Chukwu, Abraham

    2010-12-01

    AMA's Guides to the Evaluation of Permanent Impairment is used to rate loss of function and determine compensation and ability to work after injury or illness; however, there are few studies that evaluate reliability or construct validity. To evaluate the reliability of the fifth and sixth editions for back injury; to determine best methods for further study. Intra-class correlation coefficients within and between raters were relatively high. There was wider variability for individual cases. Impairment ratings were lower and correlated less well for the sixth edition, though confidence intervals overlapped. The sixth edition may not be an improvement over the fifth. A research agenda should include investigations of reliability and construct validity for different body sites and organ systems along the entire rating scale and among different categories of raters.

  12. Reliability assessment of a peer evaluation instrument in a team-based learning course

    Directory of Open Access Journals (Sweden)

    Wahawisan J

    2016-03-01

    Objective: To evaluate the reliability of a peer evaluation instrument in a longitudinal team-based learning setting. Methods: Student pharmacists were instructed to evaluate the contributions of their peers. Evaluations were analyzed for the variance of the scores by identifying low, medium, and high scores. Agreement between performance ratings within each group of students was assessed via the intra-class correlation coefficient (ICC). Results: We found little variation in the standard deviation (SD) based on the score means among the high, medium, and low scores within each group. The lack of variation in SD of results between groups suggests that the peer evaluation instrument produces precise results. The ICC showed strong concordance among raters. Conclusions: Findings suggest that our student peer evaluation instrument provides a reliable method for peer assessment in team-based learning settings.

  13. Inter-rater reliability of the evaluation of muscular chains associated with posture alterations in scoliosis

    Directory of Open Access Journals (Sweden)

    Fortin Carole

    2012-05-01

    Abstract Background In the Global postural re-education (GPR) evaluation, posture alterations are associated with anterior or posterior muscular chain impairments. Our goal was to assess the reliability of the GPR muscular chain evaluation. Methods Design: Inter-rater reliability study. Fifty physical therapists (PTs) and two experts trained in GPR assessed the standing posture from photographs of five youths with idiopathic scoliosis using a posture analysis grid with 23 posture indices (PI). The PTs and experts indicated the muscular chain associated with posture alterations. The PTs were also divided into three groups according to their experience in GPR. Experts' results (after consensus) were used to verify agreement between PTs and experts for muscular chain and posture assessments. We used Kappa coefficients (K) and the percentage of agreement (%A) to assess inter-rater reliability and intra-class coefficients (ICC) for determining agreement between PTs and experts. Results For the muscular chain evaluation, reliability was moderate to substantial for 12 PI for the PTs (%A: 56 to 82; K: 0.42 to 0.76) and perfect for 19 PI for the experts. For posture assessment, reliability was moderate to substantial for 12 PI for the PTs (%A > 60%; K: 0.42 to 0.75) and moderate to perfect for 18 PI for the experts (%A: 80 to 100; K: 0.55 to 1.00). The agreement between PTs and experts was good for most muscular chain evaluations (18 PI; ICC: 0.82 to 0.99) and PI (19 PI; ICC: 0.78 to 1.00). Conclusions The GPR muscular chain evaluation has good reliability for most posture indices. GPR evaluation should help guide physical therapists in targeting affected muscles for treatment of abnormal posture patterns.

  14. Digital System Reliability Test for the Evaluation of safety Critical Software of Digital Reactor Protection System

    Directory of Open Access Journals (Sweden)

    Hyun-Kook Shin

    2006-08-01

    A new Digital Reactor Protection System (DRPS) based on a VME-bus Single Board Computer has been developed by KOPEC to prevent software Common Mode Failure (CMF) inside the digital system. The new DRPS has been proved to be an effective digital safety system for preventing CMF by a Defense-in-Depth and Diversity (DID&D) analysis. However, for practical use in Nuclear Power Plants, performance tests and reliability tests are essential for the qualification of the digital system. In this study, a single channel of the DRPS prototype has been manufactured for the evaluation of DRPS capabilities. Integrated functional tests are performed and the system reliability is analyzed and tested. The results of the reliability test show that the application software of the DRPS has a very high reliability compared with analog reactor protection systems.

  15. Development of Tsunami Trace Database with reliability evaluation on Japan coasts

    International Nuclear Information System (INIS)

    Iwabuchi, Yoko; Sugino, Hideharu; Imamura, Fumihiko; Imai, Kentaro; Tsuji, Yoshinobu; Matsuoka, Yuya; Shuto, Nobuo

    2012-01-01

    The purpose of this research was to develop a Tsunami Trace Database by collecting historical materials and documents concerning tsunamis that have hit Japan, with the reliability of tsunami run-up and related data taken into account. Based on the acquisition and surveying of references, tsunami trace data covering the past 400 years in Japan were collected into a database, and the reliability of each trace datum was evaluated according to the categorization of the Japan Society of Civil Engineers (2002). As a result, trace data can now be searched and filtered by reliability level while being utilized for verification of tsunami numerical analyses and estimation of tsunami sources. By analyzing this database, we quantitatively revealed that the amount of reliable data tends to diminish for older events. (author)

  16. A Novel OBDD-Based Reliability Evaluation Algorithm for Wireless Sensor Networks on the Multicast Model

    Directory of Open Access Journals (Sweden)

    Zongshuai Yan

    2015-01-01

    The two-terminal reliability calculation for wireless sensor networks (WSNs) is a #P-hard problem. The reliability calculation of WSNs on the multicast model entails an even worse combinatorial explosion of node states than the calculation on the unicast model, yet many real WSNs require the multicast model to deliver information. This research first provides a formal definition of the WSN on the multicast model. Next, a symbolic OBDD_Multicast algorithm is proposed to evaluate the reliability of WSNs on the multicast model. Furthermore, our OBDD_Multicast construction avoids the problem of invalid expansion, reducing the number of subnetworks by identifying the redundant paths of two adjacent nodes and s-t unconnected paths. Experiments show that OBDD_Multicast both reduces the complexity of WSN reliability analysis and has a lower running time than Xing's OBDD (ordered binary decision diagram)-based algorithm.
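    To see why the multicast model explodes combinatorially, consider exact reliability by brute force over link states. The sketch below (a hypothetical five-node network, not the paper's algorithm, which uses OBDDs precisely to avoid this enumeration) declares the network up only when the source reaches every terminal:

```python
from itertools import product

def multicast_reliability(nodes, edges, p, source, terminals):
    """Exact multicast reliability by enumerating all 2**|E| link states.

    Each edge works independently with probability p; the network is 'up'
    when the source can reach *all* terminals. The exponential enumeration
    is exactly the explosion that symbolic OBDD methods sidestep.
    """
    reliability = 0.0
    for state in product([0, 1], repeat=len(edges)):
        prob = 1.0
        for s in state:
            prob *= p if s else (1 - p)
        # Build adjacency over working (undirected) edges only.
        adj = {v: set() for v in nodes}
        for (u, v), s in zip(edges, state):
            if s:
                adj[u].add(v)
                adj[v].add(u)
        # Depth-first search from the source.
        seen, stack = {source}, [source]
        while stack:
            u = stack.pop()
            for v in adj[u]:
                if v not in seen:
                    seen.add(v)
                    stack.append(v)
        if all(t in seen for t in terminals):
            reliability += prob
    return reliability

nodes = ["s", "a", "b", "t1", "t2"]
edges = [("s", "a"), ("s", "b"), ("a", "t1"), ("b", "t1"), ("a", "t2"), ("b", "t2")]
print(multicast_reliability(nodes, edges, p=0.9, source="s", terminals=["t1", "t2"]))
```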

  17. Reliability Evaluation of Service-Oriented Architecture Systems Considering Fault-Tolerance Designs

    Directory of Open Access Journals (Sweden)

    Kuan-Li Peng

    2014-01-01

    strategies. Sensitivity analysis of SOA at both coarse and fine grain levels is also studied, which can be used to efficiently identify the critical parts within the system. Two SOA system scenarios based on real industrial practices are examined. Experimental results show that the proposed SOA model can be used to accurately depict the behavior of SOA systems. Additionally, a sensitivity analysis that quantifies the effects of system structure as well as fault tolerance on the overall reliability is presented. On the whole, the proposed reliability modeling and analysis framework may help SOA service providers to evaluate the overall system reliability effectively and to make smarter improvement plans by focusing resources on enhancing the reliability-sensitive parts within the system.

  18. Evaluation of research reactor fuel reliability in support of regulatory requirements

    International Nuclear Information System (INIS)

    Sokolov, Eugene N.

    2005-01-01

    This survey of standards, codes and practices is devoted to the problem of the reliability of R and D, especially research reactor fuel (RRF) performance-related processes. Regulatory R and D evaluations have typically been based on a single standard, and only a few provide correlation to other related standards, whereas a synthetic process approach reflects the actual status of particular R and D practices. Fuel performance regulatory parameters are based on quality standards. A reliability process-based method similar to PSA/FMEA is proposed to evaluate RRF performance-related parameters in terms of reactor safety. (author)

  19. Reliability Evaluation of Distribution System Considering Sequential Characteristics of Distributed Generation

    Directory of Open Access Journals (Sweden)

    Sheng Wanxing

    2016-01-01

    In view of the randomness of the output power of distributed generation (DG), a reliability evaluation model based on sequential Monte Carlo simulation (SMCS) for distribution systems with DG is proposed. Operating states of the distribution system can be sampled by SMCS in chronological order, and the corresponding output power of DG can thus be generated. The proposed method has been tested on feeder F4 of the IEEE-RBTS Bus 6 system. The results show that reliability evaluation of a distribution system considering the uncertainty of DG output power can be effectively implemented by SMCS.
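    The chronological sampling that SMCS relies on can be illustrated with a single failure-repair process. A minimal sketch under assumed exponential failure and repair times (the rates `lam` and `mu` are illustrative, not from the paper); because states are generated in time order, a time-varying DG output profile could be overlaid on the same timeline:

```python
import numpy as np

rng = np.random.default_rng(7)

def smcs_unavailability(lam=0.5, mu=100.0, years=2000):
    """Sequential Monte Carlo sketch for one feeder section.

    lam: failures per year; mu: repairs per year (mean repair time = 1/mu years).
    Up and down periods are sampled in chronological order, as in SMCS.
    """
    t, down = 0.0, 0.0
    while t < years:
        t += rng.exponential(1.0 / lam)              # time to next failure
        repair = rng.exponential(1.0 / mu)           # repair duration
        down += min(repair, max(0.0, years - t))     # clip at the horizon
        t += repair
    return down / years                              # fraction of time down

# Analytic check: unavailability ~ lam / (lam + mu) ~ 0.005
print(f"estimated unavailability: {smcs_unavailability():.4f}")
```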

  20. Evaluation of research reactor fuel reliability in support of regulatory requirements

    Energy Technology Data Exchange (ETDEWEB)

    Sokolov, Eugene N [Chalk River Laboratories, AECL, Chalk River, ON, K0J 1J0 (Canada)

    2005-07-01

    This survey of standards, codes and practices is devoted to the problem of the reliability of R and D, especially research reactor fuel (RRF) performance-related processes. Regulatory R and D evaluations have typically been based on a single standard, and only a few provide correlation to other related standards, whereas a synthetic process approach reflects the actual status of particular R and D practices. Fuel performance regulatory parameters are based on quality standards. A reliability process-based method similar to PSA/FMEA is proposed to evaluate RRF performance-related parameters in terms of reactor safety. (author)

  1. Mapping wildland fuels and forest structure for land management: a comparison of nearest neighbor imputation and other methods

    Science.gov (United States)

    Kenneth B. Pierce; Janet L. Ohmann; Michael C. Wimberly; Matthew J. Gregory; Jeremy S. Fried

    2009-01-01

    Land managers need consistent information about the geographic distribution of wildland fuels and forest structure over large areas to evaluate fire risk and plan fuel treatments. We compared spatial predictions for 12 fuel and forest structure variables across three regions in the western United States using gradient nearest neighbor (GNN) imputation, linear models (...

  2. Software reliability evaluation of digital plant protection system development process using V and V

    International Nuclear Information System (INIS)

    Lee, Na Young; Hwang, Il Soon; Seong, Seung Hwan; Oh, Seung Rok

    2001-01-01

    In the nuclear power industry, digital technology has recently been introduced for the Instrumentation and Control (I and C) of reactor systems. For its application to safety-critical systems such as the Reactor Protection System (RPS), a reliability assessment is indispensable. Unlike traditional reliability models, software reliability is hard to evaluate and should be assessed throughout the development lifecycle. In the development process of the Digital Plant Protection System (DPPS), the concept of verification and validation (V and V) was introduced to assure the quality of the product. Testing should also be performed to assure reliability. The verification procedure with model checking is relatively well defined; testing, however, is labor intensive and not well organized. In this paper, we developed a methodological process that combines verification with validation test case generation. For this, we used PVS for the table specification and for the theorem proving. As a result, we could not only save test design time but also obtain a more effective and complete verification-related test case set. In addition, we could extract some meaningful factors useful for the reliability evaluation, both from the V and V and from the verification-combined tests.

  3. Inter-rater reliability of the Sødring Motor Evaluation of Stroke patients (SMES).

    Science.gov (United States)

    Halsaa, K E; Sødring, K M; Bjelland, E; Finsrud, K; Bautz-Holter, E

    1999-12-01

    The Sødring Motor Evaluation of Stroke patients is an instrument for physiotherapists to evaluate motor function and activities in stroke patients. The rating reflects quality as well as quantity of the patient's unassisted performance within three domains: leg, arm and gross function. The inter-rater reliability of the method was studied in a sample of 30 patients admitted to a stroke rehabilitation unit. Three therapists were involved in the study; two therapists assessed the same patient on two consecutive days in a balanced design. Cohen's weighted kappa and McNemar's test of symmetry were used as measures of item reliability, and the intraclass correlation coefficient was used to express the reliability of the sumscores. For 24 out of 32 items the weighted kappa statistic was excellent (0.75-0.98), while 7 items had a kappa statistic within the range 0.53-0.74 (fair to good). The reliability of one item was poor (0.13). The intraclass correlation coefficient for the three sumscores was 0.97, 0.91 and 0.97. We conclude that the Sødring Motor Evaluation of Stroke patients is a reliable measure of motor function in stroke patients undergoing rehabilitation.

  4. Age at menopause: imputing age at menopause for women with a hysterectomy with application to risk of postmenopausal breast cancer

    Science.gov (United States)

    Rosner, Bernard; Colditz, Graham A.

    2011-01-01

    Purpose: Age at menopause, a major marker in reproductive life, may bias results in evaluations of breast cancer risk after menopause. Methods: We followed 38,948 premenopausal women in 1980 and identified 2,586 who reported hysterectomy without bilateral oophorectomy and 31,626 who reported natural menopause during 22 years of follow-up. We evaluated risk factors for natural menopause, imputed age at natural menopause for women reporting hysterectomy without bilateral oophorectomy, and estimated the hazard of reaching natural menopause in the next 2 years. We applied this imputed age at menopause both to increase sample size and to evaluate the relation between postmenopausal exposures and risk of breast cancer. Results: Age, cigarette smoking, age at menarche, pregnancy history, body mass index, history of benign breast disease, and history of breast cancer were each significantly related to age at natural menopause; duration of oral contraceptive use and family history of breast cancer were not. The imputation increased the sample size substantially, and although some risk factors after menopause were weaker in the expanded model (height and alcohol use), the estimate for use of hormone therapy is less biased. Conclusions: Imputing age at menopause increases sample size, broadens generalizability by making results applicable to women with hysterectomy, and reduces bias. PMID:21441037

  5. Which missing value imputation method to use in expression profiles: a comparative study and two selection schemes

    Directory of Open Access Journals (Sweden)

    Lotz Meredith J

    2008-01-01

    Background Gene expression data frequently contain missing values; however, most downstream analyses for microarray experiments require complete data. In the literature many methods have been proposed to estimate missing values via information on the correlation patterns within the gene expression matrix. Each method has its own advantages, but the specific conditions under which each method is preferred remain largely unclear. In this report we describe an extensive evaluation of eight current imputation methods on multiple types of microarray experiments, including time series, multiple exposures, and multiple exposures × time series data. We then introduce two complementary selection schemes for determining the most appropriate imputation method for any given data set. Results We found that the optimal imputation algorithms (LSA, LLS, and BPCA) are all highly competitive with each other, and that no method is uniformly superior in all the data sets we examined. The success of each method can also depend on the underlying "complexity" of the expression data, where we take complexity to indicate the difficulty in mapping the gene expression matrix to a lower-dimensional subspace. We developed an entropy measure to quantify the complexity of expression matrices and found that, by incorporating this information, the entropy-based selection (EBS) scheme is useful for selecting an appropriate imputation algorithm. We further propose a simulation-based self-training selection (STS) scheme. This technique has been used previously for microarray data imputation, but for different purposes. The scheme selects the optimal or near-optimal method with high accuracy but at an increased computational cost. Conclusion Our findings provide insight into the problem of which imputation method is optimal for a given data set. Three top-performing methods (LSA, LLS and BPCA) are competitive with each other. Global-based imputation methods (PLS, SVD, BPCA) performed better on microarray data with lower complexity.

  6. Which missing value imputation method to use in expression profiles: a comparative study and two selection schemes.

    Science.gov (United States)

    Brock, Guy N; Shaffer, John R; Blakesley, Richard E; Lotz, Meredith J; Tseng, George C

    2008-01-10

    Gene expression data frequently contain missing values; however, most downstream analyses for microarray experiments require complete data. In the literature many methods have been proposed to estimate missing values via information on the correlation patterns within the gene expression matrix. Each method has its own advantages, but the specific conditions under which each method is preferred remain largely unclear. In this report we describe an extensive evaluation of eight current imputation methods on multiple types of microarray experiments, including time series, multiple exposures, and multiple exposures × time series data. We then introduce two complementary selection schemes for determining the most appropriate imputation method for any given data set. We found that the optimal imputation algorithms (LSA, LLS, and BPCA) are all highly competitive with each other, and that no method is uniformly superior in all the data sets we examined. The success of each method can also depend on the underlying "complexity" of the expression data, where we take complexity to indicate the difficulty in mapping the gene expression matrix to a lower-dimensional subspace. We developed an entropy measure to quantify the complexity of expression matrices and found that, by incorporating this information, the entropy-based selection (EBS) scheme is useful for selecting an appropriate imputation algorithm. We further propose a simulation-based self-training selection (STS) scheme. This technique has been used previously for microarray data imputation, but for different purposes. The scheme selects the optimal or near-optimal method with high accuracy but at an increased computational cost. Our findings provide insight into the problem of which imputation method is optimal for a given data set. Three top-performing methods (LSA, LLS and BPCA) are competitive with each other. Global-based imputation methods (PLS, SVD, BPCA) performed better on microarray data with lower complexity.
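    The entropy measure for expression-matrix complexity lends itself to a short sketch. Assuming, as the abstract suggests, an entropy over the normalized SVD spectrum (the paper's exact definition may differ), low-rank matrices score near 0 and unstructured noise near 1:

```python
import numpy as np

def expression_complexity(X):
    """Entropy of the normalized singular-value spectrum of matrix X.

    Near 0: data lie close to a low-dimensional subspace (low complexity,
    where global methods such as SVD/BPCA tend to do well). Near 1:
    variance is spread across many components (high complexity).
    """
    s = np.linalg.svd(X - X.mean(axis=0), compute_uv=False)
    p = s**2 / np.sum(s**2)            # normalized eigen-spectrum
    p = p[p > 0]
    return float(-(p * np.log(p)).sum() / np.log(len(s)))

rng = np.random.default_rng(0)
low = rng.normal(size=(200, 3)) @ rng.normal(size=(3, 20))   # rank-3 structure
high = rng.normal(size=(200, 20))                            # unstructured noise
print(expression_complexity(low), expression_complexity(high))
```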

  7. Missing Data Imputation of Solar Radiation Data under Different Atmospheric Conditions

    Science.gov (United States)

    Turrado, Concepción Crespo; López, María del Carmen Meizoso; Lasheras, Fernando Sánchez; Gómez, Benigno Antonio Rodríguez; Rollé, José Luis Calvo; de Cos Juez, Francisco Javier

    2014-01-01

    Global solar broadband irradiance on a planar surface is measured at weather stations by pyranometers. In the case of the present research, solar radiation values from nine meteorological stations of the MeteoGalicia real-time observational network, captured and stored every ten minutes, are considered. In this kind of record, the lack of data and/or the presence of wrong values adversely affects any time series study. Consequently, when this occurs, a data imputation process must be performed in order to replace missing data with estimated values. This paper aims to evaluate the multivariate imputation of ten-minute scale data by means of the chained equations method (MICE). This method allows the network itself to impute the missing or wrong data of a solar radiation sensor, by using either all or just a group of the measurements of the remaining sensors. Very good results have been obtained with the MICE method in comparison with other methods employed in this field such as Inverse Distance Weighting (IDW) and Multiple Linear Regression (MLR). The average RMSE value of the predictions for the MICE algorithm was 13.37%, while that for the MLR was 28.19% and that for the IDW was 31.68%. PMID:25356644

  8. Missing Data Imputation of Solar Radiation Data under Different Atmospheric Conditions

    Directory of Open Access Journals (Sweden)

    Concepción Crespo Turrado

    2014-10-01

    Global solar broadband irradiance on a planar surface is measured at weather stations by pyranometers. In the case of the present research, solar radiation values from nine meteorological stations of the MeteoGalicia real-time observational network, captured and stored every ten minutes, are considered. In this kind of record, the lack of data and/or the presence of wrong values adversely affects any time series study. Consequently, when this occurs, a data imputation process must be performed in order to replace missing data with estimated values. This paper aims to evaluate the multivariate imputation of ten-minute scale data by means of the chained equations method (MICE). This method allows the network itself to impute the missing or wrong data of a solar radiation sensor, by using either all or just a group of the measurements of the remaining sensors. Very good results have been obtained with the MICE method in comparison with other methods employed in this field such as Inverse Distance Weighting (IDW) and Multiple Linear Regression (MLR). The average RMSE value of the predictions for the MICE algorithm was 13.37%, while that for the MLR was 28.19% and that for the IDW was 31.68%.

  9. Missing data imputation of solar radiation data under different atmospheric conditions.

    Science.gov (United States)

    Turrado, Concepción Crespo; López, María Del Carmen Meizoso; Lasheras, Fernando Sánchez; Gómez, Benigno Antonio Rodríguez; Rollé, José Luis Calvo; Juez, Francisco Javier de Cos

    2014-10-29

    Global solar broadband irradiance on a planar surface is measured at weather stations by pyranometers. In the case of the present research, solar radiation values from nine meteorological stations of the MeteoGalicia real-time observational network, captured and stored every ten minutes, are considered. In this kind of record, the lack of data and/or the presence of wrong values adversely affects any time series study. Consequently, when this occurs, a data imputation process must be performed in order to replace missing data with estimated values. This paper aims to evaluate the multivariate imputation of ten-minute scale data by means of the chained equations method (MICE). This method allows the network itself to impute the missing or wrong data of a solar radiation sensor, by using either all or just a group of the measurements of the remaining sensors. Very good results have been obtained with the MICE method in comparison with other methods employed in this field such as Inverse Distance Weighting (IDW) and Multiple Linear Regression (MLR). The average RMSE value of the predictions for the MICE algorithm was 13.37%, while that for the MLR was 28.19% and that for the IDW was 31.68%.
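    The chained-equations idea, regressing each station's readings on the remaining stations, can be sketched with scikit-learn's `IterativeImputer` (an analogous implementation; the authors' exact MICE tooling is not specified). The station data below are synthetic, and the masked-entry RMSE mirrors the evaluation described above:

```python
import numpy as np
from sklearn.experimental import enable_iterative_imputer  # noqa: F401
from sklearn.impute import IterativeImputer

rng = np.random.default_rng(42)

# Hypothetical stand-in for the ten-minute radiation records: nine correlated
# "stations" (columns) observed at 500 time stamps (rows).
base = rng.normal(size=(500, 1))
X = base + 0.2 * rng.normal(size=(500, 9))

# Knock out 5% of the readings at random, remembering where they were.
mask = rng.random(X.shape) < 0.05
X_missing = X.copy()
X_missing[mask] = np.nan

# Chained-equations imputation: each station regressed on the others.
imputer = IterativeImputer(max_iter=10, random_state=0)
X_imputed = imputer.fit_transform(X_missing)

rmse = np.sqrt(np.mean((X_imputed[mask] - X[mask]) ** 2))
print(f"RMSE on held-out readings: {rmse:.3f}")
```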

  10. [Evaluation of Suicide Risk Levels in Hospitals: Validity and Reliability Tests].

    Science.gov (United States)

    Macagnino, Sandro; Steinert, Tilman; Uhlmann, Carmen

    2018-05-01

    This study examined in-hospital suicide risk levels with respect to their validity and reliability. The internal suicide risk levels were evaluated in a cross-sectional study of 163 inpatients. A reliability check was performed by determining the inter-rater reliability of the senior physician, the therapist, and the responsible nurse. Within the scope of the validity check, we conducted analyses of criterion validity and construct validity. For the total sample, an "acceptable" to "good" inter-rater reliability (Kendall's W = .77) of the suicide risk levels was obtained. Schizophrenic disorders showed the lowest values, while personality disorders showed the highest level of inter-rater reliability. In the criterion validity check, Item 9 of the BDI-II correlated substantially with our suicide risk levels (ρm = .54). In the construct validity check, affective disorders showed the highest correlation (ρ = .77), compatible also with "convergent validity". They differed from schizophrenic disorders, which showed the least concordance (ρ = .43). Given their overall good validity and reliability, in-hospital suicide risk levels may represent an important contribution to the assessment of suicidal behavior in inpatients undergoing psychiatric treatment. © Georg Thieme Verlag KG Stuttgart · New York.

  11. Balance Assessment in Sports-Related Concussion: Evaluating Test-Retest Reliability of the Equilibrate System.

    Science.gov (United States)

    Odom, Mitchell J; Lee, Young M; Zuckerman, Scott L; Apple, Rachel P; Germanos, Theodore; Solomon, Gary S; Sills, Allen K

    2016-01-01

    This study evaluated the test-retest reliability of a novel computer-based, portable balance assessment tool, the Equilibrate System (ES), used to diagnose sports-related concussion. Twenty-seven students participated in ES testing consisting of three sessions over 4 weeks. The modified Balance Error Scoring System was performed. For each participant, test-retest reliability was established using the intraclass correlation coefficient (ICC). The ES test-retest reliability from baseline to week 2 produced an ICC value of 0.495 (95% CI, 0.123-0.745). Week 2 testing produced ICC values of 0.602 (95% CI, 0.279-0.803) and 0.610 (95% CI, 0.299-0.804), respectively. All other single-measure test-retest comparisons produced poor ICC values. Same-day ES testing showed fair to good test-retest reliability, while inter-week measures displayed poor to fair test-retest reliability. Testing conditions should be controlled when using computerized balance assessment methods. ES testing should only be used as part of a comprehensive assessment.

  12. The New Features of the Exame Evaluation System and Reliability of Its Fixed Tests

    Czech Academy of Sciences Publication Activity Database

    Martinková, Patrícia; Zvára jr., K.; Zvárová, Jana; Zvára, K.

    2006-01-01

    Roč. 45, č. 2 (2006), s. 310-315 ISSN 0026-1270 Grant - others:Evropské sociální fondy CZ04307/42011/0013 Institutional research plan: CEZ:AV0Z10300504 Keywords : education * evaluation * Internet * reliability * bioinformatics Subject RIV: IN - Informatics, Computer Science Impact factor: 1.684, year: 2006

  13. A method to evaluate performance reliability of individual subjects in laboratory research applied to work settings.

    Science.gov (United States)

    1978-10-01

    This report presents a method that may be used to evaluate the reliability of performance of individual subjects, particularly in applied laboratory research. The method is based on analysis of variance of a tasks-by-subjects data matrix, with all sc...

  14. Test-retest reliability of the Isernhagen Work Systems Functional Capacity Evaluation in healthy adults

    NARCIS (Netherlands)

    Reneman, MF; Brouwer, S; Meinema, A; Dijkstra, PU; Geertzen, JHB; Groothoff, JW

    2004-01-01

    The aim of this study was to investigate the test-retest reliability of the Isernhagen Work System Functional Capacity Evaluation (IWS FCE) in healthy subjects. The IWS FCE consists of 28 tests that reflect work-related activities such as lifting, carrying, bending, etc. A convenience sample of 26 healthy

  15. Evaluation of the reliability of Levine method of wound swab for ...

    African Journals Online (AJOL)

    The aim of this paper is to evaluate the reliability of the Levine swab in accurately identifying the microorganisms present in a wound, and to identify the necessity for further studies in this regard. Methods: A semi-structured questionnaire was administered and physical examination was performed on patients with chronic wounds ...

  16. Reliability-Related Issues in the Context of Student Evaluations of Teaching in Higher Education

    Science.gov (United States)

    Kalender, Ilker

    2015-01-01

    Student evaluations of teaching (SET) have been the principal instrument for eliciting students' opinions in higher education institutions. Many decisions, including high-stakes ones, are made based on SET scores reported by students. In this respect, the reliability of SET scores is of considerable importance. This paper argues that there are…

  17. Validity and Reliability of the Clinical Competency Evaluation Instrument for Use among Physiotherapy Students: Pilot study.

    Science.gov (United States)

    Muhamad, Zailani; Ramli, Ayiesah; Amat, Salleh

    2015-05-01

    The aim of this study was to determine the content validity, internal consistency, test-retest reliability and inter-rater reliability of the Clinical Competency Evaluation Instrument (CCEVI) in assessing the clinical performance of physiotherapy students. This study was carried out between June and September 2013 at University Kebangsaan Malaysia (UKM), Kuala Lumpur, Malaysia. A panel of 10 experts were identified to establish content validity by evaluating and rating each of the items used in the CCEVI with regards to their relevance in measuring students' clinical competency. A total of 50 UKM undergraduate physiotherapy students were assessed throughout their clinical placement to determine the construct validity of these items. The instrument's reliability was determined through a cross-sectional study involving a clinical performance assessment of 14 final-year undergraduate physiotherapy students. The content validity index of the entire CCEVI was 0.91, while the proportion of agreement on the content validity indices ranged from 0.83-1.00. The CCEVI construct validity was established with factor loading of ≥0.6, while internal consistency (Cronbach's alpha) overall was 0.97. Test-retest reliability of the CCEVI was confirmed with a Pearson's correlation range of 0.91-0.97 and an intraclass coefficient correlation range of 0.95-0.98. Inter-rater reliability of the CCEVI domains ranged from 0.59 to 0.97 on initial and subsequent assessments. This pilot study confirmed the content validity of the CCEVI. It showed high internal consistency, thereby providing evidence that the CCEVI has moderate to excellent inter-rater reliability. However, additional refinement in the wording of the CCEVI items, particularly in the domains of safety and documentation, is recommended to further improve the validity and reliability of the instrument.

  18. A human reliability based usability evaluation method for safety-critical software

    International Nuclear Information System (INIS)

    Boring, R. L.; Tran, T. Q.; Gertman, D. I.; Ragsdale, A.

    2006-01-01

    Boring and Gertman (2005) introduced a novel method that augments heuristic usability evaluation with the human reliability analysis method SPAR-H. By assigning probabilistic modifiers to individual heuristics, it is possible to arrive at a usability error probability (UEP). Although this UEP is not a literal probability of error, it nonetheless provides a quantitative basis for heuristic evaluation. The method allows one to seamlessly prioritize and identify usability issues (i.e., a higher UEP requires more immediate fixes). However, the original version of the method required the usability evaluator to assign priority weights to the final UEP, thus allowing the priority of a usability issue to differ among usability evaluators. The purpose of this paper is to explore an alternative approach that standardizes the priority weighting of the UEP in an effort to improve the method's reliability. (authors)

  19. An In vitro evaluation of the reliability of QR code denture labeling technique.

    Science.gov (United States)

    Poovannan, Sindhu; Jain, Ashish R; Krishnan, Cakku Jalliah Venkata; Chandran, Chitraa R

    2016-01-01

    Positive identification of the dead after accidents and disasters through labeled dentures plays a key role in forensic scenario. A number of denture labeling methods are available, and studies evaluating their reliability under drastic conditions are vital. This study was conducted to evaluate the reliability of QR (Quick Response) Code labeled at various depths in heat-cured acrylic blocks after acid treatment, heat treatment (burns), and fracture in forensics. It was an in vitro study. This study included 160 specimens of heat-cured acrylic blocks (1.8 cm × 1.8 cm) and these were divided into 4 groups (40 samples per group). QR Codes were incorporated in the samples using clear acrylic sheet and they were assessed for reliability under various depths, acid, heat, and fracture. Data were analyzed using Chi-square test, test of proportion. The QR Code inclusion technique was reliable under various depths of acrylic sheet, acid (sulfuric acid 99%, hydrochloric acid 40%) and heat (up to 370°C). Results were variable with fracture of QR Code labeled acrylic blocks. Within the limitations of the study, by analyzing the results, it was clearly indicated that the QR Code technique was reliable under various depths of acrylic sheet, acid, and heat (370°C). Effectiveness varied in fracture and depended on the level of distortion. This study thus suggests that QR Code is an effective and simpler denture labeling method.

  20. Reliability assessment and correlation analysis of evaluating orthodontic treatment outcome in Chinese patients

    OpenAIRE

    Song, Guang-Ying; Zhao, Zhi-He; Ding, Yin; Bai, Yu-Xing; Wang, Lin; He, Hong; Shen, Gang; Li, Wei-Ran; Baumrind, Sheldon; Geng, Zhi; Xu, Tian-Min

    2013-01-01

    This study aimed to assess the reliability of experienced Chinese orthodontists in evaluating treatment outcome and to determine the correlations between three diagnostic information sources. Sixty-nine experienced Chinese orthodontic specialists each evaluated the outcome of orthodontic treatment of 108 Chinese patients. Three different information sources: study casts (SC), lateral cephalometric X-ray images (LX) and facial photographs (PH) were generated at the end of treatment for 108 pat...

  1. Education Research: Bias and poor interrater reliability in evaluating the neurology clinical skills examination

    Science.gov (United States)

    Schuh, L A.; London, Z; Neel, R; Brock, C; Kissela, B M.; Schultz, L; Gelb, D J.

    2009-01-01

    Objective: The American Board of Psychiatry and Neurology (ABPN) has recently replaced the traditional, centralized oral examination with the locally administered Neurology Clinical Skills Examination (NEX). The ABPN postulated that the experience with the NEX would be similar to the Mini-Clinical Evaluation Exercise, a reliable and valid assessment tool. The reliability and validity of the NEX have not been established. Methods: NEX encounters were videotaped at 4 neurology programs. Local faculty and ABPN examiners graded the encounters using 2 different evaluation forms: an ABPN form and one with a contracted rating scale. Some NEX encounters were purposely failed by residents. Cohen's weighted kappa and intraclass correlation coefficients (ICC) were calculated for local vs ABPN examiners. Results: Ninety-eight videotaped NEX encounters of 32 residents were evaluated by 20 local faculty evaluators and 18 ABPN examiners. The interrater reliability for a determination of pass vs fail for each encounter was poor (kappa 0.32; 95% confidence interval [CI] = 0.11, 0.53). ICC between local faculty and ABPN examiners for each performance rating on the ABPN NEX form was poor to moderate (ICC range 0.14-0.44), and did not improve with the contracted rating form (ICC range 0.09-0.36). ABPN examiners were more likely than local examiners to fail residents. Conclusions: There is poor interrater reliability between local faculty and American Board of Psychiatry and Neurology examiners. A bias was detected for favorable assessment locally, which is concerning for the validity of the examination. Further study is needed to assess whether training can improve interrater reliability and offset bias. GLOSSARY ABIM = American Board of Internal Medicine; ABPN = American Board of Psychiatry and Neurology; CI = confidence interval; HFH = Henry Ford Hospital; ICC = intraclass correlation coefficients; IM = internal medicine; mini-CEX = Mini-Clinical Evaluation Exercise; NEX = Neurology Clinical Skills Examination

  2. Assessing accuracy of genotype imputation in American Indians.

    Directory of Open Access Journals (Sweden)

    Alka Malhotra

    Genotype imputation is commonly used in genetic association studies to test untyped variants using information on linkage disequilibrium (LD) with typed markers. Imputing genotypes requires a suitable reference population in which the LD pattern is known, most often one selected from HapMap. However, some populations, such as American Indians, are not represented in HapMap. In the present study, we assessed the accuracy of imputation using HapMap reference populations in a genome-wide association study in Pima Indians. Data from six randomly selected chromosomes were used. Genotypes in the study population were masked (either 1% or 20% of SNPs available for a given chromosome). The masked genotypes were then imputed using the software Markov Chain Haplotyping Algorithm. Using four HapMap reference populations, average genotype error rates ranged from 7.86% for Mexican Americans to 22.30% for Yoruba. In contrast, use of the original Pima Indian data as a reference resulted in an average error rate of 1.73%. Our results suggest that the use of HapMap reference populations results in substantial inaccuracy in the imputation of genotypes in American Indians. A possible solution would be to densely genotype or sequence a reference American Indian population.
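    The masking experiment described above is easy to mimic: hide a fraction of known genotypes, impute them, and score the error rate on the hidden calls. A minimal sketch with a synthetic genotype matrix and naive per-SNP mode imputation standing in for the haplotype-based tool actually used (real LD-aware imputation does far better):

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical genotype matrix: rows = individuals, columns = SNPs,
# entries in {0, 1, 2} (minor-allele counts).
G = rng.integers(0, 3, size=(1000, 50))

# Mask 1% of the calls, as in the masking experiment described above.
mask = rng.random(G.shape) < 0.01
G_masked = G.astype(float)
G_masked[mask] = np.nan

# Naive per-SNP mode imputation (illustrative baseline only).
imputed = G_masked.copy()
for j in range(G.shape[1]):
    col = G_masked[:, j]
    vals, counts = np.unique(col[~np.isnan(col)], return_counts=True)
    imputed[np.isnan(col), j] = vals[np.argmax(counts)]

error_rate = np.mean(imputed[mask] != G[mask])
print(f"genotype error rate on masked calls: {100 * error_rate:.2f}%")
```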

  3. Evaluation of the reliability of the protection system of 1300 MWE PWR'S

    International Nuclear Information System (INIS)

    Blin, A.

    1990-01-01

    An assessment of the reliability of the Digital Integrated Protection System (SPIN) of the French 1300 MWe reactors has been carried out by treating an example: the emergency shutdown, which can be called upon by several initiating events. The whole chain, from sensors to breakers and control rods, is taken into account. The reliability parameters used for the quantification are evaluated essentially from the operating experience feedback of French reactors. Since the less well-known parameters are the common cause failure rates of electronic components and the efficiency rate of the self-tests, the results of the study are presented in a parametric form according to these two factors.

  4. Evaluation of European blanket concepts for DEMO from availability and reliability point of view

    International Nuclear Information System (INIS)

    Nardi, C.

    1995-12-01

    This technical report is concerned with the ENEA activities relating to reliability and availability for the selection of two of the four European blanket concepts for the DEMO reactor. The activities on the BIT concept, the one proposed by ENEA, are emphasized. In spite of the lack of data on the behaviour of structures in an environment similar to that of a fusion reactor, it is shown that the available data are relevant to the BIT concept geometry. Moreover, it is shown that qualitative reliability evaluations, compared to quantitative ones, can lead to a better understanding of the typical problems of a structure to be used in a fusion reactor.

  5. Standards and reliability in evaluation: when rules of thumb don't apply.

    Science.gov (United States)

    Norcini, J J

    1999-10-01

    The purpose of this paper is to identify situations in which two rules of thumb in evaluation do not apply. The first rule is that all standards should be absolute. When selection decisions are being made or when classroom tests are given, however, relative standards may be better. The second rule of thumb is that every test should have a reliability of .80 or better. Depending on the circumstances, though, the standard error of measurement, the consistency of pass/fail classifications, and the domain-referenced reliability coefficients may be better indicators of reproducibility.
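    The standard error of measurement mentioned here follows directly from score variability and reliability in classical test theory; a one-function illustration (hypothetical numbers):

```python
def standard_error_of_measurement(sd: float, reliability: float) -> float:
    """Classical test theory: SEM = SD * sqrt(1 - reliability)."""
    return sd * (1 - reliability) ** 0.5

# e.g., a score SD of 10 with reliability .80 gives SEM ~ 4.47
print(round(standard_error_of_measurement(10.0, 0.80), 2))
```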

  6. Development of slim-maud: a multi-attribute utility approach to human reliability evaluation

    International Nuclear Information System (INIS)

    Embrey, D.E.

    1984-01-01

    This paper describes further work on the Success Likelihood Index Methodology (SLIM), a procedure for quantitatively evaluating human reliability in nuclear power plants and other systems. SLIM was originally developed by Human Reliability Associates during an earlier contract with Brookhaven National Laboratory (BNL). A further development of SLIM, SLIM-MAUD (Multi-Attribute Utility Decomposition) is also described. This is an extension of the original approach using an interactive, computer-based system. All of the work described in this report was supported by the Human Factors and Safeguards Branch of the US Nuclear Regulatory Commission
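    The core of SLIM is a weighted sum of performance-shaping-factor ratings (the success likelihood index, SLI), calibrated to human error probabilities via a log-linear relation through anchor tasks of known HEP. A minimal sketch with hypothetical weights, ratings, and anchor HEPs (illustrative values only, not from the report):

```python
import numpy as np

# Hypothetical performance-shaping-factor weights (summing to 1) and ratings
# (0 = worst, 1 = best) for three tasks; the last two are calibration anchors.
weights = np.array([0.4, 0.3, 0.3])   # e.g., time pressure, training, interface
ratings = np.array([
    [0.5, 0.6, 0.4],   # task of interest
    [0.9, 0.9, 0.8],   # easy anchor, known HEP = 1e-4
    [0.2, 0.3, 0.1],   # hard anchor, known HEP = 1e-1
])
sli = ratings @ weights               # success likelihood indices

# Fit log10(HEP) = a * SLI + b through the two anchor tasks.
a, b = np.polyfit(sli[1:], np.log10([1e-4, 1e-1]), 1)
hep = 10 ** (a * sli[0] + b)
print(f"SLI = {sli[0]:.2f}, estimated HEP = {hep:.2e}")
```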

  7. Failure mechanism dependence and reliability evaluation of non-repairable system

    International Nuclear Information System (INIS)

    Chen, Ying; Yang, Liu; Ye, Cui; Kang, Rui

    2015-01-01

    Reliability studies of electronic systems with the physics-of-failure method have been promoted by the increasing knowledge of electronic failure mechanisms. System failures initiate from independent failure mechanisms, which affect or are affected by other failure mechanisms and finally result in system failure. Failure mechanisms in a non-repairable system can be correlated in many ways. One failure mechanism, once developed to a certain degree, may trigger, accelerate or inhibit one or many other failure mechanisms, and several failure mechanisms may have the same effect on a failure site, component or system. The destructive effects accumulate and result in early failure. This paper presents a reliability evaluation method considering correlations among failure mechanisms, including trigger, acceleration, inhibition, accumulation, and competition. Based on fundamental rules of the physics of failure, decoupling methods for these correlations are discussed. With a case study, the reliability of an electronic system is evaluated considering failure mechanism dependence. - Highlights: • Five types of failure mechanism correlations are described. • Decoupling methods for these correlations are discussed. • A reliability evaluation method considering mechanism dependence is proposed. • Results differ markedly from results under the failure independence assumption

  8. Simulated patient training: Using inter-rater reliability to evaluate simulated patient consistency in nursing education.

    Science.gov (United States)

    MacLean, Sharon; Geddes, Fiona; Kelly, Michelle; Della, Phillip

    2018-03-01

    Simulated patients (SPs) are frequently used for training nursing students in communication skills. An acknowledged benefit of using SPs is the opportunity to provide a standardized approach by which participants can demonstrate and develop communication skills. However, relatively little evidence is available on how to best facilitate and evaluate the reliability and accuracy of SPs' performances. The aim of this study is to investigate the effectiveness of an evidenced based SP training framework to ensure standardization of SPs. The training framework was employed to improve inter-rater reliability of SPs. A quasi-experimental study was employed to assess SP post-training understanding of simulation scenario parameters using inter-rater reliability agreement indices. Two phases of data collection took place. Initially a trial phase including audio-visual (AV) recordings of two undergraduate nursing students completing a simulation scenario is rated by eight SPs using the Interpersonal Communication Assessments Scale (ICAS) and Quality of Discharge Teaching Scale (QDTS). In phase 2, eight SP raters and four nursing faculty raters independently evaluated students' (N=42) communication practices using the QDTS. Intraclass correlation coefficients (ICC) were >0.80 for both stages of the study in clinical communication skills. The results support the premise that if trained appropriately, SPs have a high degree of reliability and validity to both facilitate and evaluate student performance in nurse education. Crown Copyright © 2018. Published by Elsevier Ltd. All rights reserved.

  9. Evaluating abdominal core muscle fatigue: Assessment of the validity and reliability of the prone bridging test.

    Science.gov (United States)

    De Blaiser, C; De Ridder, R; Willems, T; Danneels, L; Vanden Bossche, L; Palmans, T; Roosen, P

    2018-02-01

    The aims of this study were to examine the amplitude and median frequency characteristics of selected abdominal, back, and hip muscles of healthy subjects during a prone bridging endurance test, based on surface electromyography (sEMG), (a) to determine whether the prone bridging test is a valid field test to measure abdominal muscle fatigue, and (b) to evaluate whether the current method of administering the prone bridging test is reliable. Thirty healthy subjects participated in this experiment. The sEMG activity of seven abdominal, back, and hip muscles was measured bilaterally. Normalized median frequencies were computed from the EMG power spectra. The prone bridging tests were repeated on separate days to evaluate inter- and intra-tester reliability. Significant differences in normalized median frequency slope (NMFslope) values between several abdominal, back, and hip muscles could be demonstrated. Moderate-to-high correlation coefficients were found between NMFslope values and endurance time. Multiple backward linear regression revealed that the test endurance time could only be significantly predicted by the NMFslope of the rectus abdominis. Statistical analysis showed excellent reliability (ICC=0.87-0.89). The findings of this study support the validity and reliability of the prone bridging test for evaluating abdominal muscle fatigue. © 2017 John Wiley & Sons A/S. Published by John Wiley & Sons Ltd.

  10. Rainfall Reliability Evaluation for Stability of Municipal Solid Waste Landfills on Slope

    Directory of Open Access Journals (Sweden)

    Fu-Kuo Huang

    2013-01-01

    A method to assess the reliability of the stability of municipal solid waste (MSW) landfills on slopes under rainfall infiltration is proposed. Parameter studies are first done to explore the influence of various factors on the stability of MSW; these factors include rainfall intensity, duration, pattern, and the engineering properties of MSW. Then 100 different combinations of parameters are generated and the associated stability analyses of MSW on slopes are performed, assuming that each parameter is uniformly distributed over its reasonable range. In the following, the stability performance of the MSW is interpreted by an artificial neural network (ANN) trained and verified on the aforementioned 100 analysis results. The reliability of the stability of MSW landfills on slopes is then evaluated and explored for different rainfall parameters by the ANN model with the first-order reliability method (FORM) and Monte Carlo simulation (MCS).

  11. A Bayesian reliability evaluation method with integrated accelerated degradation testing and field information

    International Nuclear Information System (INIS)

    Wang, Lizhi; Pan, Rong; Li, Xiaoyang; Jiang, Tongmin

    2013-01-01

    Accelerated degradation testing (ADT) is a common approach in reliability prediction, especially for products with high reliability. However, the laboratory conditions of ADT often differ from field conditions; thus, to predict field failure, one needs to calibrate the prediction made using ADT data. In this paper a Bayesian evaluation method is proposed to integrate ADT data from the laboratory with failure data from the field. Calibration factors are introduced to account for the difference between the lab and field conditions so as to predict a product's actual field reliability more accurately. The information fusion and statistical inference procedures are carried out through a Bayesian approach and Markov chain Monte Carlo methods. The proposed method is demonstrated with two examples along with a sensitivity analysis of the prior distribution assumptions.

  12. ON CONSTRUCTION OF A RELIABLE GROUND TRUTH FOR EVALUATION OF VISUAL SLAM ALGORITHMS

    Directory of Open Access Journals (Sweden)

    Jan Bayer

    2016-11-01

    In this work we address the problem of evaluating the localization accuracy of visual Simultaneous Localization and Mapping (SLAM) techniques. Quantitative evaluation of SLAM algorithm performance is usually done using the established metrics of relative pose error and absolute trajectory error, which require a precise and reliable ground truth. Such a ground truth is usually hard to obtain, since it requires an expensive external localization system. In this work we propose to use the SLAM algorithm itself to construct a reliable ground truth by offline frame-by-frame processing. The generated ground truth is suitable for evaluation of different SLAM systems, as well as for tuning the parametrization of the on-line SLAM. The presented practical experimental results indicate the feasibility of the proposed approach.

  13. The multiple imputation method: a case study involving secondary data analysis.

    Science.gov (United States)

    Walani, Salimah R; Cleland, Charles M

    2015-05-01

    To illustrate with the example of a secondary data analysis study the use of the multiple imputation method to replace missing data. Most large public datasets have missing data, which need to be handled by researchers conducting secondary data analysis studies. Multiple imputation is a technique widely used to replace missing values while preserving the sample size and sampling variability of the data. The 2004 National Sample Survey of Registered Nurses. The authors created a model to impute missing values using the chained equation method. They used imputation diagnostics procedures and conducted regression analysis of imputed data to determine the differences between the log hourly wages of internationally educated and US-educated registered nurses. The authors used multiple imputation procedures to replace missing values in a large dataset with 29,059 observations. Five multiple imputed datasets were created. Imputation diagnostics using time series and density plots showed that imputation was successful. The authors also present an example of the use of multiple imputed datasets to conduct regression analysis to answer a substantive research question. Multiple imputation is a powerful technique for imputing missing values in large datasets while preserving the sample size and variance of the data. Even though the chained equation method involves complex statistical computations, recent innovations in software and computation have made it possible for researchers to conduct this technique on large datasets. The authors recommend nurse researchers use multiple imputation methods for handling missing data to improve the statistical power and external validity of their studies.

  14. Student Practice Evaluation Form-Revised Edition online comment bank: development and reliability analysis.

    Science.gov (United States)

    Rodger, Sylvia; Turpin, Merrill; Copley, Jodie; Coleman, Allison; Chien, Chi-Wen; Caine, Anne-Maree; Brown, Ted

    2014-08-01

    The reliable evaluation of occupational therapy students completing practice education placements along with provision of appropriate feedback is critical for both students and for universities from a quality assurance perspective. This study describes the development of a comment bank for use with an online version of the Student Practice Evaluation Form-Revised Edition (SPEF-R Online) and investigates its reliability. A preliminary bank of 109 individual comments (based on previous students' placement performance) was developed via five stages. These comments reflected all 11 SPEF-R domains. A purpose-designed online survey was used to examine the reliability of the comment bank. A total of 37 practice educators returned surveys, 31 of which were fully completed. Participants were asked to rate each individual comment using the five-point SPEF-R rating scale. One hundred and two of 109 comments demonstrated satisfactory agreement with their respective default ratings that were determined by the development team. At each domain level, the intra-class correlation coefficients (ranging between 0.86 and 0.96) also demonstrated good to excellent inter-rater reliability. There were only seven items that required rewording prior to inclusion in the final SPEF-R Online comment bank. The development of the SPEF-R Online comment bank offers a source of reliable comments (consistent with the SPEF-R rating scale across different domains) and aims to assist practice educators in providing reliable and timely feedback to students in a user-friendly manner. © 2014 Occupational Therapy Australia.

  15. TRIP: An interactive retrieving-inferring data imputation approach

    KAUST Repository

    Li, Zhixu

    2016-06-25

    Data imputation aims at filling in missing attribute values in databases. Existing imputation approaches to nonquantitative string data can be roughly put into two categories: (1) inferring-based approaches [2], and (2) retrieving-based approaches [1]. Specifically, the inferring-based approaches find substitutes or estimations for the missing values from the complete part of the data set. However, they typically fall short in filling in unique missing attribute values which do not exist in the complete part of the data set [1]. The retrieving-based approaches resort to external resources for help, formulating proper web search queries to retrieve web pages containing the missing values from the Web and then extracting the missing values from the retrieved pages [1]. This web-based retrieving approach reaches high imputation precision and recall but, on the other hand, issues a large number of web search queries, which brings a large overhead [1]. © 2016 IEEE.

  16. TRIP: An interactive retrieving-inferring data imputation approach

    KAUST Repository

    Li, Zhixu; Qin, Lu; Cheng, Hong; Zhang, Xiangliang; Zhou, Xiaofang

    2016-01-01

    Data imputation aims at filling in missing attribute values in databases. Existing imputation approaches to nonquantitative string data can be roughly put into two categories: (1) inferring-based approaches [2], and (2) retrieving-based approaches [1]. Specifically, the inferring-based approaches find substitutes or estimations for the missing values from the complete part of the data set. However, they typically fall short in filling in unique missing attribute values which do not exist in the complete part of the data set [1]. The retrieving-based approaches resort to external resources for help, formulating proper web search queries to retrieve web pages containing the missing values from the Web and then extracting the missing values from the retrieved pages [1]. This web-based retrieving approach reaches high imputation precision and recall but, on the other hand, issues a large number of web search queries, which brings a large overhead [1]. © 2016 IEEE.

  17. Imputed prices of greenhouse gases and land forests

    International Nuclear Information System (INIS)

    Uzawa, Hirofumi

    1993-01-01

    The theory of dynamic optimum formulated by Maeler gives us the basic theoretical framework within which it is possible to analyse the economic and, possibly, political circumstances under which the phenomenon of global warming occurs, and to search for the policy and institutional arrangements whereby it would be effectively arrested. The analysis developed here is an application of Maeler's theory to atmospheric quality. In the analysis a central role is played by the concept of imputed price in the dynamic context. Our determination of imputed prices of atmospheric carbon dioxide and land forests takes into account the difference in the stages of economic development. Indeed, the ratios of the imputed prices of atmospheric carbon dioxide and land forests over the per capita level of real national income are identical for all countries involved. (3 figures, 2 tables) (Author)

  18. A multi-state reliability evaluation model for P2P networks

    International Nuclear Information System (INIS)

    Fan Hehong; Sun Xiaohan

    2010-01-01

    The appearance of new service types and the convergence tendency of communication networks have endowed networks with more and more P2P (peer-to-peer) properties. These networks can be more robust and tolerant of a series of non-perfect operational states due to their non-deterministic server-client distributions. Thus a reliability model taking into account the multi-state and non-deterministic server-client distribution properties is needed for appropriate evaluation of such networks. In this paper, two new performance measures are defined to quantify the overall and local states of the networks. A new time-evolving state-transition Monte Carlo (TEST-MC) simulation model is presented for the reliability analysis of P2P networks in multiple states. The results show that the model is not only valid for estimating the traditional binary-state network reliability parameters, but also adequate for acquiring the parameters in a series of non-perfect operational states, with good efficiency, especially for highly reliable networks. Furthermore, the model is versatile for reliability and maintainability analyses, in that both the links and the nodes can be failure-prone with arbitrary life distributions, and various maintainability schemes can be applied.

  19. Evaluation of Smart Grid Technologies Employed for System Reliability Improvement: Pacific Northwest Smart Grid Demonstration Experience

    Energy Technology Data Exchange (ETDEWEB)

    Agalgaonkar, Yashodhan P.; Hammerstrom, Donald J.

    2017-06-01

    The Pacific Northwest Smart Grid Demonstration (PNWSGD) was a smart grid technology performance evaluation project that included multiple U.S. states and cooperation from multiple electric utilities in the northwest region. One of the local objectives for the project was to achieve improved distribution system reliability. Toward this end, some PNWSGD utilities automated their distribution systems, including the application of fault detection, isolation, and restoration and advanced metering infrastructure. In light of this investment, a major challenge was to establish a correlation between implementation of these smart grid technologies and actual improvements of distribution system reliability. This paper proposes using Welch’s t-test to objectively determine and quantify whether distribution system reliability is improving over time. The proposed methodology is generic, and it can be implemented by any utility after calculation of the standard reliability indices. The effectiveness of the proposed hypothesis testing approach is demonstrated through comprehensive practical results. It is believed that wider adoption of the proposed approach can help utilities to evaluate a realistic long-term performance of smart grid technologies.
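    The Welch's t-test proposed above is a one-call computation once the standard reliability indices are in hand. A minimal sketch with hypothetical annual SAIDI values (scipy's `ttest_ind` with `equal_var=False` is the Welch variant):

```python
import numpy as np
from scipy import stats

# Hypothetical annual SAIDI values (minutes per customer) for one feeder,
# before and after smart grid automation was deployed.
saidi_before = np.array([118, 135, 122, 141, 128, 133])
saidi_after = np.array([102, 97, 110, 97, 97, 105])

# Welch's t-test: no equal-variance assumption between the two periods.
t_stat, p_value = stats.ttest_ind(saidi_before, saidi_after, equal_var=False)
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")
```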

  20. Evaluating validity and reliability of Persian version of Supports Intensity Scale in adults with intellectual disability

    Directory of Open Access Journals (Sweden)

    Shahin Soltani

    2013-12-01

    Background: The shifting paradigms since the 1980s regarding the ways to assess the support needs of people with intellectual disability necessitate the design and development of appropriate tools more than ever. In this regard, the American Association on Intellectual and Developmental Disabilities (AAIDD) developed the Supports Intensity Scale (SIS) in response to the lack of an appropriate measurement tool. The aim of this study is the cultural adaptation and evaluation of the psychometric properties of the Supports Intensity Scale in adults with intellectual disability. Methods: The validity of the Persian version of the SIS was assessed through content validity. The reliability of the scale was evaluated using Cronbach's alpha and test-retest reliability with a 3-week interval. In this study, the sample contained 43 adults (29 men and 14 women) with intellectual disability. Results: The content of the Persian version of the SIS was approved by the experts. The Cronbach's alpha reliability coefficients for the subscales ranged between 0.80 and 0.99. Intraclass correlation coefficients ranged between 0.90 and 0.99 (P<0.001). Furthermore, all Pearson correlation coefficients among the SIS subscales ranged between 0.63 and 0.98 (P<0.01). Conclusion: The results of this study indicate that the validity and reliability of the Persian version of the SIS for identifying the pattern and intensity of required supports in adults with intellectual disability are acceptable.
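    Cronbach's alpha, the internal-consistency statistic reported above, reduces to a ratio of item variances to total-score variance. A minimal sketch on a hypothetical respondents-by-items score matrix:

```python
import numpy as np

def cronbach_alpha(items):
    """Cronbach's alpha for an (n_respondents, n_items) score matrix."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1).sum()   # sum of per-item variances
    total_var = items.sum(axis=1).var(ddof=1)     # variance of total scores
    return (k / (k - 1)) * (1 - item_vars / total_var)

# Hypothetical 5 respondents x 4 items.
scores = np.array([[3, 4, 3, 4], [2, 2, 3, 2], [4, 4, 4, 5], [1, 2, 1, 2], [3, 3, 4, 3]])
print(round(cronbach_alpha(scores), 2))
```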

  1. [Reliability of HOMA-IR for evaluation of insulin resistance during perioperative period].

    Science.gov (United States)

    Fujino, Hiroko; Itoda, Shoko; Sako, Saori; Matsuo, Kazuki; Sakamoto, Eiji; Yokoyama, Takeshi

    2013-02-01

    Hyperglycemia due to an increase in insulin resistance (IR) is often observed after surgery in spite of normal insulin secretion. The gold standard method for evaluating the degree of IR is the normoglycemic hyperinsulinemic clamp technique (glucose clamp: GC). The GC using the artificial pancreas STG-22 (Nikkiso, Tokyo, Japan) was established as a more reliable method, since IR is evaluated during a steady-state period under constant insulin infusion. Homeostasis model assessment of insulin resistance (HOMA-IR), however, is frequently employed in daily practice because of its convenience. We therefore investigated the reliability of HOMA-IR in comparison with the glucose clamp using the STG-22. Eight healthy patients undergoing maxillofacial surgery were enrolled in this study after giving written informed consent. Their insulin resistance was evaluated by HOMA-IR and by the GC using the STG-22 before and after surgery. HOMA-IR increased from 0.81 +/- 0.48 to 1.17 +/- 0.50, although the difference between before and after surgery was not significant. On the other hand, the M-value by GC decreased significantly after surgery, from 8.82 +/- 2.49 mg x kg(-1) x min(-1) to 3.84 +/- 0.79 mg x kg(-1) x min(-1) (P = 0.0003). In addition, no significant correlation was found between the HOMA-IR values and the M-value by GC. HOMA-IR may not be reliable for evaluating IR during the perioperative period.
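
    HOMA-IR itself is simple arithmetic: fasting glucose (mg/dL) x fasting insulin (microU/mL) / 405, or equivalently glucose in mmol/L x insulin / 22.5. A worked example with invented values chosen to land near the means reported above:

```python
def homa_ir(fasting_glucose_mg_dl: float, fasting_insulin_uU_ml: float) -> float:
    """Standard HOMA-IR: glucose (mg/dL) x insulin (uU/mL) / 405."""
    return fasting_glucose_mg_dl * fasting_insulin_uU_ml / 405.0

# Illustrative values only, not data from the study:
print(homa_ir(90, 4.0))   # ~0.89, near the preoperative mean quoted above
print(homa_ir(95, 5.0))   # ~1.17, near the postoperative mean
```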

  2. Multiple imputation of missing passenger boarding data in the national census of ferry operators

    Science.gov (United States)

    2008-08-01

    This report presents findings from the 2006 National Census of Ferry Operators (NCFO) augmented with imputed values for passengers and passenger miles. Due to the imputation procedures used to calculate missing data, totals in Table 1 may not corresp...

  3. [The external evaluation of study quality: the role in maintaining the reliability of laboratory information].

    Science.gov (United States)

    Men'shikov, V V

    2013-08-01

    The external evaluation of the quality of clinical laboratory examinations was gradually introduced in USSR medical laboratories beginning in the 1970s. In Russia, in the mid-1990s, a unified nationwide system of external quality evaluation was organized, known as the Federal Center of External Quality Evaluation, based at a laboratory of the State Research Center of Preventive Medicine. The main policy positions in this area were clearly formulated in the guidance documents of the Ministry of Health. Nowadays, the center of external quality evaluation offers more than 100 types of control studies and continually extends their spectrum, guided by the interests of the different disciplines of clinical medicine. Consistent participation of laboratories in the cycles of external quality evaluation promotes improvement in the trueness and precision of analysis results and increases the reliability of laboratory information. However, a significant percentage of laboratories do not participate at all in external quality evaluation, or take part in the control process irregularly and for a limited number of tests. The managers of a number of medical organizations disregard these opportunities to increase the reliability of laboratory information and limit the financing of quality control activities. The article proposes adopting a national standard on the basis of ISO/IEC 17043, "Conformity assessment: General requirements for proficiency testing".

  4. Evaluating and categorizing the reliability of distribution coefficient values in the sorption database

    International Nuclear Information System (INIS)

    Ochs, Michael; Saito, Yoshihiko; Kitamura, Akira; Shibata, Masahiro; Sasamoto, Hiroshi; Yui, Mikazu

    2007-03-01

    Japan Atomic Energy Agency (JAEA) has developed the sorption database (JNC-SDB) for bentonite and rocks in order to assess the retardation properties of important radioactive elements in natural and engineered barriers in the H12 report. The database includes distribution coefficients (Kd) of important radionuclides; the SDB contains about 20,000 Kd values. The SDB covers a great variety of Kd values and additional key information drawn from many different publications. Accordingly, a classification guideline and classification system were developed in order to evaluate the reliability of each Kd value (Th, Pa, U, Np, Pu, Am, Cm, Cs, Ra, Se, Tc on bentonite). The reliability of 3,740 Kd values was evaluated and categorized. (author)

  5. A prospective study assessing agreement and reliability of a geriatric evaluation

    OpenAIRE

    Locatelli, Isabella; Monod, Stéfanie; Cornuz, Jacques; Büla, Christophe J.; Senn, Nicolas

    2017-01-01

    Background The present study takes place within a geriatric program, aiming at improving the diagnosis and management of geriatric syndromes in primary care. Within this program it was of prime importance to be able to rely on a robust and reproducible geriatric consultation to use as a gold standard for evaluating a primary care brief assessment tool. The specific objective of the present study was thus assessing the agreement and reliability of a comprehensive geriatric consultation. Method...

  6. Evaluating the reliability of uranium concentration and isotope ratio measurements via an interlaboratory comparison program

    International Nuclear Information System (INIS)

    Oliveira Junior, Olivio Pereira de; Oliveira, Inez Cristina de; Pereira, Marcia Regina; Tanabe, Eduardo

    2009-01-01

    The nuclear fuel cycle is a strategic area for Brazilian development because it is associated with the generation of electricity needed to boost the country's economy. Uranium is one of the chemical elements in this cycle, and its concentration and isotope composition must be accurately known. In the present work, the reliability of the uranium concentration and isotope ratio measurements carried out at the CTMSP analytical laboratories is evaluated through the results obtained in an international interlaboratory comparison program. (author)

  7. Reliability of the Balance Evaluation Systems Test (BESTest) and BESTest sections for adults with hemiparesis

    Science.gov (United States)

    Rodrigues, Letícia C.; Marques, Aline P.; Barros, Paula B.; Michaelsen, Stella M.

    2014-01-01

    BACKGROUND: The Balance Evaluation Systems Test (BESTest) was recently created to allow the development of treatments according to the specific balance system affected in each patient. The Brazilian version of the BESTest has not been specifically tested after stroke. OBJECTIVE: To evaluate the intra- and inter-rater reliability and the concurrent and convergent validity of the total score and section scores of the BESTest for adults with hemiparesis after stroke. METHOD: The study included 16 subjects (61.1±7.5 years) with chronic hemiparesis (54.5±43.5 months after stroke). The BESTest was administered by two raters in the same week, and one of the raters repeated the test after a one-week interval. The intraclass correlation coefficient (ICC) was calculated to assess intra- and inter-rater reliability. Concurrent validity with the Berg Balance Scale (BBS) and convergent validity with the Activities-specific Balance Confidence scale (ABC-Brazil) were assessed using Pearson's correlation coefficient. RESULTS: Both the BESTest total score (ICC=0.98) and the BESTest sections (ICC between 0.85 and 0.96) showed excellent intra-rater reliability. Inter-rater reliability for the total score was excellent (ICC=0.93) and, for the sections, ranged between 0.71 and 0.94. The correlation coefficients between the BESTest and the BBS and the ABC-Brazil were 0.78 and 0.59, respectively. CONCLUSIONS: The Brazilian version of the BESTest demonstrated adequate reliability when scored by sections and could identify which balance system was affected in patients after stroke. Concurrent validity with the BBS was excellent for the total score and good to excellent for the sections. The total score, but not the sections, presented adequate convergent validity with the ABC-Brazil. However, other psychometric properties should be further investigated. PMID:25003281
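
    One way to reproduce this kind of ICC analysis, shown here as a sketch assuming the pingouin package and invented two-rater scores:

```python
import pandas as pd
import pingouin as pg  # assumed available; provides intraclass_corr()

# Invented long-format data: each subject's BESTest total score from two raters.
df = pd.DataFrame({
    "subject": [1, 1, 2, 2, 3, 3, 4, 4, 5, 5],
    "rater":   ["A", "B"] * 5,
    "score":   [78, 80, 55, 52, 91, 90, 64, 66, 73, 70],
})

icc = pg.intraclass_corr(data=df, targets="subject",
                         raters="rater", ratings="score")
print(icc[["Type", "ICC", "CI95%"]])
# ICC2 (two-way random effects, absolute agreement) is the usual choice
# when the raters are viewed as a sample of all possible raters.
```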

  8. Development of Probabilistic Reliability Models of Photovoltaic System Topologies for System Adequacy Evaluation

    Directory of Open Access Journals (Sweden)

    Ahmad Alferidi

    2017-02-01

    Full Text Available The contribution of solar power in electric power systems has been increasing rapidly due to its environmentally friendly nature. Photovoltaic (PV) systems contain solar cell panels, power electronic converters, high-power switching devices and often transformers. These components collectively play an important role in shaping the reliability of PV systems. Moreover, the power output of PV systems is variable, so it cannot be controlled as easily as conventional generation due to the unpredictable nature of weather conditions. Therefore, solar power has a different influence on generating system reliability compared to conventional power sources. Recently, different PV system designs have been constructed to maximize the output power of PV systems. These different designs are commonly adopted based on the scale of a PV system. Large-scale grid-connected PV systems are generally connected in a centralized or a string structure. Central and string PV schemes differ in terms of how the inverter is connected to the PV arrays. Micro-inverter systems are recognized as a third PV system topology. It is therefore important to evaluate the reliability contribution of PV systems under these topologies. This work utilizes a probabilistic technique to develop a power output model for a PV generation system. A reliability model is then developed for a PV-integrated power system in order to assess the reliability and energy contribution of the solar system to meet overall system demand. The developed model is applied to a small isolated power unit to evaluate system adequacy and the capacity level of a PV system considering the three topologies.

  9. Stroke Impact Scale 3.0: Reliability and Validity Evaluation of the Korean Version.

    Science.gov (United States)

    Choi, Seong Uk; Lee, Hye Sun; Shin, Joon Ho; Ho, Seung Hee; Koo, Mi Jung; Park, Kyoung Hae; Yoon, Jeong Ah; Kim, Dong Min; Oh, Jung Eun; Yu, Se Hwa; Kim, Dong A

    2017-06-01

    To establish the reliability and validity of the Korean version of the Stroke Impact Scale (K-SIS) 3.0. A total of 70 post-stroke patients were enrolled. All subjects were evaluated for general characteristics, the Mini-Mental State Examination (MMSE), the National Institutes of Health Stroke Scale (NIHSS), the Modified Barthel Index, and the Hospital Anxiety and Depression Scale (HADS). The SF-36 and K-SIS 3.0 assessed their health-related quality of life. Statistical analysis then determined the reliability and validity of the K-SIS 3.0. A total of 70 patients (mean age, 54.97 years) participated in this study. Internal consistency of the SIS 3.0 (Cronbach's alpha) was good for all domains, with coefficients above the 0.70 threshold. Test-retest reliability of the SIS 3.0 was assessed by correlating (Spearman's rho) the same domain scores obtained on the first and second assessments; results were above 0.5, with the exception of social participation and mobility. Concurrent validity of the K-SIS 3.0 was assessed using the SF-36 and other scales with the same or similar domains. Each domain of the K-SIS 3.0 had a positive correlation with the corresponding or similar domain of the SF-36 and the other scales (HADS, MMSE, and NIHSS). The newly developed K-SIS 3.0 showed high internal consistency and test-retest reliability, together with high concurrent validity with the original and various other scales, for patients with stroke. The K-SIS 3.0 can therefore be used for stroke patients to assess their health-related quality of life and treatment efficacy.

  10. The REPAS approach to the evaluation of passive safety systems reliability

    International Nuclear Information System (INIS)

    Bianchi, F.; Burgazzi, L.; D'Auria, F.; Ricotti, M.E.

    2002-01-01

    The scope of this research, carried out by ENEA in collaboration with the University of Pisa and the Polytechnic of Milan since 1999, is the identification of a methodology allowing the evaluation of the reliability of passive systems as a whole, in a more physical and phenomenological way. The paper describes the study, named REPAS (Reliability Evaluation of Passive Safety systems), carried out by the partners and aimed at the development and validation of such a procedure. The approach starts from the consideration that a passive system should be theoretically more reliable than an active one. In fact, it does not need any external input or energy to operate and relies only upon natural physical laws (e.g. gravity, natural circulation, internally stored energy, etc.) and/or 'intelligent' use of the energy inherently available in the system (e.g. chemical reaction, decay heat, etc.). Nevertheless, a passive system may fail its mission not only as a consequence of classical mechanical failures of components, but also through deviations from its expected behaviour, due to physical phenomena mainly related to thermal-hydraulics or to different boundary and initial conditions. The main sources of physical failure are identified and a probability of occurrence is assigned. The reliability analysis is performed on a passive system which operates in two-phase natural circulation. The selected system is a loop including a heat source and a heat sink where condensation occurs. The system behaviour under different configurations has been simulated via a best-estimate code (Relap5 mod3.2). The results are shown and can be treated in such a way as to give qualitative and quantitative information on the system reliability. Main routes for development of the methodology are also depicted. The analysis of the results shows that the procedure is suitable to evaluate the performance of a passive system on a probabilistic / deterministic basis. Important information can also be

  11. Quality Evaluation Scores are no more Reliable than Gestalt in Evaluating the Quality of Emergency Medicine Blogs: A METRIQ Study.

    Science.gov (United States)

    Thoma, Brent; Sebok-Syer, Stefanie S; Colmers-Gray, Isabelle; Sherbino, Jonathan; Ankel, Felix; Trueger, N Seth; Grock, Andrew; Siemens, Marshall; Paddock, Michael; Purdy, Eve; Kenneth Milne, William; Chan, Teresa M

    2018-01-30

    Construct: We investigated the quality of emergency medicine (EM) blogs as educational resources. Online medical education resources such as blogs are increasingly used by EM trainees and clinicians. However, quality evaluations of these resources using gestalt are unreliable. We investigated the reliability of two previously derived quality evaluation instruments for blogs. Sixty English-language EM websites that published clinically oriented blog posts between January 1 and February 24, 2016, were identified. A random number generator selected 10 websites, and the 2 most recent clinically oriented blog posts from each site were evaluated using gestalt, the Academic Life in Emergency Medicine (ALiEM) Approved Instructional Resources (AIR) score, and the Medical Education Translational Resources: Impact and Quality (METRIQ-8) score, by a sample of medical students, EM residents, and EM attendings. Each rater evaluated all 20 blog posts with gestalt and 15 of the 20 blog posts with the ALiEM AIR and METRIQ-8 scores. Pearson's correlations were calculated between the average scores for each metric. Single-measure intraclass correlation coefficients (ICCs) evaluated the reliability of each instrument. Our study included 121 medical students, 88 EM residents, and 100 EM attendings who completed ratings. The average gestalt rating of each blog post correlated strongly with the average scores for ALiEM AIR (r = .94) and METRIQ-8 (r = .91). Single-measure ICCs were fair for gestalt (0.37, IQR 0.25-0.56), ALiEM AIR (0.41, IQR 0.29-0.60) and METRIQ-8 (0.40, IQR 0.28-0.59). The average scores of each blog post correlated strongly with gestalt ratings. However, neither ALiEM AIR nor METRIQ-8 showed higher reliability than gestalt. Improved reliability may be possible through rater training and instrument refinement.

  12. HTGR plant availability and reliability evaluations. Volume I. Summary of evaluations

    International Nuclear Information System (INIS)

    Cadwallader, G.J.; Hannaman, G.W.; Jacobsen, F.K.; Stokely, R.J.

    1976-12-01

    The report (1) describes a reliability assessment methodology for systematically locating and correcting areas which may contribute to unavailability of new and uniquely designed components and systems, (2) illustrates the methodology by applying it to such components in a high-temperature gas-cooled reactor [Public Service Company of Colorado's Fort St. Vrain 330-MW(e) HTGR], and (3) compares the results of the assessment with actual experience. The methodology can be applied to any component or system; however, it is particularly valuable for assessments of components or systems which provide essential functions, or the failure or mishandling of which could result in relatively large economic losses

  13. HTGR plant availability and reliability evaluations. Volume I. Summary of evaluations

    Energy Technology Data Exchange (ETDEWEB)

    Cadwallader, G.J.; Hannaman, G.W.; Jacobsen, F.K.; Stokely, R.J.

    1976-12-01

    The report (1) describes a reliability assessment methodology for systematically locating and correcting areas which may contribute to unavailability of new and uniquely designed components and systems, (2) illustrates the methodology by applying it to such components in a high-temperature gas-cooled reactor (Public Service Company of Colorado's Fort St. Vrain 330-MW(e) HTGR), and (3) compares the results of the assessment with actual experience. The methodology can be applied to any component or system; however, it is particularly valuable for assessments of components or systems which provide essential functions, or the failure or mishandling of which could result in relatively large economic losses.

  14. Development and Reliability Evaluation of the Movement Rating Instrument for Virtual Reality Video Game Play.

    Science.gov (United States)

    Levac, Danielle; Nawrotek, Joanna; Deschenes, Emilie; Giguere, Tia; Serafin, Julie; Bilodeau, Martin; Sveistrup, Heidi

    2016-06-01

    Virtual reality active video games are increasingly popular physical therapy interventions for children with cerebral palsy. However, physical therapists require educational resources to support decision making about game selection to match individual patient goals. Quantifying the movements elicited during virtual reality active video game play can inform individualized game selection in pediatric rehabilitation. The objectives of this study were to develop and evaluate the feasibility and reliability of the Movement Rating Instrument for Virtual Reality Game Play (MRI-VRGP). Item generation occurred through an iterative process of literature review and sample videotape viewing. The MRI-VRGP includes 25 items quantifying upper extremity, lower extremity, and total body movements. A total of 176 videotaped 90-second game play sessions involving 7 typically developing children and 4 children with cerebral palsy were rated by 3 raters trained in MRI-VRGP use. Children played 8 games on 2 virtual reality and active video game systems. Intraclass correlation coefficients (ICCs) determined intra-rater and inter-rater reliability. Excellent intra-rater reliability was evidenced by ICCs of >0.75 for 17 of the 25 items across the 3 raters. Inter-rater reliability estimates were less precise. Excellent inter-rater reliability was achieved for far-reach upper extremity movements (ICC=0.92 for right and ICC=0.90 for left) and for the squat (ICC=0.80) and jump (ICC=0.99) items, with 9 items achieving ICCs of >0.70, 12 items achieving ICCs of between 0.40 and 0.70, and 4 items achieving poor reliability (close-reach upper extremity: ICC=0.14 for right and ICC=0.07 for left; single-leg stance: ICC=0.55 for right and ICC=0.27 for left). Poor video quality, differing item interpretations between raters, and difficulty quantifying the high-speed movements involved in game play affected reliability. With item definition clarification and further psychometric property evaluation, the MRI

  15. Low field magnetic resonance imaging of the lumbar spine: Reliability of qualitative evaluation of disc and muscle parameters

    DEFF Research Database (Denmark)

    Sørensen, Joan Solgaard; Kjaer, Per; Jensen, Tue Secher

    2006-01-01

    PURPOSE: To determine the intra- and interobserver reliability in grading disc and muscle parameters using low-field magnetic resonance imaging (MRI). MATERIAL AND METHODS: MRI scans of 100 subjects representative of the general population were evaluated blindly by two radiologists. Criteria......: Convincing reliability was found in the evaluation of disc- and muscle-related MRI variables....

  16. Evaluation of ideomotor apraxia in patients with stroke: a study of reliability and validity.

    Science.gov (United States)

    Kaya, Kurtulus; Unsal-Delialioglu, Sibel; Kurt, Murat; Altinok, Nermin; Ozel, Sumru

    2006-03-01

    The aim of this study was to determine the reliability and validity of an established ideomotor apraxia test when applied to a Turkish stroke patient population and to healthy controls. The study group comprised 50 patients with right hemiplegia and 36 with left hemiplegia, who had developed the condition as a result of a cerebrovascular accident, and 33 age-matched healthy subjects. The subjects were evaluated for apraxia using an established ideomotor apraxia test. The cut-off value of the test and the inter-observer reliability coefficient were determined. Apraxia was found in 54% of patients with right hemiplegia (most cases being severe) and in 25% of left hemiplegic patients (most being mild). The apraxia scores for patients with right hemiplegia were found to be significantly lower than for those with left hemiplegia and for healthy subjects. There was no statistically significant difference between patients with left hemiplegia and healthy subjects. It was shown that the ideomotor apraxia test could distinguish apraxic from non-apraxic subjects, and the inter-observer reliability coefficient in the study was high, supporting the reliability of the ideomotor apraxia test.

  17. Test-retest reliability of the Progressive Isoinertial Lifting Evaluation (PILE).

    Science.gov (United States)

    Lygren, Hildegunn; Dragesund, Tove; Joensen, Jón; Ask, Tove; Moe-Nilssen, Rolf

    2005-05-01

    A repeated-measures single-group design. To investigate the test-retest reliability of the Progressive Isoinertial Lifting Evaluation (PILE) in patients with long-lasting musculoskeletal problems related to the lumbar spine. Test-retest reliability has been satisfactory in healthy men, but test-retest reliability for clinical populations has not been reported. A total of 31 patients (17 women and 14 men) with long-lasting low back pain participated in the study. The patients were tested twice at an interval of 2 days and at the same time of day. The heaviest load that the patient could lift 4 times was used as the outcome measure. The error of measurement indicates that the true result will, in 95% of cases, be within +/-4.5 kg of the measured value, while the difference between 2 measurements will, in 95% of cases, be less than 6.4 kg. The intraclass correlation (1,1) was 0.91. Relative test-retest reliability was high as assessed by intraclass correlation, but the absolute measurement variability, reported as the smallest detectable difference, is relevant to the interpretation of clinical test results and should also be considered.
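
    The two 95% figures quoted above are linked by standard measurement arithmetic: if the +/-4.5 kg bound is a 95% limit (1.96 x SEM), the smallest detectable difference between two measurements is 1.96 x sqrt(2) x SEM. A quick check:

```python
import math

# Back-of-the-envelope check of the numbers quoted above
# (assuming the +/-4.5 kg bound is a 95% limit, i.e. 1.96 x SEM):
sem = 4.5 / 1.96                    # standard error of measurement, ~2.3 kg
sdd = 1.96 * math.sqrt(2) * sem     # smallest detectable difference
print(f"SEM ~ {sem:.1f} kg, SDD ~ {sdd:.1f} kg")  # SDD ~ 6.4 kg, as reported
```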

  18. Statistical estimation Monte Carlo for unreliability evaluation of highly reliable system

    International Nuclear Information System (INIS)

    Xiao Gang; Su Guanghui; Jia Dounan; Li Tianduo

    2000-01-01

    Based on analog Monte Carlo simulation, statistical estimation Monte Carlo methods for the unreliability evaluation of highly reliable systems are constructed, including a direct statistical estimation Monte Carlo method and a weighted statistical estimation Monte Carlo method. The basic element is given, and the statistical estimation Monte Carlo estimators are derived. The direct Monte Carlo simulation method, the bounding-sampling method, the forced-transitions Monte Carlo method, direct statistical estimation Monte Carlo and weighted statistical estimation Monte Carlo are used to evaluate the unreliability of the same system. By comparison, the weighted statistical estimation Monte Carlo estimator has the smallest variance and the highest computational efficiency
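
    The abstract does not spell out the estimators; as a generic illustration of why weighting helps for highly reliable systems, the sketch below estimates the failure probability of a toy two-component redundant system by sampling from a biased failure probability and correcting with likelihood-ratio weights (all numbers invented):

```python
import random

P_FAIL = 1e-4   # true per-component failure probability (rare event)
Q_BIAS = 0.1    # biased sampling probability used by the weighted estimator

def weighted_estimate(n=100_000, seed=7):
    """Importance-sampled estimate of P{both redundant components fail}."""
    random.seed(seed)
    total = 0.0
    for _ in range(n):
        fails = [random.random() < Q_BIAS for _ in range(2)]
        if all(fails):
            # Likelihood ratio for this outcome: true density / biased density.
            total += (P_FAIL / Q_BIAS) ** 2
        # Samples without full failure contribute zero to the indicator.
    return total / n

print(weighted_estimate())  # ~1e-8 = P_FAIL**2; an analog estimator would
                            # need ~1e9 samples to see even a few failures
```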

  19. Reliable tool life measurements in turning - an application to cutting fluid efficiency evaluation

    DEFF Research Database (Denmark)

    Axinte, Dragos A.; Belluco, Walter; De Chiffre, Leonardo

    2001-01-01

    The paper proposes a method to obtain reliable measurements of tool life in turning, discussing some aspects related to experimental procedure and measurement accuracy. The method (i) allows an experimental determination of the extended Taylor equation with a limited set of experiments and (ii) provides an efficiency evaluation. Six cutting oils, five of which were formulated from a vegetable base stock, were evaluated in turning. Experiments were run over a range of cutting parameters, according to a 2^(3-1) fractional factorial design, machining AISI 316L stainless steel with coated carbide tools. Tool life
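
    The extended Taylor equation referred to above has the form v * T^n * f^a * d^b = C, which is linear in logarithms and can therefore be fitted by least squares from a small factorial experiment. A sketch with invented observations (cutting speed v in m/min, feed f in mm/rev, depth of cut d in mm, tool life T in min):

```python
import numpy as np

# Invented (v, f, d, T) observations; illustration only.
data = np.array([
    [150, 0.10, 1.0, 42.0],
    [200, 0.10, 1.0, 18.0],
    [150, 0.20, 1.0, 30.0],
    [200, 0.20, 1.0, 12.5],
    [150, 0.10, 2.0, 35.0],
    [200, 0.20, 2.0,  9.0],
])
v, f, d, T = data.T

# v * T^n * f^a * d^b = C  becomes, in logs,
# log T = c0 + c1 log v + c2 log f + c3 log d   with n = -1/c1, etc.
X = np.column_stack([np.ones_like(v), np.log(v), np.log(f), np.log(d)])
coef, *_ = np.linalg.lstsq(X, np.log(T), rcond=None)
n = -1.0 / coef[1]
print(f"Taylor exponent n ~ {n:.2f}")
```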

  20. Evaluating and categorizing the reliability of distribution coefficient values in the sorption database (4)

    International Nuclear Information System (INIS)

    Suyama, Tadahiro; Tachi, Yukio; Ganter, Charlotte; Kunze, Susanne; Ochs, Michael

    2011-02-01

    Sorption of radionuclides in bentonites and rocks is one of the key processes in the safe geological disposal of radioactive waste. Japan Atomic Energy Agency (JAEA) has developed a sorption database (JAEA-SDB) containing an extensive compilation of sorption Kd data from batch experiments, extracted from the published literature. JAEA published the first SDB as an important basis for the H12 performance assessment (PA), and has been continuing to improve and update the SDB in view of potential future data needs, focusing on assuring the desired quality level and on practical applications to Kd-setting for the geological environment. The JAEA-SDB includes more than 24,000 Kd values obtained under various conditions and by various methods, and of differing reliability. Accordingly, a quality assurance (QA) and classification guideline with criteria has been developed in order to evaluate the reliability of each Kd value. The reliability of Kd values of key radionuclides for bentonite, mudstone, granite, Fe-oxide/hydroxide and Al-oxide/hydroxide has already been evaluated. This QA information has been accessible through the web-based JAEA-SDB since March 2009. In this report, the QA/classification of selected entries in the JAEA-SDB, focusing on sorption of key radionuclides (Th, Np, Am, Se and Cs) on tuff, which occurs widely in geological environments, was carried out following the approach/guideline defined in our previous report. As a result, the reliability of 560 Kd values was evaluated and classified. This classification scheme is expected to make it possible to obtain a quick overview of the available data from the SDB, and to provide suitable access to the respective data for Kd-setting in PA. (author)

  1. Evaluating and categorizing the reliability of distribution coefficient values in the sorption database (3)

    International Nuclear Information System (INIS)

    Ochs, Michael; Kunze, Susanne; Suyama, Tadahiro; Tachi, Yukio; Yui, Mikazu

    2010-02-01

    Sorption of radionuclides in bentonites and rocks is one of the key processes in the safe geological disposal of radioactive waste. Japan Atomic Energy Agency (JAEA) has developed a sorption database (JAEA-SDB) containing an extensive compilation of sorption Kd data from batch experiments, extracted from the published literature. JAEA published the first SDB as an important basis for the H12 performance assessment (PA), and has been continuing to improve and update the SDB in view of potential future data needs, focusing on assuring the desired quality level and on practical applications to Kd-setting for the geological environment. The JAEA-SDB includes more than 24,000 Kd values obtained under various conditions and by various methods, and of differing reliability. Accordingly, a quality assurance (QA) and classification guideline with criteria has been developed in order to evaluate the reliability of each Kd value. The reliability of Kd values of key radionuclides for bentonite and mudstone systems has already been evaluated. To make this QA information available, the new web-based JAEA-SDB was published in March 2009. In this report, the QA/classification of selected entries for key radionuclides (Th, Np, Am, Se and Cs) in the JAEA-SDB was carried out following the approach/guideline defined in our previous report, focusing on granitic rocks, which are related to the reference systems in the H12 PA and to possible applications in the context of URL activities, and on Fe-oxides/hydroxides and Al-oxides/hydroxides, which occur widely in geological environments. As a result, the reliability of 1,373 Kd values was evaluated and classified. This classification scheme is expected to make it possible to obtain a quick overview of the available data from the SDB, and to provide suitable access to the respective data for Kd-setting in PA. (author)

  2. Synthetic Multiple-Imputation Procedure for Multistage Complex Samples

    Directory of Open Access Journals (Sweden)

    Zhou Hanzhi

    2016-03-01

    Full Text Available Multiple imputation (MI) is commonly used when item-level missing data are present. However, MI requires that survey design information be built into the imputation models. For multistage stratified clustered designs, this requires dummy variables to represent strata as well as primary sampling units (PSUs) nested within each stratum in the imputation model. Such a modeling strategy is not only operationally burdensome but also inferentially inefficient when there are many strata in the sample design. Complexity only increases when sampling weights need to be modeled. This article develops a general-purpose analytic strategy for population inference from complex sample designs with item-level missingness. In a simulation study, the proposed procedures demonstrate efficient estimation and good coverage properties. We also consider an application to accommodate missing body mass index (BMI) data in the analysis of BMI percentiles using National Health and Nutrition Examination Survey (NHANES III) data. We argue that the proposed methods offer an easy-to-implement solution to problems that are not well handled by current MI techniques. Note that, while the proposed method borrows from the MI framework to develop its inferential methods, it is not designed as an alternative strategy to release multiply imputed datasets for complex sample design data, but rather as an analytic strategy in and of itself.
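
    For readers unfamiliar with the mechanics the article builds on, the sketch below shows plain repeated imputation with pooling of point estimates (Rubin's variance rules omitted); it deliberately ignores strata, PSUs, and weights, which is precisely the gap the article addresses. Data and settings are invented:

```python
import numpy as np
from sklearn.experimental import enable_iterative_imputer  # noqa: F401
from sklearn.impute import IterativeImputer

rng = np.random.default_rng(0)

# Toy stand-in for BMI-style data: three variables with item-level
# missingness (np.nan); roughly 15% missing completely at random.
X = rng.normal(size=(200, 3))
X[rng.random(X.shape) < 0.15] = np.nan

estimates = []
for m in range(5):  # m completed datasets
    imputer = IterativeImputer(sample_posterior=True, random_state=m)
    completed = imputer.fit_transform(X)
    estimates.append(completed[:, 0].mean())

# Pooled point estimate: average the per-imputation estimates.
print(np.mean(estimates))
```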

  3. Reliability evaluation of emergency AC power systems based on operating experience at U.S. nuclear power plants

    Energy Technology Data Exchange (ETDEWEB)

    Baranowsky, P. W. [U.S. Nuclear Regulatory Commission, Washington, DC (United States)

    1986-02-15

    The reliability of emergency AC power systems has been under study at the U.S. Nuclear Regulatory Commission and by its contractors for several years. This paper provides the results of work recently performed to evaluate past U.S. nuclear power plant emergency AC power system reliability performance using system-level data. Operating experience involving multiple diesel generator failures, unavailabilities, and simultaneous occurrences of failures and out-of-service diesel generators was used to evaluate reliability performance at individual nuclear power plants covering a 9-year period from 1976 through 1984. The number and nature of failures and the distributions of reliability evaluation results are provided. The results show that plant-specific performance varied considerably during the period, with a large number of plants achieving high reliability performance and a smaller number accounting for the lower levels of reliability performance. (author)

  4. Sensitivity analysis in multiple imputation in effectiveness studies of psychotherapy.

    Science.gov (United States)

    Crameri, Aureliano; von Wyl, Agnes; Koemeda, Margit; Schulthess, Peter; Tschuschke, Volker

    2015-01-01

    The importance of preventing and treating incomplete data in effectiveness studies is nowadays emphasized. However, most of the publications focus on randomized clinical trials (RCTs). One flexible technique for statistical inference with missing data is multiple imputation (MI). Since methods such as MI rely on the assumption that data are missing at random (MAR), a sensitivity analysis for testing robustness against departures from this assumption is required. In this paper we present a sensitivity analysis technique based on posterior predictive checking, which takes into consideration the concept of clinical significance used in the evaluation of intra-individual changes. We demonstrate the possibilities this technique can offer with the example of irregular longitudinal data collected with the Outcome Questionnaire-45 (OQ-45) and the Helping Alliance Questionnaire (HAQ) in a sample of 260 outpatients. The sensitivity analysis can be used to (1) quantify the degree of bias introduced by data missing not at random (MNAR) in a worst reasonable case scenario, (2) compare the performance of different analysis methods for dealing with missing data, or (3) detect the influence of possible violations of the model assumptions (e.g., lack of normality). Moreover, our analysis showed that ratings from the patient's and therapist's versions of the HAQ could significantly improve the predictive value of routine outcome monitoring based on the OQ-45. Since analysis dropouts always occur, repeated measurements with the OQ-45 and the HAQ analyzed with MI are useful to improve the accuracy of outcome estimates in quality assurance assessments and non-randomized effectiveness studies in the field of outpatient psychotherapy.

  5. Decision Diagram Based Symbolic Algorithm for Evaluating the Reliability of a Multistate Flow Network

    Directory of Open Access Journals (Sweden)

    Rongsheng Dong

    2016-01-01

    Full Text Available Evaluating the reliability of a Multistate Flow Network (MFN) is an NP-hard problem. Ordered binary decision diagrams (OBDDs) or variants thereof, such as multivalued decision diagrams (MDDs), are compact and efficient data structures suitable for dealing with large-scale problems. Two symbolic algorithms for evaluating the reliability of an MFN, MFN_OBDD and MFN_MDD, are proposed in this paper. In the algorithms, several operating functions are defined to prune the generated decision diagrams. Thereby the state space of capacity combinations is further compressed and the operational complexity of the decision diagrams is further reduced. Meanwhile, the related theoretical proofs and complexity analysis are carried out. Experimental results show the following: (1) compared to the existing decomposition algorithm, the proposed algorithms require less memory space and fewer loops. (2) The number of nodes and the number of variables of the MDD generated by the MFN_MDD algorithm are much smaller than those of the OBDD built by the MFN_OBDD algorithm. (3) In two cases with the same number of arcs, the proposed algorithms are more suitable for calculating the reliability of sparse networks.
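
    Decision-diagram methods rest on the Shannon (pivotal) decomposition R = p_e * R(e up) + (1 - p_e) * R(e down), with the diagram sharing repeated subproblems. As a binary-state miniature of that expansion (the paper's multistate capacity handling is not reproduced), the sketch below evaluates a made-up five-edge bridge network from its minimal paths:

```python
# Invented binary-state bridge network: edge -> reliability.
EDGES = {"e1": 0.90, "e2": 0.90, "e3": 0.95, "e4": 0.90, "e5": 0.90}
# Minimal s-t paths of the bridge (e3 is the bridging edge).
PATHS = [{"e1", "e4"}, {"e2", "e5"}, {"e1", "e3", "e5"}, {"e2", "e3", "e4"}]

def reliability(paths, probs):
    """Shannon (pivotal) decomposition, one edge at a time; an ordered
    BDD encodes exactly this expansion while sharing subproblems."""
    if any(not path for path in paths):   # some path is fully up -> success
        return 1.0
    if not paths:                         # no path can still succeed
        return 0.0
    edge = next(iter(probs))              # fixed variable ordering
    p = probs[edge]
    rest = {e: q for e, q in probs.items() if e != edge}
    up = [path - {edge} for path in paths]               # edge works
    down = [path for path in paths if edge not in path]  # edge failed
    return p * reliability(up, rest) + (1 - p) * reliability(down, rest)

print(reliability(PATHS, EDGES))   # exact two-terminal reliability
```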

  6. Validity and reliability of the Mastication Observation and Evaluation (MOE) instrument.

    Science.gov (United States)

    Remijn, Lianne; Speyer, Renée; Groen, Brenda E; van Limbeek, Jacques; Nijhuis-van der Sanden, Maria W G

    2014-07-01

    The Mastication Observation and Evaluation (MOE) instrument was developed to allow objective assessment of a child's mastication process. It contains 14 items and was developed over three Delphi rounds. The present study concerns the further development of the MOE using COSMIN (Consensus-based Standards for the Selection of Measurement Instruments) and investigated the instrument's internal consistency, inter-observer reliability, construct validity, and floor and ceiling effects. Consumption of three bites of bread and biscuit was evaluated using the MOE. Data from 59 healthy children (6-48 months) and from children with cerebral palsy (24-72 months; 38 for bread and 37 for biscuit) were used. Four items were excluded before analysis due to zero variance. Principal components analysis showed one factor with 8 items. Internal consistency was >0.70 (Cronbach's alpha) for both food consistencies and for both groups of children. Inter-observer reliability varied from 0.51 to 0.98 (weighted Gwet's agreement coefficient). The total MOE scores for both groups showed a normal distribution. There were no floor or ceiling effects. The revised MOE now contains 8 items that (a) have a consistent concept for mastication and can be scored on a 4-point scale with sufficient reliability and (b) are sensitive to stages of chewing development in young children. The removed items are retained as part of a criterion-referenced list within the MOE. Copyright © 2014 Elsevier Ltd. All rights reserved.

  7. Evaluating system reliability and targeted hardening strategies of power distribution systems subjected to hurricanes

    International Nuclear Information System (INIS)

    Salman, Abdullahi M.; Li, Yue; Stewart, Mark G.

    2015-01-01

    Over the years, power distribution systems have been vulnerable to extensive damage from hurricanes, which can cause power outages resulting in millions of dollars of economic losses and restoration costs. Most outages result from failure of distribution support structures. Various methods of strengthening distribution systems have been proposed and studied. Some of these methods, such as undergrounding of the system, have been shown to be unjustified from an economic point of view. A potentially cost-effective strategy is targeted hardening of the system. This, however, requires a method of determining the critical parts of a system that, when strengthened, will have the greatest impact on reliability. This paper presents a framework for studying the effectiveness of targeted hardening strategies on power distribution systems subjected to hurricanes. The framework includes a methodology for evaluating system reliability that relates failure of poles to power delivery, determination of critical parts of a system, hurricane hazard analysis, and consideration of decay of distribution poles. The framework also incorporates a cost analysis that considers economic losses due to power outage. A notional power distribution system is used to demonstrate the framework by evaluating and comparing the effectiveness of three hardening measures. - Highlights: • Risk assessment of power distribution systems subjected to hurricanes is carried out. • A framework for studying the effectiveness of targeted hardening strategies is presented. • A system reliability method is proposed. • Targeted hardening is cost-effective for existing systems. • Economic losses due to power outage should be considered for cost analysis.

  8. Reliability and Availability Evaluation of Wireless Sensor Networks for Industrial Applications

    Science.gov (United States)

    Silva, Ivanovitch; Guedes, Luiz Affonso; Portugal, Paulo; Vasques, Francisco

    2012-01-01

    Wireless Sensor Networks (WSNs) currently represent the best candidate to be adopted as the communication solution for the last-mile connection in process control and monitoring applications in industrial environments. Most of these applications have stringent dependability (reliability and availability) requirements, as a system failure may result in economic losses, put people in danger or lead to environmental damage. Among the different types of faults that can lead to a system failure, permanent faults on network devices have a major impact. They can hamper communications over long periods of time and consequently disturb, or even disable, control algorithms. The lack of a structured approach enabling the evaluation of permanent faults prevents system designers from optimizing decisions that would minimize these occurrences. In this work we propose a methodology based on the automatic generation of a fault tree to evaluate the reliability and availability of Wireless Sensor Networks when permanent faults occur on network devices. The proposal supports any topology, different levels of redundancy, network reconfigurations, criticality of devices and arbitrary failure conditions. The proposed methodology is particularly suitable for the design and validation of Wireless Sensor Networks when trying to optimize their reliability and availability requirements. PMID:22368497
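
    For independent permanent faults, the quantitative step behind such a fault tree reduces to simple gate arithmetic. A minimal sketch on an invented topology (the paper's automatic tree generation and reconfiguration handling are not reproduced):

```python
# Invented miniature: the WSN fails if the sink node fails OR both
# redundant routers fail; faults are permanent and independent.

def p_or(*ps):
    """OR gate: the output fails if any input fails."""
    survive = 1.0
    for p in ps:
        survive *= (1.0 - p)
    return 1.0 - survive

def p_and(*ps):
    """AND gate: the output fails only if all inputs fail."""
    fail = 1.0
    for p in ps:
        fail *= p
    return fail

P_SINK = 0.001    # sink-node permanent-fault probability (invented)
P_ROUTER = 0.01   # each redundant router (invented)

p_top = p_or(P_SINK, p_and(P_ROUTER, P_ROUTER))
print(f"P(top event) = {p_top:.6f}")   # ~0.0011; availability = 1 - p_top
```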

  9. The evaluation of equipment and Instrumentation Reliability Factors on Power Reactor

    International Nuclear Information System (INIS)

    Supriatna, Piping; Karlina, Itjeu; Widagdo, Suharyo; Santosa, Kussigit; Darlis; Sudiyono, Bambang; Yuniyanta, Sasongko; Sudarmin

    1999-01-01

    Equipment and instrumentation reliability in the ABWR-type power reactor control room is determined by its layout and design. The ergonomic principles applied to the equipment and instrumentation layout in this ABWR-type reactor are: a geometric pattern consistent with economical body motion; average anthropometric data of operators, especially operator hand reach; range of vision; angle of vision; lighting; color arrangement and harmony; as well as the operators' ease in operating the equipment systems. The limiting criteria for the parameters mentioned above are based on the EPRI NP-3659, NUREG-0700, and NUREG/CR-3331 documents. In addition, the physical working environment of the control room must be designed to fulfil the standard criteria for ergonomic conditions based on NUREG-0800. The reliability evaluation of the equipment and instrumentation system also considers the man-machine interaction (MMI) between the operator and the equipment and instrumentation in the ABWR-type power reactor control room. From the MMI analysis, possible operating failures caused by the operator can be identified. The results of the evaluation of equipment and instrumentation reliability in the ABWR-type power reactor control room show that the design of this control room is good and fulfils the established ergonomic standard criteria

  10. Ceramics Analysis and Reliability Evaluation of Structures (CARES). Users and programmers manual

    Science.gov (United States)

    Nemeth, Noel N.; Manderscheid, Jane M.; Gyekenyesi, John P.

    1990-01-01

    This manual describes how to use the Ceramics Analysis and Reliability Evaluation of Structures (CARES) computer program. The primary function of the code is to calculate the fast-fracture reliability or failure probability of macroscopically isotropic ceramic components. These components may be subjected to complex thermomechanical loadings, such as those found in heat engine applications. The program uses results from the MSC/NASTRAN or ANSYS finite element analysis programs to evaluate component reliability due to inherent surface and/or volume type flaws. CARES utilizes the Batdorf model and the two-parameter Weibull cumulative distribution function to describe the effect of multiaxial stress states on material strength. The principle of independent action (PIA) and the Weibull normal stress averaging models are also included. Weibull material strength parameters, the Batdorf crack density coefficient, and other related statistical quantities are estimated from four-point bend bar or uniform uniaxial tensile specimen fracture strength data. Parameter estimation can be performed for single or multiple failure modes by using least-squares analysis or the maximum likelihood method. Kolmogorov-Smirnov and Anderson-Darling goodness-of-fit tests, ninety percent confidence intervals on the Weibull parameters, and Kanofsky-Srinivasan ninety percent confidence band values are also provided. The probabilistic fast-fracture theories used in CARES, along with the input and output for CARES, are described. Example problems to demonstrate various features of the program are also included. This manual describes the MSC/NASTRAN version of the CARES program.
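
    As a toy version of the estimation step (not CARES itself), the sketch below fits the two-parameter Weibull by maximum likelihood from invented bend-bar strengths and evaluates the resulting fast-fracture failure probability:

```python
import numpy as np
from scipy import stats

# Invented four-point-bend fracture strengths (MPa); illustration only.
strengths = np.array([312, 338, 355, 367, 380, 391, 402, 415, 428, 450])

# Maximum-likelihood fit of the two-parameter Weibull (location fixed at 0):
m, _, sigma0 = stats.weibull_min.fit(strengths, floc=0)
print(f"Weibull modulus m ~ {m:.1f}, characteristic strength ~ {sigma0:.0f} MPa")

# Fast-fracture failure probability at a given applied stress:
sigma = 350.0
p_fail = 1.0 - np.exp(-(sigma / sigma0) ** m)
print(f"P_f({sigma:.0f} MPa) = {p_fail:.2f}")
```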

  11. CONSIDERING TRAVEL TIME RELIABILITY AND SAFETY FOR EVALUATION OF CONGESTION RELIEF SCHEMES ON EXPRESSWAY SEGMENTS

    Directory of Open Access Journals (Sweden)

    Babak MEHRAN

    2009-01-01

    Full Text Available Evaluation of the efficiency of congestion relief schemes on expressways has generally been based on average travel time analysis. However, road authorities are much more interested in knowing the possible impacts of improvement schemes on safety and travel time reliability prior to implementing them in real conditions. A methodology is presented to estimate travel time reliability based on modeling travel time variations as a function of demand, capacity and weather conditions. For a subject expressway segment, patterns of demand and capacity were generated for each 5-minute interval over a year by using the Monte Carlo simulation technique, and accidents were generated randomly according to traffic conditions. A whole-year analysis was performed by comparing demand and available capacity for each scenario, and shockwave analysis was used to estimate the queue length at each time interval. Travel times were estimated from refined speed-flow relationships, and the buffer time index was estimated as a measure of travel time reliability. It was shown that the estimated reliability measures and the predicted number of accidents are very close to the values observed in empirical data. After validation, the methodology was applied to assess the impact of two alternative congestion relief schemes on a subject expressway segment. One alternative was to open the hard shoulder to traffic during the peak period, while the other was to reduce the peak period demand by 15%. The extent of the improvements in travel conditions and safety, as well as the reduction in road users' costs after implementing each improvement scheme, was estimated. It was shown that both strategies can reduce the number of accidents by up to 23% and significantly improve travel time reliability. Finally, the advantages and challenging issues of selecting each improvement scheme were discussed.
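
    The buffer time index used above is the extra margin a traveller must budget beyond the mean travel time, (95th percentile - mean) / mean. A minimal sketch on invented simulated travel times:

```python
import numpy as np

# Invented simulated travel times (minutes) for one departure period,
# standing in for the Monte Carlo demand/capacity/weather scenarios.
travel_times = np.array([22, 23, 22, 25, 24, 31, 23, 22, 41, 24,
                         23, 26, 22, 35, 23, 24, 22, 28, 23, 25])

mean_tt = travel_times.mean()
tt_95 = np.percentile(travel_times, 95)

bti = (tt_95 - mean_tt) / mean_tt   # buffer time index
print(f"mean = {mean_tt:.1f} min, 95th pct = {tt_95:.1f} min, BTI = {bti:.2f}")
```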

  12. A prospective study assessing agreement and reliability of a geriatric evaluation.

    Science.gov (United States)

    Locatelli, Isabella; Monod, Stéfanie; Cornuz, Jacques; Büla, Christophe J; Senn, Nicolas

    2017-07-19

    The present study takes place within a geriatric program aiming at improving the diagnosis and management of geriatric syndromes in primary care. Within this program it was of prime importance to be able to rely on a robust and reproducible geriatric consultation to use as a gold standard for evaluating a primary care brief assessment tool. The specific objective of the present study was thus to assess the agreement and reliability of a comprehensive geriatric consultation. The study was conducted at the outpatient clinic of the Service of Geriatric Medicine, University of Lausanne, Switzerland. All community-dwelling older persons aged 70 years and above were eligible. Patients were excluded if they had no primary care physician, were unable to speak French, or had already been assessed by a geriatrician within the last 12 months. A set of 9 geriatricians evaluated 20 patients. Each patient was assessed twice within a 2-month period. Geriatric consultations were based on a structured evaluation process, leading to ratings of the following geriatric conditions: functional, cognitive, visual, and hearing impairment, mood disorders, risk of fall, osteoporosis, malnutrition, and urinary incontinence. Reliability and agreement estimates for each of these items were obtained using a three-way intraclass correlation and a three-way observed disagreement index. The latter allowed a decomposition of overall disagreement into disagreements due to each source of error variability (visit, rater and random). Agreement ranged between 0.62 and 0.85. For most domains, geriatrician-related error variability explained an important proportion of disagreement. Reliability ranged between 0 and 0.8. It was poor/moderate for visual impairment, malnutrition and risk of fall, and good/excellent for functional/cognitive/hearing impairment, osteoporosis, incontinence and mood disorders. Six out of nine items of the geriatric consultation described in this study (functional

  13. Sequence imputation of HPV16 genomes for genetic association studies.

    Directory of Open Access Journals (Sweden)

    Benjamin Smith

    Full Text Available Human Papillomavirus type 16 (HPV16) causes over half of all cervical cancer, and some HPV16 variants are more oncogenic than others. The genetic basis for the extraordinary oncogenic properties of HPV16 compared to other HPVs is unknown. In addition, we know neither which nucleotides vary across and within HPV types and lineages, nor which of the single nucleotide polymorphisms (SNPs) determine oncogenicity. A reference set of 62 HPV16 complete genome sequences was established and used to examine patterns of evolutionary relatedness amongst variants using a pairwise identity heatmap and an HPV16 phylogeny. A BLAST-based algorithm was developed to impute complete genome data from partial sequence information using the reference database. To interrogate the oncogenic risk of determined and imputed HPV16 SNPs, odds ratios for each SNP were calculated in a case-control viral genome-wide association study (VWAS) using biopsy-confirmed high-grade cervix neoplasia and self-limited HPV16 infections from Guanacaste, Costa Rica. HPV16 variants display evolutionarily stable lineages that contain conserved diagnostic SNPs. The imputation algorithm indicated that an average of 97.5±1.03% of SNPs could be accurately imputed. The VWAS revealed specific HPV16 viral SNPs associated with variant lineages and elevated odds ratios; however, individual causal SNPs could not be distinguished with certainty due to the nature of HPV evolution. Conserved and lineage-specific SNPs can be imputed with a high degree of accuracy from limited viral polymorphic data due to the lack of recombination and the stochastic mechanism of variation accumulation in the HPV genome. However, determining the role of novel variants or non-lineage-specific SNPs by VWAS will require direct sequence analysis. The investigation of patterns of genetic variation and the identification of diagnostic SNPs for lineages of HPV16 variants provides a valuable resource for future studies of HPV16

  14. Imputing amino acid polymorphisms in human leukocyte antigens.

    Directory of Open Access Journals (Sweden)

    Xiaoming Jia

    Full Text Available DNA sequence variation within human leukocyte antigen (HLA) genes mediates susceptibility to a wide range of human diseases. The complex genetic structure of the major histocompatibility complex (MHC) makes it difficult, however, to collect genotyping data in large cohorts. Long-range linkage disequilibrium between HLA loci and SNP markers across the MHC region offers an alternative approach, through imputation, to interrogate HLA variation in existing GWAS data sets. Here we describe a computational strategy, SNP2HLA, to impute classical alleles and amino acid polymorphisms at class I (HLA-A, -B, -C) and class II (HLA-DPA1, -DPB1, -DQA1, -DQB1, and -DRB1) loci. To characterize the performance of SNP2HLA, we constructed two European-ancestry reference panels, one based on data collected in HapMap-CEPH pedigrees (90 individuals) and another based on data collected by the Type 1 Diabetes Genetics Consortium (T1DGC; 5,225 individuals). We imputed HLA alleles in an independent data set from the British 1958 Birth Cohort (N = 918) with gold-standard four-digit HLA types and SNPs genotyped using the Affymetrix GeneChip 500K and Illumina Immunochip microarrays. We demonstrate that the sample size of the reference panel, rather than the SNP density of the genotyping platform, is critical to achieving high imputation accuracy. Using the larger T1DGC reference panel, the average accuracy at four-digit resolution is 94.7% using the low-density Affymetrix GeneChip 500K, and 96.7% using the high-density Illumina Immunochip. For amino acid polymorphisms within HLA genes, we achieve 98.6% and 99.3% accuracy using the Affymetrix GeneChip 500K and Illumina Immunochip, respectively. Finally, we demonstrate how imputation and association testing at amino acid resolution can facilitate fine-mapping of primary MHC association signals, giving a specific example from type 1 diabetes.

  15. Evaluating seismic reliability of Reinforced Concrete Bridge in view of their rehabilitation

    Directory of Open Access Journals (Sweden)

    Boubel Hasnae

    2018-01-01

    Full Text Available In this work, a simplified methodology is proposed to evaluate the seismic vulnerability of reinforced concrete bridges in view of their rehabilitation. Reliability is assessed for a stress limit state, with resistance and applied loading treated as random variables. It is assumed that only their means and standard deviations are known, while no information is available about their probability densities. The First-Order Reliability Method (FORM) is applied to a response-surface representation of the stress limit state, obtained through quadratic polynomial regression of finite element results. A parametric study is then performed regarding the influence of the probability distributions chosen to model the problem uncertainties for the reinforced concrete bridge. It is shown that the probability of failure depends largely on the chosen probability densities, mainly in the useful domain of small failure probabilities.
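
    In the simplest special case FORM admits a closed form: for a linear limit state g = R - S with independent normal capacity R and load effect S, the reliability index is beta = (mu_R - mu_S) / sqrt(sd_R^2 + sd_S^2) and P_f = Phi(-beta). A worked example with invented moments (the paper's quadratic response surface from finite element results is not reproduced):

```python
from scipy import stats

mu_R, sd_R = 420.0, 35.0   # stress capacity (e.g., MPa), invented
mu_S, sd_S = 300.0, 45.0   # seismic load effect, invented

beta = (mu_R - mu_S) / (sd_R**2 + sd_S**2) ** 0.5   # reliability index
p_f = stats.norm.cdf(-beta)                          # failure probability
print(f"beta = {beta:.2f}, P_f = {p_f:.2e}")
# The paper's point: with inputs specified only by mean and standard
# deviation, the assumed distribution shapes can change P_f substantially.
```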

  16. Evaluation of reliability of on-site A.C. power systems based on maintenance records

    International Nuclear Information System (INIS)

    Basso, G.; Pia, S.; Fusari, W.; Soressi, G.; Vaccari, G.

    1986-01-01

    To ascertain to what extent the evaluation of the reliability of emergency diesel generators (D.G.) can be improved by means of a deeper knowledge of their operating history, a study has been carried out on 21 D.G. sets: 4 D.G. of the Caorso nuclear plant (BWR, 870 MWe) and 17 D.G. in service at 6 steam-electric fossil-fuelled plants. The major points of interest resulting from this study are: 1) reliability assessments of on-site A.C. power systems made on the basis of the outcomes of surveillance tests may lead to results which overestimate the real performance; 2) the unreliability of a redundant system of stand-by components is determined to a large extent by unavailability due to scheduled and unscheduled maintenance, latent failures, and tests. (authors)

  17. Evaluation of reliability of on-site A.C. power systems based on maintenance records

    Energy Technology Data Exchange (ETDEWEB)

    Basso, G.; Pia, S. [ENEA/TERM/VAOEC, C.R.E. Casaccia, via Anguillarese, 00100 Roma/Rome (Italy); Fusari, W. [ENEL, Rome (Italy)]; Soressi, G.; Vaccari, G. [ENEL, Centro di Ricerca Termica e Nucl., Via Rubattino, 54, I-20134 Milano/Milan (Italy)]

    1986-02-15

    To ascertain to what extent the evaluation of the reliability of emergency diesel generators (D.G.) can be improved by means of a deeper knowledge of their operating history, a study has been carried out on 21 D.G. sets: 4 D.G. of the Caorso nuclear plant (BWR, 870 MWe) and 17 D.G. in service at 6 steam-electric fossil-fuelled plants. The major points of interest resulting from this study are: 1) reliability assessments of on-site A.C. power systems made on the basis of the outcomes of surveillance tests may lead to results which overestimate the real performance; 2) the unreliability of a redundant system of stand-by components is determined to a large extent by unavailability due to scheduled and unscheduled maintenance, latent failures, and tests. (authors)

  18. Study of evaluation techniques of software safety and reliability in nuclear power plants

    Energy Technology Data Exchange (ETDEWEB)

    Youn, Cheong; Baek, Y. W.; Kim, H. C.; Park, N. J.; Shin, C. Y. [Chungnam National Univ., Taejon (Korea, Republic of)

    1999-04-15

    The software system development process and software quality assurance activities are examined in this study. In particular, software safety and reliability requirements in nuclear power plants are investigated. For this purpose, methodologies and tools which can be applied to the software analysis, design, implementation, testing, and maintenance steps are evaluated. Necessary tasks for each step are investigated, and the duty, input, and detailed activity for each task are defined in order to establish a development process for high-quality software systems. This means applying the basic concepts of software engineering and the principles of system development. This study establishes a guideline that can assure software safety and reliability requirements in digitalized nuclear plant systems and that can be used as a guidebook for the software development process in many software development organizations.

  19. Chinese-adapted youth attitude to noise scale: Evaluation of validity and reliability

    Directory of Open Access Journals (Sweden)

    Xiaofang Zhu

    2014-01-01

    Full Text Available Noise exposure is central to hearing impairment, especially for adolescents. Chinese youth frequently and consciously expose themselves to loud noise, often for many hours. Hence, a Chinese-adapted evaluative scale to measure youths' attitude toward noise requires rigorous evaluation of its validity and reliability. After authenticating the youth attitude to noise scale (YANS) originally developed by Olsen and Erlandsson, we purposively sampled and surveyed 642 freshmen at Capital Medical University in Beijing, China. To establish validity, we conducted confirmatory factor analysis according to Olsen's classification. To establish reliability, we calculated Cronbach's alpha coefficient and the split-half coefficient. We used Bland-Altman analysis to calculate the agreement limits between test and retest. Among the 642 students, 550 (85.67%) participated in the statistical analysis (399 females [72.55%] vs. 151 males [27.45%]). Confirmatory factor analysis sorted 19 items into four main subcategories (F1-F4) in terms of factor load, yielding correlation coefficients between factors <0.40. The Cronbach's alpha coefficient (0.70) was within the desirable range, confirming the reliability of the Chinese-adapted YANS. The split-half coefficient was 0.53. Furthermore, the paired t-test reported a mean difference of 0.002 (P = 0.9601). Notably, the mean overall YANS score (3.46) was similar to YANS testing in Belgium (3.10), but higher than in Sweden (2.10) and Brazil (2.80). The Chinese version of the YANS questionnaire is valid, reliable, and adaptable to Chinese adolescents. Analysis of the adapted YANS showed that a significant number of Chinese youth display a poor attitude and behavior toward noise. Therefore, the Chinese YANS can play a pivotal role in programs that focus on increasing youth awareness of noise and hearing health.
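
    The two internal-consistency statistics named above are straightforward to compute. A minimal sketch on invented Likert-scale data (not the YANS responses):

```python
import numpy as np

def cronbach_alpha(items):
    """Cronbach's alpha for an (n_subjects, n_items) score matrix."""
    items = np.asarray(items, float)
    k = items.shape[1]
    item_var = items.var(axis=0, ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_var / total_var)

def split_half(items):
    """Split-half reliability (odd vs. even items) with the
    Spearman-Brown correction."""
    items = np.asarray(items, float)
    a = items[:, 0::2].sum(axis=1)
    b = items[:, 1::2].sum(axis=1)
    r = np.corrcoef(a, b)[0, 1]
    return 2 * r / (1 + r)

# Hypothetical 5-point Likert responses: 6 subjects x 4 items.
scores = [[3, 4, 3, 4], [2, 2, 3, 2], [5, 4, 4, 5],
          [1, 2, 1, 2], [4, 4, 5, 4], [3, 3, 2, 3]]
print(cronbach_alpha(scores), split_half(scores))
```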

  20. Quantitative dynamic reliability evaluation of AP1000 passive safety systems by using FMEA and GO-FLOW methodology

    International Nuclear Information System (INIS)

    Hashim Muhammad; Yoshikawa, Hidekazu; Matsuoka, Takeshi; Yang Ming

    2014-01-01

    The passive safety systems utilized in an advanced pressurized water reactor (PWR) design such as the AP1000 should be more reliable than the active safety systems of a conventional PWR, owing to fewer opportunities for hardware failures and human errors (less human intervention). The objectives of the present study are to evaluate the dynamic reliability of the AP1000 plant in order to check the effectiveness of the passive safety systems, by comparing the reliability-related issues with those of the active safety systems in the event of large accidents. How should the dynamic reliability of passive safety systems be properly evaluated, and how do the reliability results of the AP1000 passive safety systems compare with the active safety systems of a conventional PWR? For this purpose, a single-loop model of the AP1000 passive core cooling system (PXS) and of the passive containment cooling system (PCCS) is assumed separately for quantitative reliability evaluation. The transient behaviors of these passive safety systems are examined under a large-break loss-of-coolant accident in the cold leg. The analysis is made by utilizing the qualitative method of failure mode and effect analysis, in order to identify the potential failure modes, and the success-oriented reliability analysis tool called GO-FLOW for quantitative reliability evaluation. The GO-FLOW analysis has been conducted separately for the PXS and PCCS systems under the same accident. The analysis results show that the reliability of the AP1000 passive safety systems (PXS and PCCS) is increased due to the redundancies and diversity of the passive safety subsystems and components, and that the four-stage automatic depressurization system is the key subsystem for successful actuation of the PXS and PCCS systems. The PCCS of the AP1000 is more reliable than the containment spray system of a conventional PWR. The GO-FLOW method can thus be utilized for reliability evaluation of passive safety systems. (author)

  1. Evaluation and improvement in nondestructive examination (NDE) reliability for in-service inspection of light water reactors

    International Nuclear Information System (INIS)

    Deffenbaugh, J.D.; Good, M.S.; Green, E.R.; Heasler, P.G.; Simonen, F.A.; Spanner, J.C.; Taylor, T.T.

    1988-01-01

    The Evaluation and Improvement of NDE Reliability for In-service Inspection (ISI) of Light Water Reactors (NDE Reliability) Program at Pacific Northwest Laboratory (PNL) was established to determine the reliability of current ISI techniques and to develop recommendations that will ensure a suitably high inspection reliability. The objectives of this NRC program are to: determine the reliability of ultrasonic ISI performed on commercial light-water reactor (LWR) primary systems; determine the impact of NDE unreliability on system safety and the level of inspection reliability required to ensure a suitably low failure probability, using probabilistic fracture mechanics analysis; evaluate the degree of reliability improvement that could be achieved using improved and advanced NDE techniques; and recommend revisions to ASME Code, Section XI, and regulatory requirements, based on material properties, service conditions, and NDE uncertainties, that will ensure suitably low failure probabilities. The program consists of three basic tasks: a Piping task, a Pressure Vessel task, and an Evaluation and Improvement in NDE Reliability task. The major efforts were concentrated in the Piping task and the Evaluation and Improvement in NDE Reliability task.

  2. Power distribution system reliability evaluation using dagger-sampling Monte Carlo simulation

    Energy Technology Data Exchange (ETDEWEB)

    Hu, Y.; Zhao, S.; Ma, Y. [North China Electric Power Univ., Hebei (China). Dept. of Electrical Engineering

    2009-03-11

    A dagger-sampling Monte Carlo simulation method was used to evaluate power distribution system reliability. The dagger-sampling technique records the failure of a component as an incident and determines its occurrence probability by generating incident samples from random numbers. The dagger-sampling technique was combined with the direct sequential Monte Carlo method to calculate average values of load-point indices and system indices, and the results of the 2 methods with simulation times of up to 100,000 years were then compared. The comparative evaluation showed that less computing time was required using the dagger-sampling technique due to its higher convergence speed: when simulation times were 1000 years, the dagger-sampling method required 0.05 seconds to accomplish an evaluation, while the direct method required 0.27 seconds. 12 refs., 3 tabs., 4 figs.
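
    The variance-reduction idea behind dagger sampling is that one uniform random number drives a whole batch of trials of a low-probability event: the unit interval is partitioned into floor(1/p) sub-intervals of width p, and a single draw marks a failure in at most one trial of the batch. A minimal single-component sketch (parameters invented):

```python
import random

def dagger_estimate(p, n_draws, seed=42):
    """Estimate a small failure probability p with dagger sampling:
    each uniform draw drives a batch of floor(1/p) trials, and at most
    one trial per batch fails, which induces negative correlation
    between trials and cuts estimator variance versus direct sampling."""
    rng = random.Random(seed)
    s = int(1 / p)                   # trials generated per random number
    fails = 0
    for _ in range(n_draws):
        k = int(rng.random() / p)    # sub-interval hit by this draw
        if k < s:                    # k >= s means no failure in the batch
            fails += 1
    return fails / (n_draws * s)

# Component with failure probability 0.03, recovered from sampling.
print(dagger_estimate(0.03, n_draws=10_000))   # close to 0.03
```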

  3. Human Reliability Assessment and Human Performance Evaluation: Research and Analysis Activities at the U.S. NRC

    International Nuclear Information System (INIS)

    Ramey-Smith, A.M.

    1998-01-01

    The author outlines the themes of the six programs identified by the US NRC mission on human performance and human reliability activities. They aim at developing the technical basis to support human performance evaluation, developing and updating a model of human performance and human reliability, fostering national and international dialogue and cooperation on human performance evaluation, conducting operating-event analysis and database development, and providing support to human performance and human reliability inspection.

  4. Validity and reliability of a self-administered foot evaluation questionnaire (SAFE-Q).

    Science.gov (United States)

    Niki, Hisateru; Tatsunami, Shinobu; Haraguchi, Naoki; Aoki, Takafumi; Okuda, Ryuzo; Suda, Yasunori; Takao, Masato; Tanaka, Yasuhito

    2013-03-01

    The Japanese Society for Surgery of the Foot (JSSF) is developing a QOL questionnaire instrument for use in pathological conditions related to the foot and ankle. The main body of the outcome instrument (the Self-Administered Foot Evaluation Questionnaire, SAFE-Q, version 2) consists of 34 questionnaire items, which provide five subscale scores (1: Pain and Pain-Related; 2: Physical Functioning and Daily Living; 3: Social Functioning; 4: Shoe-Related; and 5: General Health and Well-Being). In addition, the instrument has nine optional questionnaire items that provide a Sports Activity subscale score. The purpose of this study was to evaluate the test-retest reliability of the SAFE-Q. Version 2 of the SAFE-Q was administered to 876 patients and 491 non-patients, and the test-retest reliability was evaluated for 131 patients. In addition, the SF-36 questionnaire and the JSSF Scale scoring form were administered to all of the participants. Subscale scores were scaled such that the final sum of scores ranged between zero (least healthy) and 100 (healthiest). The intraclass correlation coefficients were larger than 0.7 for all of the scores. The means of the five subscale scores were between 60 and 75. The five subscales easily separated patients from non-patients. The coefficients for the correlations of the subscale scores with the scores on the JSSF Scale and the SF-36 subscales were all statistically significantly greater than zero, indicating that the SAFE-Q is valid and reliable. In the future, it will be beneficial to test the responsiveness of the SAFE-Q.

  5. Evaluate the system reliability for a manufacturing network with reworking actions

    International Nuclear Information System (INIS)

    Lin, Yi-Kuei; Chang, Ping-Chen

    2012-01-01

    Measuring the system reliability of a manufacturing system with reworking actions is a crucial issue in industry, since system reliability can serve as an essential performance indicator of whether the manufacturing system is capable or not. In a manufacturing system, the input flow (raw materials/WIP) processed by each machine might be defective, and thus the output flow (WIP/products) would be less than the input amount. Moreover, defective WIP/products usually provide an incentive for reworking, to reduce waste and increase output. Therefore, reworking actions need to be considered in the manufacturing system. Based on the path concept, we model such a manufacturing system as a stochastic-flow network in which the capacity of each machine is stochastic (i.e., multistate) due to failure, partial failure, and maintenance. We decompose the network into one general processing path and several reworking paths. Subsequently, three algorithms for different network models are proposed to generate the lower boundary vectors which afford the production of enough products to satisfy the demand d. In terms of such vectors, the system reliability can be derived afterwards.
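
    A toy version of the final step under strong simplifying assumptions (a single series line, invented capacities and yields, no reworking paths): enumerate the multistate capacity combinations and sum the probability that throughput meets the demand d.

```python
from itertools import product

# Each machine: list of (capacity, probability) states -- invented values.
machines = [
    [(0, 0.05), (2, 0.15), (4, 0.80)],
    [(0, 0.02), (3, 0.18), (5, 0.80)],
]
yields = [0.95, 0.90]   # fraction of non-defective output per machine

def line_reliability(machines, yields, d):
    """P{throughput >= d} for machines in series: flow is limited by the
    smallest capacity, and each stage keeps only its non-defective share."""
    r = 0.0
    for states in product(*machines):
        prob = 1.0
        flow = float("inf")
        for (cap, p), y in zip(states, yields):
            prob *= p
            flow = min(flow, cap) * y   # capacity bottleneck, then yield loss
        if flow >= d:
            r += prob
    return r

print(line_reliability(machines, yields, d=3.0))   # 0.64 for these numbers
```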

  6. Reliability and validity of the photogrammetry for scoliosis evaluation: a cross-sectional prospective study.

    Science.gov (United States)

    Saad, Karen Ruggeri; Colombo, Alexandra S; João, Silvia M Amado

    2009-01-01

    The purpose of this study was to investigate the reliability and validity of photogrammetry in measuring lateral spinal inclination angles. Forty subjects (32 females and 8 males) with a mean age of 23.4 +/- 11.2 years had their scoliosis evaluated by radiographs of the trunk, measured by the Cobb angle method, and by photogrammetry. The statistical methods used included Cronbach's alpha, Pearson/Spearman correlation coefficients, and regression analyses. The Cronbach alpha values showed that the photogrammetric measures had high internal consistency, which indicated that the sample was bias-free. The radiographic method proved to be more precise, with intrarater reliabilities of 0.936, 0.975, and 0.945 for the thoracic, lumbar, and thoracolumbar curves, respectively, and interrater reliabilities of 0.942 and 0.879 for the angular measures of the thoracic and thoracolumbar segments, respectively. The regression analyses revealed a high determination coefficient, although limited to the adjusted linear model between the radiographic and photographic measures. For more severe scoliosis, the lateral curve measures obtained with photogrammetry correlated with the radiographic measures for the thoracic and lumbar regions (R = 0.619 and 0.551, respectively). The photogrammetric measures were found to be reproducible in this study and could be used as supplementary information to decrease the number of radiographs necessary for the monitoring of scoliosis.

  7. Ischiofemoral impingement: evaluation with new MRI parameters and assessment of their reliability

    Energy Technology Data Exchange (ETDEWEB)

    Tosun, Ozgur; Algin, Oktay; Cay, Nurdan; Karaoglanoglu, Mustafa [Ankara Ataturk Education and Research Hospital, Department of Radiology, Ankara (Turkey); Yalcin, Nadir [University of California, Department of Orthopaedic Surgery, San Francisco, CA (United States); Ocakoglu, Gokhan [Uludag University Medical Faculty, Biostatistics Department, Bursa (Turkey)

    2012-05-15

    The aim of this study was to describe the magnetic resonance imaging (MRI) findings in patients with ischiofemoral impingement (IFI) and to evaluate the reliability of these MRI findings. Seventy hips of 50 patients with hip pain and quadratus femoris muscle (QFM) edema and 38 hips of 30 control cases were included in the study. The QFM edema and fatty replacement were assessed visually. Ischiofemoral space (IFS), quadratus femoris space (QFS), inclination angle (IA), hamstring tendon area (HTA), and total quadratus femoris muscle volume (TQFMV) measurements were performed independently by two musculoskeletal radiologists. The intra- and interobserver reliabilities were obtained for quantitative variables. IFS, QFS, and TQFMV values of the patient group were significantly lower than those of controls (P < 0.001). HTA and IA measurements of the patient group were also significantly higher than in controls (P < 0.05). The QFM fatty replacement grades were significantly higher in the patient group than in the control group (P < 0.001). Inter- and intra-observer reliabilities were strong for all continuous variables. Clinicians and radiologists should be aware of IFI in patients with hip or groin pain, and MRI should be obtained for the presence of the QFM edema/fatty replacement, narrowing of the IFS-QFS, and other features that may help in the clinical diagnosis of IFI for the proper diagnosis and treatment of the disease. (orig.)

  8. Ischiofemoral impingement: evaluation with new MRI parameters and assessment of their reliability

    International Nuclear Information System (INIS)

    Tosun, Ozgur; Algin, Oktay; Cay, Nurdan; Karaoglanoglu, Mustafa; Yalcin, Nadir; Ocakoglu, Gokhan

    2012-01-01

    The aim of this study was to describe the magnetic resonance imaging (MRI) findings in patients with ischiofemoral impingement (IFI) and to evaluate the reliability of these MRI findings. Seventy hips of 50 patients with hip pain and quadratus femoris muscle (QFM) edema and 38 hips of 30 control cases were included in the study. The QFM edema and fatty replacement were assessed visually. Ischiofemoral space (IFS), quadratus femoris space (QFS), inclination angle (IA), hamstring tendon area (HTA), and total quadratus femoris muscle volume (TQFMV) measurements were performed independently by two musculoskeletal radiologists. The intra- and interobserver reliabilities were obtained for quantitative variables. IFS, QFS, and TQFMV values of the patient group were significantly lower than those of controls (P < 0.001). HTA and IA measurements of the patient group were also significantly higher than in controls (P < 0.05). The QFM fatty replacement grades were significantly higher in the patient group than in the control group (P < 0.001). Inter- and intra-observer reliabilities were strong for all continuous variables. Clinicians and radiologists should be aware of IFI in patients with hip or groin pain, and MRI should be obtained for the presence of the QFM edema/fatty replacement, narrowing of the IFS-QFS, and other features that may help in the clinical diagnosis of IFI for the proper diagnosis and treatment of the disease. (orig.)

  9. Coupling finite elements and reliability methods - application to safety evaluation of pressurized water reactor vessels

    International Nuclear Information System (INIS)

    Pitner, P.; Venturini, V.

    1995-02-01

    When reliability studies are extended from deterministic calculations in mechanics, it is necessary to take into account input parameter variabilities, which are linked to the different sources of uncertainty. Integrals must then be calculated to evaluate the failure risk. This can be performed either by simulation methods or by approximation ones (FORM/SORM). Models in mechanics often require running calculation codes, which must then be coupled with the reliability calculations. These codes can involve long calculation times when they are invoked numerous times during simulation sequences or in complex iterative procedures. The response surface method gives an approximation of the real response from a reduced number of points at which the finite element code is run. Thus, when it is combined with FORM/SORM methods, a coupling can be carried out which gives results in a reasonable calculation time. An application of the response surface method to mechanics-reliability coupling for a mechanical model which calls a finite element code is presented. It corresponds to a probabilistic fracture mechanics study of a pressurized water reactor vessel. (authors). 5 refs., 3 figs

  10. Evaluation of Factorial Validity and Reliability of a Food Behavior Checklist for Low-Income Filipinos.

    Science.gov (United States)

    Suzuki, Asuka; Choi, So Yung; Lim, Eunjung; Tauyan, Socorro; Banna, Jinan C

    To examine factorial validity, test-retest reliability, and internal consistency of a Tagalog-language food behavior checklist (FBC) for a low-income Filipino population. Participants (n = 160) completed the FBC on 2 occasions 3 weeks apart. Factor structure was examined using principal component analysis. For internal consistency, Cronbach α was calculated. For test-retest reliability, Spearman correlation or intraclass correlation coefficient (ICC) was calculated between scores at the 2 points. All but 1 item loaded on 6 factors: fruit and vegetable quantity, fruit and vegetable variety, fast food, sweetened beverage, healthy fat, and diet quality. Cronbach α was .75 for the total scale (range, .39-.76 for subscales). Spearman correlation was 0.78 (ICC, 0.79) for the total scale (range, 0.66-0.80 [ICC, 0.68-0.80] for subscales). The FBC demonstrated adequate factorial validity, test-retest reliability, and internal consistency. With additional testing, the FBC may be used to evaluate the US Department of Agriculture's nutrition education programs for Tagalog speakers. Copyright © 2017 Society for Nutrition Education and Behavior. Published by Elsevier Inc. All rights reserved.

  11. An integrated methodology for the dynamic performance and reliability evaluation of fault-tolerant systems

    International Nuclear Information System (INIS)

    Dominguez-Garcia, Alejandro D.; Kassakian, John G.; Schindall, Joel E.; Zinchuk, Jeffrey J.

    2008-01-01

    We propose an integrated methodology for the reliability and dynamic performance analysis of fault-tolerant systems. This methodology uses a behavioral model of the system dynamics, similar to the ones used by control engineers to design the control system, but also incorporates artifacts to model the failure behavior of each component. These artifacts include component failure modes (and associated failure rates) and how those failure modes affect the dynamic behavior of the component. The methodology bases the system evaluation on the analysis of the dynamics of the different configurations the system can reach after component failures occur. For each of the possible system configurations, a performance evaluation of its dynamic behavior is carried out to check whether its properties, e.g., accuracy, overshoot, or settling time, which are called performance metrics, meet system requirements. Markov chains are used to model the stochastic process associated with the different configurations that a system can adopt when failures occur. This methodology not only provides an integrated framework for evaluating the dynamic performance and reliability of fault-tolerant systems, but also a method for guiding the system design process and further optimization. To illustrate the methodology, we present a case study of a lateral-directional flight control system for a fighter aircraft.
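
    A hedged sketch of the configuration-probability piece of such a methodology (states and rates invented): a continuous-time Markov chain over system configurations, solved with a matrix exponential; each reachable configuration would then be checked against the dynamic performance metrics.

```python
import numpy as np
from scipy.linalg import expm

# States: 0 = nominal, 1 = degraded (one actuator failed), 2 = failed.
# Transition rates per hour -- invented for illustration.
lam1, lam2 = 1e-3, 5e-3        # nominal->degraded, degraded->failed
Q = np.array([[-lam1,  lam1,   0.0],
              [  0.0, -lam2,  lam2],
              [  0.0,   0.0,   0.0]])   # failed state is absorbing

p0 = np.array([1.0, 0.0, 0.0])   # start in the nominal configuration
t = 1000.0                        # mission time in hours
pt = p0 @ expm(Q * t)             # state probabilities at time t
print(dict(zip(["nominal", "degraded", "failed"], pt.round(4))))
```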

  12. Reliability Evaluation of Base-Metal-Electrode (BME) Multilayer Ceramic Capacitors for Space Applications

    Science.gov (United States)

    Liu, David (Donghang)

    2011-01-01

    This paper reports a reliability evaluation of BME ceramic capacitors for possible high-reliability space-level applications. The study is focused on the construction and microstructure of BME capacitors and their impact on capacitor life reliability. First, examinations of the construction and microstructure of commercial-off-the-shelf (COTS) BME capacitors show great variance in dielectric layer thickness, even among BME capacitors with the same rated voltage. Compared to PME (precious-metal-electrode) capacitors, BME capacitors exhibit a denser and more uniform microstructure, with an average grain size between 0.3 and approximately 0.5 micrometers, which is much less than that of most PME capacitors. The primary reason that a BME capacitor can be fabricated with more internal electrode layers and less dielectric layer thickness is that it has a fine-grained microstructure and does not shrink much during ceramic sintering. This gives BME capacitors a very high volumetric efficiency. The reliability of BME and PME capacitors was investigated using highly accelerated life testing (HALT) and regular life testing as per MIL-PRF-123. Most BME capacitors were found to fail with an early dielectric wearout, followed by a rapid wearout failure mode during the HALT test. When most of the early wearout failures were removed, BME capacitors exhibited a minimum mean time-to-failure of more than 10^5 years. Dielectric thickness was found to be a critical parameter for the reliability of BME capacitors. The number of stacked grains in a dielectric layer appears to play a significant role in determining BME capacitor reliability. Although dielectric layer thickness varies for a given rated voltage in BME capacitors, the number of stacked grains is relatively consistent, typically between 10 and 20. This may suggest that the number of grains per dielectric layer is more critical than the thickness itself for determining the rated voltage and the life
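
    HALT results of this kind are commonly extrapolated to use conditions with the empirical Prokopowicz-Vaskas relation, AF = (V_test/V_use)^n * exp[(Ea/k)(1/T_use - 1/T_test)]. The sketch below applies it with an invented voltage exponent, activation energy, and test conditions; these are not the paper's fitted constants.

```python
import math

K_B = 8.617e-5  # Boltzmann constant, eV/K

def pv_acceleration(v_test, v_use, t_test_k, t_use_k, n=5.0, ea=1.2):
    """Prokopowicz-Vaskas acceleration factor between HALT and use
    conditions. n (voltage exponent) and ea (activation energy, eV)
    are illustrative values, not fitted from the paper's data."""
    voltage = (v_test / v_use) ** n
    thermal = math.exp(ea / K_B * (1.0 / t_use_k - 1.0 / t_test_k))
    return voltage * thermal

# Hypothetical HALT at 2x rated voltage and 140 C vs. use at 85 C.
af = pv_acceleration(v_test=100, v_use=50, t_test_k=413.0, t_use_k=358.0)
print(f"acceleration factor ~ {af:.0f}; 1000 h at HALT ~ {af * 1000:.2e} h in use")
```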

  13. Reliability and cost evaluation of small isolated power systems containing photovoltaic and wind energy

    Science.gov (United States)

    Karki, Rajesh

    Renewable energy application in electric power systems is growing rapidly worldwide due to enhanced public concerns for adverse environmental impacts and escalation in energy costs associated with the use of conventional energy sources. Photovoltaics and wind energy sources are being increasingly recognized as cost-effective generation sources. A comprehensive evaluation of reliability and cost is required to analyze the actual benefits of utilizing these energy sources. The reliability aspects of utilizing renewable energy sources have largely been ignored in the past due to the relatively insignificant contribution of these sources in major power systems, and consequently due to the lack of appropriate techniques. Renewable energy sources have the potential to play a significant role in meeting the electrical energy requirements of small isolated power systems, which are primarily supplied by costly diesel fuel. A relatively high renewable energy penetration can significantly reduce system fuel costs but can also have a considerable impact on system reliability. Small isolated systems routinely plan their generating facilities using deterministic adequacy methods that cannot incorporate the highly erratic behavior of renewable energy sources. The utilization of a single probabilistic risk index has not been generally accepted in small isolated system evaluation despite its utilization in most large power utilities. Deterministic and probabilistic techniques are combined in this thesis using a system well-being approach to provide useful adequacy indices for small isolated systems that include renewable energy. This thesis presents an evaluation model for small isolated systems containing renewable energy sources by integrating simulation models that generate appropriate atmospheric data, evaluate chronological renewable power outputs and combine total available energy and load to provide useful system indices. A software tool SIPSREL+ has been developed which generates

  14. Systems Analysis Programs for Hands-on Integrated Reliability Evaluations (SAPHIRE) Quality Assurance Manual

    Energy Technology Data Exchange (ETDEWEB)

    C. L. Smith; R. Nims; K. J. Kvarfordt; C. Wharton

    2008-08-01

    The Systems Analysis Programs for Hands-on Integrated Reliability Evaluations (SAPHIRE) is a software application developed for performing a complete probabilistic risk assessment using a personal computer running the Microsoft Windows operating system. SAPHIRE is primarily funded by the U.S. Nuclear Regulatory Commission (NRC). The role of the INL in this project is that of software developer and tester. This development takes place using formal software development procedures and is subject to quality assurance (QA) processes. The purpose of this document is to describe how the SAPHIRE software QA is performed for Version 6 and 7, what constitutes its parts, and limitations of those processes.

  15. Evaluation of flaw characteristics and their influence on inservice inspection reliability

    International Nuclear Information System (INIS)

    Becker, F.L.

    1980-01-01

    This report describes the results of the first year's effort of a five year program which is being conducted by Battelle, Pacific Northwest Laboratories, on behalf of the US Nuclear Regulatory Commission. This initial effort was directed toward identification and quantification of inspection uncertainties, which are likely to occur during inservice inspection of LWR primary piping systems, and their influence on inspection reliability. These experiments were conducted on 304 stainless steel samples, however, the results are equally applicable to other materials. Later portions of the program will extend these measurements and evaluations to other materials and conditions

  16. State of the art of Monte Carlo technics for reliable activated waste evaluations

    International Nuclear Information System (INIS)

    Culioli, Matthieu; Chapoutier, Nicolas; Barbier, Samuel; Janski, Sylvain

    2016-01-01

    This paper presents the calculation scheme used in many studies to assess the activity inventory of French shutdown reactors (including Pressurized Water Reactors, Heavy Water Reactors, Sodium-Cooled Fast Reactors and Natural Uranium Gas-Cooled (UNGG) reactors). This calculation scheme is based on Monte Carlo calculations (MCNP) and involves advanced techniques for source modeling, geometry modeling (with Computer-Aided Design integration), acceleration methods, and coupled depletion calculations on 3D meshes. All these techniques offer efficient and reliable evaluations on large-scale models with a high level of detail, reducing the risks of underestimation or conservatism. (authors)

  17. Application of Kaplan-Meier analysis in reliability evaluation of products cast from aluminium alloys

    OpenAIRE

    J. Szymszal; A. Gierek; J. Kliś

    2010-01-01

    The article evaluates the reliability of AlSi17CuNiMg alloys using a Kaplan-Meier-based technique, very popular as a survival estimation tool in medical science. The main object of survival analysis is a group (or groups) of units for which the time of occurrence of an event (failure), taking place after some waiting time, is estimated. For example, in medicine, the failure can be a patient's death. In this study, the failure was the specimen fracture during a periodical fatigue test, while the ...

  18. Evaluation of the reliability of maize reference assays for GMO quantification.

    Science.gov (United States)

    Papazova, Nina; Zhang, David; Gruden, Kristina; Vojvoda, Jana; Yang, Litao; Buh Gasparic, Meti; Blejec, Andrej; Fouilloux, Stephane; De Loose, Marc; Taverniers, Isabel

    2010-03-01

    A reliable PCR reference assay for relative genetically modified organism (GMO) quantification must be specific for the target taxon and amplify uniformly across the commercialised varieties within the considered taxon. Different reference assays for maize (Zea mays L.) are used in official methods for GMO quantification. In this study, we evaluated the reliability of eight existing maize reference assays, four of which are used in combination with an event-specific polymerase chain reaction (PCR) assay validated and published by the Community Reference Laboratory (CRL). We analysed the nucleotide sequence variation in the target genomic regions in a broad range of transgenic and conventional varieties and lines: MON 810 varieties cultivated in Spain and conventional varieties from various geographical origins and breeding histories. In addition, the reliability of the assays was evaluated based on their PCR amplification performance. A single base pair substitution, corresponding to a single nucleotide polymorphism (SNP) reported in an earlier study, was observed in the forward primer of one of the studied alcohol dehydrogenase 1 (Adh1) (70) assays in a large number of varieties. The SNP presence is consistent with the poor PCR performance observed for this assay across the tested varieties. The obtained data show that the Adh1 (70) assay used in the official CRL NK603 assay is unreliable. Based on our results from both the nucleotide stability study and the PCR performance test, we can conclude that the Adh1 (136) reference assay (T25 and Bt11 assays), as well as the tested high mobility group protein gene assay, which also form parts of CRL methods for quantification, are highly reliable. Despite the observed uniformity in the nucleotide sequence of the invertase gene assay, the PCR performance test reveals that this target sequence might occur in more than one copy. Finally, although currently not forming a part of official quantification methods, zein and SSIIb

  19. Developing a contributing factor classification scheme for Rasmussen's AcciMap: Reliability and validity evaluation.

    Science.gov (United States)

    Goode, N; Salmon, P M; Taylor, N Z; Lenné, M G; Finch, C F

    2017-10-01

    One factor potentially limiting the uptake of Rasmussen's (1997) Accimap method by practitioners is the lack of a contributing factor classification scheme to guide accident analyses. This article evaluates the intra- and inter-rater reliability and criterion-referenced validity of a classification scheme developed to support the use of Accimap by led outdoor activity (LOA) practitioners. The classification scheme has two levels: the system level describes the actors, artefacts and activity context in terms of 14 codes; the descriptor level breaks the system level codes down into 107 specific contributing factors. The study involved 11 LOA practitioners using the scheme on two separate occasions to code a pre-determined list of contributing factors identified from four incident reports. Criterion-referenced validity was assessed by comparing the codes selected by LOA practitioners to those selected by the method creators. Mean intra-rater reliability scores at the system (M = 83.6%) and descriptor (M = 74%) levels were acceptable. Mean inter-rater reliability scores were not consistently acceptable for both coding attempts at the system level (M_T1 = 68.8%; M_T2 = 73.9%), and were poor at the descriptor level (M_T1 = 58.5%; M_T2 = 64.1%). Mean criterion-referenced validity scores at the system level were acceptable (M_T1 = 73.9%; M_T2 = 75.3%). However, they were not consistently acceptable at the descriptor level (M_T1 = 67.6%; M_T2 = 70.8%). Overall, the results indicate that the classification scheme does not currently satisfy reliability and validity requirements, and that further work is required. The implications for the design and development of contributing factor classification schemes are discussed. Copyright © 2017 Elsevier Ltd. All rights reserved.

  20. Uncertainty evaluation of reliability of shutdown system of a medium size fast breeder reactor

    Energy Technology Data Exchange (ETDEWEB)

    Zeliang, Chireuding; Singh, Om Pal, E-mail: singhop@iitk.ac.in; Munshi, Prabhat

    2016-11-15

    Highlights: • Uncertainty analysis of the reliability of the Shutdown System is carried out. • The Monte Carlo method of sampling is used. • The effects of various reliability improvement measures of the SDS are accounted for. - Abstract: In this paper, results are presented on the uncertainty evaluation of the reliability of the Shutdown System (SDS) of a Medium Size Fast Breeder Reactor (MSFBR). The reliability analysis results are those of Kumar et al. (2005). The failure rates of the components of the SDS are taken from the international literature, and it is assumed that these follow a log-normal distribution. The fault tree method is employed to propagate the uncertainty in failure rate from the component level to the shutdown system level. The beta factor model is used to account for different extents of diversity. The Monte Carlo sampling technique is used for the analysis. The results of the uncertainty analysis are presented in terms of the probability density function, cumulative distribution function, mean, variance, percentile values, confidence intervals, etc. It is observed that the spread in the probability distribution of the SDS failure rate is less than that of the SDS components' failure rates, and ninety percent of the values of the failure rate of the SDS fall below the target value. As generic values of failure rates are used, a sensitivity analysis is performed with respect to the failure rate of the control and safety rods and the beta factor. It is found that a large increase in the failure rate of SDS rods does not propagate proportionately to SDS system failure. The failure rate of the SDS is very sensitive to the beta factor for common cause failure between the two systems of the SDS. The results of the study provide insight into the propagation of uncertainty in the failure rates of SDS components to the failure rate of the shutdown system.
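
    A stripped-down sketch of this kind of propagation (all numbers invented, far simpler than the SDS fault tree): sample log-normal failure probabilities for two redundant trains, combine them through a beta-factor common-cause model, and read percentiles off the resulting distribution.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

# Log-normal component failure probability per demand: median 1e-3,
# error factor ~3 (invented values, not those of Kumar et al.).
median, ef = 1e-3, 3.0
sigma = np.log(ef) / 1.645          # EF defined at the 95th percentile
p = rng.lognormal(np.log(median), sigma, n)

beta = 0.05                          # common-cause fraction
# Two redundant trains: the independent parts must both fail, while the
# beta-factor part fails both trains at once.
p_sys = beta * p + ((1 - beta) * p) ** 2

for q in (5, 50, 95):
    print(f"{q}th percentile: {np.percentile(p_sys, q):.3e}")
```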

  1. Short-Term and Medium-Term Reliability Evaluation for Power Systems With High Penetration of Wind Power

    DEFF Research Database (Denmark)

    Ding, Yi; Singh, Chanan; Goel, Lalit

    2014-01-01

    The expanding share of the fluctuating and less predictable wind power generation can introduce complexities in power system reliability evaluation and management. This entails a need for the system operator to assess the system status more accurately for securing real-time balancing. The existing reliability evaluation techniques for power systems are well developed. These techniques are more focused on steady-state (time-independent) reliability evaluation and have been successfully applied in power system planning and expansion. In the operational phase, however, they may be too rough an approximation of the time-varying behavior of power systems with high penetration of wind power. This paper proposes a time-varying reliability assessment technique. Time-varying reliability models for wind farms, conventional generating units, and rapid start-up generating units are developed and represented

  2. Impute DC link (IDCL) cell based power converters and control thereof

    Science.gov (United States)

    Divan, Deepakraj M.; Prasai, Anish; Hernendez, Jorge; Moghe, Rohit; Iyer, Amrit; Kandula, Rajendra Prasad

    2016-04-26

    Power flow controllers based on Imputed DC Link (IDCL) cells are provided. The IDCL cell is a self-contained power electronic building block (PEBB). The IDCL cell may be stacked in series and parallel to achieve power flow control at higher voltage and current levels. Each IDCL cell may comprise a gate drive, a voltage sharing module, and a thermal management component in order to facilitate easy integration of the cell into a variety of applications. By providing direct AC conversion, the IDCL cell based AC/AC converters reduce device count, eliminate the use of electrolytic capacitors that have life and reliability issues, and improve system efficiency compared with similarly rated back-to-back inverter system.

  3. A Reliability and Validity of an Instrument to Evaluate the School-Based Assessment System: A Pilot Study

    Science.gov (United States)

    Ghazali, Nor Hasnida Md

    2016-01-01

    A valid, reliable and practical instrument is needed to evaluate the implementation of the school-based assessment (SBA) system. The aim of this study is to develop and assess the validity and reliability of an instrument to measure the perception of teachers towards the SBA implementation in schools. The instrument is developed based on a…

  4. Reliability of the Matson Evaluation of Social Skills with Youngsters (MESSY) for Children with Autism Spectrum Disorders

    Science.gov (United States)

    Matson, Johnny L.; Horovitz, Max; Mahan, Sara; Fodstad, Jill

    2013-01-01

    The purpose of this paper was to update the psychometrics of the "Matson Evaluation of Social Skills for Youngsters" ("MESSY") with children with Autism Spectrum Disorders (ASD), specifically with respect to internal consistency, split-half reliability, and inter-rater reliability. In Study 1, 114 children with ASD (Autistic Disorder, Asperger's…

  5. Multi-state time-varying reliability evaluation of smart grid with flexible demand resources utilizing Lz transform

    Science.gov (United States)

    Jia, Heping; Jin, Wende; Ding, Yi; Song, Yonghua; Yu, Dezhao

    2017-01-01

    With the expanding proportion of renewable energy generation and development of smart grid technologies, flexible demand resources (FDRs) have been utilized as an approach to accommodating renewable energies. However, multiple uncertainties of FDRs may influence reliable and secure operation of smart grid. Multi-state reliability models for a single FDR and aggregating FDRs have been proposed in this paper with regard to responsive abilities for FDRs and random failures for both FDR devices and information system. The proposed reliability evaluation technique is based on Lz transform method which can formulate time-varying reliability indices. A modified IEEE-RTS has been utilized as an illustration of the proposed technique.
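
    At a fixed time t, the Lz transform of a multi-state element reduces to a set of (capacity, probability) pairs, and composition of elements is a polynomial-style product. A toy sketch with invented states (in the full method the probabilities are time functions obtained from Markov models):

```python
from itertools import product

# Lz transform of a multi-state element evaluated at one time instant:
# {capacity: probability}. Values are invented for illustration.
gen = {0: 0.05, 50: 0.15, 100: 0.80}   # conventional unit, MW
fdr = {0: 0.10, 20: 0.90}              # aggregated flexible demand, MW

def combine(a, b, op):
    """Compose two Lz transforms with a structure function `op`
    (sum of capacities for parallel elements, min for series)."""
    out = {}
    for (ga, pa), (gb, pb) in product(a.items(), b.items()):
        g = op(ga, gb)
        out[g] = out.get(g, 0.0) + pa * pb
    return out

system = combine(gen, fdr, lambda x, y: x + y)   # parallel composition
demand = 70.0
lolp = sum(p for g, p in system.items() if g < demand)
print(system, f"P{{capacity < {demand}}} = {lolp:.4f}")
```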

  6. Human reliability analysis as an evaluation tool of the emergency evacuation process on industrial installation

    International Nuclear Information System (INIS)

    Santos, Isaac J.A.L. dos; Grecco, Claudio H.S.; Mol, Antonio C.A.; Carvalho, Paulo V.R.; Oliveira, Mauro V.; Botelho, Felipe Mury

    2007-01-01

    Human reliability is the probability that a person correctly performs an activity required by the system in a required time period and performs no extraneous activity that can degrade the system. Human reliability analysis (HRA) is the analysis, prediction and evaluation of work-oriented human performance using indices such as human error likelihood and probability of task accomplishment. The human error concept must carry no connotation of guilt and punishment; it must be treated as a natural consequence that emerges from the mismatch between human capacity and system demand. The majority of human errors are a consequence of the work situation and not of a lack of responsibility on the part of the worker. The anticipation and control of potentially adverse impacts of human action, or of interactions between humans and the system, are integral parts of process safety, where the factors that influence human performance must be recognized and managed. The aim of this paper is to propose a methodology to evaluate the emergency evacuation process in industrial installations, including SLIM-MAUD, a first-generation HRA method, and using virtual reality and simulation software to build and simulate the chosen emergency scenes. (author)

  7. Human reliability analysis as an evaluation tool of the emergency evacuation process on industrial installation

    Energy Technology Data Exchange (ETDEWEB)

    Santos, Isaac J.A.L. dos; Grecco, Claudio H.S.; Mol, Antonio C.A.; Carvalho, Paulo V.R.; Oliveira, Mauro V.; Botelho, Felipe Mury [Instituto de Engenharia Nuclear (IEN/CNEN-RJ), Rio de Janeiro, RJ (Brazil)]. E-mail: luquetti@ien.gov.br; grecco@ien.gov.br; mol@ien.gov.br; paulov@ien.gov.br; mvitor@ien.gov.br; felipemury@superig.com.br

    2007-07-01

    Human reliability is the probability that a person correctly performs an activity required by the system in a required time period and performs no extraneous activity that can degrade the system. Human reliability analysis (HRA) is the analysis, prediction and evaluation of work-oriented human performance using indices such as human error likelihood and probability of task accomplishment. The human error concept must carry no connotation of guilt and punishment; it must be treated as a natural consequence that emerges from the mismatch between human capacity and system demand. The majority of human errors are a consequence of the work situation and not of a lack of responsibility on the part of the worker. The anticipation and control of potentially adverse impacts of human action, or of interactions between humans and the system, are integral parts of process safety, where the factors that influence human performance must be recognized and managed. The aim of this paper is to propose a methodology to evaluate the emergency evacuation process in industrial installations, including SLIM-MAUD, a first-generation HRA method, and using virtual reality and simulation software to build and simulate the chosen emergency scenes. (author)

  8. Identification of a practical and reliable method for the evaluation of litter moisture in turkey production.

    Science.gov (United States)

    Vinco, L J; Giacomelli, S; Campana, L; Chiari, M; Vitale, N; Lombardi, G; Veldkamp, T; Hocking, P M

    2018-02-01

    1. An experiment was conducted to compare 5 different methods for the evaluation of litter moisture. 2. For litter collection and assessment, 55 farms were selected; one shed from each farm was inspected and 9 points were identified within each shed. 3. For each device used for the evaluation of litter moisture, the mean and standard deviation of wetness measures per collection point were assessed. 4. The reliability and overall consistency between the 5 instruments used to measure wetness were high (α = 0.72). 5. Measurements at three of the 9 collection points were sufficient to provide a reliable assessment of litter moisture throughout the shed. 6. Based on the direct correlation between litter moisture and footpad lesions, litter moisture measurement can be used as a resource-based on-farm animal welfare indicator. 7. Among the 5 methods analysed, visual scoring is the most simple and practical, and therefore the best candidate to be used on-farm for animal welfare assessment.

  9. Evaluation of seismic reliability of steel moment resisting frames rehabilitated by concentric braces with probabilistic models

    Directory of Open Access Journals (Sweden)

    Fateme Rezaei

    2017-08-01

    Full Text Available The probability of failure of a structure designed by "deterministic methods" can be higher than that of a structure designed in a similar situation using probabilistic methods and models that consider "uncertainties". The main purpose of this research was to evaluate the seismic reliability of steel moment resisting frames rehabilitated with concentric braces by means of probabilistic models. To do so, three-story and nine-story steel moment resisting frames were designed based on the resistance criteria of the Iranian code, and they were then rehabilitated with concentric braces based on controlling drift limitations. The probability of frame failure was evaluated with probabilistic models of the magnitude and location of earthquakes, the ground shaking intensity in the area of the structure, and a probabilistic model of the building response (based on maximum lateral roof displacement), using probabilistic methods. These frames were analyzed under a subcrustal source by the sampling-based probabilistic method "Risk Tools" (RT). Comparing the exceedance probability curves of the building response (or selected points on them) for the three-story and nine-story model frames before and after rehabilitation, the seismic response of the rehabilitated frames was reduced and their reliability was improved. The main variables affecting the probability of frame failure were also determined using sensitivity analysis with the FORM probabilistic method. The most effective variables in reducing the probability of frame failure are those in the magnitude model, the ground shaking intensity model error, and the magnitude model error.

  10. PROOF OF CONCEPT FOR A HUMAN RELIABILITY ANALYSIS METHOD FOR HEURISTIC USABILITY EVALUATION OF SOFTWARE

    International Nuclear Information System (INIS)

    Ronald L. Boring; David I. Gertman; Jeffrey C. Joe; Julie L. Marble

    2005-01-01

    An ongoing issue within human-computer interaction (HCI) is the need for simplified or "discount" methods. The current economic slowdown has necessitated innovative methods that are results driven and cost effective. The myriad methods of design and usability are currently being cost-justified, and new techniques are actively being explored that meet current budgets and needs. Recent efforts in human reliability analysis (HRA) are highlighted by the ten-year development of the Standardized Plant Analysis Risk HRA (SPAR-H) method. The SPAR-H method has been used primarily for determining human-centered risk at nuclear power plants. The SPAR-H method, however, shares task analysis underpinnings with HCI. Despite this methodological overlap, there is currently no HRA approach deployed in heuristic usability evaluation. This paper presents an extension of the existing SPAR-H method to be used as part of heuristic usability evaluation in HCI

  11. A Novel Evaluation Method for Building Construction Project Based on Integrated Information Entropy with Reliability Theory

    Directory of Open Access Journals (Sweden)

    Xiao-ping Bai

    2013-01-01

    Full Text Available Selecting construction schemes for a building engineering project is a complex multiobjective optimization decision process, in which many indexes need to be considered to find the optimum scheme. Aiming at this problem, this paper selects cost, progress, quality, and safety as the four first-order evaluation indexes, uses a quantitative method for the cost index, uses integrated qualitative and quantitative methodologies for the progress, quality, and safety indexes, and integrates engineering economics, reliability theories, and information entropy theory to present a new evaluation method for building construction projects. Combined with a practical case, this paper also presents the detailed computing process and steps, including selecting all order indexes, establishing the index matrix, computing the score values of all order indexes, computing the synthesis score, sorting all selected schemes, and making the analysis and decision. The presented method can offer valuable references for risk computing of building construction projects.

  12. A novel evaluation method for building construction project based on integrated information entropy with reliability theory.

    Science.gov (United States)

    Bai, Xiao-ping; Zhang, Xi-wei

    2013-01-01

    Selecting construction schemes for a building engineering project is a complex multiobjective optimization decision process, in which many indexes need to be considered to find the optimum scheme. Aiming at this problem, this paper selects cost, progress, quality, and safety as the four first-order evaluation indexes, uses a quantitative method for the cost index, uses integrated qualitative and quantitative methodologies for the progress, quality, and safety indexes, and integrates engineering economics, reliability theories, and information entropy theory to present a new evaluation method for building construction projects. Combined with a practical case, this paper also presents the detailed computing process and steps, including selecting all order indexes, establishing the index matrix, computing the score values of all order indexes, computing the synthesis score, sorting all selected schemes, and making the analysis and decision. The presented method can offer valuable references for risk computing of building construction projects.
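
    The information-entropy part of such a method is commonly implemented as the entropy weight technique: criteria whose scores vary more across schemes carry more discriminating information and receive larger weights. A small sketch on an invented decision matrix:

```python
import numpy as np

# Decision matrix: rows = candidate construction schemes, columns =
# first-order indexes (cost, progress, quality, safety). Scores are
# invented and already oriented so that larger is better.
X = np.array([[0.70, 0.60, 0.90, 0.80],
              [0.85, 0.75, 0.70, 0.65],
              [0.60, 0.90, 0.80, 0.75]])

def entropy_weights(X):
    """Information-entropy weights: lower entropy (more variation
    across schemes) yields a larger weight for that criterion."""
    P = X / X.sum(axis=0)                          # normalize each column
    m = X.shape[0]
    e = -(P * np.log(P)).sum(axis=0) / np.log(m)   # entropy per criterion
    d = 1.0 - e                                    # degree of diversification
    return d / d.sum()

w = entropy_weights(X)
scores = X @ w                                     # synthesis score per scheme
print("weights:", w.round(3), "ranking:", np.argsort(-scores) + 1)
```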

  13. Genomic evaluations with many more genotypes

    Directory of Open Access Journals (Sweden)

    Wiggans George R

    2011-03-01

    Full Text Available Abstract Background Genomic evaluations in Holstein dairy cattle have quickly become more reliable over the last two years in many countries as more animals have been genotyped for 50,000 markers. Evaluations can also include animals genotyped with more or fewer markers using new tools such as the 777,000- or 2,900-marker chips recently introduced for cattle. Gains from more markers can be predicted using simulation, whereas strategies to use fewer markers have been compared using subsets of actual genotypes. The overall cost of selection is reduced by genotyping most animals at less than the highest density and imputing their missing genotypes using haplotypes. Algorithms to combine different densities need to be efficient because numbers of genotyped animals and markers may continue to grow quickly. Methods Genotypes for 500,000 markers were simulated for the 33,414 Holsteins that had 50,000-marker genotypes in the North American database. Another 86,465 non-genotyped ancestors were included in the pedigree file, and linkage disequilibrium was generated directly in the base population. Mixed-density datasets were created by keeping 50,000 markers (every tenth) for most animals. Missing genotypes were imputed using a combination of population haplotyping and pedigree haplotyping. Reliabilities of genomic evaluations using linear and nonlinear methods were compared. Results Differing marker sets for a large population were combined with just a few hours of computation. About 95% of paternal alleles were determined correctly, and > 95% of missing genotypes were called correctly. Reliability of breeding values was already high (84.4%) with 50,000 simulated markers. The gain in reliability from increasing the number of markers to 500,000 was only 1.6%, but more than half of that gain resulted from genotyping just 1,406 young bulls at higher density. Linear genomic evaluations had reliabilities 1.5% lower than the nonlinear evaluations with 50

  14. Towards a more efficient representation of imputation operators in TPOT

    OpenAIRE

    Garciarena, Unai; Mendiburu, Alexander; Santana, Roberto

    2018-01-01

    Automated Machine Learning encompasses a set of meta-algorithms intended to design and apply machine learning techniques (e.g., model selection, hyperparameter tuning, model assessment, etc.). TPOT, a software for optimizing machine learning pipelines based on genetic programming (GP), is a novel example of this kind of applications. Recently we have proposed a way to introduce imputation methods as part of TPOT. While our approach was able to deal with problems with missing data, it can prod...

  15. DTW-APPROACH FOR UNCORRELATED MULTIVARIATE TIME SERIES IMPUTATION

    OpenAIRE

    Phan , Thi-Thu-Hong; Poisson Caillault , Emilie; Bigand , André; Lefebvre , Alain

    2017-01-01

    International audience; Missing data are inevitable in almost all domains of applied sciences. Data analysis with missing values can lead to a loss of efficiency and unreliable results, especially for large missing sub-sequence(s). Some well-known methods for multivariate time series imputation require high correlations between series or their features. In this paper, we propose an approach based on the shape-behaviour relation in low/un-correlated multivariate time series under an assumption of...

  16. Which DTW Method Applied to Marine Univariate Time Series Imputation

    OpenAIRE

    Phan , Thi-Thu-Hong; Caillault , Émilie; Lefebvre , Alain; Bigand , André

    2017-01-01

    International audience; Missing data are ubiquitous in all domains of applied sciences. Processing datasets containing missing values can lead to a loss of efficiency and unreliable results, especially for large missing sub-sequence(s). Therefore, the aim of this paper is to build a framework for filling missing values in univariate time series and to perform a comparison of different similarity metrics used for the imputation task. This allows us to suggest the most suitable methods for the imp...

  17. Imputation of missing data in time series for air pollutants

    Science.gov (United States)

    Junger, W. L.; Ponce de Leon, A.

    2015-02-01

    Missing data are a major concern in epidemiological studies of the health effects of environmental air pollutants. This article presents an imputation-based method that is suitable for multivariate time series data, which uses the EM algorithm under the assumption of a normal distribution. Different approaches are considered for filtering the temporal component. A simulation study was performed to assess the validity and performance of the proposed method in comparison with some frequently used methods. Simulations showed that when the amount of missing data was as low as 5%, the complete-data analysis yielded satisfactory results regardless of the generating mechanism of the missing data, whereas the validity began to degenerate when the proportion of missing values exceeded 10%. The proposed imputation method exhibited good accuracy and precision in different settings with respect to the patterns of missing observations. Most of the imputations obtained valid results, even under missing not at random. The methods proposed in this study are implemented as a package called mtsdi for the statistical software system R.
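
    mtsdi itself is an R package; as a rough Python analogue, the sketch below iterates conditional-mean imputation under a multivariate normal model (a simplified EM that omits the temporal filtering discussed above and the covariance correction term of a full EM M-step).

```python
import numpy as np

def em_impute(X, n_iter=50):
    """Iterative conditional-mean imputation for i.i.d. multivariate
    normal rows: fill with column means, then repeatedly replace each
    missing entry by its conditional mean given the observed entries."""
    X = np.array(X, float)
    miss = np.isnan(X)
    col_mean = np.nanmean(X, axis=0)
    X[miss] = np.take(col_mean, np.where(miss)[1])   # initial fill
    for _ in range(n_iter):
        mu = X.mean(axis=0)
        S = np.cov(X, rowvar=False)
        for i in range(X.shape[0]):
            m = miss[i]
            if not m.any():
                continue
            o = ~m
            if not o.any():
                X[i, m] = mu[m]          # nothing observed in this row
                continue
            Soo = S[np.ix_(o, o)] + 1e-9 * np.eye(o.sum())
            X[i, m] = mu[m] + S[np.ix_(m, o)] @ np.linalg.solve(Soo, X[i, o] - mu[o])
    return X

# Toy data: 3 correlated pollutant channels with ~10% holes.
rng = np.random.default_rng(1)
Z = rng.multivariate_normal([0, 0, 0],
                            [[1, .8, .6], [.8, 1, .7], [.6, .7, 1]], 200)
Zm = Z.copy()
Zm[rng.random(Z.shape) < 0.1] = np.nan
mask = np.isnan(Zm)
print("imputation RMSE:", np.sqrt(np.mean((em_impute(Zm)[mask] - Z[mask]) ** 2)))
```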

  18. A spatial haplotype copying model with applications to genotype imputation.

    Science.gov (United States)

    Yang, Wen-Yun; Hormozdiari, Farhad; Eskin, Eleazar; Pasaniuc, Bogdan

    2015-05-01

    Ever since its introduction, the haplotype copy model has proven to be one of the most successful approaches for modeling genetic variation in human populations, with applications ranging from ancestry inference to genotype phasing and imputation. Motivated by coalescent theory, this approach assumes that any chromosome (haplotype) can be modeled as a mosaic of segments copied from a set of chromosomes sampled from the same population. At the core of the model is the assumption that any chromosome from the sample is equally likely to contribute a priori to the copying process. Motivated by recent works that model genetic variation in a geographic continuum, we propose a new spatial-aware haplotype copy model that jointly models geography and the haplotype copying process. We extend hidden Markov models of haplotype diversity such that at any given location, haplotypes that are closest in the genetic-geographic continuum map are a priori more likely to contribute to the copying process than distant ones. Through simulations starting from the 1000 Genomes data, we show that our model achieves superior accuracy in genotype imputation over the standard spatial-unaware haplotype copy model. In addition, we show the utility of our model in selecting a small personalized reference panel for imputation that leads to both improved accuracy as well as to a lower computational runtime than the standard approach. Finally, we show our proposed model can be used to localize individuals on the genetic-geographical map on the basis of their genotype data.
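
    A compact way to see the spatial twist is a Li-Stephens-style copying HMM in which the prior copying weights are non-uniform. The forward pass below returns the filtered distribution over copying templates at the last site; the panel, target, and all parameters are invented for illustration.

```python
import numpy as np

def copy_posterior(target, panel, weights, theta=0.01, rho=0.1):
    """Forward pass of a toy haplotype-copying HMM. `weights` are prior
    copying probabilities per panel haplotype: uniform in the standard
    model, larger for genetically-geographically close haplotypes in
    the spatial-aware variant sketched here."""
    panel = np.asarray(panel)
    K, L = panel.shape
    w = np.asarray(weights, float) / np.sum(weights)
    # Emission: match with prob 1-theta, mismatch with prob theta.
    emit = np.where(panel == np.asarray(target), 1 - theta, theta)  # K x L
    f = w * emit[:, 0]
    f /= f.sum()
    for j in range(1, L):
        # With prob rho, switch to a new template drawn from the prior w.
        f = ((1 - rho) * f + rho * w) * emit[:, j]
        f /= f.sum()
    return f

panel = [[0, 1, 1, 0], [0, 1, 0, 0], [1, 0, 0, 1]]
target = [0, 1, 1, 0]
print(copy_posterior(target, panel, weights=[3.0, 1.0, 1.0]).round(3))
```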

  19. Application case study of AP1000 automatic depressurization system (ADS) for reliability evaluation by GO-FLOW methodology

    Energy Technology Data Exchange (ETDEWEB)

    Hashim, Muhammad, E-mail: hashimsajid@yahoo.com; Hidekazu, Yoshikawa, E-mail: yosikawa@kib.biglobe.ne.jp; Takeshi, Matsuoka, E-mail: mats@cc.utsunomiya-u.ac.jp; Ming, Yang, E-mail: myang.heu@gmail.com

    2014-10-15

    Highlights: • Discussion of the reasons why AP1000 is equipped with an ADS, compared with a conventional PWR. • Clarification of full and partial depressurization of the reactor coolant system by the ADS. • Application case study of the four-stage ADS for reliability evaluation in a LBLOCA. • The GO-FLOW tool is capable of evaluating the dynamic reliability of passive safety systems. • The calculated ADS reliability significantly increased the dynamic reliability of the PXS. - Abstract: The AP1000 nuclear power plant (NPP) uses passive means in its safety systems to ensure safety in the event of transients or severe accidents. One of the safety systems unique to AP1000 compared with a conventional PWR is the four-stage Automatic Depressurization System (ADS), which works as an active safety system. In the present study, the authors first discuss, from the standpoint of reliability, why the four-stage ADS is added to the AP1000 plant compared with a conventional PWR. They then explain full and partial depressurization of the RCS by the four-stage ADS in the event of transients and loss-of-coolant accidents (LOCAs). Lastly, an application case study evaluates the reliability of the four-stage ADS of AP1000 under postulated conditions of full RCS depressurization during a large-break loss-of-coolant accident (LBLOCA) in one of the RCS cold legs. In this case study, the reliability evaluation is performed with the GO-FLOW methodology to determine the influence of the ADS on the dynamic reliability of the passive core cooling system (PXS) of AP1000, i.e., what happens if the ADS fails or actuates successfully. GO-FLOW is a success-oriented reliability analysis tool capable of evaluating system reliability/unavailability as an alternative to Fault Tree Analysis (FTA) and Event Tree Analysis (ETA). Under these specific LBLOCA conditions, the GO-FLOW calculated reliability results indicated

  20. Validity, Reliability, and Potential Bias of Short Forms of Students' Evaluation of Teaching: The Case of UAE University

    Science.gov (United States)

    Dodeen, Hamzeh

    2013-01-01

    Students' opinions continue to be a significant factor in the evaluation of teaching in higher education institutions. The purpose of this study was to psychometrically assess short students' evaluation of teaching (SET) forms, using the UAE University form as a model. The study evaluated the form's validity, reliability, the overall question, and…

  1. A review of the models for evaluating organizational factors in human reliability analysis

    International Nuclear Information System (INIS)

    Alvarenga, Marco Antonio Bayout; Fonseca, Renato Alves da; Melo, Paulo Fernando Ferreira Frutuoso e

    2009-01-01

    Human factors should be evaluated at three hierarchical levels. The first level concerns the cognitive behavior of human beings during the control of processes that occur through the man-machine interface. Here, one evaluates human errors through human reliability models of the first and second generations, like THERP, ASEP and HCR (first generation) and ATHEANA and CREAM (second generation). At the second level, the focus is on the cognitive behavior of human beings when they work in groups, as in nuclear power plants; the focus here is on the anthropological aspects that govern the interaction among human beings. At the third level, one is interested in the influence that the organizational culture exerts on human beings as well as on the tasks being performed; here, one adds to the factors of the second level the economic and political aspects that shape the company's organizational culture. Nowadays, HRA methodologies incorporate organizational factors at the group and organization levels through performance shaping factors. This work makes a critical evaluation of the deficiencies concerning human factors and evaluates the potential of quantitative techniques that have been proposed in the last decade to model organizational factors, including the interaction among groups, with the intention of eliminating this chronic deficiency of HRA models. Two important techniques will be discussed in this context: STAMP, based on system theory, and FRAM, which aims at modeling the nonlinearities of socio-technical systems. (author)

  2. A review of the models for evaluating organizational factors in human reliability analysis

    Energy Technology Data Exchange (ETDEWEB)

    Alvarenga, Marco Antonio Bayout; Fonseca, Renato Alves da [Comissao Nacional de Energia Nuclear (CNEN), Rio de Janeiro, RJ (Brazil)], e-mail: bayout@cnen.gov.br, e-mail: rfonseca@cnen.gov.br; Melo, Paulo Fernando Ferreira Frutuoso e [Coordenacao dos Programas de Pos-graduacao de Engenharia (COPPE/UFRJ), Rio de Janeiro, RJ (Brazil). Programa de Engenharia Nuclear], e-mail: frutuoso@con.ufrj.br

    2009-07-01

    Human factors should be evaluated at three hierarchical levels. The first level concerns the cognitive behavior of human beings during the control of processes that occur through the man-machine interface. Here, one evaluates human errors through human reliability models of the first and second generations, like THERP, ASEP and HCR (first generation) and ATHEANA and CREAM (second generation). At the second level, the focus is on the cognitive behavior of human beings when they work in groups, as in nuclear power plants; the focus here is on the anthropological aspects that govern the interaction among human beings. At the third level, one is interested in the influence that the organizational culture exerts on human beings as well as on the tasks being performed; here, one adds to the factors of the second level the economic and political aspects that shape the company's organizational culture. Nowadays, HRA methodologies incorporate organizational factors at the group and organization levels through performance shaping factors. This work makes a critical evaluation of the deficiencies concerning human factors and evaluates the potential of quantitative techniques that have been proposed in the last decade to model organizational factors, including the interaction among groups, with the intention of eliminating this chronic deficiency of HRA models. Two important techniques will be discussed in this context: STAMP, based on system theory, and FRAM, which aims at modeling the nonlinearities of socio-technical systems. (author)

  3. A comparison between Markovian models and Bayesian networks for treating some dependent events in reliability evaluations

    Energy Technology Data Exchange (ETDEWEB)

    Duarte, Juliana P.; Leite, Victor C.; Melo, P.F. Frutuoso e, E-mail: julianapduarte@poli.ufrj.br, E-mail: victor.coppo.leite@poli.ufrj.br, E-mail: frutuoso@nuclear.ufrj.br [Universidade Federal do Rio de Janeiro (UFRJ), Rio de Janeiro, RJ (Brazil)

    2013-07-01

    Bayesian networks have become a very handy tool for solving problems in various application areas. This paper discusses the use of Bayesian networks to treat dependent events in reliability engineering typically modeled by Markovian models. Dependent events play an important role, for example, when treating load-sharing systems, bridge systems, common-cause failures, and switching systems (those for which a standby component is activated after the main one fails by means of a switching mechanism). Repair plays an important role in all these cases (as, for example, the number of repairmen). All Bayesian network calculations are performed by means of the Netica™ software, of Norsys Software Corporation, with Fortran 90 used to evaluate them over time. The discussion considers the development of time-dependent reliability figures of merit, which are easily obtained through Markovian models but not through Bayesian networks, because the latter need probabilities as input rather than failure and repair rates. Bayesian networks produced results in very good agreement with those of Markov models and pivotal decomposition. Static and discrete-time (DTBN) Bayesian networks were used in order to check their capabilities for modeling specific situations, like switching failures in cold-standby systems. The DTBN was more flexible for modeling systems where the time of occurrence of an event is important, for example, standby failure and repair. However, the static network model produced results as good as the DTBN with a much simpler approach. (author)
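    For the simplest repairable component, the time-dependent figures of merit mentioned here fall straight out of a two-state Markov model, which also shows what a Bayesian network would need as input (probabilities rather than rates). A minimal sketch with illustrative failure and repair rates, unrelated to the paper's Netica/Fortran models:

    ```python
    import numpy as np
    from scipy.linalg import expm

    lam, mu = 1e-3, 1e-1            # failure and repair rates per hour (illustrative)
    Q = np.array([[-lam,  lam],     # generator matrix: state 0 = up, state 1 = down
                  [  mu,  -mu]])

    t = 500.0
    A_markov = expm(Q * t)[0, 0]    # availability: P(up at t | up at 0)

    # closed-form solution of the same two-state model, as a check
    A_exact = mu / (lam + mu) + lam / (lam + mu) * np.exp(-(lam + mu) * t)
    print(A_markov, A_exact)        # agree to numerical precision
    ```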

  4. A comparison between Markovian models and Bayesian networks for treating some dependent events in reliability evaluations

    International Nuclear Information System (INIS)

    Duarte, Juliana P.; Leite, Victor C.; Melo, P.F. Frutuoso e

    2013-01-01

    Bayesian networks have become a very handy tool for solving problems in various application areas. This paper discusses the use of Bayesian networks to treat dependent events in reliability engineering typically modeled by Markovian models. Dependent events play an important role, for example, when treating load-sharing systems, bridge systems, common-cause failures, and switching systems (those for which a standby component is activated after the main one fails by means of a switching mechanism). Repair plays an important role in all these cases (as, for example, the number of repairmen). All Bayesian network calculations are performed by means of the Netica™ software, of Norsys Software Corporation, with Fortran 90 used to evaluate them over time. The discussion considers the development of time-dependent reliability figures of merit, which are easily obtained through Markovian models but not through Bayesian networks, because the latter need probabilities as input rather than failure and repair rates. Bayesian networks produced results in very good agreement with those of Markov models and pivotal decomposition. Static and discrete-time (DTBN) Bayesian networks were used in order to check their capabilities for modeling specific situations, like switching failures in cold-standby systems. The DTBN was more flexible for modeling systems where the time of occurrence of an event is important, for example, standby failure and repair. However, the static network model produced results as good as the DTBN with a much simpler approach. (author)

  5. Gearbox Reliability Collaborative Analytic Formulation for the Evaluation of Spline Couplings

    Energy Technology Data Exchange (ETDEWEB)

    Guo, Yi [National Renewable Energy Lab. (NREL), Golden, CO (United States); Keller, Jonathan [National Renewable Energy Lab. (NREL), Golden, CO (United States); Errichello, Robert [GEARTECH, Houston, TX (United States); Halse, Chris [Romax Technology, Nottingham (United Kingdom)

    2013-12-01

    Gearboxes in wind turbines have not been achieving their expected design life; however, they commonly meet and exceed the design criteria specified in current standards in the gear, bearing, and wind turbine industry as well as third-party certification criteria. The cost of gearbox replacements and rebuilds, as well as the down time associated with these failures, has elevated the cost of wind energy. The National Renewable Energy Laboratory (NREL) Gearbox Reliability Collaborative (GRC) was established by the U.S. Department of Energy in 2006; its key goal is to understand the root causes of premature gearbox failures and improve their reliability using a combined approach of dynamometer testing, field testing, and modeling. As part of the GRC program, this paper investigates the design of the spline coupling often used in modern wind turbine gearboxes to connect the planetary and helical gear stages. Aside from transmitting the driving torque, another common function of the spline coupling is to allow the sun to float between the planets. The amount the sun can float is determined by the spline design and the sun shaft flexibility subject to the operational loads. Current standards address spline coupling design requirements in varying detail. This report provides additional insight beyond these current standards to quickly evaluate spline coupling designs.

  6. Validity and reliability of a new tool to evaluate handwriting difficulties in Parkinson's disease.

    Directory of Open Access Journals (Sweden)

    Evelien Nackaerts

    Handwriting in Parkinson's disease (PD) features specific abnormalities which are difficult to assess in clinical practice, since no specific tool for the evaluation of spontaneous movement is currently available. This study aims to validate the 'Systematic Screening of Handwriting Difficulties' (SOS-test) in patients with PD. Handwriting performance of 87 patients and 26 healthy age-matched controls was examined using the SOS-test. Sixty-seven patients were tested a second time within a period of one month. Participants were asked to copy as much as possible of a text within 5 minutes, with the instruction to write as neatly and quickly as in daily life. Writing speed (letters in 5 minutes), size (mm) and quality of handwriting were compared. Correlation analysis was performed between SOS outcomes and other fine motor skill measurements and disease characteristics. Intrarater, interrater and test-retest reliability were assessed using the intraclass correlation coefficient (ICC) and Spearman correlation coefficient. Patients with PD wrote smaller (p = 0.043) and slower than controls, and reliability coefficients were above 0.769 for both groups. The SOS-test is a short and effective tool to detect handwriting problems in PD with excellent reliability. It can therefore be recommended as a clinical instrument for standardized screening of handwriting deficits in PD.

  7. Utilisation, Reliability and Validity of Clinical Evaluation Exercise in Otolaryngology Training.

    Science.gov (United States)

    Awad, Z; Hayden, L; Muthuswamy, K; Tolley, N S

    2015-10-01

    To investigate the utilisation, reliability and validity of the clinical evaluation exercise (CEX) in otolaryngology training. Retrospective analysis of an online assessment database. We analysed all CEXs submitted by north London core (CT) and speciality trainees (ST) in otolaryngology from 2010 to 2013. Outcome measures were the internal consistency of the 7 CEX items, each rated O (outstanding), S (satisfactory) or D (development required), and the overall performance rating (pS) of 1-4 assessed against completion-of-training level. Receiver operating characteristic analysis was used to describe CEX sensitivity and specificity. Overall score (cS), pS and the number of 'D'-rated items were used to investigate construct validity. One thousand one hundred and sixty CEXs from 45 trainees were included. The CEX showed good internal consistency (Cronbach's alpha = 0.85). The CEX was highly sensitive (99%), yet not specific (6%). cS and pS for ST were higher than for CT (99.1% ± 0.4 versus 96.6% ± 0.8 and 3.06 ± 0.05 versus 1.92 ± 0.04, respectively), a statistically significant difference. The CEX is reliable for assessing early-years otolaryngology trainees in clinical examination, but not at a higher level. It has the potential to be used in a summative capacity in selecting trainees for ST positions. This would also encourage trainees to master all domains of otolaryngology clinical examination by the end of CT. © 2015 John Wiley & Sons Ltd.

  8. Combining item response theory with multiple imputation to equate health assessment questionnaires.

    Science.gov (United States)

    Gu, Chenyang; Gutman, Roee

    2017-09-01

    The assessment of patients' functional status across the continuum of care requires a common patient assessment tool. However, assessment tools that are used in various health care settings differ and cannot be easily contrasted. For example, the Functional Independence Measure (FIM) is used to evaluate the functional status of patients who stay in inpatient rehabilitation facilities, the Minimum Data Set (MDS) is collected for all patients who stay in skilled nursing facilities, and the Outcome and Assessment Information Set (OASIS) is collected if they choose home health care provided by home health agencies. All three instruments or questionnaires include functional status items, but the specific items, rating scales, and instructions for scoring different activities vary between the different settings. We consider equating different health assessment questionnaires as a missing data problem, and propose a variant of predictive mean matching method that relies on Item Response Theory (IRT) models to impute unmeasured item responses. Using real data sets, we simulated missing measurements and compared our proposed approach to existing methods for missing data imputation. We show that, for all of the estimands considered, and in most of the experimental conditions that were examined, the proposed approach provides valid inferences, and generally has better coverages, relatively smaller biases, and shorter interval estimates. The proposed method is further illustrated using a real data set. © 2016, The International Biometric Society.
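    The backbone of the proposed method, predictive mean matching, is simple to sketch: fit a model on complete cases, then impute each missing response with the observed value of a donor whose predicted mean is closest. The paper replaces the linear predictor below with IRT-model-based predictions; this plain-linear version on synthetic data is only an outline:

    ```python
    import numpy as np

    def pmm_impute(x, y, k=5, rng=None):
        """Predictive mean matching with k candidate donors."""
        rng = rng if rng is not None else np.random.default_rng()
        obs = ~np.isnan(y)
        X = np.column_stack([np.ones_like(x), x])
        beta, *_ = np.linalg.lstsq(X[obs], y[obs], rcond=None)
        yhat = X @ beta                             # predicted means for everyone
        y_imp = y.copy()
        for i in np.where(~obs)[0]:
            # k observed cases with the closest predicted means
            donors = np.argsort(np.abs(yhat[obs] - yhat[i]))[:k]
            y_imp[i] = y[obs][rng.choice(donors)]   # copy an observed value
        return y_imp

    rng = np.random.default_rng(2)
    x = rng.normal(size=300)
    y = 2.0 * x + rng.normal(size=300)
    y[rng.random(300) < 0.2] = np.nan
    print(np.isnan(pmm_impute(x, y, rng=rng)).sum())   # 0: all imputed
    ```

    Because imputed values are always observed values, PMM preserves the empirical distribution of the item responses, which is why it pairs naturally with IRT-based predictions.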

  9. Using beta coefficients to impute missing correlations in meta-analysis research: Reasons for caution.

    Science.gov (United States)

    Roth, Philip L; Le, Huy; Oh, In-Sue; Van Iddekinge, Chad H; Bobko, Philip

    2018-06-01

    Meta-analysis has become a well-accepted method for synthesizing empirical research about a given phenomenon. Many meta-analyses focus on synthesizing correlations across primary studies, but some primary studies do not report correlations. Peterson and Brown (2005) suggested that researchers could use standardized regression weights (i.e., beta coefficients) to impute missing correlations. Indeed, their beta estimation procedures (BEPs) have been used in meta-analyses in a wide variety of fields. In this study, the authors evaluated the accuracy of BEPs in meta-analysis. We first examined how use of BEPs might affect results from a published meta-analysis. We then developed a series of Monte Carlo simulations that systematically compared the use of existing correlations (that were not missing) to data sets that incorporated BEPs (that impute missing correlations from corresponding beta coefficients). These simulations estimated ρ̄ (mean population correlation) and SDρ (true standard deviation) across a variety of meta-analytic conditions. Results from both the existing meta-analysis and the Monte Carlo simulations revealed that BEPs were associated with potentially large biases when estimating ρ̄ and even larger biases when estimating SDρ. Using only existing correlations often substantially outperformed use of BEPs and virtually never performed worse than BEPs. Overall, the authors urge a return to the standard practice of using only existing correlations in meta-analysis. (PsycINFO Database Record (c) 2018 APA, all rights reserved).
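    The bias mechanism is easy to reproduce at the population level. The sketch below applies a Peterson-and-Brown-style rule (commonly cited as r-hat = beta + .05*lambda, with lambda = 1 for nonnegative beta; treat the exact rule as an assumption here) and shows the gap from the true correlation widening as predictors overlap:

    ```python
    import numpy as np

    def beta_weights(Rxx, rxy):
        """Standardized regression weights from predictor intercorrelations."""
        return np.linalg.solve(Rxx, rxy)

    rxy = np.array([0.30, 0.20])             # true correlations of x1, x2 with y
    for rho12 in (0.0, 0.3, 0.6):            # predictor intercorrelation
        Rxx = np.array([[1.0, rho12], [rho12, 1.0]])
        b = beta_weights(Rxx, rxy)
        lam = (b >= 0).astype(float)
        r_imputed = b + 0.05 * lam           # BEP-style imputed correlation
        print(rho12, rxy[0], round(float(r_imputed[0]), 3))
    ```

    With uncorrelated predictors the imputed value overshoots the true r of .30 by .05; at rho12 = 0.6 it undershoots to about .24, illustrating why the authors caution against the procedure.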

  10. Interim reliability evaluation program: analysis of the Arkansas Nuclear One. Unit 1 Nuclear Power Plant

    International Nuclear Information System (INIS)

    Kolb, G.J.; Kunsman, D.M.; Bell, B.J.

    1982-06-01

    This report presents the results of the analysis of the Arkansas Nuclear One (ANO) Unit 1 nuclear power plant, performed as part of the Interim Reliability Evaluation Program (IREP). The IREP has several objectives, two of which are achieved by the analysis presented in this report. These objectives are: (1) the identification, in a preliminary way, of those accident sequences which are expected to dominate the public health and safety risks; and (2) the development of state-of-the-art plant system models which can be used as a foundation for subsequent, more intensive applications of probabilistic risk assessment. The primary methodological tools used in the analysis were event trees and fault trees. These tools were used to study core melt accidents initiated by loss of coolant accidents (LOCAs) of six different break size ranges and eight different types of transients.

  11. Insights from the interim reliability evaluation program pertinent to reactor safety issues

    International Nuclear Information System (INIS)

    Carlson, D.D.

    1983-01-01

    The Interim Reliability Evaluation Program (IREP) consisted of concurrent probabilistic analyses of four operating nuclear power plants. This paper presents an integrated view of the results of the analyses, drawing insights pertinent to reactor safety. The importance to risk of accident sequences initiated by transients and small loss-of-coolant accidents was confirmed. Support systems were found to contribute significantly to the sets of dominant accident sequences, either due to single failures which could disable one or more mitigating systems or due to their initiating plant transients. Human errors in response to accidents were also important risk contributors. Consideration of operator recovery actions influences accident sequence frequency estimates, the list of accident sequences dominating core melt, and the set of dominant risk contributors. Accidents involving station blackout, reactor coolant pump seal leaks and ruptures, and loss-of-coolant accidents requiring manual initiation of coolant injection were found to be risk significant.

  12. A study on the reliability evaluation of shot peened aluminium alloy using accelerated life test

    International Nuclear Information System (INIS)

    Nam, Ji Hun; Cheong, Seong Kyun; Kang, Min Woo

    2006-01-01

    In this paper, the concept of the accelerated life test, a popular research field nowadays, is applied to shot-peened material. To predict room-temperature fatigue characteristics efficiently and accurately from high-temperature fatigue data, suitable accelerated models are investigated. An Ono-type rotary bending fatigue tester and a high-temperature chamber were used for the experiments. Room-temperature fatigue lives were predicted by applying the accelerated models and performing a reliability evaluation. Room-temperature fatigue tests were then carried out to check the accuracy of the predicted data, and the adequate accelerated life test model was identified by comparing the errors. In the experiments, the Arrhenius model predicted the fatigue limit with an error of about 5.45%, whereas the inverse power law gave an error of about 1.36%, showing that the inverse power law describes the temperature-life relation of shot-peened material well.
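    Both candidate models become straight lines after a log transform, which is all the room-temperature extrapolation requires. A minimal sketch with invented accelerated-test data (not the paper's measurements):

    ```python
    import numpy as np

    # synthetic accelerated-test data: fatigue life (cycles) at elevated temperatures (K)
    T = np.array([373.0, 423.0, 473.0, 523.0])
    life = np.array([2.0e6, 9.0e5, 4.5e5, 2.5e5])

    # Arrhenius model: ln(life) = a + b / T       (linear in 1/T)
    a_arr, b_arr = np.polyfit(1.0 / T, np.log(life), 1)[::-1]
    # Inverse power law: ln(life) = c + n * ln(T) (linear in ln T, n < 0)
    c_ipl, n_ipl = np.polyfit(np.log(T), np.log(life), 1)[::-1]

    T_room = 293.0
    print("Arrhenius at room temp    :", np.exp(a_arr + b_arr / T_room))
    print("Inverse power at room temp:", np.exp(c_ipl + n_ipl * np.log(T_room)))
    ```

    Comparing each extrapolation with measured room-temperature lives, as the study does, is what yields the reported error percentages.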

  13. Economic evaluation of reliability-centred maintenance (RCM): an electricity transmission industry perspective

    International Nuclear Information System (INIS)

    Bowler, D.J.; Primrose, P.L.; Leonard, R.

    1995-01-01

    Traditional approaches to appraising the introduction of reliability centred maintenance (RCM) are shown to exhibit severe limitations. In particular, the economic implications surrounding its adoption are repeatedly mis-stated, with the consequence that organisations may be investing in unprofitable RCM ventures. Previously quoted benefits are examined and, contrary to established opinion, it is shown that these 'generalised' statements, once redeemed, are able to be quantified. The paper then proceeds to describe a financial methodology, developed by NGC and UMIST, by which the introduction of RCM can be evaluated. Moreover, it shows that, by regarding RCM as an investment decision, rather than an 'act of faith', the economic viability of a potential application can be determined before vital resources are committed. Finally, it is demonstrated that when the methodology is applied within the context of the electricity transmission industry, the economic case underlying the adoption of RCM can be realistically appraised. (author)

  14. A Reliable Method for the Evaluation of the Anaphylactoid Reaction Caused by Injectable Drugs

    Directory of Open Access Journals (Sweden)

    Fang Wang

    2016-10-01

    Adverse reactions to injectable drugs usually occur at first administration and are closely associated with the dosage and speed of injection. This phenomenon is correlated with the anaphylactoid reaction. However, study methods based on antigen detection have still not gained wide acceptance, and single physiological indicators cannot differentiate anaphylactoid reactions from allergic reactions and inflammatory reactions. In this study, a reliable method for the evaluation of anaphylactoid reactions caused by injectable drugs was established using multiple physiological indicators. We used compound 48/80, ovalbumin and endotoxin as the sensitization agents to induce anaphylactoid, allergic and inflammatory reactions. Different experimental animals (guinea pig and nude rat), different modes of administration (intramuscular, intravenous and intraperitoneal injection) and different times (15 min, 30 min and 60 min) were evaluated to optimize the study protocol. The results showed that the optimal way to achieve sensitization involved treating guinea pigs with the different agents by intravenous injection for 30 min. Further, seven related humoral factors, including 5-HT, SC5b-9, Bb, C4d, IL-6, C3a and histamine, were detected by HPLC analysis and ELISA assay to determine their expression levels. The results showed that five of them, including 5-HT, SC5b-9, Bb, C4d and IL-6, displayed significant differences between anaphylactoid, allergic and inflammatory reactions, which indicated that their combination could be used to distinguish these three reactions. Different injectable drugs were then used to verify this method, and the chosen indicators exhibited good correlation with the anaphylactoid reaction, which indicated that the established method was both practical and reliable. Our research provides a feasible method for the diagnosis of the serious adverse reactions caused by injectable drugs which

  15. Reliability and safety of functional capacity evaluation in patients with whiplash associated disorders.

    Science.gov (United States)

    Trippolini, M A; Reneman, M F; Jansen, B; Dijkstra, P U; Geertzen, J H B

    2013-09-01

    Whiplash-associated disorders (WAD) are a burden for both individuals and society. It is recommended that patients with WAD at risk of chronification be evaluated to enhance rehabilitation and promote an early return to work. In patients with low back pain (LBP), functional capacity evaluation (FCE) contributes to clinical decisions regarding fitness for work. FCE should have demonstrated sufficient clinimetric properties; the reliability and safety of FCE for patients with WAD are unknown. Thirty-two participants (11 females and 21 males; mean age 39.6 years) with WAD (Grade I or II) were included. The FCE consisted of 12 tests, including material handling, hand grip strength, repetitive arm movements, static arm activities, walking speed, and a 3 min step test. Overall FCE duration was 60 min, and the test-retest interval was 7 days. Intraclass correlation coefficients (model 1) (ICCs) and limits of agreement (LoA) were calculated. Safety was assessed by a Pain Response Questionnaire, observation criteria and heart rate monitoring. ICCs ranged between 0.57 (3 min step test) and 0.96 (short two-handed carry). LoA relative to mean performance ranged between 15% (50 m walking test) and 57% (lifting waist to overhead). Pain reactions after the WAD FCE decreased within days. Observations and heart rate measurements fell within the safety criteria. The reliability of the WAD FCE was moderate in two tests, good in five tests and excellent in five tests. Safety criteria were fulfilled. Interpretation at the patient level should be performed with care because the LoA were substantial.
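    The two statistics used here, the one-way ICC and Bland-Altman limits of agreement, are short computations. A sketch on synthetic test-retest data (32 subjects to mirror the study; everything else illustrative):

    ```python
    import numpy as np

    def icc_oneway(test, retest):
        """ICC (model 1, one-way random effects) for two sessions per subject."""
        data = np.column_stack([test, retest])
        n, k = data.shape
        grand = data.mean()
        msb = k * ((data.mean(axis=1) - grand) ** 2).sum() / (n - 1)
        msw = ((data - data.mean(axis=1, keepdims=True)) ** 2).sum() / (n * (k - 1))
        return (msb - msw) / (msb + (k - 1) * msw)

    def limits_of_agreement(test, retest):
        """Bland-Altman 95% limits of agreement for session differences."""
        d = retest - test
        half = 1.96 * d.std(ddof=1)
        return d.mean() - half, d.mean() + half

    rng = np.random.default_rng(3)
    true = rng.normal(50.0, 10.0, 32)        # e.g., lifting performance (kg)
    t1 = true + rng.normal(0.0, 3.0, 32)     # session 1 with measurement error
    t2 = true + rng.normal(0.0, 3.0, 32)     # session 2, one week later
    print(icc_oneway(t1, t2), limits_of_agreement(t1, t2))
    ```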

  16. Reliability of candida skin test in the evaluation of T-cell function in ...

    African Journals Online (AJOL)

    Ehab

    2017-01-23

    Jan 23, 2017 ... considered generally reliable under the age of 1 year ... We sought to investigate the reliability of manually ... Conclusion: Candida intradermal test is a cost-effective ...

  17. Probabilistic evaluation of design S-N curve and reliability assessment of ASME code-based evaluation

    International Nuclear Information System (INIS)

    Zhao Yongxiang

    1999-01-01

    A probabilistic approach for evaluating the design S-N curve and for assessing the reliability of the ASME code-based evaluation is presented on the basis of P-S-N curves built on the Langer S-N model. The P-S-N curves are estimated by a so-called general maximum likelihood method, which can handle virtual stress amplitude-crack initiation life data characterized by two random variables. Investigation of a set of virtual stress amplitude-crack initiation life (S-N) data for a 1Cr18Ni9Ti austenitic stainless steel welded joint reveals that the P-S-N curves give a good prediction of the scatter of the S-N data. The probabilistic evaluation of the design S-N curve with 0.9999 survival probability takes various uncertainties, beyond the scatter of the S-N data, into account to an appropriate extent. The ASME code-based evaluation with a reduction factor of 20 on the mean life is much more conservative than that with a reduction factor of 2 on the stress amplitude. For the latter, the evaluation at a virtual stress amplitude of 666.61 MPa is equivalent to a survival probability of 0.999522, and at 2092.18 MPa to a survival probability of 0.9999999995. This means that the evaluation may be non-conservative at low loading levels and, in contrast, too conservative at high loading levels. The cause is that the reduction factors are constants and cannot take into account the general observation that the scatter of the life data increases as the loading level decreases. This indicates that it is necessary to apply the probabilistic approach to the evaluation of the design S-N curve.

  18. Investigation of animal and algal bioassays for reliable saxitoxin ecotoxicity and cytotoxicity risk evaluation.

    Science.gov (United States)

    Perreault, François; Matias, Marcelo Seleme; Melegari, Silvia Pedroso; Pinto, Catia Regina Silva de Carvalho; Creppy, Edmond Ekué; Popovic, Radovan; Matias, William Gerson

    2011-05-01

    Contamination of water bodies by saxitoxin can result in various toxic effects in aquatic organisms. Saxitoxin contamination has also been shown to be a threat to human health in several reported cases, even resulting in death. In this study, we evaluated the sensitivity of animal (Neuro-2A) and algal (Chlamydomonas reinhardtii) bioassays to saxitoxin. Neuro-2A cells were found to be sensitive to saxitoxin, as shown by a 24 h EC50 value of 1.5 nM obtained using a cell viability assay. Conversely, no saxitoxin effect was found in any of the algal biomarkers evaluated over the concentration range tested (2-128 nM). These results indicate that saxitoxin may induce toxic effects in animal and human populations at concentrations where phytoplankton communities are not affected. Therefore, when evaluating saxitoxin (STX) toxicity risk, algal bioassays do not appear to be reliable indicators and should always be conducted in combination with animal bioassays. Copyright © 2011 Elsevier Inc. All rights reserved.

  19. Systems analysis programs for hands-on integrated reliability evaluations (SAPHIRE), Version 5.0

    International Nuclear Information System (INIS)

    Russell, K.D.; Kvarfordt, K.J.; Hoffman, C.L.

    1995-10-01

    The Systems Analysis Programs for Hands-on Integrated Reliability Evaluations (SAPHIRE) refers to a set of several microcomputer programs that were developed to create and analyze probabilistic risk assessments (PRAs), primarily for nuclear power plants. The Graphical Evaluation Module (GEM) is a special application tool designed for the evaluation of operational occurrences using the Accident Sequence Precursor (ASP) program methods. GEM provides the capability for an analyst to quickly and easily perform conditional core damage probability (CCDP) calculations. The analyst can then use the CCDP calculations to determine whether the occurrence of an initiating event or a condition adversely impacts safety. It uses models and data developed in SAPHIRE specifically for the ASP program. GEM requires more data than is normally provided in SAPHIRE and will not perform properly with other models or databases. This is the first release of GEM, and its developers welcome user comments and feedback that will generate ideas for improvements to future versions. GEM is designated as version 5.0 to track the GEM codes along with the other SAPHIRE codes, as GEM relies on the same shared database structure.
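    The CCDP idea that GEM automates can be illustrated with a toy core-damage model: quantify the minimal cut sets once with nominal probabilities, then again with an observed failed component set to probability one. All names, probabilities and cut sets below are hypothetical, not SAPHIRE/ASP models:

    ```python
    import math

    # hypothetical basic events and their nominal failure probabilities
    base_p = {"HPI": 1e-3, "LPI": 5e-4, "DG": 2e-2, "OP": 1e-2}
    # hypothetical minimal cut sets leading to core damage given an initiator
    cut_sets = [["HPI", "LPI"], ["DG", "OP"]]

    def ccdp(p):
        """Rare-event approximation: sum over cut sets of event-probability products."""
        return sum(math.prod(p[e] for e in cs) for cs in cut_sets)

    print(ccdp(base_p))                   # nominal conditional core damage probability
    print(ccdp({**base_p, "DG": 1.0}))    # condition: diesel generator known failed
    ```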

  20. Evaluating North American Electric Grid Reliability Using the Barabasi-Albert Network Model

    OpenAIRE

    Chassin, David P.; Posse, Christian

    2004-01-01

    The reliability of electric transmission systems is examined using a scale-free model of network structure and failure propagation. The topologies of the North American eastern and western electric networks are analyzed to estimate their reliability based on the Barabasi-Albert network model. A commonly used power system reliability index is computed using a simple failure propagation model. The results are compared to the values of power system reliability indices previously obtained using s...
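    A toy version of this kind of analysis is straightforward with a scale-free graph generator. The sketch builds a Barabasi-Albert topology and measures a crude connectivity index after removing the largest hub; the sizes and the index are illustrative stand-ins for the paper's power-system reliability index:

    ```python
    import networkx as nx

    G = nx.barabasi_albert_graph(n=300, m=2, seed=42)   # scale-free topology

    # toy failure propagation: knock out the highest-degree node (a "hub")
    hub = max(G.degree, key=lambda kv: kv[1])[0]
    G.remove_node(hub)

    # fraction of nodes remaining in the giant component as a crude reliability proxy
    largest = max(nx.connected_components(G), key=len)
    print(len(largest) / 300)
    ```

    Scale-free networks are robust to random failures but fragile to targeted hub removal, which is the structural property such grid studies probe.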

  1. Performance of genotype imputation for low frequency and rare variants from the 1000 genomes.

    Science.gov (United States)

    Zheng, Hou-Feng; Rong, Jing-Jing; Liu, Ming; Han, Fang; Zhang, Xing-Wei; Richards, J Brent; Wang, Li

    2015-01-01

    Genotype imputation is now routinely applied in genome-wide association studies (GWAS) and meta-analyses. However, most imputations have been run using HapMap samples as reference, and imputation of low frequency and rare variants (minor allele frequency (MAF) below 5%) has been less thoroughly assessed; larger reference panels (such as the 1000 Genomes panel) are now available to facilitate imputation of these variants. Therefore, in order to estimate the performance of low frequency and rare variant imputation, we imputed 153 individuals, each of whom had 3 different genotype array datasets (317k, 610k and 1 million SNPs), to three different reference panels: the 1000 Genomes pilot March 2010 release (1KGpilot), the 1000 Genomes interim August 2010 release (1KGinterim), and the 1000 Genomes phase1 November 2010 and May 2011 release (1KGphase1), using IMPUTE version 2. The differences between these three releases of the 1000 Genomes data are the sample size, ancestry diversity, number of variants and their frequency spectrum. We found that both the reference panel and the GWAS chip density affect the imputation of low frequency and rare variants. 1KGphase1 outperformed the other 2 panels, with a higher concordance rate, a higher proportion of well-imputed variants (info>0.4) and a higher mean info score in each MAF bin. Similarly, the 1M chip array outperformed the 610K and 317K chips. However, for very rare variants (MAF ≤ 0.3%), only 0-1% of the variants were well imputed. We conclude that the imputation of low frequency and rare variants improves with larger reference panels and higher density of genome-wide genotyping arrays. Yet, despite a large reference panel size and dense genotyping density, very rare variants remain difficult to impute.

  2. FORENSIC-CLINICAL INTERVIEW: RELIABILITY AND VALIDITY FOR THE EVALUATION OF PSYCHOLOGICAL INJURY

    Directory of Open Access Journals (Sweden)

    Francisca Fariña

    2013-01-01

    Forensic evaluation of psychological injury involves a multimethod approach, i.e., a psychometric instrument, normally the MMPI-2, and a clinical interview. As for the clinical interview, the traditional clinical interview (e.g., SCID) is not valid for forensic settings, as it does not fulfil the triple objective of forensic evaluation: diagnosis of psychological injury in terms of Post Traumatic Stress Disorder (PTSD), a differential diagnosis of feigning, and establishing a causal relationship between allegations of intimate partner violence (IPV) and psychological injury. To meet this requirement, Arce and Fariña (2001) created the forensic-clinical interview based on two techniques that do not contaminate the contents, i.e., reinstating the contexts and free recall, and a methodical categorical system of content analysis for the diagnosis of psychological injury and a differential diagnosis of feigning. The reliability and validity of the forensic-clinical interview designed for the forensic evaluation of psychological injury were assessed in 51 genuine cases of IPV and 54 mock victims of IPV, who were evaluated using a forensic-clinical interview and the MMPI-2. The results revealed that the forensic-clinical interview was a reliable instrument (α = .85 for diagnostic criteria of psychological injury, and α = .744 for feigning strategies). Moreover, the results corroborated the predictive validity (the diagnosis of PTSD was similar to the expected rate), the convergent validity (the diagnosis of PTSD in the interview strongly correlated with the Pk Scale of the MMPI-2), and discriminant validity (the diagnosis of PTSD in the interview did not correlate with the Pk Scale in feigners). The feigning strategies (differential diagnosis) also showed convergent validity (high correlation with the scales and indices of the MMPI-2 for the measure of feigning) and discriminant validity (no genuine victim was classified as a feigner).

  3. Systematic evaluation of the teaching qualities of Obstetrics and Gynecology faculty: reliability and validity of the SETQ tools.

    Directory of Open Access Journals (Sweden)

    Renée van der Leeuw

    BACKGROUND: The importance of effective clinical teaching for the quality of future patient care is globally understood. Due to recent changes in graduate medical education, new tools are needed to provide faculty with reliable and individualized feedback on their teaching qualities. This study validates two instruments underlying the System for Evaluation of Teaching Qualities (SETQ) aimed at measuring and improving the teaching qualities of obstetrics and gynecology faculty. METHODS AND FINDINGS: This cross-sectional multi-center questionnaire study was set in seven general teaching hospitals and two academic medical centers in the Netherlands. Seventy-seven residents and 114 faculty were invited to complete the SETQ instruments in the duration of one month from September 2008 to September 2009. To assess reliability and validity of the instruments, we used exploratory factor analysis, inter-item correlation, reliability coefficient alpha and inter-scale correlations. We also compared composite scales from factor analysis to global ratings. Finally, the number of residents' evaluations needed per faculty for reliable assessments was calculated. A total of 613 evaluations were completed by 66 residents (85.7% response rate). 99 faculty (86.8% response rate) participated in self-evaluation. Factor analysis yielded five scales with high reliability (Cronbach's alpha for residents and faculty, respectively): learning climate (0.86 and 0.75), professional attitude (0.89 and 0.81), communication of learning goals (0.89 and 0.82), evaluation of residents (0.87 and 0.79) and feedback (0.87 and 0.86). Item-total, inter-scale and scale-global rating correlation coefficients were significant (P<0.01). Four to six residents' evaluations are needed per faculty (reliability coefficient 0.60-0.80). CONCLUSIONS: Both SETQ instruments were found reliable and valid for evaluating teaching qualities of obstetrics and gynecology faculty. Future research should examine improvement of

  4. Assessing and comparison of different machine learning methods in parent-offspring trios for genotype imputation.

    Science.gov (United States)

    Mikhchi, Abbas; Honarvar, Mahmood; Kashan, Nasser Emam Jomeh; Aminafshar, Mehdi

    2016-06-21

    Genotype imputation is an important tool for the prediction of unknown genotypes for both unrelated individuals and parent-offspring trios. Several imputation methods are available and can either employ universal machine learning methods or deploy algorithms dedicated to inferring missing genotypes. In this research, the performance of eight machine learning methods, Support Vector Machine, K-Nearest Neighbors, Extreme Learning Machine, Radial Basis Function, Random Forest, AdaBoost, LogitBoost, and TotalBoost, was compared in terms of imputation accuracy, computation time and the factors affecting imputation accuracy. The methods were applied to real and simulated datasets to impute the un-typed SNPs in parent-offspring trios. The tested methods show that imputation of parent-offspring trios can be accurate. Random Forest and Support Vector Machine were more accurate than the other machine learning methods, while TotalBoost performed slightly worse than the others. The running times differed between methods: the ELM was always the fastest algorithm, whereas, as the sample size increased, the RBF required a long imputation time. The tested methods can be an alternative for imputation of un-typed SNPs at low rates of missing data. However, it is recommended that other machine learning methods also be investigated for imputation. Copyright © 2016 Elsevier Ltd. All rights reserved.
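    A scaled-down version of such a comparison can be run with off-the-shelf classifiers by treating the un-typed SNP as the class label and typed flanking SNPs as features. The data below are synthetic (a crude stand-in for linkage disequilibrium), only three of the eight methods are shown, and cross-validated accuracy serves as the imputation-accuracy proxy:

    ```python
    import numpy as np
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.neighbors import KNeighborsClassifier
    from sklearn.svm import SVC
    from sklearn.model_selection import cross_val_score

    rng = np.random.default_rng(4)
    X = rng.integers(0, 3, size=(500, 20))             # typed SNPs coded 0/1/2
    y = np.clip(X[:, 9] + X[:, 11] - 1, 0, 2)          # un-typed SNP tied to neighbours

    for name, clf in [("RF",  RandomForestClassifier(n_estimators=200, random_state=0)),
                      ("SVM", SVC()),
                      ("kNN", KNeighborsClassifier(n_neighbors=10))]:
        acc = cross_val_score(clf, X, y, cv=5).mean()
        print(name, round(acc, 3))
    ```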

  5. Imputation of genotypes in Danish two-way crossbred pigs using low density panels

    DEFF Research Database (Denmark)

    Xiang, Tao; Christensen, Ole Fredslund; Legarra, Andres

    Genotype imputation is commonly used as an initial step of genomic selection. Studies on humans, plants and ruminants suggested many factors would affect the performance of imputation. However, studies rarely investigated pigs, especially crossbred pigs. In this study, different scenarios...... of imputation from 5K SNPs to 7K SNPs on Danish Landrace, Yorkshire, and crossbred Landrace-Yorkshire were compared. In conclusion, genotype imputation on crossbreds performs equally well as in purebreds, when parental breeds are used as the reference panel. When the size of reference is considerably large...... SNPs. This dataset will be analyzed for genomic selection in a future study...

  6. The Maastricht Clinical Teaching Questionnaire (MCTQ) as a valid and reliable instrument for the evaluation of clinical teachers.

    Science.gov (United States)

    Stalmeijer, Renée E; Dolmans, Diana H J M; Wolfhagen, Ineke H A P; Muijtjens, Arno M M; Scherpbier, Albert J J A

    2010-11-01

    Clinical teaching's importance in the medical curriculum has led to increased interest in its evaluation. Instruments for evaluating clinical teaching must be theory based, reliable, and valid. The Maastricht Clinical Teaching Questionnaire (MCTQ), based on the theoretical constructs of cognitive apprenticeship, elicits evaluations of individual clinical teachers' performance at the workplace. The authors investigated its construct validity and reliability, and they used the underlying factors to test a causal model representing effective clinical teaching. Between March 2007 and December 2008, the authors asked students who had completed clerkship rotations in different departments of two teaching hospitals to use the MCTQ to evaluate their clinical teachers. To establish construct validity, the authors performed a confirmatory factor analysis of the evaluation data, and they estimated reliability by calculating the generalizability coefficient and standard error measurement. Finally, to test a model of the factors, they fitted a structural linear model to the data. Confirmatory factor analysis yielded a five-factor model which fit the data well. Generalizability studies indicated that 7 to 10 student ratings can produce reliable ratings of individual teachers. The hypothesized structural linear model underlined the central roles played by modeling and coaching (mediated by articulation). The MCTQ is a valid and reliable evaluation instrument, thereby demonstrating the usefulness of the cognitive apprenticeship concept for clinical teaching during clerkships. Furthermore, a valuable model of clinical teaching emerged, highlighting modeling, coaching, and stimulating students' articulation and exploration as crucial to effective teaching at the clinical workplace.

  7. Reliability Calculations

    DEFF Research Database (Denmark)

    Petersen, Kurt Erling

    1986-01-01

    Risk and reliability analysis is increasingly being used in evaluations of plant safety and plant reliability. The analysis can be performed either during the design process or during operation, with the purpose of improving the safety or the reliability. Due to plant complexity and safety and availability requirements, sophisticated tools, which are flexible and efficient, are needed. Such tools have been developed in the last 20 years and have to be continuously refined to meet the growing requirements. Two different areas of application were analysed. In structural reliability, probabilistic approaches have been introduced in some cases for the calculation of the reliability of structures or components; a new computer program has been developed based upon numerical integration in several variables. In systems reliability, Monte Carlo simulation programs are used especially in analysis of very...
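    In structural reliability, the Monte Carlo approach mentioned here amounts to sampling the limit state and counting failures. A minimal sketch for the textbook case g = R - S with independent normal resistance and load, where the analytic answer is available as a check (all distribution parameters illustrative):

    ```python
    import numpy as np
    from scipy.stats import norm

    rng = np.random.default_rng(5)
    n = 1_000_000
    R = rng.normal(300.0, 30.0, n)      # resistance, e.g. MPa
    S = rng.normal(200.0, 40.0, n)      # load effect
    pf_mc = np.mean(R - S < 0.0)        # Monte Carlo failure probability

    # analytic check: reliability index beta = (muR - muS) / sqrt(sR^2 + sS^2)
    beta = (300.0 - 200.0) / np.hypot(30.0, 40.0)
    print(pf_mc, norm.cdf(-beta))       # both close to 0.0228
    ```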

  8. The water balance questionnaire: design, reliability and validity of a questionnaire to evaluate water balance in the general population.

    Science.gov (United States)

    Malisova, Olga; Bountziouka, Vassiliki; Panagiotakos, Demosthenes B; Zampelas, Antonis; Kapsokefalou, Maria

    2012-03-01

    There is a need to develop a questionnaire as a research tool for the evaluation of water balance in the general population. The water balance questionnaire (WBQ) was designed to evaluate water intake from fluid and solid foods and drinking water, and water loss from urine, faeces and sweat under sedentary conditions and physical activity. For validation purposes, the WBQ was administered to 40 apparently healthy participants aged 22-57 years (37.5% males). Hydration indices in urine (24 h volume, osmolality, specific gravity, pH, colour) were measured through established procedures. Furthermore, the questionnaire was administered twice to 175 subjects to evaluate its reliability. Kendall's τ-b and the Bland and Altman method were used to assess the questionnaire's validity and reliability. The proposed WBQ to assess water balance in healthy individuals was found to be valid and reliable, and it could thus be a useful tool in future projects that aim to evaluate water balance.

  9. Test-Retest Reliability and Practice Effects of the Stability Evaluation Test.

    Science.gov (United States)

    Williams, Richelle M; Corvo, Matthew A; Lam, Kenneth C; Williams, Travis A; Gilmer, Lesley K; McLeod, Tamara C Valovich

    2017-01-17

    Postural control plays an essential role in concussion evaluation. The Stability Evaluation Test (SET) aims to objectively analyze postural control by measuring sway velocity on the NeuroCom VSR portable force platform (Natus, San Carlos, CA). The purpose was to assess the test-retest reliability and practice effects of the SET protocol in a cohort design in a research laboratory. Fifty healthy adults (males=20, females=30, age=25.30±3.60 years, height=166.60±12.80 cm, mass=68.80±13.90 kg) participated. All participants completed four trials of the SET. Each trial consisted of six 20-second balance tests with eyes closed, under the following conditions: double-leg firm (DFi), single-leg firm (SFi), tandem firm (TFi), double-leg foam (DFo), single-leg foam (SFo), and tandem foam (TFo). Each trial was separated by a 5-minute seated rest period. The dependent variable was sway velocity (deg/sec), with lower values indicating better balance. Sway velocity was recorded for each of the six conditions as well as a composite score for each trial. Test-retest reliability was analyzed across the four trials with intraclass correlation coefficients, and practice effects were analyzed with repeated measures analysis of variance, followed by Tukey post-hoc comparisons for any significant main effects. Test-retest reliability values were good to excellent: DFi (ICC=0.88; 95%CI: 0.81, 0.92), SFi (ICC=0.75; 95%CI: 0.61, 0.85), TFi (ICC=0.84; 95%CI: 0.75, 0.90), DFo (ICC=0.83; 95%CI: 0.74, 0.90), SFo (ICC=0.82; 95%CI: 0.72, 0.89), TFo (ICC=0.81; 95%CI: 0.69, 0.88), and composite score (ICC=0.93; 95%CI: 0.88, 0.95). Significant practice effects were found; the SET nonetheless demonstrates good to excellent test-retest reliability for the assessment of postural control in healthy adults. Due to the practice effects noted, a familiarization session is recommended (i.e., all 6 conditions) prior to recording the data. Future studies should evaluate injured patients to determine meaningful change scores during various injuries.

  10. Evaluation of the Validity and Reliability of the Chinese Healthy Eating Index

    Directory of Open Access Journals (Sweden)

    Ya-Qun Yuan

    2018-01-01

    The Chinese Healthy Eating Index (CHEI) is a measuring instrument of diet quality in accordance with the Dietary Guidelines for Chinese (DGC-2016). The objective of the study was to evaluate the validity and reliability of the CHEI. Data from 12,473 adults from the China Health and Nutrition Survey (CHNS-2011), including 3-day-24-h dietary recalls, were used in this study. The CHEI was assessed by four exemplary menus developed by the DGC-2016, the general linear models, the independent t-test and the Mann-Whitney U-test, the Spearman's correlation analysis, the principal components analysis (PCA), the Cronbach's coefficient, and the Pearson correlation with nutrient intakes. A higher CHEI score was linked with lower exposure to known risk factors of Chinese diets. The CHEI scored nearly perfect for exemplary menus for adult men (99.8), adult women (99.7), and the healthy elderly (99.1), but not for young children (91.2). The CHEI was able to distinguish the difference in diet quality between smokers and non-smokers (P < 0.0001), people with higher and lower education levels (P < 0.0001), and people living in urban and rural areas (P < 0.0001). Low correlations with energy intake for the CHEI total and component scores (|r| < 0.34, P < 0.01) supported that the index assessed diet quality independently of diet quantity. The PCA indicated that underlying multiple dimensions compose the CHEI, and Cronbach's coefficient α was 0.22. The components of dairy, fruits and cooking oils had the greatest impact on the total score. People with a higher CHEI score had not only a higher absolute intake of nutrients (P < 0.001), but also a more nutrient-dense diet (P < 0.001). Our findings support the validity and reliability of the CHEI when using the 3-day-24-h recalls.
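    Cronbach's α, the internal-consistency coefficient reported here, is a one-formula computation; low values such as the CHEI's 0.22 are unsurprising for an index that deliberately spans multiple dietary dimensions, since α assumes unidimensionality. A generic sketch on synthetic item scores (not CHNS data):

    ```python
    import numpy as np

    def cronbach_alpha(items):
        """Cronbach's alpha for an (n_subjects x n_items) score matrix."""
        items = np.asarray(items, dtype=float)
        k = items.shape[1]
        item_var = items.var(axis=0, ddof=1).sum()
        total_var = items.sum(axis=1).var(ddof=1)
        return k / (k - 1) * (1.0 - item_var / total_var)

    rng = np.random.default_rng(6)
    trait = rng.normal(size=200)                              # one latent dimension
    scores = trait[:, None] + rng.normal(scale=1.5, size=(200, 12))
    print(round(cronbach_alpha(scores), 2))                   # fairly high alpha
    ```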

  11. Reliability and cost/worth evaluation of generating systems utilizing wind and solar energy

    Science.gov (United States)

    Bagen

    The utilization of renewable energy resources such as wind and solar energy for electric power supply has received considerable attention in recent years due to adverse environmental impacts and fuel cost escalation associated with conventional generation. At the present time, wind and/or solar energy sources are utilized to generate electric power in many applications. Wind and solar energy will become important sources for power generation in the future because of their environmental, social and economic benefits, together with public support and government incentives. The wind and sunlight are, however, unstable and variable energy sources, and behave far differently than conventional sources. Energy storage systems are, therefore, often required to smooth the fluctuating nature of the energy conversion system especially in small isolated applications. The research work presented in this thesis is focused on the development and application of reliability and economic benefits assessment associated with incorporating wind energy, solar energy and energy storage in power generating systems. A probabilistic approach using sequential Monte Carlo simulation was employed in this research and a number of analyses were conducted with regards to the adequacy and economic assessment of generation systems containing wind energy, solar energy and energy storage. The evaluation models and techniques incorporate risk index distributions and different operating strategies associated with diesel generation in small isolated systems. Deterministic and probabilistic techniques are combined in this thesis using a system well-being approach to provide useful adequacy indices for small isolated systems that include renewable energy and energy storage. The concepts presented and examples illustrated in this thesis will help power system planners and utility managers to assess the reliability and economic benefits of utilizing wind energy conversion systems, solar energy conversion

  12. A hybrid load flow and event driven simulation approach to multi-state system reliability evaluation

    International Nuclear Information System (INIS)

    George-Williams, Hindolo; Patelli, Edoardo

    2016-01-01

    Structural complexity of systems, coupled with their multi-state characteristics, renders their reliability and availability evaluation difficult. Notwithstanding the emergence of various techniques dedicated to complex multi-state system analysis, simulation remains the only approach applicable to realistic systems. However, most simulation algorithms are either system specific or limited to simple systems since they require enumerating all possible system states, defining the cut-sets associated with each state and monitoring their occurrence. In addition to being extremely tedious for large complex systems, state enumeration and cut-set definition require a detailed understanding of the system's failure mechanism. In this paper, a simple and generally applicable simulation approach, enhanced for multi-state systems of any topology is presented. Here, each component is defined as a Semi-Markov stochastic process and via discrete-event simulation, the operation of the system is mimicked. The principles of flow conservation are invoked to determine flow across the system for every performance level change of its components using the interior-point algorithm. This eliminates the need for cut-set definition and overcomes the limitations of existing techniques. The methodology can also be exploited to account for effects of transmission efficiency and loading restrictions of components on system reliability and performance. The principles and algorithms developed are applied to two numerical examples to demonstrate their applicability. - Highlights: • A discrete event simulation model based on load flow principles. • Model does not require system path or cut sets. • Applicable to binary and multi-state systems of any topology. • Supports multiple output systems with competing demand. • Model is intuitive and generally applicable.
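    The flow-conservation step of such an approach can be emulated with a standard max-flow solver: after each simulated component state change, re-solve for the deliverable flow and compare it with demand. The sketch uses networkx, whose solver is a preflow-push algorithm rather than the interior-point method named in the abstract; topology and capacities are invented:

    ```python
    import networkx as nx

    # capacitated network: source -> generating units -> load
    G = nx.DiGraph()
    G.add_edge("src", "unit1", capacity=60)    # unit 1 in a derated state
    G.add_edge("src", "unit2", capacity=40)
    G.add_edge("unit1", "load", capacity=100)  # transmission links
    G.add_edge("unit2", "load", capacity=100)

    flow_value, _ = nx.maximum_flow(G, "src", "load")
    demand = 80
    print(flow_value >= demand)   # system success if deliverable flow meets demand
    # an event-driven simulation would update a capacity and re-solve at each event
    ```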

  13. [Physical activity patterns of school adolescents: Validity, reliability and percentiles proposal for their evaluation].

    Science.gov (United States)

    Cossío Bolaños, Marco; Méndez Cornejo, Jorge; Luarte Rocha, Cristian; Vargas Vitoria, Rodrigo; Canqui Flores, Bernabé; Gomez Campos, Rossana

    2017-02-01

    Regular physical activity (PA) during childhood and adolescence is important for the prevention of non-communicable diseases and their risk factors. The objectives were to validate a questionnaire for measuring patterns of PA, to verify its reliability, to compare levels of PA aligned by chronological and biological age, and to develop percentile curves to assess PA levels as a function of biological maturation. A descriptive cross-sectional study was performed on a non-probabilistic quota sample of 3,176 Chilean adolescents (1,685 males and 1,491 females) with an age range from 10.0 to 18.9 years. Weight, standing height and sitting height were measured. Biological age was determined through years from the peak growth rate, and chronological age in years. Body Mass Index was calculated and a PA survey was applied. The LMS method was used to develop percentiles. The values for the confirmatory analysis showed saturations between 0.517 and 0.653. The Kaiser-Meyer-Olkin (KMO) measure of adequacy was 0.879, with 70.8% of the variance explained. Cronbach's alpha values ranged from 0.81 to 0.86. There were differences between the genders when aligned by chronological age; there were no differences when aligned by biological age. Percentiles are proposed to classify the PA of adolescents of both genders according to biological age and sex. The questionnaire used was valid and reliable, and PA should be evaluated by biological age. These findings led to the development of percentiles to assess PA according to biological age and gender.
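    Turning smoothed LMS parameters into percentile values (and hence classification cut-offs by biological age) uses Cole's standard formula X = M(1 + L·S·z)^(1/L). A sketch with illustrative parameters for a single biological-age group, not the study's fitted values:

    ```python
    import numpy as np
    from scipy.stats import norm

    def lms_value(L, M, S, p):
        """Measurement value at percentile p under the LMS method."""
        z = norm.ppf(p)
        if abs(L) < 1e-8:                     # limiting case L -> 0
            return M * np.exp(S * z)
        return M * (1.0 + L * S * z) ** (1.0 / L)

    L, M, S = -0.8, 3.2, 0.25                 # skewness, median, coefficient of variation
    for p in (0.05, 0.50, 0.95):
        print(p, round(float(lms_value(L, M, S, p)), 2))
    ```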

  14. Reliability and Discriminative Ability of a New Method for Soccer Kicking Evaluation

    Science.gov (United States)

    Radman, Ivan; Wessner, Barbara; Bachl, Norbert; Ruzic, Lana; Hackl, Markus; Baca, Arnold; Markovic, Goran

    2016-01-01

    The study aimed to evaluate the test–retest reliability of a newly developed 356 Soccer Shooting Test (356-SST), and the discriminative ability of this test with respect to the soccer players' proficiency level and leg dominance. Sixty-six male soccer players, divided into three groups based on their proficiency level (amateur, n = 24; novice semi-professional, n = 18; and experienced semi-professional players, n = 24), performed 10 kicks following a two-step run up. Forty-eight of them repeated the test on a separate day. The following shooting variables were derived: ball velocity (BV; measured via radar gun), shooting accuracy (SA; average distance from the ball-entry point to the goal centre), and shooting quality (SQ; shooting accuracy divided by the time elapsed from hitting the ball to the point of entry). No systematic bias was evident in the selected shooting variables (SA: 1.98±0.65 vs. 2.00±0.63 m; BV: 24.6±2.3 vs. 24.5±1.9 m·s⁻¹; SQ: 2.92±1.0 vs. 2.93±1.0 m·s⁻¹; all p>0.05). The intra-class correlation coefficients were high (ICC = 0.70–0.88), and the coefficients of variation were low (CV = 5.3–5.4%). Finally, all three 356-SST variables identified, with adequate sensitivity, differences in soccer shooting ability with respect to the players' proficiency and leg dominance. The results suggest that the 356-SST is a reliable and sensitive test of specific shooting ability in men’s soccer. Future studies should test the validity of these findings in a fatigued state, as well as in other populations. PMID:26812247

  15. The reliability evaluation of reclaimed water reused in power plant project

    Science.gov (United States)

    Yang, Jie; Jia, Ru-sheng; Gao, Yu-lan; Wang, Wan-fen; Cao, Peng-qiang

    2017-12-01

    The reuse of reclaimed water has become one of the important measures for addressing the shortage of water resources in many cities, but there is no unified way to evaluate such projects. Taking the Wanneng power plant project in Huai city as an example, this study analyzed the reliability of wastewater reuse in terms of reclaimed water quality, the water quality of the sewage plant, the city's present sewage quantity, and the forecast reclaimed water yield; in particular, a correction to the actual operating flow rate of the sewage plant was necessary. The results showed that, despite fluctuations in inlet water quality, the outlet water quality of the sewage treatment plant is basically stable and can meet the requirements for circulating cooling water; however, suspended solids (SS) and total hardness in boiler water exceed the limits, so advanced treatment should be carried out. In addition, total sewage discharge will reach 13.91×10⁴ m³/d and 14.21×10⁴ m³/d, respectively, in the two planning horizon years of the project, exceeding the normal collection capacity of the sewage system (12.0×10⁴ m³/d). The reclaimed water yield can reach 10.74×10⁴ m³/d, which is greater than the 8.25×10⁴ m³/d actually needed by the power plant, so wastewater reuse from this sewage plant is feasible and reliable for the power plant from an engineering point of view.

  16. Exploring the validity and reliability of a questionnaire for evaluating veterinary clinical teachers' supervisory skills during clinical rotations

    NARCIS (Netherlands)

    Boerboom, T. B. B.; Dolmans, D. H. J. M.; Jaarsma, Debbie; Muijtjens, A. M. M.; Van Beukelen, P.; Scherpbier, A. J. J. A.

    2011-01-01

    Background: Feedback to aid teachers in improving their teaching requires validated evaluation instruments. When implementing an evaluation instrument in a different context, it is important to collect validity evidence from multiple sources. Aim: We examined the validity and reliability of the

  17. Reliability of Clinician Rated Physical Effort Determination During Functional Capacity Evaluation in Patients with Chronic Musculoskeletal Pain

    NARCIS (Netherlands)

    Trippolini, M. A.; Dijkstra, P. U.; Jansen, B.; Oesch, P.; Geertzen, J. H. B.; Reneman, M. F.

    Introduction Functional capacity evaluation (FCE) can be used to make clinical decisions regarding fitness-for-work. During FCE the evaluator attempts to assess the amount of physical effort of the patient. The aim of this study is to analyze the reliability of physical effort determination using

  18. Reliability and Validity of a Physical Capacity Evaluation Used to Assess Individuals with Intellectual Disabilities and Mental Illness

    Science.gov (United States)

    Jang, Yuh; Chang, Tzyh-Chyang; Lin, Keh-Chung

    2009-01-01

    Physical capacity evaluations (PCEs) are important and frequently offered services in work practice. This study investigates the reliability and validity of the National Taiwan University Hospital Physical Capacity Evaluation (NTUH PCE) on a sample of 149 participants consisted of three groups: 45 intellectual disability (ID), 56 mental illness…

  19. Mechanical reliability evaluation of alternate motors for use in a radioiodine air sampler

    International Nuclear Information System (INIS)

    Bird, S.K.; Huchton, R.L.; Motes, B.G.

    1984-03-01

    Detailed mechanical reliability studies of two alternate motors identified for use in the BNL Air Sampler were conducted. The two motor types were obtained from Minnesota Electric Technology, Incorporated (MET) and TCS Industries (TCSI). Planned testing included evaluation of motor lifetimes and motor operability under different conditions of temperature, relative humidity, simulated rainfall, and dusty air. The TCSI motors were not lifetime tested due to their poor performance during the temperature/relative humidity tests. While operation on alternating current was satisfactory, on direct current only one of five TCSI motors completed all environmental testing. The MET motors had average lifetimes of 47 hours, 97 hours, and 188 hours, respectively, and exhibited satisfactory operation under all environmental test conditions. The MET motor therefore appears to be the better candidate for use in the BNL Air Sampler. However, because of the relatively high cost of purchasing and incorporating the MET motor into the BNL Air Sampler System, it is recommended that commercial air sampler systems be evaluated for use instead of the BNL system

  20. Evaluation of pharmaceuticals in surface water: reliability of PECs compared to MECs.

    Science.gov (United States)

    Celle-Jeanton, Hélène; Schemberg, Dimitri; Mohammed, Nabaz; Huneau, Frédéric; Bertrand, Guillaume; Lavastre, Véronique; Le Coustumer, Philippe

    2014-12-01

    Because current analytical processes cannot measure all pharmaceutical molecules, and because sampling and analyzing PhACs is costly and time-consuming, models to calculate Predicted Environmental Concentrations (PECs) have been developed. However, a comparison between MECs and PECs that takes into account the methods of calculation, and particularly the parameters entering the calculation (consumption data, pharmacokinetic parameters, elimination rates in STPs and in the environment), is necessary to assess the validity of PECs. MEC variations of sixteen target PhACs [acetaminophen (ACE), amlodipine (AML), atenolol (ATE), caffeine (CAF), carbamazepine (CAR), doxycycline (DOX), epoxycarbamazepine (EPO), fluvoxamine (FLU), furosemide (FUR), hydrochlorothiazide (HYD), ifosfamide (IFO), losartan (LOS), pravastatin (PRA), progesterone (PROG), ramipril (RAM), trimetazidine (TRI)] were evaluated during one hydrological cycle, from October 2011 to October 2012, and compared to PECs calculated using an adaptation of the models proposed by Heberer and Feldmann (2005) and EMEA (2006). Comparison of PECs and MECs was achieved for six molecules: ATE, CAR, DOX, FUR, HYD and PRA. DOX, FUR and HYD show differences between PECs and MECs on an annual basis, but their temporal evolutions follow the same trends; PEC evaluation for these PhACs is therefore possible but needs some adjustment of consumption patterns, pharmacokinetic parameters and/or (bio)degradation mechanisms. ATE, CAR and PRA are well modeled; their PECs can be used as reliable estimates of concentrations without reservation. Copyright © 2014 Elsevier Ltd. All rights reserved.

  1. Is the Atopy Patch Test Reliable in the Evaluation of Food Allergy-Related Atopic Dermatitis?

    Science.gov (United States)

    Mansouri, Mahboubeh; Rafiee, Elham; Darougar, Sepideh; Mesdaghi, Mehrnaz; Chavoshzadeh, Zahra

    2018-01-01

    Aeroallergens and food allergens are found to be relevant in atopic dermatitis. The atopy patch test (APT) can help to detect food allergies in children with atopic dermatitis. This study evaluates whether the APT is a valuable tool in the diagnostic workup of children with food allergy-related atopic dermatitis. Forty-two children between 6 months and 12 years of age were selected at the Mofid Children's Hospital. Atopic dermatitis was diagnosed, and the severity of the disease was determined. At the test visit, the patients underwent a skin prick test (SPT), APT, and serum IgE level measurement for cow's milk, egg yolk, egg white, wheat, and soy. We found a sensitivity of 91.7%, a specificity of 72.7%, a positive predictive value (PPV) of 88%, a negative predictive value (NPV) of 80%, and an accuracy of 85.7% for the APT performed for cow's milk. The APT performed for egg yolk had a sensitivity and an NPV of 100%, while the same parameters obtained with egg white were 84.2% and 75%, respectively. The sensitivity, specificity, and NPV of the APT for wheat were 100%, 75%, and 100%, respectively. The sensitivity, PPV, and NPV of the APT for soy were 87.5%, 70%, and 87.5%, respectively. Our data demonstrate that the APT is a reliable diagnostic tool to evaluate suspected food allergy-related skin symptoms in childhood and infancy. © 2018 S. Karger AG, Basel.

  2. Reliability of Baropodometry on the Evaluation of Plantar Load Distribution: A Transversal Study.

    Science.gov (United States)

    Baumfeld, Daniel; Baumfeld, Tiago; da Rocha, Romário Lopes; Macedo, Benjamim; Raduan, Fernando; Zambelli, Roberto; Alves Silva, Thiago Alexandre; Nery, Caio

    2017-01-01

    Introduction. Baropodometry is used to measure the load distribution on the feet during rest and walking. The aim of this study was to evaluate changes in plantar pressure distribution due to a period of working and due to stretching exercises of the posterior muscular chain. Methods. In this transversal study, all participants underwent baropodometric evaluation at two different times: before and after the working period, and before and after stretching the muscles of the posterior chain. Results. We analyzed a total of 54 feet of 27 participants. After the working period, there was an average increase in forefoot pressure of 0.16 kgf/cm² and an average decrease in hindfoot pressure of 0.17 kgf/cm². After stretching the posterior muscular chain, the average increase in forefoot pressure was 0.56 kgf/cm² and the average decrease in hindfoot pressure was 0.56 kgf/cm². These changes were not statistically significant. Discussion. It has been reported that tension in the Achilles tendon transfers load from the hindfoot to the forefoot. In our study, no significant variation in the distribution of plantar pressure was observed. It can be inferred that baropodometry is a reliable instrument for determining plantar pressure, regardless of the tension of the posterior chain muscles.

  3. Reliability assessment and correlation analysis of evaluating orthodontic treatment outcome in Chinese patients.

    Science.gov (United States)

    Song, Guang-Ying; Zhao, Zhi-He; Ding, Yin; Bai, Yu-Xing; Wang, Lin; He, Hong; Shen, Gang; Li, Wei-Ran; Baumrind, Sheldon; Geng, Zhi; Xu, Tian-Min

    2014-03-01

    This study aimed to assess the reliability of experienced Chinese orthodontists in evaluating treatment outcome and to determine the correlations between three diagnostic information sources. Sixty-nine experienced Chinese orthodontic specialists each evaluated the outcome of orthodontic treatment of 108 Chinese patients. Three different information sources, namely study casts (SC), lateral cephalometric X-ray images (LX) and facial photographs (PH), were generated at the end of treatment for 108 patients selected randomly from six orthodontic treatment centers throughout China. Six different assessments of treatment outcome were made by each orthodontist using data from the three information sources separately and in combination. Each assessment included both ranking and grading for each patient. The rankings of each of the 69 judges for the 108 patients were correlated with the rankings of each of the other judges, yielding 13,873 Spearman rs values ranging from -0.08 to +0.85. Of these, 90% were greater than 0.4, showing moderate-to-high consistency among the 69 orthodontists. In the combined evaluations, study casts were the most significant predictive component (R²=0.86, P<0.0001), while the inclusion of lateral cephalometric films and facial photographs also contributed to a more comprehensive assessment (R²=0.96, P<0.0001). Grading scores for SC+LX and SC+PH were highly significantly correlated with those for SC+LX+PH (r = 0.96 for SC+LX vs. SC+LX+PH; r = 0.97 for SC+PH vs. SC+LX+PH), showing that either SC+LX or SC+PH is an excellent substitute for the combined assessment using all three sources.

  4. An Appropriate Wind Model for Wind Integrated Power Systems Reliability Evaluation Considering Wind Speed Correlations

    Directory of Open Access Journals (Sweden)

    Rajesh Karki

    2013-02-01

    Adverse environmental impacts of carbon emissions are causing increasing public concern throughout the world. Electric energy generation from conventional energy sources is considered a major contributor to these harmful emissions, so high emphasis is being given to green alternatives such as wind and solar. Wind energy is perceived as a promising alternative, and the technology and its applications have undergone significant research and development over the past decade. As a result, many modern power systems include a significant portion of power generation from wind energy sources. The impact of wind generation on overall system performance increases substantially as wind penetration continues to rise to relatively high levels. It becomes increasingly important to accurately model wind behavior, the interaction among wind sources and with conventional sources, and the characteristics of the energy demand, in order to carry out a realistic evaluation of system reliability. Power systems with high wind penetration are often connected to multiple wind farms at different geographic locations. Wind speed correlations between the different wind farms largely affect the total wind power generation characteristics of such systems, and should therefore be an important parameter in the wind modeling process. This paper evaluates the effect of the correlation between multiple wind farms on the adequacy indices of wind-integrated systems. The paper also proposes a simple and appropriate probabilistic analytical model that incorporates wind correlations and can be used for adequacy evaluation of multiple wind-integrated systems.
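
    As an illustration of why such correlations matter, the following Python sketch samples correlated wind speeds for two farms with a Gaussian copula and Weibull marginals, then maps them through a crude power curve. This is not the paper's analytical model; every parameter (correlation, Weibull shape/scale, turbine ratings) is an assumption for demonstration.

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(0)
rho = 0.8                                     # assumed wind-speed correlation
z = rng.multivariate_normal([0, 0], [[1, rho], [rho, 1]], size=100_000)
u = norm.cdf(z)                               # Gaussian copula: correlated uniforms

shape, scale = 2.0, 8.0                       # assumed Weibull marginals (m/s)
v = scale * (-np.log(1 - u)) ** (1 / shape)   # inverse-CDF transform, per farm

def power(v, rated=2.0, v_in=3.0, v_r=12.0, v_out=25.0):
    # crude cubic power curve in MW (hypothetical turbine)
    frac = np.clip((v - v_in) / (v_r - v_in), 0.0, 1.0) ** 3
    return np.where((v < v_in) | (v > v_out), 0.0, frac * rated)

total = power(v[:, 0]) + power(v[:, 1])
print(f"P(total output = 0) = {(total == 0).mean():.3f}")
```

    With rho near 1 the farms' lulls coincide, so the probability of zero total output (and hence loss-of-load risk) is higher than an uncorrelated model would suggest.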

  5. Validity and reliability of a new instrument for the evaluation of dental collaboration in disabled people

    Directory of Open Access Journals (Sweden)

    Scilla Sparabombe

    2013-10-01

    Background: oral health in people with disabilities is an important topic nowadays. The psychological and behavioural problems of these people, their difficulties with environmental adaptations and the absence of any traditional communication determine the compliance needed for treatment. The aim of this work was to test the validity and reliability of an original questionnaire that could become an instrument for assessing the individual features of people with mental retardation and other developmental disabilities at the time of dental treatment. Methods: a questionnaire was created with standardised answers regarding four specific areas: neuropsychology, emotional-affect, autonomy and environmental resources. The questionnaire was completed by 63 patients from three different institutes (two rehabilitation institutes and an Institute of Dentistry for patients with special needs). To analyse the answers, each item was transformed into a numeric value: a value of 1 represented the minimum, while 4 represented full possession of the considered skills. A total of 17 variables were analysed with descriptive statistics and multivariate analysis. Internal consistency reliability was measured using Cronbach’s alpha, and an analysis of convergent/discriminant validity was provided. Results: all variables were positively correlated. The most significant were “guidance”, “communication”, “sociability”, “view”, “hearing” and “feeding”. Items like “self-control”, “equanimity”, “problematic behaviour”, “extroversion” and “autonomy” offered vague and less significant information in identifying the patient’s collaboration level. Variables like “evaluation by the compiler of the patient’s collaboration”, “previous dental experiences” and “attendant” were confirmed. Cronbach’s alpha was 0.77 (standardized result), which meets the a priori criterion of 0.90 ≥ alpha ≥ 0.70. Conclusions

  6. 26+ Year Old Photovoltaic Power Plant: Degradation and Reliability Evaluation of Crystalline Silicon Modules -- South Array

    Science.gov (United States)

    Olakonu, Kolapo

    As the use of photovoltaic (PV) modules in large power plants continues to increase globally, more studies on degradation, reliability, failure modes, and mechanisms of field-aged modules are needed to predict module life expectancy based on accelerated lifetime testing of PV modules. In this work, a 26+ year old PV power plant in Phoenix, Arizona has been evaluated for performance, reliability, and durability. The PV power plant, called Solar One, is owned and operated by John F. Long's homeowners association. It is a 200 kW dc power plant, rated at standard test conditions (STC), comprising 4,000 PV modules or frameless laminates in 100 panel groups (rated at 175 kW ac). The power plant is made of two center-tapped bipolar arrays, the north array and the south array. Due to the limited time frame to execute this large project, the work was performed by two masters students (Jonathan Belmont and Kolapo Olakonu) and the test results are presented in two masters theses; this thesis presents the results obtained on the south array and the other presents the results obtained on the north array. Each of the two arrays is made of four sub-arrays, the east sub-arrays (positive and negative polarities) and the west sub-arrays (positive and negative polarities), making up eight sub-arrays. The evaluation and analyses of the power plant included in this thesis consist of visual inspection, electrical performance measurements, and infrared thermography. The possible presence of potential induced degradation (PID) due to the potential difference between ground and strings was also investigated. Some installation practices were also studied and found to contribute to the power loss observed in this investigation. The power output measured in 2011 for all eight sub-arrays at STC is approximately 76 kWdc, representing a power loss of 62% (from 200 kW to 76 kW) over 26+ years. The 2011 measured power output for the four south sub-arrays at STC is 39 kWdc and represents a power

  7. Intra- and interobserver reliability of gray scale/dynamic range evaluation of ultrasonography using a standardized phantom

    International Nuclear Information System (INIS)

    Lee, Song; Choi, Joon Il; Park, Michael Yong; Yeo, Dong Myung; Byun, Jae Young; Jung, Seung Eun; Rha, Sung Eun; Oh, Soon Nam; Lee, Young Joon

    2014-01-01

    To evaluate the intra- and interobserver reliability of the gray scale/dynamic range phantom image evaluation of ultrasonography using a standardized phantom, and to assess the effect of interactive education on that reliability. Three radiologists (a resident and two board-certified radiologists with 2 and 7 years of experience in evaluating ultrasound phantom images) performed the gray scale/dynamic range test for an ultrasound machine using a standardized phantom. They scored the number of visible cylindrical structures of varying degrees of brightness and made a pass or fail decision. First, they scored 49 phantom images twice from a 2010 survey with limited knowledge of phantom images. They then underwent two hours of interactive education on the phantom images and scored another 91 phantom images from a 2011 survey twice. Intra- and interobserver reliability before and after the interactive education session were analyzed using kappa (κ) statistics. Before education, the κ-values for intraobserver reliability for the radiologist with 7 years of experience, the radiologist with 2 years of experience, and the resident were 0.386, 0.469, and 0.465, respectively. After education, the κ-values improved (0.823, 0.611, and 0.711, respectively). For interobserver reliability, the κ-values were also better after education for the 3 participants (0.067, 0.002, and 0.547 before education; 0.635, 0.667, and 0.616 after education, respectively). The intra- and interobserver reliability of the gray scale/dynamic range evaluation was fair to substantial. Interactive education can improve reliability. For more reliable results, double-checking of phantom images by multiple reviewers is recommended.
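
    For reference, the kappa statistic behind these values can be computed directly from two readings of the same images. A minimal Python sketch follows, with fabricated pass/fail calls (the study's weighted variants and actual data are not reproduced here):

```python
from collections import Counter

# two readers' pass/fail calls on the same phantom images (fabricated)
reader_a = ["pass", "pass", "fail", "pass", "fail", "pass", "fail", "fail"]
reader_b = ["pass", "fail", "fail", "pass", "fail", "pass", "pass", "fail"]

n = len(reader_a)
p_observed = sum(a == b for a, b in zip(reader_a, reader_b)) / n
ca, cb = Counter(reader_a), Counter(reader_b)
p_chance = sum(ca[k] * cb[k] for k in ("pass", "fail")) / n**2  # chance agreement
kappa = (p_observed - p_chance) / (1 - p_chance)
print(f"kappa = {kappa:.3f}")   # 0.500 for this toy data
```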

  8. Use of reliability analysis for the safety evaluation of technical facilities

    International Nuclear Information System (INIS)

    Balfanz, H.P.; Eggert, H.; Lindauer, E.

    1975-01-01

    Using examples from nuclear technology, the following is discussed: how efficient the present practical measures are for increasing reliability, which weak points can be recognized and what appears to be the most promising direction to take for improvements. The following are individually dealt with: 1) determination of the relevant parameters for the safety of a plant; 2) definition and fixing of reliability requirements; 3) process to prove the fulfilment of requirements; 4) measures to guarantee the reliability; 5) data feed-back to check and improve the reliability. (HP/LH) [de

  9. A study on the quantitative evaluation of the reliability for safety critical software using Bayesian belief nets

    International Nuclear Information System (INIS)

    Eom, H. S.; Jang, S. C.; Ha, J. J.

    2003-01-01

    Despite efforts to avoid undesirable risks, or at least to bring them under control, new risks that are highly difficult to manage continue to emerge from the use of new technologies, such as digital instrumentation and control (I and C) components in nuclear power plants. Whenever new risk issues have arisen, we have endeavored to find the most effective ways to reduce risk and to allocate limited resources accordingly. One of the major challenges is the reliability analysis of safety-critical software associated with digital safety systems. Although many activities, such as testing and verification and validation (V and V), are carried out in the software design stage, a process for quantitatively evaluating the reliability of safety-critical software has not yet been established, because conventional software reliability techniques are not well suited to digital safety systems. This paper focuses on the applicability of Bayesian Belief Net (BBN) techniques for quantitatively estimating the reliability of safety-critical software adopted in digital safety systems. A typical BBN model was constructed using the dedication process for Commercial-Off-The-Shelf (COTS) software installed by KAERI. In conclusion, the adoption of the BBN technique can facilitate the process of evaluating safety-critical software reliability in nuclear power plants, and provides useful information (e.g., 'what if' analysis) concerning software reliability from a practical viewpoint
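
    The appeal of a BBN here is that belief in software quality is updated from indirect evidence. A toy two-node illustration in Python follows; it is not KAERI's actual model, and all probabilities are invented:

```python
# Toy BBN fragment: the quality of V&V evidence updates belief in
# "software is fault-free". All probabilities are invented.
p_ok = 0.95                       # prior P(fault-free)
p_pass_given_ok = 0.99            # P(V&V passes | fault-free)
p_pass_given_bad = 0.40           # P(V&V passes | faulty): imperfect testing

p_pass = p_pass_given_ok * p_ok + p_pass_given_bad * (1 - p_ok)
posterior = p_pass_given_ok * p_ok / p_pass          # Bayes' rule
print(f"P(fault-free | V&V passed) = {posterior:.4f}")  # ~0.979
```

    A full BBN chains many such nodes (development quality, V&V coverage, test results), so 'what if' analysis amounts to changing one conditional table and re-propagating.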

  10. A questionnaire to evaluate the impact of chronic diseases: validated translation and Illness Effects Questionnaire (IEQ reliability study

    Directory of Open Access Journals (Sweden)

    Patrícia Pinto Fonseca

    2012-01-01

    Full Text Available INTRODUCTION: Patients' perception about their health condition, mainly involving chronic diseases, has been investigated in many studies and it has been associated to depression, compliance with the treatment, quality of life and prognosis. The Illness Effects Questionnaire (IEQ is a tool which makes the standardized evaluation of patients' perception about their illness possible, so that it is brief and accessible to the different clinical settings. This work aims to begin the transcultural adaptation of the IEQ to Brazil through the validated translation and the reliability study. METHODS: The back-translation method and the test-retest reliability study were used in a sample of 30 adult patients under chronic hemodialysis. The reliability indexes were estimated using the Pearson, Spearman, Weighted Kappa and Cronbach's alpha coefficients. RESULTS: The semantic equivalence was reached through the validated translation. In this study, the reliability indexes obtained were respectively: 0.85 and 0.75 (p < 0.001; 0.68 and 0.92 (p < 0.0001. DISCUSSION: The reliability indexes obtained attest to the stability of responses in both evaluations. Additional procedures are necessary for the transcultural adaptation of the IEQ to be complete. CONCLUSION: The results indicate the translation validity and the reliability of the Brazilian version of the IEQ for the sample studied.

  11. A Correlated Model for Evaluating Performance and Energy of Cloud System Given System Reliability

    Directory of Open Access Journals (Sweden)

    Hongli Zhang

    2015-01-01

    The serious issue of energy consumption in high performance computing systems has attracted much attention, and performance and energy-saving have become important measures of a computing system. In the cloud computing environment, systems usually allocate various resources (such as CPU, memory, storage, etc.) to multiple virtual machines (VMs) for executing tasks. The problem of resource allocation for running VMs therefore has significant influence on both system performance and energy consumption. For different processor utilizations assigned to a VM, there exists a tradeoff between energy consumption and task completion time when a given task is executed. Moreover, hardware failure, software failure and restoration characteristics also have obvious influences on overall performance and energy. In this paper, a correlated model is built to analyze both performance and energy in the VM execution environment under a reliability restriction, and an optimization model is presented to derive the most effective processor utilization for the VM. The tradeoff between energy-saving and task completion time is then studied and balanced when the VMs execute given tasks. Numerical examples are given to build the performance-energy correlated model and evaluate the expected values of task completion time and consumed energy.

  12. Incorporating reliability evaluation into the uncertainty analysis of electricity market price

    International Nuclear Information System (INIS)

    Kang, Chongqing; Bai, Lichao; Xia, Qing; Jiang, Jianjian; Zhao, Jing

    2005-01-01

    A novel model and algorithm for analyzing the uncertainties in electricity markets is proposed in this paper. In this model, the bidding decision is formulated as a probabilistic model that takes into account the decision-maker's willingness to bid, risk preferences, the fluctuation of fuel prices, etc. At the same time, each generating unit's uncertain output is modeled by its forced outage rate (FOR). Based on the model, the uncertainty of the market price is then analyzed. Taking the analytical results into consideration, not only can the reliability of the power system be analyzed conventionally, but the possible distribution of market prices can also be easily obtained. The probability distribution of market prices can further be used to calculate the expected output and sales income of each generating unit in the market. Based on these results, it is also possible to evaluate the risk borne by generating units. A simple system with four generating units is used to illustrate the proposed algorithm. The proposed algorithm and modeling technique are expected to be helpful to market participants in making their economic decisions

  13. Use of a valve operation test and evaluation system to enhance valve reliability

    International Nuclear Information System (INIS)

    Lowry, D.A.

    1990-01-01

    Power plant owners have emphasized the need to assure safe, reliable operation of valves. While most valves must simply open or close, the mechanisms involved can be quite complex. Motor operated valves (MOVs) must be properly adjusted to assure operability, and individual operator components determine the performance of the entire MOV. Failures in MOVs could cripple or shut down a unit. Thus, a complete valve program consisting of design reviews, operational testing, and preventive and predictive maintenance activities will enhance an owner's confidence that his valves will operate as expected. Liberty's Valve Operation Test and Evaluation System (VOTES) accurately measures stem thrust without intruding on valve operation. Since mounting a strain gage on a valve stem is a desirable but impractical way of obtaining precise stem thrust, Liberty developed a method to obtain identical data by placing a strain gage sensor on the valve yoke. VOTES provides information which effectively eliminates costly, unscheduled downtime. This paper presents the results of in-field VOTES testing. The system's proven ability to identify and characterize actuator and valve performance is demonstrated. Specific topics of discussion include the ability of VOTES to ease a utility's IE Bulletin 85-03 concerns and conclusively diagnose MOV components. Data from static and differential pressure testing are presented. Technical, operational, and financial advantages resulting from VOTES technology are explored in detail

  14. Evaluation of piping reliability and failure data for use in risk-based inspections of nuclear power plants

    Energy Technology Data Exchange (ETDEWEB)

    Vasconcelos, V. de; Soares, W.A.; Costa, A.C.L. da; Rabello, E.G.; Marques, R.O., E-mail: vasconv@cdtn.br [Centro de Desenvolvimento da Tecnologia Nuclear (CDTN/CNEN-MG), Belo Horizonte, MG (Brazil)

    2016-07-01

    During the operation of industrial facilities, components and systems can deteriorate over time, increasing the possibility of accidents. Risk-Based Inspection (RBI) involves inspection planning based on information about risks, through assessment of the probability and consequences of failures. In-service inspections are used in nuclear power plants in order to ensure reliable and safe operation. Traditional deterministic inspection approaches investigate generic degradation mechanisms across all systems. However, operating experience indicates that degradation occurs where conditions favor the development of a specific mechanism, and inspections should be prioritized at these places. Risk-Informed In-service Inspections (RI-ISI) are a type of RBI that uses Probabilistic Safety Assessment results, increasing reliability and plant safety while reducing radiation exposure. These assessments use available generic reliability and failure data as well as plant-specific information. This paper proposes a method for evaluating the piping reliability and failure data important for RI-ISI programs, together with the techniques involved. (author)

  15. Evaluation of piping reliability and failure data for use in risk-based inspections of nuclear power plants

    International Nuclear Information System (INIS)

    Vasconcelos, V. de; Soares, W.A.; Costa, A.C.L. da; Rabello, E.G.; Marques, R.O.

    2016-01-01

    During the operation of industrial facilities, components and systems can deteriorate over time, increasing the possibility of accidents. Risk-Based Inspection (RBI) involves inspection planning based on information about risks, through assessment of the probability and consequences of failures. In-service inspections are used in nuclear power plants in order to ensure reliable and safe operation. Traditional deterministic inspection approaches investigate generic degradation mechanisms across all systems. However, operating experience indicates that degradation occurs where conditions favor the development of a specific mechanism, and inspections should be prioritized at these places. Risk-Informed In-service Inspections (RI-ISI) are a type of RBI that uses Probabilistic Safety Assessment results, increasing reliability and plant safety while reducing radiation exposure. These assessments use available generic reliability and failure data as well as plant-specific information. This paper proposes a method for evaluating the piping reliability and failure data important for RI-ISI programs, together with the techniques involved. (author)

  16. Improving accuracy of genomic prediction in Brangus cattle by adding animals with imputed low-density SNP genotypes.

    Science.gov (United States)

    Lopes, F B; Wu, X-L; Li, H; Xu, J; Perkins, T; Genho, J; Ferretti, R; Tait, R G; Bauck, S; Rosa, G J M

    2018-02-01

    Reliable genomic prediction of breeding values for quantitative traits requires a sufficient number of animals with genotypes and phenotypes in the training set. As of 31 October 2016, there were 3,797 Brangus animals with genotypes and phenotypes. These animals were genotyped using different commercial SNP chips, of which the largest group consisted of 1,535 animals genotyped with the GGP-LDV4 SNP chip. The remaining 2,262 genotypes were imputed to the SNP content of the GGP-LDV4 chip, so that the number of animals available for training the genomic prediction models was more than doubled. The present study showed that pooling animals with original or imputed 40K SNP genotypes substantially increased genomic prediction accuracies for the ten traits. By supplementing imputed genotypes, the relative gains in genomic prediction accuracies on estimated breeding values (EBV) were from 12.60% to 31.27%, and the relative gains in accuracies on de-regressed EBV were somewhat smaller (0.87%-18.75%). The present study also compared the performance of five genomic prediction models and two cross-validation methods. The five genomic models predicted EBV and de-regressed EBV of the ten traits similarly well. Of the two cross-validation methods, leave-one-out cross-validation maximized the number of animals available at the training stage for genomic prediction. Genomic prediction accuracy (GPA) on the ten quantitative traits was validated in 1,106 newly genotyped Brangus animals based on the SNP effects estimated in the previous set of 3,797 Brangus animals, and the accuracies were slightly lower than in the original data. The present study was the first to leverage currently available genotype and phenotype resources in order to harness genomic prediction in Brangus beef cattle. © 2018 Blackwell Verlag GmbH.

  17. Improved Correction of Misclassification Bias With Bootstrap Imputation.

    Science.gov (United States)

    van Walraven, Carl

    2018-07-01

    Diagnostic codes used in administrative database research can create bias due to misclassification. Quantitative bias analysis (QBA) can correct for this bias and requires only code sensitivity and specificity, but it may return invalid results. Bootstrap imputation (BI) can also address misclassification bias but traditionally requires multivariate models to accurately estimate disease probability. This study compared misclassification bias correction using QBA and BI. Serum creatinine measures were used to determine severe renal failure status in 100,000 hospitalized patients. The prevalence of severe renal failure in 86 patient strata and its association with 43 covariates were determined and compared with results in which renal failure status was determined using diagnostic codes (sensitivity 71.3%, specificity 96.2%). Differences in results (misclassification bias) were then corrected with QBA or BI (using progressively more complex methods to estimate disease probability). In total, 7.4% of patients had severe renal failure. Imputing disease status with diagnostic codes exaggerated prevalence estimates [median relative change (range), 16.6% (0.8%-74.5%)] and their association with covariates [median (range) exponentiated absolute parameter estimate difference, 1.16 (1.01-2.04)]. QBA produced invalid results 9.3% of the time and increased bias in estimates of both disease prevalence and covariate associations. BI decreased misclassification bias with increasingly accurate disease probability estimates. QBA can produce invalid results and increase misclassification bias; BI avoids invalid results and can importantly decrease misclassification bias when accurate disease probability estimates are used.
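
    For context, the matrix-method QBA correction that can misbehave is one line of algebra (the Rogan-Gladen estimator). A short Python sketch, using the sensitivity and specificity quoted above but fabricated stratum prevalences, shows the failure mode:

```python
# Rogan-Gladen correction: true prevalence estimated from observed (code-based)
# prevalence, code sensitivity (Se) and specificity (Sp).
def qba_corrected_prevalence(p_observed, sensitivity, specificity):
    return (p_observed + specificity - 1) / (sensitivity + specificity - 1)

se, sp = 0.713, 0.962                            # values quoted in the abstract
print(qba_corrected_prevalence(0.10, se, sp))    #  0.092 -> plausible
print(qba_corrected_prevalence(0.02, se, sp))    # -0.027 -> invalid (< 0)
```

    Whenever the observed prevalence falls below the false-positive rate (1 - specificity), the corrected estimate goes negative, which is one way QBA returns invalid results in small or low-prevalence strata.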

  18. A reliable in vitro fruiting system for armillaria mellea for evaluation of agrobacterium tumefaciens transformation vectors

    Science.gov (United States)

    Armillaria mellea is a serious pathogen of horticultural and agricultural systems in Europe and North America. The lack of a reliable in vitro fruiting system has hindered research, and necessitated dependence on intermittently available wild-collected basidiospores. Here we describe a reliable, rep...

  19. Reliability of candida skin test in the evaluation of T-cell function in ...

    African Journals Online (AJOL)

    Background: Both standardized and non-standardized candida skin tests are used in clinical practice for functional in-vivo assessment of cellular immunity with variable results and are considered not reliable under the age of 1 year. We sought to investigate the reliability of using manually prepared candida intradermal test ...

  20. Evaluation of Validity and Reliability for Hierarchical Scales Using Latent Variable Modeling

    Science.gov (United States)

    Raykov, Tenko; Marcoulides, George A.

    2012-01-01

    A latent variable modeling method is outlined, which accomplishes estimation of criterion validity and reliability for a multicomponent measuring instrument with hierarchical structure. The approach provides point and interval estimates for the scale criterion validity and reliability coefficients, and can also be used for testing composite or…

  1. Using Bayesian belief networks for reliability management : construction and evaluation: a step by step approach

    NARCIS (Netherlands)

    Houben, M.J.H.A.

    2010-01-01

    In the capital goods industry, there is a growing need to manage reliability throughout the product development process. A number of trends can be identified that have a strong effect on the way in which reliability prediction and management is approached, i.e.: - The lifecycle costs approach that

  2. Fuzzy sets as extension of probabilistic models for evaluating human reliability

    International Nuclear Information System (INIS)

    Przybylski, F.

    1996-11-01

    On the basis of a survey of established quantification methodologies for evaluating human reliability, a new computerized methodology was developed in which user uncertainties are considered separately. In this quantification method FURTHER (FUzzy Sets Related To Human Error Rate Prediction), user uncertainties are quantified separately from model and data uncertainties. Fuzzy sets are applied as tools, but remain hidden from the method's user, who only chooses an action pattern, performance shaping factors and natural-language expressions during the quantification process. The established method HEART (Human Error Assessment and Reduction Technique) serves as the foundation of the fuzzy set approach FURTHER. The following were identified as aspects of fuzzification: the selection of a basic task together with its basic error probability, the decision how correct the basic task's selection is, the selection of a performance shaping factor, and the decision how correct that selection is and how important the performance shaping factor is. This fuzzification is based on data collection and information from the literature as well as on estimates by competent persons. To verify the amount of additional information gained by the use of fuzzy sets, a benchmark session was conducted in which twelve actions were assessed by five test persons. For the same degree of detail in the action modelling process, the bandwidths of the interpersonal evaluations are narrower in FURTHER than in HEART. The uncertainties of the individual results could not yet be reduced. The benchmark sessions conducted so far showed plausible results. Further testing of the fuzzy set approach with better-confirmed fuzzy sets can only be achieved in future practical application; adequate procedures, however, are provided. (orig.) [de
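
    The flavor of the fuzzy-set machinery can be sketched briefly. The Python fragment below is a simplified illustration, not FURTHER's implementation: a basic error probability and a performance-shaping-factor multiplier are represented as triangular fuzzy numbers and combined with alpha-cut interval arithmetic; all numbers are invented.

```python
# Triangular fuzzy number: (lower bound, mode, upper bound).
def alpha_cut(tri, alpha):
    lo, mode, hi = tri
    return (lo + alpha * (mode - lo), hi - alpha * (hi - mode))

hep = (0.01, 0.02, 0.04)      # fuzzy basic human error probability (invented)
psf = (1.0, 2.0, 4.0)         # fuzzy PSF multiplier from a linguistic rating

for alpha in (0.0, 0.5, 1.0):
    (a1, b1), (a2, b2) = alpha_cut(hep, alpha), alpha_cut(psf, alpha)
    # interval product at this membership level (both intervals positive)
    print(f"alpha={alpha}: HEP in [{a1 * a2:.3f}, {b1 * b2:.3f}]")
```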

  3. Systems analysis programs for hands-on integrated reliability evaluations (SAPHIRE) version 5.0

    International Nuclear Information System (INIS)

    Russell, K.D.; Kvarfordt, K.J.; Skinner, N.L.; Wood, S.T.

    1994-07-01

    The Systems Analysis Programs for Hands-on Integrated Reliability Evaluations (SAPHIRE) refers to a set of several microcomputer programs that were developed to create and analyze probabilistic risk assessments (PRAs), primarily for nuclear power plants. This volume is the reference manual for the Systems Analysis and Risk Assessment (SARA) System Version 5.0, a microcomputer-based system used to analyze the safety issues of a "family" [i.e., a power plant, a manufacturing facility, any facility on which a probabilistic risk assessment (PRA) might be performed]. The SARA database contains PRA data primarily for the dominant accident sequences of a family and descriptive information about the family including event trees, fault trees, and system model diagrams. The number of facility databases that can be accessed is limited only by the amount of disk storage available. To simulate changes to family systems, SARA users change the failure rates of initiating and basic events and/or modify the structure of the cut sets that make up the event trees, fault trees, and systems. The user then evaluates the effects of these changes through the recalculation of the resultant accident sequence probabilities and importance measures. The results are displayed in tables and graphs that may be printed for reports. A preliminary version of the SARA program was completed in August 1985 and has undergone several updates in response to user suggestions and to maintain compatibility with the other SAPHIRE programs. Version 5.0 of SARA provides the same capability as earlier versions and adds the ability to process unlimited cut sets; display fire, flood, and seismic data; and perform more powerful cut set editing
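
    The recalculation SARA performs after such edits can be illustrated with the usual rare-event approximation over minimal cut sets. The Python sketch below is generic, not SARA's code, and uses invented cut sets and probabilities:

```python
# accident sequence = union of its minimal cut sets (invented example)
cut_sets = [{"pump_fails", "valve_fails"}, {"dg_fails"}, {"pump_fails", "operator_err"}]
p = {"pump_fails": 1e-3, "valve_fails": 2e-3, "dg_fails": 5e-5, "operator_err": 1e-2}

def cut_set_probability(cs, p):
    prod = 1.0
    for event in cs:
        prod *= p[event]
    return prod

def sequence_frequency(cut_sets, p):
    # rare-event approximation: sum of cut-set probabilities
    return sum(cut_set_probability(cs, p) for cs in cut_sets)

before = sequence_frequency(cut_sets, p)
p["dg_fails"] = 1e-4              # user edits a basic-event failure rate
after = sequence_frequency(cut_sets, p)
print(f"{before:.2e} -> {after:.2e}")   # 6.20e-05 -> 1.12e-04
```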

  4. Improving motor reliability in nuclear power plants: Volume 1, Performance evaluation and maintenance practices

    International Nuclear Information System (INIS)

    Subudhi, M.; Gunther, W.E.; Taylor, J.H.; Sugarman, A.C.; Sheets, M.W.

    1987-11-01

    This report constitutes the first of three volumes under this NUREG. It presents recommendations for developing a cost-effective program for performance evaluation and maintenance of electric motors in nuclear power plants. These recommendations are based on current industry practices, available techniques for monitoring degradation in motor components, manufacturers' recommendations, operating experience, and results from two laboratory tests on aged motors. The two laboratory test reports, on a small and a large motor, are presented in separate volumes of this NUREG; they provide the basis for the various functional indicators recommended for maintenance programs in this report. The overall preventive maintenance program is separated into two broad areas of activity aimed at mitigating the potential effects of equipment aging: performance evaluation and equipment maintenance. The latter involves actually maintaining the condition of the equipment, while the former involves the activities undertaken to monitor degradation due to aging. These monitoring methods are further categorized into periodic testing, surveillance testing, continuous monitoring and inspections. This study focuses on the methods and procedures for performing the above activities to keep motors operationally ready in a nuclear facility. This includes an assessment of various functional indicators to determine their suitability for trending to monitor motor component condition. The intrusiveness of test methods and the present state of the art for using the test equipment in a plant environment are discussed. In conclusion, implementation of the information provided in this report will improve motor reliability in nuclear power plants. The study indicates the kinds of tests to conduct, how and when to conduct them, and to which motors the tests should be applied. 44 refs., 12 figs., 13 tabs

  5. Estimating Stand Height and Tree Density in Pinus taeda plantations using in-situ data, airborne LiDAR and k-Nearest Neighbor Imputation

    Directory of Open Access Journals (Sweden)

    CARLOS ALBERTO SILVA

    Accurate forest inventory is of great economic importance for optimizing the entire supply chain management in pulp and paper companies. The aim of this study was to estimate stand dominant and mean heights (HD and HM) and tree density (TD) of Pinus taeda plantations located in southern Brazil using in-situ measurements, airborne Light Detection and Ranging (LiDAR) data and non-parametric k-nearest neighbor (k-NN) imputation. Forest inventory attributes and LiDAR-derived metrics were calculated at 53 regular sample plots, and imputation models were used to retrieve the forest attributes at plot and landscape levels. The best LiDAR-derived metrics to predict HD, HM and TD were H99TH, HSD, SKE and HMIN. The imputation model using the selected metrics was more effective for retrieving height than tree density. The model coefficients of determination (adj. R²) and root mean squared differences (RMSD) for HD, HM and TD were 0.90, 0.94, 0.38 and 6.99%, 5.70%, 12.92%, respectively. Our results show that LiDAR and k-NN imputation can be used to predict stand heights with high accuracy in Pinus taeda. However, further studies are needed to improve the prediction accuracy of TD and to evaluate and compare the cost of acquiring and processing LiDAR data against conventional inventory procedures.

  6. Estimating Stand Height and Tree Density in Pinus taeda plantations using in-situ data, airborne LiDAR and k-Nearest Neighbor Imputation.

    Science.gov (United States)

    Silva, Carlos Alberto; Klauberg, Carine; Hudak, Andrew T; Vierling, Lee A; Liesenberg, Veraldo; Bernett, Luiz G; Scheraiber, Clewerson F; Schoeninger, Emerson R

    2018-01-01

    Accurate forest inventory is of great economic importance for optimizing the entire supply chain management in pulp and paper companies. The aim of this study was to estimate stand dominant and mean heights (HD and HM) and tree density (TD) of Pinus taeda plantations located in southern Brazil using in-situ measurements, airborne Light Detection and Ranging (LiDAR) data and non-parametric k-nearest neighbor (k-NN) imputation. Forest inventory attributes and LiDAR-derived metrics were calculated at 53 regular sample plots, and imputation models were used to retrieve the forest attributes at plot and landscape levels. The best LiDAR-derived metrics to predict HD, HM and TD were H99TH, HSD, SKE and HMIN. The imputation model using the selected metrics was more effective for retrieving height than tree density. The model coefficients of determination (adj. R²) and root mean squared differences (RMSD) for HD, HM and TD were 0.90, 0.94, 0.38 and 6.99%, 5.70%, 12.92%, respectively. Our results show that LiDAR and k-NN imputation can be used to predict stand heights with high accuracy in Pinus taeda. However, further studies are needed to improve the prediction accuracy of TD and to evaluate and compare the cost of acquiring and processing LiDAR data against conventional inventory procedures.
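
    The imputation step itself is simple to demonstrate. Below is a generic k-NN imputation sketch in Python (not the authors' code): a plot with an unknown attribute receives the mean of its k nearest reference plots in LiDAR-metric space; the metric values and densities are fabricated.

```python
import numpy as np

# columns: two LiDAR metrics per plot (e.g., H99TH, HSD); values fabricated
X = np.array([[28.1, 3.2], [30.4, 2.9], [22.7, 4.1], [29.0, 3.0]])
td = np.array([950.0, 880.0, 1200.0, np.nan])   # tree density; NaN = unknown plot

k = 2
missing = np.isnan(td)
for i in np.where(missing)[0]:
    d = np.linalg.norm(X - X[i], axis=1)   # distances in LiDAR-metric space
    d[missing] = np.inf                    # never borrow from unknown plots
    neighbors = np.argsort(d)[:k]
    td[i] = td[neighbors].mean()           # impute from k nearest reference plots

print(td)   # last plot imputed as the mean of its two closest neighbors (915.0)
```

    Production implementations typically scale the metrics, weight neighbors by distance, and choose k by cross-validation.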

  7. Systems Analysis Programs for Hands-on Integrated Reliability Evaluations (SAPHIRE) Code Reference Manual

    Energy Technology Data Exchange (ETDEWEB)

    C. L. Smith; K. J. Kvarfordt; S. T. Wood

    2008-08-01

    The Systems Analysis Programs for Hands-on Integrated Reliability Evaluations (SAPHIRE) is a software application developed for performing a complete probabilistic risk assessment (PRA) using a personal computer. SAPHIRE is funded by the U.S. Nuclear Regulatory Commission (NRC) and developed by the Idaho National Laboratory (INL). The INL's primary role in this project is that of software developer. However, the INL also plays an important role in technology transfer by interfacing and supporting SAPHIRE users comprised of a wide range of PRA practitioners from the NRC, national laboratories, the private sector, and foreign countries. SAPHIRE can be used to model a complex system’s response to initiating events, quantify associated damage outcome frequencies, and identify important contributors to this damage (Level 1 PRA) and to analyze containment performance during a severe accident and quantify radioactive releases (Level 2 PRA). It can be used for a PRA evaluating a variety of operating conditions, for example, for a nuclear reactor at full power, low power, or at shutdown conditions. Furthermore, SAPHIRE can be used to analyze both internal and external initiating events and has special features for transforming models built for internal event analysis to models for external event analysis. It can also be used in a limited manner to quantify risk in terms of release consequences to both the public and the environment (Level 3 PRA). SAPHIRE includes a separate module called the Graphical Evaluation Module (GEM). GEM provides a highly specialized user interface with SAPHIRE that automates SAPHIRE process steps for evaluating operational events at commercial nuclear power plants. Using GEM, an analyst can estimate the risk associated with operational events in a very efficient and expeditious manner. This reference guide will introduce the SAPHIRE Version 7.0 software. A brief discussion of the purpose and history of the software is included along with

  8. Systems Analysis Programs for Hands-on Integrated Reliability Evaluations (SAPHIRE) Code Reference Manual

    Energy Technology Data Exchange (ETDEWEB)

    C. L. Smith; K. J. Kvarfordt; S. T. Wood

    2006-07-01

    The Systems Analysis Programs for Hands-on Integrated Reliability Evaluations (SAPHIRE) is a software application developed for performing a complete probabilistic risk assessment (PRA) using a personal computer. SAPHIRE is funded by the U.S. Nuclear Regulatory Commission (NRC) and developed by the Idaho National Laboratory (INL). The INL's primary role in this project is that of software developer. However, the INL also plays an important role in technology transfer by interfacing and supporting SAPHIRE users comprised of a wide range of PRA practitioners from the NRC, national laboratories, the private sector, and foreign countries. SAPHIRE can be used to model a complex system’s response to initiating events, quantify associated damage outcome frequencies, and identify important contributors to this damage (Level 1 PRA) and to analyze containment performance during a severe accident and quantify radioactive releases (Level 2 PRA). It can be used for a PRA evaluating a variety of operating conditions, for example, for a nuclear reactor at full power, low power, or at shutdown conditions. Furthermore, SAPHIRE can be used to analyze both internal and external initiating events and has special features for transforming models built for internal event analysis to models for external event analysis. It can also be used in a limited manner to quantify risk in terms of release consequences to both the public and the environment (Level 3 PRA). SAPHIRE includes a separate module called the Graphical Evaluation Module (GEM). GEM provides a highly specialized user interface with SAPHIRE that automates SAPHIRE process steps for evaluating operational events at commercial nuclear power plants. Using GEM, an analyst can estimate the risk associated with operational events in a very efficient and expeditious manner. This reference guide will introduce the SAPHIRE Version 7.0 software. A brief discussion of the purpose and history of the software is included along with

  9. Whole-Genome Sequencing Coupled to Imputation Discovers Genetic Signals for Anthropometric Traits

    NARCIS (Netherlands)

    I. Tachmazidou (Ioanna); Süveges, D. (Dániel); J. Min (Josine); G.R.S. Ritchie (Graham R.S.); Steinberg, J. (Julia); K. Walter (Klaudia); V. Iotchkova (Valentina); J.A. Schwartzentruber (Jeremy); J. Huang (Jian); Y. Memari (Yasin); McCarthy, S. (Shane); Crawford, A.A. (Andrew A.); C. Bombieri (Cristina); M. Cocca (Massimiliano); A.-E. Farmaki (Aliki-Eleni); T.R. Gaunt (Tom); P. Jousilahti (Pekka); M.N. Kooijman (Marjolein ); Lehne, B. (Benjamin); G. Malerba (Giovanni); S. Männistö (Satu); A. Matchan (Angela); M.C. Medina-Gomez (Carolina); S. Metrustry (Sarah); A. Nag (Abhishek); I. Ntalla (Ioanna); L. Paternoster (Lavinia); N.W. Rayner (Nigel William); C. Sala (Cinzia); W.R. Scott (William R.); H.A. Shihab (Hashem A.); L. Southam (Lorraine); B. St Pourcain (Beate); M. Traglia (Michela); K. Trajanoska (Katerina); Zaza, G. (Gialuigi); W. Zhang (Weihua); M.S. Artigas; Bansal, N. (Narinder); M. Benn (Marianne); Chen, Z. (Zhongsheng); P. Danecek (Petr); Lin, W.-Y. (Wei-Yu); A. Locke (Adam); J. Luan (Jian'An); A.K. Manning (Alisa); Mulas, A. (Antonella); C. Sidore (Carlo); A. Tybjaerg-Hansen; A. Varbo (Anette); M. Zoledziewska (Magdalena); C. Finan (Chris); Hatzikotoulas, K. (Konstantinos); A.E. Hendricks (Audrey E.); J.P. Kemp (John); A. Moayyeri (Alireza); Panoutsopoulou, K. (Kalliope); Szpak, M. (Michal); S.G. Wilson (Scott); M. Boehnke (Michael); F. Cucca (Francesco); Di Angelantonio, E. (Emanuele); C. Langenberg (Claudia); C.M. Lindgren (Cecilia M.); McCarthy, M.I. (Mark I.); A.P. Morris (Andrew); B.G. Nordestgaard (Børge); R.A. Scott (Robert); M.D. Tobin (Martin); N.J. Wareham (Nick); P.R. Burton (Paul); J.C. Chambers (John); Smith, G.D. (George Davey); G.V. Dedoussis (George); J.F. Felix (Janine); O.H. Franco (Oscar); Gambaro, G. (Giovanni); P. Gasparini (Paolo); C.J. Hammond (Christopher J.); A. Hofman (Albert); V.W.V. Jaddoe (Vincent); M.E. Kleber (Marcus); J.S. Kooner (Jaspal S.); M. Perola (Markus); C.L. Relton (Caroline); S.M. Ring (Susan); F. Rivadeneira Ramirez (Fernando); V. Salomaa (Veikko); T.D. Spector (Timothy); O. Stegle (Oliver); D. Toniolo (Daniela); A.G. Uitterlinden (André); I.E. Barroso (Inês); C.M.T. Greenwood (Celia); Perry, J.R.B. (John R.B.); Walker, B.R. (Brian R.); A.S. Butterworth (Adam); Y. Xue (Yali); R. Durbin (Richard); K.S. Small (Kerrin); N. Soranzo (Nicole); N.J. Timpson (Nicholas); E. Zeggini (Eleftheria)

    2016-01-01

    textabstractDeep sequence-based imputation can enhance the discovery power of genome-wide association studies by assessing previously unexplored variation across the common- and low-frequency spectra. We applied a hybrid whole-genome sequencing (WGS) and deep imputation approach to examine the

  10. Whole-Genome Sequencing Coupled to Imputation Discovers Genetic Signals for Anthropometric Traits

    DEFF Research Database (Denmark)

    Tachmazidou, Ioanna; Süveges, Dániel; Min, Josine L

    2017-01-01

    Deep sequence-based imputation can enhance the discovery power of genome-wide association studies by assessing previously unexplored variation across the common- and low-frequency spectra. We applied a hybrid whole-genome sequencing (WGS) and deep imputation approach to examine the broader alleli...

  11. 48 CFR 1830.7002-4 - Determining imputed cost of money.

    Science.gov (United States)

    2010-10-01

    ... money. 1830.7002-4 Section 1830.7002-4 Federal Acquisition Regulations System NATIONAL AERONAUTICS AND... Determining imputed cost of money. (a) Determine the imputed cost of money for an asset under construction, fabrication, or development by applying a cost of money rate (see 1830.7002-2) to the representative...

  12. On the reliability evaluation of communication equipment for SMART using FMEA

    International Nuclear Information System (INIS)

    Kim, D. H.; Suh, Y. S.; Koo, I. S.; Song, Ki Sang; Han, Byung Rae

    2000-07-01

    This report describes the reliability analysis method for communication equipment using FMEA and FTA. The major pieces of equipment applicable to SMART communication networks are repeaters, bridges, routers and gateways, to which the FMEA or FTA technique can be applied. The FMEA process includes analysis of the target system, deciding the level of analysis for the target system, drawing a reliability block diagram according to function, determining failure modes, recording the causes of faults, filling in the FMEA sheet and deciding the FMEA level. With FTA, it is also possible to identify the causes of the top event and to determine system reliability. With these considerations in mind, we performed FMEA and FTA for the NIC, hub, client server and router. We also suggested an integrated network model for a nuclear power plant and demonstrated the reliability analysis procedure according to FTA. If any proprietary communication device is developed, its reliability can easily be determined with the proposed procedures
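
    As a rough illustration of the FTA quantification step described in this report, the sketch below computes a top-event probability from independent basic events through AND/OR gates. It is a minimal Python sketch: the tree structure and all failure probabilities are hypothetical, not taken from the report.

        # Minimal fault-tree quantification sketch (hypothetical values).
        def or_gate(probs):
            # OR gate: the output fails if any input fails (independent events).
            p_none = 1.0
            for q in probs:
                p_none *= (1.0 - q)
            return 1.0 - p_none

        def and_gate(probs):
            # AND gate: the output fails only if every input fails.
            p_all = 1.0
            for q in probs:
                p_all *= q
            return p_all

        # Hypothetical failure probabilities for network components.
        p_nic, p_hub, p_router = 0.02, 0.01, 0.005

        # Hypothetical tree: communication is lost if the router fails,
        # or if the NIC and the hub fail together.
        p_top = or_gate([p_router, and_gate([p_nic, p_hub])])
        print(f"Top event probability: {p_top:.6f}")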

  13. Reliability Evaluation of a Single-phase H-bridge Inverter with Integrated Active Power Decoupling

    DEFF Research Database (Denmark)

    Tang, Junchaojie; Wang, Haoran; Ma, Siyuan

    2016-01-01

    Various power decoupling methods have been proposed recently to replace the DC-link Electrolytic Capacitors (E-caps) in single-phase conversion systems, in order to extend the lifetime and improve the reliability of the DC-link. However, it is still an open question whether the converter level reliability becomes better or not, since additional components are introduced and the loading of the existing components may be changed. This paper aims to study the converter level reliability of a single-phase full-bridge inverter with two kinds of active power decoupling module and to compare it with the traditional passive DC-link solution. The converter level reliability is obtained by component level electro-thermal stress modeling, lifetime model, Weibull distribution, and Reliability Block Diagram (RBD) method. The results are demonstrated by a 2 kW single-phase inverter application.
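
    The reliability chain named in this abstract (component-level stress, a lifetime model, a Weibull distribution, and an RBD) can be illustrated compactly. The following is a minimal Python sketch assuming Weibull component lifetimes combined in a series RBD; the shape and scale parameters are hypothetical, not the paper's.

        import math

        # Component reliability under a Weibull lifetime model:
        # R(t) = exp(-(t/eta)**beta), eta = scale (hours), beta = shape.
        def weibull_reliability(t, eta, beta):
            return math.exp(-((t / eta) ** beta))

        # Hypothetical (eta, beta) pairs: capacitor bank, switches, decoupling module.
        components = [(120_000, 1.5), (200_000, 2.0), (150_000, 1.8)]

        # Series RBD: the converter works only if every component works.
        t = 50_000  # mission time in hours
        r_system = 1.0
        for eta, beta in components:
            r_system *= weibull_reliability(t, eta, beta)
        print(f"Converter reliability at {t} h: {r_system:.4f}")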

  14. System Reliability Evaluation of Data Transmission in Commercial Banks with Multiple Branches

    Directory of Open Access Journals (Sweden)

    Yi-Kuei Lin

    2014-01-01

    Full Text Available The main purpose of this paper is to assess the system reliability of electronic transaction data transmissions made by commercial banks in terms of a stochastic flow network. System reliability is defined as the probability of demand satisfaction, and it can be used to measure quality of service. In this paper, we study the system reliability of data transmission from the headquarters of a commercial bank to its multiple branches. The network structure of the bank and the probability of successful data transmission are obtained through the collection of real data. The system reliability, calculated using the minimal path method and the recursive sum of disjoint products algorithm, provides banking managers with a view of the current state of the entire system. Moreover, the system reliability can be used not only as a measurement of quality of service, but also as a reference for improving the system through sensitivity analysis.
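
    The minimal path method used above amounts to computing the probability that at least one minimal path from headquarters to a branch is fully operational. A minimal Python sketch via inclusion-exclusion over hypothetical paths and link probabilities follows; the recursive sum of disjoint products algorithm cited in the paper is a more scalable way to evaluate the same union.

        from itertools import combinations

        # Hypothetical link success probabilities (independent failures).
        p = {"e1": 0.95, "e2": 0.90, "e3": 0.92, "e4": 0.88}

        # Minimal paths from headquarters to a branch, as sets of links.
        paths = [{"e1", "e2"}, {"e3", "e4"}, {"e1", "e4"}]

        # Inclusion-exclusion: P(at least one path works) =
        # sum over non-empty path subsets S of (-1)**(|S|+1) * P(all links in S work).
        reliability = 0.0
        for k in range(1, len(paths) + 1):
            for subset in combinations(paths, k):
                links = set().union(*subset)
                prob = 1.0
                for e in links:
                    prob *= p[e]
                reliability += (-1) ** (k + 1) * prob
        print(f"System reliability: {reliability:.4f}")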

  15. On the reliability evaluation of communication equipment for SMART using FMEA

    Energy Technology Data Exchange (ETDEWEB)

    Kim, D. H.; Suh, Y. S.; Koo, I. S.; Song, Ki Sang; Han, Byung Rae

    2000-07-01

    This report describes the reliability analysis method for communication equipment using FMEA and FTA. The major pieces of equipment applicable to SMART communication networks are repeaters, bridges, routers and gateways, to which the FMEA or FTA technique can be applied. The FMEA process includes analysis of the target system, deciding the level of analysis for the target system, drawing a reliability block diagram according to function, determining failure modes, recording the causes of faults, filling in the FMEA sheet and deciding the FMEA level. With FTA, it is also possible to identify the causes of the top event and to determine system reliability. With these considerations in mind, we performed FMEA and FTA for the NIC, hub, client server and router. We also suggested an integrated network model for a nuclear power plant and demonstrated the reliability analysis procedure according to FTA. If any proprietary communication device is developed, its reliability can easily be determined with the proposed procedures.

  16. Reliability, Resilience, and Vulnerability criteria for the evaluation of Human Health Risks

    Science.gov (United States)

    Rodak, C. M.; Silliman, S. E.; Bolster, D.

    2011-12-01

    Understanding the impact of water quality on the health of a general population is challenging due to high degrees of uncertainty and variability in hydrological, toxicological and human aspects of the system. Assessment of the impact of changes in water quality of a public water supply is critical to management of that water supply. We propose the use of three different system evaluation criteria: Reliability, Resilience and Vulnerability (RRV) as a tool for assessing the impact of uncertainty in the arrival of contaminant mass through time with respect to human health risks on a variable population. These criteria were first introduced to the water resources community by Hashimoto et al. (1982). Most simply, the criteria can be understood as follows: Reliability is the likelihood of the system being in a state of success; Resilience is the probability that the system will return to a state of success at t+1 if it is in failure at time step t; and Vulnerability is the severity of failure, which here is defined as the maximum health risk. These concepts are applied to a theoretical example where the water quality at a water supply well varies over time: health impact is considered based on sliding, 30-year windows of exposure to water derived from the well. We apply the methodology, in terms of uncertainty in water quality deviations, to eight simulated breakthrough curves of a contaminant at the well: each curve represents equal mass of contaminant arriving at the well over a 70-year lifetime of the well, but different mass distributions over time. These curves are used to investigate the impact of uncertainty in the distribution through time of the contaminant mass at the well, as well as the initial arrival of the contaminant over the 70-year lifetime of the well. In addition to extending the health risk through time with uncertainty in mass distribution, we incorporate variability in the human population to examine the evolution of the three criteria within
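
    Under the Hashimoto et al. (1982) definitions quoted above, the three criteria reduce to simple statistics of a success/failure time series. A minimal Python sketch on hypothetical yearly health-risk values and an assumed acceptable-risk threshold:

        import numpy as np

        rng = np.random.default_rng(0)
        risk = rng.lognormal(mean=-12, sigma=1.0, size=70)  # hypothetical yearly risk
        threshold = 1e-5                                    # assumed acceptable risk

        success = risk <= threshold          # system state per year
        reliability = success.mean()         # P(system in a state of success)

        # Resilience: P(success at t+1 | failure at t).
        failed = ~success[:-1]
        resilience = success[1:][failed].mean() if failed.any() else 1.0

        # Vulnerability: severity of failure, here the maximum risk while failing.
        vulnerability = risk[~success].max() if (~success).any() else 0.0

        print(f"reliability={reliability:.2f}, resilience={resilience:.2f}, "
              f"vulnerability={vulnerability:.2e}")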

  17. Systems Analysis Programs for Hands-on Integrated Reliability Evaluations (SAPHIRE), Version 5.0: Integrated Reliability and Risk Analysis System (IRRAS) reference manual. Volume 2

    International Nuclear Information System (INIS)

    Russell, K.D.; Kvarfordt, K.J.; Skinner, N.L.; Wood, S.T.; Rasmuson, D.M.

    1994-07-01

    The Systems Analysis Programs for Hands-on Integrated Reliability Evaluations (SAPHIRE) refers to a set of several microcomputer programs that were developed to create and analyze probabilistic risk assessments (PRAs), primarily for nuclear power plants. The Integrated Reliability and Risk Analysis System (IRRAS) is a state-of-the-art, microcomputer-based probabilistic risk assessment (PRA) model development and analysis tool to address key nuclear plant safety issues. IRRAS is an integrated software tool that gives the user the ability to create and analyze fault trees and accident sequences using a microcomputer. This program provides functions that range from graphical fault tree construction to cut set generation and quantification to report generation. Version 1.0 of the IRRAS program was released in February of 1987. Since then, many user comments and enhancements have been incorporated into the program, providing a much more powerful and user-friendly system. This version has been designated IRRAS 5.0 and is the subject of this Reference Manual. Version 5.0 of IRRAS provides the same capabilities as earlier versions and adds the ability to perform location transformations and seismic analysis, and provides enhancements to the user interface as well as improved algorithm performance. Additionally, version 5.0 contains new alphanumeric fault tree and event tree editors, used for event tree rules, recovery rules, and end state partitioning

  18. Blinded evaluation of interrater reliability of an operative competency assessment tool for direct laryngoscopy and rigid bronchoscopy.

    Science.gov (United States)

    Ishman, Stacey L; Benke, James R; Johnson, Kaalan Erik; Zur, Karen B; Jacobs, Ian N; Thorne, Marc C; Brown, David J; Lin, Sandra Y; Bhatti, Nasir; Deutsch, Ellen S

    2012-10-01

    OBJECTIVES To confirm interrater reliability using blinded evaluation of a skills-assessment instrument to assess the surgical performance of resident and fellow trainees performing pediatric direct laryngoscopy and rigid bronchoscopy in simulated models. DESIGN Prospective, paired, blinded observational validation study. SUBJECTS Paired observers from multiple institutions simultaneously evaluated residents and fellows who were performing surgery in an animal laboratory or using high-fidelity manikins. The evaluators had no previous affiliation with the residents and fellows and did not know their year of training. INTERVENTIONS One- and 2-page versions of an objective structured assessment of technical skills (OSATS) assessment instrument composed of global and task-specific surgical items were used to evaluate surgical performance. RESULTS Fifty-two evaluations were completed by 17 attending evaluators. The instrument agreement for the 2-page assessment was 71.4% when measured as a binary variable (ie, competent vs not competent) (κ = 0.38; P = .08). Evaluation as a continuous variable revealed a 42.9% percentage agreement (κ = 0.18; P = .14). The intraclass correlation was 0.53, considered substantial/good interrater reliability (69% reliable). For the 1-page instrument, agreement was 77.4% when measured as a binary variable (κ = 0.53, P = .0015). Agreement when evaluated as a continuous measure was 71.0% (κ = 0.54). The instrument may thus be used to provide formative feedback on operational competency.

  19. [Imputing missing data in public health: general concepts and application to dichotomous variables].

    Science.gov (United States)

    Hernández, Gilma; Moriña, David; Navarro, Albert

    The presence of missing data in collected variables is common in health surveys, but the subsequent imputation thereof at the time of analysis is not. Working with imputed data may have certain benefits regarding the precision of the estimators and the unbiased identification of associations between variables. The imputation process is probably still little understood by many non-statisticians, who view this process as highly complex and with an uncertain goal. To clarify these questions, this note aims to provide a straightforward, non-exhaustive overview of the imputation process to enable public health researchers to ascertain its strengths, all in the context of dichotomous variables, which are commonplace in public health. To illustrate these concepts, an example in which missing data are handled by means of simple and multiple imputation is introduced. Copyright © 2017 SESPAS. Published by Elsevier España, S.L.U. All rights reserved.

  20. Imputing data that are missing at high rates using a boosting algorithm

    Energy Technology Data Exchange (ETDEWEB)

    Cauthen, Katherine Regina [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Lambert, Gregory [Apple Inc., Cupertino, CA (United States); Ray, Jaideep [Sandia National Lab. (SNL-CA), Livermore, CA (United States); Lefantzi, Sophia [Sandia National Lab. (SNL-CA), Livermore, CA (United States)

    2016-09-01

    Traditional multiple imputation approaches may perform poorly for datasets with high rates of missingness unless many imputations (m) are used. This paper implements an alternative machine learning-based approach to imputing data that are missing at high rates. Here, we use boosting to create a strong learner from a weak learner fitted to a dataset missing many observations. This approach may be applied to a variety of types of learners (models). The approach is demonstrated by application to a spatiotemporal dataset for predicting dengue outbreaks in India from meteorological covariates. A Bayesian spatiotemporal CAR model is boosted to produce imputations, and the overall RMSE from a k-fold cross-validation is used to assess imputation accuracy.
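
    A minimal sketch of the general recipe, not of the paper's boosted Bayesian spatiotemporal CAR model: fit a boosted learner on the rows where the target is observed, estimate imputation accuracy by k-fold RMSE, then fill in the missing entries. scikit-learn is assumed and the data are synthetic.

        import numpy as np
        from sklearn.ensemble import GradientBoostingRegressor
        from sklearn.model_selection import KFold

        rng = np.random.default_rng(1)
        X = rng.normal(size=(500, 4))                  # synthetic covariates
        y = X @ np.array([1.0, -2.0, 0.5, 0.0]) + rng.normal(scale=0.3, size=500)
        missing = rng.random(500) < 0.6                # 60% of the target is missing

        X_obs, y_obs = X[~missing], y[~missing]

        # k-fold RMSE on the observed part estimates imputation accuracy.
        rmses = []
        for train, test in KFold(n_splits=5, shuffle=True, random_state=0).split(X_obs):
            model = GradientBoostingRegressor().fit(X_obs[train], y_obs[train])
            err = model.predict(X_obs[test]) - y_obs[test]
            rmses.append(np.sqrt(np.mean(err ** 2)))
        print(f"cross-validated RMSE: {np.mean(rmses):.3f}")

        # Impute the missing entries with a model fitted on all observed rows.
        y_imputed = y.copy()
        y_imputed[missing] = GradientBoostingRegressor().fit(X_obs, y_obs).predict(X[missing])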

  1. Reliability Engineering

    CERN Document Server

    Lazzaroni, Massimo

    2012-01-01

    This book gives a practical guide for designers and users in the Information and Communication Technology context. In particular, in the first Section, the definitions of the fundamental terms according to the international standards are given. Then, some theoretical concepts and reliability models are presented in Chapters 2 and 3: the aim is to evaluate performance for components and systems and reliability growth. Chapter 4, by introducing the laboratory tests, highlights the reliability concept from the experimental point of view. In the ICT context, the failure rate for a given system can be

  2. A critical evaluation of deterministic methods in size optimisation of reliable and cost effective standalone hybrid renewable energy systems

    International Nuclear Information System (INIS)

    Maheri, Alireza

    2014-01-01

    ... cost-effective system cannot be quantified without employing probabilistic methods of analysis. It is also shown that deterministic cost analysis yields inaccurate results for all of the investigated configurations. - Graphical abstract: Deterministic size optimisation methods are unreliable in the design of reliable and cost-effective wind–PV–battery hybrid renewable systems, irrespective of the selected worst-case scenarios and safety factors. - Highlights: • Deterministic design optimisation methods do not predict the cost of standalone HRES accurately. • Deterministic methods do not evaluate power reliability of standalone HRES directly. • Deterministic methods of design of HRES lead to solutions with unpredictable power reliability. • New robust design methods are required to be developed for standalone HRES

  3. Reliability/Cost Evaluation on Power System connected with Wind Power for the Reserve Estimation

    DEFF Research Database (Denmark)

    Lee, Go-Eun; Cha, Seung-Tae; Shin, Je-Seok

    2012-01-01

    Wind power is ideally a renewable energy with no fuel cost, but it carries a risk of reducing the reliability of the whole system because of the uncertainty of its output. If the reserve of the system is increased, the reliability of the system may be improved; however, the cost would also increase. Therefore, the reserve needs to be estimated considering the trade-off between reliability and economic aspects. This paper suggests a methodology to estimate the appropriate reserve when wind power is connected to the power system. As a case study, when wind power is connected to the power system of Korea, the effects

  4. Cross-cultural adaptation and reliability and validity of the Dutch Patient-Rated Tennis Elbow Evaluation (PRTEE-D)

    NARCIS (Netherlands)

    van Ark, Mathijs; Zwerver, Johannes; Diercks, Ronald L; van den Akker-Scheek, Inge

    2014-01-01

    Background: Lateral Epicondylalgia (LE) is a common injury for which no reliable and valid measure exists to determine severity in the Dutch language. The Patient-Rated Tennis Elbow Evaluation (PRTEE) is the first questionnaire specifically designed for LE, but it is available only in English. The aim of this study was

  5. A critical evaluation of the validity and the reliability of global competency constructs for supervisor assessment of junior medical trainees

    NARCIS (Netherlands)

    McGill, D.A.; Vleuten, C.P.M. van der; Clarke, M.J.

    2013-01-01

    Supervisor assessments are critical for both formative and summative assessment in the workplace. Supervisor ratings remain an important source of such assessment in many educational jurisdictions even though there is ambiguity about their validity and reliability. The aims of this evaluation are to

  6. Evaluation of the Influence of the Logistic Operations Reliability on the Total Costs of a Supply Chain

    Directory of Open Access Journals (Sweden)

    Lukinskiy Valery

    2016-12-01

    Full Text Available Integral processes between the material and related flows in supply chains are nowadays being developed more and more in logistics. However, in spite of the increasing volume of statistical data reflecting these integral processes, the question of how the reliability indexes of logistic operations influence total logistics costs remains open and requires corresponding research.

  7. Assessing Reliability and Validity of the "GroPromo" Audit Tool for Evaluation of Grocery Store Marketing and Promotional Environments

    Science.gov (United States)

    Kerr, Jacqueline; Sallis, James F.; Bromby, Erica; Glanz, Karen

    2012-01-01

    Objective: To evaluate reliability and validity of a new tool for assessing the placement and promotional environment in grocery stores. Methods: Trained observers used the "GroPromo" instrument in 40 stores to code the placement of 7 products in 9 locations within a store, along with other promotional characteristics. To test construct validity,…

  8. Nutrition and Physical Activity Knowledge Assessment: Development of Questionnaires and Evaluation of Reliability in African American and Latino Children

    Science.gov (United States)

    Roberts, Lindsay S.; Sharma, Sushma; Hudes, Mark L.; Fleming, Sharon E.

    2012-01-01

    Background: African-American and Latino children living in neighborhoods with a low-socioeconomic index are more at risk of obesity-associated metabolic disease than their higher socioeconomic index and/or white peers. Currently, consistent and reliable questionnaires to evaluate nutrition and physical activity knowledge in these children are…

  9. A new method to evaluate the sealing reliability of the flanged connections for Molten Salt Reactors

    Energy Technology Data Exchange (ETDEWEB)

    Li, Qiming, E-mail: liqiming@sinap.ac.cn [Shanghai Institute of Applied Physics, Chinese Academy of Sciences, Shanghai 201800 (China); Key Laboratory of Nuclear Radiation and Nuclear Energy Technology, Chinese Academy of Sciences, Shanghai 201800 (China); Tian, Jian; Zhou, Chong [Shanghai Institute of Applied Physics, Chinese Academy of Sciences, Shanghai 201800 (China); Key Laboratory of Nuclear Radiation and Nuclear Energy Technology, Chinese Academy of Sciences, Shanghai 201800 (China); Wang, Naxiu, E-mail: wangnaxiu@sinap.ac.cn [Shanghai Institute of Applied Physics, Chinese Academy of Sciences, Shanghai 201800 (China); Key Laboratory of Nuclear Radiation and Nuclear Energy Technology, Chinese Academy of Sciences, Shanghai 201800 (China)

    2015-06-15

    Highlights: • We newly evaluate the sealing reliability of the flanged connections for MSRs. • We focus on the passive decrease of the leak impetus in flanged connections. • The modified flanged connections acquire a self-adjusting sealing ability. • Effects of redesigned flange configurations on molten salt leakage are discussed. - Abstract: The Thorium based Molten Salt Reactor (TMSR) project is a future Generation IV nuclear reactor system proposed by the Chinese Academy of Sciences with the strategic goal of meeting the growing energy needs of Chinese economic development and social progress. It is based on liquid salts serving as both fuel and primary coolant, and consequently great challenges arise in the sealing of the flanged connections. In this study, an improved prototype flange assembly is developed on the basis of the Freeze-Flange initially developed by Oak Ridge National Laboratory (ORNL). The calculation results of the finite element model established to analyze the temperature profile of the Freeze-Flange agree well with the experimental data, which indicates that the numerical simulation method is credible. For further consideration, the ideal-gas thermodynamic model, together with a mathematical approximation, is newly borrowed to theoretically evaluate the sealing performance of the modified Freeze-Flange and the traditional double-gasket bolted flange joint. This study focuses on the passive decrease of the leak driving force due to the multiple gaskets introduced in flanged connections for MSRs. The effects of the redesigned flange configuration on molten salt leakage resistance are discussed in detail.

  10. Identification and Evaluation of Reliable Reference Genes in the Medicinal Fungus Shiraia bambusicola.

    Science.gov (United States)

    Song, Liang; Li, Tong; Fan, Li; Shen, Xiao-Ye; Hou, Cheng-Lin

    2016-04-01

    The stability of reference genes plays a vital role in real-time quantitative reverse transcription polymerase chain reaction (qRT-PCR) analysis, which is generally regarded as a convenient and sensitive tool for the analysis of gene expression. A well-known medicinal fungus, Shiraia bambusicola, has great potential in the pharmaceutical, agricultural and food industries, but its suitable reference genes have not yet been determined. In the present study, 11 candidate reference genes in S. bambusicola were first evaluated and validated comprehensively. To identify the suitable reference genes for qRT-PCR analysis, three software-based algorithms, geNorm, NormFinder and BestKeeper, were applied to rank the tested genes. RNA samples were collected from seven fermentation stages using different media (potato dextrose or Czapek medium) and under different light conditions (12-h light/12-h dark and all-dark). The three most appropriate reference genes, ubi, tfc and ags, were able to normalize the qRT-PCR results under the culturing conditions of 12-h light/12-h dark, whereas the other three genes, vac, gke and acyl, performed better in the culturing conditions of all-dark growth. Therefore, under different light conditions, at least two reference genes (ubi and vac) could be employed to assure the reliability of qRT-PCR results. For both the natural culture medium (the most appropriate genes of this group: ubi, tfc and ags) and the chemically defined synthetic medium (the most stable genes of this group: tfc, vac and ef), the tfc gene remained the best gene used for normalizing the gene expression found with qRT-PCR. It is anticipated that these results would improve the selection of suitable reference genes for qRT-PCR assays and lay the foundation for an accurate analysis of gene expression in S. bambusicola.
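
    Of the three algorithms named above, geNorm is the simplest to state: a gene's stability measure M is the mean standard deviation of its pairwise log2 expression ratios against every other candidate gene, with lower M indicating higher stability. A minimal Python sketch on hypothetical expression values (an actual analysis would use the qRT-PCR data):

        import numpy as np

        # Rows = samples, columns = candidate reference genes (hypothetical values).
        rng = np.random.default_rng(2)
        expr = rng.lognormal(mean=5, sigma=0.2, size=(14, 11))
        genes = [f"gene{i}" for i in range(11)]

        def genorm_m(expr):
            # geNorm stability M: for each gene, the mean standard deviation of
            # pairwise log2 expression ratios against every other gene.
            log2 = np.log2(expr)
            n = expr.shape[1]
            return np.array([
                np.mean([np.std(log2[:, j] - log2[:, k], ddof=1)
                         for k in range(n) if k != j])
                for j in range(n)
            ])

        for gene, m in sorted(zip(genes, genorm_m(expr)), key=lambda t: t[1]):
            print(f"{gene}: M = {m:.3f}")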

  11. Reliability of multiparametric prostatic MRI quantitative data in the evaluation of prostate cancer aggressiveness

    Directory of Open Access Journals (Sweden)

    Haisam Atta

    2017-09-01

    Full Text Available Purpose: To compare the quantitative data of multiparametric prostatic MRI with the Gleason scores of histopathological analysis. Materials and methods: One hundred twenty-two patients underwent multiparametric MRI of the prostate. Functional MRI quantitative data (diffusion with mean ADC value, with DWI employing b-values of 50, 400, 800, 1000 and 2000 s/mm2, and the spectroscopic metabolic ratio from multivoxel MR spectroscopy) were compared with the Gleason scores of the histopathological results. Malignant cases were classified into three groups according to their Gleason score: group I with Gleason score ≤6, group II with Gleason score 7, and Gleason scores 8–10 stratified as group III. Results: The histopathological analysis revealed 78 malignant cases and 44 benign cases. There was a significant statistical difference between group I and the other two groups (p < 0.001) regarding the quantitative mean ADC value and the metabolic spectroscopic ratio. There was no significant statistical difference between groups II and III (p = 0.2 for the mean ADC difference and p = 0.8 for the metabolic spectroscopic ratio), with a weak negative correlation between ADC and Gleason score [rs = −0.26] and a significant positive correlation (p = 0.02) for the MRSI metabolic ratio [rs = 0.2]. Conclusion: The quantitative data of functional imaging of the prostate are reliable in evaluating prostatic cancer aggressiveness and in the proper construction of a therapeutic plan. Keywords: mpMRI prostate cancer aggressiveness

  12. RELIABILITY OF POSITRON EMISSION TOMOGRAPHY-COMPUTED TOMOGRAPHY IN EVALUATION OF TESTICULAR CARCINOMA PATIENTS.

    Science.gov (United States)

    Nikoletić, Katarina; Mihailović, Jasna; Matovina, Emil; Žeravica, Radmila; Srbovan, Dolores

    2015-01-01

    The study was aimed at assessing the reliability of 18F-fluorodeoxyglucose positron emission tomography-computed tomography scans in the evaluation of testicular carcinoma patients. The study sample consisted of 26 scans performed in 23 patients with testicular carcinoma. According to the pathohistological finding, 14 patients had seminomas, 7 had nonseminomas and 2 patients had a mixed histological type. In 17 patients, the initial treatment was orchiectomy+chemotherapy, 2 patients had orchiectomy+chemotherapy+retroperitoneal lymph node dissection, 3 patients had orchiectomy only and one patient was treated with chemotherapy only. Abnormal computed tomography was the main cause for the oncologist to refer the patient to positron emission tomography-computed tomography scanning (in 19 scans); magnetic resonance imaging abnormalities prompted 1 scan, high levels of tumor markers 3 scans, and 3 scans were performed for follow-up. Positron emission tomography-computed tomography imaging results were compared with histological results, other imaging modalities or the clinical follow-up of the patients. Positron emission tomography-computed tomography scans were positive in 6 and negative in 20 patients. In two patients, positron emission tomography-computed tomography was false positive. There were 20 negative positron emission tomography-computed tomography scans performed in 18 patients; one patient was lost for data analysis. Clinically stable disease was confirmed in 18 follow-up scans performed in 16 patients. The values of sensitivity, specificity, accuracy, and positive and negative predictive value were 60%, 95%, 75%, 88% and 90.5%, respectively. A high negative predictive value obtained in our study (90.5%) suggests that there is a small possibility for a patient to have future relapse after a normal positron emission tomography-computed tomography study. However, since the sensitivity and positive predictive value of the study are rather low, there are limitations of positive

  13. A new method to evaluate the sealing reliability of the flanged connections for Molten Salt Reactors

    International Nuclear Information System (INIS)

    Li, Qiming; Tian, Jian; Zhou, Chong; Wang, Naxiu

    2015-01-01

    Highlights: • We newly evaluate the sealing reliability of the flanged connections for MSRs. • We focus on the passive decrease of the leak impetus in flanged connections. • The modified flanged connections acquire a self-adjusting sealing ability. • Effects of redesigned flange configurations on molten salt leakage are discussed. - Abstract: The Thorium based Molten Salt Reactor (TMSR) project is a future Generation IV nuclear reactor system proposed by the Chinese Academy of Sciences with the strategic goal of meeting the growing energy needs of Chinese economic development and social progress. It is based on liquid salts serving as both fuel and primary coolant, and consequently great challenges arise in the sealing of the flanged connections. In this study, an improved prototype flange assembly is developed on the basis of the Freeze-Flange initially developed by Oak Ridge National Laboratory (ORNL). The calculation results of the finite element model established to analyze the temperature profile of the Freeze-Flange agree well with the experimental data, which indicates that the numerical simulation method is credible. For further consideration, the ideal-gas thermodynamic model, together with a mathematical approximation, is newly borrowed to theoretically evaluate the sealing performance of the modified Freeze-Flange and the traditional double-gasket bolted flange joint. This study focuses on the passive decrease of the leak driving force due to the multiple gaskets introduced in flanged connections for MSRs. The effects of the redesigned flange configuration on molten salt leakage resistance are discussed in detail

  14. The Evaluation Method of the Lightning Strike on Transmission Lines Aiming at Power Grid Reliability

    Science.gov (United States)

    Wen, Jianfeng; Wu, Jianwei; Huang, Liandong; Geng, Yinan; Yu, zhanqing

    2018-01-01

    Lightning protection of power systems focuses on reducing the flashover rate, distinguishing lines only by voltage level, without considering the functional differences between transmission lines or analyzing the effect on power grid reliability. As a result, the lightning protection design of general transmission lines is excessive, while that of key lines is insufficient. In order to solve this problem, an analysis method for lightning strikes on transmission lines oriented to power grid reliability is given. Full wave process theory is used to analyze lightning back striking; the leader propagation model is used to describe the process of shielding failure of transmission lines. The index of power grid reliability is introduced, and the effect of transmission line faults on the reliability of the power system is discussed in detail.

  15. Reliability-based evaluation of bridge components for consistent safety margins.

    Science.gov (United States)

    2010-10-01

    The Load and Resistance Factor Design (LRFD) approach is based on the concept of structural reliability. The approach is more rational than former design approaches such as Load Factor Design or Allowable Stress Design. The LRFD Specification fo...

  16. Reliability Evaluation of Bridges Based on Nonprobabilistic Response Surface Limit Method

    OpenAIRE

    Chen, Xuyong; Chen, Qian; Bian, Xiaoya; Fan, Jianping

    2017-01-01

    Due to many uncertainties in nonprobabilistic reliability assessment of bridges, the limit state function is generally unknown. The traditional nonprobabilistic response surface method is a lengthy and oscillating iteration process and leads to difficultly solving the nonprobabilistic reliability index. This article proposes a nonprobabilistic response surface limit method based on the interval model. The intention of this method is to solve the upper and lower limits of the nonprobabilistic ...

  17. Reliability measures of functional magnetic resonance imaging in a longitudinal evaluation of mild cognitive impairment.

    Science.gov (United States)

    Zanto, Theodore P; Pa, Judy; Gazzaley, Adam

    2014-01-01

    As the aging population grows, it has become increasingly important to carefully characterize amnestic mild cognitive impairment (aMCI), a preclinical stage of Alzheimer's disease (AD). Functional magnetic resonance imaging (fMRI) is a valuable tool for monitoring disease progression in selectively vulnerable brain regions associated with AD neuropathology. However, the reliability of fMRI data in longitudinal studies of older adults with aMCI is largely unexplored. To address this, aMCI participants completed two visual working memory tasks, a Delayed-Recognition task and a One-Back task, on three separate scanning sessions over a three-month period. Test-retest reliability of the fMRI blood oxygen level dependent (BOLD) activity was assessed using an intraclass correlation (ICC) analysis approach. Results indicated that brain regions engaged during the task displayed greater reliability across sessions compared to regions that were not utilized by the task. During task-engagement, differential reliability scores were observed across the brain such that the frontal lobe, medial temporal lobe, and subcortical structures exhibited fair to moderate reliability (ICC=0.3-0.6), while temporal, parietal, and occipital regions exhibited moderate to good reliability (ICC=0.4-0.7). Additionally, reliability across brain regions was more stable when three fMRI sessions were used in the ICC calculation relative to two fMRI sessions. In conclusion, the fMRI BOLD signal is reliable across scanning sessions in this population and thus a useful tool for tracking longitudinal change in observational and interventional studies in aMCI. © 2013.
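
    The ICC figures quoted above can be reproduced in form from a two-way ANOVA decomposition. The paper's exact ICC variant is not stated in this abstract, so the sketch below uses one common choice, the consistency ICC(3,1), on hypothetical region-by-session data.

        import numpy as np

        def icc_3_1(data):
            # Two-way mixed, consistency ICC(3,1); data: subjects x sessions.
            n, k = data.shape
            grand = data.mean()
            ss_rows = k * ((data.mean(axis=1) - grand) ** 2).sum()
            ss_cols = n * ((data.mean(axis=0) - grand) ** 2).sum()
            ss_err = ((data - grand) ** 2).sum() - ss_rows - ss_cols
            ms_rows = ss_rows / (n - 1)
            ms_err = ss_err / ((n - 1) * (k - 1))
            return (ms_rows - ms_err) / (ms_rows + (k - 1) * ms_err)

        # Hypothetical BOLD estimates for 10 regions over 3 scan sessions.
        rng = np.random.default_rng(3)
        data = rng.normal(size=(10, 1)) + rng.normal(scale=0.5, size=(10, 3))
        print(f"ICC(3,1) = {icc_3_1(data):.2f}")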

  18. The precision and reliability evaluation of 3-dimensional printed damaged bone and prosthesis models by stereo lithography appearance.

    Science.gov (United States)

    Zou, Yun; Han, Qing; Weng, Xisheng; Zou, Yongwei; Yang, Yingying; Zhang, Kesong; Yang, Kerong; Xu, Xiaolin; Wang, Chenyu; Qin, Yanguo; Wang, Jincheng

    2018-02-01

    Recently, the clinical application of 3D printed models has been increasing. However, there has been no systematic study confirming the precision and reliability of 3D printed models, and some senior clinical doctors mistrust their reliability in clinical application. The purpose of this study was to evaluate the precision and reliability of stereolithography appearance (SLA) 3D printed models. Several related parameters were selected to assess the reliability of the SLA 3D printed model. The computed tomography (CT) data of bone/prosthesis and model were collected and 3D reconstructed. Anatomical parameters were measured and statistical analysis was performed; the intraclass correlation coefficient (ICC) was used to evaluate the similarity between the model and the real bone/prosthesis, and the absolute difference (mm) and relative difference (%) were calculated. For the prosthesis model, the 3-dimensional error was measured. There was no significant difference in the anatomical parameters except the max height (MH) of the long bone. All the ICCs were greater than 0.990. The maximum absolute and relative differences were 0.45 mm and 1.10%; the 3-dimensional error analysis showed positive/negative distances of 0.273 mm/0.237 mm. The application of SLA 3D printed models in the diagnosis and treatment of complex orthopedic disease is reliable and precise.

  19. Reliability engineering

    International Nuclear Information System (INIS)

    Lee, Chi Woo; Kim, Sun Jin; Lee, Seung Woo; Jeong, Sang Yeong

    1993-08-01

    This book starts with the question of what reliability is, covering the origin of reliability problems, the definition of reliability, and the use of reliability. It also deals with probability and the calculation of reliability, the reliability function and failure rate, probability distributions in reliability, estimation of MTBF, processes of probability distributions, down time, maintainability and availability, breakdown maintenance and preventive maintenance, design for reliability, reliability prediction and statistics, reliability testing, reliability data, and the design and management of reliability.

  20. Systematic evaluation of the teaching qualities of Obstetrics and Gynecology faculty: reliability and validity of the SETQ tools.

    Science.gov (United States)

    van der Leeuw, Renée; Lombarts, Kiki; Heineman, Maas Jan; Arah, Onyebuchi

    2011-05-03

    The importance of effective clinical teaching for the quality of future patient care is globally understood. Due to recent changes in graduate medical education, new tools are needed to provide faculty with reliable and individualized feedback on their teaching qualities. This study validates two instruments underlying the System for Evaluation of Teaching Qualities (SETQ) aimed at measuring and improving the teaching qualities of obstetrics and gynecology faculty. This cross-sectional multi-center questionnaire study was set in seven general teaching hospitals and two academic medical centers in the Netherlands. Seventy-seven residents and 114 faculty were invited to complete the SETQ instruments within a one-month window between September 2008 and September 2009. To assess reliability and validity of the instruments, we used exploratory factor analysis, inter-item correlation, reliability coefficient alpha and inter-scale correlations. We also compared composite scales from factor analysis to global ratings. Finally, the number of residents' evaluations needed per faculty for reliable assessments was calculated. A total of 613 evaluations were completed by 66 residents (85.7% response rate). 99 faculty (86.8% response rate) participated in self-evaluation. Factor analysis yielded five scales with high reliability (Cronbach's alpha for residents and faculty, respectively): learning climate (0.86 and 0.75), professional attitude (0.89 and 0.81), communication of learning goals (0.89 and 0.82), evaluation of residents (0.87 and 0.79) and feedback (0.87 and 0.86). Item-total, inter-scale and scale-global rating correlation coefficients were significant. The SETQ instruments appear reliable and valid for evaluating the teaching qualities of obstetrics and gynecology faculty. Future research should examine improvement of teaching qualities when using SETQ.

  1. Reliability of a functional test battery evaluating functionality, proprioception, and strength in recreational athletes with functional ankle instability.

    Science.gov (United States)

    Sekir, U; Yildiz, Y; Hazneci, B; Ors, F; Saka, T; Aydin, T

    2008-12-01

    In contrast to the single evaluation methods used in the past, the combination of multiple tests allows one to obtain a global assessment of the ankle joint. The aim of this study was to determine the reliability of the different tests in a functional test battery. Twenty-four male recreational athletes with unilateral functional ankle instability (FAI) were recruited for this study. One component of the test battery included five different functional ability tests: a single limb hopping course, a single-legged and a triple-legged hop for distance, and a six-meter and a cross six-meter hop for time. The ankle joint position sense and one leg standing test were used for evaluation of proprioception and sensorimotor control. The isokinetic strengths of the ankle invertor and evertor muscles were evaluated at a velocity of 120 degrees/s. The reliability of the test battery was assessed by calculating the intraclass correlation coefficient (ICC). Each subject was tested two times, with an interval of 3-5 days between the test sessions. The ICCs for ankle functional and proprioceptive ability showed high reliability (ICCs ranging from 0.94 to 0.98). Additionally, isokinetic ankle joint inversion and eversion strength measurements showed good to high reliability (ICCs between 0.82 and 0.98). The functional test battery investigated in this study proved to be a reliable tool for the assessment of athletes with functional ankle instability. Therefore, clinicians may obtain reliable information from the functional test battery during the assessment of ankle joint performance in patients with functional ankle instability.

  2. Evaluating the reliability of an injury prevention screening tool: Test-retest study.

    Science.gov (United States)

    Gittelman, Michael A; Kincaid, Madeline; Denny, Sarah; Wervey Arnold, Melissa; FitzGerald, Michael; Carle, Adam C; Mara, Constance A

    2016-10-01

    A standardized injury prevention (IP) screening tool can identify family risks and allow pediatricians to address behaviors. To assess behavior changes on later screens, the tool must be reliable for an individual and ideally between household members. Little research has examined the reliability of safety screening tool questions. This study assessed the test-retest reliability of parent responses on an existing IP questionnaire and also compared responses between household parents. Investigators recruited parents of children 0 to 1 year of age during admission to a tertiary care children's hospital. When both parents were present, one was chosen as the "primary" respondent. Primary respondents completed the 30-question IP screening tool after consent, and they were re-screened approximately 4 hours later to test individual reliability. The "second" parent, when present, only completed the tool once. All participants received a 10-dollar gift card. Cohen's kappa was used to estimate test-retest reliability and inter-rater agreement. Standard test-retest criteria consider kappa values of 0.0 to 0.40 poor to fair, 0.41 to 0.60 moderate, 0.61 to 0.80 substantial, and 0.81 to 1.00 almost perfect reliability. One hundred five families participated, with five lost to follow-up. Thirty-two (30.5%) parent dyads completed the tool. Primary respondents were generally mothers (88%) and Caucasian (72%). Test-retest responses of the primary respondents showed almost perfect reliability, averaging 0.82 (SD = 0.13, range 0.49-1.00). Seventeen questions had almost perfect test-retest reliability and 11 had substantial reliability. However, inter-rater agreement between household members for 12 objective questions showed little agreement between responses; inter-rater agreement averaged 0.35 (SD = 0.34, range -0.19-1.00). One question had almost perfect inter-rater agreement and two had substantial inter-rater agreement. The IP screening tool used by a single individual had excellent
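
    The kappa statistics reported above follow Cohen's standard definition: observed agreement corrected for the agreement expected by chance. A minimal Python sketch on hypothetical yes/no screening answers:

        from collections import Counter

        def cohens_kappa(a, b):
            # Cohen's kappa for two paired lists of categorical responses.
            n = len(a)
            p_obs = sum(x == y for x, y in zip(a, b)) / n
            ca, cb = Counter(a), Counter(b)
            p_exp = sum(ca[c] * cb[c] for c in set(ca) | set(cb)) / (n * n)
            return (p_obs - p_exp) / (1 - p_exp)

        # Hypothetical yes/no answers: first vs. repeat screening, same parent.
        first  = ["y", "y", "n", "y", "n", "y", "y", "n", "y", "y"]
        repeat = ["y", "y", "n", "y", "y", "y", "y", "n", "y", "n"]
        print(f"kappa = {cohens_kappa(first, repeat):.2f}")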

  3. Field reliability of competence to stand trial opinions: How often do evaluators agree, and what do judges decide when evaluators disagree?

    Science.gov (United States)

    Gowensmith, W Neil; Murrie, Daniel C; Boccaccini, Marcus T

    2012-04-01

    Despite many studies that examine the reliability of competence to stand trial (CST) evaluations, few shed light on "field reliability," or agreement among forensic evaluators in routine practice. We reviewed 216 cases from Hawaii, which requires three separate evaluations from independent clinicians for each felony defendant referred for CST evaluation. Results revealed moderate agreement. In 71% of initial CST evaluations, all evaluators agreed about a defendant's competence or incompetence (kappa = .65). Agreement was somewhat lower (61%, kappa = .57) in re-evaluations of defendants who were originally found incompetent and sent for restoration services. We also examined the decisions judges made about a defendant's CST. When evaluators disagreed, judges tended to make decisions consistent with the majority opinion. But when judges disagreed with the majority opinion, they more often did so to find a defendant incompetent than competent, suggesting a generally conservative approach. Overall, results reveal moderate agreement among independent evaluators in routine practice. But we discuss the potential for standardized training and methodology to further improve the field reliability of CST evaluations.

  4. Differential network analysis with multiply imputed lipidomic data.

    Directory of Open Access Journals (Sweden)

    Maiju Kujala

    Full Text Available The importance of lipids for cell function and health has been widely recognized; e.g., a disorder in the lipid composition of cells has been related to atherosclerosis-caused cardiovascular disease (CVD). Lipidomics analyses are characterized by a large, yet not huge, number of mutually correlated measured variables, and their associations with outcomes are potentially of a complex nature. Differential network analysis provides a formal statistical method capable of inferential analysis to examine differences in network structures of the lipids under two biological conditions. It also guides us to identify potential relationships requiring further biological investigation. We provide a recipe for conducting a permutation test on association scores resulting from partial least squares regression with multiply imputed lipidomic data from the LUdwigshafen RIsk and Cardiovascular Health (LURIC) study, paying particular attention to the left-censored missing values typical for a wide range of data sets in the life sciences. Left-censored missing values are low-level concentrations that are known to exist somewhere between zero and a lower limit of quantification. To make full use of the LURIC data with the missing values, we utilize state-of-the-art multiple imputation techniques and propose solutions to the challenges that incomplete data sets bring to differential network analysis. The customized network analysis helps us to understand the complexities of the underlying biological processes by identifying lipids and lipid classes that interact with each other, and by recognizing the most important differentially expressed lipids between two subgroups of coronary artery disease (CAD) patients: the patients who had a fatal CVD event and the ones who remained stable during a two-year follow-up.
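
    A minimal sketch of the core of such a recipe, with deliberately simple stand-ins: uniform draws below the limit of quantification replace a proper left-censored imputation model, and absolute Pearson correlation replaces the PLS-based association score. The score is averaged over the imputed data sets and tested against a permutation null.

        import numpy as np

        rng = np.random.default_rng(4)
        n, m = 60, 5                                 # samples, imputations

        # Hypothetical lipid concentrations with left-censored missingness.
        x = rng.lognormal(size=n)
        lloq = np.quantile(x, 0.2)                   # lower limit of quantification
        x_obs = np.where(x < lloq, np.nan, x)
        y = (x > np.median(x)) + rng.normal(scale=0.5, size=n)

        # m crude imputations: draw censored values uniformly below the LLOQ.
        imputations = []
        for _ in range(m):
            xi = x_obs.copy()
            gaps = np.isnan(xi)
            xi[gaps] = rng.uniform(0.0, lloq, gaps.sum())
            imputations.append(xi)

        def score(y_vec):
            # Association score averaged over imputations (here |Pearson r|).
            return np.mean([abs(np.corrcoef(xi, y_vec)[0, 1]) for xi in imputations])

        observed = score(y)
        null = np.array([score(rng.permutation(y)) for _ in range(999)])
        p_value = (1 + (null >= observed).sum()) / (1 + len(null))
        print(f"score = {observed:.3f}, permutation p = {p_value:.3f}")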

  5. Water chemistry data acquisition, processing, evaluation and diagnostic systems in Light Water Reactors: Future improvement of plant reliability and safety

    International Nuclear Information System (INIS)

    Uchida, S.; Takiguchi, H.; Ishigure, K.

    2006-01-01

    Data acquisition, processing and evaluation systems have been applied in major Japanese PWRs and BWRs to provide (1) reliable and quick data acquisition with manpower savings in plant chemical laboratories and (2) smooth and reliable information transfer among chemists, plant operators, and supervisors. Data acquisition systems in plants consist of automatic and semi-automatic instruments for chemical analyses, e.g., X-ray fluorescence analysis and ion chromatography, while data processing systems consist of PC-based sub-systems, e.g., data storage, reliability evaluation, clear display, and document preparation for understanding the plant's own water chemistry trends. Precise and reliable evaluations of water chemistry data are required in order to improve plant reliability and safety. For this, quality assurance of the water chemistry data acquisition system is needed. At the same time, theoretical models are being applied to bridge the gaps between measured water chemistry data and the information desired to understand the interaction of materials and cooling water in plants. Major models which have already been applied for plant evaluation are: (1) water radiolysis models for BWRs and PWRs; (2) crevice radiolysis model for SCC in BWRs; and (3) crevice pH model for SG tubing in PWRs. High temperature water chemistry sensors and automatic plant diagnostic systems have been applied in only restricted areas. ECP sensors are gaining popularity as tools to determine the effects of hydrogen injection in BWR systems. Automatic plant diagnostic systems based on artificial intelligence will be more popular after having sufficient experience with off-line diagnostic systems. (author)

  6. Good validity and reliability of the forgotten joint score in evaluating the outcome of total knee arthroplasty

    DEFF Research Database (Denmark)

    Thomsen, Morten G; Latifi, Roshan; Kallemose, Thomas

    2016-01-01

    We investigated the validity and reliability of the FJS. Patients and methods - A Danish version of the FJS questionnaire was created according to internationally accepted standards. 360 participants who underwent primary TKA were invited to participate in the study. Of these, 315 were included in a validity study and 150 in a reliability study. Correlation between the Oxford knee score (OKS) and the FJS was examined and test-retest evaluation was performed. A ceiling effect was defined as participants reaching a score within 15% of the maximum achievable score. Results - The validity study revealed ... of the FJS (ICC ≥ 0.79). We found a high level of internal consistency (Cronbach's α = 0.96). The ceiling effect for the FJS was 16%, as compared to 37% for the OKS. Interpretation - The FJS showed good construct validity and test-retest reliability. It had a lower ceiling effect than the OKS. The FJS appears

  7. Principles of performance and reliability modeling and evaluation essays in honor of Kishor Trivedi on his 70th birthday

    CERN Document Server

    Puliafito, Antonio

    2016-01-01

    This book presents the latest key research into the performance and reliability aspects of dependable fault-tolerant systems and features commentary on the fields studied by Prof. Kishor S. Trivedi during his distinguished career. Analyzing system evaluation as a fundamental tenet in the design of modern systems, this book uses performance and dependability as common measures and covers novel ideas, methods, algorithms, techniques, and tools for the in-depth study of the performance and reliability aspects of dependable fault-tolerant systems. It identifies the current challenges that designers and practitioners must face in order to ensure the reliability, availability, and performance of systems, with special focus on their dynamic behaviors and dependencies, and provides system researchers, performance analysts, and practitioners with the tools to address these challenges in their work. With contributions from Prof. Trivedi's former PhD students and collaborators, many of whom are internationally recognize...

  8. Systems Analysis Programs for Hands-on Integrated Reliability Evaluations (SAPHIRE)

    International Nuclear Information System (INIS)

    C. L. Smith

    2006-01-01

    The Systems Analysis Programs for Hands-on Integrated Reliability Evaluations (SAPHIRE) is a software application developed for performing a complete probabilistic risk assessment (PRA) using a personal computer (PC) running the Microsoft Windows operating system. SAPHIRE is primarily funded by the U.S. Nuclear Regulatory Commission (NRC) and developed by the Idaho National Laboratory (INL). INL's primary role in this project is that of software developer and tester. However, INL also plays an important role in technology transfer by interfacing and supporting SAPHIRE users, who constitute a wide range of PRA practitioners from the NRC, national laboratories, the private sector, and foreign countries. SAPHIRE can be used to model a complex system's response to initiating events and quantify associated consequential outcome frequencies. Specifically, for nuclear power plant applications, SAPHIRE can identify important contributors to core damage (Level 1 PRA) and containment failure during a severe accident which lead to releases (Level 2 PRA). It can be used for a PRA where the reactor is at full power, low power, or at shutdown conditions. Furthermore, it can be used to analyze both internal and external initiating events and has special features for transforming an internal events model to a model for external events, such as flooding and fire analysis. It can also be used in a limited manner to quantify risk in terms of release consequences to the public and environment (Level 3 PRA). SAPHIRE also includes a separate module called the Graphical Evaluation Module (GEM). GEM is a special user interface linked to SAPHIRE that automates the SAPHIRE process steps for evaluating operational events at commercial nuclear power plants. Using GEM, an analyst can estimate the risk associated with operational events (for example, to calculate a conditional core damage probability) very efficiently and expeditiously. This report provides an overview of the functions
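
    The quantification behind figures such as a conditional core damage probability typically starts from minimal cut sets. The sketch below shows the rare-event approximation and the min-cut upper bound on hypothetical cut sets; it illustrates the arithmetic only, not SAPHIRE's actual data structures or algorithms.

        # Hypothetical minimal cut sets: each is a set of basic events whose
        # joint occurrence causes core damage; events assumed independent.
        basic = {"dg_fail": 1e-2, "pump_fail": 5e-3, "valve_fail": 2e-3, "ops_error": 1e-3}
        cut_sets = [{"dg_fail", "pump_fail"}, {"valve_fail"}, {"dg_fail", "ops_error"}]

        def cut_set_prob(cs):
            p = 1.0
            for event in cs:
                p *= basic[event]
            return p

        # Rare-event approximation: sum of minimal cut set probabilities.
        rare_event = sum(cut_set_prob(cs) for cs in cut_sets)

        # Min-cut upper bound: 1 - prod(1 - P(cut set)).
        mcub = 1.0
        for cs in cut_sets:
            mcub *= 1.0 - cut_set_prob(cs)
        mcub = 1.0 - mcub

        print(f"rare-event approx: {rare_event:.3e}, upper bound: {mcub:.3e}")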

  9. A national drug related problems database: evaluation of use in practice, reliability and reproducibility

    DEFF Research Database (Denmark)

    Kjeldsen, Lene Juel; Birkholm, Trine; Fischer, Hanne Lis

    2014-01-01

    Background A drug related problems database (DRP-database) was developed on request by clinical pharmacists. The information from the DRP-database has only been used locally e.g. to identify focus areas and to communicate identified DRPs to the hospital wards. Hence the quality of the data...... by clinical pharmacists with categorization performed by the project group. Reproducibility was explored by re-categorization of a sample of existing records in the DRP-database by two project group members individually. Main outcome measures Observed proportion of agreement and Fleiss' kappa as measures...... reliability study of 34 clinical pharmacists showed high inter-rater reliability with the project group (Fleiss' kappa = 0.79 with 95 % CI (0.70; 0.88)), and the reproducibility study also documented high inter-rater reliability of a sample of 379 records from the DRP-database re-categorized by two project...

  10. A comparison of genomic selection models across time in interior spruce (Picea engelmannii × glauca) using unordered SNP imputation methods.

    Science.gov (United States)

    Ratcliffe, B; El-Dien, O G; Klápště, J; Porth, I; Chen, C; Jaquish, B; El-Kassaby, Y A

    2015-12-01

    Genomic selection (GS) potentially offers an unparalleled advantage over traditional pedigree-based selection (TS) methods by reducing the time commitment required to carry out a single cycle of tree improvement. This quality is particularly appealing to tree breeders, where lengthy improvement cycles are the norm. We explored the prospect of implementing GS for interior spruce (Picea engelmannii × glauca) utilizing a genotyped population of 769 trees belonging to 25 open-pollinated families. A series of repeated tree height measurements through ages 3-40 years permitted the testing of GS methods temporally. The genotyping-by-sequencing (GBS) platform was used for single nucleotide polymorphism (SNP) discovery in conjunction with three unordered imputation methods applied to a data set with 60% missing information. Further, three diverse GS models were evaluated based on predictive accuracy (PA) and their marker effects. Moderate levels of PA (0.31-0.55) were observed and were of sufficient capacity to deliver improved selection response over TS. Additionally, PA varied substantially through time in accordance with spatial competition among trees. As expected, temporal PA was well correlated with age-age genetic correlation (r=0.99), and decreased substantially with increasing difference in age between the training and validation populations (0.04-0.47). Moreover, our imputation comparisons indicate that k-nearest neighbor and singular value decomposition yielded a greater number of SNPs and gave higher predictive accuracies than imputing with the mean. Furthermore, the ridge regression (rrBLUP) and BayesCπ (BCπ) models both yielded equal, and better, PA than the generalized ridge regression heteroscedastic effect model for the traits evaluated.
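
    The imputation comparison described above can be mimicked on synthetic genotypes: mask entries at random, impute with the per-marker mean versus k-nearest neighbours, and score accuracy on the masked entries. A minimal sketch assuming scikit-learn's KNNImputer; the paper's own kNN and singular value decomposition implementations are not reproduced here.

        import numpy as np
        from sklearn.impute import KNNImputer

        rng = np.random.default_rng(5)
        n, p = 100, 100
        freqs = rng.uniform(0.05, 0.5, size=p)
        geno = rng.binomial(2, freqs, size=(n, p)).astype(float)  # SNPs coded 0/1/2

        # Mask 60% of entries to mimic the data set's missingness rate.
        mask = rng.random(geno.shape) < 0.6
        observed = geno.copy()
        observed[mask] = np.nan

        # Per-marker mean imputation.
        mean_imp = np.where(np.isnan(observed), np.nanmean(observed, axis=0), observed)

        # k-nearest-neighbour imputation across individuals.
        knn_imp = KNNImputer(n_neighbors=10).fit_transform(observed)

        for name, imp in [("mean", mean_imp), ("kNN", knn_imp)]:
            acc = (np.rint(imp[mask]) == geno[mask]).mean()
            print(f"{name} imputation accuracy: {acc:.3f}")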

  11. Evaluation of reliability and validity of three dental color-matching devices.

    Science.gov (United States)

    Tsiliagkou, Aikaterini; Diamantopoulou, Sofia; Papazoglou, Efstratios; Kakaboura, Afrodite

    2016-01-01

    To assess the repeatability and accuracy of three dental color-matching devices under standardized and freehand measurement conditions. Two shade guides (Vita Classical A1-D4, Vita; and Vita Toothguide 3D-Master, Vita), and three color-matching devices (Easyshade, Vita; SpectroShade, MHT Optic Research; and ShadeVision, X-Rite) were used. Five shade tabs were selected from the Vita Classical A1-D4 (A2, A3.5, B1, C4, D3), and five from the Vita Toothguide 3D-Master (1M1, 2R1.5, 3M2, 4L2.5, 5M3) shade guides. Each shade tab was recorded 15 continuous, repeated times with each device under two different measurement conditions (standardized, and freehand). Both qualitative (color shade) and quantitative (L, a, and b) color characteristics were recorded. The color difference (ΔE) of each recorded value with the known values of the shade tab was calculated. The repeatability of each device was evaluated by the coefficient of variance. The accuracy of each device was determined by comparing the recorded values with the known values of the reference shade tab (one sample t test; α = 0.05). The agreement between the recorded shade and the reference shade tab was calculated. The influence of the parameters (devices and conditions) on the parameter ΔE was investigated (two-way ANOVA). Comparison of the devices was performed with Bonferroni pairwise post-hoc analysis. Under standardized conditions, repeatability of all three devices was very good, except for ShadeVision with Vita Classical A1-D4. Accuracy ranged from good to fair, depending on the device and the shade guide. Under freehand conditions, repeatability and accuracy for Easyshade and ShadeVision were negatively influenced, but not for SpectroShade, regardless of the shade guide. Based on the total of the color parameters assessed per device, SpectroShade was the most reliable of the three color-matching devices studied.
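
    The accuracy comparison above rests on the color difference ΔE between recorded and reference L, a, b values; in its simplest (CIE76) form, ΔE is the Euclidean distance between the two triples. A minimal Python sketch with hypothetical readings:

        import math

        def delta_e_cie76(lab1, lab2):
            # CIE76 color difference between two CIELAB (L*, a*, b*) triples.
            return math.dist(lab1, lab2)

        # Hypothetical readings: device measurement vs. reference shade tab.
        measured  = (78.2, 1.4, 15.3)
        reference = (79.0, 1.1, 14.0)
        print(f"Delta E = {delta_e_cie76(measured, reference):.2f}")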

  12. Examples of fatigue lifetime and reliability evaluation of larger wind turbine components

    DEFF Research Database (Denmark)

    Tarp-Johansen, N.J.

    2003-01-01

    This report is one out of several that constitute the final report on the ELSAM-funded PSO project “Vindmøllekomponenters udmattelsesstyrke og levetid”, project no. 2079, which regards the lifetime distribution of larger wind turbine components in a generic turbine that has real-life dimensions....... Though it was the initial intention of the project to consider only the distribution of lifetimes, the work reported in this document also provides calculations of reliabilities and partial load safety factors under specific assumptions about uncertainty sources, as reliabilities are considered...

  13. Reliability database development for use with an object-oriented fault tree evaluation program

    Science.gov (United States)

    Heger, A. Sharif; Harringtton, Robert J.; Koen, Billy V.; Patterson-Hine, F. Ann

    1989-01-01

    A description is given of the development of a fault-tree analysis method using object-oriented programming. In addition, the authors discuss the programs that have been developed, or are under development, to connect a fault-tree analysis routine to a reliability database. To assess the performance of the routines, a relational database simulating one of the nuclear power industry databases has been constructed. For a realistic assessment of the project's results, the use of one of the existing nuclear power reliability databases is planned.
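
    As a sketch of the approach described, the fragment below models a fault tree with event and gate objects whose failure probabilities are pulled from a database, here a plain dict standing in for the relational reliability database; all names and numbers are illustrative.

        # Object-oriented fault-tree evaluation fed by a reliability "database".
        class BasicEvent:
            def __init__(self, name):
                self.name = name
            def probability(self, db):
                return db[self.name]                        # component failure probability

        class Gate:
            def __init__(self, kind, children):
                self.kind, self.children = kind, children   # "AND" or "OR"
            def probability(self, db):
                ps = [c.probability(db) for c in self.children]
                prod = 1.0
                if self.kind == "AND":                      # all inputs must fail
                    for p in ps:
                        prod *= p
                    return prod
                for p in ps:                                # OR: 1 - prod(1 - p), independence assumed
                    prod *= (1.0 - p)
                return 1.0 - prod

        db = {"pump_a": 1e-3, "pump_b": 1e-3, "valve": 5e-4}
        top = Gate("OR", [Gate("AND", [BasicEvent("pump_a"), BasicEvent("pump_b")]),
                          BasicEvent("valve")])
        print(top.probability(db))                          # ~5.0e-4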

  14. Increasing imputation and prediction accuracy for Chinese Holsteins using joint Chinese-Nordic reference population

    DEFF Research Database (Denmark)

    Ma, Peipei; Lund, Mogens Sandø; Ding, X

    2015-01-01

    This study investigated the effect of including Nordic Holsteins in the reference population on the imputation accuracy and prediction accuracy for Chinese Holsteins. The data used in this study include 85 Chinese Holstein bulls genotyped with both 54K chip and 777K (HD) chip, 2862 Chinese cows...... was improved slightly when using the marker data imputed based on the combined HD reference data, compared with using the marker data imputed based on the Chinese HD reference data only. On the other hand, when using the combined reference population including 4398 Nordic Holstein bulls, the accuracy...... to increase reference population rather than increasing marker density...

  15. BACHSCORE. A tool for evaluating efficiently and reliably the quality of large sets of protein structures

    Science.gov (United States)

    Sarti, E.; Zamuner, S.; Cossio, P.; Laio, A.; Seno, F.; Trovato, A.

    2013-12-01

    In protein structure prediction it is of crucial importance, especially at the refinement stage, to efficiently score large sets of models by selecting the ones that are closest to the native state. We here present a new computational tool, BACHSCORE, that allows its users to rank different structural models of the same protein according to their quality, evaluated by using the BACH++ (Bayesian Analysis Conformation Hunt) scoring function. The original BACH statistical potential was already shown to discriminate the protein native state with very good reliability in large sets of misfolded models of the same protein. BACH++ features a novel upgrade in the solvation potential of the scoring function, now computed by adapting the LCPO (Linear Combinations of Pairwise Overlaps) algorithm. This change further enhances the already good performance of the scoring function. BACHSCORE can be accessed directly through the web server: bachserver.pd.infn.it. Catalogue identifier: AEQD_v1_0 Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AEQD_v1_0.html Program obtainable from: CPC Program Library, Queen’s University, Belfast, N. Ireland Licensing provisions: GNU General Public License version 3 No. of lines in distributed program, including test data, etc.: 130159 No. of bytes in distributed program, including test data, etc.: 24 687 455 Distribution format: tar.gz Programming language: C++. Computer: Any computer capable of running an executable produced by a g++ compiler (4.6.3 version). Operating system: Linux, Unix OS-es. RAM: 1 073 741 824 bytes Classification: 3. Nature of problem: Evaluate the quality of a protein structural model, taking into account the possible “a priori” knowledge of a reference primary sequence that may be different from the amino-acid sequence of the model; the native protein structure should be recognized as the best model. Solution method: The contact potential scores the occurrence of any given type of residue pair in 5 possible

  16. Reliable change indices and standardized regression-based change score norms for evaluating neuropsychological change in children with epilepsy.

    Science.gov (United States)

    Busch, Robyn M; Lineweaver, Tara T; Ferguson, Lisa; Haut, Jennifer S

    2015-06-01

    Reliable change indices (RCIs) and standardized regression-based (SRB) change score norms permit evaluation of meaningful changes in test scores following treatment interventions, like epilepsy surgery, while accounting for test-retest reliability, practice effects, score fluctuations due to error, and relevant clinical and demographic factors. Although these methods are frequently used to assess cognitive change after epilepsy surgery in adults, they have not been widely applied to examine cognitive change in children with epilepsy. The goal of the current study was to develop RCIs and SRB change score norms for use in children with epilepsy. Sixty-three children with epilepsy (age range: 6-16; M=10.19, SD=2.58) underwent comprehensive neuropsychological evaluations at two time points an average of 12 months apart. Practice effect-adjusted RCIs and SRB change score norms were calculated for all cognitive measures in the battery. Practice effects were quite variable across the neuropsychological measures, with the greatest differences observed among older children, particularly on the Children's Memory Scale and Wisconsin Card Sorting Test. There was also notable variability in test-retest reliabilities across measures in the battery, with coefficients ranging from 0.14 to 0.92. Reliable change indices and SRB change score norms for use in assessing meaningful cognitive change in children following epilepsy surgery are provided for measures with reliability coefficients above 0.50. This is the first study to provide RCIs and SRB change score norms for a comprehensive neuropsychological battery based on a large sample of children with epilepsy. Tables to aid in evaluating cognitive changes in children who have undergone epilepsy surgery are provided for clinical use. An Excel sheet to perform all relevant calculations is also available to interested clinicians or researchers. Copyright © 2015 Elsevier Inc. All rights reserved.
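
    The practice-adjusted RCI referred to above conventionally takes the Jacobson-Truax form below (the study's exact regression-based variants may differ); X1 and X2 are a child's test and retest scores, M2 - M1 the mean practice effect, SD1 the baseline standard deviation, and r12 the test-retest reliability:

        SEM = SD_1\sqrt{1 - r_{12}}, \qquad SE_{diff} = \sqrt{2\,SEM^2}, \qquad RCI = \frac{(X_2 - X_1) - (M_2 - M_1)}{SE_{diff}}

    |RCI| > 1.96 then flags change beyond what practice effects and measurement error explain at the two-tailed 95% level.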

  17. Reliability and Validity of a New Method for Isometric Back Extensor Strength Evaluation Using A Hand-Held Dynamometer.

    Science.gov (United States)

    Park, Hee-Won; Baek, Sora; Kim, Hong Young; Park, Jung-Gyoo; Kang, Eun Kyoung

    2017-10-01

    To investigate the reliability and validity of a new method for isometric back extensor strength measurement using a portable dynamometer. A chair equipped with a small portable dynamometer was designed (Power Track II Commander Muscle Tester). A total of 15 men (mean age, 34.8±7.5 years) and 15 women (mean age, 33.1±5.5 years) with no current back problems or previous history of back surgery were recruited. Subjects were asked to push the back of the chair while seated, and their isometric back extensor strength was measured by the portable dynamometer. Test-retest reliability was assessed with the intraclass correlation coefficient (ICC). For the validity assessment, the isometric back extensor strength of all subjects was measured by a widely used physical performance evaluation instrument, the BTE PrimusRS system. The limits of agreement (LoA) from the Bland-Altman plot were evaluated between the two methods. The test-retest reliability was excellent (ICC=0.82; 95% confidence interval, 0.65-0.91). The Bland-Altman plots demonstrated acceptable agreement between the two methods: the lower 95% LoA was -63.1 N and the upper 95% LoA was 61.1 N. This study shows that isometric back extensor strength measurement using a portable dynamometer has good reliability and validity.
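
    The agreement statistic reported above is straightforward to reproduce; the sketch below computes the Bland-Altman bias and 95% limits of agreement for made-up paired strength readings (the ICC would be computed analogously from the test-retest pairs).

        # Bland-Altman bias and 95% limits of agreement for paired readings.
        import numpy as np

        dynamometer = np.array([310.0, 280.0, 355.0, 400.0, 265.0])  # portable device (N)
        reference   = np.array([320.0, 300.0, 340.0, 390.0, 275.0])  # BTE PrimusRS (N)

        diff = dynamometer - reference
        bias = diff.mean()
        half_width = 1.96 * diff.std(ddof=1)                         # 1.96 SD of the differences
        print(f"bias = {bias:.1f} N, "
              f"95% LoA = ({bias - half_width:.1f}, {bias + half_width:.1f}) N")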

  18. Evaluation of the Validity and Reliability of the Waterlow Pressure Ulcer Risk Assessment Scale.

    Science.gov (United States)

    Charalambous, Charalambos; Koulori, Agoritsa; Vasilopoulos, Aristidis; Roupa, Zoe

    2018-04-01

    Prevention is the ideal strategy for tackling the problem of pressure ulcers. Pressure ulcer risk assessment scales are among the most pivotal measures applied to this end, yet much criticism has been raised regarding the validity and reliability of these scales. To investigate the validity and reliability of the Waterlow pressure ulcer risk assessment scale. The methodology used is a narrative literature review; the bibliography was reviewed through Cinahl, Pubmed, EBSCO, Medline and Google Scholar, and 26 scientific articles were identified. The articles were chosen for their direct relevance to the objective under study and their scientific quality. The construct and face validity of the Waterlow appear adequate, but with regard to content validity, changes to the age and gender categories could be beneficial. The concurrent validity cannot be assessed. The predictive validity of the Waterlow is characterized by high specificity and low sensitivity. The inter-rater reliability has been demonstrated to be inadequate; this may be due to a lack of clear definitions within the categories and differing levels of knowledge among users. Due to the limitations presented regarding the validity and reliability of the Waterlow pressure ulcer risk assessment scale, the scale should be used in conjunction with clinical assessment to provide optimum results.

  19. A Quantitative Risk Analysis Framework for Evaluating and Monitoring Operational Reliability of Cloud Computing

    Science.gov (United States)

    Islam, Muhammad Faysal

    2013-01-01

    Cloud computing offers the advantage of on-demand, reliable and cost efficient computing solutions without the capital investment and management resources to build and maintain in-house data centers and network infrastructures. Scalability of cloud solutions enable consumers to upgrade or downsize their services as needed. In a cloud environment,…

  20. Reliability of pedigree-based and genomic evaluations in selected populations

    NARCIS (Netherlands)

    Gorjanc, G.; Bijma, P.; Hickey, J.M.

    2015-01-01

    Background: Reliability is an important parameter in breeding. It measures the precision of estimated breeding values (EBV) and, thus, potential response to selection on those EBV. The precision of EBV is commonly measured by relating the prediction error variance (PEV) of EBV to the base population
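
    The record breaks off mid-definition; the standard relation it is leading up to expresses the reliability of an individual's EBV in terms of the prediction error variance and the base-population additive genetic variance (textbook form, ignoring inbreeding, not a quote from the paper):

        r^2 = 1 - \frac{\mathrm{PEV}}{\sigma_A^2}

    so an EBV whose PEV equals 20% of \sigma_A^2 has a reliability of 0.80.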

  1. An Evaluation of the Reliability of the Food Label Literacy Questionnaire in Russian

    Science.gov (United States)

    Gurevich, Konstantin G.; Reynolds, Jesse; Bifulco, Lauren; Doughty, Kimberly; Njike, Valentine; Katz, David L.

    2016-01-01

    Objective: School-based nutrition education can promote the development of skills, such as food label reading, that can contribute to making healthier food choices. The purpose of this study was to assess the reliability of a Russian language version of the previously validated Food Label Literacy for Applied Nutrition Knowledge (FLLANK)…

  2. The Reliability, Validity, and Evaluation of the Objective Structured Clinical Examination in Podiatry (Chiropody).

    Science.gov (United States)

    Woodburn, Jim; Sutcliffe, Nick

    1996-01-01

    The Objective Structured Clinical Examination (OSCE), initially developed for undergraduate medical education, has been adapted for assessment of clinical skills in podiatry students. A 12-month pilot study found the test had relatively low levels of reliability, high construct and criterion validity, and good stability of performance over time.…

  3. Reliability evaluation of power supply and distribution for special heat removal systems in nuclear power stations

    International Nuclear Information System (INIS)

    Jazbec, D.

    1982-01-01

    An example of the power supply and distribution of a Special Emergency Heat Removal System (SEHR) shows how an engineering organization may, with the aid of the analytical method of minimal cut sets, optimize system reliability. The necessary simple calculation methods are given herein. (Auth.)
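
    As a sketch of the minimal-cut-set calculation referred to above, the rare-event approximation sums, over all minimal cut sets, the product of the component unavailabilities in each set; the cut sets and numbers below are hypothetical.

        # Rare-event approximation: Q_sys ~ sum over cut sets of prod(q_i).
        import math

        cut_sets = [("diesel_a", "diesel_b"), ("bus_tie",), ("breaker_1", "breaker_2")]
        q = {"diesel_a": 1e-2, "diesel_b": 1e-2, "bus_tie": 1e-4,
             "breaker_1": 3e-3, "breaker_2": 3e-3}

        q_sys = sum(math.prod(q[comp] for comp in cs) for cs in cut_sets)
        print(f"system unavailability ~ {q_sys:.2e}")   # ~2.09e-4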

  5. Evaluation and Design Tools for the Reliability of Wind Power Converter System

    DEFF Research Database (Denmark)

    Ma, Ke; Zhou, Dao; Blaabjerg, Frede

    2015-01-01

    ...grid. As a result, the correct assessment of reliable performance for power electronics is a crucial and emerging need; the assessment is essential for design improvement, as well as for the extension of converter lifetime and reduction of energy cost. Unfortunately, there still exists a lack...

  6. Evaluation of reproducibility and reliability of 3D soft tissue analysis using 3D stereophotogrammetry.

    NARCIS (Netherlands)

    Plooij, J.M.; Swennen, G.R.J.; Rangel, F.A.; Maal, T.J.J.; Schutyser, F.A.C.; Bronkhorst, E.M.; Kuijpers-Jagtman, A.M.; Berge, S.J.

    2009-01-01

    In 3D photographs the bony structures are neither available nor palpable; therefore, the bone-related landmarks, such as the soft tissue gonion, need to be redefined. The purpose of this study was to determine the reproducibility and reliability of 49 soft tissue landmarks, including newly defined

  7. Reliability and sensitivity of visual scales versus volumetry for evaluating white matter hyperintensity progression

    DEFF Research Database (Denmark)

    Gouw, A A; van der Flier, W M; van Straaten, E C W

    2008-01-01

    ...the reliability and sensitivity of cross-sectional and longitudinal visual scales with volumetry for measuring WMH progression. METHODS: Twenty MRI scan pairs (interval 2 years) were included from the Amsterdam center of the LADIS study. Semi-automated volumetry of WMH was performed twice by one rater. Three...

  8. Evaluation of conventional electric power generating industry quality assurance and reliability practices

    Energy Technology Data Exchange (ETDEWEB)

    Anderson, R.T.; Lauffenburger, H.A.

    1981-03-01

    The techniques and practices utilized in an allied industry (electric power generation) that might serve as a baseline for formulating Quality Assurance and Reliability (QA and R) procedures for photovoltaic solar energy systems were studied. The study results provide direct near-term input for establishing validation methods as part of the SERI performance criteria and test standards development task.

  9. Reliability and Validity of SERVQUAL Scores Used To Evaluate Perceptions of Library Service Quality.

    Science.gov (United States)

    Thompson, Bruce; Cook, Colleen

    Research libraries are increasingly supplementing collection counts with perceptions of service quality as indices of status and productivity. The present study was undertaken to explore the reliability and validity of scores from the SERVQUAL measurement protocol (A. Parasuraman and others, 1991), which has previously been used in this type of…

  10. A study on the dependency evaluation for multiple human actions in human reliability analysis of probabilistic safety assessment

    International Nuclear Information System (INIS)

    Kang, D. I.; Yang, J. E.; Jung, W. D.; Sung, T. Y.; Park, J. H.; Lee, Y. H.; Hwang, M. J.; Kim, K. Y.; Jin, Y. H.; Kim, S. C.

    1997-02-01

    This report describes the results of a study on methods for dependency evaluation and modeling, and on the limiting value of human error probability (HEP), for multiple human actions in accident sequences of probabilistic safety assessment (PSA). THERP and Parry's method, which have been generally used for dependency evaluation in human reliability analysis (HRA), are introduced and their limitations are discussed. A new dependency evaluation method for HRA is established to make up for the weak points of THERP and Parry's methods. The limiting value of HEP is also established, based on a review of several HRA-related documents. This report describes the definition, types, evaluation method, and an evaluation example of dependency to aid the reader's understanding. It is expected that these results will give guidance to HRA analysts in the dependency evaluation of multiple human actions and enable PSA analysts to understand HRA in detail. (author). 23 refs., 3 tabs., 2 figs
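
    For context, THERP's widely cited conditional-HEP adjustments for its five dependence levels (zero, low, moderate, high, complete) take the following form, where P is the nominal independent HEP of the subsequent action; these are quoted from the standard THERP handbook formulation, not from this report:

        P_{ZD} = P, \qquad P_{LD} = \frac{1 + 19P}{20}, \qquad P_{MD} = \frac{1 + 6P}{7}, \qquad P_{HD} = \frac{1 + P}{2}, \qquad P_{CD} = 1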

  11. Reliability calculations

    International Nuclear Information System (INIS)

    Petersen, K.E.

    1986-03-01

    Risk and reliability analysis is increasingly being used in evaluations of plant safety and plant reliability. The analysis can be performed either during the design process or during the operating life, with the purpose of improving the safety or the reliability. Due to plant complexity and safety and availability requirements, sophisticated tools, which are flexible and efficient, are needed. Such tools have been developed in the last 20 years and they have to be continuously refined to meet the growing requirements. Two different areas of application were analysed. In structural reliability, probabilistic approaches have been introduced in some cases for the calculation of the reliability of structures or components. A new computer program has been developed based upon numerical integration in several variables. In systems reliability, Monte Carlo simulation programs are used especially in the analysis of very complex systems. In order to increase the applicability of the programs, variance reduction techniques can be applied to speed up the calculation process. Variance reduction techniques have been studied and procedures for implementation of importance sampling are suggested. (author)
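
    To make the variance-reduction idea concrete, the sketch below estimates a small system's failure probability by Monte Carlo, sampling component failures from deliberately inflated probabilities and reweighting each trial by its likelihood ratio (importance sampling); the system structure and all numbers are illustrative, not from the report.

        # Monte Carlo failure-probability estimate with importance sampling.
        import random

        p = [1e-3, 2e-3, 5e-4]      # true component failure probabilities
        b = [0.05, 0.05, 0.05]      # inflated sampling probabilities

        def system_fails(x):
            return (x[0] and x[1]) or x[2]   # example structure: (1 AND 2) OR 3

        random.seed(1)
        n, acc = 100_000, 0.0
        for _ in range(n):
            x, w = [], 1.0
            for pi, bi in zip(p, b):
                fail = random.random() < bi
                x.append(fail)
                w *= (pi / bi) if fail else ((1 - pi) / (1 - bi))  # likelihood ratio
            if system_fails(x):
                acc += w
        print(f"P(system failure) ~ {acc / n:.2e}")   # exact value ~5.02e-4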

  12. Reliability Evaluation of Base-Metal-Electrode Multilayer Ceramic Capacitors for Potential Space Applications

    Science.gov (United States)

    Liu, David (Donhang); Sampson, Michael J.

    2011-01-01

    Base-metal-electrode (BME) ceramic capacitors are being investigated for possible use in high-reliability space-level applications. This paper focuses on how BME capacitors' construction and microstructure affect their lifetime and reliability. Examination of the construction and microstructure of commercial off-the-shelf (COTS) BME capacitors reveals great variance in dielectric layer thickness, even among BME capacitors with the same rated voltage. Compared to PME (precious-metal-electrode) capacitors, BME capacitors exhibit a denser and more uniform microstructure, with an average grain size between 0.3 and 0.5 μm, which is much less than that of most PME capacitors. BME capacitors can be fabricated with more internal electrode layers and thinner dielectric layers than PME capacitors because they have a fine-grained microstructure and do not shrink much during ceramic sintering. This makes it possible for BME capacitors to achieve a very high capacitance volumetric efficiency. The reliability of BME and PME capacitors was investigated using highly accelerated life testing (HALT). Most BME capacitors were found to fail with an early avalanche breakdown, followed by a regular dielectric wearout failure during the HALT test. When most of the early failures, characterized by avalanche breakdown, were removed, BME capacitors exhibited a minimum mean time-to-failure (MTTF) of more than 10^5 years at room temperature and rated voltage. Dielectric thickness was found to be a critical parameter for the reliability of BME capacitors. The number of stacked grains in a dielectric layer appears to play a significant role in determining BME capacitor reliability. Although dielectric layer thickness varies for a given rated voltage in BME capacitors, the number of stacked grains is relatively consistent, typically around 12 for a number of BME capacitors with a rated voltage of 25V. This may suggest that the number of grains per dielectric layer is more critical than the
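
    For extrapolating HALT results of this kind to use conditions, the acceleration model conventionally applied to multilayer ceramic capacitors is the Prokopowicz-Vaskas relation (a standard choice in the field; the paper's exact model is not quoted here):

        \frac{t_1}{t_2} = \left(\frac{V_2}{V_1}\right)^{n} \exp\left[\frac{E_a}{k}\left(\frac{1}{T_1} - \frac{1}{T_2}\right)\right]

    where t is time to failure, V the applied voltage, T the absolute temperature, n the voltage-stress exponent, E_a the activation energy, and k Boltzmann's constant.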

  13. A Step by Step Approach for Evaluating the Reliability of the Main Engine Lube Oil System for a Ship's Propulsion System

    Directory of Open Access Journals (Sweden)

    Mohan Anantharaman

    2014-09-01

    Effective and efficient maintenance is essential to ensure the reliability of a ship's main propulsion system, which in turn is interdependent with the reliability of a number of associated sub-systems. A primary step in evaluating the reliability of the ship's propulsion system is to evaluate the reliability of each of the sub-systems. This paper discusses the methodology adopted to quantify the reliability of one of the vital sub-systems, viz. the lubricating oil system, and the development of a model based on Markov analysis thereof. Having developed the model, means to improve the reliability of the system should be considered. The cost of the incremental reliability should be measured to evaluate cost benefits. A maintenance plan can then be devised to achieve the higher level of reliability. A similar approach could be applied to evaluate the reliability of all other sub-systems, finally leading to a model to evaluate and improve the reliability of the main propulsion system.
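
    The simplest Markov model of the kind described is a single repairable unit with constant failure rate \lambda and repair rate \mu, whose steady-state availability is

        A = \frac{\mu}{\lambda + \mu}

    For example, a lube oil pump with a mean time between failures of 8760 h (\lambda = 1/8760 per hour) and a mean repair time of 24 h (\mu = 1/24 per hour) has A = 8760/(8760 + 24) ≈ 0.9973. The paper's model presumably chains many more states across the sub-system's components; this is only the base case.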

  14. Development of advanced methods and related software for human reliability evaluation within probabilistic safety analyses

    International Nuclear Information System (INIS)

    Kosmowski, K.T.; Mertens, J.; Degen, G.; Reer, B.

    1994-06-01

    Human Reliability Analysis (HRA) is an important part of Probabilistic Safety Analysis (PSA). The first part of this report consists of an overview of types of human behaviour and human error, including the effect of significant performance shaping factors on human reliability. Particularly with regard to safety assessments for nuclear power plants, many HRA methods have been developed. The most important of these methods are presented and discussed in the report, together with techniques for incorporating HRA into PSA and with models of operator cognitive behaviour. Based on existing HRA methods, the concept of a software system is described. For the development of this system the utilization of modern programming tools is proposed; the essential goal is the effective application of HRA methods. A possible integration of computer-aided HRA within PSA is discussed. The features of Expert System Technology and examples of applications (PSA, HRA) are presented in four appendices. (orig.) [de]

  15. The application of cognitive models to the evaluation and prediction of human reliability

    International Nuclear Information System (INIS)

    Embrey, D.E.; Reason, J.T.

    1986-01-01

    The first section of the paper provides a brief overview of a number of important principles relevant to human reliability modeling that have emerged from cognitive models, and presents a synthesis of these approaches in the form of a Generic Error Modeling System (GEMS). The next section illustrates the application of GEMS to some well known nuclear power plant (NPP) incidents in which human error was a major contributor. The way in which design recommendations can emerge from analyses of this type is illustrated. The third section describes the use of cognitive models in the classification of human errors for prediction and data collection purposes. The final section addresses the predictive modeling of human error as part of human reliability assessment in Probabilistic Risk Assessment

  16. Reliability Assessment and Energy Loss Evaluation for Modern Wind Turbine Systems

    DEFF Research Database (Denmark)

    Zhou, Dao

    ...The cost of energy in wind turbine systems is then addressed in Chapter 5, where different wind classes and operation modes of reactive power injection are taken into account. Finally, the internal and external challenges for power converters in DFIG systems to ride through balanced grid faults......With a steady increase of the wind power penetration, the demands on wind power technology are becoming the same as those on conventional energy sources. In order to fulfill the requirements, power electronics technology is the key for modern wind turbine systems – both the Doubly...... to explore the reliability and cost of energy in modern wind turbine systems. Moreover, advanced control strategies have been proposed and developed for efficient and reliable operation during normal conditions as well as under grid faults. The documented thesis starts with the descriptions...

  17. An evaluation of the reliability and usefulness of external-initiator PRA [probabilistic risk analysis] methodologies

    International Nuclear Information System (INIS)

    Budnitz, R.J.; Lambert, H.E.

    1990-01-01

    The discipline of probabilistic risk analysis (PRA) has become so mature in recent years that it is now being used routinely to assist decision-making throughout the nuclear industry. This includes decision-making that affects design, construction, operation, maintenance, and regulation. Unfortunately, not all sub-areas within the larger discipline of PRA are equally “mature,” and therefore the many different types of engineering insights from PRA are not all equally reliable. 93 refs., 4 figs., 1 tab

  19. Reliability evaluation methodologies for ensuring container integrity of stored transuranic (TRU) waste

    International Nuclear Information System (INIS)

    Smith, K.L.

    1995-06-01

    This report presents methodologies for producing defensible estimates of expected transuranic waste storage container lifetimes at the Radioactive Waste Management Complex. These methodologies can be used to estimate transuranic waste container reliability (integrity and degradation) and serve as an analytical tool for optimizing waste container integrity. Container packaging and storage configurations, which directly affect waste container integrity, are also addressed. The methodologies presented provide a means of demonstrating compliance with Resource Conservation and Recovery Act waste storage requirements.

  20. Soil-structure interaction effects on the reliability evaluation of reactor containments

    International Nuclear Information System (INIS)

    Pires, J.; Hwang, H.; Reich, M.

    1986-01-01

    The probability-based method for the seismic reliability assessment of nuclear structures, which has been developed at Brookhaven National Laboratory (BNL), is extended to include the effects of soil-structure interaction. A reinforced concrete containment building is analyzed in order to examine soil-structure interaction effects on: (1) structural fragilities; (2) floor response spectra statistics; and (3) correlation coefficients for total acceleration responses at specified structural locations.