WorldWideScience

Sample records for reliable consistent results

  1. Multi-virulence-locus sequence typing of Staphylococcus lugdunensis generates results consistent with a clonal population structure and is reliable for epidemiological typing.

    Science.gov (United States)

    Didi, Jennifer; Lemée, Ludovic; Gibert, Laure; Pons, Jean-Louis; Pestel-Caron, Martine

    2014-10-01

Staphylococcus lugdunensis is an emergent virulent coagulase-negative staphylococcus responsible for severe infections similar to those caused by Staphylococcus aureus. To understand its potentially pathogenic capacity and gain further detailed knowledge of the molecular traits of this organism, 93 isolates from various geographic origins were analyzed by multi-virulence-locus sequence typing (MVLST), targeting seven known or putative virulence-associated loci (atlLR2, atlLR3, hlb, isdJ, SLUG_09050, SLUG_16930, and vwbl). The polymorphisms of the putative virulence-associated loci were moderate and comparable to those of the housekeeping genes analyzed by multilocus sequence typing (MLST). However, the MVLST scheme generated 43 virulence types (VTs) compared to 20 sequence types (STs) based on MLST, indicating that MVLST was significantly more discriminating (Simpson's index [D], 0.943). No hypervirulent lineage or cluster specific to carriage strains was defined. The results of multilocus sequence analysis of known and putative virulence-associated loci are consistent with a clonal population structure for S. lugdunensis, suggesting a coevolution of these genes with housekeeping genes. Indeed, the ratio of nonsynonymous to synonymous substitutions (dN/dS), Tajima's D test, and single-likelihood ancestor counting (SLAC) analysis suggest that all virulence-associated loci were under negative selection, even atlLR2 (AtlL protein) and SLUG_16930 (FbpA homologue), for which the dN/dS ratios were higher. In addition, this analysis of virulence-associated loci allowed us to propose a trilocus sequence typing scheme based on the intragenic regions of atlLR3, isdJ, and SLUG_16930, which is more discriminating than MLST for studying short-term epidemiology and further characterizing the lineages of the rare but highly pathogenic S. lugdunensis. Copyright © 2014, American Society for Microbiology. All Rights Reserved.
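The discriminatory power quoted above (Simpson's index of diversity, D = 0.943) is conventionally computed for typing schemes with the Hunter-Gaston formulation. A minimal sketch in Python; the function name and example data are illustrative, not taken from the paper:

```python
from collections import Counter

def simpsons_diversity(type_assignments):
    """Hunter-Gaston discriminatory index:
    D = 1 - [1 / (N(N-1))] * sum over types of n_j * (n_j - 1),
    where n_j is the number of isolates assigned to type j."""
    counts = Counter(type_assignments)
    n = sum(counts.values())
    if n < 2:
        raise ValueError("need at least two isolates")
    return 1.0 - sum(c * (c - 1) for c in counts.values()) / (n * (n - 1))

# Example: 6 isolates split into types A, A, A, B, B, C
print(round(simpsons_diversity(list("AAABBC")), 3))  # → 0.733
```

A scheme that splits a sample into more, smaller types yields a higher D, which is why 43 VTs score better than 20 STs on the same isolates.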

  2. Improving risk assessment by defining consistent and reliable system scenarios

    Directory of Open Access Journals (Sweden)

    B. Mazzorana

    2009-02-01

During the entire procedure of risk assessment for hydrologic hazards, the selection of consistent and reliable scenarios, constructed in a strictly systematic way, is fundamental for the quality and reproducibility of the results. However, subjective assumptions on relevant impact variables such as sediment transport intensity on the system loading side and weak point response mechanisms repeatedly cause biases in the results, and consequently affect transparency and required quality standards. Furthermore, the system response of mitigation measures to extreme event loadings represents another key variable in hazard assessment, as does integral risk management including intervention planning. Formative Scenario Analysis, as a supplement to conventional risk assessment methods, is a technique for constructing well-defined sets of assumptions to gain insight into a specific case and the potential system behaviour. The applicability of the Formative Scenario Analysis technique is demonstrated by two case studies, carried out (1) to analyse sediment transport dynamics in a torrent section equipped with control measures, and (2) to identify hazards induced by woody debris transport at hydraulic weak points. It is argued that during scenario planning in general, and with respect to integral risk management in particular, Formative Scenario Analysis allows for the development of reliable and reproducible scenarios in order to design more specifically an application framework for the sustainable assessment of natural hazard impacts. The overall aim is to optimise the hazard mapping and zoning procedure by methodologically integrating quantitative and qualitative knowledge.

  3. Internal Consistency, Retest Reliability, and their Implications For Personality Scale Validity

    Science.gov (United States)

    McCrae, Robert R.; Kurtz, John E.; Yamagata, Shinji; Terracciano, Antonio

    2010-01-01

We examined data (N = 34,108) on the differential reliability and validity of facet scales from the NEO Inventories. We evaluated the extent to which (a) psychometric properties of facet scales are generalizable across ages, cultures, and methods of measurement; and (b) validity criteria are associated with different forms of reliability. Composite estimates of facet scale stability, heritability, and cross-observer validity were broadly generalizable. Two estimates of retest reliability were independent predictors of the three validity criteria; none of the three estimates of internal consistency was. Available evidence suggests the same pattern of results for other personality inventories. Internal consistency of scales can be useful as a check on data quality, but appears to be of limited utility for evaluating the potential validity of developed scales, and it should not be used as a substitute for retest reliability. Further research on the nature and determinants of retest reliability is needed. PMID:20435807

  4. RELIABILITY ASSESSMENT OF ENTROPY METHOD FOR SYSTEM CONSISTED OF IDENTICAL EXPONENTIAL UNITS

    Institute of Scientific and Technical Information of China (English)

    Sun Youchao; Shi Jun

    2004-01-01

Reliability assessment at the adjacent unit and system levels is the most important element in the multi-level reliability synthesis of complex systems. Introducing information theory into system reliability assessment, and using the additive property of information quantity together with the principle of equivalence of information quantity, an entropy method of data information conversion is presented for systems consisting of identical exponential units. The basic conversion formulae of the entropy method for unit test data are derived from the principle of information quantity equivalence. General models for entropy-method synthesis assessment of approximate lower limits of system reliability are established according to the fundamental principle of unit reliability assessment. The applications of the entropy method are discussed by way of practical examples. Compared with the traditional methods, the entropy method is found to be valid and practicable, and the assessment results are very satisfactory.

  5. Visual perspective in autobiographical memories: reliability, consistency, and relationship to objective memory performance.

    Science.gov (United States)

    Siedlecki, Karen L

    2015-01-01

    Visual perspective in autobiographical memories was examined in terms of reliability, consistency, and relationship to objective memory performance in a sample of 99 individuals. Autobiographical memories may be recalled from two visual perspectives--a field perspective in which individuals experience the memory through their own eyes, or an observer perspective in which individuals experience the memory from the viewpoint of an observer in which they can see themselves. Participants recalled nine word-cued memories that differed in emotional valence (positive, negative and neutral) and rated their memories on 18 scales. Results indicate that visual perspective was the most reliable memory characteristic overall and is consistently related to emotional intensity at the time of recall and amount of emotion experienced during the memory. Visual perspective is unrelated to memory for words, stories, abstract line drawings or faces.

  6. Reliability and consistency of plantarflexor stretch-shortening cycle function using an adapted force sledge apparatus

    International Nuclear Information System (INIS)

    Furlong, Laura-Anne M; Harrison, Andrew J

    2013-01-01

There are various limitations to existing methods of studying plantarflexor stretch-shortening cycle (SSC) function and muscle-tendon unit (MTU) mechanics, predominantly related to measurement validity and reliability. This study utilizes an innovative adaptation of a force sledge which isolates the plantarflexors and ankle for analysis. The aim of this study was to determine the sledge loading protocol to be used, the most appropriate method of data analysis, and measurement reliability in a group of healthy, non-injured subjects. Twenty subjects (11 males, 9 females; age: 23.5 ± 2.3 years; height: 1.73 ± 0.08 m; mass: 74.2 ± 11.3 kg) completed 11 impacts at five different loadings rated on a scale of perceived exertion from 1 to 5, where 5 is the maximum loading at which the subject could complete all 11 impacts on the adapted sledge. Analysis of impacts 4–8 or 5–7 at loading 2 provided consistent results that were highly reliable (single intra-class correlation, ICC > 0.85; average ICC > 0.95) and replicated the kinematics found in hopping and running. Results support use of an adapted force sledge apparatus as an ecologically valid, reliable method of investigating plantarflexor SSC function and MTU mechanics in a dynamic controlled environment.
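The single-measure intra-class correlations reported above quantify agreement across repeated impacts. As a hedged illustration, a one-way random-effects ICC(1,1) can be computed from an ANOVA decomposition; this is a generic sketch on toy data, not the study's actual model, which may have used a different ICC form:

```python
def icc_oneway(scores):
    """One-way random-effects ICC(1,1).
    scores: one row per subject; columns are repeated measurements.
    ICC = (MS_between - MS_within) / (MS_between + (k-1) * MS_within)."""
    n = len(scores)            # number of subjects
    k = len(scores[0])         # measurements per subject
    grand = sum(sum(row) for row in scores) / (n * k)
    row_means = [sum(row) / k for row in scores]
    ss_between = k * sum((m - grand) ** 2 for m in row_means)
    ss_within = sum((x - m) ** 2
                    for row, m in zip(scores, row_means) for x in row)
    ms_between = ss_between / (n - 1)
    ms_within = ss_within / (n * (k - 1))
    return (ms_between - ms_within) / (ms_between + (k - 1) * ms_within)

# Perfectly repeatable measurements give ICC = 1
print(icc_oneway([[1, 1], [2, 2], [3, 3]]))  # → 1.0
```

Values near 1 indicate that between-subject differences dominate measurement noise, which is the sense in which ICC > 0.85 supports reliability here.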

  7. Content validation: clarity/relevance, reliability and internal consistency of enunciative signs of language acquisition.

    Science.gov (United States)

    Crestani, Anelise Henrich; Moraes, Anaelena Bragança de; Souza, Ana Paula Ramos de

    2017-08-10

To analyze the results of the validation of the constructed enunciative signs of language acquisition for children aged 3 to 12 months. The signs were built based on mechanisms of language acquisition in an enunciative perspective and on clinical experience with language disorders. The signs were submitted to judgment of clarity and relevance by a sample of six experts, all doctors in linguistics with knowledge of psycholinguistics and clinical work on language. For the reliability validation, two judges/evaluators applied the instruments to videos of 20% of the total sample of mother-infant dyads, using the inter-evaluator method. Internal consistency was assessed on the total sample, which consisted of 94 mother-infant dyads for the contents of Phase 1 (3 to 6 months) and 61 mother-infant dyads for the contents of Phase 2 (7 to 12 months). The data were collected through the analysis of mother-infant interaction based on filming of the dyads and application of the parameters to be validated according to the child's age. Data were organized in a spreadsheet and then imported into statistical software for analysis. The judgments of clarity/relevance indicated no modifications to be made in the instruments. The reliability test showed almost perfect agreement between judges (0.8 ≤ Kappa ≤ 1.0); only item 2 of Phase 1 showed substantial agreement (0.6 ≤ Kappa ≤ 0.79). The internal consistency for Phase 1 had alpha = 0.84, and for Phase 2, alpha = 0.74. This demonstrates the reliability of the instruments. The results suggest adequacy as to content validity of the instruments created for both age groups, demonstrating the relevance of the content of enunciative signs of language acquisition.
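The agreement bands quoted above come from Cohen's kappa for two raters classifying the same items. A small illustration; the function names, toy data, and band cut-offs (which follow the abstract's 0.6 and 0.8 thresholds) are mine, not the study's:

```python
from collections import Counter

def cohens_kappa(r1, r2):
    """Cohen's kappa for two raters over the same items:
    kappa = (p_observed - p_expected) / (1 - p_expected)."""
    assert len(r1) == len(r2)
    n = len(r1)
    p_obs = sum(a == b for a, b in zip(r1, r2)) / n
    c1, c2 = Counter(r1), Counter(r2)
    # Chance agreement from each rater's marginal category frequencies
    p_exp = sum(c1[cat] * c2.get(cat, 0) for cat in c1) / n ** 2
    return (p_obs - p_exp) / (1 - p_exp)

def agreement_label(kappa):
    """Verbal band, using the thresholds cited in the abstract."""
    if kappa >= 0.8:
        return "almost perfect"
    if kappa >= 0.6:
        return "substantial"
    if kappa >= 0.4:
        return "moderate"
    return "below moderate"

print(cohens_kappa(list("yynn"), list("yynn")))  # → 1.0
```

Identical ratings give kappa = 1, while agreement no better than chance gives kappa near 0, which is why kappa is preferred over raw percent agreement.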

  8. Reliability, Dimensionality, and Internal Consistency as Defined by Cronbach: Distinct Albeit Related Concepts

    Science.gov (United States)

    Davenport, Ernest C.; Davison, Mark L.; Liou, Pey-Yan; Love, Quintin U.

    2015-01-01

This article uses definitions provided by Cronbach in his seminal paper on coefficient α to show that the concepts of reliability, dimensionality, and internal consistency are distinct but interrelated. The article begins with a critique of the definition of reliability and then explores mathematical properties of Cronbach's α. Internal consistency…
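Coefficient α as defined by Cronbach can be computed directly from item scores: α = k/(k-1) · (1 - Σ var(item_i) / var(total)). A minimal sketch; the helper name and toy data are illustrative only:

```python
def cronbach_alpha(items):
    """Cronbach's coefficient alpha.
    items: one list of scores per item, all over the same respondents.
    alpha = k/(k-1) * (1 - sum of item variances / variance of total scores)."""
    k = len(items)
    def var(xs):
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / len(xs)
    # Each respondent's total score across all items
    totals = [sum(col) for col in zip(*items)]
    return k / (k - 1) * (1 - sum(var(it) for it in items) / var(totals))

# Two perfectly parallel items: alpha = 1
print(cronbach_alpha([[1, 2, 3], [1, 2, 3]]))
```

As the article argues, a high α does not by itself establish unidimensionality: α rises with inter-item covariance and with the number of items, regardless of how many factors underlie them.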

  9. Two Impossibility Results on the Converse Consistency Principle in Bargaining

    OpenAIRE

    Youngsub Chun

    1999-01-01

We present two impossibility results on the converse consistency principle in the context of bargaining. First, we show that there is no solution satisfying Pareto optimality, contraction independence, and converse consistency. Next, we show that there is no solution satisfying Pareto optimality, strong individual rationality, individual monotonicity, and converse consistency.

  10. Reliability and consistency of a validated sun exposure questionnaire in a population-based Danish sample.

    Science.gov (United States)

    Køster, B; Søndergaard, J; Nielsen, J B; Olsen, A; Bentzen, J

    2018-06-01

An important feature of questionnaire validation is reliability. To be able to measure a given concept validly by questionnaire, the reliability needs to be high. The objectives of this study were to examine the reliability of attitude and knowledge items and the behavioral consistency of sunburn reports in a questionnaire developed for monitoring and evaluating population sun-related behavior. Sun-related behavior, attitude and knowledge were measured weekly by questionnaire in the summer of 2013 among 664 Danes. Reliability was tested in a test-retest design. Consistency of behavioral information was tested similarly in a questionnaire adapted to measure behavior throughout the summer. The response rates for questionnaires 1, 2 and 3 were high, and dropout was not dependent on demographic characteristics. There was at least 73% agreement between sunburn reports in the measurement week and the entire summer, and a possible sunburn underestimation in questionnaires summarizing the entire summer. The participants underestimated their outdoor exposure in the evaluation covering the entire summer as compared to the measurement week. The reliability of scales measuring attitude and knowledge was high for the majority of scales, while consistency in protection behavior was low. To our knowledge, this is the first study to report reliability for a completely validated questionnaire on sun-related behavior in a national random population-based sample. Further, we show that attitude and knowledge questions confirmed their validity with good reliability, while consistency of protection behavior in general and in a week's measurement was low.

  11. Reliability and consistency of a validated sun exposure questionnaire in a population-based Danish sample

    Directory of Open Access Journals (Sweden)

    B. Køster

    2018-06-01

An important feature of questionnaire validation is reliability. To be able to measure a given concept by questionnaire validly, the reliability needs to be high. The objectives of this study were to examine reliability of attitude and knowledge and behavioral consistency of sunburn in a developed questionnaire for monitoring and evaluating population sun-related behavior. Sun-related behavior, attitude and knowledge was measured weekly by a questionnaire in the summer of 2013 among 664 Danes. Reliability was tested in a test-retest design. Consistency of behavioral information was tested similarly in a questionnaire adapted to measure behavior throughout the summer. The response rates for questionnaires 1, 2 and 3 were high and the dropout was not dependent on demographic characteristics. There was at least 73% agreement between sunburns in the measurement week and the entire summer, and a possible sunburn underestimation in questionnaires summarizing the entire summer. The participants underestimated their outdoor exposure in the evaluation covering the entire summer as compared to the measurement week. The reliability of scales measuring attitude and knowledge was high for the majority of scales, while consistency in protection behavior was low. To our knowledge, this is the first study to report reliability for a completely validated questionnaire on sun-related behavior in a national random population-based sample. Further, we show that attitude and knowledge questions confirmed their validity with good reliability, while consistency of protection behavior in general and in a week's measurement was low. Keywords: Questionnaire, Validation, Reliability, Skin cancer, Prevention, Ultraviolet radiation

  12. Human Reliability Data Bank: evaluation results

    International Nuclear Information System (INIS)

    Comer, M.K.; Donovan, M.D.; Gaddy, C.D.

    1985-01-01

The US Nuclear Regulatory Commission (NRC), Sandia National Laboratories (SNL), and General Physics Corporation are conducting a research program to determine the practicality, acceptability, and usefulness of a Human Reliability Data Bank for nuclear power industry probabilistic risk assessment (PRA). As part of this program, a survey was conducted of existing human reliability data banks from other industries, and a detailed concept of a Data Bank for the nuclear industry was developed. Subsequently, a detailed specification for implementing the Data Bank was developed. An evaluation of this specification was conducted and is described in this report. The evaluation tested data treatment, storage, and retrieval using the Data Bank structure, as modified from NUREG/CR-2744, together with detailed procedures for data processing and retrieval developed prior to this evaluation and documented in the test specification. The evaluation consisted of an Operability Demonstration and Evaluation of the data processing procedures, a Data Retrieval Demonstration and Evaluation, a Retrospective Analysis that included a survey of organizations currently operating data banks for the nuclear power industry, and an Internal Analysis of the current Data Bank System.

  13. [Consistency and Reliability of MDK Expertise Examining the Encoding in the German DRG System].

    Science.gov (United States)

    Gaertner, T; Lehr, F; Blum, B; van Essen, J

    2015-09-01

Hospital inpatient stays are reimbursed on the basis of German diagnosis-related groups (G-DRG). The G-DRG classification system is based on complex coding guidelines. The Medical Review Board of the Statutory Health Insurance Funds (MDK) examines the encoding by hospitals and delivers individual expert reports on behalf of the German statutory health insurance companies in cases in which irregularities are suspected. A study was conducted on the inter-rater reliability of the MDK expert reports regarding the scope of the assessment. A representative sample of 212 MDK expert reports was taken from a selected pool of 1,392 MDK expert reports in May 2013. This representative sample underwent a double examination by 2 independent MDK experts using special software based on the 3M™ G-DRG Grouper 2013 of 3M Medica, Germany. The following items encoded by the hospitals were examined: DRG, principal diagnosis, secondary diagnoses, procedures and additional payments. It was analysed whether the results of the MDK expert reports were consistent, reliable and correct. 202 expert reports were eligible for evaluation, containing a total of 254 questions regarding one or more of the 5 items encoded by hospitals. The double examination by 2 independent MDK experts showed matching results in 187 questions (73.6%), meaning they had been examined consistently and correctly. 59 questions (23.2%) did not show matching results; nevertheless, they had been examined correctly with regard to the scope of the assessment. None of the principal diagnoses was significantly affected by inconsistent or wrong judgment. A representative sample of MDK expert reports examining the DRG encoding by hospitals showed a very high percentage of correct examination by the MDK experts. Identical MDK expert reports cannot be achieved in all cases due to the scope of the assessment. Further improvement and simplification of codes and coding guidelines are required to reduce the scope of assessment with regard to correct DRG encoding and its

  14. SLAC modulator system improvements and reliability results

    International Nuclear Information System (INIS)

    Donaldson, A.R.

    1998-06-01

    In 1995, an improvement project was completed on the 244 klystron modulators in the linear accelerator. The modulator system has been previously described. This article offers project details and their resulting effect on modulator and component reliability. Prior to the project, the authors had collected four operating cycles (1991 through 1995) of MTTF data. In this discussion, the '91 data will be excluded since the modulators operated at 60 Hz. The five periods following the '91 run were reviewed due to the common repetition rate at 120 Hz

  15. Reliability Estimation Based Upon Test Plan Results

    National Research Council Canada - National Science Library

    Read, Robert

    1997-01-01

    The report contains a brief summary of aspects of the Maximus reliability point and interval estimation technique as it has been applied to the reliability of a device whose surveillance tests contain...

  16. Assessment of disabilities in stroke patients with apraxia : Internal consistency and inter-observer reliability

    NARCIS (Netherlands)

    van Heugten, CM; Dekker, J; Deelman, BG; Stehmann-Saris, JC; Kinebanian, A

    1999-01-01

    In this paper the internal consistency and inter-observer reliability of the assessment of disabilities in stroke patients with apraxia is presented. Disabilities were assessed by means of observation of activities of daily living (ADL). The study was conducted at occupational therapy departments in

  17. Assessment of disabilities in stroke patients with apraxia: internal consistency and inter-observer reliability.

    NARCIS (Netherlands)

    Heugten, C.M. van; Dekker, J.; Deelman, B.G.; Stehmann-Saris, J.C.; Kinebanian, A.

    1999-01-01

    In this paper the internal consistency and inter-observer reliability of the assessment of disabilities in stroke patients with apraxia is presented. Disabilities were assessed by means of observation of activities of daily living (ADL). The study was conducted at occupational therapy departments in

  18. Corrections for criterion reliability in validity generalization: The consistency of Hermes, the utility of Midas

    Directory of Open Access Journals (Sweden)

    Jesús F. Salgado

    2016-04-01

There is criticism in the literature of the use of interrater coefficients to correct for criterion reliability in validity generalization (VG) studies, disputing whether .52 is an accurate and non-dubious estimate of the interrater reliability of overall job performance (OJP) ratings. We present a second-order meta-analysis of three independent meta-analytic studies of the interrater reliability of job performance ratings and make a number of comments and reflections on LeBreton et al.'s paper. The results of our meta-analysis indicate that the interrater reliability for a single rater is .52 (k = 66, N = 18,582, SD = .105). Our main conclusions are: (a) the value of .52 is an accurate estimate of the interrater reliability of overall job performance for a single rater; (b) it is not reasonable to conclude that past VG studies that used .52 as the criterion reliability value have a less than secure statistical foundation; (c) based on interrater reliability, test-retest reliability, and coefficient alpha, supervisor ratings are a useful and appropriate measure of job performance and can be confidently used as a criterion; (d) validity correction for criterion unreliability has been unanimously recommended by "classical" psychometricians and I/O psychologists as the proper way to estimate predictor validity, and is still recommended at present; (e) the substantive contribution of VG procedures to inform HRM practices in organizations should not be lost in these technical points of debate.
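The classical correction for criterion unreliability discussed in (d) divides the observed predictor-criterion correlation by the square root of the criterion reliability. A short sketch using the meta-analytic value of .52; the example observed validity of .25 is hypothetical:

```python
import math

def correct_for_criterion_unreliability(r_xy, r_yy=0.52):
    """Classical correction for attenuation due to criterion unreliability:
    rho = r_xy / sqrt(r_yy).
    Default r_yy = .52, the meta-analytic interrater reliability of
    overall job performance ratings for a single rater."""
    return r_xy / math.sqrt(r_yy)

# A hypothetical observed validity of .25 corrects to about .35
print(round(correct_for_criterion_unreliability(0.25), 2))  # → 0.35
```

The correction only removes attenuation from an unreliable criterion measure; it cannot exceed the information in the data, so the corrected value is an estimate of the operational validity, not a guaranteed population correlation.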

  19. Towards consistent and reliable Dutch and international energy statistics for the chemical industry

    International Nuclear Information System (INIS)

    Neelis, M.L.; Pouwelse, J.W.

    2008-01-01

Consistent and reliable energy statistics are of vital importance for proper monitoring of energy-efficiency policies. In recent studies, irregularities have been reported in the Dutch energy statistics for the chemical industry. We studied in depth the company data that form the basis of the energy statistics in the Netherlands between 1995 and 2004 to find causes for these irregularities. We discovered that chemical products have occasionally been included, resulting in statistics with an inconsistent system boundary. Lack of guidance in the survey on the complex energy conversions in the chemical industry also resulted in large fluctuations for certain energy commodities. The findings of our analysis have been the basis for a new survey that has been in use since 2007. We demonstrate that the annual questionnaire used for the international energy statistics can give rise to problems comparable to those observed in the Netherlands. We suggest including chemical residual gas as an energy commodity in the questionnaire and covering the energy conversions in the chemical industry in the international energy statistics. In addition, we think the questionnaire should be explicit about the treatment of basic chemical products produced at refineries and in the petrochemical industry to avoid system boundary problems.

  20. Improvements in the consistency of fabrication and the reliability of nuclear fuels through quality assurance

    International Nuclear Information System (INIS)

    Sifferlen, R.

    1976-01-01

    By establishing correlations between rejection level and fabrication parameters, quality assurance can guide corrective action for improving the consistency of fabrication and the reliability of fuel elements. The author cites examples relating to the quality of the uranium in metallic fuels, the influence of the parent metal on the welding of zirconium alloys, the behaviour of the springs inside the cladding during the welding of plugs and the behaviour of uranium oxide pellets. (author)

  1. Planck 2013 results. XXXI. Consistency of the Planck data

    DEFF Research Database (Denmark)

    Ade, P. A. R.; Arnaud, M.; Ashdown, M.

    2014-01-01

The Planck design and scanning strategy provide many levels of redundancy that can be exploited to provide tests of internal consistency. One of the most important is the comparison of the 70 GHz (amplifier) and 100 GHz (bolometer) channels. Based on different instrument technologies, with feeds located differently in the focal plane, analysed independently by different teams using different software, and near the minimum of diffuse foreground emission, these channels are in effect two different experiments. The 143 GHz channel has the lowest noise level on Planck, and is near the minimum of unresolved… in the HFI channels would result in shifts in the posterior distributions of parameters of less than 0.3σ except for As, the amplitude of the primordial curvature perturbations at 0.05 Mpc^-1, which changes by about 1σ. We extend these comparisons to include the sky maps from the complete nine-year mission…

  2. Assessing motivation for work environment improvements: internal consistency, reliability and factorial structure.

    Science.gov (United States)

    Hedlund, Ann; Ateg, Mattias; Andersson, Ing-Marie; Rosén, Gunnar

    2010-04-01

Workers' motivation to actively take part in improvements to the work environment is assumed to be important for the efficiency of investments for that purpose. That gives rise to the need for a tool to measure this motivation. A questionnaire to measure motivation for improvements to the work environment has been designed. Internal consistency and test-retest reliability of the domains of the questionnaire have been measured, and the factorial structure has been explored, from the answers of 113 employees. The internal consistency is high (0.94), as is the test-retest correlation for the total score (0.84). Three factors are identified, accounting for 61.6% of the total variance. The questionnaire can be a useful tool in improving intervention methods. The expectation is that the tool can be useful, particularly with the aim of improving the efficiency of companies' investments in work environment improvements. Copyright 2010 Elsevier Ltd. All rights reserved.

  3. Simulated patient training: Using inter-rater reliability to evaluate simulated patient consistency in nursing education.

    Science.gov (United States)

    MacLean, Sharon; Geddes, Fiona; Kelly, Michelle; Della, Phillip

    2018-03-01

Simulated patients (SPs) are frequently used for training nursing students in communication skills. An acknowledged benefit of using SPs is the opportunity to provide a standardized approach by which participants can demonstrate and develop communication skills. However, relatively little evidence is available on how best to facilitate and evaluate the reliability and accuracy of SPs' performances. The aim of this study is to investigate the effectiveness of an evidence-based SP training framework to ensure standardization of SPs. The training framework was employed to improve inter-rater reliability of SPs. A quasi-experimental study was employed to assess SP post-training understanding of simulation scenario parameters using inter-rater reliability agreement indices. Two phases of data collection took place. In an initial trial phase, audio-visual (AV) recordings of two undergraduate nursing students completing a simulation scenario were rated by eight SPs using the Interpersonal Communication Assessments Scale (ICAS) and the Quality of Discharge Teaching Scale (QDTS). In phase 2, eight SP raters and four nursing faculty raters independently evaluated students' (N=42) communication practices using the QDTS. Intraclass correlation coefficients (ICC) for clinical communication skills were >0.80 for both phases of the study. The results support the premise that if trained appropriately, SPs have a high degree of reliability and validity both to facilitate and to evaluate student performance in nurse education. Crown Copyright © 2018. Published by Elsevier Ltd. All rights reserved.

  4. Risk aversion vs. the Omega ratio : Consistency results

    NARCIS (Netherlands)

    Balder, Sven; Schweizer, Nikolaus

    This paper clarifies when the Omega ratio and related performance measures are consistent with second order stochastic dominance and when they are not. To avoid consistency problems, the threshold parameter in the ratio should be chosen as the expected return of some benchmark – as is commonly done

  5. The memory failures of everyday questionnaire (MFE): internal consistency and reliability.

    Science.gov (United States)

    Montejo Carrasco, Pedro; Montenegro, Peña Mercedes; Sueiro, Manuel J

    2012-07-01

The Memory Failures of Everyday Questionnaire (MFE) is one of the most widely used instruments to assess memory failures in daily life. The original scale has nine response options, making it difficult to apply; we created a three-point scale (0-1-2) with response choices that make it easier to administer. We examined the two versions' equivalence in a sample of 193 participants between 19 and 64 years of age. The test-retest reliability and internal consistency of the version we propose were also computed in a sample of 113 people. Several indicators, among them the correlation between the items' means (r = .94), attest to the equivalence of the MFE 0-2 and the MFE 1-9. The MFE 0-2 provides a brief, simple evaluation, so we recommend it for use in clinical practice as well as research.

  6. A reliable and consistent production technology for high volume compacted graphite iron castings

    Directory of Open Access Journals (Sweden)

    Liu Jincheng

    2014-07-01

The demands for improved engine performance, fuel economy, durability, and lower emissions provide a continual challenge for engine designers. The use of Compacted Graphite Iron (CGI) has been established for successful high volume series production in the passenger vehicle, commercial vehicle and industrial power sectors over the last decade. The increased demand for CGI engine components provides new opportunities for the cast iron foundry industry to establish efficient and robust CGI volume production processes, in China and globally. The production window for stable CGI is narrow and constantly moving. Therefore, a one-step single addition of magnesium alloy and inoculant cannot ensure a reliable and consistent production process for complicated CGI engine castings. The present paper introduces the SinterCast thermal analysis process control system, which provides for the consistent production of CGI with low nodularity and reduced porosity, without risking the formation of flake graphite. The technology is currently being used in high volume Chinese foundry production. The Chinese foundry industry can develop complicated, high demand CGI engine castings with the proper process control technology.

  7. A consistent modelling methodology for secondary settling tanks: a reliable numerical method.

    Science.gov (United States)

    Bürger, Raimund; Diehl, Stefan; Farås, Sebastian; Nopens, Ingmar; Torfs, Elena

    2013-01-01

    The consistent modelling methodology for secondary settling tanks (SSTs) leads to a partial differential equation (PDE) of nonlinear convection-diffusion type as a one-dimensional model for the solids concentration as a function of depth and time. This PDE includes a flux that depends discontinuously on spatial position modelling hindered settling and bulk flows, a singular source term describing the feed mechanism, a degenerating term accounting for sediment compressibility, and a dispersion term for turbulence. In addition, the solution itself is discontinuous. A consistent, reliable and robust numerical method that properly handles these difficulties is presented. Many constitutive relations for hindered settling, compression and dispersion can be used within the model, allowing the user to switch on and off effects of interest depending on the modelling goal as well as investigate the suitability of certain constitutive expressions. Simulations show the effect of the dispersion term on effluent suspended solids and total sludge mass in the SST. The focus is on correct implementation whereas calibration and validation are not pursued.
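    The kind of discretization such a model requires can be sketched with a first-order explicit scheme. This is only a minimal illustration under assumed constitutive relations: the Vesilind-type hindered-settling flux and all parameter values (`v0`, `rV`, `d0`) are hypothetical, and the sketch omits the discontinuous bulk flows, the feed source term and the compression function of the actual consistent methodology.

```python
import numpy as np

def settler_step(C, dz, dt, v0=1.4e-3, rV=0.37, d0=1e-6):
    """One explicit step for dC/dt + d/dz f(C) = d/dz (d0 dC/dz) on a
    closed 1-D column. f is a hypothetical Vesilind hindered-settling
    flux; the parameters are illustrative, not calibrated values."""
    f = v0 * C * np.exp(-rV * C)          # hindered-settling flux (downward)
    # first-order upwind: flux through the bottom face of each cell
    F = np.zeros(len(C) + 1)
    F[1:-1] = f[:-1]                      # material moves toward increasing z
    # zero-flux top and bottom boundaries for this closed-column sketch
    conv = -(F[1:] - F[:-1]) / dz
    # centred diffusion with zero-gradient boundaries
    Cp = np.pad(C, 1, mode="edge")
    diff = d0 * (Cp[2:] - 2 * Cp[1:-1] + Cp[:-2]) / dz**2
    return C + dt * (conv + diff)

# batch settling of an initially uniform suspension (kg/m^3)
z = np.linspace(0, 1, 50)
C = np.full(50, 3.0)
for _ in range(2000):
    C = settler_step(C, dz=z[1] - z[0], dt=0.05)
```

With zero-flux boundaries the scheme conserves mass exactly, and the small time step keeps the explicit update stable; a production method such as the one in the paper additionally needs careful treatment of the spatially discontinuous flux.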

  8. Planck 2013 results. XXXI. Consistency of the Planck data

    CERN Document Server

    Ade, P A R; Ashdown, M; Aumont, J; Baccigalupi, C; Banday, A.J; Barreiro, R.B; Battaner, E; Benabed, K; Benoit-Levy, A; Bernard, J.P; Bersanelli, M; Bielewicz, P; Bond, J.R; Borrill, J; Bouchet, F.R; Burigana, C; Cardoso, J.F; Catalano, A; Challinor, A; Chamballu, A; Chiang, H.C; Christensen, P.R; Clements, D.L; Colombi, S; Colombo, L.P.L; Couchot, F; Coulais, A; Crill, B.P; Curto, A; Cuttaia, F; Danese, L; Davies, R.D; Davis, R.J; de Bernardis, P; de Rosa, A; de Zotti, G; Delabrouille, J; Desert, F.X; Dickinson, C; Diego, J.M; Dole, H; Donzelli, S; Dore, O; Douspis, M; Dupac, X; Ensslin, T.A; Eriksen, H.K; Finelli, F; Forni, O; Frailis, M; Fraisse, A A; Franceschi, E; Galeotta, S; Ganga, K; Giard, M; Gonzalez-Nuevo, J; Gorski, K.M.; Gratton, S.; Gregorio, A; Gruppuso, A; Gudmundsson, J E; Hansen, F.K; Hanson, D; Harrison, D; Henrot-Versille, S; Herranz, D; Hildebrandt, S.R; Hivon, E; Hobson, M; Holmes, W.A.; Hornstrup, A; Hovest, W.; Huffenberger, K.M; Jaffe, T.R; Jaffe, A.H; Jones, W.C; Keihanen, E; Keskitalo, R; Knoche, J; Kunz, M; Kurki-Suonio, H; Lagache, G; Lahteenmaki, A; Lamarre, J.M; Lasenby, A; Lawrence, C.R; Leonardi, R; Leon-Tavares, J; Lesgourgues, J; Liguori, M; Lilje, P.B; Linden-Vornle, M; Lopez-Caniego, M; Lubin, P.M; Macias-Perez, J.F; Maino, D; Mandolesi, N; Maris, M; Martin, P.G; Martinez-Gonzalez, E; Masi, S; Matarrese, S; Mazzotta, P; Meinhold, P.R; Melchiorri, A; Mendes, L; Mennella, A; Migliaccio, M; Mitra, S; Miville-Deschenes, M.A; Moneti, A; Montier, L; Morgante, G; Mortlock, D; Moss, A; Munshi, D; Murphy, J A; Naselsky, P; Nati, F; Natoli, P; Norgaard-Nielsen, H.U; Noviello, F; Novikov, D; Novikov, I; Oxborrow, C.A; Pagano, L; Pajot, F; Paoletti, D; Partridge, B; Pasian, F; Patanchon, G; Pearson, D; Pearson, T.J; Perdereau, O; Perrotta, F; Piacentini, F; Piat, M; Pierpaoli, E; Pietrobon, D; Plaszczynski, S; Pointecouteau, E; Polenta, G; Ponthieu, N; Popa, L; Pratt, G.W; Prunet, S; Puget, J.L; Rachen, J.P; Reinecke, M; Remazeilles, M; 
Renault, C; Ricciardi, S.; Ristorcelli, I; Rocha, G.; Roudier, G; Rubino-Martin, J.A; Rusholme, B; Sandri, M; Scott, D; Stolyarov, V; Sudiwala, R; Sutton, D; Suur-Uski, A.S; Sygnet, J.F; Tauber, J.A; Terenzi, L; Toffolatti, L; Tomasi, M; Tristram, M; Tucci, M; Valenziano, L; Valiviita, J; Van Tent, B; Vielva, P; Villa, F; Wade, L.A; Wandelt, B.D; Wehus, I K; White, S D M; Yvon, D; Zacchei, A; Zonca, A

    2014-01-01

    The Planck design and scanning strategy provide many levels of redundancy that can be exploited to provide tests of internal consistency. One of the most important is the comparison of the 70 GHz (amplifier) and 100 GHz (bolometer) channels. Based on different instrument technologies, with feeds located differently in the focal plane, analysed independently by different teams using different software, and near the minimum of diffuse foreground emission, these channels are in effect two different experiments. The 143 GHz channel has the lowest noise level on Planck, and is near the minimum of unresolved foreground emission. In this paper, we analyse the level of consistency achieved in the 2013 Planck data. We concentrate on comparisons between the 70, 100, and 143 GHz channel maps and power spectra, particularly over the angular scales of the first and second acoustic peaks, on maps masked for diffuse Galactic emission and for strong unresolved sources. Difference maps covering angular scales from 8°...

  9. Reliability, factor analysis and internal consistency calculation of the Insomnia Severity Index (ISI) in French and in English among Lebanese adolescents

    Directory of Open Access Journals (Sweden)

    M. Chahoud

    2017-06-01

    Conclusion: The results of our analyses reveal that both English and French versions of the ISI scale have good internal consistency and are reproducible and reliable. Therefore, it can be used to assess the prevalence of insomnia in Lebanese adolescents.

  10. Exploration of reliability databases and comparison of former IFMIF's results

    International Nuclear Information System (INIS)

    Tapia, Carlos; Dies, Javier; Abal, Javier; Ibarra, Angel; Arroyo, Jose M.

    2011-01-01

    There is an uncertainty issue about the applicability of industrial databases to new designs, such as the International Fusion Materials Irradiation Facility (IFMIF), as they usually contain elements for which no historical statistics exist. The exploration of reliability data for components common to Accelerator Driven Systems (ADS) and Liquid Metal Technologies (LMT) is the starting point for analyzing the data used in IFMIF reliability reports and for future studies. The comparison between the accelerator reliability results given in the former IFMIF reports and the databases explored has been made by means of a new accelerator Reliability, Availability, Maintainability (RAM) analysis. The reliability database used in this analysis is traceable.

  11. An overview of coefficient alpha and a reliability matrix for estimating adequacy of internal consistency coefficients with psychological research measures.

    Science.gov (United States)

    Ponterotto, Joseph G; Ruckdeschel, Daniel E

    2007-12-01

    The present article addresses issues in reliability assessment that are often neglected in psychological research such as acceptable levels of internal consistency for research purposes, factors affecting the magnitude of coefficient alpha (alpha), and considerations for interpreting alpha within the research context. A new reliability matrix anchored in classical test theory is introduced to help researchers judge adequacy of internal consistency coefficients with research measures. Guidelines and cautions in applying the matrix are provided.
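    For readers interpreting the statistic discussed above, coefficient alpha is computed as α = k/(k − 1) · (1 − Σσᵢ²/σ_total²), from the item variances and the variance of the sum score. A minimal sketch follows; the simulated one-factor data and its reliability level are illustrative assumptions, not values from the article.

```python
import numpy as np

def cronbach_alpha(items):
    """Coefficient alpha for an (n_respondents x k_items) score matrix."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)        # variance of each item
    total_var = items.sum(axis=1).var(ddof=1)    # variance of the sum score
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

rng = np.random.default_rng(0)
trait = rng.normal(size=200)
# five noisy items measuring one underlying trait
data = trait[:, None] + rng.normal(scale=0.8, size=(200, 5))
print(round(cronbach_alpha(data), 2))
```

With five parallel items of this quality the estimate should land near the Spearman-Brown prediction (about .89), a level most published guidelines would judge adequate for research use.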

  12. Reliability-based evaluation of bridge components for consistent safety margins.

    Science.gov (United States)

    2010-10-01

    The Load and Resistance Factor Design (LRFD) approach is based on the concept of structural reliability. The approach is more rational than former design approaches such as Load Factor Design or Allowable Stress Design. The LRFD Specification fo...

  13. "A Comparison of Consensus, Consistency, and Measurement Approaches to Estimating Interrater Reliability"

    OpenAIRE

    Steven E. Stemler

    2004-01-01

    This article argues that the general practice of describing interrater reliability as a single, unified concept is at best imprecise, and at worst potentially misleading. Rather than representing a single concept, different statistical methods for computing interrater reliability can be more accurately classified into one of three categories based upon the underlying goals of analysis. The three general categories introduced and described in this paper are: 1) consensus estimates, 2) cons...

  14. Assessing the reliability of predictive activity coefficient models for molecules consisting of several functional groups

    Directory of Open Access Journals (Sweden)

    R. P. Gerber

    2013-03-01

    Currently, the most successful predictive models for activity coefficients are those based on functional groups, such as UNIFAC. However, these models require a large amount of experimental data for the determination of their parameter matrix. A more recent alternative is the class of models based on COSMO, for which only a small set of universal parameters must be calibrated. In this work, a recalibrated COSMO-SAC model was compared with the UNIFAC (Do) model employing experimental infinite dilution activity coefficient data for 2236 non-hydrogen-bonding binary mixtures at different temperatures. As expected, UNIFAC (Do) presented better overall performance, with a mean absolute error of 0.12 ln-units against 0.22 for our COSMO-SAC implementation. However, in cases involving molecules with several functional groups, or when functional groups appear in an unusual way, the deviation for UNIFAC was 0.44 as opposed to 0.20 for COSMO-SAC. These results show that COSMO-SAC provides more reliable predictions for multi-functional or more complex molecules, reaffirming its future prospects.

  15. Internal consistency reliability and validity of the Hebrew translation of the Oxford Happiness Inventory.

    Science.gov (United States)

    Francis, L J; Katz, Y J

    2000-08-01

    The Hebrew translation of the Oxford Happiness Inventory and the short form Revised Eysenck Personality Questionnaire were completed by 298 undergraduate women in Israel. The findings confirm the internal reliability of the Hebrew translation of the Oxford Happiness Inventory and support the construct validity according to which "happiness is a thing called stable extraversion."

  16. Internal consistency reliability and validity of the Arabic translation of the Mathematics Teaching Efficacy Beliefs Instrument.

    Science.gov (United States)

    Alkhateeb, Haitham M

    2004-06-01

    The Arabic translation of the Mathematics Teaching Efficacy Beliefs Instrument was completed by 144 undergraduate students (M age = 20.6) in Jordan. The findings support the internal reliability of the Arabic translation of the Mathematics Teaching Efficacy Beliefs Instrument as well as its construct validity.

  17. Intra-Subject Consistency and Reliability of Response Following 2 mA Transcranial Direct Current Stimulation.

    Science.gov (United States)

    Dyke, Katherine; Kim, Soyoung; Jackson, Georgina M; Jackson, Stephen R

    Transcranial direct current stimulation (tDCS) is a popular non-invasive brain stimulation technique that has been shown to influence cortical excitability. While polarity-specific effects have often been reported, this is not always the case, and variability in both the magnitude and direction of the effects has been observed. We aimed to explore the consistency and reliability of the effects of tDCS by investigating changes in cortical excitability across multiple testing sessions in the same individuals. A within-subjects design was used to investigate the effects of anodal and cathodal tDCS applied to the motor cortex. Four experimental sessions were tested for each polarity, in addition to two sham sessions. Transcranial magnetic stimulation (TMS) was used to measure cortical excitability (TMS recruitment curves). Changes in excitability were measured by comparing baseline measures and those taken immediately following 20 minutes of 2 mA stimulation or sham stimulation. Anodal tDCS significantly increased cortical excitability at a group level, whereas cathodal tDCS failed to have any significant effects. The sham condition also failed to show any significant changes. Analysis of intra-subject responses to anodal stimulation across four sessions suggests that the amount of change in excitability was only weakly associated across sessions, with poor reliability (ICC = 0.276). The effects of cathodal stimulation show even poorer reliability across sessions (ICC = 0.137). In contrast, ICC analysis for the two sessions of sham stimulation reflects a moderate level of reliability (ICC = 0.424). Our findings indicate that although 2 mA anodal tDCS is effective at increasing cortical excitability at group level, the effects are unreliable across repeated testing sessions within individual participants. Our results suggest that 2 mA cathodal tDCS does not significantly alter cortical excitability immediately following

  18. Results from the LHC Beam Dump Reliability Run

    CERN Document Server

    Uythoven, J; Carlier, E; Castronuovo, F; Ducimetière, L; Gallet, E; Goddard, B; Magnin, N; Verhagen, H

    2008-01-01

    The LHC Beam Dumping System is one of the vital elements of the LHC Machine Protection System and has to operate reliably every time a beam dump request is made. Detailed dependability calculations have been made, resulting in expected rates for the different system failure modes. A 'reliability run' of the whole system, installed in its final configuration in the LHC, has been made to discover infant mortality problems and to compare the occurrence of the measured failure modes with their calculations.

  19. The eye-complaint questionnaire in a visual display unit work environment: Internal consistency and test-retest reliability

    NARCIS (Netherlands)

    Steenstra, Ivan A.; Sluiter, Judith K.; Frings-Dresen, Monique H. W.

    2009-01-01

    The internal consistency and test-retest reliability of a 10-item eye-complaint questionnaire (ECQ) were examined within a sample of office workers. Repeated within-subjects measures were performed within a single day and over intervals of 1 and 7 d. Questionnaires were completed by 96 workers (70%

  20. Internal Consistency of Reliability Assessment of the Persian version of the ‘Home Falls and Accident Screening Tool’

    Directory of Open Access Journals (Sweden)

    Afsoon Hassani Mehraban

    2013-10-01

    Objectives: Falling is a common problem among the elderly. Falling indoors and outdoors is highly prevalent among the Iranian elderly. Therefore, identification of the contributing factors at home and their modification can reduce falls and subsequent injuries in the elderly. The goal of this study was to identify the elderly at risk of fall, using the ‘Home Falls and Accident Screening Tool’ (HOME FAST), and to determine the reliability of this tool. Methods: Sixty old people were selected from five geographical regions of Tehran through the Local Town Councils. Participants were aged 60 to 65 years, and HOME FAST was used to assess inter-rater and test-retest reliability. Results: Test-retest reliability in the study showed that agreement between the items of the Persian version of HOME FAST was over 0.8, which is very good reliability. The agreement between the domains was 0.65-1.00, indicative of moderate to high reliability. Moreover, the inter-rater reliability of the items was over 0.8, which is also very good. The correlation of each item between the domains was 0.01-1.00, which shows poor to high reliability. Discussion: This study showed that the reliability of the Persian version of HOME FAST is high. This tool can therefore be used as an appropriate screening tool by professionals to take necessary preventive measures for the Iranian elderly population.

  1. Systems reliability Benchmark exercise part 1-Description and results

    International Nuclear Information System (INIS)

    Amendola, A.

    1986-01-01

    The report describes aims, rules and results of the Systems Reliability Benchmark Exercise, which has been performed in order to assess methods and procedures for reliability analysis of complex systems and involved a large number of European organizations active in NPP safety evaluation. The exercise included both qualitative and quantitative methods and was structured in such a way that separation of the effects of uncertainties in modelling and in data on the overall spread was made possible. Part I describes the way in which RBE has been performed, its main results and conclusions

  2. Reliability of AOFAS diabetic foot questionnaire in Charcot arthropathy: stability, internal consistency, and measurable difference.

    Science.gov (United States)

    Dhawan, Vibhu; Spratt, Kevin F; Pinzur, Michael S; Baumhauer, Judith; Rudicel, Sally; Saltzman, Charles L

    2005-09-01

    The development of Charcot changes is known to be associated with a high rate of recurrent ulceration and amputation. Unfortunately, the effect of Charcot arthropathy on quality of life in diabetic patients has not been systematically studied because of a lack of a disease-specific instrument. The purpose of this study was to develop and test an instrument to evaluate the health-related quality of life of diabetic foot disease. Subjects diagnosed with Charcot arthropathy completed a patient self-administered questionnaire, and clinicians completed an accompanying observational survey. The patient self-administered questionnaire was organized into five general sections: demographics, general health, diabetes-related symptoms, comorbidities, and satisfaction. The scales measured the effect in six health domains: 1) general health, 2) care, 3) worry, 4) sleep, 5) emotion, and 6) physicality. The psychometric properties of the scales were evaluated and the summary scores for the Short-Form Health Survey (SF-36) were compared to published norms for other major medical illnesses. Of the 89 enrolled patients, 57 who completed the questionnaire on enrollment returned a second completed form at 3-month followup. Over the 3-month followup period most of the patients showed an improvement in the Eichenholtz staging. The internal consistency of most scales was moderate to high and, in general, the scale scores were stable over 3 months. However, several of the scales suffered from low-ceiling or high-floor effects. Patients with Charcot arthropathy had a much lower physical component score on enrollment than the reported norms for other disease conditions, including diabetes. Quality of life represents an important set of outcomes when evaluating the effectiveness of treatment for patients with Charcot arthropathy. This study represents an initial attempt to develop a standardized survey for use with this patient population. Further studies need to be done with larger groups of

  3. Assessment of test-retest reliability and internal consistency of the Wisconsin Gait Scale in hemiparetic post-stroke patients

    Directory of Open Access Journals (Sweden)

    Guzik Agnieszka

    2016-09-01

    Introduction: A proper assessment of gait pattern is a significant aspect in planning the process of teaching gait in hemiparetic post-stroke patients. The Wisconsin Gait Scale (WGS) is an observational tool for assessing post-stroke patients’ gait. The aim of the study was to assess the test-retest reliability and internal consistency of the WGS and examine correlations between gait assessment made with the WGS and gait speed, the Brunnström scale, Ashworth’s scale and the Barthel Index.

  4. The Internal Consistency Reliability of the Katz-Francis Scale of Attitude toward Judaism among Australian Jews

    Directory of Open Access Journals (Sweden)

    Patrick Lumbroso

    2016-09-01

    The Katz-Francis Scale of Attitude toward Judaism was developed initially to extend to the Hebrew-speaking Jewish community in Israel a growing body of international research concerned with mapping the correlates, antecedents and consequences of individual differences in attitude toward religion as assessed by the Francis Scale of Attitude toward Christianity. The present paper explored the internal consistency reliability and construct validity of the English translation of the Katz-Francis Scale of Attitude toward Judaism among 101 Australian Jews. On the basis of these data, this instrument is commended for application in further research.

  5. Reliability and results, but little room to manoeuvre

    International Nuclear Information System (INIS)

    Atkinson, Ian

    1994-01-01

    A wide variety of research programs to produce cost-effective and reliable inspection techniques have arisen following the discovery of stress corrosion cracks in the control rod drive mechanism of pressurized water reactors, notably in France, Belgium and Spain. This article describes the research program results from a cooperative partnership between Comex Nucleaire, Westinghouse Electric and AEA Technology. The package developed offers techniques to provide complete capability in virtually all the design configurations used world-wide. After extensive acceptance trials in France and the United States the techniques are now being used on site at Bugey 3. (UK)

  6. Assessment of the reliability and consistency of the "malnutrition inflammation score" (MIS) in Mexican adults with chronic kidney disease for diagnosis of protein-energy wasting syndrome (PEW).

    Science.gov (United States)

    González-Ortiz, Ailema Janeth; Arce-Santander, Celene Viridiana; Vega-Vega, Olynka; Correa-Rotter, Ricardo; Espinosa-Cuevas, María de Los Angeles

    2014-10-04

    The protein-energy wasting syndrome (PEW) is a condition of malnutrition, inflammation, anorexia and wasting of body reserves resulting from inflammatory and non-inflammatory conditions in patients with chronic kidney disease (CKD). One way of assessing PEW, extensively described in the literature, is using the Malnutrition Inflammation Score (MIS). To assess the reliability and consistency of MIS for diagnosis of PEW in Mexican adults with CKD on hemodialysis (HD). Study of diagnostic tests. A sample of 45 adults with CKD on HD was analyzed during the period June-July 2014. The instrument was applied on 2 occasions; the test-retest reliability was calculated using the Intraclass Correlation Coefficient (ICC); the internal consistency of the questionnaire was analyzed using Cronbach's α coefficient. A weighted Kappa test was used to estimate the validity of the instrument; the result was subsequently compared with the Bilbrey nutritional index (BNI). The reliability of the questionnaires, evaluated in the patient sample, was ICC = 0.829. The agreement between MIS observations was considered adequate, k = 0.585 (p < 0.001); when comparing it with BNI, a value of k = 0.114 was obtained (p < 0.001). In order to estimate the tendency, a correlation test was performed. The r² correlation coefficient was 0.488 (p < 0.001). MIS has adequate reliability and validity for diagnosing PEW in the population with chronic kidney disease on HD. Copyright AULA MEDICA EDICIONES 2014. Published by AULA MEDICA. All rights reserved.

  7. The reliability and internal consistency of one-shot and flicker change detection for measuring individual differences in visual working memory capacity.

    Science.gov (United States)

    Pailian, Hrag; Halberda, Justin

    2015-04-01

    We investigated the psychometric properties of the one-shot change detection task for estimating visual working memory (VWM) storage capacity-and also introduced and tested an alternative flicker change detection task for estimating these limits. In three experiments, we found that the one-shot whole-display task returns estimates of VWM storage capacity (K) that are unreliable across set sizes-suggesting that the whole-display task is measuring different things at different set sizes. In two additional experiments, we found that the one-shot single-probe variant shows improvements in the reliability and consistency of K estimates. In another additional experiment, we found that a one-shot whole-display-with-click task (requiring target localization) also showed improvements in reliability and consistency. The latter results suggest that the one-shot task can return reliable and consistent estimates of VWM storage capacity (K), and they highlight the possibility that the requirement to localize the changed target is what engenders this enhancement. Through a final series of four experiments, we introduced and tested an alternative flicker change detection method that also requires the observer to localize the changing target and that generates, from response times, an estimate of VWM storage capacity (K). We found that estimates of K from the flicker task correlated with estimates from the traditional one-shot task and also had high reliability and consistency. We highlight the flicker method's ability to estimate executive functions as well as VWM storage capacity, and discuss the potential for measuring multiple abilities with the one-shot and flicker tasks.
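    The capacity estimates (K) compared above are conventionally derived from hit and false-alarm rates. The sketch below shows the two standard estimators usually paired with these designs, Cowan's K for single-probe displays and Pashler's K for whole-display probes; that the authors used exactly these formulas is an assumption here, and the example numbers are purely illustrative.

```python
def cowan_k(set_size, hit_rate, fa_rate):
    """Cowan's K for single-probe change detection: K = N * (H - FA)."""
    return set_size * (hit_rate - fa_rate)

def pashler_k(set_size, hit_rate, fa_rate):
    """Pashler's K for whole-display change detection:
    K = N * (H - FA) / (1 - FA)."""
    return set_size * (hit_rate - fa_rate) / (1 - fa_rate)

# illustrative numbers only: 80% hits, 20% false alarms at set size 6
print(round(cowan_k(6, 0.80, 0.20), 2))    # 3.6 items
print(round(pashler_k(6, 0.80, 0.20), 2))  # 4.5 items
```

The two estimators diverge because the whole-display formula corrects hits for guessing on trials where the changed item was not encoded, which is one reason whole-display and single-probe K values need not agree across set sizes.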

  8. Telomere Q-PNA-FISH: reliable results from stochastic signals.

    Directory of Open Access Journals (Sweden)

    Andrea Cukusic Kalajzic

    Structural and functional analysis of telomeres is very important for understanding basic biological functions such as genome stability, cell growth control, senescence and aging. Recently, serious concerns have been raised regarding the reliability of current telomere measurement methods such as Southern blot and quantitative polymerase chain reaction. Since telomere length is associated with age-related pathologies, including cardiovascular disease and cancer, both at the individual and population level, accurate interpretation of measured results is a necessity. The telomere Q-PNA-FISH technique has been widely used in these studies as well as in commercial analysis for the general population. A hallmark of telomere Q-PNA-FISH is the wide variation among telomere signals, which has a major impact on obtained results. In the present study we introduce a specific mathematical and statistical analysis of sister telomere signals during cell culture senescence which enabled us to identify high regularity in their variations. This phenomenon explains the reproducibility of results observed in numerous telomere studies when the Q-PNA-FISH technique is used. In addition, we discuss the molecular mechanisms which probably underlie the observed telomere behavior.

  9. A roadmap for production of sustainable, consistent and reliable electric power from agricultural biomass- An Indian perspective

    International Nuclear Information System (INIS)

    Singh, Jaswinder

    2016-01-01

    The utilization of agricultural biomass for production of electric power can help to reduce the environmental emissions while achieving energy security and sustainable development. This paper presents a methodology for estimating the power production potential of agricultural biomass in a country. Further, the methodology has been applied to develop a roadmap for producing reliable power in India. The present study reveals that about 650 Mt/year of agricultural biomass is generated in India, while about one-third of this has been found to be surplus for energy applications. The cereal crops have major contribution (64.60%) in production of surplus biomass followed by sugarcane (24.60%) and cotton (10.68%). The energy potential of these resources is of the order of 3.72 EJ, which represents a significant proportion of the primary energy consumption in the country. These biomass resources can produce electric power of 23–35 GW depending upon the efficiency of thermal conversion. The delivery of biomass to the plants and selection of appropriate technology have been found as the major issues that need to be resolved carefully. In the end, the study summarizes various technological options for biomass collection and utilization that can be used for producing clean and consistent power supply. - Highlights: •The production of bioelectricity in India is imperative and inevitable. •About one-third of the agricultural biomass is available for power generation. •The power potential of these resources is of the order of 23–31 GW. •The delivery of biomass to plants and technology selection are the key issues. •India should exploit these resources for producing clean and reliable power.

  10. Toward valid and reliable brain imaging results in eating disorders.

    Science.gov (United States)

    Frank, Guido K W; Favaro, Angela; Marsh, Rachel; Ehrlich, Stefan; Lawson, Elizabeth A

    2018-03-01

    Human brain imaging can help improve our understanding of mechanisms underlying brain function and how they drive behavior in health and disease. Such knowledge may eventually help us to devise better treatments for psychiatric disorders. However, the brain imaging literature in psychiatry and especially eating disorders has been inconsistent, and studies are often difficult to replicate. The extent or severity of extremes of eating and state of illness, which are often associated with differences in, for instance hormonal status, comorbidity, and medication use, commonly differ between studies and likely add to variation across study results. Those effects are in addition to the well-described problems arising from differences in task designs, data quality control procedures, image data preprocessing and analysis or statistical thresholds applied across studies. Which of those factors are most relevant to improve reproducibility is still a question for debate and further research. Here we propose guidelines for brain imaging research in eating disorders to acquire valid results that are more reliable and clinically useful. © 2018 Wiley Periodicals, Inc.

  11. Reliability, factor analysis and internal consistency calculation of the Insomnia Severity Index (ISI) in French and in English among Lebanese adolescents.

    Science.gov (United States)

    Chahoud, M; Chahine, R; Salameh, P; Sauleau, E A

    2017-06-01

    Our goal is to validate and to verify the reliability of the French and English versions of the Insomnia Severity Index (ISI) in Lebanese adolescents. A cross-sectional study was implemented. 104 Lebanese students aged between 14 and 19 years participated in the study. The English version of the questionnaire was distributed to English-speaking students and the French version was administered to French-speaking students. A scale (1 to 7, with 1 = very well understood and 7 = not at all) was used to identify the level of the students' understanding of each instruction, question and answer of the ISI. The scale's structural validity was assessed. The factor structure of the ISI was evaluated by principal component analysis. The internal consistency of this scale was evaluated by Cronbach's alpha. To assess test-retest reliability, the intraclass correlation coefficient (ICC) was used. The principal component analysis confirmed the presence of a two-component factor structure in the English version and a three-component factor structure in the French version, with eigenvalues > 1. The English version of the ISI had an excellent internal consistency (α = 0.90), while the French version had a good internal consistency (α = 0.70). The ICC showed excellent agreement for the French version (ICC = 0.914, CI = 0.856-0.949) and good agreement for the English one (ICC = 0.762, CI = 0.481-0.890). The Bland-Altman plots of the two versions of the ISI showed that the responses over two weeks were comparable and very few outliers were detected. The results of our analyses reveal that both English and French versions of the ISI scale have good internal consistency and are reproducible and reliable. Therefore, it can be used to assess the prevalence of insomnia in Lebanese adolescents.
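    The test-retest statistic used in studies like this one can be reproduced in a few lines from the two-way ANOVA mean squares. The sketch computes a two-way random, absolute-agreement, single-measures ICC(2,1); which ICC form the authors actually chose is not stated in the abstract, and the simulated test/retest data are purely illustrative.

```python
import numpy as np

def icc2_1(x):
    """Two-way random, absolute-agreement, single-measures ICC(2,1)
    for an (n_subjects x k_occasions_or_raters) matrix."""
    x = np.asarray(x, dtype=float)
    n, k = x.shape
    grand = x.mean()
    msr = k * np.sum((x.mean(axis=1) - grand) ** 2) / (n - 1)   # subjects
    msc = n * np.sum((x.mean(axis=0) - grand) ** 2) / (k - 1)   # occasions
    sse = np.sum((x - grand) ** 2) - msr * (n - 1) - msc * (k - 1)
    mse = sse / ((n - 1) * (k - 1))
    return (msr - mse) / (msr + (k - 1) * mse + k * (msc - mse) / n)

rng = np.random.default_rng(1)
true_score = rng.normal(50, 10, size=100)
# test and two-week retest: same true score plus independent noise
scores = np.column_stack([true_score + rng.normal(0, 4, 100),
                          true_score + rng.normal(0, 4, 100)])
print(round(icc2_1(scores), 2))
```

When true-score variance dominates occasion and error variance, the estimate approaches σ_subject² / (σ_subject² + σ_error²), here about 0.86.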

  12. Reliability of attitude and knowledge items and behavioral consistency in the validated sun exposure questionnaire in a Danish population based sample

    DEFF Research Database (Denmark)

    Køster, Brian; Søndergaard, Jens; Nielsen, Jesper Bo

    2018-01-01

    An important feature of questionnaire validation is reliability. To be able to measure a given concept by questionnaire validly, the reliability needs to be high. The objectives of this study were to examine reliability of attitude and knowledge and behavioral consistency of sunburn in a developed ... questionnaire for monitoring and evaluating population sun-related behavior. Sun-related behavior, attitude and knowledge were measured weekly by a questionnaire in the summer of 2013 among 664 Danes. Reliability was tested in a test-retest design. Consistency of behavioral information was tested similarly ... in protection behavior was low. To our knowledge, this is the first study to report reliability for a completely validated questionnaire on sun-related behavior in a national random population-based sample. Further, we show that attitude and knowledge questions confirmed their validity with good reliability ...

  13. Reliability

    OpenAIRE

    Condon, David; Revelle, William

    2017-01-01

    Separating the signal in a test from the irrelevant noise is a challenge for all measurement. Low test reliability limits test validity, attenuates important relationships, and can lead to regression artifacts. Multiple approaches to assessing and improving reliability are discussed, along with their respective advantages and disadvantages. Practical advice on how to assess reliability using open source software is provided.

  14. Internal consistency, reliability, and temporal stability of the Oxford Happiness Questionnaire short-form: Test-retest data over two weeks

    OpenAIRE

    MCGUCKIN, CONOR

    2006-01-01

    The Oxford Happiness Questionnaire short-form is a recently developed eight-item measure of happiness. This study evaluated the internal consistency reliability and test-retest reliability of the Oxford Happiness Questionnaire short-form among 55 Northern Irish undergraduate university students who completed the measure on two occasions separated by two weeks. Internal consistency of the measure was satisfactory at both Time 1 (alpha = .62) and Time 2 (alpha = ....

  15. Posterior consistency for Bayesian inverse problems through stability and regression results

    International Nuclear Information System (INIS)

    Vollmer, Sebastian J

    2013-01-01

    In the Bayesian approach, the a priori knowledge about the input of a mathematical model is described via a probability measure. The joint distribution of the unknown input and the data is then conditioned, using Bayes’ formula, giving rise to the posterior distribution on the unknown input. In this setting we prove posterior consistency for nonlinear inverse problems: a sequence of data is considered, with diminishing fluctuations around a single truth and it is then of interest to show that the resulting sequence of posterior measures arising from this sequence of data concentrates around the truth used to generate the data. Posterior consistency justifies the use of the Bayesian approach very much in the same way as error bounds and convergence results for regularization techniques do. As a guiding example, we consider the inverse problem of reconstructing the diffusion coefficient from noisy observations of the solution to an elliptic PDE in divergence form. This problem is approached by splitting the forward operator into the underlying continuum model and a simpler observation operator based on the output of the model. In general, these splittings allow us to conclude posterior consistency provided a deterministic stability result for the underlying inverse problem and a posterior consistency result for the Bayesian regression problem with the push-forward prior. Moreover, we prove posterior consistency for the Bayesian regression problem based on the regularity, the tail behaviour and the small ball probabilities of the prior. (paper)
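
    The notion of posterior consistency described above, a sequence of posteriors concentrating around the truth as data fluctuations diminish, can be illustrated in the simplest conjugate setting: a Gaussian prior on a scalar "truth" combined with increasingly many noisy observations. This is only a toy analogue of the paper's nonlinear elliptic-PDE setting, not its actual construction:

```python
import numpy as np

def gaussian_posterior(y, prior_mean=0.0, prior_var=1.0, noise_var=0.25):
    """Conjugate posterior for theta given observations y_i = theta + N(0, noise_var)."""
    y = np.asarray(y, dtype=float)
    n = len(y)
    post_var = 1.0 / (1.0 / prior_var + n / noise_var)
    post_mean = post_var * (prior_mean / prior_var + y.sum() / noise_var)
    return post_mean, post_var

# Data generated from a fixed truth; the posterior concentrates around it as n grows.
rng = np.random.default_rng(1)
truth = 2.0
for n in (10, 100, 10000):
    y = truth + 0.5 * rng.normal(size=n)
    m, v = gaussian_posterior(y)
    print(n, round(m, 3), round(v, 6))
```

    The posterior variance decays like 1/n and the posterior mean converges to the truth, which is exactly the concentration behavior that posterior consistency formalizes for the inverse problem.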

  16. Results of the event sequence reliability benchmark exercise

    International Nuclear Information System (INIS)

    Silvestri, E.

    1990-01-01

    The Event Sequence Reliability Benchmark Exercise is the fourth in a series of benchmark exercises on reliability and risk assessment, with specific reference to nuclear power plant applications, and is the logical continuation of the previous benchmark exercises on System Analysis, Common Cause Failure and Human Factors. The reference plant is the Nuclear Power Plant at Grohnde, Federal Republic of Germany, a 1300 MW PWR plant of KWU design. The specific objective of the Exercise is to model, quantify and analyze those event sequences, initiated by the occurrence of a loss of offsite power, that involve the steam generator feed. The general aim is to develop a segment of a risk assessment incorporating all the specific aspects and models of quantification, such as common cause failure, human factors and system analysis, developed in the previous reliability benchmark exercises, with the addition of two specific topics: dependences between homologous components belonging to different systems featuring in a given event sequence, and uncertainty quantification. The end result is an overall assessment of the state of the art in risk assessment and of the relative influences of quantification problems in a general risk assessment framework. The Exercise has been carried out in two phases, both requiring modelling and quantification, with the second phase adopting more restrictive rules and fixing certain common data, as emerged necessary from the first phase. Fourteen teams have participated in the Exercise, mostly from EEC countries, with one from Sweden and one from the USA. (author)

  17. Internal consistency reliability and construct validity of the Dutch translation of the Francis Scale of Attitude toward Christianity among adolescents.

    Science.gov (United States)

    Francis, L J; Hermans, C A

    2000-02-01

    A sample of 1,021 young people attending Years 7, 8, 9, 10, and 11 at Catholic secondary schools within the state-maintained sector completed the Dutch translation of the Francis Scale of Attitude toward Christianity. The data support its reliability and validity and commend it for further use in studies conducted among young people in The Netherlands.

  18. Calibration and consistency of results of an ionization-chamber secondary standard measuring system for activity

    International Nuclear Information System (INIS)

    Schrader, Heinrich

    2000-01-01

    Calibration in terms of activity of the ionization-chamber secondary standard measuring systems at the PTB is described. The measurement results of a Centronic IG12/A20, a Vinten ISOCAL IV and a radionuclide calibrator chamber for nuclear medicine applications are discussed, their energy-dependent efficiency curves established, and their consistency checked using recently evaluated radionuclide decay data. Criteria for evaluating and transferring calibration factors (or efficiencies) are given

  19. Factor structure, internal consistency and reliability of the Posttraumatic Stress Disorder Checklist (PCL): an exploratory study Estrutura fatorial, consistência interna e confiabilidade do Posttraumatic Stress Disorder Checklist (PCL): um estudo exploratório

    Directory of Open Access Journals (Sweden)

    Eduardo de Paula Lima

    2012-01-01

    INTRODUCTION: Posttraumatic stress disorder (PTSD) is an anxiety disorder resulting from exposure to traumatic events. The Posttraumatic Stress Disorder Checklist (PCL) is a self-report measure largely used to evaluate the presence of PTSD. OBJECTIVE: To investigate the internal consistency, temporal reliability and factor validity of the Portuguese-language version of the PCL used in Brazil. METHODS: A total of 186 participants were recruited. The sample was heterogeneous with regard to occupation, sociodemographic data, mental health history, and exposure to traumatic events. Subjects answered the PCL on two occasions within a 15-day interval (range: 5-15 days). RESULTS: Cronbach's alpha coefficients indicated high internal consistency for the total scale (0.91) and for the theoretical dimensions of the Diagnostic and Statistical Manual of Mental Disorders, 4th edition (DSM-IV) (0.83, 0.81, and 0.80). Temporal reliability (test-retest) was high and consistent for different cutoffs. Maximum likelihood exploratory factor analysis (EFA) was conducted and oblique rotation (Promax) was applied. The Kaiser-Meyer-Olkin (KMO) index (0.911) and Bartlett's test of sphericity (χ² = 1,381.34, p < ...

  20. A review of culturally adapted versions of the Oswestry Disability Index: the adaptation process, construct validity, test-retest reliability and internal consistency.

    Science.gov (United States)

    Sheahan, Peter J; Nelson-Wong, Erika J; Fischer, Steven L

    2015-01-01

    The Oswestry Disability Index (ODI) is a self-report-based outcome measure used to quantify the extent of disability related to low back pain (LBP), a substantial contributor to workplace absenteeism. The ODI tool has been adapted for use by patients in several non-English speaking nations. It is unclear, however, if these adapted versions of the ODI are as credible as the original ODI developed for English-speaking nations. The objective of this study was to conduct a review of the literature to identify culturally adapted versions of the ODI and to report on the adaptation process, construct validity, test-retest reliability and internal consistency of these ODIs. Following a pragmatic review process, data were extracted from each study with regard to these four outcomes. While most studies applied adaptation processes in accordance with best-practice guidelines, there were some deviations. However, all studies reported high-quality psychometric properties: group mean construct validity was 0.734 ± 0.094 (indicated via a correlation coefficient), test-retest reliability was 0.937 ± 0.032 (indicated via an intraclass correlation coefficient) and internal consistency was 0.876 ± 0.047 (indicated via Cronbach's alpha). Researchers can be confident when using any of these culturally adapted ODIs, or when comparing and contrasting results between cultures where these versions were employed. Implications for Rehabilitation: Low back pain is the second leading cause of disability in the world, behind only cancer. The Oswestry Disability Index (ODI) has been developed as a self-report outcome measure of low back pain for administration to patients. An understanding of the various cross-cultural adaptations of the ODI is important for more concerted multi-national research efforts. This review examines 16 cross-cultural adaptations of the ODI and should inform the work of health care and rehabilitation professionals.

  1. First results of GERDA Phase II and consistency with background models

    Science.gov (United States)

    Agostini, M.; Allardt, M.; Bakalyarov, A. M.; Balata, M.; Barabanov, I.; Baudis, L.; Bauer, C.; Bellotti, E.; Belogurov, S.; Belyaev, S. T.; Benato, G.; Bettini, A.; Bezrukov, L.; Bode, T.; Borowicz, D.; Brudanin, V.; Brugnera, R.; Caldwell, A.; Cattadori, C.; Chernogorov, A.; D'Andrea, V.; Demidova, E. V.; Di Marco, N.; Domula, A.; Doroshkevich, E.; Egorov, V.; Falkenstein, R.; Frodyma, N.; Gangapshev, A.; Garfagnini, A.; Gooch, C.; Grabmayr, P.; Gurentsov, V.; Gusev, K.; Hakenmüller, J.; Hegai, A.; Heisel, M.; Hemmer, S.; Hofmann, W.; Hult, M.; Inzhechik, L. V.; Janicskó Csáthy, J.; Jochum, J.; Junker, M.; Kazalov, V.; Kihm, T.; Kirpichnikov, I. V.; Kirsch, A.; Kish, A.; Klimenko, A.; Kneißl, R.; Knöpfle, K. T.; Kochetov, O.; Kornoukhov, V. N.; Kuzminov, V. V.; Laubenstein, M.; Lazzaro, A.; Lebedev, V. I.; Lehnert, B.; Liao, H. Y.; Lindner, M.; Lippi, I.; Lubashevskiy, A.; Lubsandorzhiev, B.; Lutter, G.; Macolino, C.; Majorovits, B.; Maneschg, W.; Medinaceli, E.; Miloradovic, M.; Mingazheva, R.; Misiaszek, M.; Moseev, P.; Nemchenok, I.; Palioselitis, D.; Panas, K.; Pandola, L.; Pelczar, K.; Pullia, A.; Riboldi, S.; Rumyantseva, N.; Sada, C.; Salamida, F.; Salathe, M.; Schmitt, C.; Schneider, B.; Schönert, S.; Schreiner, J.; Schulz, O.; Schütz, A.-K.; Schwingenheuer, B.; Selivanenko, O.; Shevzik, E.; Shirchenko, M.; Simgen, H.; Smolnikov, A.; Stanco, L.; Vanhoefer, L.; Vasenko, A. A.; Veresnikova, A.; von Sturm, K.; Wagner, V.; Wegmann, A.; Wester, T.; Wiesinger, C.; Wojcik, M.; Yanovich, E.; Zhitnikov, I.; Zhukov, S. V.; Zinatulina, D.; Zuber, K.; Zuzel, G.

    2017-01-01

    GERDA (GERmanium Detector Array) is an experiment searching for neutrinoless double beta decay (0νββ) of 76Ge, located at the Laboratori Nazionali del Gran Sasso of INFN (Italy). GERDA operates bare high-purity germanium detectors submerged in liquid argon (LAr). Phase II of data taking started in December 2015 and is currently ongoing. In Phase II, 35 kg of germanium detectors enriched in 76Ge, including thirty newly produced Broad Energy Germanium (BEGe) detectors, are operated to reach an exposure of 100 kg·yr within about three years of data taking. The design goal of Phase II is to reduce the background by one order of magnitude, reaching a sensitivity of T1/2(0νββ) = O(10^26) yr. To achieve the necessary background reduction, the setup was complemented with a LAr veto. Analysis of the Phase II background spectrum demonstrates consistency with the background models. Furthermore, the 226Ra and 232Th contamination levels are consistent with screening results. In the first Phase II data release we found no hint of a 0νββ decay signal and place a lower limit on the half-life of this process of T1/2(0νββ) > 5.3 × 10^25 yr (90% C.L., sensitivity 4.0 × 10^25 yr). First results of GERDA Phase II will be presented.
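
    Half-life limits of the kind quoted in this record follow from the standard counting relation T1/2 > ln 2 · (N_A/m_a) · ε · E / N_up, where E is the exposure in kg·yr, ε the signal efficiency, m_a the molar mass of the target material, and N_up the upper limit on signal counts. A rough sketch with illustrative numbers; the efficiency and count limit below are placeholder assumptions, not GERDA's published analysis values:

```python
import math

N_A = 6.02214076e23  # Avogadro's number, atoms per mol

def halflife_limit(exposure_kg_yr, molar_mass_g, efficiency, n_signal_upper):
    """Counting-experiment lower limit on a decay half-life, in years."""
    atoms_per_kg = N_A * 1000.0 / molar_mass_g
    return math.log(2) * atoms_per_kg * efficiency * exposure_kg_yr / n_signal_upper

# Illustrative only: ~35 kg of enriched Ge run for ~1 yr, an assumed 60% signal
# efficiency, and an assumed upper limit of ~3 signal counts.
print(f"{halflife_limit(35.0, 75.6, 0.6, 3.0):.2e} yr")
```

    Even with these rough placeholder inputs the result lands at order 10^25 yr, the same scale as the published Phase II limit, which is the point of the counting relation.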

  2. Consistency and reliability of judgements by assessors of case based discussions in general practice specialty training programmes in the United Kingdom.

    Science.gov (United States)

    Bodgener, Susan; Denney, Meiling; Howard, John

    2017-01-01

    Case based discussions (CbDs) are a mandatory workplace-based assessment used throughout general practitioner (GP) specialty training; they contribute to the annual review of competence progression (ARCP) for each trainee. This study examined the judgements arising from CbDs made by different groups of assessors and whether or not these assessments supported ARCP decisions. The trainees selected were at the end of their first year of GP training and had been identified during their ARCPs as needing extra training time. CbDs were specifically chosen as they are completed by both hospital and GP supervisors, enabling comparison between these two groups. The results raise concern with regard to the consistency of judgements made by different groups of assessors, with significant variance between assessors of different status and seniority. Further work needs to be done on whether the CbD in its current format is fit for purpose as one of the mandatory workplace-based assessments (WPBAs) for GP trainees, particularly during their hospital placements. There is a need to increase the inter-rater reliability of CbDs to ensure a consistent contribution to subsequent decisions about a trainee's overall progress.

  3. Frontiers of reliability

    CERN Document Server

    Basu, Asit P; Basu, Sujit K

    1998-01-01

    This volume presents recent results in reliability theory by leading experts in the world. It will prove valuable for researchers and users of reliability theory. It consists of refereed invited papers on a broad spectrum of topics in reliability. The subjects covered include Bayesian reliability, Bayesian reliability modeling, confounding in a series system, DF tests, Edgeworth approximation to reliability, estimation under random censoring, fault tree reduction for reliability, inference about changes in hazard rates, information theory and reliability, mixture experiment, mixture of Weibull

  4. Iterative reconstruction for quantitative computed tomography analysis of emphysema: consistent results using different tube currents

    Directory of Open Access Journals (Sweden)

    Yamashiro T

    2015-02-01

    Tsuneo Yamashiro,1 Tetsuhiro Miyara,1 Osamu Honda,2 Noriyuki Tomiyama,2 Yoshiharu Ohno,3 Satoshi Noma,4 Sadayuki Murayama1 On behalf of the ACTIve Study Group 1Department of Radiology, Graduate School of Medical Science, University of the Ryukyus, Nishihara, Okinawa, Japan; 2Department of Radiology, Osaka University Graduate School of Medicine, Suita, Osaka, Japan; 3Department of Radiology, Kobe University Graduate School of Medicine, Kobe, Hyogo, Japan; 4Department of Radiology, Tenri Hospital, Tenri, Nara, Japan Purpose: To assess the advantages of iterative reconstruction for quantitative computed tomography (CT) analysis of pulmonary emphysema. Materials and methods: Twenty-two patients with pulmonary emphysema underwent chest CT imaging using identical scanners with three different tube currents: 240, 120, and 60 mA. Scan data were converted to CT images using Adaptive Iterative Dose Reduction using Three Dimensional Processing (AIDR3D) and a conventional filtered-back projection mode. Thus, six scans with and without AIDR3D were generated per patient. All other scanning and reconstruction settings were fixed. The percent low attenuation area (LAA%; < -950 Hounsfield units) and the lung density 15th percentile were automatically measured using a commercial workstation. Comparisons of LAA% and 15th percentile results between scans with and without AIDR3D were made by Wilcoxon signed-rank tests. Associations between body weight and measurement errors among these scans were evaluated by Spearman rank correlation analysis. Results: Overall, scan series without AIDR3D had higher LAA% and lower 15th percentile values than those with AIDR3D at each tube current (P<0.0001). For scan series without AIDR3D, lower tube currents resulted in higher LAA% values and lower 15th percentiles. The extent of emphysema was significantly different between each pair among scans when not using AIDR3D (LAA%, P<0.0001; 15th percentile, P<0.01), but was not
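
    The two emphysema metrics used in this record, the percent low-attenuation area (LAA%, the fraction of lung voxels below -950 HU) and the 15th-percentile lung density, reduce to simple operations on the histogram of lung HU values. A minimal sketch on synthetic voxel data (hypothetical numbers, not the study's commercial-workstation pipeline):

```python
import numpy as np

def emphysema_metrics(lung_hu, threshold=-950.0, percentile=15.0):
    """Return (LAA%, 15th-percentile HU) for an array of lung voxel HU values."""
    hu = np.asarray(lung_hu, dtype=float)
    laa_pct = 100.0 * np.mean(hu < threshold)
    perc = np.percentile(hu, percentile)
    return laa_pct, perc

# Synthetic lung: mostly normal parenchyma around -870 HU plus a small
# emphysematous tail centered near -975 HU.
rng = np.random.default_rng(42)
voxels = np.concatenate([rng.normal(-870, 40, 90_000),
                         rng.normal(-975, 15, 10_000)])
laa, p15 = emphysema_metrics(voxels)
print(f"LAA% = {laa:.1f}, 15th percentile = {p15:.0f} HU")
```

    Image noise widens the HU histogram and pushes more voxels below the -950 HU threshold, which is why the abstract reports higher LAA% at lower tube currents when iterative reconstruction (noise reduction) is not applied.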

  5. Test-Retest Reliability, Convergent Validity, and Internal Consistency of the Persian Version of Fullerton Advanced Balance Scale in Iranian Community-Dwelling Older Adults

    OpenAIRE

    Azar Sabet; Akram Azad; Ghorban Taghizadeh

    2016-01-01

    Objectives: This study was performed to evaluate the convergent validity, test-retest reliability and internal consistency of the Persian translation of the Fullerton Advanced Balance (FAB) scale for use in Iranian community-dwelling older adults and improve the quality of their functional balance assessment. Methods & Materials: The original scale was translated with a forward-backward protocol. In the next step, using convenience sampling and inclusion criteria, 88 functionally indep...

  6. Happiness as stable extraversion: internal consistency reliability and construct validity of the Oxford Happiness Questionnaire among undergraduate students

    OpenAIRE

    Robbins, Mandy; Francis, Leslie J.; Edwards, Bethan

    2010-01-01

    The Oxford Happiness Questionnaire (OHQ) was developed by Hills and Argyle (2002) to provide a more accessible equivalent measure of the Oxford Happiness Inventory (OHI). The aim of the present study was to examine the internal consistency reliability and construct validity of this new instrument alongside the Eysenckian dimensional model of personality. The Oxford Happiness Questionnaire was completed by a sample of 131 undergraduate students together with the abbreviated form of the Revise...

  7. Consistent deformations of dual formulations of linearized gravity: A no-go result

    International Nuclear Information System (INIS)

    Bekaert, Xavier; Boulanger, Nicolas; Henneaux, Marc

    2003-01-01

    The consistent, local, smooth deformations of the dual formulation of linearized gravity involving a tensor field in the exotic representation of the Lorentz group with Young symmetry type (D-3,1) (one column of length D-3 and one column of length 1) are systematically investigated. The rigidity of the Abelian gauge algebra is first established. We next prove a no-go theorem for interactions involving at most two derivatives of the fields

  8. Impact of Alzheimer's Disease on Caregiver Questionnaire: internal consistency, convergent validity, and test-retest reliability of a new measure for assessing caregiver burden.

    Science.gov (United States)

    Cole, Jason C; Ito, Diane; Chen, Yaozhu J; Cheng, Rebecca; Bolognese, Jennifer; Li-McLeod, Josephine

    2014-09-04

    There is a lack of validated instruments to measure the level of burden of Alzheimer's disease (AD) on caregivers. The Impact of Alzheimer's Disease on Caregiver Questionnaire (IADCQ) is a 12-item instrument with a seven-day recall period that measures an AD caregiver's burden across emotional, physical, social, financial, sleep, and time aspects. The primary objectives of this study were to evaluate the psychometric properties of the IADCQ administered on the Web and to determine the most appropriate scoring algorithm. A national sample of 200 unpaid AD caregivers participated in this study by completing the Web-based version of the IADCQ and the Short Form-12 Health Survey Version 2 (SF-12v2™). The SF-12v2 was used to measure convergent validity of IADCQ scores and to provide an understanding of the overall health-related quality of life of sampled AD caregivers. The IADCQ survey was also completed four weeks later by a randomly selected subgroup of 50 participants to assess test-retest reliability. Confirmatory factor analysis (CFA) was implemented to test the dimensionality of the IADCQ items. Classical item-level and scale-level psychometric analyses were conducted to estimate psychometric characteristics of the instrument. Test-retest reliability was performed to evaluate the instrument's stability and consistency over time. Virtually none (2%) of the respondents had either floor or ceiling effects, indicating the IADCQ covers an ideal range of burden. A single-factor model obtained appropriate goodness of fit and provided evidence that a simple sum score of the 12 items of the IADCQ can be used to measure AD caregiver burden. Scale-level reliability was supported with a coefficient alpha of 0.93 and an intra-class correlation coefficient (for test-retest reliability) of 0.68 (95% CI: 0.50-0.80). Low-moderate negative correlations were observed between the IADCQ and scales of the SF-12v2. The study findings suggest the IADCQ has appropriate psychometric characteristics as a

  9. Cross-Cultural Adaptation of the Profile Fitness Mapping Neck Questionnaire to Brazilian Portuguese: Internal Consistency, Reliability, and Construct and Structural Validity.

    Science.gov (United States)

    Ferreira, Mariana Cândido; Björklund, Martin; Dach, Fabiola; Chaves, Thais Cristina

    The purpose of this study was to adapt the ProFitMap-neck to Brazilian Portuguese and evaluate the psychometric properties of the adapted version. The cross-cultural adaptation consisted of 5 stages, and 180 female patients with chronic neck pain participated in the study. A subsample (n = 30) answered the pretest, and another subsample (n = 100) answered the questionnaire a second time. Internal consistency, test-retest reliability, and construct validity (hypothesis testing and structural validity) were estimated. For construct validity, the scores of the questionnaire were correlated with the Neck Disability Index (NDI), the Hospital Anxiety and Depression Scale (HADS), the Tampa Scale of Kinesiophobia (TSK), and the 36-item Short-Form Health Survey (SF-36). Internal consistency was determined by adequate Cronbach's α values (α > 0.70). Strong reliability was identified by high intraclass correlation coefficients (ICC > 0.75). Construct validity was identified by moderate and strong correlations of the Br-ProFitMap-neck with the total NDI score (-0.56 ...), and structural validity by explained variance > 50%, Kaiser-Meyer-Olkin index > 0.50, eigenvalues > 1, and factor loadings > 0.2. The Br-ProFitMap-neck had adequate psychometric properties and can be used in clinical settings, as well as research, in patients with chronic neck pain. Copyright © 2017. Published by Elsevier Inc.

  10. Reconstruction of scalar field theories realizing inflation consistent with the Planck and BICEP2 results

    Directory of Open Access Journals (Sweden)

    Kazuharu Bamba

    2014-10-01

    We reconstruct scalar field theories that realize inflation compatible with the BICEP2 result as well as with the Planck data. In particular, we examine the chaotic inflation model, the natural (or axion) inflation model, and an inflationary model with a hyperbolic inflaton potential. We present an explicit approach for finding a scalar field model of inflation in which any observations can, in principle, be explained.

  11. Internal consistency, test-retest reliability and measurement error of the self-report version of the social skills rating system in a sample of Australian adolescents.

    Directory of Open Access Journals (Sweden)

    Sharmila Vaz

    The social skills rating system (SSRS) is used to assess social skills and competence in children and adolescents. While its characteristics based on United States samples (US) are published, corresponding Australian figures are unavailable. Using a 4-week retest design, we examined the internal consistency, retest reliability and measurement error (ME) of the SSRS secondary student form (SSF) in a sample of Year 7 students (N = 187), from five randomly selected public schools in Perth, Western Australia. Internal consistency (IC) of the total scale and most subscale scores (except empathy) on the frequency rating scale was adequate to permit independent use. On the importance rating scale, most IC estimates for girls fell below the benchmark. Test-retest estimates of the total scale and subscales were insufficient to permit reliable use. ME of the total scale score (frequency rating) for boys was equivalent to the US estimate, while that for girls was lower than the US error. ME of the total scale score (importance rating) was larger than the error using the frequency rating scale. The study finding supports the idea of using multiple informants (e.g. teacher and parent reports), not just student as recommended in the manual. Future research needs to substantiate the clinical meaningfulness of the MEs calculated in this study by corroborating them against the respective Minimum Clinically Important Difference (MCID).

  12. Internal consistency, test-retest reliability and measurement error of the self-report version of the social skills rating system in a sample of Australian adolescents.

    Science.gov (United States)

    Vaz, Sharmila; Parsons, Richard; Passmore, Anne Elizabeth; Andreou, Pantelis; Falkmer, Torbjörn

    2013-01-01

    The social skills rating system (SSRS) is used to assess social skills and competence in children and adolescents. While its characteristics based on United States samples (US) are published, corresponding Australian figures are unavailable. Using a 4-week retest design, we examined the internal consistency, retest reliability and measurement error (ME) of the SSRS secondary student form (SSF) in a sample of Year 7 students (N = 187), from five randomly selected public schools in Perth, Western Australia. Internal consistency (IC) of the total scale and most subscale scores (except empathy) on the frequency rating scale was adequate to permit independent use. On the importance rating scale, most IC estimates for girls fell below the benchmark. Test-retest estimates of the total scale and subscales were insufficient to permit reliable use. ME of the total scale score (frequency rating) for boys was equivalent to the US estimate, while that for girls was lower than the US error. ME of the total scale score (importance rating) was larger than the error using the frequency rating scale. The study finding supports the idea of using multiple informants (e.g. teacher and parent reports), not just student as recommended in the manual. Future research needs to substantiate the clinical meaningfulness of the MEs calculated in this study by corroborating them against the respective Minimum Clinically Important Difference (MCID).
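
    Measurement error figures of the kind reported in this record are commonly derived from a test-retest design via the standard error of measurement, SEM = SD·√(1 − ICC), and the associated 95% minimum detectable change, MDC95 = 1.96·√2·SEM. The abstract does not state the exact ME formula the authors used, so this sketch only shows the common convention, with illustrative input values:

```python
import math

def sem_and_mdc(sd, icc):
    """Standard error of measurement and 95% minimum detectable change."""
    sem = sd * math.sqrt(1.0 - icc)          # SEM = SD * sqrt(1 - ICC)
    mdc95 = 1.96 * math.sqrt(2.0) * sem      # MDC95 for a test-retest difference
    return sem, mdc95

# Illustrative values: a total-score SD of 12 points and a test-retest ICC of 0.80.
sem, mdc = sem_and_mdc(12.0, 0.80)
print(f"SEM = {sem:.2f}, MDC95 = {mdc:.2f}")
```

    Comparing an MDC computed this way against a Minimum Clinically Important Difference is exactly the corroboration step the abstract calls for in future research.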

  13. Cross-cultural adaptation, reliability, internal consistency and validation of the Hand Function Sort (HFS©) for French speaking patients with upper limb complaints.

    Science.gov (United States)

    Konzelmann, M; Burrus, C; Hilfiker, R; Rivier, G; Deriaz, O; Luthi, F

    2015-03-01

    Functional evaluation of the upper limb is not only based on clinical findings but requires self-administered questionnaires to address the patient's perspective. The Hand Function Sort (HFS©) was only validated in English. The aim of this study was the French cross-cultural adaptation and validation of the HFS© (HFS-F). 150 patients with various upper limb impairments were recruited in a rehabilitation center. Translation and cross-cultural adaptation were made according to international guidelines. Construct validity was estimated through correlations with the Disabilities of the Arm, Shoulder and Hand (DASH) questionnaire, the SF-36 mental component summary (MCS), the SF-36 physical component summary (PCS) and pain intensity. Internal consistency was assessed by Cronbach's α and test-retest reliability by intraclass correlation. Cronbach's α was 0.98, and test-retest reliability was excellent at 0.921 (95% CI 0.871-0.971), the same as the original HFS©. Correlations with the DASH were -0.779 (95% CI -0.847 to -0.685); with the SF-36 PCS, 0.452 (95% CI 0.276-0.599); with pain, -0.247 (95% CI -0.429 to -0.041); and with the SF-36 MCS, 0.242 (95% CI 0.042-0.422). There were no floor or ceiling effects. The HFS-F has the same good psychometric properties as the original HFS© (internal consistency, test-retest reliability, convergent validity with the DASH, divergent validity with the SF-36 MCS, and no floor or ceiling effects). The convergent validity with the SF-36 PCS was poor; we found no correlation with pain. The HFS-F can be used with confidence in a population of working patients. Other studies are necessary to study its psychometric properties in other populations.

  14. Results of a Demonstration Assessment of Passive System Reliability Utilizing the Reliability Method for Passive Systems (RMPS)

    Energy Technology Data Exchange (ETDEWEB)

    Bucknor, Matthew; Grabaskas, David; Brunett, Acacia; Grelle, Austin

    2015-04-26

    Advanced small modular reactor designs include many advantageous design features such as passively driven safety systems that are arguably more reliable and cost effective relative to conventional active systems. Despite their attractiveness, assessing the reliability of passive systems with conventional reliability methods can be difficult: simple deviations in boundary conditions can induce functional failures in a passive system, and intermediate or unexpected operating modes can also occur. As part of an ongoing project, Argonne National Laboratory is investigating various methodologies to address passive system reliability. The Reliability Method for Passive Systems (RMPS), a systematic approach for examining reliability, is one technique chosen for this analysis. This methodology is combined with the Risk-Informed Safety Margin Characterization (RISMC) approach to assess the reliability of a passive system and the impact of its associated uncertainties. For this demonstration problem, an integrated plant model of an advanced small modular pool-type sodium fast reactor with a passive reactor cavity cooling system is subjected to a station blackout using RELAP5-3D. This paper discusses important aspects of the reliability assessment, including deployment of the methodology, the uncertainty identification and quantification process, and identification of key risk metrics.

  15. Results of the reliability benchmark exercise and the future CEC-JRC program

    International Nuclear Information System (INIS)

    Amendola, A.

    1985-01-01

    As a contribution towards identifying problem areas and assessing probabilistic safety assessment (PSA) methods and procedures of analysis, the JRC has organized a wide-ranging Benchmark Exercise on systems reliability. It was executed by ten different teams involving seventeen organizations from nine European countries. The exercise was based on a real case (the Auxiliary Feedwater System of the EDF Paluel PWR 1300 MWe Unit), starting from an analysis of technical specifications, logical and topological layout, and operational procedures. The terms of reference included both qualitative and quantitative analyses. The subdivision of the exercise into different phases and the rules adopted allowed assessment of the different components of the spread of the overall results. It appeared that modelling uncertainties may overwhelm data uncertainties, and major efforts must be spent to improve the consistency and completeness of qualitative analysis. After successful completion of the first exercise, the CEC-JRC program has planned separate exercises on the analysis of dependent failures and human factors before approaching the evaluation of a complete accident sequence.

  16. Modification of Operating Procedure for EZ-Retriever (Trademark) Microwave to Produce Consistent and Reproducible Immunohistochemical Results

    National Research Council Canada - National Science Library

    Tompkins, Christina P; Fath, Denise M; Hamilton, Tracey A; Kan, Robert K

    2006-01-01

    The present study was conducted to optimize the operating procedure for the EZ-Retriever™ microwave oven to produce consistent and reproducible staining results with microtubule-associated protein 2 (MAP-2...

  17. Using VIIRS Day/Night Band to Measure Electricity Supply Reliability: Preliminary Results from Maharashtra, India

    Directory of Open Access Journals (Sweden)

    Michael L. Mann

    2016-08-01

    Unreliable electricity supplies are common in developing countries and impose large socio-economic costs, yet precise information on electricity reliability is typically unavailable. This paper presents preliminary results from a machine-learning approach for using satellite imagery of nighttime lights to develop estimates of electricity reliability for western India at a finer spatial scale. We use data from the Visible Infrared Imaging Radiometer Suite (VIIRS) onboard the Suomi National Polar-orbiting Partnership (SNPP) satellite together with newly-available data from networked household voltage meters. Our results point to the possibilities of this approach as well as areas for refinement. With currently available training data, we find a limited ability to detect individual outages identified by household-level measurements of electricity voltage. This is likely due to the relatively small number of individual outages observed in our preliminary data. However, we find that the approach can estimate electricity reliability rates for individual locations fairly well, with the predicted versus actual regression yielding an R2 > 0.5. We also find that, despite the after-midnight overpass time of the SNPP satellite, the reliability estimates derived are representative of daytime reliability.
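
    The predicted-versus-actual fit quoted above (R2 > 0.5) is the ordinary coefficient of determination. A minimal sketch, with invented reliability rates rather than the study's data:

    ```python
    import numpy as np

    def r_squared(actual, predicted):
        """Coefficient of determination for predicted vs. actual values."""
        actual = np.asarray(actual, dtype=float)
        predicted = np.asarray(predicted, dtype=float)
        ss_res = ((actual - predicted) ** 2).sum()        # residual sum of squares
        ss_tot = ((actual - actual.mean()) ** 2).sum()    # total sum of squares
        return 1.0 - ss_res / ss_tot

    # Hypothetical per-location reliability rates (fraction of nights with power).
    actual = np.array([0.92, 0.80, 0.65, 0.98, 0.74])
    predicted = np.array([0.90, 0.83, 0.60, 0.95, 0.78])
    print(r_squared(actual, predicted))
    ```

    An R2 above 0.5 means the nighttime-lights model explains more than half the variance in ground-measured reliability across locations.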

  18. Experiences in Germany with reliability data assessment, results and current problems

    International Nuclear Information System (INIS)

    Homke, P.; Kutsch, W.; Lindauer, E.

    1982-01-01

    This paper gives a survey of reliability data assessment in the FRG. The activities carried out for the German Risk Assessment Study are presented together with selected results. A systematic data collection in a nuclear power plant is described, and the experience gained in this project is discussed.

  19. Feasibility to implement the radioisotopic method of nasal mucociliary transport measurement getting reliable results

    International Nuclear Information System (INIS)

    Troncoso, M.; Opazo, C.; Quilodran, C.; Lizama, V.

    2002-01-01

    Aim: Our goal was to implement the radioisotopic method for measuring the nasal mucociliary velocity of transport (NMVT) in a feasible way, in order to make it easily available as well as to validate the accuracy of the results. Such a method is needed when primary ciliary dyskinesia (PCD) is suspected, a disorder characterized by low NMVT and non-specific chronic respiratory symptoms that must be confirmed by electron-microscopic cilia biopsy. Methods: We performed one hundred studies from February 2000 until February 2002. Patients aged 2 months to 39 years, mean 9 years. All of them were referred from the Respiratory Disease Department. Ninety had upper or lower respiratory symptoms; ten were healthy controls. The procedure, done by the Nuclear Medicine Technologist, consists of placing a 20 μl drop of 99mTc-MAA (0.1 mCi, 4 MBq) behind the head of the inferior turbinate in one nostril using a frontal light, a nasal speculum and a teflon catheter attached to a tuberculin syringe. The drop movement was acquired on a gamma camera-computer system and the velocity was expressed in mm/min. As the patient must not move during the procedure, sedation has to be used in non-cooperative children. Cases with abnormal NMVT values were referred for nasal biopsy. Patients were classified in three groups: normal controls (NC), PCD confirmed by biopsy (PCDB) and cases with respiratory symptoms without biopsy (RSNB). In all patients with NMVT less than 2.4 mm/min, PCD was confirmed by biopsy. There was a clear-cut separation between normal and abnormal values and, interestingly, even the highest NMVT in PCDB cases was lower than the lowest NMVT in NC. The procedure is not as easy as generally described in the literature, because the operator has to acquire some skill and because of the need for sedation in some cases. Conclusion: The procedure gives reliable, reproducible and objective results. It is safe, inexpensive and quick in cooperative patients. Although, sometimes

  20. reliability reliability

    African Journals Online (AJOL)

    eobe

    Corresponding author, Tel: +234-703. RELIABILITY .... V , , given by the code of practice. However, checks must .... an optimization procedure over the failure domain F corresponding .... of Concrete Members based on Utility Theory,. Technical ...

  1. Selected problems and results of the transient event and reliability analyses for the German safety study

    International Nuclear Information System (INIS)

    Hoertner, H.

    1977-01-01

    For the investigation of the risk of nuclear power plants, loss-of-coolant accidents and transients have to be analyzed. The different functions of the engineered safety features installed to cope with transients are explained. The event tree analysis is carried out for the important transient 'loss of normal onsite power'. Preliminary results of the reliability analyses performed for the quantitative evaluation of this event tree are shown. (orig.) [de

  2. Process of Integrating Screening and Detailed Risk-based Modeling Analyses to Ensure Consistent and Scientifically Defensible Results

    International Nuclear Information System (INIS)

    Buck, John W.; McDonald, John P.; Taira, Randal Y.

    2002-01-01

    To support cleanup and closure of these tanks, modeling is performed to understand and predict potential impacts to human health and the environment. Pacific Northwest National Laboratory developed a screening tool for the United States Department of Energy, Office of River Protection that estimates, from a strategic planning perspective, the long-term human health risk posed by potential tank releases to the environment. This tool is being conditioned to more detailed model analyses to ensure consistency between studies and to provide scientific defensibility. Once the conditioning is complete, the system will be used to screen alternative cleanup and closure strategies. The integration of screening and detailed models provides consistent analyses, efficiencies in resources, and positive feedback between the various modeling groups. This approach of conditioning a screening methodology to more detailed analyses provides decision-makers with timely and defensible information and increases confidence in the results on the part of clients, regulators, and stakeholders.

  3. Modified Core Wash Cytology: A reliable same day biopsy result for breast clinics.

    Science.gov (United States)

    Bulte, J P; Wauters, C A P; Duijm, L E M; de Wilt, J H W; Strobbe, L J A

    2016-12-01

    Fine needle aspiration biopsy (FNAB), core needle biopsy (CNB) and hybrid techniques including core wash cytology (CWC) are available for same-day diagnosis of breast lesions. In CWC, a washing of the biopsy core is processed for a provisional cytological diagnosis, after which the core is processed like a regular CNB. This study focuses on the reliability of CWC in daily practice. All consecutive CWC procedures performed in a referral breast centre between May 2009 and May 2012 were reviewed, correlating CWC results with the CNB result, the definitive diagnosis after surgical resection and/or follow-up. Symptomatic as well as screen-detected lesions undergoing CNB were included. 1253 CWC procedures were performed. Definitive histology showed 849 (68%) malignant and 404 (32%) benign lesions. 80% of CWC procedures yielded a conclusive diagnosis: this percentage was higher amongst malignant lesions and lower for benign lesions: 89% and 62% respectively. Sensitivity and specificity of a conclusive CWC result were 98.3% and 90.4% respectively. The eventual incidence of malignancy in the cytological 'atypical' group (5%) was similar to the cytological 'benign' group (6%). CWC can be used to make a reliable provisional diagnosis of breast lesions within the hour. The high probability of conclusive results in malignant lesions makes CWC well suited for high-risk populations. Copyright © 2016 Elsevier Ltd, BASO ~ the Association for Cancer Surgery, and the European Society of Surgical Oncology. All rights reserved.
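
    The sensitivity and specificity figures reported above follow directly from confusion-matrix counts. A minimal sketch with invented counts (the study reports only the resulting percentages):

    ```python
    def sensitivity_specificity(tp, fn, tn, fp):
        """Sensitivity = TP/(TP+FN); specificity = TN/(TN+FP)."""
        return tp / (tp + fn), tn / (tn + fp)

    # Hypothetical counts: conclusive CWC results cross-tabulated
    # against definitive histology (malignant vs. benign).
    sens, spec = sensitivity_specificity(tp=590, fn=10, tn=235, fp=25)
    print(f"sensitivity={sens:.3f}, specificity={spec:.3f}")
    ```

    High sensitivity, as in this study, matters most when the provisional result is used to fast-track treatment of malignant lesions.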

  4. Thermodynamic properties of xanthone: Heat capacities, phase-transition properties, and thermodynamic-consistency analyses using computational results

    International Nuclear Information System (INIS)

    Chirico, Robert D.; Kazakov, Andrei F.

    2015-01-01

    Highlights: • Heat capacities were measured for the temperature range (5 to 520) K. • The enthalpy of combustion was measured and the enthalpy of formation was derived. • Thermodynamic-consistency analysis resolved inconsistencies in literature enthalpies of sublimation. • An inconsistency in literature enthalpies of combustion was resolved. • Application of computational chemistry in consistency analysis was demonstrated successfully. - Abstract: Heat capacities and phase-transition properties for xanthone (IUPAC name 9H-xanthen-9-one and Chemical Abstracts registry number [90-47-1]) are reported for the temperature range 5 < T/K < 524. Statistical calculations were performed and thermodynamic properties for the ideal gas were derived based on molecular geometry optimization and vibrational frequencies calculated at the B3LYP/6-31+G(d,p) level of theory. These results are combined with sublimation pressures from the literature to allow critical evaluation of inconsistent enthalpies of sublimation for xanthone, also reported in the literature. Literature values for the enthalpy of combustion of xanthone are re-assessed, a revision is recommended for one result, and a new value for the enthalpy of formation of the ideal gas is derived. Comparisons with thermophysical properties reported in the literature are made for all other reported and derived properties, where possible.

  5. Test results of reliable and very high capillary multi-evaporators / condenser loop

    Energy Technology Data Exchange (ETDEWEB)

    Van Oost, S; Dubois, M; Bekaert, G [Societe Anonyme Belge de Construction Aeronautique - SABCA (Belgium)

    1997-12-31

    The paper presents the results of various SABCA activities in the field of two-phase heat transport systems. These results have been based on a critical review and analysis of the existing two-phase loops and of future loop needs in space applications. The research and development of a high capillary wick (capillary pressure up to 38 000 Pa) are described. These activities have led to the development of a reliable high-performance capillary loop concept (HPCPL), which is discussed in detail. Several mono/multi-evaporator loop configurations have been ground tested. The presented results of various tests clearly show the viability of this concept for future applications. Proposed flight demonstrations as well as potential applications conclude this paper. (authors) 7 refs.

  6. Test results of reliable and very high capillary multi-evaporators / condenser loop

    Energy Technology Data Exchange (ETDEWEB)

    Van Oost, S.; Dubois, M.; Bekaert, G. [Societe Anonyme Belge de Construction Aeronautique - SABCA (Belgium)

    1996-12-31

    The paper presents the results of various SABCA activities in the field of two-phase heat transport systems. These results have been based on a critical review and analysis of the existing two-phase loops and of future loop needs in space applications. The research and development of a high capillary wick (capillary pressure up to 38 000 Pa) are described. These activities have led to the development of a reliable high-performance capillary loop concept (HPCPL), which is discussed in detail. Several mono/multi-evaporator loop configurations have been ground tested. The presented results of various tests clearly show the viability of this concept for future applications. Proposed flight demonstrations as well as potential applications conclude this paper. (authors) 7 refs.

  7. Self-Consistent Model of Magnetospheric Electric Field, Ring Current, Plasmasphere, and Electromagnetic Ion Cyclotron Waves: Initial Results

    Science.gov (United States)

    Gamayunov, K. V.; Khazanov, G. V.; Liemohn, M. W.; Fok, M.-C.; Ridley, A. J.

    2009-01-01

    Further development of our self-consistent model of interacting ring current (RC) ions and electromagnetic ion cyclotron (EMIC) waves is presented. This model incorporates large-scale magnetosphere-ionosphere coupling and treats self-consistently not only EMIC waves and RC ions, but also the magnetospheric electric field, RC, and plasmasphere. Initial simulations indicate that the region beyond geostationary orbit should be included in the simulation of the magnetosphere-ionosphere coupling. Additionally, a self-consistent description, based on first principles, of the ionospheric conductance is required. These initial simulations further show that in order to model the EMIC wave distribution and wave spectral properties accurately, the plasmasphere should also be simulated self-consistently, since its fine structure requires as much care as that of the RC. Finally, the effect of the finite time needed to reestablish a new potential pattern throughout the ionosphere and to communicate between the ionosphere and the equatorial magnetosphere cannot be ignored.

  8. The Evaluation of Real Time Milk Analyse Result Reliability in the Czech Republic

    Directory of Open Access Journals (Sweden)

    Oto Hanuš

    2016-01-01

    Good reliability of the results of regular analyses of milk composition could improve the health monitoring of dairy cows and herd management. The aim of this study was the analysis of the measurement abilities and properties of the RT (Real Time) system (AfiLab = AfiMilk: an NIR (near-infrared spectroscopy) measurement unit and electrical conductivity (C) of milk by conductometry, plus AfiFarm calibration and interpretation software) for the analysis of individual milk samples (IMSs). There were 2 × 30 IMSs in the experiment. The reference values (RVs) of milk components and properties (fat (F), proteins (P), lactose (L), C and the somatic cell count (SCC)) were determined by conventional direct and indirect methods: conductometry (C); infrared spectroscopy, (1) with filter technology and (2) with Fourier transformation (F, P, L); and somatic cell counting, (1) fluoro-opto-electronic counting in a film on a rotating disc and (2) flow cytometry (SCC). The AfiLab (alternative) method showed less close relationships to the RVs than the relationships between reference methods. This was expected. However, these relationships (r) were mostly significant: F from .597 to .738 (P ≤ 0.01 and ≤ 0.001); P from .284 to .787 (P > 0.05 and P ≤ 0.001); C .773 (P ≤ 0.001). Correlations (r) were not significant (P > 0.05) for L, from −.013 to .194, and SCC, from −.148 to −.133. Variability of the RVs explained the following percentages of variability in AfiLab results: F up to 54.4 %; P up to 61.9 %; L only 3.8 %; C up to 59.7 %. The explanatory power (reliability) of AfiLab results for the animal increases with the regularity of their measurement (the principle of real-time application). Correlation values r (x minus 1.64 × sd), giving a one-sided confidence limit at the 95 % level, can be used for an alternative method in assessing calibration quality. These limits are F 0.564, P 0.784 and C 0.715, and can be essential for the further implementation of this advanced technology of dairy herd management.

  9. Cross-cultural adaptation, reliability, internal consistency and validation of the Spinal Function Sort (SFS) for French- and German-speaking patients with back complaints.

    Science.gov (United States)

    Borloz, S; Trippolini, M A; Ballabeni, P; Luthi, F; Deriaz, O

    2012-09-01

    Subjective functional evaluation through questionnaires is fundamental, but it is seldom carried out in patients with back complaints for lack of validated tools. The Spinal Function Sort (SFS) was only validated in English. We aimed to translate, adapt and validate the French (SFS-F) and German (SFS-G) versions of the SFS. Three hundred and forty-four patients, experiencing various back complaints, were recruited in a French (n = 87) and a German-speaking (n = 257) center. Construct validity was estimated via correlations with SF-36 physical and mental scales, pain intensity and hospital anxiety and depression scales (HADS). Scale homogeneities were assessed by Cronbach's α. Test-retest reliability was assessed on 65 additional patients using intraclass correlation (IC). For the French and German translations, respectively, α were 0.98 and 0.98; IC 0.98 (95% CI: [0.97; 1.00]) and 0.94 (0.90; 0.98). Correlations with physical functioning were 0.63 (0.48; 0.74) and 0.67 (0.59; 0.73); with physical summary 0.60 (0.44; 0.72) and 0.52 (0.43; 0.61); with pain -0.33 (-0.51; -0.13) and -0.51 (-0.60; -0.42); with mental health -0.08 (-0.29; 0.14) and 0.25 (0.13; 0.36); with mental summary 0.01 (-0.21; 0.23) and 0.28 (0.16; 0.39); with depression -0.26 (-0.45; -0.05) and -0.42 (-0.52; -0.32); with anxiety -0.17 (-0.37; -0.04) and -0.45 (-0.54; -0.35). Reliability was excellent for both languages. Convergent validity was good with SF-36 physical scales, moderate with VAS pain. Divergent validity was low with SF-36 mental scales in both translated versions and with HADS for the SFS-F (moderate in SFS-G). Both versions seem to be valid and reliable for evaluating perceived functional capacity in patients with back complaints.
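
    The test-retest intraclass correlations reported in these validation studies can be computed from a one-way ANOVA decomposition. The sketch below implements the one-way random-effects ICC(1,1) on invented test-retest scores; the published studies may use a different ICC form:

    ```python
    import numpy as np

    def icc_oneway(ratings):
        """One-way random-effects ICC(1,1) for an (n_subjects x k_raters) matrix."""
        ratings = np.asarray(ratings, dtype=float)
        n, k = ratings.shape
        grand_mean = ratings.mean()
        subj_means = ratings.mean(axis=1)
        # Between-subject and within-subject mean squares from one-way ANOVA.
        ms_between = k * ((subj_means - grand_mean) ** 2).sum() / (n - 1)
        ms_within = ((ratings - subj_means[:, None]) ** 2).sum() / (n * (k - 1))
        return (ms_between - ms_within) / (ms_between + (k - 1) * ms_within)

    # Hypothetical questionnaire scores for 6 patients at two administrations.
    test1 = np.array([120.0, 150.0, 90.0, 170.0, 110.0, 140.0])
    test2 = np.array([118.0, 152.0, 95.0, 168.0, 112.0, 137.0])
    ratings = np.column_stack([test1, test2])
    print(icc_oneway(ratings))
    ```

    When retest scores track first-administration scores closely relative to the spread between patients, as here, the ICC approaches 1, which is what "excellent reliability" (ICC near 0.94-0.98) reflects.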

  10. Improving predictions for collider observables by consistently combining fixed order calculations with resummed results in perturbation theory

    International Nuclear Information System (INIS)

    Schoenherr, Marek

    2011-01-01

    With the constantly increasing precision of experimental data acquired at the current collider experiments Tevatron and LHC, the theoretical uncertainty on the prediction of multiparticle final states has to decrease accordingly in order to allow meaningful tests of the underlying theories, such as the Standard Model. A pure leading-order calculation, defined in the perturbative expansion of said theory in the interaction constant, represents the classical limit of such a quantum field theory and was already found to be insufficient at past collider experiments, e.g. LEP or HERA. Such a leading-order calculation can be systematically improved in various limits. If the typical scales of a process are large and the respective coupling constants are small, the inclusion of fixed-order higher-order corrections yields quickly converging predictions with much reduced uncertainties. In certain regions of phase space, still well within the perturbative regime of the underlying theory, a clear hierarchy of the inherent scales, however, leads to large logarithms occurring at every order in perturbation theory. In many cases these logarithms are universal and can be resummed to all orders, leading to precise predictions in these limits. Multiparticle final states now exhibit both small and large scales, necessitating a description using both resummed and fixed-order results. This thesis presents the consistent combination of two such resummation schemes with fixed-order results. The main objective therefore is to identify and properly treat terms that are present in both formulations in a process- and observable-independent manner. In the first part, the resummation scheme introduced by Yennie, Frautschi and Suura (YFS), resumming large logarithms associated with the emission of soft photons in massive QED, is combined with fixed-order next-to-leading-order matrix elements. The implementation of a universal algorithm is detailed and results are studied for various precision

  11. Reliability of environmental sampling culture results using the negative binomial intraclass correlation coefficient.

    Science.gov (United States)

    Aly, Sharif S; Zhao, Jianyang; Li, Ben; Jiang, Jiming

    2014-01-01

    The intraclass correlation coefficient (ICC) is commonly used to estimate the similarity between quantitative measures obtained from different sources. Overdispersed data are traditionally transformed so that a linear mixed model (LMM) based ICC can be estimated; a common transformation is the natural logarithm. The reliability of environmental sampling of fecal slurry on freestall pens has been estimated for Mycobacterium avium subsp. paratuberculosis using natural-logarithm-transformed culture results. Recently, the negative binomial ICC was defined based on a generalized linear mixed model for negative binomial distributed data. The current study reports on the negative binomial ICC estimate, which includes fixed effects, using culture results of environmental samples. Simulations using a wide variety of inputs and negative binomial distribution parameters (r; p) showed better performance of the new negative binomial ICC compared to the LMM-based ICC, even when the negative binomial data were logarithm- and square-root-transformed. A second comparison targeting a wider range of ICC values showed that the mean of the estimated ICC closely approximated the true ICC.

  12. Pre-analytical and analytical aspects affecting clinical reliability of plasma glucose results.

    Science.gov (United States)

    Pasqualetti, Sara; Braga, Federica; Panteghini, Mauro

    2017-07-01

    The measurement of plasma glucose (PG) plays a central role in recognizing disturbances in carbohydrate metabolism, with established decision limits that are globally accepted. This requires that PG results are reliable and unequivocally valid no matter where they are obtained. To control the pre-analytical variability of PG and prevent in vitro glycolysis, the use of citrate as rapidly effective glycolysis inhibitor has been proposed. However, the commercial availability of several tubes with studies showing different performance has created confusion among users. Moreover, and more importantly, studies have shown that tubes promptly inhibiting glycolysis give PG results that are significantly higher than tubes containing sodium fluoride only, used in the majority of studies generating the current PG cut-points, with a different clinical classification of subjects. From the analytical point of view, to be equivalent among different measuring systems, PG results should be traceable to a recognized higher-order reference via the implementation of an unbroken metrological hierarchy. In doing this, it is important that manufacturers of measuring systems consider the uncertainty accumulated through the different steps of the selected traceability chain. In particular, PG results should fulfil analytical performance specifications defined to fit the intended clinical application. Since PG has tight homeostatic control, its biological variability may be used to define these limits. Alternatively, given the central diagnostic role of the analyte, an outcome model showing the impact of analytical performance of test on clinical classifications of subjects can be used. Using these specifications, performance assessment studies employing commutable control materials with values assigned by reference procedure have shown that the quality of PG measurements is often far from desirable and that problems are exacerbated using point-of-care devices. 

  13. Determinants of National Health Insurance enrolment in Ghana across the life course: Are the results consistent between surveys?

    Science.gov (United States)

    van der Wielen, Nele; Falkingham, Jane; Channon, Andrew Amos

    2018-04-23

    Ghana is currently undergoing a profound demographic transition, with large increases in the number of older adults in the population. Older adults require greater levels of healthcare as illness and disability increase with age. Ghana therefore provides an important and timely case study of policy implementation aimed at improving equal access to healthcare in the context of population ageing. This paper examines the determinants of National Health Insurance (NHIS) enrolment in Ghana, using two different surveys and distinguishing between younger and older adults. Two surveys are used in order to investigate consistency in insurance enrolment. The comparison between age groups is aimed at understanding whether determinants differ for older adults. Previous studies have mainly focused on the enrolment of young and middle aged adults; thus by widening the focus to include older adults and taking into account differences in their demographic and socio-economic characteristics this paper provides a unique contribution to the literature. Using data from the 2007-2008 Study on Global Ageing and Adult Health (SAGE) and the 2012-2013 Ghanaian Living Standards Survey (GLSS) the determinants of NHIS enrolment among younger adults (aged 18-49) and older adults (aged 50 and over) are compared. Logistic regression explores the socio-economic and demographic determinants of NHIS enrolment and multinomial logistic regression investigates the correlates of insurance drop out. Similar results for people aged 18-49 and people aged 50 plus were revealed, with older adults having a slightly lower probability of dropping out of insurance coverage compared to younger adults. Both surveys confirm that education and wealth increase the likelihood of NHIS affiliation. Further, residential differences in insurance coverage are found, with greater NHIS coverage in urban areas. 
The findings give assurance that both datasets (SAGE and GLSS) are suitable for research on insurance affiliation.

  14. Automated lung volumetry from routine thoracic CT scans: how reliable is the result?

    Science.gov (United States)

    Haas, Matthias; Hamm, Bernd; Niehues, Stefan M

    2014-05-01

    Today, lung volumes can be easily calculated from chest computed tomography (CT) scans. Modern postprocessing workstations allow automated volume measurement of the data sets acquired. However, there are challenges in the use of lung volume as an indicator of pulmonary disease when it is obtained from routine CT: intra-individual variation and methodologic aspects have to be considered. Our goal was to assess the reliability of volumetric measurements in routine CT lung scans. Forty adult cancer patients whose lungs were unaffected by the disease underwent routine chest CT scans at 3-month intervals, resulting in a total of 302 chest CT scans. Lung volume was calculated by automatic volumetry software. On average, 7.2 CT scans were successfully evaluable per patient (range 2-15). Intra-individual changes were assessed. In the set of patients investigated, lung volume was approximately normally distributed, with a mean of 5283 cm(3) (standard deviation = 947 cm(3), skewness = -0.34, and kurtosis = 0.16). Between different scans in one and the same patient, the median intra-individual standard deviation in lung volume was 853 cm(3) (16% of the mean lung volume). Automatic lung segmentation of routine chest CT scans allows a technically stable estimation of lung volume. However, substantial intra-individual variations have to be considered: a median intra-individual deviation of 16% in lung volume between different routine scans was found. Copyright © 2014 AUR. Published by Elsevier Inc. All rights reserved.

  15. Accreditation and radiation protection - the cost or smaller doses and reliable results

    International Nuclear Information System (INIS)

    Omahen, G.; Zdesar, U.

    2011-01-01

    Laboratories involved in radiation protection, and therefore in the measurement of radioactivity, dose rate and contamination, have always been concerned with the quality of their measurements, particularly those performing measurements for nuclear power plants. In these laboratories, however, it was often considered more important that the staff be professional, engaged in scientific work and able to interpret the results than that measuring instruments and quality records be reviewed. Yet the customer requires measurement results that can be trusted. This is the purpose of the standard SIST EN ISO/IEC 17025, which standardises the requirements for testing and calibration laboratories and has been in force since 1999. In some countries, accreditation of testing laboratories according to SIST EN ISO/IEC 17025 is even required by regulation; this is the case, for example, in the Croatian and Slovenian regulations for laboratories measuring radioactivity, dose rate and contamination, or checking X-ray apparatus. Several laboratories have been accredited for a number of years. From that experience we can conclude that the customer obtains reliable results from accredited laboratories at relatively low cost. On the other side, an accredited laboratory has introduced an ordered way of working: there are rules for equipment, personnel and training, all of which eventually enhance measurement expertise. With accreditation it is also much easier to compensate for the loss of workers to retirement or departure, because at every moment there must be at least two people in the laboratory who know how to work with each method. Accreditation does not by itself improve radiation protection or reduce becquerels in the air, but at least we know how accurate our mSv or Bq are and how small an mSv or Bq can be measured. (author) [sr

  16. A limited assessment of the ASEP human reliability analysis procedure using simulator examination results

    International Nuclear Information System (INIS)

    Gore, B.R.; Dukelow, J.S. Jr.; Mitts, T.M.; Nicholson, W.L.

    1995-10-01

    This report presents a limited assessment of the conservatism of the Accident Sequence Evaluation Program (ASEP) human reliability analysis (HRA) procedure described in NUREG/CR-4772. In particular, the ASEP post-accident, post-diagnosis, nominal HRA procedure is assessed within the context of an individual's performance of critical tasks on the simulator portion of requalification examinations administered to nuclear power plant operators. An assessment of the degree to which operator performance during simulator examinations is an accurate reflection of operator performance during actual accident conditions was outside the scope of work for this project; therefore, no direct inference can be made from this report about such performance. The data for this study are derived from simulator examination reports from the NRC requalification examination cycle. A total of 4071 critical tasks were identified, of which 45 had been failed. The ASEP procedure was used to estimate human error probability (HEP) values for critical tasks, and the HEP results were compared with the failure rates observed in the examinations. The ASEP procedure was applied by PNL operator license examiners who supplemented the limited information in the examination reports with expert judgment based upon their extensive simulator examination experience. ASEP analyses were performed for a sample of 162 critical tasks selected randomly from the 4071, and the results were used to characterize the entire population. ASEP analyses were also performed for all 45 failed critical tasks. Two tests were performed to assess the bias of the ASEP HEPs compared with the data from the requalification examinations. The first compared the average of the ASEP HEP values with the fraction of the population actually failed, and it found a statistically significant factor-of-two bias on average.

  17. [Santa Claus is perceived as reliable and friendly: results of the Danish Christmas 2013 survey].

    Science.gov (United States)

    Amin, Faisal Mohammad; West, Anders Sode; Jørgensen, Carina Sleiborg; Simonsen, Sofie Amalie; Lindberg, Ulrich; Tranum-Jensen, Jørgen; Hougaard, Anders

    2013-12-02

Several studies have indicated that the population in general perceives doctors as reliable. In the present study, perceptions of reliability and kindness attributed to another socially significant archetype, Santa Claus, were examined in comparison with the doctor. In all, 52 randomly chosen participants were shown a film in which a narrator dressed either as Santa Claus or as a doctor tells an identical story. Structured interviews were then used to assess the subjects' perceptions of reliability and kindness in relation to the narrator's appearance. We found a strong trend toward Santa Claus being perceived as friendlier than the doctor (p = 0.053). However, there was no significant difference in the perception of reliability between Santa Claus and the doctor (p = 0.524). The positive associations attributed to Santa Claus probably explain why he is perceived as friendlier than the doctor, who may be associated with more serious and unpleasant memories of illness and suffering. Surprisingly, and despite being an imaginary person, Santa Claus was assessed as being as reliable as the doctor.

  18. Results of the EC research project REQUEST on software quality and reliability

    International Nuclear Information System (INIS)

    Kersken, M.; Saglietti, F.

    1990-01-01

    GRS work in software safety was mainly concerned with the qualitative assessment of software reliability and quality. As a supplement to these activities the work within the REQUEST project emphasized the quantitative determination of the respective parameters. The three-level quality model COQUAMO serves for the computation - and partly for the prediction - of quality factors during the software life cycle. PERFIDE controls the application of software reliability models during the test phase and in early operational life. Specific attention was paid to the assessment of fault-tolerant diverse software systems. (orig.) [de

  19. Automated Energy Distribution and Reliability System: Validation Integration - Results of Future Architecture Implementation

    Energy Technology Data Exchange (ETDEWEB)

    Buche, D. L.

    2008-06-01

    This report describes Northern Indiana Public Service Co. project efforts to develop an automated energy distribution and reliability system. The purpose of this project was to implement a database-driven GIS solution that would manage all of the company's gas, electric, and landbase objects. This report is second in a series of reports detailing this effort.

  20. Estimation of electromagnetic pumps reliability based on the results of their exploitation

    International Nuclear Information System (INIS)

    Vitkovskij, I.V.; Kirillov, I.R.; Chajka, P.Yu.; Kryuchkov, E.A.; Poplavskij, V.M.; Nosov, Yu.V.; Oshkanov, N.N.

    2007-01-01

    Main factors, determining the service life of induction electromagnetic pumps (IEP), are analyzed. It is shown that the IEP serviceability depends mainly on the winding reliability. The main damaging factors, acting on the windings, are noted. The expressions for calculation of the failure intensity for the coil and case insulations are obtained [ru

  1. The Reliability of Results from National Tests, Public Examinations, and Vocational Qualifications in England

    Science.gov (United States)

    He, Qingping; Opposs, Dennis

    2012-01-01

    National tests, public examinations, and vocational qualifications in England are used for a variety of purposes, including the certification of individual learners in different subject areas and the accountability of individual professionals and institutions. However, there has been ongoing debate about the reliability and validity of their…

  2. Construct validity, test-retest reliability and internal consistency of the Thai version of the disabilities of the arm, shoulder and hand questionnaire (DASH-TH) in patients with carpal tunnel syndrome.

    Science.gov (United States)

    Buntragulpoontawee, Montana; Phutrit, Suphatha; Tongprasert, Siam; Wongpakaran, Tinakon; Khunachiva, Jeeranan

    2018-03-27

This study evaluated additional psychometric properties of the Thai version of the disabilities of the arm, shoulder and hand questionnaire (DASH-TH), including test-retest reliability, construct validity, and internal consistency, in patients with carpal tunnel syndrome. To assess construct validity, the Thai EuroQOL questionnaire (EQ-5D-5L) was also administered in order to examine convergent and divergent validity. Fifty patients completed both questionnaires. The DASH-TH showed excellent test-retest reliability (intraclass correlation coefficient = 0.811) and internal consistency (Cronbach's alpha = 0.911). The exploratory factor analysis yielded a six-factor solution, while the confirmatory factor analysis indicated that the hypothesized model adequately fit the data, with a comparative fit index of 0.967 and a Tucker-Lewis index of 0.964. The related subscales between the DASH-TH and the Thai EQ-5D-5L were significantly correlated, indicating the DASH-TH's convergent and discriminant validity. The DASH-TH demonstrated good reliability, internal consistency, construct validity, and multidimensionality in assessing upper extremity function in carpal tunnel syndrome patients.
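Cronbach's alpha, the internal-consistency statistic reported for the DASH-TH (alpha = 0.911), can be computed from an item-score matrix as a minimal sketch. The respondent data below are invented for illustration only.

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha for an (n_respondents, k_items) score matrix."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1).sum()   # sum of per-item variances
    total_var = items.sum(axis=1).var(ddof=1)     # variance of the total score
    return k / (k - 1) * (1 - item_vars / total_var)

# Four hypothetical respondents answering three highly correlated items.
scores = np.array([
    [4, 5, 4],
    [2, 2, 3],
    [5, 5, 5],
    [1, 2, 1],
])
alpha = cronbach_alpha(scores)   # high, since the items move together
```

Values above roughly 0.9, as in the study, are conventionally read as excellent internal consistency.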

  3. Reliability and risk functions for structural components taking into account inspection results

    International Nuclear Information System (INIS)

    Rackwitz, R.; Schall, G.

    1989-01-01

    The method of outcrossings has been shown to be efficient when calculating the failure probability of metallic structural components under ergodic Gaussian loading. Using Paris/Erdogan's crack growth law it is possible to develop a semi-analytical calculation model for both the reliability and the risk function. For numerical studies an approximate method of asymptotic nature is proposed. The same methodology also enables to incorporate inspection observations. (orig.) [de

  4. Consistency and Main Differences Between European Regional Climate Downscaling Intercomparison Results; From PRUDENCE and ENSEMBLES to CORDEX

    Science.gov (United States)

    Christensen, J. H.; Larsen, M. A. D.; Christensen, O. B.; Drews, M.

    2017-12-01

For more than 20 years, coordinated efforts to apply regional climate models to downscale GCM simulations for Europe have been pursued by an ever increasing group of scientists. This endeavor showed its first results during EU framework supported projects such as RACCS and MERCURE. Here, the foundation for today's advanced worldwide CORDEX approach was laid out by a core of six research teams, who conducted some of the first coordinated RCM simulations with the aim to assess regional climate change for Europe. However, it was realized at this stage that model bias in GCMs as well as RCMs made this task very challenging. As an immediate outcome, the idea was conceived to make an even more coordinated effort by constructing a well-defined and structured set of common simulations; this led to the PRUDENCE project (2001-2004). Additional coordinated efforts involving ever increasing numbers of GCMs and RCMs followed in ENSEMBLES (2004-2009) and the ongoing Euro-CORDEX (officially commenced 2011) efforts. Along with the overall coordination, simulations have increased their standard resolution from 50km (PRUDENCE) to about 12km (Euro-CORDEX), moved from time slice simulations (PRUDENCE) to transient experiments (ENSEMBLES and CORDEX), and expanded from one driving model and emission scenario (PRUDENCE) to several (Euro-CORDEX). So far, this wealth of simulations has been used to assess the potential impacts of future climate change in Europe, providing a baseline change as defined by a multi-model mean change with associated uncertainties calculated from model spread in the ensemble. But how has the overall picture of state-of-the-art regional climate change projections changed over this period of almost two decades? Here we compare the results from PRUDENCE, ENSEMBLES, and Euro-CORDEX across scenarios, model resolutions, and model vintages. By appropriate scaling we identify robust findings about the projected future of European climate expressed by temperature and precipitation changes.

  5. No evidence for consistent long-term growth stimulation of 13 tropical tree species: results from tree-ring analysis.

    Science.gov (United States)

    Groenendijk, Peter; van der Sleen, Peter; Vlam, Mart; Bunyavejchewin, Sarayudh; Bongers, Frans; Zuidema, Pieter A

    2015-10-01

The important role of tropical forests in the global carbon cycle makes it imperative to assess changes in their carbon dynamics for accurate projections of future climate-vegetation feedbacks. Forest monitoring studies conducted over the past decades have found evidence for both increasing and decreasing growth rates of tropical forest trees. The limited duration of these studies restricted analyses to decadal scales, and it is still unclear whether growth changes occurred over longer time scales, as would be expected if CO2 fertilization stimulated tree growth. Furthermore, studies have so far dealt with changes in biomass gain at forest-stand level, but insights into species-specific growth changes - that ultimately determine community-level responses - are lacking. Here, we analyse species-specific growth changes on a centennial scale, using growth data from tree-ring analysis for 13 tree species (~1300 trees), from three sites distributed across the tropics. We used an established (regional curve standardization) and a new (size-class isolation) growth-trend detection method and explicitly assessed the influence of biases on the trend detection. In addition, we assessed whether aggregated trends were present within and across study sites. We found evidence for decreasing growth rates over time for 8-10 species, whereas increases were noted for two species and one showed no trend. Additionally, we found evidence for weak aggregated growth decreases at the site in Thailand and when analysing all sites simultaneously. The observed growth reductions suggest deteriorating growth conditions, perhaps due to warming. However, other causes cannot be excluded, such as recovery from large-scale disturbances or changing forest dynamics. Our findings contrast with the growth patterns that would be expected if elevated CO2 were stimulating tree growth. These results suggest that commonly assumed growth increases of tropical forests may not occur, which could lead to erroneous

  6. Does the high–tech industry consistently reduce CO{sub 2} emissions? Results from nonparametric additive regression model

    Energy Technology Data Exchange (ETDEWEB)

    Xu, Bin [School of Statistics, Jiangxi University of Finance and Economics, Nanchang, Jiangxi 330013 (China); Research Center of Applied Statistics, Jiangxi University of Finance and Economics, Nanchang, Jiangxi 330013 (China); Lin, Boqiang, E-mail: bqlin@xmu.edu.cn [Collaborative Innovation Center for Energy Economics and Energy Policy, China Institute for Studies in Energy Policy, Xiamen University, Xiamen, Fujian 361005 (China)

    2017-03-15

China is currently the world's largest carbon dioxide (CO{sub 2}) emitter. Moreover, total energy consumption and CO{sub 2} emissions in China will continue to increase due to the rapid growth of industrialization and urbanization. Therefore, vigorously developing the high-tech industry becomes an inevitable choice to reduce CO{sub 2} emissions now and in the future. However, ignoring the existing nonlinear links between economic variables, most scholars use traditional linear models to explore the impact of the high-tech industry on CO{sub 2} emissions from an aggregate perspective. Few studies have focused on nonlinear relationships and regional differences in China. Based on panel data for 1998-2014, this study uses the nonparametric additive regression model to explore the nonlinear effect of the high-tech industry from a regional perspective. The estimated results show that the residual sums of squares (SSR) of the nonparametric additive regression model in the eastern, central and western regions are 0.693, 0.054 and 0.085 respectively, which are much less than those of the traditional linear regression model (3.158, 4.227 and 7.196). This verifies that the nonparametric additive regression model has a better fitting effect. Specifically, the high-tech industry produces an inverted "U-shaped" nonlinear impact on CO{sub 2} emissions in the eastern region, but a positive "U-shaped" nonlinear effect in the central and western regions. Therefore, the nonlinear impact of the high-tech industry on CO{sub 2} emissions in the three regions should be given adequate attention in developing effective abatement policies. - Highlights: • The nonlinear effect of the high-tech industry on CO{sub 2} emissions was investigated. • The high-tech industry yields an inverted "U-shaped" effect in the eastern region. • The high-tech industry has a positive "U-shaped" nonlinear effect in other regions. • The linear impact
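The SSR comparison above can be illustrated with a toy example: when the true relationship is an inverted U, a model with curvature (here a simple quadratic, standing in for the study's nonparametric additive smoother) achieves a far lower residual sum of squares than a straight-line fit. The data are synthetic.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic inverted-U relationship with noise (illustration only;
# not the study's data or its actual estimator).
x = np.linspace(0.0, 10.0, 200)
y = -(x - 5.0) ** 2 + rng.normal(0.0, 1.0, x.size)

def ssr(degree: int) -> float:
    """Residual sum of squares of a least-squares polynomial fit."""
    coeffs = np.polyfit(x, y, degree)
    resid = y - np.polyval(coeffs, x)
    return float((resid ** 2).sum())

ssr_linear = ssr(1)      # straight line misses the curvature entirely
ssr_quadratic = ssr(2)   # captures the inverted U, leaving only noise
```

The same qualitative gap (nonlinear SSR far below linear SSR) is what the study reports for all three regions.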

  7. Demands placed on waste package performance testing and modeling by some general results on reliability analysis

    International Nuclear Information System (INIS)

    Chesnut, D.A.

    1991-09-01

Waste packages for a US nuclear waste repository are required to provide reasonable assurance of maintaining substantially complete containment of radionuclides for 300 to 1000 years after closure. The waiting time to failure for complex failure processes affecting engineered or manufactured systems is often found to be an exponentially-distributed random variable. Assuming that this simple distribution can be used to describe the behavior of a hypothetical single barrier waste package, calculations presented in this paper show that the mean time to failure (the only parameter needed to completely specify an exponential distribution) would have to be more than 10^7 years in order to provide reasonable assurance of meeting this requirement. With two independent barriers, each would need to have a mean time to failure of only 10^5 years to provide the same reliability. Other examples illustrate how multiple barriers can provide a strategy for not only achieving but demonstrating regulatory compliance
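The arithmetic behind these figures can be checked directly. Assuming exponentially distributed failure times, the probability of failure within time t is 1 - exp(-t/MTTF), and independent barriers fail jointly with the product of their individual probabilities:

```python
import math

T = 1000.0  # required containment period, years

# Single barrier with MTTF = 1e7 years: probability of failing within T.
p_single = 1.0 - math.exp(-T / 1e7)          # ~1e-4

# Two independent barriers, each with MTTF = 1e5 years: containment is
# lost only if BOTH fail within T.
p_each = 1.0 - math.exp(-T / 1e5)            # ~1e-2 per barrier
p_double = p_each ** 2                        # ~1e-4 overall
```

Both configurations give a failure probability near 10^-4 over 1000 years, which is the paper's point: redundancy buys two orders of magnitude in the per-barrier MTTF requirement.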

  8. Construct validity and internal consistency reliability of the Malay version of the 21-item depression anxiety stress scale (Malay-DASS-21) among male outpatient clinic attendees in Johor.

    Science.gov (United States)

    Rusli, B N; Amrina, K; Trived, S; Loh, K P; Shashi, M

    2017-10-01

The 21-item English version of the Depression Anxiety Stress Scale (DASS-21) has been proposed as a method for assessing self-perceived depression, anxiety and stress over the past week in various clinical and nonclinical populations. Several Malay versions of the DASS-21 have been validated in various populations with varying success. One particular Malay version has been validated in various occupational groups (such as nurses and automotive workers) but not among male clinic outpatient attendees in Malaysia. To validate the Malay version of the DASS-21 (Malay-DASS-21) among male outpatient clinic attendees in Johor, a validation study with a random sample of 402 male respondents attending the outpatient clinic of a major public outpatient clinic in Johor Bahru and Segamat was carried out from January to March 2016. Construct validity of the Malay-DASS-21 was examined using exploratory factor analysis (KMO = 0.947; Bartlett's test of sphericity significant), and internal consistency reliability was assessed using Cronbach's alpha. Construct validity of the Malay-DASS-21 based on eigenvalues and factor loadings to confirm the three factor structure (depression, anxiety, and stress) was acceptable. The internal consistency reliability of the factor construct was very impressive, with Cronbach's alpha values in the range of 0.837 to 0.863. The present study showed that the Malay-DASS-21 has acceptable psychometric construct and high internal consistency reliability to measure self-perceived depression, anxiety and stress over the past week in male outpatient clinic attendees in Johor. Further studies are necessary to revalidate the Malay-DASS-21 across different populations and cultures, using confirmatory factor analyses.

  9. Interface Consistency

    DEFF Research Database (Denmark)

    Staunstrup, Jørgen

    1998-01-01

This paper proposes that Interface Consistency is an important issue for the development of modular designs. By providing a precise specification of component interfaces it becomes possible to check that separately developed components use a common interface in a coherent manner, thus avoiding a very significant source of design errors. A wide range of interface specifications are possible; the simplest form is a syntactical check of parameter types. However, today it is possible to do more sophisticated forms involving semantic checks.

  10. Self-consistent GW0 results for the electron gas: Fixed screened potential W0 within the random-phase approximation

    International Nuclear Information System (INIS)

    von Barth, U.; Holm, B.

    1996-01-01

With the aim of properly understanding the basis for and the utility of many-body perturbation theory as applied to extended metallic systems, we have calculated the electronic self-energy of the homogeneous electron gas within the GW approximation. The calculation has been carried out in a self-consistent way; i.e., the one-electron Green function obtained from Dyson's equation is the same as that used to calculate the self-energy. The self-consistency is restricted in the sense that the screened interaction W is kept fixed and equal to that of the random-phase approximation for the gas. We have found that the final results are marginally affected by the broadening of the quasiparticles, and that their self-consistent energies are still close to their free-electron counterparts as they are in non-self-consistent calculations. The reduction in strength of the quasiparticles and the development of satellite structure (plasmons) gives, however, a markedly smaller dynamical self-energy leading to, e.g., a smaller reduction in the quasiparticle strength as compared to non-self-consistent results. The relatively bad description of plasmon structure within the non-self-consistent GW approximation is marginally improved. A first attempt at including W in the self-consistency cycle leads to an even broader and structureless satellite spectrum in disagreement with experiment. copyright 1996 The American Physical Society

  11. Korean round-robin result for new international program to assess the reliability of emerging nondestructive techniques

    Energy Technology Data Exchange (ETDEWEB)

    Kim, Kyung Cho; Kim, Jin Gyum; Kang, Sung Sik; Jhung, Myung Jo [Korea Institute of Nuclear Safety, Daejeon (Korea, Republic of)

    2017-04-15

    The Korea Institute of Nuclear Safety, as a representative organization of Korea, in February 2012 participated in an international Program to Assess the Reliability of Emerging Nondestructive Techniques initiated by the U.S. Nuclear Regulatory Commission. The goal of the Program to Assess the Reliability of Emerging Nondestructive Techniques is to investigate the performance of emerging and prospective novel nondestructive techniques to find flaws in nickel-alloy welds and base materials. In this article, Korean round-robin test results were evaluated with respect to the test blocks and various nondestructive examination techniques. The test blocks were prepared to simulate large-bore dissimilar metal welds, small-bore dissimilar metal welds, and bottom-mounted instrumentation penetration welds in nuclear power plants. Also, lessons learned from the Korean round-robin test were summarized and discussed.

  12. Reliable laboratory urinalysis results using a new standardised urine collection device

    NARCIS (Netherlands)

    Roelofs-Thijssen, M.A.; Schreuder, M.F.; Hogeveen, M.; Herwaarden, A.E. van

    2013-01-01

OBJECTIVES: While urine sampling is necessary in the diagnosis of urinary tract infection and electrolyte disturbances, the collection of urine in neonates and non-toilet-trained children is often difficult. A universal urine collection method providing representative urinalysis results is needed.

  13. System reliability worth assessment at a midwest utility-survey results for residential customers

    Energy Technology Data Exchange (ETDEWEB)

    Chowdhury, A.A.; Mielnik, T.C. [Electric System Planning, MidAmerican Energy Company, Davenport, Iowa (United States); Lawton, L.E.; Sullivan, M.J.; Katz, A. [Population Research Systems, San Francisco, CA (United States)

    2005-12-01

    This paper presents the overall results of a residential customer survey conducted in service areas of MidAmerican Energy Company, a Midwest utility. A similar survey was conducted concurrently in the industrial, commercial and institutional sectors and the survey results are presented in a companion paper. The results of this study are compared with the results of other studies performed in the high cost areas of the US east and west coasts. This is the first ever study of this nature performed for the residential customers in the US Midwest region. Methodological differences in the study design compared to coastal surveys are discussed. Customer survey costing techniques can be categorized into three main groups: contingent valuation techniques, direct costing techniques and indirect costing techniques. Most customer surveys conducted by different organizations in the last two decades used a combination of all three techniques. The selection of a technique is mainly dependent on the type of customer being surveyed. In this MidAmerican study, contingent valuation techniques and an indirect costing technique have been used, as most consequences of power outages to residential users are related to inconvenience or disruption of housekeeping and leisure activities that are intangible in nature. The major contribution of this paper is that particulars of Midwest residential customers compared to residential customers of coastal utilities are noted and customer responses on power quality issues that are important to customers are summarized. (author)

  14. Interobserver reliability in musculoskeletal ultrasonography: results from a "Teach the Teachers" rheumatologist course

    DEFF Research Database (Denmark)

    Naredo, ee.; Møller, I.; Moragues, C.

    2006-01-01

The shoulder, wrist/hand, ankle/foot, or knee of 24 patients with rheumatic diseases were evaluated by 23 musculoskeletal ultrasound experts from different European countries randomly assigned to six groups. The participants did not reach consensus on scanning method or diagnostic criteria before the investigation. They were unaware of the patients' clinical and imaging data. The experts from each group undertook a blinded ultrasound examination of the four anatomical regions. The ultrasound investigation included the presence/absence of joint effusion/synovitis, bony cortex abnormalities, tenosynovitis, tendon lesions, bursitis, and power Doppler signal. Afterwards they compared the ultrasound findings and re-examined the patients together while discussing their results. RESULTS: Overall agreements were 91% for joint effusion/synovitis and tendon lesions, 87% for cortical abnormalities, 84…

  15. Chemical composition analysis and product consistency tests to support enhanced Hanford waste glass models: Results for the January, March, and April 2015 LAW glasses

    Energy Technology Data Exchange (ETDEWEB)

    Fox, K. M. [Savannah River Site (SRS), Aiken, SC (United States). Savannah River National Lab. (SRNL); Edwards, T. B. [Savannah River Site (SRS), Aiken, SC (United States). Savannah River National Lab. (SRNL); Riley, W. T. [Savannah River Site (SRS), Aiken, SC (United States). Savannah River National Lab. (SRNL); Best, D. R. [Savannah River Site (SRS), Aiken, SC (United States). Savannah River National Lab. (SRNL)

    2015-09-03

    In this report, the Savannah River National Laboratory provides chemical analyses and Product Consistency Test (PCT) results for several simulated low activity waste (LAW) glasses (designated as the January, March, and April 2015 LAW glasses) fabricated by the Pacific Northwest National Laboratory. The results of these analyses will be used as part of efforts to revise or extend the validation regions of the current Hanford Waste Treatment and Immobilization Plant glass property models to cover a broader span of waste compositions.

16. Chemical composition analysis and product consistency tests to support Enhanced Hanford Waste Glass Models. Results for the August and October 2014 LAW Glasses

    Energy Technology Data Exchange (ETDEWEB)

    Fox, K. M. [Savannah River Site (SRS), Aiken, SC (United States). Savannah River National Lab. (SRNL); Edwards, T. B. [Savannah River Site (SRS), Aiken, SC (United States). Savannah River National Lab. (SRNL); Best, D. R. [Savannah River Site (SRS), Aiken, SC (United States). Savannah River National Lab. (SRNL)

    2015-07-07

    In this report, the Savannah River National Laboratory provides chemical analyses and Product Consistency Test (PCT) results for several simulated low activity waste (LAW) glasses (designated as the August and October 2014 LAW glasses) fabricated by the Pacific Northwest National Laboratory. The results of these analyses will be used as part of efforts to revise or extend the validation regions of the current Hanford Waste Treatment and Immobilization Plant glass property models to cover a broader span of waste compositions.

  17. The extent of food waste generation across EU-27: different calculation methods and the reliability of their results.

    Science.gov (United States)

    Bräutigam, Klaus-Rainer; Jörissen, Juliane; Priefer, Carmen

    2014-08-01

    The reduction of food waste is seen as an important societal issue with considerable ethical, ecological and economic implications. The European Commission aims at cutting down food waste to one-half by 2020. However, implementing effective prevention measures requires knowledge of the reasons and the scale of food waste generation along the food supply chain. The available data basis for Europe is very heterogeneous and doubts about its reliability are legitimate. This mini-review gives an overview of available data on food waste generation in EU-27 and discusses their reliability against the results of own model calculations. These calculations are based on a methodology developed on behalf of the Food and Agriculture Organization of the United Nations and provide data on food waste generation for each of the EU-27 member states, broken down to the individual stages of the food chain and differentiated by product groups. The analysis shows that the results differ significantly, depending on the data sources chosen and the assumptions made. Further research is much needed in order to improve the data stock, which builds the basis for the monitoring and management of food waste. © The Author(s) 2014.

  18. Full-data Results of Hubble Frontier Fields: UV Luminosity Functions at z ∼ 6–10 and a Consistent Picture of Cosmic Reionization

    Science.gov (United States)

    Ishigaki, Masafumi; Kawamata, Ryota; Ouchi, Masami; Oguri, Masamune; Shimasaku, Kazuhiro; Ono, Yoshiaki

    2018-02-01

We present UV luminosity functions of dropout galaxies at z ∼ 6-10 with the complete Hubble Frontier Fields data. We obtain a catalog of ∼450 dropout-galaxy candidates (350, 66, and 40 at z ∼ 6-7, 8, and 9, respectively), with UV absolute magnitudes that reach ∼ -14 mag, ∼2 mag deeper than the Hubble Ultra Deep Field detection limits. We carefully evaluate number densities of the dropout galaxies by Monte Carlo simulations, including all lensing effects such as magnification, distortion, and multiplication of images as well as detection completeness and contamination effects in a self-consistent manner. We find that UV luminosity functions at z ∼ 6-8 have steep faint-end slopes, α ∼ -2, and likely steeper slopes, α ≲ -2, at z ∼ 9-10. We also find that the evolution of UV luminosity densities shows a non-accelerated decline beyond z ∼ 8 in the case of M_trunc = -15, but an accelerated one in the case of M_trunc = -17. We examine whether our results are consistent with the Thomson scattering optical depth from the Planck satellite and the ionized hydrogen fraction Q_HII at z ≲ 7 based on the standard analytic reionization model. We find that reionization scenarios exist that consistently explain all of the observational measurements with the allowed parameters of f_esc = 0.17 (+0.07/-0.03) and M_trunc > -14.0 for log(ξ_ion / [erg^-1 Hz]) = 25.34, where f_esc is the escape fraction, M_trunc is the faint limit of the UV luminosity function, and ξ_ion is the conversion factor of the UV luminosity to the ionizing photon emission rate. The length of the reionization period is estimated to be Δz = 3.9 (+2.0/-1.6) (for 0.1 < Q_HII < 0.99), consistent with the recent estimate from Planck.

  19. [Reliability for detection of developmental problems using the semaphore from the Child Development Evaluation test: Is a yellow result different from a red result?

    Science.gov (United States)

    Rizzoli-Córdoba, Antonio; Ortega-Ríosvelasco, Fernando; Villasís-Keever, Miguel Ángel; Pizarro-Castellanos, Mariel; Buenrostro-Márquez, Guillermo; Aceves-Villagrán, Daniel; O'Shea-Cuevas, Gabriel; Muñoz-Hernández, Onofre

The Child Development Evaluation (CDE) is a screening tool designed and validated in Mexico for detecting developmental problems. The result is expressed through a semaphore. In the CDE test, both yellow and red results are considered positive, although a different intervention is proposed for each. The aim of this work was to evaluate the reliability of the CDE test to discriminate between children with a yellow versus a red result based on the developmental domain quotient (DDQ) obtained through the Battelle Development Inventory, 2nd edition (in Spanish) (BDI-2). The data for this study were obtained from the validation study. Children with a normal (green) result in the CDE were excluded. Two different cut-off points of the DDQ were used (BDI-2): social: 20.1% vs. 28.9%; and adaptive: 6.9% vs. 20.4%. The yellow/red semaphore result allows identification of different magnitudes of delay in developmental domains or subdomains, supporting the recommendation of different interventions for each one. Copyright © 2014 Hospital Infantil de México Federico Gómez. Publicado por Masson Doyma México S.A. All rights reserved.

  20. Methods of Estimation the Reliability and Increasing the Informativeness of the Laboratory Results (Analysis of the Laboratory Case of Measurement the Indicators of Thyroid Function)

    OpenAIRE

    N A Kovyazina; N A Alhutova; N N Zybina; N M Kalinina

    2014-01-01

The goal of the study was to demonstrate the multilevel laboratory quality management system and to present methods for estimating the reliability and increasing the information content of laboratory results (using a laboratory case as an example). Results. The article examines the stages of laboratory quality management, which helped to estimate the reliability of the results of determining Free T3, Free T4 and TSH. The measurement results are presented with the expanded unce...

  1. Establishing Reliable Cognitive Change in Children with Epilepsy: The Procedures and Results for a Sample with Epilepsy

    Science.gov (United States)

    van Iterson, Loretta; Augustijn, Paul B.; de Jong, Peter F.; van der Leij, Aryan

    2013-01-01

    The goal of this study was to investigate reliable cognitive change in epilepsy by developing computational procedures to determine reliable change index scores (RCIs) for the Dutch Wechsler Intelligence Scales for Children. First, RCIs were calculated based on stability coefficients from a reference sample. Then, these RCIs were applied to a…
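
The reliable change index computation the abstract refers to can be sketched with the standard Jacobson-Truax formulation; this is a generic illustration (the scores, SD, and stability coefficient below are made-up), not the paper's exact procedure:

```python
import math

def rci(score_1, score_2, sd_reference, stability_r):
    """Reliable Change Index (Jacobson-Truax style sketch).

    sd_reference : SD of the measure in a reference sample
    stability_r  : test-retest (stability) coefficient from that sample
    """
    sem = sd_reference * math.sqrt(1.0 - stability_r)  # standard error of measurement
    se_diff = math.sqrt(2.0) * sem                     # SE of a difference score
    return (score_2 - score_1) / se_diff

# A retest change is usually called "reliable" when |RCI| > 1.96 (95% confidence).
change = rci(score_1=95, score_2=105, sd_reference=15, stability_r=0.90)
print(round(change, 2), abs(change) > 1.96)  # → 1.49 False
```

With a stability coefficient of 0.90 and SD 15, a 10-point gain is not yet a statistically reliable change; raising the stability coefficient shrinks the standard error of the difference and makes smaller changes detectable.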

  2. Establishing reliable cognitive change in children with epilepsy: The procedures and results for a sample with epilepsy

    NARCIS (Netherlands)

    van Iterson, L.; Augustijn, P.B.; de Jong, P.F.; van der Leij, A.

    2013-01-01

    The goal of this study was to investigate reliable cognitive change in epilepsy by developing computational procedures to determine reliable change index scores (RCIs) for the Dutch Wechsler Intelligence Scales for Children. First, RCIs were calculated based on stability coefficients from a

  3. Reliability engineering

    International Nuclear Information System (INIS)

    Lee, Chi Woo; Kim, Sun Jin; Lee, Seung Woo; Jeong, Sang Yeong

    1993-08-01

    This book begins by asking what reliability is: the origin of reliability problems, the definition of reliability, and its uses. It then covers probability and the calculation of reliability, the reliability function and failure rate, probability distributions of reliability, estimation of MTBF, down time, maintainability and availability, breakdown versus preventive maintenance, design for reliability, reliability prediction and statistics, reliability testing, reliability data, and the design and management of reliability.
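
The quantities this outline lists (reliability function, failure rate, MTBF, availability) are tied together by a few standard formulas; a minimal sketch under the constant-failure-rate (exponential) assumption, with illustrative numbers:

```python
import math

failure_rate = 2e-4    # lambda, failures per hour (illustrative value)
mission_time = 1000.0  # hours

# Exponential model (the constant-failure-rate region of the bathtub curve):
reliability = math.exp(-failure_rate * mission_time)  # R(t) = exp(-lambda * t)
mtbf = 1.0 / failure_rate                             # MTBF = 1 / lambda

# Steady-state availability from MTBF and mean time to repair (MTTR):
mttr = 8.0  # hours, illustrative
availability = mtbf / (mtbf + mttr)

print(round(reliability, 4), mtbf, round(availability, 4))  # → 0.8187 5000.0 0.9984
```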

  4. Product consistency test and toxicity characteristic leaching procedure results of the ceramic waste form from the electrometallurgical treatment process for spent fuel

    International Nuclear Information System (INIS)

    Johnson, S. G.; Adamic, M. L.: DiSanto, T.; Warren, A. R.; Cummings, D. G.; Foulkrod, L.; Goff, K. M.

    1999-01-01

    The ceramic waste form produced from the electrometallurgical treatment of sodium bonded spent fuel from the Experimental Breeder Reactor-II was tested using two immersion tests with separate and distinct purposes. The product consistency test is used to assess the consistency of the waste forms produced and thus is an indicator of a well-controlled process. The toxicity characteristic leaching procedure is used to determine whether a substance is to be considered hazardous by the Environmental Protection Agency. The proposed high level waste repository will not be licensed to receive hazardous waste, thus any waste forms destined to be placed there cannot be of a hazardous nature as defined by the Resource Conservation and Recovery Act. Results are presented from the first four fully radioactive ceramic waste forms produced and from seven ceramic waste forms produced from cold surrogate materials. The fully radioactive waste forms are approximately 2 kg in weight and were produced with salt used to treat 100 driver subassemblies of spent fuel

  5. Chemical composition analysis and product consistency tests to support enhanced Hanford waste glass models. Results for the third set of high alumina outer layer matrix glasses

    Energy Technology Data Exchange (ETDEWEB)

    Fox, K. M. [Savannah River Site (SRS), Aiken, SC (United States); Edwards, T. B. [Savannah River Site (SRS), Aiken, SC (United States)

    2015-12-01

    In this report, the Savannah River National Laboratory provides chemical analyses and Product Consistency Test (PCT) results for 14 simulated high level waste glasses fabricated by the Pacific Northwest National Laboratory. The results of these analyses will be used as part of efforts to revise or extend the validation regions of the current Hanford Waste Treatment and Immobilization Plant glass property models to cover a broader span of waste compositions. The measured chemical composition data are reported and compared with the targeted values for each component for each glass. All of the measured sums of oxides for the study glasses fell within the interval of 96.9 to 100.8 wt %, indicating recovery of all components. Comparisons of the targeted and measured chemical compositions showed that the measured values for the glasses met the targeted concentrations within 10% for those components present at more than 5 wt %. The PCT results were normalized to both the targeted and measured compositions of the study glasses. Several of the glasses exhibited increases in normalized concentrations (NCi) after the canister centerline cooled (CCC) heat treatment. Five of the glasses, after the CCC heat treatment, had NCB values that exceeded that of the Environmental Assessment (EA) benchmark glass. These results can be combined with additional characterization, including X-ray diffraction, to determine the cause of the higher release rates.
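
The normalized concentrations (NCi) referred to above follow the usual PCT normalization: the element's leachate concentration divided by its mass fraction in the glass. A sketch with hypothetical numbers (not data from this report):

```python
def normalized_concentration(c_i_g_per_l, f_i_mass_fraction):
    """NC_i = c_i / f_i: grams of glass dissolved per litre, per element.

    c_i_g_per_l       : concentration of element i in the leachate (g/L)
    f_i_mass_fraction : mass fraction of element i in the glass (dimensionless)
    """
    return c_i_g_per_l / f_i_mass_fraction

# Hypothetical boron leachate concentration and boron mass fraction in the glass:
nc_b = normalized_concentration(0.010, 0.025)
print(round(nc_b, 3))  # → 0.4 (g of glass per litre, boron-normalized)
```

Normalizing by composition is what makes releases comparable across glasses of different boron (or sodium, lithium) content, e.g. against the EA benchmark glass.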

  6. Work limitations among working persons with rheumatoid arthritis: results, reliability, and validity of the work limitations questionnaire in 836 patients.

    Science.gov (United States)

    Walker, Nancy; Michaud, Kaleb; Wolfe, Frederick

    2005-06-01

    To describe workplace limitations and the validity and reliability of the Work Limitations Questionnaire (WLQ) in persons with rheumatoid arthritis (RA). A total of 836 employed persons with RA reported clinical and work-related measures and completed the WLQ, a 25-item questionnaire that assesses the impact of chronic health conditions on job performance and productivity. Limitations are categorized into 4 domains: physical demands (PDS), mental demands (MDS), time management demands (TMS), and output demands (ODS), which are then used to calculate the WLQ index. Of the 836 completed WLQs, about 10% (85) could not be scored, as more than half the items in each domain were not applicable to the patient's job. Demographic and clinical variables were associated with missing WLQ scores, including older age (OR 1.7, 95% CI 1.3-2.1), male sex (OR 1.9, 95% CI 1.2-3.0), and Health Assessment Questionnaire (HAQ) scores (OR 1.4, 95% CI 1.0-2.0). Work limitations were present in all work domains: PDS (27.5%), MDS (15.7%), ODS (19.4%), and TMS (28.6%), resulting in a mean WLQ index of 5.9 (SD 5.6), which corresponds to a 4.9% decrease in productivity and a 5.1% increase in work hours to compensate for productivity loss. The WLQ index was inversely associated with the Medical Outcomes Study Short Form 36 (SF-36) Mental Component Score (MCS; r = -0.60) and Physical Component Score (PCS; r = -0.49). Fatigue (0.5), pain (0.46), and HAQ (0.56) were also significantly associated with the WLQ index. Weaker associations were seen with days unable to perform (0.29), days activities cut down (0.38), and annual income (-0.10). The WLQ is a reliable tool for assessing work productivity. However, persons with RA tend to select jobs that they can do with their RA limitations, with the result that the WLQ does not detect functional limitations as well as the HAQ and SF-36. The WLQ provides special information that is not available using conventional measures of assessment, and can provide helpful…

  7. Results of the reliability investigations for the design basis accident 'Rupture of a cold primary coolant system'

    International Nuclear Information System (INIS)

    Hoertner, H.; Nieckau, E.; Spindler, H.

    1976-12-01

    This report gives a comprehensive presentation of the detailed reliability investigation carried out for the engineered safety features installed to cope with the design basis accident 'Large LOCA' of a German nuclear power plant with pressurized water reactor. The investigation is based on the engineered safety features of the Biblis Nuclear Power Plant, Unit A. The reliability investigation is carried out by means of a fault tree analysis. The influence of common-mode failures is assessed. (orig.) [de
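
The fault tree analysis mentioned above combines basic-event probabilities through AND and OR gates. A minimal sketch for statistically independent events, with made-up probabilities (not values from the Biblis study):

```python
from functools import reduce

def and_gate(probs):
    """AND gate, independent basic events: P = product of p_i."""
    return reduce(lambda acc, p: acc * p, probs, 1.0)

def or_gate(probs):
    """OR gate, independent basic events: P = 1 - product of (1 - p_i)."""
    return 1.0 - reduce(lambda acc, p: acc * (1.0 - p), probs, 1.0)

# Illustrative top event: simultaneous failure of two redundant pump trains (AND),
# OR an independent valve failure.
p_top = or_gate([and_gate([1e-3, 1e-3]), 5e-5])
print(p_top)
```

Common-mode failures, which the report assesses separately, break the independence assumption used here and typically dominate the redundant-train (AND) term.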

  8. Quantitative CT assessment in chronic obstructive pulmonary disease patients: Comparison of the patients with and without consistent clinical symptoms and pulmonary function results

    International Nuclear Information System (INIS)

    Nam, Boda; Hwang, Jung Hwa; Lee, Young Mok; Park, Jai Soung; Jou, Sung Shick; Kim, Young Bae

    2015-01-01

    We compared the clinical and quantitative CT measurement parameters between chronic obstructive pulmonary disease (COPD) patients with and without consistent clinical symptoms and pulmonary function results. This study included 60 patients having a clinical diagnosis of COPD, who underwent chest CT scan and pulmonary function tests. These 60 patients were classified into typical and atypical groups, which were further sub-classified into 4 groups, based on their dyspnea score and the result of pulmonary function tests [typical 1: mild dyspnea and pulmonary function impairment (PFI); typical 2: severe dyspnea and PFI; atypical 1: mild dyspnea and severe PFI; atypical 2: severe dyspnea and mild PFI]. Quantitative measurements of the CT data for emphysema, bronchial wall thickness and air-trapping were performed using software analysis. Comparative statistical analysis was performed between the groups. The CT emphysema index correlated well with the results of the pulmonary functional test (typical 1 vs. atypical 1, p = 0.032), and the bronchial wall area ratio correlated with the dyspnea score (typical 1 vs. atypical 2, p = 0.033). CT air-trapping index also correlated with the results of the pulmonary function test (typical 1 vs. atypical 1, p = 0.012) and dyspnea score (typical 1 vs. atypical 2, p = 0.000), and was found to be the most significant parameter between the typical and atypical groups. Quantitative CT measurements for emphysema and airways correlated well with the dyspnea score and pulmonary function results in patients with COPD. Air-trapping was the most significant parameter between the typical vs. atypical group of COPD patients
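
The emphysema and air-trapping indices used in quantitative CT are typically the fraction of lung voxels below a Hounsfield-unit threshold. A toy sketch using the commonly cited literature thresholds (the abstract does not state this study's exact software or cut-offs):

```python
import numpy as np

# Toy lung voxel values in Hounsfield units; a real pipeline would first
# segment the lungs from the CT volume.
hu = np.array([-980, -960, -940, -870, -700, -955, -990, -820])

emphysema_index = np.mean(hu < -950) * 100  # % voxels < -950 HU (inspiratory scan)
air_trapping = np.mean(hu < -856) * 100     # % voxels < -856 HU (expiratory scan)

print(emphysema_index, air_trapping)  # → 50.0 75.0
```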

  9. Quantitative CT assessment in chronic obstructive pulmonary disease patients: Comparison of the patients with and without consistent clinical symptoms and pulmonary function results

    Energy Technology Data Exchange (ETDEWEB)

    Nam, Boda; Hwang, Jung Hwa [Dept. of Radiology, Soonchunhyang University Hospital, Seoul (Korea, Republic of); Lee, Young Mok [Bangbae GF Allergy Clinic, Seoul (Korea, Republic of); Park, Jai Soung [Dept. of Radiology, Soonchunhyang University Bucheon Hospital, Bucheon (Korea, Republic of); Jou, Sung Shick [Dept. of Radiology, Soonchunhyang University Cheonan Hospital, Cheonan (Korea, Republic of); Kim, Young Bae [Dept. of Preventive Medicine, Soonchunhyang University College of Medicine, Cheonan (Korea, Republic of)

    2015-09-15

    We compared the clinical and quantitative CT measurement parameters between chronic obstructive pulmonary disease (COPD) patients with and without consistent clinical symptoms and pulmonary function results. This study included 60 patients having a clinical diagnosis of COPD, who underwent chest CT scan and pulmonary function tests. These 60 patients were classified into typical and atypical groups, which were further sub-classified into 4 groups, based on their dyspnea score and the result of pulmonary function tests [typical 1: mild dyspnea and pulmonary function impairment (PFI); typical 2: severe dyspnea and PFI; atypical 1: mild dyspnea and severe PFI; atypical 2: severe dyspnea and mild PFI]. Quantitative measurements of the CT data for emphysema, bronchial wall thickness and air-trapping were performed using software analysis. Comparative statistical analysis was performed between the groups. The CT emphysema index correlated well with the results of the pulmonary functional test (typical 1 vs. atypical 1, p = 0.032), and the bronchial wall area ratio correlated with the dyspnea score (typical 1 vs. atypical 2, p = 0.033). CT air-trapping index also correlated with the results of the pulmonary function test (typical 1 vs. atypical 1, p = 0.012) and dyspnea score (typical 1 vs. atypical 2, p = 0.000), and was found to be the most significant parameter between the typical and atypical groups. Quantitative CT measurements for emphysema and airways correlated well with the dyspnea score and pulmonary function results in patients with COPD. Air-trapping was the most significant parameter between the typical vs. atypical group of COPD patients.

  10. Reliability of D-Dimer test results in deciding the necessity of performing CTA in high risk population to establish the diagnosis of PE

    Directory of Open Access Journals (Sweden)

    Saher Ebrahim Taman

    2016-06-01

    Conclusion: A negative D-dimer test is a reliable means of ruling out the need for CT angiography in a high-risk population for PE. However, a positive test result cannot confirm the diagnosis, and further testing is warranted.
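
The rule-out logic in this conclusion rests on the negative predictive value (and sensitivity) of the D-dimer test. A minimal sketch with hypothetical 2x2 counts (not the study's data):

```python
def sensitivity(tp, fn):
    """TP / (TP + FN): fraction of true PE cases the test flags."""
    return tp / (tp + fn)

def negative_predictive_value(tn, fn):
    """TN / (TN + FN): probability that a negative D-dimer truly excludes PE."""
    return tn / (tn + fn)

# Hypothetical counts from a high-risk cohort:
print(round(sensitivity(tp=40, fn=2), 3),
      round(negative_predictive_value(tn=95, fn=2), 3))  # → 0.952 0.979
```

A high NPV is what justifies withholding CTA after a negative result; the test's lower specificity is why a positive result still requires imaging.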

  11. The reliability of test results from simple test samples in predicting the fatigue performance of automotive components

    International Nuclear Information System (INIS)

    Fourlaris, G.; Ellwood, R.; Jones, T.B.

    2007-01-01

    The use of high strength steels (HSS) in automotive components is steadily increasing as automotive designers use modern steel grades to improve structural performance, reduce vehicle weight and enhance crash performance. Weight reduction can be achieved by substituting mild steel with a thinner gauge HSS; however, it must be ensured that no deterioration in performance, including fatigue capability, occurs. In this study, tests have been carried out to determine the effects that gauge and material strength have on the fatigue performance of a fusion-welded automotive suspension arm. Current finite element (FE) modelling and fatigue prediction techniques have been evaluated to determine their reliability when used for thin strip steels. Results have shown the fatigue performance of welded components to be independent of the strength of the parent material for the steel grades studied, with material thickness and joining process the key features determining the fatigue performance. The correlation between the fatigue performance of simple welded samples under uniaxial, constant amplitude loading and complex components under biaxial in-service road load data has been shown to be unreliable. This study also indicates that with the application of modern technologies, such as tailor-welded blanks (TWB), significant weight savings can be achieved. This is demonstrated by a 19% weight reduction with no detrimental effect on the fatigue performance.

  12. Factors that influence standard automated perimetry test results in glaucoma: test reliability, technician experience, time of day, and season.

    Science.gov (United States)

    Junoy Montolio, Francisco G; Wesselink, Christiaan; Gordijn, Marijke; Jansonius, Nomdo M

    2012-10-09

    To determine the influence of several factors on standard automated perimetry test results in glaucoma. Longitudinal Humphrey field analyzer 30-2 Swedish interactive threshold algorithm data from 160 eyes of 160 glaucoma patients were used. The influence of technician experience, time of day, and season on the mean deviation (MD) was determined by performing linear regression analysis of MD against time on a series of visual fields and subsequently performing a multiple linear regression analysis with the MD residuals as the dependent variable and the factors mentioned above as independent variables. Analyses were performed with and without adjustment for test reliability (fixation losses and false-positive and false-negative answers) and with and without stratification according to disease stage (baseline MD). Mean follow-up was 9.4 years, with on average 10.8 tests per patient. Technician experience, time of day, and season were associated with the MD; approximately 0.2 dB lower MD values were found for inexperienced technicians. Technician experience, time of day, season, and the percentage of false-positive answers have a significant influence on the MD of standard automated perimetry.
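
The two-stage analysis described above (a per-eye linear trend in MD, then multiple regression of the MD residuals on test-condition factors) can be sketched on synthetic data; the dummy factors and effect sizes below are illustrative, not the study's estimates:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200

# Synthetic test-condition factors (0/1 dummies) and MD residuals with a
# built-in -0.2 dB technician effect, mimicking the kind of effect reported.
inexperienced = rng.integers(0, 2, n)
afternoon = rng.integers(0, 2, n)
md_residual = -0.2 * inexperienced + 0.05 * afternoon + rng.normal(0.0, 0.5, n)

# Multiple linear regression via ordinary least squares:
X = np.column_stack([np.ones(n), inexperienced, afternoon])
beta, *_ = np.linalg.lstsq(X, md_residual, rcond=None)
print(np.round(beta, 2))  # intercept, technician effect, time-of-day effect
```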

  13. Escala de bem-estar afetivo no trabalho (Jaws: evidências de validade fatorial e consistência interna Job-related affective well-being scale (Jaws: evidences of factor validity and reliability

    Directory of Open Access Journals (Sweden)

    Valdiney Veloso Gouveia

    2008-01-01

    This study aimed to adapt a measure of job-related affective well-being for the Brazilian context. Specifically, it sought evidence of the factor validity and internal consistency (reliability) of the Job-Related Affective Well-Being Scale (JAWS), assessing whether its factor scores differ by participants' gender and age. Participants were 298 workers from small and medium-sized shopping centers in the city of João Pessoa (PB), Brazil; most were female (76.8%), with a mean age of 26 years (SD = 6.87). A principal component analysis (promax rotation) identified two factors that jointly explained 48.1% of the total variance: positive affect (α = .94; 14 items) and negative affect (α = .87; 13 items); a general factor of job-related affective well-being was also computed (α = .95; 27 items). Participants' scores on these factors were not influenced by gender or age. These results are discussed in light of what has been written about the psychometric properties of this scale and the relation of affect to these demographic variables.
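
The internal-consistency values reported above (α) are Cronbach's alpha, computable directly from an item-score matrix; the small score matrix below is made-up:

```python
import numpy as np

def cronbach_alpha(items):
    """Cronbach's alpha for an (n_respondents, k_items) score matrix."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_variances = items.var(axis=0, ddof=1).sum()
    total_variance = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1.0 - item_variances / total_variance)

# Five respondents answering three Likert-type items (hypothetical data):
scores = [[3, 4, 3], [5, 5, 4], [2, 2, 3], [4, 5, 5], [1, 2, 2]]
print(round(cronbach_alpha(scores), 2))  # → 0.95
```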

  14. Self-consistent kinetic simulations of lower hybrid drift instability resulting in electron current driven by fusion products in tokamak plasmas

    International Nuclear Information System (INIS)

    Cook, J W S; Chapman, S C; Dendy, R O; Brady, C S

    2011-01-01

    We present particle-in-cell (PIC) simulations of minority energetic protons in deuterium plasmas, which demonstrate a collective instability responsible for emission near the lower hybrid frequency and its harmonics. The simulations capture the lower hybrid drift instability in a parameter regime motivated by tokamak fusion plasma conditions, and show further that the excited electromagnetic fields collectively and collisionlessly couple free energy from the protons to directed electron motion. This results in an asymmetric tail antiparallel to the magnetic field. We focus on obliquely propagating modes excited by energetic ions, whose ring-beam distribution is motivated by population inversions related to ion cyclotron emission, in a background plasma with a temperature similar to that of the core of a large tokamak plasma. A fully self-consistent electromagnetic relativistic PIC code representing all vector field quantities and particle velocities in three dimensions as functions of a single spatial dimension is used to model this situation, by evolving the initial antiparallel travelling ring-beam distribution of 3 MeV protons in a background 10 keV Maxwellian deuterium plasma with realistic ion-electron mass ratio. These simulations provide a proof-of-principle for a key plasma physics process that may be exploited in future alpha channelling scenarios for magnetically confined burning plasmas.

  15. A Comparison of Result Reliability for Investigation of Milk Composition by Alternative Analytical Methods in Czech Republic

    Directory of Open Access Journals (Sweden)

    Oto Hanuš

    2014-01-01

    The reliability of milk analysis results is important for quality assurance along the foodstuff chain. There are direct and indirect methods for measuring milk composition: fat (F), protein (P), lactose (L), and solids non-fat (SNF) content. The goal was to evaluate selected reference and routine milk analytical procedures on the basis of their results. The direct reference analyses were: F, fat content (Röse–Gottlieb method); P, crude protein content (Kjeldahl method); L, lactose (monohydrate, polarimetric method); and SNF, solids non-fat (gravimetric method). F, P, L, and SNF were also determined by various indirect methods: MIR (infrared (IR) technology with optical filters; 7 instruments in 4 labs); MIR–FT (IR spectroscopy with Fourier transformation; 10 in 6); an ultrasonic method (UM; 3 in 1); and analysis by the blue and red box (BRB; 1 in 1). Ten reference milk samples were used. The coefficient of determination (R2), the correlation coefficient (r), and the standard deviation of the mean of individual differences (MDsd, for n) were evaluated. All correlations (r; for all indirect and alternative methods and all milk components) were significant (P ≤ 0.001). MIR and MIR–FT (conventional methods) explained a considerably higher proportion of the variability in the reference results than the UM and BRB (alternative) methods. All mean r values (x̄ minus 1.64 × sd, for a 95% confidence interval) can be used as standards for calibration quality evaluation (MIR, MIR–FT, UM, and BRB, respectively): for F, 0.997, 0.997, 0.99, and 0.995; for P, 0.986, 0.981, 0.828, and 0.864; for L, 0.968, 0.871, 0.705, and 0.761; for SNF, 0.992, 0.993, 0.911, and 0.872. Similarly for MDsd (x̄ plus 1.64 × sd): for F, 0.071, 0.068, 0.132, and 0.101%; for P, 0.051, 0.054, 0.202, and 0.14%; for L, 0.037, 0.074, 0.113, and 0.11%; and for SNF, 0.052, 0.068, 0.141, and 0.204%.
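
The comparison statistics used above (between-method correlation r and the standard deviation of individual differences, MDsd) can be sketched as follows; the paired fat-content results are hypothetical, not the study's data:

```python
import numpy as np

# Hypothetical paired fat results (%): reference (Röse-Gottlieb) vs. routine (MIR)
reference = np.array([3.42, 3.88, 4.10, 3.55, 4.31, 3.70])
routine = np.array([3.45, 3.85, 4.15, 3.50, 4.28, 3.74])

r = np.corrcoef(reference, routine)[0, 1]  # correlation between the two methods
diffs = routine - reference
md_sd = diffs.std(ddof=1)                  # SD of individual differences (MDsd)

print(round(r, 3), round(md_sd, 3))
```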

  16. Methods of Estimation the Reliability and Increasing the Informativeness of the Laboratory Results (Analysis of the Laboratory Case of Measurement the Indicators of Thyroid Function

    Directory of Open Access Journals (Sweden)

    N A Kovyazina

    2014-06-01

    The goal of the study was to demonstrate a multilevel laboratory quality management system and the methods for estimating the reliability and increasing the information content of laboratory results (using a laboratory case as an example). Results. The article examines the stages of laboratory quality management, which helped to estimate the reliability of the results of determining Free T3, Free T4, and TSH. The measurement results are presented with their expanded uncertainty and an evaluation of their dynamics. Conclusion. Compliance with the mandatory measures of a laboratory quality management system enables laboratories to obtain reliable results and to calculate parameters that increase the information content of laboratory tests in clinical decision making.
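
Expanded uncertainty, as reported for the thyroid results above, is conventionally the combined standard uncertainty multiplied by a coverage factor (the GUM approach; k = 2 gives roughly 95% coverage). A sketch with illustrative uncertainty components:

```python
import math

# Illustrative standard-uncertainty components for a TSH measurement (mIU/L):
u_repeatability = 0.06  # within-laboratory repeatability (assumed value)
u_bias = 0.04           # calibration/bias component (assumed value)

u_combined = math.sqrt(u_repeatability**2 + u_bias**2)  # root-sum-of-squares
U_expanded = 2.0 * u_combined                           # coverage factor k = 2

result = 2.35  # measured TSH, mIU/L (illustrative)
print(f"{result} ± {U_expanded:.2f} mIU/L")  # → 2.35 ± 0.14 mIU/L
```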

  17. Evaluating Proposed Investments in Power System Reliability and Resilience: Preliminary Results from Interviews with Public Utility Commission Staff

    Energy Technology Data Exchange (ETDEWEB)

    LaCommare, Kristina [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Larsen, Peter [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Eto, Joseph [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States)

    2017-01-01

    Policymakers and regulatory agencies are expressing renewed interest in the reliability and resilience of the U.S. electric power system in large part due to growing recognition of the challenges posed by climate change, extreme weather events, and other emerging threats. Unfortunately, there has been little or no consolidated information in the public domain describing how public utility/service commission (PUC) staff evaluate the economics of proposed investments in the resilience of the power system. Having more consolidated information would give policymakers a better understanding of how different state regulatory entities across the U.S. make economic decisions pertaining to reliability/resiliency. To help address this, Lawrence Berkeley National Laboratory (LBNL) was tasked by the U.S. Department of Energy Office of Energy Policy and Systems Analysis (EPSA) to conduct an initial set of interviews with PUC staff to learn more about how proposed utility investments in reliability/resilience are being evaluated from an economics perspective. LBNL conducted structured interviews in late May-early June 2016 with staff from the following PUCs: Washington D.C. (DCPSC), Florida (FPSC), and California (CPUC).

  18. Consistência interna e fatorial do Inventário Multifatorial de Coping para Adolescentes Reliability and confirmatory factorial analysis of the Multifactor Coping for Adolescents Inventory with Brazilian students

    Directory of Open Access Journals (Sweden)

    Marcos Alencar Abaide Balbinotti

    2006-12-01

    Coping is a multidimensional construct concerning the ways people deal with stressful situations, and research has shown the importance of these coping responses. This study aimed to verify the internal consistency and the confirmatory factor structure of the Multifactor Coping for Adolescents Inventory (IMCA-43). Data were collected through group administrations in classrooms from a sample of 285 elementary and high school students of both sexes, aged 13 to 18 years. The Cronbach's alpha coefficients (0.71 to 0.89) were satisfactory. The fit of the three-dimensional (χ²/df = 2.85; GFI = 0.757; AGFI = 0.724; RMSEA = 0.081), four-dimensional (χ²/df = 2.44; GFI = 0.724; AGFI = 0.695; RMSEA = 0.071), and five-dimensional (χ²/df = 2.32; GFI = 0.750; AGFI = 0.723; RMSEA = 0.068) models was poor. The results indicate that continued research is needed to improve certain psychometric qualities of this instrument.

  19. Reliability Engineering

    International Nuclear Information System (INIS)

    Lee, Sang Yong

    1992-07-01

    This book is about reliability engineering. It describes the definition and importance of reliability and the development of reliability engineering; the failure rate and failure probability density function and their types; the constant failure rate (CFR) and the exponential distribution; the increasing failure rate (IFR) with the normal and Weibull distributions; maintainability and availability; reliability testing and reliability estimation under exponential, normal, and Weibull distribution assumptions; reliability sampling tests; system reliability; design for reliability; and functional failure analysis by FTA.

  20. Human reliability

    International Nuclear Information System (INIS)

    Bubb, H.

    1992-01-01

    This book resulted from the activity of Task Force 4.2 - 'Human Reliability'. This group was established on February 27th, 1986, at the plenary meeting of the Technical Reliability Committee of VDI, within the framework of the joint committee of VDI on industrial systems technology - GIS. It is composed of representatives of industry, representatives of research institutes, of technical control boards and universities, whose job it is to study how man fits into the technical side of the world of work and to optimize this interaction. In a total of 17 sessions, information from the part of ergonomy dealing with human reliability in using technical systems at work was exchanged, and different methods for its evaluation were examined and analyzed. The outcome of this work was systematized and compiled in this book. (orig.) [de

  1. Structural Consistency, Consistency, and Sequential Rationality.

    OpenAIRE

    Kreps, David M; Ramey, Garey

    1987-01-01

    Sequential equilibria comprise consistent beliefs and a sequentially rational strategy profile. Consistent beliefs are limits of Bayes-rational beliefs for sequences of strategies that approach the equilibrium strategy. Beliefs are structurally consistent if they are rationalized by some single conjecture concerning opponents' strategies. Consistent beliefs are not necessarily structurally consistent, notwithstanding a claim by Kreps and Robert Wilson (1982). Moreover, the spirit of stru...

  2. Derivation of centers and axes of rotation for wrist and fingers in a hand kinematic model: methods and reliability results.

    Science.gov (United States)

    Cerveri, P; Lopomo, N; Pedotti, A; Ferrigno, G

    2005-03-01

    In the field of 3D reconstruction of human motion from video, model-based techniques have been proposed to increase the estimation accuracy and the degree of automation. The feasibility of this approach is strictly connected with the adopted biomechanical model. In particular, the representation of the kinematic chain and the assessment of the corresponding parameters play a relevant role in the success of the motion assessment. In this paper, the focus is on the determination of the kinematic parameters of a general hand skeleton model using surface measurements. A novel method that integrates nonrigid sphere fitting and evolutionary optimization is proposed to estimate the centers and the functional axes of rotation of the skeletal joints. The reliability of the technique is tested using real movement data and simulated motions with known ground truth, 3D measurement noise, and different ranges of motion (RoM). With respect to standard nonrigid sphere fitting techniques, the proposed method performs 10-50% better in the best condition (very low noise and wide RoM) and over 100% better with physiological artifacts and RoM. Repeatability in the range of a couple of millimeters for the localization of the centers of rotation, and in the range of one degree for the axis directions, was obtained from real data experiments.
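
The sphere-fitting step behind center-of-rotation estimation can be sketched with the standard algebraic least-squares formulation (a plain rigid sphere fit for illustration, not the paper's nonrigid or evolutionary variant); the marker trajectory below is synthetic and noise-free:

```python
import numpy as np

def fit_sphere(points):
    """Algebraic least-squares sphere fit: returns (center, radius).

    From |p - c|^2 = r^2 one gets the linear system
    |p|^2 = 2 p.c + (r^2 - |c|^2), solved here with lstsq.
    """
    points = np.asarray(points, dtype=float)
    A = np.column_stack([2.0 * points, np.ones(len(points))])
    b = (points ** 2).sum(axis=1)
    sol, *_ = np.linalg.lstsq(A, b, rcond=None)
    center = sol[:3]
    radius = float(np.sqrt(sol[3] + center @ center))
    return center, radius

# Synthetic marker path on a sphere of radius 0.05 m centered at (0.1, 0.2, 0.3),
# sampled over a limited range of motion, as for a finger joint.
rng = np.random.default_rng(1)
polar, azimuth = rng.uniform(0.0, np.pi / 3, (2, 100))
pts = np.column_stack([
    0.1 + 0.05 * np.sin(polar) * np.cos(azimuth),
    0.2 + 0.05 * np.sin(polar) * np.sin(azimuth),
    0.3 + 0.05 * np.cos(polar),
])
center, radius = fit_sphere(pts)
print(np.round(center, 3), round(radius, 3))  # → [0.1 0.2 0.3] 0.05
```

With noisy markers and a narrow RoM this linear fit degrades quickly, which is the regime where the paper's regularized, evolutionary approach is reported to help.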

  3. Self-Consistent-Field Method and τ-Functional Method on Group Manifold in Soliton Theory: a Review and New Results

    Directory of Open Access Journals (Sweden)

    Seiya Nishiyama

    2009-01-01

    The maximally-decoupled method has been considered as a theory applying a basic idea of an integrability condition to certain multiply parametrized symmetries. The method is regarded as a mathematical tool to describe a symmetry of a collective submanifold in which a canonicity condition makes the collective variables an orthogonal coordinate system. For this aim we adopt a concept of curvature unfamiliar in the conventional time-dependent (TD) self-consistent-field (SCF) theory. Our basic idea lies in the introduction of a Lagrange-type manner, familiar from fluid dynamics, to describe a collective coordinate system. This manner enables us to take a one-form which is linearly composed of a TD SCF Hamiltonian and infinitesimal generators induced by collective-variable differentials of a canonical transformation on a group. The integrability condition of the system reads: the curvature C = 0. Our method is constructed so as to manifest the structure of the group under consideration. To go beyond the maximally-decoupled method, we have aimed to construct an SCF theory, i.e., a υ (external parameter)-dependent Hartree-Fock (HF) theory. Toward such an ultimate goal, the υ-HF theory has been reconstructed on an affine Kac-Moody algebra along the lines of soliton theory, using infinite-dimensional fermions. An infinite-dimensional fermion operator is introduced through a Laurent expansion of finite-dimensional fermion operators with respect to degrees of freedom of the fermions related to a υ-dependent potential with a Υ-periodicity. A bilinear equation for the υ-HF theory has been transcribed onto the corresponding τ-function using the regular representation for the group and the Schur polynomials. The υ-HF SCF theory on an infinite-dimensional Fock space F∞ leads to a dynamics on an infinite-dimensional Grassmannian Gr∞ and may describe more precisely such a dynamics on the group manifold. A finite-dimensional Grassmannian is identified with a Gr

  4. Does a web-based feedback training program result in improved reliability in clinicians' ratings of the Global Assessment of Functioning (GAF) Scale?

    Science.gov (United States)

    Støre-Valen, Jakob; Ryum, Truls; Pedersen, Geir A F; Pripp, Are H; Jose, Paul E; Karterud, Sigmund

    2015-09-01

    The Global Assessment of Functioning (GAF) Scale is used in routine clinical practice and research to estimate symptom and functional severity and longitudinal change. Concerns about poor interrater reliability have been raised, and the present study evaluated the effect of a Web-based GAF training program designed to improve interrater reliability in routine clinical practice. Clinicians rated up to 20 vignettes online, and received deviation scores as immediate feedback (i.e., own scores compared with expert raters) after each rating. Growth curves of absolute deviation scores across the vignettes were modeled. A linear mixed effects model, using the clinician's deviation scores from expert raters as the dependent variable, indicated an improvement in reliability during training. Moderation by content of scale (symptoms; functioning), scale range (average; extreme), previous experience with GAF rating, profession, and postgraduate training were assessed. Training reduced deviation scores for inexperienced GAF raters, for individuals in clinical professions other than nursing and medicine, and for individuals with no postgraduate specialization. In addition, training was most beneficial for cases with average severity of symptoms compared with cases with extreme severity. The results support the use of Web-based training with feedback routines as a means to improve the reliability of GAF ratings performed by clinicians in mental health practice. These results especially pertain to clinicians in mental health practice who do not have a master's or doctoral degree. (c) 2015 APA, all rights reserved.

  5. Replica consistency in a Data Grid

    International Nuclear Information System (INIS)

    Domenici, Andrea; Donno, Flavia; Pucciani, Gianni; Stockinger, Heinz; Stockinger, Kurt

    2004-01-01

    A Data Grid is a wide area computing infrastructure that employs Grid technologies to provide storage capacity and processing power to applications that handle very large quantities of data. Data Grids rely on data replication to achieve better performance and reliability by storing copies of data sets on different Grid nodes. When a data set can be modified by applications, the problem of maintaining consistency among existing copies arises. The consistency problem also concerns metadata, i.e., additional information about application data sets such as indices, directories, or catalogues. This kind of metadata is used both by the applications and by the Grid middleware to manage the data. For instance, the Replica Management Service (the Grid middleware component that controls data replication) uses catalogues to find the replicas of each data set. Such catalogues can also be replicated and their consistency is crucial to the correct operation of the Grid. Therefore, metadata consistency generally poses stricter requirements than data consistency. In this paper we report on the development of a Replica Consistency Service based on the middleware mainly developed by the European Data Grid Project. The paper summarises the main issues in the replica consistency problem, and lays out a high-level architectural design for a Replica Consistency Service. Finally, results from simulations of different consistency models are presented

  6. Job-related affective well-being scale (Jaws: evidences of factor validity and reliability / Escala de bem-estar afetivo no trabalho (Jaws: evidências de validade fatorial e consistência interna

    Directory of Open Access Journals (Sweden)

    Valdiney Veloso Gouveia

    2008-01-01

    Full Text Available This study aimed at adapting a measure of job-related affective well-being for the Brazilian milieu. Specifically, it was proposed to gather evidence of the factor validity and reliability of the Job-Related Affective Well-Being Scale (JAWS), assessing whether its scores are influenced by participants' gender and age. The participants were 298 individuals employed in small or middle shopping malls in the city of João Pessoa, PB; most of them were female (76.8%), with a mean age of 26 years old (SD = 6.87). A principal component analysis (with promax rotation) was performed, revealing two components that jointly accounted for 48.1% of the total variance. They were named positive affect (α = .94; 14 items) and negative affect (α = .87; 13 items). A general factor of affective well-being was also identified (α = .95; 27 items). Participants' scores on these factors were not influenced by their gender or age. These findings are discussed based on literature that describes the psychometric parameters of the JAWS as well as the correlation of affects with demographic variables.
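
The internal-consistency coefficients reported above (Cronbach's α = .94, .87, .95) can be reproduced from raw item scores in a few lines. The sketch below is illustrative only, using made-up item ratings rather than the JAWS data:

```python
def cronbach_alpha(items):
    """Cronbach's alpha: k/(k-1) * (1 - sum of item variances / variance of totals).

    `items` is a list of per-item score lists, one inner list per item,
    all of the same length (one entry per respondent).
    """
    def var(xs):  # population variance
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / len(xs)

    k = len(items)
    totals = [sum(scores) for scores in zip(*items)]
    return k / (k - 1) * (1 - sum(var(i) for i in items) / var(totals))

# Hypothetical 5-point ratings from 6 respondents on 3 positively keyed items.
items = [
    [5, 4, 2, 5, 3, 1],
    [4, 4, 1, 5, 3, 2],
    [5, 3, 2, 4, 2, 1],
]
print(round(cronbach_alpha(items), 2))  # 0.95
```

The statistic rises toward 1 as items covary; perfectly parallel items give exactly 1.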

  7. Software reliability models for critical applications

    Energy Technology Data Exchange (ETDEWEB)

    Pham, H.; Pham, M.

    1991-12-01

    This report presents the results of the first phase of the ongoing EG&G Idaho, Inc. Software Reliability Research Program. The program is studying the existing software reliability models and proposes a state-of-the-art software reliability model that is relevant to the nuclear reactor control environment. This report consists of three parts: (1) summaries of the literature review of existing software reliability and fault tolerant software reliability models and their related issues, (2) a proposed technique for software reliability enhancement, and (3) general discussion and future research. The development of this proposed state-of-the-art software reliability model will be performed in the second phase. 407 refs., 4 figs., 2 tabs.

  9. Preliminary study for the reliability Assurance on results and procedure of the out-pile mechanical characterization test for a fuel assembly; Lateral Vibration Test (I)

    International Nuclear Information System (INIS)

    Lee, Kang Hee; Yoon, Kyung Hee; Kim, Hyung Kyu

    2007-01-01

    The reliability assurance with respect to the test procedure and results of the out-pile mechanical performance test for a nuclear fuel assembly is an essential task to assure the test quality and to obtain permission for fuel loading into a commercial reactor core. For the case of a vibration test, proper management and appropriate calibration of the instruments and devices used in the test, and various efforts to minimize possible error during the test and signal acquisition process, are needed. Additionally, a deep understanding both of the theoretical assumptions and simplifications behind the signal processing/modal analysis and of the functions of the devices used in the test is highly required. In this study, the overall procedure and results of the lateral vibration test for a fuel assembly's mechanical characterization are briefly introduced. A series of measures to assure and improve the reliability of the vibration test are discussed.

  10. Preliminary study for the reliability Assurance on results and procedure of the out-pile mechanical characterization test for a fuel assembly; Lateral Vibration Test (I)

    Energy Technology Data Exchange (ETDEWEB)

    Lee, Kang Hee; Yoon, Kyung Hee; Kim, Hyung Kyu [KAERI, Daejeon (Korea, Republic of)

    2007-07-01

    The reliability assurance with respect to the test procedure and results of the out-pile mechanical performance test for a nuclear fuel assembly is an essential task to assure the test quality and to obtain permission for fuel loading into a commercial reactor core. For the case of a vibration test, proper management and appropriate calibration of the instruments and devices used in the test, and various efforts to minimize possible error during the test and signal acquisition process, are needed. Additionally, a deep understanding both of the theoretical assumptions and simplifications behind the signal processing/modal analysis and of the functions of the devices used in the test is highly required. In this study, the overall procedure and results of the lateral vibration test for a fuel assembly's mechanical characterization are briefly introduced. A series of measures to assure and improve the reliability of the vibration test are discussed.

  11. On the question of determining the amount of experiments, reliability and accuracy of the results in the study of physical-mechanical properties of rocks

    Directory of Open Access Journals (Sweden)

    Kuznetcov N.N.

    2015-06-01

    Full Text Available A comparative analysis of the methods for determining the required number of experiments and the accuracy and reliability of the results of studies of the physical-mechanical properties of rocks has been conducted. The advantages and disadvantages of the existing specialized method for determining the compressive strength of samples are discussed. On the basis of the investigation, an optimal approach has been proposed to solve a wide range of problems associated with the rock properties' parameters using

  12. Calculating system reliability with SRFYDO

    Energy Technology Data Exchange (ETDEWEB)

    Morzinski, Jerome [Los Alamos National Laboratory; Anderson - Cook, Christine M [Los Alamos National Laboratory; Klamann, Richard M [Los Alamos National Laboratory

    2010-01-01

    SRFYDO is a process for estimating reliability of complex systems. Using information from all applicable sources, including full-system (flight) data, component test data, and expert (engineering) judgment, SRFYDO produces reliability estimates and predictions. It is appropriate for series systems with possibly several versions of the system which share some common components. It models reliability as a function of age and up to 2 other lifecycle (usage) covariates. Initial output from its Exploratory Data Analysis mode consists of plots and numerical summaries so that the user can check data entry and model assumptions, and help determine a final form for the system model. The System Reliability mode runs a complete reliability calculation using Bayesian methodology. This mode produces results that estimate reliability at the component, sub-system, and system level. The results include estimates of uncertainty, and can predict reliability at some not-too-distant time in the future. This paper presents an overview of the underlying statistical model for the analysis, discusses model assumptions, and demonstrates usage of SRFYDO.
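
The series-system assumption in SRFYDO means the system works only if every component works, so system reliability is the product of component reliabilities. The sketch below illustrates just that core identity, with hypothetical exponential component models whose failure rate grows with age; the actual SRFYDO model, its usage covariates, and its Bayesian uncertainty machinery are far richer:

```python
import math

def component_reliability(base_rate, age_factor, age):
    """Hypothetical component model: exponential survival with an age-scaled rate."""
    return math.exp(-base_rate * (1 + age_factor * age) * age)

def series_system_reliability(components, age):
    """A series system survives only if every component survives."""
    r = 1.0
    for base_rate, age_factor in components:
        r *= component_reliability(base_rate, age_factor, age)
    return r

# Three components with made-up base rates (per year) and aging factors.
components = [(0.01, 0.02), (0.005, 0.01), (0.02, 0.0)]
for age in (1, 5, 10):
    print(age, round(series_system_reliability(components, age), 3))
```

Predicting reliability "at some not-too-distant time in the future" then amounts to evaluating the fitted model at a larger `age`.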

  13. Uses of human reliability analysis probabilistic risk assessment results to resolve personnel performance issues that could affect safety

    International Nuclear Information System (INIS)

    O'Brien, J.N.; Spettell, C.M.

    1985-10-01

    This report is the first in a series which documents research aimed at improving the usefulness of Probabilistic Risk Assessment (PRA) results in addressing human risk issues. This first report describes the results of an assessment of how well currently available PRA data addresses human risk issues of current concern to NRC. Findings indicate that PRA data could be far more useful in addressing human risk issues with modification of the development process and documentation structure of PRAs. In addition, information from non-PRA sources could be integrated with PRA data to address many other issues. 12 tabs

  14. Factors That Influence Standard Automated Perimetry Test Results in Glaucoma : Test Reliability, Technician Experience, Time of Day, and Season

    NARCIS (Netherlands)

    Montolio, Francisco G. Junoy; Wesselink, Christiaan; Gordijn, Marijke; Jansonius, Nomdo M.

    2012-01-01

    PURPOSE. To determine the influence of several factors on standard automated perimetry test results in glaucoma. METHODS. Longitudinal Humphrey field analyzer 30-2 Swedish interactive threshold algorithm data from 160 eyes of 160 glaucoma patients were used. The influence of technician experience,

  15. Factors that influence standard automated perimetry test results in glaucoma: Test reliability, technician experience, time of day, and season

    NARCIS (Netherlands)

    F.G.J. Montolio (Francisco G. Junoy); C. Wesselink (Christiaan); M.C.M. Gordijn (Marijke); N.M. Jansonius (Nomdo)

    2012-01-01

    PURPOSE. To determine the influence of several factors on standard automated perimetry test results in glaucoma. METHODS. Longitudinal Humphrey field analyzer 30-2 Swedish interactive threshold algorithm data from 160 eyes of 160 glaucoma patients were used. The influence of technician

  16. Reliability of 46,XX results on miscarriage specimens: a review of 1,222 first-trimester miscarriage specimens.

    Science.gov (United States)

    Lathi, Ruth B; Gustin, Stephanie L F; Keller, Jennifer; Maisenbacher, Melissa K; Sigurjonsson, Styrmir; Tao, Rosina; Demko, Zach

    2014-01-01

    To examine the rate of maternal contamination in miscarriage specimens. Retrospective review of 1,222 miscarriage specimens submitted for chromosome testing with detection of maternal cell contamination (MCC). Referral centers requesting genetic testing of miscarriage specimens at a single reference laboratory. Women with pregnancy loss who desire complete chromosome analysis of the pregnancy tissue. Analysis of miscarriage specimens using single-nucleotide polymorphism (SNP) microarray technology with a bioinformatics program to detect maternal cell contamination. Chromosome content of miscarriages and incidence of 46,XX results due to MCC. Of the 1,222 samples analyzed, 592 had numeric chromosomal abnormalities, and 630 were normal 46,XX or 46,XY (456 and 187, respectively). In 269 of the 46,XX specimens, MCC with no embryonic component was found. With the exclusion of maternal 46,XX results, the chromosomal abnormality rate increased from 48% to 62%, and the ratio of XX to XY results dropped from 2.6 to 1.0. Over half of the normal 46,XX results in miscarriage specimens were due to MCC. The use of SNPs in MCC testing allows for precise identification of chromosomal abnormalities in miscarriage as well as of MCC, improving the accuracy of products-of-conception testing. Copyright © 2014 American Society for Reproductive Medicine. Published by Elsevier Inc. All rights reserved.
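
The headline rates in this abstract follow directly from the reported counts. A quick arithmetic check, using only the numbers given above:

```python
total = 1222          # specimens analyzed
abnormal = 592        # numeric chromosomal abnormalities
xx, xy = 456, 187     # normal 46,XX and 46,XY results
mcc_xx = 269          # 46,XX results that were maternal cell contamination

# Abnormality rate before and after excluding maternal 46,XX results.
before = abnormal / total
after = abnormal / (total - mcc_xx)
print(f"{before:.0%} -> {after:.0%}")  # 48% -> 62%

# After exclusion the XX and XY counts are equal, so the XX:XY ratio is 1.0.
print((xx - mcc_xx) / xy)              # 1.0
```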

  17. The Validity and Reliability of the Mobbing Scale (MS)

    Science.gov (United States)

    Yaman, Erkan

    2009-01-01

    The aim of this research is to develop the Mobbing Scale and examine its validity and reliability. The sample of the study consisted of 515 persons from Sakarya and Bursa. In this study, construct validity, internal consistency, test-retest reliability, and item analysis of the scale were examined. As a result of factor analysis for construct…

  18. Development to Ensure of the Result Reliability of Production Indicators in the Milk Recording During its Computerization

    Directory of Open Access Journals (Sweden)

    Pavel Hering

    2016-01-01

    Full Text Available Milk recording (MR) is an essential breeding measure, and its results are important for checking inheritance. The occurrence of errors in the data may compromise the efficiency of breeding of dairy cows. The aim was to explore the possibility of reducing the incidence of errors in the MR database. Analyses of the frequency distribution of deviations of MR data from different sources, and estimations of the limits of acceptability of differences in milk recording, were performed. The results of MR control days from a flowmeter in the parlor (DMY) were paired to the AVG7 results (average for 7 days) from the same flowmeter (n = 16,247; original recordings of complete lactations). The individual differences in milk yield indicators (DMY – R) were calculated between successive MR control days (monthly interval), with the reference value (R) being the previous DMY in the MR data file. A statistically significant correlation coefficient between AVG7 and DMY of 0.935 (P < 0.001) was found, higher than in a previous assessment under AMS (automatic milking system) conditions (0.898; P < 0.001). This means that 87.3% of the variability in the milk yield values for MR (DMY) can be explained by variations in the AVG7 values, and vice versa. Difference tests confirmed significant (P < 0.001) differences of 0.76 and 0.55 kg between DMY (in MR) and AVG7 for the original and refined data files, respectively. These differences, although statistically significant, correspond to only 2.96 and 2.15%, respectively. The use of a multi-day milk yield average from the electronic flowmeter is an equivalent alternative to the use of the record from one MR control day. The results are used in MR practice.
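
The claim that 87.3% of the variability is explained is the coefficient of determination, i.e. the square of the reported correlation. A one-line check (0.935² ≈ 0.874, matching the reported 87.3% to within rounding of the underlying correlation):

```python
r = 0.935                  # reported correlation between DMY and AVG7
r_squared = r ** 2         # share of variance explained
print(f"{r_squared:.1%}")  # ~87.4%, consistent with the reported 87.3%
```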

  19. Reporting consistently on CSR

    DEFF Research Database (Denmark)

    Thomsen, Christa; Nielsen, Anne Ellerup

    2006-01-01

    This chapter first outlines theory and literature on CSR and stakeholder relations, focusing on the different perspectives and the contextual and dynamic character of the CSR concept. CSR reporting challenges are discussed and a model of analysis is proposed. Next, our paper presents the results of a case study showing that companies use different and not necessarily consistent strategies for reporting on CSR. Finally, the implications for managerial practice are discussed. The chapter concludes by highlighting the value and awareness of the discourse and the discourse types adopted in the reporting material. By implementing consistent discourse strategies that interact according to a well-defined pattern or order, it is possible to communicate a strong social commitment on the one hand, and to take into consideration the expectations of the shareholders and the other stakeholders on the other.

  20. Power system reliability analysis using fault trees

    International Nuclear Information System (INIS)

    Volkanovski, A.; Cepin, M.; Mavko, B.

    2006-01-01

    The power system reliability analysis method is developed from the aspect of reliable delivery of electrical energy to customers. The method is based on fault tree analysis, which is widely applied in Probabilistic Safety Assessment (PSA), and is adapted for power system reliability analysis. The method is developed in such a way that only the basic reliability parameters of the analysed power system are necessary as input for the calculation of the reliability indices of the system. The modeling and analysis were performed on an example power system consisting of eight substations. The results include the level of reliability of the current power system configuration, the combinations of component failures resulting in a failed power delivery to loads, and the importance factors for components and subsystems. (author)
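
The "combinations of component failures" and "importance factors" mentioned above are standard fault-tree outputs: minimal cut sets and basic-event importances. A toy sketch with invented cut sets and failure probabilities; the rare-event approximation and Fussell-Vesely importance used here are textbook formulas, not the paper's actual model of the eight-substation system:

```python
# Hypothetical basic-event failure probabilities (e.g. line or breaker failures).
p = {"A": 1e-3, "B": 2e-3, "C": 5e-4}

# Hypothetical minimal cut sets: any one of these combinations fails delivery.
cut_sets = [{"A", "B"}, {"C"}]

def cut_set_prob(cs):
    prob = 1.0
    for event in cs:
        prob *= p[event]
    return prob

# Rare-event approximation: system failure probability ~ sum over cut sets.
q_system = sum(cut_set_prob(cs) for cs in cut_sets)

# Fussell-Vesely importance: share of system failure involving each event.
for event in p:
    fv = sum(cut_set_prob(cs) for cs in cut_sets if event in cs) / q_system
    print(event, round(fv, 3))
```

Here the single-event cut set {C} dominates, so C carries almost all the importance, which is exactly the kind of ranking such an analysis delivers for substations and components.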

  1. Reliability considerations of NDT by probability of detection (POD). Determination using ultrasound phased array. Results from a project in frame of the German nuclear safety research program

    International Nuclear Information System (INIS)

    Kurz, Jochen H.; Dugan, Sandra; Juengert, Anne

    2013-01-01

    Reliable assessment procedures are an important aspect of maintenance concepts. Non-destructive testing (NDT) methods are an essential part of a variety of maintenance plans. Fracture mechanical assessments require knowledge of flaw dimensions, loads and material parameters. NDT methods are able to acquire information on all of these areas. However, it has to be considered that the level of detail information depends on the case investigated and therefore on the applicable methods. Reliability aspects of NDT methods are of importance if quantitative information is required. Different design concepts e.g. the damage tolerance approach in aerospace already include reliability criteria of NDT methods applied in maintenance plans. NDT is also an essential part during construction and maintenance of nuclear power plants. In Germany, type and extent of inspection are specified in Safety Standards of the Nuclear Safety Standards Commission (KTA). Only certified inspections are allowed in the nuclear industry. The qualification of NDT is carried out in form of performance demonstrations of the inspection teams and the equipment, witnessed by an authorized inspector. The results of these tests are mainly statements regarding the detection capabilities of certain artificial flaws. In other countries, e.g. the U.S., additional blind tests on test blocks with hidden and unknown flaws may be required, in which a certain percentage of these flaws has to be detected. The knowledge of the probability of detection (POD) curves of specific flaws in specific testing conditions is often not present. This paper shows the results of a research project designed for POD determination of ultrasound phased array inspections of real and artificial cracks. The continuative objective of this project was to generate quantitative POD results. The distribution of the crack sizes of the specimens and the inspection planning is discussed, and results of the ultrasound inspections are presented. 
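
POD curves like those the project set out to generate are commonly parameterized as a smooth function of flaw size; a frequent choice is a log-logistic (log-odds) model. The sketch below uses invented parameters purely to show how a POD curve and the flaw size with 90% detection probability (a90) relate; it is not the project's fitted model:

```python
import math

def pod(a, mu, sigma):
    """Log-logistic POD: probability of detecting a crack of size a (mm)."""
    return 1.0 / (1.0 + math.exp(-(math.log(a) - mu) / sigma))

mu, sigma = math.log(2.0), 0.4   # hypothetical: 50% POD at a 2 mm crack

# Size with 90% detection probability: log(a90) = mu + sigma * ln(9).
a90 = math.exp(mu + sigma * math.log(9.0))
print(round(a90, 2), round(pod(a90, mu, sigma), 3))
```

Blind trials of the kind described (a certain percentage of hidden flaws must be found) effectively sample points on such a curve; fitting it quantifies detection capability instead of reporting a single pass/fail rate.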

  2. Usefulness and reliability of available epidemiological study results in assessments of radiation-related risks of cancer. Pt. 4

    International Nuclear Information System (INIS)

    Martignoni, K.; Elsasser, U.

    1990-05-01

    Carcinomas occurring in the thyroid gland as a result of radiation generally affect the papillary and, to a slightly lesser extent, follicular parts of this organ, while the available body of evidence hardly gives any indications of anaplastic and medullary neoplasms. Radiation has, however, mostly been associated with multicentric tumours. Among the survivors of the atomic bombings of Hiroshima and Nagasaki, there are no known cases of anaplastic carcinomas of the thyroid. The papillary carcinoma, which is the prevailing type of neoplasm after radiation exposure, has less malignant potential than the follicular one and is encountered in all age groups. Malignant carcinomas of the thyroid are predominantly found in the middle and high age groups. It was calculated that high doses and dose rates are associated in children with a risk coefficient of 2.5 per 10⁴ person-years. This rate is only half as high for adults. Studies performed on relevant cohorts point to latency periods of at least five years. Individuals exposed to radiation are believed to be at a forty-year or even life-long risk of developing cancer. The cancer risk can best be described on the basis of a linear dose-effect relationship. The mortality rate calculated for cancer of the thyroid amounts to approx. 10% of the morbidity rate. The carcinogenic potential of iodine-131 in the thyroid is only one-third as great as that associated with external radiation at high dose rates. (orig./MG) [de

  3. Suncor maintenance and reliability

    Energy Technology Data Exchange (ETDEWEB)

    Little, S. [Suncor Energy, Calgary, AB (Canada)

    2006-07-01

    Fleet maintenance and reliability at Suncor Energy was discussed in this presentation, with reference to Suncor Energy's primary and support equipment fleets. This paper also discussed Suncor Energy's maintenance and reliability standard involving people, processes and technology. An organizational maturity chart that graphed organizational learning against organizational performance was illustrated. The presentation also reviewed the maintenance and reliability framework; maintenance reliability model; the process overview of the maintenance and reliability standard; a process flow chart of maintenance strategies and programs; and an asset reliability improvement process flow chart. An example of an improvement initiative was included, with reference to a shovel reliability review; a dipper trip reliability investigation; bucket related failures by type and frequency; root cause analysis of the reliability process; and additional actions taken. Last, the presentation provided a graph of the results of the improvement initiative and presented the key lessons learned. tabs., figs.

  4. The Rucio Consistency Service

    CERN Document Server

    Serfon, Cedric; The ATLAS collaboration

    2016-01-01

    One of the biggest challenges with large-scale data management systems is to ensure consistency between the global file catalog and what is physically on all storage elements. To tackle this issue, the Rucio software, which is used by the ATLAS Distributed Data Management system, has been extended to automatically handle lost or unregistered files (aka Dark Data). This system automatically detects these inconsistencies and takes actions such as recovery or deletion of unneeded files in a central manner. In this talk, we present this system, explain the internals and give some results.
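
The core consistency check described here is conceptually a set comparison between the catalog and a storage-element listing. A much-simplified sketch; the file names and the plain split into "dark" vs "lost" are illustrative, while the real Rucio service works on storage dumps with timestamps and grace periods:

```python
def find_inconsistencies(catalog, storage):
    """Compare catalog entries against a storage-element listing.

    dark: physically present but unregistered (candidates for deletion)
    lost: registered but missing on storage (candidates for recovery)
    """
    catalog, storage = set(catalog), set(storage)
    return {"dark": storage - catalog, "lost": catalog - storage}

catalog = ["data/f1.root", "data/f2.root", "data/f3.root"]
storage = ["data/f1.root", "data/f3.root", "data/tmp_old.root"]
print(find_inconsistencies(catalog, storage))
```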

  5. A reliability simulation language for reliability analysis

    International Nuclear Information System (INIS)

    Deans, N.D.; Miller, A.J.; Mann, D.P.

    1986-01-01

    The results of work being undertaken to develop a Reliability Description Language (RDL) which will enable reliability analysts to describe complex reliability problems in a simple, clear and unambiguous way are described. Component and system features can be stated in a formal manner and subsequently used, along with control statements to form a structured program. The program can be compiled and executed on a general-purpose computer or special-purpose simulator. (DG)

  6. Software reliability

    CERN Document Server

    Bendell, A

    1986-01-01

    Software Reliability reviews some fundamental issues of software reliability as well as the techniques, models, and metrics used to predict the reliability of software. Topics covered include fault avoidance, fault removal, and fault tolerance, along with statistical methods for the objective assessment of predictive accuracy. Development cost models and life-cycle cost models are also discussed. This book is divided into eight sections and begins with a chapter on adaptive modeling used to predict software reliability, followed by a discussion on failure rate in software reliability growth mo

  7. Reliability of ultrasound grading traditional score and new global OMERACT-EULAR score system (GLOESS): results from an inter- and intra-reading exercise by rheumatologists.

    Science.gov (United States)

    Ventura-Ríos, Lucio; Hernández-Díaz, Cristina; Ferrusquia-Toríz, Diana; Cruz-Arenas, Esteban; Rodríguez-Henríquez, Pedro; Alvarez Del Castillo, Ana Laura; Campaña-Parra, Alfredo; Canul, Efrén; Guerrero Yeo, Gerardo; Mendoza-Ruiz, Juan Jorge; Pérez Cristóbal, Mario; Sicsik, Sandra; Silva Luna, Karina

    2017-12-01

    This study aims to test the reliability of ultrasound for grading synovitis in static and video images, evaluating grayscale and power Doppler (PD) separately and combined. Thirteen trained rheumatologist ultrasonographers participated in two separate rounds, reading 42 images, 15 static and 27 videos, of the 7-joint count [wrist, 2nd and 3rd metacarpophalangeal (MCP), 2nd and 3rd interphalangeal (IPP), 2nd and 5th metatarsophalangeal (MTP) joints]. The images were from six patients with rheumatoid arthritis and were acquired by one ultrasonographer. Synovitis was defined according to OMERACT. The scoring systems in grayscale and PD separately, and combined (GLOESS, Global OMERACT-EULAR Score System), were reviewed before the exercise. Intra- and inter-reading reliability was calculated with weighted Cohen's kappa, interpreted according to Landis and Koch. Kappa values for inter-reading were good to excellent. The lowest kappa was for GLOESS in static images, and the highest was for the same scoring in videos (k 0.59 and 0.85, respectively). Excellent values were obtained for static PD in the 5th MTP joint and for PD video in the 2nd MTP joint. Results for GLOESS in general were good to moderate. Poor agreement was observed in the 3rd MCP and 3rd IPP in all kinds of images. Intra-reading agreement was greater in grayscale and GLOESS in static images than in videos (k 0.86 vs. 0.77 and k 0.86 vs. 0.71, respectively), but PD was greater in videos than in static images (k 1.0 vs. 0.79). The reliability of synovitis scoring through static images and videos is in general good to moderate when using grayscale and PD separately or combined.
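
The agreement statistic used throughout this study, weighted Cohen's kappa for ordinal grades, can be sketched as follows with linear weights (the ratings are synthetic, not the study's data; Landis and Koch supply the conventional interpretation bands for the resulting value):

```python
def weighted_kappa(r1, r2, n_grades):
    """Linearly weighted Cohen's kappa for two raters' ordinal scores (0..n_grades-1)."""
    n = len(r1)
    # Linear weights: full credit on the diagonal, partial credit for near-misses.
    w = [[1 - abs(i - j) / (n_grades - 1) for j in range(n_grades)]
         for i in range(n_grades)]
    # Marginal distributions of each rater.
    p1 = [sum(g == i for g in r1) / n for i in range(n_grades)]
    p2 = [sum(g == j for g in r2) / n for j in range(n_grades)]
    po = sum(w[a][b] for a, b in zip(r1, r2)) / n      # observed weighted agreement
    pe = sum(w[i][j] * p1[i] * p2[j]                   # chance-expected agreement
             for i in range(n_grades) for j in range(n_grades))
    return (po - pe) / (1 - pe)

# Hypothetical 0-3 synovitis grades from two readers on ten joints.
reader1 = [0, 1, 2, 3, 2, 1, 0, 2, 3, 1]
reader2 = [0, 1, 2, 2, 2, 1, 1, 2, 3, 1]
print(round(weighted_kappa(reader1, reader2, 4), 2))  # 0.8
```

On the Landis-Koch bands, 0.8 sits at the boundary of substantial and almost-perfect agreement, the range most of the inter-reading values above fall into.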

  8. Power electronics reliability analysis.

    Energy Technology Data Exchange (ETDEWEB)

    Smith, Mark A.; Atcitty, Stanley

    2009-12-01

    This report provides the DOE and industry with a general process for analyzing power electronics reliability. The analysis can help with understanding the main causes of failures, downtime, and cost and how to reduce them. One approach is to collect field maintenance data and use it directly to calculate reliability metrics related to each cause. Another approach is to model the functional structure of the equipment using a fault tree to derive system reliability from component reliability. Analysis of a fictitious device demonstrates the latter process. Optimization can use the resulting baseline model to decide how to improve reliability and/or lower costs. It is recommended that both electric utilities and equipment manufacturers make provisions to collect and share data in order to lay the groundwork for improving reliability into the future. Reliability analysis helps guide reliability improvements in hardware and software technology including condition monitoring and prognostics and health management.
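
The first approach mentioned, computing reliability metrics directly from field maintenance data, can be as simple as MTBF/MTTR bookkeeping plus downtime attribution by cause. A sketch with invented maintenance records (the component names and hours are made up for illustration):

```python
# Hypothetical field records for one converter fleet: total operating hours
# and (cause, repair hours) for each recorded failure event.
operating_hours = 52_560
failures = [("fan", 6), ("IGBT", 30), ("fan", 6), ("control board", 12)]

mtbf = operating_hours / len(failures)              # mean time between failures
mttr = sum(h for _, h in failures) / len(failures)  # mean time to repair
availability = mtbf / (mtbf + mttr)
print(f"MTBF {mtbf:.0f} h, MTTR {mttr:.1f} h, availability {availability:.4f}")

# Downtime attribution by cause, to see where improvement pays off most.
by_cause = {}
for cause, hours in failures:
    by_cause[cause] = by_cause.get(cause, 0) + hours
print(max(by_cause, key=by_cause.get))  # worst single cause by downtime
```

The alternative approach in the abstract, a fault-tree model, would replace these direct empirical metrics with system reliability derived from component reliabilities.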

  9. Guided color consistency optimization for image mosaicking

    Science.gov (United States)

    Xie, Renping; Xia, Menghan; Yao, Jian; Li, Li

    2018-01-01

    This paper studies the problem of color consistency correction for sequential images with diverse color characteristics. Existing algorithms try to adjust all images to minimize the color differences among them under a unified energy framework; however, the results are prone to presenting a consistent but unnatural appearance when the color differences between images are large and diverse. In our approach, this problem is addressed effectively by providing a guided initial solution for the global consistency optimization, which avoids converging to a meaningless integrated solution. First, to obtain reliable intensity correspondences in overlapping regions between image pairs, we propose the histogram extreme point matching algorithm, which is robust to image geometric misalignment to some extent. In the absence of extra reference information, the guided initial solution is learned from the major tone of the original images by searching for an image subset to serve as the reference, whose color characteristics are transferred to the others via the paths of a graph analysis. Thus, the final results of the global adjustment take on a consistent color similar to the appearance of the reference image subset. Several groups of convincing experiments on both a synthetic dataset and challenging real ones sufficiently demonstrate that the proposed approach can achieve as good or even better results compared with the state-of-the-art approaches.

  10. Human reliability

    International Nuclear Information System (INIS)

    Embrey, D.E.

    1987-01-01

    Concepts and techniques of human reliability have been developed and are used mostly in probabilistic risk assessment. There, the major application of human reliability assessment has been to identify the human errors which have a significant effect on the overall safety of the system and to quantify the probability of their occurrence. Some of the major issues within human reliability studies are reviewed and it is shown how these are applied to the assessment of human failures in systems. This is done under the following headings: models of human performance used in human reliability assessment, the nature of human error, classification of errors in man-machine systems, practical aspects, human reliability modelling in complex situations, quantification and examination of human reliability, judgement-based approaches, holistic techniques and decision analytic approaches. (UK)

  11. Reliability Calculations

    DEFF Research Database (Denmark)

    Petersen, Kurt Erling

    1986-01-01

    Risk and reliability analysis is increasingly being used in evaluations of plant safety and plant reliability. The analysis can be performed either during the design process or during the operation time, with the purpose to improve the safety or the reliability. Due to plant complexity and safety...... and availability requirements, sophisticated tools, which are flexible and efficient, are needed. Such tools have been developed in the last 20 years and they have to be continuously refined to meet the growing requirements. Two different areas of application were analysed. In structural reliability probabilistic...... approaches have been introduced in some cases for the calculation of the reliability of structures or components. A new computer program has been developed based upon numerical integration in several variables. In systems reliability Monte Carlo simulation programs are used especially in analysis of very...

  12. Consistent model driven architecture

    Science.gov (United States)

    Niepostyn, Stanisław J.

    2015-09-01

    The goal of MDA is to produce software systems from abstract models in a way where human interaction is restricted to a minimum. These abstract models are based on the UML language. However, the semantics of UML models are defined in a natural language. Consequently, verification of the consistency of these diagrams is needed in order to identify errors in requirements at an early stage of the development process. Such verification is difficult due to the semi-formal nature of UML diagrams. We propose automatic verification of the consistency of a series of UML diagrams originating from abstract models, implemented with our consistency rules. This Consistent Model Driven Architecture approach enables us to automatically generate complete workflow applications from consistent and complete models developed from abstract models (e.g. a Business Context Diagram). Therefore, our method can be used to check the practicability (feasibility) of software architecture models.

  13. Bitcoin Meets Strong Consistency

    OpenAIRE

    Decker, Christian; Seidel, Jochen; Wattenhofer, Roger

    2014-01-01

    The Bitcoin system only provides eventual consistency. For everyday life, the time to confirm a Bitcoin transaction is prohibitively slow. In this paper we propose a new system, built on the Bitcoin blockchain, which enables strong consistency. Our system, PeerCensus, acts as a certification authority, manages peer identities in a peer-to-peer network, and ultimately enhances Bitcoin and similar systems with strong consistency. Our extensive analysis shows that PeerCensus is in a secure state...

  14. Consistent classical supergravity theories

    International Nuclear Information System (INIS)

    Muller, M.

    1989-01-01

    This book offers a presentation of both conformal and Poincare supergravity. The consistent four-dimensional supergravity theories are classified. The formulae needed for further modelling are included

  15. Human factor reliability program

    International Nuclear Information System (INIS)

    Knoblochova, L.

    2017-01-01

    The human factor reliability program was introduced at Slovenske elektrarne, a.s. (SE) nuclear power plants as one of the components of the Excellent Performance initiative in 2011. The initiative's goal was to increase the reliability of both people and facilities, in response to three major areas of improvement: the need to improve results, troubleshooting support, and supporting the achievement of the company's goals. In practice, the human factor reliability program includes: - tools to prevent human error; - managerial observation and coaching; - human factor analysis; - quick information about events involving the human factor; - human reliability timelines and performance indicators; - basic, periodic and extraordinary training in human factor reliability. (authors)

  16. Systems Analysis Programs for Hands-on Integrated Reliability Evaluations (SAPHIRE), Version 5.0: Models and Results Database (MAR-D) reference manual. Volume 8

    International Nuclear Information System (INIS)

    Russell, K.D.; Skinner, N.L.

    1994-07-01

    The Systems Analysis Programs for Hands-on Integrated Reliability Evaluations (SAPHIRE) refers to a set of several microcomputer programs that were developed to create and analyze probabilistic risk assessments (PRAs), primarily for nuclear power plants. The primary function of MAR-D is to create a data repository for completed PRAs and Individual Plant Examinations (IPEs) by providing input, conversion, and output capabilities for data used by IRRAS, SARA, SETS, and FRANTIC software. As probabilistic risk assessments and individual plant examinations are submitted to the NRC for review, MAR-D can be used to convert the models and results from the study for use with IRRAS and SARA. Then, these data can be easily accessed by future studies and will be in a form that will enhance the analysis process. This reference manual provides an overview of the functions available within MAR-D and step-by-step operating instructions

  17. Consistency of orthodox gravity

    Energy Technology Data Exchange (ETDEWEB)

    Bellucci, S. [INFN, Frascati (Italy). Laboratori Nazionali di Frascati; Shiekh, A. [International Centre for Theoretical Physics, Trieste (Italy)

    1997-01-01

    A recent proposal for quantizing gravity is investigated for self-consistency. The existence of a fixed-point all-order solution is found, corresponding to a consistent quantum gravity. A criterion to unify couplings is suggested, by invoking an application of the argument to more complex systems.

  18. Quasiparticles and thermodynamical consistency

    International Nuclear Information System (INIS)

    Shanenko, A.A.; Biro, T.S.; Toneev, V.D.

    2003-01-01

    A brief and simple introduction to the problem of thermodynamical consistency is given. The thermodynamical consistency relations, which should be taken into account when constructing a quasiparticle model, are found in a general manner from the finite-temperature extension of the Hellmann-Feynman theorem. Restrictions following from these relations are illustrated by simple physical examples. (author)

  19. Análise da consistência interna e fatorial confirmatório do IMPRAFE-126 com praticantes de atividades físicas gaúchos Reliability and confirmatory factorial analysis of the IMPRAFE-126 with gauchos' practitioners of physical activities

    Directory of Open Access Journals (Sweden)

    Marcos Alencar Abaide Balbinotti

    2008-06-01

    Full Text Available Neste estudo, a motivação é entendida à luz da teoria da Autodeterminação. O objetivo deste estudo é verificar os índices de consistência interna e fatorial confirmatório do IMPRAFE-126. Utilizou-se uma amostra de 1.377 sujeitos, gaúchos, de ambos os sexos e com idades variando de 13 a 83 anos. Os resultados dos índices alfa de Cronbach (superiores a 0,89 foram satisfatórios. A adequação do modelo em seis dimensões foi testada e a validade confirmatória foi assumida para a amostra geral (x2/gl=2,520; GFI=0,859; AGFI=0,854; RMSEA=0,065 e para os sexos masculino (x2/gl=3,905; GFI=0,885; AGFI=0,881; RMSEA=0,066 e feminino (x2/gl=4,337; GFI=0,840; AGFI=0,831; RMSEA=0,068. Esses resultados indicam que o IMPRAFE-126 é um instrumento promissor e que pode ser oportunamente utilizado por psicólogos do esporte ou educadores físicos, aqueles particularmente interessados em avaliar os níveis de motivação de atletas ou praticantes de atividade física e esporte em geral. Entretanto, outros estudos de validade, fidedignidade e de normas devem ser conduzidos a fim de poder-se publicá-los em um futuro próximo.In this study, motivation is understood in the context of Self-determination theory. This study aims to verify the index of reliability and confirmatory factorial validity of the IMPRAFE-126. A sample of 1.377 gauchos' practitioners of physical activities, both sexes with ages between 13 and 83 years old, was used. The results of Cronbach's alpha index (.89 to .94 had been satisfactory. The adequacy to the six-dimension model was tested and the construct confirmatory validity for the general sample was assumed (x2/gl=2,520; GFI=0,859; AGFI=0,854; RMSEA=0,065, as so as for both sexes (masculine: x2/gl=3,905; GFI=0,885; AGFI=0,881; RMSEA=0,066; feminine: x2/gl=4,337; GFI=0,840; AGFI=0,831; RMSEA=0,068. These results indicate the IMPRAFE-126 is a promising tool that can be used by sports psychologists or personal trainers particularly

  20. Reliability of electronic systems

    International Nuclear Information System (INIS)

    Roca, Jose L.

    2001-01-01

    Reliability techniques have been developed to meet the needs of the various engineering disciplines, although some would argue that much work was done on reliability before the word itself was used in its current sense. The military, space and nuclear industries were the first to become involved in this topic, yet this small revolution in favor of higher product reliability has not been confined to those environments; it has spread to the whole of industry. Mass production, characteristic of modern industry, led four decades ago to a fall in the reliability of its products, partly because of mass production itself and partly because of newly introduced and not yet stabilized industrial techniques. Industry had to change in line with these two new requirements, creating products of medium complexity and assuring a level of reliability appropriate to production costs and controls. Reliability became an integral part of the manufactured product. Against this background, the book describes reliability techniques applied to electronic systems and provides a coherent and rigorous framework for these diverse activities, giving a unifying scientific basis for the entire subject. It consists of eight chapters plus numerous statistical tables and an extensive annotated bibliography. The chapters cover the following topics: 1- Introduction to Reliability; 2- Basic Mathematical Concepts; 3- Catastrophic Failure Models; 4- Parametric Failure Models; 5- Systems Reliability; 6- Reliability in Design and Project; 7- Reliability Tests; 8- Software Reliability. The book is in Spanish and suits a diverse audience, as a textbook for courses ranging from academic to industrial. (author)

  1. How stable are quantitative sensory testing measurements over time? Report on 10-week reliability and agreement of results in healthy volunteers

    Directory of Open Access Journals (Sweden)

    Nothnagel H

    2017-08-01

    Helen Nothnagel,1,2,* Christian Puta,1,3,* Thomas Lehmann,4 Philipp Baumbach,5 Martha B Menard,6,7 Brunhild Gabriel,1 Holger H W Gabriel,1 Thomas Weiss,8 Frauke Musial2 1Department of Sports Medicine and Health Promotion, Friedrich Schiller University, Jena, Germany; 2Department of Community Medicine, National Research Center in Complementary and Alternative Medicine, UiT, The Arctic University of Norway, Tromsø, Norway; 3Center for Interdisciplinary Prevention of Diseases Related to Professional Activities, 4Department of Medical Statistics, Computer Sciences and Documentation, Friedrich Schiller University, 5Department of Anesthesiology and Intensive Care Medicine, University Hospital Jena, Germany; 6Crocker Institute, Kiawah Island, SC, 7School of Integrative Medicine and Health Sciences, Saybrook University, Oakland, CA, USA; 8Department of Biological and Clinical Psychology, Friedrich Schiller University, Jena, Germany *These authors contributed equally to this work Background: Quantitative sensory testing (QST) is a diagnostic tool for the assessment of the somatosensory system. To establish QST as an outcome measure for clinical trials, the question of how similar the measurements are over time is crucial. Therefore, the long-term reliability and limits of agreement of the standardized QST protocol of the German Research Network on Neuropathic Pain were tested. Methods: QST on the lower back and hand dorsum (dominant hand) was assessed twice in 22 healthy volunteers (10 males and 12 females; mean age: 46.6±13.0 years), with sessions separated by 10.0±2.9 weeks. All measurements were performed by one investigator. To investigate the long-term reliability and agreement of QST, differences between the two measurements, correlation coefficients, intraclass correlation coefficients (ICCs), Bland–Altman plots (limits of agreement), and the standard error of measurement were used. Results: Most parameters of the QST were reliable over 10 weeks in
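    The Bland–Altman limits of agreement used above can be computed as follows; the paired thresholds below are fabricated for illustration, not taken from the study.

```python
# Bland-Altman limits of agreement for a test-retest design: the interval
# bias +/- 1.96*sd of the paired differences is expected to contain about
# 95% of differences between two sessions. The measurements are synthetic.
import numpy as np

def limits_of_agreement(session1, session2):
    d = np.asarray(session1, float) - np.asarray(session2, float)
    bias = d.mean()
    sd = d.std(ddof=1)
    return bias, (bias - 1.96 * sd, bias + 1.96 * sd)

# Hypothetical pressure pain thresholds (kPa) for 8 volunteers, 10 weeks apart.
week0  = [310, 420, 388, 270, 515, 346, 402, 295]
week10 = [305, 431, 380, 284, 500, 350, 395, 300]
bias, (lo, hi) = limits_of_agreement(week0, week10)
print(f"bias={bias:.2f} kPa, limits of agreement=({lo:.2f}, {hi:.2f})")
```

    A bias near zero with narrow limits supports treating the protocol as stable over the retest interval.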

  2. A diagnostic test for apraxia in stroke patients: internal consistency and diagnostic value.

    NARCIS (Netherlands)

    Heugten, C.M. van; Dekker, J.; Deelman, B.G.; Stehmann-Saris, F.C.; Kinebanian, A.

    1999-01-01

    The internal consistency and the diagnostic value of a test for apraxia in patients who have had a stroke are presented. Results indicate that the items of the test form a strong and consistent scale: Cronbach's alpha as well as the results of a Mokken scale analysis present good reliability and good
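    Cronbach's alpha, the internal-consistency index used here, can be computed directly from an item-score matrix; the item scores below are invented for illustration.

```python
# Cronbach's alpha for internal consistency:
#   alpha = k/(k-1) * (1 - sum(item variances) / variance of total score)
# where k is the number of items. The scores below are made up.
import numpy as np

def cronbach_alpha(items):
    """items: 2-D array, rows = respondents, columns = test items."""
    items = np.asarray(items, float)
    k = items.shape[1]
    item_var = items.var(axis=0, ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_var / total_var)

# Hypothetical 0-3 item scores for 6 patients on a 4-item scale.
scores = [[3, 3, 2, 3],
          [1, 1, 1, 0],
          [2, 2, 2, 2],
          [0, 1, 0, 0],
          [3, 2, 3, 3],
          [1, 0, 1, 1]]
print(f"alpha = {cronbach_alpha(scores):.3f}")
```

    Values above roughly 0.8 are conventionally read as good internal consistency.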

  3. Load Control System Reliability

    Energy Technology Data Exchange (ETDEWEB)

    Trudnowski, Daniel [Montana Tech of the Univ. of Montana, Butte, MT (United States)

    2015-04-03

    This report summarizes the results of the Load Control System Reliability project (DOE Award DE-FC26-06NT42750). The original grant was awarded to Montana Tech in April 2006. Follow-on DOE awards and expansions to the project scope occurred in August 2007, January 2009, April 2011, and April 2013. In addition to the DOE monies, the project also included matching funds from the states of Montana and Wyoming. Project participants included Montana Tech; the University of Wyoming; Montana State University; NorthWestern Energy, Inc.; and MSE. Research focused on two areas: real-time power-system load control methodologies, and power-system measurement-based stability-assessment operation and control tools. The majority of effort was focused on area 2. Results from the research include: development of fundamental power-system dynamic concepts, control schemes, and signal-processing algorithms; many papers (including two prize papers) in leading journals and conferences and leadership of IEEE activities; one patent; participation in major actual-system testing in the western North American power system; prototype power-system operation and control software installed and tested at three major North American control centers; and the incubation of a new commercial-grade operation and control software tool. Work under this grant certainly supported the DOE-OE goals in the area of “Real Time Grid Reliability Management.”

  4. Reliability Engineering

    CERN Document Server

    Lazzaroni, Massimo

    2012-01-01

    This book gives a practical guide for designers and users in the Information and Communication Technology context. In the first section, definitions of the fundamental terms according to the international standards are given. Then, some theoretical concepts and reliability models are presented in Chapters 2 and 3; the aim is to evaluate performance for components and systems and reliability growth. Chapter 4, by introducing laboratory tests, highlights the reliability concept from the experimental point of view. In the ICT context, the failure rate for a given system can be

  5. Reliability training

    Science.gov (United States)

    Lalli, Vincent R. (Editor); Malec, Henry A. (Editor); Dillard, Richard B.; Wong, Kam L.; Barber, Frank J.; Barina, Frank J.

    1992-01-01

    Discussed here is failure physics, the study of how products, hardware, software, and systems fail and what can be done about it. The intent is to impart useful information, to extend the limits of production capability, and to assist in achieving low cost reliable products. A review of reliability for the years 1940 to 2000 is given. Next, a review of mathematics is given as well as a description of what elements contribute to product failures. Basic reliability theory and the disciplines that allow us to control and eliminate failures are elucidated.
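    As a minimal illustration of the basic reliability theory mentioned above: under a constant failure rate, survival probability and MTBF follow directly. The rate used here is hypothetical.

```python
# Basic reliability theory in one formula: with a constant failure rate
# lambda, R(t) = exp(-lambda * t) and MTBF = 1/lambda. Numbers illustrative.
import math

def reliability(failure_rate, t):
    """Survival probability at time t for an exponential failure model."""
    return math.exp(-failure_rate * t)

lam = 1e-4            # failures per hour (hypothetical)
mtbf = 1 / lam        # 10,000 hours
print(f"MTBF = {mtbf:.0f} h, R(1000 h) = {reliability(lam, 1000):.4f}")
```

    The exponential model is the usual starting point before wear-out effects (e.g. Weibull shapes) are introduced.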

  6. Deterministic and stochastic approach for safety and reliability optimization of captive power plant maintenance scheduling using GA/SA-based hybrid techniques: A comparison of results

    International Nuclear Information System (INIS)

    Mohanta, Dusmanta Kumar; Sadhu, Pradip Kumar; Chakrabarti, R.

    2007-01-01

    This paper presents a comparison of results for the optimization of captive power plant maintenance scheduling using a genetic algorithm (GA) as well as hybrid GA/simulated annealing (SA) techniques. As the utilities catered for by captive power plants are very sensitive to power failure, both deterministic and stochastic reliability objective functions have been considered, incorporating statutory safety regulations for the maintenance of boilers, turbines and generators. A significant contribution of this paper is to incorporate the stochastic features of the generating units and of the load using the levelized risk method. Another significant contribution is to evaluate a confidence interval for the loss of load probability (LOLP), because some deviations from the optimum schedule are anticipated while executing maintenance schedules due to various real-life unforeseen exigencies. Such exigencies are incorporated in terms of near-optimum schedules obtained from the hybrid GA/SA technique during the final stages of convergence. Case studies corroborate that the same optimum schedules are obtained using GA and hybrid GA/SA for the respective deterministic and stochastic formulations. The comparison of results in terms of the confidence interval for LOLP indicates that the levelized risk method adequately incorporates the stochastic nature of the power system, as compared with the levelized reserve method. The confidence interval for LOLP also quantifies the possible risk, which is of immense use from the perspective of captive power plants intended for quality power
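    LOLP itself can be illustrated with a tiny capacity-outage enumeration. The unit capacities, forced outage rates, and load below are invented, and this sketch ignores the paper's GA/SA scheduling layer entirely.

```python
# Loss-of-load probability (LOLP) from a capacity outage probability table,
# built by enumerating unit up/down states. Forced outage rates and the load
# are illustrative only.
from itertools import product

units = [(50, 0.04), (50, 0.04), (30, 0.06)]  # (capacity MW, forced outage rate)

def lolp(units, load):
    """Probability that available capacity falls below the load."""
    total = 0.0
    for states in product([0, 1], repeat=len(units)):  # 1 = unit available
        p, cap = 1.0, 0.0
        for (c, q), up in zip(units, states):
            p *= (1 - q) if up else q
            cap += c if up else 0.0
        if cap < load:
            total += p
    return total

print(f"LOLP at 80 MW load: {lolp(units, 80):.6f}")
```

    Full-size systems use recursive convolution rather than enumeration, but the probability being computed is the same.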

  7. Development of the software for the component reliability database system of Korean nuclear power plants

    Energy Technology Data Exchange (ETDEWEB)

    Han, Sang Hoon; Kim, Seung Hwan; Choi, Sun Young [Korea Atomic Energy Research Institute, Taejeon (Korea)

    2002-03-01

    A study was performed to develop a component reliability database system, consisting of a database to store the reliability data and software to analyze it. This system is a part of KIND (Korea Information System for Nuclear Reliability Database). An MS-SQL database is used to store the component population data, component maintenance history, and the results of reliability analysis. Two software tools were developed for the component reliability system. One is KIND-InfoView, for data storing, retrieving and searching. The other is KIND-CompRel, for the statistical analysis of component reliability. 4 refs., 13 figs., 7 tabs. (Author)
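    A minimal sketch of such a component reliability store: the real KIND system uses MS-SQL, so sqlite3 stands in here, and the schema, field names, and component IDs are hypothetical.

```python
# Sketch of a component reliability store with a point-estimate query.
# sqlite3 stands in for MS-SQL; schema and data are hypothetical.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""CREATE TABLE failures (
    component TEXT, plant TEXT, fail_count INTEGER, op_hours REAL)""")
rows = [("MOV-001", "Unit1", 2, 17520.0),
        ("MOV-002", "Unit1", 1, 17520.0),
        ("PMP-101", "Unit2", 3, 26280.0)]
conn.executemany("INSERT INTO failures VALUES (?,?,?,?)", rows)

# Point estimate of failure rate (failures per hour) per component type prefix.
cur = conn.execute("""
    SELECT substr(component, 1, 3) AS ctype,
           SUM(fail_count) * 1.0 / SUM(op_hours) AS rate
    FROM failures GROUP BY ctype ORDER BY ctype""")
for ctype, rate in cur:
    print(f"{ctype}: {rate:.2e} /h")
```

    An analysis tool in the spirit of KIND-CompRel would add uncertainty bounds (e.g. chi-square intervals) on top of such point estimates.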

  8. Reliability calculations

    International Nuclear Information System (INIS)

    Petersen, K.E.

    1986-03-01

    Risk and reliability analysis is increasingly being used in evaluations of plant safety and plant reliability. The analysis can be performed either during the design process or during operation, with the purpose of improving safety or reliability. Due to plant complexity and safety and availability requirements, sophisticated tools, which are flexible and efficient, are needed. Such tools have been developed over the last 20 years and have to be continuously refined to meet the growing requirements. Two different areas of application were analysed. In structural reliability, probabilistic approaches have been introduced in some cases for the calculation of the reliability of structures or components. A new computer program has been developed based upon numerical integration in several variables. In systems reliability, Monte Carlo simulation programs are used, especially in the analysis of very complex systems. In order to increase the applicability of the programs, variance reduction techniques can be applied to speed up the calculation process. Variance reduction techniques have been studied and procedures for the implementation of importance sampling are suggested. (author)
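    Importance sampling as a variance reduction technique can be sketched on a toy 2-out-of-3 system; the probabilities, bias level, and sample count are illustrative, not from the report.

```python
# Crude Monte Carlo vs importance sampling for a rare system failure:
# a 2-out-of-3 system fails when at least two components fail. Sampling
# component states with an inflated failure probability and reweighting by
# the likelihood ratio cuts the estimator variance. Numbers are illustrative.
import random

q = 0.01          # true component failure probability
q_bias = 0.3      # inflated probability used for sampling
N = 20000
random.seed(1)

def system_fails(states):
    return sum(states) >= 2  # True/1 = component failed

est, est_is = 0.0, 0.0
for _ in range(N):
    # crude Monte Carlo draw
    crude = [random.random() < q for _ in range(3)]
    est += system_fails(crude) / N
    # importance-sampling draw with likelihood-ratio weight
    biased = [random.random() < q_bias for _ in range(3)]
    w = 1.0
    for s in biased:
        w *= (q / q_bias) if s else ((1 - q) / (1 - q_bias))
    est_is += system_fails(biased) * w / N

exact = 3 * q**2 * (1 - q) + q**3
print(f"exact={exact:.6f}  crude={est:.6f}  importance={est_is:.6f}")
```

    With the biased sampler, failure states occur in a large fraction of draws instead of roughly 0.03% of them, which is why the reweighted estimate converges much faster.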

  9. Consistency in PERT problems

    OpenAIRE

    Bergantiños, Gustavo; Valencia-Toledo, Alfredo; Vidal-Puga, Juan

    2016-01-01

    The program evaluation review technique (PERT) is a tool used to schedule and coordinate activities in a complex project. In assigning the cost of a potential delay, we characterize the Shapley rule as the only rule that satisfies consistency and other desirable properties.
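    The Shapley rule assigns each activity its average marginal contribution to the delay cost over all orderings. The 3-activity cost function below is a made-up example, not the paper's model.

```python
# Shapley value for allocating the cost of a delay among activities: each
# activity's share is its average marginal contribution over all orderings.
# The cost function below is hypothetical.
from itertools import permutations

activities = ["A", "B", "C"]
# cost(S) = delay cost caused by the set S of delayed activities (made up)
cost = {(): 0, ("A",): 10, ("B",): 20, ("C",): 0,
        ("A", "B"): 40, ("A", "C"): 20, ("B", "C"): 30,
        ("A", "B", "C"): 60}

def c(coalition):
    return cost[tuple(sorted(coalition))]

def shapley(players):
    value = {p: 0.0 for p in players}
    perms = list(permutations(players))
    for order in perms:
        seen = []
        for p in order:
            value[p] += (c(seen + [p]) - c(seen)) / len(perms)
            seen.append(p)
    return value

print(shapley(activities))
```

    The shares always sum to the total cost of the grand coalition (efficiency), one of the properties the characterization relies on.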

  10. Systems reliability/structural reliability

    International Nuclear Information System (INIS)

    Green, A.E.

    1980-01-01

    The question of reliability technology using quantified techniques is considered for systems and structures. Systems reliability analysis has progressed to a viable and proven methodology, whereas this has yet to be fully achieved for large scale structures. Structural loading variations over the lifetime of the plant are considered to be more difficult to analyse than those for systems, even though a relatively crude model may be a necessary starting point. Various reliability characteristics and environmental conditions which enter this problem are considered. The rare event situation is briefly mentioned together with aspects of proof testing and normal and upset loading conditions. (orig.)

  11. The OMERACT Psoriatic Arthritis Magnetic Resonance Imaging Score (PsAMRIS) is reliable and sensitive to change: results from an OMERACT workshop

    DEFF Research Database (Denmark)

    Bøyesen, Pernille; McQueen, Fiona M; Gandjbakhch, Frédérique

    2011-01-01

    The aim of this multireader exercise was to assess the reliability and sensitivity to change of the psoriatic arthritis magnetic resonance imaging score (PsAMRIS) in PsA patients followed for 1 year.

  12. Geometrically Consistent Mesh Modification

    KAUST Repository

    Bonito, A.

    2010-01-01

    A new paradigm of adaptivity is to execute refinement, coarsening, and smoothing of meshes on manifolds with incomplete information about their geometry and yet preserve position and curvature accuracy. We refer to this collectively as geometrically consistent (GC) mesh modification. We discuss the concept of discrete GC, show the failure of naive approaches, and propose and analyze a simple algorithm that is GC and accuracy preserving. © 2010 Society for Industrial and Applied Mathematics.

  13. Equipment Reliability Process in Krsko NPP

    International Nuclear Information System (INIS)

    Gluhak, M.

    2016-01-01

    To ensure long-term safe and reliable plant operation, equipment operability and availability must be ensured through a group of processes established within the nuclear power plant. The equipment reliability process represents the integration and coordination of important equipment reliability activities into one process, which enables equipment performance and condition monitoring, the development, implementation and optimization of preventive maintenance activities, continuous improvement of the processes, and long-term planning. The initiative to introduce a systematic approach to assuring equipment reliability came from the US nuclear industry, guided by INPO (Institute of Nuclear Power Operations) with the participation of several US nuclear utilities. As a result of the initiative, the first edition of INPO document AP-913, 'Equipment Reliability Process Description', was issued, and it became the basic document for the implementation of the equipment reliability process for the whole nuclear industry. The scope of the equipment reliability process in Krsko NPP consists of the following programs: equipment criticality classification, the preventive maintenance program, the corrective action program, system health reports, and the long-term investment plan. By implementing, supervising and continuously improving those programs, guided by more than thirty years of operating experience, Krsko NPP will continue on a track of safe and reliable operation until the end of its extended lifetime. (author).

  14. Microelectronics Reliability

    Science.gov (United States)

    2017-01-17

    [Extraction residue from the report's list of figures: "inverters connected in a chain"; "Figure 3: Typical graph showing frequency versus square root of ..."] The report describes developing an experimental reliability estimating methodology that could both illuminate the lifetime reliability of advanced devices, circuits and ... or FIT of the device. In other words, an accurate estimate of the device lifetime was found, and thus the reliability that can be conveniently

  15. Reliability of neural encoding

    DEFF Research Database (Denmark)

    Alstrøm, Preben; Beierholm, Ulrik; Nielsen, Carsten Dahl

    2002-01-01

    The reliability with which a neuron is able to create the same firing pattern when presented with the same stimulus is of critical importance to the understanding of neuronal information processing. We show that reliability is closely related to the process of phaselocking. Experimental results f...

  16. Consistency argued students of fluid

    Science.gov (United States)

    Viyanti; Cari; Suparmi; Winarti; Slamet Budiarti, Indah; Handika, Jeffry; Widyastuti, Fatma

    2017-01-01

    Solving physics problems through consistent argumentation can improve students' thinking skills, and this is important in science. This study aims to assess the consistency of students' argumentation about fluids. The population of this study comprised college students from PGRI Madiun, UIN Sunan Kalijaga Yogyakarta and Lampung University. Cluster random sampling yielded 145 students. The study used a descriptive survey method. Data were obtained through a reasoned multiple-choice test and interviews. The fluid problems were modified from [9] and [1]. The results show an average argumentation consistency of 4.85% for correct consistency, 29.93% for wrong consistency, and 65.23% for inconsistency. These data reflect a lack of understanding of the fluid material, which, under ideal full consistency of argumentation, would support an expanded understanding of the concept. The results of the study serve as a reference for making improvements in future studies so as to obtain a positive change in the consistency of argumentation.

  17. Is cosmology consistent?

    International Nuclear Information System (INIS)

    Wang Xiaomin; Tegmark, Max; Zaldarriaga, Matias

    2002-01-01

    We perform a detailed analysis of the latest cosmic microwave background (CMB) measurements (including BOOMERaNG, DASI, Maxima and CBI), both alone and jointly with other cosmological data sets involving, e.g., galaxy clustering and the Lyman Alpha Forest. We first address the question of whether the CMB data are internally consistent once calibration and beam uncertainties are taken into account, performing a series of statistical tests. With a few minor caveats, our answer is yes, and we compress all data into a single set of 24 bandpowers with associated covariance matrix and window functions. We then compute joint constraints on the 11 parameters of the 'standard' adiabatic inflationary cosmological model. Our best fit model passes a series of physical consistency checks and agrees with essentially all currently available cosmological data. In addition to sharp constraints on the cosmic matter budget in good agreement with those of the BOOMERaNG, DASI and Maxima teams, we obtain a heaviest neutrino mass range 0.04-4.2 eV and the sharpest constraints to date on gravity waves which (together with preference for a slight red-tilt) favor 'small-field' inflation models

  18. Consistent Quantum Theory

    Science.gov (United States)

    Griffiths, Robert B.

    2001-11-01

    Quantum mechanics is one of the most fundamental yet difficult subjects in physics. Nonrelativistic quantum theory is presented here in a clear and systematic fashion, integrating Born's probabilistic interpretation with Schrödinger dynamics. Basic quantum principles are illustrated with simple examples requiring no mathematics beyond linear algebra and elementary probability theory. The quantum measurement process is consistently analyzed using fundamental quantum principles without referring to measurement. These same principles are used to resolve several of the paradoxes that have long perplexed physicists, including the double slit and Schrödinger's cat. The consistent histories formalism used here was first introduced by the author, and extended by M. Gell-Mann, J. Hartle and R. Omnès. Essential for researchers yet accessible to advanced undergraduate students in physics, chemistry, mathematics, and computer science, this book is supplementary to standard textbooks. It will also be of interest to physicists and philosophers working on the foundations of quantum mechanics. Comprehensive account Written by one of the main figures in the field Paperback edition of successful work on philosophy of quantum mechanics

  19. Building and integrating reliability models in a Reliability-Centered-Maintenance approach

    International Nuclear Information System (INIS)

    Verite, B.; Villain, B.; Venturini, V.; Hugonnard, S.; Bryla, P.

    1998-03-01

    Electricite de France (EDF) has recently developed its OMF-Structures method, designed to optimize risk-based preventive maintenance of passive structures such as pipes and supports. In particular, the reliability performance of components needs to be determined; this is a two-step process, consisting of a qualitative sort followed by a quantitative evaluation, involving two types of models. Initially, degradation models are widely used to exclude some components from the field of preventive maintenance. The reliability of the remaining components is then evaluated by means of quantitative reliability models. The results are then included in a risk indicator that is used to directly optimize preventive maintenance tasks. (author)

  20. Consistency in the World Wide Web

    DEFF Research Database (Denmark)

    Thomsen, Jakob Grauenkjær

    Tim Berners-Lee envisioned that computers will behave as agents of humans on the World Wide Web, where they will retrieve, extract, and interact with information from the World Wide Web. A step towards this vision is to make computers capable of extracting this information in a reliable...... and consistent way. In this dissertation we study steps towards this vision by showing techniques for the specification, the verification and the evaluation of the consistency of information in the World Wide Web. We show how to detect certain classes of errors in a specification of information, and we show how...... the World Wide Web, in order to help perform consistent evaluations of web extraction techniques. These contributions are steps towards having computers reliably and consistently extract information from the World Wide Web, which in turn are steps towards achieving Tim Berners-Lee's vision...

  1. Reliability of tumor volume estimation from MR images in patients with malignant glioma. Results from the American College of Radiology Imaging Network (ACRIN) 6662 Trial

    International Nuclear Information System (INIS)

    Ertl-Wagner, Birgit B.; Blume, Jeffrey D.; Herman, Benjamin; Peck, Donald; Udupa, Jayaram K.; Levering, Anthony; Schmalfuss, Ilona M.

    2009-01-01

    Reliable assessment of tumor growth in malignant glioma poses a common problem both clinically and when studying novel therapeutic agents. We aimed to evaluate two software systems in their ability to estimate volume change of tumor and/or edema on magnetic resonance (MR) images of malignant gliomas. Twenty patients with malignant glioma were included from different sites. Serial post-operative MR images were assessed with two software systems representative of the two fundamental segmentation methods, single-image fuzzy analysis (3DVIEWNIX-TV) and multi-spectral-image analysis (Eigentool), and with a manual method by 16 independent readers (eight MR-certified technologists, four neuroradiology fellows, four neuroradiologists). Enhancing tumor volume and tumor volume plus edema were assessed independently by each reader. Intraclass correlation coefficients (ICCs), variance components, and prediction intervals were estimated. There were no significant differences in the average tumor volume change over time between the software systems (p > 0.05). Both software systems were much more reliable and yielded smaller prediction intervals than manual measurements. No significant differences were observed between the volume changes determined by fellows/neuroradiologists or technologists. Semi-automated software systems are reliable tools to serve as outcome parameters in clinical studies and the basis for therapeutic decision-making for malignant gliomas, whereas manual measurements are less reliable and should not be the basis for clinical or research outcome studies. (orig.)
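The abstract above reports intraclass correlation coefficients as its reliability measure. One common variant, the one-way random-effects ICC(1,1), can be sketched in pure Python as below; this is an illustrative sketch only, and the study may have used a different ICC form. The data layout (each subject rated by the same number of judges) is an assumption of the sketch.

```python
def icc_1_1(scores):
    """One-way random-effects ICC(1,1); scores[i] is the list of the k
    ratings for subject i (every subject rated by the same k judges)."""
    n = len(scores)
    k = len(scores[0])
    grand = sum(sum(row) for row in scores) / (n * k)
    row_means = [sum(row) / k for row in scores]
    # Between-subject and within-subject mean squares from one-way ANOVA.
    ms_between = k * sum((m - grand) ** 2 for m in row_means) / (n - 1)
    ms_within = sum((x - m) ** 2
                    for row, m in zip(scores, row_means)
                    for x in row) / (n * (k - 1))
    return (ms_between - ms_within) / (ms_between + (k - 1) * ms_within)
```

Perfect agreement across judges drives the within-subject mean square to zero and the ICC to 1.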

  2. System reliability of corroding pipelines

    International Nuclear Information System (INIS)

    Zhou Wenxing

    2010-01-01

    A methodology is presented in this paper to evaluate the time-dependent system reliability of a pipeline segment that contains multiple active corrosion defects and is subjected to stochastic internal pressure loading. The pipeline segment is modeled as a series system with three distinctive failure modes due to corrosion, namely small leak, large leak and rupture. The internal pressure is characterized as a simple discrete stochastic process that consists of a sequence of independent and identically distributed random variables each acting over a period of one year. The magnitude of a given sequence follows the annual maximum pressure distribution. The methodology is illustrated through a hypothetical example. Furthermore, the impact of the spatial variability of the pressure loading and pipe resistances associated with different defects on the system reliability is investigated. The analysis results suggest that the spatial variability of pipe properties has a negligible impact on the system reliability. On the other hand, the spatial variability of the internal pressure, initial defect sizes and defect growth rates can have a significant impact on the system reliability.
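The series-system formulation above (the segment fails as soon as any corrosion defect fails, under annual-maximum pressure loading) can be sketched with a small Monte Carlo simulation. This is a minimal illustration, not the paper's model: the normal and uniform distributions, all parameter values, and the linear resistance-degradation rule are invented for the example.

```python
import random

def cumulative_failure_probs(n_defects=3, years=10, n_trials=20000, seed=42):
    """Estimate the cumulative failure probability per year of a series
    system of corrosion defects under annual-maximum pressures (illustrative)."""
    rng = random.Random(seed)
    fail_counts = [0] * years
    for _ in range(n_trials):
        # Illustrative per-defect burst resistances (MPa) and growth rates.
        resistances = [rng.gauss(15.0, 1.5) for _ in range(n_defects)]
        growth = [rng.uniform(0.1, 0.3) for _ in range(n_defects)]  # MPa lost/year
        failed_year = None
        for year in range(years):
            # One iid annual-maximum pressure draw per year.
            pressure = rng.gauss(10.0, 1.0)
            # Series system: any defect whose degraded resistance is exceeded
            # fails the whole segment.
            if any(pressure > r - g * year for r, g in zip(resistances, growth)):
                failed_year = year
                break
        if failed_year is not None:
            for y in range(failed_year, years):
                fail_counts[y] += 1
    return [c / n_trials for c in fail_counts]
```

The returned cumulative probabilities are non-decreasing in time, as expected for a time-dependent system reliability.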

  3. Reliability assessment of AOSpine thoracolumbar spine injury classification system and Thoracolumbar Injury Classification and Severity Score (TLICS) for thoracolumbar spine injuries: results of a multicentre study.

    Science.gov (United States)

    Kaul, Rahul; Chhabra, Harvinder Singh; Vaccaro, Alexander R; Abel, Rainer; Tuli, Sagun; Shetty, Ajoy Prasad; Das, Kali Dutta; Mohapatra, Bibhudendu; Nanda, Ankur; Sangondimath, Gururaj M; Bansal, Murari Lal; Patel, Nishit

    2017-05-01

    The aim of this multicentre study was to determine whether the recently introduced AOSpine Classification and Injury Severity System has better interrater and intrarater reliability than the already existing Thoracolumbar Injury Classification and Severity Score (TLICS) for thoracolumbar spine injuries. Clinical and radiological data of 50 consecutive patients admitted at a single centre with a diagnosis of an acute traumatic thoracolumbar spine injury were distributed in the form of a PowerPoint presentation to eleven attending spine surgeons from six different institutions, who classified them according to both classifications. After a time span of 6 weeks, the cases were randomly rearranged and sent again to the same surgeons for re-classification. Interobserver and intraobserver reliability for each component of TLICS and the new AOSpine classification were evaluated using the Fleiss kappa coefficient (k value) and Spearman rank order correlation. Moderate interrater and intrarater reliability was seen for grading fracture type and integrity of the posterior ligamentous complex (fracture type: k = 0.43 ± 0.01 and 0.59 ± 0.16, respectively; PLC: k = 0.47 ± 0.01 and 0.55 ± 0.15, respectively), and fair to moderate reliability (k = 0.29 ± 0.01 interobserver and 0.44 ± 0.10 intraobserver, respectively) for the total score according to TLICS. Moderate interrater (k = 0.59 ± 0.01) and substantial intrarater reliability (k = 0.68 ± 0.13) was seen for grading fracture type regardless of subtype according to the AOSpine classification. Near perfect interrater and intrarater agreement was seen concerning neurological status for both classification systems. The recently proposed AOSpine classification has better reliability for identifying fracture morphology than the existing TLICS. Additional studies are clearly necessary concerning the application of these classification systems across multiple physicians at different levels of training and trauma centers to evaluate not
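The interobserver agreement above is quantified with the Fleiss kappa statistic for multiple raters and nominal categories. A minimal pure-Python sketch of the standard formula follows; the rating tables in the usage below are illustrative, not the study's data.

```python
def fleiss_kappa(ratings):
    """Fleiss' kappa. ratings[i][j] = number of raters assigning subject i
    to category j; every subject must be rated by the same number of raters,
    and more than one category must be used overall (else P_e = 1)."""
    n_subjects = len(ratings)
    n_raters = sum(ratings[0])
    n_categories = len(ratings[0])
    n_total = n_subjects * n_raters
    # Overall proportion of assignments falling in each category.
    p_j = [sum(row[j] for row in ratings) / n_total for j in range(n_categories)]
    # Observed per-subject agreement among rater pairs.
    P_i = [(sum(c * c for c in row) - n_raters) / (n_raters * (n_raters - 1))
           for row in ratings]
    P_bar = sum(P_i) / n_subjects
    P_e = sum(p * p for p in p_j)  # chance agreement
    return (P_bar - P_e) / (1 - P_e)
```

Complete agreement yields kappa = 1; agreement at chance level yields 0, and systematic disagreement goes negative.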

  4. Consistent Stochastic Modelling of Meteocean Design Parameters

    DEFF Research Database (Denmark)

    Sørensen, John Dalsgaard; Sterndorff, M. J.

    2000-01-01

    Consistent stochastic models of metocean design parameters and their directional dependencies are essential for reliability assessment of offshore structures. In this paper a stochastic model for the annual maximum values of the significant wave height, and the associated wind velocity, current...

  5. Consistency of Network Traffic Repositories: An Overview

    NARCIS (Netherlands)

    Lastdrager, E.; Lastdrager, E.E.H.; Pras, Aiko

    2009-01-01

    Traffic repositories with TCP/IP header information are very important for network analysis. Researchers often assume that such repositories reliably represent all traffic that has been flowing over the network; little thought is given to the consistency of these repositories. Still, for

  6. Consistency analysis of network traffic repositories

    NARCIS (Netherlands)

    Lastdrager, Elmer; Lastdrager, E.E.H.; Pras, Aiko

    Traffic repositories with TCP/IP header information are very important for network analysis. Researchers often assume that such repositories reliably represent all traffic that has been flowing over the network; little thought is given to the consistency of these repositories. Still, for

  7. Low body weight and type of protease inhibitor predict discontinuation and treatment-limiting adverse drug reactions among HIV-infected patients starting a protease inhibitor regimen: consistent results from a randomized trial and an observational cohort

    DEFF Research Database (Denmark)

    Kirk, O; Gerstoft, J; Pedersen, C

    2001-01-01

    OBJECTIVES: To assess predictors for discontinuation and treatment-limiting adverse drug reactions (TLADR) among patients starting their first protease inhibitor (PI). METHODS: Data on patients starting a PI regimen (indinavir, ritonavir, ritonavir/saquinavir and saquinavir hard gel) in a randomi...... Low body weight and initiation of ritonavir relative to other PIs were associated with an increased risk of TLADRs. Very consistent results were found in a randomized trial and an observational cohort.

  8. Redefining reliability

    International Nuclear Information System (INIS)

    Paulson, S.L.

    1995-01-01

    Want to buy some reliability? The question would have been unthinkable in some markets served by the natural gas business even a few years ago, but in the new gas marketplace, industrial, commercial and even some residential customers have the opportunity to choose from among an array of options about the kind of natural gas service they need--and are willing to pay for. The complexities of this brave new world of restructuring and competition have sent the industry scrambling to find ways to educate and inform its customers about the increased responsibility they will have in determining the level of gas reliability they choose. This article discusses the new options and the new responsibilities of customers, the need for continuous education, and MidAmerican Energy Company's experiment in direct marketing of natural gas

  9. Psychometric properties and reliability of the Assessment Screen to Identify Survivors Toolkit for Gender Based Violence (ASIST-GBV): results from humanitarian settings in Ethiopia and Colombia.

    Science.gov (United States)

    Vu, Alexander; Wirtz, Andrea; Pham, Kiemanh; Singh, Sonal; Rubenstein, Leonard; Glass, Nancy; Perrin, Nancy

    2016-01-01

    Refugees and internally displaced persons who are affected by armed conflict are at increased vulnerability to some forms of sexual violence or other types of gender-based violence. A validated, brief and easy-to-administer screening tool will help service providers identify GBV survivors and refer them to appropriate GBV services. To date, no such GBV screening tool exists. We developed the 7-item ASIST-GBV screening tool from qualitative research that included individual interviews and focus groups with GBV refugee and IDP survivors. This study presents the psychometric properties of the ASIST-GBV with female refugees living in Ethiopia and IDPs in Colombia. Several strategies were used to validate the ASIST-GBV, including a 3-month implementation to validate the brief screening tool with women/girls aged ≥15 years seeking health services in Ethiopia (N = 487) and female IDPs aged ≥18 years in Colombia (N = 511). High proportions of women screened positive for past-year GBV according to the ASIST-GBV: 50.6 % in Ethiopia and 63.4 % in Colombia. The factor analysis identified a single dimension, meaning that all items loaded on the single factor. Cronbach's α = 0.77. A 2-parameter logistic IRT model was used for estimating the precision and discriminating power of each item. Item difficulty varied across the continuum of GBV experiences in the following order (lowest to highest): threats of violence (0.690), physical violence (1.28), forced sex (2.49), coercive sex for survival (2.25), forced marriage (3.51), and forced pregnancy (6.33). Discrimination results showed that forced pregnancy was the item with the strongest ability to discriminate between different levels of GBV. Physical violence and forced sex also have higher levels of discrimination with threats of violence discriminating among women at the low end of the GBV continuum and coercive sex for survival among women at the mid-range of the continuum. The findings demonstrate that
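The 2-parameter logistic IRT model mentioned above gives each item a discrimination a and a difficulty b; the probability of endorsing the item at latent trait level θ is the logistic function of a(θ - b). A minimal sketch of that standard parameterization (the numeric values in the test are illustrative, not fitted to the study's data):

```python
import math

def irt_2pl(theta, a, b):
    """Two-parameter logistic IRT model: probability of endorsing an item
    with discrimination a and difficulty b at latent trait level theta."""
    return 1.0 / (1.0 + math.exp(-a * (theta - b)))
```

At θ = b the endorsement probability is exactly 0.5, which is why b locates the item on the trait continuum; larger a makes the curve steeper, i.e. the item discriminates more sharply around its difficulty.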

  10. Search for the Standard Model Higgs boson through the decay of H→ZZ*→4l with the ATLAS experiment at LHC resulting to the observation of a new particle consistent with the Higgs boson

    International Nuclear Information System (INIS)

    Mountricha, E.

    2012-01-01

    The subject of this thesis is the search for the Standard Model Higgs boson through its decay into four leptons with the ATLAS experiment at CERN. The theory postulating the Higgs boson is presented and the constraints of the theory and direct and indirect searches are quoted. The ATLAS experiment and its components are described and the Detector Control System for the operation and monitoring of the power supplies of the Monitored Drift Tubes is detailed. The electron and muon reconstruction and identification are summarized. Studies on the muon fake rates, on the effect of pileup on the isolation of the muons, and on muon efficiencies of the isolation and impact parameter requirements are presented. The analysis of the Higgs decay to four leptons is detailed with emphasis on the background estimation, the methods employed and the control regions used. The results of the search using the 2011 √s = 7 TeV data are presented, which have led to hints for the observation of the Higgs boson. The optimization performed for the search of a low mass Higgs boson is described and the effect on the 2011 data is shown. The analysis is performed for the 2012 √s = 8 TeV data collected up to July and the results are presented, including the combination with the 2011 data. These latest results have led to the observation of a new particle consistent with the Standard Model Higgs. (author)

  11. Delimiting Coefficient a from Internal Consistency and Unidimensionality

    Science.gov (United States)

    Sijtsma, Klaas

    2015-01-01

    I discuss the contribution by Davenport, Davison, Liou, & Love (2015) in which they relate reliability represented by coefficient a to formal definitions of internal consistency and unidimensionality, both proposed by Cronbach (1951). I argue that coefficient a is a lower bound to reliability and that concepts of internal consistency and…
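Coefficient α (written "coefficient a" above), argued there to be a lower bound to reliability, is computed from the item variances and the variance of total scores. A minimal pure-Python sketch, assuming complete data and sample (n-1) variances:

```python
def cronbach_alpha(items):
    """Coefficient alpha from item-score columns: items[j][i] is person i's
    score on item j. Assumes complete data for all persons."""
    k = len(items)       # number of items
    n = len(items[0])    # number of persons

    def var(xs):
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)

    item_var_sum = sum(var(col) for col in items)
    totals = [sum(items[j][i] for j in range(k)) for i in range(n)]
    return k / (k - 1) * (1 - item_var_sum / var(totals))
```

When every item column is identical the ratio of summed item variances to total variance is 1/k and α equals exactly 1, consistent with α reaching its maximum for perfectly parallel items.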

  12. results

    Directory of Open Access Journals (Sweden)

    Salabura Piotr

    2017-01-01

    Full Text Available The HADES experiment at GSI is the only high-precision experiment probing nuclear matter in the beam energy range of a few AGeV. Pion, proton and ion beams are used to study rare dielectron and strangeness probes to diagnose the properties of strongly interacting matter in this energy regime. Selected results from p + A and A + A collisions are presented and discussed.

  13. The Principle of Energetic Consistency

    Science.gov (United States)

    Cohn, Stephen E.

    2009-01-01

    A basic result in estimation theory is that the minimum variance estimate of the dynamical state, given the observations, is the conditional mean estimate. This result holds independently of the specifics of any dynamical or observation nonlinearity or stochasticity, requiring only that the probability density function of the state, conditioned on the observations, has two moments. For nonlinear dynamics that conserve a total energy, this general result implies the principle of energetic consistency: if the dynamical variables are taken to be the natural energy variables, then the sum of the total energy of the conditional mean and the trace of the conditional covariance matrix (the total variance) is constant between observations. Ensemble Kalman filtering methods are designed to approximate the evolution of the conditional mean and covariance matrix. For them the principle of energetic consistency holds independently of ensemble size, even with covariance localization. However, full Kalman filter experiments with advection dynamics have shown that a small amount of numerical dissipation can cause a large, state-dependent loss of total variance, to the detriment of filter performance. The principle of energetic consistency offers a simple way to test whether this spurious loss of variance limits ensemble filter performance in full-blown applications. The classical second-moment closure (third-moment discard) equations also satisfy the principle of energetic consistency, independently of the rank of the conditional covariance matrix. Low-rank approximation of these equations offers an energetically consistent, computationally viable alternative to ensemble filtering. Current formulations of long-window, weak-constraint, four-dimensional variational methods are designed to approximate the conditional mode rather than the conditional mean. Thus they neglect the nonlinear bias term in the second-moment closure equation for the conditional mean. The principle of
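The principle described above rests on the identity that the expected total energy of the state equals the energy of the conditional mean plus the total variance (the trace of the conditional covariance). A minimal numerical check of that identity on a small ensemble, using 1/N covariance normalization (a simplifying assumption of this sketch, not the paper's filter formulation):

```python
def energy_decomposition(ensemble):
    """For an ensemble of state vectors, return (mean member energy,
    energy of the ensemble mean + total variance). The two are equal by
    the identity E||x||^2 = ||mean||^2 + trace(cov), with cov normalized
    by 1/N."""
    n = len(ensemble)
    dim = len(ensemble[0])
    mean = [sum(x[d] for x in ensemble) / n for d in range(dim)]
    mean_energy = sum(sum(c * c for c in x) for x in ensemble) / n
    energy_of_mean = sum(c * c for c in mean)
    total_variance = sum(sum((x[d] - mean[d]) ** 2 for d in range(dim))
                         for x in ensemble) / n
    return mean_energy, energy_of_mean + total_variance
```

The identity holds exactly for any ensemble, which is why a spurious loss of total variance in a filter shows up directly as an energy deficit.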

  14. Consistent force fields for saccharides

    DEFF Research Database (Denmark)

    Rasmussen, Kjeld

    1999-01-01

    Consistent force fields for carbohydrates were hitherto developed by extensive optimization of potential energy function parameters on experimental data and on ab initio results. A wide range of experimental data is used: internal structures obtained from gas phase electron diffraction and from x......-anomeric effects are accounted for without addition of specific terms. The work is done in the framework of the Consistent Force Field which originated in Israel and was further developed in Denmark. The actual methods and strategies employed have been described previously. Extensive testing of the force field...

  15. Reliability and mechanical design

    International Nuclear Information System (INIS)

    Lemaire, Maurice

    1997-01-01

    Many results in mechanical design are obtained from a model of physical reality and a numerical solution, leading to an evaluation of needs and resources. The goal of reliability analysis is to evaluate the confidence that can be placed in the chosen design through the calculation of a probability of failure linked to the retained scenario. Two types of analysis are proposed: sensitivity analysis and reliability analysis. Approximate methods are applicable to problems related to reliability, availability, maintainability and safety (RAMS)

  16. Progress in Methodologies for the Assessment of Passive Safety System Reliability in Advanced Reactors. Results from the Coordinated Research Project on Development of Advanced Methodologies for the Assessment of Passive Safety Systems Performance in Advanced Reactors

    International Nuclear Information System (INIS)

    2014-09-01

    Strong reliance on inherent and passive design features has become a hallmark of many advanced reactor designs, including several evolutionary designs and nearly all advanced small and medium sized reactor (SMR) designs. Advanced nuclear reactor designs incorporate several passive systems in addition to active ones — not only to enhance the operational safety of the reactors but also to eliminate the possibility of serious accidents. Accordingly, the assessment of the reliability of passive safety systems is a crucial issue to be resolved before their extensive use in future nuclear power plants. Several physical parameters affect the performance of a passive safety system, and their values at the time of operation are unknown a priori. The functions of passive systems are based on basic physical laws and thermodynamic principles, and they may not experience the same kind of failures as active systems. Hence, consistent efforts are required to qualify the reliability of passive systems. To support the development of advanced nuclear reactor designs with passive systems, investigations into their reliability using various methodologies are being conducted in several Member States with advanced reactor development programmes. These efforts include reliability methods for passive systems by the French Atomic Energy and Alternative Energies Commission, reliability evaluation of passive safety systems by the University of Pisa, Italy, and assessment of passive system reliability by the Bhabha Atomic Research Centre, India. These different approaches seem to demonstrate a consensus on some aspects. However, the developers of the approaches have been unable to agree on the definition of reliability in a passive system. Based on these developments and in order to foster collaboration, the IAEA initiated the Coordinated Research Project (CRP) on Development of Advanced Methodologies for the Assessment of Passive Safety Systems Performance in Advanced Reactors in 2008. The

  17. A systematic review of the reliability and validity of discrete choice experiments in valuing non-market environmental goods

    DEFF Research Database (Denmark)

    Rokotonarivo, Sarobidy; Schaafsma, Marije; Hockley, Neal

    2016-01-01

    reliability measures. DCE results were generally consistent with those of other stated preference techniques (convergent validity), but hypothetical bias was common. Evidence supporting theoretical validity (consistency with assumptions of rational choice theory) was limited. In content validity tests, 2...

  18. Consistent Steering System using SCTP for Bluetooth Scatternet Sensor Network

    Science.gov (United States)

    Dhaya, R.; Sadasivam, V.; Kanthavel, R.

    2012-12-01

    Wireless communication is the most flexible and mobile way to convey information from source to destination, and Bluetooth is a wireless technology suitable for short distances. A wireless sensor network (WSN), in turn, consists of spatially distributed autonomous sensors that cooperatively monitor physical or environmental conditions such as temperature, sound, vibration, pressure, motion or pollutants. Using the Bluetooth piconet technique in sensor nodes limits network depth and placement. The introduction of the Scatternet removes these network restrictions but lacks reliability in data transmission, and as the depth of the network increases, routing becomes more difficult. No authors have so far focused on the reliability factors of Scatternet sensor network routing. This paper illustrates the proposed system architecture and routing mechanism to increase reliability. Another objective is to use a reliable transport protocol that uses the multi-homing concept and supports multiple streams to prevent head-of-line blocking. The results show that the Scatternet sensor network has lower packet loss than the existing system, even in a congested environment, making it suitable for surveillance applications.

  19. Consistency and Communication in Committees

    OpenAIRE

    Inga Deimen; Felix Ketelaar; Mark T. Le Quement

    2013-01-01

    This paper analyzes truthtelling incentives in pre-vote communication in heterogeneous committees. We generalize the classical Condorcet jury model by introducing a new informational structure that captures consistency of information. In contrast to the impossibility result shown by Coughlan (2000) for the classical model, full pooling of information followed by sincere voting is an equilibrium outcome of our model for a large set of parameter values implying the possibility of ex post confli...

  20. Dynamically consistent oil import tariffs

    International Nuclear Information System (INIS)

    Karp, L.; Newbery, D.M.

    1992-01-01

    The standard theory of optimal tariffs considers tariffs on perishable goods produced abroad under static conditions, in which tariffs affect prices only in that period. Oil and other exhaustible resources do not fit this model, for current tariffs affect the amount of oil imported, which will affect the remaining stock and hence its future price. The problem of choosing a dynamically consistent oil import tariff when suppliers are competitive but importers have market power is considered. The open-loop Nash tariff is solved for the standard competitive case in which the oil price is arbitraged, and it was found that the resulting tariff rises at the rate of interest. This tariff was found to have an equilibrium that in general is dynamically inconsistent. Nevertheless, it is shown that necessary and sufficient conditions exist under which the tariff satisfies the weaker condition of time consistency. A dynamically consistent tariff is obtained by assuming that all agents condition their current decisions on the remaining stock of the resource, in contrast to open-loop strategies. For the natural case in which all agents choose their actions simultaneously in each period, the dynamically consistent tariff was characterized, and found to differ markedly from the time-inconsistent open-loop tariff. It was shown that if importers do not have overwhelming market power, then the time path of the world price is insensitive to the ability to commit, as is the level of wealth achieved by the importer. 26 refs., 4 figs

  1. An Introduction To Reliability

    International Nuclear Information System (INIS)

    Park, Kyoung Su

    1993-08-01

    This book introduces reliability, covering the definition of and requirements for reliability, the system life cycle and reliability, and reliability and failure rates, including an overview, reliability characteristics, chance failures, time-varying failure rates, failure modes, and replacement. It also treats reliability in engineering design, reliability testing under failure-rate assumptions, plotting of reliability data, prediction of system reliability, system maintenance, and failure analysis, including analysis of system safety.

  2. Reliability versus reproducibility

    International Nuclear Information System (INIS)

    Lautzenheiser, C.E.

    1976-01-01

    Defect detection and reproducibility of results are two separate but closely related subjects. It is axiomatic that a defect must be detected from examination to examination or reproducibility of results is very poor. On the other hand, a defect can be detected on each of subsequent examinations for higher reliability and still have poor reproducibility of results

  3. Reliability of nucleic acid amplification methods for detection of Chlamydia trachomatis in urine: results of the first international collaborative quality control study among 96 laboratories

    NARCIS (Netherlands)

    R.P.A.J. Verkooyen (Roel); G.T. Noordhoek; P.E. Klapper; J. Reid; J. Schirm; G.M. Cleator; M. Ieven; G. Hoddevik

    2003-01-01

    The first European Quality Control Concerted Action study was organized to assess the ability of laboratories to detect Chlamydia trachomatis in a panel of urine samples by nucleic acid amplification tests (NATs). The panel consisted of lyophilized urine samples,

  4. Reliability of Wireless Sensor Networks

    Science.gov (United States)

    Dâmaso, Antônio; Rosa, Nelson; Maciel, Paulo

    2014-01-01

    Wireless Sensor Networks (WSNs) consist of hundreds or thousands of sensor nodes with limited processing, storage, and battery capabilities. There are several strategies to reduce the power consumption of WSN nodes (thereby increasing the network lifetime) and increase the reliability of the network (by improving the WSN Quality of Service). However, there is an inherent conflict between power consumption and reliability: an increase in reliability usually leads to an increase in power consumption. For example, routing algorithms can send the same packet through different paths (multipath strategy), which is important for reliability but significantly increases the WSN power consumption. In this context, this paper proposes a model for evaluating the reliability of WSNs considering the battery level as a key factor. Moreover, this model is based on routing algorithms used by WSNs. In order to evaluate the proposed models, three scenarios were considered to show the impact of the power consumption on the reliability of WSNs. PMID:25157553
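The multipath trade-off described above (replicating a packet over several paths raises delivery probability but multiplies transmissions, and hence energy use) can be sketched as follows. Independent link failures and a unit per-hop energy cost are simplifying assumptions of this sketch, not the paper's model.

```python
def multipath_reliability(path_link_probs):
    """Delivery probability when the same packet is sent over several
    independent paths; each path is a list of per-link success probabilities.
    The packet is delivered unless every path fails."""
    fail_all = 1.0
    for links in path_link_probs:
        p_path = 1.0
        for p in links:
            p_path *= p  # a path succeeds only if all its links succeed
        fail_all *= (1.0 - p_path)
    return 1.0 - fail_all

def multipath_energy(path_link_probs, cost_per_hop=1.0):
    """Energy grows with the total number of transmissions across all paths."""
    return cost_per_hop * sum(len(links) for links in path_link_probs)
```

Duplicating a two-hop path with 0.9-reliable links lifts delivery probability from 0.81 to about 0.964, while doubling the transmission energy: the reliability/power conflict in miniature.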

  5. Reliability in perceptual analysis of voice quality.

    Science.gov (United States)

    Bele, Irene Velsvik

    2005-12-01

    This study focuses on speaking voice quality in male teachers (n = 35) and male actors (n = 36), who represent untrained and trained voice users, because we wanted to investigate normal and supranormal voices. In this study, both substantive and methodological aspects were considered. It includes a method for perceptual voice evaluation, and a basic issue was rater reliability. A listening group of 10 listeners (7 experienced speech-language therapists and 3 speech-language therapy students) evaluated the voices on 15 vocal characteristics using VA scales. Two sets of voice signals were investigated: text reading (2 loudness levels) and sustained vowel (3 levels). The results indicated a high interrater reliability for most perceptual characteristics. Connected speech was evaluated more reliably, especially at the normal level, but both types of voice signals were evaluated reliably, although the reliability for connected speech was somewhat higher than for vowels. Experienced listeners tended to be more consistent in their ratings than did the student raters. Some vocal characteristics achieved acceptable reliability even with a smaller panel of listeners. The perceptual characteristics grouped in 4 factors reflected perceptual dimensions.

  6. Consistence of Network Filtering Rules

    Institute of Scientific and Technical Information of China (English)

    SHE Kun; WU Yuancheng; HUANG Juncai; ZHOU Mingtian

    2004-01-01

    Inconsistencies in firewall/VPN (Virtual Private Network) rules impose a high maintenance cost. With the growth of multinational companies, SOHO offices, and e-government, the number of firewalls/VPNs will increase rapidly, and rule tables, whether on a single host or across a network, will grow geometrically. Checking the consistency of rule tables manually is inadequate. A formal approach can define semantic consistency and lay a theoretical foundation for intelligent management of rule tables. In this paper, a formalization of host and network rules for automatic rule validation based on set theory is proposed, and a rule validation scheme is defined. The analysis results show the superior performance of the methods and demonstrate their potential for intelligent management based on rule tables.
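One concrete inconsistency a set-theoretic formalization like the one above can catch is rule shadowing: a later rule whose match set is entirely contained in an earlier rule with the opposite action, so it can never fire. A toy sketch, not the paper's formalization: rules here match integer source/destination ranges only, and `Rule` is a name invented for this example.

```python
from typing import NamedTuple

class Rule(NamedTuple):
    src: range      # illustrative: addresses modeled as integer ranges
    dst: range
    action: str     # "accept" or "deny"

def covers(a, b):
    """True if integer range a contains integer range b (set inclusion)."""
    return a.start <= b.start and b.stop <= a.stop

def shadowed(rules):
    """Indices of rules fully covered by an earlier rule with a different
    action; such rules are unreachable and signal an inconsistency."""
    out = []
    for i, r in enumerate(rules):
        for q in rules[:i]:
            if covers(q.src, r.src) and covers(q.dst, r.dst) and q.action != r.action:
                out.append(i)
                break
    return out
```

A broad first-match "deny" rule shadows any narrower "accept" placed after it, which a first-match firewall would silently ignore.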

  7. The rating reliability calculator

    Directory of Open Access Journals (Sweden)

    Solomon David J

    2004-04-01

    Full Text Available Abstract Background Rating scales form an important means of gathering evaluation data. Since important decisions are often based on these evaluations, determining the reliability of rating data can be critical. Most commonly used methods of estimating reliability require a complete set of ratings, i.e. every subject being rated must be rated by each judge. Over fifty years ago Ebel described an algorithm for estimating the reliability of ratings based on incomplete data. While his article has been widely cited over the years, software based on the algorithm is not readily available. This paper describes an easy-to-use Web-based utility for estimating the reliability of ratings based on incomplete data using Ebel's algorithm. Methods The program is available for public use on our server and the source code is freely available under the GNU General Public License. The utility is written in PHP, a common open source embedded scripting language. The rating data can be entered in a convenient format on the user's personal computer, which the program will upload to the server for calculating the reliability and other statistics describing the ratings. Results When the program is run it displays the reliability, the number of subjects rated, the harmonic mean number of judges rating each subject, and the mean and standard deviation of the averaged ratings per subject. The program also displays the mean, standard deviation and number of ratings for each subject rated. Additionally the program will estimate the reliability of an average of a number of ratings for each subject via the Spearman-Brown prophecy formula. Conclusion This simple web-based program provides a convenient means of estimating the reliability of rating data without the need to conduct special studies in order to provide complete rating data. I would welcome other researchers revising and enhancing the program.
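The Spearman-Brown prophecy formula mentioned above predicts the reliability of an average of k ratings from the single-rating reliability r. A one-line sketch:

```python
def spearman_brown(r, k):
    """Predicted reliability of the average of k parallel ratings, given
    single-rating reliability r: k*r / (1 + (k-1)*r)."""
    return k * r / (1 + (k - 1) * r)
```

Averaging more judges always raises the predicted reliability, with diminishing returns as k grows.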

  8. Recommendations for certification or measurement of reliability for reliable digital archival repositories with emphasis on access

    Directory of Open Access Journals (Sweden)

    Paula Regina Ventura Amorim Gonçalez

    2017-04-01

Full Text Available Introduction: Considering the guidelines of ISO 16363:2012 (Space data and information transfer systems -- Audit and certification of trustworthy digital repositories) and the text of CONARQ Resolution 39 for certification of Reliable Digital Archival Repositories (RDC-Arq), the study verifies which technical recommendations should be used as the basis for a digital archival repository to be considered reliable. Objective: Identify requirements for the creation of Reliable Digital Archival Repositories, with emphasis on access to information, from ISO 16363:2012 and CONARQ Resolution 39. Methodology: The methodology consisted of an exploratory, descriptive and documentary theoretical investigation, since it is based on ISO 16363:2012 and CONARQ Resolution 39. From the perspective of the problem approach, the study is qualitative and quantitative, since the data were collected, tabulated, and analyzed from the interpretation of their contents. Results: We present a set of checklist recommendations for reliability measurement and/or certification of an RDC-Arq, focused on the identification of requirements with emphasis on access to information. Conclusions: The right to information, as well as access to reliable information, is a premise for digital archival repositories. The set of recommendations is therefore directed at archivists who work in digital repositories and wish to verify the requirements necessary to evaluate the reliability of a digital repository, and can also guide information professionals in collecting requirements for repository reliability certification.

  9. Self consistent field theory of virus assembly

    Science.gov (United States)

    Li, Siyu; Orland, Henri; Zandi, Roya

    2018-04-01

    The ground state dominance approximation (GSDA) has been extensively used to study the assembly of viral shells. In this work we employ the self-consistent field theory (SCFT) to investigate the adsorption of RNA onto positively charged spherical viral shells and examine the conditions when GSDA does not apply and SCFT has to be used to obtain a reliable solution. We find that there are two regimes in which GSDA does work. First, when the genomic RNA length is long enough compared to the capsid radius, and second, when the interaction between the genome and capsid is so strong that the genome is basically localized next to the wall. We find that for the case in which RNA is more or less distributed uniformly in the shell, regardless of the length of RNA, GSDA is not a good approximation. We observe that as the polymer-shell interaction becomes stronger, the energy gap between the ground state and first excited state increases and thus GSDA becomes a better approximation. We also present our results corresponding to the genome persistence length obtained through the tangent-tangent correlation length and show that it is zero in case of GSDA but is equal to the inverse of the energy gap when using SCFT.

  10. Reliability and improvement of RODOS results for a BWR plant; Erhoehung der Zuverlaessigkeit der RODOS-Ergebnisse fuer eine SWR-Anlage

    Energy Technology Data Exchange (ETDEWEB)

    Loeffler, H.; Cester, F.; Sonnenkalb, M.; Klein-Hessling, W.; Voggenberger, T.

    2009-06-15

    Decision support systems such as RODOS aim to support the responsible authorities by providing estimates for the possible radiological consequences in case of an event in a nuclear plant. The prognosis of quantity, composition and time of occurrence of a release from the plant (''source term'') in the so-called pre-release phase is one of the foundations with high relevance for this purpose. Within previous projects source term prognosis tools have been developed and applied exemplarily for a PWR. At the end of 2005 GRS has finalized a PSA level 2 for a plant of the SWR-69 type. On this basis improved versions of the source term prognosis tools QPRO (probabilistic) and ASTRID (deterministic) have been created for a BWR and tested in an emergency exercise in a BWR. The further development of QPRO has been related in particular to the structure of the probabilistic network and the precalculated source terms. The activities for the adaptation of ASTRID focus on the creation of the dataset for the BWR coolant loop and the containment. In the emergency exercise the manageability of QPRO but also of ASTRID has been proven. Further, the first phases of the accident progression have been well identified. However, the exercise scenario developed into a very unlikely sequence with partial core melt, and the reactor building ventilation was shut off just at a critical moment. Therefore the source term prognoses deviate from the exercise scenario. Starting from these experiences with the development and application of QPRO and ASTRID recommendations are given for the further improvement of the reliability of the source term prognosis for RODOS. In general it can be stated that the development status of QPRO and ASTRID is definitely advanced compared to the presently still prevailing source term prognosis methods. Therefore it is recommended to develop plant specific versions of these codes and to apply them.

  11. Student Precision and Reliability of the Team Sport Assessment in ...

    African Journals Online (AJOL)

    TSAP) and formative assessment of invasion sport. The specific objectives were to determine the degree of agreement among expert observers, inter-observer reliability (internal consistency), and intra observer reliability (temporal reliability).

  12. Delimiting coefficient alpha from internal consistency and unidimensionality

    NARCIS (Netherlands)

    Sijtsma, K.

    2015-01-01

    I discuss the contribution by Davenport, Davison, Liou, & Love (2015) in which they relate reliability represented by coefficient α to formal definitions of internal consistency and unidimensionality, both proposed by Cronbach (1951). I argue that coefficient α is a lower bound to reliability and
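Coefficient α, the subject of this record, follows a standard textbook formula; a minimal Python sketch (the data layout and names are illustrative, not from the article):

```python
import statistics

def cronbach_alpha(items):
    """Coefficient alpha; `items` is a list of columns, each holding one
    item's scores across the same subjects."""
    k = len(items)
    sum_item_var = sum(statistics.variance(col) for col in items)
    totals = [sum(scores) for scores in zip(*items)]
    return (k / (k - 1)) * (1 - sum_item_var / statistics.variance(totals))

# 3 items scored by 4 subjects (toy data)
data = [[2, 4, 3, 5], [3, 4, 2, 5], [2, 5, 3, 4]]
print(round(cronbach_alpha(data), 3))  # → 0.892
```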

  13. Methodology for allocating reliability and risk

    International Nuclear Information System (INIS)

    Cho, N.Z.; Papazoglou, I.A.; Bari, R.A.

    1986-05-01

This report describes a methodology for reliability and risk allocation in nuclear power plants. The work investigates the technical feasibility of allocating reliability and risk, which are expressed in a set of global safety criteria and which may not necessarily be rigid, to various reactor systems, subsystems, components, operations, and structures in a consistent manner. The report also provides general discussions on the problem of reliability and risk allocation. The problem is formulated as a multiattribute decision analysis paradigm. The work mainly addresses the first two steps of a typical decision analysis, i.e., (1) identifying alternatives, and (2) generating information on outcomes of the alternatives, by performing a multiobjective optimization on a PRA model and reliability cost functions. The multiobjective optimization serves as the guiding principle for reliability and risk allocation. The concept of "noninferiority" is used in the multiobjective optimization problem. Finding the noninferior solution set is the main theme of the current approach. The final step of decision analysis, i.e., assessment of the decision maker's preferences, could then be performed more easily on the noninferior solution set. Some results of applying the methodology to a nontrivial risk model are provided, and several outstanding issues such as generic allocation, preference assessment, and uncertainty are discussed. 29 refs., 44 figs., 39 tabs
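The "noninferiority" concept at the heart of the multiobjective optimization can be illustrated with a toy Pareto filter; this is a sketch under the assumption that all objectives (e.g., risk and cost) are to be minimized, and bears no relation to the report's actual PRA model:

```python
def noninferior(points):
    """Return the noninferior (Pareto-optimal) subset of objective
    vectors, assuming every objective is to be minimized."""
    def dominates(a, b):
        # a dominates b: no worse in every objective, different somewhere
        return all(x <= y for x, y in zip(a, b)) and a != b
    return [p for p in points if not any(dominates(q, p) for q in points)]

# toy (risk, cost) alternatives: (3, 8) is dominated by (2, 7)
alts = [(1, 9), (2, 7), (3, 8), (4, 5), (6, 4)]
print(noninferior(alts))  # → [(1, 9), (2, 7), (4, 5), (6, 4)]
```

A decision maker's preference assessment then only needs to consider this reduced set rather than every alternative.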

  14. On the reliability of spacecraft swarms

    NARCIS (Netherlands)

    Engelen, S.; Gill, E.K.A.; Verhoeven, C.J.M.

    2012-01-01

    Satellite swarms, consisting of a large number of identical, miniaturized and simple satellites, are claimed to provide an implementation for specific space missions which require high reliability. However, a consistent model of how reliability and availability on mission level is linked to cost-

  15. Measuring process and knowledge consistency

    DEFF Research Database (Denmark)

    Edwards, Kasper; Jensen, Klaes Ladeby; Haug, Anders

    2007-01-01

When implementing configuration systems, knowledge about products and processes is documented and replicated in the configuration system. This practice assumes that products are specified consistently, i.e. on the same rule base, and likewise for processes. However, consistency cannot be taken for granted; rather the contrary, and attempting to implement a configuration system may easily ignite a political battle. This is because stakes are high in the sense that the rules and processes chosen may only reflect one part of the practice, ignoring a majority of the employees. To avoid this situation, this paper presents a methodology for measuring product and process consistency prior to implementing a configuration system. The methodology consists of two parts: 1) measuring knowledge consistency and 2) measuring process consistency. Knowledge consistency is measured by developing a questionnaire...

  16. Reliability of diesel generators at the Finnish and Swedish nuclear power plants

    Energy Technology Data Exchange (ETDEWEB)

    Pulkkinen, Urho [Technical Research Centre of Finland, Vuorimiehentie 5, SF-02150, Espoo (Finland)

    1986-02-15

The operating experiences of 40 stand-by diesel generators at the Finnish and Swedish nuclear power plants have been analysed, with special emphasis on the impact of the frequency of surveillance testing and of the test procedure on diesel generator reliability, the contribution of design, manufacturing, testing and maintenance errors, and the potential and actual common cause failures. The results of the analyses consisted of both practical recommendations and mathematical reliability models, together with useful reliability data. (author)

  17. System Reliability Engineering

    International Nuclear Information System (INIS)

    Lim, Tae Jin

    2005-02-01

This book covers reliability engineering, including quality and reliability, reliability data, the importance of reliability engineering, reliability measures, the Poisson process (goodness-of-fit tests and the Poisson arrival model), reliability estimation (e.g., for the exponential distribution), reliability of systems, availability, preventive maintenance (replacement policies, minimal repair policy, shock models, spares, group maintenance and periodic inspection), analysis of common cause failures, and an analysis model for repair effects.

  18. The reliability of finite element analysis results of the low impact test in predicting the energy absorption performance of thin-walled structures

    Energy Technology Data Exchange (ETDEWEB)

    Alipour, R.; Nejadx, Farokhi A.; Izman, S. [Universiti Teknologi Malaysia, Johor Bahru (Malaysia)

    2015-05-15

The application of dual-phase steels (DPS) such as DP600 in thin-walled automotive structures is continuously increasing as vehicle designers use modern steel grades and lightweight structures to improve structural performance, reduce vehicle weight and enhance crash performance. The cost of broad experimental investigations in this area can be contained by using structural analysis software, substituting finite element analysis (FEA) for many experiments. Nevertheless, it must be verified that the selected method, including the element type and solution methodology, is capable of predicting real conditions. In this paper, numerical and experimental studies are carried out to quantify the effect of element type and solution methodology on the results of finite element analysis, in order to investigate the energy absorption behavior of a DP600 thin-walled structure with three different geometries under low impact loading. The outcomes indicate that the combination of the implicit method and solid elements agrees best with the experiments. In addition, combining shell elements with the implicit method reduces the simulation time remarkably, although the error relative to the experiments increases to some extent.

  19. Reliability of Oronasal Fistula Classification.

    Science.gov (United States)

    Sitzman, Thomas J; Allori, Alexander C; Matic, Damir B; Beals, Stephen P; Fisher, David M; Samson, Thomas D; Marcus, Jeffrey R; Tse, Raymond W

    2018-01-01

Objective Oronasal fistula is an important complication of cleft palate repair that is frequently used to evaluate surgical quality, yet the reliability of fistula classification has never been examined. The objective of this study was to determine the reliability of oronasal fistula classification both within individual surgeons and between multiple surgeons. Design Using intraoral photographs of children with repaired cleft palate, surgeons rated the location of palatal fistulae using the Pittsburgh Fistula Classification System. Intrarater and interrater reliability scores were calculated for each region of the palate. Participants Eight cleft surgeons rated photographs obtained from 29 children. Results Within individual surgeons, reliability for each region of the Pittsburgh classification ranged from moderate to almost perfect (κ = .60-.96). By contrast, reliability between surgeons was lower, ranging from fair to substantial (κ = .23-.70). Between-surgeon reliability was lowest for the junction of the soft and hard palates (κ = .23). Within-surgeon and between-surgeon reliability were almost perfect for the more general classification of fistula in the secondary palate (κ = .95 and κ = .83, respectively). Conclusions This is the first reliability study of fistula classification. We show that the Pittsburgh Fistula Classification System is reliable when used by an individual surgeon, but less reliable when used among multiple surgeons. Comparisons of fistula occurrence among surgeons may be subject to less bias if they use the more general classification of "presence or absence of fistula of the secondary palate" rather than the Pittsburgh Fistula Classification System.
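The κ statistics reported here are agreement coefficients of the Cohen's kappa family; a minimal two-rater Python sketch (the labels and data are illustrative, not the study's):

```python
def cohens_kappa(rater_a, rater_b):
    """Cohen's kappa for two raters assigning categorical labels
    to the same subjects: (p_observed - p_expected) / (1 - p_expected)."""
    n = len(rater_a)
    observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    labels = set(rater_a) | set(rater_b)
    expected = sum((rater_a.count(c) / n) * (rater_b.count(c) / n)
                   for c in labels)
    return (observed - expected) / (1 - expected)

# two raters, six cases, binary "fistula present" labels (toy data)
print(round(cohens_kappa([1, 1, 0, 0, 1, 0], [1, 0, 0, 0, 1, 1]), 3))  # → 0.333
```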

  20. Radioecological aspects of at-sea dumping of nuclear wastes resulting from the FSU nuclear fleet activities: Reliability of packings and necessity of rehabilitation measures

    International Nuclear Information System (INIS)

    Lavkovsky, S.; Kvasha, N.; Kobzev, V.; Sadovnikov, V.; Goltsev, V.

    2002-01-01

    The practice of radioactive waste treatment in the former USSR was that prior to at-sea dumping of objects with spent nuclear fuel (SNF) a set of design and technological measures was undertaken with a view to form packings with additional barriers to prevent radionuclide release in the environment. Based upon the results of most conservative evaluations of the protective barrier corrosion resistance it was concluded, that till Year 2300 there will be no grounds to worry about a possibility of the loss of tightness of the majority of packings. However, should unfavourable external natural factors combine, the loss of sealing of the packing with the screening assembly of the nuclear icebreaker 'Lenin' can occur at any moment. (author)

  1. FY 1999 report on the results of the R and D on the assessment of reliability of oil refining facilities; 1999 nendo sekiyu seisei setsubi shinraisei hyoka nado gijutsu kaihatsu seika hokokusho

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    2000-03-01

At present, in oil refineries in Japan, the term of continued operation of oil refining facilities is shorter than in Europe and America because of the regulations on the open inspection period for boilers and hazardous material storage tanks. As a result, the refining cost is comparatively higher than in Europe and America due to the increase in inspection/repair costs and the decrease in operational rate. Therefore, it is becoming important to supply petroleum products effectively by keeping the oil refining facilities of the whole of Japan stable and by prolonging the term of continued operation of oil refining facilities. In this R and D, the technical development needed for the long-term continued operation of oil refining facilities is conducted. The items for the R and D are as follows: assessment technology for the reliability of oil refining high-temperature system facilities; assessment technology for the reliability of piping/storage facilities in oil refineries; assessment technology for the reliability of oil refining power system facilities; and technology for a management support system in oil refining facilities. In this fiscal year, technical surveys, data collection, and construction of the basic concept of the developmental technology were mostly conducted. Also conducted were the trial manufacture of various probes and oscillators for nondestructive inspection, as well as the basic design of inspection equipment and trial manufacture of parts of it. The data acquired were analyzed. (NEDO)

  2. Full self-consistency versus quasiparticle self-consistency in diagrammatic approaches: exactly solvable two-site Hubbard model.

    Science.gov (United States)

    Kutepov, A L

    2015-08-12

Self-consistent solutions of Hedin's equations (HE) for the two-site Hubbard model (HM) have been studied. They have been found for three-point vertices of increasing complexity (Γ = 1 (GW approximation), Γ1 from first-order perturbation theory, and the exact vertex Γ(E)). Comparison is made between the cases when an additional quasiparticle (QP) approximation for Green's functions is applied during the self-consistent iterative solving of HE and when the QP approximation is not applied. The results obtained with the exact vertex are directly related to the presently open question of which approximation is more advantageous for future implementations, GW + DMFT or QPGW + DMFT. It is shown that in a regime of strong correlations only the originally proposed GW + DMFT scheme is able to provide reliable results. Vertex corrections based on perturbation theory (PT) systematically improve the GW results when full self-consistency is applied. The application of QP self-consistency combined with PT vertex corrections shows problems similar to those of the case when the exact vertex is combined with QP self-consistency. An analysis of Ward identity violation is performed for all approximations studied in this work, and its relation to the overall accuracy of the schemes is discussed.

  3. Is the General Self-Efficacy Scale a Reliable Measure to be used in Cross-Cultural Studies? Results from Brazil, Germany and Colombia.

    Science.gov (United States)

    Damásio, Bruno F; Valentini, Felipe; Núñes-Rodriguez, Susana I; Kliem, Soeren; Koller, Sílvia H; Hinz, Andreas; Brähler, Elmar; Finck, Carolyn; Zenger, Markus

    2016-05-26

This study evaluated cross-cultural measurement invariance of the General Self-Efficacy Scale (GSES) in large Brazilian (N = 2,394), representative German (N = 2,046), and Colombian (N = 1,500) samples. Initially, multiple-indicators multiple-causes (MIMIC) analyses showed that sex and age biased item responses in the total sample (2 and 10 items, respectively). After controlling for these two covariates, a multigroup confirmatory factor analysis (MGCFA) was employed. Configural invariance was attested. However, metric invariance was not supported for five items out of 10, and scalar invariance was not supported for all items. We also evaluated the differences between the latent scores estimated by two models: MIMIC, and MGCFA with the non-equivalent parameters unconstrained across countries. The average difference in the estimated latent scores was |.07|, and 22.8% of the scores were biased by at least .10 standardized points. Bias effects were above the mean for the German group, for which the average difference was |.09| and 33.7% of the scores were biased by at least .10. In synthesis, the GSES did not show the measurement invariance required for use in this cross-cultural study. Moreover, our results showed that even when controlling for sex and age effects, failing to control item parameters in the MGCFA analyses across countries would bias the latent score estimation, with a larger effect for the German sample.

  4. On the Reliability of Optimization Results for Trigeneration Systems in Buildings, in the Presence of Price Uncertainties and Erroneous Load Estimation

    Directory of Open Access Journals (Sweden)

    Antonio Piacentino

    2016-12-01

Full Text Available Cogeneration and trigeneration plants are widely recognized as promising technologies for increasing energy efficiency in buildings. However, their overall potential is scarcely exploited, due to the difficulties in achieving economic viability and the investment risk related to uncertainties in future energy loads and prices. Several stochastic optimization models have been proposed in the literature to account for uncertainties, but these instruments share a common reliance on user-defined probability functions for each stochastic parameter. Since such functions are hard to predict, this paper analyses the influence of erroneous estimates of the uncertain energy loads and prices on the optimal plant design and operation. With reference to a hotel building, a number of realistic scenarios are developed, exploring the most frequent errors occurring in the estimation of energy loads and prices. Then, profit-oriented optimizations are performed for the examined scenarios by means of a deterministic mixed-integer linear programming algorithm. From a comparison of the achieved results, it emerges that: (i) the plant profitability is mainly influenced by the average "spark spread" (i.e., the ratio between electricity and fuel prices) and, secondarily, by the shape of the daily price profiles; (ii) the "optimal sizes" of the main components are scarcely influenced by the daily load profiles, while they are more closely related to the average "power to heat" and "power to cooling" ratios of the building.

  5. Reliability engineering theory and practice

    CERN Document Server

    Birolini, Alessandro

    2014-01-01

This book shows how to build in, evaluate, and demonstrate the reliability and availability of components, equipment, and systems. It presents the state-of-the-art of reliability engineering, both in theory and practice, and is based on the author's more than 30 years' experience in this field, half in industry and half as Professor of Reliability Engineering at the ETH, Zurich. The structure of the book allows rapid access to practical results. This final edition extends and replaces all previous editions. New are, in particular, a strategy to mitigate incomplete coverage, a comprehensive introduction to human reliability with design guidelines and new models, and a refinement of reliability allocation, design guidelines for maintainability, and concepts related to regenerative stochastic processes. The set of problems for homework has been extended. Methods and tools are presented in a way that they can be tailored to cover different reliability requirement levels and be used for safety analysis. Because of the Appendice...

  6. RTE - 2015 Reliability Report. Summary

    International Nuclear Information System (INIS)

    2016-01-01

    Every year, RTE produces a reliability report for the past year. This report includes a number of results from previous years so that year-to-year comparisons can be drawn and long-term trends analysed. The 2015 report underlines the major factors that have impacted on the reliability of the electrical power system, without focusing exclusively on Significant System Events (ESS). It describes various factors which contribute to present and future reliability and the numerous actions implemented by RTE to ensure reliability today and in the future, as well as the ways in which the various parties involved in the electrical power system interact across the whole European interconnected network

  7. Reliability of reference distances used in photogrammetry.

    Science.gov (United States)

    Aksu, Muge; Kaya, Demet; Kocadereli, Ilken

    2010-07-01

To determine the reliability of the reference distances used for photogrammetric assessment. The sample consisted of 100 subjects with a mean age of 22.97 +/- 2.98 years. Five lateral and four frontal parameters were measured directly on the subjects' faces. For photogrammetric assessment, two reference distances for the profile view and three reference distances for the frontal view were established. Standardized photographs were taken and all the parameters that had been measured directly on the face were measured on the photographs. The reliability of the reference distances was checked by comparing direct and indirect values of the parameters obtained from the subjects' faces and photographs. Repeated-measures analysis of variance (ANOVA) and Bland-Altman analyses were used for statistical assessment. For profile measurements, the indirect values measured were statistically different from the direct values, except for Sn-Sto in male subjects and Prn-Sn and Sn-Sto in female subjects. The indirect values of Prn-Sn and Sn-Sto were reliable in both sexes. The poorest results were obtained for the indirect values of the N-Sn parameter in female subjects and the Sn-Me parameter in male subjects according to the Sa-Sba reference distance. For frontal measurements, the indirect values were statistically different from the direct values in both sexes, except for one parameter in male subjects: the indirect values were not statistically different from the direct values for Go-Go. The indirect values of Ch-Ch were reliable in male subjects. The poorest results were obtained according to the P-P reference distance. For profile assessment, the T-Ex reference distance was reliable for Prn-Sn and Sn-Sto in both sexes. For frontal assessment, the Ex-Ex and En-En reference distances were reliable for Ch-Ch in male subjects.
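The Bland-Altman analysis used for the statistical assessment reduces to a mean difference (bias) and 95% limits of agreement; a minimal sketch (the sample values are illustrative, not the study's):

```python
import statistics

def bland_altman_limits(method_a, method_b):
    """Bias (mean difference) and 95% limits of agreement between
    two measurement methods (Bland-Altman)."""
    diffs = [a - b for a, b in zip(method_a, method_b)]
    bias = statistics.mean(diffs)
    sd = statistics.stdev(diffs)
    return bias, (bias - 1.96 * sd, bias + 1.96 * sd)

# toy direct vs photographic measurements (mm)
bias, (low, high) = bland_altman_limits([40.1, 38.4, 41.0, 39.2],
                                        [39.8, 38.9, 40.5, 39.0])
print(round(bias, 3))  # → 0.125
```

Agreement is judged by whether the limits (low, high) are narrow enough for the measurement to be considered clinically interchangeable.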

  8. Proof tests on reliability

    International Nuclear Information System (INIS)

    Mishima, Yoshitsugu

    1983-01-01

In order to obtain public understanding of nuclear power plants, tests should be carried out to prove the reliability and safety of present LWR plants. For example, the aseismicity of nuclear power plants must be verified by using a large-scale earthquake simulator. Reliability testing began in fiscal 1975, and the proof tests on steam generators and on PWR support and flexure pins against stress corrosion cracking have already been completed; the results have been highly appreciated internationally. The capacity factor of nuclear power plant operation in Japan rose to 80% in the summer of 1983, and considering the period of regular inspection, this means operation at almost full capacity. Japanese LWR technology has now risen to the top place in the world after having overcome earlier defects. The significance of the reliability tests is to secure functioning until the age limit is reached, to confirm the correct forecast of deterioration processes, to confirm the effectiveness of remedies for defects, and to confirm the accuracy of predicting the behavior of facilities. The reliability of nuclear valves, fuel assemblies, the heat-affected zones in welding, reactor cooling pumps and electric instruments has been tested or is being tested. (Kako, I.)

  9. Coverage of Large-Scale Food Fortification of Edible Oil, Wheat Flour, and Maize Flour Varies Greatly by Vehicle and Country but Is Consistently Lower among the Most Vulnerable: Results from Coverage Surveys in 8 Countries.

    Science.gov (United States)

    Aaron, Grant J; Friesen, Valerie M; Jungjohann, Svenja; Garrett, Greg S; Neufeld, Lynnette M; Myatt, Mark

    2017-05-01

    Background: Large-scale food fortification (LSFF) of commonly consumed food vehicles is widely implemented in low- and middle-income countries. Many programs have monitoring information gaps and most countries fail to assess program coverage. Objective: The aim of this work was to present LSFF coverage survey findings (overall and in vulnerable populations) from 18 programs (7 wheat flour, 4 maize flour, and 7 edible oil programs) conducted in 8 countries between 2013 and 2015. Methods: A Fortification Assessment Coverage Toolkit (FACT) was developed to standardize the assessments. Three indicators were used to assess the relations between coverage and vulnerability: 1 ) poverty, 2 ) poor dietary diversity, and 3 ) rural residence. Three measures of coverage were assessed: 1 ) consumption of the vehicle, 2 ) consumption of a fortifiable vehicle, and 3 ) consumption of a fortified vehicle. Individual program performance was assessed based on the following: 1 ) achieving overall coverage ≥50%, 2) achieving coverage of ≥75% in ≥1 vulnerable group, and 3 ) achieving equity in coverage for ≥1 vulnerable group. Results: Coverage varied widely by food vehicle and country. Only 2 of the 18 LSFF programs assessed met all 3 program performance criteria. The 2 main program bottlenecks were a poor choice of vehicle and failure to fortify a fortifiable vehicle (i.e., absence of fortification). Conclusions: The results highlight the importance of sound program design and routine monitoring and evaluation. There is strong evidence of the impact and cost-effectiveness of LSFF; however, impact can only be achieved when the necessary activities and processes during program design and implementation are followed. The FACT approach fills an important gap in the availability of standardized tools. The LSFF programs assessed here need to be re-evaluated to determine whether to further invest in the programs, whether other vehicles are appropriate, and whether other approaches

  11. Self-consistent expansion for the molecular beam epitaxy equation.

    Science.gov (United States)

    Katzav, Eytan

    2002-03-01

    Motivated by a controversy over the correct results derived from the dynamic renormalization group (DRG) analysis of the nonlinear molecular beam epitaxy (MBE) equation, a self-consistent expansion for the nonlinear MBE theory is considered. The scaling exponents are obtained for spatially correlated noise of the general form D(r - r', t - t') = 2 D_0 |r - r'|^(2ρ - d) δ(t - t'). I find a lower critical dimension d_c(ρ) = 4 + 2ρ, above which the linear MBE solution appears. Below the lower critical dimension a ρ-dependent strong-coupling solution is found. These results help to resolve the controversy over the correct exponents that describe nonlinear MBE, using a reliable method that proved itself in the past by giving reasonable results for the strong-coupling regime of the Kardar-Parisi-Zhang system (for d > 1), where DRG failed to do so.
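
    In standard notation, the noise correlator and the lower critical dimension quoted above read:

    ```latex
    D(\mathbf{r}-\mathbf{r}',\, t-t') \;=\; 2 D_0\, |\mathbf{r}-\mathbf{r}'|^{\,2\rho - d}\,\delta(t-t'),
    \qquad
    d_c(\rho) \;=\; 4 + 2\rho .
    ```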

  12. Reliability evaluation of a natural circulation system

    International Nuclear Information System (INIS)

    Jafari, Jalil; D'Auria, Francesco; Kazeminejad, Hossein; Davilu, Hadi

    2003-01-01

    A comparison has been made between results from this study and results from a previous analysis where the same methodology was adopted for the evaluation of the TH-R of a different passive system named Isolation Condenser (IC). The comparison shows that the current single-phase NC system is 'more reliable' than the two-phase IC system. This constitutes a proof of qualification and of consistency for the adopted methodology.

  13. Consistent bone marrow-derived cell mobilization following repeated short courses of granulocyte-colony-stimulating factor in patients with amyotrophic lateral sclerosis: results from a multicenter prospective trial.

    Science.gov (United States)

    Tarella, Corrado; Rutella, Sergio; Gualandi, Francesca; Melazzini, Mario; Scimè, Rosanna; Petrini, Mario; Moglia, Cristina; Ulla, Marco; Omedé, Paola; Bella, Vincenzo La; Corbo, Massimo; Silani, Vincenzo; Siciliano, Gabriele; Mora, Gabriele; Caponnetto, Claudia; Sabatelli, Mario; Chiò, Adriano

    2010-01-01

    The aim of this study was to evaluate and characterize the feasibility and safety of bone marrow-derived cell (BMC) mobilization following repeated courses of granulocyte-colony stimulating factor (G-CSF) in patients with amyotrophic lateral sclerosis (ALS). Between January 2006 and March 2007, 26 ALS patients entered a multicenter trial that included four courses of BMC mobilization at 3-month intervals. In each course, G-CSF (5 microg/kg b.i.d.) was administered for four consecutive days; 18% mannitol was also given. Mobilization was monitored by flow cytometry analysis of circulating CD34(+) cells and by in vitro colony assay for clonogenic progenitors. Co-expression by CD34(+) cells of CD133, CD90, CD184, CD117 and CD31 was also assessed. Twenty patients completed the four-course schedule. One patient died and one refused to continue the program before starting the mobilization courses; four discontinued the study protocol because of disease progression. Overall, 89 G-CSF courses were delivered. There were two severe adverse events: one prolactinoma and one deep vein thrombosis. There were no discontinuations as a result of toxic complications. Circulating CD34(+) cells were monitored during 85 G-CSF courses and were always markedly increased; the range of median peak values was 41-57/microL, with no significant differences among the four G-CSF courses. Circulating clonogenic progenitor levels paralleled CD34(+) cell levels. Most mobilized CD34(+) cells co-expressed stem cell markers, with a significant increase in CD133 co-expression. It is feasible to deliver repeated courses of G-CSF to mobilize a substantial number of CD34(+) cells in patients with ALS; mobilized BMC include immature cells with potential clinical usefulness.

  14. Failure database and tools for wind turbine availability and reliability analyses. The application of reliability data for selected wind turbines

    DEFF Research Database (Denmark)

    Kozine, Igor; Christensen, P.; Winther-Jensen, M.

    2000-01-01

    The objective of this project was to develop and establish a database for collecting reliability and reliability-related data, for assessing the reliability of wind turbine components and subsystems and wind turbines as a whole, as well as for assessing wind turbine availability while ranking the contributions at both the component and system levels. The project resulted in a software package combining a failure database with programs for predicting WTB availability and the reliability of all the components and systems, especially the safety system. The database was established with the Microsoft Access Database Management System; the software for reliability and availability assessments was created with Visual Basic. The report consists of a description of the theoretical ...

  15. Coordinating user interfaces for consistency

    CERN Document Server

    Nielsen, Jakob

    2001-01-01

    In the years since Jakob Nielsen's classic collection on interface consistency first appeared, much has changed, and much has stayed the same. On the one hand, there's been exponential growth in the opportunities for following or disregarding the principles of interface consistency: more computers, more applications, more users, and of course the vast expanse of the Web. On the other, there are the principles themselves, as persistent and as valuable as ever. In these contributed chapters, you'll find details on many methods for seeking and enforcing consistency, along with bottom-line analyses

  16. AMSAA Reliability Growth Guide

    National Research Council Canada - National Science Library

    Broemm, William

    2000-01-01

    The U.S. Army Materiel Systems Analysis Activity (AMSAA) has developed reliability growth methodology for all phases of the process, from planning to tracking to projection. The report presents this methodology and associated reliability growth concepts.

  17. Reliability benefits of dispersed wind resource development

    International Nuclear Information System (INIS)

    Milligan, M.; Artig, R.

    1998-05-01

    Generating capacity that is available during the utility peak period is worth more than off-peak capacity. Wind power from a single location might not be available during enough of the peak period to provide sufficient value. However, if the wind power plant is developed over geographically dispersed locations, the timing and availability of wind power from these multiple sources could provide a better match with the utility's peak load than a single site. There are other issues that arise when considering dispersed wind plant development. Singular development can result in economies of scale and might reduce the costs of obtaining multiple permits and multiple interconnections. However, dispersed development can result in cost efficiencies if interconnection can be accomplished at lower voltages or at locations closer to load centers. Several wind plants are in various stages of planning or development in the US. Although some of these are small-scale demonstration projects, significant wind capacity has been developed in Minnesota, with additional developments planned in Wyoming, Iowa and Texas. As these and other projects are planned and developed, there is a need to analyze the effect of geographically dispersed sites on the reliability of the overall wind plant. This paper uses a production-cost/reliability model to analyze the reliability of several wind sites in the state of Minnesota. The analysis finds that the use of a model with traditional reliability measures does not produce consistent, robust results. An approach based on fuzzy set theory is applied in this paper, with improved results. Using such a model, the authors find that system reliability can be optimized with a mix of dispersed wind sites.

  18. Choice, internal consistency, and rationality

    OpenAIRE

    Aditi Bhattacharyya; Prasanta K. Pattanaik; Yongsheng Xu

    2010-01-01

    The classical theory of rational choice is built on several important internal consistency conditions. In recent years, the reasonableness of those internal consistency conditions has been questioned and criticized, and several responses to accommodate such criticisms have been proposed in the literature. This paper develops a general framework to accommodate the issues raised by the criticisms of classical rational choice theory, and examines the broad impact of these criticisms from both no...

  19. Self-consistent quark bags

    International Nuclear Information System (INIS)

    Rafelski, J.

    1979-01-01

    After an introductory overview of the bag model, the author uses the self-consistent solution of the coupled Dirac-meson fields to represent a bound state of strongly interacting fermions. In this framework he discusses the virial approach to classical field equations. After a short description of the numerical methods used, the properties of bound states of scalar self-consistent fields and the solutions of a self-coupled Dirac field are considered. (HSI) [de

  20. RTE - Reliability report 2016

    International Nuclear Information System (INIS)

    2017-06-01

    Every year, RTE produces a reliability report for the past year. This document lays out the main factors that affected the electrical power system's operational reliability in 2016 and the initiatives currently under way intended to ensure its reliability in the future. Within a context of the energy transition, changes to the European interconnected network mean that RTE has to adapt on an on-going basis. These changes include the increase in the share of renewables injecting an intermittent power supply into networks, resulting in a need for flexibility, and a diversification in the numbers of stakeholders operating in the energy sector and changes in the ways in which they behave. These changes are dramatically changing the structure of the power system of tomorrow and the way in which it will operate - particularly the way in which voltage and frequency are controlled, as well as the distribution of flows, the power system's stability, the level of reserves needed to ensure supply-demand balance, network studies, assets' operating and control rules, the tools used and the expertise of operators. The results obtained in 2016 are evidence of a globally satisfactory level of reliability for RTE's operations in somewhat demanding circumstances: more complex supply-demand balance management, cross-border schedules at interconnections indicating operation that is closer to its limits and - most noteworthy - having to manage a cold spell just as several nuclear power plants had been shut down. 
In a drive to keep pace with the changes expected to occur in these circumstances, RTE implemented numerous initiatives to ensure high levels of reliability: - maintaining investment levels of euro 1.5 billion per year; - increasing cross-zonal capacity at borders with our neighbouring countries, thus bolstering the security of our electricity supply; - implementing new mechanisms (demand response, capacity mechanism, interruptibility, etc.); - involvement in tests or projects

  1. Consistency Results for the ROC Curves of Fused Classifiers

    National Research Council Canada - National Science Library

    Bjerkaas, Kristopher

    2004-01-01

    .... An established performance quantifier is the Receiver Operating Characteristic (ROC) curve, which allows one to view the probability of detection versus the probability of false alarm in one graph...

  2. Regulatory and personality predictors of the reliability of professional actions

    Directory of Open Access Journals (Sweden)

    Morosanova V.I.

    2017-12-01

    Background. The present research is carried out in the context of the conscious self-regulation of professional activity. Objective. It investigates the regulatory and personality predictors of reliability in rescue operations under stressful conditions. Design. The research sample includes 87 rescuers (72 men and 15 women aged from 25 to 50 years. Respondents were asked to complete the Morosanova’s Self-Regulation Profile Questionnaire – SRPQM, the Eysenck Personality Profile - Short (EPP-S, and the expert questionnaire “Professional Reliability of Rescue Operation” designed for this particular study. Results. On the basis of a correlation analysis, the structural model of the predictors of action reliability was constructed using the maximum likelihood method. Consistency indices showed a good agreement between the model and empirical data. The model contains three latent factors: “Self-regulation”, “Neuroticism” and “Reliability of actions”. As the model displays, the “Self-regulation” factor is a significant predictor of professional action reliability. There are two indicator variables for the factor “Self-regulation”: the self-regulation reliability considered as its stability in the stressful situations, and the rescuers’ levels of development of professionally critical regulatory features - modeling of conditions significant for the achievement of goals and the programming of actions. The study results also show that personality dispositions (by Eysenck have only indirect influence on action reliability. As the structural model reveals, the conscious self-regulation is a mediator in the relationship of neuroticism traits and action reliability. Conclusion. The conscious self-regulation is a significant predictor of professional action reliability under stressful conditions. It is also the mediator of the effects of personality dispositions on the reliability of action.

  3. Time-consistent and market-consistent evaluations

    NARCIS (Netherlands)

    Pelsser, A.; Stadje, M.A.

    2014-01-01

    We consider evaluation methods for payoffs with an inherent financial risk as encountered for instance for portfolios held by pension funds and insurance companies. Pricing such payoffs in a way consistent to market prices typically involves combining actuarial techniques with methods from

  4. Reliability and construction control

    Directory of Open Access Journals (Sweden)

    Sherif S. AbdelSalam

    2016-06-01

    The goal of this study was to determine the most reliable and efficient combination of design and construction methods required for vibro piles. For a wide range of static and dynamic formulas, the reliability-based resistance factors were calculated using EGYPT database, which houses load test results for 318 piles. The analysis was extended to introduce a construction control factor that determines the variation between the pile nominal capacities calculated using static versus dynamic formulae. From the major outcomes, the lowest coefficient of variation is associated with Davisson’s criterion, and the resistance factors calculated for the AASHTO method are relatively high compared with other methods. Additionally, the CPT-Nottingham and Schmertmann method provided the most economic design. Recommendations related to a pile construction control factor were also presented, and it was found that utilizing the factor can significantly reduce variations between calculated and actual capacities.

  5. Reshaping the Science of Reliability with the Entropy Function

    Directory of Open Access Journals (Sweden)

    Paolo Rocchi

    2015-01-01

    The present paper revolves around two points. First, we observe a certain parallel between the reliability of systems and the progressive disorder of thermodynamical systems, and we import the notion of reversibility/irreversibility into the reliability domain. Second, we note that reliability theory is a very active area of research which has nevertheless not yet become a mature discipline. This is because the majority of researchers adopt inductive logic instead of the deductive logic typical of mature scientific sectors. The deductive approach was inaugurated by Gnedenko in the reliability domain. We mean to continue Gnedenko’s work, and we use the Boltzmann-like entropy to pursue this objective. This paper condenses the papers published in the past decade which illustrate the calculus of the Boltzmann-like entropy. It is demonstrated how every result complies with deductive logic and is consistent with Gnedenko’s achievements.

  6. Reliability analysis of the reactor protection system with fault diagnosis

    International Nuclear Information System (INIS)

    Lee, D.Y.; Han, J.B.; Lyou, J.

    2004-01-01

    The main function of a reactor protection system (RPS) is to maintain the reactor core integrity and the reactor coolant system pressure boundary. The RPS uses a 2-out-of-m redundant architecture to assure reliable operation. The system reliability of the RPS is a very important factor in the probabilistic safety assessment (PSA) evaluations performed in the nuclear field. Evaluating the failure rate of a k-out-of-m redundant system is not easy with deterministic methods. In this paper, a reliability analysis method using the binomial process is suggested to calculate the failure rate of an RPS with a fault diagnosis function. The suggested method is compared with the result of a Markov process to verify its validity, and is applied to several kinds of RPS architectures for a comparative evaluation of reliability. (orig.)
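
    The binomial calculation for a k-out-of-m redundant architecture can be sketched as follows; the 2-out-of-4 logic and the 1% per-channel failure probability are illustrative assumptions, not figures from the paper:

    ```python
    from math import comb

    def k_out_of_m_reliability(k: int, m: int, p_fail: float) -> float:
        """Probability that at least k of m identical, independent
        channels work, given per-channel failure probability p_fail."""
        p_work = 1.0 - p_fail
        return sum(
            comb(m, i) * p_work**i * p_fail**(m - i)
            for i in range(k, m + 1)
        )

    # Illustrative 2-out-of-4 trip logic with a 1% channel failure probability
    r = k_out_of_m_reliability(2, 4, 0.01)
    print(f"system reliability: {r:.8f}")
    ```

    The system fails only when fewer than k channels survive, so the failure rate falls off with the (m - k + 1)-th power of the channel failure probability.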

  7. Reliability and validity of the McDonald Play Inventory.

    Science.gov (United States)

    McDonald, Ann E; Vigen, Cheryl

    2012-01-01

    This study examined the ability of a two-part self-report instrument, the McDonald Play Inventory, to reliably and validly measure the play activities and play styles of 7- to 11-yr-old children and to discriminate between the play of neurotypical children and children with known learning and developmental disabilities. A total of 124 children ages 7-11 recruited from a sample of convenience and a subsample of 17 parents participated in this study. Reliability estimates yielded moderate correlations for internal consistency, total test intercorrelations, and test-retest reliability. Validity estimates were established for content and construct validity. The results suggest that a self-report instrument yields reliable and valid measures of a child's perceived play performance and discriminates between the play of children with and without disabilities. Copyright © 2012 by the American Occupational Therapy Association, Inc.

  8. Development of web-based reliability data base platform

    International Nuclear Information System (INIS)

    Hwang, Seok Won; Lee, Chang Ju; Sung, Key Yong

    2004-01-01

    Probabilistic safety assessment (PSA) is a systematic technique which estimates the degree of risk impact to the public due to an accident scenario. Estimating the occurrence frequencies and consequences of potential scenarios requires a thorough analysis of the accident details and all fundamental parameters. The robustness of PSA in checking weaknesses in a design and operation allows a better informed and balanced decision to be reached. The fundamental parameters for PSA, such as component failure rates, should be estimated with evidence collected steadily throughout the operational period. However, since data from any single plant is not sufficient to provide an adequate PSA result, in practice the operating data of all plants is commonly used to estimate the reliability parameters for the same type of components. The reliability data of any component type falls into two categories: the generic, based on the operating experience of all plants, and the plant-specific, based on the operation of the specific plant of interest. Generic data is highly essential for new or recently built nuclear power plants (NPPs). Generally, the reliability database may be categorized into component reliability, initiating event frequencies, human performance, and so on. Among these, component reliability is a key element because it has the most abundant population. Component reliability data is therefore essential to the quantification of accident sequences, because it provides the input for the various basic events that make up the fault tree.

  9. Market-consistent actuarial valuation

    CERN Document Server

    Wüthrich, Mario V

    2016-01-01

    This is the third edition of this well-received textbook, presenting powerful methods for measuring insurance liabilities and assets in a consistent way, with detailed mathematical frameworks that lead to market-consistent values for liabilities. Topics covered are stochastic discounting with deflators, valuation portfolio in life and non-life insurance, probability distortions, asset and liability management, financial risks, insurance technical risks, and solvency. Including updates on recent developments and regulatory changes under Solvency II, this new edition of Market-Consistent Actuarial Valuation also elaborates on different risk measures, providing a revised definition of solvency based on industry practice, and presents an adapted valuation framework which takes a dynamic view of non-life insurance reserving risk.

  10. Reliability of visual and instrumental color matching.

    Science.gov (United States)

    Igiel, Christopher; Lehmann, Karl Martin; Ghinea, Razvan; Weyhrauch, Michael; Hangx, Ysbrand; Scheller, Herbert; Paravina, Rade D

    2017-09-01

    The aim of this investigation was to evaluate intra-rater and inter-rater reliability of visual and instrumental shade matching. Forty individuals with normal color perception participated in this study. The right maxillary central incisor of a teaching model was prepared and restored with 10 feldspathic all-ceramic crowns of different shades. A shade matching session consisted of the observer (rater) visually selecting the best match by using VITA classical A1-D4 (VC) and VITA Toothguide 3D Master (3D) shade guides and the VITA Easyshade Advance intraoral spectrophotometer (ES) to obtain both VC and 3D matches. Three shade matching sessions were held with 4 to 6 weeks between sessions. Intra-rater reliability was assessed based on the percentage of agreement for the three sessions for the same observer, whereas the inter-rater reliability was calculated as mean percentage of agreement between different observers. The Fleiss' Kappa statistical analysis was used to evaluate visual inter-rater reliability. The mean intra-rater reliability for the visual shade selection was 64(11) for VC and 48(10) for 3D. The corresponding ES values were 96(4) for both VC and 3D. The percentages of observers who matched the same shade with VC and 3D were 55(10) and 43(12), respectively, while corresponding ES values were 88(8) for VC and 92(4) for 3D. The results for visual shade matching exhibited a high to moderate level of inconsistency for both intra-rater and inter-rater comparisons. The VITA Easyshade Advance intraoral spectrophotometer exhibited significantly better reliability compared with visual shade selection. This study evaluates the ability of observers to consistently match the same shade visually and with a dental spectrophotometer in different sessions. The intra-rater and inter-rater reliability (agreement of repeated shade matching) of visual and instrumental tooth color matching strongly suggest the use of color matching instruments as a supplementary tool in
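
    Fleiss' kappa, the statistic used above for visual inter-rater reliability, can be computed directly from a subject-by-category count matrix; the observer counts below are invented purely for illustration:

    ```python
    def fleiss_kappa(counts):
        """Fleiss' kappa for a list of per-subject category counts.
        counts[i][j] = number of raters assigning subject i to category j;
        every subject must be rated by the same number of raters."""
        n_subjects = len(counts)
        n_raters = sum(counts[0])
        # Per-subject agreement: fraction of rater pairs that agree
        p_i = [
            (sum(c * c for c in row) - n_raters) / (n_raters * (n_raters - 1))
            for row in counts
        ]
        p_bar = sum(p_i) / n_subjects
        # Chance agreement from overall category proportions
        totals = [sum(row[j] for row in counts) for j in range(len(counts[0]))]
        p_j = [t / (n_subjects * n_raters) for t in totals]
        p_e = sum(p * p for p in p_j)
        return (p_bar - p_e) / (1 - p_e)

    # Hypothetical data: 3 crowns, 4 observers, 3 candidate shades
    data = [[4, 0, 0], [2, 2, 0], [0, 0, 4]]
    print(round(fleiss_kappa(data), 3))
    ```

    Kappa corrects raw percent agreement for the agreement expected by chance, which is why it is preferred over the plain percentages when comparing raters.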

  11. Consistent guiding center drift theories

    International Nuclear Information System (INIS)

    Wimmel, H.K.

    1982-04-01

    Various guiding-center drift theories are presented that are optimized with respect to consistency. They satisfy exact energy conservation theorems (in time-independent fields), Liouville's theorems, and appropriate power balance equations. A theoretical framework is given that allows direct and exact derivation of the associated drift-kinetic equations from the respective guiding-center drift-orbit theories. These drift-kinetic equations are listed. Northrop's non-optimized theory is discussed for reference, and internal consistency relations of G.C. drift theories are presented. (orig.)

  12. Weak consistency and strong paraconsistency

    Directory of Open Access Journals (Sweden)

    Gemma Robles

    2009-11-01

    In a standard sense, consistency and paraconsistency are understood as, respectively, the absence of any contradiction and the absence of the ECQ (“E contradictione quodlibet”) rule that allows us to conclude any well-formed formula from any contradiction. The aim of this paper is to explain concepts of weak consistency alternative to the standard one, the concepts of paraconsistency related to them, and the concept of strong paraconsistency, all of which have been defined by the author together with José M. Méndez.

  13. Glass consistency and glass performance

    International Nuclear Information System (INIS)

    Plodinec, M.J.; Ramsey, W.G.

    1994-01-01

    Glass produced by the Defense Waste Processing Facility (DWPF) will have to be consistently more durable than a benchmark glass (evaluated using a short-term leach test), with high confidence. The DWPF has developed a Glass Product Control Program to comply with this specification. However, it is not clear what relevance product consistency has to long-term glass performance. In this report, the authors show that DWPF glass, produced in compliance with this specification, can be expected to effectively limit the release of soluble radionuclides to natural environments. However, the release of insoluble radionuclides to the environment will be limited by their solubility, and not by glass durability

  14. Reliability-Based Code Calibration

    DEFF Research Database (Denmark)

    Faber, M.H.; Sørensen, John Dalsgaard

    2003-01-01

    The present paper addresses fundamental concepts of reliability based code calibration. First, basic principles of structural reliability theory are introduced, and it is shown how the results of FORM based reliability analysis may be related to partial safety factors and characteristic values. Thereafter the code calibration problem is presented in its principal decision theoretical form, and it is discussed how acceptable levels of failure probability (or target reliabilities) may be established. Furthermore, suggested values for acceptable annual failure probabilities are given for ultimate and serviceability limit states. Finally the paper describes the Joint Committee on Structural Safety (JCSS) recommended procedure - CodeCal - for the practical implementation of reliability based code calibration of LRFD based design codes.

  15. Method for assessing reliability of a network considering probabilistic safety assessment

    International Nuclear Information System (INIS)

    Cepin, M.

    2005-01-01

    A method for assessment of the reliability of a network is developed which uses the features of fault tree analysis. The method is developed in such a way that growth of the network under consideration does not require a significant increase in the size of the model. The method is applied to small example networks consisting of a small number of nodes and a small number of connections. The results give the network reliability. They identify equipment which is to be carefully maintained so that network reliability is not reduced, and equipment which is a candidate for redundancy, as this would improve network reliability significantly. (author)
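
    As a minimal sketch of such a network reliability calculation (brute-force state enumeration, not the paper's fault-tree method), one can sum the probabilities of all component states in which source and sink remain connected; the five-edge bridge topology and the 0.95 edge reliability are assumptions for illustration:

    ```python
    from itertools import product

    def network_reliability(edges, p_work, source, sink):
        """Probability that source and sink stay connected when each
        edge independently works with probability p_work[edge]."""
        names = list(edges)
        total = 0.0
        for states in product([True, False], repeat=len(names)):
            prob = 1.0
            alive = []
            for name, up in zip(names, states):
                prob *= p_work[name] if up else 1.0 - p_work[name]
                if up:
                    alive.append(edges[name])
            if connected(alive, source, sink):
                total += prob
        return total

    def connected(alive_edges, source, sink):
        """Depth-first search over the surviving edges."""
        reach, frontier = {source}, [source]
        while frontier:
            node = frontier.pop()
            for a, b in alive_edges:
                for u, v in ((a, b), (b, a)):
                    if u == node and v not in reach:
                        reach.add(v)
                        frontier.append(v)
        return sink in reach

    # Hypothetical 5-edge bridge network, all edges 95% reliable
    edges = {"e1": ("s", "a"), "e2": ("s", "b"), "e3": ("a", "b"),
             "e4": ("a", "t"), "e5": ("b", "t")}
    r = network_reliability(edges, {e: 0.95 for e in edges}, "s", "t")
    print(f"{r:.6f}")
    ```

    Enumeration is exponential in the number of edges, which is precisely why the paper's fault-tree formulation matters for larger networks.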

  16. Design and implementation of reliability data system of emergency diesel generator for YGN 3,4

    International Nuclear Information System (INIS)

    Kim, S. H.; Jang, S. D.; Kim, G. Y.; Kim, T. W.; Kim, Y. H.; Jeong, H. J.; Choi, G. H.

    1998-01-01

    This paper describes the design and implementation of D2REAMS, which supports management and monitoring of the reliability data of the emergency diesel generators of the YGN 3,4 nuclear power plant. D2REAMS is a computerized reliability database management system for controlling the reliability of the emergency diesel generators of a nuclear power plant and consists of seven sub-modules. It was developed with intranet technology to eliminate the common problems of conventional client-server architecture. As a result of this implementation, reliability and unavailability can be computed automatically by D2REAMS from the stored test and operation data of the YGN 3,4 nuclear power plant.

  17. Reliability analysis of the automatic control and power supply of reactor equipment

    International Nuclear Information System (INIS)

    Monori, Pal; Nagy, J.A.; Meszaros, Zoltan; Konkoly, Laszlo; Szabo, Antal; Nagy, Laszlo

    1988-01-01

    Based on reliability analysis, the shortcomings of nuclear facilities are discovered. Fault trees constructed for the automatic control technology and for the power supply serve as input data of the ORCHARD 2 computer code. In order to characterize the reliability of the system, availability, failure rates and time intervals between failures are calculated. The results of the reliability analysis of the feedwater system of the Paks Nuclear Power Plant showed that the system consisted of elements of similar reliabilities. (V.N.) 8 figs.; 3 tabs

  18. Predicting risk and human reliability: a new approach

    International Nuclear Information System (INIS)

    Duffey, R.; Ha, T.-S.

    2009-01-01

    Learning from experience describes human reliability and skill acquisition, and the resulting theory has been validated by comparison against millions of outcome data points from multiple industries and technologies worldwide. The resulting predictions were used to benchmark the classic first-generation human reliability methods adopted in probabilistic risk assessments. The learning rate, probabilities and response times are also consistent with existing psychological models of human learning and error correction. The new approach also implies a finite lower-bound probability that is not predicted by empirical statistical distributions, which ignore the known and fundamental learning effects. (author)
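
    The finite lower-bound probability mentioned above is often represented by an exponential learning curve that decays toward an irreducible minimum; the functional form and every parameter value below are illustrative assumptions, not taken from the paper:

    ```python
    import math

    def failure_rate(experience, lam0=0.01, lam_min=5e-5, k=3.0):
        """Exponential learning curve: the outcome rate decays from an
        initial value lam0 toward a finite minimum lam_min as accumulated
        experience grows (all parameter values are illustrative)."""
        return lam_min + (lam0 - lam_min) * math.exp(-k * experience)

    # Rate at increasing (normalized, hypothetical) experience levels
    for eps in (0.0, 1.0, 5.0):
        print(f"experience={eps:.0f}  rate={failure_rate(eps):.2e}")
    ```

    The key qualitative point matches the abstract: no amount of experience drives the rate to zero, so the curve flattens at lam_min rather than at the origin.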

  19. Time-consistent actuarial valuations

    NARCIS (Netherlands)

    Pelsser, A.A.J.; Salahnejhad Ghalehjooghi, A.

    2016-01-01

    Time-consistent valuations (i.e. pricing operators) can be created by backward iteration of one-period valuations. In this paper we investigate the continuous-time limits of well-known actuarial premium principles when such backward iteration procedures are applied. This method is applied to an

  20. Large scale Bayesian nuclear data evaluation with consistent model defects

    International Nuclear Information System (INIS)

    Schnabel, G

    2015-01-01

    The aim of nuclear data evaluation is the reliable determination of cross sections and related quantities of the atomic nuclei. To this end, evaluation methods are applied which combine the information of experiments with the results of model calculations. The evaluated observables with their associated uncertainties and correlations are assembled into data sets, which are required for the development of novel nuclear facilities, such as fusion reactors for energy supply, and accelerator driven systems for nuclear waste incineration. The efficiency and safety of such future facilities is dependent on the quality of these data sets and thus also on the reliability of the applied evaluation methods. This work investigated the performance of the majority of available evaluation methods in two scenarios. The study indicated the importance of an essential component in these methods, which is the frequently ignored deficiency of nuclear models. Usually, nuclear models are based on approximations and thus their predictions may deviate from reliable experimental data. As demonstrated in this thesis, the neglect of this possibility in evaluation methods can lead to estimates of observables which are inconsistent with experimental data. Due to this finding, an extension of Bayesian evaluation methods is proposed to take into account the deficiency of the nuclear models. The deficiency is modeled as a random function in terms of a Gaussian process and combined with the model prediction. This novel formulation conserves sum rules and allows to explicitly estimate the magnitude of model deficiency. Both features are missing in available evaluation methods so far. Furthermore, two improvements of existing methods have been developed in the course of this thesis. The first improvement concerns methods relying on Monte Carlo sampling. 
A Metropolis-Hastings scheme with a specific proposal distribution is suggested, which proved to be more efficient in the studied scenarios than the
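The Metropolis-Hastings idea mentioned in this record can be illustrated with a minimal random-walk sampler. This is a generic sketch: the target (a standard normal log-posterior) and the Gaussian proposal width are illustrative assumptions, not the specific proposal distribution developed in the thesis.

```python
import math
import random

def metropolis_hastings(log_post, x0, n_steps, step=1.0):
    """Random-walk Metropolis-Hastings with a Gaussian proposal (sketch)."""
    x, lp = x0, log_post(x0)
    samples = []
    for _ in range(n_steps):
        cand = x + random.gauss(0.0, step)  # propose a move around the current state
        lp_cand = log_post(cand)
        # accept with probability min(1, p(cand) / p(x)), computed in log space
        if math.log(random.random()) < lp_cand - lp:
            x, lp = cand, lp_cand
        samples.append(x)
    return samples

# Toy target: standard normal posterior, log p(x) = -x^2/2 + const
random.seed(1)
draws = metropolis_hastings(lambda x: -0.5 * x * x, 0.0, 20000)
mean = sum(draws) / len(draws)
var = sum((d - mean) ** 2 for d in draws) / len(draws)
```

With enough steps the sample mean and variance approach those of the target distribution; the efficiency of such a scheme depends strongly on the choice of proposal, which is the point of the improvement described above.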

  1. Consistently violating the non-Gaussian consistency relation

    International Nuclear Information System (INIS)

    Mooij, Sander; Palma, Gonzalo A.

    2015-01-01

Non-attractor models of inflation are characterized by the super-horizon evolution of curvature perturbations, introducing a violation of the non-Gaussian consistency relation between the bispectrum's squeezed limit and the power spectrum's spectral index. In this work we show that the bispectrum's squeezed limit of non-attractor models continues to respect a relation dictated by the evolution of the background. We show how to derive this relation using only symmetry arguments, without ever needing to solve the equations of motion for the perturbations.

  2. Comment on the internal consistency of thermodynamic databases supporting repository safety assessments

    International Nuclear Information System (INIS)

    Arthur, R.C.

    2001-11-01

This report addresses the concept of internal consistency and its relevance to the reliability of thermodynamic databases used in repository safety assessments. In addition to being internally consistent, a reliable database should be accurate over a range of relevant temperatures and pressures, complete in the sense that all important aqueous species, gases and solid phases are represented, and traceable to original experimental results. No single definition of internal consistency need be universally accepted as the most appropriate under all conditions, however. As a result, two databases that are each internally consistent may be inconsistent with respect to each other, and a database derived from two or more such databases must itself be internally inconsistent. The consequences of alternative definitions that are reasonably attributable to the concept of internal consistency can be illustrated with reference to the thermodynamic database supporting SKB's recent SR 97 safety assessment. This database is internally inconsistent because it includes equilibrium constants calculated over a range of temperatures: using conflicting reference values for some solids, gases and aqueous species that are common to two internally consistent databases (the OECD/NEA database for radioelements and SUPCRT databases for non-radioactive elements) that serve as source databases for the SR 97 TDB, using different definitions in these source databases of standard states for condensed phases and aqueous species, based on different mathematical expressions used in these source databases representing the temperature dependence of the heat capacity, and based on different chemical models adopted in these source databases for the aqueous phase. The importance of such inconsistencies must be considered in relation to the other database reliability criteria noted above, however. 
Thus, accepting a certain level of internal inconsistency in a database, it is probably preferable to use a

  3. Comment on the internal consistency of thermodynamic databases supporting repository safety assessments

    Energy Technology Data Exchange (ETDEWEB)

    Arthur, R.C. [Monitor Scientific, LLC, Denver, CO (United States)

    2001-11-01

This report addresses the concept of internal consistency and its relevance to the reliability of thermodynamic databases used in repository safety assessments. In addition to being internally consistent, a reliable database should be accurate over a range of relevant temperatures and pressures, complete in the sense that all important aqueous species, gases and solid phases are represented, and traceable to original experimental results. No single definition of internal consistency need be universally accepted as the most appropriate under all conditions, however. As a result, two databases that are each internally consistent may be inconsistent with respect to each other, and a database derived from two or more such databases must itself be internally inconsistent. The consequences of alternative definitions that are reasonably attributable to the concept of internal consistency can be illustrated with reference to the thermodynamic database supporting SKB's recent SR 97 safety assessment. This database is internally inconsistent because it includes equilibrium constants calculated over a range of temperatures: using conflicting reference values for some solids, gases and aqueous species that are common to two internally consistent databases (the OECD/NEA database for radioelements and SUPCRT databases for non-radioactive elements) that serve as source databases for the SR 97 TDB, using different definitions in these source databases of standard states for condensed phases and aqueous species, based on different mathematical expressions used in these source databases representing the temperature dependence of the heat capacity, and based on different chemical models adopted in these source databases for the aqueous phase. The importance of such inconsistencies must be considered in relation to the other database reliability criteria noted above, however. Thus, accepting a certain level of internal inconsistency in a database, it is probably preferable to

  4. Self-consistent radial sheath

    International Nuclear Information System (INIS)

    Hazeltine, R.D.

    1988-12-01

The boundary layer arising in the radial vicinity of a tokamak limiter is examined, with special reference to the TEXT tokamak. It is shown that sheath structure depends upon the self-consistent effects of ion guiding-center orbit modification, as well as the radial variation of E × B-induced toroidal rotation. Reasonable agreement with experiment is obtained from an idealized model which, however simplified, preserves such self-consistent effects. It is argued that the radial sheath, which occurs whenever confining magnetic field-lines lie in the plasma boundary surface, is an object of some intrinsic interest. It differs from the more familiar axial sheath because magnetized charges respond very differently to parallel and perpendicular electric fields. 11 refs., 1 fig

  5. The reliability of WorkWell Systems Functional Capacity Evaluation: a systematic review

    Science.gov (United States)

    2014-01-01

Background: Functional capacity evaluation (FCE) determines a person’s ability to perform work-related tasks and is a major component of the rehabilitation process. The WorkWell Systems (WWS) FCE (formerly known as Isernhagen Work Systems FCE) is currently the most commonly used FCE tool in German rehabilitation centres. Our systematic review investigated the inter-rater, intra-rater and test-retest reliability of the WWS FCE. Methods: We performed a systematic literature search of studies on the reliability of the WWS FCE and extracted item-specific measures of inter-rater, intra-rater and test-retest reliability from the identified studies. Intraclass correlation coefficients ≥ 0.75, percentages of agreement ≥ 80%, and kappa coefficients ≥ 0.60 were categorised as acceptable; otherwise they were considered non-acceptable. The extracted values were summarised for the five performance categories of the WWS FCE, and the results were classified as either consistent or inconsistent. Results: From 11 identified studies, 150 item-specific reliability measures were extracted. 89% of the extracted inter-rater reliability measures, all of the intra-rater reliability measures and 96% of the test-retest reliability measures of the weight handling and strength tests had an acceptable level of reliability, compared to only 67% of the test-retest reliability measures of the posture/mobility tests and 56% of the test-retest reliability measures of the locomotion tests. Both of the extracted test-retest reliability measures of the balance test were acceptable. Conclusions: Weight handling and strength tests were found to have consistently acceptable reliability. Further research is needed to explore the reliability of the other tests as inconsistent findings or a lack of data prevented definitive conclusions. PMID:24674029
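The acceptability cutoffs stated in this review (ICC ≥ 0.75, percent agreement ≥ 80%, kappa ≥ 0.60) amount to a simple classification rule. The helper below is a hypothetical sketch of that rule, not code from the review itself:

```python
# Acceptability thresholds as stated in the review; the names are illustrative.
THRESHOLDS = {"icc": 0.75, "percent_agreement": 80.0, "kappa": 0.60}

def is_acceptable(measure: str, value: float) -> bool:
    """Classify a reliability estimate against the review's cutoff."""
    return value >= THRESHOLDS[measure]

def share_acceptable(measures):
    """Fraction of (measure, value) pairs meeting their cutoff."""
    hits = sum(1 for m, v in measures if is_acceptable(m, v))
    return hits / len(measures)

# e.g. three ICCs, two of which reach the 0.75 cutoff -> share of 2/3
share = share_acceptable([("icc", 0.80), ("icc", 0.75), ("icc", 0.60)])
```

Summary figures such as "89% of the extracted inter-rater reliability measures ... had an acceptable level of reliability" are exactly this kind of proportion over item-specific measures.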

  6. Reliability data banks

    International Nuclear Information System (INIS)

    Cannon, A.G.; Bendell, A.

    1991-01-01

Following an introductory chapter on reliability (what it is, why it is needed, how it is achieved and measured), the principles of reliability data bases and analysis methodologies are the subject of the next two chapters. Achievements due to the development of data banks are mentioned for different industries in the next chapter. FACTS, a comprehensive information system for industrial safety and reliability data collection in process plants, is covered next. CREDO, the Central Reliability Data Organization, is described in the next chapter and is indexed separately, as is the chapter on DANTE, the fabrication reliability data analysis system. Reliability data banks at Electricite de France and IAEA's experience in compiling a generic component reliability data base are also separately indexed. The European reliability data system, ERDS, and the development of a large data bank come next. The last three chapters look at 'Reliability data banks: friend, foe or a waste of time?' and future developments. (UK)

  7. Lagrangian multiforms and multidimensional consistency

    Energy Technology Data Exchange (ETDEWEB)

    Lobb, Sarah; Nijhoff, Frank [Department of Applied Mathematics, University of Leeds, Leeds LS2 9JT (United Kingdom)

    2009-10-30

    We show that well-chosen Lagrangians for a class of two-dimensional integrable lattice equations obey a closure relation when embedded in a higher dimensional lattice. On the basis of this property we formulate a Lagrangian description for such systems in terms of Lagrangian multiforms. We discuss the connection of this formalism with the notion of multidimensional consistency, and the role of the lattice from the point of view of the relevant variational principle.

  8. Deep Feature Consistent Variational Autoencoder

    OpenAIRE

    Hou, Xianxu; Shen, Linlin; Sun, Ke; Qiu, Guoping

    2016-01-01

    We present a novel method for constructing Variational Autoencoder (VAE). Instead of using pixel-by-pixel loss, we enforce deep feature consistency between the input and the output of a VAE, which ensures the VAE's output to preserve the spatial correlation characteristics of the input, thus leading the output to have a more natural visual appearance and better perceptual quality. Based on recent deep learning works such as style transfer, we employ a pre-trained deep convolutional neural net...

  9. Problem of nuclear power plant reliability

    International Nuclear Information System (INIS)

    Popyrin, L.S.; Nefedov, Yu.V.

    1989-01-01

The problem of substantiating rational means and methods of ensuring NPP reliability at the design stage has been studied. It is shown that the optimal level of NPP reliability is determined by a coordinated solution of the problems of optimizing the reliability of the power industry, heat and power supply, and nuclear power generation systems comprising NPPs, and of the problems of optimizing the reliability of the NPP proper as a complex engineering system. The conclusion is made that the greatest attention should be paid to the development of mathematical models of reliability that take into account different methods of equipment redundancy as well as the dependence of failures on various factors, to the improvement of NPP reliability indices, to the development of a data base, and to working out a complex of consistent reliability standards. 230 refs.; 2 figs.; 1 tab

  10. Evaluation of the quality of results obtained in institutions participating in interlaboratory experiments and of the reliability characteristics of the analytical methods used on the basis of certification of standard soil samples

    Energy Technology Data Exchange (ETDEWEB)

Parshin, A.K.; Obol'yaninova, V.G.; Sul'dina, N.P.

    1986-08-20

Rapid monitoring of the level of pollution of the environment and, especially, of soils necessitates preparation of standard samples (SS) close in properties and material composition to the objects to be analyzed. During 1978-1982 four sets (three types of samples in each) of State Standard Samples of different soils were developed: soddy-podzolic sandy-loamy, typical chernozem, krasnozem, and calcareous sierozem. The certification studies of the SS of the soils were carried out in accordance with the classical scheme of interlab experiment (ILE). More than 100 institutions were involved in the ILE and the total number of independent analytical results was of the order of 10^4. With such a volume of analytical information at their disposal they were able to find some general characteristics intrinsic to certification studies, to assess the quality of work of the ILE participants with due regard for their specialization, and the reliability characteristics of the analytical methods used.

  11. The Accelerator Reliability Forum

    CERN Document Server

    Lüdeke, Andreas; Giachino, R

    2014-01-01

High reliability is a very important goal for most particle accelerators. The biennial Accelerator Reliability Workshop covers topics related to the design and operation of particle accelerators with high reliability. In order to optimize the overall reliability of an accelerator one needs to gather information on the reliability of many different subsystems. While a biennial workshop can serve as a platform for the exchange of such information, the authors aimed to provide a further channel to allow for more timely communication: the Particle Accelerator Reliability Forum [1]. This contribution will describe the forum and advertise its usage in the community.

  12. Integrating reliability analysis and design

    International Nuclear Information System (INIS)

    Rasmuson, D.M.

    1980-10-01

    This report describes the Interactive Reliability Analysis Project and demonstrates the advantages of using computer-aided design systems (CADS) in reliability analysis. Common cause failure problems require presentations of systems, analysis of fault trees, and evaluation of solutions to these. Results have to be communicated between the reliability analyst and the system designer. Using a computer-aided design system saves time and money in the analysis of design. Computer-aided design systems lend themselves to cable routing, valve and switch lists, pipe routing, and other component studies. At EG and G Idaho, Inc., the Applicon CADS is being applied to the study of water reactor safety systems

  13. The value of reliability

    DEFF Research Database (Denmark)

    Fosgerau, Mogens; Karlström, Anders

    2010-01-01

We derive the value of reliability in the scheduling of an activity of random duration, such as travel under congested conditions. Using a simple formulation of scheduling utility, we show that the maximal expected utility is linear in the mean and standard deviation of trip duration, regardless of the form of the standardised distribution of trip durations. This insight provides a unification of the scheduling model and models that include the standard deviation of trip duration directly as an argument in the cost or utility function. The results generalise approximately to the case where the mean...

  14. Reliability assessment of Wind turbines

    DEFF Research Database (Denmark)

    Sørensen, John Dalsgaard

    2015-01-01

Wind turbines can be considered as structures that are in between civil engineering structures and machines since they consist of structural components and many electrical and machine components together with a control system. Further, a wind turbine is not a one-of-a-kind structure but manufactured in series production based on many component tests, some prototype tests and zero-series wind turbines. These characteristics influence the reliability assessment where focus in this paper is on the structural components. Levelized Cost Of Energy is very important for wind energy, especially when comparing to other energy sources. Therefore much focus is on cost reductions and improved reliability both for offshore and onshore wind turbines. The wind turbine components should be designed to have sufficient reliability level with respect to both extreme and fatigue loads but also not be too costly...

  15. Human Reliability Program Overview

    Energy Technology Data Exchange (ETDEWEB)

    Bodin, Michael

    2012-09-25

    This presentation covers the high points of the Human Reliability Program, including certification/decertification, critical positions, due process, organizational structure, program components, personnel security, an overview of the US DOE reliability program, retirees and academia, and security program integration.

  16. Reliability of software

    International Nuclear Information System (INIS)

    Kopetz, H.

    1980-01-01

Common factors and differences in the reliability of hardware and software; reliability increase by means of methods of software redundancy. Maintenance of software for long-term operating behavior. (HP) [de]

  17. Data processing of qualitative results from an interlaboratory comparison for the detection of "Flavescence dorée" phytoplasma: How the use of statistics can improve the reliability of the method validation process in plant pathology.

    Science.gov (United States)

    Chabirand, Aude; Loiseau, Marianne; Renaudin, Isabelle; Poliakoff, Françoise

    2017-01-01

A working group established in the framework of the EUPHRESCO European collaborative project aimed to compare and validate diagnostic protocols for the detection of "Flavescence dorée" (FD) phytoplasma in grapevines. Seven molecular protocols were compared in an interlaboratory test performance study where each laboratory had to analyze the same panel of samples consisting of DNA extracts prepared by the organizing laboratory. The tested molecular methods consisted of universal and group-specific real-time and end-point nested PCR tests. Different statistical approaches were applied to this collaborative study. Firstly, there was the standard statistical approach consisting of analyzing samples which are known to be positive and samples which are known to be negative and reporting the proportion of false-positive and false-negative results to calculate diagnostic specificity and sensitivity, respectively. This approach was supplemented by the calculation of repeatability and reproducibility for qualitative methods based on the notions of accordance and concordance. Other new approaches were also implemented, based, on the one hand, on the probability of detection model, and, on the other hand, on Bayes' theorem. These various statistical approaches are complementary and give consistent results. Their combination, and in particular the introduction of new statistical approaches, gives overall information on the performance and limitations of the different methods, and is particularly useful for selecting the most appropriate detection scheme with regard to the prevalence of the pathogen. Three real-time PCR protocols (methods M4, M5 and M6 respectively developed by Hren (2007), Pelletier (2009) and under patent oligonucleotides) achieved the highest levels of performance for FD phytoplasma detection. This paper also addresses the issue of indeterminate results and the identification of outlier results. The statistical tools presented in this paper and their

  18. Data processing of qualitative results from an interlaboratory comparison for the detection of "Flavescence dorée" phytoplasma: How the use of statistics can improve the reliability of the method validation process in plant pathology.

    Directory of Open Access Journals (Sweden)

    Aude Chabirand

Full Text Available A working group established in the framework of the EUPHRESCO European collaborative project aimed to compare and validate diagnostic protocols for the detection of "Flavescence dorée" (FD) phytoplasma in grapevines. Seven molecular protocols were compared in an interlaboratory test performance study where each laboratory had to analyze the same panel of samples consisting of DNA extracts prepared by the organizing laboratory. The tested molecular methods consisted of universal and group-specific real-time and end-point nested PCR tests. Different statistical approaches were applied to this collaborative study. Firstly, there was the standard statistical approach consisting of analyzing samples which are known to be positive and samples which are known to be negative and reporting the proportion of false-positive and false-negative results to calculate diagnostic specificity and sensitivity, respectively. This approach was supplemented by the calculation of repeatability and reproducibility for qualitative methods based on the notions of accordance and concordance. Other new approaches were also implemented, based, on the one hand, on the probability of detection model, and, on the other hand, on Bayes' theorem. These various statistical approaches are complementary and give consistent results. Their combination, and in particular the introduction of new statistical approaches, gives overall information on the performance and limitations of the different methods, and is particularly useful for selecting the most appropriate detection scheme with regard to the prevalence of the pathogen. Three real-time PCR protocols (methods M4, M5 and M6 respectively developed by Hren (2007), Pelletier (2009) and under patent oligonucleotides) achieved the highest levels of performance for FD phytoplasma detection. This paper also addresses the issue of indeterminate results and the identification of outlier results. The statistical tools presented in this paper
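The "standard statistical approach" described in this record, computing diagnostic sensitivity and specificity from samples of known status, can be sketched as follows; the data are invented purely for illustration:

```python
def diagnostic_performance(results, truths):
    """Diagnostic sensitivity/specificity from qualitative test results.

    results/truths are parallel lists of booleans (True = positive).
    Sensitivity = fraction of known positives detected;
    specificity = fraction of known negatives correctly reported negative.
    """
    tp = sum(1 for r, t in zip(results, truths) if r and t)
    fn = sum(1 for r, t in zip(results, truths) if not r and t)
    tn = sum(1 for r, t in zip(results, truths) if not r and not t)
    fp = sum(1 for r, t in zip(results, truths) if r and not t)
    return tp / (tp + fn), tn / (tn + fp)

# Toy panel: 8 known positives (one missed), 4 known negatives (one false positive)
res = [True] * 7 + [False] + [False] * 3 + [True]
true = [True] * 8 + [False] * 4
sens, spec = diagnostic_performance(res, true)
# sens = 7/8 = 0.875, spec = 3/4 = 0.75
```

In the interlaboratory setting, the same counts are pooled per protocol across laboratories before the proportions are reported.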

  19. Daily Behavior Report Cards: An Investigation of the Consistency of On-Task Data across Raters and Methods

    Science.gov (United States)

    Chafouleas, Sandra M.; Riley-Tillman, T. Chris; Sassu, Kari A.; LaFrance, Mary J.; Patwa, Shamim S.

    2007-01-01

    In this study, the consistency of on-task data collected across raters using either a Daily Behavior Report Card (DBRC) or systematic direct observation was examined to begin to understand the decision reliability of using DBRCs to monitor student behavior. Results suggested very similar conclusions might be drawn when visually examining data…

  20. Smallest detectable change and test-retest reliability of a self-reported outcome measure: Results of the Center for Epidemiologic Studies Depression Scale, General Self-Efficacy Scale, and 12-item General Health Questionnaire.

    Science.gov (United States)

    Ohno, Shotaro; Takahashi, Kana; Inoue, Aimi; Takada, Koki; Ishihara, Yoshiaki; Tanigawa, Masaru; Hirao, Kazuki

    2017-12-01

This study aims to examine the smallest detectable change (SDC) and test-retest reliability of the Center for Epidemiologic Studies Depression Scale (CES-D), General Self-Efficacy Scale (GSES), and 12-item General Health Questionnaire (GHQ-12). We tested 154 young adults at baseline and 2 weeks later. We calculated the intra-class correlation coefficients (ICCs) for test-retest reliability with a two-way random effects model for agreement. We then calculated the standard error of measurement (SEM) for agreement using the ICC formula. The SEM for agreement was used to calculate SDC values at the individual level (SDC_ind) and group level (SDC_group). The study participants included 137 young adults. The ICCs for all self-reported outcome measurement scales exceeded 0.70. The SEM of CES-D was 3.64, leading to an SDC_ind of 10.10 points and an SDC_group of 0.86 points. The SEM of GSES was 1.56, leading to an SDC_ind of 4.33 points and an SDC_group of 0.37 points. The SEM of GHQ-12 with bimodal scoring was 1.47, leading to an SDC_ind of 4.06 points and an SDC_group of 0.35 points. The SEM of GHQ-12 with Likert scoring was 2.44, leading to an SDC_ind of 6.76 points and an SDC_group of 0.58 points. To confirm that the change was not a result of measurement error, a score on a self-reported outcome measurement scale would need to change by an amount greater than these SDC values. This has important implications for clinicians and epidemiologists when assessing outcomes. © 2017 John Wiley & Sons, Ltd.
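The reported values are consistent with the conventional formulas SDC_ind = 1.96 × √2 × SEM and SDC_group = SDC_ind / √n, assuming n = 137 (the number of included participants). A sketch that reproduces the CES-D figures:

```python
import math

def sdc_from_sem(sem: float, n: int = 1):
    """Smallest detectable change from the standard error of measurement.

    SDC_ind = 1.96 * sqrt(2) * SEM (95% confidence, two measurement occasions);
    SDC_group = SDC_ind / sqrt(n) for a group of n subjects.
    """
    sdc_ind = 1.96 * math.sqrt(2) * sem
    return sdc_ind, sdc_ind / math.sqrt(n)

# CES-D: SEM = 3.64, n = 137 -> approximately 10.1 and 0.86 points
ces_d_ind, ces_d_group = sdc_from_sem(3.64, 137)
```

The same call with SEM = 1.56, 1.47, or 2.44 recovers the GSES and GHQ-12 values quoted above, up to rounding of the reported SEMs.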

  1. Reliable Design Versus Trust

    Science.gov (United States)

    Berg, Melanie; LaBel, Kenneth A.

    2016-01-01

This presentation focuses on reliability and trust for the user's portion of the FPGA design flow. It is assumed that the manufacturer tests FPGA internal components prior to hand-off to the user. The objective is to present the challenges of creating reliable and trusted designs. The following will be addressed: What makes a design vulnerable to functional flaws (reliability) or attackers (trust)? What are the challenges for verifying a reliable design versus a trusted design?

  2. Pocket Handbook on Reliability

    Science.gov (United States)

    1975-09-01

exponential distributions, Weibull distribution, estimating reliability, confidence intervals, reliability growth, OC curves, Bayesian analysis. A good introduction for those not familiar with reliability and a good refresher for those who are currently working in the area. Includes one or both of the following objectives: (a) prediction of the current system reliability, (b) projection of the system reliability for some future

  3. Nonspecialist Raters Can Provide Reliable Assessments of Procedural Skills

    DEFF Research Database (Denmark)

    Mahmood, Oria; Dagnæs, Julia; Bube, Sarah

    2018-01-01

was significant (p ...). Pearson's correlation was 0.77 for the nonspecialists and 0.75 for the specialists. The test-retest reliability showed the biggest difference between the 2 groups, 0.59 and 0.38 for the nonspecialist raters and the specialist raters, respectively (p ...). ... was chosen as it is a simple procedural skill that is crucial to master in a resident urology program. RESULTS: The internal consistency of assessments was high, Cronbach's α = 0.93 and 0.95 for nonspecialist and specialist raters, respectively (p ... correlations). The interrater reliability

  4. Principles of Bridge Reliability

    DEFF Research Database (Denmark)

    Thoft-Christensen, Palle; Nowak, Andrzej S.

    The paper gives a brief introduction to the basic principles of structural reliability theory and its application to bridge engineering. Fundamental concepts like failure probability and reliability index are introduced. Ultimate as well as serviceability limit states for bridges are formulated......, and as an example the reliability profile and a sensitivity analyses for a corroded reinforced concrete bridge is shown....

  5. Reliability in engineering '87

    International Nuclear Information System (INIS)

    Tuma, M.

    1987-01-01

    The participants heard 51 papers dealing with the reliability of engineering products. Two of the papers were incorporated in INIS, namely ''Reliability comparison of two designs of low pressure regeneration of the 1000 MW unit at the Temelin nuclear power plant'' and ''Use of probability analysis of reliability in designing nuclear power facilities.''(J.B.)

  6. Reliability analysis framework for computer-assisted medical decision systems

    International Nuclear Information System (INIS)

    Habas, Piotr A.; Zurada, Jacek M.; Elmaghraby, Adel S.; Tourassi, Georgia D.

    2007-01-01

    We present a technique that enhances computer-assisted decision (CAD) systems with the ability to assess the reliability of each individual decision they make. Reliability assessment is achieved by measuring the accuracy of a CAD system with known cases similar to the one in question. The proposed technique analyzes the feature space neighborhood of the query case to dynamically select an input-dependent set of known cases relevant to the query. This set is used to assess the local (query-specific) accuracy of the CAD system. The estimated local accuracy is utilized as a reliability measure of the CAD response to the query case. The underlying hypothesis of the study is that CAD decisions with higher reliability are more accurate. The above hypothesis was tested using a mammographic database of 1337 regions of interest (ROIs) with biopsy-proven ground truth (681 with masses, 656 with normal parenchyma). Three types of decision models, (i) a back-propagation neural network (BPNN), (ii) a generalized regression neural network (GRNN), and (iii) a support vector machine (SVM), were developed to detect masses based on eight morphological features automatically extracted from each ROI. The performance of all decision models was evaluated using the Receiver Operating Characteristic (ROC) analysis. The study showed that the proposed reliability measure is a strong predictor of the CAD system's case-specific accuracy. Specifically, the ROC area index for CAD predictions with high reliability was significantly better than for those with low reliability values. This result was consistent across all decision models investigated in the study. The proposed case-specific reliability analysis technique could be used to alert the CAD user when an opinion that is unlikely to be reliable is offered. 
The technique can be easily deployed in the clinical environment because it is applicable with a wide range of classifiers regardless of their structure and it requires neither additional
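The core idea of this record, estimating query-specific reliability as the CAD system's accuracy on the known cases nearest to the query in feature space, can be sketched with a simple k-nearest-neighbour helper. The data and function names are invented for illustration:

```python
import math

def local_reliability(query, known_cases, k=5):
    """Estimate query-specific CAD reliability as the system's accuracy
    on the k known cases nearest to the query in feature space.

    known_cases: list of (feature_vector, cad_was_correct) pairs.
    """
    def dist(a, b):
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))
    nearest = sorted(known_cases, key=lambda c: dist(c[0], query))[:k]
    return sum(1 for _, correct in nearest if correct) / k

# Toy example: the CAD system was correct on cases near the origin only
cases = [((0.1 * i, 0.0), i < 5) for i in range(10)]
rel = local_reliability((0.0, 0.0), cases, k=5)
# nearest five cases are i = 0..4, all correct -> reliability 1.0
```

A low value of this estimate is what would trigger the alert to the CAD user described above; the actual system uses an input-dependent selection of relevant cases rather than a fixed k.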

  7. Inter-Observer Reliability of DSM-5 Substance Use Disorders*

    Science.gov (United States)

    Denis, Cécile M.; Gelernter, Joel; Hart, Amy B.; Kranzler, Henry R.

    2015-01-01

Aims: Although studies have examined the impact of changes made in DSM-5 on the estimated prevalence of substance use disorder (SUD) diagnoses, there is limited evidence of the reliability of DSM-5 SUDs. We evaluated the inter-observer reliability of four DSM-5 SUDs in a sample in which we had previously evaluated the reliability of DSM-IV diagnoses, allowing us to compare the two systems. Methods: Two different interviewers each assessed 173 subjects over a 2-week period using the Semi-Structured Assessment for Drug Dependence and Alcoholism (SSADDA). Using the percent agreement and kappa (κ) coefficient, we examined the reliability of DSM-5 lifetime alcohol, opioid, cocaine, and cannabis use disorders, which we compared to that of SSADDA-derived DSM-IV SUD diagnoses. We also assessed the effect of additional lifetime SUD and lifetime mood or anxiety disorder diagnoses on the reliability of the DSM-5 SUD diagnoses. Results: Reliability was good to excellent for the four disorders, with κ values ranging from 0.65 to 0.94. Agreement was consistently lower for SUDs of mild severity than for moderate or severe disorders. DSM-5 SUD diagnoses showed greater reliability than DSM-IV diagnoses of abuse or dependence or dependence only. Co-occurring SUD and lifetime mood or anxiety disorders exerted a modest effect on the reliability of the DSM-5 SUD diagnoses. Conclusions: For alcohol, opioid, cocaine and cannabis use disorders, DSM-5 criteria and diagnoses are at least as reliable as those of DSM-IV. PMID:26048641
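The kappa (κ) coefficient used in this record is the standard chance-corrected agreement statistic. A minimal implementation for two raters' binary judgments, with invented example data:

```python
def cohens_kappa(a, b):
    """Cohen's kappa for two raters' binary (0/1) judgments (sketch)."""
    n = len(a)
    po = sum(1 for x, y in zip(a, b) if x == y) / n  # observed agreement
    # expected agreement under chance, from each rater's marginal rates
    pa = sum(a) / n
    pb = sum(b) / n
    pe = pa * pb + (1 - pa) * (1 - pb)
    return (po - pe) / (1 - pe)

# Two raters agreeing on 9 of 10 diagnoses
r1 = [1, 1, 1, 1, 1, 0, 0, 0, 0, 0]
r2 = [1, 1, 1, 1, 1, 1, 0, 0, 0, 0]
kappa = cohens_kappa(r1, r2)
# po = 0.9, pe = 0.5 -> kappa = 0.8
```

Because κ discounts chance agreement, it is lower than raw percent agreement when one category dominates, which is why both statistics are typically reported together, as in this study.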

  8. Maintaining consistency in distributed systems

    Science.gov (United States)

    Birman, Kenneth P.

    1991-01-01

    In systems designed as assemblies of independently developed components, concurrent access to data or data structures normally arises within individual programs, and is controlled using mutual exclusion constructs, such as semaphores and monitors. Where data is persistent and/or sets of operation are related to one another, transactions or linearizability may be more appropriate. Systems that incorporate cooperative styles of distributed execution often replicate or distribute data within groups of components. In these cases, group oriented consistency properties must be maintained, and tools based on the virtual synchrony execution model greatly simplify the task confronting an application developer. All three styles of distributed computing are likely to be seen in future systems - often, within the same application. This leads us to propose an integrated approach that permits applications that use virtual synchrony with concurrent objects that respect a linearizability constraint, and vice versa. Transactional subsystems are treated as a special case of linearizability.

  9. Optimal, Reliability-Based Code Calibration

    DEFF Research Database (Denmark)

    Sørensen, John Dalsgaard

    2002-01-01

    Reliability based code calibration is considered in this paper. It is described how the results of FORM based reliability analysis may be related to the partial safety factors and characteristic values. The code calibration problem is presented in a decision theoretical form and it is discussed how...... of reliability based code calibration of LRFD based design codes....

  10. Consistent histories and operational quantum theory

    International Nuclear Information System (INIS)

    Rudolph, O.

    1996-01-01

    In this work a generalization of the consistent histories approach to quantum mechanics is presented. We first critically review the consistent histories approach to nonrelativistic quantum mechanics in a mathematically rigorous way and give some general comments about it. We investigate to what extent the consistent histories scheme is compatible with the results of the operational formulation of quantum mechanics. According to the operational approach, nonrelativistic quantum mechanics is most generally formulated in terms of effects, states, and operations. We formulate a generalized consistent histories theory using the concepts and the terminology which have proven useful in the operational formulation of quantum mechanics. The logical rule of the logical interpretation of quantum mechanics is generalized to the present context. The algebraic structure of the generalized theory is studied in detail

  11. Do Health Systems Have Consistent Performance Across Locations and Is Consistency Associated With Higher Performance?

    Science.gov (United States)

    Crespin, Daniel J; Christianson, Jon B; McCullough, Jeffrey S; Finch, Michael D

    This study addresses whether health systems have consistent diabetes care performance across their ambulatory clinics and whether increasing consistency is associated with improvements in clinic performance. Study data included 2007 to 2013 diabetes care intermediate outcome measures for 661 ambulatory clinics in Minnesota and bordering states. Health systems provided more consistent performance, as measured by the standard deviation of performance for clinics in a system, relative to propensity score-matched proxy systems created for comparison purposes. No evidence was found that improvements in consistency were associated with higher clinic performance. The combination of high performance and consistent care is likely to enhance a health system's brand reputation, allowing it to better mitigate the financial risks of consumers seeking care outside the organization. These results suggest that larger health systems are most likely to deliver the combination of consistent and high-performance care. Future research should explore the mechanisms that drive consistent care within health systems.
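The consistency measure described in this record (the standard deviation of clinic performance within a system, lower meaning more consistent) can be sketched with hypothetical per-clinic rates:

```python
def consistency(scores):
    """Within-system consistency: population SD of clinic scores (lower = more consistent)."""
    m = sum(scores) / len(scores)
    return (sum((s - m) ** 2 for s in scores) / len(scores)) ** 0.5

# Hypothetical diabetes intermediate-outcome rates for clinics in two systems.
system_a = [0.62, 0.64, 0.63, 0.65]
system_b = [0.45, 0.75, 0.55, 0.70]
print(consistency(system_a) < consistency(system_b))  # True: system A is more consistent
```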

  12. Reliable computer systems.

    Science.gov (United States)

    Wear, L L; Pinkert, J R

    1993-11-01

    In this article, we looked at some decisions that apply to the design of reliable computer systems. We began with a discussion of several terms such as testability, then described some systems that call for highly reliable hardware and software. The article concluded with a discussion of methods that can be used to achieve higher reliability in computer systems. Reliability and fault tolerance in computers probably will continue to grow in importance. As more and more systems are computerized, people will want assurances about the reliability of these systems, and their ability to work properly even when sub-systems fail.

  13. Reliability analysis of shutdown system

    International Nuclear Information System (INIS)

    Kumar, C. Senthil; John Arul, A.; Pal Singh, Om; Suryaprakasa Rao, K.

    2005-01-01

    This paper presents the results of reliability analysis of the Shutdown System (SDS) of the Indian Prototype Fast Breeder Reactor. Reliability analysis carried out using Fault Tree Analysis predicts a value of 3.5 × 10^-8/de (per demand) for failure of the shutdown function in case of global faults and 4.4 × 10^-8/de for local faults. Based on 20 de/y, the frequency of shutdown function failure is 0.7 × 10^-6/ry, which meets the reliability target set by the Indian Atomic Energy Regulatory Board. The reliability is limited by Common Cause Failure (CCF) of the actuation part of the SDS and, to a lesser extent, CCF of electronic components. The failure frequency of individual systems is 10^-3/ry, which also meets the safety criteria. Uncertainty analysis indicates a maximum error factor of 5 for the top event unavailability.
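The per-reactor-year frequency quoted in this record follows directly from the per-demand failure probability and the assumed demand rate:

```python
p_global = 3.5e-8        # shutdown-function failure probability per demand (global faults)
demands_per_year = 20    # assumed demands per reactor-year
freq = p_global * demands_per_year
print(f"{freq:.1e} per reactor-year")  # 7.0e-07 per reactor-year
```

This reproduces the 0.7 × 10^-6/ry figure against which the regulatory reliability target is checked.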

  14. Reliability analysis in intelligent machines

    Science.gov (United States)

    Mcinroy, John E.; Saridis, George N.

    1990-01-01

    Given an explicit task to be executed, an intelligent machine must be able to find the probability of success, or reliability, of alternative control and sensing strategies. By using concepts from information theory and reliability theory, new techniques for finding the reliability corresponding to alternative subsets of control and sensing strategies are proposed such that a desired set of specifications can be satisfied. The analysis is straightforward, provided that a set of Gaussian random state variables is available. An example problem illustrates the technique, and general reliability results are presented for visual servoing with a computed torque-control algorithm. Moreover, the example illustrates the principle of increasing precision with decreasing intelligence at the execution level of an intelligent machine.

  15. Decentralized Consistent Updates in SDN

    KAUST Repository

    Nguyen, Thanh Dang

    2017-04-10

    We present ez-Segway, a decentralized mechanism to consistently and quickly update the network state while preventing forwarding anomalies (loops and blackholes) and avoiding link congestion. In our design, the centralized SDN controller only pre-computes information needed by the switches during the update execution. This information is distributed to the switches, which use partial knowledge and direct message passing to efficiently realize the update. This separation of concerns has the key benefit of improving update performance, as the communication and computation bottlenecks at the controller are removed. Our evaluations via network emulations and large-scale simulations demonstrate the efficiency of ez-Segway, which, compared to a centralized approach, improves network update times by up to 45% and 57% at the median and the 99th percentile, respectively. A deployment of a system prototype in a real OpenFlow switch and an implementation in P4 demonstrate the feasibility and low overhead of implementing simple network update functionality within switches.

  16. Supply chain reliability modelling

    Directory of Open Access Journals (Sweden)

    Eugen Zaitsev

    2012-03-01

    Background: Today it is virtually impossible to operate alone at the international level in the logistics business. This promotes the establishment and development of new integrated business entities - logistic operators. However, such cooperation within a supply chain also creates many problems related to supply chain reliability and the optimization of supply planning. The aim of this paper was to develop and formulate a mathematical model and algorithms to find the optimum plan of supplies, using an economic criterion together with a model for evaluating the probability of failure-free operation of the supply chain. Methods: The mathematical model and algorithms were developed and formulated using an economic criterion and a model for evaluating the probability of failure-free operation of the supply chain. Results and conclusions: The problem of ensuring failure-free performance of the goods supply channel analyzed in the paper is characteristic of distributed network systems that make active use of business process outsourcing technologies. The complex planning problem occurring in such systems, which requires taking into account the consumer's requirements for failure-free performance in terms of supply volumes and correctness, can be reduced to a relatively simple linear programming problem through logical analysis of the structures. The sequence of operations that should be taken into account during supply planning with the supplier's functional reliability was also presented.
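As a minimal illustration of evaluating the probability of failure-free operation of a supply chain, assuming independent serial links with hypothetical reliabilities (not the paper's model):

```python
import math

# Hypothetical per-link probabilities of failure-free operation in a serial chain.
link_reliability = [0.99, 0.97, 0.995, 0.98]

# Serial structure: the chain works only if every link works.
chain_reliability = math.prod(link_reliability)
print(round(chain_reliability, 4))  # 0.9364
```

Even with individually reliable links, the product shrinks as the chain lengthens, which is why failure-free performance must be planned at the chain level rather than per supplier.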

  17. Reliability and validity in a nutshell.

    Science.gov (United States)

    Bannigan, Katrina; Watson, Roger

    2009-12-01

    To explore and explain the different concepts of reliability and validity as they are related to measurement instruments in social science and health care. There are different concepts contained in the terms reliability and validity and these are often explained poorly and there is often confusion between them. To develop some clarity about reliability and validity a conceptual framework was built based on the existing literature. The concepts of reliability, validity and utility are explored and explained. Reliability contains the concepts of internal consistency, stability and equivalence. Validity contains the concepts of content, face, criterion, concurrent, predictive, construct, convergent (and divergent), factorial and discriminant. In addition, for clinical practice and research, it is essential to establish the utility of a measurement instrument. To use measurement instruments appropriately in clinical practice, the extent to which they are reliable, valid and usable must be established.
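The internal-consistency concept named in this record is commonly quantified with Cronbach's alpha; a minimal sketch over hypothetical item scores (one inner list per item, one entry per respondent):

```python
def cronbach_alpha(items):
    """Cronbach's alpha: k/(k-1) * (1 - sum of item variances / variance of totals)."""
    k = len(items)          # number of items
    n = len(items[0])       # number of respondents

    def var(xs):            # population variance
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / len(xs)

    item_var = sum(var(it) for it in items)
    totals = [sum(it[j] for it in items) for j in range(n)]
    return k / (k - 1) * (1 - item_var / var(totals))

# Hypothetical scores: 3 items rated by 4 respondents.
items = [[2, 4, 3, 5], [3, 4, 2, 5], [2, 5, 3, 4]]
print(round(cronbach_alpha(items), 3))  # 0.892
```

High alpha indicates the items move together across respondents, i.e. they plausibly measure one underlying construct.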

  18. MultiSIMNRA: A computational tool for self-consistent ion beam analysis using SIMNRA

    International Nuclear Information System (INIS)

    Silva, T.F.; Rodrigues, C.L.; Mayer, M.; Moro, M.V.; Trindade, G.F.; Aguirre, F.R.; Added, N.; Rizzutto, M.A.; Tabacniks, M.H.

    2016-01-01

    Highlights: • MultiSIMNRA enables the self-consistent analysis of multiple ion beam techniques. • Self-consistent analysis enables unequivocal and reliable modeling of the sample. • Four different computational algorithms available for model optimizations. • Definition of constraints enables to include prior knowledge into the analysis. - Abstract: SIMNRA is widely adopted by the scientific community of ion beam analysis for the simulation and interpretation of nuclear scattering techniques for material characterization. Taking advantage of its recognized reliability and quality of the simulations, we developed a computer program that uses multiple parallel sessions of SIMNRA to perform self-consistent analysis of data obtained by different ion beam techniques or in different experimental conditions of a given sample. In this paper, we present a result using MultiSIMNRA for a self-consistent multi-elemental analysis of a thin film produced by magnetron sputtering. The results demonstrate the potentialities of the self-consistent analysis and its feasibility using MultiSIMNRA.

  19. Reliability and validity of the Wolfram Unified Rating Scale (WURS)

    Directory of Open Access Journals (Sweden)

    Nguyen Chau

    2012-11-01

    Background: Wolfram syndrome (WFS) is a rare, neurodegenerative disease that typically presents with childhood-onset insulin-dependent diabetes mellitus, followed by optic atrophy, diabetes insipidus, deafness, and neurological and psychiatric dysfunction. There is no cure for the disease, but recent advances in research have improved understanding of the disease course. Measuring disease severity and progression with reliable and validated tools is a prerequisite for clinical trials of any new intervention for neurodegenerative conditions. To this end, we developed the Wolfram Unified Rating Scale (WURS) to measure the severity and individual variability of WFS symptoms. The aim of this study is to develop and test the reliability and validity of the WURS. Methods: A rating scale of disease severity in WFS was developed by modifying a standardized assessment for another neurodegenerative condition (Batten disease). WFS experts scored the representativeness of WURS items for the disease. The WURS was administered to 13 individuals with WFS (6-25 years of age). Motor, balance, mood and quality of life were also evaluated with standard instruments. Inter-rater reliability, internal consistency reliability, and concurrent, predictive and content validity of the WURS were calculated. Results: The WURS had high inter-rater reliability (ICCs>.93), moderate to high internal consistency reliability (Cronbach's α = 0.78-0.91), and demonstrated good concurrent and predictive validity. There were significant correlations between the WURS Physical Assessment and motor and balance tests (rs>.67, ps>.76, ps=-.86, p=.001). The WURS demonstrated acceptable content validity (Scale-Content Validity Index=0.83). Conclusions: These preliminary findings demonstrate that the WURS has acceptable reliability and validity and captures individual differences in disease severity in children and young adults with WFS.

  20. Rating scales for dystonia in cerebral palsy: reliability and validity.

    Science.gov (United States)

    Monbaliu, E; Ortibus, E; Roelens, F; Desloovere, K; Deklerck, J; Prinzie, P; de Cock, P; Feys, H

    2010-06-01

    This study investigated the reliability and validity of the Barry-Albright Dystonia Scale (BADS), the Burke-Fahn-Marsden Movement Scale (BFMMS), and the Unified Dystonia Rating Scale (UDRS) in patients with bilateral dystonic cerebral palsy (CP). Three raters independently scored videotapes of 10 patients (five males, five females; mean age 13 y 3 mo, SD 5 y 2 mo, range 5-22 y). One patient each was classified at levels I-IV in the Gross Motor Function Classification System and six patients were classified at level V. Reliability was measured by (1) intraclass correlation coefficient (ICC) for interrater reliability, (2) standard error of measurement (SEM) and smallest detectable difference (SDD), and (3) Cronbach's alpha for internal consistency. Validity was assessed by Pearson's correlations among the three scales used and by content analysis. Moderate to good interrater reliability was found for total scores of the three scales (ICC: BADS=0.87; BFMMS=0.86; UDRS=0.79). However, many subitems showed low reliability, in particular for the UDRS. SEM and SDD were respectively 6.36% and 17.72% for the BADS, 9.88% and 27.39% for the BFMMS, and 8.89% and 24.63% for the UDRS. High internal consistency was found. Pearson's correlations were high. Content validity showed insufficient accordance with the new CP definition and classification. Our results support the internal consistency and concurrent validity of the scales; however, taking into consideration the limitations in reliability, including the large SDD values and the content validity, further research on methods of assessment of dystonia is warranted.

  1. Transmission reliability faces future challenges

    International Nuclear Information System (INIS)

    Beaty, W.

    1993-01-01

    The recently published Washington International Energy Group's 1993 Electric Utility Outlook states that nearly one-third (31 percent) of U.S. utility executives expect reliability to decrease in the near future. Electric power system stability is crucial to reliability. Stability analysis determines whether a system will stay intact under normal operating conditions, during minor disturbances such as load fluctuations, and during major disturbances when one or more parts of the system fails. All system elements contribute to reliability or the lack of it. However, this report centers on the transmission segment of the electric system. The North American Electric Reliability Council (NERC) says the transmission systems as planned will be adequate over the next 10 years. However, delays in building new lines and increasing demands for transmission services are serious concerns. Reliability concerns exist in the Mid-Continent Area Power Pool and the Mid-America Interconnected Network regions where transmission facilities have not been allowed to be constructed as planned. Portions of the transmission systems in other regions are loaded at or near their limits. NERC further states that utilities must be allowed to complete planned generation and transmission as scheduled. A reliable supply of electricity also depends on adhering to established operating criteria. Factors that could complicate operations include: More interchange schedules resulting from increased transmission services. Increased line loadings in portions of the transmission systems. Proliferation of non-utility generators

  2. Reliability Evaluation for the Surface to Air Missile Weapon Based on Cloud Model

    Directory of Open Access Journals (Sweden)

    Deng Jianjun

    2015-01-01

    Fuzziness and randomness are integrated by using digital characteristics such as expected value (Ex), entropy (En) and hyper-entropy (He). A cloud model adapted to reliability evaluation of the surface-to-air missile weapon is put forward. The cloud scale of the qualitative evaluation is constructed, and the quantitative and qualitative variables in the system reliability evaluation are placed in correspondence. The practical calculation result shows that it is more effective to analyze the reliability of the surface-to-air missile weapon in this way. The result also shows that a model expressed by cloud theory is more consistent with the human style of reasoning under uncertainty.
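A minimal sketch of the normal cloud generator that underlies these digital characteristics; the Ex, En, and He values below are hypothetical, chosen only to illustrate how cloud drops scatter around the expected value:

```python
import random

def cloud_drops(ex, en, he, n, seed=0):
    """Normal cloud generator: draw En' ~ N(En, He), then a drop x ~ N(Ex, |En'|)."""
    rng = random.Random(seed)
    drops = []
    for _ in range(n):
        en_prime = rng.gauss(en, he)        # hyper-entropy perturbs the entropy
        drops.append(rng.gauss(ex, abs(en_prime)))
    return drops

# Hypothetical reliability grade: Ex=0.8, En=0.05, He=0.01.
drops = cloud_drops(ex=0.8, en=0.05, he=0.01, n=5000)
mean = sum(drops) / len(drops)
print(abs(mean - 0.8) < 0.01)  # True: drops cluster around the expected value
```

The entropy controls the spread of the drops, and the hyper-entropy controls how "fuzzy" that spread itself is, which is how the model carries both randomness and fuzziness in one representation.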

  3. Consistent Visual Analyses of Intrasubject Data

    Science.gov (United States)

    Kahng, SungWoo; Chung, Kyong-Mee; Gutshall, Katharine; Pitts, Steven C.; Kao, Joyce; Girolami, Kelli

    2010-01-01

    Visual inspection of single-case data is the primary method of interpretation of the effects of an independent variable on a dependent variable in applied behavior analysis. The purpose of the current study was to replicate and extend the results of DeProspero and Cohen (1979) by reexamining the consistency of visual analysis across raters. We…

  4. Consistent feeding positions of great tit parents

    NARCIS (Netherlands)

    Lessells, C.M.; Poelman, E.H.; Mateman, A.C.; Cassey, Ph.

    2006-01-01

    When parent birds arrive at the nest to provision their young, their position on the nest rim may influence which chick or chicks are fed. As a result, the consistency of feeding positions of the individual parents, and the difference in position between the parents, may affect how equitably food is

  5. The reliability of the Adelaide in-shoe foot model.

    Science.gov (United States)

    Bishop, Chris; Hillier, Susan; Thewlis, Dominic

    2017-07-01

    Understanding the biomechanics of the foot is essential for many areas of research and clinical practice such as orthotic interventions and footwear development. Despite the widespread attention paid to the biomechanics of the foot during gait, what largely remains unknown is how the foot moves inside the shoe. This study investigated the reliability of the Adelaide In-Shoe Foot Model, which was designed to quantify in-shoe foot kinematics and kinetics during walking. Intra-rater reliability was assessed in 30 participants over five walking trials whilst wearing shoes during two data collection sessions, separated by one week. Sufficient reliability for use was interpreted as a coefficient of multiple correlation and intra-class correlation coefficient of >0.61. Inter-rater reliability was investigated separately in a second sample of 10 adults by two researchers with experience in applying markers for the purpose of motion analysis. The results indicated good consistency in waveform estimation for most kinematic and kinetic data, as well as good inter- and intra-rater reliability. The exceptions were the peak medial ground reaction force, the minimum abduction angle and the peak abduction/adduction external hindfoot joint moments, which resulted in less than acceptable repeatability. Based on our results, the Adelaide In-Shoe Foot Model can be used with confidence for 24 commonly measured biomechanical variables during shod walking. Copyright © 2017 Elsevier B.V. All rights reserved.

  6. Reliability and safety engineering

    CERN Document Server

    Verma, Ajit Kumar; Karanki, Durga Rao

    2016-01-01

    Reliability and safety are core issues that must be addressed throughout the life cycle of engineering systems. Reliability and Safety Engineering presents an overview of the basic concepts, together with simple and practical illustrations. The authors present reliability terminology in various engineering fields, viz.,electronics engineering, software engineering, mechanical engineering, structural engineering and power systems engineering. The book describes the latest applications in the area of probabilistic safety assessment, such as technical specification optimization, risk monitoring and risk informed in-service inspection. Reliability and safety studies must, inevitably, deal with uncertainty, so the book includes uncertainty propagation methods: Monte Carlo simulation, fuzzy arithmetic, Dempster-Shafer theory and probability bounds. Reliability and Safety Engineering also highlights advances in system reliability and safety assessment including dynamic system modeling and uncertainty management. Cas...

  7. Standardizing the practice of human reliability analysis

    International Nuclear Information System (INIS)

    Hallbert, B.P.

    1993-01-01

    The practice of human reliability analysis (HRA) within the nuclear industry varies greatly in terms of posited mechanisms that shape human performance, methods of characterizing and analytically modeling human behavior, and the techniques that are employed to estimate the frequency with which human error occurs. This variation has been a source of contention among HRA practitioners regarding the validity of results obtained from different HRA methods. It has also resulted in attempts to develop standard methods and procedures for conducting HRAs. For many of the same reasons, the practice of HRA has not been standardized or has been standardized only to the extent that individual analysts have developed heuristics and consistent approaches in their practice of HRA. From the standpoint of consumers and regulators, this has resulted in a lack of clear acceptance criteria for the assumptions, modeling, and quantification of human errors in probabilistic risk assessments

  8. Reliability Analysis on NPP's Safety-Related Control Module with Field Data

    International Nuclear Information System (INIS)

    Lee, Sang Yong; Jung, Jae Hyun; Kim, Seong Hun

    2006-01-01

    The automatic control systems used in nuclear power plants (NPPs) consist of numerous control modules that can be considered a network of components interconnected in various complex ways. The control modules require relatively higher reliability than industrial electronic products. Reliability prediction provides a rational basis for system design and also indicates the safety significance of system operations. The aim of this paper is to minimize the deficiencies of the traditional reliability prediction method by using available field return data, making a more realistic reliability assessment possible. SAMCHANG Enterprise Company (SEC) has established a database containing high-quality module- and component-level data from module maintenance in NPPs. On this basis, this paper compares the results of adding failure records (field data) to the Telcordia SR-332 reliability prediction model with MIL-HDBK-217F prediction results.

  9. Human reliability analysis

    International Nuclear Information System (INIS)

    Dougherty, E.M.; Fragola, J.R.

    1988-01-01

    The authors present a treatment of human reliability analysis incorporating an introduction to probabilistic risk assessment for nuclear power generating stations. They treat the subject according to the framework established for general systems theory. Draws upon reliability analysis, psychology, human factors engineering, and statistics, integrating elements of these fields within a systems framework. Provides a history of human reliability analysis, and includes examples of the application of the systems approach

  10. Consistent ranking of volatility models

    DEFF Research Database (Denmark)

    Hansen, Peter Reinhard; Lunde, Asger

    2006-01-01

    We show that the empirical ranking of volatility models can be inconsistent for the true ranking if the evaluation is based on a proxy for the population measure of volatility. For example, the substitution of a squared return for the conditional variance in the evaluation of ARCH-type models can...... variance in out-of-sample evaluations rather than the squared return. We derive the theoretical results in a general framework that is not specific to the comparison of volatility models. Similar problems can arise in comparisons of forecasting models whenever the predicted variable is a latent variable....

  11. Operational safety reliability research

    International Nuclear Information System (INIS)

    Hall, R.E.; Boccio, J.L.

    1986-01-01

    Operating reactor events such as the TMI accident and the Salem automatic-trip failures raised the concern that during a plant's operating lifetime the reliability of systems could degrade from the design level that was considered in the licensing process. To address this concern, NRC is sponsoring the Operational Safety Reliability Research project. The objectives of this project are to identify the essential tasks of a reliability program and to evaluate the effectiveness and attributes of such a reliability program applicable to maintaining an acceptable level of safety during the operating lifetime at the plant

  12. Circuit design for reliability

    CERN Document Server

    Cao, Yu; Wirth, Gilson

    2015-01-01

    This book presents physical understanding, modeling and simulation, on-chip characterization, layout solutions, and design techniques that are effective to enhance the reliability of various circuit units.  The authors provide readers with techniques for state of the art and future technologies, ranging from technology modeling, fault detection and analysis, circuit hardening, and reliability management. Provides comprehensive review on various reliability mechanisms at sub-45nm nodes; Describes practical modeling and characterization techniques for reliability; Includes thorough presentation of robust design techniques for major VLSI design units; Promotes physical understanding with first-principle simulations.

  13. Web server's reliability improvements using recurrent neural networks

    DEFF Research Database (Denmark)

    Madsen, Henrik; Albu, Rǎzvan-Daniel; Felea, Ioan

    2012-01-01

    In this paper we describe an interesting approach to error prediction illustrated by experimental results. The application consists of monitoring the activity for the web servers in order to collect the specific data. Predicting an error with severe consequences for the performance of a server (t...... usage, network usage and memory usage. We collect different data sets from monitoring the web server's activity and for each one we predict the server's reliability with the proposed recurrent neural network. © 2012 Taylor & Francis Group...

  14. Self-consistent asset pricing models

    Science.gov (United States)

    Malevergne, Y.; Sornette, D.

    2007-08-01

    We discuss the foundations of factor or regression models in the light of the self-consistency condition that the market portfolio (and more generally the risk factors) is (are) constituted of the assets whose returns it is (they are) supposed to explain. As already reported in several articles, self-consistency implies correlations between the return disturbances. As a consequence, the alphas and betas of the factor model are unobservable. Self-consistency leads to renormalized betas with zero effective alphas, which are observable with standard OLS regressions. When the conditions derived from internal consistency are not met, the model is necessarily incomplete, which means that some sources of risk cannot be replicated (or hedged) by a portfolio of stocks traded on the market, even for infinite economies. Analytical derivations and numerical simulations show that, for arbitrary choices of the proxy which are different from the true market portfolio, a modified linear regression holds with a non-zero value αi at the origin between an asset i's return and the proxy's return. Self-consistency also introduces “orthogonality” and “normality” conditions linking the betas, alphas (as well as the residuals) and the weights of the proxy portfolio. Two diagnostics based on these orthogonality and normality conditions are implemented on a basket of 323 assets which have been components of the S&P500 in the period from January 1990 to February 2005. These two diagnostics show interesting departures from dynamical self-consistency starting about 2 years before the end of the Internet bubble. Assuming that the CAPM holds with the self-consistency condition, the OLS method automatically obeys the resulting orthogonality and normality conditions and therefore provides a simple way to self-consistently assess the parameters of the model by using proxy portfolios made only of the assets which are used in the CAPM regressions. Finally, the factor decomposition with the
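The OLS estimation of alphas and betas discussed in this record can be sketched with simulated returns; all figures below are synthetic, with the true beta chosen for the simulation and a zero true alpha:

```python
import random

rng = random.Random(7)
n = 2000
# Synthetic daily proxy-portfolio returns and one asset with known beta, zero alpha.
market = [rng.gauss(0.0005, 0.01) for _ in range(n)]
true_beta = 1.3
asset = [true_beta * m + rng.gauss(0, 0.005) for m in market]

# Standard OLS regression of asset returns on proxy returns.
mx = sum(market) / n
my = sum(asset) / n
beta = sum((m - mx) * (a - my) for m, a in zip(market, asset)) / \
       sum((m - mx) ** 2 for m in market)
alpha = my - beta * mx
print(abs(beta - true_beta) < 0.1, abs(alpha) < 0.001)  # True True
```

In the self-consistent setting the record describes, such regressions recover renormalized betas with effectively zero alphas; with a proxy that differs from the true market portfolio, a non-zero intercept can appear.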

  15. Metrological Reliability of Medical Devices

    Science.gov (United States)

    Costa Monteiro, E.; Leon, L. F.

    2015-02-01

    The prominent development of health technologies of the 20th century triggered demands for metrological reliability of physiological measurements comprising physical, chemical and biological quantities, essential to ensure accurate and comparable results of clinical measurements. In the present work, aspects concerning metrological reliability in premarket and postmarket assessments of medical devices are discussed, pointing out challenges to be overcome. In addition, considering the social relevance of the biomeasurements results, Biometrological Principles to be pursued by research and innovation aimed at biomedical applications are proposed, along with the analysis of their contributions to guarantee the innovative health technologies compliance with the main ethical pillars of Bioethics.

  16. Psychometrics and the neuroscience of individual differences: Internal consistency limits between-subjects effects.

    Science.gov (United States)

    Hajcak, Greg; Meyer, Alexandria; Kotov, Roman

    2017-08-01

    In the clinical neuroscience literature, between-subjects differences in neural activity are presumed to reflect reliable measures, even though the psychometric properties of neural measures are almost never reported. The current article focuses on the critical importance of assessing and reporting internal consistency reliability: the homogeneity of "items" that comprise a neural "score." We demonstrate how variability in the internal consistency of neural measures limits between-subjects (i.e., individual differences) effects. To this end, we utilize error-related brain activity (i.e., the error-related negativity or ERN) in both healthy and generalized anxiety disorder (GAD) participants to demonstrate options for psychometric analyses of neural measures; we examine between-groups differences in internal consistency, between-groups effect sizes, and between-groups discriminability (i.e., ROC analyses), all as a function of increasing items (i.e., number of trials). Overall, internal consistency should be used to inform experimental design and the choice of neural measures in individual differences research. The internal consistency of neural measures is necessary for interpreting results and guiding progress in clinical neuroscience, and should be routinely reported in all individual differences studies. (PsycINFO Database Record (c) 2017 APA, all rights reserved).
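One common way to estimate the internal consistency of a trial-based neural score such as the ERN is odd/even split-half reliability with a Spearman-Brown correction; the trial data below are simulated, not from the study:

```python
import random

def pearson(x, y):
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    vx = sum((a - mx) ** 2 for a in x)
    vy = sum((b - my) ** 2 for b in y)
    return cov / (vx * vy) ** 0.5

rng = random.Random(42)
# Simulate 50 subjects x 40 trials: a stable subject effect plus trial-level noise.
subjects = [[rng.gauss(true, 2.0) for _ in range(40)]
            for true in (rng.gauss(0, 1) for _ in range(50))]

odd = [sum(tr[1::2]) / 20 for tr in subjects]   # mean of odd-numbered trials
even = [sum(tr[0::2]) / 20 for tr in subjects]  # mean of even-numbered trials
r = pearson(odd, even)
sb = 2 * r / (1 + r)  # Spearman-Brown: reliability of the full-length score
print(0 < sb <= 1)    # True
```

As the record argues, this reliability grows with the number of trials, and it caps the between-subjects effects a neural score can show, since observed correlations cannot exceed the measure's reliability.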

  17. Intra- and Interobserver Reliability of Three Classification Systems for Hallux Rigidus.

    Science.gov (United States)

    Dillard, Sarita; Schilero, Christina; Chiang, Sharon; Pham, Peter

    2018-04-18

    There are over ten classification systems currently used in the staging of hallux rigidus, resulting in confusion and inconsistency in radiographic interpretation and treatment. The reliability of hallux rigidus classification systems has not yet been tested. The purpose of this study was to evaluate intra- and interobserver reliability of three commonly used classifications for hallux rigidus. Twenty-one plain radiograph sets were presented to ten ACFAS board-certified foot and ankle surgeons. Each physician classified each radiograph based on clinical experience and knowledge according to the Regnauld, Roukis, and Hattrup and Johnson classification systems. The two-way mixed single-measure consistency intraclass correlation was used to calculate intra- and interrater reliability. The intrarater reliability of individual sets for the Roukis and Hattrup and Johnson classification systems was "fair to good" (Roukis, 0.62±0.19; Hattrup and Johnson, 0.62±0.28), whereas the intrarater reliability of individual sets for the Regnauld system bordered between "fair to good" and "poor" (0.43±0.24). The interrater reliability of the mean classification was "excellent" for all three classification systems. Conclusions: Reliable and reproducible classification systems are essential for treatment and prognostic implications in hallux rigidus. In our study, the Roukis classification system had the best intrarater reliability. Although there are various classification systems for hallux rigidus, our results indicate that all three of these classification systems show reliability and reproducibility.
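    The "two-way mixed single-measure consistency" intraclass correlation used here, often written ICC(3,1), can be computed directly from the ANOVA mean squares. A minimal sketch (standard formula, not the authors' code):

    ```python
    import numpy as np

    def icc_consistency(ratings):
        """ICC(3,1): two-way mixed effects, single measures, consistency.
        `ratings` is an (n targets x k raters) table."""
        r = np.asarray(ratings, dtype=float)
        n, k = r.shape
        grand = r.mean()
        ss_rows = k * ((r.mean(axis=1) - grand) ** 2).sum()   # between targets
        ss_cols = n * ((r.mean(axis=0) - grand) ** 2).sum()   # between raters
        ss_err = ((r - grand) ** 2).sum() - ss_rows - ss_cols
        ms_rows = ss_rows / (n - 1)
        ms_err = ss_err / ((n - 1) * (k - 1))
        return (ms_rows - ms_err) / (ms_rows + (k - 1) * ms_err)

    # Two raters grading four radiograph sets: rater 2 is systematically one
    # grade higher, but the ordering agrees perfectly, so consistency ICC is 1.
    print(icc_consistency([[1, 2], [2, 3], [3, 4], [4, 5]]))  # -> 1.0
    ```

    Because the consistency form removes the rater main effect, a constant grading offset between surgeons does not lower the coefficient; the absolute-agreement form would.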

  18. Towards thermodynamical consistency of quasiparticle picture

    International Nuclear Information System (INIS)

    Biro, T.S.; Shanenko, A.A.; Toneev, V.D.; Research Inst. for Particle and Nuclear Physics, Hungarian Academy of Sciences, Budapest

    2003-01-01

    The purpose of the present article is to call attention to a realistic quasiparticle-based description of the quark/gluon matter and its consistent implementation in thermodynamics. A simple and transparent representation of the thermodynamical consistency conditions is given. This representation allows one to review critically and systematize available phenomenological approaches to the deconfinement problem with respect to their thermodynamical consistency. Particular attention is paid to the development of a method for treating the string screening in the dense matter of unbound color charges. The proposed method yields an integrable effective pair potential, which can be incorporated into the mean-field picture. The results of its application are in reasonable agreement with lattice data on the QCD thermodynamics.

  19. Toward thermodynamic consistency of quasiparticle picture

    International Nuclear Information System (INIS)

    Biro, T.S.; Toneev, V.D.; Shanenko, A.A.

    2003-01-01

    The purpose of the present article is to call attention to a realistic quasiparticle-based description of quark/gluon matter and its consistent implementation in thermodynamics. A simple and transparent representation of the thermodynamic consistency conditions is given. This representation allows one to review critically and systematize available phenomenological approaches to the deconfinement problem with respect to their thermodynamic consistency. Particular attention is paid to the development of a method for treating the string screening in the dense matter of unbound color charges. The proposed method yields an integrable effective pair potential that can be incorporated into the mean-field picture. The results of its application are in reasonable agreement with lattice data on the QCD thermodynamics

  20. Toward a consistent RHA-RPA

    International Nuclear Information System (INIS)

    Shepard, J.R.

    1991-01-01

    The authors examine the RPA based on a relativistic Hartree approximation description for nuclear ground states. This model includes contributions from the negative energy sea at the 1-loop level. They emphasize consistency between the treatment of the ground state and the RPA. This consistency is important in the description of low-lying collective levels but less important for the longitudinal (e, e') quasi-elastic response. They also study the effect of imposing a 3-momentum cutoff on negative energy sea contributions. A cutoff of twice the nucleon mass improves agreement with observed spin orbit splittings in nuclei compared to the standard infinite cutoff results, an effect traceable to the fact that imposing the cutoff reduces m*/m. The cutoff is much less important than consistency in the description of low-lying collective levels. The cutoff model provides excellent agreement with quasi-elastic (e, e') data

  1. Personalized recommendation based on unbiased consistence

    Science.gov (United States)

    Zhu, Xuzhen; Tian, Hui; Zhang, Ping; Hu, Zheng; Zhou, Tao

    2015-08-01

    Recently, in physical dynamics, mass-diffusion-based recommendation algorithms on bipartite networks have provided an efficient solution by automatically pushing possibly relevant items to users according to their past preferences. However, traditional mass-diffusion-based algorithms focus only on unidirectional mass diffusion from already-collected objects to those to be recommended, resulting in a biased causal similarity estimation and not-so-good performance. In this letter, we argue that in many cases a user's interests are stable, and thus bidirectional mass diffusion abilities, whether originating from already-collected objects or from those to be recommended, should be consistently powerful, showing unbiased consistence. We further propose a consistence-based mass diffusion algorithm via bidirectional diffusion against biased causality, outperforming state-of-the-art recommendation algorithms on disparate real data sets, including Netflix, MovieLens, Amazon and Rate Your Music.
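    The unidirectional baseline this letter improves on can be sketched in a few lines: one round of mass diffusion on the user-item bipartite network (items the target collected spread resource to users, users spread it back to items). This is the classic probabilistic-spreading scheme, shown with a toy data set; the paper's bidirectional consistence-based variant is a refinement of it and is not reproduced here.

    ```python
    def mass_diffusion_rank(target, collected):
        """Rank items for `target` by one round of unidirectional mass diffusion
        (items -> users -> items) on the user-item bipartite network."""
        items = set().union(*collected.values())
        item_deg = {i: sum(i in basket for basket in collected.values())
                    for i in items}
        # Step 1: each item the target collected holds one unit of resource,
        # split equally among the users who also collected it.
        user_res = {u: sum(1.0 / item_deg[i] for i in collected[target] if i in basket)
                    for u, basket in collected.items()}
        # Step 2: each user redistributes the received resource equally
        # over all items they collected.
        scores = {i: 0.0 for i in items}
        for u, basket in collected.items():
            for i in basket:
                scores[i] += user_res[u] / len(basket)
        # Recommend only items the target has not collected, best score first.
        return sorted((i for i in items if i not in collected[target]),
                      key=lambda i: (-scores[i], i))

    data = {'alice': {'a', 'b'}, 'bob': {'a', 'b', 'c'}, 'carol': {'c', 'd'}}
    print(mass_diffusion_rank('alice', data))  # -> ['c', 'd']
    ```

    Here 'c' outranks 'd' for alice because bob, who shares alice's history, carries resource to it; the diffusion direction is exactly the asymmetry the letter calls biased causality.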

  2. Validation of consistency of Mendelian sampling variance.

    Science.gov (United States)

    Tyrisevä, A-M; Fikse, W F; Mäntysaari, E A; Jakobsen, J; Aamand, G P; Dürr, J; Lidauer, M H

    2018-03-01

    Experiences from international sire evaluation indicate that the multiple-trait across-country evaluation method is sensitive to changes in genetic variance over time. Top bulls from birth year classes with inflated genetic variance will benefit, hampering reliable ranking of bulls. However, none of the methods available today enable countries to validate their national evaluation models for heterogeneity of genetic variance. We describe a new validation method to fill this gap comprising the following steps: estimating within-year genetic variances using Mendelian sampling and its prediction error variance, fitting a weighted linear regression between the estimates and the years under study, identifying possible outliers, and defining a 95% empirical confidence interval for a possible trend in the estimates. We tested the specificity and sensitivity of the proposed validation method with simulated data using a real data structure. Moderate (M) and small (S) size populations were simulated under 3 scenarios: a control with homogeneous variance and 2 scenarios with yearly increases in phenotypic variance of 2 and 10%, respectively. Results showed that the new method was able to estimate genetic variance accurately enough to detect bias in genetic variance. Under the control scenario, the trend in genetic variance was practically zero in setting M. Testing cows with an average birth year class size of more than 43,000 in setting M showed that tolerance values are needed for both the trend and the outlier tests to detect only cases with a practical effect in larger data sets. Regardless of the magnitude (yearly increases in phenotypic variance of 2 or 10%) of the generated trend, it deviated statistically significantly from zero in all data replicates for both cows and bulls in setting M. In setting S with a mean of 27 bulls in a year class, the sampling error and thus the probability of a false-positive result clearly increased. Still, overall estimated genetic
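    The core of the proposed validation step, a weighted linear regression of within-year genetic variance estimates on birth year, can be sketched as follows. The data are simulated with a generated trend; in practice the weights come from the prediction error variances of the Mendelian sampling estimates (here taken equal for simplicity), and the paper adds outlier tests and a simulation-based 95% empirical confidence interval that are not reproduced here.

    ```python
    import numpy as np

    rng = np.random.default_rng(42)

    years = np.arange(2000, 2015)
    # Hypothetical within-year genetic variance estimates with a generated
    # 2%-per-year increase plus a small estimation error.
    gen_var = 1.0 * (1.02 ** (years - 2000)) + rng.normal(0.0, 0.01, years.size)
    # Weights ~ inverse prediction error variance of each yearly estimate;
    # np.polyfit applies them multiplicatively to the residuals.
    w = np.full(years.size, 1.0)

    slope, intercept = np.polyfit(years - 2000, gen_var, 1, w=w)
    print(round(float(slope), 3))
    ```

    A fitted slope deviating from zero beyond the empirical confidence band would flag heterogeneous genetic variance in the national evaluation.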

  3. On estimation of reliability of a nuclear power plant with tokamak reactor

    International Nuclear Information System (INIS)

    Klemin, A.I.; Smetannikov, V.P.; Shiverskij, E.A.

    1982-01-01

    The results of the analysis of INTOR plant reliability are presented. The first stage of the analysis consists in calculating the INTOR plant structural reliability factors (15 of its main systems were considered). For each system the failure flow parameter (W (1/h)) and the operational readiness Ksub(r) were determined; for the plant as a whole, the technological utilization coefficient Ksub(TU) and the mean time between failures Tsub(o) were determined in addition to these factors. The second stage of the reliability analysis consists in investigating methods of improving the plant's reliability factors relative to those calculated at the first stage. It is shown that the reliability of the whole plant is determined to the most essential extent by the power supply system reliability, followed in extent of influence by the cryogenic system. Calculations of the INTOR plant reliability factors have given the following values: W = 4.5x10⁻³ 1/h, Tsub(o) = 152 h, Ksub(r) = 0.71, Ksub(TU) = 0.4.
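    The readiness factor reported here follows the standard steady-state relation K_r = MTBF / (MTBF + MTTR). A minimal sketch; the MTTR value below is hypothetical, chosen only so the result matches the reported Ksub(r) = 0.71 given Tsub(o) = 152 h, and the series-logic combination assumes independent subsystems, which the original analysis may not.

    ```python
    def availability(mtbf_h, mttr_h):
        """Steady-state operational readiness: K_r = MTBF / (MTBF + MTTR)."""
        return mtbf_h / (mtbf_h + mttr_h)

    def series_availability(system_ks):
        """If the plant is up only when every subsystem is up (series logic,
        independence assumed), the readiness factors multiply."""
        prod = 1.0
        for k in system_ks:
            prod *= k
        return prod

    # Reported INTOR figures: Tsub(o) = 152 h, Ksub(r) = 0.71; an MTTR of
    # 62 h is a hypothetical value chosen here to reproduce that readiness.
    print(round(availability(152.0, 62.0), 2))  # -> 0.71
    ```

    The multiplicative series form also shows why the least reliable subsystem (here the power supply) dominates the plant-level figure.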

  4. Scale for positive aspects of caregiving experience: development, reliability, and factor structure.

    Science.gov (United States)

    Kate, N; Grover, S; Kulhara, P; Nehra, R

    2012-06-01

    OBJECTIVE. To develop an instrument (Scale for Positive Aspects of Caregiving Experience [SPACE]) that evaluates positive caregiving experience and assess its psychometric properties. METHODS. Available scales which assess some aspects of positive caregiving experience were reviewed and a 50-item questionnaire with a 5-point rating was constructed. In all, 203 primary caregivers of patients with severe mental disorders were asked to complete the questionnaire. Internal consistency, test-retest reliability, cross-language reliability, split-half reliability, and face validity were evaluated. Principal component factor analysis was run to assess the factorial validity of the scale. RESULTS. The scale developed as part of the study was found to have good internal consistency, test-retest reliability, cross-language reliability, split-half reliability, and face validity. Principal component factor analysis yielded a 4-factor structure, which also had good test-retest reliability and cross-language reliability. There was a strong correlation between the 4 factors obtained. CONCLUSION. The SPACE developed as part of this study has good psychometric properties.
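    The internal consistency reported for the SPACE is conventionally Cronbach's alpha. A self-contained sketch of the standard formula (the respondent data below are invented for illustration, not from the study):

    ```python
    def cronbach_alpha(item_scores):
        """Cronbach's alpha; `item_scores` holds one list of respondent
        scores per item (all items answered by the same respondents)."""
        k = len(item_scores)
        n = len(item_scores[0])

        def var(xs):
            m = sum(xs) / len(xs)
            return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)

        totals = [sum(item[j] for item in item_scores) for j in range(n)]
        return k / (k - 1) * (1 - sum(var(it) for it in item_scores) / var(totals))

    # Three hypothetical 5-point items answered by five caregivers.
    items = [
        [4, 3, 5, 2, 4],
        [5, 3, 4, 2, 5],
        [4, 2, 5, 3, 4],
    ]
    print(round(cronbach_alpha(items), 2))  # -> 0.89
    ```

    Alpha rises when items covary strongly relative to their individual variances, which is what "good internal consistency" summarizes for a multi-item scale like the SPACE.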

  5. Reliability of the Cooking Task in adults with acquired brain injury.

    Science.gov (United States)

    Poncet, Frédérique; Swaine, Bonnie; Taillefer, Chantal; Lamoureux, Julie; Pradat-Diehl, Pascale; Chevignard, Mathilde

    2015-01-01

    Acquired brain injury (ABI) often leads to deficits in executive functioning (EF) responsible for severe and long-standing disabilities in daily life activities. The Cooking Task is an ecological and valid test of EF involving multi-tasking in a real environment. Given its complex scoring system, it is important to establish the tool's reliability. The objective of the study was to examine the reliability of the Cooking Task (internal consistency, inter-rater and test-retest reliability). A total of 160 patients with ABI (113 men, mean age 37 years, SD = 14.3) were tested using the Cooking Task. For test-retest reliability, patients were assessed by the same rater on two occasions (mean interval 11 days) while two raters independently and simultaneously observed and scored patients' performances to estimate inter-rater reliability. Internal consistency was high for the global scale (Cronbach α = .74). Inter-rater reliability (n = 66) for total errors was also high (ICC = .93), however the test-retest reliability (n = 11) was poor (ICC = .36). In general the Cooking Task appears to be a reliable tool. The low test-retest results were expected given the importance of EF in the performance of novel tasks.

  6. Reliability Based Ship Structural Design

    DEFF Research Database (Denmark)

    Dogliani, M.; Østergaard, C.; Parmentier, G.

    1996-01-01

    This paper deals with the development of different methods that allow the reliability-based design of ship structures to be transferred from the area of research to the systematic application in current design. It summarises the achievements of a three-year collaborative research project dealing...... with developments of models of load effects and of structural collapse adopted in reliability formulations which aim at calibrating partial safety factors for ship structural design. New probabilistic models of still-water load effects are developed both for tankers and for containerships. New results are presented...... structure of several tankers and containerships. The results of the reliability analysis were the basis for the definition of a target safety level which was used to asses the partial safety factors suitable for in a new design rules format to be adopted in modern ship structural design. Finally...

  7. Reliability Modeling of Wind Turbines

    DEFF Research Database (Denmark)

    Kostandyan, Erik

    Cost reductions for offshore wind turbines are a substantial requirement in order to make offshore wind energy more competitive compared to other energy supply methods. During the 20 – 25 years of wind turbines useful life, Operation & Maintenance costs are typically estimated to be a quarter...... for Operation & Maintenance planning. Concentrating efforts on development of such models, this research is focused on reliability modeling of Wind Turbine critical subsystems (especially the power converter system). For reliability assessment of these components, structural reliability methods are applied...... to one third of the total cost of energy. Reduction of Operation & Maintenance costs will result in significant cost savings and result in cheaper electricity production. Operation & Maintenance processes mainly involve actions related to replacements or repair. Identifying the right times when...

  8. Reliability Characteristics of Power Plants

    Directory of Open Access Journals (Sweden)

    Zbynek Martinek

    2017-01-01

    This paper describes the phenomenon of reliability of power plants. It explains the terms connected with this topic, as their proper understanding is important for understanding the relations and equations which model possible real situations. The reliability phenomenon is analysed using both the exponential distribution and the Weibull distribution. The results of our analysis are specific equations giving information about the characteristics of the power plants, the mean time of operation and the probability of failure-free operation. Equations solved for the Weibull distribution respect the failures as well as the actual operating hours. Thanks to our results, we are able to create a model of dynamic reliability for prediction of future states. It can be useful for improving the current situation of the unit as well as for creating an optimal maintenance plan, and thus have an impact on the overall economics of the operation of these power plants.
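    The two distributions the paper works with can be sketched through the standard Weibull reliability quantities; the exponential case is recovered at shape beta = 1. Formulas are textbook-standard; the parameter values below are illustrative only.

    ```python
    import math

    def weibull_reliability(t, beta, eta):
        """Probability of failure-free operation through time t
        (shape beta, scale eta)."""
        return math.exp(-((t / eta) ** beta))

    def weibull_mttf(beta, eta):
        """Mean time to failure: eta * Gamma(1 + 1/beta)."""
        return eta * math.gamma(1.0 + 1.0 / beta)

    # beta = 1 recovers the exponential (constant failure rate) model;
    # beta > 1 models wear-out, beta < 1 early-life failures.
    print(round(weibull_reliability(1000.0, 1.0, 8760.0), 3))  # exponential case
    print(round(weibull_mttf(2.0, 8760.0), 1))                 # wear-out case
    ```

    Fitting beta and eta to the recorded failures and actual operating hours, as the paper does, yields exactly these two outputs: the mean time of operation and the probability of failure-free operation.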

  9. Validation of Land Cover Products Using Reliability Evaluation Methods

    Directory of Open Access Journals (Sweden)

    Wenzhong Shi

    2015-06-01

    Validation of land cover products is a fundamental task prior to data applications. Current validation schemes and methods are, however, suited only for assessing classification accuracy and disregard the reliability of land cover products. The reliability evaluation of land cover products should be undertaken to provide reliable land cover information. In addition, the lack of high-quality reference data often constrains validation and affects the reliability results of land cover products. This study proposes a validation scheme to evaluate the reliability of land cover products, comprising two methods: result reliability evaluation and process reliability evaluation. Result reliability evaluation computes the reliability of land cover products using seven reliability indicators. Process reliability evaluation analyzes the reliability propagation in the data production process to obtain the reliability of land cover products. Fuzzy fault tree analysis is introduced and improved in the reliability analysis of the data production process. Research results show that the proposed reliability evaluation scheme is reasonable and can be applied to validate land cover products. Through the analysis of the seven indicators of result reliability evaluation, more information on land cover can be obtained for strategic decision-making and planning, compared with traditional accuracy assessment methods. Process reliability evaluation, which requires no reference data, can facilitate validation and reflect the change trends of reliabilities to some extent.

  10. Long-term reliability of the visual EEG Poffenberger paradigm.

    Science.gov (United States)

    Friedrich, Patrick; Ocklenburg, Sebastian; Mochalski, Lisa; Schlüter, Caroline; Güntürkün, Onur; Genc, Erhan

    2017-07-14

    The Poffenberger paradigm is a simple perception task that is used to estimate the speed of information transfer between the two hemispheres, the so-called interhemispheric transfer time (IHTT). Although the original paradigm is a behavioral task, it can be combined with electroencephalography (EEG) to assess the underlying neurophysiological processes during task execution. While older studies have supported the validity of both versions of the paradigm for investigating interhemispheric interactions, their long-term reliability has not been assessed systematically before. The present study aims to fill this gap by determining both internal consistency and long-term test-retest reliability of IHTTs produced by the two different versions of the Poffenberger paradigm in a sample of 26 healthy subjects. The results show high reliability for the EEG Poffenberger paradigm. In contrast, reliability measures for the behavioral Poffenberger paradigm were low. Hence, our results indicate that electrophysiological measures of interhemispheric transfer are more reliable than behavioral measures; the latter should be used with caution in research investigating inter-individual differences in neurocognitive measures. Copyright © 2017 Elsevier B.V. All rights reserved.
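    In the behavioral version, the IHTT is estimated as the crossed-uncrossed difference (CUD): mean reaction time when stimulus and responding hand are on opposite sides minus when they are on the same side. A minimal sketch with invented reaction times; behaviorally estimated CUDs reported in the literature are typically only a few milliseconds, part of why their reliability is fragile.

    ```python
    def crossed_uncrossed_difference(crossed_rts, uncrossed_rts):
        """Behavioral IHTT estimate: mean crossed RT minus mean uncrossed RT."""
        mean = lambda xs: sum(xs) / len(xs)
        return mean(crossed_rts) - mean(uncrossed_rts)

    # Hypothetical reaction times in milliseconds.
    cud = crossed_uncrossed_difference([312, 305, 298, 321], [308, 301, 296, 315])
    print(cud)  # -> 4.0
    ```

    Because the CUD is a small difference between two noisy means, trial-level RT variability swamps it easily, consistent with the low behavioral reliabilities this study reports.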

  11. Business of reliability

    Science.gov (United States)

    Engel, Pierre

    1999-12-01

    The presentation is organized around three themes: (1) The decrease of reception equipment costs allows non-remote-sensing organizations to access a technology until recently reserved for a scientific elite. What this means is the rise of 'operational' executive agencies considering space-based technology and operations as a viable input to their daily tasks. This is possible thanks to totally dedicated ground receiving entities focusing on one application for themselves, rather than serving a vast community of users. (2) The multiplication of earth observation platforms will form the basis for reliable technical and financial solutions. One obstacle to the growth of the earth observation industry is the variety of policies (commercial versus non-commercial) ruling the distribution of the data and value-added products. In particular, the high volume of data sales required for the return on investment conflicts with the traditional low-volume data use of most applications. Constant access to data sources supposes monitoring needs as well as technical proficiency. (3) Large-volume use of data coupled with low-cost equipment is only possible when the technology has proven reliable, in terms of application results, financial risks and data supply. Each of these factors is reviewed. The expectation is that international cooperation between agencies and private ventures will pave the way for future business models. As an illustration, the presentation proposes to use some recent non-traditional monitoring applications that may lead to significant use of earth observation data, value-added products and services: flood monitoring, ship detection, marine oil pollution deterrent systems and rice acreage monitoring.

  12. Dictionary-based fiber orientation estimation with improved spatial consistency.

    Science.gov (United States)

    Ye, Chuyang; Prince, Jerry L

    2018-02-01

    Diffusion magnetic resonance imaging (dMRI) has enabled in vivo investigation of white matter tracts. Fiber orientation (FO) estimation is a key step in tract reconstruction and has been a popular research topic in dMRI analysis. In particular, the sparsity assumption has been used in conjunction with a dictionary-based framework to achieve reliable FO estimation with a reduced number of gradient directions. Because image noise can have a deleterious effect on the accuracy of FO estimation, previous works have incorporated spatial consistency of FOs in the dictionary-based framework to improve the estimation. However, because FOs are only indirectly determined from the mixture fractions of dictionary atoms and not modeled as variables in the objective function, these methods do not incorporate FO smoothness directly, and their ability to produce smooth FOs could be limited. In this work, we propose an improvement to Fiber Orientation Reconstruction using Neighborhood Information (FORNI), which we call FORNI+; this method estimates FOs in a dictionary-based framework where FO smoothness is better enforced than in FORNI alone. We describe an objective function that explicitly models the actual FOs and the mixture fractions of dictionary atoms. Specifically, it consists of data fidelity between the observed signals and the signals represented by the dictionary, pairwise FO dissimilarity that encourages FO smoothness, and weighted ℓ1-norm terms that ensure the consistency between the actual FOs and the FO configuration suggested by the dictionary representation. The FOs and mixture fractions are then jointly estimated by minimizing the objective function using an iterative alternating optimization strategy. FORNI+ was evaluated on a simulation phantom, a physical phantom, and real brain dMRI data. In particular, in the real brain dMRI experiment, we have qualitatively and quantitatively evaluated the reproducibility of the proposed method. Results demonstrate that

  13. Reliability and validity of the de Morton Mobility Index in individuals with sub-acute stroke.

    Science.gov (United States)

    Braun, Tobias; Marks, Detlef; Thiel, Christian; Grüneberg, Christian

    2018-02-04

    To establish the validity and reliability of the de Morton Mobility Index (DEMMI) in patients with sub-acute stroke. This cross-sectional study was performed in a neurological rehabilitation hospital. We assessed unidimensionality, construct validity, internal consistency reliability, inter-rater reliability, minimal detectable change and possible floor and ceiling effects of the DEMMI in adult patients with sub-acute stroke. The study included a total sample of 121 patients with sub-acute stroke. We analysed validity (n = 109) and reliability (n = 51) in two sub-samples. Rasch analysis indicated unidimensionality with an overall fit to the model (chi-square = 12.37, p = 0.577). All hypotheses on construct validity were confirmed. Internal consistency reliability (Cronbach's alpha = 0.94) and inter-rater reliability (intraclass correlation coefficient = 0.95; 95% confidence interval: 0.92-0.97) were excellent. The minimal detectable change with 90% confidence was 13 points. No floor or ceiling effects were evident. These results indicate unidimensionality, sufficient internal consistency reliability, inter-rater reliability, and construct validity of the DEMMI in patients with sub-acute stroke. Advantages of the DEMMI in clinical application are the short administration time, no need for special equipment and interval-level data. The de Morton Mobility Index, therefore, may be a useful performance-based bedside test to measure mobility in individuals with sub-acute stroke across the whole mobility spectrum. Implications for Rehabilitation: The de Morton Mobility Index (DEMMI) is a unidimensional measurement instrument of mobility in individuals with sub-acute stroke. The DEMMI has excellent internal consistency and inter-rater reliability, and sufficient construct validity. The minimal detectable change of the DEMMI with 90% confidence in stroke rehabilitation is 13 points. The lack of any floor or ceiling effects on hospital admission indicates
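    The minimal detectable change reported here follows the standard relation MDC = z · √2 · SEM with SEM = SD · √(1 − ICC). A sketch of that arithmetic; the baseline SD of 25 points below is a hypothetical value chosen only to show that, with the reported ICC of 0.95, it reproduces an MDC90 near the reported 13 points.

    ```python
    import math

    def sem(sd, icc):
        """Standard error of measurement."""
        return sd * math.sqrt(1.0 - icc)

    def mdc(sd, icc, z=1.645):
        """Minimal detectable change at the confidence implied by z
        (z = 1.645 gives the 90% level, i.e. MDC90)."""
        return z * math.sqrt(2.0) * sem(sd, icc)

    print(round(mdc(25.0, 0.95), 1))  # -> 13.0
    ```

    The √2 factor accounts for measurement error entering both the test and the retest score, so a change smaller than the MDC cannot be distinguished from noise.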

  14. Hawaii Electric System Reliability

    Energy Technology Data Exchange (ETDEWEB)

    Loose, Verne William [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Silva Monroy, Cesar Augusto [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States)

    2012-08-01

    This report addresses Hawaii electric system reliability issues; greater emphasis is placed on short-term reliability but resource adequacy is reviewed in reference to electric consumers’ views of reliability “worth” and the reserve capacity required to deliver that value. The report begins with a description of the Hawaii electric system to the extent permitted by publicly available data. Electrical engineering literature in the area of electric reliability is researched and briefly reviewed. North American Electric Reliability Corporation standards and measures for generation and transmission are reviewed and identified as to their appropriateness for various portions of the electric grid and for application in Hawaii. Analysis of frequency data supplied by the State of Hawaii Public Utilities Commission is presented together with comparison and contrast of performance of each of the systems for two years, 2010 and 2011. Literature tracing the development of reliability economics is reviewed and referenced. A method is explained for integrating system cost with outage cost to determine the optimal resource adequacy given customers’ views of the value contributed by reliable electric supply. The report concludes with findings and recommendations for reliability in the State of Hawaii.

  15. Hawaii electric system reliability.

    Energy Technology Data Exchange (ETDEWEB)

    Silva Monroy, Cesar Augusto; Loose, Verne William

    2012-09-01

    This report addresses Hawaii electric system reliability issues; greater emphasis is placed on short-term reliability but resource adequacy is reviewed in reference to electric consumers' views of reliability “worth” and the reserve capacity required to deliver that value. The report begins with a description of the Hawaii electric system to the extent permitted by publicly available data. Electrical engineering literature in the area of electric reliability is researched and briefly reviewed. North American Electric Reliability Corporation standards and measures for generation and transmission are reviewed and identified as to their appropriateness for various portions of the electric grid and for application in Hawaii. Analysis of frequency data supplied by the State of Hawaii Public Utilities Commission is presented together with comparison and contrast of performance of each of the systems for two years, 2010 and 2011. Literature tracing the development of reliability economics is reviewed and referenced. A method is explained for integrating system cost with outage cost to determine the optimal resource adequacy given customers' views of the value contributed by reliable electric supply. The report concludes with findings and recommendations for reliability in the State of Hawaii.

  16. Improving machinery reliability

    CERN Document Server

    Bloch, Heinz P

    1998-01-01

    This totally revised, updated and expanded edition provides proven techniques and procedures that extend machinery life, reduce maintenance costs, and achieve optimum machinery reliability. This essential text clearly describes the reliability improvement and failure avoidance steps practiced by best-of-class process plants in the U.S. and Europe.

  17. LED system reliability

    NARCIS (Netherlands)

    Driel, W.D. van; Yuan, C.A.; Koh, S.; Zhang, G.Q.

    2011-01-01

    This paper presents our effort to predict the system reliability of Solid State Lighting (SSL) applications. A SSL system is composed of a LED engine with micro-electronic driver(s) that supplies power to the optic design. Knowledge of system level reliability is not only a challenging scientific

  18. Integrated system reliability analysis

    DEFF Research Database (Denmark)

    Gintautas, Tomas; Sørensen, John Dalsgaard

    Specific targets: 1) The report shall describe the state of the art of reliability and risk-based assessment of wind turbine components. 2) Development of methodology for reliability and risk-based assessment of the wind turbine at system level. 3) Describe quantitative and qualitative measures...

  19. The reliability paradox of the Parent-Child Conflict Tactics Corporal Punishment Subscale.

    Science.gov (United States)

    Lorber, Michael F; Slep, Amy M Smith

    2018-02-01

    In the present investigation we consider and explain an apparent paradox in the measurement of corporal punishment with the Parent-Child Conflict Tactics Scale (CTS-PC): How can it have poor internal consistency and still be reliable? The CTS-PC was administered to a community sample of 453 opposite sex couples who were parents of 3- to 7-year-old children. Internal consistency was marginal, yet item response theory analyses revealed that reliability rose sharply with increasing corporal punishment, exceeding .80 in the upper ranges of the construct. The results suggest that the CTS-PC Corporal Punishment subscale reliably discriminates among parents who report average to high corporal punishment (64% of mothers and 56% of fathers in the present sample), despite low overall internal consistency. These results have straightforward implications for the use and reporting of the scale. (PsycINFO Database Record (c) 2018 APA, all rights reserved).
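    The paradox resolved here, low overall internal consistency but high reliability in the upper range of the construct, falls out of item response theory, where measurement precision varies with trait level. A minimal sketch using 2PL items with high difficulties as stand-ins for rarely endorsed corporal-punishment items (item parameters are invented; the conditional reliability formula info/(info + 1) assumes a unit-variance trait prior):

    ```python
    import math

    def p_2pl(theta, a, b):
        """2PL item response probability (discrimination a, difficulty b)."""
        return 1.0 / (1.0 + math.exp(-a * (theta - b)))

    def test_information(theta, items):
        """Fisher information summed over (a, b) item pairs."""
        return sum(a * a * p_2pl(theta, a, b) * (1.0 - p_2pl(theta, a, b))
                   for a, b in items)

    def conditional_reliability(theta, items, prior_var=1.0):
        """Reliability at a given trait level theta."""
        info = test_information(theta, items)
        return info * prior_var / (info * prior_var + 1.0)

    # Hypothetical rarely endorsed items: difficulties sit in the upper range,
    # so measurement is precise only for high-corporal-punishment parents.
    items = [(1.5, 1.0), (1.8, 1.3), (1.2, 1.6), (2.0, 1.1), (1.6, 1.8)]
    low = conditional_reliability(-1.0, items)
    high = conditional_reliability(1.5, items)
    print(round(low, 2), round(high, 2))
    ```

    Because the items only discriminate where endorsement is likely, reliability is low near the floor of the construct yet high in its upper range, mirroring the CTS-PC pattern described above.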

  20. Reliability of reactor materials

    International Nuclear Information System (INIS)

    Toerroenen, K.; Aho-Mantila, I.

    1986-05-01

    This report is the final technical report of the fracture mechanics part of the Reliability of Reactor Materials Programme, which was carried out at the Technical Research Centre of Finland (VTT) through the years 1981 to 1983. Research and development work was carried out in five major areas, viz. statistical treatment and modelling of cleavage fracture, crack arrest, ductile fracture, instrumented impact testing, as well as comparison of numerical and experimental elastic-plastic fracture mechanics. In the area of cleavage fracture, the critical variables affecting the fracture of steels are considered within the framework of a statistical model, the so-called WST model. Comparison of fracture toughness values predicted by the model and corresponding experimental values shows excellent agreement for a variety of microstructures. Different possibilities for using the model are discussed. The development work in the area of crack arrest testing concentrated on the crack starter properties, test arrangement and computer control. A computerized elastic-plastic fracture testing method with a variety of test specimen geometries in a large temperature range was developed to the routine stage. Ductile fracture characteristics of reactor pressure vessel steel A533B and comparable weld material are given. The features of a new, patented instrumented impact tester are described. Experimental and theoretical comparisons between the new and conventional testers indicated clearly the improvements achieved with the new tester. A comparison of numerical and experimental elastic-plastic fracture mechanics capabilities at VTT was carried out. The comparison consisted of two-dimensional linear elastic as well as elastic-plastic finite element analysis of four specimen geometries and equivalent experimental tests. (author)

  1. Design reliability engineering

    International Nuclear Information System (INIS)

    Buden, D.; Hunt, R.N.M.

    1989-01-01

    Improved design techniques are needed to achieve high reliability at minimum cost. This is especially true of space systems where lifetimes of many years without maintenance are needed and severe mass limitations exist. Reliability must be designed into these systems from the start. Techniques are now being explored to structure a formal design process that will be more complete and less expensive. The intent is to integrate the best features of design, reliability analysis, and expert systems to design highly reliable systems to meet stressing needs. Taken into account are the large uncertainties that exist in materials, design models, and fabrication techniques. Expert systems are a convenient method to integrate into the design process a complete definition of all elements that should be considered and an opportunity to integrate the design process with reliability, safety, test engineering, maintenance and operator training. 1 fig

  2. Bayesian methods in reliability

    Science.gov (United States)

    Sander, P.; Badoux, R.

    1991-11-01

    The present proceedings from a course on Bayesian methods in reliability encompass Bayesian statistical methods and their computational implementation, models for analyzing censored data from nonrepairable systems, the traits of repairable systems and growth models, the use of expert judgment, and a review of the problem of forecasting software reliability. Specific issues addressed include the use of Bayesian methods to estimate the leak rate of a gas pipeline, approximate analyses under great prior uncertainty, reliability estimation techniques, and a nonhomogeneous Poisson process. Also addressed are the calibration sets and seed variables of expert judgment systems for risk assessment, experimental illustrations of the use of expert judgment for reliability testing, and analyses of the predictive quality of software-reliability growth models such as the Weibull order statistics.

  3. Measuring attitude towards Buddhism and Sikhism : internal consistency reliability for two new instruments

    OpenAIRE

    Thanissaro, Phra Nicholas

    2011-01-01

    This paper describes and discusses the development and empirical properties of two new 24-item scales – one measuring attitude toward Buddhism and the other measuring attitude toward Sikhism. The scales are designed to facilitate inter-faith comparisons within the psychology of religion alongside the well-established Francis Scale of Attitude toward Christianity. Data were obtained from a multi-religious sample of 369 school pupils aged between 13 and 15 in London. Application of...

  4. Captions, Consistency, Creativity, and the Consensual Assessment Technique: New Evidence of Reliability

    Science.gov (United States)

    Kaufman, James C.; Lee, Joohyun; Baer, John; Lee, Soonmook

    2007-01-01

    The consensual assessment technique (CAT) is a measurement tool for creativity research in which appropriate experts evaluate creative products [Amabile, T. M. (1996). "Creativity in context: Update to the social psychology of creativity." Boulder, CO: Westview]. However, the CAT is hampered by the time-consuming nature of the products (asking…

  5. Reliable temperature probe monitoring - Favorable esophageal motion for consistent probe contact during atrial fibrillation catheter ablation

    Directory of Open Access Journals (Sweden)

    Masahiro Esato

    2013-10-01

    Full Text Available Left atrial-esophageal (LA-Eso fistula is now a well-recognized and fatal complication of percutaneous catheter ablation performed using radiofrequency energy for atrial fibrillation (AF. We noted an important esophageal motion during temperature monitoring by a multipolar sensing probe, which could resolve several potential concerns of accurate esophageal temperature measurement and could consequently minimize esophageal injuries including LA-Eso fistulas during catheter ablation for AF.

  6. Process-aware information system development for the healthcare domain : consistency, reliability and effectiveness

    NARCIS (Netherlands)

    Mans, R.S.; Aalst, van der W.M.P.; Russell, N.C.; Bakker, P.J.M.; Moleman, A.J.; Rinderle-Ma, S.; Sadiq, S.; Leymann, F.

    2010-01-01

    Optimal support for complex healthcare processes cannot be provided by a single out-of-the-box Process-Aware Information System and necessitates the construction of customized applications based on these systems. In order to allow for the seamless integration of the new technology into the existing

  7. Analyzing the reliability of shuffle-exchange networks using reliability block diagrams

    International Nuclear Information System (INIS)

    Bistouni, Fathollah; Jahanshahi, Mohsen

    2014-01-01

    Supercomputers and multi-processor systems are comprised of thousands of processors that need to communicate in an efficient way. One reasonable solution is the utilization of multistage interconnection networks (MINs), where the challenge is to analyze the reliability of such networks. One of the methods to increase the reliability and fault tolerance of MINs is the use of additional switching stages. Recently, therefore, the reliability of one of the most common MINs, namely the shuffle-exchange network (SEN), was evaluated by investigating the impact of increasing the number of switching stages. That work concluded that SEN with one additional stage (SEN+) is more reliable than both SEN and SEN with two additional stages (SEN+2), and that SEN itself is more reliable than SEN+2. Here we re-evaluate the reliability of these networks; the results of the terminal, broadcast, and network reliability analysis demonstrate that SEN+ and SEN+2 consistently outperform SEN and are very similar to each other in terms of reliability. - Highlights: • The impact of increasing the number of stages on the reliability of MINs is investigated. • The RBD method, as an accurate method, is used for the reliability analysis of MINs. • Complex series–parallel RBDs are used to determine the reliability of the MINs. • All measures of reliability (i.e. terminal, broadcast, and network reliability) are analyzed. • All reliability equations are calculated for different network sizes N×N
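
Series–parallel reliability block diagrams of the kind used above reduce to two composition rules: blocks in series multiply reliabilities, and redundant (parallel) blocks multiply unreliabilities. A minimal sketch with illustrative numbers (not the paper's SEN data) showing why an extra redundant path per switching stage helps:

```python
def series(*rs):
    """Reliability of components in series: all must work."""
    r = 1.0
    for x in rs:
        r *= x
    return r

def parallel(*rs):
    """Reliability of redundant (parallel) components: at least one works."""
    q = 1.0
    for x in rs:
        q *= (1.0 - x)
    return 1.0 - q

# Toy comparison: a chain of 4 switching stages vs. the same chain with
# one redundant path per stage (switch reliability 0.95 is invented).
r_switch = 0.95
plain = series(*[r_switch] * 4)                       # 4 stages in series
redundant = series(*[parallel(r_switch, r_switch)] * 4)  # extra path per stage
print(f"series only: {plain:.4f}, with redundancy: {redundant:.4f}")
```

Four 0.95-reliable stages in series drop to about 0.81 overall, while duplicating each stage raises the chain to about 0.99, which is the qualitative effect the abstract's RBD analysis quantifies.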

  8. Measuring reliability under epistemic uncertainty: Review on non-probabilistic reliability metrics

    Directory of Open Access Journals (Sweden)

    Kang Rui

    2016-06-01

    Full Text Available In this paper, a systematic review of non-probabilistic reliability metrics is conducted to assist the selection of appropriate reliability metrics to model the influence of epistemic uncertainty. Five frequently used non-probabilistic reliability metrics are critically reviewed, i.e., evidence-theory-based reliability metrics, interval-analysis-based reliability metrics, fuzzy-interval-analysis-based reliability metrics, possibility-theory-based reliability metrics (posbist reliability), and uncertainty-theory-based reliability metrics (belief reliability). It is pointed out that a qualified reliability metric that is able to consider the effect of epistemic uncertainty needs to (1) compensate for the conservatism in the estimates of the component-level reliability metrics caused by epistemic uncertainty, and (2) satisfy the duality axiom; otherwise it might lead to paradoxical and confusing results in engineering applications. The five commonly used non-probabilistic reliability metrics are compared in terms of these two properties, and the comparison can serve as a basis for the selection of the appropriate reliability metrics.

  9. Influences on and Limitations of Classical Test Theory Reliability Estimates.

    Science.gov (United States)

    Arnold, Margery E.

    It is incorrect to say "the test is reliable" because reliability is a function not only of the test itself, but of many factors. The present paper explains how different factors affect classical reliability estimates such as test-retest, interrater, internal consistency, and equivalent forms coefficients. Furthermore, the limits of classical test…

  10. Reliability of movement control tests in the lumbar spine

    Directory of Open Access Journals (Sweden)

    de Bruin Eling D

    2007-09-01

    Full Text Available Abstract Background Movement control dysfunction [MCD] reduces active control of movements. Patients with MCD might form an important subgroup among patients with non-specific low back pain. The diagnosis is based on the observation of active movements. Although the tests are widely used clinically, only a few studies have been performed to determine their reliability. The aim of this study was to determine the inter- and intra-observer reliability of movement control dysfunction tests of the lumbar spine. Methods We videoed 27 patients with non-specific low back pain and 13 patients with other diagnoses but without back pain performing a standardized test battery consisting of 10 active movement tests for motor control. Four physiotherapists independently rated each test performance as correct or incorrect, blinded to all other patient information and to each other. The study was conducted in a private physiotherapy outpatient practice in Reinach, Switzerland. Kappa coefficients, percentage agreements and confidence intervals for inter- and intra-rater results were calculated. Results The kappa values for inter-tester reliability ranged from 0.24 to 0.71. Six of the ten tests showed substantial reliability [k > 0.6]. Intra-tester reliability ranged from 0.51 to 0.96; all tests but one showed substantial reliability [k > 0.6]. Conclusion Physiotherapists were able to reliably rate most of the tests in this series of motor control tasks as being performed correctly or not, by viewing films of patients with and without back pain performing the task.
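
Cohen's kappa, the agreement statistic reported above, corrects observed rater agreement for the agreement expected by chance from each rater's marginal category frequencies. A minimal sketch for two raters, with made-up correct/incorrect ratings rather than the study's data:

```python
from collections import Counter

def cohens_kappa(rater1, rater2):
    """Cohen's kappa for two raters giving categorical ratings."""
    assert len(rater1) == len(rater2)
    n = len(rater1)
    observed = sum(a == b for a, b in zip(rater1, rater2)) / n
    # Expected chance agreement from the raters' marginal frequencies.
    c1, c2 = Counter(rater1), Counter(rater2)
    expected = sum(c1[cat] * c2[cat] for cat in set(c1) | set(c2)) / (n * n)
    return (observed - expected) / (1.0 - expected)

# Two raters scoring 10 test performances as correct (1) / incorrect (0).
r1 = [1, 1, 0, 1, 0, 1, 1, 0, 1, 0]
r2 = [1, 1, 0, 1, 1, 1, 1, 0, 0, 0]
k = cohens_kappa(r1, r2)
print(f"kappa = {k:.2f}")
```

Here the raters agree on 8 of 10 cases (0.80 observed agreement), but since chance alone would produce 0.52 agreement from these marginals, kappa is only about 0.58, "moderate" rather than "substantial" on the usual benchmarks.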

  11. Consistency relations in effective field theory

    Energy Technology Data Exchange (ETDEWEB)

    Munshi, Dipak; Regan, Donough, E-mail: D.Munshi@sussex.ac.uk, E-mail: D.Regan@sussex.ac.uk [Astronomy Centre, School of Mathematical and Physical Sciences, University of Sussex, Brighton BN1 9QH (United Kingdom)

    2017-06-01

    The consistency relations in large scale structure relate the lower-order correlation functions with their higher-order counterparts. They are a direct outcome of the underlying symmetries of a dynamical system and can be tested using data from future surveys such as Euclid. Using techniques from standard perturbation theory (SPT), previous studies of consistency relations have concentrated on the continuity-momentum (Euler)-Poisson system of an ideal fluid. We investigate the consistency relations in effective field theory (EFT), which adjusts the SPT predictions to account for the departure from the ideal fluid description on small scales. We provide detailed results for the 3D density contrast δ as well as the scaled velocity divergence θ̄. Assuming a ΛCDM background cosmology, we find that the correction to SPT results becomes important at k ≳ 0.05 h/Mpc and that the suppression of the EFT results relative to SPT, which scales as the square of the wavenumber k, can reach 40% of the total at k ≈ 0.25 h/Mpc at z = 0. We have also investigated whether effective field theory corrections to models of primordial non-Gaussianity can alter the squeezed-limit behaviour, finding the results to be rather insensitive to these counterterms. In addition, we present the EFT corrections to the squeezed limit of the bispectrum in redshift space, which may be of interest for tests of theories of modified gravity.

  12. Reliability, validity and sensitivity to change of neurogenic bowel dysfunction score in patients with spinal cord injury

    DEFF Research Database (Denmark)

    Erdem, D.; Hava, D.; Keskinoglu, P.

    2017-01-01

    cord injury (SCI). The reliability of NBD score was assessed by test-retest reliability and internal consistency. Cronbach's alpha coefficient was calculated to determine internal consistency. The construct validity was evaluated by exploring correlations between the NBD score and SF-36 scales, patient...... assessment of impact of NBD on quality of life (QoL) and the physician global assessment (PGA). The Global Rating of Change (GRC) scale was used to assess the change of NBD to investigate the sensitivity of the score to change. Results: Cronbach's alpha coefficient was 0.547. In test-retest reliability...
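
Cronbach's alpha, the internal-consistency coefficient computed above, compares the sum of the item variances with the variance of the total score: alpha = k/(k−1) · (1 − Σ var_item / var_total). A minimal sketch with made-up item scores, not the NBD data:

```python
def cronbach_alpha(item_scores):
    """Cronbach's alpha; item_scores[i][j] = score of respondent j on item i."""
    k = len(item_scores)
    n = len(item_scores[0])

    def variance(xs):  # population variance
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / len(xs)

    item_var = sum(variance(item) for item in item_scores)
    totals = [sum(item_scores[i][j] for i in range(k)) for j in range(n)]
    return (k / (k - 1)) * (1.0 - item_var / variance(totals))

# Hypothetical 4-item scale answered by 6 respondents.
scores = [
    [3, 4, 2, 5, 4, 3],
    [2, 4, 2, 4, 5, 3],
    [3, 5, 1, 4, 4, 2],
    [2, 4, 2, 5, 5, 3],
]
alpha = cronbach_alpha(scores)
print(f"alpha = {alpha:.2f}")
```

When items move together, the total-score variance dwarfs the summed item variances and alpha approaches 1; a value like the abstract's 0.547 indicates the NBD items covary only weakly.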

  13. Reliability analysis under epistemic uncertainty

    International Nuclear Information System (INIS)

    Nannapaneni, Saideep; Mahadevan, Sankaran

    2016-01-01

    This paper proposes a probabilistic framework to include both aleatory and epistemic uncertainty within model-based reliability estimation of engineering systems for individual limit states. Epistemic uncertainty is considered due to both data and model sources. Sparse point and/or interval data regarding the input random variables leads to uncertainty regarding their distribution types, distribution parameters, and correlations; this statistical uncertainty is included in the reliability analysis through a combination of likelihood-based representation, Bayesian hypothesis testing, and Bayesian model averaging techniques. Model errors, which include numerical solution errors and model form errors, are quantified through Gaussian process models and included in the reliability analysis. The probability integral transform is used to develop an auxiliary variable approach that facilitates a single-level representation of both aleatory and epistemic uncertainty. This strategy results in an efficient single-loop implementation of Monte Carlo simulation (MCS) and FORM/SORM techniques for reliability estimation under both aleatory and epistemic uncertainty. Two engineering examples are used to demonstrate the proposed methodology. - Highlights: • Epistemic uncertainty due to data and model included in reliability analysis. • A novel FORM-based approach proposed to include aleatory and epistemic uncertainty. • A single-loop Monte Carlo approach proposed to include both types of uncertainties. • Two engineering examples used for illustration.
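
The single-loop idea above, sampling epistemic (distribution-parameter) and aleatory variables together in one Monte Carlo loop rather than nesting two loops, can be sketched as follows. This is a deliberately simplified illustration: the limit state, the distributions, and the interval for the epistemic parameter are invented, and the paper's likelihood-based, Bayesian, and Gaussian-process machinery is not reproduced.

```python
import random

random.seed(42)

def single_loop_mcs(n):
    """Single-loop MCS: sample epistemic parameters and aleatory
    variables together and count limit-state violations."""
    failures = 0
    for _ in range(n):
        # Epistemic: mean capacity known only to lie in an interval (assumed).
        mu_c = random.uniform(9.0, 11.0)
        # Aleatory: capacity and demand vary randomly around their means.
        capacity = random.gauss(mu_c, 1.0)
        demand = random.gauss(6.0, 1.5)
        if capacity - demand < 0.0:      # limit state g = capacity - demand
            failures += 1
    return 1.0 - failures / n            # reliability estimate

rel = single_loop_mcs(100_000)
print(f"estimated reliability: {rel:.3f}")
```

The one loop produces a reliability estimate that already integrates over both uncertainty sources, which is exactly the efficiency gain the auxiliary-variable formulation is after.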

  14. Emergency diesel generator reliability program

    International Nuclear Information System (INIS)

    Serkiz, A.W.

    1989-01-01

    The need for an emergency diesel generator (EDG) reliability program has been established by 10 CFR Part 50, Section 50.63, Loss of All Alternating Current Power, which requires that utilities assess their station blackout duration and recovery capability. EDGs are the principal emergency ac power sources for coping with a station blackout. Regulatory Guide 1.155, Station Blackout, identifies a need for (1) an EDG reliability equal to or greater than 0.95, and (2) an EDG reliability program to monitor and maintain the required levels. The resolution of Generic Safety Issue (GSI) B-56 embodies the identification of a suitable EDG reliability program structure, revision of pertinent regulatory guides and Tech Specs, and development of an Inspection Module. Resolution of B-56 is coupled to the resolution of Unresolved Safety Issue (USI) A-44, Station Blackout, which resulted in the station blackout rule, 10 CFR 50.63 and Regulatory Guide 1.155, Station Blackout. This paper discusses the principal elements of an EDG reliability program developed for resolving GSI B-56 and related matters

  15. Self-consistency in Capital Markets

    Science.gov (United States)

    Benbrahim, Hamid

    2013-03-01

    Capital Markets are considered, at least in theory, information engines whereby traders contribute to price formation with their diverse perspectives. Regardless of whether one believes in efficient market theory or not, actions by individual traders influence prices of securities, which in turn influence actions by other traders. This influence is exerted through a number of mechanisms including portfolio balancing, margin maintenance, trend following, and sentiment. As a result, market behaviors emerge from a number of mechanisms ranging from self-consistency due to the wisdom of crowds and self-fulfilling prophecies, to more chaotic behavior resulting from dynamics similar to the three-body system, namely the interplay between equities, options, and futures. This talk will address questions and findings regarding the search for self-consistency in capital markets.

  16. The problem of software reliability

    International Nuclear Information System (INIS)

    Ballard, G.M.

    1989-01-01

    The state of the art in safety and reliability assessment of the software of industrial computer systems is reviewed, and likely progress over the next few years is identified and compared with the perceived needs of the user. Some of the current projects contributing to the development of new techniques for assessing software reliability are described. One is the software test and evaluation method, which looked at the faults within and between two manufacturers' specifications, faults in the codes, and inconsistencies between the codes and the specifications. The results are given. (author)

  17. Assessing data transfer reliability for duty-cycled mobile wireless sensor networks

    International Nuclear Information System (INIS)

    Shaikh, F.K.

    2014-01-01

    Mobility in WSNs (Wireless Sensor Networks) introduces significant challenges which do not arise in static WSNs. Reliable data transport is an important aspect of attaining consistency and QoS (Quality of Service) in several applications of MWSNs (Mobile Wireless Sensor Networks). It is important to understand how each of the wireless sensor networking characteristics, such as duty cycling, collisions, contention and mobility, affects the reliability of data transfer. If reliability is not managed well, the MWSN can suffer from overheads which reduce its applicability in the real world. In this paper, reliability is assessed by deploying an MWSN in different indoor and outdoor scenarios with various duty cycles of the motes and speeds of the mobile mote. Results show that the reliability is greatly affected by the duty-cycled motes and the mobility using inherent broadcast mechanisms. (author)

  18. Equipment Reliability Program in NPP Krsko

    International Nuclear Information System (INIS)

    Skaler, F.; Djetelic, N.

    2006-01-01

    Operation that is safe, reliable, effective and acceptable to the public is the common message in the mission statements of commercial nuclear power plants (NPPs). To fulfill these goals, the nuclear industry, among other areas, has to focus on: (1) Human Performance (HU) and (2) Equipment Reliability (EQ). The performance objective of HU is as follows: the behaviors of all personnel result in safe and reliable station operation. While unwanted human behaviors in operations mostly result directly in an event, behavior flaws in the areas of maintenance or engineering usually cause decreased equipment reliability. Unsatisfactory human performance has led even the best-designed power plants into significant operating events, well-known examples of which can be found throughout the nuclear industry. Equipment reliability is today recognized as the key to success. While human performance at most NPPs has been improving since the start of WANO / INPO / IAEA evaluations, the open energy market has forced nuclear plants to reduce production costs and operate more reliably and effectively. The balance between these two (opposite) goals has made equipment reliability even more important for safe, reliable and efficient production. In today's well-developed safety culture and human performance environment, insisting on on-line operation while ignoring safety principles could cost more than the associated losses of electricity production. In the last decade the leading USA nuclear companies have put a lot of effort into improving equipment reliability at their stations, based primarily on the INPO Equipment Reliability Program AP-913. The Equipment Reliability Program is the key program not only for safe and reliable operation, but also for Life Cycle Management and Aging Management on the way to nuclear power plant life extension. The purpose of the Equipment Reliability process is to identify, organize, integrate and coordinate equipment reliability activities (preventive and predictive maintenance, maintenance

  19. Mission reliability of semi-Markov systems under generalized operational time requirements

    International Nuclear Information System (INIS)

    Wu, Xiaoyue; Hillston, Jane

    2015-01-01

    Mission reliability of a system depends on specific criteria for mission success. To evaluate the mission reliability of some mission systems that do not need to work normally for the whole mission time, two types of mission reliability for such systems are studied. The first type corresponds to the mission requirement that the system must remain operational continuously for a minimum time within the given mission time interval, while the second corresponds to the mission requirement that the total operational time of the system within the mission time window must be greater than a given value. Based on Markov renewal properties, matrix integral equations are derived for semi-Markov systems. Numerical algorithms and a simulation procedure are provided for both types of mission reliability. Two examples are used for illustration purposes. One is a one-unit repairable Markov system, and the other is a cold standby semi-Markov system consisting of two components. By the proposed approaches, the mission reliability of systems with time redundancy can be more precisely estimated to avoid possible unnecessary redundancy of system resources. - Highlights: • Two types of mission reliability under generalized requirements are defined. • Equations for both types of reliability are derived for semi-Markov systems. • Numerical methods are given for solving both types of reliability. • Simulation procedure is given for estimating both types of reliability. • Verification of the numerical methods is given by the results of simulation
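
The first type of requirement above, a minimum continuous operational time within the mission window, can also be estimated by simulating the alternating up/down process directly, which is the simulation procedure's role alongside the matrix integral equations. A minimal sketch for a one-unit repairable system with invented exponential up/down times (the paper treats general semi-Markov systems):

```python
import random

random.seed(1)

def mission_success(mission_time, min_continuous, mtbf, mttr):
    """Simulate one mission of an alternating up/down renewal process and
    check whether any single operational interval reaches min_continuous."""
    t, up = 0.0, True
    longest = 0.0
    while t < mission_time:
        duration = random.expovariate(1.0 / (mtbf if up else mttr))
        duration = min(duration, mission_time - t)  # truncate at mission end
        if up:
            longest = max(longest, duration)
        t += duration
        up = not up
    return longest >= min_continuous

def mission_reliability(n, **kwargs):
    """Fraction of n simulated missions meeting the requirement."""
    return sum(mission_success(**kwargs) for _ in range(n)) / n

# Type-1 requirement: at least 50 h of continuous operation in a 100 h mission.
r = mission_reliability(20_000, mission_time=100.0, min_continuous=50.0,
                        mtbf=80.0, mttr=5.0)
print(f"estimated mission reliability: {r:.3f}")
```

The type-2 requirement (total operational time above a threshold) would accumulate the up durations instead of tracking the longest single interval.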

  20. Statistical Primer for Athletic Trainers: The Essentials of Understanding Measures of Reliability and Minimal Important Change.

    Science.gov (United States)

    Riemann, Bryan L; Lininger, Monica R

    2018-01-01

      To describe the concepts of measurement reliability and minimal important change.   All measurements have some magnitude of error. Because clinical practice involves measurement, clinicians need to understand measurement reliability. The reliability of an instrument is integral in determining if a change in patient status is meaningful.   Measurement reliability is the extent to which a test result is consistent and free of error. Three perspectives of reliability (relative reliability, systematic bias, and absolute reliability) are often reported. However, absolute reliability statistics, such as the minimal detectable difference, are most relevant to clinicians because they provide an expected error estimate. The minimal important difference is the smallest change in a treatment outcome that the patient would identify as important.   Clinicians should use absolute reliability characteristics, preferably the minimal detectable difference, to determine the extent of error around a patient's measurement. The minimal detectable difference, coupled with an appropriately estimated minimal important difference, can assist the practitioner in identifying clinically meaningful changes in patients.
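
The minimal detectable difference discussed above is usually derived from the standard error of measurement, SEM = SD·√(1−r), as MDD95 = 1.96·√2·SEM (the √2 reflects the fact that a change score involves two measurements). A minimal sketch with hypothetical numbers:

```python
import math

def standard_error_of_measurement(sd, reliability):
    """SEM = SD * sqrt(1 - r), using a test-retest reliability coefficient."""
    return sd * math.sqrt(1.0 - reliability)

def minimal_detectable_difference(sd, reliability, z=1.96):
    """MDD at 95% confidence: smallest change exceeding measurement error,
    MDD95 = z * sqrt(2) * SEM."""
    return z * math.sqrt(2.0) * standard_error_of_measurement(sd, reliability)

# Hypothetical outcome measure: SD = 8 points, test-retest ICC = 0.90.
mdd = minimal_detectable_difference(sd=8.0, reliability=0.90)
print(f"MDD95 = {mdd:.1f} points")
```

Even with an ICC of 0.90, a change of about 7 points on this hypothetical scale could be measurement error alone, so the practitioner compares observed changes against both the MDD and the minimal important difference before calling them meaningful.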

  1. Uncertainty propagation and sensitivity analysis in system reliability assessment via unscented transformation

    International Nuclear Information System (INIS)

    Rocco Sanseverino, Claudio M.; Ramirez-Marquez, José Emmanuel

    2014-01-01

    The reliability of a system, notwithstanding its intended function, can be significantly affected by the uncertainty in the reliability estimates of the components that define the system. This paper implements the Unscented Transformation to quantify the effects of the uncertainty of component reliability through two approaches. The first approach is based on the concept of uncertainty propagation, which is the assessment of the effect that the variability of the component reliabilities produces on the variance of the system reliability. This UT-based assessment has been previously considered in the literature, but only for systems represented through series/parallel configurations. In this paper the assessment is extended to systems whose reliability cannot be represented through analytical expressions and requires, for example, Monte Carlo Simulation. The second approach consists of the evaluation of the importance of components, i.e., the evaluation of the components that most contribute to the variance of the system reliability. An extension of the UT is proposed to evaluate the so-called "main effects" of each component, as well as to assess high-order component interactions. Several examples with excellent results illustrate the proposed approach. - Highlights: • Simulation-based approach for computing reliability estimates. • Computation of reliability variance via 2n+1 points. • Immediate computation of component importance. • Application to network systems
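
The 2n+1-point idea above can be sketched with the classic (Julier-style) unscented transform: n component reliabilities with given means and variances are propagated through the system function via deterministically chosen sigma points instead of random sampling. The sigma-point scheme (κ = 1) and the component numbers below are illustrative assumptions, not the paper's examples:

```python
import math

def unscented_transform(means, variances, g, kappa=1.0):
    """Classic UT: propagate independent input means/variances through g
    using 2n+1 sigma points; returns the output mean and variance."""
    n = len(means)
    scale = math.sqrt(n + kappa)
    points = [list(means)]                  # central sigma point
    weights = [kappa / (n + kappa)]
    for i in range(n):
        for sign in (+1.0, -1.0):           # symmetric pair per input
            p = list(means)
            p[i] += sign * scale * math.sqrt(variances[i])
            points.append(p)
            weights.append(1.0 / (2.0 * (n + kappa)))
    ys = [g(p) for p in points]
    mean = sum(w * y for w, y in zip(weights, ys))
    var = sum(w * (y - mean) ** 2 for w, y in zip(weights, ys))
    return mean, var

# Series system of 3 components: R = r1 * r2 * r3 (illustrative numbers).
g = lambda r: r[0] * r[1] * r[2]
means = [0.95, 0.90, 0.97]
variances = [0.0004, 0.0009, 0.0001]  # uncertainty in each component estimate
m, v = unscented_transform(means, variances, g)
print(f"system reliability ~ {m:.4f}, std ~ {math.sqrt(v):.4f}")
```

The same 2n+1 evaluations of g would work if g were itself a Monte Carlo reliability estimator rather than a closed-form expression, which is the extension the paper pursues.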

  2. Reliability Analysis of Money Habitudes

    Science.gov (United States)

    Delgadillo, Lucy M.; Bushman, Brittani S.

    2015-01-01

    Use of the Money Habitudes exercise has gained popularity among various financial professionals. This article reports on the reliability of this resource. A survey administered to young adults at a western state university was conducted, and each Habitude or "domain" was analyzed using Cronbach's alpha procedures. Results showed all six…

  3. Image recognition and consistency of response

    Science.gov (United States)

    Haygood, Tamara M.; Ryan, John; Liu, Qing Mary A.; Bassett, Roland; Brennan, Patrick C.

    2012-02-01

    Purpose: To investigate the connection between conscious recognition of an image previously encountered in an experimental setting and consistency of response to the experimental question. Materials and Methods: Twenty-four radiologists viewed 40 frontal chest radiographs and gave their opinion as to the position of a central venous catheter. One to three days later they again viewed 40 frontal chest radiographs and again gave their opinion as to the position of the central venous catheter. Half of the radiographs in the second set were repeated images from the first set and half were new. The radiologists were asked of each image whether it had been included in the first set. For this study, we are evaluating only the 20 repeated images. We used the Kruskal-Wallis test and Fisher's exact test to determine the relationship between conscious recognition of a previously interpreted image and consistency in interpretation of the image. Results: There was no significant correlation between recognition of the image and consistency in response regarding the position of the central venous catheter. In fact, there was a trend in the opposite direction, with radiologists being slightly more likely to give a consistent response with respect to images they did not recognize than with respect to those they did recognize. Conclusion: Radiologists' recognition of previously-encountered images in an observer-performance study does not noticeably color their interpretation on the second encounter.

  4. Strength of the pelvic floor in men: intra-examiner reliability

    Directory of Open Access Journals (Sweden)

    Patricia Zaidan

    2018-05-01

    Full Text Available Abstract Introduction: Urinary continence depends on the strength of the pelvic floor muscles (MAPs) at the moment of muscle contraction: when there are sudden increases in intra-abdominal pressure, contraction increases urethral closure pressure and decreases the possibility of urinary loss. Objective: To verify the intra-examiner (stability) reliability of the measurement of MAPs strength obtained with the Peritron. Methods: Test-retest study to assess the intra-rater reliability of the Peritron for measuring MAPs strength. The sample consisted of 36 male patients, mean age 65.3 ± 7.2 years, all with urinary incontinence (UI) after radical prostatectomy. The physical therapist conducted two weeks of training for familiarization with the MAPs strength assessment procedures with the Peritron. MAPs strength was measured with a Peritron perineometer (PFX 9300®, Cardio-Design Pty. Ltd, Baulkham Hills, Australia, 2153). Results: The intraclass correlation coefficient (ICC) was 0.99 (P = 0.0001). The typical measurement error (ETM) was 3.1 cmH2O, with an ETM% of 4. Conclusion: The Peritron showed high reliability for measuring MAPs strength in men, both for clinical practice and for the production of scientific knowledge. It should be noted that these measurements assessed stability, and it is suggested that internal-consistency reliability is equivalent.

  5. Human reliability analysis methods for probabilistic safety assessment

    International Nuclear Information System (INIS)

    Pyy, P.

    2000-11-01

    Human reliability analysis (HRA) of a probabilistic safety assessment (PSA) includes identifying human actions from safety point of view, modelling the most important of them in PSA models, and assessing their probabilities. As manifested by many incidents and studies, human actions may have both positive and negative effect on safety and economy. Human reliability analysis is one of the areas of probabilistic safety assessment (PSA) that has direct applications outside the nuclear industry. The thesis focuses upon developments in human reliability analysis methods and data. The aim is to support PSA by extending the applicability of HRA. The thesis consists of six publications and a summary. The summary includes general considerations and a discussion about human actions in the nuclear power plant (NPP) environment. A condensed discussion about the results of the attached publications is then given, including new development in methods and data. At the end of the summary part, the contribution of the publications to good practice in HRA is presented. In the publications, studies based on the collection of data on maintenance-related failures, simulator runs and expert judgement are presented in order to extend the human reliability analysis database. Furthermore, methodological frameworks are presented to perform a comprehensive HRA, including shutdown conditions, to study reliability of decision making, and to study the effects of wrong human actions. In the last publication, an interdisciplinary approach to analysing human decision making is presented. The publications also include practical applications of the presented methodological frameworks. (orig.)

  6. Reliability of construction materials

    International Nuclear Information System (INIS)

    Merz, H.

    1976-01-01

    One can also speak of reliability with respect to materials. While for the reliability of components the MTBF (mean time between failures) is regarded as the main criterion, for materials this is replaced by possible failure mechanisms such as physical/chemical reaction mechanisms, disturbances of physical or chemical equilibrium, or other interactions or changes of the system. The main tasks of the reliability analysis of materials are therefore the prediction of the various causes of failure, the identification of interactions, and the development of nondestructive testing methods. (RW) [de

  7. Structural Reliability Methods

    DEFF Research Database (Denmark)

    Ditlevsen, Ove Dalager; Madsen, H. O.

    The structural reliability methods quantitatively treat the uncertainty of predicting the behaviour and properties of a structure given the uncertain properties of its geometry, materials, and the actions it is supposed to withstand. This book addresses the probabilistic methods for evaluation of structural reliability, including the theoretical basis for these methods. Partial safety factor codes under current practice are briefly introduced and discussed. A probabilistic code format for obtaining a formal reliability evaluation system that catches the most essential features of the nature of the uncertainties and their interplay is then developed, step by step. The concepts presented are illustrated by numerous examples throughout the text.

  8. RTE - 2013 Reliability Report

    International Nuclear Information System (INIS)

    Denis, Anne-Marie

    2014-01-01

    RTE publishes a yearly reliability report based on a standard model to facilitate comparisons and highlight long-term trends. The 2013 report not only states the facts of the Significant System Events (ESS), but also underlines the main elements concerning the reliability of the electrical power system. It highlights the various elements which contribute to present and future reliability and provides an overview of the interaction between the various stakeholders of the Electrical Power System on the scale of the European Interconnected Network. (author)

  9. Reliability of Power Units in Poland and the World

    OpenAIRE

    Józef Paska

    2015-01-01

    One of a power system’s subsystems is the generation subsystem consisting of power units, the reliability of which to a large extent determines the reliability of the power system and electricity supply to consumers. This paper presents definitions of the basic indices of power unit reliability used in Poland and in the world. They are compared and analysed on the basis of data published by the Energy Market Agency (Poland), NERC (North American Electric Reliability Corporation – USA), ...

  10. Validity and Reliability of the Turkish Chronic Pain Acceptance Questionnaire

    Directory of Open Access Journals (Sweden)

    Hazel Ekin Akmaz

    2018-05-01

    Background: Pain acceptance is the process of giving up the struggle with pain and learning to live a worthwhile life despite it. In assessing patients with chronic pain in Turkey, making a diagnosis and tracking the effectiveness of treatment are done with scales that have been translated into Turkish. However, there is as yet no valid and reliable scale in Turkish to assess the acceptance of pain. Aims: To validate a Turkish version of the Chronic Pain Acceptance Questionnaire developed by McCracken and colleagues. Study Design: Methodological and cross-sectional study. Methods: A simple randomized sampling method was used in selecting the study sample. The sample was composed of 201 patients, more than 10 times the number of items (20) examined for validity and reliability in the study. A patient identification form, the Chronic Pain Acceptance Questionnaire, and the Brief Pain Inventory were used to collect data. Data were collected by face-to-face interviews. In the validity testing, the content validity index was used to evaluate linguistic equivalence, content validity, construct validity, and expert views. In reliability testing of the scale, Cronbach’s α coefficient was calculated, and item analysis and split-half reliability methods were used. Principal component analysis and varimax rotation were used in factor analysis and to examine factor structure for construct validity. Results: The item analysis established that the scale, all items, and item-total correlations were satisfactory. The mean total score of the scale was 21.78. The internal consistency coefficient was 0.94, and the correlation between the two halves of the scale was 0.89. Conclusion: The Chronic Pain Acceptance Questionnaire, which is intended to be used in Turkey upon confirmation of its validity and reliability, is an evaluation instrument with sufficient validity and reliability, and it can be reliably used to examine patients’ acceptance

  11. Adjoint-consistent formulations of slip models for coupled electroosmotic flow systems

    KAUST Repository

    Garg, Vikram V

    2014-09-27

    Background Models based on the Helmholtz 'slip' approximation are often used for the simulation of electroosmotic flows. The objectives of this paper are to construct adjoint-consistent formulations of such models, and to develop adjoint-based numerical tools for adaptive mesh refinement and parameter sensitivity analysis. Methods We show that the direct formulation of the 'slip' model is adjoint inconsistent, and leads to an ill-posed adjoint problem. We propose a modified formulation of the coupled 'slip' model, which is shown to be well-posed, and therefore automatically adjoint-consistent. Results Numerical examples are presented to illustrate the computation and use of the adjoint solution in two-dimensional microfluidics problems. Conclusions An adjoint-consistent formulation for Helmholtz 'slip' models of electroosmotic flows has been proposed. This formulation provides adjoint solutions that can be reliably used for mesh refinement and sensitivity analysis.

  12. Finite element reliability analysis of fatigue life

    International Nuclear Information System (INIS)

    Harkness, H.H.; Belytschko, T.; Liu, W.K.

    1992-01-01

    Fatigue reliability is addressed by the first-order reliability method combined with a finite element method. Two-dimensional finite element models of components with cracks in mode I are considered with crack growth treated by the Paris law. Probability density functions of the variables affecting fatigue are proposed to reflect a setting where nondestructive evaluation is used, and the Rosenblatt transformation is employed to treat non-Gaussian random variables. Comparisons of the first-order reliability results and Monte Carlo simulations suggest that the accuracy of the first-order reliability method is quite good in this setting. Results show that the upper portion of the initial crack length probability density function is crucial to reliability, which suggests that if nondestructive evaluation is used, the probability of detection curve plays a key role in reliability. (orig.)
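
    The comparison of first-order reliability results with Monte Carlo simulation described above can be sketched, in a heavily simplified form, as a crude Monte Carlo estimate of fatigue failure probability under the Paris law. All parameter values below are invented for illustration and are not taken from the paper:

```python
import math
import random

# Paris law da/dN = C * (dK)^m with dK = Y * ds * sqrt(pi * a); closed-form
# integration from initial crack length a0 to critical length ac (m != 2).
def cycles_to_failure(a0, ac, C, m, ds, Y=1.12):
    k = C * (Y * ds * math.sqrt(math.pi)) ** m
    e = 1.0 - m / 2.0
    return (ac ** e - a0 ** e) / (k * e)

random.seed(42)
design_life = 2.0e5      # cycles the component must survive (invented)
n_samples = 20000
failures = 0
for _ in range(n_samples):
    # Initial crack length (m): lognormal, mimicking the distribution left
    # after nondestructive evaluation; stress range (Pa): normal.
    a0 = random.lognormvariate(math.log(1.0e-3), 0.8)
    ds = random.gauss(100.0e6, 10.0e6)
    N = cycles_to_failure(a0, ac=2.0e-2, C=1.0e-29, m=3.0, ds=ds)
    if N < design_life:
        failures += 1

pf = failures / n_samples   # Monte Carlo estimate of failure probability
```

    Larger initial cracks fail sooner, which is why the upper tail of the a0 distribution (and hence the probability-of-detection curve) dominates the result, as the abstract notes.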

  13. Approach to reliability assessment

    International Nuclear Information System (INIS)

    Green, A.E.; Bourne, A.J.

    1975-01-01

    Experience has shown that reliability assessments can play an important role in the early design and subsequent operation of technological systems where reliability is at a premium. The approaches and techniques for such assessments, which are outlined in the paper, have been successfully applied in a variety of applications ranging from individual equipment to large and complex systems. The general approach involves the logical and systematic establishment of the purpose, performance requirements and reliability criteria of systems. This is followed by an appraisal of likely system achievement based on an understanding of different types of variational behaviour. A fundamental reliability model emerges from the correlation between the appropriate Q and H functions for performance requirement and achievement. This model may cover the complete spectrum of performance behaviour in all the system dimensions

  14. A consistent interpretation of quantum mechanics

    International Nuclear Information System (INIS)

    Omnes, Roland

    1990-01-01

    Some mostly recent theoretical and mathematical advances can be linked together to yield a new consistent interpretation of quantum mechanics. It relies upon a unique and universal interpretative rule of a logical character which is based upon Griffiths' consistent histories. Some new results in semi-classical physics allow classical physics to be derived from this rule, including its logical aspects, and accordingly to prove the existence of determinism within the quantum framework. Together with decoherence, this can be used to retrieve the existence of facts, despite the probabilistic character of the theory. Measurement theory can then be made entirely deductive. It is accordingly found that wave packet reduction is a logical property, whereas one can always choose to avoid using it. The practical consequences of this interpretation are most often in agreement with the Copenhagen formulation, but they can be proved never to give rise to any logical inconsistency or paradox. (author)

  15. Contribution to high voltage matrix switches reliability

    International Nuclear Information System (INIS)

    Lausenaz, Yvan

    2000-01-01

    Nowadays, requirements on power electronic equipment are demanding with respect to performance, quality and reliability. On the other hand, costs have to be reduced in order to satisfy market rules. To provide low cost, reliability and performance, many standard components with mass production are developed. The construction of specific products can then be approached from two directions: on one hand, you can produce specific components, with delays, cost overruns and possibly quality and reliability problems; on the other hand, you can use standard components in adapted topologies. The CEA of Pierrelatte has adopted the latter approach to power electronics design for the development of its high voltage pulsed power converters. The technique consists in using standard components and associating them in series and in parallel. The matrix constitutes a high voltage macro-switch in which the electrical parameters are distributed among the synchronized components. This study deals with the reliability of these structures. It brings out the high reliability of MOSFET matrix associations. Thanks to several homemade test facilities, we obtained extensive data on the components we use. Understanding of the defect propagation mechanisms in matrix structures has allowed us to put forward the necessity of a robust drive system, adapted clamping voltage protection, and careful geometrical construction. All these reliability considerations in matrix associations have notably allowed the construction of a new matrix structure combining all the solutions that ensure reliability. Reliable and robust, this product has already reached the industrial stage. (author) [fr

  16. Structural systems reliability analysis

    International Nuclear Information System (INIS)

    Frangopol, D.

    1975-01-01

    For an exact evaluation of the reliability of a structure it appears necessary to determine the distribution densities of the loads and resistances and to calculate the correlation coefficients between loads and between resistances. These statistical characteristics can be obtained only on the basis of a long activity period. Where such studies are missing, the statistical properties formulated here give upper and lower bounds on the reliability. (orig./HP) [de
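
    As a minimal sketch of the kind of bounding mentioned (not the author's formulation), the failure probability of a series system whose element failure correlations are unknown can be bracketed by simple first-order bounds; the element probabilities below are invented:

```python
# First-order bounds for a series system: with fully dependent element
# failures the system is no worse than its weakest element (lower bound);
# with independent-or-worse elements the union bound applies (upper bound).
pf_elements = [1e-4, 5e-4, 2e-4]   # invented element failure probabilities

pf_lower = max(pf_elements)                 # full dependence
pf_upper = min(1.0, sum(pf_elements))       # union (Boole) bound

# Corresponding bracket on system reliability.
reliability_bounds = (1.0 - pf_upper, 1.0 - pf_lower)
```

    Knowing the correlation coefficients, as the abstract emphasizes, is exactly what allows narrowing this bracket.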

  17. Reliability and maintainability

    International Nuclear Information System (INIS)

    1994-01-01

    Several communications in this conference are concerned with nuclear plant reliability and maintainability; their titles are: maintenance optimization of stand-by Diesels of 900 MW nuclear power plants; CLAIRE: an event-based simulation tool for software testing; reliability as one important issue within the periodic safety review of nuclear power plants; design of nuclear building ventilation by the means of functional analysis; operation characteristic analysis for a power industry plant park, as a function of influence parameters

  18. Reliability data book

    International Nuclear Information System (INIS)

    Bento, J.P.; Boerje, S.; Ericsson, G.; Hasler, A.; Lyden, C.O.; Wallin, L.; Poern, K.; Aakerlund, O.

    1985-01-01

    The main objective of the report is to improve failure data for reliability calculations used in safety analyses of Swedish nuclear power plants. The work is based primarily on evaluations of failure reports as well as information provided by the operation and maintenance staff of each plant. The report presents charts of reliability data for pumps, valves, control rods/rod drives, electrical components, and instruments. (L.E.)

  19. STP: A mathematically and physically consistent library of steam properties

    International Nuclear Information System (INIS)

    Aguilar, F.; Hutter, A.C.; Tuttle, P.G.

    1982-01-01

    A new FORTRAN library of subroutines has been developed from the fundamental equation of Keenan et al. to evaluate a large set of water properties including derivatives such as sound speed and isothermal compressibility. The STP library uses the true saturation envelope of the Keenan et al. fundamental equation. The evaluation of the true envelope by a continuation method is explained. This envelope, along with other design features, imparts an exceptionally high degree of thermodynamic and mathematical consistency to the STP library, even at the critical point. Accuracy and smoothness, library self-consistency, and designed user convenience make the STP library a reliable and versatile water property package

  20. Multidisciplinary System Reliability Analysis

    Science.gov (United States)

    Mahadevan, Sankaran; Han, Song; Chamis, Christos C. (Technical Monitor)

    2001-01-01

    The objective of this study is to develop a new methodology for estimating the reliability of engineering systems that encompass multiple disciplines. The methodology is formulated in the context of the NESSUS probabilistic structural analysis code, developed under the leadership of NASA Glenn Research Center. The NESSUS code has been successfully applied to the reliability estimation of a variety of structural engineering systems. This study examines whether the features of NESSUS could be used to investigate the reliability of systems in other disciplines, such as heat transfer, fluid mechanics, and electrical circuits, without considerable programming effort specific to each discipline. To achieve this objective, the mechanical equivalence between system behavior models in different disciplines is investigated. A new methodology is presented for the analysis of heat transfer, fluid flow, and electrical circuit problems using the structural analysis routines within NESSUS, by utilizing the equivalence between the computational quantities in different disciplines. This technique is integrated with the fast probability integration and system reliability techniques within the NESSUS code to successfully compute the system reliability of multidisciplinary systems. Traditional as well as progressive failure analysis methods for system reliability estimation are demonstrated through a numerical example of a heat exchanger system involving failure modes in the structural, heat transfer and fluid flow disciplines.

  1. Optimal Reliability-Based Code Calibration

    DEFF Research Database (Denmark)

    Sørensen, John Dalsgaard; Kroon, I. B.; Faber, Michael Havbro

    1994-01-01

    Calibration of partial safety factors is considered in general, including classes of structures where no code exists beforehand. The partial safety factors are determined such that the difference between the reliability for the different structures in the class considered and a target reliability level is minimized. Code calibration on a decision-theoretical basis is also considered, and it is shown how target reliability indices can be calibrated. Results from code calibration for rubble mound breakwater designs are shown.
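
    A toy version of the calibration idea, minimizing the deviation of the reliability index from a target over a grid of candidate partial safety factors for an invented linear limit state (not the paper's breakwater example), might look like:

```python
import math

# Toy code calibration sketch. For the linear limit state g = z*R - S with
# normal resistance R and load S, a trial resistance factor gamma_r fixes the
# design variable z through the design equation; we pick the gamma_r whose
# resulting reliability index beta is closest to the target. All numbers are
# invented for illustration.
MU_R, SD_R = 1.0, 0.10          # resistance (normalised)
MU_S, SD_S = 0.5, 0.10          # load
R_K = MU_R - 1.645 * SD_R       # 5% characteristic resistance
S_K = MU_S + 1.645 * SD_S       # 95% characteristic load
GAMMA_S = 1.35                  # fixed load factor
BETA_TARGET = 4.5               # target reliability index

def beta(gamma_r):
    z = GAMMA_S * S_K / (R_K / gamma_r)   # design equation fixes z
    return (z * MU_R - MU_S) / math.sqrt((z * SD_R) ** 2 + SD_S ** 2)

# Grid search over candidate resistance factors 1.00 .. 3.00.
best_gamma = min((g / 100.0 for g in range(100, 301)),
                 key=lambda g: (beta(g) - BETA_TARGET) ** 2)
```

    Real calibration, as in the paper, minimizes a weighted sum of such deviations over a whole class of structures rather than a single limit state.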

  2. Reliability of the Fermilab Antiproton Source

    International Nuclear Information System (INIS)

    Harms, E. Jr.

    1993-05-01

    This paper reports on the reliability of the Fermilab Antiproton source since it began operation in 1985. Reliability of the complex as a whole as well as subsystem performance is summarized. Also discussed is the trending done to determine causes of significant machine downtime and actions taken to reduce the incidence of failure. Finally, results of a study to detect previously unidentified reliability limitations are presented

  3. Quality assurance and reliability

    International Nuclear Information System (INIS)

    Normand, J.; Charon, M.

    1975-01-01

    Concern for obtaining high-quality products which will function properly when required to do so is nothing new - it is one manifestation of a conscientious attitude to work. However, the complexity and cost of equipment and the consequences of even temporary immobilization are such that it has become necessary to make special arrangements for obtaining high-quality products and examining what one has obtained. Each unit within an enterprise must examine its own work or arrange for it to be examined; a unit whose specific task is quality assurance is responsible for overall checking, but does not relieve other units of their responsibility. Quality assurance is a form of mutual assistance within an enterprise, designed to remove the causes of faults as far as possible. It begins very early in a project and continues through the ordering stage, construction, start-up trials and operation. Quality and hence reliability are the direct result of what is done at all stages of a project. They depend on constant attention to detail, for even a minor piece of poor workmanship can, in the case of an essential item of equipment, give rise to serious operational difficulties

  4. Analysis and Application of Reliability

    International Nuclear Information System (INIS)

    Jeong, Hae Seong; Park, Dong Ho; Kim, Jae Ju

    1999-05-01

    This book covers the analysis and application of reliability: the definition, importance and historical background of reliability; reliability functions and failure rates; life distributions and reliability assumptions; reliability of non-repairable systems; reliability of repairable systems; reliability sampling tests; failure analysis, including failure analysis by FMEA and FTA, with cases; accelerated life testing, including basic concepts, acceleration and acceleration factors, and the analysis of accelerated life test data; and maintenance policy regarding replacement and inspection.

  5. Workplace Bullying Scale: The Study of Validity and Reliability

    Directory of Open Access Journals (Sweden)

    Nizamettin Doğar

    2015-01-01

    The aim of this research is to adapt the Workplace Bullying Scale (Tınaz, Gök & Karatuna, 2013) to the Albanian language and to examine its psychometric properties. The research was conducted on 386 persons from different sectors of Albania. Results of exploratory and confirmatory factor analysis demonstrated that the Albanian scale yielded 2 factors, differing from the original form because of cultural differences. Internal consistency coefficients are .890 and .801, and split-half test reliability coefficients are .864 and .808. Confirmatory factor analysis results range from .40 to .73. Corrected item-total correlations ranged from .339 to .672, and according to t-test results the differences between each item’s means for the upper 27% and lower 27% groups were significant. Thus the Workplace Bullying Scale can be used as a valid and reliable instrument in the social sciences in Albania.

  6. Prediction of safety critical software operational reliability from test reliability using testing environment factors

    International Nuclear Information System (INIS)

    Jung, Hoan Sung; Seong, Poong Hyun

    1999-01-01

    Predicting the reliability of safety critical software has been a critical issue in the nuclear engineering area. For many years, research has focused on the quantification of software reliability, and many models have been developed to quantify it. Most software reliability models estimate the reliability from the failure data collected during testing, assuming that the test environments represent the operational profile well. Users' interest, however, is in the operational reliability rather than the test reliability. Experience shows that the operational reliability is higher than the test reliability. Under the assumption that the difference in reliability results from the change of environment from testing to operation, testing environment factors comprising an aging factor and a coverage factor are developed in this paper and used to predict the ultimate operational reliability from the failure data of the testing phase. This is done by incorporating test environments applied beyond the operational profile into the testing environment factors. The application results show that the proposed method can estimate the operational reliability accurately. (Author). 14 refs., 1 tab., 1 fig

  7. Reliabilities of mental rotation tasks: limits to the assessment of individual differences.

    Science.gov (United States)

    Hirschfeld, Gerrit; Thielsch, Meinald T; Zernikow, Boris

    2013-01-01

    Mental rotation tasks with objects and body parts as targets are widely used in cognitive neuropsychology. Even though these tasks are well established for studying between-group differences, their reliability on an individual level is largely unknown. We present a systematic study on the internal consistency and test-retest reliability of individual differences in mental rotation tasks, comparing different target types and orders of presentation. In total n = 99 participants (n = 63 for the retest) completed the mental rotation tasks with hands, feet, faces, and cars as targets. Different target types were presented in either randomly mixed blocks or blocks of homogeneous targets. Across all target types, the consistency (split-half reliability) and stability (test-retest reliability) were good or acceptable both for intercepts and slopes. At the level of individual targets, only intercepts showed acceptable reliabilities. Blocked presentations resulted in significantly faster and numerically more consistent and stable responses. Mental rotation tasks, especially in blocked variants, can be used to reliably assess individual differences in global processing speed. However, the assessment of the theoretically important slope parameter for individual targets requires further adaptations of mental rotation tests.

  8. Individual consistency and flexibility in human social information use.

    Science.gov (United States)

    Toelch, Ulf; Bruce, Matthew J; Newson, Lesley; Richerson, Peter J; Reader, Simon M

    2014-02-07

    Copying others appears to be a cost-effective way of obtaining adaptive information, particularly when flexibly employed. However, adult humans differ considerably in their propensity to use information from others, even when this 'social information' is beneficial, raising the possibility that stable individual differences constrain flexibility in social information use. We used two dissimilar decision-making computer games to investigate whether individuals flexibly adjusted their use of social information to current conditions or whether they valued social information similarly in both games. Participants also completed established personality questionnaires. We found that participants demonstrated considerable flexibility, adjusting social information use to current conditions. In particular, individuals employed a 'copy-when-uncertain' social learning strategy, supporting a core, but untested, assumption of influential theoretical models of cultural transmission. Moreover, participants adjusted the amount invested in their decision based on the perceived reliability of personally gathered information combined with the available social information. However, despite this strategic flexibility, participants also exhibited consistent individual differences in their propensities to use and value social information. Moreover, individuals who favoured social information self-reported as more collectivist than others. We discuss the implications of our results for social information use and cultural transmission.

  9. Classifier Fusion With Contextual Reliability Evaluation.

    Science.gov (United States)

    Liu, Zhunga; Pan, Quan; Dezert, Jean; Han, Jun-Wei; He, You

    2018-05-01

    Classifier fusion is an efficient strategy to improve classification performance for complex pattern recognition problems. In practice, the multiple classifiers to be combined can have different reliabilities, and proper reliability evaluation plays an important role in the fusion process for obtaining the best classification performance. We propose a new method for classifier fusion with contextual reliability evaluation (CF-CRE) based on inner reliability and relative reliability concepts. The inner reliability, represented by a matrix, characterizes the probability of the object belonging to one class when it is classified to another class. The elements of this matrix are estimated from the k-nearest neighbors of the object. A cautious discounting rule is developed under the belief functions framework to revise the classification result according to the inner reliability. The relative reliability is evaluated based on a new incompatibility measure which makes it possible to reduce the level of conflict between the classifiers by applying the classical evidence discounting rule to each classifier before their combination. The inner reliability and relative reliability capture different aspects of the classification reliability. The discounted classification results are combined with Dempster-Shafer's rule for the final class decision making support. The performance of CF-CRE has been evaluated and compared with those of the main classical fusion methods using real data sets. The experimental results show that CF-CRE can produce substantially higher accuracy than other fusion methods in general. Moreover, CF-CRE is robust to changes in the number of nearest neighbors chosen for estimating the reliability matrix, which is appealing for applications.
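
    Two belief-function operations this abstract builds on, classical (Shafer) discounting by a reliability factor and Dempster's rule of combination, can be sketched on a two-class frame. The masses and reliability factors below are invented, and this is not the CF-CRE algorithm itself:

```python
# Frame of discernment {a, b}; "O" denotes the whole frame Omega = {a, b}.

def discount(m, alpha):
    """Classical discounting: keep a fraction alpha of each focal mass and
    move the remainder to Omega (total ignorance)."""
    out = {A: alpha * v for A, v in m.items() if A != "O"}
    out["O"] = 1.0 - alpha + alpha * m.get("O", 0.0)
    return out

def dempster(m1, m2):
    """Dempster's rule for singleton/Omega focal elements on frame {a, b}."""
    def meet(A, B):                       # set intersection on this frame
        if A == "O": return B
        if B == "O": return A
        return A if A == B else None      # distinct singletons conflict
    joint, conflict = {}, 0.0
    for A, v in m1.items():
        for B, w in m2.items():
            C = meet(A, B)
            if C is None:
                conflict += v * w
            else:
                joint[C] = joint.get(C, 0.0) + v * w
    return {C: v / (1.0 - conflict) for C, v in joint.items()}

m1 = discount({"a": 0.8, "b": 0.2}, alpha=0.9)   # more reliable classifier
m2 = discount({"a": 0.3, "b": 0.7}, alpha=0.5)   # less reliable classifier
fused = dempster(m1, m2)
```

    Discounting the unreliable classifier before combination is what keeps its dissenting vote from dominating the fused decision.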

  10. Orthology and paralogy constraints: satisfiability and consistency.

    Science.gov (United States)

    Lafond, Manuel; El-Mabrouk, Nadia

    2014-01-01

    A variety of methods based on sequence similarity, reconciliation, synteny or functional characteristics can be used to infer orthology and paralogy relations between genes of a given gene family G. But is a given set C of orthology/paralogy constraints possible, i.e., can they simultaneously co-exist in an evolutionary history for G? While previous studies have focused on full sets of constraints, here we consider the general case where C does not necessarily involve a constraint for each pair of genes. The problem is subdivided into two parts: (1) Is C satisfiable, i.e., can we find an event-labeled gene tree G inducing C? (2) Is there such a G which is consistent, i.e., such that all displayed triplet phylogenies are included in a species tree? Previous results on the Graph sandwich problem can be used to answer (1), and we provide polynomial-time algorithms for satisfiability and consistency with a given species tree. We also describe a new polynomial-time algorithm for the case of consistency with an unknown species tree and full knowledge of pairwise orthology/paralogy relationships, as well as a branch-and-bound algorithm for the case when unknown relations are present. We show that our algorithms can be used in combination with ProteinOrtho, a sequence similarity-based orthology detection tool, to extract a set of robust orthology/paralogy relationships.

  11. Test Reliability at the Individual Level

    Science.gov (United States)

    Hu, Yueqin; Nesselroade, John R.; Erbacher, Monica K.; Boker, Steven M.; Burt, S. Alexandra; Keel, Pamela K.; Neale, Michael C.; Sisk, Cheryl L.; Klump, Kelly

    2016-01-01

    Reliability has a long history as one of the key psychometric properties of a test. However, a given test might not measure people equally reliably. Test scores from some individuals may have considerably greater error than others. This study proposed two approaches using intraindividual variation to estimate test reliability for each person. A simulation study suggested that the parallel tests approach and the structural equation modeling approach recovered the simulated reliability coefficients. Then in an empirical study, where forty-five females were measured daily on the Positive and Negative Affect Schedule (PANAS) for 45 consecutive days, separate estimates of reliability were generated for each person. Results showed that reliability estimates of the PANAS varied substantially from person to person. The methods provided in this article apply to tests measuring changeable attributes and require repeated measures across time on each individual. This article also provides a set of parallel forms of PANAS. PMID:28936107

  12. Reliability analysis of reactor pressure vessel intensity

    International Nuclear Information System (INIS)

    Zheng Liangang; Lu Yongbo

    2012-01-01

    This paper performs a reliability analysis of the reactor pressure vessel (RPV) with ANSYS. The analysis methods include the direct Monte Carlo simulation method, Latin hypercube sampling, central composite design and Box-Behnken matrix design. The RPV integrity reliability under the given input conditions is presented. The results show that the factors affecting the RPV base material reliability are, in descending order, internal pressure, allowable basic stress and elasticity modulus of the base material, and the factors affecting the bolt reliability are, in descending order, allowable basic stress of the bolt material, bolt preload and internal pressure. (authors)
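
    Of the sampling schemes named above, Latin hypercube sampling is the easiest to sketch. The following minimal implementation (an illustration of the scheme, not the ANSYS routine) splits each dimension into n equal-probability strata, draws one sample per stratum, and pairs strata across dimensions by random permutation:

```python
import random

def latin_hypercube(n, d, rng):
    """Return n points in the unit hypercube [0,1)^d, one per stratum
    in every dimension."""
    cols = []
    for _ in range(d):
        strata = list(range(n))
        rng.shuffle(strata)                       # random stratum pairing
        # One uniform draw inside each of the n equal-width strata.
        cols.append([(s + rng.random()) / n for s in strata])
    return [tuple(col[i] for col in cols) for i in range(n)]

rng = random.Random(0)
samples = latin_hypercube(10, 2, rng)
```

    Compared with plain Monte Carlo, the stratification guarantees that every marginal input range is covered, which is why far fewer samples suffice for the variance-based rankings reported above.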

  13. Safety and reliability of automatization software

    Energy Technology Data Exchange (ETDEWEB)

    Kapp, K; Daum, R [Karlsruhe Univ. (TH) (Germany, F.R.). Lehrstuhl fuer Angewandte Informatik, Transport- und Verkehrssysteme

    1979-02-01

    Automated technical systems have to meet very high requirements concerning safety, security and reliability. Today, modern computers, especially microcomputers, are used as integral parts of those systems. In consequence, computer programs must work in a safe and reliable manner. Methods are discussed which allow the construction of safe and reliable software for automatic systems such as reactor protection systems, and which prove that the safety requirements are met. As a result it is shown that only the method of total software diversification can satisfy all safety requirements at tolerable cost. In order to achieve a high degree of reliability, structured and modular programming in the context of high-level programming languages is recommended.

  14. Consistency Checking of Web Service Contracts

    DEFF Research Database (Denmark)

    Cambronero, M. Emilia; Okika, Joseph C.; Ravn, Anders Peter

    2008-01-01

    Behavioural properties are analyzed for web service contracts formulated in Business Process Execution Language (BPEL) and Choreography Description Language (CDL). The key result reported is an automated technique to check consistency between protocol aspects of the contracts. The contracts are abstracted to (timed) automata and from there a simulation is set up, which is checked using automated tools for analyzing networks of finite state processes. Here we use the Concurrency Workbench. The proposed techniques are illustrated with a case study that includes otherwise difficult to analyze fault

  15. Validity and Reliability of the Upper Extremity Work Demands Scale.

    Science.gov (United States)

    Jacobs, Nora W; Berduszek, Redmar J; Dijkstra, Pieter U; van der Sluis, Corry K

    2017-12-01

    Purpose To evaluate validity and reliability of the upper extremity work demands (UEWD) scale. Methods Participants from different levels of physical work demands, based on the Dictionary of Occupational Titles categories, were included. A historical database of 74 workers was added for factor analysis. Criterion validity was evaluated by comparing observed and self-reported UEWD scores. To assess structural validity, a factor analysis was executed. For reliability, the difference between two self-reported UEWD scores, the smallest detectable change (SDC), test-retest reliability and internal consistency were determined. Results Fifty-four participants were observed at work and 51 of them filled in the UEWD twice with a mean interval of 16.6 days (SD 3.3, range = 10-25 days). Criterion validity of the UEWD scale was moderate (r = .44, p = .001). Factor analysis revealed that 'force and posture' and 'repetition' subscales could be distinguished with Cronbach's alpha of .79 and .84, respectively. Reliability was good; there was no significant difference between repeated measurements. An SDC of 5.0 was found. Test-retest reliability was good (intraclass correlation coefficient for agreement = .84) and all item-total correlations were >.30. There were two pairs of highly related items. Conclusion Reliability of the UEWD scale was good, but criterion validity was moderate. Based on current results, a modified UEWD scale (2 items removed, 1 item reworded, divided into 2 subscales) was proposed. Since observation appeared to be an inappropriate gold standard, we advise to investigate other types of validity, such as construct validity, in further research.

  16. Safety and reliability criteria

    International Nuclear Information System (INIS)

    O'Neil, R.

    1978-01-01

    Nuclear power plants and, in particular, reactor pressure boundary components have unique reliability requirements, in that usually no significant redundancy is possible, and a single failure can give rise to possible widespread core damage and fission product release. Reliability may be required for availability or safety reasons, but in the case of the pressure boundary and certain other systems safety may dominate. Possible Safety and Reliability (S and R) criteria are proposed which would produce acceptable reactor design. Without some S and R requirement the designer has no way of knowing how far he must go in analysing his system or component, or whether his proposed solution is likely to gain acceptance. The paper shows how reliability targets for given components and systems can be individually considered against the derived S and R criteria at the design and construction stage. Since in the case of nuclear pressure boundary components there is often very little direct experience on which to base reliability studies, relevant non-nuclear experience is examined. (author)

  17. Proposed reliability cost model

    Science.gov (United States)

    Delionback, L. M.

    1973-01-01

    The research investigations involved in the study include: cost analysis/allocation, reliability and product assurance, forecasting methodology, systems analysis, and model-building. This is a classic example of an interdisciplinary problem, since the model-building requirements include the need for understanding and communication between technical disciplines on one hand, and the financial/accounting skill categories on the other. The systems approach is utilized within this context to establish a clearer and more objective relationship between reliability assurance and the subcategories (or subelements) that provide, or reinforce, the reliability assurance for a system. Subcategories are further subdivided as illustrated by a tree diagram. The reliability assurance elements can be seen to be potential alternative strategies, or approaches, depending on the specific goals/objectives of the trade studies. The scope was limited to the establishment of a proposed reliability cost-model format. The model format/approach depends upon the use of a series of subsystem-oriented CERs and, where possible, CTRs, in devising a suitable cost-effective policy.

  18. Consistency assessment of rating curve data in various locations using Bidirectional Reach (BReach)

    Science.gov (United States)

    Van Eerdenbrugh, Katrien; Van Hoey, Stijn; Coxon, Gemma; Freer, Jim; Verhoest, Niko E. C.

    2017-10-01

    When estimating discharges through rating curves, temporal data consistency is a critical issue. In this research, consistency in stage-discharge data is investigated using a methodology called Bidirectional Reach (BReach), which departs from a definition of consistency commonly used in operational hydrology. A period is considered to be consistent if no consecutive and systematic deviations from a current situation occur that exceed observational uncertainty. Therefore, the capability of a rating curve model to describe a subset of the (chronologically sorted) data is assessed in each observation by indicating the outermost data points for which the rating curve model behaves satisfactorily. These points are called the maximum left or right reach, depending on the direction of the investigation. This temporal reach should not be confused with a spatial reach (indicating a part of a river). Changes in these reaches throughout the data series indicate possible changes in data consistency and, if not resolved, could introduce additional errors and biases. In this research, various measurement stations in the UK, New Zealand and Belgium are selected based on their significant historical ratings information and their specific characteristics related to data consistency. For each country, regional information is maximally used to estimate observational uncertainty. Based on this uncertainty, a BReach analysis is performed and, subsequently, results are validated against available knowledge about the history and behavior of the site. For all investigated cases, the methodology provides results that appear to be consistent with this knowledge of historical changes and thus facilitates a reliable assessment of (in)consistent periods in stage-discharge measurements. This assessment is not only useful for the analysis and determination of discharge time series, but also to enhance applications based on these data (e.g., by informing hydrological and hydraulic model
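
    A strongly simplified sketch of the idea follows: with a fixed rating curve Q = a*h**b and a relative uncertainty band, maximal runs of acceptable observations can be located, and a break between runs flags a possible consistency change. The real BReach method refits the curve per data subset and tolerates non-systematic outliers; all numbers here are invented:

```python
def consistent_runs(ok):
    """Maximal runs of indices whose observations fit the rating curve."""
    runs, start = [], None
    for i, flag in enumerate(ok):
        if flag and start is None:
            start = i
        elif not flag and start is not None:
            runs.append((start, i - 1))
            start = None
    if start is not None:
        runs.append((start, len(ok) - 1))
    return runs

a, b, tol = 2.0, 1.5, 0.10   # assumed rating curve Q = a*h**b, 10% band
stages = [1.0, 1.5, 2.0, 2.5, 3.0, 1.2, 1.8, 2.6]
# first five stage-discharge pairs follow the curve; the last three simulate
# a shifted control (e.g. a channel change), so the curve no longer fits them
discharges = [a * h ** b for h in stages[:5]] + \
             [1.3 * a * h ** b for h in stages[5:]]
ok = [abs(q - a * h ** b) <= tol * q for h, q in zip(stages, discharges)]
print(consistent_runs(ok))
```

    The single run ending before the shifted points is the kind of signature that, in the full method, would prompt a closer look at the station's history.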

  19. Internal consistency of a Spanish translation of the Francis Scale of Attitude Toward Christianity Short Form.

    Science.gov (United States)

    Campo-Arias, Adalberto; Oviedo, Heidi Celina; Díaz, Carmen Elena; Cogollo, Zuleima

    2006-12-01

    This study evaluated the internal consistency of a Spanish version of the short form of the Francis Scale of Attitude Toward Christianity based on responses of 405 Colombian adolescent students ages 13 to 17 years. This translated short-form version of the scale had an internal consistency of .80. This estimate indicates suitable internal consistency reliability for research use in this population.
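
    The internal consistency figure reported here is a Cronbach's alpha. As a hedged illustration (invented item scores, not the Colombian sample), the statistic can be computed directly from an item-by-respondent matrix:

```python
def cronbach_alpha(items):
    """items: one list of scores per item; columns are respondents."""
    k, n = len(items), len(items[0])
    def var(xs):  # sample variance
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)
    totals = [sum(item[i] for item in items) for i in range(n)]
    return k / (k - 1) * (1 - sum(var(it) for it in items) / var(totals))

# three invented items that track a common construct closely
items = [[1, 2, 3, 4, 5, 6],
         [2, 2, 3, 5, 5, 6],
         [1, 3, 3, 4, 6, 6]]
print(round(cronbach_alpha(items), 3))
```

    Alpha rises toward 1 as item variances shrink relative to the variance of the total score, which is why highly intercorrelated items yield values like the .80 reported above.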

  20. The effect of introducing increased-reliability-risk electronic components into 3rd generation telecommunications systems

    International Nuclear Information System (INIS)

    Salmela, Olli

    2005-01-01

    In this paper, the dependability of 3rd generation telecommunications network systems is studied. Special attention is paid to a case where increased-reliability-risk electronic components are introduced to the system. The paper consists of three parts: First, the reliability data of four electronic components is considered. This includes statistical analysis of the reliability test data, thermo-mechanical finite element analysis of the printed wiring board structures, and based on those, a field reliability estimate of the components is constructed. Second, the component level reliability data is introduced into the network element reliability analysis. This is accomplished by using a reliability block diagram technique and Monte Carlo simulation of the network element. The end result of the second part is a reliability estimate of the network element with and without the high-risk component. Third, the whole 3rd generation network having multiple network elements is analyzed. In this part, the criticality of introducing high-risk electronic components into a 3rd generation telecommunications network is considered
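
    The second step of the analysis, a reliability block diagram evaluated by Monte Carlo simulation, can be sketched as follows. The topology (one common unit in series with two redundant cards) and all failure probabilities are invented for illustration; the paper's actual models are more detailed:

```python
import random

def simulate(p_fail_common, p_fail_card, n_cards, trials, seed=1):
    """Monte Carlo estimate of network-element availability for a simple
    series-parallel reliability block diagram."""
    rng = random.Random(seed)
    up = 0
    for _ in range(trials):
        common_ok = rng.random() >= p_fail_common
        # parallel redundancy: the element needs at least one working card
        any_card_ok = any(rng.random() >= p_fail_card for _ in range(n_cards))
        up += common_ok and any_card_ok
    return up / trials

est = simulate(0.01, 0.1, 2, 100_000)
analytic = (1 - 0.01) * (1 - 0.1 ** 2)   # series x parallel, for comparison
print(round(est, 4), round(analytic, 4))
```

    Rerunning with a higher card failure probability shows how a single increased-reliability-risk component propagates into the element-level estimate.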

  1. Nodal price volatility reduction and reliability enhancement of restructured power systems considering demand-price elasticity

    International Nuclear Information System (INIS)

    Goel, L.; Wu, Qiuwei; Wang, Peng

    2008-01-01

    With the development of restructured power systems, the conventional 'same for all customers' electricity price is getting replaced by nodal prices. Electricity prices will fluctuate with time and nodes. In restructured power systems, electricity demands will interact mutually with prices. Customers may shift some of their electricity consumption from time slots of high electricity prices to those of low electricity prices if there is a commensurate price incentive. The demand side load shift will influence nodal prices in return. This interaction between demand and price can be depicted using demand-price elasticity. This paper proposes an evaluation technique incorporating the impact of the demand-price elasticity on nodal prices, system reliability and nodal reliabilities of restructured power systems. In this technique, demand and price correlations are represented using the demand-price elasticity matrix which consists of self/cross-elasticity coefficients. Nodal prices are determined using optimal power flow (OPF). The OPF and customer damage functions (CDFs) are combined in the proposed reliability evaluation technique to assess the reliability enhancement of restructured power systems considering demand-price elasticity. The IEEE reliability test system (RTS) is simulated to illustrate the developed techniques. The simulation results show that demand-price elasticity reduces the nodal price volatility and improves both the system reliability and nodal reliabilities of restructured power systems. Demand-price elasticity can therefore be utilized as a possible efficient tool to reduce price volatility and to enhance the reliability of restructured power systems. (author)
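
    The demand-price interaction described above can be sketched with a toy elasticity matrix (self-elasticities on the diagonal, cross-elasticities off-diagonal). The matrix entries, periods and demands are invented; the paper combines such a matrix with OPF-derived nodal prices rather than fixed price changes:

```python
def demand_response(demand, elasticity, rel_price_change):
    """New demand per period i: D_i * (1 + sum_j E[i][j] * dp_j / p_j)."""
    n = len(demand)
    return [demand[i] * (1 + sum(elasticity[i][j] * rel_price_change[j]
                                 for j in range(n)))
            for i in range(n)]

# two periods: peak and off-peak; negative self-elasticity cuts demand in
# the period whose price rises, positive cross-elasticity shifts it across
E = [[-0.2, 0.1],
     [0.1, -0.2]]
base = [100.0, 60.0]        # MW in peak and off-peak
dp = [0.10, -0.05]          # peak price +10%, off-peak price -5%
print(demand_response(base, E, dp))
```

    The load shifted away from the high-price period is exactly the mechanism the paper credits with damping nodal price volatility.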

  2. The effect of introducing increased-reliability-risk electronic components into 3rd generation telecommunications systems

    Energy Technology Data Exchange (ETDEWEB)

    Salmela, Olli [Nokia Networks, P.O. Box 301, 00045 Nokia Group (Finland)]. E-mail: olli.salmela@nokia.com

    2005-08-01

    In this paper, the dependability of 3rd generation telecommunications network systems is studied. Special attention is paid to a case where increased-reliability-risk electronic components are introduced to the system. The paper consists of three parts: First, the reliability data of four electronic components is considered. This includes statistical analysis of the reliability test data, thermo-mechanical finite element analysis of the printed wiring board structures, and based on those, a field reliability estimate of the components is constructed. Second, the component level reliability data is introduced into the network element reliability analysis. This is accomplished by using a reliability block diagram technique and Monte Carlo simulation of the network element. The end result of the second part is a reliability estimate of the network element with and without the high-risk component. Third, the whole 3rd generation network having multiple network elements is analyzed. In this part, the criticality of introducing high-risk electronic components into a 3rd generation telecommunications network is considered.

  3. A reliability study of the new sensors for movement analysis (SHARIF-HMIS).

    Science.gov (United States)

    Abedi, Mohen; Manshadi, Farideh Dehghan; Zavieh, Minoo Khalkhali; Ashouri, Sajad; Azimi, Hadi; Parnanpour, Mohamad

    2016-04-01

    SHARIF-HMIS is a new inertial sensor designed for movement analysis. The aim of the present study was to assess the inter-tester and intra-tester reliability of several kinematic parameters in different lumbar motions using this sensor. Twenty-four healthy persons and 28 patients with low back pain participated in the reliability study. The test was performed in five different lumbar motions consisting of lumbar flexion at 0, 15, and 30° in the right and left directions. To measure inter-tester reliability, all tests were carried out twice on the same day, separately, by two physiotherapists. Intra-tester reliability was assessed by reproducing the tests after 3 days with the same physiotherapist. The study revealed satisfactory inter- and intra-tester reliability indices in different positions. ICCs for intra-tester reliability ranged from 0.65 to 0.98 and 0.59 to 0.81 for healthy and patient participants, respectively. ICCs for inter-tester reliability ranged from 0.65 to 0.92 for healthy and 0.65 to 0.87 for patient participants. In general, the results indicate that measuring kinematic parameters in lumbar movements using inertial sensors has acceptable reliability. Copyright © 2015 Elsevier Ltd. All rights reserved.

  4. Depression, Anxiety and Stress Scale (DASS): The Study of Validity and Reliability

    Science.gov (United States)

    Basha, Ertan; Kaya, Mehmet

    2016-01-01

    The purpose of this study is to examine the validity and reliability of the Albanian version of the Depression, Anxiety and Stress Scale (DASS), developed by Lovibond and Lovibond (1995). The sample of this study consisted of 555 subjects living in Kosovo. The results of confirmatory factor analysis indicated 42 items loaded on…

  5. Reliability Centered Maintenance - Methodologies

    Science.gov (United States)

    Kammerer, Catherine C.

    2009-01-01

    Journal article about Reliability Centered Maintenance (RCM) methodologies used by United Space Alliance, LLC (USA) in support of the Space Shuttle Program at Kennedy Space Center. The USA Reliability Centered Maintenance program differs from traditional RCM programs because various methodologies are utilized to take advantage of their respective strengths for each application. Based on operational experience, USA has customized the traditional RCM methodology into a streamlined lean logic path and has implemented the use of statistical tools to drive the process. USA RCM has integrated many of the L6S tools into both RCM methodologies. The tools utilized in the Measure, Analyze, and Improve phases of a Lean Six Sigma project lend themselves to application in the RCM process. All USA RCM methodologies meet the requirements defined in SAE JA 1011, Evaluation Criteria for Reliability-Centered Maintenance (RCM) Processes. The proposed article explores these methodologies.

  6. Issues in cognitive reliability

    International Nuclear Information System (INIS)

    Woods, D.D.; Hitchler, M.J.; Rumancik, J.A.

    1984-01-01

    This chapter examines some problems in current methods for assessing reactor operator reliability in cognitive tasks and discusses new approaches to solving them. The two types of human failure are errors in the execution of an intention and errors in the formation or selection of an intention. Topics considered include types of description, error correction, cognitive performance and response time, the speed-accuracy tradeoff function, function-based task analysis, and cognitive task analysis. One problem of human reliability analysis (HRA) techniques in general is the question of what the units of behavior are whose reliability is to be determined. A second problem for HRA is that people often detect and correct their errors. The use of function-based analysis, which maps the problem space for plant control, is recommended

  7. Reliability of PWR type nuclear power plants

    International Nuclear Information System (INIS)

    Ribeiro, A.A.T.; Muniz, A.A.

    1978-12-01

    Results of the analysis of factors influencing the reliability of international nuclear power plants of the PWR type are presented. The reliability factor is estimated and the probability of its having lower values than a certain specified value is discussed. (Author)

  8. Internal Consistency and Convergent Validity of the Klontz Money Behavior Inventory (KMBI)

    Directory of Open Access Journals (Sweden)

    Colby D. Taylor

    2015-12-01

    The Klontz Money Behavior Inventory (KMBI) is a standalone, multi-scale measure that can screen for the presence of eight distinct money disorders. Given the well-established relationship between mental health and financial behaviors, results from the KMBI can be used to inform both mental health care professionals and financial planners. The present study examined the internal consistency and convergent validity of the KMBI, through comparison with similar measures, among a sample of college students (n = 232). Results indicate that the KMBI demonstrates acceptable internal consistency reliability and some convergence for most subscales when compared to other analogous measures. These findings highlight a need for literature and assessments to identify and describe disordered money behaviors.

  9. Reliability issues in PACS

    Science.gov (United States)

    Taira, Ricky K.; Chan, Kelby K.; Stewart, Brent K.; Weinberg, Wolfram S.

    1991-07-01

    Reliability is an increasing concern when moving PACS from the experimental laboratory to the clinical environment. Any system downtime may seriously affect patient care. The authors report on the several classes of errors encountered during the pre-clinical release of the PACS during the past several months and present the solutions implemented to handle them. The reliability issues discussed include: (1) environmental precautions, (2) database backups, (3) monitor routines of critical resources and processes, (4) hardware redundancy (networks, archives), and (5) development of a PACS quality control program.

  10. Reliability Parts Derating Guidelines

    Science.gov (United States)

    1982-06-01

    "Reliability of GaAs Injection Lasers", De Loach, B. C., Jr., 1973 IEEE/OSA Conference on Laser Engineering and ...; IEEE Trans. Reliability, Vol. R-23, No. 4, 226-30, October 1974. ... operation at ... deg C, mounted on a 4-inch square, 0.250-inch-thick alloy aluminum panel. This mounting technique should be taken into consideration.

  11. Reliability parameters of distribution networks components

    Energy Technology Data Exchange (ETDEWEB)

    Gono, R.; Kratky, M.; Rusek, S.; Kral, V. [Technical Univ. of Ostrava (Czech Republic)

    2009-03-11

    This paper presented a framework for the retrieval of parameters from various heterogeneous power system databases. The framework was designed to transform the heterogeneous outage data into a common relational scheme. The framework was used to retrieve outage data parameters from the Czech and Slovak Republics in order to demonstrate its scalability. A reliability computation of the system was performed in 2 phases: the retrieval of component reliability parameters, and the reliability computation itself. Reliability rates were determined using component reliability and global reliability indices. Input data for the reliability analysis were retrieved from data on equipment operating under similar conditions, while the probability of failure-free operation was evaluated by determining component status. Anomalies in distribution outage data were described as scheme, attribute, and term differences. Input types consisted of input relations; transformation programs; codebooks; and translation tables. The system was used to successfully retrieve data from 7 distributors in the Czech Republic and Slovak Republic between 2000-2007. The database included 301,555 records. Data were queried using SQL language. 29 refs., 2 tabs., 2 figs.
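
    The second phase described above, turning normalized outage records into component reliability parameters, can be sketched roughly as follows. The record layout, component names and numbers are invented for illustration, not the paper's actual schema:

```python
from collections import defaultdict

# invented normalized outage records and exposure (component-years observed)
records = [
    {"component": "MV line", "hours_down": 4.0},
    {"component": "MV line", "hours_down": 2.5},
    {"component": "transformer", "hours_down": 12.0},
]
unit_years = {"MV line": 180.0, "transformer": 95.0}

# aggregate failure counts and downtime per component type
stats = defaultdict(lambda: [0, 0.0])
for r in records:
    stats[r["component"]][0] += 1
    stats[r["component"]][1] += r["hours_down"]

for comp, (n, hrs) in stats.items():
    lam = n / unit_years[comp]    # failure rate, failures per component-year
    mttr = hrs / n                # mean time to repair, hours
    print(comp, round(lam, 4), round(mttr, 2))
```

    Rates of this form, aggregated over equipment operating under similar conditions, are the inputs the paper feeds into the system-level reliability computation.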

  12. Prediction of software operational reliability using testing environment factors

    International Nuclear Information System (INIS)

    Jung, Hoan Sung; Seong, Poong Hyun

    1995-01-01

    For many years, research has focused on the quantification of software reliability, and many models have been developed to quantify it. Most software reliability models estimate reliability from the failure data collected during testing, assuming that the test environments represent the operational profile well. Experience shows, however, that operational reliability is higher than test reliability, and the user's interest is in the operational reliability rather than the test reliability. With the assumption that the difference in reliability results from the change of environment, testing environment factors comprising an aging factor and a coverage factor are defined in this study to predict the ultimate operational reliability from the failure data. This is done by incorporating test environments applied beyond the operational profile into the testing environment factors. The application results are close to the actual data

  13. Escala de Autoestima de Rosenberg (EAR): validade fatorial e consistência interna / Rosenberg Self-Esteem Scale (RSS): factorial validity and internal consistency

    Directory of Open Access Journals (Sweden)

    Juliana Burges Sbicigo

    2010-12-01

    The aim of this study was to investigate the psychometric properties of the Rosenberg Self-Esteem Scale (RSS) for adolescents. The sample was composed of 4,757 adolescents, aged between 14 and 18 years (M = 15.77; SD = 1.22), from nine Brazilian cities. Participants responded to a version of the RSS adapted for Brazil. Exploratory factor analysis showed a bidimensional structure, with 51.4% of explained variance; this result was supported by confirmatory factor analysis. Internal consistency analysis via Cronbach's alpha coefficient, composite reliability and extracted variance indicated good reliability. Differences in self-esteem by gender and age were not found. These findings show that the RSS has satisfactory psychometric qualities and is a reliable instrument to assess self-esteem in Brazilian adolescents.

  14. PWR system reliability improvement activities

    International Nuclear Information System (INIS)

    Yoshikawa, Yuichiro

    1985-01-01

    In Japan, which lacks energy resources, it is our basic energy policy to accelerate the nuclear power development program, thereby reducing our dependence. As referred to in the foregoing, every effort has been exerted on our part to improve PWR system reliability through the so-called 'HOMEMADE' TQC activities, our brainchild resulting from applying the quality control philosophy developed in the manufacturing industry to the energy industry

  15. Improving consistency in findings from pharmacoepidemiological studies: The IMI-protect (Pharmacoepidemiological research on outcomes of therapeutics by a European consortium) project

    NARCIS (Netherlands)

    De Groot, Mark C.H.; Schlienger, Raymond; Reynolds, Robert; Gardarsdottir, Helga; Juhaeri, Juhaeri; Hesse, Ulrik; Gasse, Christiane; Rottenkolber, Marietta; Schuerch, Markus; Kurz, Xavier; Klungel, Olaf H.

    2013-01-01

    Background: Pharmacoepidemiological (PE) research should provide consistent, reliable and reproducible results to contribute to the benefit-risk assessment of medicines. IMI-PROTECT aims to identify sources of methodological variations in PE studies using a common protocol and analysis plan across

  16. Towards a fully self-consistent inversion combining historical and paleomagnetic data for geomagnetic field reconstructions

    Science.gov (United States)

    Arneitz, P.; Leonhardt, R.; Fabian, K.; Egli, R.

    2017-12-01

    Historical and paleomagnetic data are the two main sources of information about the long-term geomagnetic field evolution. Historical observations extend to the late Middle Ages, and prior to the 19th century, they consisted mainly of pure declination measurements from navigation and orientation logs. Field reconstructions going back further in time rely solely on magnetization acquired by rocks, sediments, and archaeological artefacts. The combined dataset is characterized by a strongly inhomogeneous spatio-temporal distribution and highly variable data reliability and quality. Therefore, an adequate weighting of the data that correctly accounts for data density, type, and realistic error estimates represents the major challenge for an inversion approach. Until now, there has not been a fully self-consistent geomagnetic model that correctly recovers the variation of the geomagnetic dipole together with the higher-order spherical harmonics. Here we present a new geomagnetic field model for the last 4 kyrs based on historical, archeomagnetic and volcanic records. The iterative Bayesian inversion approach targets the implementation of reliable error treatment, which allows different record types to be combined in a fully self-consistent way. Modelling results will be presented along with a thorough analysis of model limitations, validity and sensitivity.

  17. Reliability analysis of stiff versus flexible piping

    International Nuclear Information System (INIS)

    Lu, S.C.

    1985-01-01

    The overall objective of this research project is to develop a technical basis for flexible piping designs which will improve piping reliability and minimize the use of pipe supports, snubbers, and pipe whip restraints. The current study was conducted to establish the necessary groundwork based on the piping reliability analysis. A confirmatory piping reliability assessment indicated that removing rigid supports and snubbers tends to either improve or affect very little the piping reliability. The authors then investigated a couple of changes to be implemented in Regulatory Guide (RG) 1.61 and RG 1.122 aimed at more flexible piping design. They concluded that these changes substantially reduce calculated piping responses and allow piping redesigns with significant reduction in number of supports and snubbers without violating ASME code requirements. Furthermore, the more flexible piping redesigns are capable of exhibiting reliability levels equal to or higher than the original stiffer design. An investigation of the malfunction of pipe whip restraints confirmed that the malfunction introduced higher thermal stresses and tended to reduce the overall piping reliability. Finally, support and component reliabilities were evaluated based on available fragility data. Results indicated that the support reliability usually exhibits a moderate decrease as the piping flexibility increases. Most on-line pumps and valves showed an insignificant reduction in reliability for a more flexible piping design

  18. System reliability analysis with natural language and expert's subjectivity

    International Nuclear Information System (INIS)

    Onisawa, T.

    1996-01-01

    This paper introduces natural language expressions and expert's subjectivity to system reliability analysis. To this end, this paper defines a subjective measure of reliability and presents a method of system reliability analysis using the measure. The subjective measure of reliability corresponds to natural language expressions of reliability estimation, and is represented by a fuzzy set defined on [0,1]. The presented method deals with the dependence among subsystems and employs parametrized operations on subjective measures of reliability which can reflect the expert's subjectivity towards the analyzed system. The analysis results are also expressed by linguistic terms. Finally, this paper gives an example of system reliability analysis by the presented method
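
    As a generic illustration of the idea (not the paper's exact parametrized operators, which also handle dependence among subsystems), linguistic reliability estimates can be represented as triangular fuzzy numbers on [0, 1] and combined for an independent series system by approximate fuzzy multiplication:

```python
def fuzzy_series(*tfns):
    """Approximate product of triangular fuzzy numbers (lo, mode, hi),
    giving a fuzzy reliability for a series system of independent parts."""
    lo, md, hi = 1.0, 1.0, 1.0
    for l, m, h in tfns:
        lo, md, hi = lo * l, md * m, hi * h
    return lo, md, hi

# invented linguistic estimates mapped to triangular fuzzy numbers
quite_reliable = (0.70, 0.80, 0.90)
very_reliable = (0.85, 0.95, 1.00)
print(fuzzy_series(quite_reliable, very_reliable))
```

    The result is itself a fuzzy set on [0, 1], which can then be mapped back to a linguistic term, mirroring the paper's linguistic output of analysis results.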

  19. Guarantee of reliability of devices complexes for plastic tube welding

    International Nuclear Information System (INIS)

    Voskresenskij, L.A.; Zajtsev, A.I.; Nelyubov, V.I.; Fedorov, M.A.

    1988-01-01

    Results of calculations and experimental studies on ensuring the reliability of a complex of devices for plastic tube welding are presented. The choice of reliability indices and standards is substantiated. Reliability levels of components are determined. The most heavily loaded parts are analyzed, and it is shown that they meet the requirements of strength and reliability. Service life tests confirmed the correct choice of springs. Recommendations on improving reliability are given, and directions of further development are shown. 8 refs.; 2 figs.; 1 tab

  20. Evaluating Temporal Consistency in Marine Biodiversity Hotspots.

    Science.gov (United States)

    Piacenza, Susan E; Thurman, Lindsey L; Barner, Allison K; Benkwitt, Cassandra E; Boersma, Kate S; Cerny-Chipman, Elizabeth B; Ingeman, Kurt E; Kindinger, Tye L; Lindsley, Amy J; Nelson, Jake; Reimer, Jessica N; Rowe, Jennifer C; Shen, Chenchen; Thompson, Kevin A; Heppell, Selina S

    2015-01-01

    With the ongoing crisis of biodiversity loss and limited resources for conservation, the concept of biodiversity hotspots has been useful in determining conservation priority areas. However, there has been limited research into how temporal variability in biodiversity may influence conservation area prioritization. To address this information gap, we present an approach to evaluate the temporal consistency of biodiversity hotspots in large marine ecosystems. Using a large scale, public monitoring dataset collected over an eight year period off the US Pacific Coast, we developed a methodological approach for avoiding biases associated with hotspot delineation. We aggregated benthic fish species data from research trawls and calculated mean hotspot thresholds for fish species richness and Shannon's diversity indices over the eight year dataset. We used a spatial frequency distribution method to assign hotspot designations to the grid cells annually. We found no areas containing consistently high biodiversity through the entire study period based on the mean thresholds, and no grid cell was designated as a hotspot for greater than 50% of the time-series. To test if our approach was sensitive to sampling effort and the geographic extent of the survey, we followed a similar routine for the northern region of the survey area. Our finding of low consistency in benthic fish biodiversity hotspots over time was upheld, regardless of biodiversity metric used, whether thresholds were calculated per year or across all years, or the spatial extent for which we calculated thresholds and identified hotspots. Our results suggest that static measures of benthic fish biodiversity off the US West Coast are insufficient for identification of hotspots and that long-term data are required to appropriately identify patterns of high temporal variability in biodiversity for these highly mobile taxa. Given that ecological communities are responding to a changing climate and other
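
    The thresholding routine described above can be sketched as follows: Shannon diversity per grid cell per year, a mean-based hotspot threshold, and a count of how often each cell exceeds it. Cells and species counts are invented; the study used trawl survey data, a spatial frequency distribution method, and several threshold variants:

```python
import math

def shannon(counts):
    """Shannon diversity index H' for one cell's species abundance vector."""
    total = sum(counts)
    return -sum((c / total) * math.log(c / total) for c in counts if c)

# cell -> one species-count vector per survey year (invented)
cells = {
    "A": [[10, 10, 10], [8, 12, 10], [30, 1, 1]],
    "B": [[25, 1, 1], [20, 2, 1], [28, 1, 2]],
}
years = len(next(iter(cells.values())))
hot_count = {c: 0 for c in cells}
for y in range(years):
    h = {cell: shannon(vecs[y]) for cell, vecs in cells.items()}
    thr = sum(h.values()) / len(h)        # mean threshold for this year
    for cell, v in h.items():
        hot_count[cell] += v > thr
# a cell is a temporally consistent hotspot if flagged in > 50% of years
freq = {c: n / years for c, n in hot_count.items()}
print(freq)
```

    Even in this toy example the even-abundance cell flips below the threshold in its skewed year, illustrating why the study found no cell flagged consistently across the whole time series.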

  1. Self-consistent gravitational self-force

    International Nuclear Information System (INIS)

    Pound, Adam

    2010-01-01

    I review the problem of motion for small bodies in general relativity, with an emphasis on developing a self-consistent treatment of the gravitational self-force. An analysis of the various derivations extant in the literature leads me to formulate an asymptotic expansion in which the metric is expanded while a representative worldline is held fixed. I discuss the utility of this expansion for both exact point particles and asymptotically small bodies, contrasting it with a regular expansion in which both the metric and the worldline are expanded. Based on these preliminary analyses, I present a general method of deriving self-consistent equations of motion for arbitrarily structured (sufficiently compact) small bodies. My method utilizes two expansions: an inner expansion that keeps the size of the body fixed, and an outer expansion that lets the body shrink while holding its worldline fixed. By imposing the Lorenz gauge, I express the global solution to the Einstein equation in the outer expansion in terms of an integral over a worldtube of small radius surrounding the body. Appropriate boundary data on the tube are determined from a local-in-space expansion in a buffer region where both the inner and outer expansions are valid. This buffer-region expansion also results in an expression for the self-force in terms of irreducible pieces of the metric perturbation on the worldline. Based on the global solution, these pieces of the perturbation can be written in terms of a tail integral over the body's past history. This approach can be applied at any order to obtain a self-consistent approximation that is valid on long time scales, both near and far from the small body. I conclude by discussing possible extensions of my method and comparing it to alternative approaches.

  2. Columbus safety and reliability

    Science.gov (United States)

    Longhurst, F.; Wessels, H.

    1988-10-01

    Analyses carried out to ensure Columbus reliability, availability, and maintainability, and operational and design safety are summarized. Failure modes/effects/criticality is the main qualitative tool used. The main aspects studied are fault tolerance, hazard consequence control, risk minimization, human error effects, restorability, and safe-life design.

  3. Power transformer reliability modelling

    NARCIS (Netherlands)

    Schijndel, van A.

    2010-01-01

    Problem description Electrical power grids serve to transport and distribute electrical power with high reliability and availability at acceptable costs and risks. These grids play a crucial though preferably invisible role in supplying sufficient power in a convenient form. Today’s society has

  4. Designing reliability into accelerators

    International Nuclear Information System (INIS)

    Hutton, A.

    1992-08-01

    For the next generation of high performance, high average luminosity colliders, the "factories", reliability engineering must be introduced right at the inception of the project and maintained as a central theme throughout the project. There are several aspects which will be addressed separately: concept; design; motivation; management techniques; and fault diagnosis.

  5. Reliability and code level

    NARCIS (Netherlands)

    Kasperski, M.; Geurts, C.P.W.

    2005-01-01

    The paper describes the work of the IAWE Working Group WBG - Reliability and Code Level, one of the International Codification Working Groups set up at ICWE10 in Copenhagen. The following topics are covered: sources of uncertainties in the design wind load, appropriate design target values for the

  6. Reliability of Plastic Slabs

    DEFF Research Database (Denmark)

    Thoft-Christensen, Palle

    1989-01-01

    In the paper it is shown how upper and lower bounds for the reliability of plastic slabs can be determined. For the fundamental case it is shown that optimal bounds of a deterministic and a stochastic analysis are obtained on the basis of the same failure mechanisms and the same stress fields....

  7. Reliability based structural design

    NARCIS (Netherlands)

    Vrouwenvelder, A.C.W.M.

    2014-01-01

    According to ISO 2394, structures shall be designed, constructed and maintained in such a way that they are suited for their use during the design working life in an economic way. To fulfil this requirement one needs insight into the risk and reliability under expected and non-expected actions. A

  8. Travel time reliability modeling.

    Science.gov (United States)

    2011-07-01

    This report includes three papers as follows: 1. Guo F., Rakha H., and Park S. (2010), "A Multi-state Travel Time Reliability Model," Transportation Research Record: Journal of the Transportation Research Board, n 2188, pp. 46-54. 2. Park S.,...

  9. Reliability and Model Fit

    Science.gov (United States)

    Stanley, Leanne M.; Edwards, Michael C.

    2016-01-01

    The purpose of this article is to highlight the distinction between the reliability of test scores and the fit of psychometric measurement models, reminding readers why it is important to consider both when evaluating whether test scores are valid for a proposed interpretation and/or use. It is often the case that an investigator judges both the…

  10. Parametric Mass Reliability Study

    Science.gov (United States)

    Holt, James P.

    2014-01-01

    The International Space Station (ISS) systems are designed based upon having redundant systems with replaceable orbital replacement units (ORUs). These ORUs are designed to be swapped out fairly quickly, but some are very large, and some are made up of many components. When an ORU fails, it is replaced on orbit with a spare; the failed unit is sometimes returned to Earth to be serviced and re-launched. Such a system is not feasible for a 500+ day long-duration mission beyond low Earth orbit. The components that make up these ORUs have mixed reliabilities. Components that make up the most mass-such as computer housings, pump casings, and the silicon board of PCBs-typically are the most reliable. Meanwhile components that tend to fail the earliest-such as seals or gaskets-typically have a small mass. To better understand the problem, my project is to create a parametric model that relates both the mass of ORUs to reliability, as well as the mass of ORU subcomponents to reliability.

  11. Reliability Approach of a Compressor System using Reliability Block ...

    African Journals Online (AJOL)


    2018-03-05

    This paper presents a reliability analysis of a compressor system using reliability block diagrams (RBD), with the system modelled as three subsystems: air flow, oil flow and ... Keywords: compressor system, reliability, reliability block diagram, RBD.
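
Reliability block diagrams reduce a system to series and parallel combinations of block reliabilities. A minimal sketch (the subsystem reliability values and layout are invented for illustration, not taken from the paper):

```python
from functools import reduce

def series(reliabilities):
    # Series RBD: every block must work, so R = product of r_i.
    return reduce(lambda acc, r: acc * r, reliabilities, 1.0)

def parallel(reliabilities):
    # Parallel (redundant) RBD: the system fails only if all blocks fail,
    # so R = 1 - product of (1 - r_i).
    return 1.0 - reduce(lambda acc, r: acc * (1.0 - r), reliabilities, 1.0)

# Hypothetical compressor layout: three subsystems in series,
# with the oil-flow subsystem duplicated for redundancy.
oil = parallel([0.90, 0.90])
system = series([0.95, oil, 0.98])
```

Note how redundancy lifts the oil-flow block from 0.90 to 0.99, so the series system is limited by its weakest non-redundant block.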

  12. Consistent mutational paths predict eukaryotic thermostability

    Directory of Open Access Journals (Sweden)

    van Noort Vera

    2013-01-01

    Background: Proteomes of thermophilic prokaryotes have been instrumental in structural biology and successfully exploited in biotechnology; however, many proteins required for eukaryotic cell function are absent from bacteria or archaea. With Chaetomium thermophilum, Thielavia terrestris and Thielavia heterothallica, three genome sequences of thermophilic eukaryotes have been published. Results: Studying the genomes and proteomes of these thermophilic fungi, we found common strategies of thermal adaptation across the different kingdoms of life, including amino acid biases and a reduced genome size. A phylogenetics-guided comparison of thermophilic proteomes with those of other, mesophilic Sordariomycetes revealed consistent amino acid substitutions associated with thermophily that were also present in an independent lineage of thermophilic fungi. The most consistent pattern is the substitution of lysine by arginine, which we could find in almost all lineages but which has not been extensively used in protein stability engineering. By exploiting mutational paths towards the thermophiles, we could predict particular amino acid residues in individual proteins that contribute to thermostability and validated some of them experimentally. By determining the three-dimensional structure of an exemplar protein from C. thermophilum (Arx1), we could also characterise the molecular consequences of some of these mutations. Conclusions: The comparative analysis of these three genomes not only enhances our understanding of the evolution of thermophily, but also provides new ways to engineer protein stability.

  13. Consistency of extreme flood estimation approaches

    Science.gov (United States)

    Felder, Guido; Paquet, Emmanuel; Penot, David; Zischg, Andreas; Weingartner, Rolf

    2017-04-01

    Estimations of low-probability flood events are frequently used for the planning of infrastructure as well as for determining the dimensions of flood protection measures. There are several well-established methodical procedures to estimate low-probability floods. However, a global assessment of the consistency of these methods is difficult to achieve, since the "true value" of an extreme flood is not observable. Nevertheless, a detailed comparison performed on a given case study brings useful information about the statistical and hydrological processes involved in the different methods. In this study, the following three approaches for estimating low-probability floods are compared: a purely statistical approach (ordinary extreme value statistics), a statistical approach based on stochastic rainfall-runoff simulation (the SCHADEX method), and a deterministic approach (physically based PMF estimation). These methods are tested on two different Swiss catchments. The results and some intermediate variables are used for assessing the potential strengths and weaknesses of each method, as well as for evaluating the consistency of these methods.
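
The "purely statistical approach" (ordinary extreme value statistics) typically fits an extreme value distribution to a series of annual flood maxima and reads off return levels. A minimal sketch using a Gumbel distribution fitted by the method of moments; the flood series is invented, and this is one of several possible distribution and fitting choices, not the method of the paper:

```python
import math
import statistics

EULER_GAMMA = 0.5772156649015329  # Euler-Mascheroni constant

def gumbel_fit(annual_maxima):
    """Method-of-moments Gumbel fit: location mu and scale beta."""
    s = statistics.stdev(annual_maxima)
    beta = s * math.sqrt(6) / math.pi
    mu = statistics.mean(annual_maxima) - EULER_GAMMA * beta
    return mu, beta

def return_level(annual_maxima, T):
    """Discharge exceeded on average once every T years under the fitted Gumbel."""
    mu, beta = gumbel_fit(annual_maxima)
    return mu - beta * math.log(-math.log(1.0 - 1.0 / T))

# Invented annual maximum discharges (m^3/s) for a small catchment.
peaks = [310, 280, 350, 420, 295, 330, 390, 305, 360, 275]
q100 = return_level(peaks, 100)  # 100-year flood estimate
```

Comparing such a return level against a stochastic-simulation estimate (SCHADEX) and a PMF bound is the kind of cross-method consistency check the study performs.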

  14. Consistent biokinetic models for the actinide elements

    International Nuclear Information System (INIS)

    Leggett, R.W.

    2001-01-01

    The biokinetic models for Th, Np, Pu, Am and Cm currently recommended by the International Commission on Radiological Protection (ICRP) were developed within a generic framework that depicts gradual burial of skeletal activity in bone volume, depicts recycling of activity released to blood and links excretion to retention and translocation of activity. For other actinide elements such as Ac, Pa, Bk, Cf and Es, the ICRP still uses simplistic retention models that assign all skeletal activity to bone surface and depicts one-directional flow of activity from blood to long-term depositories to excreta. This mixture of updated and older models in ICRP documents has led to inconsistencies in dose estimates and interpretation of bioassay for radionuclides with reasonably similar biokinetics. This paper proposes new biokinetic models for Ac, Pa, Bk, Cf and Es that are consistent with the updated models for Th, Np, Pu, Am and Cm. The proposed models are developed within the ICRP's generic model framework for bone-surface-seeking radionuclides, and an effort has been made to develop parameter values that are consistent with results of comparative biokinetic data on the different actinide elements. (author)

  15. Consistency Anchor Formalization and Correctness Proofs

    OpenAIRE

    Miguel, Correia; Bessani, Alysson

    2014-01-01

    This report contains the formal proofs for the techniques for increasing the consistency of cloud storage as presented in "Bessani et al. SCFS: A Cloud-backed File System. Proc. of the 2014 USENIX Annual Technical Conference. June 2014." The consistency anchor technique allows one to increase the consistency provided by eventually consistent cloud storage services like Amazon S3. This technique has been used in the SCFS (Shared Cloud File System) cloud-backed file system for solving rea...

  16. Fault-tolerant embedded system design and optimization considering reliability estimation uncertainty

    International Nuclear Information System (INIS)

    Wattanapongskorn, Naruemon; Coit, David W.

    2007-01-01

    In this paper, we model embedded system design and optimization, considering component redundancy and uncertainty in the component reliability estimates. The systems being studied consist of software embedded in associated hardware components. Very often, component reliability values are not known exactly. Therefore, for reliability analysis studies and system optimization, it is meaningful to consider component reliability estimates as random variables with associated estimation uncertainty. In this new research, the system design process is formulated as a multiple-objective optimization problem to maximize an estimate of system reliability, and also, to minimize the variance of the reliability estimate. The two objectives are combined by penalizing the variance for prospective solutions. The two most common fault-tolerant embedded system architectures, N-Version Programming and Recovery Block, are considered as strategies to improve system reliability by providing system redundancy. Four distinct models are presented to demonstrate the proposed optimization techniques with or without redundancy. For many design problems, multiple functionally equivalent software versions have failure correlation even if they have been independently developed. The failure correlation may result from faults in the software specification, faults from a voting algorithm, and/or related faults from any two software versions. Our approach considers this correlation in formulating practical optimization models. Genetic algorithms with a dynamic penalty function are applied in solving this optimization problem, and reasonable and interesting results are obtained and discussed

  17. Reliability methods in nuclear power plant ageing management

    International Nuclear Information System (INIS)

    Simola, K.

    1999-01-01

    The aim of nuclear power plant ageing management is to maintain an adequate safety level throughout the lifetime of the plant. In ageing studies, the reliability of components, systems and structures is evaluated taking into account the possible time-dependent degradation. The phases of ageing analyses are generally the identification of critical components, identification and evaluation of ageing effects, and development of mitigation methods. This thesis focuses on the use of reliability methods and analyses of plant- specific operating experience in nuclear power plant ageing studies. The presented applications and method development have been related to nuclear power plants, but many of the approaches can also be applied outside the nuclear industry. The thesis consists of a summary and seven publications. The summary provides an overview of ageing management and discusses the role of reliability methods in ageing analyses. In the publications, practical applications and method development are described in more detail. The application areas at component and system level are motor-operated valves and protection automation systems, for which experience-based ageing analyses have been demonstrated. Furthermore, Bayesian ageing models for repairable components have been developed, and the management of ageing by improving maintenance practices is discussed. Recommendations for improvement of plant information management in order to facilitate ageing analyses are also given. The evaluation and mitigation of ageing effects on structural components is addressed by promoting the use of probabilistic modelling of crack growth, and developing models for evaluation of the reliability of inspection results. (orig.)

  18. Reliability methods in nuclear power plant ageing management

    Energy Technology Data Exchange (ETDEWEB)

    Simola, K. [VTT Automation, Espoo (Finland). Industrial Automation

    1999-07-01

    The aim of nuclear power plant ageing management is to maintain an adequate safety level throughout the lifetime of the plant. In ageing studies, the reliability of components, systems and structures is evaluated taking into account the possible time-dependent degradation. The phases of ageing analyses are generally the identification of critical components, identification and evaluation of ageing effects, and development of mitigation methods. This thesis focuses on the use of reliability methods and analyses of plant- specific operating experience in nuclear power plant ageing studies. The presented applications and method development have been related to nuclear power plants, but many of the approaches can also be applied outside the nuclear industry. The thesis consists of a summary and seven publications. The summary provides an overview of ageing management and discusses the role of reliability methods in ageing analyses. In the publications, practical applications and method development are described in more detail. The application areas at component and system level are motor-operated valves and protection automation systems, for which experience-based ageing analyses have been demonstrated. Furthermore, Bayesian ageing models for repairable components have been developed, and the management of ageing by improving maintenance practices is discussed. Recommendations for improvement of plant information management in order to facilitate ageing analyses are also given. The evaluation and mitigation of ageing effects on structural components is addressed by promoting the use of probabilistic modelling of crack growth, and developing models for evaluation of the reliability of inspection results. (orig.)

  19. Neurophysiology underlying influence of stimulus reliability on audiovisual integration.

    Science.gov (United States)

    Shatzer, Hannah; Shen, Stanley; Kerlin, Jess R; Pitt, Mark A; Shahin, Antoine J

    2018-01-24

    We tested the predictions of the dynamic reweighting model (DRM) of audiovisual (AV) speech integration, which posits that spectrotemporally reliable (informative) AV speech stimuli induce a reweighting of processing from low-level to high-level auditory networks. This reweighting decreases sensitivity to acoustic onsets and in turn increases tolerance to AV onset asynchronies (AVOA). EEG was recorded while subjects watched videos of a speaker uttering trisyllabic nonwords that varied in spectrotemporal reliability and asynchrony of the visual and auditory inputs. Subjects judged the stimuli as in-sync or out-of-sync. Results showed that subjects exhibited greater AVOA tolerance for non-blurred than blurred visual speech and for less than more degraded acoustic speech. Increased AVOA tolerance was reflected in reduced amplitude of the P1-P2 auditory evoked potentials, a neurophysiological indication of reduced sensitivity to acoustic onsets and successful AV integration. There was also sustained visual alpha band (8-14 Hz) suppression (desynchronization) following acoustic speech onsets for non-blurred vs. blurred visual speech, consistent with continuous engagement of the visual system as the speech unfolds. The current findings suggest that increased spectrotemporal reliability of acoustic and visual speech promotes robust AV integration, partly by suppressing sensitivity to acoustic onsets, in support of the DRM's reweighting mechanism. Increased visual signal reliability also sustains the engagement of the visual system with the auditory system to maintain alignment of information across modalities. © 2018 Federation of European Neuroscience Societies and John Wiley & Sons Ltd.

  20. Investigating the Intersession Reliability of Dynamic Brain-State Properties.

    Science.gov (United States)

    Smith, Derek M; Zhao, Yrian; Keilholz, Shella D; Schumacher, Eric H

    2018-06-01

    Dynamic functional connectivity metrics have much to offer to the neuroscience of individual differences of cognition. Yet, despite the recent expansion in dynamic connectivity research, limited resources have been devoted to the study of the reliability of these connectivity measures. To address this, resting-state functional magnetic resonance imaging data from 100 Human Connectome Project subjects were compared across 2 scan days. Brain states (i.e., patterns of coactivity across regions) were identified by classifying each time frame using k-means clustering. This was done with and without global signal regression (GSR). Multiple gauges of reliability indicated consistency in the brain-state properties across days, and GSR attenuated the reliability of the brain states. Changes in the brain-state properties across the course of the scan were investigated as well. The results demonstrate that summary metrics describing the clustering of individual time frames have adequate test/retest reliability, and thus, these patterns of brain activation may hold promise for individual-difference research.
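
The "summary metrics describing the clustering of individual time frames" can be illustrated with two common brain-state properties, occupancy (fraction of time in each state) and mean dwell time, computed from a per-frame label sequence such as k-means would produce. The label sequence below is hypothetical:

```python
from collections import Counter
from itertools import groupby

def state_properties(labels):
    """Occupancy (fraction of frames per state) and mean dwell time
    (average consecutive run length, in frames) for a label sequence."""
    n = len(labels)
    occupancy = {s: c / n for s, c in Counter(labels).items()}
    runs = {}
    for state, group in groupby(labels):
        runs.setdefault(state, []).append(len(list(group)))
    mean_dwell = {s: sum(r) / len(r) for s, r in runs.items()}
    return occupancy, mean_dwell

# Toy k-means labels for 10 fMRI frames from one scan day.
day1 = ["A", "A", "B", "B", "B", "A", "C", "C", "A", "A"]
occ, dwell = state_properties(day1)
```

Test/retest reliability would then be assessed by correlating these per-subject metrics between scan days.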

  1. Health service quality scale: Brazilian Portuguese translation, reliability and validity

    Science.gov (United States)

    2013-01-01

    Background The Health Service Quality Scale is a multidimensional hierarchical scale that is based on an interdisciplinary approach. This instrument was specifically created for measuring health service quality based on marketing and health care concepts. The aim of this study was to translate and culturally adapt the Health Service Quality Scale into Brazilian Portuguese and to assess the validity and reliability of the Brazilian Portuguese version of the instrument. Methods We conducted a cross-sectional, observational study with public health system patients in a Brazilian university hospital. Validity was assessed using Pearson’s correlation coefficient to measure the strength of the association between the Brazilian Portuguese version of the instrument and the SERVQUAL scale. Internal consistency was evaluated using Cronbach’s alpha coefficient; the intraclass (ICC) and Pearson’s correlation coefficients were used for test-retest reliability. Results One hundred and sixteen consecutive postoperative patients completed the questionnaire. Pearson’s correlation coefficient for validity was 0.20. Cronbach's alpha for the first and second administrations of the final version of the instrument were 0.982 and 0.986, respectively. For test-retest reliability, Pearson’s correlation coefficient was 0.89 and ICC was 0.90. Conclusions The culturally adapted, Brazilian Portuguese version of the Health Service Quality Scale is a valid and reliable instrument to measure health service quality. PMID:23327598
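
The internal-consistency statistic used here, Cronbach's alpha, can be computed directly from an item-by-respondent score matrix. A sketch with invented questionnaire responses (not the study's data):

```python
def variance(xs):
    """Population variance of a list of scores."""
    m = sum(xs) / len(xs)
    return sum((x - m) ** 2 for x in xs) / len(xs)

def cronbach_alpha(items):
    """Cronbach's alpha: (k / (k - 1)) * (1 - sum of item variances / variance of totals).
    `items` is a list of per-item score lists, aligned by respondent."""
    k = len(items)
    totals = [sum(scores) for scores in zip(*items)]
    item_var = sum(variance(item) for item in items)
    return (k / (k - 1.0)) * (1.0 - item_var / variance(totals))

# Invented 3-item questionnaire answered by 5 respondents.
responses = [
    [4, 5, 3, 4, 2],
    [4, 4, 3, 5, 2],
    [5, 5, 2, 4, 3],
]
alpha = cronbach_alpha(responses)
```

When items move together across respondents, the variance of the totals dominates the summed item variances and alpha approaches 1, which is why the scale's reported values of 0.982 and 0.986 indicate very high internal consistency.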

  2. Reliability in individual monitoring service.

    Science.gov (United States)

    Mod Ali, N

    2011-03-01

    As a laboratory certified to ISO 9001:2008 and accredited to ISO/IEC 17025, the Secondary Standard Dosimetry Laboratory (SSDL)-Nuclear Malaysia has incorporated an overall comprehensive system for technical and quality management in promoting a reliable individual monitoring service (IMS). Faster identification and resolution of issues regarding dosemeter preparation and issuing of reports, personnel enhancement, improved customer satisfaction and overall efficiency of laboratory activities are all results of the implementation of an effective quality system. Review of these measures and responses to observed trends provide continuous improvement of the system. By having these mechanisms, reliability of the IMS can be assured in the promotion of safe behaviour at all levels of the workforce utilising ionising radiation facilities. The upgrade of the reporting program to a web-based e-SSDL marks a major improvement in the reliability of Nuclear Malaysia's IMS as a whole. The system is a vital step in providing a user-friendly and effective occupational exposure evaluation program in the country. It provides a higher level of confidence in the results generated for occupational dose monitoring, thus enhancing the status of the country's radiation protection framework.

  3. Reliability technology and nuclear power

    International Nuclear Information System (INIS)

    Garrick, B.J.; Kaplan, S.

    1976-01-01

    This paper reviews some of the history and status of nuclear reliability and the evolution of this subject from art towards science. It shows that probability theory is the appropriate and essential mathematical language of this subject. The authors emphasize that it is more useful to view probability not as a 'frequency', i.e., not as the result of a statistical experiment, but rather as a measure of a state of confidence or a state of knowledge. They also show that the probabilistic, quantitative approach has a considerable history of application in the electric power industry in the area of power system planning. Finally, the authors show that the decision theory notion of utility provides a point of view from which risks, benefits, safety, and reliability can be viewed in a unified way, thus facilitating understanding, comparison, and communication. 29 refs

  4. Reliability in the utility computing era: Towards reliable Fog computing

    DEFF Research Database (Denmark)

    Madsen, Henrik; Burtschy, Bernard; Albeanu, G.

    2013-01-01

    This paper considers current paradigms in computing and outlines the most important aspects concerning their reliability. The Fog computing paradigm as a non-trivial extension of the Cloud is considered, and the reliability of networks of smart devices is discussed. Combining the reliability requirements of the grid and cloud paradigms with the reliability requirements of networks of sensors and actuators, it follows that designing a reliable Fog computing platform is feasible.

  5. A COMPUTERIZED DIAGNOSTIC COMPLEX FOR RELIABILITY TESTING OF ELECTRIC MACHINES

    Directory of Open Access Journals (Sweden)

    O.О. Somka

    2015-06-01

    Purpose. To develop a diagnostic complex that meets the criteria and requirements for carrying out accelerated reliability tests, realizes the basic modes of electric machine operation, and performs the tasks that such testing requires. Methodology. To determine and forecast the reliability indices of electric machines, in accordance with statistical data from repair plants we conditionally divided them into the structural parts that are most likely to fail. We preliminarily assessed the state of each of these parts, including the detection of faults and of deviations in technical and geometric parameters. We determined the controlled parameters of the analyzed electric machine that are used to assess the quantitative reliability characteristics of these parts and of the machine as a whole. Results. As a result of the research, we substantiated the structure of a computerized complex for electric machine reliability testing. It allows thermal and vibration actions to be varied without violating the physics of the aging and wear processes in the material of the basic structural parts and elements. This makes it possible to considerably reduce the time spent on electric machine reliability tests and to improve the trustworthiness of the resulting data. Originality. A special feature of the determination of the controlled parameters is the removal of vibration components in the idle mode and after disconnection of the analyzed electric machine from the power supply, with the aim of singling out the electromagnetic vibration component; the degree of sparking and the bend of the shaft are fixed by means of phototechnique, and the temperature of structural parts is determined locally by the corresponding placement of thermal sensors. Practical value. We offer a scheme for locating the thermal and vibration sensors that allows improvement of the accuracy of parameter measurement.

  6. Measuring consistency of autobiographical memory recall in depression.

    LENUS (Irish Health Repository)

    Semkovska, Maria

    2012-05-15

    Autobiographical amnesia assessments in depression need to account for normal changes in consistency over time, contribution of mood and type of memories measured. We report herein validation studies of the Columbia Autobiographical Memory Interview - Short Form (CAMI-SF), exclusively used in depressed patients receiving electroconvulsive therapy (ECT) but without previous published report of normative data. The CAMI-SF was administered twice with a 6-month interval to 44 healthy volunteers to obtain normative data for retrieval consistency of its Semantic, Episodic-Extended and Episodic-Specific components and assess their reliability and validity. Healthy volunteers showed significant large decreases in retrieval consistency on all components. The Semantic and Episodic-Specific components demonstrated substantial construct validity. We then assessed CAMI-SF retrieval consistencies over a 2-month interval in 30 severely depressed patients never treated with ECT compared with healthy controls (n=19). On initial assessment, depressed patients produced less episodic-specific memories than controls. Both groups showed equivalent amounts of consistency loss over a 2-month interval on all components. At reassessment, only patients with persisting depressive symptoms were distinguishable from controls on episodic-specific memories retrieved. Research quantifying retrograde amnesia following ECT for depression needs to control for normal loss in consistency over time and contribution of persisting depressive symptoms.

  7. Measuring consistency of autobiographical memory recall in depression.

    Science.gov (United States)

    Semkovska, Maria; Noone, Martha; Carton, Mary; McLoughlin, Declan M

    2012-05-15

    Autobiographical amnesia assessments in depression need to account for normal changes in consistency over time, contribution of mood and type of memories measured. We report herein validation studies of the Columbia Autobiographical Memory Interview - Short Form (CAMI-SF), exclusively used in depressed patients receiving electroconvulsive therapy (ECT) but without previous published report of normative data. The CAMI-SF was administered twice with a 6-month interval to 44 healthy volunteers to obtain normative data for retrieval consistency of its Semantic, Episodic-Extended and Episodic-Specific components and assess their reliability and validity. Healthy volunteers showed significant large decreases in retrieval consistency on all components. The Semantic and Episodic-Specific components demonstrated substantial construct validity. We then assessed CAMI-SF retrieval consistencies over a 2-month interval in 30 severely depressed patients never treated with ECT compared with healthy controls (n=19). On initial assessment, depressed patients produced less episodic-specific memories than controls. Both groups showed equivalent amounts of consistency loss over a 2-month interval on all components. At reassessment, only patients with persisting depressive symptoms were distinguishable from controls on episodic-specific memories retrieved. Research quantifying retrograde amnesia following ECT for depression needs to control for normal loss in consistency over time and contribution of persisting depressive symptoms. Copyright © 2012 Elsevier Ireland Ltd. All rights reserved.

  8. Non linear self consistency of microtearing modes

    International Nuclear Information System (INIS)

    Garbet, X.; Mourgues, F.; Samain, A.

    1987-01-01

    The self-consistency of a microtearing turbulence is studied in non-linear regimes where the ergodicity of the flux lines determines the electron response. The current which sustains the magnetic perturbation via Ampère's law results from the combined action of the radial electric field in the frame where the island chains are static and of the thermal electron diamagnetism. Numerical calculations show that at usual values of β_pol in tokamaks the turbulence can create a diffusion coefficient of order ν_th ρ_i^2, where ρ_i is the ion Larmor radius and ν_th the electron-ion collision frequency. On the other hand, collisionless regimes involving special profiles of each mode near the resonant surface seem possible.

  9. Validity and reliability of the NAB Naming Test.

    Science.gov (United States)

    Sachs, Bonnie C; Rush, Beth K; Pedraza, Otto

    2016-05-01

    Confrontation naming is commonly assessed in neuropsychological practice, but few standardized measures of naming exist and those that do are susceptible to the effects of education and culture. The Neuropsychological Assessment Battery (NAB) Naming Test is a 31-item measure used to assess confrontation naming. Despite adequate psychometric information provided by the test publisher, there has been limited independent validation of the test. In this study, we investigated the convergent and discriminant validity, internal consistency, and alternate forms reliability of the NAB Naming Test in a sample of adults (Form 1: n = 247, Form 2: n = 151) clinically referred for neuropsychological evaluation. Results indicate adequate-to-good internal consistency and alternate forms reliability. We also found strong convergent validity as demonstrated by relationships with other neurocognitive measures. We found preliminary evidence that the NAB Naming Test demonstrates a more pronounced ceiling effect than other commonly used measures of naming. To our knowledge, this represents the largest published independent validation study of the NAB Naming Test in a clinical sample. Our findings suggest that the NAB Naming Test demonstrates adequate validity and reliability and merits consideration in the test arsenal of clinical neuropsychologists.

  10. Thermodynamically consistent model calibration in chemical kinetics

    Directory of Open Access Journals (Sweden)

    Goutsias John

    2011-05-01

    Full Text Available Abstract Background: The dynamics of biochemical reaction systems are constrained by the fundamental laws of thermodynamics, which impose well-defined relationships among the reaction rate constants characterizing these systems. Constructing biochemical reaction systems from experimental observations often leads to parameter values that do not satisfy the necessary thermodynamic constraints. This can result in models that are not physically realizable and may lead to inaccurate, or even erroneous, descriptions of cellular function. Results: We introduce a thermodynamically consistent model calibration (TCMC) method that can be effectively used to provide thermodynamically feasible values for the parameters of an open biochemical reaction system. The proposed method formulates the model calibration problem as a constrained optimization problem that takes thermodynamic constraints (and, if desired, additional non-thermodynamic constraints) into account. By calculating thermodynamically feasible values for the kinetic parameters of a well-known model of the EGF/ERK signaling cascade, we demonstrate the qualitative and quantitative significance of imposing thermodynamic constraints on these parameters and the effectiveness of our method for accomplishing this important task. MATLAB software, using the Systems Biology Toolbox 2.1, can be accessed from http://www.cis.jhu.edu/~goutsias/CSS lab/software.html. An SBML file containing the thermodynamically feasible EGF/ERK signaling cascade model can be found in the BioModels database. Conclusions: TCMC is a simple and flexible method for obtaining physically plausible values for the kinetic parameters of open biochemical reaction systems. It can be effectively used to recalculate a thermodynamically consistent set of parameter values for existing thermodynamically infeasible biochemical reaction models of cellular function as well as to estimate thermodynamically feasible values for the parameters of new models.
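
    One family of thermodynamic constraints on rate constants is the cycle (Wegscheider) condition: around any closed reaction cycle, the product of forward-to-backward rate-constant ratios must equal one. As a minimal illustration of enforcing such a constraint (a hypothetical uniform log-space adjustment, not the TCMC optimization from the paper):

```python
import math

def project_to_cycle_constraint(kf, kr):
    """Enforce prod(kf_i / kr_i) == 1 around a reaction cycle by a
    uniform adjustment of all log rate constants (illustrative only)."""
    n = len(kf)
    # log-residual of the cycle condition
    residual = sum(math.log(f / b) for f, b in zip(kf, kr))
    # spread the residual evenly over the 2n log-parameters
    delta = residual / (2 * n)
    kf_adj = [f * math.exp(-delta) for f in kf]
    kr_adj = [b * math.exp(delta) for b in kr]
    return kf_adj, kr_adj

# hypothetical rate constants around a 3-reaction cycle
kf, kr = project_to_cycle_constraint([2.0, 0.5, 3.0], [1.0, 1.0, 1.0])
prod = 1.0
for f, b in zip(kf, kr):
    prod *= f / b
# prod is now 1.0 up to floating-point error
```

    A full calibration would instead minimize the misfit to data subject to such constraints; this sketch only shows the feasibility projection itself.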

  11. Validity and reliability of the novel thyroid-specific quality of life questionnaire, ThyPRO

    DEFF Research Database (Denmark)

    Watt, Torquil; Hegedüs, Laszlo; Groenvold, Mogens

    2010-01-01

    Background: Appropriate scale validity and internal consistency reliability have recently been documented for the new thyroid-specific quality of life (QoL) patient-reported outcome (PRO) measure for benign thyroid disorders, the ThyPRO. However, before clinical use, clinical validity and test-retest reliability should be evaluated. Aim: To investigate clinical ('known-groups') validity and test-retest reliability of the Danish version of the ThyPRO. Methods: For each of the 13 ThyPRO scales, we defined groups expected to have high versus low scores ('known-groups'). The clinical validity (known-groups validity) was evaluated by whether the ThyPRO scales could detect expected differences in a cross-sectional study of 907 thyroid patients. Test-retest reliability was evaluated by intra-class correlations of two responses to the ThyPRO 2 weeks apart in a subsample of 87 stable patients. Results: On all 13...

  12. Reliability analysis of neutron flux monitoring system for PFBR

    International Nuclear Information System (INIS)

    Rajesh, M.G.; Bhatnagar, P.V.; Das, D.; Pithawa, C.K.; Vinod, Gopika; Rao, V.V.S.S.

    2010-01-01

    The Neutron Flux Monitoring System (NFMS) measures reactor power, rate of change of power and reactivity changes in the core in all states of operation and shutdown. The system consists of instrument channels that are designed and built to have high reliability. All channels are required to have a Mean Time Between Failures (MTBF) of 150000 hours minimum. Failure Mode and Effects Analysis (FMEA) and failure rate estimation of NFMS channels have been carried out. FMEA is carried out in compliance with MIL-STD-338B. Reliability estimation of the channels is done according to MIL-HDBK-217FN2. The paper discusses the methodology followed for FMEA and failure rate estimation of two safety channels, and presents the results. (author)

  13. Application of modern reliability database techniques to military system data

    International Nuclear Information System (INIS)

    Bunea, Cornel; Mazzuchi, Thomas A.; Sarkani, Shahram; Chang, H.-C.

    2008-01-01

    This paper focuses on analysis techniques for modern reliability databases, with an application to military system data. The analysis of a military system database consists of the following steps: clean the data and perform operations on it in order to obtain good estimators; present simple plots of the data; analyze the data with statistical and probabilistic methods. Each step is dealt with separately and the main results are presented. Competing risks theory is advocated as the mathematical support for the analysis. The general framework of competing risks theory is presented together with simple independent and dependent competing risks models available in the literature. These models are used to identify the reliability and maintenance indicators required by the operating personnel. Model selection is based on graphical interpretation of the plotted data.

  14. A Reliable Measure of Information Security Awareness and the Identification of Bias in Responses

    Directory of Open Access Journals (Sweden)

    Agata McCormac

    2017-11-01

    Full Text Available The Human Aspects of Information Security Questionnaire (HAIS-Q) is designed to measure Information Security Awareness. More specifically, the tool measures an individual’s knowledge, attitude, and self-reported behaviour relating to information security in the workplace. This paper reports on the reliability of the HAIS-Q, including test-retest reliability and internal consistency. The paper also assesses the reliability of three preliminary over-claiming items, designed specifically to complement the HAIS-Q, and identify those individuals who provide socially desirable responses. A total of 197 working Australians completed two iterations of the HAIS-Q and the over-claiming items, approximately 4 weeks apart. Results of the analysis showed that the HAIS-Q was externally reliable and internally consistent. Therefore, the HAIS-Q can be used to reliably measure information security awareness. Reliability testing on the preliminary over-claiming items was not as robust and further development is required and recommended. The implications of these findings mean that organisations can confidently use the HAIS-Q to not only measure the current state of employee information security awareness within their organisation, but they can also measure the effectiveness and impacts of training interventions, information security awareness programs and campaigns. The influence of cultural changes and the effect of security incidents can also be assessed.
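
    Internal consistency of the kind reported here is typically quantified with Cronbach's alpha, computed from the item variances and the variance of the total score. A minimal sketch on made-up scores (the data below are hypothetical, not HAIS-Q items):

```python
from statistics import variance  # sample variance

def cronbach_alpha(scores):
    """Cronbach's alpha for a respondents-by-items score matrix."""
    k = len(scores[0])                          # number of items
    items = list(zip(*scores))                  # one column per item
    item_var_sum = sum(variance(col) for col in items)
    total_var = variance([sum(row) for row in scores])  # variance of totals
    return k / (k - 1) * (1 - item_var_sum / total_var)

scores = [  # 4 hypothetical respondents x 3 items
    [3, 4, 3],
    [5, 5, 4],
    [2, 3, 2],
    [4, 4, 5],
]
alpha = cronbach_alpha(scores)  # ≈ 0.9 for this toy matrix
```

    Test-retest reliability, by contrast, would be computed as a correlation (or intraclass correlation) between the two administrations of the scale.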

  15. Reliability of Direct Behavior Ratings - Social Competence (DBR-SC) data: How many ratings are necessary?

    Science.gov (United States)

    Kilgus, Stephen P; Riley-Tillman, T Chris; Stichter, Janine P; Schoemann, Alexander M; Bellesheim, Katie

    2016-09-01

    The purpose of this investigation was to evaluate the reliability of Direct Behavior Ratings-Social Competence (DBR-SC) ratings. Participants included 60 students identified as possessing deficits in social competence, as well as their 23 classroom teachers. Teachers used DBR-SC to complete ratings of 5 student behaviors within the general education setting on a daily basis across approximately 5 months. During this time, each student was assigned to 1 of 2 intervention conditions, including the Social Competence Intervention-Adolescent (SCI-A) and a business-as-usual (BAU) intervention. Ratings were collected across 3 intervention phases, including pre-, mid-, and postintervention. Results suggested DBR-SC ratings were highly consistent across time within each student, with reliability coefficients predominantly falling in the .80 and .90 ranges. Findings further indicated such levels of reliability could be achieved with only a small number of ratings, with estimates varying between 2 and 10 data points. Group comparison analyses further suggested the reliability of DBR-SC ratings increased over time, such that student behavior became more consistent throughout the intervention period. Furthermore, analyses revealed that for 2 of the 5 DBR-SC behavior targets, the increase in reliability over time was moderated by intervention grouping, with students receiving SCI-A demonstrating greater increases in reliability relative to those in the BAU group. Limitations of the investigation as well as directions for future research are discussed herein. (PsycINFO Database Record (c) 2016 APA, all rights reserved).

  16. A reliability analysis tool for SpaceWire network

    Science.gov (United States)

    Zhou, Qiang; Zhu, Longjiang; Fei, Haidong; Wang, Xingyou

    2017-04-01

    SpaceWire is a standard for on-board satellite networks and the basis for future data-handling architectures. It is becoming more and more popular in space applications due to its technical advantages, including reliability, low power, and fault protection. High reliability is a vital issue for spacecraft, so it is very important to analyze and improve the reliability performance of the SpaceWire network. This paper deals with the problem of reliability modeling and analysis of SpaceWire networks. According to the functional division of the distributed network, a task-based reliability analysis method is proposed: the reliability analysis of every task leads to the system reliability matrix, and the reliability of the network system is deduced by integrating all the reliability indexes in the matrix. With this method, we developed a reliability analysis tool for SpaceWire networks based on VC, in which the computation schemes for the reliability matrix and the multi-path-task reliability are also implemented. Using this tool, we analyzed several cases on typical architectures; the analytic results indicate that a redundant architecture has better reliability performance than a basic one. In practice, a dual-redundancy scheme has been adopted for some key units to improve the reliability index of the system or task. This reliability analysis tool will thus have a direct influence on both task division and topology selection in the design phase of a SpaceWire network system.
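
    The advantage of the redundant architecture over the basic one can be illustrated with the standard series/parallel reliability formulas (the per-link reliability below is a hypothetical value, not one from the paper):

```python
from functools import reduce
from operator import mul

def series(rels):
    """Series system: every unit must work, R = prod(R_i)."""
    return reduce(mul, rels)

def parallel(rels):
    """Redundant (parallel) group: R = 1 - prod(1 - R_i)."""
    return 1 - reduce(mul, (1 - r for r in rels))

r = 0.95                               # hypothetical per-link reliability
basic = series([r] * 4)                # four links in series, no redundancy
dual = series([parallel([r, r])] * 4)  # each link dual-redundant
# dual (~0.990) comfortably exceeds basic (~0.815)
```

    This is why duplicating only the weakest or most critical units can raise the task-level reliability index substantially.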

  17. Waste package reliability analysis

    International Nuclear Information System (INIS)

    Pescatore, C.; Sastre, C.

    1983-01-01

    Proof of future performance of a complex system such as a high-level nuclear waste package over a period of hundreds to thousands of years cannot be had in the ordinary sense of the word. The general method of probabilistic reliability analysis could provide an acceptable framework to identify, organize, and convey the information necessary to satisfy the criterion of reasonable assurance of waste package performance according to the regulatory requirements set forth in 10 CFR 60. General principles which may be used to evaluate the qualitative and quantitative reliability of a waste package design are indicated and illustrated with a sample calculation of a repository concept in basalt. 8 references, 1 table

  18. Accelerator reliability workshop

    Energy Technology Data Exchange (ETDEWEB)

    Hardy, L; Duru, Ph; Koch, J M; Revol, J L; Van Vaerenbergh, P; Volpe, A M; Clugnet, K; Dely, A; Goodhew, D

    2002-07-01

    About 80 experts attended this workshop, which brought together all accelerator communities: accelerator-driven systems, X-ray sources, medical and industrial accelerators, spallation source projects (American and European), nuclear physics, etc. With newly proposed accelerator applications such as nuclear waste transmutation and the replacement of nuclear power plants, reliability has now become a number-one priority for accelerator designers. Every part of an accelerator facility, from cryogenic systems to data storage via RF systems, is concerned by reliability. This aspect is now taken into account in the design/budget phase, especially for projects whose goal is to reach no more than 10 interruptions per year. This document gathers the slides but not the proceedings of the workshop.

  19. Human Reliability Program Workshop

    Energy Technology Data Exchange (ETDEWEB)

    Landers, John; Rogers, Erin; Gerke, Gretchen

    2014-05-18

    A Human Reliability Program (HRP) is designed to protect national security as well as worker and public safety by continuously evaluating the reliability of those who have access to sensitive materials, facilities, and programs. Some elements of a site HRP include systematic (1) supervisory reviews, (2) medical and psychological assessments, (3) management evaluations, (4) personnel security reviews, and (5) training of HRP staff and critical positions. Over the years of implementing an HRP, the Department of Energy (DOE) has faced various challenges and overcome obstacles. During this 4-day activity, participants will examine programs that mitigate threats to nuclear security and the insider threat to include HRP, Nuclear Security Culture (NSC) Enhancement, and Employee Assistance Programs. The focus will be to develop an understanding of the need for a systematic HRP and to discuss challenges and best practices associated with mitigating the insider threat.

  20. Scyllac equipment reliability analysis

    International Nuclear Information System (INIS)

    Gutscher, W.D.; Johnson, K.J.

    1975-01-01

    Most of the failures in Scyllac can be related to crowbar trigger cable faults. A new cable has been designed, procured, and is currently undergoing evaluation. When the new cable has been proven, it will be worked into the system as quickly as possible without causing too much additional down time. The cable-tip problem may not be easy or even desirable to solve. A tightly fastened permanent connection that maximizes contact area would be more reliable than the plug-in type of connection in use now, but it would make system changes and repairs much more difficult. The balance of the failures have such a low occurrence rate that they do not cause much down time and no major effort is underway to eliminate them. Even though Scyllac was built as an experimental system and has many thousands of components, its reliability is very good. Because of this the experiment has been able to progress at a reasonable pace

  1. Improving Power Converter Reliability

    DEFF Research Database (Denmark)

    Ghimire, Pramod; de Vega, Angel Ruiz; Beczkowski, Szymon

    2014-01-01

    The real-time junction temperature monitoring of a high-power insulated-gate bipolar transistor (IGBT) module is important to increase the overall reliability of power converters for industrial applications. This article proposes a new method to measure the on-state collector-emitter voltage of a high-power IGBT module during converter operation, which may play a vital role in improving the reliability of the power converters. The measured voltage is used to estimate the module average junction temperature of the high- and low-voltage sides of a half-bridge IGBT separately in every fundamental cycle; the voltage is measured in a wind power converter at a low fundamental frequency. To illustrate more, the test method as well as the performance of the measurement circuit are also presented. This measurement is also useful to indicate failure mechanisms such as bond wire lift-off and solder layer degradation...

  2. Accelerator reliability workshop

    International Nuclear Information System (INIS)

    Hardy, L.; Duru, Ph.; Koch, J.M.; Revol, J.L.; Van Vaerenbergh, P.; Volpe, A.M.; Clugnet, K.; Dely, A.; Goodhew, D.

    2002-01-01

    About 80 experts attended this workshop, which brought together all accelerator communities: accelerator-driven systems, X-ray sources, medical and industrial accelerators, spallation source projects (American and European), nuclear physics, etc. With newly proposed accelerator applications such as nuclear waste transmutation and the replacement of nuclear power plants, reliability has now become a number-one priority for accelerator designers. Every part of an accelerator facility, from cryogenic systems to data storage via RF systems, is concerned by reliability. This aspect is now taken into account in the design/budget phase, especially for projects whose goal is to reach no more than 10 interruptions per year. This document gathers the slides but not the proceedings of the workshop.

  3. Safety and reliability assessment

    International Nuclear Information System (INIS)

    1979-01-01

    This report contains the papers delivered at the course on safety and reliability assessment held at the CSIR Conference Centre, Scientia, Pretoria. The following topics were discussed: safety standards; licensing; biological effects of radiation; what is a PWR; safety principles in the design of a nuclear reactor; radio-release analysis; quality assurance; the staffing, organisation and training for a nuclear power plant project; event trees, fault trees and probability; Automatic Protective Systems; sources of failure-rate data; interpretation of failure data; synthesis and reliability; quantification of human error in man-machine systems; dispersion of noxious substances through the atmosphere; criticality aspects of enrichment and recovery plants; and risk and hazard analysis. Extensive examples are given as well as case studies

  4. Reliability of Circumplex Axes

    Directory of Open Access Journals (Sweden)

    Micha Strack

    2013-06-01

    Full Text Available We present a confirmatory factor analysis (CFA) procedure for computing the reliability of circumplex axes. The tau-equivalent CFA variance decomposition model estimates five variance components: general factor, axes, scale-specificity, block-specificity, and item-specificity. Only the axes variance component is used for reliability estimation. We apply the model to six circumplex types and 13 instruments assessing interpersonal and motivational constructs—Interpersonal Adjective List (IAL), Interpersonal Adjective Scales (revised; IAS-R), Inventory of Interpersonal Problems (IIP), Impact Messages Inventory (IMI), Circumplex Scales of Interpersonal Values (CSIV), Support Action Scale Circumplex (SAS-C), Interaction Problems With Animals (IPI-A), Team Role Circle (TRC), Competing Values Leadership Instrument (CV-LI), Love Styles, Organizational Culture Assessment Instrument (OCAI), Customer Orientation Circle (COC), and System for Multi-Level Observation of Groups (behavioral adjectives; SYMLOG)—in 17 German-speaking samples (29 subsamples), grouped by self-report, other report, and metaperception assessments. The general factor accounted for a proportion ranging from 1% to 48% of the item variance, the axes component for 2% to 30%, and scale-specificity for 1% to 28%. Reliability estimates varied considerably from .13 to .92. An application of the Nunnally and Bernstein formula proposed by Markey, Markey, and Tinsley overestimated axes reliabilities in cases of large scale-specificity but otherwise worked effectively. Contemporary circumplex evaluations such as Tracey’s RANDALL are sensitive to the ratio of the axes and scale-specificity components. In contrast, the proposed model isolates both components.

  5. The cost of reliability

    International Nuclear Information System (INIS)

    Ilic, M.

    1998-01-01

    In this article the restructuring process under way in the US power industry is revisited from the point of view of transmission system provision and reliability. While in the past this cost was rolled into the average cost of electricity to all customers, it is not so obvious how this cost is managed in the new industry. A new MIT approach to transmission pricing is suggested here as a possible solution.

  6. Test-retest reliability of the 40 Hz EEG auditory steady-state response.

    Directory of Open Access Journals (Sweden)

    Kristina L McFadden

    Full Text Available Auditory evoked steady-state responses are increasingly being used as a marker of brain function and dysfunction in various neuropsychiatric disorders, but research investigating the test-retest reliability of this response is lacking. The purpose of this study was to assess the consistency of the auditory steady-state response (ASSR) across sessions. Furthermore, the current study aimed to investigate how the reliability of the ASSR is impacted by stimulus parameters and analysis method employed. The consistency of this response across two sessions spaced approximately 1 week apart was measured in nineteen healthy adults using electroencephalography (EEG). The ASSR was entrained by both 40 Hz amplitude-modulated white noise and click train stimuli. Correlations between sessions were assessed with two separate analytical techniques: (a) a channel-level analysis across the whole-head array and (b) signal-space projection from auditory dipoles. Overall, the ASSR was significantly correlated between sessions 1 and 2 (p<0.05, multiple comparison corrected), suggesting adequate test-retest reliability of this response. The current study also suggests that measures of inter-trial phase coherence may be more reliable between sessions than measures of evoked power. Results were similar between the two analysis methods, but reliability varied depending on the presented stimulus, with click train stimuli producing more consistent responses than white noise stimuli.
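
    Inter-trial phase coherence (ITC) at the 40 Hz entrainment frequency is the magnitude of the mean unit-length phase vector across single-trial spectra, while evoked power is computed from the across-trial average. A sketch on simulated data (all simulation parameters are made up for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)
fs, f0, n_trials = 1000, 40, 50           # sampling rate, stimulus rate, trials
t = np.arange(0, 1, 1 / fs)

# simulated single trials: a 40 Hz response with phase jitter plus noise
phases = rng.normal(0.0, 0.3, n_trials)
trials = np.array([np.sin(2 * np.pi * f0 * t + p) + rng.normal(0, 1, t.size)
                   for p in phases])

spec = np.fft.rfft(trials, axis=1)
k40 = int(round(f0 * t.size / fs))        # FFT bin at 40 Hz (1 Hz resolution)

# ITC: 0 for random phases across trials, 1 for perfect phase locking
itc = np.abs(np.mean(spec[:, k40] / np.abs(spec[:, k40])))
# evoked power at 40 Hz: power of the across-trial average spectrum
evoked_power = np.abs(np.mean(spec[:, k40])) ** 2
```

    For this strongly phase-locked simulation the ITC comes out close to 1; with random phases across trials it would approach 0.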

  7. A Hierarchical Approach for Measuring the Consistency of Water Areas between Multiple Representations of Tile Maps with Different Scales

    Directory of Open Access Journals (Sweden)

    Yilang Shen

    2017-08-01

    Full Text Available In geographic information systems, the reliability of querying, analysing, or reasoning results depends on the data quality. One central criterion of data quality is consistency, and identifying inconsistencies is crucial for maintaining the integrity of spatial data from multiple sources or at multiple resolutions. In traditional methods of consistency assessment, vector data are used as the primary experimental data. In this manuscript, we describe the use of a new type of raster data, tile maps, to assess the consistency of information from multiscale representations of the water bodies that make up drainage systems. We describe a hierarchical methodology to determine the spatial consistency of tile-map datasets that display water areas in a raster format. Three characteristic indices, the degree of global feature consistency, the degree of local feature consistency, and the degree of overlap, are proposed to measure the consistency of multiscale representations of water areas. The perceptual hash algorithm and the scale-invariant feature transform (SIFT) descriptor are applied to extract and measure the global and local features of water areas. By performing combined calculations using these three characteristic indices, the degrees of consistency of multiscale representations of water areas can be divided into five grades: exactly consistent, highly consistent, moderately consistent, less consistent, and inconsistent. For evaluation purposes, the proposed method is applied to several test areas from the Tiandi map of China. In addition, we identify key technologies that are related to the process of extracting water areas from a tile map. The accuracy of the consistency assessment method is evaluated, and our experimental results confirm that the proposed methodology is efficient and accurate.
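
    The perceptual-hash step for global features can be sketched with the simple average-hash variant: threshold each (downsampled) pixel against the image mean and compare the resulting bit strings by Hamming distance. The tiny pixel grids below are hypothetical, not tile-map data:

```python
def average_hash(pixels):
    """One bit per pixel: 1 if brighter than the image mean (average hash)."""
    flat = [v for row in pixels for v in row]
    mean = sum(flat) / len(flat)
    return [1 if v > mean else 0 for v in flat]

def hamming(h1, h2):
    """Number of differing bits between two hashes."""
    return sum(a != b for a, b in zip(h1, h2))

a = [[10, 200], [200, 10]]   # toy 2x2 "water mask" at one scale
b = [[20, 190], [210, 5]]    # a similar representation at another scale
c = [[200, 10], [10, 200]]   # a dissimilar pattern
print(hamming(average_hash(a), average_hash(b)))  # 0: globally consistent
print(hamming(average_hash(a), average_hash(c)))  # 4: inconsistent
```

    A small Hamming distance between hashes of two scales indicates global feature consistency; local features would then be compared with SIFT-style descriptors.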

  8. When and why are reliable organizations favored?

    DEFF Research Database (Denmark)

    Ethiraj, Sendil; Yi, Sangyoon

    of the ensuing work examined only corollary implications of this observation. We treat the observation as a research question and ask: when and why are reliable organizations favored by evolutionary forces? Using a simple theoretical model, we direct attention at a minimal set of variables that are implicated...... shocks, reliable organizations can in fact outperform their less reliable counterparts if they can take advantage of the knowledge resident in their historical choices. While these results are counter-intuitive, the caveat is that our results are only an existence proof for our theory rather than...

  9. Force Concept Inventory-based multiple-choice test for investigating students’ representational consistency

    Directory of Open Access Journals (Sweden)

    Pasi Nieminen

    2010-08-01

    Full Text Available This study investigates students’ ability to interpret multiple representations consistently (i.e., representational consistency in the context of the force concept. For this purpose we developed the Representational Variant of the Force Concept Inventory (R-FCI, which makes use of nine items from the 1995 version of the Force Concept Inventory (FCI. These original FCI items were redesigned using various representations (such as motion map, vectorial and graphical, yielding 27 multiple-choice items concerning four central concepts underpinning the force concept: Newton’s first, second, and third laws, and gravitation. We provide some evidence for the validity and reliability of the R-FCI; this analysis is limited to the student population of one Finnish high school. The students took the R-FCI at the beginning and at the end of their first high school physics course. We found that students’ (n=168 representational consistency (whether scientifically correct or not varied considerably depending on the concept. On average, representational consistency and scientifically correct understanding increased during the instruction, although in the post-test only a few students performed consistently both in terms of representations and scientifically correct understanding. We also compared students’ (n=87 results of the R-FCI and the FCI, and found that they correlated quite well.

  10. Software reliability studies

    Science.gov (United States)

    Hoppa, Mary Ann; Wilson, Larry W.

    1994-01-01

    There are many software reliability models which try to predict future performance of software based on data generated by the debugging process. Our research has shown that by improving the quality of the data one can greatly improve the predictions. We are working on methodologies which control some of the randomness inherent in the standard data generation processes in order to improve the accuracy of predictions. Our contribution is twofold in that we describe an experimental methodology using a data structure called the debugging graph and apply this methodology to assess the robustness of existing models. The debugging graph is used to analyze the effects of various fault recovery orders on the predictive accuracy of several well-known software reliability algorithms. We found that, along a particular debugging path in the graph, the predictive performance of different models can vary greatly. Similarly, just because a model 'fits' a given path's data well does not guarantee that the model would perform well on a different path. Further we observed bug interactions and noted their potential effects on the predictive process. We saw that not only do different faults fail at different rates, but that those rates can be affected by the particular debugging stage at which the rates are evaluated. Based on our experiment, we conjecture that the accuracy of a reliability prediction is affected by the fault recovery order as well as by fault interaction.

  11. Investment in new product reliability

    International Nuclear Information System (INIS)

    Murthy, D.N.P.; Rausand, M.; Virtanen, S.

    2009-01-01

    Product reliability is of great importance to both manufacturers and customers. Building reliability into a new product is costly, but the consequences of inadequate product reliability can be costlier. This implies that manufacturers need to decide on the optimal investment in new product reliability by achieving a suitable trade-off between the two costs. This paper develops a framework and proposes an approach to help manufacturers decide on the investment in new product reliability.

  12. Consistent vapour-liquid equilibrium data containing lipids

    DEFF Research Database (Denmark)

    Cunico, Larissa; Ceriani, Roberta; Sarup, Bent

    Consistent physical and thermodynamic properties of pure components and their mixtures are important for process design, simulation, and optimization, as well as for the design of chemical-based products. In the case of lipids, a lack of experimental data for pure compounds and also for their mixtures was observed in the open literature, which makes the development of reliable predictive models based on limited data necessary. To contribute to the missing data, measurements of isobaric vapour-liquid equilibrium (VLE) data of three binary mixtures at two different pressures were performed at State University

  13. [Reliability and validity of warning signs checklist for screening psychological, behavioral and developmental problems of children].

    Science.gov (United States)

    Huang, X N; Zhang, Y; Feng, W W; Wang, H S; Cao, B; Zhang, B; Yang, Y F; Wang, H M; Zheng, Y; Jin, X M; Jia, M X; Zou, X B; Zhao, C X; Robert, J; Jing, Jin

    2017-06-02

    Objective: To evaluate the reliability and validity of warning signs checklist developed by the National Health and Family Planning Commission of the People's Republic of China (NHFPC), so as to determine the screening effectiveness of warning signs on developmental problems of early childhood. Method: Stratified random sampling method was used to assess the reliability and validity of checklist of warning sign and 2 110 children 0 to 6 years of age(1 513 low-risk subjects and 597 high-risk subjects) were recruited from 11 provinces of China. The reliability evaluation for the warning signs included the test-retest reliability and interrater reliability. With the use of Age and Stage Questionnaire (ASQ) and Gesell Development Diagnosis Scale (GESELL) as the criterion scales, criterion validity was assessed by determining the correlation and consistency between the screening results of warning signs and the criterion scales. Result: In terms of the warning signs, the screening positive rates at different ages ranged from 10.8%(21/141) to 26.2%(51/137). The median (interquartile) testing time for each subject was 1(0.6) minute. Both the test-retest reliability and interrater reliability of warning signs reached 0.7 or above, indicating that the stability was good. In terms of validity assessment, there was remarkable consistency between ASQ and warning signs, with the Kappa value of 0.63. With the use of GESELL as criterion, it was determined that the sensitivity of warning signs in children with suspected developmental delay was 82.2%, and the specificity was 77.7%. The overall Youden index was 0.6. Conclusion: The reliability and validity of warning signs checklist for screening early childhood developmental problems have met the basic requirements of psychological screening scales, with the characteristics of short testing time and easy operation. 
    Thus, this warning signs checklist can be used for screening psychological and behavioral problems of early childhood.
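
    Sensitivity, specificity, the Youden index, and Cohen's kappa reported above all follow from a 2x2 screen-versus-criterion table. A sketch with hypothetical counts (not the study's actual data):

```python
def screening_metrics(tp, fp, fn, tn):
    """Screening-test metrics from a 2x2 confusion table."""
    n = tp + fp + fn + tn
    sens = tp / (tp + fn)                 # sensitivity
    spec = tn / (tn + fp)                 # specificity
    youden = sens + spec - 1              # Youden index
    po = (tp + tn) / n                    # observed agreement
    pe = ((tp + fp) * (tp + fn)           # chance agreement
          + (fn + tn) * (fp + tn)) / n ** 2
    kappa = (po - pe) / (1 - pe)          # Cohen's kappa
    return sens, spec, youden, kappa

# hypothetical counts: screen positive/negative vs criterion delay/no delay
sens, spec, youden, kappa = screening_metrics(tp=45, fp=40, fn=5, tn=160)
print(sens, spec)  # 0.9 0.8 for these made-up counts
```

    With these toy counts the Youden index is 0.7; the study's values (sensitivity 82.2%, specificity 77.7%, Youden 0.6, kappa 0.63) would come out of the same formulas applied to its real table.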

  14. TFTR CAMAC power supplies reliability

    International Nuclear Information System (INIS)

    Camp, R.A.; Bergin, W.

    1989-01-01

    Since the expected life of the Tokamak Fusion Test Reactor (TFTR) has been extended into the early 1990's, the issues of equipment wear-out, when to refurbish/replace, and the costs associated with these decisions, must be faced. The management of the maintenance of the TFTR Central Instrumentation, Control and Data Acquisition System (CICADA) power supplies within the CAMAC network is a case study of a set of systems to monitor repairable systems reliability, costs, and results of action. The CAMAC network is composed of approximately 500 racks, each with its own power supply. By using a simple reliability estimator on a coarse time interval, in conjunction with determining the root cause of individual failures, a cost effective repair and maintenance program has been realized. This paper describes the estimator, some of the specific causes for recurring failures and their correction, and the subsequent effects on the reliability estimator. By extension of this program the authors can assess the continued viability of CAMAC power supplies into the future, predicting wear-out and developing cost effective refurbishment/replacement policies. 4 refs., 3 figs., 1 tab
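
    The "simple reliability estimator on a coarse time interval" can be sketched as a pooled point estimate of MTBF over all rack supplies. The rack count matches the approximately 500 supplies mentioned above, but the interval length and failure count are hypothetical:

```python
def mtbf_estimate(n_units, interval_hours, failures):
    """Point MTBF estimate: pooled unit-hours divided by observed failures."""
    unit_hours = n_units * interval_hours
    return unit_hours / failures if failures else float("inf")

# hypothetical: 500 rack supplies observed over one 720-hour (monthly) interval
mtbf = mtbf_estimate(n_units=500, interval_hours=720, failures=3)
print(mtbf)  # 120000.0 hours per failure
```

    Tracking this estimate interval by interval, alongside root-cause analysis of each failure, is what lets a maintenance program detect the onset of wear-out.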

  15. Reliability engineering theory and practice

    CERN Document Server

    Birolini, Alessandro

    2017-01-01

    This book shows how to build in and assess reliability, availability, maintainability, and safety (RAMS) of components, equipment, and systems. It presents the state of the art of reliability (RAMS) engineering, in theory and practice, and is based on the author's more than 30 years of experience in this field, half in industry and half as Professor of Reliability Engineering at the ETH, Zurich. The book's structure allows rapid access to practical results. Methods and tools are given in such a way that they can be tailored to cover different RAMS requirement levels. Thanks to Appendices A6 - A8 the book is mathematically self-contained, and can be used as a textbook or as a desktop reference, with a large number of tables (60), figures (210), and examples / exercises. Download figures (> 10,000 per year since 2013) were the motivation for this final edition, the 13th since 1985, including German editions. Extended and carefully reviewed to improve accuracy, it represents the continuous improvement effort to satisfy reader's needs and confidenc...

  16. Operator reliability assessment system (OPERAS)

    International Nuclear Information System (INIS)

    Singh, A.; Spurgin, A.J.; Martin, T.; Welsch, J.; Hallam, J.W.

    1991-01-01

    OPERAS is personal-computer (PC) based software to collect and process simulator data on control-room operators' responses during requalification training scenarios. The data collection scheme is based upon an approach developed earlier during the EPRI Operator Reliability Experiments project. The software allows automated data collection from the simulator, thus minimizing the simulator staff time and resources needed to collect, maintain and process data, which can be useful in monitoring, assessing and enhancing the progress of crew reliability and effectiveness. The system is designed to provide the data and output information in the form of user-friendly charts, tables and figures for use by plant staff. OPERAS prototype software has been implemented at the Diablo Canyon (PWR) and Millstone (BWR) plants and is currently being used to collect operator response data. Data collected from the simulator include plant-state variables such as reactor pressure and temperature, malfunctions, times at which annunciators are activated, operator actions, and observations of crew behavior by training staff. The data and systematic analytical results provided by the OPERAS system can contribute to increased objectivity by the utility probabilistic risk analysis (PRA) and training staff in monitoring and assessing the reliability of their crews

  17. Human factors reliability benchmark exercise

    International Nuclear Information System (INIS)

    Poucet, A.

    1989-08-01

    The Joint Research Centre of the European Commission has organised a Human Factors Reliability Benchmark Exercise (HF-RBE) with the aim of assessing the state of the art in human reliability modelling and assessment. Fifteen teams from eleven countries, representing industry, utilities, licensing organisations and research institutes, participated in the HF-RBE. The HF-RBE was organised around two study cases: (1) analysis of routine functional Test and Maintenance (T&M) procedures, with the aim of assessing the probability of test-induced failures, the probability of failures remaining unrevealed, and the potential to initiate transients because of errors performed in the test; (2) analysis of human actions during an operational transient, with the aim of assessing the probability that the operators will correctly diagnose the malfunctions and take proper corrective action. This report summarises the contributions received from the participants and analyses these contributions on a comparative basis. The aim of this analysis was to compare the procedures, modelling techniques and quantification methods used, to obtain insight into the causes and magnitude of the variability observed in the results, to try to identify preferred human reliability assessment approaches, and to get an understanding of the current state of the art in the field, identifying the limitations that are still inherent to the different approaches

  18. Reliability analysis of software based safety functions

    International Nuclear Information System (INIS)

    Pulkkinen, U.

    1993-05-01

    The methods applicable in the reliability analysis of software-based safety functions are described in the report. Although the safety functions also include other components, the main emphasis in the report is on the reliability analysis of software. Check-list-type qualitative reliability analysis methods, such as failure mode and effects analysis (FMEA), are described, as well as software fault tree analysis. Safety analysis based on Petri nets is discussed. The most essential concepts and models of quantitative software reliability analysis are described. The most common software metrics and their combined use with software reliability models are discussed. The application of software reliability models in PSA is evaluated; it is observed that the recent software reliability models do not directly produce the estimates needed in PSA. As a result of the study, some recommendations and conclusions are drawn. Among these are the need for formal methods in the analysis and development of software-based systems, the applicability of qualitative reliability engineering methods in connection with PSA, and the need to make the requirements for software-based systems and their analyses in the regulatory guides more precise. (orig.). (46 refs., 13 figs., 1 tab.)
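One of the most common quantitative software reliability models the report alludes to is the Goel-Okumoto NHPP model, whose mean value function m(t) = a(1 - e^(-bt)) gives the expected number of faults detected by execution time t. The sketch below is illustrative only; the parameter values are assumptions, not figures from the report.

```python
import math

def expected_faults(a, b, t):
    """Expected cumulative faults detected by execution time t (Goel-Okumoto)."""
    return a * (1.0 - math.exp(-b * t))

def residual_faults(a, b, t):
    """Expected faults still latent after testing for time t."""
    return a - expected_faults(a, b, t)

# Assumed parameters: a = total fault content, b = per-hour fault detection rate.
a, b = 120.0, 0.05
print(round(expected_faults(a, b, 40), 1), round(residual_faults(a, b, 40), 1))
```

The residual-fault estimate is the kind of intermediate quantity that, as the report notes, still has to be translated into the per-demand failure probabilities that PSA actually needs.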

  19. Mission Reliability Estimation for Repairable Robot Teams

    Science.gov (United States)

    Trebi-Ollennu, Ashitey; Dolan, John; Stancliff, Stephen

    2010-01-01

    A mission reliability estimation method has been designed to translate mission requirements into choices of robot modules in order to configure a multi-robot team to have high reliability at minimal cost. In order to build cost-effective robot teams for long-term missions, one must be able to compare alternative design paradigms in a principled way by comparing the reliability of different robot models and robot team configurations. Core modules have been created, including: a probabilistic module with reliability-cost characteristics, a method for combining the characteristics of multiple modules to determine an overall reliability-cost characteristic, and a method for generating legitimate module combinations based on mission specifications and selecting the best of the resulting combinations from a cost-reliability standpoint. The developed methodology can be used to predict the probability of a mission being completed, given information about the components used to build the robots, as well as information about the mission tasks. In the research for this innovation, sample robot missions were examined, comparing the performance of robot teams with different numbers of robots and different numbers of spare components. Data that a mission designer would need was factored in, such as whether it would be better to have a spare robot versus an equivalent number of spare parts, or whether mission cost can be reduced while maintaining reliability using spares. This analytical model was applied to an example robot mission, examining the cost-reliability tradeoffs among different team configurations. Particularly scrutinized were teams using either redundancy (spare robots) or repairability (spare components). Using conservative estimates of the cost-reliability relationship, results show that it is possible to significantly reduce the cost of a robotic mission by using cheaper, lower-reliability components and providing spares. This suggests that the
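The redundancy-versus-repairability comparison in the abstract reduces to elementary probability once module reliabilities are fixed. The sketch below illustrates the comparison; every reliability value in it is an assumption made for the example, not a figure from the study.

```python
from math import comb

def k_of_n(n, k, r):
    """Probability that at least k of n identical units of reliability r survive."""
    return sum(comb(n, i) * r**i * (1 - r)**(n - i) for i in range(k, n + 1))

# Redundancy: two robots (each with assumed mission reliability 0.80),
# and the mission needs only one survivor.
spare_robot = k_of_n(2, 1, 0.80)

# Repairability: one robot whose weakest module (assumed r = 0.9) carries a
# spare, so that module fails only if both copies fail; the rest of the
# robot's subsystems are lumped at an assumed r = 0.95.
module_with_spare = 1 - (1 - 0.9) ** 2
spare_component = module_with_spare * 0.95

print(f"spare robot: {spare_robot:.3f}  spare component: {spare_component:.3f}")
```

With these assumed numbers the spare robot wins on reliability, but at roughly the cost of a whole second robot; folding in module costs is what the method's reliability-cost characteristic is for.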

  20. Reliability Analysis of Elasto-Plastic Structures

    DEFF Research Database (Denmark)

    Thoft-Christensen, Palle; Sørensen, John Dalsgaard

    1984-01-01

    . Failure of this type of system is defined either as formation of a mechanism or by failure of a prescribed number of elements. In the first case failure is independent of the order in which the elements fail, but this is not so by the second definition. The reliability analysis consists of two parts...... are described and the two definitions of failure can be used by the first formulation, but only the failure definition based on formation of a mechanism by the second formulation. The second part of the reliability analysis is an estimate of the failure probability for the structure on the basis...
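The truncated abstract omits the estimation details, but the second part of such an analysis typically bounds the system failure probability from the probabilities of the individual failure modes (mechanisms). The simple series-system (Boole) bounds are sketched below with assumed mechanism probabilities, purely for illustration.

```python
def simple_bounds(p_mechanisms):
    """Series-system bounds on failure probability: max(p_i) <= P_f <= min(1, sum(p_i))."""
    return max(p_mechanisms), min(1.0, sum(p_mechanisms))

# Assumed failure probabilities for three dominant mechanisms (illustrative values).
lower, upper = simple_bounds([1e-4, 3e-5, 2e-4])
print(f"{lower:.2e} <= P_f <= {upper:.2e}")
```

The lower bound is exact when the mechanisms are fully dependent and the upper bound when they are disjoint; tighter bounds require the pairwise correlations between failure modes.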