WorldWideScience

Sample records for e-mri reliably detect

  1. Reliability of leak detection systems in LWRs

    International Nuclear Information System (INIS)

    Kupperman, D.S.

    1986-10-01

    In this paper, NRC guidelines for leak detection will be reviewed, current practices described, potential safety-related problems discussed, and potential improvements in leak detection technology (with emphasis on acoustic methods) evaluated

  2. New Multiplexing Tools for Reliable GMO Detection

    NARCIS (Netherlands)

    Pla, M.; Nadal, A.; Baeten, V.; Bahrdt, C.; Berben, G.; Bertheau, Y.; Coll, A.; Dijk, van J.P.; Dobnik, D.; Fernandez-Pierna, J.A.; Gruden, K.; Hamels, S.; Holck, A.; Holst-Jensen, A.; Janssen, E.; Kok, E.J.; Paz, La J.L.; Laval, V.; Leimanis, S.; Malcevschi, A.; Marmiroli, N.; Morisset, D.; Prins, T.W.; Remacle, J.; Ujhelyi, G.; Wulff, D.

    2012-01-01

    Among the available methods for GMO detection, enforcement and routine laboratories use in practice PCR, based on the detection of transgenic DNA. The cost required for GMO analysis is constantly increasing due to the progress of GMO commercialization, with inclusion of higher diversity of species,

  3. Reliably detectable flaw size for NDE methods that use calibration

    Science.gov (United States)

    Koshti, Ajay M.

    2017-04-01

    Probability of detection (POD) analysis is used to assess the reliably detectable flaw size in nondestructive evaluation (NDE). MIL-HDBK-1823 and the associated mh1823 POD software give the most common methods of POD analysis. In this paper, POD analysis is applied to an NDE method, such as eddy current testing, in which calibration is used. NDE calibration standards have artificial flaws of known size, such as electro-discharge machined (EDM) notches and flat bottom hole (FBH) reflectors, which are used to set instrument sensitivity for the detection of real flaws. Real flaws, such as cracks and crack-like flaws, are what these NDE methods are intended to detect. A reliably detectable crack size is required for safe-life analysis of fracture-critical parts. Therefore, it is important to correlate signal responses from real flaws with signal responses from the artificial flaws used in the calibration process to determine the reliably detectable flaw size.
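
    As a rough illustration of the kind of hit/miss POD analysis described in this record (not the MIL-HDBK-1823 mh1823 software itself), a POD curve can be fitted with logistic regression on log flaw size and inverted to obtain a90, the flaw size detected with 90% probability. The flaw sizes and hit/miss outcomes below are invented example data.

```python
# Minimal hit/miss POD sketch (illustrative; not the mh1823 POD software).
# Flaw sizes (mm) and detection outcomes are invented example data.
import numpy as np
from sklearn.linear_model import LogisticRegression

sizes = np.array([0.2, 0.3, 0.4, 0.5, 0.6, 0.8, 1.0, 1.2, 1.5, 2.0])
hits  = np.array([0,   0,   0,   1,   0,   1,   1,   1,   1,   1  ])  # 1 = flaw detected

# POD(a) is commonly modeled as a logistic function of log flaw size.
X = np.log(sizes).reshape(-1, 1)
model = LogisticRegression(C=1e6, max_iter=1000).fit(X, hits)  # large C ~ no regularization

# a90: flaw size with 90% probability of detection (invert the fitted logistic model).
b0, b1 = model.intercept_[0], model.coef_[0][0]
a90 = np.exp((np.log(0.9 / 0.1) - b0) / b1)
print(f"estimated a90 = {a90:.2f} mm")
```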

  4. Reliability evaluation of the Savannah River reactor leak detection system

    International Nuclear Information System (INIS)

    Daugherty, W.L.; Sindelar, R.L.; Wallace, I.T.

    1991-01-01

    The Savannah River Reactors have been in operation since the mid-1950s. The primary degradation mode for the primary coolant loop piping is intergranular stress corrosion cracking. The leak-before-break (LBB) capability of the primary system piping has been demonstrated as part of an overall structural integrity evaluation. One element of the LBB analyses is a reliability evaluation of the leak detection system. The most sensitive elements of the leak detection system are the airborne tritium monitors. The presence of small amounts of tritium in the heavy water coolant provides the basis for a very sensitive system of leak detection. The reliability of the tritium monitors to properly identify a crack leaking at a rate of either 50 or 300 lb/day (0.004 or 0.023 gpm, respectively) has been characterized. These leak rates correspond to action points for which specific operator actions are required. High reliability has been demonstrated using standard fault tree techniques. The probability of not detecting a leak within an assumed mission time of 24 hours is estimated to be approximately 5 x 10^-5 per demand. This result is obtained for both leak rates considered. The methodology and assumptions used to obtain this result are described in this paper. 3 refs., 1 fig., 1 tab
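
    The quoted probability of non-detection per demand comes from combining basic-event probabilities through fault tree gates. The sketch below shows only the generic AND/OR gate arithmetic under an independence assumption, with made-up event probabilities; it is not the actual Savannah River fault tree or its data.

```python
# Generic fault-tree gate arithmetic for a "leak not detected" top event.
# Event probabilities are illustrative placeholders, not the actual SRS values.

def or_gate(probs):
    """P(at least one event occurs), assuming independence."""
    p_none = 1.0
    for p in probs:
        p_none *= (1.0 - p)
    return 1.0 - p_none

def and_gate(probs):
    """P(all events occur), assuming independence."""
    p_all = 1.0
    for p in probs:
        p_all *= p
    return p_all

# Hypothetical structure: detection fails if both redundant monitors fail on demand
# (AND gate) or if a common-cause event disables the whole channel (OR gate).
p_monitor_fail = 5e-3      # single monitor fails to alarm on demand (made up)
p_common_cause = 2.5e-5    # common-cause failure of the channel (made up)

p_top = or_gate([and_gate([p_monitor_fail, p_monitor_fail]), p_common_cause])
print(f"P(leak not detected per demand) ~ {p_top:.1e}")
```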

  5. Fault detection and reliability, knowledge based and other approaches

    International Nuclear Information System (INIS)

    Singh, M.G.; Hindi, K.S.; Tzafestas, S.G.

    1987-01-01

    These proceedings are split up into four major parts in order to reflect the most significant aspects of reliability and fault detection as viewed at present. The first part deals with knowledge-based systems and comprises eleven contributions from leading experts in the field. The emphasis here is primarily on the use of artificial intelligence, expert systems and other knowledge-based systems for fault detection and reliability. The second part is devoted to fault detection of technological systems and comprises thirteen contributions dealing with applications of fault detection techniques to various technological systems such as gas networks, electric power systems, nuclear reactors and assembly cells. The third part of the proceedings, which consists of seven contributions, treats robust, fault tolerant and intelligent controllers and covers methodological issues as well as several applications ranging from nuclear power plants to industrial robots to steel grinding. The fourth part treats fault tolerant digital techniques and comprises five contributions. Two papers, one on reactor noise analysis, the other on reactor control system design, are indexed separately. (author)

  6. RELIABILITY OF THE DETECTION OF THE BARYON ACOUSTIC PEAK

    International Nuclear Information System (INIS)

    Martínez, Vicent J.; Arnalte-Mur, Pablo; De la Cruz, Pablo; Saar, Enn; Tempel, Elmo; Pons-Bordería, María Jesús; Paredes, Silvestre; Fernández-Soto, Alberto

    2009-01-01

    The correlation function of the distribution of matter in the universe shows, at large scales, baryon acoustic oscillations, which were imprinted prior to recombination. This feature was first detected in the correlation function of the luminous red galaxies of the Sloan Digital Sky Survey (SDSS). Recently, the final release (DR7) of the SDSS has been made available, and its useful volume is about two times larger than that of the old sample. We present here, for the first time, the redshift-space correlation function of this sample at large scales, together with that for a shallower but denser volume-limited subsample drawn from the Two-Degree Field Redshift Survey. We test the reliability of the detection of the acoustic peak at about 100 h^-1 Mpc and the behavior of the correlation function at larger scales by means of careful estimation of errors. We confirm the presence of the peak in the latest data, although it is broader than in previous detections.

  7. Reliability analysis for the quench detection in the LHC machine

    CERN Document Server

    Denz, R; Vergara-Fernández, A

    2002-01-01

    The Large Hadron Collider (LHC) will incorporate a large number of superconducting elements that require protection in case of a quench. Key elements in the quench protection system are the electronic quench detectors. Their reliability will have an important impact on the downtime as well as on the operational cost of the collider. The expected rates of both false and missed quenches have been computed for several redundant detection schemes. The developed model takes account of the maintainability of the system in order to optimise the frequency of foreseen checks and to evaluate their influence on the performance of different detection topologies. Given the uncertainty in the failure rates of the components, combined with the LHC tunnel environment, the study has been completed with a sensitivity analysis of the results. The chosen detection scheme and the maintainability strategy for each detector family are given.

  8. Detecting binary black holes with efficient and reliable templates

    International Nuclear Information System (INIS)

    Damour, T.; Iyer, B.R.; Sathyaprakash, B.S.

    2001-01-01

    Detecting binary black holes in interferometer data requires an accurate knowledge of the orbital phase evolution of the system. From the point of view of data analysis, one also needs fast algorithms to compute the templates that will be employed in searching for black hole binaries. Recently, there has been progress on both these fronts: On one hand, re-summation techniques have made it possible to accelerate the convergence of poorly convergent asymptotic post-Newtonian series and derive waveforms beyond the conventional adiabatic approximation. We now have a waveform model that extends beyond the inspiral regime into the plunge phase followed by the quasi-normal mode ringing. On the other hand, explicit Fourier domain waveforms have been derived that make the generation of waveforms fast enough so as not to be a burden on the computational resources required in filtering the detector data. These new developments should make it possible to efficiently and reliably search for black hole binaries in data from the first interferometers. (author)

  9. Reliability

    OpenAIRE

    Condon, David; Revelle, William

    2017-01-01

    Separating the signal in a test from the irrelevant noise is a challenge for all measurement. Low test reliability limits test validity, attenuates important relationships, and can lead to regression artifacts. Multiple approaches to the assessment and improvement of reliability are discussed. The advantages and disadvantages of several different approaches to reliability are considered. Practical advice on how to assess reliability using open source software is provided.

  10. Reliability of leak detection systems in light water reactors

    International Nuclear Information System (INIS)

    Kupperman, D.S.

    1987-01-01

    US Nuclear Regulatory Commission Regulatory Guide 1.45 recommends the use of at least three different detection methods in reactors to detect leakage. Monitoring of both sump-flow and airborne particulate radioactivity is recommended. A third method can involve either monitoring of condensate flow rate from air coolers or monitoring of airborne gaseous radioactivity. Although the methods currently used for leak detection reflect the state of the art, other techniques may be developed and used. Since the recommendations of Regulatory Guide 1.45 are not mandatory, the technical specifications for 74 operating plants have been reviewed to determine the types of leak detection methods employed. In addition, Licensee Event Report (LER) Compilations from June 1985 to June 1986 have been reviewed to help establish actual capabilities for detecting leaks and determining their source. Work at Argonne National Laboratory has demonstrated that improvements in leak detection, location, and sizing are possible with advanced acoustic leak detection technology

  11. PV Systems Reliability Final Technical Report: Ground Fault Detection

    Energy Technology Data Exchange (ETDEWEB)

    Lavrova, Olga [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Flicker, Jack David [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Johnson, Jay [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States)

    2016-01-01

    We have examined ground faults in photovoltaic (PV) arrays and the efficacy of fuses, residual current detection (RCD), current sense monitoring/relays (CSM), isolation/insulation (Riso) monitoring, and Ground Fault Detection and Isolation (GFID), using simulations based on a Simulation Program with Integrated Circuit Emphasis (SPICE) ground fault circuit model, experimental ground faults installed on real arrays, and theoretical equations.

  12. Objective Methods for Reliable Detection of Concealed Depression

    Directory of Open Access Journals (Sweden)

    Cynthia Solomon

    2015-04-01

    Recent research has shown that it is possible to automatically detect clinical depression from audio-visual recordings. Before considering integration in a clinical pathway, a key question that must be asked is whether such systems can be easily fooled. This work explores the potential of acoustic features to detect clinical depression in adults both when acting normally and when asked to conceal their depression. Nine adults diagnosed with mild to moderate depression as per the Beck Depression Inventory (BDI-II) and Patient Health Questionnaire (PHQ-9) were asked a series of questions and to read an excerpt from a novel aloud under two different experimental conditions. In one, participants were asked to act naturally and in the other, to suppress anything that they felt would be indicative of their depression. Acoustic features were then extracted from these data and analysed using paired t-tests to determine any statistically significant differences between healthy and depressed participants. Most features that were found to be significantly different during normal behaviour remained so during concealed behaviour. In leave-one-subject-out automatic classification studies of the 9 depressed subjects and 8 matched healthy controls, an 88% classification accuracy and 89% sensitivity was achieved. Results remained relatively robust during concealed behaviour, with classifiers trained on only non-concealed data achieving 81% detection accuracy and 75% sensitivity when tested on concealed data. These results indicate there is good potential to build deception-proof automatic depression monitoring systems.
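
    The leave-one-subject-out evaluation described above can be reproduced in outline with scikit-learn's LeaveOneGroupOut splitter, where each "group" is one subject. The random feature matrix, labels and SVM classifier in this sketch are placeholders, not the study's acoustic features or actual model.

```python
# Sketch of leave-one-subject-out (LOSO) classification; features, labels and
# the SVM classifier are placeholders, not the study's actual data or model.
import numpy as np
from sklearn.model_selection import LeaveOneGroupOut
from sklearn.svm import SVC
from sklearn.metrics import accuracy_score, recall_score

rng = np.random.default_rng(0)
n_subjects, n_per_subject, n_features = 17, 10, 20   # e.g. 9 depressed + 8 controls
X = rng.normal(size=(n_subjects * n_per_subject, n_features))
subjects = np.repeat(np.arange(n_subjects), n_per_subject)
y = (subjects < 9).astype(int)            # 1 = depressed, 0 = control (placeholder)

y_true, y_pred = [], []
for train_idx, test_idx in LeaveOneGroupOut().split(X, y, groups=subjects):
    clf = SVC(kernel="rbf").fit(X[train_idx], y[train_idx])   # train without the held-out subject
    y_pred.extend(clf.predict(X[test_idx]))
    y_true.extend(y[test_idx])

print("accuracy   :", accuracy_score(y_true, y_pred))
print("sensitivity:", recall_score(y_true, y_pred))   # recall on the depressed class
```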

  13. Research Note The reliability of a field test kit for the detection and ...

    African Journals Online (AJOL)

    Research Note The reliability of a field test kit for the detection and the persistence of ... The objectives were to test a field kit for practicality and reliability, to assess the spread of the bacteria among ...

  14. Towards Reliable Evaluation of Anomaly-Based Intrusion Detection Performance

    Science.gov (United States)

    Viswanathan, Arun

    2012-01-01

    This report describes the results of research into the effects of environment-induced noise on the evaluation process for anomaly detectors in the cyber security domain. This research was conducted during a 10-week summer internship program from the 19th of August, 2012 to the 23rd of August, 2012 at the Jet Propulsion Laboratory in Pasadena, California. The research performed lies within the larger context of the Los Angeles Department of Water and Power (LADWP) Smart Grid cyber security project, a Department of Energy (DoE) funded effort involving the Jet Propulsion Laboratory, California Institute of Technology and the University of Southern California/Information Sciences Institute. The results of the present effort constitute an important contribution towards building more rigorous evaluation paradigms for anomaly-based intrusion detectors in complex cyber-physical systems such as the Smart Grid. Anomaly detection is a key strategy for cyber intrusion detection; it operates by identifying deviations from profiles of nominal behavior and is thus conceptually appealing for detecting "novel" attacks. Evaluating the performance of such a detector requires assessing: (a) how well it captures the model of nominal behavior, and (b) how well it detects attacks (deviations from normality). Current evaluation methods produce results that give insufficient insight into the operation of a detector, inevitably resulting in a poor characterization of a detector's performance. In this work, we first describe a preliminary taxonomy of key evaluation constructs that are necessary for establishing rigor in the evaluation regime of an anomaly detector. We then focus on clarifying the impact of the operational environment on the manifestation of attacks in monitored data. We show how dynamic and evolving environments can introduce high variability into the data stream, perturbing detector performance. Prior research has focused on understanding the impact of this

  15. Bedside ultrasound reliability in locating catheter and detecting complications

    Directory of Open Access Journals (Sweden)

    Payman Moharamzadeh

    2016-10-01

    Introduction: Central venous catheterization is one of the most common medical procedures and is associated with such complications as misplacement and pneumothorax. Chest X-ray is among the good ways to evaluate these complications. However, due to the patient's excessive exposure to radiation, time consumption and low diagnostic value in detecting pneumothorax in the supine patient, the present study intends to examine the diagnostic value of bedside ultrasound in locating the tip of the catheter and detecting pneumothorax. Materials and methods: In the present cross-sectional study, all referred patients requiring central venous catheterization were examined. Central venous catheterization was performed by a trained emergency medicine specialist, and the location of the catheter and the presence of pneumothorax were examined and compared using two modalities, ultrasound and X-ray (as the reference standard). Sensitivity, specificity, and positive and negative predictive values were reported. Results: A total of 200 non-trauma patients were included in the study (58% men). Cohen's kappa consistency coefficients for catheter location and diagnosis of pneumothorax were 0.49 (95% CI: 0.43-0.55) and 0.89 (P<0.001; 95% CI: 97.8-100), respectively. Also, ultrasound sensitivity and specificity in diagnosing pneumothorax were 75% (95% CI: 35.6-95.5) and 100% (95% CI: 97.6-100), respectively. Conclusion: The present study results showed a low diagnostic value of ultrasound in determining catheter location and in detecting pneumothorax. In light of previous studies, further research in this field is still needed. Keywords: central venous catheterization; complications; bedside ultrasound; radiography
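
    The sensitivity, specificity and Cohen's kappa figures quoted above come from a 2x2 comparison of ultrasound against the chest X-ray reference standard. The counts in the sketch below are hypothetical, chosen only to show the arithmetic, not taken from the study.

```python
# Sensitivity, specificity and Cohen's kappa from a 2x2 table of
# ultrasound vs. chest X-ray (reference standard); counts are hypothetical.
tp, fn = 3, 1        # pneumothorax on X-ray: detected / missed by ultrasound
fp, tn = 0, 196      # no pneumothorax on X-ray: false alarms / correct negatives
n = tp + fn + fp + tn

sensitivity = tp / (tp + fn)
specificity = tn / (tn + fp)

# Cohen's kappa compares observed agreement with chance agreement.
p_observed = (tp + tn) / n
p_chance = ((tp + fp) * (tp + fn) + (fn + tn) * (fp + tn)) / n**2
kappa = (p_observed - p_chance) / (1 - p_chance)

print(f"sensitivity={sensitivity:.2f} specificity={specificity:.2f} kappa={kappa:.2f}")
```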

  16. Reliability considerations of electronics components for the deep underwater muon and neutrino detection system

    International Nuclear Information System (INIS)

    Leskovar, B.

    1980-02-01

    The reliability of some electronics components for the Deep Underwater Muon and Neutrino Detection (DUMAND) System is discussed. An introductory overview of engineering concepts and techniques for reliability assessment is given. Component reliability is discussed in the context of the major factors causing failures, particularly with respect to physical and chemical causes, process technology and testing, and screening procedures. Failure rates are presented for discrete devices and for integrated circuits as well as for basic electronics components. Furthermore, the military reliability specifications and standards for semiconductor devices are reviewed

  17. The Reliability and Effectiveness of a Radar-Based Animal Detection System

    Science.gov (United States)

    2017-09-22

    This document contains data on the reliability and effectiveness of an animal detection system along U.S. Hwy 95 near Bonners Ferry, Idaho. The system uses a Doppler radar to detect large mammals (e.g., deer and elk) when they approach the highway. T...

  18. The Reliability and Effectiveness of a Radar-Based Animal Detection System

    Science.gov (United States)

    2017-09-01

    This document contains data on the reliability and effectiveness of an animal detection system along U.S. Hwy 95 near Bonners Ferry, Idaho. The system uses a Doppler radar to detect large mammals (e.g., deer and elk) when they approach the highway. T...

  19. Advances in developing rapid, reliable and portable detection systems for alcohol.

    Science.gov (United States)

    Thungon, Phurpa Dema; Kakoti, Ankana; Ngashangva, Lightson; Goswami, Pranab

    2017-11-15

    Development of a portable, reliable, sensitive, simple, and inexpensive detection system for alcohol has been a persistent demand not only in the traditional brewing, pharmaceutical, food and clinical industries but also in the rapidly growing alcohol-based fuel industries. Highly sensitive, selective, and reliable alcohol detection is currently achievable mainly through sophisticated instrument-based analyses confined mostly to state-of-the-art analytical laboratory facilities. With the growing demand for rapid and reliable alcohol detection systems, an all-round attempt has been made over the past decade encompassing various disciplines from basic and engineering sciences. Of late, research on developing small-scale portable alcohol detection systems has accelerated with the advent of emerging miniaturization techniques, advanced materials and sensing platforms such as lab-on-chip, lab-on-CD, and lab-on-paper. With these new interdisciplinary approaches, along with support from the parallel growth of knowledge on rapid detection systems being pursued for various targets, progress on translating proof-of-concepts into commercially viable and environmentally friendly portable alcohol detection systems is gaining pace. Here, we summarize the progress made over the years on alcohol detection systems, with a focus on recent advances towards developing portable, simple and efficient alcohol sensors. Copyright © 2017 Elsevier B.V. All rights reserved.

  20. Reliability and minimal detectable difference in multisegment foot kinematics during shod walking and running.

    Science.gov (United States)

    Milner, Clare E; Brindle, Richard A

    2016-01-01

    There has been increased interest recently in measuring kinematics within the foot during gait. While several multisegment foot models have appeared in the literature, the Oxford foot model has been used frequently for both walking and running. Several studies have reported the reliability of the Oxford foot model, but most studies to date have reported reliability for barefoot walking. The purpose of this study was to determine between-day (intra-rater) and within-session (inter-trial) reliability of the modified Oxford foot model during shod walking and running and to calculate the minimum detectable difference for common variables of interest. Healthy adult male runners participated. Participants ran and walked in the gait laboratory for five trials of each. Three-dimensional gait analysis was conducted and foot and ankle joint angle time series data were calculated. Participants returned for a second gait analysis at least 5 days later. Intraclass correlation coefficients and minimum detectable difference were determined for walking and for running, to indicate both within-session and between-day reliability. Overall, relative variables were more reliable than absolute variables, and within-session reliability was greater than between-day reliability. Between-day intraclass correlation coefficients were comparable to those reported previously for adults walking barefoot. The use of the Oxford foot model is extended here by incorporating a shoe while maintaining marker placement directly on the skin for each segment. These reliability data for walking and running will aid in the determination of meaningful differences in studies which use this model during shod gait. Copyright © 2015 Elsevier B.V. All rights reserved.
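
    A common way to turn intraclass correlation coefficients into a minimum detectable difference (the paper does not spell out its exact formula, so the form below is an assumption) is via the standard error of measurement: SEM = SD * sqrt(1 - ICC) and MDD95 = 1.96 * sqrt(2) * SEM. The ICC and SD values in the sketch are illustrative only.

```python
# Minimum detectable difference from ICC, using the commonly assumed formulas
# SEM = SD * sqrt(1 - ICC) and MDD95 = 1.96 * sqrt(2) * SEM.
# The ICC and SD values below are illustrative, not taken from the paper.
import math

def minimum_detectable_difference(icc: float, sd: float) -> float:
    sem = sd * math.sqrt(1.0 - icc)        # standard error of measurement
    return 1.96 * math.sqrt(2.0) * sem     # 95% confidence minimum detectable difference

# Example: a peak joint-angle variable with between-day ICC = 0.85 and SD = 4.0 deg.
print(f"MDD95 = {minimum_detectable_difference(0.85, 4.0):.1f} deg")
```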

  1. The reliability of magnetic resonance imaging in traumatic brain injury lesion detection

    NARCIS (Netherlands)

    Geurts, B.H.J.; Andriessen, T.M.J.C.; Goraj, B.M.; Vos, P.E.

    2012-01-01

    Objective: This study compares inter-rater reliability, lesion detection and clinical relevance of T2-weighted imaging (T2WI), Fluid Attenuated Inversion Recovery (FLAIR), T2*-gradient recalled echo (T2*-GRE) and Susceptibility Weighted Imaging (SWI) in Traumatic Brain Injury (TBI). Methods: Three

  2. Reliability of recordings of subgingival calculus detected using an ultrasonic device.

    Science.gov (United States)

    Corraini, Priscila; López, Rodrigo

    2015-04-01

    To assess the intra-examiner reliability of recordings of subgingival calculus detected using an ultrasonic device, and to investigate the influence of subject-, tooth- and site-level factors on the reliability of these subgingival calculus recordings. On two occasions, within a 1-week interval, 147 adult periodontitis patients received a full-mouth clinical periodontal examination by a single trained examiner. Duplicate subgingival calculus recordings, in six sites per tooth, were obtained using an ultrasonic device for calculus detection and removal. Agreement was observed in 65% of the 22,584 duplicate subgingival calculus recordings, ranging from 45% to 83% according to subject. Using hierarchical modeling, disagreements in the duplicate subgingival calculus recordings were more likely in all sites other than the mid-buccal, and in sites harboring supragingival calculus. Disagreements were less likely in sites with PD ≥ 4 mm and with furcation involvement ≥ degree 2. Bleeding on probing or suppuration did not influence the reliability of subgingival calculus recordings. At the subject level, disagreements were less likely in patients presenting with the highest and lowest extent categories of the covariate subgingival calculus. The reliability of subgingival calculus recordings using ultrasound technology is reasonable. The results of the present study suggest that the reliability of subgingival calculus recordings is not influenced by the presence of inflammation. Moreover, subgingival calculus can be more reliably detected using the ultrasound device at sites with a higher need for periodontal therapy, i.e., sites presenting with deep pockets and premolars and molars with furcation involvement.

  3. Test-retest reliability of myofascial trigger point detection in hip and thigh areas.

    Science.gov (United States)

    Rozenfeld, E; Finestone, A S; Moran, U; Damri, E; Kalichman, L

    2017-10-01

    Myofascial trigger points (MTrPs) are a primary source of pain in patients with musculoskeletal disorders. Nevertheless, they are frequently underdiagnosed. Reliable MTrP palpation is necessary for their diagnosis and treatment. The few studies that have examined intra-tester reliability of MTrP detection in the upper body provide preliminary evidence that MTrP palpation is reliable. Reliability tests for MTrP palpation on the lower limb have not yet been performed. To evaluate inter- and intra-tester reliability of MTrP recognition in hip and thigh muscles. Reliability study. 21 patients (15 males and 6 females, mean age 21.1 years) referred to the physical therapy clinic, 10 with knee or hip pain and 11 with pain in an upper limb, low back, shin or ankle. Two experienced physical therapists performed the examinations, blinded to the subjects' identity, medical condition and results of the previous MTrP evaluation. Each subject was evaluated four times, twice by each examiner in a random order. Dichotomous findings included a palpable taut band, tenderness, referred pain, and relevance of referred pain to the patient's complaint. Based on these, a diagnosis of latent or active MTrPs was established. The evaluation was performed on both legs and included a total of 16 locations in the following muscles: rectus femoris (proximal), vastus medialis (middle and distal), vastus lateralis (middle and distal) and gluteus medius (anterior, posterior and distal). Inter- and intra-tester reliability (Cohen's kappa (κ)) values for single sites ranged from -0.25 to 0.77. Median intra-tester reliability was 0.45 and 0.46 for latent and active MTrPs, and median inter-tester reliability was 0.51 and 0.64 for latent and active MTrPs, respectively. The examination of the distal vastus medialis was most reliable for latent and active MTrPs (intra-tester k = 0.27-0.77, inter-tester k = 0.77 and intra-tester k = 0.53-0.72, inter-tester k = 0.72, correspondingly

  4. Reliability assessment for thickness measurements of pipe wall using probability of detection

    International Nuclear Information System (INIS)

    Nakamoto, Hiroyuki; Kojima, Fumio; Kato, Sho

    2013-01-01

    This paper proposes a reliability assessment method for thickness measurements of pipe walls using probability of detection (POD). Pipe thicknesses are measured by qualified inspectors with ultrasonic thickness gauges. The inspection results are affected by human factors and include some errors, because inspectors differ in experience and frequency of inspection. In order to ensure the reliability of inspection results, POD is first used to evaluate experimental results of pipe-wall thickness inspection. We verify that the results differ between inspectors, including qualified inspectors. Second, two human factors that affect POD are identified. Finally, it is confirmed that POD can identify these human factors and ensure reliability of pipe-wall thickness inspections. (author)

  5. Reliability Study Regarding the Use of Histogram Similarity Methods for Damage Detection

    Directory of Open Access Journals (Sweden)

    Nicoleta Gillich

    2013-01-01

    The paper analyses the reliability of three dissimilarity estimators for comparing histograms, as support for a frequency-based damage detection method able to identify structural changes in beam-like structures. First, a brief presentation of the previously developed damage detection method is made, with focus on damage localization. It consists of comparing a histogram derived from measurement results with a large series of histograms, namely the damage location indexes for all locations along the beam, obtained by calculation. We tested dissimilarity estimators such as the Minkowski-form distances, the Kullback-Leibler divergence and the histogram intersection, and found the Minkowski distance to be the method providing the best results. It was tested for numerous locations, using real measurement results as well as results artificially degraded by noise, proving its reliability.
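
    The Minkowski-form distance preferred in this study reduces to an elementwise norm between two normalized histograms. The sketch below shows that comparison for hypothetical histograms standing in for the measured result and one calculated damage-location index; it is not the authors' code.

```python
# Minkowski-form distance between two normalized histograms (p = 1 gives the
# city-block distance, p = 2 the Euclidean distance). Histograms are hypothetical.
import numpy as np

def minkowski_distance(h1, h2, p=2):
    h1 = np.asarray(h1, dtype=float) / np.sum(h1)   # normalize to unit mass
    h2 = np.asarray(h2, dtype=float) / np.sum(h2)
    return np.sum(np.abs(h1 - h2) ** p) ** (1.0 / p)

measured  = [0.05, 0.10, 0.30, 0.35, 0.15, 0.05]    # histogram from measured frequencies
candidate = [0.04, 0.12, 0.28, 0.36, 0.14, 0.06]    # calculated damage-location index

# The candidate location with the smallest distance to the measured histogram
# would be reported as the most likely damage location.
print(f"d_2 = {minkowski_distance(measured, candidate, p=2):.4f}")
```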

  6. Indian program for development of technologies relevant to reliable, non-intrusive, concealed-contraband detection

    International Nuclear Information System (INIS)

    Auluck, S.K.H.

    2007-01-01

    Generating capability for reliable, non-intrusive detection of concealed-contraband, particularly, organic contraband like explosives and narcotics, has become a national priority. This capability spans a spectrum of technologies. If a technology mission addressing the needs of a highly sophisticated technology like PFNA is set up, the capabilities acquired would be adequate to meet the requirements of many other sets of technologies. This forms the background of the Indian program for development of technologies relevant to reliable, non-intrusive, concealed contraband detection. One of the central themes of the technology development programs would be modularization of the neutron source and detector technologies, so that common elements can be combined in different ways for meeting a variety of application requirements. (author)

  7. Scenario based approach to structural damage detection and its value in a risk and reliability perspective

    DEFF Research Database (Denmark)

    Hovgaard, Mads Knude; Hansen, Jannick Balleby; Brincker, Rune

    2013-01-01

    A scenario- and vibration-based structural damage detection method is demonstrated through simulation. The method is Finite Element (FE) based. The value of the monitoring is calculated using structural reliability theory. A high cycle fatigue crack propagation model is assumed as the damage mechanism ... with and without monitoring. Monte Carlo Sampling (MCS) is used to estimate the probabilities, and the tower of an onshore NREL 5MW wind turbine is given as a calculation case.

  8. NDE reliability and probability of detection (POD) evolution and paradigm shift

    Energy Technology Data Exchange (ETDEWEB)

    Singh, Surendra [NDE Engineering, Materials and Process Engineering, Honeywell Aerospace, Phoenix, AZ 85034 (United States)

    2014-02-18

    The subject of NDE reliability and POD has gone through multiple phases since its humble beginning in the late 1960s. This was followed by several programs, including the important one nicknamed “Have Cracks – Will Travel”, or in short “Have Cracks”, by Lockheed Georgia Company for the US Air Force during 1974–1978. This and other studies ultimately led to a series of developments in the field of reliability and POD, from the introduction of fracture mechanics and Damage Tolerant Design (DTD), to the statistical framework for POD estimation by Berens and Hovey in 1981, to MIL-HDBK-1823 (1999) and 1823A (2009). During the last decade, various groups and researchers have further studied reliability and POD using Model Assisted POD (MAPOD), Simulation Assisted POD (SAPOD), and Bayesian statistics. Each of these developments had one objective, i.e., improving the accuracy of life prediction in components, which to a large extent depends on the reliability and capability of NDE methods. Therefore, it is essential to have reliable detection and sizing of large flaws in components. Currently, POD is used for studying the reliability and capability of NDE methods, though POD data offer no absolute truth regarding NDE reliability, i.e., system capability, effects of flaw morphology, and quantification of human factors. Furthermore, reliability and POD have often been reported as if they were synonymous, but POD is not NDE reliability. POD is a subset of reliability, which consists of six phases: 1) sample selection using design of experiments (DOE), 2) NDE equipment setup and calibration, 3) System Measurement Evaluation (SME), including Gage Repeatability and Reproducibility (Gage R and R) and Analysis Of Variance (ANOVA), 4) NDE system capability and electronic and physical saturation, 5) acquiring and fitting data to a model, and data analysis, and 6) POD estimation. This paper provides an overview of all major POD milestones of the last several decades and discusses the rationale for using

  9. A novel approach for reliable detection of cathepsin S activities in mouse antigen presenting cells.

    Science.gov (United States)

    Steimle, Alex; Kalbacher, Hubert; Maurer, Andreas; Beifuss, Brigitte; Bender, Annika; Schäfer, Andrea; Müller, Ricarda; Autenrieth, Ingo B; Frick, Julia-Stefanie

    2016-05-01

    Cathepsin S (CTSS) is a eukaryotic protease mostly expressed in professional antigen presenting cells (APCs). Since CTSS activity regulation plays a role in the pathogenesis of various autoimmune diseases like multiple sclerosis, atherosclerosis, Sjögren's syndrome and psoriasis, as well as in cancer progression, there is an ongoing interest in the reliable detection of cathepsin S activity. Various applications have been invented for specific detection of this enzyme. However, most of them have only been shown to be suitable for human samples, do not deliver quantitative results, or require technical equipment that is not commonly available in a standard laboratory. We have tested a fluorogenic substrate, Mca-GRWPPMGLPWE-Lys(Dnp)-DArg-NH2, that has been described to specifically detect CTSS activities in human APCs, for its potential use with mouse samples. We have modified the protocol and thereby offer a cheap, easy, reproducible and quick activity assay to detect CTSS activities in mouse APCs. Since most basic research on CTSS is performed in mice, this method closes a gap and offers a possibility for reliable and quantitative CTSS activity detection that can be performed in almost every laboratory. Copyright © 2016. Published by Elsevier B.V.

  10. Testing effort dependent software reliability model for imperfect debugging process considering both detection and correction

    International Nuclear Information System (INIS)

    Peng, R.; Li, Y.F.; Zhang, W.J.; Hu, Q.P.

    2014-01-01

    This paper studies the fault detection process (FDP) and fault correction process (FCP) with the incorporation of testing effort function and imperfect debugging. In order to ensure high reliability, it is essential for software to undergo a testing phase, during which faults can be detected and corrected by debuggers. The testing resource allocation during this phase, which is usually depicted by the testing effort function, considerably influences not only the fault detection rate but also the time to correct a detected fault. In addition, testing is usually far from perfect such that new faults may be introduced. In this paper, we first show how to incorporate testing effort function and fault introduction into FDP and then develop FCP as delayed FDP with a correction effort. Various specific paired FDP and FCP models are obtained based on different assumptions of fault introduction and correction effort. An illustrative example is presented. The optimal release policy under different criteria is also discussed
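
    The paper's specific paired FDP/FCP models are not reproduced here; as a generic illustration of a testing-effort-dependent detection curve with a delayed correction curve, the sketch below uses a Goel-Okumoto-type mean value function driven by a Weibull testing-effort function. All parameter values and the simple fixed correction delay are invented assumptions.

```python
# Illustrative testing-effort-dependent detection/correction curves: a generic
# Goel-Okumoto-type form with a Weibull testing-effort function, not necessarily
# the specific paired FDP/FCP models proposed in the paper.
import numpy as np

a, b = 100.0, 0.02                 # expected total faults, detection rate per unit effort
alpha, beta, m = 40.0, 0.1, 2.0    # Weibull testing-effort parameters (made up)

def cumulative_effort(t):
    """W(t): cumulative testing effort consumed by time t (Weibull form)."""
    return alpha * (1.0 - np.exp(-beta * t ** m))

def detected(t):
    """Expected number of faults detected by time t."""
    return a * (1.0 - np.exp(-b * cumulative_effort(t)))

def corrected(t, delay=2.0):
    """Simplest delayed-correction assumption: correction lags detection by 'delay'."""
    return detected(np.maximum(t - delay, 0.0))

for ti in np.linspace(0, 30, 7):
    print(f"t={ti:5.1f}  detected={detected(ti):6.2f}  corrected={corrected(ti):6.2f}")
```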

  11. Reliable Grid Condition Detection and Control of Single-Phase Distributed Power Generation Systems

    DEFF Research Database (Denmark)

    Ciobotaru, Mihai

    standards addressed to the grid-connected systems will harmonize the combination of the DPGS and the classical power plants. Consequently, the major tasks of this thesis were to develop new grid condition detection techniques and intelligent control in order to allow the DPGS not only to deliver power...... to the utility grid but also to sustain it. This thesis was divided into two main parts, namely "Grid Condition Detection" and "Control of Single-Phase DPGS". In the first part, the main focus was on reliable Phase Locked Loop (PLL) techniques for monitoring the grid voltage and on grid impedance estimation...... techniques. Additionally, a new technique for detecting the islanding mode has been developed and successfully tested. In the second part, the main reported research was concentrated around adaptive current controllers based on the information provided by the grid condition detection techniques. To guarantee...

  12. Is sequential cranial ultrasound reliable for detection of white matter injury in very preterm infants?

    International Nuclear Information System (INIS)

    Leijser, Lara M.; Steggerda, Sylke J.; Walther, Frans J.; Wezel-Meijler, Gerda van; Bruine, Francisca T. de; Grond, Jeroen van der

    2010-01-01

    Cranial ultrasound (cUS) may not be reliable for detection of diffuse white matter (WM) injury. Our aim was to assess in very preterm infants the reliability of a classification system for WM injury on sequential cUS throughout the neonatal period, using magnetic resonance imaging (MRI) as reference standard. In 110 very preterm infants (gestational age <32 weeks), serial cUS during admission (median 8, range 4-22) and again around term equivalent age (TEA) and a single MRI around TEA were performed. cUS during admission were assessed for presence of WM changes, and contemporaneous cUS and MRI around TEA additionally for abnormality of lateral ventricles. Sequential cUS (from birth up to TEA) and MRI were classified as normal/mildly abnormal, moderately abnormal, or severely abnormal, based on a combination of findings of the WM and lateral ventricles. Predictive values of the cUS classification were calculated. Sequential cUS were classified as normal/mildly abnormal, moderately abnormal, and severely abnormal in, respectively, 22%, 65%, and 13% of infants and MRI in, respectively, 30%, 52%, and 18%. The positive predictive value of the cUS classification for the MRI classification was high for severely abnormal WM (0.79) but lower for normal/mildly abnormal (0.67) and moderately abnormal (0.64) WM. Sequential cUS during the neonatal period detects severely abnormal WM in very preterm infants but is less reliable for mildly and moderately abnormal WM. MRI around TEA seems needed to reliably detect WM injury in very preterm infants. (orig.)

  13. reliability reliability

    African Journals Online (AJOL)

    eobe

    RELIABILITY ... given by the code of practice. However, checks must ... an optimization procedure over the failure domain F corresponding ... of Concrete Members based on Utility Theory, Technical ...

  14. Rapid and reliable detection and identification of GM events using multiplex PCR coupled with oligonucleotide microarray.

    Science.gov (United States)

    Xu, Xiaodan; Li, Yingcong; Zhao, Heng; Wen, Si-yuan; Wang, Sheng-qi; Huang, Jian; Huang, Kun-lun; Luo, Yun-bo

    2005-05-18

    To devise a rapid and reliable method for the detection and identification of genetically modified (GM) events, we developed a multiplex polymerase chain reaction (PCR) coupled with a DNA microarray system simultaneously aiming at many targets in a single reaction. The system included probes for screening gene, species reference gene, specific gene, construct-specific gene, event-specific gene, and internal and negative control genes. 18S rRNA was combined with species reference genes as internal controls to assess the efficiency of all reactions and to eliminate false negatives. Two sets of the multiplex PCR system were used to amplify four and five targets, respectively. Eight different structure genes could be detected and identified simultaneously for Roundup Ready soybean in a single microarray. The microarray specificity was validated by its ability to discriminate two GM maizes Bt176 and Bt11. The advantages of this method are its high specificity and greatly reduced false-positives and -negatives. The multiplex PCR coupled with microarray technology presented here is a rapid and reliable tool for the simultaneous detection of GM organism ingredients.

  15. Reliability and Minimum Detectable Change of Temporal-Spatial, Kinematic, and Dynamic Stability Measures during Perturbed Gait.

    Directory of Open Access Journals (Sweden)

    Christopher A Rábago

    Temporal-spatial, kinematic variability, and dynamic stability measures collected during perturbation-based assessment paradigms are often used to identify dysfunction associated with gait instability. However, it remains unclear which measures are most reliable for detecting and tracking responses to perturbations. This study systematically determined the between-session reliability and minimum detectable change values of temporal-spatial, kinematic variability, and dynamic stability measures during three types of perturbed gait. Twenty young healthy adults completed two identical testing sessions two weeks apart, comprised of an unperturbed and three perturbed (cognitive, physical, and visual) walking conditions in a virtual reality environment. Within each session, perturbation responses were compared to unperturbed walking using paired t-tests. Between-session reliability and minimum detectable change values were also calculated for each measure and condition. All temporal-spatial, kinematic variability and dynamic stability measures demonstrated fair to excellent between-session reliability. Minimum detectable change values, normalized to mean values, ranged from 1% to 50%. Step width mean and variability measures demonstrated the greatest response to perturbations, with excellent between-session reliability and low minimum detectable change values. Orbital stability measures demonstrated specificity to perturbation direction and sensitivity, with excellent between-session reliability and low minimum detectable change values. We observed substantially greater between-session reliability and lower minimum detectable change values for local stability measures than previously described, which may be the result of averaging across trials within a session and using velocity versus acceleration data for reconstruction of state spaces. Across all perturbation types, temporal-spatial, orbital and local measures were the most reliable measures with the

  16. MOA-2010-BLG-311: A PLANETARY CANDIDATE BELOW THE THRESHOLD OF RELIABLE DETECTION

    International Nuclear Information System (INIS)

    Yee, J. C.; Hung, L.-W.; Gould, A.; Gaudi, B. S.; Bond, I. A.; Allen, W.; Monard, L. A. G.; Albrow, M. D.; Fouqué, P.; Dominik, M.; Tsapras, Y.; Udalski, A.; Zellem, R.; Bos, M.; Christie, G. W.; DePoy, D. L.; Dong, Subo; Drummond, J.; Gorbikov, E.; Han, C.

    2013-01-01

    We analyze MOA-2010-BLG-311, a high magnification (A_max > 600) microlensing event with complete data coverage over the peak, making it very sensitive to planetary signals. We fit this event with both a point lens and a two-body lens model and find that the two-body lens model is a better fit, but with only Δχ² ∼ 80. The preferred mass ratio between the lens star and its companion is q = 10^(-3.7±0.1), placing the candidate companion in the planetary regime. Despite the formal significance of the planet, we show that because of systematics in the data the evidence for a planetary companion to the lens is too tenuous to claim a secure detection. When combined with analyses of other high-magnification events, this event helps empirically define the threshold for reliable planet detection in high-magnification events, which remains an open question.

  17. MOA-2010-BLG-311: A PLANETARY CANDIDATE BELOW THE THRESHOLD OF RELIABLE DETECTION

    Energy Technology Data Exchange (ETDEWEB)

    Yee, J. C.; Hung, L.-W.; Gould, A.; Gaudi, B. S. [Department of Astronomy, Ohio State University, 140 West 18th Avenue, Columbus, OH 43210 (United States); Bond, I. A. [Institute for Information and Mathematical Sciences, Massey University, Private Bag 102-904, Auckland 1330 (New Zealand); Allen, W. [Vintage Lane Observatory, Blenheim (New Zealand); Monard, L. A. G. [Bronberg Observatory, Centre for Backyard Astrophysics, Pretoria (South Africa); Albrow, M. D. [Department of Physics and Astronomy, University of Canterbury, Private Bag 4800, Christchurch 8020 (New Zealand); Fouque, P. [IRAP, CNRS, Universite de Toulouse, 14 avenue Edouard Belin, F-31400 Toulouse (France); Dominik, M. [SUPA, University of St. Andrews, School of Physics and Astronomy, North Haugh, St. Andrews, KY16 9SS (United Kingdom); Tsapras, Y. [Las Cumbres Observatory Global Telescope Network, 6740B Cortona Drive, Goleta, CA 93117 (United States); Udalski, A. [Warsaw University Observatory, Al. Ujazdowskie 4, 00-478 Warszawa (Poland); Zellem, R. [Department of Planetary Sciences/LPL, University of Arizona, 1629 East University Boulevard, Tucson, AZ 85721 (United States); Bos, M. [Molehill Astronomical Observatory, North Shore City, Auckland (New Zealand); Christie, G. W. [Auckland Observatory, P.O. Box 24-180, Auckland (New Zealand); DePoy, D. L. [Department of Physics, Texas A and M University, 4242 TAMU, College Station, TX 77843-4242 (United States); Dong, Subo [Institute for Advanced Study, Einstein Drive, Princeton, NJ 08540 (United States); Drummond, J. [Possum Observatory, Patutahi (New Zealand); Gorbikov, E. [School of Physics and Astronomy, Raymond and Beverley Sackler Faculty of Exact Sciences, Tel-Aviv University, Tel Aviv 69978 (Israel); Han, C., E-mail: liweih@astro.ucla.edu, E-mail: rzellem@lpl.arizona.edu, E-mail: tim.natusch@aut.ac.nz [Department of Physics, Chungbuk National University, 410 Seongbong-Rho, Hungduk-Gu, Chongju 371-763 (Korea, Republic of); Collaboration: muFUN Collaboration; MOA Collaboration; OGLE Collaboration; PLANET Collaboration; RoboNet Collaboration; MiNDSTEp Consortium; and others

    2013-05-20

    We analyze MOA-2010-BLG-311, a high magnification (A_max > 600) microlensing event with complete data coverage over the peak, making it very sensitive to planetary signals. We fit this event with both a point lens and a two-body lens model and find that the two-body lens model is a better fit, but with only Δχ² ≈ 80. The preferred mass ratio between the lens star and its companion is q = 10^(-3.7±0.1), placing the candidate companion in the planetary regime. Despite the formal significance of the planet, we show that because of systematics in the data the evidence for a planetary companion to the lens is too tenuous to claim a secure detection. When combined with analyses of other high-magnification events, this event helps empirically define the threshold for reliable planet detection in high-magnification events, which remains an open question.

  18. Visual acuity measures do not reliably detect childhood refractive error--an epidemiological study.

    Directory of Open Access Journals (Sweden)

    Lisa O'Donoghue

    PURPOSE: To investigate the utility of uncorrected visual acuity measures in screening for refractive error in white school children aged 6-7 years and 12-13 years. METHODS: The Northern Ireland Childhood Errors of Refraction (NICER) study used a stratified random cluster design to recruit children from schools in Northern Ireland. Detailed eye examinations included assessment of logMAR visual acuity and cycloplegic autorefraction. Spherical equivalent refractive data from the right eye were used to classify significant refractive error as myopia of at least 1DS, hyperopia as greater than +3.50DS, and astigmatism as greater than 1.50DC, whether it occurred in isolation or in association with myopia or hyperopia. RESULTS: Results are presented for 661 white 12-13-year-old and 392 white 6-7-year-old school children. Using a cut-off of uncorrected visual acuity poorer than 0.20 logMAR to detect significant refractive error gave a sensitivity of 50% and specificity of 92% in 6-7-year-olds, and 73% and 93% respectively in 12-13-year-olds. In 12-13-year-old children a cut-off of poorer than 0.20 logMAR had a sensitivity of 92% and a specificity of 91% in detecting myopia, and a sensitivity of 41% and a specificity of 84% in detecting hyperopia. CONCLUSIONS: Vision screening using logMAR acuity can reliably detect myopia, but not hyperopia or astigmatism, in school-age children. Providers of vision screening programs should be cognisant that where detection of uncorrected hyperopic and/or astigmatic refractive error is an aspiration, current UK protocols will not effectively deliver it.

  19. A Method for Improving Reliability of Radiation Detection using Deep Learning Framework

    International Nuclear Information System (INIS)

    Chang, Hojong; Kim, Tae-Ho; Han, Byunghun; Kim, Hyunduk; Kim, Ki-duk

    2017-01-01

    Radiation detection is an essential technology for the overall field of radiation and nuclear engineering. Previously, radiation detection technology has relied on preparing, in advance, a table mapping input spectra to output spectra, which requires simulating numerous predicted output spectra using parameters that model the spectrum. In this paper, we propose a new technique to improve the performance of radiation detectors. The software in radiation detectors has been stagnant for a while, with possible intrinsic errors of simulation. In the proposed method, a deep neural network is used to predict the input source from the output spectrum measured by the radiation detector. With a highly complex model, we expect that the complex relationship between the data and the labels can be captured well. Furthermore, the radiation detector should be calibrated regularly and beforehand; we propose a method to calibrate the radiation detector using a generative adversarial network (GAN). We hope that the power of deep learning may also reach radiation detectors and bring major improvements to the field. With an improved radiation detector, detection reliability would be increased, and many tasks remain to be solved using deep learning in the nuclear engineering community.
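
    To make the proposed mapping concrete, the minimal sketch below trains a small neural network to predict a source class from a measured output spectrum. The synthetic Gaussian-peak spectra, class labels and network size are all assumptions for illustration, not the authors' detector model, architecture or training data.

```python
# Minimal sketch: predict the input source class from a measured output spectrum
# with a small neural network. Synthetic Gaussian-peak spectra stand in for real
# detector responses; none of this reflects the authors' actual model or data.
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(1)
channels = np.arange(128)
peak_channels = {0: 40, 1: 70, 2: 100}     # hypothetical source classes

def synth_spectrum(label):
    peak = np.exp(-0.5 * ((channels - peak_channels[label]) / 3.0) ** 2)
    background = 0.2 * np.exp(-channels / 60.0)
    return peak + background + 0.05 * rng.normal(size=channels.size)

y = rng.integers(0, 3, size=600)
X = np.array([synth_spectrum(lbl) for lbl in y])

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
clf = MLPClassifier(hidden_layer_sizes=(64, 32), max_iter=500, random_state=0)
clf.fit(X_train, y_train)
print("test accuracy:", clf.score(X_test, y_test))
```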

  20. Designing a reliable leak bio-detection system for natural gas pipelines

    International Nuclear Information System (INIS)

    Batzias, F.A.; Siontorou, C.G.; Spanidis, P.-M.P.

    2011-01-01

    Monitoring of natural gas (NG) pipelines is an important task for economical/safety operation, loss prevention and environmental protection. Timely and reliable leak detection of gas pipelines, therefore, plays a key role in the overall integrity management for the pipeline system. Owing to the various limitations of the currently available techniques and the surveillance area that needs to be covered, the research on new detector systems is still thriving. Biosensors are considered worldwide to be a niche technology in the environmental market, since they afford the desired detector capabilities at low cost, provided they have been properly designed/developed and rationally placed/networked/maintained by the aid of operational research techniques. This paper addresses NG leakage surveillance through a robust cooperative/synergistic scheme between biosensors and conventional detector systems; the network is validated in situ and optimized in order to provide reliable information at the required granularity level. The proposed scheme is substantiated through a knowledge-based approach and relies on Fuzzy Multicriteria Analysis (FMCA) for selecting the best biosensor design that suits both the target analyte and the operational micro-environment. This approach is illustrated in the design of leak surveying over a pipeline network in Greece.

  1. Designing a reliable leak bio-detection system for natural gas pipelines

    Energy Technology Data Exchange (ETDEWEB)

    Batzias, F.A., E-mail: fbatzi@unipi.gr [Univ. Piraeus, Dept. Industrial Management and Technology, Karaoli and Dimitriou 80, 18534 Piraeus (Greece); Siontorou, C.G., E-mail: csiontor@unipi.gr [Univ. Piraeus, Dept. Industrial Management and Technology, Karaoli and Dimitriou 80, 18534 Piraeus (Greece); Spanidis, P.-M.P., E-mail: pspani@asprofos.gr [Asprofos Engineering S.A, El. Venizelos 284, 17675 Kallithea (Greece)

    2011-02-15

    Monitoring of natural gas (NG) pipelines is an important task for economical/safety operation, loss prevention and environmental protection. Timely and reliable leak detection of gas pipelines, therefore, plays a key role in the overall integrity management for the pipeline system. Owing to the various limitations of the currently available techniques and the surveillance area that needs to be covered, the research on new detector systems is still thriving. Biosensors are considered worldwide to be a niche technology in the environmental market, since they afford the desired detector capabilities at low cost, provided they have been properly designed/developed and rationally placed/networked/maintained by the aid of operational research techniques. This paper addresses NG leakage surveillance through a robust cooperative/synergistic scheme between biosensors and conventional detector systems; the network is validated in situ and optimized in order to provide reliable information at the required granularity level. The proposed scheme is substantiated through a knowledge-based approach and relies on Fuzzy Multicriteria Analysis (FMCA) for selecting the best biosensor design that suits both the target analyte and the operational micro-environment. This approach is illustrated in the design of leak surveying over a pipeline network in Greece.

  2. Designing a reliable leak bio-detection system for natural gas pipelines.

    Science.gov (United States)

    Batzias, F A; Siontorou, C G; Spanidis, P-M P

    2011-02-15

    Monitoring of natural gas (NG) pipelines is an important task for economical/safety operation, loss prevention and environmental protection. Timely and reliable leak detection of gas pipelines, therefore, plays a key role in the overall integrity management for the pipeline system. Owing to the various limitations of the currently available techniques and the surveillance area that needs to be covered, the research on new detector systems is still thriving. Biosensors are considered worldwide to be a niche technology in the environmental market, since they afford the desired detector capabilities at low cost, provided they have been properly designed/developed and rationally placed/networked/maintained by the aid of operational research techniques. This paper addresses NG leakage surveillance through a robust cooperative/synergistic scheme between biosensors and conventional detector systems; the network is validated in situ and optimized in order to provide reliable information at the required granularity level. The proposed scheme is substantiated through a knowledge-based approach and relies on Fuzzy Multicriteria Analysis (FMCA) for selecting the best biosensor design that suits both the target analyte and the operational micro-environment. This approach is illustrated in the design of leak surveying over a pipeline network in Greece. Copyright © 2010 Elsevier B.V. All rights reserved.
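
    The fuzzy multicriteria selection step described in the abstracts above can be pictured as scoring candidate biosensor designs against weighted criteria expressed as triangular fuzzy numbers and ranking by a defuzzified score. The criteria, weights and scores in the toy sketch below are invented for illustration and are not taken from the paper.

```python
# Toy fuzzy multicriteria ranking of candidate biosensor designs.
# Triangular fuzzy numbers are (low, mode, high); weights/scores are invented.
criteria_weights = {              # importance of each criterion (fuzzy)
    "sensitivity": (0.7, 0.9, 1.0),
    "robustness":  (0.5, 0.7, 0.9),
    "maintenance": (0.3, 0.5, 0.7),
}
designs = {                       # criterion scores per candidate design (fuzzy)
    "design_A": {"sensitivity": (0.6, 0.8, 0.9), "robustness": (0.5, 0.6, 0.8), "maintenance": (0.4, 0.6, 0.7)},
    "design_B": {"sensitivity": (0.7, 0.9, 1.0), "robustness": (0.3, 0.5, 0.6), "maintenance": (0.6, 0.8, 0.9)},
}

def tfn_mul(x, y):                # approximate product of two positive triangular fuzzy numbers
    return tuple(a * b for a, b in zip(x, y))

def tfn_add(x, y):                # sum of two triangular fuzzy numbers
    return tuple(a + b for a, b in zip(x, y))

def defuzzify(x):                 # centroid of a triangular fuzzy number
    return sum(x) / 3.0

def rank(designs, weights):
    scores = {}
    for name, crit_scores in designs.items():
        total = (0.0, 0.0, 0.0)
        for crit, w in weights.items():
            total = tfn_add(total, tfn_mul(w, crit_scores[crit]))
        scores[name] = defuzzify(total)
    return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)

print(rank(designs, criteria_weights))   # best-scoring design first
```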

  3. Diagnostic reliability of 3.0-T MRI for detecting osseous abnormalities of the temporomandibular joint.

    Science.gov (United States)

    Sawada, Kunihiko; Amemiya, Toshihiko; Hirai, Shigenori; Hayashi, Yusuke; Suzuki, Toshihiro; Honda, Masahiko; Sisounthone, Johnny; Matsumoto, Kunihito; Honda, Kazuya

    2018-01-01

    We compared the diagnostic reliability of 3.0-T magnetic resonance imaging (MRI) for detection of osseous abnormalities of the temporomandibular joint (TMJ) with that of the gold standard, cone-beam computed tomography (CBCT). Fifty-six TMJs were imaged with CBCT and MRI, and images of condyles and fossae were independently assessed for the presence of osseous abnormalities. The accuracy, sensitivity, and specificity of 3.0-T MRI were 0.88, 1.0, and 0.73, respectively, in condyle evaluation and 0.91, 0.75, and 0.95 in fossa evaluation. The McNemar test showed no significant difference (P > 0.05) between MRI and CBCT in the evaluation of osseous abnormalities in condyles and fossae. The present results indicate that 3.0-T MRI is equal to CBCT in the diagnostic evaluation of osseous abnormalities of the mandibular condyle.
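
    The head-to-head comparison above reduces to a 2 x 2 agreement analysis between MRI and the CBCT reference plus a McNemar test on the discordant readings. The short Python sketch below shows those standard calculations on made-up toy data; the SciPy dependency and the example readings are assumptions, not material from the study.

```python
import numpy as np
from scipy.stats import binomtest  # assumes SciPy >= 1.7 is available

def diagnostic_metrics(index_test, reference):
    """Accuracy, sensitivity and specificity of a binary index test against a reference standard."""
    index_test = np.asarray(index_test, dtype=bool)
    reference = np.asarray(reference, dtype=bool)
    tp = np.sum(index_test & reference)
    tn = np.sum(~index_test & ~reference)
    fp = np.sum(index_test & ~reference)
    fn = np.sum(~index_test & reference)
    return {"accuracy": (tp + tn) / reference.size,
            "sensitivity": tp / (tp + fn),
            "specificity": tn / (tn + fp)}

def mcnemar_exact(reading_a, reading_b):
    """Exact McNemar test on the discordant paired readings (binomial with p = 0.5)."""
    a = np.asarray(reading_a, dtype=bool)
    b = np.asarray(reading_b, dtype=bool)
    n_ab = int(np.sum(a & ~b))   # positive on A only
    n_ba = int(np.sum(~a & b))   # positive on B only
    n_discordant = n_ab + n_ba
    return binomtest(min(n_ab, n_ba), n_discordant, 0.5).pvalue if n_discordant else 1.0

# Hypothetical toy readings: 1 = osseous abnormality judged present
mri  = [1, 1, 0, 1, 0, 0, 1, 0, 1, 1]
cbct = [1, 1, 0, 0, 0, 0, 1, 0, 1, 1]
print(diagnostic_metrics(mri, cbct))
print("McNemar p =", mcnemar_exact(mri, cbct))
```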

  4. Photogrammetry: an accurate and reliable tool to detect thoracic musculoskeletal abnormalities in preterm infants.

    Science.gov (United States)

    Davidson, Josy; dos Santos, Amelia Miyashiro N; Garcia, Kessey Maria B; Yi, Liu C; João, Priscila C; Miyoshi, Milton H; Goulart, Ana Lucia

    2012-09-01

    To analyse the accuracy and reproducibility of photogrammetry in detecting thoracic abnormalities in infants born prematurely. Cross-sectional study. The Premature Clinic at the Federal University of São Paulo. Fifty-eight infants born prematurely in their first year of life. Measurement of the manubrium/acromion/trapezius angle (degrees) and the deepest thoracic retraction (cm). Digitised photographs were analysed by two blinded physiotherapists using a computer program (SAPO; http://SAPO.incubadora.fapesp.br) to detect shoulder elevation and thoracic retraction. Physical examinations performed independently by two physiotherapists were used to assess the accuracy of the new tool. Thoracic alterations were detected in 39 (67%) and in 40 (69%) infants by Physiotherapists 1 and 2, respectively (kappa coefficient=0.80). Using a receiver operating characteristic curve, measurement of the manubrium/acromion/trapezius angle and the deepest thoracic retraction indicated accuracy of 0.79 and 0.91, respectively. For measurement of the manubrium/acromion/trapezius angle, the Bland and Altman limits of agreement were -6.22 to 7.22° [mean difference (d)=0.5] for repeated measures by one physiotherapist, and -5.29 to 5.79° (d=0.75) between two physiotherapists. For thoracic retraction, the intra-rater limits of agreement were -0.14 to 0.18 cm (d=0.02) and the inter-rater limits of agreement were -0.20 to -0.17 cm (d=0.02). SAPO provided an accurate and reliable tool for the detection of thoracic abnormalities in preterm infants. Copyright © 2011 Chartered Society of Physiotherapy. Published by Elsevier Ltd. All rights reserved.
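
    The Bland-Altman limits of agreement quoted above are conventionally computed as the mean difference between two sets of measurements ± 1.96 standard deviations of the differences. A minimal Python sketch, using invented angle measurements purely for illustration:

```python
import numpy as np

def bland_altman_limits(x, y, level=1.96):
    """Mean difference (bias) and limits of agreement between two raters or repeated measures."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    diff = x - y
    bias = diff.mean()
    sd = diff.std(ddof=1)
    return bias, (bias - level * sd, bias + level * sd)

# Hypothetical repeated angle measurements (degrees) by two physiotherapists
rater1 = np.array([52.1, 48.7, 55.3, 50.2, 47.9, 53.4])
rater2 = np.array([51.4, 49.9, 54.1, 51.0, 47.2, 52.6])
bias, (lo, hi) = bland_altman_limits(rater1, rater2)
print(f"bias = {bias:.2f} deg, limits of agreement = ({lo:.2f}, {hi:.2f}) deg")
```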

  5. Reliability of the exercise ECG in detecting silent ischemia in patients with prior myocardial infarction

    International Nuclear Information System (INIS)

    Yamagishi, Takashi; Matsuda, Yasuo; Satoh, Akira

    1991-01-01

    To assess the reliability of the exercise ECG in detecting silent ischemia, ECG results were compared with those of stress-redistribution thallium-201 single-photon emission computed tomography (SPECT) in 116 patients with prior myocardial infarction and in 20 normal subjects used as a control. The left ventricle (LV) was divided into 20 segmental images, which were scored blindly on a 5-point scale. The redistribution score was defined as thallium defect score of exercise subtracted by that of redistribution image and was used as a measure of amount of ischemic but viable myocardium. The upper limit of normal redistribution score (=4.32) was defined as mean+2 standard deviations derived from 20 normal subjects. Of 116 patients, 47 had the redistribution score above the normal range. Twenty-five (53%) of the 47 patients showed positive ECG response. Fourteen (20%) of the 69 patients, who had the normal redistribution score, showed positive ECG response. Thus, the ECG response had a sensitivity of 53% and a specificity of 80% in detecting transient ischemia. Furthermore, the 116 patients were subdivided into 4 groups according to the presence or absence of chest pain and ECG change during exercise. Fourteen patients showed both chest pain and ECG change and all these patients had the redistribution score above the normal range. Twenty-five patients showed ECG change without chest pain and 11 (44%) of the 25 patients had the abnormal redistribution. Three (43%) of 7 patients who showed chest pain without ECG change had the abnormal redistribution score. Of 70 patients who had neither chest pain nor ECG change, 19 (27%) had the redistribution score above the normal range. Thus, limitations exist in detecting silent ischemia by ECG in patients with a prior myocardial infarction, because the ECG response to the exercise test may have a low degree of sensitivity and a high degree of false positive and false negative results in detecting silent ischemia. (author)

  6. Prevent cervical cancer by screening with reliable human papillomavirus detection and genotyping

    International Nuclear Information System (INIS)

    Ge, Shichao; Gong, Bo; Cai, Xushan; Yang, Xiaoer; Gan, Xiaowei; Tong, Xinghai; Li, Haichuan; Zhu, Meijuan; Yang, Fengyun; Zhou, Hongrong; Hong, Guofan

    2012-01-01

    The incidence of cervical cancer is expected to rise sharply in China. A reliable routine human papillomavirus (HPV) detection and genotyping test to be supplemented by the limited Papanicolaou cytology facilities is urgently needed to help identify the patients with cervical precancer for preventive interventions. To this end, we evaluated a nested polymerase chain reaction (PCR) protocol for detection of HPV L1 gene DNA in cervicovaginal cells. The PCR amplicons were genotyped by direct DNA sequencing. In parallel, split samples were subjected to a Digene HC2 HPV test which has been widely used for “cervical cancer risk” screen. Of the 1826 specimens, 1655 contained sufficient materials for analysis and 657 were truly negative. PCR/DNA sequencing showed 674 infected by a single high-risk HPV, 188 by a single low-risk HPV, and 136 by multiple HPV genotypes with up to five HPV genotypes in one specimen. In comparison, the HC2 test classified 713 specimens as infected by high-risk HPV, and 942 as negative for HPV infections. The high-risk HC2 test correctly detected 388 (57.6%) of the 674 high-risk HPV isolates in clinical specimens, mislabeled 88 (46.8%) of the 188 low-risk HPV isolates as high-risk genotypes, and classified 180 (27.4%) of the 657 “true-negative” samples as being infected by high-risk HPV. It was found to cross-react with 20 low-risk HPV genotypes. We conclude that nested PCR detection of HPV followed by short target DNA sequencing can be used for screening and genotyping to formulate a paradigm in clinical management of HPV-related disorders in a rapidly developing economy

  7. Diffusion-weighted MR imaging in postoperative follow-up: Reliability for detection of recurrent cholesteatoma

    Energy Technology Data Exchange (ETDEWEB)

    Cimsit, Nuri Cagatay [Marmara University Hospital, Department of Radiology, Istanbul (Turkey); Engin Sitesi Peker Sokak No:1 D:13, 34330 Levent, Istanbul (Turkey)], E-mail: cagataycimsit@gmail.com; Cimsit, Canan [Goztepe Education and Research Hospital, Department of Radiology, Istanbul (Turkey); Istanbul Goztepe Egitim ve Arastirma Hastanesi, Radyoloji Klinigi, Goztepe, Istanbul (Turkey)], E-mail: ccimsit@ttmail.com; Baysal, Begumhan [Goztepe Education and Research Hospital, Department of Radiology, Istanbul (Turkey); Istanbul Goztepe Egitim ve Arastirma Hastanesi, Radyoloji Klinigi, Goztepe, Istanbul (Turkey)], E-mail: begumbaysal@yahoo.com; Ruhi, Ilteris Cagatay [Goztepe Education and Research Hospital, Department of ENT, Istanbul (Turkey); Istanbul Goztepe Egitim ve Arastirma Hastanesi, KBB Klinigi, Goztepe, Istanbul (Turkey)], E-mail: cruhi@yahoo.com; Ozbilgen, Suha [Goztepe Education and Research Hospital, Department of ENT, Istanbul (Turkey); Istanbul Goztepe Egitim ve Arastirma Hastanesi, KBB Klinigi, Goztepe, Istanbul (Turkey)], E-mail: sozbilgen@yahoo.com; Aksoy, Elif Ayanoglu [Acibadem Bakirkoy Hospital, Department of ENT, Istanbul (Turkey); Acibadem Hastanesi, KBB Boeluemue, Bakirkoey, Istanbul (Turkey)], E-mail: elifayanoglu@yahoo.com

    2010-04-15

    Introduction: Cholesteatoma is a progressively growing process that destroys the neighboring bony structures and treatment is surgical removal. Follow-up is important in the postoperative period, since further surgery is necessary if recurrence is present, but not if granulation tissue is detected. This study evaluates if diffusion-weighted MR imaging alone can be a reliable alternative to CT, without use of contrast agent, for follow-up of postoperative patients in detecting recurrent cholesteatoma. Materials and methods: 26 consecutive patients with mastoidectomy reporting for routine follow-up CT after mastoidectomy were included in the study, if there was loss of middle ear aeration on CT examination. MR images were evaluated for loss of aeration and signal intensity changes on diffusion-weighted sequences. Surgical results were compared with imaging findings. Results: Interpretation of MR images was parallel with the loss of aeration detected on CT for all 26 patients. Of the 26 patients examined, 14 were not evaluated as recurrent cholesteatoma and verified with surgery (NPV: 100%). Twelve patients were diagnosed as recurrent cholesteatoma and 11 were surgically diagnosed as recurrent cholesteatoma (PPV: 91.7%). Four of these 11 patients had loss of aeration size greater than the high signal intensity area on DWI, which were surgically confirmed as granulation tissue or fibrosis accompanying recurrent cholesteatoma. Conclusion: Diffusion-weighted MR for suspected recurrent cholesteatoma is a valuable tool to cut costs and prevent unnecessary second-look surgeries. It has the potential to become the MR sequence of choice to differentiate recurrent cholesteatoma from other causes of loss of aeration in patients with mastoidectomy.

  8. OCT4 and SOX2 are reliable markers in detecting stem cells in odontogenic lesions

    Directory of Open Access Journals (Sweden)

    Abhishek Banerjee

    2016-01-01

    Context (Background): Stem cells are a unique subpopulation of cells in the human body with a capacity to initiate differentiation into various cell lines. Tumor stem cells (TSCs) are a unique subpopulation of cells that possess the ability to initiate a neoplasm and sustain self-renewal. Epithelial stem cell (ESC) markers such as octamer-binding transcription factor 4 (OCT4) and sex-determining region Y (SRY)-box 2 (SOX2) are capable of identifying these stem cells expressed during the early stages of tooth development. Aims: To detect the expression of the stem cell markers OCT4 and SOX2 in the normal odontogenic tissues and the odontogenic cysts and tumors. Materials and Methods: Paraffin sections of follicular tissue, radicular cyst, dentigerous cyst, odontogenic keratocyst, ameloblastoma, adenomatoid odontogenic tumor, and ameloblastic carcinoma were obtained from the archives. The sections were subjected to immunohistochemical assay by the use of mouse monoclonal antibodies to OCT4 and SOX2. Statistical Analysis: The results were evaluated by descriptive analysis. Results: The results show the presence of stem cells in the normal and lesional tissues with these stem cell identifying markers. SOX2 was found to be more consistent and reliable in the detection of stem cells. Conclusion: The stem cell expressions are maintained in the tumor transformation of tissue and probably suggest that there is no phenotypic change of stem cells in progression from normal embryonic state to its tumor component. The quantification and localization reveals interesting trends that indicate the probable role of the cells in the pathogenesis of the lesions.

  9. Reliability of magnetic resonance imaging for the detection of hypopituitarism in children with optic nerve hypoplasia.

    Science.gov (United States)

    Ramakrishnaiah, Raghu H; Shelton, Julie B; Glasier, Charles M; Phillips, Paul H

    2014-01-01

    It is essential to identify hypopituitarism in children with optic nerve hypoplasia (ONH) because they are at risk for developmental delay, seizures, or death. The purpose of this study is to determine the reliability of neurohypophyseal abnormalities on magnetic resonance imaging (MRI) for the detection of hypopituitarism in children with ONH. Cross-sectional study. One hundred one children with clinical ONH who underwent MRI of the brain and orbits and a detailed pediatric endocrinologic evaluation. Magnetic resonance imaging studies were performed on 1.5-Tesla scanners. The imaging protocol included sagittal T1-weighted images, axial fast fluid-attenuated inversion-recovery/T2-weighted images, and diffusion-weighted images of the brain. Orbital imaging included fat-saturated axial and coronal images and high-resolution axial T2-weighted images. The MRI studies were reviewed by 2 pediatric neuroradiologists for optic nerve hypoplasia, absent or ectopic posterior pituitary, absent pituitary infundibulum, absent septum pellucidum, migration anomalies, and hemispheric injury. Medical records were reviewed for clinical examination findings and endocrinologic status. All patients underwent a clinical evaluation by a pediatric endocrinologist and a standardized panel of serologic testing that included serum insulin-like growth factor-1, insulin-like growth factor binding protein-3, prolactin, cortisol, adrenocorticotropic hormone, thyroid-stimulating hormone, and free thyroxine levels. Radiologists were masked to patients' endocrinologic status and funduscopic findings. Sensitivity and specificity of MRI findings for the detection of hypopituitarism. Neurohypophyseal abnormalities, including absent pituitary infundibulum, ectopic posterior pituitary bright spot, and absent posterior pituitary bright spot, occurred in 33 children. Magnetic resonance imaging disclosed neurohypophyseal abnormalities in 27 of the 28 children with hypopituitarism (sensitivity, 96%). A

  10. Human reliability-based MC and A models for detecting insider theft

    International Nuclear Information System (INIS)

    Duran, Felicia Angelica; Wyss, Gregory Dane

    2010-01-01

    Material control and accounting (MC and A) safeguards operations that track and account for critical assets at nuclear facilities provide a key protection approach for defeating insider adversaries. These activities, however, have been difficult to characterize in ways that are compatible with the probabilistic path analysis methods that are used to systematically evaluate the effectiveness of a site's physical protection (security) system (PPS). MC and A activities have many similar characteristics to operator procedures performed in a nuclear power plant (NPP) to check for anomalous conditions. This work applies human reliability analysis (HRA) methods and models for human performance of NPP operations to develop detection probabilities for MC and A activities. This has enabled the development of an extended probabilistic path analysis methodology in which MC and A protections can be combined with traditional sensor data in the calculation of PPS effectiveness. The extended path analysis methodology provides an integrated evaluation of a safeguards and security system that addresses its effectiveness for attacks by both outside and inside adversaries.

  11. How often should we monitor for reliable detection of atrial fibrillation recurrence? Efficiency considerations and implications for study design.

    Directory of Open Access Journals (Sweden)

    Efstratios I Charitos

    Although atrial fibrillation (AF) recurrence is unpredictable in terms of onset and duration, current intermittent rhythm monitoring (IRM) diagnostic modalities are short-termed and discontinuous. The aim of the present study was to investigate the necessary IRM frequency required to reliably detect recurrence of various AF recurrence patterns. The rhythm histories of 647 patients (mean AF burden: 12 ± 22% of monitored time; 687 patient-years) with implantable continuous monitoring devices were reconstructed and analyzed. With the use of computationally intensive simulation, we evaluated the necessary IRM frequency to reliably detect AF recurrence of various AF phenotypes using IRM of various durations. The IRM frequency required for reliable AF detection depends on the amount and temporal aggregation of the AF recurrence (p < 0.0001). Detection with >95% sensitivity of AF recurrence required higher IRM frequencies (>12 24-hour; >6 7-day; >4 14-day; >3 30-day IRM per year; p < 0.0001) than currently recommended. Lower IRM frequencies will under-detect AF recurrence and introduce significant bias in the evaluation of therapeutic interventions. More frequent, but of shorter duration, IRMs (24-hour) are significantly more time-effective (sensitivity per monitored time) than a fewer number of longer IRM durations (p < 0.0001). Reliable AF recurrence detection requires higher IRM frequencies than currently recommended. Current IRM frequency recommendations will fail to diagnose a significant proportion of patients. Shorter duration but more frequent IRM strategies are significantly more efficient than longer IRM durations. Unique identifier: NCT00806689.
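
    The study's central idea, evaluating how often intermittent monitoring of a given duration and frequency catches a patient's AF recurrence, can be illustrated with a toy Monte Carlo simulation. The sketch below is only a schematic of that idea: the burden, episode-length model and patient counts are invented, and it does not reproduce the reconstructed rhythm histories or the authors' actual simulation.

```python
import numpy as np

rng = np.random.default_rng(0)
MIN_PER_YEAR = 365 * 24 * 60  # one year of rhythm history at 1-minute resolution

def simulate_af_history(burden=0.12, mean_episode_min=6 * 60):
    """Toy binary rhythm history (True = AF) with a given burden and mean episode length."""
    history = np.zeros(MIN_PER_YEAR, dtype=bool)
    target = int(burden * MIN_PER_YEAR)
    while history.sum() < target:
        start = rng.integers(0, MIN_PER_YEAR)
        length = int(rng.exponential(mean_episode_min)) + 1
        history[start:start + length] = True
    return history

def irm_detects(history, n_monitorings, duration_days):
    """True if any of n randomly scheduled monitorings of the given duration overlaps AF."""
    duration = int(duration_days * 24 * 60)
    starts = rng.integers(0, MIN_PER_YEAR - duration, size=n_monitorings)
    return any(history[s:s + duration].any() for s in starts)

def detection_sensitivity(n_monitorings, duration_days, n_patients=100):
    hits = sum(irm_detects(simulate_af_history(), n_monitorings, duration_days)
               for _ in range(n_patients))
    return hits / n_patients

for n, d in [(4, 1), (12, 1), (4, 7), (2, 30)]:
    print(f"{n} x {d}-day IRM per year: sensitivity ~ {detection_sensitivity(n, d):.2f}")
```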

  12. Towards achieving a reliable leakage detection and localization algorithm for application in water piping networks: an overview

    CSIR Research Space (South Africa)

    Adedeji, KB

    2017-09-01

    Full Text Available Leakage detection and localization in pipelines has become an important aspect of water management systems. Since monitoring leakage in large-scale water distribution networks (WDNs) is a challenging task, the need to develop a reliable and robust...

  13. Reliable Detection and Smart Deletion of Malassez Counting Chamber Grid in Microscopic White Light Images for Microbiological Applications.

    Science.gov (United States)

    Denimal, Emmanuel; Marin, Ambroise; Guyot, Stéphane; Journaux, Ludovic; Molin, Paul

    2015-08-01

    In biology, hemocytometers such as Malassez slides are widely used and are effective tools for counting cells manually. In a previous work, a robust algorithm was developed for grid extraction in Malassez slide images. This algorithm was evaluated on a set of 135 images and grids were accurately detected in most cases, but there remained failures for the most difficult images. In this work, we present an optimization of this algorithm that allows for 100% grid detection and a 25% improvement in grid positioning accuracy. These improvements make the algorithm fully reliable for grid detection. This optimization also allows complete erasing of the grid without altering the cells, which eases their segmentation.

  14. The reliability, accuracy and minimal detectable difference of a multi-segment kinematic model of the foot-shoe complex.

    Science.gov (United States)

    Bishop, Chris; Paul, Gunther; Thewlis, Dominic

    2013-04-01

    Kinematic models are commonly used to quantify foot and ankle kinematics, yet no marker sets or models have been proven reliable or accurate when wearing shoes. Further, the minimal detectable difference of a developed model is often not reported. We present a kinematic model that is reliable, accurate and sensitive to describe the kinematics of the foot-shoe complex and lower leg during walking gait. In order to achieve this, a new marker set was established, consisting of 25 markers applied on the shoe and skin surface, which informed a four segment kinematic model of the foot-shoe complex and lower leg. Three independent experiments were conducted to determine the reliability, accuracy and minimal detectable difference of the marker set and model. Inter-rater reliability of marker placement on the shoe was proven to be good to excellent (ICC=0.75-0.98) indicating that markers could be applied reliably between raters. Intra-rater reliability was better for the experienced rater (ICC=0.68-0.99) than the inexperienced rater (ICC=0.38-0.97). The accuracy of marker placement along each axis was <6.7 mm for all markers studied. Minimal detectable difference (MDD90) thresholds were defined for each joint; tibiocalcaneal joint--MDD90=2.17-9.36°, tarsometatarsal joint--MDD90=1.03-9.29° and the metatarsophalangeal joint--MDD90=1.75-9.12°. These thresholds proposed are specific for the description of shod motion, and can be used in future research designed at comparing between different footwear. Copyright © 2012 Elsevier B.V. All rights reserved.

  15. Reliability, validity and minimal detectable change of the Mini-BESTest in Greek participants with chronic stroke.

    Science.gov (United States)

    Lampropoulou, Sofia I; Billis, Evdokia; Gedikoglou, Ingrid A; Michailidou, Christina; Nowicky, Alexander V; Skrinou, Dimitra; Michailidi, Fotini; Chandrinou, Danae; Meligkoni, Margarita

    2018-02-23

    This study aimed to investigate the psychometric characteristics of reliability, validity and ability to detect change of a newly developed balance assessment tool, the Mini-BESTest, in Greek patients with stroke. A prospective, observational design study with test-retest measures was conducted. A convenience sample of 21 Greek patients with chronic stroke (14 male, 7 female; age of 63 ± 16 years) was recruited. Two independent examiners administered the scale, for the inter-rater reliability, twice within 10 days for the test-retest reliability. Bland-Altman analysis for repeated measures assessed the absolute reliability, and the Standard Error of Measurement (SEM) and the Minimum Detectable Change at the 95% confidence interval (MDC 95%) were established. The Greek Mini-BESTest (Mini-BESTest GR) was correlated with the Greek Berg Balance Scale (BBS GR) for assessing the concurrent validity, and with the Timed Up and Go (TUG), the Functional Reach Test (FRT) and the Greek Falls Efficacy Scale-International (FES-I GR) for the convergent validity. The Mini-BESTest GR demonstrated excellent inter-rater reliability (ICC (95%CI) = 0.997 (0.995-0.999), SEM = 0.46), with the scores of the two raters within the limits of agreement (mean dif = -0.143 ± 0.727, p > 0.05), and test-retest reliability (ICC (95%CI) = 0.966 (0.926-0.988), SEM = 1.53). Additionally, the Mini-BESTest GR yielded very strong to moderate correlations with the BBS GR (r = 0.924) and with the TUG, FRT and FES-I GR. The excellent reliability and the equally good validity of the Mini-BESTest GR strongly support its utility in Greek people with chronic stroke. Its ability to identify clinically meaningful changes and falls risk needs further investigation.
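
    The SEM and MDC figures reported in records like this one are conventionally derived from the baseline standard deviation and the test-retest ICC as SEM = SD * sqrt(1 - ICC) and MDC95 = 1.96 * sqrt(2) * SEM. A minimal sketch of those formulas; the SD value used here is illustrative, not from the study.

```python
import math

def sem_from_icc(sd_baseline, icc):
    """Standard error of measurement: SEM = SD * sqrt(1 - ICC)."""
    return sd_baseline * math.sqrt(1.0 - icc)

def mdc95(sem):
    """Minimal detectable change at the 95% confidence level."""
    return 1.96 * math.sqrt(2.0) * sem

# Hypothetical example: baseline SD of 5.5 points and a test-retest ICC of 0.966
sem = sem_from_icc(5.5, 0.966)
print(f"SEM = {sem:.2f} points, MDC95 = {mdc95(sem):.2f} points")
```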

  16. Self-Tuning Method for Increased Obstacle Detection Reliability Based on Internet of Things LiDAR Sensor Models.

    Science.gov (United States)

    Castaño, Fernando; Beruvides, Gerardo; Villalonga, Alberto; Haber, Rodolfo E

    2018-05-10

    On-chip LiDAR sensors for vehicle collision avoidance are a rapidly expanding area of research and development. The assessment of reliable obstacle detection using data collected by LiDAR sensors has become a key issue that the scientific community is actively exploring. The design of a self-tuning methodology and its implementation are presented in this paper, to maximize the reliability of a LiDAR sensor network for obstacle detection in 'Internet of Things' (IoT) mobility scenarios. The Webots Automobile 3D simulation tool for emulating sensor interaction in complex driving environments is selected in order to achieve that objective. Furthermore, a model-based framework is defined that employs a point-cloud clustering technique and an error-based prediction model library composed of a multilayer perceptron neural network, k-nearest neighbors and linear regression models. Finally, a reinforcement learning technique, specifically a Q-learning method, is implemented to determine the number of LiDAR sensors that are required to increase sensor reliability for obstacle localization tasks. In addition, an IoT driving-assistance user scenario connecting a five-LiDAR-sensor network is designed and implemented to validate the accuracy of the computational intelligence-based framework. The results demonstrated that the self-tuning method is an appropriate strategy to increase the reliability of the sensor network while minimizing detection thresholds.
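
    The reinforcement-learning step described above, choosing how many LiDAR sensors to deploy, can be caricatured as a tabular Q-learning loop (here effectively bandit-style, with no discounting) in which the action is the sensor count and the reward trades detection success against sensor cost. The environment model, rewards and hyperparameters below are invented placeholders, not the paper's Webots-based framework.

```python
import random

random.seed(0)
ACTIONS = [1, 2, 3, 4, 5]               # candidate numbers of LiDAR sensors
ALPHA, EPSILON = 0.1, 0.2               # learning rate and exploration rate
COST_PER_SENSOR = 0.05                  # made-up cost penalty per sensor

def obstacle_localized(n_sensors):
    """Placeholder environment: success probability with diminishing returns per sensor."""
    return random.random() < 1.0 - 0.45 ** n_sensors

q = {a: 0.0 for a in ACTIONS}
for _ in range(5000):
    a = random.choice(ACTIONS) if random.random() < EPSILON else max(q, key=q.get)
    reward = (1.0 if obstacle_localized(a) else 0.0) - COST_PER_SENSOR * a
    q[a] += ALPHA * (reward - q[a])     # one-step update; with gamma = 0 this is bandit-style

best = max(q, key=q.get)
print({a: round(v, 3) for a, v in q.items()}, "-> choose", best, "sensors")
```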

  17. Reliable detection of fluence anomalies in EPID-based IMRT pretreatment quality assurance using pixel intensity deviations

    International Nuclear Information System (INIS)

    Gordon, J. J.; Gardner, J. K.; Wang, S.; Siebers, J. V.

    2012-01-01

    Purpose: This work uses repeat images of intensity modulated radiation therapy (IMRT) fields to quantify fluence anomalies (i.e., delivery errors) that can be reliably detected in electronic portal images used for IMRT pretreatment quality assurance. Methods: Repeat images of 11 clinical IMRT fields are acquired on a Varian Trilogy linear accelerator at energies of 6 MV and 18 MV. Acquired images are corrected for output variations and registered to minimize the impact of linear accelerator and electronic portal imaging device (EPID) positioning deviations. Detection studies are performed in which rectangular anomalies of various sizes are inserted into the images. The performance of detection strategies based on pixel intensity deviations (PIDs) and gamma indices is evaluated using receiver operating characteristic analysis. Results: Residual differences between registered images are due to interfraction positional deviations of jaws and multileaf collimator leaves, plus imager noise. Positional deviations produce large intensity differences that degrade anomaly detection. Gradient effects are suppressed in PIDs using gradient scaling. Background noise is suppressed using median filtering. In the majority of images, PID-based detection strategies can reliably detect fluence anomalies of ≥5% in ∼1 mm² areas and ≥2% in ∼20 mm² areas. Conclusions: The ability to detect small dose differences (≤2%) depends strongly on the level of background noise. This in turn depends on the accuracy of image registration, the quality of the reference image, and field properties. The longer term aim of this work is to develop accurate and reliable methods of detecting IMRT delivery errors and variations. The ability to resolve small anomalies will allow the accuracy of advanced treatment techniques, such as image guided, adaptive, and arc therapies, to be quantified.
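
    A rough sketch of the PID idea described above: relative pixel intensity deviations between a delivered image and its reference, damped in high-gradient regions and median-filtered before thresholding. The gradient-scaling form, filter size, thresholds and synthetic images are assumptions for illustration only, not the authors' implementation.

```python
import numpy as np
from scipy.ndimage import median_filter, sobel

def pid_map(image, reference, grad_weight=0.5, noise_filter=3):
    """Pixel intensity deviations between a delivered EPID image and its reference,
    with simple gradient scaling and median filtering (illustrative parameters)."""
    diff = (image - reference) / np.maximum(reference, 1e-6)      # relative deviation
    grad = np.hypot(sobel(reference, 0), sobel(reference, 1))
    scale = 1.0 + grad_weight * grad / max(grad.max(), 1e-6)      # damp high-gradient pixels
    return median_filter(diff / scale, size=noise_filter)

# Synthetic example: flat reference field with a small +2% anomaly inserted
rng = np.random.default_rng(1)
reference = np.full((100, 100), 100.0)
delivered = reference + rng.normal(0, 0.3, reference.shape)
delivered[40:45, 50:55] *= 1.02                                   # small patch, +2%

pids = pid_map(delivered, reference)
detected = np.abs(pids) > 0.01                                    # 1% detection threshold
print("anomalous pixels flagged:", int(detected.sum()))
```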

  18. A Type-2 fuzzy data fusion approach for building reliable weighted protein interaction networks with application in protein complex detection.

    Science.gov (United States)

    Mehranfar, Adele; Ghadiri, Nasser; Kouhsar, Morteza; Golshani, Ashkan

    2017-09-01

    Detecting the protein complexes is an important task in analyzing the protein interaction networks. Although many algorithms predict protein complexes in different ways, surveys on the interaction networks indicate that about 50% of detected interactions are false positives. Consequently, the accuracy of existing methods needs to be improved. In this paper we propose a novel algorithm to detect the protein complexes in 'noisy' protein interaction data. First, we integrate several biological data sources to determine the reliability of each interaction and determine more accurate weights for the interactions. A data fusion component is used for this step, based on the interval type-2 fuzzy voter that provides an efficient combination of the information sources. This fusion component detects the errors and diminishes their effect on the detection of protein complexes. Thus, in the first step, reliability scores are assigned to every interaction in the network. In the second step, we have proposed a general protein complex detection algorithm by exploiting and adopting the strong points of other algorithms and existing hypotheses regarding real complexes. Finally, the proposed method has been applied to the yeast interaction datasets for predicting the interactions. The results show that our framework has a better performance regarding precision and F-measure than the existing approaches. Copyright © 2017 Elsevier Ltd. All rights reserved.

  19. Electronic logic to enhance switch reliability in detecting openings and closures of redundant switches

    Science.gov (United States)

    Cooper, James A.

    1986-01-01

    A logic circuit is used to enhance redundant switch reliability. Two or more switches are monitored for logical high or low output. The output for the logic circuit produces a redundant and failsafe representation of the switch outputs. When both switch outputs are high, the output is high. Similarly, when both switch outputs are low, the logic circuit's output is low. When the output states of the two switches do not agree, the circuit resolves the conflict by memorizing the last output state which both switches were simultaneously in and produces the logical complement of this output state. Thus, the logic circuit of the present invention allows the redundant switches to be treated as if they were in parallel when the switches are open and as if they were in series when the switches are closed. A failsafe system having maximum reliability is thereby produced.
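
    The described behaviour, follow the switches when they agree and output the complement of the last agreed state when they disagree, maps directly onto a small state machine. Below is a software sketch of that truth behaviour (an illustrative analogue, not the actual hardware logic):

```python
class RedundantSwitchLogic:
    """Software analogue of the described logic: when the switches agree, follow them;
    when they disagree, output the complement of the last state both switches shared."""

    def __init__(self, initial_state=False):
        self.last_agreed = initial_state

    def output(self, switch_a, switch_b):
        if switch_a == switch_b:
            self.last_agreed = switch_a
            return switch_a
        return not self.last_agreed

logic = RedundantSwitchLogic()
print(logic.output(True, True))    # both high -> True
print(logic.output(True, False))   # disagreement -> complement of last agreed state -> False
print(logic.output(False, False))  # both low -> False
print(logic.output(True, False))   # disagreement -> True (complement of False)
```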

  20. Reliability of ultrasonography in detecting shoulder disease in patients with rheumatoid arthritis.

    LENUS (Irish Health Repository)

    Bruyn, G A W

    2009-03-01

    To assess the intra and interobserver reproducibility of musculoskeletal ultrasonography (US) among rheumatologists in detecting destructive and inflammatory shoulder abnormalities in patients with rheumatoid arthritis (RA) and to determine the overall agreement between US and MRI.

  1. Reliable fault detection and diagnosis of photovoltaic systems based on statistical monitoring approaches

    KAUST Repository

    Harrou, Fouzi; Sun, Ying; Taghezouit, Bilal; Saidi, Ahmed; Hamlati, Mohamed-Elkarim

    2017-01-01

    This study reports the development of an innovative fault detection and diagnosis scheme to monitor the direct current (DC) side of photovoltaic (PV) systems. Towards this end, we propose a statistical approach that exploits the advantages of one

  2. A simple and reliable methodology to detect egg white in art samples

    Indian Academy of Sciences (India)

    2013-04-26

    Apr 26, 2013 ... threshold density values useful for the detection of ovalbumin in samples from ancient works of art ... slides a mixture of a water solution of dry egg white ... facing the problems of sample leakage, background.

  3. Reliability of ultrasonography in detecting shoulder disease in patients with rheumatoid arthritis

    NARCIS (Netherlands)

    Bruyn, G. A. W.; Naredo, E.; Moeller, I.; Moragues, C.; Garrido, J.; de Bock, G. H.; d'Agostino, M-A; Filippucci, E.; Iagnocco, A.; Backhaus, M.; Swen, W. A. A.; Balint, P.; Pineda, C.; Milutinovic, S.; Kane, D.; Kaeley, G.; Narvaez, F. J.; Wakefield, R. J.; Narvaez, J. A.; de Augustin, J.; Schmidt, W. A.; Moller, I.; Swen, N.; de Agustin, J.

    Objective: To assess the intra and interobserver reproducibility of musculoskeletal ultrasonography (US) among rheumatologists in detecting destructive and inflammatory shoulder abnormalities in patients with rheumatoid arthritis (RA) and to determine the overall agreement between US and MRI.

  4. Implanted cardiac devices are reliably detected by commercially available metal detectors

    DEFF Research Database (Denmark)

    Holm, Katja Fiedler; Hjortshøj, Søren Pihlkjær; Pehrson, Steen

    2013-01-01

    Explosions of Cardiovascular Implantable Electronic Devices (CIEDs) (pacemakers, defibrillators, and loop recorders) are a well-recognized problem during cremation, due to lithium-iodine batteries. In addition, burial of the deceased with a CIED can present a potential risk for environmental contamination. Therefore, detection of CIEDs in the deceased would be of value. This study evaluated a commercially available metal detector for detecting CIEDs.

  5. Experimental Research of Reliability of Plant Stress State Detection by Laser-Induced Fluorescence Method

    Directory of Open Access Journals (Sweden)

    Yury Fedotov

    2016-01-01

    Experimental laboratory investigations of the laser-induced fluorescence spectra of watercress and lawn grass were conducted. The fluorescence spectra were excited by a YAG:Nd laser emitting at 532 nm. It was established that the influence of stress caused by mechanical damage, overwatering, and soil pollution is manifested in changes of the spectral shapes. The mean values and confidence intervals for the ratio of the two fluorescence maxima near 685 and 740 nm were estimated. The results indicate that the fluorescence ratio can be considered a reliable characteristic of plant stress state.

  6. Autism detection in early childhood (ADEC): reliability and validity data for a Level 2 screening tool for autistic disorder.

    Science.gov (United States)

    Nah, Yong-Hwee; Young, Robyn L; Brewer, Neil; Berlingeri, Genna

    2014-03-01

    The Autism Detection in Early Childhood (ADEC; Young, 2007) was developed as a Level 2 clinician-administered autistic disorder (AD) screening tool that was time-efficient, suitable for children under 3 years, easy to administer, and suitable for persons with minimal training and experience with AD. A best estimate clinical Diagnostic and Statistical Manual of Mental Disorders (4th ed., text rev.; DSM-IV-TR; American Psychiatric Association, 2000) diagnosis of AD was made for 70 children using all available information and assessment results, except for the ADEC data. A screening study compared these children on the ADEC with 57 children with other developmental disorders and 64 typically developing children. Results indicated high internal consistency (α = .91). Interrater reliability and test-retest reliability of the ADEC were also adequate. ADEC scores reliably discriminated different diagnostic groups after controlling for nonverbal IQ and Vineland Adaptive Behavior Composite scores. Construct validity (using exploratory factor analysis) and concurrent validity using performance on the Autism Diagnostic Observation Schedule (Lord et al., 2000), the Autism Diagnostic Interview-Revised (Le Couteur, Lord, & Rutter, 2003), and DSM-IV-TR criteria were also demonstrated. Signal detection analysis identified the optimal ADEC cutoff score, with the ADEC identifying all children who had an AD (N = 70, sensitivity = 1.0) but overincluding children with other disabilities (N = 13, specificity ranging from .74 to .90). Together, the reliability and validity data indicate that the ADEC has potential to be established as a suitable and efficient screening tool for infants with AD. 2014 APA

  7. Technical Note: The single particle soot photometer fails to reliably detect PALAS soot nanoparticles

    Directory of Open Access Journals (Sweden)

    M. Gysel

    2012-12-01

    Full Text Available The single particle soot photometer (SP2 uses laser-induced incandescence (LII for the measurement of atmospheric black carbon (BC particles. The BC mass concentration is obtained by combining quantitative detection of BC mass in single particles with a counting efficiency of 100% above its lower detection limit. It is commonly accepted that a particle must contain at least several tenths of a femtogram BC in order to be detected by the SP2.

    Here we show that most BC particles from a PALAS spark discharge soot generator remain undetected by the SP2, even if their BC mass, as independently determined with an aerosol particle mass analyser (APM), is clearly above the typical lower detection limit of the SP2. Comparison of counting efficiency and effective density data of PALAS soot with flame generated soot (combustion aerosol standard burner, CAST), fullerene soot and carbon black particles (Cabot Regal 400R) reveals that particle morphology can affect the SP2's lower detection limit. PALAS soot particles are fractal-like agglomerates of very small primary particles with a low fractal dimension, resulting in a very low effective density. Such loosely packed particles behave like "the sum of individual primary particles" in the SP2's laser. Accordingly, most PALAS soot particles remain undetected as the SP2's laser intensity is insufficient to heat the primary particles to their vaporisation temperature because of their small size (Dpp ≈ 5–10 nm). Previous knowledge from pulsed laser-induced incandescence indicated that particle morphology might have an effect on the SP2's lower detection limit; however, an increase of the lower detection limit by a factor of ∼5–10, as reported here for PALAS soot, was not expected.

    In conclusion, the SP2's lower detection limit at a certain laser power depends primarily on the total BC mass per particle for compact particles with sufficiently high effective

  8. Test-retest reliability and minimal detectable change of two simplified 3-point balance measures in patients with stroke.

    Science.gov (United States)

    Chen, Yi-Miau; Huang, Yi-Jing; Huang, Chien-Yu; Lin, Gong-Hong; Liaw, Lih-Jiun; Lee, Shih-Chieh; Hsieh, Ching-Lin

    2017-10-01

    The 3-point Berg Balance Scale (BBS-3P) and 3-point Postural Assessment Scale for Stroke Patients (PASS-3P) were simplified from the BBS and PASS to overcome the complex scoring systems. The BBS-3P and PASS-3P were more feasible in busy clinical practice and showed similarly sound validity and responsiveness to the original measures. However, the reliability of the BBS-3P and PASS-3P is unknown limiting their utility and the interpretability of scores. We aimed to examine the test-retest reliability and minimal detectable change (MDC) of the BBS-3P and PASS-3P in patients with stroke. Cross-sectional study. The rehabilitation departments of a medical center and a community hospital. A total of 51 chronic stroke patients (64.7% male). Both balance measures were administered twice 7 days apart. The test-retest reliability of both the BBS-3P and PASS-3P were examined by intraclass correlation coefficients (ICC). The MDC and its percentage over the total score (MDC%) of each measure was calculated for examining the random measurement errors. The ICC values of the BBS-3P and PASS-3P were 0.99 and 0.97, respectively. The MDC% (MDC) of the BBS-3P and PASS-3P were 9.1% (5.1 points) and 8.4% (3.0 points), respectively, indicating that both measures had small and acceptable random measurement errors. Our results showed that both the BBS-3P and the PASS-3P had good test-retest reliability, with small and acceptable random measurement error. These two simplified 3-level balance measures can provide reliable results over time. Our findings support the repeated administration of the BBS-3P and PASS-3P to monitor the balance of patients with stroke. The MDC values can help clinicians and researchers interpret the change scores more precisely.

  9. Reliability of tensiomyography and myotonometry in detecting mechanical and contractile characteristics of the lumbar erector spinae in healthy volunteers.

    Science.gov (United States)

    Lohr, Christine; Braumann, Klaus-Michael; Reer, Ruediger; Schroeder, Jan; Schmidt, Tobias

    2018-04-20

    Tensiomyography™ (TMG) and MyotonPRO® (MMT) are two non-invasive devices for monitoring muscle contractile and mechanical characteristics. This study aimed to evaluate the test-retest reliability of TMG and MMT parameters for measuring (TMG:) muscle displacement (Dm), contraction time (Tc), and velocity (Vc) and (MMT:) frequency (F), stiffness (S), and decrement (D) of the erector spinae muscles (ES) in healthy adults. A particular focus was set on the establishment of reliability measures for the previously barely evaluated secondary TMG parameter Vc. Twenty-four subjects (13 female and 11 male, mean ± SD, 38.0 ± 12.0 years) were measured using TMG and MMT over 2 consecutive days. Absolute and relative reliability was calculated by standard error of measurement (SEM, SEM%), minimum detectable change (MDC, MDC%), coefficient of variation (CV%) and intraclass correlation coefficient (ICC 3,1) with a 95% confidence interval (CI). The ICCs for all variables and test-retest intervals ranged from 0.75 to 0.99, indicating good to excellent relative reliability for both TMG and MMT, with the lowest values for TMG Tc and between-day MMT D and the most reliable TMG parameter reaching ICC > 0.95. Reliability measures for TMG Vc could be established successfully. Its further applicability needs to be confirmed in future studies. MMT was found to be more reliable on repeated testing than the two other TMG parameters Dm and Tc.

  10. Three dimensional quantitative coronary angiography can detect reliably ischemic coronary lesions based on fractional flow reserve.

    Science.gov (United States)

    Chung, Woo-Young; Choi, Byoung-Joo; Lim, Seong-Hoon; Matsuo, Yoshiki; Lennon, Ryan J; Gulati, Rajiv; Sandhu, Gurpreet S; Holmes, David R; Rihal, Charanjit S; Lerman, Amir

    2015-06-01

    Conventional coronary angiography (CAG) has limitations in evaluating lesions producing ischemia. Three dimensional quantitative coronary angiography (3D-QCA) shows reconstructed images of CAG using a computer-based algorithm, the Cardio-op B system (Paieon Medical, Rosh Ha'ayin, Israel). The aim of this study was to evaluate whether 3D-QCA can reliably predict ischemia assessed by myocardial fractional flow reserve (FFR) < 0.80. 3D-QCA images were reconstructed from CAG studies which also were evaluated with FFR to assess ischemia. Minimal luminal diameter (MLD), percent diameter stenosis (%DS), minimal luminal area (MLA), and percent area stenosis (%AS) were obtained. The results of 3D-QCA and FFR were compared. A total of 266 patients was enrolled for the present study. FFR for all lesions ranged from 0.57 to 1.00 (0.85 ± 0.09). Measurements of MLD, %DS, MLA, and %AS all were significantly correlated with FFR (r = 0.569, 0.609, 0.569, 0.670, respectively, all P < 0.001). In lesions with MLA < 4.0 mm², a %AS of more than 65.5% had an 80% sensitivity and an 83% specificity to predict FFR < 0.80 (area under the curve, AUC, was 0.878). 3D-QCA can reliably predict coronary lesions producing ischemia and may be used to guide the therapeutic approach for coronary artery disease.
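
    Selecting a cutoff such as %AS > 65.5% from an ROC curve is typically done by maximizing Youden's J (sensitivity + specificity - 1). The sketch below shows that procedure on synthetic %AS values; the scikit-learn dependency and all numbers are illustrative assumptions, not the study data.

```python
import numpy as np
from sklearn.metrics import roc_curve, roc_auc_score

# Hypothetical data: percent area stenosis (%AS) from 3D-QCA and ischemia defined by FFR < 0.80
rng = np.random.default_rng(5)
ischemic = rng.normal(72, 10, 80)        # toy %AS in lesions with FFR < 0.80
non_ischemic = rng.normal(55, 12, 120)   # toy %AS in lesions with FFR >= 0.80
percent_as = np.concatenate([ischemic, non_ischemic])
labels = np.array([1] * len(ischemic) + [0] * len(non_ischemic))

fpr, tpr, thresholds = roc_curve(labels, percent_as)
youden = tpr - fpr                        # Youden's J; its maximum gives the optimal cutoff
best = np.argmax(youden)
print(f"AUC = {roc_auc_score(labels, percent_as):.2f}, "
      f"optimal %AS cutoff ~ {thresholds[best]:.1f} "
      f"(sensitivity {tpr[best]:.2f}, specificity {1 - fpr[best]:.2f})")
```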

  11. Reliable fault detection and diagnosis of photovoltaic systems based on statistical monitoring approaches

    KAUST Repository

    Harrou, Fouzi

    2017-09-18

    This study reports the development of an innovative fault detection and diagnosis scheme to monitor the direct current (DC) side of photovoltaic (PV) systems. Towards this end, we propose a statistical approach that exploits the advantages of the one-diode model and those of the univariate and multivariate exponentially weighted moving average (EWMA) charts to better detect faults. Specifically, we generate array residuals of current, voltage and power using measured temperature and irradiance. These residuals capture the difference between the measurements and the one-diode model predictions of the maximum power point (MPP) current, voltage and power, and serve as fault indicators. Then, we apply the multivariate EWMA (MEWMA) monitoring chart to the residuals to detect faults. However, a MEWMA scheme cannot identify the type of fault. Once a fault is detected in the MEWMA chart, the univariate EWMA chart based on the current and voltage indicators is used to identify the type of fault (e.g., short-circuit, open-circuit and shading faults). We applied this strategy to real data from the grid-connected PV system installed at the Renewable Energy Development Center, Algeria. Results show the capacity of the proposed strategy to monitor the DC side of PV systems and to detect partial shading.
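
    A univariate EWMA chart of the kind referred to above smooths a residual sequence and raises an alarm when the smoothed statistic leaves its control limits. The sketch below applies the textbook EWMA recursion and variance-based limits to synthetic residuals; the smoothing constant, control-limit width and fault profile are assumptions, not values from the monitored PV system.

```python
import numpy as np

def ewma_chart(residuals, lam=0.2, L=3.0):
    """Univariate EWMA monitoring chart: returns the EWMA statistic, control limits and alarms.
    The in-control residual standard deviation is estimated from the first samples (assumption)."""
    residuals = np.asarray(residuals, float)
    sigma = residuals[:50].std(ddof=1)
    z = np.zeros_like(residuals)
    ucl = np.zeros_like(residuals)
    for t in range(len(residuals)):
        prev = z[t - 1] if t else 0.0
        z[t] = lam * residuals[t] + (1 - lam) * prev
        var = sigma**2 * (lam / (2 - lam)) * (1 - (1 - lam) ** (2 * (t + 1)))
        ucl[t] = L * np.sqrt(var)
    alarms = np.abs(z) > ucl
    return z, ucl, alarms

# Toy current residuals: in control for 100 samples, then a small shading-like drop
rng = np.random.default_rng(2)
res = rng.normal(0, 0.1, 150)
res[100:] -= 0.15
_, _, alarms = ewma_chart(res)
print("first alarm at sample:", int(np.argmax(alarms)) if alarms.any() else None)
```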

  12. Reliability and validity of the KIPPPI: an early detection tool for psychosocial problems in toddlers.

    Directory of Open Access Journals (Sweden)

    Ingrid Kruizinga

    BACKGROUND: The KIPPPI (Brief Instrument Psychological and Pedagogical Problem Inventory) is a Dutch questionnaire that measures psychosocial and pedagogical problems in 2-year-olds and consists of a KIPPPI Total score, Wellbeing scale, Competence scale, and Autonomy scale. This study examined the reliability, validity, screening accuracy and clinical application of the KIPPPI. METHODS: Parents of 5959 2-year-old children in the Rotterdam area, the Netherlands, were invited to participate in the study. Parents of 3164 children (53.1% of all invited parents) completed the questionnaire. The internal consistency was evaluated and, in subsamples, the test-retest reliability and concurrent validity with regard to the Child Behavioral Checklist (CBCL). Discriminative validity was evaluated by comparing scores of parents who worried about their child's upbringing and parents that did not. Screening accuracy of the KIPPPI was evaluated against the CBCL by calculating the Receiver Operating Characteristic (ROC) curves. The clinical application was evaluated by the relation between KIPPPI scores and the clinical decision made by the child health professionals. RESULTS: Psychometric properties of the KIPPPI Total score, Wellbeing scale, Competence scale and Autonomy scale were, respectively: Cronbach's alphas: 0.88, 0.86, 0.83, 0.58. Test-retest correlations: 0.80, 0.76, 0.73, 0.60. Concurrent validity was as hypothesised. The KIPPPI was able to discriminate between parents that worried about their child and parents that did not. Screening accuracy was high (>0.90) for the KIPPPI Total score and for the Wellbeing scale. The KIPPPI scale scores and clinical decision of the child health professional were related (p<0.05), indicating a good clinical application. CONCLUSION: The results in this large-scale study of a diverse general population sample support the reliability, validity and clinical application of the KIPPPI Total score, Wellbeing scale and Competence scale.
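
    The internal-consistency figures quoted above (Cronbach's alphas of 0.88 to 0.58) follow the standard formula alpha = k/(k-1) * (1 - sum of item variances / variance of the total score). A minimal sketch with invented item scores:

```python
import numpy as np

def cronbach_alpha(items):
    """Cronbach's alpha for an (n_respondents x n_items) score matrix."""
    items = np.asarray(items, float)
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)
    total_var = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1.0 - item_vars.sum() / total_var)

# Hypothetical 3-point item scores for 6 parents on a 4-item subscale
scores = np.array([[2, 2, 1, 2],
                   [1, 1, 1, 0],
                   [2, 2, 2, 2],
                   [0, 1, 0, 1],
                   [2, 1, 2, 2],
                   [1, 1, 1, 1]])
print(f"Cronbach's alpha = {cronbach_alpha(scores):.2f}")
```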

  13. Acoustic feedwater heater leak detection: Industry application of low & high frequency detection increases response and reliability

    International Nuclear Information System (INIS)

    Woyshner, W.S.; Bryson, T.; Robertson, M.O.

    1993-01-01

    The Electric Power Research Institute has sponsored research associated with acoustic Feedwater Heater Leak Detection since the early 1980s. Results indicate that this technology is economically beneficial and dependable. Recent research work has employed acoustic sensors and signal conditioning with wider frequency range response and background noise elimination techniques to provide increased accuracy and dependability. Dual frequency sensors have been applied at a few facilities to provide information on this application of dual frequency response. Sensor mounting methods and attenuation due to various mounting configurations are more conclusively understood. These are depicted and discussed in detail. The significance of trending certain plant parameters such as heat cycle flows, heater vent and drain valve position, proper relief valve operation, etc. is also addressed. Test data were collected at various facilities to monitor the effect of varying several related operational parameters. A group of FWHLD Users have been involved from the inception of the project and reports on their latest successes and failures, along with various data depicting early detection of FWHLD tube leaks, will be included. 3 refs., 12 figs., 1 tab

  14. FISHing for bacteria in food--a promising tool for the reliable detection of pathogenic bacteria?

    Science.gov (United States)

    Rohde, Alexander; Hammerl, Jens Andre; Appel, Bernd; Dieckmann, Ralf; Al Dahouk, Sascha

    2015-04-01

    Foodborne pathogens cause millions of infections every year and are responsible for considerable economic losses worldwide. The current gold standard for the detection of bacterial pathogens in food is still the conventional cultivation following standardized and generally accepted protocols. However, these methods are time-consuming and do not provide fast information about food contaminations and thus are limited in their ability to protect consumers in time from potential microbial hazards. Fluorescence in situ hybridization (FISH) represents a rapid and highly specific technique for whole-cell detection. This review aims to summarize the current data on FISH-testing for the detection of pathogenic bacteria in different food matrices and to evaluate its suitability for the implementation in routine testing. In this context, the use of FISH in different matrices and their pretreatment will be presented, the sensitivity and specificity of FISH tests will be considered and the need for automation shall be discussed as well as the use of technological improvements to overcome current hurdles for a broad application in monitoring food safety. In addition, the overall economical feasibility will be assessed in a rough calculation of costs, and strengths and weaknesses of FISH are considered in comparison with traditional and well-established detection methods. Copyright © 2014 The Authors. Published by Elsevier Ltd.. All rights reserved.

  15. Multivariate normative comparison, a novel method for more reliably detecting cognitive impairment in HIV infection

    NARCIS (Netherlands)

    Su, Tanja; Schouten, Judith; Geurtsen, Gert J.; Wit, Ferdinand W.; Stolte, Ineke G.; Prins, Maria; Portegies, Peter; Caan, Matthan W. A.; Reiss, Peter; Majoie, Charles B.; Schmand, Ben A.

    2015-01-01

    The objective of this study is to assess whether multivariate normative comparison (MNC) improves detection of HIV-1-associated neurocognitive disorder (HAND) as compared with Frascati and Gisslén criteria. One-hundred and three HIV-1-infected men with suppressed viremia on combination

  16. Reliability of using retinal vascular fractal dimension as a biomarker in the diabetic retinopathy detection

    NARCIS (Netherlands)

    Huang, F.; Dashtbozorg, B.; Zhang, J.; Bekkers, E.J.; Abbasi-Sureshjani, S.; Berendschot, T.T.J.M.; ter Haar Romenij, B.M.

    2016-01-01

    The retinal fractal dimension (FD) is a measure of vasculature branching pattern complexity. FD has been considered as a potential biomarker for the detection of several diseases like diabetes and hypertension. However, conflicting findings were found in the reported literature regarding the

  17. Comparison of specificity and sensitivity of immunochemical and molecular techniques for reliable detection of Erwinia amylovora

    Czech Academy of Sciences Publication Activity Database

    Kokošková, B.; Mráz, Ivan; Hýblová, Jana

    2007-01-01

    Roč. 52, č. 2 (2007), s. 175-182 ISSN 0015-5632 R&D Projects: GA AV ČR(CZ) 1QS500510558 Institutional research plan: CEZ:AV0Z50510513 Keywords : Erwinia amylovora * detection Subject RIV: EE - Microbiology, Virology Impact factor: 0.989, year: 2007

  18. Sensitive and reliable detection of genomic imbalances in human neuroblastomas using comparative genomic hybridisation analysis

    NARCIS (Netherlands)

    van Gele, M.; van Roy, N.; Jauch, A.; Laureys, G.; Benoit, Y.; Schelfhout, V.; de Potter, C. R.; Brock, P.; Uyttebroeck, A.; Sciot, R.; Schuuring, E.; Versteeg, R.; Speleman, F.

    1997-01-01

    Deletions of the short arm of chromosome 1, extra copies of chromosome 17q and MYCN amplification are the most frequently encountered genetic changes in neuroblastomas. Standard techniques for detection of one or more of these genetic changes are karyotyping, FISH analysis and LOH analysis by

  19. Chromogenic in situ hybridization is a reliable assay for detection of ALK rearrangements in adenocarcinomas of the lung.

    Science.gov (United States)

    Schildhaus, Hans-Ulrich; Deml, Karl-Friedrich; Schmitz, Katja; Meiboom, Maren; Binot, Elke; Hauke, Sven; Merkelbach-Bruse, Sabine; Büttner, Reinhard

    2013-11-01

    Reliable detection of anaplastic lymphoma kinase (ALK) rearrangements is a prerequisite for personalized treatment of lung cancer patients, as ALK rearrangements represent a predictive biomarker for the therapy with specific tyrosine kinase inhibitors. Currently, fluorescent in situ hybridization (FISH) is considered to be the standard method for assessing formalin-fixed and paraffin-embedded tissue for ALK inversions and translocations. However, FISH requires specialized equipment, the signals fade rapidly and it is difficult to detect overall morphology and tumor heterogeneity. Chromogenic in situ hybridization (CISH) has been successfully introduced as an alternative test for the detection of several genetic aberrations. This study validates a newly developed ALK CISH assay by comparing FISH and CISH signal patterns in lung cancer samples with and without ALK rearrangements. One hundred adenocarcinomas of the lung were included in this study, among them 17 with known ALK rearrangement. FISH and CISH were carried out and evaluated according to the manufacturers' recommendations. For both assays, tumors were considered positive if ≥15% of tumor cells showed either isolated 3' signals or break-apart patterns or a combination of both. A subset of tumors was exemplarily examined by using a novel EML4 (echinoderm microtubule-associated protein-like 4) CISH probe. Red, green and fusion CISH signals were clearcut and different signal patterns were easily recognized. The percentage of aberrant tumor cells was statistically highly correlated between FISH and CISH. On the basis of 86 samples that were evaluable by ALK CISH, we found a 100% sensitivity and 100% specificity of this assay. Furthermore, EML4 rearrangements could be recognized by CISH. CISH is a highly reliable, sensitive and specific method for the detection of ALK gene rearrangements in pulmonary adenocarcinomas. Our results suggest that CISH might serve as a suitable alternative to FISH, which is the current gold standard.

  20. Linear SVM-Based Android Malware Detection for Reliable IoT Services

    Directory of Open Access Journals (Sweden)

    Hyo-Sik Ham

    2014-01-01

    Currently, many Internet of Things (IoT) services are monitored and controlled through smartphone applications. By combining IoT with smartphones, many convenient IoT services have been provided to users. However, there are adverse underlying effects in such services, including invasion of privacy and information leakage. In most cases, mobile devices have become cluttered with important personal user information as various services and contents are provided through them. Accordingly, attackers are expanding the scope of their attacks beyond the existing PC and Internet environment into mobile devices. In this paper, we apply a linear support vector machine (SVM) to detect Android malware and compare the malware detection performance of the SVM with that of other machine learning classifiers. Through experimental validation, we show that the SVM outperforms other machine learning classifiers.
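
    As a generic illustration of the linear-SVM classification step (not the authors' feature set or data), the sketch below trains scikit-learn's LinearSVC on synthetic per-app feature vectors and reports standard classification metrics.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import LinearSVC
from sklearn.metrics import classification_report

# Synthetic stand-in for per-app feature vectors (e.g., permission/API-call counts);
# the real study collects features from Android devices, which is not reproduced here.
rng = np.random.default_rng(3)
n, d = 600, 20
benign = rng.normal(0.0, 1.0, size=(n, d))
malware = rng.normal(0.8, 1.2, size=(n, d))           # shifted distribution as a toy signal
X = np.vstack([benign, malware])
y = np.array([0] * n + [1] * n)                       # 1 = malware

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0, stratify=y)
clf = make_pipeline(StandardScaler(), LinearSVC(C=1.0))
clf.fit(X_tr, y_tr)
print(classification_report(y_te, clf.predict(X_te), target_names=["benign", "malware"]))
```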

  1. Autopiquer - a Robust and Reliable Peak Detection Algorithm for Mass Spectrometry.

    Science.gov (United States)

    Kilgour, David P A; Hughes, Sam; Kilgour, Samantha L; Mackay, C Logan; Palmblad, Magnus; Tran, Bao Quoc; Goo, Young Ah; Ernst, Robert K; Clarke, David J; Goodlett, David R

    2017-02-01

    We present a simple algorithm for robust and unsupervised peak detection by determining a noise threshold in isotopically resolved mass spectrometry data. Solving this problem will greatly reduce the subjective and time-consuming manual picking of mass spectral peaks and so will prove beneficial in many research applications. The Autopiquer approach uses autocorrelation to test for the presence of (isotopic) structure in overlapping windows across the spectrum. Within each window, a noise threshold is optimized to remove the most unstructured data, whilst keeping as much of the (isotopic) structure as possible. This algorithm has been successfully demonstrated for both peak detection and spectral compression on data from many different classes of mass spectrometer and for different sample types, and this approach should also be extendible to other types of data that contain regularly spaced discrete peaks.
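
    The core autocorrelation test can be illustrated by scoring overlapping windows of a spectrum for self-similarity at an assumed isotope spacing and flagging windows whose score exceeds a threshold. The spacing, window sizes, threshold and synthetic spectrum below are illustrative assumptions and not the published Autopiquer algorithm.

```python
import numpy as np

def windowed_autocorrelation(intensity, lag, window=200, step=100):
    """Normalized autocorrelation at a fixed lag in overlapping windows of a spectrum."""
    scores = []
    for start in range(0, len(intensity) - window, step):
        seg = intensity[start:start + window]
        seg = seg - seg.mean()
        denom = np.sqrt(np.sum(seg[:-lag] ** 2) * np.sum(seg[lag:] ** 2))
        scores.append((start, np.sum(seg[:-lag] * seg[lag:]) / denom if denom else 0.0))
    return scores

# Synthetic spectrum: noise everywhere, plus an isotope-like comb in the middle
rng = np.random.default_rng(4)
spectrum = np.abs(rng.normal(0, 1, 2000))
for k in range(6):                                   # peaks every 20 bins (assumed spacing)
    spectrum[1000 + 20 * k] += 50 * (0.8 ** k)

for start, score in windowed_autocorrelation(spectrum, lag=20):
    if score > 0.3:                                  # illustrative structure threshold
        print(f"structured window starting at bin {start}: autocorrelation {score:.2f}")
```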

  2. Test-retest reliability and smallest detectable change of the Bristol Impact of Hypermobility (BIoH) questionnaire.

    Science.gov (United States)

    Palmer, S; Manns, S; Cramp, F; Lewis, R; Clark, E M

    2017-12-01

    The Bristol Impact of Hypermobility (BIoH) questionnaire is a patient-reported outcome measure developed in conjunction with adults with Joint Hypermobility Syndrome (JHS). It has demonstrated strong concurrent validity with the Short Form-36 (SF-36) physical component score but other psychometric properties have yet to be established. This study aimed to determine its test-retest reliability and smallest detectable change (SDC). A test-retest reliability study. Participants were recruited from the Hypermobility Syndromes Association, a patient organisation in the United Kingdom. Recruitment packs were sent to 1080 adults who had given permission to be contacted about research. BIoH and SF-36 questionnaires were administered at baseline and repeated two weeks later. An 11-point global rating of change scale (-5 to +5) was also administered at two weeks. Test-retest analysis and calculation of the SDC was conducted on 'stable' patients (defined as global rating of change -1 to +1). 462 responses were received. 233 patients reported a 'stable' condition and were included in analysis (95% women; mean (SD) age 44.5 (13.9) years; BIoH score 223.6 (54.0)). The BIoH questionnaire demonstrated excellent test-retest reliability (ICC 0.923, 95% CI 0.900-0.940). The SDC was 42 points (equivalent to 19% of the mean baseline score). The SF-36 physical and mental component scores demonstrated poorer test-retest reliability and larger SDCs (as a proportion of the mean baseline scores). The results provide further evidence of the potential of the BIoH questionnaire to underpin research and clinical practice for people with JHS. Copyright © 2017 Elsevier Ltd. All rights reserved.
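    The smallest detectable change reported above follows directly from the ICC and the baseline standard deviation if the usual formulas SEM = SD·sqrt(1 − ICC) and SDC = 1.96·sqrt(2)·SEM were used (an assumption, but one that reproduces the 42-point figure):

```python
# Sketch of the standard SEM/SDC calculation, applied to the figures
# reported above for the BIoH questionnaire.
import math

icc = 0.923          # test-retest ICC reported above
sd_baseline = 54.0   # baseline SD of the BIoH score

sem = sd_baseline * math.sqrt(1 - icc)
sdc = 1.96 * math.sqrt(2) * sem
print(f"SEM ≈ {sem:.1f}, SDC ≈ {sdc:.0f} points")   # SDC ≈ 42 points
```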

  3. Test-Retest Reliability and Minimal Detectable Change of the D2 Test of Attention in Patients with Schizophrenia.

    Science.gov (United States)

    Lee, Posen; Lu, Wen-Shian; Liu, Chin-Hsuan; Lin, Hung-Yu; Hsieh, Ching-Lin

    2017-12-08

    The d2 Test of Attention (D2) is a commonly used measure of selective attention for patients with schizophrenia. However, its test-retest reliability and minimal detectable change (MDC) are unknown in patients with schizophrenia, limiting its utility in both clinical and research settings. The aim of the present study was to examine the test-retest reliability and MDC of the D2 in patients with schizophrenia. A rater administered the D2 to 108 patients with schizophrenia twice at a 1-month interval. Test-retest reliability was determined through the calculation of the intra-class correlation coefficient (ICC). We also carried out Bland-Altman analysis, which included a scatter plot of the differences between test and retest against their means. Systematic biases were evaluated by use of a paired t-test. The ICCs for the D2 ranged from 0.78 to 0.94. The MDCs (MDC%) of the seven subscores were 102.3 (29.7), 19.4 (85.0), 7.2 (94.6), 21.0 (69.0), 104.0 (33.1), 105.0 (35.8), and 7.8 (47.8), which represented limited-to-acceptable random measurement error. Trends in the Bland-Altman plots of the omissions (E1), commissions (E2), and errors (E) were noted, indicating that the data were heteroscedastic. According to the results, the D2 had good test-retest reliability, especially for the TN, TN-E, and CP scores. For further research, finding a way to improve the administration procedure so as to reduce random measurement error would be important for the E1, E2, E, and FR subscores. © The Author(s) 2017. Published by Oxford University Press. All rights reserved. For permissions, please e-mail: journals.permissions@oup.com.
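    A Bland-Altman analysis of the kind described above can be reproduced generically as follows (illustrative simulated scores, not the study data; the bias line and 95% limits of agreement are plotted against the test-retest means):

```python
# Generic Bland-Altman sketch: plot test-retest differences against their
# means, with the mean bias and the 95% limits of agreement.
import numpy as np
import matplotlib.pyplot as plt

rng = np.random.default_rng(1)
test = rng.normal(350, 40, size=108)          # hypothetical D2 total scores, session 1
retest = test + rng.normal(5, 15, size=108)   # session 2 with added random error

mean = (test + retest) / 2
diff = test - retest
bias = diff.mean()
loa = 1.96 * diff.std(ddof=1)                 # half-width of the 95% limits of agreement

plt.scatter(mean, diff, s=10)
for y in (bias, bias - loa, bias + loa):
    plt.axhline(y, linestyle="--")
plt.xlabel("Mean of test and retest")
plt.ylabel("Difference (test - retest)")
plt.title("Bland-Altman plot")
plt.show()
```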

  4. A novel method for rapid and reliable detection of complex vertebral malformation and bovine leukocyte adhesion deficiency in Holstein cattle

    Directory of Open Access Journals (Sweden)

    Zhang Yi

    2012-07-01

    Full Text Available Abstract Background Complex vertebral malformation (CVM) and bovine leukocyte adhesion deficiency (BLAD) are two autosomal recessive lethal genetic defects frequently occurring in Holstein cattle, identifiable by single nucleotide polymorphisms. The objective of this study was to develop a rapid and reliable genotyping assay to screen active Holstein sires and determine the carrier frequency of CVM and BLAD in the Chinese dairy cattle population. Results We developed real-time PCR-based assays for discrimination of wild-type and defective alleles, so that carriers can be detected. Only one step was required after DNA extraction from the sample, and the assay took about 2 hours. A total of 587 Chinese Holstein bulls were assayed, and fifty-six CVM carriers and eight BLAD carriers were identified, corresponding to heterozygote carrier frequencies of 9.54% and 1.36%, respectively. The pedigree analysis showed that most of the carriers could be traced back to common ancestors, Osborndale Ivanhoe for BLAD and Pennstate Ivanhoe Star for CVM. Conclusions These results demonstrate that real-time PCR is a simple, rapid and reliable assay for detecting the BLAD and CVM defective alleles. The high frequency of the CVM allele suggests that implementing a routine testing system is necessary to gradually eradicate the deleterious gene from the Chinese Holstein population.

  5. Can magnetic resonance imaging at 3.0-Tesla reliably detect patients with endometriosis? Initial results.

    Science.gov (United States)

    Thomeer, Maarten G; Steensma, Anneke B; van Santbrink, Evert J; Willemssen, Francois E; Wielopolski, Piotr A; Hunink, Myriam G; Spronk, Sandra; Laven, Joop S; Krestin, Gabriel P

    2014-04-01

    The aim of this study was to determine whether an optimized 3.0-Tesla magnetic resonance imaging (MRI) protocol is sensitive and specific enough to detect patients with endometriosis. This was a prospective cohort study of consecutive patients. Forty consecutive patients with clinical suspicion of endometriosis underwent 3.0-Tesla MRI, including a T2-weighted high-resolution fast spin echo sequence (spatial resolution = 0.75 × 1.2 × 1.5 mm³) and a 3D T1-weighted high-resolution gradient echo sequence (spatial resolution = 0.75 × 1.2 × 2.0 mm³). Two radiologists reviewed the dataset with consensus reading. During laparoscopy, which was used as the reference standard, all lesions were characterized according to the revised criteria of the American Fertility Society. Patient-level and region-level sensitivities and specificities and lesion-level sensitivities were calculated. Patient-level sensitivity was 42% for stage I (5/12) and 100% for stages II, III and IV (25/25). Patient-level specificity for all stages was 100% (3/3). The region-level sensitivity and specificity were 63% and 97%, respectively. The sensitivity per lesion was 61% (90% for deep lesions, 48% for superficial lesions and 100% for endometriomata). The detection rate of obliteration of the cul-de-sac was 100% (10/10) with no false positive findings. The interreader agreement was substantial to perfect (kappa = 1 per patient, 0.65 per lesion and 0.71 for obliteration of the cul-de-sac). An optimized 3.0-Tesla MRI protocol is accurate in detecting stage II to stage IV endometriosis. © 2014 The Authors. Journal of Obstetrics and Gynaecology Research © 2014 Japan Society of Obstetrics and Gynecology.

  6. Lipase-nanoporous gold biocomposite modified electrode for reliable detection of triglycerides.

    Science.gov (United States)

    Wu, Chao; Liu, Xueying; Li, Yufei; Du, Xiaoyu; Wang, Xia; Xu, Ping

    2014-03-15

    For triglyceride biosensor design, protein immobilization is necessary to create the interface between the enzyme and the electrode. In this study, a glassy carbon electrode (GCE) was modified with a lipase-nanoporous gold (NPG) biocomposite (denoted as lipase/NPG/GCE). Owing to its highly conductive, porous, and biocompatible three-dimensional structure, NPG is well suited for enzyme immobilization. In cyclic voltammetry experiments, the lipase/NPG/GCE bioelectrode displayed a surface-confined reaction in phosphate buffer solution. Linear responses were obtained for tributyrin concentrations ranging from 50 to 250 mg dl(-1) and olive oil concentrations ranging from 10 to 200 mg dl(-1). The apparent Michaelis-Menten constant for tributyrin was 10.67 mg dl(-1) and the detection limit was 2.68 mg dl(-1). Further, the lipase/NPG/GCE bioelectrode had strong anti-interference ability against urea, glucose, cholesterol, and uric acid, as well as a long shelf-life. For the detection of triglycerides in human serum, the values given by the lipase/NPG/GCE bioelectrode were in good agreement with those of an automatic biochemical analyzer. These properties, along with the long shelf-life, make the lipase/NPG/GCE bioelectrode an excellent choice for the construction of triglyceride biosensors. © 2013 Elsevier B.V. All rights reserved.
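    An apparent Michaelis-Menten constant like the one quoted above is typically obtained by fitting the calibration response to i = i_max·C/(K_m + C); a generic fitting sketch (with illustrative data points, not the study's measurements) looks like this:

```python
# Sketch of estimating an apparent Michaelis-Menten constant from
# amperometric calibration data (illustrative values only).
import numpy as np
from scipy.optimize import curve_fit

def michaelis_menten(c, i_max, km):
    return i_max * c / (km + c)

conc = np.array([50, 100, 150, 200, 250], dtype=float)   # tributyrin, mg/dl
current = np.array([8.2, 14.5, 18.9, 22.1, 24.4])        # hypothetical response, arbitrary units

(i_max, km_app), _ = curve_fit(michaelis_menten, conc, current, p0=(30.0, 50.0))
print(f"apparent Km ≈ {km_app:.1f} mg/dl")
```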

  7. Delta flow: An accurate, reliable system for detecting kicks and loss of circulation during drilling

    Energy Technology Data Exchange (ETDEWEB)

    Speers, J.M.; Gehrig, G.F.

    1987-12-01

    A system to monitor drilling-fluid flow rate has been developed that detects kicks and lost returns in floating, fixed-platform, and land-based drilling operations. The system uses flowmeters that monitor the flow rates of drilling fluids entering the borehole through the standpipe and leaving the well through the return flowline. These readings are processed in a computer-based data-acquisition system to form a filtered delta-flow signal that identifies the occurrence of downhole fluid gains or losses. The system is designed to trip an alarm when a gain or loss exceeds 25 gal/min (1.6 dm³/s), even in a floating drilling environment. This sensitivity will generally keep gains or losses to less than 5 bbl (0.8 m³).
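    The delta-flow logic lends itself to a very small sketch; the moving-average filter and sampling details below are assumptions, and only the 25 gal/min alarm level comes from the description above:

```python
# Minimal sketch of a delta-flow alarm: filter the difference between
# return-line and standpipe flow rates and flag samples that exceed the
# gain/loss threshold.
import numpy as np

ALARM_GPM = 25.0   # gain/loss threshold quoted above

def delta_flow_alarm(flow_in_gpm, flow_out_gpm, window=30):
    """Return sample indices where the filtered (out - in) flow exceeds the alarm level."""
    delta = np.asarray(flow_out_gpm) - np.asarray(flow_in_gpm)
    kernel = np.ones(window) / window                 # simple moving-average filter
    filtered = np.convolve(delta, kernel, mode="same")
    return np.flatnonzero(np.abs(filtered) > ALARM_GPM)
```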

  8. A rapid and reliable determination of doxycycline hyclate by HPLC with UV detection in pharmaceutical samples

    Directory of Open Access Journals (Sweden)

    SNEZANA S. MITIC

    2008-06-01

    Full Text Available An accurate, sensitive and reproducible high performance liquid chromatographic (HPLC) method for the quantification of doxycycline hyclate in pharmaceutical samples has been developed and validated. The drug and the standard were eluted from a Lichrosorb RP-8 column (250 mm × 4.6 mm, 10 μm particle size) at 20 °C with a mobile phase consisting of methanol, acetonitrile and a 0.010 M aqueous solution of oxalic acid (2:3:5, v/v/v). The flow rate was 1.25 ml min-1. A UV detector set at 350 nm was used to monitor the effluent. Each analysis required no longer than 4 min. The limits of detection and quantification were 1.15 and 3.84 μg ml-1, respectively. Recoveries for different concentrations ranged from 99.58 to 101.93%.

  9. Reliability of Using Retinal Vascular Fractal Dimension as a Biomarker in the Diabetic Retinopathy Detection.

    Science.gov (United States)

    Huang, Fan; Dashtbozorg, Behdad; Zhang, Jiong; Bekkers, Erik; Abbasi-Sureshjani, Samaneh; Berendschot, Tos T J M; Ter Haar Romeny, Bart M

    2016-01-01

    The retinal fractal dimension (FD) is a measure of vasculature branching pattern complexity. FD has been considered a potential biomarker for the detection of several diseases, such as diabetes and hypertension. However, conflicting findings have been reported in the literature regarding the association between this biomarker and disease. In this paper, we examine the stability of the FD measurement with respect to (1) different vessel annotations obtained from human observers, (2) automatic segmentation methods, (3) various regions of interest, (4) accuracy of vessel segmentation methods, and (5) different imaging modalities. Our results demonstrate that the relative errors in the measurement of FD are significant and that FD varies considerably with image quality, modality, and the technique used to measure it. Automated and semiautomated methods for the measurement of FD are not stable enough, which makes FD a deceptive biomarker in quantitative clinical applications.
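    One common way such an FD value is computed is box counting over a binary vessel segmentation; the sketch below shows that basic estimator (the studies discussed above may use other estimators or preprocessing):

```python
# Basic box-counting estimate of fractal dimension for a binary vessel mask.
import numpy as np

def box_counting_fd(binary_image, box_sizes=(2, 4, 8, 16, 32, 64)):
    counts = []
    for size in box_sizes:
        count = 0
        for i in range(0, binary_image.shape[0], size):
            for j in range(0, binary_image.shape[1], size):
                if binary_image[i:i + size, j:j + size].any():
                    count += 1
        counts.append(count)
    # FD is the negative slope of log(count) versus log(box size).
    slope, _ = np.polyfit(np.log(box_sizes), np.log(counts), 1)
    return -slope
```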

  10. Detecting recurrent major depressive disorder within primary care rapidly and reliably using short questionnaire measures.

    Science.gov (United States)

    Thapar, Ajay; Hammerton, Gemma; Collishaw, Stephan; Potter, Robert; Rice, Frances; Harold, Gordon; Craddock, Nicholas; Thapar, Anita; Smith, Daniel J

    2014-01-01

    Major depressive disorder (MDD) is often a chronic disorder, with relapses usually detected and managed in primary care using a validated depression symptom questionnaire. However, for individuals with recurrent depression, the choice of which questionnaire to use, and whether a shorter measure could suffice, is not established. To compare the nine-item Patient Health Questionnaire (PHQ-9), the Beck Depression Inventory, and the Hospital Anxiety and Depression Scale against shorter PHQ-derived measures for detecting episodes of DSM-IV major depression in primary care patients with recurrent MDD. Diagnostic accuracy study of adults with recurrent depression in primary care, predominantly from Wales. Scores on each of the depression questionnaire measures were compared with the results of a semi-structured clinical diagnostic interview using receiver operating characteristic (ROC) curve analysis for 337 adults with recurrent MDD. Concurrent questionnaire and interview data were available for 272 participants. The one-month prevalence rate of depression was 22.2%. The area under the curve (AUC) and positive predictive value (PPV) at the derived optimal cut-off value for the three longer questionnaires were comparable (AUC = 0.86-0.90, PPV = 49.4-58.4%), but the AUC for the PHQ-9 was significantly greater than for the PHQ-2. However, by supplementing the PHQ-2 score with items on problems concentrating and feeling slowed down or restless, the AUC (0.91) and the PPV (55.3%) were comparable with those for the PHQ-9. A novel four-item PHQ-based questionnaire measure of depression performs equivalently to three longer depression questionnaires in identifying depression relapse in patients with recurrent MDD.
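    The ROC analysis behind figures like these can be sketched generically (simulated scores and diagnoses, not the study data; the cut-off here is chosen by the Youden index, which may differ from the derivation used in the study):

```python
# Generic ROC sketch: AUC and a Youden-index cut-off for a questionnaire
# score against an interview-based diagnosis.
import numpy as np
from sklearn.metrics import roc_curve, roc_auc_score

rng = np.random.default_rng(2)
diagnosis = rng.integers(0, 2, size=272)                  # 1 = MDD episode on interview
score = diagnosis * 6 + rng.normal(8, 4, size=272)        # hypothetical questionnaire scores

auc = roc_auc_score(diagnosis, score)
fpr, tpr, thresholds = roc_curve(diagnosis, score)
best = np.argmax(tpr - fpr)                               # Youden's J statistic
print(f"AUC = {auc:.2f}, optimal cut-off ≈ {thresholds[best]:.1f}")
```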

  11. Seismic Azimuthal Anisotropy of the Lower Paleozoic Shales in Northern Poland: can we reliably detect it?

    Science.gov (United States)

    Cyz, Marta; Malinowski, Michał

    2017-04-01

    Analysis of azimuthal anisotropy is an important aspect of characterizing the Lower Paleozoic shale play in northern Poland, since it can be used to map pre-existing fracture networks or to help in optimal placement of horizontal wells. Previous studies employed the Velocity versus Azimuth (VVAz) method and found that this anisotropy is weak, on the order of 1-2%, and only locally, close to major fault zones, is it higher (ca. 7%). This is consistent with the recent re-interpretation of the cross-dipole sonic data, which indicates an average shear wave anisotropy of 1%. The problem with the VVAz method is that it requires a good definition of the interval for which the analysis is made, and this interval should be a minimum of 100 ms thick. In our case, the target intervals are thin: the upper reservoir (Lower Silurian Jantar formation) is 15 m thick, and the lower reservoir (Upper Ordovician Sasino formation) is 25 m thick. Therefore, we prefer to use the Amplitude versus Azimuth (AVAz) method, which can be applied on a single horizon (e.g. the base of the reservoir). However, the AVAz method depends critically on the quality of the seismic data and on the preservation of amplitudes during processing. On top of the above-mentioned issues, the physical properties of the Lower Paleozoic shales from Poland seem to be unfavourable for detecting azimuthal anisotropy. For example, for both target formations the parameter g = (Vs/Vp)² is close to 0.32, which implies that the anisotropy expressed by the anisotropic gradient in the dry (i.e. gas-filled fractures) case is close to zero. In the case of e.g. the Bakken Shale, g is much higher (0.38-0.4), leading to a detectable anisotropic signature even in the dry case. Modelling of the synthetic AVAz response performed using available well data suggested that the anisotropic gradient in the wet (fluid-filled) case should be detectable even in the case of weak anisotropy (1-2%). This scenario is consistent with the observation that the studied area is located in the liquid

  12. Detecting inflammation in the unprepared pediatric colon - how reliable is magnetic resonance enterography?

    Energy Technology Data Exchange (ETDEWEB)

    Barber, Joy L.; Watson, Tom A. [Great Ormond Street Hospital for Children NHS Foundation Trust, Department of Radiology, London (United Kingdom); Lozinsky, Adriana Chebar; Kiparissi, Fevronia; Shah, Neil [Great Ormond Street Hospital for Children NHS Foundation Trust, Department of Gastroenterology, London (United Kingdom)

    2016-05-15

    Pediatric inflammatory bowel disease frequently affects the colon. MR enterography is used to assess the small bowel but it also depicts the colon. To compare the accuracy of MR enterography and direct visualization at endoscopy in assessing the colon in pediatric inflammatory bowel disease. We included children with inflammatory bowel disease who had undergone both MR enterography and endoscopy, and we retrospectively assessed the imaging and endoscopic findings. We scored the colonic appearance at MR using a total colon score. We then compared scores for the whole colon and for its individual segments with endoscopy and histology. We included 15 children. An elevated MR colonic segmental score predicted the presence of active inflammation on biopsy with a specificity of 90% (95% confidence interval [CI] 79.5-96.2%) and a sensitivity of 60% (CI 40.6-77.3%); this compares reasonably with the predictive values for findings at colonoscopy - specificity 85% (CI 73.4-92.9%) and sensitivity 53.3% (CI 34.3-71.6%). Accuracy did not change significantly with increasing bowel distension. MR-derived scores had comparable accuracy to those derived during visualization at colonoscopy for detecting biopsy-proven inflammation in our patient group. MR enterography might prove useful in guiding biopsy or monitoring treatment response. Collapse of a colonic segment did not impair assessment of inflammation. (orig.)

  13. Can the comet assay be used reliably to detect nanoparticle-induced genotoxicity?

    Science.gov (United States)

    Karlsson, Hanna L; Di Bucchianico, Sebastiano; Collins, Andrew R; Dusinska, Maria

    2015-03-01

    The comet assay is a sensitive method to detect DNA strand breaks as well as oxidatively damaged DNA at the level of single cells. Today the assay is commonly used in nano-genotoxicology. In this review we critically discuss possible interactions between nanoparticles (NPs) and the comet assay. Concerns about such interactions have arisen from the occasional observation of NPs in the "comet head", which implies that NPs may be present while the assay is being performed. This could give rise to false positive or false negative results, depending on the type of comet assay endpoint and NP. For most NPs, an interaction that substantially impacts the comet assay results is unlikely. For photocatalytically active NPs such as TiO2, on the other hand, exposure to light containing UV can lead to increased DNA damage. Samples should therefore not be exposed to such light. By comparing studies in which both the comet assay and the micronucleus assay have been used, a good consistency between the assays was found in general (69%); consistency was even higher when studies on TiO2 NPs were excluded (81%). The strong consistency between the comet and micronucleus assays for a range of different NPs, even though the two tests measure different endpoints, implies that both can be trusted in assessing the genotoxicity of NPs, and that both could be useful in a standard battery of test methods. © 2014 Wiley Periodicals, Inc.

  14. Robust and reliable banknote authentification and print flaw detection with opto-acoustical sensor fusion methods

    Science.gov (United States)

    Lohweg, Volker; Schaede, Johannes; Türke, Thomas

    2006-02-01

    The authenticity checking and inspection of banknotes is a highly labour-intensive process in which, traditionally, every note on every sheet is inspected manually. However, with the advent of increasingly sophisticated security features, both visible and invisible, and the requirement to reduce costs in the printing process, it is clear that automation is required. As more print techniques and new security features are established, total quality in security, authenticity and banknote printing must be assured, and this necessitates a broader sensorial concept. We propose a concept for authenticity checking and inspection methods for pattern recognition and classification of securities and banknotes, based on sensor fusion and fuzzy interpretation of data measures. In this approach, different methods of authenticity analysis and print flaw detection are combined, which can be used for vending or sorting machines as well as for printing machines. Usually only the existence or appearance of colours and their textures are checked by cameras. Our method combines visible camera images with IR-spectrally sensitive sensors and with acoustical and other measurements, such as the temperature and pressure of the printing machines.

  15. Detecting inflammation in the unprepared pediatric colon - how reliable is magnetic resonance enterography?

    International Nuclear Information System (INIS)

    Barber, Joy L.; Watson, Tom A.; Lozinsky, Adriana Chebar; Kiparissi, Fevronia; Shah, Neil

    2016-01-01

    Pediatric inflammatory bowel disease frequently affects the colon. MR enterography is used to assess the small bowel but it also depicts the colon. To compare the accuracy of MR enterography and direct visualization at endoscopy in assessing the colon in pediatric inflammatory bowel disease. We included children with inflammatory bowel disease who had undergone both MR enterography and endoscopy, and we retrospectively assessed the imaging and endoscopic findings. We scored the colonic appearance at MR using a total colon score. We then compared scores for the whole colon and for its individual segments with endoscopy and histology. We included 15 children. An elevated MR colonic segmental score predicted the presence of active inflammation on biopsy with a specificity of 90% (95% confidence interval [CI] 79.5-96.2%) and a sensitivity of 60% (CI 40.6-77.3%); this compares reasonably with the predictive values for findings at colonoscopy - specificity 85% (CI 73.4-92.9%) and sensitivity 53.3% (CI 34.3-71.6%). Accuracy did not change significantly with increasing bowel distension. MR-derived scores had comparable accuracy to those derived during visualization at colonoscopy for detecting biopsy-proven inflammation in our patient group. MR enterography might prove useful in guiding biopsy or monitoring treatment response. Collapse of a colonic segment did not impair assessment of inflammation. (orig.)

  16. Simultaneous amplification of two bacterial genes: more reliable method of Helicobacter pylori detection in microbial rich dental plaque samples.

    Science.gov (United States)

    Chaudhry, Saima; Idrees, Muhammad; Izhar, Mateen; Butt, Arshad Kamal; Khan, Ayyaz Ali

    2011-01-01

    The polymerase chain reaction (PCR) assay is considered superior to other methods for the detection of Helicobacter pylori (H. pylori) in the oral cavity; however, it also has limitations when the sample under study is microbe-rich dental plaque. The type of gene targeted and the number of primers used for bacterial detection in dental plaque samples can have a significant effect on the results obtained, as a number of closely related bacterial species reside in the plaque biofilm. Also, due to the high recombination rate of H. pylori, some of its genes might be downregulated or absent. The present study was conducted to determine the frequency of H. pylori colonization of dental plaque by simultaneously amplifying two genes of the bacterium. One hundred dental plaque specimens were collected from dyspeptic patients before their upper gastrointestinal endoscopy, and the presence of H. pylori was determined through a PCR assay using primers targeting two different genes of the bacterium. Eighty-nine of the 100 samples were included in the final analysis. With simultaneous amplification of two bacterial genes, 51.6% of the dental plaque samples were positive for H. pylori, while this prevalence increased to 73% when only one gene amplification was used for bacterial identification. Detection of H. pylori in dental plaque samples is more reliable when two genes of the bacterium are simultaneously amplified as compared to one gene amplification only.

  17. The sensitivity of computed tomography (CT) scans in detecting trauma: are CT scans reliable enough for courtroom testimony?

    Science.gov (United States)

    Molina, D Kimberley; Nichols, Joanna J; Dimaio, Vincent J M

    2007-09-01

    Rapid and accurate recognition of traumatic injuries is extremely important in emergency room and surgical settings. Emergency departments depend on computed tomography (CT) scans to provide rapid, accurate injury assessment. We conducted an analysis of all traumatic deaths autopsied at the Bexar County Medical Examiner's Office in which perimortem medical imaging (CT scan) was performed to assess the reliability of the CT scan in detecting trauma with sufficient accuracy for courtroom testimony. Cases were included in the study if an autopsy was conducted, a CT scan was performed within 24 hours before death, and there was no surgical intervention. Analysis was performed to assess the correlation between the autopsy and CT scan results. Sensitivity, specificity, positive predictive value, and negative predictive value were defined for the CT scan based on the autopsy results. The sensitivity of the CT scan ranged from 0% for cerebral lacerations, cervical vertebral body fractures, cardiac injury, and hollow viscus injury to 75% for liver injury. This study reveals that CT scans are an inadequate detection tool for forensic pathologists, where a definitive diagnosis is required, because they have a low level of accuracy in detecting traumatic injuries. CT scans may be adequate for clinicians in the emergency room setting, but are inadequate for courtroom testimony. If the evidence of trauma is based solely on CT scan reports, there is a high possibility of erroneous accusations, indictments, and convictions.

  18. Water-soluble upper GI based on clinical findings is reliable to detect anastomotic leaks after laparoscopic gastric bypass.

    Science.gov (United States)

    Katasani, V G; Leeth, R R; Tishler, D S; Leath, T D; Roy, B P; Canon, C L; Vickers, S M; Clements, R H

    2005-11-01

    Anastomotic leak after laparoscopic Roux-en-Y gastric bypass (LGB) is a major complication that must be recognized and treated early for best results. There is controversy in the literature regarding the reliability of upper GI series (UGI) in diagnosing leaks. LGB was performed in patients meeting NIH criteria for the surgical treatment of morbid obesity. All leaks identified at the time of surgery were repaired with suture and retested. Drains were placed at the surgeon's discretion. Postoperatively, UGI was performed by an experienced radiologist if there was a clinical suspicion of leak. From September 2001 until October 2004, a total of 553 patients (age 40.4 +/- 9.2 years, BMI 48.6 +/- 7.2) underwent LGB at UAB. Seventy-eight per cent (431 of 553) of patients had no clinical evidence suggesting anastomotic leak and were managed expectantly. Twenty-two per cent (122 of 553) of patients met at least one inclusion criteria for leak and underwent UGI. Four of 122 patients (3.2%) had a leak, two from anastomosis and two from the perforation of the stapled end of the Roux limb. No patient returned to the operating room without a positive UGI. High clinical suspicion and selectively performed UGI based on clinical evidence is reliable in detecting leaks.

  19. Reliability of the MicroScan WalkAway PC21 panel in identifying and detecting oxacillin resistance in clinical coagulase-negative staphylococci strains.

    Science.gov (United States)

    Olendzki, A N; Barros, E M; Laport, M S; Dos Santos, K R N; Giambiagi-Demarval, M

    2014-01-01

    The purpose of this study was to determine the reliability of the MicroScan WalkAway PosCombo21 (PC21) system for the identification of coagulase-negative staphylococci (CNS) strains and the detection of oxacillin resistance. Using molecular and phenotypic methods, 196 clinical strains were evaluated. The automated system demonstrated 100 % reliability for the identification of the clinical strains Staphylococcus haemolyticus, Staphylococcus hominis and Staphylococcus cohnii; 98.03 % reliability for the identification of Staphylococcus epidermidis; 70 % reliability for the identification of Staphylococcus lugdunensis; 40 % reliability for the identification of Staphylococcus warneri; and 28.57 % reliability for the identification of Staphylococcus capitis, but no reliability for the identification of Staphylococcus auricularis, Staphylococcus simulans and Staphylococcus xylosus. We concluded that the automated system provides accurate results for the more common CNS species but often fails to accurately identify less prevalent species. For the detection of oxacillin resistance, the automated system showed 100 % specificity and 90.22 % sensitivity. Thus, the PC21 panel detects oxacillin-resistant strains, but is limited by the heteroresistance that is observed when using most phenotypic methods.

  20. Ambient Pressure Laser Desorption—Chemical Ionization Mass Spectrometry for Fast and Reliable Detection of Explosives, Drugs, and Their Precursors

    Directory of Open Access Journals (Sweden)

    René Reiss

    2018-06-01

    Full Text Available Fast and reliable information is crucial for first responders to draw correct conclusions at crime scenes. An ambient pressure laser desorption (APLD) mass spectrometer is introduced for this scenario, which enables the detection of substances on surfaces without sample pretreatment. It is especially useful for substances with low vapor pressure and for thermolabile ones. The APLD allows the separation of desorption and ionization into two steps, and therefore both can be optimized separately. Within this work, an improved version of the developed system is shown that achieves limits of detection (LOD) down to 500 pg while remaining fast and flexible. Furthermore, realistic scenarios are used to prove the usability of this system for real-world issues. For this purpose, post-blast residues of a bomb from the Second World War were analyzed, and the presence of PETN was proven without sample pretreatment. In addition, the analyzable substance range could be expanded to various drugs and drug precursors. Thus, the presented instrumentation can be utilized for an increased number of forensically important compound classes without changing the setup. Drug precursors revealed a LOD ranging from 6 to 100 ng. Drugs such as cocaine hydrochloride, heroin, 3,4-methylenedioxymethamphetamine hydrochloride (MDMA hydrochloride), and others exhibit a LOD between 10 and 200 ng.

  1. Is air-displacement plethysmography a reliable method of detecting ongoing changes in percent body fat within obese children involved in a weight management program?

    DEFF Research Database (Denmark)

    Ewane, Cecile; McConkey, Stacy A; Kreiter, Clarence D

    2010-01-01

    (percent body fat) over time. The gold standard method, hydrodensitometry, has severe limitations for the pediatric population. OBJECTIVE: This study examines the reliability of air-displacement plethysmography (ADP) in detecting percent body fat changes within obese children over time. METHODS: Percent body fat by ADP, weight, and body mass index (BMI) were measured for eight obese children aged 5-12 years enrolled in a weight management program over a 12-month period. These measurements were taken at initial evaluation, 1.5 months, 3 months, 6 months, and 12 months to monitor the progress of the subjects and detect any changes in these measures over time. Statistical analysis was used to determine the reliability of the data collected. RESULTS: The reliability estimate for percent body fat by ADP was 0.78. This was much lower than the reliability of BMI, 0.98, and weight measurements, 0...

  2. 3-lead electrocardiogram is more reliable than pulse oximetry to detect bradycardia during stabilisation at birth of very preterm infants.

    Science.gov (United States)

    Iglesias, Beatriz; Rodríguez, María José; Aleo, Esther; Criado, Enrique; Martínez-Orgado, Jose; Arruza, Luis

    2018-05-01

    Current neonatal resuscitation guidelines suggest the use of ECG in the delivery room (DR) to assess heart rate (HR). However, reliability of ECG compared with pulse oximetry (PO) in a situation of bradycardia has not been specifically investigated. The objective of the present study was to compare HR monitoring using ECG or PO in a situation of bradycardia (HR <100 beats per minute (bpm)) during preterm stabilisation in the DR. Video recordings of resuscitations of infants <32 weeks of gestation were reviewed. HR readings in a situation of bradycardia (<100 bpm) at any moment during stabilisation were registered with both devices every 5 s from birth. A total of 29 episodes of bradycardia registered by the ECG in 39 video recordings were included in the analysis (n=29). PO did not detect the start of these events in 20 cases (69%). PO detected the start and the end of bradycardia later than the ECG (median (IQR): 5 s (0-10) and 5 s (0-7.5), respectively). A decline in PO accuracy was observed as bradycardia progressed so that by the end of the episode PO offered significantly lower HR readings than ECG. PO detects the start and recovery of bradycardia events slower and less accurately than ECG during stabilisation at birth of very preterm infants. ECG use in this scenario may contribute to an earlier initiation of resuscitation manoeuvres and to avoid unnecessary prolongation of resuscitation efforts after recovery. © Article author(s) (or their employer(s) unless otherwise stated in the text of the article) 2018. All rights reserved. No commercial use is permitted unless otherwise expressly granted.

  3. Reliability engineering

    International Nuclear Information System (INIS)

    Lee, Chi Woo; Kim, Sun Jin; Lee, Seung Woo; Jeong, Sang Yeong

    1993-08-01

    This book begins by asking what reliability is, covering the origin of reliability problems and the definition and use of reliability. It also deals with probability and the calculation of reliability, the reliability function and failure rate, probability distributions for reliability, estimation of MTBF, processes of probability distributions, downtime, maintainability and availability, breakdown maintenance and preventive maintenance, reliability design, reliability prediction and statistics, reliability testing, reliability data, and the design and management of reliability.
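    As a small numeric illustration of two of the quantities listed above, under the common constant-failure-rate (exponential) assumption the reliability function is R(t) = exp(-λt) and MTBF = 1/λ (the failure rate below is purely illustrative):

```python
# Reliability function and MTBF for a constant failure rate (exponential model).
import math

failure_rate = 2e-4          # failures per hour (illustrative value)
mtbf = 1.0 / failure_rate    # mean time between failures, hours

for t in (100, 1000, 5000):
    r = math.exp(-failure_rate * t)
    print(f"R({t} h) = {r:.3f}")
print(f"MTBF = {mtbf:.0f} h")
```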

  4. Reliable allele detection using SNP-based PCR primers containing Locked Nucleic Acid: application in genetic mapping

    Directory of Open Access Journals (Sweden)

    Trognitz Friederike

    2007-02-01

    Full Text Available Abstract Background The diploid, Solanum caripense, a wild relative of potato and tomato, possesses valuable resistance to potato late blight and we are interested in the genetic base of this resistance. Due to extremely low levels of genetic variation within the S. caripense genome it proved impossible to generate a dense genetic map and to assign individual Solanum chromosomes through the use of conventional chromosome-specific SSR, RFLP, AFLP, as well as gene- or locus-specific markers. The ease of detection of DNA polymorphisms depends on both frequency and form of sequence variation. The narrow genetic background of close relatives and inbreds complicates the detection of persisting, reduced polymorphism and is a challenge to the development of reliable molecular markers. Nonetheless, monomorphic DNA fragments representing not directly usable conventional markers can contain considerable variation at the level of single nucleotide polymorphisms (SNPs. This can be used for the design of allele-specific molecular markers. The reproducible detection of allele-specific markers based on SNPs has been a technical challenge. Results We present a fast and cost-effective protocol for the detection of allele-specific SNPs by applying Sequence Polymorphism-Derived (SPD markers. These markers proved highly efficient for fingerprinting of individuals possessing a homogeneous genetic background. SPD markers are obtained from within non-informative, conventional molecular marker fragments that are screened for SNPs to design allele-specific PCR primers. The method makes use of primers containing a single, 3'-terminal Locked Nucleic Acid (LNA base. We demonstrate the applicability of the technique by successful genetic mapping of allele-specific SNP markers derived from monomorphic Conserved Ortholog Set II (COSII markers mapped to Solanum chromosomes, in S. caripense. By using SPD markers it was possible for the first time to map the S. caripense alleles

  5. Mammographic casting-type calcification associated with small screen-detected invasive breast cancers: is this a reliable prognostic indicator?

    International Nuclear Information System (INIS)

    Peacock, C.; Given-Wilson, R.M.; Duffy, S.W.

    2004-01-01

    AIM: The aim of the present study was to establish whether mammographic casting-type calcification associated with small screen-detected invasive breast cancers is a reliable prognostic indicator. METHODS AND MATERIALS: We retrospectively identified 50 consecutive women diagnosed with an invasive cancer less than 15 mm in size who showed associated casting calcification on their screening mammograms. Controls were identified that showed no microcalcification and were matched for tumour size, histological type and lymph node status. A minimum of 5 years of follow-up was obtained, noting recurrence and outcome. Conditional and unconditional logistic regression, depending on the outcome variable, were used to analyse the data, taking the matched design into account in both cases. Where small numbers prohibited the use of logistic regression, Fisher's exact test was used. RESULTS: Five deaths from breast cancer occurred among the 50 cases, of which three were lymph node positive, two were lymph node negative and none were grade 3. None of the 78 control cases died from breast cancer. The difference in breast cancer death rates was significant by Fisher's exact test (p=0.02). The risk of recurrence was also significantly increased in the casting cases (OR=3.55, 95% CI 1.02-12.33, p=0.046). CONCLUSION: Although the overall outcome for small screen-detected breast cancers is good, our study suggests that casting calcification indicates a poorer prognosis. The advantage of a mammographic feature as an independent prognostic indicator lies in the early identification of high-risk patients, allowing optimization of management.
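    For the comparison of death rates quoted above, a 2×2 Fisher's exact test can be run directly on the reported counts; note that the published analysis also accounted for the matched design, so the value below need not coincide exactly with the reported p=0.02:

```python
# Fisher's exact test on the 2x2 table implied by the reported counts
# (5/50 breast cancer deaths among casting cases versus 0/78 among controls).
from scipy.stats import fisher_exact

table = [[5, 45],   # casting cases: deaths, survivors
         [0, 78]]   # controls: deaths, survivors
odds_ratio, p_value = fisher_exact(table)
print(f"two-sided p = {p_value:.3f}")
```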

  6. Sampling inspection for the evaluation of time-dependent reliability of deteriorating systems under imperfect defect detection

    International Nuclear Information System (INIS)

    Kuniewski, Sebastian P.; Weide, Johannes A.M. van der; Noortwijk, Jan M. van

    2009-01-01

    The paper presents a sampling-inspection strategy for the evaluation of time-dependent reliability of deteriorating systems, where the deterioration is assumed to initiate at random times and at random locations. After initiation, defects are weakening the system's resistance. The system becomes unacceptable when at least one defect reaches a critical depth. The defects are assumed to initiate at random times modeled as event times of a non-homogeneous Poisson process (NHPP) and to develop according to a non-decreasing time-dependent gamma process. The intensity rate of the NHPP is assumed to be a combination of a known time-dependent shape function and an unknown proportionality constant. When sampling inspection (i.e. inspection of a selected subregion of the system) results in a number of defect initiations, Bayes' theorem can be used to update prior beliefs about the proportionality constant of the NHPP intensity rate to the posterior distribution. On the basis of a time- and space-dependent Poisson process for the defect initiation, an adaptive Bayesian model for sampling inspection is developed to determine the predictive probability distribution of the time to failure. A potential application is, for instance, the inspection of a large vessel or pipeline suffering pitting/localized corrosion in the oil industry. The possibility of imperfect defect detection is also incorporated in the model.
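    A Monte Carlo sketch of the failure model described above is given below; the shape function, gamma-process parameters and critical depth are illustrative assumptions, not values from the paper:

```python
# Sketch: defects initiate as an NHPP with intensity m * w(t), each then grows
# as a stationary gamma process, and the system fails when any defect depth
# reaches d_crit.
import numpy as np

rng = np.random.default_rng(3)

def simulate_failure_time(m, horizon=50.0, d_crit=5.0, dt=0.5,
                          gamma_shape_rate=0.4, gamma_scale=0.5):
    """Return the first time any defect depth exceeds d_crit (np.inf if none does)."""
    w = lambda t: 1.0 + 0.05 * t          # assumed known shape function of the NHPP
    t, depths = 0.0, []
    while t < horizon:
        # new defect initiations in (t, t + dt]: Poisson with mean m * w(t) * dt
        depths += [0.0] * rng.poisson(m * w(t) * dt)
        if depths:
            # stationary gamma-process growth increments for all existing defects
            depths = list(np.asarray(depths) + rng.gamma(gamma_shape_rate * dt, gamma_scale, len(depths)))
            if max(depths) >= d_crit:
                return t + dt
        t += dt
    return np.inf

times = [simulate_failure_time(m=0.2) for _ in range(500)]
print("P(failure within the horizon) ≈", np.mean(np.isfinite(times)))
```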

  7. Reliability, standard error, and minimum detectable change of clinical pressure pain threshold testing in people with and without acute neck pain.

    Science.gov (United States)

    Walton, David M; Macdermid, Joy C; Nielson, Warren; Teasell, Robert W; Chiasson, Marco; Brown, Lauren

    2011-09-01

    Clinical measurement. To evaluate the intrarater, interrater, and test-retest reliability of an accessible digital algometer, and to determine the minimum detectable change in normal healthy individuals and a clinical population with neck pain. Pressure pain threshold testing may be a valuable assessment and prognostic indicator for people with neck pain. To date, most of this research has been completed using algometers that are too resource intensive for routine clinical use. Novice raters (physiotherapy students or clinical physiotherapists) were trained to perform algometry testing over 2 clinically relevant sites: the angle of the upper trapezius and the belly of the tibialis anterior. A convenience sample of normal healthy individuals and a clinical sample of people with neck pain were tested by 2 different raters (all participants) and on 2 different days (healthy participants only). Intraclass correlation coefficient (ICC), standard error of measurement, and minimum detectable change were calculated. A total of 60 healthy volunteers and 40 people with neck pain were recruited. Intrarater reliability was almost perfect (ICC = 0.94-0.97), interrater reliability was substantial to near perfect (ICC = 0.79-0.90), and test-retest reliability was substantial (ICC = 0.76-0.79). Smaller change was detectable in the trapezius compared to the tibialis anterior. This study provides evidence that novice raters can perform digital algometry with adequate reliability for research and clinical use in people with and without neck pain.

  8. The reliability and accuracy of two methods for proximal caries detection and depth on directly visible proximal surfaces: an in vitro study

    DEFF Research Database (Denmark)

    Ekstrand, K R; Alloza, Alvaro Luna; Promisiero, L

    2011-01-01

    This study aimed to determine the reliability and accuracy of the ICDAS and radiographs in detecting and estimating the depth of proximal lesions on extracted teeth. The lesions were visible to the naked eye. Three trained examiners scored a total of 132 sound/carious proximal surfaces from 106 p...

  9. Improvement of Matrix Converter Drive Reliability by Online Fault Detection and a Fault-Tolerant Switching Strategy

    DEFF Research Database (Denmark)

    Nguyen-Duy, Khiem; Liu, Tian-Hua; Chen, Der-Fa

    2011-01-01

    The matrix converter system is becoming a very promising candidate to replace the conventional two-stage ac/dc/ac converter, but system reliability remains an open issue. The most common reliability problem is that a bidirectional switch has an open-switch fault during operation. In this paper, a...

  10. Reliability and minimal detectable change of a modified passive neck flexion test in patients with chronic nonspecific neck pain and asymptomatic subjects.

    Science.gov (United States)

    López-de-Uralde-Villanueva, Ibai; Acuyo-Osorio, Mario; Prieto-Aldana, María; La Touche, Roy

    2017-04-01

    The Passive Neck Flexion Test (PNFT) can be used to diagnose meningitis and potential spinal disorders. Little evidence is available concerning the use of a modified version of the PNFT (mPNFT) in patients with chronic nonspecific neck pain (CNSNP). The primary objective was to assess the reliability of the mPNFT in subjects with and without CNSNP. The secondary objective was to assess the differences between these two populations in the symptoms provoked by the mPNFT. We used a repeated-measures concordance design for the main objective and a cross-sectional design for the secondary objective. A total of 30 asymptomatic subjects and 34 patients with CNSNP were recruited. The following measures were recorded: the range of motion at the onset of symptoms (OS-mPNFT), the range of motion at submaximal pain (SP-mPNFT), and the pain intensity evoked by the mPNFT (VAS-mPNFT). Good to excellent reliability was observed for OS-mPNFT and SP-mPNFT in the asymptomatic group (intra-examiner reliability: 0.95-0.97; inter-examiner reliability: 0.86-0.90; intra-examiner test-retest reliability: 0.84-0.87). In the CNSNP group, good to excellent reliability was obtained for the OS-mPNFT (intra-examiner reliability: 0.89-0.96; inter-examiner reliability: 0.83-0.86; intra-examiner test-retest reliability: 0.83-0.85) and the SP-mPNFT (intra-examiner reliability: 0.94-0.98; inter-examiner reliability: 0.80-0.82; intra-examiner test-retest reliability: 0.88-0.91). The CNSNP group showed statistically significant differences in OS-mPNFT (t = 4.92). The mPNFT is a reliable tool regardless of the examiner and the time factor. Patients with CNSNP have a decreased range of motion and more pain than asymptomatic subjects on the mPNFT, and these differences exceed the minimal detectable changes for OS-mPNFT and VAS-mPNFT. Copyright © 2017 Elsevier Ltd. All rights reserved.

  11. Psychometric properties of the Need for Recovery after work scale: test-retest reliability and sensitivity to detect change

    NARCIS (Netherlands)

    de Croon, E. M.; Sluiter, J. K.; Frings-Dresen, M. H. W.

    2006-01-01

    BACKGROUND: Monitoring worker health and evaluating occupational healthcare interventions requires sensitive instruments that are reliable over time. The Need for Recovery scale (NFR), which quantifies workers' difficulties in recovering from work related exertions, may be a relevant instrument in

  12. Reliable Maintenance of Wireless Sensor Networks for Event-detection Applications

    Institute of Scientific and Technical Information of China (English)

    胡四泉; 杨金阳; 王俊峰

    2011-01-01

    The reliability maintenance of a wireless sensor network is key to ensuring that alarm messages are delivered reliably and on time to the monitoring center in an event-detection application. Based on the unreliable links in wireless sensor networks and the network characteristics of event-detection applications, MPRRM, a multiple-path redundant reliability maintenance algorithm, is proposed in this paper. Both analytical and simulation results show that the MPRRM algorithm is superior to previously published solutions in the metrics of reliability, false positive rate, latency and message overhead.

  13. Pharyngeal pH alone is not reliable for the detection of pharyngeal reflux events: A study with oesophageal and pharyngeal pH-impedance monitoring

    Science.gov (United States)

    Desjardin, Marie; Roman, Sabine; des Varannes, Stanislas Bruley; Gourcerol, Guillaume; Coffin, Benoit; Ropert, Alain; Mion, François

    2013-01-01

    Background Pharyngeal pH probes and pH-impedance catheters have been developed for the diagnosis of laryngo-pharyngeal reflux. Objective To determine the reliability of pharyngeal pH alone for the detection of pharyngeal reflux events. Methods 24-h pH-impedance recordings performed in 45 healthy subjects with a bifurcated probe for detection of pharyngeal and oesophageal reflux events were reviewed. Pharyngeal pH drops to below 4 and 5 were analysed for the simultaneous occurrence of pharyngeal reflux, gastro-oesophageal reflux, and swallows, according to impedance patterns. Results Only 7.0% of pharyngeal pH drops to below 5 identified with impedance corresponded to pharyngeal reflux, while 92.6% were related to swallows and 10.2 and 13.3% were associated with proximal and distal gastro-oesophageal reflux events, respectively. Of pharyngeal pH drops to below 4, 13.2% were related to pharyngeal reflux, 87.5% were related to swallows, and 18.1 and 21.5% were associated with proximal and distal gastro-oesophageal reflux events, respectively. Conclusions This study demonstrates that pharyngeal pH alone is not reliable for the detection of pharyngeal reflux and that adding distal oesophageal pH analysis is not helpful. The only reliable analysis should take into account impedance patterns demonstrating the presence of pharyngeal reflux event preceded by a distal and proximal reflux event within the oesophagus. PMID:24917995

  14. Reliability Engineering

    International Nuclear Information System (INIS)

    Lee, Sang Yong

    1992-07-01

    This book covers reliability engineering: the definition and importance of reliability, the development of reliability engineering, the failure rate and failure probability density function and their types, CFR and the exponential distribution, IFR and the normal and Weibull distributions, maintainability and availability, reliability testing and reliability estimation for the exponential, normal and Weibull distribution types, reliability sampling tests, system reliability, reliability design, and functional failure analysis by FTA.

  15. Tumor-size-dependent detection rate of endorectal MRI of prostate cancer - a histopathologic correlation with whole-mount sections in 70 patients with prostate cancer

    International Nuclear Information System (INIS)

    Roethke, Matthias C.; Lichy, Matthias P.; Jurgschat, Leo; Hennenlotter, Joerg; Vogel, Ulrich; Schilling, David; Stenzl, Arnulf; Claussen, Claus D.; Schlemmer, Heinz-Peter

    2011-01-01

    Purpose: To evaluate the value of T2-weighted endorectal MRI (eMRI) for the correct detection of tumor foci within the prostate with respect to tumor size. Materials and Methods: 70 patients with histologically proven prostate cancer were examined with T2w eMRI on a 1.5 T scanner before radical prostatectomy. For evaluation of eMRI, two radiologists assessed each tumor focus within the gland. After radical prostatectomy, the prostates were prepared as whole-mount sections matched to the transversal T2w eMRI. For each slice, the tumor outlines were marked and compared with eMRI. Based on the whole-mount sections, 315 slices were evaluated and 533 tumor lesions were documented. Results: Based on the T2w eMRI, 213 tumor lesions were described, 137 of which were confirmed by histology. eMRI was not able to visualize any (0/56) of the lesions in the smallest size category, whereas lesions larger than 2 cm were visualized in 50/56 (89%). False-positive eMRI findings for lesions larger than 2 cm: n = 2. Conclusion: T2w eMRI cannot exclude prostate cancer for lesions smaller than 10 mm (0.4 cm³). The detection rate for lesions larger than 20 mm (1.6 cm³) is to be considered high.

  16. Myth of the Master Detective: Reliability of Interpretations for Kaufman's "Intelligent Testing" Approach to the WISC-III.

    Science.gov (United States)

    Macmann, Gregg M.; Barnett, David W.

    1997-01-01

    Used computer simulation to examine the reliability of interpretations for Kaufman's "intelligent testing" approach to the Wechsler Intelligence Scale for Children (3rd ed.) (WISC-III). Findings indicate that factor index-score differences and other measures could not be interpreted with confidence. Argues that limitations of IQ testing…

  17. A System of Deception and Fraud Detection Using Reliable Linguistic Cues Including Hedging, Disfluencies, and Repeated Phrases

    Science.gov (United States)

    Humpherys, Sean LaMarc

    2010-01-01

    Given the increasing problem of fraud, crime, and national security threats, assessing credibility is a recurring research topic in Information Systems and in other disciplines. Decision support systems can help. But the success of the system depends on reliable cues that can distinguish deceptive/truthful behavior and on a proven classification…

  18. Web-based tools can be used reliably to detect patients with major depressive disorder and subsyndromal depressive symptoms

    Directory of Open Access Journals (Sweden)

    Tsai Shih-Jen

    2007-04-01

    Full Text Available Abstract Background Although depression has been regarded as a major public health problem, many individuals with depression still remain undetected or untreated. Despite the potential for Internet-based tools to greatly improve the success rate of screening for depression, their reliability and validity have not been well studied. Therefore the aim of this study was to evaluate the test-retest reliability and criterion validity of a Web-based system, the Internet-based Self-assessment Program for Depression (ISP-D). Methods The ISP-D, which screens for major depressive disorder (MDD), minor depressive disorder (MinD), and subsyndromal depressive symptoms (SSD), was developed in traditional Chinese. Volunteers, 18 years and older, were recruited via the Internet and then assessed twice on the online ISP-D system to investigate the test-retest reliability of the test. They were subsequently prompted to schedule face-to-face interviews. The interviews were performed by the research psychiatrists using the Mini-International Neuropsychiatric Interview, and the diagnoses made according to DSM-IV diagnostic criteria were used for the statistics of criterion validity. Kappa (κ) values were calculated to assess test-retest reliability. Results A total of 579 volunteer subjects were administered the test. Most of the subjects were young (mean age: 26.2 ± 6.6 years), female (77.7%), single (81.6%), and well educated (61.9% college or higher). The distributions of MDD, MinD, SSD and no depression specified were 30.9%, 7.4%, 15.2%, and 46.5%, respectively. The mean time to complete the ISP-D was 8.89 ± 6.77 min. One hundred and eighty-four of the respondents completed the retest (response rate: 31.8%). Our analysis revealed that the 2-week test-retest reliability for the ISP-D was excellent (weighted κ = 0.801). Fifty-five participants completed the face-to-face interview for the validity study. The sensitivity, specificity, positive, and negative predictive values for major

  19. The reliability and internal consistency of one-shot and flicker change detection for measuring individual differences in visual working memory capacity.

    Science.gov (United States)

    Pailian, Hrag; Halberda, Justin

    2015-04-01

    We investigated the psychometric properties of the one-shot change detection task for estimating visual working memory (VWM) storage capacity, and also introduced and tested an alternative flicker change detection task for estimating these limits. In three experiments, we found that the one-shot whole-display task returns estimates of VWM storage capacity (K) that are unreliable across set sizes, suggesting that the whole-display task measures different things at different set sizes. In two additional experiments, we found that the one-shot single-probe variant shows improvements in the reliability and consistency of K estimates. In a further experiment, we found that a one-shot whole-display-with-click task (requiring target localization) also showed improvements in reliability and consistency. The latter results suggest that the one-shot task can return reliable and consistent estimates of VWM storage capacity (K), and they highlight the possibility that the requirement to localize the changed target is what engenders this enhancement. Through a final series of four experiments, we introduced and tested an alternative flicker change detection method that also requires the observer to localize the changing target and that generates, from response times, an estimate of VWM storage capacity (K). We found that estimates of K from the flicker task correlated with estimates from the traditional one-shot task and also had high reliability and consistency. We highlight the flicker method's ability to estimate executive functions as well as VWM storage capacity, and discuss the potential for measuring multiple abilities with the one-shot and flicker tasks.
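    For reference, K in one-shot tasks is usually derived from hit and false-alarm rates with one of two standard formulas (the study above may use variants of these):

```python
# Standard capacity estimates from one-shot change-detection accuracy.
def cowan_k(hit_rate, false_alarm_rate, set_size):
    """Cowan's K, typically used for single-probe displays."""
    return set_size * (hit_rate - false_alarm_rate)

def pashler_k(hit_rate, false_alarm_rate, set_size):
    """Pashler's K, typically used for whole-display change detection."""
    return set_size * (hit_rate - false_alarm_rate) / (1 - false_alarm_rate)

print(cowan_k(0.85, 0.15, 4))    # 2.8 items
print(pashler_k(0.85, 0.15, 4))  # ~3.3 items
```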

  20. Numerical and structural genomic aberrations are reliably detectable in tissue microarrays of formalin-fixed paraffin-embedded tumor samples by fluorescence in-situ hybridization.

    Directory of Open Access Journals (Sweden)

    Heike Horn

    Full Text Available Few data are available regarding the reliability of fluorescence in-situ hybridization (FISH), especially for chromosomal deletions, in high-throughput settings using tissue microarrays (TMAs). We performed a comprehensive FISH study for the detection of chromosomal translocations and deletions in formalin-fixed and paraffin-embedded (FFPE) tumor specimens arranged in TMA format. We analyzed 46 B-cell lymphoma (B-NHL) specimens with known karyotypes for translocations of IGH-, BCL2-, BCL6- and MYC-genes. Locus-specific DNA probes were used for the detection of deletions in chromosome bands 6q21 and 9p21 in 62 follicular lymphomas (FL) and six malignant mesothelioma (MM) samples, respectively. To test for aberrant signals generated by truncation of nuclei following sectioning of FFPE tissue samples, cell line dilutions with 9p21-deletions were embedded into paraffin blocks. The overall TMA hybridization efficiency was 94%. FISH results regarding translocations matched karyotyping data in 93%. As for chromosomal deletions, sectioning artefacts occurred in 17% to 25% of cells, suggesting that the proportion of cells showing deletions should exceed 25% to be reliably detectable. In conclusion, FISH represents a robust tool for the detection of structural as well as numerical aberrations in FFPE tissue samples in a TMA-based high-throughput setting, when rigorous cut-off values and appropriate controls are maintained, and, of note, was superior to quantitative PCR approaches.

  1. Breast-i Is an Effective and Reliable Adjunct Screening Tool for Detecting Early Tumour Related Angiogenesis of Breast Cancers in Low Resource Sub-Saharan Countries

    Directory of Open Access Journals (Sweden)

    Frank Naku Ghartey

    2018-01-01

    Full Text Available Background. What cheaper alternative breast screening procedures are available to younger women in addition to clinical breast examination (CBE) in Sub-Saharan countries? In 2009, we first described BreastLight for screening and reported high sensitivity at detecting breast cancer. Due to limitations of BreastLight, we have since 2014 been using the more technologically advanced Breast-i to screen 2204 women to find cheaper screening alternatives. Methodology. First, the participant lies down for CBE; then, in a darkened room, Breast-i is placed underneath each breast and trained personnel confirm the vein pattern and look out for dark spot(s) to ascertain the presence of suspicious angiogenic lesion(s). Results. CBE detected 153 palpable breast masses and Breast-i, which detects angiogenesis, confirmed 136. However, Breast-i detected 22 more cases, of which 7 had angiogenesis but were not palpable and 15 were missed by CBE due to large breast size. Overall confirmed cases were 26, with Breast-i detecting 7 cases missed by CBE. Breast-i and CBE gave sensitivities of 92.3% and 73%, respectively. Conclusion. Breast-i, with its high sensitivity to angiogenesis, reliability, and affordability, will be an effective adjunct detection device that can be used to increase early detection in younger women, thereby increasing treatment success.

  2. Test-retest reliability and agreement of the SPI-Questionnaire to detect symptoms of digital ischemia in elite volleyball players.

    Science.gov (United States)

    van de Pol, Daan; Zacharian, Tigran; Maas, Mario; Kuijer, P Paul F M

    2017-06-01

    The Shoulder posterior circumflex humeral artery Pathology and digital Ischemia - questionnaire (SPI-Q) has been developed to enable periodic surveillance of elite volleyball players, who are at risk for digital ischemia. Prior to implementation, assessing reliability is mandatory. Therefore, the test-retest reliability and agreement of the SPI-Q were evaluated among the population at risk. A questionnaire survey was performed with a 2-week interval among 65 elite male volleyball players assessing symptoms of cold, pale and blue digits in the dominant hand during or after practice or competition using a 4-point Likert scale (never, sometimes, often and always). Kappa (κ) and percentage of agreement (POA) were calculated for individual symptoms, and to distinguish symptomatic and asymptomatic players. For the individual symptoms, κ ranged from "poor" (0.25) to "good" (0.63), and POA ranged from "moderate" (78%) to "good" (97%). To classify symptomatic players, the SPI-Q showed "good" reliability (κ = 0.83; 95%CI 0.69-0.97) and "good" agreement (POA = 92%). The current study has proven the SPI-Q to be reliable for detecting elite male indoor volleyball players with symptoms of digital ischemia.
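
    A minimal sketch of the two agreement statistics reported for the SPI-Q, Cohen's kappa (unweighted here) and percentage of agreement, computed from two sets of categorical answers. The Likert responses below are invented for illustration.

    # Sketch: percentage of agreement and (unweighted) Cohen's kappa for test-retest
    # answers on a 4-point Likert scale; data are made up.
    from collections import Counter

    def percent_agreement(test, retest):
        return sum(a == b for a, b in zip(test, retest)) / len(test)

    def cohens_kappa(test, retest):
        n = len(test)
        po = percent_agreement(test, retest)                      # observed agreement
        c1, c2 = Counter(test), Counter(retest)
        pe = sum(c1[k] * c2[k] for k in set(c1) | set(c2)) / (n * n)  # chance agreement
        return (po - pe) / (1 - pe)

    test   = ["never", "sometimes", "never", "often", "never", "sometimes"]
    retest = ["never", "sometimes", "never", "sometimes", "never", "sometimes"]
    print(percent_agreement(test, retest), cohens_kappa(test, retest))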

  3. The reliability, minimal detectable change and concurrent validity of a gravity-based bubble inclinometer and iphone application for measuring standing lumbar lordosis.

    Science.gov (United States)

    Salamh, Paul A; Kolber, Morey

    2014-01-01

    To investigate the reliability, minimal detectable change (MDC90) and concurrent validity of a gravity-based bubble inclinometer (inclinometer) and iPhone® application for measuring standing lumbar lordosis. Two investigators used both an inclinometer and an iPhone® with an inclinometer application to measure lumbar lordosis of 30 asymptomatic participants. ICC models 3,k and 2,k were used for the intrarater and interrater analysis, respectively. Good interrater and intrarater reliability was present for the inclinometer, with Intraclass Correlation Coefficients (ICC) of 0.90 and 0.85, respectively, and for the iPhone® application, with ICC values of 0.96 and 0.81. The minimal detectable change (MDC90) indicates that a change greater than or equal to 7° and 6° is needed to exceed the threshold of error using the iPhone® and inclinometer, respectively. The concurrent validity between the two instruments was good, with a Pearson product-moment coefficient of correlation (r) of 0.86 for both raters. Ninety-five percent limits of agreement identified differences ranging from 9° greater for the iPhone® to 8° less for the inclinometer. Both the inclinometer and iPhone® application possess good interrater reliability, intrarater reliability and concurrent validity for measuring standing lumbar lordosis. This investigation provides preliminary evidence to suggest that smart phone applications may offer clinical utility comparable to inclinometry for quantifying standing lumbar lordosis. Clinicians should recognize potential individual differences when using these devices interchangeably.
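
    MDC90 values of the kind reported here are typically derived from the test-retest ICC and the sample standard deviation via the standard error of measurement. A short sketch under that assumption; the SD and ICC plugged in are illustrative, not the study's values.

    # Sketch of the standard SEM/MDC90 formulas (assumed; illustrative numbers only).
    import math

    def sem(sd, icc):
        # Standard error of measurement from the sample SD and the reliability (ICC).
        return sd * math.sqrt(1.0 - icc)

    def mdc90(sd, icc):
        # Minimal detectable change at the 90% confidence level.
        return 1.645 * math.sqrt(2.0) * sem(sd, icc)

    # e.g. a lordosis SD of 12 degrees and an ICC of 0.90 give an MDC90 near 9 degrees
    print(round(mdc90(12.0, 0.90), 1))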

  4. The Validity and Reliability of the Mini-Mental State Examination-2 for Detecting Mild Cognitive Impairment and Alzheimer's Disease in a Korean Population.

    Directory of Open Access Journals (Sweden)

    Min Jae Baek

    Full Text Available To examine the validity and reliability of the MMSE-2 for assessing patients with mild cognitive impairment (MCI) and Alzheimer's disease (AD) in a Korean population. Specifically, the usefulness of the MMSE-2 as a screening measure for detecting early cognitive change, which has not been detectable through the MMSE, was examined. Two-hundred and twenty-six patients with MCI, 97 patients with AD, and 91 healthy older adults were recruited. All participants consented to examination with the MMSE-2, the MMSE, and other detailed neuropsychological assessments. The MMSE-2 performed well in discriminating participants across Clinical Dementia Rating (CDR) stages and CDR-Sum of Boxes (CDR-SOB), and it showed excellent internal consistency, high test-retest reliability, high interrater reliability, and good concurrent validity with the MMSE and other detailed neuropsychological assessments. The MMSE-2 was divided into two factors (tests that are sensitive to decline in cognitive functions vs. tests that are not sensitive to decline in cognitive functions) in normal cognitive aging. Moreover, the MMSE-2 was divided into two factors (tests related to overall cognitive functioning other than memory vs. tests related to episodic memory) in patients with AD. Finally, the MMSE-2 was divided into three factors (tests related to working memory and frontal lobe functioning vs. tests related to verbal memory vs. tests related to orientation and immediate recall) in patients with MCI. The sensitivity and specificity of the three versions of the MMSE-2 were relatively high in discriminating participants with normal cognitive aging from patients with MCI and AD. The MMSE-2 is a valid and reliable cognitive screening instrument for assessing cognitive impairment in a Korean population, but its ability to distinguish patients with MCI from those with normal cognitive aging may not be as highly sensitive as expected.

  5. The Validity and Reliability of the Mini-Mental State Examination-2 for Detecting Mild Cognitive Impairment and Alzheimer's Disease in a Korean Population.

    Science.gov (United States)

    Baek, Min Jae; Kim, Karyeong; Park, Young Ho; Kim, SangYun

    To examine the validity and reliability of the MMSE-2 for assessing patients with mild cognitive impairment (MCI) and Alzheimer's disease (AD) in a Korean population. Specifically, the usefulness of the MMSE-2 as a screening measure for detecting early cognitive change, which has not been detectable through the MMSE, was examined. Two-hundred and twenty-six patients with MCI, 97 patients with AD, and 91 healthy older adults were recruited. All participants consented to examination with the MMSE-2, the MMSE, and other detailed neuropsychological assessments. The MMSE-2 performed well in discriminating participants across Clinical Dementia Rating (CDR) stages and CDR-Sum of Boxes (CDR-SOB), and it showed excellent internal consistency, high test-retest reliability, high interrater reliability, and good concurrent validity with the MMSE and other detailed neuropsychological assessments. The MMSE-2 was divided into two factors (tests that are sensitive to decline in cognitive functions vs. tests that are not sensitive to decline in cognitive functions) in normal cognitive aging. Moreover, the MMSE-2 was divided into two factors (tests related to overall cognitive functioning other than memory vs. tests related to episodic memory) in patients with AD. Finally, the MMSE-2 was divided into three factors (tests related to working memory and frontal lobe functioning vs. tests related to verbal memory vs. tests related to orientation and immediate recall) in patients with MCI. The sensitivity and specificity of the three versions of the MMSE-2 were relatively high in discriminating participants with normal cognitive aging from patients with MCI and AD. The MMSE-2 is a valid and reliable cognitive screening instrument for assessing cognitive impairment in a Korean population, but its ability to distinguish patients with MCI from those with normal cognitive aging may not be as highly sensitive as expected.

  6. Brain GABA Detection in vivo with the J-editing 1H MRS Technique: A Comprehensive Methodological Evaluation of Sensitivity Enhancement, Macromolecule Contamination and Test-Retest Reliability

    Science.gov (United States)

    Shungu, Dikoma C.; Mao, Xiangling; Gonzales, Robyn; Soones, Tacara N.; Dyke, Jonathan P.; van der Veen, Jan Willem; Kegeles, Lawrence S.

    2016-01-01

    Abnormalities in brain γ-aminobutyric acid (GABA) have been implicated in various neuropsychiatric and neurological disorders. However, in vivo GABA detection by proton magnetic resonance spectroscopy (1H MRS) presents significant challenges arising from low brain concentration, overlap by much stronger resonances, and contamination by mobile macromolecule (MM) signals. This study addresses these impediments to reliable brain GABA detection with the J-editing difference technique on a 3T MR system in healthy human subjects by (a) assessing the sensitivity gains attainable with an 8-channel phased-array head coil, (b) determining the magnitude and anatomic variation of the contamination of GABA by MM, and (c) estimating the test-retest reliability of measuring GABA with this method. Sensitivity gains and test-retest reliability were examined in the dorsolateral prefrontal cortex (DLPFC), while MM levels were compared across three cortical regions: the DLPFC, the medial prefrontal cortex (MPFC) and the occipital cortex (OCC). A 3-fold higher GABA detection sensitivity was attained with the 8-channel head coil compared to the standard single-channel head coil in DLPFC. Despite significant anatomic variation in GABA+MM and MM across the three brain regions (p < 0.05), the MM fraction of GABA+MM was relatively stable across the three voxels, ranging from 41% to 49%, a non-significant regional variation (p = 0.58). The test-retest reliability of GABA measurement, expressed either as ratios to voxel tissue water (W) or total creatine, was found to be very high for both the single-channel coil and the 8-channel phased-array coil. For the 8-channel coil, for example, Pearson's correlation coefficient of test vs. retest for GABA/W was 0.98 (R2 = 0.96, p = 0.0007), the percent coefficient of variation (CV) was 1.25%, and the intraclass correlation coefficient (ICC) was 0.98. Similar reliability was also found for the co-edited resonance of combined glutamate and glutamine (Glx) for both coils.
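
    A small sketch of the two test-retest summary statistics quoted above, Pearson's r and the percent coefficient of variation; the GABA/W values below are invented stand-ins, and the within-subject CV convention used here is one of several in common use.

    # Sketch: Pearson r and within-subject CV% for a test-retest series (invented data).
    import numpy as np

    test   = np.array([1.02, 0.95, 1.10, 0.98, 1.05])   # e.g. GABA/W, session 1
    retest = np.array([1.01, 0.97, 1.08, 0.99, 1.06])   # e.g. GABA/W, session 2

    r = np.corrcoef(test, retest)[0, 1]

    # Within-subject CV%: SD of the two measurements over their mean, averaged
    # across subjects (one common convention; others exist).
    pairs = np.stack([test, retest], axis=1)
    cv_percent = np.mean(pairs.std(axis=1, ddof=1) / pairs.mean(axis=1)) * 100

    print(round(r, 3), round(cv_percent, 2))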

  7. Reliability considerations of NDT by probability of detection (POD). Determination using ultrasound phased array. Results from a project in frame of the German nuclear safety research program

    International Nuclear Information System (INIS)

    Kurz, Jochen H.; Dugan, Sandra; Juengert, Anne

    2013-01-01

    Reliable assessment procedures are an important aspect of maintenance concepts. Non-destructive testing (NDT) methods are an essential part of a variety of maintenance plans. Fracture mechanical assessments require knowledge of flaw dimensions, loads and material parameters. NDT methods are able to acquire information on all of these areas. However, it has to be considered that the level of detail of the information depends on the case investigated and therefore on the applicable methods. Reliability aspects of NDT methods are of importance if quantitative information is required. Different design concepts, e.g. the damage tolerance approach in aerospace, already include reliability criteria of NDT methods applied in maintenance plans. NDT is also an essential part during construction and maintenance of nuclear power plants. In Germany, type and extent of inspection are specified in Safety Standards of the Nuclear Safety Standards Commission (KTA). Only certified inspections are allowed in the nuclear industry. The qualification of NDT is carried out in form of performance demonstrations of the inspection teams and the equipment, witnessed by an authorized inspector. The results of these tests are mainly statements regarding the detection capabilities of certain artificial flaws. In other countries, e.g. the U.S., additional blind tests on test blocks with hidden and unknown flaws may be required, in which a certain percentage of these flaws has to be detected. The knowledge of the probability of detection (POD) curves of specific flaws in specific testing conditions is often not available. This paper shows the results of a research project designed for POD determination of ultrasound phased array inspections of real and artificial cracks. A further objective of this project was to generate quantitative POD results. The distribution of the crack sizes of the specimens and the inspection planning is discussed, and results of the ultrasound inspections are presented. In
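
    Hit/miss POD analysis of the kind referenced here is commonly modelled as a logistic curve in log flaw size; the sketch below fits such a curve to made-up detect/no-detect outcomes and reads off an illustrative a90. Data, starting values and the a90 read-out are all assumptions for illustration, not project results.

    # Sketch: log-logistic hit/miss POD model POD(a) = logistic(b0 + b1*ln(a)),
    # fitted by maximum likelihood to invented inspection outcomes.
    import numpy as np
    from scipy.optimize import minimize

    sizes = np.array([1.0, 1.5, 2.0, 2.5, 3.0, 3.5, 4.0, 5.0, 6.0, 8.0])  # flaw size, mm
    hits  = np.array([0,   0,   0,   1,   0,   1,   1,   1,   1,   1  ])  # 1 = detected

    def neg_log_lik(params):
        b0, b1 = params
        p = 1.0 / (1.0 + np.exp(-(b0 + b1 * np.log(sizes))))
        p = np.clip(p, 1e-9, 1 - 1e-9)
        return -np.sum(hits * np.log(p) + (1 - hits) * np.log(1 - p))

    b0, b1 = minimize(neg_log_lik, x0=[0.0, 1.0]).x
    a90 = np.exp((np.log(0.9 / 0.1) - b0) / b1)   # size detected with 90% POD
    print(round(a90, 2))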

  8. Test-Retest Reliability and Minimal Detectable Change of Randomized Dichotic Digits in Learning-Disabled Children: Implications for Dichotic Listening Training.

    Science.gov (United States)

    Mahdavi, Mohammad Ebrahim; Pourbakht, Akram; Parand, Akram; Jalaie, Shohreh

    2018-03-01

    Evaluation of dichotic listening to digits is a common part of many studies for diagnosis and managing auditory processing disorders in children. Previous researchers have verified test-retest relative reliability of dichotic digits results in normal children and adults. However, detecting intervention-related changes in the ear scores after dichotic listening training requires information regarding trial-to-trial typical variation of individual ear scores that is estimated using indices of absolute reliability. Previous studies have not addressed absolute reliability of dichotic listening results. To compare the results of the Persian randomized dichotic digits test (PRDDT) and its relative and absolute indices of reliability between typical achieving (TA) and learning-disabled (LD) children. A repeated measures observational study. Fifteen LD children were recruited from a previously performed study, with an age range of 7-12 yr. The control group consisted of 15 TA schoolchildren with an age range of 8-11 yr. The Persian randomized dichotic digits test was administered to the children under free recall condition in two test sessions 7-12 days apart. We compared the average of the ear scores and ear advantage between TA and LD children. Relative indices of reliability included Pearson's correlation and intraclass correlation (ICC 2,1) coefficients, and absolute reliability was evaluated by calculation of standard error of measurement (SEM) and minimal detectable change (MDC) using the raw ear scores. The Pearson correlation coefficient indicated that in both groups of children the ear scores of test and retest sessions were strongly and positively (greater than +0.8) correlated. The ear scores showed excellent ICC coefficient of consistency (0.78-0.82) and fair to excellent ICC coefficient of absolute agreement (0.62-0.74) in TA children and excellent ICC coefficients of consistency and absolute agreement in LD children (0.76-0.87). SEM and SEM% of the ear scores in TA

  9. Probabilistic reliability analyses to detect weak points in secondary-side residual heat removal systems of KWU PWR plants

    International Nuclear Information System (INIS)

    Schilling, R.

    1984-01-01

    Requirements made by Federal German licensing authorities called for the analysis of the secondary-side residual heat removal systems of new PWR plants with regard to availability, possible weak points and the balanced nature of the overall system for different incident sequences. Following a description of the generic concept and the process and safety-related systems for steam generator feed and main steam discharge, the reliability of the latter is analyzed for the small break LOCA and emergency power mode incidents, weak points in the process systems are identified, remedial measures of a system-specific and test-strategic nature are presented and their contribution to improving system availability is quantified. A comparison with the results of the German Risk Study on Nuclear Power Plants (GRS) shows a distinct reduction in core meltdown frequency. (orig.)

  10. Reliability analyses to detect weak points in secondary-side residual heat removal systems of KWU PWR plants

    International Nuclear Information System (INIS)

    Schilling, R.

    1983-01-01

    Requirements made by Federal German licensing authorities called for the analysis of the secondary-side residual heat removal systems of new PWR plants with regard to availability, possible weak points and the balanced nature of the overall system for different incident sequences. Following a description of the generic concept and the process and safety-related systems for steam generator feed and main steam discharge, the reliability of the latter is analyzed for the small break LOCA and emergency power mode incidents, weak points in the process systems identified, remedial measures of a system-specific and test-strategic nature presented and their contribution to improving system availability quantified. A comparison with the results of the German Risk Study on Nuclear Power Plants (GRS) shows a distinct reduction in core meltdown frequency. (orig.)

  11. Automatic Trip Detection with the Dutch Mobile Mobility Panel: Towards Reliable Multiple-Week Trip Registration for Large Samples

    NARCIS (Netherlands)

    Thomas, Tom; Geurs, Karst T.; Koolwaaij, Johan; Bijlsma, Marcel E.

    2018-01-01

    This paper examines the accuracy of trip and mode choice detection of the last wave of the Dutch Mobile Mobility Panel, a large-scale three-year, smartphone-based travel survey. Departure and arrival times, origins, destinations, modes, and travel purposes were recorded during a four week period in

  12. Methods and Reliability of Radiographic Vertebral Fracture Detection in Older Men: The Osteoporotic Fractures in Men Study

    Science.gov (United States)

    Cawthon, Peggy M.; Haslam, Jane; Fullman, Robin; Peters, Katherine W.; Black, Dennis; Ensrud, Kristine E.; Cummings, Steven R.; Orwoll, Eric S.; Barrett-Connor, Elizabeth; Marshall, Lynn; Steiger, Peter; Schousboe, John T.

    2014-01-01

    We describe the methods and reliability of radiographic vertebral fracture assessment in MrOS, a cohort of community dwelling men aged ≥65 yrs. Lateral spine radiographs were obtained at Visit 1 (2000-2) and 4.6 years later (Visit 2). Using a workflow tool (SpineAnalyzer™, Optasia Medical), a physician reader completed semi-quantitative (SQ) scoring. Prior to SQ scoring, technicians performed “triage” to reduce physician reader workload, whereby clearly normal spine images were eliminated from SQ scoring with all levels assumed to be SQ=0 (no fracture, “triage negative”); spine images with any possible fracture or abnormality were passed to the physician reader as “triage positive” images. Using a quality assurance sample of images (n=20 participants; 8 with baseline only and 12 with baseline and follow-up images) read multiple times, we calculated intra-reader kappa statistics and percent agreement for SQ scores. A subset of 494 participants' images were read regardless of triage classification to calculate the specificity and sensitivity of triage. Technically adequate images were available for 5958 of 5994 participants at Visit 1, and 4399 of 4423 participants at Visit 2. Triage identified 3215 (53.9%) participants with radiographs that required further evaluation by the physician reader. For prevalent fractures at Visit 1 (SQ≥1), intra-reader kappa statistics ranged from 0.79-0.92; percent agreement ranged from 96.9%-98.9%; sensitivity of the triage was 96.8% and specificity of triage was 46.3%. In conclusion, SQ scoring had excellent intra-rater reliability in our study. The triage process reduces expert reader workload without hindering the ability to identify vertebral fractures. PMID:25003811

  13. Interrater and Test-Retest Reliability and Minimal Detectable Change of the Balance Evaluation Systems Test (BESTest) and Subsystems With Community-Dwelling Older Adults.

    Science.gov (United States)

    Wang-Hsu, Elizabeth; Smith, Susan S

    2017-01-10

    Falls are a common cause of injuries and hospital admissions in older adults. Balance limitation is a potentially modifiable factor contributing to falls. The Balance Evaluation Systems Test (BESTest), a clinical balance measure, categorizes balance into 6 underlying subsystems. Each of the subsystems is scored individually and summed to obtain a total score. The reliability of the BESTest and its individual subsystems has been reported in patients with various neurological disorders and cancer survivors. However, the reliability and minimal detectable change (MDC) of the BESTest with community-dwelling older adults have not been reported. The purposes of our study were to (1) determine the interrater and test-retest reliability of the BESTest total and subsystem scores; and (2) estimate the MDC of the BESTest and its individual subsystem scores with community-dwelling older adults. We used a prospective cohort methodological design. Community-dwelling older adults (N = 70; aged 70-94 years; mean = 85.0 [5.5] years) were recruited from a senior independent living community. Trained testers (N = 3) administered the BESTest. All participants were tested with the BESTest by the same tester initially and then retested 7 to 14 days later. With 32 of the participants, a second tester concurrently scored the retest for interrater reliability. Testers were blinded to each other's scores. Intraclass correlation coefficients [ICC(2,1)] were used to determine the interrater and test-retest reliability. Test-retest reliability was also analyzed using method error and the associated coefficients of variation (CVME). MDC was calculated using standard error of measurement. Interrater reliability (N = 32) of the BESTest total score was ICC(2, 1) = 0.97 (95% confidence interval [CI], 0.94-0.99). The ICCs for the individual subsystem scores ranged from 0.85 to 0.94. Test-retest reliability (N = 70) of the BESTest total score was ICC(2,1) = 0.93 (95% CI, 0.89-0.96). ICCs for the
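
    The ICC(2,1) figures reported above correspond to the Shrout and Fleiss two-way random-effects, absolute-agreement, single-rater coefficient. A compact sketch of that computation from the mean squares is given below; the rating matrix is invented for illustration.

    # Sketch: ICC(2,1) from the Shrout & Fleiss mean-squares formula (invented data).
    import numpy as np

    def icc_2_1(x):
        """x: (n subjects) x (k raters) matrix of scores."""
        n, k = x.shape
        grand = x.mean()
        ms_r = k * np.sum((x.mean(axis=1) - grand) ** 2) / (n - 1)   # between subjects
        ms_c = n * np.sum((x.mean(axis=0) - grand) ** 2) / (k - 1)   # between raters
        sse = np.sum((x - x.mean(axis=1, keepdims=True)
                        - x.mean(axis=0, keepdims=True) + grand) ** 2)
        ms_e = sse / ((n - 1) * (k - 1))                              # residual
        return (ms_r - ms_e) / (ms_r + (k - 1) * ms_e + k * (ms_c - ms_e) / n)

    scores = np.array([[90, 88], [75, 78], [82, 80], [95, 96], [60, 63]], float)
    print(round(icc_2_1(scores), 3))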

  14. Reliability, Validity, and Minimal Detectable Change of Balance Evaluation Systems Test and Its Short Versions in Older Cancer Survivors: A Pilot Study.

    Science.gov (United States)

    Huang, Min H; Miller, Kara; Smith, Kristin; Fredrickson, Kayle; Shilling, Tracy

    2016-01-01

    Cancer is primarily a disease of older adults. About 77% of all cancers are diagnosed in persons aged 55 years and older. Cancer and its treatment can cause diverse sequelae impacting body systems underlying balance control. No study has examined the psychometric properties of balance assessment tools in older cancer survivors, presenting a significant challenge in the selection of outcome measures for clinicians treating this fast-growing population. This study aimed to determine the reliability, validity, and minimal detectable change (MDC) of the Balance Evaluation System Test (BESTest), Mini-Balance Evaluation Systems Test (Mini-BESTest), and Brief-Balance Evaluation Systems Test (Brief-BESTest) in community-dwelling older cancer survivors. This study was a cross-sectional design. Twenty breast and 8 prostate cancer survivors participated [age (SD) = 68.4 (8.13) years]. The BESTest and Activity-specific Balance Confidence (ABC) Scale were administered during the first session. Scores of Mini-BESTest and Brief-BESTest were extracted on the basis of the scores of BESTest. The BESTest was repeated within 1 to 2 weeks by the same rater to determine the test-retest reliability. For the analysis of the inter-rater reliability, 21 participants were randomly selected to be evaluated by 2 raters. A primary rater administered the test. The 2 raters independently and concurrently scored the performance of the participants. Each rater recorded the ratings separately on the scoring sheet. No discussion among the raters was allowed throughout the testing. Intraclass correlation coefficients (ICCs), standard error of measurement, minimal detectable change (MDC), and Bland-Altman plots were calculated. Concurrent validity of these balance tests with the ABC Scale was examined using the Spearman correlation. The BESTest, Mini-BESTest, and Brief-BESTest had high test-retest (ICC = 0.90-0.94) and interrater reliability (ICC = 0.86-0.96), small standard error of measurement (0

  15. Reliability, validity, and minimal detectable change of the push-off test scores in assessing upper extremity weight-bearing ability.

    Science.gov (United States)

    Mehta, Saurabh P; George, Hannah R; Goering, Christian A; Shafer, Danielle R; Koester, Alan; Novotny, Steven

    2017-11-01

    Clinical measurement study. The push-off test (POT) was recently conceived and found to be reliable and valid for assessing weight bearing through injured wrist or elbow. However, further research with larger sample can lend credence to the preliminary findings supporting the use of the POT. This study examined the interrater reliability, construct validity, and measurement error for the POT in patients with wrist conditions. Participants with musculoskeletal (MSK) wrist conditions were recruited. The performance on the POT, grip isometric strength of wrist extensors was assessed. The shortened version of the Disabilities of the Arm, Shoulder and Hand and numeric pain rating scale were completed. The intraclass correlation coefficient assessed interrater reliability of the POT. Pearson correlation coefficients (r) examined the concurrent relationships between the POT and other measures. The standard error of measurement and the minimal detectable change at 90% confidence interval were assessed as measurement error and index of true change for the POT. A total of 50 participants with different elbow or wrist conditions (age: 48.1 ± 16.6 years) were included in this study. The results of this study strongly supported the interrater reliability (intraclass correlation coefficient: 0.96 and 0.93 for the affected and unaffected sides, respectively) of the POT in patients with wrist MSK conditions. The POT showed convergent relationships with the grip strength on the injured side (r = 0.89) and the wrist extensor strength (r = 0.7). The POT showed smaller standard error of measurement (1.9 kg). The minimal detectable change at 90% confidence interval for the POT was 4.4 kg for the sample. This study provides additional evidence to support the reliability and validity of the POT. This is the first study that provides the values for the measurement error and true change on the POT scores in patients with wrist MSK conditions. Further research should examine the

  16. How reliable are the sup 14 C-urea breath test and specific serology for the detection of gastric campylobacter

    Energy Technology Data Exchange (ETDEWEB)

    Husebye, E; O' Leary, D; Skar, V; Melby, K [Ullevaal Sykehus, Oslo (Norway)

    1990-01-01

    Detection of gastric campylobacter by the {sup 14}C-urea breath test and serology were correlated to biopsy culture in 25 unselected outpatients referred for gastroscopy. All the 17 culture-positive patients had positive {sup 14}C-urea breath test, and 16 had positive serology. Of eight culture-negative patients, six patients had negative breath test and seven negative serology. A high degree of reproducibility was found when two subsequent breath tests were performed in 11 healthy volunteers. The breath test values obtained at 10 min showed a strong correlation to the accumulated values within 30 min. Breath sampling once, 10 min after intake of 2.5 {mu}Ci {sup 14}C-urea, seems sufficient for the detection of gastric campylobacter. The {sup 14}C-urea breath test correlates well with biopsy culture and provides a sensitive tool for the detection of gastric campylobacter. Serology also corresponds well with biopsy culture and should provide a useful tool for epidemiologic studies. 22 refs., 4 figs., 1 tab.

  17. Reliability versus reproducibility

    International Nuclear Information System (INIS)

    Lautzenheiser, C.E.

    1976-01-01

    Defect detection and reproducibility of results are two separate but closely related subjects. It is axiomatic that a defect must be detected from examination to examination or reproducibility of results is very poor. On the other hand, a defect can be detected on each of subsequent examinations for higher reliability and still have poor reproducibility of results

  18. Validity and reliability of methods for the detection of secondary caries around amalgam restorations in primary teeth

    Directory of Open Access Journals (Sweden)

    Mariana Minatel Braga

    2010-03-01

    Full Text Available Secondary caries has been reported as the main reason for restoration replacement. The aim of this in vitro study was to evaluate the performance of different methods - visual inspection, laser fluorescence (DIAGNOdent), radiography and tactile examination - for secondary caries detection in primary molars restored with amalgam. Fifty-four primary molars were photographed and 73 suspect sites adjacent to amalgam restorations were selected. Two examiners evaluated these sites independently using all methods. Agreement between examiners was assessed by the Kappa test. To validate the methods, a caries-detector dye was used after restoration removal. The best cut-off points for the sample were found by a Receiver Operator Characteristic (ROC) analysis, and the area under the ROC curve (Az), sensitivity, specificity and accuracy of the methods were calculated for the enamel (D2) and dentine (D3) thresholds. These parameters were found for each method and then compared by the McNemar test. The tactile examination and visual inspection presented the highest inter-examiner agreement for the D2 and D3 thresholds, respectively. The visual inspection also showed better performance than the other methods for both thresholds (Az = 0.861 and Az = 0.841, respectively). In conclusion, the visual inspection presented the best performance for detecting enamel and dentin secondary caries in primary teeth restored with amalgam.
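
    A small sketch of an ROC/Az calculation of the kind used to compare detection methods here, based on the rank (Mann-Whitney) formulation of the area under the curve. The ordinal scores and dye-validated labels are illustrative only.

    # Sketch: Az (ROC AUC) via the rank formulation; labels are 0/1 (invented data).
    import numpy as np

    def roc_auc(scores, labels):
        """Probability that a random carious site scores higher than a sound one."""
        scores, labels = np.asarray(scores, float), np.asarray(labels)
        pos, neg = scores[labels == 1], scores[labels == 0]
        greater = (pos[:, None] > neg[None, :]).sum()
        ties = (pos[:, None] == neg[None, :]).sum()
        return (greater + 0.5 * ties) / (len(pos) * len(neg))

    scores = [0, 1, 1, 2, 3, 3, 0, 2, 3, 1]   # e.g. ordinal visual-inspection scores
    labels = [0, 0, 1, 1, 1, 1, 0, 0, 1, 0]   # dye-confirmed caries yes/no
    print(round(roc_auc(scores, labels), 3))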

  19. Method to improve reliability of a fuel cell system using low performance cell detection at low power operation

    Science.gov (United States)

    Choi, Tayoung; Ganapathy, Sriram; Jung, Jaehak; Savage, David R.; Lakshmanan, Balasubramanian; Vecasey, Pamela M.

    2013-04-16

    A system and method for detecting a low performing cell in a fuel cell stack using measured cell voltages. The method includes determining that the fuel cell stack is running, the stack coolant temperature is above a certain temperature and the stack current density is within a relatively low power range. The method further includes calculating the average cell voltage, and determining whether the difference between the average cell voltage and the minimum cell voltage is greater than a predetermined threshold. If the difference between the average cell voltage and the minimum cell voltage is greater than the predetermined threshold and the minimum cell voltage is less than another predetermined threshold, then the method increments a low performing cell timer. A ratio of the low performing cell timer and a system run timer is calculated to identify a low performing cell.
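
    The record describes the detection logic in words; the sketch below restates it as code. The class and variable names, all threshold values, units and the update cadence are placeholders chosen for illustration, not values from the patent.

    # Hedged sketch of the described logic: under low-power, warmed-up operation,
    # accumulate time whenever the minimum cell voltage lags the average by more
    # than a threshold, and report the ratio of that timer to the run timer.
    class LowCellMonitor:
        LOW_POWER_RANGE = (0.05, 0.20)   # A/cm^2 (assumed)
        MIN_COOLANT_T = 60.0             # deg C (assumed)
        DELTA_V_THRESH = 0.15            # V (assumed)
        MIN_V_THRESH = 0.60              # V (assumed)

        def __init__(self):
            self.low_cell_timer = 0.0
            self.run_timer = 0.0

        def update(self, cell_voltages, coolant_temp, current_density, running, dt=1.0):
            """Process one sample of duration dt seconds; return the timer ratio."""
            if not running:
                return None
            self.run_timer += dt
            in_low_power = self.LOW_POWER_RANGE[0] <= current_density <= self.LOW_POWER_RANGE[1]
            if coolant_temp >= self.MIN_COOLANT_T and in_low_power:
                avg_v = sum(cell_voltages) / len(cell_voltages)
                min_v = min(cell_voltages)
                if (avg_v - min_v) > self.DELTA_V_THRESH and min_v < self.MIN_V_THRESH:
                    self.low_cell_timer += dt
            # ratio used to flag a persistently low-performing cell
            return self.low_cell_timer / self.run_timer

    monitor = LowCellMonitor()
    print(monitor.update([0.72, 0.71, 0.50, 0.73], coolant_temp=65.0, current_density=0.1, running=True))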

  20. Reliability of cortical lesion detection on double inversion recovery MRI applying the MAGNIMS-Criteria in multiple sclerosis patients within a 16-months period.

    Directory of Open Access Journals (Sweden)

    Tobias Djamsched Faizy

    Full Text Available In patients with multiple sclerosis (MS), Double Inversion Recovery (DIR) magnetic resonance imaging (MRI) can be used to identify cortical lesions (CL). We sought to evaluate the reliability of CL detection on DIR longitudinally at multiple subsequent time-points, applying the MAGNIMS scoring criteria for CLs. 26 MS patients received a 3T-MRI (Siemens, Skyra) with DIR at 12 time-points (TP) within a 16-month period. Scans were assessed in random order by two different raters. Both raters separately marked all CLs on each scan and total lesion numbers were obtained for each scan-TP and patient. After a retrospective re-evaluation, the number of consensus CLs (conL) was defined as the total number of CLs which both raters finally agreed on. CL volumes, relative signal intensities and CL localizations were determined. Both ratings (conL vs. non-consensus scoring) were compared for further analysis. A total number of n = 334 CLs were identified by both raters in 26 MS patients, with a first agreement of both raters on 160 out of 334 of the CLs found (κ = 0.48). After the retrospective re-evaluation, consensus agreement increased to 233 out of 334 CL (κ = 0.69). 93.8% of conL were visible in at least 2 consecutive TP. 74.7% of the conL were visible in all 12 consecutive TP. ConL had greater mean lesion volumes and higher mean signal intensities compared to lesions that were only detected by one of the raters (p<0.05). A higher number of CLs in the frontal, parietal, temporal and occipital lobe were identified by both raters than the number of those only identified by one of the raters (p<0.05). After a first assessment, slightly less than half of the CLs were considered reliably detectable on longitudinal DIR images. A retrospective re-evaluation notably increased the consensus agreement. However, this finding is limited by the fact that retrospective evaluation steps might not be practicable in clinical routine. Lesions that were not reliably

  1. Circuit design for reliability

    CERN Document Server

    Cao, Yu; Wirth, Gilson

    2015-01-01

    This book presents physical understanding, modeling and simulation, on-chip characterization, layout solutions, and design techniques that are effective to enhance the reliability of various circuit units.  The authors provide readers with techniques for state of the art and future technologies, ranging from technology modeling, fault detection and analysis, circuit hardening, and reliability management. Provides comprehensive review on various reliability mechanisms at sub-45nm nodes; Describes practical modeling and characterization techniques for reliability; Includes thorough presentation of robust design techniques for major VLSI design units; Promotes physical understanding with first-principle simulations.

  2. Reliability of sickness certificates in detecting potential sick leave reduction by modifying working conditions: a clinical epidemiology study

    Directory of Open Access Journals (Sweden)

    Johnsen Roar

    2004-03-01

    Full Text Available Abstract Background Medical sickness certificates are generally the main source of information when scrutinizing the need for aimed intervention strategies to avoid or reduce the individual and community side effects of sick leave. This study explored the value of medical sickness certificates, as used in daily work in Norwegian National Insurance Offices, for identifying sick-listed persons in whom modified working conditions might reduce the ongoing sick leave. Methods The potential for reducing the ongoing sick leave by modifying working conditions was individually assessed from routine sickness certificates in 999 consecutive sick leave episodes by four Norwegian National Insurance collaborators, two with and two without formal medical competence. The study took place in Northern Norway in 1997 and 1998. Agreement was analysed with differences against the mean, kappa, and proportional-agreement analysis within and between groups of assessors. Agreements between the assessors and the self-assessment of sick-listed subjects were additionally analysed in 159 sick-leave episodes. Results Both sick-listed subjects and National Insurance collaborators anticipated a potential reduction in sick leave in 20–30% of cases, and in another 20% the potential was assessed as possible. The chance-corrected agreements (κ), however, were poor. Conclusion Information in medical sickness certificates proved ineffective in detecting cases where modified working conditions may reduce sick leave, and focusing on medical certificates may prevent identification of needed interventions. Strategies based on communicating directly with sick-listed subjects would enable social authorities to exploit more of the sick leave reduction potential from modified working conditions than strategies aimed at improving medical information.

  3. Software reliability

    CERN Document Server

    Bendell, A

    1986-01-01

    Software Reliability reviews some fundamental issues of software reliability as well as the techniques, models, and metrics used to predict the reliability of software. Topics covered include fault avoidance, fault removal, and fault tolerance, along with statistical methods for the objective assessment of predictive accuracy. Development cost models and life-cycle cost models are also discussed. This book is divided into eight sections and begins with a chapter on adaptive modeling used to predict software reliability, followed by a discussion on failure rate in software reliability growth models

  4. Complete validation of a unique digestion assay to detect Trichinella larvae in horse meat demonstrates the reliability of this assay for meeting food safety and trade requirements.

    Science.gov (United States)

    Forbes, L B; Hill, D E; Parker, S; Tessaro, S V; Gamble, H R; Gajadhar, A A

    2008-03-01

    A tissue digestion assay using a double separatory funnel procedure for the detection of Trichinella larvae in horse meat was validated for application in food safety programs and trade. The assay consisted of a pepsin-HCl digestion step to release larvae from muscle tissue and two sequential sedimentation steps in separatory funnels to recover and concentrate larvae for detection with a stereomicroscope. With defined critical control points, the assay was conducted within a quality assurance system compliant with International Organization for Standardization-International Electrotechnical Commission (ISO/IEC) 17025 guidelines. Samples used in the validation were obtained from horses experimentally infected with Trichinella spiralis to obtain a range of muscle larvae densities. One-, 5-, and 10-g samples of infected tissue were combined with 99, 95, and 90 g, respectively, of known negative horse tissue to create a 100-g sample for testing. Samples of 5 and 10 g were more likely to be positive than were 1-g samples when larval densities were less than three larvae per gram (lpg). This difference is important because ingested meat with 1 lpg is considered the threshold for clinical disease in humans. Using a 5-g sample size, all samples containing 1.3 to 2 lpg were detected, and 60 to 100% of samples with infected horse meat containing 0.1 to 0.7 lpg were detected. In this study, the double separatory funnel digestion assay was efficient and reliable for its intended use in food safety and trade. This procedure is the only digestion assay for Trichinella in horse meat that has been validated as consistent and effective at critical levels of sensitivity.

  5. Assessing the Accuracy and Reliability of Root Crack and Fracture Detection in Teeth Using Sweep Imaging with Fourier Transform (SWIFT) Magnetic Resonance Imaging (MRI)

    Science.gov (United States)

    Schuurmans, Tyler J.

    Introduction: Magnetic Resonance Imaging (MRI) has the potential to aid in determining the presence and extent of cracks/fractures in teeth due to more advantageous contrast, without ionizing radiation. An MRI technique called Sweep Imaging with Fourier Transform (SWIFT) has overcome many of the inherent difficulties of conventional MRI with detecting fast-relaxing signals from densely mineralized dental tissues. The objectives of this in vitro investigation were to develop MRI criteria for root crack/fracture identification in teeth and to establish intra- and inter-rater reliabilities and corresponding sensitivity and specificity values for the detection of tooth-root cracks/fractures in SWIFT MRI and limited field of view (FOV) CBCT. Materials and Methods: MRI-based criteria for crack/fracture appearance were developed by an MRI physicist and 6 dentists, including 3 endodontists and 1 Oral and Maxillofacial (OMF) radiologist. Twenty-nine human adult teeth previously extracted following clinical diagnosis by a board-certified endodontist of a root crack/fracture were frequency-matched to 29 non-cracked controls. Crack/fracture status confirmation was performed with magnified visual inspection, transillumination and vital staining. Samples were scanned with two 3D imaging modalities: 1) SWIFT MRI (10 teeth/scan) via a custom oral radiofrequency (RF) coil and a 90cm, 4-T magnet; 2) Limited FOV CBCT (1 tooth/scan) via a Carestream (CS) 9000 (Rochester, NY). Following a training period, a blinded 4-member panel (3 endodontists, 1 OMF radiologist) evaluated the images, with a proportion randomly re-tested to establish intra-rater reliability. Overall observer agreement was measured using Cohen's kappa and levels of agreement judged using the criteria of Landis and Koch. Sensitivity and specificity were computed with 95% confidence interval (CI); statistical significance was set at alpha ≤ 0.05. Results: MRI-based crack/fracture criteria were defined as 1-2 sharply

  6. A construction of standardized near infrared hyper-spectral teeth database: a first step in the development of reliable diagnostic tool for quantification and early detection of caries

    Science.gov (United States)

    Bürmen, Miran; Usenik, Peter; Fidler, Aleš; Pernuš, Franjo; Likar, Boštjan

    2011-03-01

    Dental caries is a disease characterized by demineralization of enamel crystals leading to the penetration of bacteria into the dentin and pulp. If left untreated, the disease can lead to pain, infection and tooth loss. Early detection of enamel demineralization resulting in increased enamel porosity, commonly known as white spots, is a difficult diagnostic task. Several papers reported on near infrared (NIR) spectroscopy to be a potentially useful noninvasive spectroscopic technique for early detection of caries lesions. However, the conducted studies were mostly qualitative and did not include the critical assessment of the spectral variability of the sound and carious dental tissues and influence of the water content. Such assessment is essential for development and validation of reliable qualitative and especially quantitative diagnostic tools based on NIR spectroscopy. In order to characterize the described spectral variability, a standardized diffuse reflectance hyper-spectral database was constructed by imaging 12 extracted human teeth with natural lesions of various degrees in the spectral range from 900 to 1700 nm with spectral resolution of 10 nm. Additionally, all the teeth were imaged by digital color camera. The influence of water content on the acquired spectra was characterized by monitoring the teeth during the drying process. The images were assessed by an expert, thereby obtaining the gold standard. By analyzing the acquired spectra we were able to accurately model the spectral variability of the sound dental tissues and identify the advantages and limitations of NIR hyper-spectral imaging.

  7. A New Method to Detect and Correct the Critical Errors and Determine the Software-Reliability in Critical Software-System

    International Nuclear Information System (INIS)

    Krini, Ossmane; Börcsök, Josef

    2012-01-01

    In order to use electronic systems comprising software and hardware components in safety-related and highly safety-related applications, it is necessary to meet the marginal risk numbers required by standards and legislative provisions. Existing processes and mathematical models are used to verify the risk numbers. On the hardware side, various accepted mathematical models, processes, and methods exist to provide the required proof. To this day, however, there are no closed models or mathematical procedures known that allow for a dependable prediction of software reliability. This work presents a method that provides a prognosis of the number of residual critical errors in software. Conventional models lack this ability; at present, there are no methods that forecast critical errors. The new method shows that an estimate of the number of residual critical errors in software systems is possible by combining prediction models, the ratio of critical errors, and the total error number. Subsequently, the expected-value function for critical errors at any point in time can be derived from the new solution method, provided the detection rate has been calculated using an appropriate estimation method. The presented method also makes it possible to estimate the critical failure rate. The approach is modelled on a real process and therefore describes the two essential processes of detection and correction.
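
    The abstract describes the approach only at a high level: a prediction model combined with the ratio of critical errors and the total error count. The sketch below is one possible reading that substitutes a standard Goel-Okumoto growth model for the unspecified prediction model; it is not the paper's actual method, and the error counts and critical ratio are invented.

    # Hedged sketch: estimate total latent errors with a Goel-Okumoto fit (assumed
    # stand-in model), then scale the residual by an observed critical-error ratio.
    import numpy as np
    from scipy.optimize import curve_fit

    def goel_okumoto(t, a, b):
        # expected cumulative number of detected errors by time t
        return a * (1.0 - np.exp(-b * t))

    weeks = np.array([1, 2, 3, 4, 5, 6, 7, 8], float)
    cum_errors = np.array([12, 21, 28, 33, 37, 40, 42, 43], float)  # illustrative
    critical_ratio = 0.10                                            # illustrative

    (a, b), _ = curve_fit(goel_okumoto, weeks, cum_errors, p0=[50.0, 0.3])
    residual_total = a - cum_errors[-1]
    residual_critical = critical_ratio * residual_total
    print(round(residual_total, 1), round(residual_critical, 2))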

  8. A knowledge-based operator advisor system for integration of fault detection, control, and diagnosis to enhance the safe and reliable operation of nuclear power plants

    International Nuclear Information System (INIS)

    Bhatnagar, R.

    1989-01-01

    A Knowledge-Based Operator Advisor System has been developed for enhancing the complex task of maintaining safe and reliable operation of nuclear power plants. The operator's activities have been organized into the four tasks of data interpretation for abstracting high level information from sensor data, plant state monitoring for identification of faults, plan execution for controlling the faults, and diagnosis for determination of root causes of faults. The Operator Advisor System is capable of identifying the abnormal functioning of the plant in terms of: (1) deviations from normality, (2) pre-enumerated abnormal events, and (3) safety threats. The classification of abnormal functioning into the three categories of deviations from normality, abnormal events, and safety threats allows the detection of faults at three levels: (1) developing faults, (2) developed faults, and (3) safety threatening faults. After the identification of abnormal functioning, the system will identify the procedures to be executed to mitigate the consequences of abnormal functioning and will help the operator by displaying the procedure steps and monitoring the success of actions taken. The system is also capable of diagnosing the root causes of abnormal functioning. The identification and diagnosis of root causes of abnormal functioning are done in parallel to the task of procedure execution, allowing the detection of more critical safety threats while executing procedures to control abnormal events

  9. FMR1 CGG repeat expansion mutation detection and linked haplotype analysis for reliable and accurate preimplantation genetic diagnosis of fragile X syndrome.

    Science.gov (United States)

    Rajan-Babu, Indhu-Shree; Lian, Mulias; Cheah, Felicia S H; Chen, Min; Tan, Arnold S C; Prasath, Ethiraj B; Loh, Seong Feei; Chong, Samuel S

    2017-07-19

    Fragile X mental retardation 1 (FMR1) full-mutation expansion causes fragile X syndrome. Trans-generational fragile X syndrome transmission can be avoided by preimplantation genetic diagnosis (PGD). We describe a robust PGD strategy that can be applied to virtually any couple at risk of transmitting fragile X syndrome. This novel strategy utilises whole-genome amplification, followed by triplet-primed polymerase chain reaction (TP-PCR) for robust detection of expanded FMR1 alleles, in parallel with linked multi-marker haplotype analysis of 13 highly polymorphic microsatellite markers located within 1 Mb of the FMR1 CGG repeat, and the AMELX/Y dimorphism for gender identification. The assay was optimised and validated on single lymphoblasts isolated from fragile X reference cell lines, and applied to a simulated PGD case and a clinical in vitro fertilisation (IVF)-PGD case. In the simulated PGD case, definitive diagnosis of the expected results was achieved for all 'embryos'. In the clinical IVF-PGD case, delivery of a healthy baby girl was achieved after transfer of an expansion-negative blastocyst. FMR1 TP-PCR reliably detects presence of expansion mutations and obviates reliance on informative normal alleles for determining expansion status in female embryos. Together with multi-marker haplotyping and gender determination, misdiagnosis and diagnostic ambiguity due to allele dropout is minimised, and couple-specific assay customisation can be avoided.

  10. Human reliability

    International Nuclear Information System (INIS)

    Embrey, D.E.

    1987-01-01

    Concepts and techniques of human reliability have been developed and are used mostly in probabilistic risk assessment. For this, the major application of human reliability assessment has been to identify the human errors which have a significant effect on the overall safety of the system and to quantify the probability of their occurrence. Some of the major issues within human reliability studies are reviewed and it is shown how these are applied to the assessment of human failures in systems. This is done under the following headings: models of human performance used in human reliability assessment, the nature of human error, classification of errors in man-machine systems, practical aspects, human reliability modelling in complex situations, quantification and examination of human reliability, judgement based approaches, holistic techniques and decision analytic approaches. (UK)

  11. Reliability Calculations

    DEFF Research Database (Denmark)

    Petersen, Kurt Erling

    1986-01-01

    Risk and reliability analysis is increasingly being used in evaluations of plant safety and plant reliability. The analysis can be performed either during the design process or during the operation time, with the purpose to improve the safety or the reliability. Due to plant complexity and safety and availability requirements, sophisticated tools, which are flexible and efficient, are needed. Such tools have been developed in the last 20 years and they have to be continuously refined to meet the growing requirements. Two different areas of application were analysed. In structural reliability probabilistic approaches have been introduced in some cases for the calculation of the reliability of structures or components. A new computer program has been developed based upon numerical integration in several variables. In systems reliability Monte Carlo simulation programs are used especially in analysis of very complex systems.

  12. Fast Metabolite Identification in Nuclear Magnetic Resonance Metabolomic Studies: Statistical Peak Sorting and Peak Overlap Detection for More Reliable Database Queries.

    Science.gov (United States)

    Hoijemberg, Pablo A; Pelczer, István

    2018-01-05

    A lot of time is spent by researchers in the identification of metabolites in NMR-based metabolomic studies. The usual metabolite identification starts employing public or commercial databases to match chemical shifts thought to belong to a given compound. Statistical total correlation spectroscopy (STOCSY), in use for more than a decade, speeds the process by finding statistical correlations among peaks, being able to create a better peak list as input for the database query. However, the (normally not automated) analysis becomes challenging due to the intrinsic issue of peak overlap, where correlations of more than one compound appear in the STOCSY trace. Here we present a fully automated methodology that analyzes all STOCSY traces at once (every peak is chosen as driver peak) and overcomes the peak overlap obstacle. Peak overlap detection by clustering analysis and sorting of traces (POD-CAST) first creates an overlap matrix from the STOCSY traces, then clusters the overlap traces based on their similarity and finally calculates a cumulative overlap index (COI) to account for both strong and intermediate correlations. This information is gathered in one plot to help the user identify the groups of peaks that would belong to a single molecule and perform a more reliable database query. The simultaneous examination of all traces reduces the time of analysis, compared to viewing STOCSY traces by pairs or small groups, and condenses the redundant information in the 2D STOCSY matrix into bands containing similar traces. The COI helps in the detection of overlapping peaks, which can be added to the peak list from another cross-correlated band. POD-CAST overcomes the generally overlooked and underestimated presence of overlapping peaks and it detects them to include them in the search of all compounds contributing to the peak overlap, enabling the user to accelerate the metabolite identification process with more successful database queries and searching all tentative
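
    A compact sketch of the two core ingredients described for POD-CAST, a peak-peak correlation matrix (STOCSY-style, every peak taken as driver) followed by clustering of the resulting traces. The intensity matrix is random stand-in data, and the cumulative overlap index itself is not reproduced here.

    # Sketch: STOCSY-style correlation matrix and clustering of traces (stand-in data).
    import numpy as np
    from scipy.cluster.hierarchy import linkage, fcluster
    from scipy.spatial.distance import pdist

    rng = np.random.default_rng(0)
    samples, peaks = 40, 12
    intensities = rng.normal(size=(samples, peaks))
    intensities[:, 3] = 0.9 * intensities[:, 0] + rng.normal(0, 0.2, samples)  # correlated peak pair

    # Each row of corr is the STOCSY trace obtained when that peak is the driver.
    corr = np.corrcoef(intensities, rowvar=False)

    # Cluster the traces by similarity (correlation distance between traces).
    Z = linkage(pdist(corr, metric="correlation"), method="average")
    labels = fcluster(Z, t=0.5, criterion="distance")
    print(labels)  # peaks sharing a label form a candidate group for one compound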

  13. Reliability Engineering

    CERN Document Server

    Lazzaroni, Massimo

    2012-01-01

    This book gives a practical guide for designers and users in the Information and Communication Technology context. In particular, in the first Section, the definitions of the fundamental terms according to the international standards are given. Then, some theoretical concepts and reliability models are presented in Chapters 2 and 3: the aim is to evaluate performance for components and systems and reliability growth. Chapter 4 introduces laboratory tests, highlighting the reliability concept from the experimental point of view. In the ICT context, the failure rate for a given system can be

  14. Reliability training

    Science.gov (United States)

    Lalli, Vincent R. (Editor); Malec, Henry A. (Editor); Dillard, Richard B.; Wong, Kam L.; Barber, Frank J.; Barina, Frank J.

    1992-01-01

    Discussed here is failure physics, the study of how products, hardware, software, and systems fail and what can be done about it. The intent is to impart useful information, to extend the limits of production capability, and to assist in achieving low cost reliable products. A review of reliability for the years 1940 to 2000 is given. Next, a review of mathematics is given as well as a description of what elements contribute to product failures. Basic reliability theory and the disciplines that allow us to control and eliminate failures are elucidated.

  15. [Reliability for detection of developmental problems using the semaphore from the Child Development Evaluation test: Is a yellow result different from a red result?].

    Science.gov (United States)

    Rizzoli-Córdoba, Antonio; Ortega-Ríosvelasco, Fernando; Villasís-Keever, Miguel Ángel; Pizarro-Castellanos, Mariel; Buenrostro-Márquez, Guillermo; Aceves-Villagrán, Daniel; O'Shea-Cuevas, Gabriel; Muñoz-Hernández, Onofre

    The Child Development Evaluation (CDE) is a screening tool designed and validated in Mexico for detecting developmental problems. The result is expressed through a semaphore. In the CDE test, both yellow and red results are considered positive, although a different intervention is proposed for each. The aim of this work was to evaluate the reliability of the CDE test to discriminate between children with a yellow or a red result based on the developmental domain quotient (DDQ) obtained through the Battelle Development Inventory, 2nd edition (in Spanish) (BDI-2). The data for this study were obtained from the CDE validation study. Children with a normal (green) result in the CDE were excluded. Two different cut-off points of the DDQ (BDI-2) were used (social: 20.1% vs. 28.9%; adaptive: 6.9% vs. 20.4%). The yellow/red semaphore result allows identifying different magnitudes of delay in developmental domains or subdomains, supporting the recommendation of a different intervention for each one. Copyright © 2014 Hospital Infantil de México Federico Gómez. Publicado por Masson Doyma México S.A. All rights reserved.

  16. Reliability calculations

    International Nuclear Information System (INIS)

    Petersen, K.E.

    1986-03-01

    Risk and reliability analysis is increasingly being used in evaluations of plant safety and plant reliability. The analysis can be performed either during the design process or during the operation time, with the purpose to improve the safety or the reliability. Due to plant complexity and safety and availability requirements, sophisticated tools, which are flexible and efficient, are needed. Such tools have been developed in the last 20 years and they have to be continuously refined to meet the growing requirements. Two different areas of application were analysed. In structural reliability probabilistic approaches have been introduced in some cases for the calculation of the reliability of structures or components. A new computer program has been developed based upon numerical integration in several variables. In systems reliability Monte Carlo simulation programs are used especially in analysis of very complex systems. In order to increase the applicability of the programs variance reduction techniques can be applied to speed up the calculation process. Variance reduction techniques have been studied and procedures for implementation of importance sampling are suggested. (author)
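
    As a sketch of the variance-reduction idea mentioned in the abstract, the example below (a hypothetical four-component series/parallel system with assumed failure probabilities, not the program developed in the report) contrasts crude Monte Carlo with importance sampling, in which component failures are drawn from an inflated probability and re-weighted by the likelihood ratio:

        import numpy as np

        rng = np.random.default_rng(0)
        p_fail = np.array([1e-3, 1e-3, 5e-4, 5e-4])   # assumed component failure probabilities

        def system_fails(x):
            # components 0,1 form a parallel pair in series with the parallel pair 2,3
            return (x[0] & x[1]) | (x[2] & x[3])

        def crude_mc(n):
            return sum(system_fails(rng.random(4) < p_fail) for _ in range(n)) / n

        def importance_mc(n, q_fail=0.1):
            total = 0.0
            for _ in range(n):
                x = rng.random(4) < q_fail                 # sample from inflated failure probability
                w = np.prod(np.where(x, p_fail / q_fail,   # likelihood-ratio weight
                                     (1 - p_fail) / (1 - q_fail)))
                total += system_fails(x) * w
            return total / n

        # crude MC usually sees no failures at this sample size; importance
        # sampling gives a usable estimate of the ~1.25e-6 failure probability
        print(crude_mc(100_000), importance_mc(100_000))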

  17. Systems reliability/structural reliability

    International Nuclear Information System (INIS)

    Green, A.E.

    1980-01-01

    The question of reliability technology using quantified techniques is considered for systems and structures. Systems reliability analysis has progressed to a viable and proven methodology whereas this has yet to be fully achieved for large scale structures. Structural loading variants over the half-time of the plant are considered to be more difficult to analyse than for systems, even though a relatively crude model may be a necessary starting point. Various reliability characteristics and environmental conditions are considered which enter this problem. The rare event situation is briefly mentioned together with aspects of proof testing and normal and upset loading conditions. (orig.)

  18. The sensitivity, specificity and reliability of the GALS (gait, arms, legs and spine) examination when used by physiotherapists and physiotherapy students to detect rheumatoid arthritis.

    Science.gov (United States)

    Beattie, Karen A; Macintyre, Norma J; Pierobon, Jessica; Coombs, Jennifer; Horobetz, Diana; Petric, Alexis; Pimm, Mara; Kean, Walter; Larché, Maggie J; Cividino, Alfred

    2011-09-01

    To evaluate the sensitivity, specificity and reliability of the gait, arms, legs and spine (GALS) examination to detect signs and symptoms of rheumatoid arthritis when used by physiotherapy students and physiotherapists. Two physiotherapy students and two physiotherapists were trained to perform the GALS examination by viewing an instructional DVD and attending a workshop. Two rheumatologists familiar with the GALS examination also participated in the workshop. All healthcare professionals performed the GALS examination on 25 participants with rheumatoid arthritis recruited through a rheumatology practice and 23 participants without any arthritides recruited from a primary care centre. Each participant was assessed by one rheumatologist, one physiotherapist and one physiotherapy student. Abnormalities of gait, arms, legs and spine, including their location and description, were recorded, along with whether or not a diagnosis of rheumatoid arthritis was suspected. Healthcare professionals understood the study's objective to be their agreement on GALS findings and were unaware that half of the participants had rheumatoid arthritis. Sensitivity, specificity and likelihood ratios were calculated to determine the ability of the GALS examination to screen for rheumatoid arthritis. Using rheumatologists' findings on the study day as the standard for comparison, sensitivity and specificity were 71 to 86% and 69 to 93%, respectively. Positive likelihood ratios ranged from 2.74 to 10.18, while negative likelihood ratios ranged from 0.21 to 0.38. The GALS examination may be a useful tool for physiotherapists to rule out rheumatoid arthritis in a direct access setting. Differences in duration and type of experience of each healthcare professional may contribute to the variation in results. The merits of introducing the GALS examination into physiotherapy curricula and practice should be explored. Copyright © 2010 Chartered Society of Physiotherapy. Published by Elsevier Ltd

  19. Human reliability

    International Nuclear Information System (INIS)

    Bubb, H.

    1992-01-01

    This book resulted from the activity of Task Force 4.2 - 'Human Reliability'. This group was established on February 27th, 1986, at the plenary meeting of the Technical Reliability Committee of VDI, within the framework of the joint committee of VDI on industrial systems technology - GIS. It is composed of representatives of industry, representatives of research institutes, of technical control boards and universities, whose job it is to study how man fits into the technical side of the world of work and to optimize this interaction. In a total of 17 sessions, information from the part of ergonomy dealing with human reliability in using technical systems at work was exchanged, and different methods for its evaluation were examined and analyzed. The outcome of this work was systematized and compiled in this book. (orig.) [de

  20. Microelectronics Reliability

    Science.gov (United States)

    2017-01-17

    Figure captions recovered from the report: inverters connected in a chain; a typical graph showing frequency versus square root of ... . The report describes developing an experimental reliability-estimating methodology that could illuminate the lifetime reliability of advanced devices and circuits; an accurate estimate of the device lifetime, and thus of the failure rate (FIT) of the device, was obtained.

  1. The objective of this program is to develop innovative DNA detection technologies to achieve fast microbial community assessment. The specific approaches are (1) to develop inexpensive and reliable sequence-proof hybridization DNA detection technology; (2) to develop quantitative DNA hybridization technology for microbial community assessment; and (3) to study microbes that have demonstrated potential for nuclear waste bioremediation

    International Nuclear Information System (INIS)

    Chen, Chung H.

    2004-01-01

    The objective of this program is to develop innovative DNA detection technologies to achieve fast microbial community assessment. The specific approaches are (1) to develop inexpensive and reliable sequence-proof hybridization DNA detection technology; (2) to develop quantitative DNA hybridization technology for microbial community assessment; and (3) to study microbes that have demonstrated potential for nuclear waste bioremediation.

  2. Redefining reliability

    International Nuclear Information System (INIS)

    Paulson, S.L.

    1995-01-01

    Want to buy some reliability? The question would have been unthinkable in some markets served by the natural gas business even a few years ago, but in the new gas marketplace, industrial, commercial and even some residential customers have the opportunity to choose from among an array of options about the kind of natural gas service they need--and are willing to pay for. The complexities of this brave new world of restructuring and competition have sent the industry scrambling to find ways to educate and inform its customers about the increased responsibility they will have in determining the level of gas reliability they choose. This article discusses the new options and the new responsibilities of customers, the need for continuous education, and MidAmerican Energy Company's experiment in direct marketing of natural gas.

  3. Enhanced reliability and accuracy for field deployable bioforensic detection and discrimination of Xylella fastidiosa subsp. pauca, causal agent of citrus variegated chlorosis using razor ex technology and TaqMan quantitative PCR.

    Science.gov (United States)

    Ouyang, Ping; Arif, Mohammad; Fletcher, Jacqueline; Melcher, Ulrich; Ochoa Corona, Francisco Manuel

    2013-01-01

    A reliable, accurate and rapid multigene-based assay combining real time quantitative PCR (qPCR) and a Razor Ex BioDetection System (Razor Ex) was validated for detection of Xylella fastidiosa subsp. pauca (Xfp, a xylem-limited bacterium that causes citrus variegated chlorosis [CVC]). CVC, which is exotic to the United States, has spread through South and Central America and could significantly impact U.S. citrus if it arrives. A method for early, accurate and sensitive detection of Xfp in plant tissues is needed by plant health officials for inspection of products from quarantined locations, and by extension specialists for detection, identification and management of disease outbreaks and reservoir hosts. Two sets of specific PCR primers and probes, targeting Xfp genes for fimbrillin and the periplasmic iron-binding protein were designed. A third pair of primers targeting the conserved cobalamin synthesis protein gene was designed to detect all possible X. fastidiosa (Xf) strains. All three primer sets detected as little as 1 fg of plasmid DNA carrying X. fastidiosa target sequences and genomic DNA of Xfp at as little as 1 - 10 fg. The use of Razor Ex facilitates a rapid (about 30 min) in-field assay capability for detection of all Xf strains, and for specific detection of Xfp. Combined use of three primer sets targeting different genes increased the assay accuracy and broadened the range of detection. To our knowledge, this is the first report of a field-deployable rapid and reliable bioforensic detection and discrimination method for a bacterial phytopathogen based on multigene targets.

  4. Issues in cognitive reliability

    International Nuclear Information System (INIS)

    Woods, D.D.; Hitchler, M.J.; Rumancik, J.A.

    1984-01-01

    This chapter examines some problems in current methods to assess reactor operator reliability at cognitive tasks and discusses new approaches to solve these problems. The two types of human failures are errors in the execution of an intention and errors in the formation/selection of an intention. Topics considered include the types of description, error correction, cognitive performance and response time, the speed-accuracy tradeoff function, function-based task analysis, and cognitive task analysis. One problem of human reliability analysis (HRA) techniques in general is the question of what the units of behavior are whose reliability is to be determined. A second problem for HRA is that people often detect and correct their errors. The use of function-based analysis, which maps the problem space for plant control, is recommended.

  5. Validity and reliability of 3D US for the detection of erosions in patients with rheumatoid arthritis using MRI as the gold standard

    DEFF Research Database (Denmark)

    Ellegaard, K; Bliddal, H; Møller Døhn, U

    2014-01-01

    PURPOSE: To test the reliability and validity of a 3D US erosion score in RA using MRI as the gold standard. MATERIALS AND METHODS: RA patients were examined with 3D US and 3 T MRI over the 2nd and 3rd metacarpophalangeal joints. 3D blocks were evaluated by two investigators. The erosions were estimated according to a semi-quantitative score (SQS) (0 - 3) and a quantitative score (QS) (mm²). MRI was evaluated according to the RAMRIS score. For the estimation of reliability, intra-class correlation coefficients (ICC) were used. Validity was tested using Spearman's rho (rs). The sensitivity and specificity were also calculated. RESULTS: 28 patients with RA were included. The ICC for the inter-observer reliability in the QS was 0.41 and 0.13 for the metacarpal bone and phalangeal bone, respectively, and 0.86 and 0.16, respectively, in the SQS. The ICC for the intra-observer reliability in the QS...

  6. An Introduction To Reliability

    International Nuclear Information System (INIS)

    Park, Kyoung Su

    1993-08-01

    This book introduces reliability, covering the definition of and requirements for reliability, the system life cycle and reliability, and reliability and failure rates (reliability characteristics, chance failures, time-varying failure rates, failure modes, and replacement). It also addresses reliability in engineering design, reliability testing under failure-rate assumptions, plotting of reliability data, prediction of system reliability, system maintenance, and failure analysis, including failure relays and the analysis of system safety.

  7. The influence of different error estimates in the detection of postoperative cognitive dysfunction using reliable change indices with correction for practice effects.

    Science.gov (United States)

    Lewis, Matthew S; Maruff, Paul; Silbert, Brendan S; Evered, Lis A; Scott, David A

    2007-02-01

    The reliable change index (RCI) expresses change relative to its associated error, and is useful in the identification of postoperative cognitive dysfunction (POCD). This paper examines four common RCIs that each account for error in different ways. Three rules incorporate a constant correction for practice effects and are contrasted with the standard RCI that had no correction for practice. These rules are applied to 160 patients undergoing coronary artery bypass graft (CABG) surgery who completed neuropsychological assessments preoperatively and 1 week postoperatively using error and reliability data from a comparable healthy nonsurgical control group. The rules all identify POCD in a similar proportion of patients, but the use of the within-subject standard deviation (WSD), expressing the effects of random error, as an error estimate is a theoretically appropriate denominator when a constant error correction, removing the effects of systematic error, is deducted from the numerator in a RCI.
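
    For reference, one commonly used form of a practice-corrected RCI (the specific error term SE in the denominator is what distinguishes the four rules compared in the paper, e.g. the within-subject standard deviation estimated from the control group) can be written as

        \mathrm{RCI} = \frac{(x_{\mathrm{post}} - x_{\mathrm{pre}}) - (\bar{c}_{\mathrm{post}} - \bar{c}_{\mathrm{pre}})}{SE}

    where x_pre and x_post are the patient's scores, the control-group mean change supplies the constant practice correction, and |RCI| at or above 1.96 is typically taken to indicate reliable change at the 95% level.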

  8. A novel molecular diagnostic tool for improved sensitivity and reliability in the detection of “Candidatus Liberibacter asiaticus”, the bacterium associated with huanglongbing (HLB).

    Science.gov (United States)

    Sensitive and accurate detection is a prerequisite for efficient management and regulatory responses to prevent the introduction and spread of HLB-associated “Candidatus Liberibacter species to unaffected areas. To improve the current detection limit of HLB-associated “Ca. Liberibacter” spp, we deve...

  9. Computed tomography for the detection of distal radioulnar joint instability: normal variation and reliability of four CT scoring systems in 46 patients

    Energy Technology Data Exchange (ETDEWEB)

    Wijffels, Mathieu; Krijnen, Pieta; Schipper, Inger [Leiden University Medical Center, Department of Surgery-Trauma Surgery, P.O. Box 9600, Leiden (Netherlands); Stomp, Wouter; Reijnierse, Monique [Leiden University Medical Center, Department of Radiology, P.O. Box 9600, Leiden (Netherlands)

    2016-11-15

    The diagnosis of distal radioulnar joint (DRUJ) instability is clinically challenging. Computed tomography (CT) may aid in the diagnosis, but the reliability and normal variation for DRUJ translation on CT have not been established in detail. The aim of this study was to evaluate inter- and intraobserver agreement and normal ranges of CT scoring methods for determination of DRUJ translation in both posttraumatic and uninjured wrists. Patients with a conservatively treated, unilateral distal radius fracture were included. CT scans of both wrists were evaluated independently, by two readers using the radioulnar line method, subluxation ratio method, epicenter method and radioulnar ratio method. The inter- and intraobserver agreement was assessed and normal values were determined based on the uninjured wrists. Ninety-two wrist CTs (mean age: 56.5 years, SD: 17.0, mean follow-up 4.2 years, SD: 0.5) were evaluated. Interobserver agreement was best for the epicenter method [ICC = 0.73, 95 % confidence interval (CI) 0.65-0.79]. Intraobserver agreement was almost perfect for the radioulnar line method (ICC = 0.82, 95 % CI 0.77-0.87). Each method showed a wide normal range for normal DRUJ translation. Normal range for the epicenter method is -0.35 to -0.06 in pronation and -0.11 to 0.19 in supination. DRUJ translation on CT in pro- and supination can be reliably evaluated in both normal and posttraumatic wrists, however with large normal variation. The epicenter method seems the most reliable. Scanning of both wrists might be helpful to prevent the radiological overdiagnosis of instability. (orig.)

  10. Specifically colorimetric recognition of calcium, strontium, and barium ions using 2-mercaptosuccinic acid-functionalized gold nanoparticles and its use in reliable detection of calcium ion in water.

    Science.gov (United States)

    Zhang, Jia; Wang, Yong; Xu, Xiaowen; Yang, Xiurong

    2011-10-07

    A colorimetric probe based on 2-mercaptosuccinic acid-functionalized gold nanoparticles has been developed to exhibit selectivity towards Ca(2+), Sr(2+), and Ba(2+) ions over other metallic cations under specified conditions and finds its practical application in detecting Ca(2+) levels in water.

  11. Theoretical and experimental work on steam generator integrity and reliability with particular reference to leak development and detection. United Kingdom status report. October 1983

    International Nuclear Information System (INIS)

    Smedley, J.A.; Edge, D.M.

    1984-01-01

    This paper reviews the experimental and theoretical work in the UK on the characteristics of sodium-water reactions and describes work on the development of leak detection systems. A review of the operating experience with the PFR steam generators and the protection philosophy used on PFR is also given and the design studies for the Commercial Demonstration Fast Reactor (CDFR) are described

  12. Reliability of nucleic acid amplification methods for detection of Chlamydia trachomatis in urine: results of the first international collaborative quality control study among 96 laboratories

    NARCIS (Netherlands)

    R.P.A.J. Verkooyen (Roel); G.T. Noordhoek; P.E. Klapper; J. Reid; J. Schirm; G.M. Cleator; M. Ieven; G. Hoddevik

    2003-01-01

    textabstractThe first European Quality Control Concerted Action study was organized to assess the ability of laboratories to detect Chlamydia trachomatis in a panel of urine samples by nucleic acid amplification tests (NATs). The panel consisted of lyophilized urine samples,

  13. Can non-destructive inspection be reliable

    International Nuclear Information System (INIS)

    Silk, M.G.; Stoneham, A.M.; Temple, J.A.G.

    1988-01-01

    The paper on inspection is based on the book "The reliability of non-destructive inspection: assessing the assessment of structures under stress" by the present authors (published by Adam Hilger, 1987). Emphasis is placed on the reliability of inspection and whether cracks in welds or flaws in components can be detected. The need for non-destructive testing and the historical attitudes to non-destructive testing are outlined, along with the case of failure. Factors influencing reliable inspection are discussed, and defect detection trials involving round robin tests are described. The development of reliable inspection techniques and the costs of reliability and unreliability are also examined. (U.K.)

  14. Frontiers of reliability

    CERN Document Server

    Basu, Asit P; Basu, Sujit K

    1998-01-01

    This volume presents recent results in reliability theory by leading experts in the world. It will prove valuable for researchers, and users of reliability theory. It consists of refereed invited papers on a broad spectrum of topics in reliability. The subjects covered include Bayesian reliability, Bayesian reliability modeling, confounding in a series system, DF tests, Edgeworth approximation to reliability, estimation under random censoring, fault tree reduction for reliability, inference about changes in hazard rates, information theory and reliability, mixture experiment, mixture of Weibul

  15. Neck Flexor and Extensor Muscle Endurance in Subclinical Neck Pain: Intrarater Reliability, Standard Error of Measurement, Minimal Detectable Change, and Comparison With Asymptomatic Participants in a University Student Population.

    Science.gov (United States)

    Lourenço, Ana S; Lameiras, Carina; Silva, Anabela G

    2016-01-01

    The aims of this study were to assess intrarater reliability and to calculate the standard error of measurement (SEM) and minimal detectable change (MDC) for deep neck flexor and neck extensor muscle endurance tests, and compare the results between individuals with and without subclinical neck pain. Participants were students of the University of Aveiro reporting subclinical neck pain and asymptomatic participants matched for sex and age to the neck pain group. Data on endurance capacity of the deep neck flexors and neck extensors were collected by a blinded assessor using the deep neck flexor endurance test and the extensor endurance test, respectively. Intraclass correlation coefficients (ICCs), SEM, and MDC were calculated for measurements taken within a session by the same assessor. Differences between groups for endurance capacity were investigated using a Mann-Whitney U test. The deep neck flexor endurance test (ICC = 0.71; SEM = 6.91 seconds; MDC = 19.15 seconds) and neck extensor endurance test (ICC = 0.73; SEM = 0.84 minutes; MDC = 2.34 minutes) are reliable. No significant differences were found between participants with and without neck pain for both tests of muscle endurance (P > .05). The endurance capacity of the deep neck flexors and neck extensors can be reliably measured in participants with subclinical neck pain. However, the wide SEM and MDC might limit the sensitivity of these tests. Copyright © 2016. Published by Elsevier Inc.

  16. Melting curve analysis after T allele enrichment (MelcaTle) as a highly sensitive and reliable method for detecting the JAK2V617F mutation.

    Directory of Open Access Journals (Sweden)

    Soji Morishita

    Detection of the JAK2V617F mutation is essential for diagnosing patients with classical myeloproliferative neoplasms (MPNs). However, detection of the low-frequency JAK2V617F mutation is a challenging task due to the necessity of discriminating between true-positive and false-positive results. Here we have developed a highly sensitive and accurate assay for the detection of JAK2V617F and named it melting curve analysis after T allele enrichment (MelcaTle). MelcaTle comprises three steps: (1) two cycles of JAK2V617F allele enrichment by PCR amplification followed by BsaXI digestion, (2) selective amplification of the JAK2V617F allele in the presence of a bridged nucleic acid (BNA) probe, and (3) a melting curve assay using a BODIPY-FL-labeled oligonucleotide. Using this assay, we successfully detected nearly a single copy of the JAK2V617F allele, without false-positive signals, using 10 ng of genomic DNA standard. Furthermore, MelcaTle showed no positive signals in 90 assays screening healthy individuals for JAK2V617F. When applying MelcaTle to 27 patients who were initially classified as JAK2V617F-positive on the basis of allele-specific PCR analysis and were thus suspected as having MPNs, we found that two of the patients were actually JAK2V617F-negative. A more careful clinical data analysis revealed that these two patients had developed transient erythrocytosis of unknown etiology but not polycythemia vera, a subtype of MPNs. These findings indicate that the newly developed MelcaTle assay should markedly improve the diagnosis of JAK2V617F-positive MPNs.

  17. Using a thermoluminescent dosimeter to evaluate the location reliability of the highest–skin dose area detected by treatment planning in radiotherapy for breast cancer

    Energy Technology Data Exchange (ETDEWEB)

    Sun, Li-Min, E-mail: limin.sun@yahoo.com [Department of Radiation Oncology, Zuoying Branch of Kaohsiung Armed Forces General Hospital, Kaohsiung City, Taiwan (China); Huang, Chih-Jen [Department of Radiation Oncology, Kaohsiung Medical University Hospital, Kaohsiung Medical University, Kaohsiung City, Taiwan (China); Faculty of Medicine, Kaohsiung Medical University Hospital, Kaohsiung Medical University, Kaohsiung City, Taiwan (China); College of Medicine, Kaohsiung Medical University Hospital, Kaohsiung Medical University, Kaohsiung City, Taiwan (China); Chen, Hsiao-Yun [Department of Radiation Oncology, Kaohsiung Medical University Hospital, Kaohsiung Medical University, Kaohsiung City, Taiwan (China); Meng, Fan-Yun [Department of General Surgery, Zuoying Branch of Kaohsiung Armed Forces General Hospital, Kaohsiung City, Taiwan (China); Lu, Tsung-Hsien [Department of Radiation Oncology, Zuoying Branch of Kaohsiung Armed Forces General Hospital, Kaohsiung City, Taiwan (China); Tsao, Min-Jen [Department of General Surgery, Zuoying Branch of Kaohsiung Armed Forces General Hospital, Kaohsiung City, Taiwan (China)

    2014-01-01

    Acute skin reaction during adjuvant radiotherapy for breast cancer is an inevitable process, and its severity is related to the skin dose. A high–skin dose area can be speculated based on the isodose distribution shown on a treatment planning. To determine whether treatment planning can reflect high–skin dose location, 80 patients were collected and their skin doses in different areas were measured using a thermoluminescent dosimeter to locate the highest–skin dose area in each patient. We determined whether the skin dose is consistent with the highest-dose area estimated by the treatment planning of the same patient. The χ² and Fisher exact tests revealed that these 2 methods yielded more consistent results when the highest-dose spots were located in the axillary and breast areas but not in the inframammary area. We suggest that skin doses shown on the treatment planning might be a reliable and simple alternative method for estimating the highest skin doses in some areas.

  18. Split-bolus single-phase cardiac multidetector computed tomography for reliable detection of left atrial thrombus. Comparison to transesophageal echocardiography

    Energy Technology Data Exchange (ETDEWEB)

    Staab, W.; Zwaka, P.A.; Sohns, J.M.; Schwarz, A.; Lotz, J. [University Medical Center Goettingen Univ. (Germany). Inst. for Diagnostic and Interventional Radiology; Sohns, C.; Vollmann, D.; Zabel, M.; Hasenfuss, G. [Goettingen Univ. (Germany). Dept. of Cardiology and Pneumology; Schneider, S. [Goettingen Univ. (Germany). Dept. of Medical Statistics

    2014-11-15

    Evaluation of a new cardiac MDCT protocol using a split-bolus contrast injection protocol and single MDCT scan for reliable diagnosis of LA/LAA thrombi in comparison to TEE, optimizing radiation exposure and use of contrast agent. A total of 182 consecutive patients with drug refractory AF scheduled for PVI (62.6% male, mean age: 64.1 ± 10.2 years) underwent routine diagnostic work including TEE and cardiac MDCT for the evaluation of LA/LAA anatomy and thrombus formation between November 2010 and March 2012. Contrast media injection was split into a pre-bolus of 30 ml and main bolus of 70 ml iodinated contrast agent separated by a short time delay. In this study, split-bolus cardiac MDCT identified 14 of 182 patients with filling defects of the LA/LAA. In all of these 14 patients, abnormalities were found in TEE. All 5 of the 14 patients with thrombus formation in cardiac MDCT were confirmed by TEE. MDCT was 100% accurate for thrombus, with strong but not perfect overall results for SEC equivalent on MDCT.

  19. Using a thermoluminescent dosimeter to evaluate the location reliability of the highest–skin dose area detected by treatment planning in radiotherapy for breast cancer

    International Nuclear Information System (INIS)

    Sun, Li-Min; Huang, Chih-Jen; Chen, Hsiao-Yun; Meng, Fan-Yun; Lu, Tsung-Hsien; Tsao, Min-Jen

    2014-01-01

    Acute skin reaction during adjuvant radiotherapy for breast cancer is an inevitable process, and its severity is related to the skin dose. A high–skin dose area can be speculated based on the isodose distribution shown on a treatment planning. To determine whether treatment planning can reflect high–skin dose location, 80 patients were collected and their skin doses in different areas were measured using a thermoluminescent dosimeter to locate the highest–skin dose area in each patient. We determined whether the skin dose is consistent with the highest-dose area estimated by the treatment planning of the same patient. The χ 2 and Fisher exact tests revealed that these 2 methods yielded more consistent results when the highest-dose spots were located in the axillary and breast areas but not in the inframammary area. We suggest that skin doses shown on the treatment planning might be a reliable and simple alternative method for estimating the highest skin doses in some areas

  20. Evaluation of an Immunochromatographic Test for Rapid and Reliable Serodiagnosis of Human Tularemia and Detection of Francisella tularensis-Specific Antibodies in Sera from Different Mammalian Species

    Science.gov (United States)

    Splettstoesser, W.; Guglielmo-Viret, V.; Seibold, E.; Thullier, P.

    2010-01-01

    Tularemia is a highly contagious infectious zoonosis caused by the bacterial agent Francisella tularensis. Serology is still considered to be a cornerstone in tularemia diagnosis due to the low sensitivity of bacterial culture and the lack of standardization in PCR methodology for the direct identification of the pathogen. We developed a novel immunochromatographic test (ICT) to efficiently detect F. tularensis-specific antibodies in sera from humans and other mammalian species (nonhuman primate, pig, and rabbit). This new tool requires none or minimal laboratory equipment, and the results are obtained within 15 min. When compared to the method of microagglutination, which was shown to be more specific than the enzyme-linked immunosorbent assay, the ICT had a sensitivity of 98.3% (58 positive sera were tested) and a specificity of 96.5% (58 negative sera were tested) on human sera. On animal sera, the overall sensitivity was 100% (22 positive sera were tested) and specificity was also 100% (70 negative sera were tested). This rapid test preferentially detects IgG antibodies that may occur early in the course of human tularemia, but further evaluation with human sera is important to prove that the ICT can be a valuable field test to support a presumptive diagnosis of tularemia. The ICT can also be a useful tool to monitor successful vaccination with subunit vaccines or live vaccine strains containing lipopolysaccharide (e.g., LVS) and to detect seropositive individuals or animals in outbreak situations or in the context of epidemiologic surveillance programs in areas of endemicity as recently recommended by the World Health Organization. PMID:20220165

  1. A simple and reliable method to detect gamma irradiated lentil (Lens culinaris Medik.) seeds by germination efficiency and seedling growth test

    International Nuclear Information System (INIS)

    Chaudhuri, Sadhan K.

    2002-01-01

    Germination efficiency and the root/shoot length of germinated seedlings are proposed to identify irradiated lentil seeds. Germination percentage was reduced above 0.2 kGy, and lentil seeds were unable to germinate above a 1.0 kGy dose. The critical dose that prevented root elongation varied from 0.1 to 0.5 kGy. The sensitivity of lentil seeds to gamma irradiation was inversely proportional to the moisture content of the seeds. Radiation effects could be detected in seeds even after 12 months of storage following gamma irradiation.

  2. Revisiting the STEC Testing Approach: Using espK and espV to Make Enterohemorrhagic Escherichia coli (EHEC) Detection More Reliable in Beef

    OpenAIRE

    Delannoy, Sabine; Chaves, Byron D.; Ison, Sarah A.; Webb, Hattie E.; Beutin, Lothar; Delaval, José; Billet, Isabelle; Fach, Patrick

    2016-01-01

    Current methods for screening Enterohemorrhagic Escherichia coli (EHEC) O157 and non-O157 in beef enrichments typically rely on the molecular detection of stx, eae, and serogroup-specific wzx or wzy gene fragments. As these genetic markers can also be found in some non-EHEC strains, a number of ‘false positive’ results are obtained. Here, we explore the suitability of five novel molecular markers, espK, espV, ureD, Z2098, and CRISPRO26:H11 as candidates for a more accurate screening of EHEC s...

  3. System Reliability Engineering

    International Nuclear Information System (INIS)

    Lim, Tae Jin

    2005-02-01

    This book covers reliability engineering, including quality and reliability, reliability data, the importance of reliability engineering, reliability measures, the Poisson process (goodness-of-fit tests and the Poisson arrival model), reliability estimation (e.g., for the exponential distribution), reliability of systems, availability, preventive maintenance (replacement policies, the minimal repair policy, shock models, spares, group maintenance, and periodic inspection), analysis of common cause failures, and models for the analysis of repair effects.

  4. Making the most of RNA-seq: Pre-processing sequencing data with Opossum for reliable SNP variant detection [version 2; referees: 2 approved, 1 approved with reservations]

    Directory of Open Access Journals (Sweden)

    Laura Oikkonen

    2017-03-01

    Identifying variants from RNA-seq (transcriptome sequencing) data is a cost-effective and versatile complement to whole-exome (WES) and whole-genome sequencing (WGS) analysis. RNA-seq (transcriptome sequencing) is primarily considered a method of gene expression analysis but it can also be used to detect DNA variants in expressed regions of the genome. However, current variant callers do not generally behave well with RNA-seq data due to reads encompassing intronic regions. We have developed a software programme called Opossum to address this problem. Opossum pre-processes RNA-seq reads prior to variant calling, and although it has been designed to work specifically with Platypus, it can be used equally well with other variant callers such as GATK HaplotypeCaller. In this work, we show that using Opossum in conjunction with either Platypus or GATK HaplotypeCaller maintains precision and improves the sensitivity for SNP detection compared to the GATK Best Practices pipeline. In addition, using it in combination with Platypus offers a substantial reduction in run times compared to the GATK pipeline so it is ideal when there are only limited time or computational resources available.

  5. Linear-after-the-exponential polymerase chain reaction and allied technologies. Real-time detection strategies for rapid, reliable diagnosis from single cells.

    Science.gov (United States)

    Pierce, Kenneth E; Wangh, Lawrence J

    2007-01-01

    Accurate detection of gene sequences in single cells is the ultimate challenge to polymerase chain reaction (PCR) sensitivity. Unfortunately, commonly used conventional and real-time PCR techniques are often too unreliable at that level to provide the accuracy needed for clinical diagnosis. Here we provide details of linear-after-the-exponential-PCR (LATE-PCR), a method similar to asymmetric PCR in the use of primers at different concentrations, but with novel design criteria to ensure high efficiency and specificity. Compared with conventional PCR, LATE-PCR increases the signal strength and allele discrimination capability of oligonucleotide probes such as molecular beacons and reduces variability among replicate samples. The analysis of real-time kinetics of LATE-PCR signals provides a means for improving the accuracy of single cell genetic diagnosis.

  6. Application of process monitoring to anomaly detection in nuclear material processing systems via system-centric event interpretation of data from multiple sensors of varying reliability

    International Nuclear Information System (INIS)

    Garcia, Humberto E.; Simpson, Michael F.; Lin, Wen-Chiao; Carlson, Reed B.; Yoo, Tae-Sic

    2017-01-01

    Highlights: • Process monitoring can strengthen nuclear safeguards and material accountancy. • Assessment is conducted at a system-centric level to improve safeguards effectiveness. • Anomaly detection is improved by integrating process and operation relationships. • Decision making benefits from using sensor and event sequence information. • A formal framework enables optimization of sensor and data processing resources. Abstract: In this paper, we apply an advanced safeguards approach and associated methods for process monitoring to a hypothetical nuclear material processing system. The assessment regarding the state of the processing facility is conducted at a system-centric level formulated in a hybrid framework. This utilizes an architecture for integrating both time- and event-driven data and analysis for decision making. While the time-driven layers of the proposed architecture encompass more traditional process monitoring methods based on time series data and analysis, the event-driven layers encompass operation monitoring methods based on discrete event data and analysis. By integrating process- and operation-related information and methodologies within a unified framework, the task of anomaly detection is greatly improved. This is because decision-making can benefit from not only known time-series relationships among measured signals but also from known event sequence relationships among generated events. This available knowledge at both time series and discrete event layers can then be effectively used to synthesize observation solutions that optimally balance sensor and data processing requirements. The application of the proposed approach is then implemented on an illustrative monitored system based on pyroprocessing, and results are discussed.
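
    The following toy sketch (hypothetical signals, event grammar, and thresholds; not the framework implemented in the paper) illustrates the basic idea of combining a time-driven residual check with an event-driven check of allowed operation sequences before declaring an anomaly:

        ALLOWED_TRANSITIONS = {("idle", "load"), ("load", "process"),
                               ("process", "unload"), ("unload", "idle")}

        def time_layer_ok(measured, expected, tolerance=3.0):
            # time-driven layer: residual between measured and expected signals
            return all(abs(m - e) <= tolerance for m, e in zip(measured, expected))

        def event_layer_ok(events):
            # event-driven layer: every observed transition must be allowed
            return all((a, b) in ALLOWED_TRANSITIONS for a, b in zip(events, events[1:]))

        def assess(measured, expected, events):
            flags = []
            if not time_layer_ok(measured, expected):
                flags.append("time-driven")
            if not event_layer_ok(events):
                flags.append("event-driven")
            return "normal" if not flags else "anomaly (" + ", ".join(flags) + ")"

        print(assess([10.1, 10.4, 9.8], [10.0, 10.0, 10.0],
                     ["idle", "load", "process", "unload"]))   # normal
        print(assess([10.1, 18.0, 9.8], [10.0, 10.0, 10.0],
                     ["idle", "process"]))                     # anomaly (both layers)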

  7. Making the most of RNA-seq: Pre-processing sequencing data with Opossum for reliable SNP variant detection [version 1; referees: 2 approved, 1 approved with reservations]

    Directory of Open Access Journals (Sweden)

    Laura Oikkonen

    2017-01-01

    Identifying variants from RNA-seq (transcriptome sequencing) data is a cost-effective and versatile alternative to whole-genome sequencing. However, current variant callers do not generally behave well with RNA-seq data due to reads encompassing intronic regions. We have developed a software programme called Opossum to address this problem. Opossum pre-processes RNA-seq reads prior to variant calling, and although it has been designed to work specifically with Platypus, it can be used equally well with other variant callers such as GATK HaplotypeCaller. In this work, we show that using Opossum in conjunction with either Platypus or GATK HaplotypeCaller maintains precision and improves the sensitivity for SNP detection compared to the GATK Best Practices pipeline. In addition, using it in combination with Platypus offers a substantial reduction in run times compared to the GATK pipeline so it is ideal when there are only limited time or computational resources available.

  8. AMSAA Reliability Growth Guide

    National Research Council Canada - National Science Library

    Broemm, William

    2000-01-01

    AMSAA has developed reliability growth methodology for all phases of the process, from planning to tracking to projection. The report presents this methodology and associated reliability growth concepts.

  9. Rapid, reliable, and sensitive detection of adenosine deaminase activity by UHPLC-Q-Orbitrap HRMS and its application to inhibitory activity evaluation of traditional Chinese medicines.

    Science.gov (United States)

    Qi, Shenglan; Guan, Huida; Deng, Gang; Yang, Tao; Cheng, Xuemei; Liu, Wei; Liu, Ping; Wang, Changhong

    2018-05-10

    analytes were stable under the investigated conditions. The developed method was successfully applied to the detection of the inhibitory activity of ADA from traditional Chinese medicines. Copyright © 2018 Elsevier B.V. All rights reserved.

  10. Test-retest reliability and minimal detectable change scores for sit-to-stand-to-sit tests, the six-minute walk test, the one-leg heel-rise test, and handgrip strength in people undergoing hemodialysis.

    Science.gov (United States)

    Segura-Ortí, Eva; Martínez-Olmos, Francisco José

    2011-08-01

    Determining the relative and absolute reliability of outcomes of physical performance tests for people undergoing hemodialysis is necessary to discriminate between the true effects of exercise interventions and the inherent variability of this cohort. The aims of this study were to assess the relative reliability of sit-to-stand-to-sit tests (the STS-10, which measures the time [in seconds] required to complete 10 full stands from a sitting position, and the STS-60, which measures the number of repetitions achieved in 60 seconds), the Six-Minute Walk Test (6MWT), the one-leg heel-rise test, and the handgrip strength test and to calculate minimal detectable change (MDC) scores in people undergoing hemodialysis. This study was a prospective, nonexperimental investigation. Thirty-nine people undergoing hemodialysis at 2 clinics in Spain were contacted. Study participants performed the STS-10 (n=37), the STS-60 (n=37), and the 6MWT (n=36). At one of the settings, the participants also performed the one-leg heel-rise test (n=21) and the handgrip strength test (n=12) on both the right and the left sides. Participants attended 2 testing sessions 1 to 2 weeks apart. High intraclass correlation coefficients (≥.88) were found for all tests, suggesting good relative reliability. The MDC scores at 90% confidence intervals were as follows: 8.4 seconds for the STS-10, 4 repetitions for the STS-60, 66.3 m for the 6MWT, 3.4 kg for handgrip strength (force-generating capacity), 3.7 repetitions for the one-leg heel-rise test with the right leg, and 5.2 repetitions for the one-leg heel-rise test with the left leg. A limitation of this study is the limited sample of patients. The STS-10, STS-60, 6MWT, one-leg heel-rise test, and handgrip strength test are reliable outcome measures. The MDC scores at 90% confidence intervals for these tests will help to determine whether a change is due to error or to an intervention.

  11. A reliability simulation language for reliability analysis

    International Nuclear Information System (INIS)

    Deans, N.D.; Miller, A.J.; Mann, D.P.

    1986-01-01

    The results of work being undertaken to develop a Reliability Description Language (RDL) which will enable reliability analysts to describe complex reliability problems in a simple, clear and unambiguous way are described. Component and system features can be stated in a formal manner and subsequently used, along with control statements to form a structured program. The program can be compiled and executed on a general-purpose computer or special-purpose simulator. (DG)

  12. Reliability of the Fermilab Antiproton Source

    International Nuclear Information System (INIS)

    Harms, E. Jr.

    1993-05-01

    This paper reports on the reliability of the Fermilab Antiproton source since it began operation in 1985. Reliability of the complex as a whole as well as subsystem performance is summarized. Also discussed is the trending done to determine causes of significant machine downtime and actions taken to reduce the incidence of failure. Finally, results of a study to detect previously unidentified reliability limitations are presented

  13. Reliability of Power Electronic Converter Systems

    DEFF Research Database (Denmark)

    Drawing on the experience of an international team of experts, this book explores the reliability of power electronic converter systems (PECS), presenting approaches for advancing the reliability, availability, system robustness, and maintainability of PECS at different levels of complexity. Topics covered include an introduction to reliability engineering in power electronic converter systems; anomaly detection and remaining-life prediction for power electronics; reliability of DC-link capacitors in power electronic converters; reliability of power electronics packaging; modeling for lifetime prediction of power semiconductor modules; minimization of DC-link capacitance in power electronic converter systems; wind turbine systems; smart control strategies for improved reliability of power electronics systems; lifetime modelling; power module lifetime test and state monitoring; tools for performance and reliability analysis of power electronics systems; and fault ...

  14. Reliability data banks

    International Nuclear Information System (INIS)

    Cannon, A.G.; Bendell, A.

    1991-01-01

    Following an introductory chapter on reliability (what it is, why it is needed, and how it is achieved and measured), the principles of reliability data bases and analysis methodologies are the subject of the next two chapters. Achievements due to the development of data banks are mentioned for different industries in the next chapter. FACTS, a comprehensive information system for industrial safety and reliability data collection in process plants, is covered next. CREDO, the Central Reliability Data Organization, is described in the next chapter and is indexed separately, as is the chapter on DANTE, the fabrication reliability data analysis system. Reliability data banks at Electricite de France and the IAEA's experience in compiling a generic component reliability data base are also separately indexed. The European reliability data system, ERDS, and the development of a large data bank come next. The last three chapters look at 'Reliability data banks - friend, foe or a waste of time?' and future developments. (UK)

  15. Suncor maintenance and reliability

    Energy Technology Data Exchange (ETDEWEB)

    Little, S. [Suncor Energy, Calgary, AB (Canada)

    2006-07-01

    Fleet maintenance and reliability at Suncor Energy was discussed in this presentation, with reference to Suncor Energy's primary and support equipment fleets. This paper also discussed Suncor Energy's maintenance and reliability standard involving people, processes and technology. An organizational maturity chart that graphed organizational learning against organizational performance was illustrated. The presentation also reviewed the maintenance and reliability framework; maintenance reliability model; the process overview of the maintenance and reliability standard; a process flow chart of maintenance strategies and programs; and an asset reliability improvement process flow chart. An example of an improvement initiative was included, with reference to a shovel reliability review; a dipper trip reliability investigation; bucket related failures by type and frequency; root cause analysis of the reliability process; and additional actions taken. Last, the presentation provided a graph of the results of the improvement initiative and presented the key lessons learned. tabs., figs.

  16. Smallest detectable change and test-retest reliability of a self-reported outcome measure: Results of the Center for Epidemiologic Studies Depression Scale, General Self-Efficacy Scale, and 12-item General Health Questionnaire.

    Science.gov (United States)

    Ohno, Shotaro; Takahashi, Kana; Inoue, Aimi; Takada, Koki; Ishihara, Yoshiaki; Tanigawa, Masaru; Hirao, Kazuki

    2017-12-01

    This study aims to examine the smallest detectable change (SDC) and test-retest reliability of the Center for Epidemiologic Studies Depression Scale (CES-D), General Self-Efficacy Scale (GSES), and 12-item General Health Questionnaire (GHQ-12). We tested 154 young adults at baseline and 2 weeks later. We calculated the intra-class correlation coefficients (ICCs) for test-retest reliability with a two-way random effects model for agreement. We then calculated the standard error of measurement (SEM) for agreement using the ICC formula. The SEM for agreement was used to calculate SDC values at the individual level (SDC ind ) and group level (SDC group ). The study participants included 137 young adults. The ICCs for all self-reported outcome measurement scales exceeded 0.70. The SEM of CES-D was 3.64, leading to an SDC ind of 10.10 points and SDC group of 0.86 points. The SEM of GSES was 1.56, leading to an SDC ind of 4.33 points and SDC group of 0.37 points. The SEM of GHQ-12 with bimodal scoring was 1.47, leading to an SDC ind of 4.06 points and SDC group of 0.35 points. The SEM of GHQ-12 with Likert scoring was 2.44, leading to an SDC ind of 6.76 points and SDC group of 0.58 points. To confirm that the change was not a result of measurement error, a score of self-reported outcome measurement scales would need to change by an amount greater than these SDC values. This has important implications for clinicians and epidemiologists when assessing outcomes. © 2017 John Wiley & Sons, Ltd.
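
    The reported SDC values are consistent with the formulas commonly used for these quantities (stated here for reference; 1.96 is the z-value for a 95% confidence level):

        \mathrm{SEM} = SD\,\sqrt{1 - \mathrm{ICC}}, \qquad
        \mathrm{SDC}_{\mathrm{ind}} = 1.96\,\sqrt{2}\,\mathrm{SEM}, \qquad
        \mathrm{SDC}_{\mathrm{group}} = \mathrm{SDC}_{\mathrm{ind}} / \sqrt{n}

    For the CES-D, for example, 1.96 × √2 × 3.64 ≈ 10.1 points and 10.1 / √137 ≈ 0.86 points, matching the individual-level and group-level values reported above.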

  17. Measurement Error, Reliability, and Minimum Detectable Change in the Mini-Mental State Examination, Montreal Cognitive Assessment, and Color Trails Test among Community Living Middle-Aged and Older Adults.

    Science.gov (United States)

    Feeney, Joanne; Savva, George M; O'Regan, Claire; King-Kallimanis, Bellinda; Cronin, Hilary; Kenny, Rose Anne

    2016-05-31

    Knowing the reliability of cognitive tests, particularly those commonly used in clinical practice, is important in order to interpret the clinical significance of a change in performance or a low score on a single test. To report the intra-class correlation (ICC), standard error of measurement (SEM) and minimum detectable change (MDC) for the Mini-Mental State Examination (MMSE), Montreal Cognitive Assessment (MoCA), and Color Trails Test (CTT) among community dwelling older adults. 130 participants aged 55 and older without severe cognitive impairment underwent two cognitive assessments between two and four months apart. Half the group changed rater between assessments and half changed time of day. Mean (standard deviation) MMSE was 28.1 (2.1) at baseline and 28.4 (2.1) at repeat. Mean (SD) MoCA increased from 24.8 (3.6) to 25.2 (3.6). There was a rater effect on CTT, but not on the MMSE or MoCA. The SEM of the MMSE was 1.0, leading to an MDC (based on a 95% confidence interval) of 3 points. The SEM of the MoCA was 1.5, implying an MDC95 of 4 points. MoCA (ICC = 0.81) was more reliable than MMSE (ICC = 0.75), but all tests examined showed substantial within-patient variation. An individual's score would have to change by greater than or equal to 3 points on the MMSE and 4 points on the MoCA for the rater to be confident that the change was not due to measurement error. This has important implications for epidemiologists and clinicians in dementia screening and diagnosis.

  18. The Accelerator Reliability Forum

    CERN Document Server

    Lüdeke, Andreas; Giachino, R

    2014-01-01

    High reliability is a very important goal for most particle accelerators. The biennial Accelerator Reliability Workshop covers topics related to the design and operation of particle accelerators with high reliability. In order to optimize the overall reliability of an accelerator, one needs to gather information on the reliability of many different subsystems. While a biennial workshop can serve as a platform for the exchange of such information, the authors aimed to provide a further channel to allow for more timely communication: the Particle Accelerator Reliability Forum [1]. This contribution describes the forum and advertises its use in the community.

  19. Reliability testing of failed fuel location system

    International Nuclear Information System (INIS)

    Vieru, G.

    1996-01-01

    This paper presents the experimental reliability tests performed in order to verify the reliability parameters of the Failed Fuel Location System (FFLS), equipment used to detect in which channel of a particular heat transport loop a fuel failure is located and which particular bundle pair in that channel has failed. To do so, D2O samples from each reactor channel are sequentially monitored to detect a comparatively high level of delayed neutron activity. 15 refs, 8 figs, 2 tabs

  20. Business of reliability

    Science.gov (United States)

    Engel, Pierre

    1999-12-01

    The presentation is organized around three themes: (1) The decrease of reception equipment costs allows non-Remote Sensing organizations to access a technology until recently reserved for a scientific elite. What this means is the rise of 'operational' executive agencies considering space-based technology and operations as a viable input to their daily tasks. This is possible thanks to totally dedicated ground receiving entities focusing on one application for themselves, rather than serving a vast community of users. (2) The multiplication of earth observation platforms will form the base for reliable technical and financial solutions. One obstacle to the growth of the earth observation industry is the variety of policies (commercial versus non-commercial) ruling the distribution of the data and value-added products. In particular, the high volume of data sales required for the return on investment does conflict with traditional low-volume data use for most applications. Constant access to data sources supposes monitoring needs as well as technical proficiency. (3) Large-volume use of data coupled with low equipment costs is only possible when the technology has proven reliable, in terms of application results, financial risks and data supply. Each of these factors is reviewed. The expectation is that international cooperation between agencies and private ventures will pave the way for future business models. As an illustration, the presentation proposes to use some recent non-traditional monitoring applications that may lead to significant use of earth observation data, value added products and services: flood monitoring, ship detection, marine oil pollution deterrent systems and rice acreage monitoring.

  1. Human Reliability Program Overview

    Energy Technology Data Exchange (ETDEWEB)

    Bodin, Michael

    2012-09-25

    This presentation covers the high points of the Human Reliability Program, including certification/decertification, critical positions, due process, organizational structure, program components, personnel security, an overview of the US DOE reliability program, retirees and academia, and security program integration.

  2. Power electronics reliability analysis.

    Energy Technology Data Exchange (ETDEWEB)

    Smith, Mark A.; Atcitty, Stanley

    2009-12-01

    This report provides the DOE and industry with a general process for analyzing power electronics reliability. The analysis can help with understanding the main causes of failures, downtime, and cost and how to reduce them. One approach is to collect field maintenance data and use it directly to calculate reliability metrics related to each cause. Another approach is to model the functional structure of the equipment using a fault tree to derive system reliability from component reliability. Analysis of a fictitious device demonstrates the latter process. Optimization can use the resulting baseline model to decide how to improve reliability and/or lower costs. It is recommended that both electric utilities and equipment manufacturers make provisions to collect and share data in order to lay the groundwork for improving reliability into the future. Reliability analysis helps guide reliability improvements in hardware and software technology including condition monitoring and prognostics and health management.
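
    As a rough illustration of the fault-tree style approach described above (deriving system reliability from component reliabilities through the functional structure), the sketch below combines series and redundant (parallel) blocks. The component names and reliability values are illustrative assumptions, not data from the report:

      def series(*reliabilities):
          """Series structure: the block works only if every component works."""
          result = 1.0
          for r in reliabilities:
              result *= r
          return result

      def parallel(*reliabilities):
          """Redundant structure: the block works if at least one component works."""
          fail = 1.0
          for r in reliabilities:
              fail *= (1.0 - r)
          return 1.0 - fail

      # Hypothetical power-electronics unit: controller in series with
      # two redundant switching modules and a cooling fan.
      r_controller, r_module, r_fan = 0.995, 0.98, 0.99
      r_system = series(r_controller, parallel(r_module, r_module), r_fan)
      print(f"System reliability over the mission ~ {r_system:.4f}")   # ~0.985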

  3. Reliability of software

    International Nuclear Information System (INIS)

    Kopetz, H.

    1980-01-01

    Common factors and differences in the reliability of hardware and software; reliability increase by means of methods of software redundancy. Maintenance of software for long term operating behavior. (HP) [de

  4. Reliable Design Versus Trust

    Science.gov (United States)

    Berg, Melanie; LaBel, Kenneth A.

    2016-01-01

    This presentation focuses on reliability and trust for the user's portion of the FPGA design flow. It is assumed that the manufacturer tests the FPGA's internal components prior to hand-off to the user. The objective is to present the challenges of creating reliable and trusted designs. The following will be addressed: What makes a design vulnerable to functional flaws (reliability) or attackers (trust)? What are the challenges for verifying a reliable design versus a trusted design?

  5. Pocket Handbook on Reliability

    Science.gov (United States)

    1975-09-01

    ... exponential distributions, the Weibull distribution, estimating reliability, confidence intervals, reliability growth, OC curves, Bayesian analysis. ... An introduction for those not familiar with reliability and a good refresher for those who are currently working in the area. ... includes one or both of the following objectives: a) prediction of the current system reliability, b) projection of the system reliability for some future

  6. Developing Reliable Life Support for Mars

    Science.gov (United States)

    Jones, Harry W.

    2017-01-01

    A human mission to Mars will require highly reliable life support systems. Mars life support systems may recycle water and oxygen using systems similar to those on the International Space Station (ISS). However, achieving sufficient reliability is less difficult for ISS than it will be for Mars. If an ISS system has a serious failure, it is possible to provide spare parts, or directly supply water or oxygen, or if necessary bring the crew back to Earth. Life support for Mars must be designed, tested, and improved as needed to achieve high demonstrated reliability. A quantitative reliability goal should be established and used to guide development. The designers should select reliable components and minimize interface and integration problems. In theory a system can achieve the component-limited reliability, but testing often reveals unexpected failures due to design mistakes or flawed components. Testing should extend long enough to detect any unexpected failure modes and to verify the expected reliability. Iterated redesign and retest may be required to achieve the reliability goal. If the reliability is less than required, it may be improved by providing spare components or redundant systems. The number of spares required to achieve a given reliability goal depends on the component failure rate. If the failure rate is underestimated, the number of spares will be insufficient and the system may fail. If the design is likely to have undiscovered design or component problems, it is advisable to use dissimilar redundancy, even though this multiplies the design and development cost. In the ideal case, a human tended closed system operational test should be conducted to gain confidence in operations, maintenance, and repair. The difficulty in achieving high reliability in unproven complex systems may require the use of simpler, more mature, intrinsically higher reliability systems. The limitations of budget, schedule, and technology may suggest accepting lower and
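
    One way to make the spares argument above concrete is to model component failures as a Poisson process, so that the probability that n spares are enough over the mission is the Poisson cumulative probability. The failure rate and mission duration below are hypothetical, not values from the paper:

      import math

      def prob_spares_sufficient(failures_per_year, mission_years, n_spares):
          """P(at most n_spares failures) for a Poisson failure process."""
          mean_failures = failures_per_year * mission_years
          return sum(math.exp(-mean_failures) * mean_failures**k / math.factorial(k)
                     for k in range(n_spares + 1))

      # Hypothetical pump assembly: 0.5 failures/year over a 2.5-year mission
      for n in range(5):
          print(f"{n} spares -> P(sufficient) = {prob_spares_sufficient(0.5, 2.5, n):.3f}")
      # Re-running with an underestimated rate corrected to 1.0 failures/year shows
      # how quickly the same spares count becomes inadequate.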

  7. Principles of Bridge Reliability

    DEFF Research Database (Denmark)

    Thoft-Christensen, Palle; Nowak, Andrzej S.

    The paper gives a brief introduction to the basic principles of structural reliability theory and its application to bridge engineering. Fundamental concepts like failure probability and reliability index are introduced. Ultimate as well as serviceability limit states for bridges are formulated, and as an example the reliability profile and a sensitivity analysis for a corroded reinforced concrete bridge are shown.
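
    For the two fundamental concepts named above, the classical relation for a linear limit state g = R - S with independent normal resistance R and load effect S is beta = (mu_R - mu_S) / sqrt(sigma_R^2 + sigma_S^2) and Pf = Phi(-beta). A minimal sketch with hypothetical girder values:

      import math

      def std_normal_cdf(x):
          """Standard normal CDF via the complementary error function."""
          return 0.5 * math.erfc(-x / math.sqrt(2.0))

      def reliability_index(mu_r, sigma_r, mu_s, sigma_s):
          """Cornell reliability index for g = R - S with independent normal R and S."""
          return (mu_r - mu_s) / math.sqrt(sigma_r**2 + sigma_s**2)

      # Hypothetical girder: resistance and load effect in consistent units (e.g. kNm)
      beta = reliability_index(mu_r=1200.0, sigma_r=120.0, mu_s=800.0, sigma_s=160.0)
      print(f"beta = {beta:.2f}, Pf = {std_normal_cdf(-beta):.2e}")   # beta = 2.00, Pf ~ 2.3e-02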

  8. Reliability in engineering '87

    International Nuclear Information System (INIS)

    Tuma, M.

    1987-01-01

    The participants heard 51 papers dealing with the reliability of engineering products. Two of the papers were incorporated in INIS, namely ''Reliability comparison of two designs of low pressure regeneration of the 1000 MW unit at the Temelin nuclear power plant'' and ''Use of probability analysis of reliability in designing nuclear power facilities.''(J.B.)

  9. Application of solvent-assisted dispersive solid phase extraction as a new, fast, simple and reliable preconcentration and trace detection of lead and cadmium ions in fruit and water samples.

    Science.gov (United States)

    Behbahani, Mohammad; Ghareh Hassanlou, Parmoon; Amini, Mostafa M; Omidi, Fariborz; Esrafili, Ali; Farzadkia, Mehdi; Bagheri, Akbar

    2015-11-15

    In this research, a new sample treatment technique termed solvent-assisted dispersive solid phase extraction (SA-DSPE) was developed. The new method was based on the dispersion of the sorbent into the sample to maximize the contact surface. In this approach, the dispersion of the sorbent at a very low milligram level was achieved by injecting a mixture solution of the sorbent and disperser solvent into the aqueous sample. Thereby, a cloudy solution formed. The cloudy solution resulted from the dispersion of the fine particles of the sorbent in the bulk aqueous sample. After extraction, the cloudy solution was centrifuged and the enriched analytes in the sediment phase dissolved in ethanol and determined by flame atomic absorption spectrophotometer. Under the optimized conditions, the detection limit for lead and cadmium ions was 1.2 μg L(-1) and 0.2 μg L(-1), respectively. Furthermore, the preconcentration factor was 299.3 and 137.1 for cadmium and lead ions, respectively. SA-DSPE was successfully applied for trace determination of lead and cadmium in fruit (Citrus limetta, Kiwi and pomegranate) and water samples. Finally, the introduced sample preparation method can be used as a simple, rapid, reliable, selective and sensitive method for flame atomic absorption spectrophotometric determination of trace levels of lead and cadmium ions in fruit and water samples. Copyright © 2015 Elsevier Ltd. All rights reserved.

  10. Reliability of chromogenic in situ hybridization for epidermal growth factor receptor gene copy number detection in non-small-cell lung carcinomas: a comparison with fluorescence in situ hybridization study.

    Science.gov (United States)

    Yoo, Seol Bong; Lee, Hyun Ju; Park, Jung Ok; Choe, Gheeyoung; Chung, Doo Hyun; Seo, Jeong-Wook; Chung, Jin-Haeng

    2010-03-01

    Fluorescence in situ hybridization (FISH) has been known to be the most representative and standardized test for assessing gene amplification. However, FISH requires a fluorescence microscope, and the signals are labile and rapidly fade over time. Recently, chromogenic in situ hybridization (CISH) has emerged as a potential alternative to FISH. The aim of this study is to test the reliability of the CISH technique for the detection of epidermal growth factor receptor (EGFR) gene amplification in non-small-cell lung carcinomas (NSCLC) and to compare the CISH results with FISH. A total of 277 formalin-fixed and paraffin-embedded NSCLC tissue samples were retrieved from the surgical pathology archives at Seoul National University Bundang Hospital. CISH and FISH examinations were performed to test EGFR gene amplification status. There was high concordance in the assessment of EGFR gene copy number between the CISH and FISH tests (Kappa coefficient=0.83). Excellent concordance was shown between two observers on the interpretation of the CISH results (Kappa coefficient=0.90). In conclusion, CISH is a highly reproducible, accurate and practical method to determine EGFR gene amplification in NSCLC. In addition, CISH allows a concurrent analysis of the histological features of the tumors and gene copy numbers.
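
    The concordance statistic quoted above is the kappa coefficient, which corrects observed agreement for agreement expected by chance. The sketch below computes Cohen's kappa for a 2x2 CISH-versus-FISH table; the counts are illustrative assumptions, not the study data:

      def cohens_kappa(table):
          """Cohen's kappa for a square agreement table (rows: method A, columns: method B)."""
          n = sum(sum(row) for row in table)
          observed = sum(table[i][i] for i in range(len(table))) / n
          row_totals = [sum(row) for row in table]
          col_totals = [sum(col) for col in zip(*table)]
          expected = sum(row_totals[i] * col_totals[i] for i in range(len(table))) / n**2
          return (observed - expected) / (1.0 - expected)

      # Hypothetical counts: amplified / not amplified calls by CISH (rows) vs FISH (columns)
      table = [[40, 5],
               [6, 226]]
      print(f"kappa ~ {cohens_kappa(table):.2f}")   # ~0.86 for these illustrative counts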

  11. Reliable computer systems.

    Science.gov (United States)

    Wear, L L; Pinkert, J R

    1993-11-01

    In this article, we looked at some decisions that apply to the design of reliable computer systems. We began with a discussion of several terms such as testability, then described some systems that call for highly reliable hardware and software. The article concluded with a discussion of methods that can be used to achieve higher reliability in computer systems. Reliability and fault tolerance in computers probably will continue to grow in importance. As more and more systems are computerized, people will want assurances about the reliability of these systems, and their ability to work properly even when sub-systems fail.

  12. Human factor reliability program

    International Nuclear Information System (INIS)

    Knoblochova, L.

    2017-01-01

    The human factor reliability program was introduced at Slovenske elektrarne, a.s. (SE) nuclear power plants as one of the components of the Initiatives of Excellent Performance in 2011. The initiative's goal was to increase the reliability of both people and facilities, in response to 3 major areas of improvement - need for improvement of the results, troubleshooting support, and supporting the achievement of the company's goals. In practice, the human factor reliability program includes: - tools to prevent human error; - managerial observation and coaching; - human factor analysis; - quick information about events involving the human factor; - human reliability timeline and performance indicators; - basic, periodic and extraordinary training in human factor reliability (authors)

  13. Finite element reliability analysis of fatigue life

    International Nuclear Information System (INIS)

    Harkness, H.H.; Belytschko, T.; Liu, W.K.

    1992-01-01

    Fatigue reliability is addressed by the first-order reliability method combined with a finite element method. Two-dimensional finite element models of components with cracks in mode I are considered with crack growth treated by the Paris law. Probability density functions of the variables affecting fatigue are proposed to reflect a setting where nondestructive evaluation is used, and the Rosenblatt transformation is employed to treat non-Gaussian random variables. Comparisons of the first-order reliability results and Monte Carlo simulations suggest that the accuracy of the first-order reliability method is quite good in this setting. Results show that the upper portion of the initial crack length probability density function is crucial to reliability, which suggests that if nondestructive evaluation is used, the probability of detection curve plays a key role in reliability. (orig.)
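
    The crack-growth model referred to above is the Paris law, da/dN = C (dK)^m with dK = Y * delta_sigma * sqrt(pi * a). A minimal numerical integration of the fatigue life between an initial and a critical crack size, using hypothetical material constants rather than anything from the paper, looks like this:

      import math

      def cycles_to_grow(a0, af, delta_sigma, C, m, Y=1.0, steps=10000):
          """Integrate the Paris law da/dN = C*(dK)^m with dK = Y*delta_sigma*sqrt(pi*a)."""
          cycles = 0.0
          da = (af - a0) / steps
          a = a0
          for _ in range(steps):
              dK = Y * delta_sigma * math.sqrt(math.pi * a)
              cycles += da / (C * dK**m)
              a += da
          return cycles

      # Hypothetical steel-like constants (lengths in m, stresses in MPa,
      # C in (m/cycle) per (MPa*sqrt(m))^m): grow a crack from 1 mm to 20 mm.
      N = cycles_to_grow(a0=0.001, af=0.020, delta_sigma=100.0, C=1e-11, m=3.0)
      print(f"Estimated crack-growth life ~ {N:.2e} cycles")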

  14. Reliability and safety engineering

    CERN Document Server

    Verma, Ajit Kumar; Karanki, Durga Rao

    2016-01-01

    Reliability and safety are core issues that must be addressed throughout the life cycle of engineering systems. Reliability and Safety Engineering presents an overview of the basic concepts, together with simple and practical illustrations. The authors present reliability terminology in various engineering fields, viz.,electronics engineering, software engineering, mechanical engineering, structural engineering and power systems engineering. The book describes the latest applications in the area of probabilistic safety assessment, such as technical specification optimization, risk monitoring and risk informed in-service inspection. Reliability and safety studies must, inevitably, deal with uncertainty, so the book includes uncertainty propagation methods: Monte Carlo simulation, fuzzy arithmetic, Dempster-Shafer theory and probability bounds. Reliability and Safety Engineering also highlights advances in system reliability and safety assessment including dynamic system modeling and uncertainty management. Cas...

  15. Human reliability analysis

    International Nuclear Information System (INIS)

    Dougherty, E.M.; Fragola, J.R.

    1988-01-01

    The authors present a treatment of human reliability analysis incorporating an introduction to probabilistic risk assessment for nuclear power generating stations. They treat the subject according to the framework established for general systems theory, drawing upon reliability analysis, psychology, human factors engineering, and statistics, and integrating elements of these fields within a systems framework. The book provides a history of human reliability analysis and includes examples of the application of the systems approach

  16. Reliability of electronic systems

    International Nuclear Information System (INIS)

    Roca, Jose L.

    2001-01-01

    Reliability techniques were developed in response to the needs of the various engineering disciplines, although there are many who consider that a great deal of work on reliability had been done before the word itself was used in its current sense. The military, space and nuclear industries were the first to become involved in this topic, yet this quiet revolution in improving the reliability figures of products has not been confined to those sectors but has extended to the whole of industry. Mass production, characteristic of modern industry, led four decades ago to a fall in the reliability of products, on the one hand because of the scale of production itself and, on the other, because of recently introduced and not yet stabilized industrial techniques. Industry had to change according to those two new requirements, creating products of medium complexity and assuring a reliability appropriate to production costs and controls. Reliability began to be an integral part of the manufactured product. With this philosophy, the book describes reliability techniques applied to electronic systems and provides a coherent and rigorous framework for these diverse activities, providing a unifying scientific basis for the entire subject. It consists of eight chapters plus a large set of statistical tables and an extensive annotated bibliography. The chapters embrace the following topics: 1- Introduction to Reliability; 2- Basic Mathematical Concepts; 3- Catastrophic Failure Models; 4- Parametric Failure Models; 5- Systems Reliability; 6- Reliability in Design and Project; 7- Reliability Tests; 8- Software Reliability. This book is in Spanish and has a potentially diverse audience, as a text for courses ranging from academic to industrial. (author)

  17. Sample-to-SNP kit: a reliable, easy and fast tool for the detection of HFE p.H63D and p.C282Y variations associated to hereditary hemochromatosis.

    Science.gov (United States)

    Nielsen, Peter B; Petersen, Maja S; Ystaas, Viviana; Andersen, Rolf V; Hansen, Karin M; Blaabjerg, Vibeke; Refstrup, Mette

    2012-10-01

    Classical hereditary hemochromatosis involves the HFE gene and diagnostic analysis of the DNA variants HFE p.C282Y (c.845G>A; rs1800562) and HFE p.H63D (c.187C>G; rs1799945). The affected protein alters iron homeostasis, resulting in iron overload in various tissues. The aim of this study was to validate the TaqMan-based Sample-to-SNP protocol for the analysis of the HFE p.C282Y and p.H63D variants with regard to accuracy, usefulness and reproducibility compared to an existing SNP protocol. The Sample-to-SNP protocol uses an approach where the DNA template is made accessible from a cell lysate, followed by TaqMan analysis. Besides the HFE SNPs, eight other SNPs were used as well. These SNPs were: Coagulation factor II gene F2 c.20210G>A, Coagulation factor V gene F5 p.R506Q (c.1517G>A; rs121917732), mitochondrial SNP mt7028 G>A, mitochondrial SNP mt12308 A>G, Proprotein convertase subtilisin/kexin type 9 gene PCSK9 p.R46L (c.137G>T), Glutathione S-transferase pi 1 gene GSTP1 p.I105V (c.313A>G; rs1695), LXR g.-171 A>G, and ZNF202 g.-118 G>T. In conclusion, the Sample-to-SNP kit proved to be an accurate, reliable, robust, easy-to-use and rapid TaqMan-based SNP detection protocol, which could be quickly implemented in a routine diagnostic or research facility. Copyright © 2012. Published by Elsevier B.V.

  18. Operational safety reliability research

    International Nuclear Information System (INIS)

    Hall, R.E.; Boccio, J.L.

    1986-01-01

    Operating reactor events such as the TMI accident and the Salem automatic-trip failures raised the concern that during a plant's operating lifetime the reliability of systems could degrade from the design level that was considered in the licensing process. To address this concern, NRC is sponsoring the Operational Safety Reliability Research project. The objectives of this project are to identify the essential tasks of a reliability program and to evaluate the effectiveness and attributes of such a reliability program applicable to maintaining an acceptable level of safety during the operating lifetime of the plant

  19. Hawaii Electric System Reliability

    Energy Technology Data Exchange (ETDEWEB)

    Loose, Verne William [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Silva Monroy, Cesar Augusto [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States)

    2012-08-01

    This report addresses Hawaii electric system reliability issues; greater emphasis is placed on short-term reliability but resource adequacy is reviewed in reference to electric consumers’ views of reliability “worth” and the reserve capacity required to deliver that value. The report begins with a description of the Hawaii electric system to the extent permitted by publicly available data. Electrical engineering literature in the area of electric reliability is researched and briefly reviewed. North American Electric Reliability Corporation standards and measures for generation and transmission are reviewed and identified as to their appropriateness for various portions of the electric grid and for application in Hawaii. Analysis of frequency data supplied by the State of Hawaii Public Utilities Commission is presented together with comparison and contrast of performance of each of the systems for two years, 2010 and 2011. Literature tracing the development of reliability economics is reviewed and referenced. A method is explained for integrating system cost with outage cost to determine the optimal resource adequacy given customers’ views of the value contributed by reliable electric supply. The report concludes with findings and recommendations for reliability in the State of Hawaii.
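
    A stylised version of the cost-integration idea described above is to choose the reserve margin that minimises the cost of carrying capacity plus the expected cost of unserved energy (expected unserved energy times the customers' value of lost load). Every number and the exponential EUE curve below are assumptions for illustration, not values from the report:

      import math

      def total_cost(reserve_mw, capacity_cost_per_mw, eue_mwh_at_zero, voll_per_mwh, decay_per_mw):
          """Annual capacity carrying cost plus expected outage cost at a given reserve margin."""
          # Expected unserved energy assumed to fall off exponentially as reserves are added.
          eue_mwh = eue_mwh_at_zero * math.exp(-decay_per_mw * reserve_mw)
          return capacity_cost_per_mw * reserve_mw + voll_per_mwh * eue_mwh

      candidates = range(0, 401, 25)   # candidate reserve margins in MW
      costs = {r: total_cost(r, 90_000.0, 1_500.0, 20_000.0, 0.02) for r in candidates}
      best = min(costs, key=costs.get)
      print(f"Least-cost reserve margin ~ {best} MW")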

  20. Hawaii electric system reliability.

    Energy Technology Data Exchange (ETDEWEB)

    Silva Monroy, Cesar Augusto; Loose, Verne William

    2012-09-01

    This report addresses Hawaii electric system reliability issues; greater emphasis is placed on short-term reliability but resource adequacy is reviewed in reference to electric consumers' views of reliability “worth” and the reserve capacity required to deliver that value. The report begins with a description of the Hawaii electric system to the extent permitted by publicly available data. Electrical engineering literature in the area of electric reliability is researched and briefly reviewed. North American Electric Reliability Corporation standards and measures for generation and transmission are reviewed and identified as to their appropriateness for various portions of the electric grid and for application in Hawaii. Analysis of frequency data supplied by the State of Hawaii Public Utilities Commission is presented together with comparison and contrast of performance of each of the systems for two years, 2010 and 2011. Literature tracing the development of reliability economics is reviewed and referenced. A method is explained for integrating system cost with outage cost to determine the optimal resource adequacy given customers' views of the value contributed by reliable electric supply. The report concludes with findings and recommendations for reliability in the State of Hawaii.

  1. Improving machinery reliability

    CERN Document Server

    Bloch, Heinz P

    1998-01-01

    This totally revised, updated and expanded edition provides proven techniques and procedures that extend machinery life, reduce maintenance costs, and achieve optimum machinery reliability. This essential text clearly describes the reliability improvement and failure avoidance steps practiced by best-of-class process plants in the U.S. and Europe.

  2. LED system reliability

    NARCIS (Netherlands)

    Driel, W.D. van; Yuan, C.A.; Koh, S.; Zhang, G.Q.

    2011-01-01

    This paper presents our effort to predict the system reliability of Solid State Lighting (SSL) applications. A SSL system is composed of a LED engine with micro-electronic driver(s) that supplies power to the optic design. Knowledge of system level reliability is not only a challenging scientific

  3. Integrated system reliability analysis

    DEFF Research Database (Denmark)

    Gintautas, Tomas; Sørensen, John Dalsgaard

    Specific targets: 1) The report shall describe the state of the art of reliability and risk-based assessment of wind turbine components. 2) Development of methodology for reliability and risk-based assessment of the wind turbine at system level. 3) Describe quantitative and qualitative measures...

  4. Reliability of neural encoding

    DEFF Research Database (Denmark)

    Alstrøm, Preben; Beierholm, Ulrik; Nielsen, Carsten Dahl

    2002-01-01

    The reliability with which a neuron is able to create the same firing pattern when presented with the same stimulus is of critical importance to the understanding of neuronal information processing. We show that reliability is closely related to the process of phaselocking. Experimental results f...

  5. Design reliability engineering

    International Nuclear Information System (INIS)

    Buden, D.; Hunt, R.N.M.

    1989-01-01

    Improved design techniques are needed to achieve high reliability at minimum cost. This is especially true of space systems where lifetimes of many years without maintenance are needed and severe mass limitations exist. Reliability must be designed into these systems from the start. Techniques are now being explored to structure a formal design process that will be more complete and less expensive. The intent is to integrate the best features of design, reliability analysis, and expert systems to design highly reliable systems to meet stressing needs. Taken into account are the large uncertainties that exist in materials, design models, and fabrication techniques. Expert systems are a convenient method to integrate into the design process a complete definition of all elements that should be considered and an opportunity to integrate the design process with reliability, safety, test engineering, maintenance and operator training. 1 fig

  6. Bayesian methods in reliability

    Science.gov (United States)

    Sander, P.; Badoux, R.

    1991-11-01

    The present proceedings from a course on Bayesian methods in reliability encompasses Bayesian statistical methods and their computational implementation, models for analyzing censored data from nonrepairable systems, the traits of repairable systems and growth models, the use of expert judgment, and a review of the problem of forecasting software reliability. Specific issues addressed include the use of Bayesian methods to estimate the leak rate of a gas pipeline, approximate analyses under great prior uncertainty, reliability estimation techniques, and a nonhomogeneous Poisson process. Also addressed are the calibration sets and seed variables of expert judgment systems for risk assessment, experimental illustrations of the use of expert judgment for reliability testing, and analyses of the predictive quality of software-reliability growth models such as the Weibull order statistics.
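
    A minimal example of the Bayesian reliability estimation surveyed above is the conjugate Gamma-Poisson update of a constant failure rate: an expert-judgment prior Gamma(alpha, beta) combined with k observed failures in T hours of exposure gives a Gamma(alpha + k, beta + T) posterior. The prior parameters and the data below are hypothetical:

      # Conjugate Gamma-Poisson update for a constant failure rate (per hour).
      alpha_prior, beta_prior = 2.0, 20_000.0     # hypothetical expert prior, mean 1.0e-4 /h
      k_failures, exposure_hours = 3, 50_000.0    # hypothetical field data

      alpha_post = alpha_prior + k_failures
      beta_post = beta_prior + exposure_hours

      print(f"Prior mean      = {alpha_prior / beta_prior:.2e} /h")
      print(f"Data-only rate  = {k_failures / exposure_hours:.2e} /h")
      print(f"Posterior mean  = {alpha_post / beta_post:.2e} /h")
      # The posterior (7.1e-05 /h) sits between prior belief and the raw data,
      # weighted by the amount of observed exposure.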

  7. Highly reliable TOFD UT Technique

    International Nuclear Information System (INIS)

    Acharya, G.D.; Trivedi, S.A.R.; Pai, K.B.

    2003-01-01

    The high performance of the time of flight diffraction (TOFD) technique with regard to the detection of weld defects such as cracks, slag and lack of fusion has led to a rapidly increasing acceptance of the technique as a pre-service inspection tool. Since the early 1990s TOFD has been applied to several projects, where it replaced the commonly used radiographic testing. The use of TOFD led to major time savings during new-build and replacement projects. At the same time the TOFD technique was used as a baseline inspection, which enables future monitoring of critical welds, but also provides documented evidence for the lifetime. The TOFD technique has the ability to detect and simultaneously size flaws of nearly any orientation within the weld and heat affected zone. TOFD is recognized as a reliable, proven technique for detection and sizing of defects and has proven to be a time saver, resulting in shorter shutdown periods and construction project times. Thus even in cases where the inspection price of TOFD per weld is higher, in the end it will result in significantly lower overall costs and improved quality. This paper deals with reliability, economy, acceptance criteria and field experience. It also covers a comparative study between the radiographic technique and TOFD. (Author)

  8. A reliability program approach to operational safety

    International Nuclear Information System (INIS)

    Mueller, C.J.; Bezella, W.A.

    1985-01-01

    A Reliability Program (RP) model based on proven reliability techniques is being formulated for potential application in the nuclear power industry. Methods employed under NASA and military direction, commercial airline and related FAA programs were surveyed and a review of current nuclear risk-dominant issues conducted. The need for a reliability approach to address dependent system failures, operating and emergency procedures and human performance, and develop a plant-specific performance data base for safety decision making is demonstrated. Current research has concentrated on developing a Reliability Program approach for the operating phase of a nuclear plant's lifecycle. The approach incorporates performance monitoring and evaluation activities with dedicated tasks that integrate these activities with operation, surveillance, and maintenance of the plant. The detection, root-cause evaluation and before-the-fact correction of incipient or actual systems failures as a mechanism for maintaining plant safety is a major objective of the Reliability Program. (orig./HP)

  9. Space Vehicle Reliability Modeling in DIORAMA

    Energy Technology Data Exchange (ETDEWEB)

    Tornga, Shawn Robert [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)

    2016-07-12

    When modeling system performance of space based detection systems it is important to consider spacecraft reliability. As space vehicles age the components become prone to failure for a variety of reasons such as radiation damage. Additionally, some vehicles may lose the ability to maneuver once they exhaust fuel supplies. Typically failure is divided into two categories: engineering mistakes and technology surprise. This document will report on a method of simulating space vehicle reliability in the DIORAMA framework.

  10. Reliability of construction materials

    International Nuclear Information System (INIS)

    Merz, H.

    1976-01-01

    One can also speak of reliability with respect to materials. While for the reliability of components the MTBF (mean time between failures) is regarded as the main criterion, with regard to materials this is replaced by possible failure mechanisms such as physical/chemical reaction mechanisms, disturbances of physical or chemical equilibrium, or other interactions or changes of the system. The main tasks of the reliability analysis of materials are therefore the prediction of the various failure causes, the identification of interactions, and the development of nondestructive testing methods. (RW) [de

  11. Structural Reliability Methods

    DEFF Research Database (Denmark)

    Ditlevsen, Ove Dalager; Madsen, H. O.

    The structural reliability methods quantitatively treat the uncertainty of predicting the behaviour and properties of a structure given the uncertain properties of its geometry, materials, and the actions it is supposed to withstand. This book addresses the probabilistic methods for evaluation of structural reliability, including the theoretical basis for these methods. Partial safety factor codes under current practice are briefly introduced and discussed. A probabilistic code format for obtaining a formal reliability evaluation system that catches the most essential features of the nature of the uncertainties and their interplay is then developed, step by step. The concepts presented are illustrated by numerous examples throughout the text.

  12. Reliability and mechanical design

    International Nuclear Information System (INIS)

    Lemaire, Maurice

    1997-01-01

    Many results in mechanical design are obtained from a modelling of physical reality and from a numerical solution, which lead to the evaluation of needs and resources. The goal of the reliability analysis is to evaluate the confidence that can be granted to the chosen design through the calculation of a probability of failure linked to the retained scenario. Two types of analysis are proposed: the sensitivity analysis and the reliability analysis. Approximate methods are applicable to problems related to reliability, availability, maintainability and safety (RAMS)

  13. RTE - 2013 Reliability Report

    International Nuclear Information System (INIS)

    Denis, Anne-Marie

    2014-01-01

    RTE publishes a yearly reliability report based on a standard model to facilitate comparisons and highlight long-term trends. The 2013 report does not merely state the facts of the Significant System Events (ESS); it also underlines the main elements dealing with the reliability of the electrical power system. It highlights the various elements which contribute to present and future reliability and provides an overview of the interaction between the various stakeholders of the Electrical Power System on the scale of the European Interconnected Network. (author)

  14. Approach to reliability assessment

    International Nuclear Information System (INIS)

    Green, A.E.; Bourne, A.J.

    1975-01-01

    Experience has shown that reliability assessments can play an important role in the early design and subsequent operation of technological systems where reliability is at a premium. The approaches to and techniques for such assessments, which have been outlined in the paper, have been successfully applied in a variety of applications ranging from individual equipment to large and complex systems. The general approach involves the logical and systematic establishment of the purpose, performance requirements and reliability criteria of systems. This is followed by an appraisal of likely system achievement based on the understanding of different types of variational behavior. A fundamental reliability model emerges from the correlation between the appropriate Q and H functions for performance requirement and achievement. This model may cover the complete spectrum of performance behavior in all the system dimensions

  15. The rating reliability calculator

    Directory of Open Access Journals (Sweden)

    Solomon David J

    2004-04-01

    Full Text Available Abstract Background Rating scales form an important means of gathering evaluation data. Since important decisions are often based on these evaluations, determining the reliability of rating data can be critical. Most commonly used methods of estimating reliability require a complete set of ratings, i.e. every subject being rated must be rated by each judge. Over fifty years ago Ebel described an algorithm for estimating the reliability of ratings based on incomplete data. While his article has been widely cited over the years, software based on the algorithm is not readily available. This paper describes an easy-to-use Web-based utility for estimating the reliability of ratings based on incomplete data using Ebel's algorithm. Methods The program is available for public use on our server and the source code is freely available under the GNU General Public License. The utility is written in PHP, a common open source embedded scripting language. The rating data can be entered in a convenient format on the user's personal computer, and the program will upload them to the server for calculating the reliability and other statistics describing the ratings. Results When the program is run it displays the reliability, the number of subjects rated, the harmonic mean number of judges rating each subject, and the mean and standard deviation of the averaged ratings per subject. The program also displays the mean, standard deviation and number of ratings for each subject rated. Additionally the program will estimate the reliability of an average of a number of ratings for each subject via the Spearman-Brown prophecy formula. Conclusion This simple web-based program provides a convenient means of estimating the reliability of rating data without the need to conduct special studies in order to provide complete rating data. I would welcome other researchers revising and enhancing the program.
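
    The Spearman-Brown prophecy formula mentioned above predicts the reliability of the mean of n ratings from the single-rating reliability r as n*r / (1 + (n - 1)*r). A short sketch, with an assumed single-rater reliability of 0.55:

      def spearman_brown(single_rater_reliability, n_raters):
          """Predicted reliability of the mean of n_raters ratings (Spearman-Brown)."""
          r = single_rater_reliability
          return n_raters * r / (1.0 + (n_raters - 1) * r)

      for n in (1, 2, 4, 8):
          print(f"{n} raters -> predicted reliability ~ {spearman_brown(0.55, n):.2f}")
      # 1 -> 0.55, 2 -> 0.71, 4 -> 0.83, 8 -> 0.91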

  16. Structural systems reliability analysis

    International Nuclear Information System (INIS)

    Frangopol, D.

    1975-01-01

    For an exact evaluation of the reliability of a structure it appears necessary to determine the distribution densities of the loads and resistances and to calculate the correlation coefficients between loads and between resistances. These statistical characteristics can be obtained only on the basis of a long activity period. In case that such studies are missing the statistical properties formulated here give upper and lower bounds of the reliability. (orig./HP) [de

  17. Reliability and maintainability

    International Nuclear Information System (INIS)

    1994-01-01

    Several communications in this conference are concerned with nuclear plant reliability and maintainability; their titles are: maintenance optimization of stand-by Diesels of 900 MW nuclear power plants; CLAIRE: an event-based simulation tool for software testing; reliability as one important issue within the periodic safety review of nuclear power plants; design of nuclear building ventilation by the means of functional analysis; operation characteristic analysis for a power industry plant park, as a function of influence parameters

  18. Reliability data book

    International Nuclear Information System (INIS)

    Bento, J.P.; Boerje, S.; Ericsson, G.; Hasler, A.; Lyden, C.O.; Wallin, L.; Poern, K.; Aakerlund, O.

    1985-01-01

    The main objective for the report is to improve failure data for reliability calculations as parts of safety analyses for Swedish nuclear power plants. The work is based primarily on evaluations of failure reports as well as information provided by the operation and maintenance staff of each plant. In the report are presented charts of reliability data for: pumps, valves, control rods/rod drives, electrical components, and instruments. (L.E.)

  19. Multidisciplinary System Reliability Analysis

    Science.gov (United States)

    Mahadevan, Sankaran; Han, Song; Chamis, Christos C. (Technical Monitor)

    2001-01-01

    The objective of this study is to develop a new methodology for estimating the reliability of engineering systems that encompass multiple disciplines. The methodology is formulated in the context of the NESSUS probabilistic structural analysis code, developed under the leadership of NASA Glenn Research Center. The NESSUS code has been successfully applied to the reliability estimation of a variety of structural engineering systems. This study examines whether the features of NESSUS could be used to investigate the reliability of systems in other disciplines such as heat transfer, fluid mechanics, electrical circuits etc., without considerable programming effort specific to each discipline. In this study, the mechanical equivalence between system behavior models in different disciplines is investigated to achieve this objective. A new methodology is presented for the analysis of heat transfer, fluid flow, and electrical circuit problems using the structural analysis routines within NESSUS, by utilizing the equivalence between the computational quantities in different disciplines. This technique is integrated with the fast probability integration and system reliability techniques within the NESSUS code, to successfully compute the system reliability of multidisciplinary systems. Traditional as well as progressive failure analysis methods for system reliability estimation are demonstrated, through a numerical example of a heat exchanger system involving failure modes in structural, heat transfer and fluid flow disciplines.

  20. Analysis and Application of Reliability

    International Nuclear Information System (INIS)

    Jeong, Hae Seong; Park, Dong Ho; Kim, Jae Ju

    1999-05-01

    This book covers the analysis and application of reliability, including the definition, importance and historical background of reliability, the reliability function and failure rate, life distributions and reliability assumptions, the reliability of non-repairable systems, the reliability of repairable systems, reliability sampling tests, failure analysis (such as failure analysis by FMEA and FTA, with cases), accelerated life testing (basic concepts, acceleration and the acceleration factor, and analysis of accelerated life testing data), and maintenance policies concerning replacement and inspection.
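
    Two of the building blocks listed above can be shown in a few lines: the reliability function for a constant failure rate, R(t) = exp(-lambda*t), and the Arrhenius acceleration factor commonly used in accelerated life testing. The part parameters and activation energy below are illustrative assumptions, not examples from the book:

      import math

      BOLTZMANN_EV = 8.617e-5   # eV/K

      def reliability_exponential(t_hours, failure_rate_per_hour):
          """R(t) = exp(-lambda * t) for a constant failure rate."""
          return math.exp(-failure_rate_per_hour * t_hours)

      def arrhenius_acceleration_factor(ea_ev, t_use_c, t_stress_c):
          """AF = exp(Ea/k * (1/T_use - 1/T_stress)), temperatures converted to kelvin."""
          t_use_k, t_stress_k = t_use_c + 273.15, t_stress_c + 273.15
          return math.exp((ea_ev / BOLTZMANN_EV) * (1.0 / t_use_k - 1.0 / t_stress_k))

      # Hypothetical part: lambda = 2e-6 per hour, 10,000-hour mission,
      # accelerated test at 125 C versus 55 C field use, Ea = 0.7 eV.
      print(f"R(10,000 h) ~ {reliability_exponential(10_000, 2e-6):.3f}")             # ~0.980
      print(f"Arrhenius AF ~ {arrhenius_acceleration_factor(0.7, 55.0, 125.0):.0f}")  # ~78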

  1. The reliability of commonly used electrophysiology measures.

    Science.gov (United States)

    Brown, K E; Lohse, K R; Mayer, I M S; Strigaro, G; Desikan, M; Casula, E P; Meunier, S; Popa, T; Lamy, J-C; Odish, O; Leavitt, B R; Durr, A; Roos, R A C; Tabrizi, S J; Rothwell, J C; Boyd, L A; Orth, M

    Electrophysiological measures can help understand brain function both in healthy individuals and in the context of a disease. Given the amount of information that can be extracted from these measures and their frequent use, it is essential to know more about their inherent reliability. To understand the reliability of electrophysiology measures in healthy individuals. We hypothesized that measures of threshold and latency would be the most reliable and least susceptible to methodological differences between study sites. Somatosensory evoked potentials from 112 control participants; long-latency reflexes, transcranial magnetic stimulation with resting and active motor thresholds, motor evoked potential latencies, input/output curves, and short-latency sensory afferent inhibition and facilitation from 84 controls were collected at 3 visits over 24 months at 4 Track-On HD study sites. Reliability was assessed using intra-class correlation coefficients for absolute agreement, and the effects of reliability on statistical power are demonstrated for different sample sizes and study designs. Measures quantifying latencies, thresholds, and evoked responses at high stimulator intensities had the highest reliability, and required the smallest sample sizes to adequately power a study. Very few between-site differences were detected. Reliability and susceptibility to between-site differences should be evaluated for electrophysiological measures before including them in study designs. Levels of reliability vary substantially across electrophysiological measures, though there are few between-site differences. To address this, reliability should be used in conjunction with theoretical calculations to inform sample size and ensure studies are adequately powered to detect true change in measures of interest. Copyright © 2017 Elsevier Inc. All rights reserved.
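
    The link between measurement reliability and statistical power noted above can be illustrated with the classical attenuation formula: an imperfectly reliable measure shrinks the observed correlation, r_obs = r_true * sqrt(rel_A * rel_B), and so inflates the sample size needed to detect it. The true correlation and ICC values below are hypothetical, not results from this study:

      import math

      def attenuated_r(true_r, reliability_a, reliability_b=1.0):
          """Observed correlation after attenuation by measurement unreliability."""
          return true_r * math.sqrt(reliability_a * reliability_b)

      def approx_n_for_r(r, z_alpha=1.96, z_power=0.84):
          """Rough sample size to detect correlation r (Fisher z approximation, 80% power)."""
          fisher_z = 0.5 * math.log((1 + r) / (1 - r))
          return ((z_alpha + z_power) / fisher_z) ** 2 + 3

      true_r = 0.5   # hypothetical true association with a clinical variable
      for icc in (0.9, 0.7, 0.5):
          r_obs = attenuated_r(true_r, icc)
          print(f"ICC = {icc}: observed r ~ {r_obs:.2f}, n needed ~ {approx_n_for_r(r_obs):.0f}")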

  2. Safety and reliability criteria

    International Nuclear Information System (INIS)

    O'Neil, R.

    1978-01-01

    Nuclear power plants and, in particular, reactor pressure boundary components have unique reliability requirements, in that usually no significant redundancy is possible, and a single failure can give rise to possible widespread core damage and fission product release. Reliability may be required for availability or safety reasons, but in the case of the pressure boundary and certain other systems safety may dominate. Possible Safety and Reliability (S and R) criteria are proposed which would produce acceptable reactor design. Without some S and R requirement the designer has no way of knowing how far he must go in analysing his system or component, or whether his proposed solution is likely to gain acceptance. The paper shows how reliability targets for given components and systems can be individually considered against the derived S and R criteria at the design and construction stage. Since in the case of nuclear pressure boundary components there is often very little direct experience on which to base reliability studies, relevant non-nuclear experience is examined. (author)

  3. Proposed reliability cost model

    Science.gov (United States)

    Delionback, L. M.

    1973-01-01

    The research investigations which were involved in the study include: cost analysis/allocation, reliability and product assurance, forecasting methodology, systems analysis, and model-building. This is a classic example of an interdisciplinary problem, since the model-building requirements include the need for understanding and communication between technical disciplines on one hand, and the financial/accounting skill categories on the other. The systems approach is utilized within this context to establish a clearer and more objective relationship between reliability assurance and the subcategories (or subelements) that provide, or reenforce, the reliability assurance for a system. Subcategories are further subdivided as illustrated by a tree diagram. The reliability assurance elements can be seen to be potential alternative strategies, or approaches, depending on the specific goals/objectives of the trade studies. The scope was limited to the establishment of a proposed reliability cost-model format. The model format/approach is dependent upon the use of a series of subsystem-oriented CER's and sometimes possible CTR's, in devising a suitable cost-effective policy.

  4. Reliability Centered Maintenance - Methodologies

    Science.gov (United States)

    Kammerer, Catherine C.

    2009-01-01

    Journal article about Reliability Centered Maintenance (RCM) methodologies used by United Space Alliance, LLC (USA) in support of the Space Shuttle Program at Kennedy Space Center. The USA Reliability Centered Maintenance program differs from traditional RCM programs because various methodologies are utilized to take advantage of their respective strengths for each application. Based on operational experience, USA has customized the traditional RCM methodology into a streamlined lean logic path and has implemented the use of statistical tools to drive the process. USA RCM has integrated many of the L6S tools into both RCM methodologies. The tools utilized in the Measure, Analyze, and Improve phases of a Lean Six Sigma project lend themselves to application in the RCM process. All USA RCM methodologies meet the requirements defined in SAE JA 1011, Evaluation Criteria for Reliability-Centered Maintenance (RCM) Processes. The proposed article explores these methodologies.

  5. Improving the safety and reliability of Monju

    International Nuclear Information System (INIS)

    Itou, Kazumoto; Maeda, Hiroshi; Moriyama, Masatoshi

    1998-01-01

    A comprehensive safety review has been performed at Monju to determine why the Monju secondary sodium leakage accident occurred. We investigated how to improve the situation based on the results of the safety review. The safety review focused on five aspects of whether the facilities for dealing with a sodium leakage accident were adequate: the reliability of the detection method, the reliability of the method for preventing the spread of a sodium leak, whether the documented operating procedures are adequate, whether the quality assurance system, program, and actions were properly performed, and so on. As a result, we established for Monju a better method of dealing with sodium leakage accidents, rapid detection of sodium leakage, improvement of the sodium drain facilities, and ways to reduce damage to Monju systems after an accident. We also improved the operating procedures and quality assurance actions to increase the safety and reliability of Monju. (author)

  6. Reliability issues in PACS

    Science.gov (United States)

    Taira, Ricky K.; Chan, Kelby K.; Stewart, Brent K.; Weinberg, Wolfram S.

    1991-07-01

    Reliability is an increasing concern when moving PACS from the experimental laboratory to the clinical environment. Any system downtime may seriously affect patient care. The authors report on the several classes of errors encountered during the pre-clinical release of the PACS during the past several months and present the solutions implemented to handle them. The reliability issues discussed include: (1) environmental precautions, (2) database backups, (3) monitor routines of critical resources and processes, (4) hardware redundancy (networks, archives), and (5) development of a PACS quality control program.

  7. Reliability Parts Derating Guidelines

    Science.gov (United States)

    1982-06-01

    [Abstract garbled in the source; recoverable fragments cite "Reliability of GaAs Injection Lasers", De Loach, B. C., Jr., 1973 IEEE/OSA Conference on Laser Engineering and Applications, and IEEE Transactions on Reliability, Vol. R-23, No. 4, pp. 226-230, October 1974, along with a derating note concerning a part mounted on a 4-inch square, 0.250-inch thick aluminum alloy panel, whose mounting technique should be taken into consideration.]

  8. Columbus safety and reliability

    Science.gov (United States)

    Longhurst, F.; Wessels, H.

    1988-10-01

    Analyses carried out to ensure Columbus reliability, availability, and maintainability, and operational and design safety are summarized. Failure modes/effects/criticality is the main qualitative tool used. The main aspects studied are fault tolerance, hazard consequence control, risk minimization, human error effects, restorability, and safe-life design.

  9. Power transformer reliability modelling

    NARCIS (Netherlands)

    Schijndel, van A.

    2010-01-01

    Problem description Electrical power grids serve to transport and distribute electrical power with high reliability and availability at acceptable costs and risks. These grids play a crucial though preferably invisible role in supplying sufficient power in a convenient form. Today’s society has

  10. Designing reliability into accelerators

    International Nuclear Information System (INIS)

    Hutton, A.

    1992-08-01

    For the next generation of high performance, high average luminosity colliders, the ''factories,'' reliability engineering must be introduced right at the inception of the project and maintained as a central theme throughout the project. There are several aspects which will be addressed separately: Concept; design; motivation; management techniques; and fault diagnosis

  11. Proof tests on reliability

    International Nuclear Information System (INIS)

    Mishima, Yoshitsugu

    1983-01-01

    In order to obtain public understanding of nuclear power plants, tests should be carried out to prove the reliability and safety of present LWR plants. For example, the aseismicity of nuclear power plants must be verified by using a large-scale earthquake simulator. Reliability testing began in fiscal 1975, and the proof tests on steam generators and on PWR support and flexure pins against stress corrosion cracking have already been completed; the results have been highly appreciated internationally. The capacity factor of nuclear power plant operation in Japan rose to 80% in the summer of 1983, which, considering the period of regular inspection, means operation at almost full capacity. Japanese LWR technology has now risen to the top place in the world after having overcome its defects. The significance of the reliability tests is to secure functioning until the age limit is reached, to confirm the correct forecast of the deterioration process, to confirm the effectiveness of the remedies for defects, and to confirm the accuracy of predicting the behavior of facilities. The reliability of nuclear valves, fuel assemblies, heat affected zones in welding, reactor cooling pumps and electric instruments has been tested or is being tested. (Kako, I.)

  12. Reliability and code level

    NARCIS (Netherlands)

    Kasperski, M.; Geurts, C.P.W.

    2005-01-01

    The paper describes the work of the IAWE Working Group WBG - Reliability and Code Level, one of the International Codification Working Groups set up at ICWE10 in Copenhagen. The following topics are covered: sources of uncertainties in the design wind load, appropriate design target values for the

  13. Reliability of Plastic Slabs

    DEFF Research Database (Denmark)

    Thoft-Christensen, Palle

    1989-01-01

    In the paper it is shown how upper and lower bounds for the reliability of plastic slabs can be determined. For the fundamental case it is shown that optimal bounds of a deterministic and a stochastic analysis are obtained on the basis of the same failure mechanisms and the same stress fields....

  14. Reliability based structural design

    NARCIS (Netherlands)

    Vrouwenvelder, A.C.W.M.

    2014-01-01

    According to ISO 2394, structures shall be designed, constructed and maintained in such a way that they are suited for their use during the design working life in an economic way. To fulfil this requirement one needs insight into the risk and reliability under expected and non-expected actions. A

  15. Travel time reliability modeling.

    Science.gov (United States)

    2011-07-01

    This report includes three papers as follows: : 1. Guo F., Rakha H., and Park S. (2010), "A Multi-state Travel Time Reliability Model," : Transportation Research Record: Journal of the Transportation Research Board, n 2188, : pp. 46-54. : 2. Park S.,...

  16. Reliability and Model Fit

    Science.gov (United States)

    Stanley, Leanne M.; Edwards, Michael C.

    2016-01-01

    The purpose of this article is to highlight the distinction between the reliability of test scores and the fit of psychometric measurement models, reminding readers why it is important to consider both when evaluating whether test scores are valid for a proposed interpretation and/or use. It is often the case that an investigator judges both the…

  17. Parametric Mass Reliability Study

    Science.gov (United States)

    Holt, James P.

    2014-01-01

    The International Space Station (ISS) systems are designed based upon having redundant systems with replaceable orbital replacement units (ORUs). These ORUs are designed to be swapped out fairly quickly, but some are very large, and some are made up of many components. When an ORU fails, it is replaced on orbit with a spare; the failed unit is sometimes returned to Earth to be serviced and re-launched. Such a system is not feasible for a 500+ day long-duration mission beyond low Earth orbit. The components that make up these ORUs have mixed reliabilities. Components that make up the most mass, such as computer housings, pump casings, and the silicon board of PCBs, typically are the most reliable. Meanwhile, components that tend to fail the earliest, such as seals or gaskets, typically have a small mass. To better understand the problem, my project is to create a parametric model that relates both the mass of ORUs to reliability, as well as the mass of ORU subcomponents to reliability.

  18. Reliability Approach of a Compressor System using Reliability Block ...

    African Journals Online (AJOL)

    pc

    2018-03-05

    Mar 5, 2018 ... This paper presents a reliability analysis of such a system using reliability ... Keywords-compressor system, reliability, reliability block diagram, RBD .... the same structure has been kept with the three subsystems: air flow, oil flow and .... and Safety in Engineering Design", Springer, 2009. [3] P. O'Connor ...

  19. Reliability in the utility computing era: Towards reliable Fog computing

    DEFF Research Database (Denmark)

    Madsen, Henrik; Burtschy, Bernard; Albeanu, G.

    2013-01-01

    This paper considers current paradigms in computing and outlines the most important aspects concerning their reliability. The Fog computing paradigm, as a non-trivial extension of the Cloud, is considered and the reliability of networks of smart devices is discussed. Combining the reliability requirements of grid and cloud paradigms with the reliability requirements of networks of sensors and actuators, it follows that designing a reliable Fog computing platform is feasible.

  20. RTE - Reliability report 2016

    International Nuclear Information System (INIS)

    2017-06-01

    Every year, RTE produces a reliability report for the past year. This document lays out the main factors that affected the electrical power system's operational reliability in 2016 and the initiatives currently under way intended to ensure its reliability in the future. Within a context of the energy transition, changes to the European interconnected network mean that RTE has to adapt on an on-going basis. These changes include the increase in the share of renewables injecting an intermittent power supply into networks, resulting in a need for flexibility, and a diversification in the numbers of stakeholders operating in the energy sector and changes in the ways in which they behave. These changes are dramatically changing the structure of the power system of tomorrow and the way in which it will operate - particularly the way in which voltage and frequency are controlled, as well as the distribution of flows, the power system's stability, the level of reserves needed to ensure supply-demand balance, network studies, assets' operating and control rules, the tools used and the expertise of operators. The results obtained in 2016 are evidence of a globally satisfactory level of reliability for RTE's operations in somewhat demanding circumstances: more complex supply-demand balance management, cross-border schedules at interconnections indicating operation that is closer to its limits and - most noteworthy - having to manage a cold spell just as several nuclear power plants had been shut down. In a drive to keep pace with the changes expected to occur in these circumstances, RTE implemented numerous initiatives to ensure high levels of reliability: - maintaining investment levels of euro 1.5 billion per year; - increasing cross-zonal capacity at borders with our neighbouring countries, thus bolstering the security of our electricity supply; - implementing new mechanisms (demand response, capacity mechanism, interruptibility, etc.); - involvement in tests or projects

  1. Waste package reliability analysis

    International Nuclear Information System (INIS)

    Pescatore, C.; Sastre, C.

    1983-01-01

    Proof of future performance of a complex system such as a high-level nuclear waste package over a period of hundreds to thousands of years cannot be had in the ordinary sense of the word. The general method of probabilistic reliability analysis could provide an acceptable framework to identify, organize, and convey the information necessary to satisfy the criterion of reasonable assurance of waste package performance according to the regulatory requirements set forth in 10 CFR 60. General principles which may be used to evaluate the qualitative and quantitative reliability of a waste package design are indicated and illustrated with a sample calculation of a repository concept in basalt. 8 references, 1 table

  2. Accelerator reliability workshop

    Energy Technology Data Exchange (ETDEWEB)

    Hardy, L; Duru, Ph; Koch, J M; Revol, J L; Van Vaerenbergh, P; Volpe, A M; Clugnet, K; Dely, A; Goodhew, D

    2002-07-01

    About 80 experts attended this workshop, which brought together all accelerator communities: accelerator driven systems, X-ray sources, medical and industrial accelerators, spallation sources projects (American and European), nuclear physics, etc. With newly proposed accelerator applications such as nuclear waste transmutation and the replacement of nuclear power plants, reliability has now become a number one priority for accelerator designers. Every part of an accelerator facility, from cryogenic systems to data storage via RF systems, is concerned with reliability. This aspect is now taken into account in the design/budget phase, especially for projects whose goal is to reach no more than 10 interruptions per year. This document gathers the slides but not the proceedings of the workshop.

  3. Human Reliability Program Workshop

    Energy Technology Data Exchange (ETDEWEB)

    Landers, John; Rogers, Erin; Gerke, Gretchen

    2014-05-18

    A Human Reliability Program (HRP) is designed to protect national security as well as worker and public safety by continuously evaluating the reliability of those who have access to sensitive materials, facilities, and programs. Some elements of a site HRP include systematic (1) supervisory reviews, (2) medical and psychological assessments, (3) management evaluations, (4) personnel security reviews, and (5) training of HRP staff and critical positions. Over the years of implementing an HRP, the Department of Energy (DOE) has faced various challenges and overcome obstacles. During this 4-day activity, participants will examine programs that mitigate threats to nuclear security and the insider threat to include HRP, Nuclear Security Culture (NSC) Enhancement, and Employee Assistance Programs. The focus will be to develop an understanding of the need for a systematic HRP and to discuss challenges and best practices associated with mitigating the insider threat.

  4. Reliability and construction control

    Directory of Open Access Journals (Sweden)

    Sherif S. AbdelSalam

    2016-06-01

    Full Text Available The goal of this study was to determine the most reliable and efficient combination of design and construction methods required for vibro piles. For a wide range of static and dynamic formulas, the reliability-based resistance factors were calculated using EGYPT database, which houses load test results for 318 piles. The analysis was extended to introduce a construction control factor that determines the variation between the pile nominal capacities calculated using static versus dynamic formulae. From the major outcomes, the lowest coefficient of variation is associated with Davisson’s criterion, and the resistance factors calculated for the AASHTO method are relatively high compared with other methods. Additionally, the CPT-Nottingham and Schmertmann method provided the most economic design. Recommendations related to a pile construction control factor were also presented, and it was found that utilizing the factor can significantly reduce variations between calculated and actual capacities.

  5. Scyllac equipment reliability analysis

    International Nuclear Information System (INIS)

    Gutscher, W.D.; Johnson, K.J.

    1975-01-01

    Most of the failures in Scyllac can be related to crowbar trigger cable faults. A new cable has been designed, procured, and is currently undergoing evaluation. When the new cable has been proven, it will be worked into the system as quickly as possible without causing too much additional down time. The cable-tip problem may not be easy or even desirable to solve. A tightly fastened permanent connection that maximizes contact area would be more reliable than the plug-in type of connection in use now, but it would make system changes and repairs much more difficult. The balance of the failures have such a low occurrence rate that they do not cause much down time and no major effort is underway to eliminate them. Even though Scyllac was built as an experimental system and has many thousands of components, its reliability is very good. Because of this the experiment has been able to progress at a reasonable pace

  6. Improving Power Converter Reliability

    DEFF Research Database (Denmark)

    Ghimire, Pramod; de Vega, Angel Ruiz; Beczkowski, Szymon

    2014-01-01

    The real-time junction temperature monitoring of a high-power insulated-gate bipolar transistor (IGBT) module is important to increase the overall reliability of power converters for industrial applications. This article proposes a new method to measure the on-state collector-emitter voltage of a high-power IGBT module during converter operation, which may play a vital role in improving the reliability of the power converters. The measured voltage is used to estimate the module average junction temperature of the high and low-voltage side of a half-bridge IGBT separately in every fundamental cycle; the voltage is measured in a wind power converter at a low fundamental frequency. To illustrate more, the test method as well as the performance of the measurement circuit are also presented. This measurement is also useful to indicate failure mechanisms such as bond wire lift-off and solder layer degradation.

  7. Accelerator reliability workshop

    International Nuclear Information System (INIS)

    Hardy, L.; Duru, Ph.; Koch, J.M.; Revol, J.L.; Van Vaerenbergh, P.; Volpe, A.M.; Clugnet, K.; Dely, A.; Goodhew, D.

    2002-01-01

    About 80 experts attended this workshop, which brought together all accelerator communities: accelerator driven systems, X-ray sources, medical and industrial accelerators, spallation sources projects (American and European), nuclear physics, etc. With newly proposed accelerator applications such as nuclear waste transmutation and the replacement of nuclear power plants, reliability has now become a number one priority for accelerator designers. Every part of an accelerator facility, from cryogenic systems to data storage via RF systems, is concerned with reliability. This aspect is now taken into account in the design/budget phase, especially for projects whose goal is to reach no more than 10 interruptions per year. This document gathers the slides but not the proceedings of the workshop.

  8. Safety and reliability assessment

    International Nuclear Information System (INIS)

    1979-01-01

    This report contains the papers delivered at the course on safety and reliability assessment held at the CSIR Conference Centre, Scientia, Pretoria. The following topics were discussed: safety standards; licensing; biological effects of radiation; what is a PWR; safety principles in the design of a nuclear reactor; radio-release analysis; quality assurance; the staffing, organisation and training for a nuclear power plant project; event trees, fault trees and probability; Automatic Protective Systems; sources of failure-rate data; interpretation of failure data; synthesis and reliability; quantification of human error in man-machine systems; dispersion of noxious substances through the atmosphere; criticality aspects of enrichment and recovery plants; and risk and hazard analysis. Extensive examples are given as well as case studies

  9. Reliability of Circumplex Axes

    Directory of Open Access Journals (Sweden)

    Micha Strack

    2013-06-01

    Full Text Available We present a confirmatory factor analysis (CFA) procedure for computing the reliability of circumplex axes. The tau-equivalent CFA variance decomposition model estimates five variance components: general factor, axes, scale-specificity, block-specificity, and item-specificity. Only the axes variance component is used for reliability estimation. We apply the model to six circumplex types and 13 instruments assessing interpersonal and motivational constructs—Interpersonal Adjective List (IAL), Interpersonal Adjective Scales (revised; IAS-R), Inventory of Interpersonal Problems (IIP), Impact Messages Inventory (IMI), Circumplex Scales of Interpersonal Values (CSIV), Support Action Scale Circumplex (SAS-C), Interaction Problems With Animals (IPI-A), Team Role Circle (TRC), Competing Values Leadership Instrument (CV-LI), Love Styles, Organizational Culture Assessment Instrument (OCAI), Customer Orientation Circle (COC), and System for Multi-Level Observation of Groups (behavioral adjectives; SYMLOG)—in 17 German-speaking samples (29 subsamples), grouped by self-report, other report, and metaperception assessments. The general factor accounted for a proportion ranging from 1% to 48% of the item variance, the axes component for 2% to 30%, and scale specificity for 1% to 28%, respectively. Reliability estimates varied considerably from .13 to .92. An application of the Nunnally and Bernstein formula proposed by Markey, Markey, and Tinsley overestimated axes reliabilities in cases of large scale specificities but otherwise works effectively. Contemporary circumplex evaluations such as Tracey’s RANDALL are sensitive to the ratio of the axes and scale-specificity components. In contrast, the proposed model isolates both components.

  10. The cost of reliability

    International Nuclear Information System (INIS)

    Ilic, M.

    1998-01-01

    In this article the restructuring process under way in the US power industry is revisited from the point of view of transmission system provision and reliability. While in the past the cost of reliability was rolled into the average cost of electricity to all customers, it is not so obvious how this cost is managed in the new industry. A new MIT approach to transmission pricing is suggested here as a possible solution.

  11. Software reliability studies

    Science.gov (United States)

    Hoppa, Mary Ann; Wilson, Larry W.

    1994-01-01

    There are many software reliability models which try to predict future performance of software based on data generated by the debugging process. Our research has shown that by improving the quality of the data one can greatly improve the predictions. We are working on methodologies which control some of the randomness inherent in the standard data generation processes in order to improve the accuracy of predictions. Our contribution is twofold in that we describe an experimental methodology using a data structure called the debugging graph and apply this methodology to assess the robustness of existing models. The debugging graph is used to analyze the effects of various fault recovery orders on the predictive accuracy of several well-known software reliability algorithms. We found that, along a particular debugging path in the graph, the predictive performance of different models can vary greatly. Similarly, just because a model 'fits' a given path's data well does not guarantee that the model would perform well on a different path. Further we observed bug interactions and noted their potential effects on the predictive process. We saw that not only do different faults fail at different rates, but that those rates can be affected by the particular debugging stage at which the rates are evaluated. Based on our experiment, we conjecture that the accuracy of a reliability prediction is affected by the fault recovery order as well as by fault interaction.

  12. Investment in new product reliability

    International Nuclear Information System (INIS)

    Murthy, D.N.P.; Rausand, M.; Virtanen, S.

    2009-01-01

    Product reliability is of great importance to both manufacturers and customers. Building reliability into a new product is costly, but the consequences of inadequate product reliability can be costlier. This implies that manufacturers need to decide on the optimal investment in new product reliability by achieving a suitable trade-off between the two costs. This paper develops a framework and proposes an approach to help manufacturers decide on the investment in new product reliability.

  13. Weibull distribution in reliability data analysis in nuclear power plant

    International Nuclear Information System (INIS)

    Ma Yingfei; Zhang Zhijian; Zhang Min; Zheng Gangyang

    2015-01-01

    Reliability is an important issue affecting each stage of the life cycle, ranging from the birth to the death of a product or a system. Reliability engineering includes equipment failure data processing, quantitative assessment of system reliability, maintenance, etc. Reliability data refers to the variety of data that describe the reliability of a system or component during its operation. These data may be in the form of numbers, graphics, symbols, texts and curves. Quantitative reliability assessment is the task of reliability data analysis. It provides the information needed to prevent, detect, and correct defects in the reliability design. Reliability data analysis proceeds along the various stages of the product life cycle and the associated reliability activities. Reliability data of Systems, Structures and Components (SSCs) in nuclear power plants is a key factor in probabilistic safety assessment (PSA), reliability centered maintenance and life cycle management. The Weibull distribution is widely used in reliability engineering, failure analysis and industrial engineering, for example to represent manufacturing and delivery times. It is commonly used to model time to fail, time to repair and material strength. In this paper, an improved Weibull distribution is introduced to analyze the reliability data of the SSCs in nuclear power plants. An example is given in the paper to present the result of the new method. The Weibull distribution fits reliability data for mechanical equipment in nuclear power plants very well, and it is a widely used mathematical model for reliability analysis. The most commonly used forms are the two-parameter and three-parameter Weibull distributions. Through comparison and analysis, the three-parameter Weibull distribution fits the data better: it can reflect the reliability characteristics of the equipment and is closer to the actual situation. (author)
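
    As a hedged illustration of the two-parameter versus three-parameter comparison described above, the snippet below fits both forms to a batch of failure times and compares log-likelihoods. The failure times are synthetic and scipy is assumed to be available, so this is a sketch of the general technique rather than the paper's own procedure.

    ```python
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(0)
    # Synthetic times-to-failure (hours); a real analysis would use plant maintenance records
    t = stats.weibull_min.rvs(c=1.8, loc=200.0, scale=1500.0, size=60, random_state=rng)

    # Two-parameter fit: shape and scale, location forced to zero
    c2, loc2, scale2 = stats.weibull_min.fit(t, floc=0.0)
    # Three-parameter fit: shape, location (failure-free time), and scale all estimated
    c3, loc3, scale3 = stats.weibull_min.fit(t)

    for label, (c, loc, scale) in {"2-parameter": (c2, loc2, scale2),
                                   "3-parameter": (c3, loc3, scale3)}.items():
        log_lik = np.sum(stats.weibull_min.logpdf(t, c, loc=loc, scale=scale))
        print(f"{label}: shape={c:.2f}, loc={loc:.1f}, scale={scale:.1f}, log-lik={log_lik:.1f}")
    ```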

  14. Evaluation of ECT reliability for axial ODSCC in steam generator tubes

    International Nuclear Information System (INIS)

    Lee, Jae Bong; Park, Jai Hak; Kim, Hong Deok; Chung, Han Sub

    2010-01-01

    The integrity of steam generator tubes is usually evaluated based on eddy current test (ECT) results. Because the detection capability of ECT is not perfect, not all of the physical flaws that actually exist in steam generator tubes can be detected by ECT inspection. Therefore it is very important to analyze ECT reliability in the integrity assessment of steam generators. The reliability of an ECT inspection system is divided into the reliability of the inspection technique and the reliability of the analyst. The reliability of ECT results is likewise divided into the reliability of sizing and the reliability of detection. The reliability of ECT sizing is often characterized as a linear regression model relating true flaw size data to measured flaw size data. The reliability of detection is characterized in terms of probability of detection (POD), which is expressed as a function of flaw size. In this paper the reliability of an ECT inspection system is analyzed quantitatively. The POD of the ECT inspection system for axial outside diameter stress corrosion cracks (ODSCC) in steam generator tubes is evaluated. Using a log-logistic regression model, POD is evaluated from hit (detection) and miss (no detection) binary data obtained from destructive and non-destructive inspections of cracked tubes. Crack length and crack depth are considered as variables in the multivariate log-logistic regression, and their effects on detection capacity are assessed using a two-dimensional POD (2-D POD) surface. The reliability of detection is also analyzed using the POD for the inspection technique (POD_T) and the POD for the analyst (POD_A).
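
    A minimal sketch of how a POD curve can be fitted from hit/miss data with a logistic model in log flaw size (one-dimensional, i.e. crack length only, rather than the paper's two-dimensional length-depth surface). The data are invented and statsmodels is assumed to be available.

    ```python
    import numpy as np
    import statsmodels.api as sm

    # Invented hit/miss data: crack length in mm, 1 = detected, 0 = missed
    length = np.array([0.5, 0.8, 1.0, 1.2, 1.5, 2.0, 2.5, 3.0, 3.5, 4.0,
                       0.6, 0.9, 1.1, 1.4, 1.8, 2.2, 2.8, 3.2, 3.8, 4.5])
    hit    = np.array([0,   0,   0,   1,   0,   1,   1,   1,   1,   1,
                       0,   0,   1,   0,   1,   1,   1,   1,   1,   1])

    # Log-logistic POD model: logit(POD) = b0 + b1 * ln(a)
    X = sm.add_constant(np.log(length))
    fit = sm.Logit(hit, X).fit(disp=False)
    b0, b1 = fit.params

    def pod(a):
        return 1.0 / (1.0 + np.exp(-(b0 + b1 * np.log(a))))

    # Flaw size with 90% probability of detection (a90)
    a90 = np.exp((np.log(0.9 / 0.1) - b0) / b1)
    print(f"POD(1 mm) = {pod(1.0):.2f}, POD(3 mm) = {pod(3.0):.2f}, a90 ~ {a90:.2f} mm")
    ```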

  15. Nuclear performance and reliability

    International Nuclear Information System (INIS)

    Rothwell, G.

    1993-01-01

    If fewer forced outages are a sign of improved safety, nuclear power plants have become safer and more productive over time. There has been a significant improvement in nuclear power plant performance, due largely to a decline in the forced outage rate and a dramatic drop in the average number of forced outages per fuel cycle. To encourage further increases in performance, regulatory incentive schemes should reward reactor operators for improved reliability and safety, as well as for improved performance.

  16. [How Reliable is Neuronavigation?].

    Science.gov (United States)

    Stieglitz, Lennart Henning

    2016-02-17

    Neuronavigation plays a central role in modern neurosurgery. It allows instruments and three-dimensional image data to be visualized intraoperatively and supports spatial orientation. It thus makes it possible to reduce surgical risks and to speed up complex surgical procedures. The growing availability and importance of neuronavigation make it clear how relevant it is to know about its reliability and accuracy. Different factors may influence the accuracy during surgery without being noticed and mislead the surgeon. Besides the best possible optimization of the systems themselves, a good knowledge of their weaknesses is mandatory for every neurosurgeon.

  17. The value of reliability

    DEFF Research Database (Denmark)

    Fosgerau, Mogens; Karlström, Anders

    2010-01-01

    We derive the value of reliability in the scheduling of an activity of random duration, such as travel under congested conditions. Using a simple formulation of scheduling utility, we show that the maximal expected utility is linear in the mean and standard deviation of trip duration, regardless of the form of the standardised distribution of trip durations. This insight provides a unification of the scheduling model and models that include the standard deviation of trip duration directly as an argument in the cost or utility function. The results generalise approximately to the case where the mean....
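
    In schematic form (a paraphrase of the stated result rather than the authors' exact notation), the maximal expected utility can be written as

    \[
      E\bigl[U^{*}\bigr] \;=\; \theta_{0} \;-\; \theta_{\mu}\,\mu_{T} \;-\; \theta_{\sigma}\,\sigma_{T},
    \]

    where \(\mu_{T}\) and \(\sigma_{T}\) are the mean and standard deviation of trip duration, and the coefficients \(\theta_{\mu}\) and \(\theta_{\sigma}\) depend only on the scheduling parameters and on the standardised distribution of trip durations; \(\theta_{\sigma}\) is then read as the value of reliability.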

  18. Validity of ultrasonography and measures of adult shoulder function and reliability of ultrasonography in detecting shoulder synovitis in patients with rheumatoid arthritis using magnetic resonance imaging as a gold standard.

    LENUS (Irish Health Repository)

    Bruyn, G A W

    2010-08-01

    To assess the intra- and interobserver reproducibility of musculoskeletal ultrasonography (US) in detecting inflammatory shoulder changes in patients with rheumatoid arthritis, and to determine the agreement between US and the Shoulder Pain and Disability Index (SPADI) and the Disabilities of the Arm, Shoulder, and Hand (DASH) questionnaire, using magnetic resonance imaging (MRI) as a gold standard.

  19. Interactive reliability assessment using an integrated reliability data bank

    International Nuclear Information System (INIS)

    Allan, R.N.; Whitehead, A.M.

    1986-01-01

    The logical structure, techniques and practical application of a computer-aided technique based on a microcomputer using floppy disc Random Access Files is described. This interactive computational technique is efficient if the reliability prediction program is coupled directly to a relevant source of data to create an integrated reliability assessment/reliability data bank system. (DG)

  20. Load Control System Reliability

    Energy Technology Data Exchange (ETDEWEB)

    Trudnowski, Daniel [Montana Tech of the Univ. of Montana, Butte, MT (United States)

    2015-04-03

    This report summarizes the results of the Load Control System Reliability project (DOE Award DE-FC26-06NT42750). The original grant was awarded to Montana Tech in April 2006. Follow-on DOE awards and expansions to the project scope occurred in August 2007, January 2009, April 2011, and April 2013. In addition to the DOE monies, the project also included matching funds from the states of Montana and Wyoming. Project participants included Montana Tech; the University of Wyoming; Montana State University; NorthWestern Energy, Inc.; and MSE. Research focused on two areas: real-time power-system load control methodologies; and power-system measurement-based stability-assessment operation and control tools. The majority of effort was focused on area 2. Results from the research include: development of fundamental power-system dynamic concepts, control schemes, and signal-processing algorithms; many papers (including two prize papers) in leading journals and conferences and leadership of IEEE activities; one patent; participation in major actual-system testing in the western North American power system; prototype power-system operation and control software installed and tested at three major North American control centers; and the incubation of a new commercial-grade operation and control software tool. Work under this grant certainly supported the DOE-OE goals in the area of “Real Time Grid Reliability Management.”

  1. Microprocessor hardware reliability

    Energy Technology Data Exchange (ETDEWEB)

    Wright, R I

    1982-01-01

    Microprocessor-based technology has had an impact in nearly every area of industrial electronics and many applications have important safety implications. Microprocessors are being used for the monitoring and control of hazardous processes in the chemical, oil and power generation industries, for the control and instrumentation of aircraft and other transport systems and for the control of industrial machinery. Even in the field of nuclear reactor protection, where designers are particularly conservative, microprocessors are used to implement certain safety functions and may play increasingly important roles in protection systems in the future. Where microprocessors are simply replacing conventional hard-wired control and instrumentation systems no new hazards are created by their use. In the field of robotics, however, the microprocessor has opened up a totally new technology and with it has created possible new and as yet unknown hazards. The paper discusses some of the design and manufacturing techniques which may be used to enhance the reliability of microprocessor based systems and examines the available reliability data on lsi/vlsi microcircuits. 12 references.

  2. Supply chain reliability modelling

    Directory of Open Access Journals (Sweden)

    Eugen Zaitsev

    2012-03-01

    Full Text Available Background: Today it is virtually impossible to operate alone on the international level in the logistics business. This promotes the establishment and development of new integrated business entities - logistic operators. However, such cooperation within a supply chain also creates many problems related to supply chain reliability as well as to the optimization of supply planning. The aim of this paper was to develop and formulate a mathematical model and algorithms for finding the optimum supply plan, using an economic criterion and a model for evaluating the probability of failure-free operation of the supply chain. Methods: The mathematical model and algorithms for finding the optimum supply plan were developed and formulated using an economic criterion and the model for evaluating the probability of failure-free operation of the supply chain. Results and conclusions: The problem of ensuring failure-free performance of a goods supply channel analyzed in the paper is characteristic of distributed network systems that make active use of business process outsourcing technologies. The complex planning problem occurring in such systems, which requires taking into account the consumer's requirements for failure-free performance in terms of supply volumes and correctness, can be reduced to a relatively simple linear programming problem through logical analysis of the structures. The sequence of operations that should be taken into account when planning supplies with allowance for the supplier's functional reliability was also presented.

  3. Data processing of qualitative results from an interlaboratory comparison for the detection of "Flavescence dorée" phytoplasma: How the use of statistics can improve the reliability of the method validation process in plant pathology.

    Science.gov (United States)

    Chabirand, Aude; Loiseau, Marianne; Renaudin, Isabelle; Poliakoff, Françoise

    2017-01-01

    A working group established in the framework of the EUPHRESCO European collaborative project aimed to compare and validate diagnostic protocols for the detection of "Flavescence dorée" (FD) phytoplasma in grapevines. Seven molecular protocols were compared in an interlaboratory test performance study where each laboratory had to analyze the same panel of samples consisting of DNA extracts prepared by the organizing laboratory. The tested molecular methods consisted of universal and group-specific real-time and end-point nested PCR tests. Different statistical approaches were applied to this collaborative study. Firstly, there was the standard statistical approach consisting in analyzing samples which are known to be positive and samples which are known to be negative and reporting the proportion of false-positive and false-negative results to respectively calculate diagnostic specificity and sensitivity. This approach was supplemented by the calculation of repeatability and reproducibility for qualitative methods based on the notions of accordance and concordance. Other new approaches were also implemented, based, on the one hand, on the probability of detection model, and, on the other hand, on Bayes' theorem. These various statistical approaches are complementary and give consistent results. Their combination, and in particular, the introduction of new statistical approaches give overall information on the performance and limitations of the different methods, and are particularly useful for selecting the most appropriate detection scheme with regards to the prevalence of the pathogen. Three real-time PCR protocols (methods M4, M5 and M6 respectively developed by Hren (2007), Pelletier (2009) and under patent oligonucleotides) achieved the highest levels of performance for FD phytoplasma detection. This paper also addresses the issue of indeterminate results and the identification of outlier results. The statistical tools presented in this paper and their
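
    As a small, hedged illustration of the Bayes'-theorem step mentioned above, the snippet below converts an assumed diagnostic sensitivity and specificity into predictive values at different prevalences of the pathogen; the performance figures are placeholders, not results from the study.

    ```python
    def predictive_values(sensitivity, specificity, prevalence):
        """Bayes' theorem: positive and negative predictive values at a given prevalence."""
        ppv = (sensitivity * prevalence) / (
            sensitivity * prevalence + (1 - specificity) * (1 - prevalence))
        npv = (specificity * (1 - prevalence)) / (
            specificity * (1 - prevalence) + (1 - sensitivity) * prevalence)
        return ppv, npv

    # Placeholder performance figures for a hypothetical real-time PCR protocol
    sens, spec = 0.95, 0.98
    for prev in (0.01, 0.10, 0.50):
        ppv, npv = predictive_values(sens, spec, prev)
        print(f"prevalence={prev:4.2f}  PPV={ppv:.3f}  NPV={npv:.3f}")
    ```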

  4. Data processing of qualitative results from an interlaboratory comparison for the detection of "Flavescence dorée" phytoplasma: How the use of statistics can improve the reliability of the method validation process in plant pathology.

    Directory of Open Access Journals (Sweden)

    Aude Chabirand

    Full Text Available A working group established in the framework of the EUPHRESCO European collaborative project aimed to compare and validate diagnostic protocols for the detection of "Flavescence dorée" (FD) phytoplasma in grapevines. Seven molecular protocols were compared in an interlaboratory test performance study where each laboratory had to analyze the same panel of samples consisting of DNA extracts prepared by the organizing laboratory. The tested molecular methods consisted of universal and group-specific real-time and end-point nested PCR tests. Different statistical approaches were applied to this collaborative study. Firstly, there was the standard statistical approach consisting in analyzing samples which are known to be positive and samples which are known to be negative and reporting the proportion of false-positive and false-negative results to respectively calculate diagnostic specificity and sensitivity. This approach was supplemented by the calculation of repeatability and reproducibility for qualitative methods based on the notions of accordance and concordance. Other new approaches were also implemented, based, on the one hand, on the probability of detection model, and, on the other hand, on Bayes' theorem. These various statistical approaches are complementary and give consistent results. Their combination, and in particular, the introduction of new statistical approaches give overall information on the performance and limitations of the different methods, and are particularly useful for selecting the most appropriate detection scheme with regards to the prevalence of the pathogen. Three real-time PCR protocols (methods M4, M5 and M6 respectively developed by Hren (2007), Pelletier (2009) and under patent oligonucleotides) achieved the highest levels of performance for FD phytoplasma detection. This paper also addresses the issue of indeterminate results and the identification of outlier results. The statistical tools presented in this paper

  5. Reliability of salivary testosterone measurements in diagnosis of Polycystic Ovarian Syndrome

    Directory of Open Access Journals (Sweden)

    Omnia Youssef

    2010-07-01

    Conclusion: Determination of salivary testosterone is a reliable method to detect changes in the concentration of available biologically active testosterone in the serum. Salivary testosterone provides a sensitive, simple, reliable, non-invasive and uncomplicated diagnostic approach for PCOS.

  6. OSS reliability measurement and assessment

    CERN Document Server

    Yamada, Shigeru

    2016-01-01

    This book analyses quantitative open source software (OSS) reliability assessment and its applications, focusing on three major topic areas: the Fundamentals of OSS Quality/Reliability Measurement and Assessment; the Practical Applications of OSS Reliability Modelling; and Recent Developments in OSS Reliability Modelling. Offering an ideal reference guide for graduate students and researchers in reliability for open source software (OSS) and modelling, the book introduces several methods of reliability assessment for OSS including component-oriented reliability analysis based on analytic hierarchy process (AHP), analytic network process (ANP), and non-homogeneous Poisson process (NHPP) models, the stochastic differential equation models and hazard rate models. These measurement and management technologies are essential to producing and maintaining quality/reliable systems using OSS.
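
    By way of example only, one NHPP form in this family is the exponential (Goel-Okumoto type) mean value function m(t) = a(1 - exp(-b t)). The sketch below fits it to synthetic cumulative failure counts with scipy's curve_fit; both the data and the fitting route are assumptions made for illustration, not the book's own procedure.

    ```python
    import numpy as np
    from scipy.optimize import curve_fit

    def mean_value(t, a, b):
        """Goel-Okumoto NHPP: expected cumulative number of failures by time t."""
        return a * (1.0 - np.exp(-b * t))

    # Synthetic cumulative failure counts per testing week
    weeks = np.arange(1, 13, dtype=float)
    cum_failures = np.array([5, 9, 13, 16, 18, 20, 21, 22, 23, 23, 24, 24], dtype=float)

    (a_hat, b_hat), _ = curve_fit(mean_value, weeks, cum_failures, p0=(30.0, 0.1))
    remaining = a_hat - mean_value(weeks[-1], a_hat, b_hat)
    print(f"estimated total faults a = {a_hat:.1f}, detection rate b = {b_hat:.3f}")
    print(f"expected residual faults after week 12 ~ {remaining:.1f}")
    ```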

  7. Transit ridership, reliability, and retention.

    Science.gov (United States)

    2008-10-01

    This project explores two major components that affect transit ridership: travel time reliability and rider retention. It has been recognized that transit travel time reliability may have a significant impact on attractiveness of transit to many ...

  8. Travel reliability inventory for Chicago.

    Science.gov (United States)

    2013-04-01

    The overarching goal of this research project is to enable state DOTs to document and monitor the reliability performance of their highway networks. To this end, a computer tool, TRIC, was developed to produce travel reliability inventories from ...

  9. 2017 NREL Photovoltaic Reliability Workshop

    Energy Technology Data Exchange (ETDEWEB)

    Kurtz, Sarah [National Renewable Energy Laboratory (NREL), Golden, CO (United States)

    2017-08-15

    NREL's Photovoltaic (PV) Reliability Workshop (PVRW) brings together PV reliability experts to share information, leading to the improvement of PV module reliability. Such improvement reduces the cost of solar electricity and promotes investor confidence in the technology -- both critical goals for moving PV technologies deeper into the electricity marketplace.

  10. AECL's reliability and maintainability program

    International Nuclear Information System (INIS)

    Wolfe, W.A.; Nieuwhof, G.W.E.

    1976-05-01

    AECL's reliability and maintainability program for nuclear generating stations is described. How the various resources of the company are organized to design and construct stations that operate reliably and safely is shown. Reliability and maintainability includes not only special mathematically oriented techniques, but also the technical skills and organizational abilities of the company. (author)

  11. Models on reliability of non-destructive testing

    International Nuclear Information System (INIS)

    Simola, K.; Pulkkinen, U.

    1998-01-01

    The reliability of ultrasonic inspections has been studied in, e.g., the international PISC (Programme for the Inspection of Steel Components) exercises. These exercises have produced a large amount of information on the effect of various factors on the reliability of inspections. The information obtained from reliability experiments is used to model the dependence of the flaw detection probability on various factors and to evaluate the performance of inspection equipment, including the sizing accuracy. The information from experiments is utilised most effectively when mathematical models are applied. Here, some statistical models for the reliability of non-destructive tests are introduced. In order to demonstrate the use of inspection reliability models, they have been applied to the inspection results for intergranular stress corrosion cracking (IGSCC) type flaws in the PISC III exercise (PISC 1995). The models are applied both to the flaw detection frequency data of all inspection teams and to the flaw sizing data of one participating team. (author)

  12. Detection block

    International Nuclear Information System (INIS)

    Bezak, A.

    1987-01-01

    A diagram is given of a detection block used for monitoring burnup of nuclear reactor fuel. A shielding block is an important part of the detection block. It stabilizes the fuel assembly in the fixing hole in front of a collimator where a suitable gamma beam is defined for gamma spectrometry determination of fuel burnup. The detector case and a neutron source case are placed on opposite sides of the fixing hole. For neutron measurement for which the water in the tank is used as a moderator, the neutron detector-fuel assembly configuration is selected such that neutrons from spontaneous fission and neutrons induced with the neutron source can both be measured. The patented design of the detection block permits longitudinal travel and rotation of the fuel assembly to any position, and thus more reliable determination of nuclear fuel burnup. (E.S.). 1 fig

  13. Electronics reliability calculation and design

    CERN Document Server

    Dummer, Geoffrey W A; Hiller, N

    1966-01-01

    Electronics Reliability-Calculation and Design provides an introduction to the fundamental concepts of reliability. The increasing complexity of electronic equipment has made problems in designing and manufacturing a reliable product more and more difficult. Specific techniques have been developed that enable designers to integrate reliability into their products, and reliability has become a science in its own right. The book begins with a discussion of basic mathematical and statistical concepts, including arithmetic mean, frequency distribution, median and mode, scatter or dispersion of mea

  14. Mathematical reliability an expository perspective

    CERN Document Server

    Mazzuchi, Thomas; Singpurwalla, Nozer

    2004-01-01

    In this volume consideration was given to more advanced theoretical approaches and novel applications of reliability to ensure that topics having a futuristic impact were specifically included. Topics like finance, forensics, information, and orthopedics, as well as the more traditional reliability topics were purposefully undertaken to make this collection different from the existing books in reliability. The entries have been categorized into seven parts, each emphasizing a theme that seems poised for the future development of reliability as an academic discipline with relevance. The seven parts are networks and systems; recurrent events; information and design; failure rate function and burn-in; software reliability and random environments; reliability in composites and orthopedics, and reliability in finance and forensics. Embedded within the above are some of the other currently active topics such as causality, cascading, exchangeability, expert testimony, hierarchical modeling, optimization and survival...

  15. Of plants and reliability

    International Nuclear Information System (INIS)

    Schneider Horst

    2009-01-01

    Behind the political statements made about the transformer event at the Kruemmel nuclear power station (KKK) in the summer of 2009 there are fundamental issues of atomic law. Pursuant to Articles 20 and 28 of its Basic Law, Germany is a state in which the rule of law applies. Consequently, the aspects of atomic law associated with the incident merit a closer look, all the more so as the items concerned have been known for many years. Important aspects in the debate about the Kruemmel nuclear power plant are the fact that the transformer is considered part of the nuclear power station under atomic law and thus a ''plant'' subject to surveillance by the nuclear regulatory agencies, on the one hand, and the reliability under atomic law of the operator and the executive personnel responsible, on the other hand. Both ''plant'' and ''reliability'' are terms focusing on nuclear safety. Hence the question to what extent safety was affected in the Kruemmel incident. The classification of the event as 0 = no or only a very slight safety impact on the INES scale (INES = International Nuclear Event Scale) should not be used to put aside the safety issue once and for all. Points of fact and their technical significance must be considered prior to any legal assessment. Legal assessments and regulations are associated with facts and circumstances. Any legal examination is based on the facts as determined and elucidated. Any other procedure would be tantamount to an inadmissible legal advance conviction. Now, what is the position of political statements, i.e. political assessments and political responsibility? If everything is done the correct way, they come at the end, after exploration of the facts and evaluation under applicable law. Sometimes things are handled differently, with consequences which are not very helpful. In the light of the provisions about the rule of law as laid down in the Basic Law, the new federal government should be made to observe the proper sequence of

  16. Evaluation of MHTGR fuel reliability

    International Nuclear Information System (INIS)

    Wichner, R.P.; Barthold, W.P.

    1992-07-01

    Modular High-Temperature Gas-Cooled Reactor (MHTGR) concepts that house the reactor vessel in a tight but unsealed reactor building place heightened importance on the reliability of the fuel particle coatings as fission product barriers. Though accident consequence analyses continue to show favorable results, the increased dependence on one type of barrier, in addition to a number of other factors, has caused the Nuclear Regulatory Commission (NRC) to consider conservative assumptions regarding fuel behavior. For this purpose, the concept termed ''weak fuel'' has been proposed on an interim basis. ''Weak fuel'' is a penalty imposed on consequence analyses whereby the fuel is assumed to respond less favorably to environmental conditions than predicted by behavioral models. The rationale for adopting this penalty, as well as conditions that would permit its reduction or elimination, are examined in this report. The evaluation includes an examination of possible fuel-manufacturing defects, quality-control procedures for defect detection, and the mechanisms by which fuel defects may lead to failure

  17. Multinomial-exponential reliability function: a software reliability model

    International Nuclear Information System (INIS)

    Saiz de Bustamante, Amalio; Saiz de Bustamante, Barbara

    2003-01-01

    The multinomial-exponential reliability function (MERF) was developed during a detailed study of the software failure/correction processes. Later on, MERF was approximated by a much simpler exponential reliability function (EARF), which keeps most of MERF's mathematical properties, so the two functions together make up a single reliability model. The reliability model MERF/EARF considers the software failure process as a non-homogeneous Poisson process (NHPP) and the repair (correction) process as a multinomial distribution. The model supposes that both processes are statistically independent. The paper discusses the model's theoretical basis, its mathematical properties and its application to software reliability. Nevertheless, applications of the model to the inspection and maintenance of physical systems are also foreseen. The paper includes a complete numerical example of the model application to a software reliability analysis.
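
    The MERF itself is not reproduced in the abstract; as background only, the generic NHPP ingredient that such a model builds on relates software reliability over a mission of length x, after test time t, to the mean value function m(.) of the failure process:

    \[
      R(x \mid t) \;=\; \exp\!\bigl[-\bigl(m(t+x) - m(t)\bigr)\bigr].
    \]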

  18. Quality assurance and reliability

    International Nuclear Information System (INIS)

    Normand, J.; Charon, M.

    1975-01-01

    Concern for obtaining high-quality products which will function properly when required to do so is nothing new - it is one manifestation of a conscientious attitude to work. However, the complexity and cost of equipment and the consequences of even temporary immobilization are such that it has become necessary to make special arrangements for obtaining high-quality products and examining what one has obtained. Each unit within an enterprise must examine its own work or arrange for it to be examined; a unit whose specific task is quality assurance is responsible for overall checking, but does not relieve other units of their responsibility. Quality assurance is a form of mutual assistance within an enterprise, designed to remove the causes of faults as far as possible. It begins very early in a project and continues through the ordering stage, construction, start-up trials and operation. Quality and hence reliability are the direct result of what is done at all stages of a project. They depend on constant attention to detail, for even a minor piece of poor workmanship can, in the case of an essential item of equipment, give rise to serious operational difficulties

  19. Reliability of using circulating tumor cells for detecting epidermal growth factor receptor mutation status in advanced non-small-cell lung cancer patients: a meta-analysis and systematic review

    Directory of Open Access Journals (Sweden)

    Hu F

    2018-03-01

    Full Text Available Fang Hu,* Xiaowei Mao,* Yujun Zhang, Xiaoxuan Zheng, Ping Gu, Huimin Wang, Xueyan Zhang; Department of Pulmonary Medicine, Shanghai Chest Hospital, Shanghai Jiao Tong University, Shanghai, People’s Republic of China. *These authors contributed equally to this work. Purpose: To evaluate the clinical value of circulating tumor cells as a surrogate to detect epidermal growth factor receptor mutation in advanced non-small-cell lung cancer (NSCLC) patients. Methods: We searched the electronic databases, and all articles meeting predetermined selection criteria were included in this study. The pooled sensitivity, specificity, positive likelihood ratio, negative likelihood ratio, and diagnostic odds ratio were calculated. The evaluation indexes of the diagnostic performance were the summary receiver operating characteristic curve and the area under the summary receiver operating characteristic curve. Results: Eight eligible publications with 255 advanced NSCLC patients were included in this meta-analysis. Taking tumor tissues as reference, the pooled sensitivity, specificity, positive likelihood ratio, negative likelihood ratio, and diagnostic odds ratio of circulating tumor cells for detecting the epidermal growth factor receptor mutation status were found to be 0.82 (95% confidence interval [CI]: 0.50–0.95), 0.95 (95% CI: 0.24–1.00), 16.81 (95% CI: 0.33–848.62), 0.19 (95% CI: 0.06–0.64), and 86.81 (95% CI: 1.22–6,154.15), respectively. The area under the summary receiver operating characteristic curve was 0.92 (95% CI: 0.89–0.94). The subgroup analysis showed that the factors of blood volume, histological type, EGFR-tyrosine kinase inhibitor therapy, and circulating tumor cell and tissue test methods for EGFR accounted for the significant difference in the pooled specificity. No significant difference was found for the pooled sensitivity between the subgroups. Conclusion: Our meta-analysis confirmed that circulating tumor cells are a good surrogate for
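
    For readers who want to check the arithmetic, the pooled likelihood ratios and the diagnostic odds ratio follow directly from sensitivity and specificity. The sketch below uses the point estimates quoted above and ignores the confidence intervals, so it only approximates the formally pooled values reported in the abstract.

    ```python
    sensitivity, specificity = 0.82, 0.95

    lr_pos = sensitivity / (1.0 - specificity)   # positive likelihood ratio
    lr_neg = (1.0 - sensitivity) / specificity   # negative likelihood ratio
    dor = lr_pos / lr_neg                        # diagnostic odds ratio

    print(f"LR+ ~ {lr_pos:.1f}, LR- ~ {lr_neg:.2f}, DOR ~ {dor:.0f}")
    ```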

  20. Reliability of Oronasal Fistula Classification.

    Science.gov (United States)

    Sitzman, Thomas J; Allori, Alexander C; Matic, Damir B; Beals, Stephen P; Fisher, David M; Samson, Thomas D; Marcus, Jeffrey R; Tse, Raymond W

    2018-01-01

    Objective: Oronasal fistula is an important complication of cleft palate repair that is frequently used to evaluate surgical quality, yet reliability of fistula classification has never been examined. The objective of this study was to determine the reliability of oronasal fistula classification both within individual surgeons and between multiple surgeons. Design: Using intraoral photographs of children with repaired cleft palate, surgeons rated the location of palatal fistulae using the Pittsburgh Fistula Classification System. Intrarater and interrater reliability scores were calculated for each region of the palate. Participants: Eight cleft surgeons rated photographs obtained from 29 children. Results: Within individual surgeons, reliability for each region of the Pittsburgh classification ranged from moderate to almost perfect (κ = .60-.96). By contrast, reliability between surgeons was lower, ranging from fair to substantial (κ = .23-.70). Between-surgeon reliability was lowest for the junction of the soft and hard palates (κ = .23). Within-surgeon and between-surgeon reliability were almost perfect for the more general classification of fistula in the secondary palate (κ = .95 and κ = .83, respectively). Conclusions: This is the first reliability study of fistula classification. We show that the Pittsburgh Fistula Classification System is reliable when used by an individual surgeon, but less reliable when used among multiple surgeons. Comparisons of fistula occurrence among surgeons may be subject to less bias if they use the more general classification of "presence or absence of fistula of the secondary palate" rather than the Pittsburgh Fistula Classification System.

  1. Dependent systems reliability estimation by structural reliability approach

    DEFF Research Database (Denmark)

    Kostandyan, Erik; Sørensen, John Dalsgaard

    2014-01-01

    Estimation of system reliability by classical system reliability methods generally assumes that the components are statistically independent, thus limiting its applicability in many practical situations. A method is proposed for estimation of the system reliability with dependent components, where...... the leading failure mechanism(s) is described by physics of failure model(s). The proposed method is based on structural reliability techniques and accounts for both statistical and failure effect correlations. It is assumed that failure of any component is due to increasing damage (fatigue phenomena...... identification. Application of the proposed method can be found in many real world systems....

  2. A reliable method for the stability analysis of structures ...

    African Journals Online (AJOL)

    The detection of structural configurations with singular tangent stiffness matrix is essential because they can be unstable. The secondary paths, especially in unstable buckling, can play the most important role in the loss of stability and collapse of the structure. A new method for reliable detection and accurate computation of ...

  3. Reliability of reactor materials

    International Nuclear Information System (INIS)

    Toerroenen, K.; Aho-Mantila, I.

    1986-05-01

    This report is the final technical report of the fracture mechanics part of the Reliability of Reactor Materials Programme, which was carried out at the Technical Research Centre of Finland (VTT) through the years 1981 to 1983. Research and development work was carried out in five major areas, viz. statistical treatment and modelling of cleavage fracture, crack arrest, ductile fracture, instrumented impact testing, as well as comparison of numerical and experimental elastic-plastic fracture mechanics. In the area of cleavage fracture, the critical variables affecting the fracture of steels are considered in the framework of a statistical model, the so-called WST model. Comparison of fracture toughness values predicted by the model and corresponding experimental values shows excellent agreement for a variety of microstructures. Different possibilities for using the model are discussed. The development work in the area of crack arrest testing was concentrated on the crack starter properties, the test arrangement and computer control. A computerized elastic-plastic fracture testing method with a variety of test specimen geometries over a large temperature range was developed to the routine stage. Ductile fracture characteristics of the reactor pressure vessel steel A533B and comparable weld material are given. The features of a new, patented instrumented impact tester are described. Experimental and theoretical comparisons between the new and conventional testers clearly indicated the improvements achieved with the new tester. A comparison of numerical and experimental elastic-plastic fracture mechanics capabilities at VTT was carried out. The comparison consisted of two-dimensional linear elastic as well as elastic-plastic finite element analyses of four specimen geometries and equivalent experimental tests. (author)

  4. Field reliability of electronic systems

    International Nuclear Information System (INIS)

    Elm, T.

    1984-02-01

    This report investigates, through several examples from the field, the reliability of electronic units in a broader sense. That is, it treats not just random parts failure, but also inadequate reliability design and (externally and internally) induced failures. The report is not meant merely as an indication of the state of the art of the reliability prediction methods we know, but also as a contribution to the investigation of man-machine interplay in the operation and repair of electronic equipment. The report firmly links electronics reliability to safety and risk analysis approaches, with a broader, system oriented view of reliability prediction and with post-failure stress analysis. It is intended to reveal, in a qualitative manner, the existence of symptom and cause patterns. It provides a background for further investigations to identify the detailed mechanisms of the faults and the remedial actions and precautions for achieving cost effective reliability. (author)

  5. Reliability Assessment Of Wind Turbines

    DEFF Research Database (Denmark)

    Sørensen, John Dalsgaard

    2014-01-01

    Reduction of the cost of energy for wind turbines is very important in order to make wind energy competitive compared to other energy sources. Therefore the turbine components should be designed to have sufficient reliability but also not be too costly (and safe). This paper presents models...... for uncertainty modeling and reliability assessment of especially the structural components such as tower, blades, substructure and foundation. But since the function of a wind turbine is highly dependent on many electrical and mechanical components as well as a control system, also reliability aspects...... of these components are discussed and it is described how their reliability influences the reliability of the structural components. Two illustrative examples are presented considering uncertainty modeling, reliability assessment and calibration of partial safety factors for structural wind turbine components exposed...

  6. Reliability engineering theory and practice

    CERN Document Server

    Birolini, Alessandro

    2014-01-01

    This book shows how to build in, evaluate, and demonstrate reliability and availability of components, equipment, and systems. It presents the state of the art of reliability engineering, both in theory and practice, and is based on the author's more than 30 years of experience in this field, half in industry and half as Professor of Reliability Engineering at the ETH, Zurich. The structure of the book allows rapid access to practical results. This final edition extends and replaces all previous editions. New are, in particular, a strategy to mitigate incomplete coverage, a comprehensive introduction to human reliability with design guidelines and new models, and a refinement of reliability allocation, design guidelines for maintainability, and concepts related to regenerative stochastic processes. The set of problems for homework has been extended. Methods & tools are given in a way that they can be tailored to cover different reliability requirement levels and be used for safety analysis. Because of the Appendice...

  7. Reliability of Wireless Sensor Networks

    Science.gov (United States)

    Dâmaso, Antônio; Rosa, Nelson; Maciel, Paulo

    2014-01-01

    Wireless Sensor Networks (WSNs) consist of hundreds or thousands of sensor nodes with limited processing, storage, and battery capabilities. There are several strategies to reduce the power consumption of WSN nodes (thereby increasing the network lifetime) and to increase the reliability of the network (by improving the WSN Quality of Service). However, there is an inherent conflict between power consumption and reliability: an increase in reliability usually leads to an increase in power consumption. For example, routing algorithms can send the same packet through different paths (multipath strategy), which is important for reliability, but they significantly increase the WSN power consumption. In this context, this paper proposes a model for evaluating the reliability of WSNs considering the battery level as a key factor. Moreover, this model is based on routing algorithms used by WSNs. In order to evaluate the proposed models, three scenarios were considered to show the impact of the power consumption on the reliability of WSNs. PMID:25157553

  8. Reliability analysis of reactor protection systems

    International Nuclear Information System (INIS)

    Alsan, S.

    1976-07-01

    A theoretical mathematical study of reliability is presented, and the concepts defined are subsequently applied to the study of nuclear reactor safety systems. The theory is applied to investigations of the operational reliability of the Siloe reactor from the point of view of rod drop. A statistical study conducted between 1964 and 1971 demonstrated that most rod drop incidents arose from circumstances associated with experimental equipment (new set-ups). The reliability of the most suitable safety system for some recently developed experimental equipment is discussed. Calculations indicate that if all experimental equipment were equipped with these new systems, only 1.75 rod drop accidents would be expected to occur per year on average. It is suggested that all experimental equipment should be equipped with these new safety systems and tested every 21 days. The reliability of the new safety system currently being studied for the Siloe reactor was also investigated. The following results were obtained: definite failures must be detected immediately as a result of the disturbances produced; the repair time must not exceed a few hours; the equipment must be tested every week. Under such conditions, the rate of accidental rod drops is about 0.013 on average per year. The level of non-definite failures is less than 10⁻⁶ per hour and the level of non-protection is 1 hour per year. (author)

  9. The value of service reliability

    OpenAIRE

    Benezech , Vincent; Coulombel , Nicolas

    2013-01-01

    International audience; This paper studies the impact of service frequency and reliability on the choice of departure time and the travel cost of transit users. When the user has (α, β, γ) scheduling preferences, we show that the optimal head start decreases with service reliability, as expected. It does not necessarily decrease with service frequency, however. We derive the value of service headway (VoSH) and the value of service reliability (VoSR), which measure the marginal effect on the e...

  10. Distribution-Independent Reliable Learning

    OpenAIRE

    Kanade, Varun; Thaler, Justin

    2014-01-01

    We study several questions in the reliable agnostic learning framework of Kalai et al. (2009), which captures learning tasks in which one type of error is costlier than others. A positive reliable classifier is one that makes no false positive errors. The goal in the positive reliable agnostic framework is to output a hypothesis with the following properties: (i) its false positive error rate is at most $\epsilon$, (ii) its false negative error rate is at most $\epsilon$ more than that of the...

  11. A quantitative calculation for software reliability evaluation

    Energy Technology Data Exchange (ETDEWEB)

    Lee, Young-Jun; Lee, Jang-Soo [Korea Atomic Energy Research Institute, Daejeon (Korea, Republic of)

    2016-10-15

    To meet regulatory requirements, software used in the nuclear safety field is qualified through development, validation, safety analysis, and quality assurance activities throughout the entire life cycle, from the planning phase to the installation phase. A variety of activities, such as quality assurance activities, are also required to improve the quality of the software. However, there are limits to how far quality can be assured in this way. Therefore, efforts continue to calculate the reliability of the software quantitatively instead of evaluating it only qualitatively. In this paper, we propose a quantitative calculation method for software used for a specific operation of a digital controller in an NPP. After injecting random faults into the internal memory space of a developed controller and calculating the ability of the diagnostic software to detect the injected faults, we can evaluate the software reliability of a digital controller in an NPP. We calculated the software reliability of the controller using a new method that differs from the traditional approach: it calculates the fault detection coverage after injecting faults into the software memory space, rather than assessing activities across the life cycle process. The approach is differentiated by a new definition of the fault, imitation of software faults by means of the hardware, and the assignment of weights to the injected faults.
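
    A minimal sketch of the fault-detection-coverage calculation described here, with a random stand-in for the real diagnostic software (the function names and the 98% detection probability are hypothetical):

    import random

    def run_diagnostics(num_faults, detection_prob=0.98, seed=1):
        """Stand-in for injecting faults and running the diagnostic software."""
        rng = random.Random(seed)
        return sum(rng.random() < detection_prob for _ in range(num_faults))

    num_injected = 10_000                       # faults injected into memory space
    num_detected = run_diagnostics(num_injected)
    coverage = num_detected / num_injected      # fault detection coverage
    print(f"fault detection coverage = {coverage:.4f}")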

  12. RTE - 2015 Reliability Report. Summary

    International Nuclear Information System (INIS)

    2016-01-01

    Every year, RTE produces a reliability report for the past year. This report includes a number of results from previous years so that year-to-year comparisons can be drawn and long-term trends analysed. The 2015 report underlines the major factors that have impacted on the reliability of the electrical power system, without focusing exclusively on Significant System Events (ESS). It describes various factors which contribute to present and future reliability and the numerous actions implemented by RTE to ensure reliability today and in the future, as well as the ways in which the various parties involved in the electrical power system interact across the whole European interconnected network.

  13. System Reliability for Offshore Wind Turbines

    DEFF Research Database (Denmark)

    Marquez-Dominguez, Sergio; Sørensen, John Dalsgaard

    2013-01-01

    E). In consequence, a rational treatment of uncertainties is done in order to assess the reliability of critical details in OWTs. Limit state equations are formulated for fatigue critical details which are not influenced by wake effects generated in offshore wind farms. Furthermore, typical bi-linear S-N curves...... are considered for reliability verification according to international design standards of OWTs. System effects become important for each substructure with many potential fatigue hot spots. Therefore, in this paper a framework for system effects is presented. This information can be e.g. no detection of cracks...... in inspections or measurements from condition monitoring systems. Finally, an example is established to illustrate the practical application of this framework for jacket type wind turbine substructure considering system effects....

  14. Model-based fault detection algorithm for photovoltaic system monitoring

    KAUST Repository

    Harrou, Fouzi; Sun, Ying; Saidi, Ahmed

    2018-01-01

    Reliable detection of faults in PV systems plays an important role in improving their reliability, productivity, and safety. This paper addresses the detection of faults in the direct current (DC) side of photovoltaic (PV) systems using a

  15. Reliability in automotive ethernet networks

    DEFF Research Database (Denmark)

    Soares, Fabio L.; Campelo, Divanilson R.; Yan, Ying

    2015-01-01

    This paper provides an overview of in-vehicle communication networks and addresses the challenges of providing reliability in automotive Ethernet in particular.

  16. Estimation of Bridge Reliability Distributions

    DEFF Research Database (Denmark)

    Thoft-Christensen, Palle

    In this paper it is shown how the so-called reliability distributions can be estimated using crude Monte Carlo simulation. The main purpose is to demonstrate the methodology. Therefore very exact data concerning reliability and deterioration are not needed. However, it is intended in the paper to ...

  17. Reliability of wind turbine subassemblies

    NARCIS (Netherlands)

    Spinato, F.; Tavner, P.J.; Bussel, van G.J.W.; Koutoulakos, E.

    2009-01-01

    We have investigated the reliability of more than 6000 modern onshore wind turbines and their subassemblies in Denmark and Germany over 11 years and particularly changes in reliability of generators, gearboxes and converters in a subset of 650 turbines in Schleswig Holstein, Germany. We first start

  18. Reliability engineering theory and practice

    CERN Document Server

    Birolini, Alessandro

    2010-01-01

    Presenting a solid overview of reliability engineering, this volume enables readers to build and evaluate the reliability of various components, equipment and systems. Current applications are presented, and the text itself is based on the author's 30 years of experience in the field.

  19. Reliability-Based Optimization in Structural Engineering

    DEFF Research Database (Denmark)

    Enevoldsen, I.; Sørensen, John Dalsgaard

    1994-01-01

    In this paper reliability-based optimization problems in structural engineering are formulated on the basis of the classical decision theory. Several formulations are presented: Reliability-based optimal design of structural systems with component or systems reliability constraints, reliability...

  20. Reliability-Based Code Calibration

    DEFF Research Database (Denmark)

    Faber, M.H.; Sørensen, John Dalsgaard

    2003-01-01

    The present paper addresses fundamental concepts of reliability-based code calibration. First, basic principles of structural reliability theory are introduced and it is shown how the results of FORM-based reliability analysis may be related to partial safety factors and characteristic values....... Thereafter the code calibration problem is presented in its principal decision-theoretical form and it is discussed how acceptable levels of failure probability (or target reliabilities) may be established. Furthermore, suggested values for acceptable annual failure probabilities are given for ultimate...... and serviceability limit states. Finally, the paper describes the Joint Committee on Structural Safety (JCSS) recommended procedure - CodeCal - for the practical implementation of reliability-based code calibration of LRFD-based design codes....
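
    For the simplest possible case, the link between a FORM analysis and partial safety factors can be sketched as follows. The example assumes a linear limit state g = R - S with independent normal resistance R and load effect S and made-up statistics; it illustrates the principle only and is not the CodeCal procedure:

    # FORM for g = R - S with independent normal R, S:
    #   beta = (mu_R - mu_S) / sqrt(sigma_R**2 + sigma_S**2),  Pf = Phi(-beta).
    # Partial safety factors are then read off from the design point and
    # characteristic values (5% / 95% fractiles). All numbers are assumed.

    from math import sqrt
    from statistics import NormalDist

    mu_R, sigma_R = 12.0, 1.5     # assumed resistance statistics
    mu_S, sigma_S = 6.0, 1.2      # assumed load-effect statistics

    s = sqrt(sigma_R**2 + sigma_S**2)
    beta = (mu_R - mu_S) / s
    pf = NormalDist().cdf(-beta)

    # FORM design point (most likely failure point); R* and S* coincide on g = 0
    R_star = mu_R - beta * sigma_R**2 / s
    S_star = mu_S + beta * sigma_S**2 / s

    R_k = mu_R - 1.645 * sigma_R      # 5% fractile of resistance
    S_k = mu_S + 1.645 * sigma_S      # 95% fractile of load effect
    gamma_R = R_k / R_star            # resistance (material) factor
    gamma_S = S_star / S_k            # load factor

    print(f"beta = {beta:.2f}, Pf = {pf:.2e}, design point = {R_star:.2f}")
    print(f"gamma_R = {gamma_R:.2f}, gamma_S = {gamma_S:.2f}")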

  1. Photovoltaic performance and reliability workshop

    Energy Technology Data Exchange (ETDEWEB)

    Mrig, L. [ed.]

    1993-12-01

    This workshop was the sixth in a series of workshops sponsored by NREL/DOE under the general subject of photovoltaic testing and reliability during the period 1986--1993. PV performance and PV reliability are at least as important as PV cost, if not more. In the US, PV manufacturers, DOE laboratories, electric utilities, and others are engaged in the photovoltaic reliability research and testing. This group of researchers and others interested in the field were brought together to exchange the technical knowledge and field experience as related to current information in this evolving field of PV reliability. The papers presented here reflect this effort since the last workshop held in September, 1992. The topics covered include: cell and module characterization, module and system testing, durability and reliability, system field experience, and standards and codes.

  2. 18 CFR 39.5 - Reliability Standards.

    Science.gov (United States)

    2010-04-01

    18 CFR Conservation of Power and Water Resources, § 39.5 Reliability Standards. (a) The Electric Reliability Organization shall file each Reliability Standard or modification to a Reliability Standard that it proposes to be made effective under...

  3. Behavioral reliability program for the nuclear industry. Technical report

    International Nuclear Information System (INIS)

    Buchanan, J.C.; Davis, S.O.; Dunnette, M.D.; Meyer, P.; Sharac, J.

    1981-07-01

    The subject of the study was the development of standards for a behavioral observation program which could be used by the NRC-licensed nuclear industry to detect indications of emotional instability in its employees who have access to protected and vital areas. Emphasis was placed on those observable characteristics which could be assessed by supervisors or peers in a work environment. The behavioral reliability program, as defined in this report, encompasses the concept and basic components of the program, the definition of the behavioral reliability criterion, and a set of instructions for the creation and implementation of the program by an individual facility

  4. Quantitative metal magnetic memory reliability modeling for welded joints

    Science.gov (United States)

    Xing, Haiyan; Dang, Yongbin; Wang, Ben; Leng, Jiancheng

    2016-03-01

    Metal magnetic memory (MMM) testing has been widely used to detect damage in welded joints. However, load levels, environmental magnetic fields, and measurement noise make the MMM data dispersive and bring difficulty to quantitative evaluation. In order to promote the development of quantitative MMM reliability assessment, a new MMM model is presented for welded joints. Steel Q235 welded specimens are tested along the longitudinal and horizontal lines by a TSC-2M-8 instrument in tensile fatigue experiments. X-ray testing is carried out synchronously to verify the MMM results. It is found that MMM testing can detect the hidden crack earlier than X-ray testing. Moreover, the MMM gradient vector sum K_vs is sensitive to the damage degree, especially at the early and hidden damage stages. Considering the dispersion of MMM data, the statistical law of K_vs is investigated, which shows that K_vs obeys a Gaussian distribution. So K_vs is a suitable MMM parameter to establish a reliability model of welded joints. Finally, an original quantitative MMM reliability model is presented based on the improved stress-strength interference theory. It is shown that the reliability degree R gradually decreases as the residual life ratio T decreases, and that the maximal error between the predicted reliability degree R_1 and the verification reliability degree R_2 is 9.15%. The presented method provides a novel tool for reliability testing and evaluation of welded joints in practical engineering.
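
    The stress-strength interference idea underlying the reliability model can be sketched for the simplest Gaussian case, where R = P(strength > stress) = Φ((μ_X − μ_Y)/√(σ_X² + σ_Y²)). The numbers below are illustrative and are not the paper's data:

    from math import sqrt
    from statistics import NormalDist

    mu_strength, sd_strength = 120.0, 10.0   # assumed strength-side statistics
    mu_stress, sd_stress = 80.0, 15.0        # assumed stress-side (K_vs-derived) statistics

    margin = mu_strength - mu_stress
    reliability = NormalDist().cdf(margin / sqrt(sd_strength**2 + sd_stress**2))
    print(f"reliability degree R ≈ {reliability:.4f}")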

  5. North American Electric Reliability Council (NERC) Reliability Coordinators

    Data.gov (United States)

    Department of Homeland Security — NERC is an international regulatory authority that works to improve the reliability of the bulk power system in North America. NERC works with many different regional...

  6. Calculating system reliability with SRFYDO

    Energy Technology Data Exchange (ETDEWEB)

    Morzinski, Jerome [Los Alamos National Laboratory; Anderson - Cook, Christine M [Los Alamos National Laboratory; Klamann, Richard M [Los Alamos National Laboratory

    2010-01-01

    SRFYDO is a process for estimating reliability of complex systems. Using information from all applicable sources, including full-system (flight) data, component test data, and expert (engineering) judgment, SRFYDO produces reliability estimates and predictions. It is appropriate for series systems with possibly several versions of the system which share some common components. It models reliability as a function of age and up to 2 other lifecycle (usage) covariates. Initial output from its Exploratory Data Analysis mode consists of plots and numerical summaries so that the user can check data entry and model assumptions, and help determine a final form for the system model. The System Reliability mode runs a complete reliability calculation using Bayesian methodology. This mode produces results that estimate reliability at the component, sub-system, and system level. The results include estimates of uncertainty, and can predict reliability at some not-too-distant time in the future. This paper presents an overview of the underlying statistical model for the analysis, discusses model assumptions, and demonstrates usage of SRFYDO.
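
    The snippet below is not SRFYDO itself; it is only a minimal Bayesian illustration of the series-system idea it builds on, using hypothetical pass/fail component test data and Beta(1,1) priors:

    import random

    # (successes, trials) per component -- hypothetical test data
    component_data = [(48, 50), (29, 30), (95, 100)]

    rng = random.Random(0)
    draws = []
    for _ in range(20_000):
        system_r = 1.0
        for successes, trials in component_data:
            # Beta(1,1) prior -> Beta(1 + s, 1 + n - s) posterior
            system_r *= rng.betavariate(1 + successes, 1 + trials - successes)
        draws.append(system_r)

    draws.sort()
    mean_r = sum(draws) / len(draws)
    lo, hi = draws[int(0.05 * len(draws))], draws[int(0.95 * len(draws))]
    print(f"posterior mean system reliability ≈ {mean_r:.3f} (90% interval {lo:.3f}-{hi:.3f})")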

  7. 78 FR 41339 - Electric Reliability Organization Proposal To Retire Requirements in Reliability Standards

    Science.gov (United States)

    2013-07-10

    ...] Electric Reliability Organization Proposal To Retire Requirements in Reliability Standards AGENCY: Federal... Reliability Standards identified by the North American Electric Reliability Corporation (NERC), the Commission-certified Electric Reliability Organization. FOR FURTHER INFORMATION CONTACT: Kevin Ryan (Legal Information...

  8. New Approaches to Reliability Assessment

    DEFF Research Database (Denmark)

    Ma, Ke; Wang, Huai; Blaabjerg, Frede

    2016-01-01

    of energy. New approaches for reliability assessment are being taken in the design phase of power electronics systems based on the physics-of-failure in components. In this approach, many new methods, such as multidisciplinary simulation tools, strength testing of components, translation of mission profiles......, and statistical analysis, are involved to enable better prediction and design of reliability for products. This article gives an overview of the new design flow in the reliability engineering of power electronics from the system-level point of view and discusses some of the emerging needs for the technology...

  9. Some remarks on software reliability

    International Nuclear Information System (INIS)

    Gonzalez Hernando, J.; Sanchez Izquierdo, J.

    1978-01-01

    The trend in modern NPPCI is toward broad use of programmable elements. Some aspects of the present status of programmable digital system reliability are reported. Basic differences between the software and hardware concepts require a specific approach to all reliability topics concerning software systems. Software reliability theory was initially developed by analogy with hardware models. At present this approach is changing and specific models are being developed. The growing use of programmable systems necessitates emphasizing the importance of more adequate regulatory requirements to include this technology in NPPCI. (author)

  10. Reliability evaluation of power systems

    CERN Document Server

    Billinton, Roy

    1996-01-01

    The Second Edition of this well-received textbook presents over a decade of new research in power system reliability-while maintaining the general concept, structure, and style of the original volume. This edition features new chapters on the growing areas of Monte Carlo simulation and reliability economics. In addition, chapters cover the latest developments in techniques and their application to real problems. The text also explores the progress occurring in the structure, planning, and operation of real power systems due to changing ownership, regulation, and access. This work serves as a companion volume to Reliability Evaluation of Engineering Systems: Second Edition (1992).

  11. Aerospace reliability applied to biomedicine.

    Science.gov (United States)

    Lalli, V. R.; Vargo, D. J.

    1972-01-01

    An analysis is presented that indicates that the reliability and quality assurance methodology selected by NASA to minimize failures in aerospace equipment can be applied directly to biomedical devices to improve hospital equipment reliability. The Space Electric Rocket Test project is used as an example of NASA application of reliability and quality assurance (R&QA) methods. By analogy a comparison is made to show how these same methods can be used in the development of transducers, instrumentation, and complex systems for use in medicine.

  12. Integrating reliability analysis and design

    International Nuclear Information System (INIS)

    Rasmuson, D.M.

    1980-10-01

    This report describes the Interactive Reliability Analysis Project and demonstrates the advantages of using computer-aided design systems (CADS) in reliability analysis. Common cause failure problems require presentations of systems, analysis of fault trees, and evaluation of solutions to these. Results have to be communicated between the reliability analyst and the system designer. Using a computer-aided design system saves time and money in the analysis of design. Computer-aided design systems lend themselves to cable routing, valve and switch lists, pipe routing, and other component studies. At EG and G Idaho, Inc., the Applicon CADS is being applied to the study of water reactor safety systems

  13. Reliability testing of tendon disease using two different scanning methods in patients with rheumatoid arthritis

    DEFF Research Database (Denmark)

    Bruyn, George A W; Möller, Ingrid; Garrido, Jesus

    2012-01-01

    To assess the intra- and interobserver reliability of musculoskeletal ultrasonography (US) in detecting inflammatory and destructive tendon abnormalities in patients with RA using two different scanning methods.

  14. Resolution of GSI B-56 - Emergency diesel generator reliability

    International Nuclear Information System (INIS)

    Serkiz, A.W.

    1989-01-01

    The need for an emergency diesel generator (EDG) reliability program has been established by 10 CFR Part 50, Section 50.63, Loss of All Alternating Current Power, which requires that licensees assess their station blackout coping and recovery capability. EDGs are the principal emergency ac power sources for avoiding a station blackout. Regulatory Guide 1.155, Station Blackout, identifies a need for (1) a nuclear unit EDG reliability level of at least 0.95, and (2) an EDG reliability program to monitor and maintain the required EDG reliability levels. NUMARC 87-00, Guidelines and Technical Bases for NUMARC Initiatives Addressing Station Blackout at Light Water Reactors, also provides guidance on such needs. The resolution of GSI B-56, Diesel Reliability, will be accomplished by issuing Regulatory Guide 1.9, Rev. 3, Selection, Design, Qualification, Testing, and Reliability of Diesel Generator Units Used as Onsite Electric Power Systems at Nuclear Plants. This revision will integrate into a single regulatory guide pertinent guidance previously addressed in R.G. 1.9, Rev. 2, R.G. 1.108, and Generic Letter 84-15. R.G. 1.9 has been expanded to define the principal elements of an EDG reliability program for monitoring and maintaining EDG reliability levels selected for SBO. In addition, alert levels and corrective actions have been defined to detect a deteriorating situation for all EDGs assigned to a particular nuclear unit, as well as an individual problem EDG

  15. Reliability analysis of shutdown system

    International Nuclear Information System (INIS)

    Kumar, C. Senthil; John Arul, A.; Pal Singh, Om; Suryaprakasa Rao, K.

    2005-01-01

    This paper presents the results of a reliability analysis of the Shutdown System (SDS) of the Indian Prototype Fast Breeder Reactor. Reliability analysis carried out using Fault Tree Analysis predicts a value of 3.5 × 10⁻⁸ per demand for failure of the shutdown function in the case of global faults and 4.4 × 10⁻⁸ per demand for local faults. Based on 20 demands per year, the frequency of shutdown function failure is 0.7 × 10⁻⁶ per reactor-year, which meets the reliability target set by the Indian Atomic Energy Regulatory Board. The reliability is limited by Common Cause Failure (CCF) of the actuation part of the SDS and, to a lesser extent, CCF of electronic components. The failure frequency of the individual systems is of the order of 10⁻³ per reactor-year, which also meets the safety criteria. Uncertainty analysis indicates a maximum error factor of 5 for the top event unavailability
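
    The arithmetic behind the quoted frequencies is simply the per-demand failure probability multiplied by the assumed 20 demands per year, as the short check below shows:

    p_fail_global = 3.5e-8     # per demand, global faults
    p_fail_local = 4.4e-8      # per demand, local faults
    demands_per_year = 20

    print(f"global faults: {p_fail_global * demands_per_year:.1e} per reactor-year")  # 7.0e-07
    print(f"local faults:  {p_fail_local * demands_per_year:.1e} per reactor-year")   # 8.8e-07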

  16. Measures of differences in reliability

    International Nuclear Information System (INIS)

    Doksum, K.A.

    1975-01-01

    Measures of differences in reliability of two systems are considered in the scale model, location-scale model, and a nonparametric model. In each model, estimates and confidence intervals are given and some of their properties discussed

  17. Transportation reliability and trip satisfaction.

    Science.gov (United States)

    2012-10-01

    Travel delays and associated costs have become a major problem in Michigan over the past several decades as congestion has continued to increase, creating significant negative impacts on travel reliability on many roadways throughout the State. T...

  18. Reliability Analysis of Wind Turbines

    DEFF Research Database (Denmark)

    Toft, Henrik Stensgaard; Sørensen, John Dalsgaard

    2008-01-01

    In order to minimise the total expected life-cycle costs of a wind turbine it is important to estimate the reliability level for all components in the wind turbine. This paper deals with reliability analysis for the tower and blades of onshore wind turbines placed in a wind farm. The limit states...... considered are, in the ultimate limit state (ULS), extreme conditions in the standstill position and extreme conditions during operation. For wind turbines, where the magnitude of the loads is influenced by the control system, the ultimate limit state can occur in both cases. In the fatigue limit state (FLS......) the reliability level for a wind turbine placed in a wind farm is considered, and wake effects from neighbouring wind turbines are taken into account. An illustrative example with calculation of the reliability for mudline bending of the tower is considered. In the example the design is determined according...

  19. Reliability issues at the LHC

    CERN Multimedia

    CERN. Geneva. Audiovisual Unit; Gillies, James D

    2002-01-01

    The lectures on reliability issues at the LHC will focus on five main modules over five days. Module 1: Basic Elements in Reliability Engineering. Some basic terms, definitions and methods, from components up to the system and the plant, common cause failures and human factor issues. Module 2: Interrelations of Reliability & Safety (R&S). Reliability and risk-informed approach, living models, risk monitoring. Module 3: The Ideal R&S Process for Large Scale Systems. From R&S goals via the implementation into the system to the proof of compliance. Module 4: Some Applications of R&S on the LHC. Master logic, anatomy of risk, cause-consequence diagram, decomposition and aggregation of the system. Module 5: Lessons Learned from R&S Application in Various Technologies. Success stories, pitfalls, constraints in data and methods, limitations per se, experienced in aviation, space, process, nuclear, offshore and transport systems and plants. The lectures will reflect in summary the compromise in...

  20. On Bayesian System Reliability Analysis

    Energy Technology Data Exchange (ETDEWEB)

    Soerensen Ringi, M

    1995-05-01

    The view taken in this thesis is that reliability, the probability that a system will perform a required function for a stated period of time, depends on a person's state of knowledge. Reliability changes as this state of knowledge changes, i.e. when new relevant information becomes available. Most existing models for system reliability prediction are developed in a classical framework of probability theory and they overlook some information that is always present. Probability is just an analytical tool to handle uncertainty, based on judgement and subjective opinions. It is argued that the Bayesian approach gives a much more comprehensive understanding of the foundations of probability than the so called frequentistic school. A new model for system reliability prediction is given in two papers. The model encloses the fact that component failures are dependent because of a shared operational environment. The suggested model also naturally permits learning from failure data of similar components in non identical environments. 85 refs.

  1. On Bayesian System Reliability Analysis

    International Nuclear Information System (INIS)

    Soerensen Ringi, M.

    1995-01-01

    The view taken in this thesis is that reliability, the probability that a system will perform a required function for a stated period of time, depends on a person's state of knowledge. Reliability changes as this state of knowledge changes, i.e. when new relevant information becomes available. Most existing models for system reliability prediction are developed in a classical framework of probability theory and they overlook some information that is always present. Probability is just an analytical tool to handle uncertainty, based on judgement and subjective opinions. It is argued that the Bayesian approach gives a much more comprehensive understanding of the foundations of probability than the so called frequentistic school. A new model for system reliability prediction is given in two papers. The model encloses the fact that component failures are dependent because of a shared operational environment. The suggested model also naturally permits learning from failure data of similar components in non identical environments. 85 refs

  2. As reliable as the sun

    Science.gov (United States)

    Leijtens, J. A. P.

    2017-11-01

    Fortunately there is almost nothing as reliable as the sun, which can consequently be utilized as a very reliable source of spacecraft power. In order to harvest this power, the solar panels have to be pointed towards the sun as accurately and reliably as possible. To this end, sunsensors are available on almost every satellite to support vital sun-pointing capability throughout the mission, even in the deployment and safe mode phases of the satellite's life. Given the criticality of the application, one would expect that after more than 50 years of sun sensor utilisation, such sensors would be fully matured and optimised. In actual fact, though, the majority of sunsensors employed are still coarse sunsensors, which have proven extremely reliable but present major issues regarding albedo sensitivity and pointing accuracy.

  3. Steam generator reliability improvement project

    International Nuclear Information System (INIS)

    Blomgren, J.C.; Green, S.J.

    1987-01-01

    Upon successful completion of its research and development technology transfer program, the Electric Power Research Institute's Steam Generator Owners Group (SGOG II) will disband in December 1986 and be replaced in January 1987 by a successor project, the Steam Generator Reliability Project (SGRP). The new project, funded in the EPRI base program, will continue the emphasis on reliability and life extension that was carried forward by SGOG II. The objectives of SGOG II have been met. Causes and remedies have been identified for tubing corrosion problems, such as stress corrosion cracking and pitting, and steam generator technology has been improved in areas such as tube wear prediction and nondestructive evaluation (NDE). These actions have led to improved reliability of steam generators. Now the owners want to continue with a centrally managed program that builds on what has been learned. The goal is to continue to improve steam generator reliability and solve small problems before they become large problems

  4. Steam generator reliability improvement project

    International Nuclear Information System (INIS)

    Blomgren, J.C.; Green, S.J.

    1987-01-01

    Upon successful completion of its research and development technology transfer program, the Electric Power Research Institute's (EPRI's) Steam Generator Owners Group (SGOG II) will disband in December 1986, and be replaced in January 1987, by a successor project, the Steam Generator Reliability Project (SGRP). The new project, funded in the EPRI base program, will continue to emphasize reliability and life extension, which were carried forward by SGOG II. The objectives of SGOG II have been met. Causes and remedies have been identified for tubing corrosion problems such as stress corrosion cracking and pitting, and steam generator technology has been improved in areas such as tube wear prediction and nondestructive evaluation. These actions have led to improved reliability of steam generators. Now the owners want to continue with a centrally managed program that builds on what has been learned. The goal is to continue to improve steam generator reliability and to solve small problems before they become large problems

  5. Reliability analysis in intelligent machines

    Science.gov (United States)

    Mcinroy, John E.; Saridis, George N.

    1990-01-01

    Given an explicit task to be executed, an intelligent machine must be able to find the probability of success, or reliability, of alternative control and sensing strategies. By using concepts from information theory and reliability theory, new techniques for finding the reliability corresponding to alternative subsets of control and sensing strategies are proposed such that a desired set of specifications can be satisfied. The analysis is straightforward, provided that a set of Gaussian random state variables is available. An example problem illustrates the technique, and general reliability results are presented for visual servoing with a computed torque-control algorithm. Moreover, the example illustrates the principle of increasing precision with decreasing intelligence at the execution level of an intelligent machine.

  6. Innovations in power systems reliability

    CERN Document Server

    Santora, Albert H; Vaccaro, Alfredo

    2011-01-01

    Electrical grids are among the world's most reliable systems, yet they still face a host of issues, from aging infrastructure to questions of resource distribution. Here is a comprehensive and systematic approach to tackling these contemporary challenges.

  7. Reliability analysis and operator modelling

    International Nuclear Information System (INIS)

    Hollnagel, Erik

    1996-01-01

    The paper considers the state of operator modelling in reliability analysis. Operator models are needed in reliability analysis because operators are needed in process control systems. HRA methods must therefore be able to account both for human performance variability and for the dynamics of the interaction. A selected set of first generation HRA approaches is briefly described in terms of the operator model they use, their classification principle, and the actual method they propose. In addition, two examples of second generation methods are also considered. It is concluded that first generation HRA methods generally have very simplistic operator models, either referring to the time-reliability relationship or to elementary information processing concepts. It is argued that second generation HRA methods must recognise that cognition is embedded in a context, and be able to account for that in the way human reliability is analysed and assessed

  8. Reliability and Maintainability (RAM) Training

    Science.gov (United States)

    Lalli, Vincent R. (Editor); Malec, Henry A. (Editor); Packard, Michael H. (Editor)

    2000-01-01

    The theme of this manual is failure physics: the study of how products, hardware, software, and systems fail and what can be done about it. The intent is to impart useful information, to extend the limits of production capability, and to assist in achieving low-cost reliable products. In a broader sense the manual should do more. It should underscore the urgent need for mature attitudes toward reliability. Five of the chapters were originally presented as a classroom course to over 1000 Martin Marietta engineers and technicians. Another four chapters and three appendixes have been added. We begin with a view of reliability from the years 1940 to 2000. Chapter 2 starts the training material with a review of mathematics and a description of what elements contribute to product failures. The remaining chapters elucidate basic reliability theory and the disciplines that allow us to control and eliminate failures.

  9. Development of reliable pavement models.

    Science.gov (United States)

    2011-05-01

    The current report proposes a framework for estimating the reliability of a given pavement structure as analyzed by the Mechanistic-Empirical Pavement Design Guide (MEPDG). The methodology proposes using a previously fit response surface, in plac...

  10. Cost analysis of reliability investigations

    International Nuclear Information System (INIS)

    Schmidt, F.

    1981-01-01

    Taking Epstein's testing theory as a basis, premises are formulated for the selection of cost-optimized reliability inspection plans. Using an example, the expected testing costs and inspection time periods of various inspection plan types, standardized on the basis of the exponential distribution, are compared. It can be shown that sequential reliability tests usually involve lower costs than failure- or time-fixed tests. The most 'costly' test is to be expected with the inspection plan type NOt. (orig.)

  11. Nonparametric predictive inference in reliability

    International Nuclear Information System (INIS)

    Coolen, F.P.A.; Coolen-Schrijner, P.; Yan, K.J.

    2002-01-01

    We introduce a recently developed statistical approach, called nonparametric predictive inference (NPI), to reliability. Bounds for the survival function for a future observation are presented. We illustrate how NPI can deal with right-censored data, and discuss aspects of competing risks. We present possible applications of NPI for Bernoulli data, and we briefly outline applications of NPI for replacement decisions. The emphasis is on introduction and illustration of NPI in reliability contexts, detailed mathematical justifications are presented elsewhere
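
    For uncensored data the basic A_(n)-based bounds have a very simple form; the sketch below covers only this elementary case (the paper's treatment of right-censoring and competing risks is more involved) and uses hypothetical failure times:

    # Lower/upper NPI survival probabilities for the next observation:
    #   lower = #{x_i > t} / (n + 1),   upper = (#{x_i > t} + 1) / (n + 1)

    def npi_survival_bounds(data, t):
        n = len(data)
        above = sum(1 for x in data if x > t)
        return above / (n + 1), (above + 1) / (n + 1)

    failure_times = [12.0, 18.5, 23.0, 30.1, 41.7, 55.2]   # hypothetical lifetimes
    for t in (20.0, 40.0):
        lo, hi = npi_survival_bounds(failure_times, t)
        print(f"P(next unit survives beyond {t}): [{lo:.3f}, {hi:.3f}]")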

  12. Accelerator Availability and Reliability Issues

    Energy Technology Data Exchange (ETDEWEB)

    Steve Suhring

    2003-05-01

    Maintaining reliable machine operations for existing machines as well as planning for future machines' operability present significant challenges to those responsible for system performance and improvement. Changes to machine requirements and beam specifications often reduce overall machine availability in an effort to meet user needs. Accelerator reliability issues from around the world will be presented, followed by a discussion of the major factors influencing machine availability.

  13. Power peaking nuclear reliability factors

    International Nuclear Information System (INIS)

    Hassan, H.A.; Pegram, J.W.; Mays, C.W.; Romano, J.J.; Woods, J.J.; Warren, H.D.

    1977-11-01

    The Calculational Nuclear Reliability Factor (CNRF) assigned to the limiting power density calculated in reactor design has been determined. The CNRF is presented as a function of the relative power density of the fuel assembly and its radial location. In addition, the Measurement Nuclear Reliability Factor (MNRF) for the measured peak hot pellet power in the core has been evaluated. This MNRF is also presented as a function of the relative power density and radial location within the fuel assembly

  14. Travel Time Reliability in Indiana

    OpenAIRE

    Martchouk, Maria; Mannering, Fred L.; Singh, Lakhwinder

    2010-01-01

    Travel time and travel time reliability are important performance measures for assessing traffic condition and extent of congestion on a roadway. This study first uses a floating car technique to assess travel time and travel time reliability on a number of Indiana highways. Then the study goes on to describe the use of Bluetooth technology to collect real travel time data on a freeway and applies it to obtain two weeks of data on Interstate 69 in Indianapolis. An autoregressive model, estima...

  15. Transmission reliability faces future challenges

    International Nuclear Information System (INIS)

    Beaty, W.

    1993-01-01

    The recently published Washington International Energy Group's 1993 Electric Utility Outlook states that nearly one-third (31 percent) of U.S. utility executives expect reliability to decrease in the near future. Electric power system stability is crucial to reliability. Stability analysis determines whether a system will stay intact under normal operating conditions, during minor disturbances such as load fluctuations, and during major disturbances when one or more parts of the system fails. All system elements contribute to reliability or the lack of it. However, this report centers on the transmission segment of the electric system. The North American Electric Reliability Council (NERC) says the transmission systems as planned will be adequate over the next 10 years. However, delays in building new lines and increasing demands for transmission services are serious concerns. Reliability concerns exist in the Mid-Continent Area Power Pool and the Mid-America Interconnected Network regions where transmission facilities have not been allowed to be constructed as planned. Portions of the transmission systems in other regions are loaded at or near their limits. NERC further states that utilities must be allowed to complete planned generation and transmission as scheduled. A reliable supply of electricity also depends on adhering to established operating criteria. Factors that could complicate operations include: More interchange schedules resulting from increased transmission services. Increased line loadings in portions of the transmission systems. Proliferation of non-utility generators

  16. MEMS reliability: coming of age

    Science.gov (United States)

    Douglass, Michael R.

    2008-02-01

    In today's high-volume semiconductor world, one could easily take reliability for granted. As the MOEMS/MEMS industry continues to establish itself as a viable alternative to conventional manufacturing in the macro world, reliability can be of high concern. Currently, there are several emerging market opportunities in which MOEMS/MEMS is gaining a foothold. Markets such as mobile media, consumer electronics, biomedical devices, and homeland security are all showing great interest in microfabricated products. At the same time, these markets are among the most demanding when it comes to reliability assurance. To be successful, each company developing a MOEMS/MEMS device must consider reliability on an equal footing with cost, performance and manufacturability. What can this maturing industry learn from the successful development of DLP technology, air bag accelerometers and inkjet printheads? This paper discusses some basic reliability principles which any MOEMS/MEMS device development must use. Examples from the commercially successful and highly reliable Digital Micromirror Device complement the discussion.

  17. Fuel reliability experience in Finland

    International Nuclear Information System (INIS)

    Kekkonen, L.

    2015-01-01

    Four nuclear reactors have now operated in Finland for 35-38 years. The two VVER-440 units at the Loviisa Nuclear Power Plant are operated by Fortum and the two BWRs in Olkiluoto are operated by Teollisuuden Voima Oyj (TVO). The fuel reliability experience of the four reactors currently operating in Finland has been very good and the fuel failure rates have been very low. Systematic inspection of spent fuel assemblies, and especially of all failed assemblies, is a good practice that is employed in Finland in order to improve fuel reliability and operational safety. Investigation of the root cause of fuel failures is important in developing ways to prevent similar failures in the future. The operational and fuel reliability experience at the Loviisa Nuclear Power Plant has also been reported earlier in the international seminars on WWER Fuel Performance, Modelling and Experimental Support. In this paper the information on fuel reliability experience at Loviisa NPP is updated and a short summary of the fuel reliability experience at Olkiluoto NPP is also given. Keywords: VVER-440, fuel reliability, operational experience, poolside inspections, fuel failure identification. (author)

  18. Reliability issues : a Canadian perspective

    International Nuclear Information System (INIS)

    Konow, H.

    2004-01-01

    A Canadian perspective of power reliability issues was presented. Reliability depends on adequacy of supply and a framework for standards. The challenges facing the electric power industry include new demand, plant replacement and exports. It is expected that demand will be 670 TWh by 2020, with 205 TWh coming from new plants. Canada will require an investment of $150 billion to meet this demand and the need is comparable in the United States. As trade grows, the challenge becomes a continental issue and investment in the bi-national transmission grid will be essential. The 5 point plan of the Canadian Electricity Association is to: (1) establish an investment climate to ensure future electricity supply, (2) move government and industry towards smart and effective regulation, (3) work to ensure a sustainable future for the next generation, (4) foster innovation and accelerate skills development, and (5) build on the strengths of an integrated North American system to maximize opportunity for Canadians. The CEA's 7 measures that enhance North American reliability were listed with emphasis on its support for a self-governing international organization for developing and enforcing mandatory reliability standards. CEA also supports the creation of a binational Electric Reliability Organization (ERO) to identify and solve reliability issues in the context of a bi-national grid. tabs., figs

  19. On the reliability of Quake-Catcher Network earthquake detections

    Science.gov (United States)

    Yildirim, Battalgazi; Cochran, Elizabeth S.; Chung, Angela I.; Christensen, Carl M.; Lawrence, Jesse F.

    2015-01-01

    Over the past two decades, there have been several initiatives to create volunteer‐based seismic networks. The Personal Seismic Network, proposed around 1990, used a short‐period seismograph to record earthquake waveforms using existing phone lines (Cranswick and Banfill, 1990; Cranswick et al., 1993). NetQuakes (Luetgert et al., 2010) deploys triaxial Micro‐Electromechanical Systems (MEMS) sensors in private homes, businesses, and public buildings where there is an Internet connection. Other seismic networks using a dense array of low‐cost MEMS sensors are the Community Seismic Network (Clayton et al., 2012; Kohler et al., 2013) and the Home Seismometer Network (Horiuchi et al., 2009). One main advantage of combining low‐cost MEMS sensors and existing Internet connections in public and private buildings over the traditional networks is the reduction in installation and maintenance costs (Koide et al., 2006). In doing so, it is possible to create a dense seismic network for a fraction of the cost of traditional seismic networks (D’Alessandro and D’Anna, 2013; D’Alessandro, 2014; D’Alessandro et al., 2014).

  20. Reliability of genetic bottleneck tests for detecting recent population declines

    NARCIS (Netherlands)

    Peery, M. Zachariah; Kirby, Rebecca; Reid, Brendan N.; Stoelting, Ricka; Doucet-Beer, Elena; Robinson, Stacie; Vasquez-Carrillo, Catalina; Pauli, Jonathan N.; Palsboll, Per J.

    The identification of population bottlenecks is critical in conservation because populations that have experienced significant reductions in abundance are subject to a variety of genetic and demographic processes that can hasten extinction. Genetic bottleneck tests constitute an appealing and

  1. Improving Reliability of Information Leakage Detection and Prevention Systems

    Directory of Open Access Journals (Sweden)

    A. V. Mamaev

    2011-03-01

    Full Text Available The problem of protection from deliberate leaks of information is one of the most difficult. Integrated systems of information protection against insiders have a serious drawback. Exploiting this weakness, an offender can carry out unauthorized theft of information from a working machine.

  2. Reliability analysis techniques in power plant design

    International Nuclear Information System (INIS)

    Chang, N.E.

    1981-01-01

    An overview of reliability analysis techniques is presented as applied to power plant design. The key terms, power plant performance, reliability, availability and maintainability are defined. Reliability modeling, methods of analysis and component reliability data are briefly reviewed. Application of reliability analysis techniques from a design engineering approach to improving power plant productivity is discussed. (author)

  3. Assessment of the reliability of ultrasonic inspection methods

    International Nuclear Information System (INIS)

    Haines, N.F.; Langston, D.B.; Green, A.J.; Wilson, R.

    1982-01-01

    The reliability of NDT techniques has remained an open question for many years. A reliable technique may be defined as one that, when rigorously applied by a number of inspection teams, consistently finds and then correctly sizes all defects of concern. In this paper we report an assessment of the reliability of defect detection by manual ultrasonic methods applied to the inspection of thick section pressure vessel weldments. Initially we consider the available data relating to the inherent physical capabilities of ultrasonic techniques to detect cracks in weldments and then, independently, we assess the likely variability in team-to-team performance when several teams are asked to follow the same specified test procedure. The two aspects of 'capability' and 'variability' are brought together to provide quantitative estimates of the overall reliability of ultrasonic inspection of thick section pressure vessel weldments based on currently existing data. The final section of the paper considers current research programmes on reliability and presents a view on how these will help to further improve NDT reliability. (author)

  4. The Berg Balance Scale has high intra- and inter-rater reliability but absolute reliability varies across the scale: a systematic review.

    Science.gov (United States)

    Downs, Stephen; Marquez, Jodie; Chiarelli, Pauline

    2013-06-01

    What is the intra-rater and inter-rater relative reliability of the Berg Balance Scale? What is the absolute reliability of the Berg Balance Scale? Does the absolute reliability of the Berg Balance Scale vary across the scale? Systematic review with meta-analysis of reliability studies. Any clinical population that has undergone assessment with the Berg Balance Scale. Relative intra-rater reliability, relative inter-rater reliability, and absolute reliability. Eleven studies involving 668 participants were included in the review. The relative intra-rater reliability of the Berg Balance Scale was high, with a pooled estimate of 0.98 (95% CI 0.97 to 0.99). Relative inter-rater reliability was also high, with a pooled estimate of 0.97 (95% CI 0.96 to 0.98). A ceiling effect of the Berg Balance Scale was evident for some participants. In the analysis of absolute reliability, all of the relevant studies had an average score of 20 or above on the 0 to 56 point Berg Balance Scale. The absolute reliability across this part of the scale, as measured by the minimal detectable change with 95% confidence, varied between 2.8 points and 6.6 points. The Berg Balance Scale has a higher absolute reliability when close to 56 points due to the ceiling effect. We identified no data that estimated the absolute reliability of the Berg Balance Scale among participants with a mean score below 20 out of 56. The Berg Balance Scale has acceptable reliability, although it might not detect modest, clinically important changes in balance in individual subjects. The review was only able to comment on the absolute reliability of the Berg Balance Scale among people with moderately poor to normal balance. Copyright © 2013 Australian Physiotherapy Association. All rights reserved.
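
    Minimal detectable change figures like the 2.8 to 6.6 point range above are conventionally derived from a reliability coefficient and a sample standard deviation via SEM = SD·√(1 − ICC) and MDC95 = 1.96·√2·SEM. The sketch below uses assumed standard deviations rather than values extracted from the review:

    from math import sqrt

    def mdc95(sd, icc):
        sem = sd * sqrt(1.0 - icc)          # standard error of measurement
        return 1.96 * sqrt(2.0) * sem       # 95% minimal detectable change

    for sd, icc in [(8.0, 0.98), (12.0, 0.97)]:
        print(f"SD = {sd}, ICC = {icc}: MDC95 ≈ {mdc95(sd, icc):.1f} points")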

  5. Reliability of Bluetooth Technology for Travel Time Estimation

    DEFF Research Database (Denmark)

    Araghi, Bahar Namaki; Olesen, Jonas Hammershøj; Krishnan, Rajesh

    2015-01-01

    . However, their corresponding impacts on accuracy and reliability of estimated travel time have not been evaluated. In this study, a controlled field experiment is conducted to collect both Bluetooth and GPS data for 1000 trips to be used as the basis for evaluation. Data obtained by GPS logger is used...... to calculate actual travel time, referred to as ground truth, and to geo-code the Bluetooth detection events. In this setting, reliability is defined as the percentage of devices captured per trip during the experiment. It is found that, on average, Bluetooth-enabled devices will be detected 80% of the time......-range antennae detect Bluetooth-enabled devices in a closer location to the sensor, thus providing a more accurate travel time estimate. However, the smaller the size of the detection zone, the lower the penetration rate, which could itself influence the accuracy of estimates. Therefore, there has to be a trade...

  6. Reliability analysis under epistemic uncertainty

    International Nuclear Information System (INIS)

    Nannapaneni, Saideep; Mahadevan, Sankaran

    2016-01-01

    This paper proposes a probabilistic framework to include both aleatory and epistemic uncertainty within model-based reliability estimation of engineering systems for individual limit states. Epistemic uncertainty is considered due to both data and model sources. Sparse point and/or interval data regarding the input random variables leads to uncertainty regarding their distribution types, distribution parameters, and correlations; this statistical uncertainty is included in the reliability analysis through a combination of likelihood-based representation, Bayesian hypothesis testing, and Bayesian model averaging techniques. Model errors, which include numerical solution errors and model form errors, are quantified through Gaussian process models and included in the reliability analysis. The probability integral transform is used to develop an auxiliary variable approach that facilitates a single-level representation of both aleatory and epistemic uncertainty. This strategy results in an efficient single-loop implementation of Monte Carlo simulation (MCS) and FORM/SORM techniques for reliability estimation under both aleatory and epistemic uncertainty. Two engineering examples are used to demonstrate the proposed methodology. - Highlights: • Epistemic uncertainty due to data and model included in reliability analysis. • A novel FORM-based approach proposed to include aleatory and epistemic uncertainty. • A single-loop Monte Carlo approach proposed to include both types of uncertainties. • Two engineering examples used for illustration.
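
    The idea of propagating aleatory and epistemic uncertainty in a single sampling loop can be illustrated, in a much simpler form than the paper's auxiliary-variable formulation, with a toy limit state in which one distribution parameter is itself uncertain:

    import random

    rng = random.Random(42)
    capacity = 10.0            # assumed deterministic capacity
    n_samples = 200_000
    failures = 0

    for _ in range(n_samples):
        # epistemic layer: the demand mean is only known to lie in [5.5, 6.5]
        mu_demand = rng.uniform(5.5, 6.5)
        # aleatory layer: natural scatter of the demand around that mean
        demand = rng.gauss(mu_demand, 1.2)
        if capacity - demand < 0.0:
            failures += 1

    print(f"estimated failure probability ≈ {failures / n_samples:.2e}")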

  7. Emergency diesel generator reliability program

    International Nuclear Information System (INIS)

    Serkiz, A.W.

    1989-01-01

    The need for an emergency diesel generator (EDG) reliability program has been established by 10 CFR Part 50, Section 50.63, Loss of All Alternating Current Power, which requires that utilities assess their station blackout duration and recovery capability. EDGs are the principal emergency ac power sources for coping with a station blackout. Regulatory Guide 1.155, Station Blackout, identifies a need for (1) an EDG reliability equal to or greater than 0.95, and (2) an EDG reliability program to monitor and maintain the required levels. The resolution of Generic Safety Issue (GSI) B-56 embodies the identification of a suitable EDG reliability program structure, revision of pertinent regulatory guides and Tech Specs, and development of an Inspection Module. Resolution of B-56 is coupled to the resolution of Unresolved Safety Issue (USI) A-44, Station Blackout, which resulted in the station blackout rule, 10 CFR 50.63 and Regulatory Guide 1.155, Station Blackout. This paper discusses the principal elements of an EDG reliability program developed for resolving GSI B-56 and related matters

  8. Design of a Novel In-Pipe Reliable Leak Detector

    OpenAIRE

    Chatzigeorgiou, Dimitrios; Youcef-Toumi, Kamal; Ben-Mansour, Rached

    2013-01-01

    Leakage is the major factor for unaccounted losses in every pipe network around the world (oil, gas, or water). In most cases, the deleterious effects associated with the occurrence of leaks may present serious economical and health problems. Therefore, leaks must be quickly detected, located, and repaired. Unfortunately, most state-of-the-art leak detection systems have limited applicability, are neither reliable nor robust, while others depend on the user experience. In this paper, we prese...

  9. Reliability and continuous regeneration model

    Directory of Open Access Journals (Sweden)

    Anna Pavlisková

    2006-06-01

    Full Text Available The failure-free operation of an object is very important in service. This leads to interest in determining the object's reliability and failure intensity. The reliability of an element is defined by the theory of probability. The element durability T is a continuous random variable with probability density f. The failure intensity λ(t) is a very important reliability characteristic of the element. Often it is an increasing function, which corresponds to ageing of the element. We had at our disposal data on belt conveyor failures recorded over a period of 90 months. The given data set follows the normal distribution. Using mathematical analysis and mathematical statistics, we found the failure intensity function λ(t). The function λ(t) increases almost linearly.
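
    The failure-intensity computation described in this record amounts to evaluating the hazard rate λ(t) = f(t)/(1 − F(t)); for a normal lifetime model this rate indeed grows almost linearly at larger t. The sketch below uses an assumed mean and standard deviation, not the conveyor data:

    from statistics import NormalDist

    life = NormalDist(mu=45.0, sigma=15.0)   # assumed lifetime model, in months

    def hazard(t):
        return life.pdf(t) / (1.0 - life.cdf(t))

    for t in (30, 45, 60, 75, 90):
        print(f"t = {t:3d} months: lambda(t) ≈ {hazard(t):.3f} per month")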

  10. Reliability Based Ship Structural Design

    DEFF Research Database (Denmark)

    Dogliani, M.; Østergaard, C.; Parmentier, G.

    1996-01-01

    This paper deals with the development of different methods that allow the reliability-based design of ship structures to be transferred from the area of research to systematic application in current design. It summarises the achievements of a three-year collaborative research project dealing...... with developments of models of load effects and of structural collapse adopted in reliability formulations which aim at calibrating partial safety factors for ship structural design. New probabilistic models of still-water load effects are developed both for tankers and for containerships. New results are presented...... structure of several tankers and containerships. The results of the reliability analysis were the basis for the definition of a target safety level, which was used to assess the partial safety factors suitable for a new design rules format to be adopted in modern ship structural design. Finally...

  11. Structural Optimization with Reliability Constraints

    DEFF Research Database (Denmark)

    Sørensen, John Dalsgaard; Thoft-Christensen, Palle

    1986-01-01

    During the last 25 years considerable progress has been made in the fields of structural optimization and structural reliability theory. In classical deterministic structural optimization all variables are assumed to be deterministic. Due to the unpredictability of loads and strengths of actual......]. In this paper we consider only structures which can be modelled as systems of elasto-plastic elements, e.g. frame and truss structures. In section 2 a method to evaluate the reliability of such structural systems is presented. Based on a probabilistic point of view a modern structural optimization problem...... is formulated in section 3. The formulation is a natural extension of the commonly used formulations in deterministic structural optimization. The mathematical form of the optimization problem is briefly discussed. In section 4 two new optimization procedures especially designed for the reliability...

  12. Reliability assessment of Wind turbines

    DEFF Research Database (Denmark)

    Sørensen, John Dalsgaard

    2015-01-01

    Wind turbines can be considered as structures that are in between civil engineering structures and machines since they consist of structural components and many electrical and machine components together with a control system. Further, a wind turbine is not a one-of-a-kind structure...... but manufactured in series production based on many component tests, some prototype tests and zeroseries wind turbines. These characteristics influence the reliability assessment where focus in this paper is on the structural components. Levelized Cost Of Energy is very important for wind energy, especially when...... comparing to other energy sources. Therefore much focus is on cost reductions and improved reliability both for offshore and onshore wind turbines. The wind turbine components should be designed to have sufficient reliability level with respect to both extreme and fatigue loads but also not be too costly...

  13. Reliability Modeling of Wind Turbines

    DEFF Research Database (Denmark)

    Kostandyan, Erik

    Cost reductions for offshore wind turbines are a substantial requirement in order to make offshore wind energy more competitive compared to other energy supply methods. During the 20 – 25 years of wind turbines useful life, Operation & Maintenance costs are typically estimated to be a quarter...... for Operation & Maintenance planning. Concentrating efforts on development of such models, this research is focused on reliability modeling of Wind Turbine critical subsystems (especially the power converter system). For reliability assessment of these components, structural reliability methods are applied...... to one third of the total cost of energy. Reduction of Operation & Maintenance costs will result in significant cost savings and result in cheaper electricity production. Operation & Maintenance processes mainly involve actions related to replacements or repair. Identifying the right times when...

  14. Component reliability for electronic systems

    CERN Document Server

    Bajenescu, Titu-Marius I

    2010-01-01

    The main reason for the premature breakdown of today's electronic products (computers, cars, tools, appliances, etc.) is the failure of the components used to build these products. Today professionals are looking for effective ways to minimize the degradation of electronic components to help ensure longer-lasting, more technically sound products and systems. This practical book offers engineers specific guidance on how to design more reliable components and build more reliable electronic systems. Professionals learn how to optimize a virtual component prototype, accurately monitor product reliability during the entire production process, and add the burn-in and selection procedures that are the most appropriate for the intended applications. Moreover, the book helps system designers ensure that all components are correctly applied, margins are adequate, wear-out failure modes are prevented during the expected duration of life, and system interfaces cannot lead to failure.

  15. Reliable computation from contextual correlations

    Science.gov (United States)

    Oestereich, André L.; Galvão, Ernesto F.

    2017-12-01

    An operational approach to the study of computation based on correlations considers black boxes with one-bit inputs and outputs, controlled by a limited classical computer capable only of performing sums modulo-two. In this setting, it was shown that noncontextual correlations do not provide any extra computational power, while contextual correlations were found to be necessary for the deterministic evaluation of nonlinear Boolean functions. Here we investigate the requirements for reliable computation in this setting; that is, the evaluation of any Boolean function with success probability bounded away from 1 /2 . We show that bipartite CHSH quantum correlations suffice for reliable computation. We also prove that an arbitrarily small violation of a multipartite Greenberger-Horne-Zeilinger noncontextuality inequality also suffices for reliable computation.

  16. Reliability Characteristics of Power Plants

    Directory of Open Access Journals (Sweden)

    Zbynek Martinek

    2017-01-01

    Full Text Available This paper describes the phenomenon of reliability of power plants. It gives an explanation of the terms connected with this topic as their proper understanding is important for understanding the relations and equations which model the possible real situations. The reliability phenomenon is analysed using both the exponential distribution and the Weibull distribution. The results of our analysis are specific equations giving information about the characteristics of the power plants, the mean time of operations and the probability of failure-free operation. Equations solved for the Weibull distribution respect the failures as well as the actual operating hours. Thanks to our results, we are able to create a model of dynamic reliability for prediction of future states. It can be useful for improving the current situation of the unit as well as for creating the optimal plan of maintenance and thus have an impact on the overall economics of the operation of these power plants.
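
    As a concrete illustration of the characteristics mentioned above, the following sketch evaluates the probability of failure-free operation and the mean time to failure under both the exponential and the Weibull models; the failure rate and the Weibull shape and scale parameters are assumed values, not data from the paper:

    ```python
    # Illustrative reliability characteristics under exponential and Weibull
    # models; the failure rate and the shape/scale parameters are assumptions.
    import math

    def r_exponential(t: float, lam: float) -> float:
        """Probability of failure-free operation, R(t) = exp(-lambda * t)."""
        return math.exp(-lam * t)

    def r_weibull(t: float, beta: float, eta: float) -> float:
        """Weibull reliability, R(t) = exp(-(t / eta) ** beta)."""
        return math.exp(-((t / eta) ** beta))

    lam = 1.0e-4                      # assumed constant failure rate, 1/h
    beta, eta = 1.8, 12_000.0         # assumed Weibull shape and scale, h

    mttf_exp = 1.0 / lam                              # exponential mean time to failure
    mttf_wbl = eta * math.gamma(1.0 + 1.0 / beta)     # Weibull mean time to failure

    print(f"R_exp(8760 h) = {r_exponential(8760, lam):.3f}, MTTF = {mttf_exp:.0f} h")
    print(f"R_wbl(8760 h) = {r_weibull(8760, beta, eta):.3f}, MTTF = {mttf_wbl:.0f} h")
    ```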

  17. Strategy for continuous improvement in IC manufacturability, yield, and reliability

    Science.gov (United States)

    Dreier, Dean J.; Berry, Mark; Schani, Phil; Phillips, Michael; Steinberg, Joe; DePinto, Gary

    1993-01-01

    Continual improvements in yield, reliability and manufacturability measure a fab and ultimately result in Total Customer Satisfaction. A new organizational and technical methodology for continuous defect reduction has been established in a formal feedback loop, which relies on yield and reliability, failed bit map analysis, analytical tools, inline monitoring, cross functional teams and a defect engineering group. The strategy requires the fastest detection, identification and implementation of possible corrective actions. Feedback cycle time is minimized at all points to improve yield and reliability and reduce costs, essential for competitiveness in the memory business. Payoff was a 9.4X reduction in defectivity and a 6.2X improvement in reliability of 256 K fast SRAMs over 20 months.

  18. Product reliability and the reliability of its emanating operational processes.

    NARCIS (Netherlands)

    Sonnemans, P.J.M.; Geudens, W.H.J.M.

    1999-01-01

    This paper addresses the problem of proper reliability management in business operations today, facing increasing demands on essential business drivers such as time to market, quality and financial profit. In this paper a general method is described of how to achieve product quality in a highly

  19. Sequential decision reliability concept and failure rate assessment

    International Nuclear Information System (INIS)

    Ciftcioglu, O.

    1990-11-01

    Conventionally, a reliability concept is considered together with both each basic unit and their integration in a complicated large scale system such as a nuclear power plant (NPP). Basically, as the plant's operational status is determined by the information obtained from various sensors, the plant's reliability and the risk assessment is closely related to the reliability of the sensory information and hence the sensor components. However, considering the relevant information-processing systems, e.g. fault detection processors, there exists a further question about the reliability of such systems, specifically the reliability of the systems' decision-based outcomes by means of which the further actions are performed. To this end, a general sequential decision reliability concept and the failure rate assessment methodology is introduced. The implications of the methodology are investigated and the importance of the decision reliability concept in system operation is demonstrated by means of sensory signals in real-time from the Borssele NPP in the Netherlands. (author). 21 refs.; 8 figs

  20. Power Electronics Packaging Reliability | Transportation Research | NREL

    Science.gov (United States)

    NREL power electronics packaging reliability research investigates how the electronics packaging around a semiconductor switching device determines the electrical, thermal, and

  1. Reliability Assessment of Concrete Bridges

    DEFF Research Database (Denmark)

    Thoft-Christensen, Palle; Middleton, C. R.

    This paper is partly based on research performed for the Highways Agency, London, UK under the project DPU/9/44 "Revision of Bridge Assessment Rules Based on Whole Life Performance: concrete bridges". It contains the details of a methodology which can be used to generate Whole Life (WL) reliability...... profiles. These WL reliability profiles may be used to establish revised rules for concrete bridges. This paper is to some extent based on Thoft-Christensen et al. [1996], Thoft-Christensen et al. [1996] and Thoft-Christensen [1996]....

  2. Metrological Reliability of Medical Devices

    Science.gov (United States)

    Costa Monteiro, E.; Leon, L. F.

    2015-02-01

    The prominent development of health technologies of the 20th century triggered demands for metrological reliability of physiological measurements comprising physical, chemical and biological quantities, essential to ensure accurate and comparable results of clinical measurements. In the present work, aspects concerning metrological reliability in premarket and postmarket assessments of medical devices are discussed, pointing out challenges to be overcome. In addition, considering the social relevance of the biomeasurements results, Biometrological Principles to be pursued by research and innovation aimed at biomedical applications are proposed, along with the analysis of their contributions to guarantee the innovative health technologies compliance with the main ethical pillars of Bioethics.

  3. The problem of software reliability

    International Nuclear Information System (INIS)

    Ballard, G.M.

    1989-01-01

    The state of the art in safety and reliability assessment of the software of industrial computer systems is reviewed and likely progress over the next few years is identified and compared with the perceived needs of the user. Some of the current projects contributing to the development of new techniques for assessing software reliability are described. One is the software test and evaluation method which looked at the faults within and between two manufacturers' specifications, faults in the codes and inconsistencies between the codes and specifications. The results are given. (author)

  4. Safety and reliability in Europe

    International Nuclear Information System (INIS)

    Colombo, A.G.

    1985-01-01

    This volume contains the papers presented at the ESRA Pre-Launching Meeting. The meeting was attended by about eighty European reliability and safety experts from industry, research organizations and universities. The meeting dealt with the following subjects: the historical perspective of safety and reliability in Europe and the aims of ESRA; Status and Trends in Research and Development; Codes, Standards and Regulations; Academic and Technical Training; National and International Organizations. Twenty-six papers have been analyzed and abstracted for inclusion in the data base.

  5. Photovoltaic power system reliability considerations

    Science.gov (United States)

    Lalli, V. R.

    1980-01-01

    This paper describes an example of how modern engineering and safety techniques can be used to assure the reliable and safe operation of photovoltaic power systems. This particular application was for a solar cell power system demonstration project in Tangaye, Upper Volta, Africa. The techniques involve a definition of the power system natural and operating environment, use of design criteria and analysis techniques, an awareness of potential problems via the inherent reliability and FMEA methods, and use of a fail-safe and planned spare parts engineering philosophy.

  6. Reliability in the design phase

    International Nuclear Information System (INIS)

    Siahpush, A.S.; Hills, S.W.; Pham, H.; Majumdar, D.

    1991-12-01

    A study was performed to determine the common methods and tools that are available to calculate or predict a system's reliability. A literature review and software survey are included. The desired product of this developmental work is a tool for the system designer to use in the early design phase so that the final design will achieve the desired system reliability without lengthy testing and rework. Three computer programs were written which provide the first attempt at fulfilling this need. The programs are described and a case study is presented for each one. This is a continuing effort which will be furthered in FY-1992. 10 refs
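
    The kind of early-design-phase calculation such tools support can be illustrated with a short sketch of a reliability roll-up for independent components in series and parallel; it is not one of the three programs described in the report, and the component values are assumptions:

    ```python
    # Sketch of a basic design-phase reliability roll-up for independent
    # components: series blocks multiply, parallel (redundant) blocks use
    # 1 - product of unreliabilities. Component values are assumptions.
    from math import prod

    def series(*r: float) -> float:
        return prod(r)

    def parallel(*r: float) -> float:
        return 1.0 - prod(1.0 - ri for ri in r)

    pump = 0.95            # assumed component reliabilities over the mission
    valve = 0.99
    controller = 0.97

    # Two redundant pumps in parallel, in series with a valve and a controller.
    system = series(parallel(pump, pump), valve, controller)
    print(f"System reliability ~ {system:.4f}")
    ```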

  7. FRELIB, Failure Reliability Index Calculation

    International Nuclear Information System (INIS)

    Parkinson, D.B.; Oestergaard, C.

    1984-01-01

    1 - Description of problem or function: Calculation of the reliability index given the failure boundary. A linearization point (design point) is found on the failure boundary for a stationary reliability index (min) and a stationary failure probability density function along the failure boundary, provided that the basic variables are normally distributed. 2 - Method of solution: Iteration along the failure boundary which must be specified - together with its partial derivatives with respect to the basic variables - by the user in a subroutine FSUR. 3 - Restrictions on the complexity of the problem: No distribution information included (first-order-second-moment-method). 20 basic variables (could be extended)
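
    The design-point iteration FRELIB performs can be illustrated, in generic form, by the following first-order reliability (FORM) sketch in standard normal space; the limit-state function g and its gradient stand in for the user-supplied subroutine FSUR, and the particular g used here is an assumed example, not one from the code's documentation:

    ```python
    # Generic first-order reliability (FORM) iteration sketch in standard normal
    # space -- an illustration of the design-point search, not the FRELIB code
    # itself. g() and grad_g() play the role of subroutine FSUR.
    import numpy as np

    def g(u):
        # Assumed linear limit-state function: g(u) <= 0 means failure.
        return 3.0 - u[0] - 0.5 * u[1]

    def grad_g(u):
        return np.array([-1.0, -0.5])

    u = np.zeros(2)                       # start the iteration at the origin
    for _ in range(50):
        grads = grad_g(u)
        # Rackwitz-Fiessler update toward the design point on g(u) = 0.
        u_new = grads * (grads @ u - g(u)) / (grads @ grads)
        if np.linalg.norm(u_new - u) < 1e-8:
            u = u_new
            break
        u = u_new

    beta = np.linalg.norm(u)              # Hasofer-Lind reliability index
    print(f"design point u* = {u}, beta = {beta:.3f}")
    ```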

  8. Reliability of sonographic assessment of tendinopathy in tennis elbow.

    Science.gov (United States)

    Poltawski, Leon; Ali, Syed; Jayaram, Vijay; Watson, Tim

    2012-01-01

    To assess the reliability and compute the minimum detectable change using sonographic scales to quantify the extent of pathology and hyperaemia in the common extensor tendon in people with tennis elbow. The lateral elbows of 19 people with tennis elbow were assessed sonographically twice, 1-2 weeks apart. Greyscale and power Doppler images were recorded for subsequent rating of abnormalities. Tendon thickening, hypoechogenicity, fibrillar disruption and calcification were each rated on four-point scales, and scores were summed to provide an overall rating of structural abnormality; hyperaemia was scored on a five point scale. Inter-rater reliability was established using the intraclass correlation coefficient (ICC) to compare scores assigned independently to the same set of images by a radiologist and a physiotherapist with training in musculoskeletal imaging. Test-retest reliability was assessed by comparing scores assigned by the physiotherapist to images recorded at the two sessions. The minimum detectable change (MDC) was calculated from the test-retest reliability data. ICC values for inter-rater reliability ranged from 0.35 (95% CI: 0.05, 0.60) for fibrillar disruption to 0.77 (0.55, 0.88) for overall greyscale score, and 0.89 (0.79, 0.95) for hyperaemia. Test-retest reliability ranged from 0.70 (0.48, 0.84) for tendon thickening to 0.82 (0.66, 0.90) for overall greyscale score and 0.86 (0.73, 0.93) for calcification. The MDC for the greyscale total score was 2.0/12 and for the hyperaemia score was 1.1/5. The sonographic scoring system used in this study may be used reliably to quantify tendon abnormalities and change over time. A relatively inexperienced imager can conduct the assessment and use the rating scales reliably.
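
    The usual route from a test-retest ICC to a minimum detectable change is MDC95 = 1.96 · sqrt(2) · SEM with SEM = SD · sqrt(1 − ICC); the abstract does not state the exact formula used, so the sketch below only illustrates this common definition with assumed values:

    ```python
    # Sketch of the usual route from test-retest ICC to the minimum detectable
    # change: SEM = SD * sqrt(1 - ICC), MDC95 = 1.96 * sqrt(2) * SEM.
    # The ICC and SD values below are illustrative, not taken from the study.
    import math

    def mdc95(icc: float, sd: float) -> float:
        sem = sd * math.sqrt(1.0 - icc)          # standard error of measurement
        return 1.96 * math.sqrt(2.0) * sem       # 95% minimum detectable change

    print(f"MDC95 ~ {mdc95(icc=0.82, sd=1.7):.2f} scale points")
    ```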

  9. Enhancing reliable online transaction with intelligent rule-based ...

    African Journals Online (AJOL)

    Enhancing reliable online transaction with intelligent rule-based fraud detection technique. ... These are with a bid to reducing amongst other things the cost of production and also dissuade the poor handling of Nigeria currency. The CBN pronouncement has necessitated the upsurge in transactions completed with credit ...

  10. 77 FR 16435 - Transmission Relay Loadability Reliability Standard

    Science.gov (United States)

    2012-03-21

    ... conditions on all applicable transmission lines and transformers. I. Background A. Relay Protection Systems 2... and a power swing. If a power swing is detected, the protection system, ``blocks,'' or prevents the... to the reliability of the Bulk-Power System by requiring load-responsive phase protection relay...

  11. Case Study: Zutphen : Estimates of levee system reliability

    NARCIS (Netherlands)

    Roscoe, K.; Kothuis, Baukje; Kok, Matthijs

    2017-01-01

    Estimates of levee system reliability can conflict with experience and intuition. For example, a very high failure probability may be computed while no evidence of failure has been observed, or a very low failure probability when signs of failure have been detected.

  12. 76 FR 58101 - Electric Reliability Organization Interpretation of Transmission Operations Reliability Standard

    Science.gov (United States)

    2011-09-20

    ....C. Cir. 2009). \\4\\ Mandatory Reliability Standards for the Bulk-Power System, Order No. 693, FERC... for maintaining real and reactive power balance. \\14\\ Electric Reliability Organization Interpretation...; Order No. 753] Electric Reliability Organization Interpretation of Transmission Operations Reliability...

  13. Fatigue Reliability under Random Loads

    DEFF Research Database (Denmark)

    Talreja, R.

    1979-01-01

    We consider the problem of estimating the probability of survival (non-failure) and the probability of safe operation (strength greater than a limiting value) of structures subjected to random loads. These probabilities are formulated in terms of the probability distributions of the loads...... propagation stage. The consequences of this behaviour on the fatigue reliability are discussed....

  14. Forecasting reliability of transformer populations

    NARCIS (Netherlands)

    Schijndel, van A.; Wetzer, J.; Wouters, P.A.A.F.

    2007-01-01

    The expected replacement wave in the current power grid faces asset managers with challenging questions. Setting up a replacement strategy and planning calls for a forecast of the long term component reliability. For transformers the future failure probability can be predicted based on the ongoing

  15. Teaming up to improve reliability

    International Nuclear Information System (INIS)

    Malone, E.A.; Ayres, D.J.; Lear, R.C. van

    1989-01-01

    Responding to increasingly stringent regulatory requirements, Babcock and Wilcox has teamed up with three specialist companies to provide services for nuclear utilities aiming to improve the performance of their valves and actuators. The services, which are outlined here, include inspection, repair, overhaul and valve and actuator reliability programmes. (author)

  16. Nonelectronic Parts Reliability Data 1991

    Science.gov (United States)

    1991-05-01


  17. INCREASED RELIABILITY OF ELECTRIC BLASTING

    OpenAIRE

    Kashuba, Oleh Ivanovych; Skliarov, L I; Skliarov, A L

    2017-01-01

    The paper addresses the problems of improving the reliability of an electric blasting method using electric detonators with nichrome filament bridges. It was revealed that in the calculation of the total resistance of the explosive network it is necessary to allow for an increase of up to 24% of the nominal value.

  18. Reliability Analysis of Money Habitudes

    Science.gov (United States)

    Delgadillo, Lucy M.; Bushman, Brittani S.

    2015-01-01

    Use of the Money Habitudes exercise has gained popularity among various financial professionals. This article reports on the reliability of this resource. A survey administered to young adults at a western state university was conducted, and each Habitude or "domain" was analyzed using Cronbach's alpha procedures. Results showed all six…

  19. Proposed Reliability/Cost Model

    Science.gov (United States)

    Delionback, L. M.

    1982-01-01

    New technique estimates cost of improvement in reliability for complex system. Model format/approach is dependent upon use of subsystem cost-estimating relationships (CER's) in devising cost-effective policy. Proposed methodology should have application in broad range of engineering management decisions.

  20. Reliability in the Rasch Model

    Czech Academy of Sciences Publication Activity Database

    Martinková, Patrícia; Zvára, K.

    2007-01-01

    Roč. 43, č. 3 (2007), s. 315-326 ISSN 0023-5954 R&D Projects: GA MŠk(CZ) 1M06014 Institutional research plan: CEZ:AV0Z10300504 Keywords : Cronbach's alpha * Rasch model * reliability Subject RIV: BB - Applied Statistics, Operational Research Impact factor: 0.552, year: 2007 http://dml.cz/handle/10338.dmlcz/135776

  1. On the NPP structural reliability

    International Nuclear Information System (INIS)

    Klemin, A.I.; Polyakov, E.F.

    1980-01-01

    Reviewed are the main statements, peculiarities and possibilities of the first branch guiding technical material (GTM), ''The methods of calculation of structural reliability of NPP and its systems at the design stage''. The GTM presents recommendations on the calculation of the reliability of such specific systems as the reactor control and protection system, the system of measuring instruments and automatics, and the safety systems. The GTM is based on analytical methods of modern reliability theory using the methodology of minimal cut sets of complex systems. It is stressed that calculations by the proposed methods permit evaluation of a wide set of reliability parameters, reflecting separately or together the dependability and maintainability properties of the NPP. For an NPP operating on a variable load-following schedule, parameters characterizing reliability with account of the proposed regime of power change are additionally considered, i.e. taking into account failures caused by a decrease of the delivered power below the required level or an increase of the required power above the level delivered.

  2. System reliability of corroding pipelines

    International Nuclear Information System (INIS)

    Zhou Wenxing

    2010-01-01

    A methodology is presented in this paper to evaluate the time-dependent system reliability of a pipeline segment that contains multiple active corrosion defects and is subjected to stochastic internal pressure loading. The pipeline segment is modeled as a series system with three distinctive failure modes due to corrosion, namely small leak, large leak and rupture. The internal pressure is characterized as a simple discrete stochastic process that consists of a sequence of independent and identically distributed random variables each acting over a period of one year. The magnitude of a given sequence follows the annual maximum pressure distribution. The methodology is illustrated through a hypothetical example. Furthermore, the impact of the spatial variability of the pressure loading and pipe resistances associated with different defects on the system reliability is investigated. The analysis results suggest that the spatial variability of pipe properties has a negligible impact on the system reliability. On the other hand, the spatial variability of the internal pressure, initial defect sizes and defect growth rates can have a significant impact on the system reliability.
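
    A toy sketch of the series-system idea (any of several growing defects failing in any year fails the segment, with the pressure redrawn each year from an annual-maximum distribution) is given below; the distributions, growth model and capacity model are invented for illustration and are not the paper's burst and leak limit states:

    ```python
    # Toy Monte Carlo sketch of a time-dependent series system: a segment with
    # several growing corrosion defects fails if any defect's resistance drops
    # below the annual maximum pressure in any year. Distributions, growth rates
    # and the capacity model are assumptions, not the paper's limit states.
    import random

    def fails_within(years: int, n_defects: int = 3) -> bool:
        depth = [random.uniform(1.0, 3.0) for _ in range(n_defects)]   # mm, initial
        growth = [random.gauss(0.3, 0.1) for _ in range(n_defects)]    # mm/year
        for _ in range(years):
            p_max = random.gauss(8.0, 0.8)                 # annual max pressure, MPa
            for i in range(n_defects):
                depth[i] += max(growth[i], 0.0)
                resistance = 15.0 - depth[i]               # crude capacity model, MPa
                if resistance < p_max:                     # any defect failing -> series failure
                    return True
        return False

    random.seed(1)
    trials = 20_000
    pf = sum(fails_within(15) for _ in range(trials)) / trials
    print(f"Estimated 15-year failure probability ~ {pf:.4f}")
    ```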

  3. Reliability analysis framework for computer-assisted medical decision systems

    International Nuclear Information System (INIS)

    Habas, Piotr A.; Zurada, Jacek M.; Elmaghraby, Adel S.; Tourassi, Georgia D.

    2007-01-01

    We present a technique that enhances computer-assisted decision (CAD) systems with the ability to assess the reliability of each individual decision they make. Reliability assessment is achieved by measuring the accuracy of a CAD system with known cases similar to the one in question. The proposed technique analyzes the feature space neighborhood of the query case to dynamically select an input-dependent set of known cases relevant to the query. This set is used to assess the local (query-specific) accuracy of the CAD system. The estimated local accuracy is utilized as a reliability measure of the CAD response to the query case. The underlying hypothesis of the study is that CAD decisions with higher reliability are more accurate. The above hypothesis was tested using a mammographic database of 1337 regions of interest (ROIs) with biopsy-proven ground truth (681 with masses, 656 with normal parenchyma). Three types of decision models, (i) a back-propagation neural network (BPNN), (ii) a generalized regression neural network (GRNN), and (iii) a support vector machine (SVM), were developed to detect masses based on eight morphological features automatically extracted from each ROI. The performance of all decision models was evaluated using the Receiver Operating Characteristic (ROC) analysis. The study showed that the proposed reliability measure is a strong predictor of the CAD system's case-specific accuracy. Specifically, the ROC area index for CAD predictions with high reliability was significantly better than for those with low reliability values. This result was consistent across all decision models investigated in the study. The proposed case-specific reliability analysis technique could be used to alert the CAD user when an opinion that is unlikely to be reliable is offered. The technique can be easily deployed in the clinical environment because it is applicable with a wide range of classifiers regardless of their structure and it requires neither additional
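
    The core idea, estimating the classifier's accuracy on the k known cases nearest to the query and reporting it as the reliability of that particular decision, can be sketched generically as follows (this is not the authors' implementation; the classifier interface and variable names are assumptions):

    ```python
    # Minimal sketch of query-specific reliability: measure a CAD classifier's
    # accuracy on the k known cases nearest to the query in feature space and
    # report that local accuracy as the reliability of the prediction. Generic
    # illustration only, not the authors' implementation.
    import numpy as np

    def local_reliability(clf, X_known, y_known, x_query, k=25):
        dists = np.linalg.norm(X_known - x_query, axis=1)
        nearest = np.argsort(dists)[:k]          # input-dependent neighbourhood
        local_acc = (clf.predict(X_known[nearest]) == y_known[nearest]).mean()
        return local_acc                         # reliability of the answer to x_query

    # Usage sketch (placeholder names; clf is any fitted scikit-learn-style classifier):
    # rel = local_reliability(clf, X_train, y_train, roi_features)
    # decision = clf.predict(roi_features.reshape(1, -1))
    ```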

  4. Exact reliability quantification of highly reliable systems with maintenance

    Energy Technology Data Exchange (ETDEWEB)

    Bris, Radim, E-mail: radim.bris@vsb.c [VSB-Technical University Ostrava, Faculty of Electrical Engineering and Computer Science, Department of Applied Mathematics, 17. listopadu 15, 70833 Ostrava-Poruba (Czech Republic)

    2010-12-15

    When a system is composed of highly reliable elements, exact reliability quantification may be problematic, because computer accuracy is limited. Inaccuracy can be due to different aspects. For example, an error may be made when subtracting two numbers that are very close to each other, or in the process of summation of many very different numbers, etc. The basic objective of this paper is to find a procedure which eliminates errors made by the PC when calculations close to an error limit are executed. The highly reliable system is represented by a directed acyclic graph composed of terminal nodes, i.e. highly reliable input elements, internal nodes representing subsystems, and edges that bind all of these nodes. Three admissible unavailability models of terminal nodes are introduced, including both corrective and preventive maintenance. The algorithm for exact unavailability calculation of terminal nodes is based on the merits of MATLAB, a high-performance language for technical computing. The system unavailability quantification procedure applied to a graph structure, which considers both independent and dependent (i.e. repeatedly occurring) terminal nodes, is based on a combinatorial principle. This principle requires summation of a lot of very different non-negative numbers, which may be a source of inaccuracy. That is why another algorithm for exact summation of such numbers is designed in the paper. The summation procedure uses the benefits of a special number system with the base represented by the value 2^32. Computational efficiency of the new computing methodology is compared with advanced simulation software. Various calculations on systems from references are performed to emphasize the merits of the methodology.
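
    The summation problem the paper addresses can be seen with a few lines of code: naively accumulating many very different non-negative numbers in floating point silently loses the small terms, whereas a correctly rounded summation does not. The sketch uses Python's math.fsum as the accurate alternative; it illustrates the problem only and is not the paper's base-2^32 algorithm:

    ```python
    # Why naively summing many very different non-negative numbers loses accuracy
    # in floating point, and a correctly rounded alternative. This illustrates
    # the problem the paper addresses; it is not the paper's base-2**32 algorithm.
    import math

    terms = [1.0] + [1e-16] * 100_000    # one large term plus many tiny ones

    naive = 0.0
    for t in terms:
        naive += t                       # each tiny term is absorbed and lost

    accurate = math.fsum(terms)          # correctly rounded sum of the inputs

    print(f"naive loop sum : {naive:.15e}")
    print(f"math.fsum      : {accurate:.15e}")
    ```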

  5. A Model to Partly but Reliably Distinguish DDOS Flood Traffic from Aggregated One

    Directory of Open Access Journals (Sweden)

    Ming Li

    2012-01-01

    Full Text Available Reliable distinguishing DDOS flood traffic from aggregated traffic is desperately desired by reliable prevention of DDOS attacks. By reliable distinguishing, we mean that flood traffic can be distinguished from aggregated one for a predetermined probability. The basis to reliably distinguish flood traffic from aggregated one is reliable detection of signs of DDOS flood attacks. As is known, reliably distinguishing DDOS flood traffic from aggregated traffic becomes a tough task mainly due to the effects of flash-crowd traffic. For this reason, this paper studies reliable detection in the underlying DiffServ network to use static-priority schedulers. In this network environment, we present a method for reliable detection of signs of DDOS flood attacks for a given class with a given priority. There are two assumptions introduced in this study. One is that flash-crowd traffic does not have all priorities but some. The other is that attack traffic has all priorities in all classes, otherwise an attacker cannot completely achieve its DDOS goal. Further, we suppose that the protected site is equipped with a sensor that has a signature library of the legitimate traffic with the priorities flash-crowd traffic does not have. Based on those, we are able to reliably distinguish attack traffic from aggregated traffic with the priorities that flash-crowd traffic does not have according to a given detection probability.

  6. Reliability on the move: safety and reliability in transportation

    International Nuclear Information System (INIS)

    Guy, G.B.

    1989-01-01

    The development of transportation has been a significant factor in the development of civilisation as a whole. Our technical ability to move people and goods now seems virtually limitless when one considers for example the achievements of the various space programmes. Yet our current achievements rely heavily on high standards of safety and reliability from equipment and the human component of transportation systems. Recent failures have highlighted our dependence on equipment and human reliability. This book represents the proceedings of the 1989 Safety and Reliability Society symposium held at Bath on 11-12 October 1989. The structure of the book follows the structure of the symposium itself and the papers selected represent current thinking in the wide field of transportation; the areas of rail (six papers, three on railway signalling), air including space (two papers), road (one paper), road and rail (two papers) and sea (three papers) are covered. There are four papers concerned with general transport issues. Three papers concerned with the transport of radioactive materials are indexed separately. (author)

  7. Cyber security for greater service reliability

    Energy Technology Data Exchange (ETDEWEB)

    Vickery, P. [N-Dimension Solutions Inc., Richmond Hill, ON (Canada)

    2008-05-15

    Service reliability in the electricity transmission and distribution (T and D) industry is being challenged by increased equipment failures, harsher climatic conditions, and computer hackers who aim to disrupt services by gaining access to transmission and distribution resources. This article discussed methods of ensuring the cyber-security of T and D operators. Weak points in the T and D industry include remote terminal units; intelligent electronic devices; distributed control systems; programmable logic controllers; and various intelligent field devices. An increasing number of interconnection points exist between an operator's service control system and external systems. The North American Electric Reliability Council (NERC) standards specify that cyber security strategies should ensure that all cyber assets are protected, and that access points must be monitored to detect intrusion attempts. The introduction of new advanced metering initiatives must also be considered. Comprehensive monitoring systems should be available to support compliance with cyber security standards. It was concluded that senior management should commit to a periodic cyber security re-assessment program in order to keep up-to-date.

  8. Reliability of non-destructive testing methods

    International Nuclear Information System (INIS)

    Broekhoven, M.J.G.

    1988-01-01

    This contribution regards the results of an evaluation of the reliability of radiography (X-rays and gamma-rays), manual-, and mechanized/automated ultrasonic examination by generally accepted codes/rules, with respect to detection, characterization and sizing/localization of defects. The evaluation is based on the results of examinations, by a number of teams, of 30 test plates, 30 and 50 mm thickness, containing V, U, X and K-shaped welds each containing several types of imperfections (211 in total) typical for steel arc fusion welding, such as porosity, inclusions, lack of fusion or penetration and cracks. Besides, some results are presented which were obtained from research on advanced UT-techniques, viz. the time-of-flight-diffraction and flaw-tip deflection technique. (author)

  9. Human reliability in probabilistic safety assessments

    International Nuclear Information System (INIS)

    Nunez Mendez, J.

    1989-01-01

    Nowadays a growing interest in environmental aspects is detected in our country. It implies an assessment of the risk involved in industrial processes and installations in order to determine whether those are within acceptable limits. In these safety assessments, among which PSAs (Probabilistic Safety Assessments) can be pointed out, the role played by the human being in the system is one of the more relevant subjects (this relevance has been demonstrated in the accidents that have happened). However, in Spain there are no manuals specifically dedicated to assessing the human contribution to risk in the frame of PSAs. This report aims to improve this situation by providing: a) a theoretical background to help the reader in understanding the nature of human error, b) a guide to carry out a Human Reliability Analysis and c) a selected overview of the techniques and methodologies currently applied in this area. (Author)

  10. Reldata - a tool for reliability database management

    International Nuclear Information System (INIS)

    Vinod, Gopika; Saraf, R.K.; Babar, A.K.; Sanyasi Rao, V.V.S.; Tharani, Rajiv

    2000-01-01

    Component failure, repair and maintenance data is a very important element of any Probabilistic Safety Assessment study. The credibility of the results of such study is enhanced if the data used is generated from operating experience of similar power plants. Towards this objective, a computerised database is designed, with fields such as, date and time of failure, component name, failure mode, failure cause, ways of failure detection, reactor operating power status, repair times, down time, etc. This leads to evaluation of plant specific failure rate, and on demand failure probability/unavailability for all components. Systematic data updation can provide a real time component reliability parameter statistics and trend analysis and this helps in planning maintenance strategies. A software package has been developed RELDATA, which incorporates the database management and data analysis methods. This report describes the software features and underlying methodology in detail. (author)
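
    The kind of record and plant-specific estimate such a database supports can be sketched as follows; the field names and numbers are illustrative assumptions based loosely on the fields listed above, not RELDATA's actual schema or data:

    ```python
    # Sketch of the kind of record and plant-specific estimate a reliability
    # database like the one described can support. Field names and numbers are
    # illustrative assumptions, not RELDATA's actual schema or data.
    from dataclasses import dataclass
    from datetime import datetime

    @dataclass
    class FailureEvent:
        timestamp: datetime
        component: str
        failure_mode: str
        detection: str          # e.g. alarm, surveillance test, walkdown
        repair_hours: float

    events = [
        FailureEvent(datetime(2020, 3, 1), "AFW-PUMP-A", "fails to start", "test", 6.0),
        FailureEvent(datetime(2021, 7, 9), "AFW-PUMP-A", "fails to run", "alarm", 14.0),
    ]

    operating_hours = 35_000.0                      # assumed cumulative exposure time
    lam = len(events) / operating_hours             # point estimate of failure rate
    unavailability = sum(e.repair_hours for e in events) / operating_hours

    print(f"failure rate ~ {lam:.2e} /h, repair unavailability ~ {unavailability:.2e}")
    ```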

  11. Human Reliability in Probabilistic Safety Assessments

    International Nuclear Information System (INIS)

    Nunez Mendez, J.

    1989-01-01

    Nowadays a growing interest in environmental aspects is detected in our country. It implies an assessment of the risk involved in industrial processes and installations in order to determine whether those are within acceptable limits. In these safety assessments, among which PSAs (Probabilistic Safety Assessments) can be pointed out, the role played by the human being in the system is one of the more relevant subjects (this relevance has been demonstrated in the accidents that have happened). However, in Spain there are no manuals specifically dedicated to assessing the human contribution to risk in the frame of PSAs. This report aims to improve this situation by providing: a) a theoretical background to help the reader in understanding the nature of human error, b) a guide to carry out a Human Reliability Analysis and c) a selected overview of the techniques and methodologies currently applied in this area. (Author) 20 refs

  12. Advanced Functionalities for Highly Reliable Optical Networks

    DEFF Research Database (Denmark)

    An, Yi

    This thesis covers two research topics concerning optical solutions for networks e.g. avionic systems. One is to identify the applications for silicon photonic devices for cost-effective solutions in short-range optical networks. The other one is to realise advanced functionalities in order...... to increase the availability of highly reliable optical networks. A cost-effective transmitter based on a directly modulated laser (DML) using a silicon micro-ring resonator (MRR) to enhance its modulation speed is proposed, analysed and experimentally demonstrated. A modulation speed enhancement from 10 Gbit...... interconnects and network-on-chips. A novel concept of all-optical protection switching scheme is proposed, where fault detection and protection trigger are all implemented in the optical domain. This scheme can provide ultra-fast establishment of the protection path resulting in a minimum loss of data...

  13. Reliability of non-destructive testing methods

    Energy Technology Data Exchange (ETDEWEB)

    Broekhoven, M J.G. [Ministry of Social Affairs, (Netherlands)

    1988-12-31

    This contribution regards the results of an evaluation of the reliability of radiography (X-rays and gamma-rays), manual-, and mechanized/automated ultrasonic examination by generally accepted codes/rules, with respect to detection, characterization and sizing/localization of defects. The evaluation is based on the results of examinations, by a number of teams, of 30 test plates, 30 and 50 mm thickness, containing V, U, X and K-shaped welds each containing several types of imperfections (211 in total) typical for steel arc fusion welding, such as porosity, inclusions, lack of fusion or penetration and cracks. Besides, some results are presented which were obtained from research on advanced UT-techniques, viz. the time-of-flight-diffraction and flaw-tip deflection technique. (author). 4 refs.

  14. Robust Reliability or reliable robustness? - Integrated consideration of robustness and reliability aspects

    DEFF Research Database (Denmark)

    Kemmler, S.; Eifler, Tobias; Bertsche, B.

    2015-01-01

    products are and vice versa. For a comprehensive understanding and to use existing synergies between both domains, this paper discusses the basic principles of Reliability- and Robust Design theory. The development of a comprehensive model will enable an integrated consideration of both domains...

  15. Influence Of Inspection Intervals On Mechanical System Reliability

    International Nuclear Information System (INIS)

    Zilberman, B.

    1998-01-01

    In this paper a methodology for reliability analysis of mechanical systems with latent failures is described. Reliability analysis of such systems must include appropriate usage of check intervals for latent failure detection. The methodology suggests that, based on system logic, the analyst decides at the beginning whether a system can fail actively or latently and propagates this approach through all system levels. All inspections are assumed to be perfect (all failures are detected and repaired and no new failures are introduced as a result of the maintenance). Additional assumptions are that the mission time is much smaller than the check intervals and that all components have constant failure rates. Analytical expressions for reliability calculations are provided, based on fault tree and Markov modeling techniques (for two- and three-redundant systems with inspection intervals). The proposed methodology yields more accurate results than are obtained by not using check intervals or by using half check interval times. The conventional analysis, which assumes that at the beginning of each mission the system is as new, gives an optimistic prediction of system reliability. Some examples of reliability calculations for mechanical systems with latent failures and of establishing optimum check intervals are provided.
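
    For a single component with a latent failure mode, constant failure rate λ and perfect inspections every T hours, the mean unavailability is q = 1 − (1 − e^(−λT)) / (λT), which reduces to the familiar λT/2 when λT is small. The sketch below evaluates this single-component relation for a few check intervals; it is a generic illustration, not the paper's fault-tree or Markov system models, and the failure rate is an assumed value:

    ```python
    # Standard single-component illustration of how the inspection interval for
    # a latent failure drives mean unavailability:
    # exact q = 1 - (1 - exp(-lam*T)) / (lam*T), approximately lam*T/2 for small lam*T.
    # Generic sketch with an assumed failure rate, not the paper's system models.
    import math

    def mean_unavailability(lam: float, t_insp: float) -> float:
        return 1.0 - (1.0 - math.exp(-lam * t_insp)) / (lam * t_insp)

    lam = 1.0e-5                                # assumed latent failure rate, 1/h
    for t_insp in (720.0, 2190.0, 8760.0):      # monthly, quarterly, yearly checks
        q = mean_unavailability(lam, t_insp)
        print(f"T = {t_insp:6.0f} h  q ~ {q:.2e}  (lambda*T/2 = {lam * t_insp / 2:.2e})")
    ```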

  16. 78 FR 38311 - Reliability Technical Conference Agenda

    Science.gov (United States)

    2013-06-26

    ... issues related to the reliability of the Bulk-Power System. The agenda for this conference is attached... Reliability Technical Docket No. AD13-6-000 Conference. North American Electric Docket No. RC11-6-004 Reliability Corporation. North American Electric Docket No. RR13-2-000 Reliability Corporation. Not...

  17. 78 FR 63036 - Transmission Planning Reliability Standards

    Science.gov (United States)

    2013-10-23

    ... Reliability Standards for the Bulk Power System, 130 FERC ¶ 61,200 (2010). \\8\\ Mandatory Reliability Standards... electric system operations across normal and contingency conditions. We also find that Reliability Standard... Reliability Standards for the Bulk Power System, 131 FERC ¶ 61,231 at P 21. Comments 24. NERC supports the...

  18. Medical device reliability and associated areas

    National Research Council Canada - National Science Library

    Dhillon, Balbir S

    2000-01-01

    .... Although the history of reliability engineering can be traced back to World War II, the application of reliability engineering concepts to medical devices is a fairly recent idea that goes back to the latter part of the 1960s when many publications on medical device reliability emerged. Today, a large number of books on general reliability have been...

  19. Human factors reliability Benchmark exercise

    International Nuclear Information System (INIS)

    Poucet, A.

    1989-06-01

    The Joint Research Centre of the European Commission has organized a Human Factors Reliability Benchmark Exercise (HF-RBE) with the aim of assessing the state of the art in human reliability modelling and assessment. Fifteen teams from eleven countries, representing industry, utilities, licensing organisations and research institutes, participated in the HF-RBE. The HF-RBE was organized around two study cases: (1) analysis of routine functional Test and Maintenance (T and M) procedures: with the aim of assessing the probability of test induced failures, the probability of failures to remain unrevealed and the potential to initiate transients because of errors performed in the test; (2) analysis of human actions during an operational transient: with the aim of assessing the probability that the operators will correctly diagnose the malfunctions and take proper corrective action. This report contains the final summary reports produced by the participants in the exercise

  20. Reliability technology and nuclear power

    International Nuclear Information System (INIS)

    Garrick, B.J.; Kaplan, S.

    1976-01-01

    This paper reviews some of the history and status of nuclear reliability and the evolution of this subject from art towards science. It shows that probability theory is the appropriate and essential mathematical language of this subject. The authors emphasize that it is more useful to view probability not as a 'frequency', i.e., not as the result of a statistical experiment, but rather as a measure of a state of confidence or a state of knowledge. They also show that the probabilistic, quantitative approach has a considerable history of application in the electric power industry in the area of power system planning. Finally, the authors show that the decision theory notion of utility provides a point of view from which risks, benefits, safety, and reliability can be viewed in a unified way, thus facilitating understanding, comparison, and communication. 29 refs

  1. Gearbox Reliability Collaborative Bearing Calibration

    Energy Technology Data Exchange (ETDEWEB)

    van Dam, J.

    2011-10-01

    NREL has initiated the Gearbox Reliability Collaborative (GRC) to investigate the root cause of the low wind turbine gearbox reliability. The GRC follows a multi-pronged approach based on a collaborative of manufacturers, owners, researchers and consultants. The project combines analysis, field testing, dynamometer testing, condition monitoring, and the development and population of a gearbox failure database. At the core of the project are two 750kW gearboxes that have been redesigned and rebuilt so that they are representative of the multi-megawatt gearbox topology currently used in the industry. These gearboxes are heavily instrumented and are tested in the field and on the dynamometer. This report discusses the bearing calibrations of the gearboxes.

  2. Reliability improvement of multiversion software by exchanging modules

    International Nuclear Information System (INIS)

    Shima, Kazuyuki; Matsumoto, Ken-ichi; Torii, Koji

    1996-01-01

    In this paper, we propose a method to improve the reliability of multiversion software. In the previously proposed CER scheme, checkpoints are put in the versions of a program and errors of the versions are detected and recovered at the checkpoints. This prevents versions from failing and improves the reliability of multiversion software. However, it has been pointed out that CER decreases the reliability of the multiversion software if the detection and recovery of errors are assumed to be able to fail. In the method proposed in this paper, the versions of a program are developed following the same module specifications. When failures of program versions are detected, faulty modules are identified and replaced with other modules. This creates versions without faulty modules and improves the reliability of multiversion software. With the proposed method, the failure probability of the multiversion software is estimated to become about a hundredth of the original failure probability where the failure probability of each version is 0.000698, the number of versions is 5 and the number of modules is 20. (author)

  3. The reliability and stability of visual working memory capacity.

    Science.gov (United States)

    Xu, Z; Adam, K C S; Fang, X; Vogel, E K

    2018-04-01

    Because of the central role of working memory capacity in cognition, many studies have used short measures of working memory capacity to examine its relationship to other domains. Here, we measured the reliability and stability of visual working memory capacity, measured using a single-probe change detection task. In Experiment 1, the participants (N = 135) completed a large number of trials of a change detection task (540 in total, 180 each of set sizes 4, 6, and 8). With large numbers of both trials and participants, reliability estimates were high (α > .9). We then used an iterative down-sampling procedure to create a look-up table for expected reliability in experiments with small sample sizes. In Experiment 2, the participants (N = 79) completed 31 sessions of single-probe change detection. The first 30 sessions took place over 30 consecutive days, and the last session took place 30 days later. This unprecedented number of sessions allowed us to examine the effects of practice on stability and internal reliability. Even after much practice, individual differences were stable over time (average between-session r = .76).
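
    Capacity in single-probe change detection is conventionally summarized with Cowan's K = set size × (hit rate − false-alarm rate), and reliability can be checked with, for example, a Spearman-Brown corrected split-half correlation. The sketch below illustrates both on simulated numbers; it is not the authors' analysis code:

    ```python
    # Generic sketch (not the authors' analysis code): Cowan's K for single-probe
    # change detection, K = set_size * (hit_rate - false_alarm_rate), and a
    # Spearman-Brown corrected split-half reliability across participants.
    import numpy as np

    def cowans_k(hit_rate, fa_rate, set_size):
        return set_size * (np.asarray(hit_rate) - np.asarray(fa_rate))

    def split_half(first_half, second_half):
        r = np.corrcoef(first_half, second_half)[0, 1]
        return 2 * r / (1 + r)              # Spearman-Brown correction

    rng = np.random.default_rng(0)          # simulated participants, illustrative only
    hits_even = rng.uniform(0.7, 0.95, 50)  # hit rates on even trials, set size 4
    fas_even = rng.uniform(0.05, 0.3, 50)
    hits_odd = hits_even + rng.normal(0, 0.03, 50)
    fas_odd = fas_even + rng.normal(0, 0.03, 50)

    k_even = cowans_k(hits_even, fas_even, set_size=4)
    k_odd = cowans_k(hits_odd, fas_odd, set_size=4)
    print(f"split-half reliability of K ~ {split_half(k_even, k_odd):.2f}")
    ```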

  4. Interrater reliability of a Pilates movement-based classification system.

    Science.gov (United States)

    Yu, Kwan Kenny; Tulloch, Evelyn; Hendrick, Paul

    2015-01-01

    To determine the interrater reliability for identification of a specific movement pattern using a Pilates Classification system. Videos of 5 subjects performing specific movement tasks were sent to raters trained in the DMA-CP classification system. Ninety-six raters completed the survey. Interrater reliability for the detection of a directional bias was excellent (Pi = 0.92, and K(free) = 0.89). Interrater reliability for classifying an individual into a specific subgroup was moderate (Pi = 0.64, K(free) = 0.55) however raters who had completed levels 1-4 of the DMA-CP training and reported using the assessment daily demonstrated excellent reliability (Pi = 0.89 and K(free) = 0.87). The reliability of the classification system demonstrated almost perfect agreement in determining the existence of a specific movement pattern and classifying into a subgroup for experienced raters. There was a trend for greater reliability associated with increased levels of training and experience of the raters. Copyright © 2014 Elsevier Ltd. All rights reserved.

  5. RELIABILITY OF LENTICULAR EXPANSION COMPENSATORS

    Directory of Open Access Journals (Sweden)

    Gabriel BURLACU,

    2011-11-01

    Full Text Available Axial lenticular compensators are made to take over longitudinal heat expansion, shock, vibration and noise, and to provide elastic connections for piping systems. In order to have a long life for installations it is necessary that all elements, including lenticular compensators, have good reliability. This can be achieved through the technology of manufacturing and assembly of the compensators, the material of the lenses and the maintenance of the compensator.

  6. PWR system reliability improvement activities

    International Nuclear Information System (INIS)

    Yoshikawa, Yuichiro

    1985-01-01

    In Japan lacking in energy resources, it is our basic energy policy to accelerate the development program of nuclear power, thereby reducing our dependence. As referred to in the foregoing, every effort has been exerted on our part to improve the PWR system reliability by dint of the so-called 'HOMEMADE' TQC activities, which is our brain-child as a result of applying to the energy industry the quality control philosophy developed in the field of manufacturing industry

  7. Reliability of Composite Dichotomous Measurements

    Czech Academy of Sciences Publication Activity Database

    Martinková, Patrícia; Zvára, Karel

    2010-01-01

    Roč. 6, č. 2 (2010), s. 103-109 ISSN 1801-5603 R&D Projects: GA MŠk(CZ) 1M06014 Institutional research plan: CEZ:AV0Z10300504 Keywords : reliability * binary data * logistic regression * Cronbach alpha * Rasch model * myocardial perfusion diagnosis Subject RIV: BB - Applied Statistics, Operational Research http://www.ejbi.cz/articles/201012/65/1.html

  8. TFTR CAMAC power supplies reliability

    International Nuclear Information System (INIS)

    Camp, R.A.; Bergin, W.

    1989-01-01

    Since the expected life of the Tokamak Fusion Test Reactor (TFTR) has been extended into the early 1990's, the issues of equipment wear-out, when to refurbish/replace, and the costs associated with these decisions, must be faced. The management of the maintenance of the TFTR Central Instrumentation, Control and Data Acquisition System (CICADA) power supplies within the CAMAC network is a case study of a set of systems to monitor repairable systems reliability, costs, and results of action. The CAMAC network is composed of approximately 500 racks, each with its own power supply. By using a simple reliability estimator on a coarse time interval, in conjunction with determining the root cause of individual failures, a cost effective repair and maintenance program has been realized. This paper describes the estimator, some of the specific causes for recurring failures and their correction, and the subsequent effects on the reliability estimator. By extension of this program the authors can assess the continued viability of CAMAC power supplies into the future, predicting wear-out and developing cost effective refurbishment/replacement policies. 4 refs., 3 figs., 1 tab

  9. Reliability in individual monitoring service.

    Science.gov (United States)

    Mod Ali, N

    2011-03-01

    As a laboratory certified to ISO 9001:2008 and accredited to ISO/IEC 17025, the Secondary Standard Dosimetry Laboratory (SSDL)-Nuclear Malaysia has incorporated an overall comprehensive system for technical and quality management in promoting a reliable individual monitoring service (IMS). Faster identification and resolution of issues regarding dosemeter preparation and issuing of reports, personnel enhancement, improved customer satisfaction and overall efficiency of laboratory activities are all results of the implementation of an effective quality system. Review of these measures and responses to observed trends provide continuous improvement of the system. By having these mechanisms, reliability of the IMS can be assured in the promotion of safe behaviour at all levels of the workforce utilising ionising radiation facilities. Upgrading of the reporting program through a web-based e-SSDL marks a major improvement in Nuclear Malaysia's IMS reliability on the whole. The system is a vital step in providing a user friendly and effective occupational exposure evaluation program in the country. It provides a higher level of confidence in the results generated for occupational dose monitoring of the IMS, thus enhancing the status of the radiation protection framework of the country.

  10. Reliability engineering theory and practice

    CERN Document Server

    Birolini, Alessandro

    2017-01-01

    This book shows how to build in and assess reliability, availability, maintainability, and safety (RAMS) of components, equipment, and systems. It presents the state of the art of reliability (RAMS) engineering, in theory & practice, and is based on over 30 years of the author's experience in this field, half in industry and half as Professor of Reliability Engineering at the ETH, Zurich. The book structure allows rapid access to practical results. Methods & tools are given in a way that they can be tailored to cover different RAMS requirement levels. Thanks to Appendices A6 - A8 the book is mathematically self-contained, and can be used as a textbook or as a desktop reference with a large number of tables (60), figures (210), and examples / exercises. Downloads of more than 10,000 per year since 2013 were the motivation for this final edition, the 13th since 1985, including German editions. Extended and carefully reviewed to improve accuracy, it represents the continuous improvement effort to satisfy readers' needs and confidenc...

  11. Operator reliability assessment system (OPERAS)

    International Nuclear Information System (INIS)

    Singh, A.; Spurgin, A.J.; Martin, T.; Welsch, J.; Hallam, J.W.

    1991-01-01

    OPERAS is personal-computer (PC) based software to collect and process simulator data on control-room operators' responses during requalification training scenarios. The data collection scheme is based upon an approach developed earlier during the EPRI Operator Reliability Experiments project. The software allows automated data collection from the simulator, thus minimizing the simulator staff time and resources needed to collect, maintain and process data which can be useful in monitoring, assessing and enhancing the progress of crew reliability and effectiveness. The system is designed to provide the data and output information in the form of user-friendly charts, tables and figures for use by plant staff. OPERAS prototype software has been implemented at the Diablo Canyon (PWR) and Millstone (BWR) plants and is currently being used to collect operator response data. Data collected from the simulator include plant-state variables such as reactor pressure and temperature, malfunctions, times at which annunciators are activated, operator actions and observations of crew behavior by training staff. The data and systematic analytical results provided by the OPERAS system can help the utility's probabilistic risk analysis (PRA) and training staff to monitor and assess the reliability of their crews with greater objectivity.

  12. Human factors reliability benchmark exercise

    International Nuclear Information System (INIS)

    Poucet, A.

    1989-08-01

    The Joint Research Centre of the European Commission has organised a Human Factors Reliability Benchmark Exercise (HF-RBE) with the aim of assessing the state of the art in human reliability modelling and assessment. Fifteen teams from eleven countries, representing industry, utilities, licensing organisations and research institutes, participated in the HF-RBE. The HF-RBE was organised around two study cases: (1) analysis of routine functional Test and Maintenance (T and M) procedures, with the aim of assessing the probability of test-induced failures, the probability of failures remaining unrevealed and the potential to initiate transients because of errors performed in the test; (2) analysis of human actions during an operational transient, with the aim of assessing the probability that the operators will correctly diagnose the malfunctions and take proper corrective action. This report summarises the contributions received from the participants and analyses these contributions on a comparative basis. The aim of this analysis was to compare the procedures, modelling techniques and quantification methods used, to obtain insight into the causes and magnitude of the variability observed in the results, to try to identify preferred human reliability assessment approaches, and to get an understanding of the current state of the art in the field, identifying the limitations that are still inherent to the different approaches

  13. Conformal prediction for reliable machine learning theory, adaptations and applications

    CERN Document Server

    Balasubramanian, Vineeth; Vovk, Vladimir

    2014-01-01

    The conformal predictions framework is a recent development in machine learning that can associate a reliable measure of confidence with a prediction in any real-world pattern recognition application, including risk-sensitive applications such as medical diagnosis, face recognition, and financial risk prediction. Conformal Predictions for Reliable Machine Learning: Theory, Adaptations and Applications captures the basic theory of the framework, demonstrates how to apply it to real-world problems, and presents several adaptations, including active learning, change detection, and anomaly detection.

  14. Reliability Assessment for Low-cost Unmanned Aerial Vehicles

    Science.gov (United States)

    Freeman, Paul Michael

    Existing low-cost unmanned aerospace systems are unreliable, and engineers must blend reliability analysis with fault-tolerant control in novel ways. This dissertation introduces the University of Minnesota unmanned aerial vehicle flight research platform, a comprehensive simulation and flight test facility for reliability and fault-tolerance research. An industry-standard reliability assessment technique, the failure modes and effects analysis, is performed for an unmanned aircraft. Particular attention is afforded to the control surface and servo-actuation subsystem. Maintaining effector health is essential for safe flight; failures may lead to loss of control incidents. Failure likelihood, severity, and risk are qualitatively assessed for several effector failure modes. Design changes are recommended to improve aircraft reliability based on this analysis. Most notably, the control surfaces are split, providing independent actuation and dual-redundancy. The simulation models for control surface aerodynamic effects are updated to reflect the split surfaces using a first-principles geometric analysis. The failure modes and effects analysis is extended by using a high-fidelity nonlinear aircraft simulation. A trim state discovery is performed to identify the achievable steady, wings-level flight envelope of the healthy and damaged vehicle. Tolerance of elevator actuator failures is studied using familiar tools from linear systems analysis. This analysis reveals significant inherent performance limitations for candidate adaptive/reconfigurable control algorithms used for the vehicle. Moreover, it demonstrates how these tools can be applied in a design feedback loop to make safety-critical unmanned systems more reliable. Control surface impairments that do occur must be quickly and accurately detected. This dissertation also considers fault detection and identification for an unmanned aerial vehicle using model-based and model-free approaches and applies those

  15. Statistical Primer for Athletic Trainers: The Essentials of Understanding Measures of Reliability and Minimal Important Change.

    Science.gov (United States)

    Riemann, Bryan L; Lininger, Monica R

    2018-01-01

    To describe the concepts of measurement reliability and minimal important change. All measurements have some magnitude of error. Because clinical practice involves measurement, clinicians need to understand measurement reliability. The reliability of an instrument is integral in determining if a change in patient status is meaningful. Measurement reliability is the extent to which a test result is consistent and free of error. Three perspectives of reliability (relative reliability, systematic bias, and absolute reliability) are often reported. However, absolute reliability statistics, such as the minimal detectable difference, are most relevant to clinicians because they provide an expected error estimate. The minimal important difference is the smallest change in a treatment outcome that the patient would identify as important. Clinicians should use absolute reliability characteristics, preferably the minimal detectable difference, to determine the extent of error around a patient's measurement. The minimal detectable difference, coupled with an appropriately estimated minimal important difference, can assist the practitioner in identifying clinically meaningful changes in patients.
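
    The minimal detectable difference mentioned above is usually derived from the standard error of measurement, SEM = SD x sqrt(1 - ICC), with MDC95 = 1.96 x sqrt(2) x SEM for a test-retest design. A minimal sketch of that calculation is given below; the function names and numbers are illustrative, not taken from the article:

      import math

      def sem(sd, icc):
          """Standard error of measurement from the between-subject SD and the ICC."""
          return sd * math.sqrt(1.0 - icc)

      def mdc95(sem_value):
          """Minimal detectable change at the 95% level for a test-retest design."""
          return 1.96 * math.sqrt(2.0) * sem_value

      # Hypothetical example: a balance score with SD = 5.0 points and ICC = 0.85
      s = sem(5.0, 0.85)            # about 1.94 points of measurement error
      print(f"SEM = {s:.2f}, MDC95 = {mdc95(s):.2f} points")

    A change smaller than the printed MDC95 (about 5.4 points here) cannot be distinguished from measurement error with 95% confidence.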

  16. Stochastic Differential Equation-Based Flexible Software Reliability Growth Model

    Directory of Open Access Journals (Sweden)

    P. K. Kapur

    2009-01-01

    Several software reliability growth models (SRGMs) have been developed by software developers for tracking and measuring the growth of reliability. As the size of a software system is large and the number of faults detected during the testing phase becomes large, the change in the number of faults that are detected and removed through each debugging becomes sufficiently small compared with the initial fault content at the beginning of the testing phase. In such a situation, we can model the software fault-detection process as a stochastic process with a continuous state space. In this paper, we propose a new software reliability growth model based on an Itô type of stochastic differential equation. We consider an SDE-based generalized Erlang model with a logistic error-detection function. The model is estimated and validated on real-life data sets cited in the literature to show its flexibility. The proposed model, integrated with the concept of stochastic differential equations, performs comparatively better than the existing NHPP-based models.
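
    The general idea of an Itô-type SRGM can be illustrated by simulating one commonly cited form, dN(t) = b(t)(a - N(t))dt + sigma(a - N(t))dW(t) with a logistic detection-rate function b(t), via the Euler-Maruyama scheme. This is a hedged sketch, not the exact model or estimation procedure of the paper; all parameter values are invented:

      import numpy as np

      def simulate_sde_srgm(a=100.0, b=0.1, beta=5.0, sigma=0.05,
                            t_end=100.0, dt=0.01, seed=0):
          """Euler-Maruyama simulation of an Ito-type SRGM:
             dN = b(t)*(a - N)*dt + sigma*(a - N)*dW,
             with logistic fault-detection rate b(t) = b / (1 + beta*exp(-b*t))."""
          rng = np.random.default_rng(seed)
          steps = int(t_end / dt)
          t = np.linspace(0.0, t_end, steps + 1)
          n = np.zeros(steps + 1)                 # cumulative detected faults
          for k in range(steps):
              bt = b / (1.0 + beta * np.exp(-b * t[k]))
              dw = rng.normal(0.0, np.sqrt(dt))   # Brownian increment
              n[k + 1] = n[k] + bt * (a - n[k]) * dt + sigma * (a - n[k]) * dw
          return t, n

      t, n = simulate_sde_srgm()
      print(f"initial fault content a = 100, detected by t = 100: {n[-1]:.1f}")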

  17. Reliability of Laterality Effects in a Dichotic Listening Task with Words and Syllables

    Science.gov (United States)

    Russell, Nancy L.; Voyer, Daniel

    2004-01-01

    Large and reliable laterality effects have been found using a dichotic target detection task in a recent experiment using word stimuli pronounced with an emotional component. The present study tested the hypothesis that the magnitude and reliability of the laterality effects would increase with the removal of the emotional component and variations…

  18. 76 FR 23171 - Electric Reliability Organization Interpretations of Interconnection Reliability Operations and...

    Science.gov (United States)

    2011-04-26

    ... Reliability Standards for the Bulk-Power System, Order No. 693, FERC Stats. & Regs. ] 31,242, order on reh'g...-Power System reliability may request an interpretation of a Reliability Standard.\\7\\ The ERO's standards... information in its reliability assessments. The Reliability Coordinator must monitor Bulk Electric System...

  19. 76 FR 66055 - North American Electric Reliability Corporation; Order Approving Interpretation of Reliability...

    Science.gov (United States)

    2011-10-25

    ...\\ Mandatory Reliability Standards for the Bulk-Power System, Order No. 693, FERC Stats. & Regs. ] 31,242... materially affected'' by Bulk-Power System reliability may request an interpretation of a Reliability... Electric Reliability Corporation; Order Approving Interpretation of Reliability Standard; Before...

  20. Improved reliability of power modules

    DEFF Research Database (Denmark)

    Baker, Nick; Liserre, Marco; Dupont, Laurent

    2014-01-01

    Power electronic systems play an increasingly important role in providing high-efficiency power conversion for adjustable-speed drives, power-quality correction, renewable-energy systems, energy-storage systems, and electric vehicles. However, they are often presented with demanding operating environments that challenge the reliability aspects of power electronic techniques. For example, increasingly thermally stressful environments are seen in applications such as electric vehicles, where ambient temperatures under the hood exceed 150 °C, while some wind turbine applications can place large...

  1. A Survey of Network Reliability.

    Science.gov (United States)

    1983-07-01

    Approved for public release; distribution unlimited. July 1983, ORC 83-5. This research was supported by the Air Force Office of Scientific Research, United States Air Force, Bolling Air Force Base. One node in K is designated the root, and the reliability problem is to calculate the probability that the root can communicate with the remaining nodes of K ⊆ V.
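
    The rooted reliability problem described above can be estimated by simple Monte Carlo simulation: sample which edges survive, then check whether the root still reaches every other node of K. A minimal sketch follows; the graph, edge reliability and function names are illustrative, not from the report:

      import random
      from collections import deque

      def root_reliability(nodes, edges, root, terminals, p_edge, trials=20000, seed=1):
          """Monte Carlo estimate of the probability that 'root' can reach every node
             in 'terminals' when each undirected edge works with probability p_edge."""
          rng = random.Random(seed)
          success = 0
          for _ in range(trials):
              up = [e for e in edges if rng.random() < p_edge]   # surviving edges
              adj = {v: [] for v in nodes}
              for u, v in up:
                  adj[u].append(v)
                  adj[v].append(u)
              seen, queue = {root}, deque([root])
              while queue:                                       # BFS from the root
                  u = queue.popleft()
                  for w in adj[u]:
                      if w not in seen:
                          seen.add(w)
                          queue.append(w)
              if all(k in seen for k in terminals):
                  success += 1
          return success / trials

      # Hypothetical 4-node example: root 0 must reach nodes 2 and 3
      nodes = [0, 1, 2, 3]
      edges = [(0, 1), (1, 2), (2, 3), (0, 3), (1, 3)]
      print(root_reliability(nodes, edges, root=0, terminals={2, 3}, p_edge=0.9))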

  2. Impact of inservice inspection on the reliability of nuclear piping

    International Nuclear Information System (INIS)

    Woo, H.H.

    1983-12-01

    The reliability of nuclear piping is a function of piping quality as fabricated, service loadings and environments, plus programs of continuing inspection during operation. This report presents the results of a study of the impact of inservice inspection (ISI) programs on the reliability of specific nuclear piping systems that have actually failed in service. Two major factors are considered in the ISI programs: one is the capability of detecting flaws; the other is the frequency of performing ISI. A probabilistic fracture mechanics model is used to estimate the reliability of two nuclear piping lines over the plant life as functions of the ISI programs. Examples chosen for the study are the PWR feedwater steam generator nozzle cracking incident and the BWR recirculation reactor vessel nozzle safe-end cracking incident
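
    The interplay between flaw-detection capability and inspection frequency can be illustrated with a toy probabilistic fracture mechanics simulation: sample initial crack depths, grow them over time, apply a probability-of-detection (POD) curve at each inspection, and count failures. This is only a sketch of the general approach, not the model of the report; all distributions and parameter values are invented:

      import numpy as np

      def leak_probability(years=40, isi_interval=10, pod_a50=3.0, pod_slope=1.5,
                           a_crit=15.0, growth_rate=0.3, n=50000, seed=2):
          """Toy model: lognormal initial crack depths (mm) grow linearly by
             'growth_rate' mm/year; at each ISI a crack of depth a is detected and
             repaired with probability POD(a) = 1/(1+exp(-(a-pod_a50)/pod_slope));
             the component fails once depth exceeds a_crit."""
          rng = np.random.default_rng(seed)
          a = rng.lognormal(mean=0.0, sigma=0.8, size=n)   # initial depths, mm
          alive = np.ones(n, dtype=bool)                   # not yet failed or repaired
          failed = np.zeros(n, dtype=bool)
          for year in range(1, years + 1):
              a[alive] += growth_rate
              newly_failed = alive & (a >= a_crit)
              failed |= newly_failed
              alive &= ~newly_failed
              if isi_interval and year % isi_interval == 0:
                  pod = 1.0 / (1.0 + np.exp(-(a - pod_a50) / pod_slope))
                  detected = alive & (rng.random(n) < pod)
                  alive &= ~detected                       # detected cracks are repaired
          return failed.mean()

      print("no ISI:   ", leak_probability(isi_interval=0))
      print("10-yr ISI:", leak_probability(isi_interval=10))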

  3. Application of reliability methods in Ontario Hydro

    International Nuclear Information System (INIS)

    Jeppesen, R.; Ravishankar, T.J.

    1985-01-01

    Ontario Hydro have established a reliability program in support of its substantial nuclear program. Application of the reliability program to achieve both production and safety goals is described. The value of such a reliability program is evident in the record of Ontario Hydro's operating nuclear stations. The factors which have contributed to the success of the reliability program are identified as line management's commitment to reliability; selective and judicious application of reliability methods; establishing performance goals and monitoring the in-service performance; and collection, distribution, review and utilization of performance information to facilitate cost-effective achievement of goals and improvements. (orig.)

  4. Towards Reliable Integrated Services for Dependable Systems

    DEFF Research Database (Denmark)

    Schiøler, Henrik; Ravn, Anders Peter; Izadi-Zamanabadi, Roozbeh

    Reliability issues for various technical systems are discussed and focus is directed towards distributed systems, where communication facilities are vital to maintain system functionality. Reliability in communication subsystems is considered as a resource to be shared among a number of logical connections and a reliability management framework is suggested. We suggest a network layer level reliability management protocol RRSVP (Reliability Resource Reservation Protocol) as a counterpart of the RSVP for bandwidth and time resource management. Active and passive standby redundancy by background applications residing on alternative routes is considered. Details are provided for the operation of RRSVP based on reliability slack calculus. Conclusions summarize the considerations and give directions for future research.

  5. Towards Reliable Integrated Services for Dependable Systems

    DEFF Research Database (Denmark)

    Schiøler, Henrik; Ravn, Anders Peter; Izadi-Zamanabadi, Roozbeh

    2003-01-01

    Reliability issues for various technical systems are discussed and focus is directed towards distributed systems, where communication facilities are vital to maintain system functionality. Reliability in communication subsystems is considered as a resource to be shared among a number of logical connections and a reliability management framework is suggested. We suggest a network layer level reliability management protocol RRSVP (Reliability Resource Reservation Protocol) as a counterpart of the RSVP for bandwidth and time resource management. Active and passive standby redundancy by background applications residing on alternative routes is considered. Details are provided for the operation of RRSVP based on reliability slack calculus. Conclusions summarize the considerations and give directions for future research.

  6. Reliability studies in research reactors

    International Nuclear Information System (INIS)

    Albuquerque, Tob Rodrigues de

    2013-01-01

    Fault trees and event trees are widely used in industry to model and to evaluate the reliability of safety systems. Detailed analyses in nuclear installations require the combination of these two techniques. This study uses the methods of FT (Fault Tree) and ET (Event Tree) to accomplish the PSA (Probabilistic Safety Assessment) of research reactors. According to the IAEA (International Atomic Energy Agency), PSA is divided into Level 1, Level 2 and Level 3. At Level 1, conceptually, the safety systems act to prevent the occurrence of accidents; at Level 2, once accidents have happened, the aim is to minimize their consequences, known as the accident management stage; and at Level 3 the accident impacts are determined. This study focuses on analyzing Level 1 and, through the acquisition of knowledge, on consolidating methodologies for future reliability studies. The Greek Research Reactor, GRR-1, is used as a case example. A LOCA (Loss of Coolant Accident) was chosen as the initiating event and from it, using ET, the possible accident sequences that could lead to damage of the core were developed. Moreover, for each of the affected systems, fault trees were developed and the top-event probabilities were evaluated within the possible accident sequences. Estimates of importance measures for the basic events are also presented in this work. The studies of this research were conducted using the commercial computational tool SAPHIRE. The results achieved were considered satisfactory for assessing the performance or failure of the analyzed systems. (author)
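
    The quantification step that combines the two techniques is conceptually simple: fault trees give the failure probability of each mitigating system (for example via the rare-event approximation over minimal cut sets), and the event tree multiplies the initiating-event frequency by the branch probabilities of each accident sequence. A minimal sketch with invented numbers (not GRR-1 data) follows:

      import math
      from itertools import product

      def rare_event(minimal_cut_sets, p):
          """Top-event probability via the rare-event approximation:
             sum over minimal cut sets of the product of basic-event probabilities."""
          return sum(math.prod(p[e] for e in cut_set) for cut_set in minimal_cut_sets)

      # Invented basic-event probabilities and minimal cut sets for an ECC fault tree
      basic = {"pump_a": 1e-3, "pump_b": 1e-3, "ccf_pumps": 1e-4, "injection_valve": 5e-4}
      ecc_fail = rare_event([["pump_a", "pump_b"], ["ccf_pumps"], ["injection_valve"]], basic)

      # Event tree for a LOCA initiator: every combination of system successes and
      # failures is one sequence; here any mitigation failure is counted as core damage.
      loca_freq = 1e-3                                   # initiating events per year
      failure_prob = {"scram": 1e-5, "ecc": ecc_fail, "cooling": 2e-3}

      cdf = 0.0
      for outcome in product([True, False], repeat=3):   # True = system succeeds
          p_seq = 1.0
          for ok, q in zip(outcome, failure_prob.values()):
              p_seq *= (1.0 - q) if ok else q
          if not all(outcome):                           # at least one system failed
              cdf += loca_freq * p_seq
      print(f"illustrative core-damage frequency: {cdf:.1e} per year")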

  7. Demonstration of reliability centered maintenance

    International Nuclear Information System (INIS)

    Schwan, C.A.; Morgan, T.A.

    1991-04-01

    Reliability centered maintenance (RCM) is an approach to preventive maintenance planning and evaluation that has been used successfully by other industries, most notably the airlines and military. Now EPRI is demonstrating RCM in the commercial nuclear power industry. Just completed are large-scale, two-year demonstrations at Rochester Gas & Electric (Ginna Nuclear Power Station) and Southern California Edison (San Onofre Nuclear Generating Station). Both demonstrations were begun in the spring of 1988. At each plant, RCM was performed on 12 to 21 major systems. Both demonstrations determined that RCM is an appropriate means to optimize a PM program and improve nuclear plant preventive maintenance on a large scale. Such favorable results had been suggested by three earlier EPRI pilot studies at Florida Power & Light, Duke Power, and Southern California Edison. EPRI selected the Ginna and San Onofre sites because, together, they represent a broad range of utility and plant size, plant organization, plant age, and histories of availability and reliability. Significant steps in each demonstration included: selecting and prioritizing plant systems for RCM evaluation; performing the RCM evaluation steps on selected systems; evaluating the RCM recommendations by a multi-disciplinary task force; implementing the RCM recommendations; establishing a system to track and verify the RCM benefits; and establishing procedures to update the RCM bases and recommendations with time (a living program). 7 refs., 1 tab

  8. Reliability Estimation Based Upon Test Plan Results

    National Research Council Canada - National Science Library

    Read, Robert

    1997-01-01

    The report contains a brief summary of aspects of the Maximus reliability point and interval estimation technique as it has been applied to the reliability of a device whose surveillance tests contain...

  9. Identifying factors influencing reliability of professional systems

    NARCIS (Netherlands)

    Balasubramanian, A.; Kevrekidis, K.; Sonnemans, P.J.M.; Newby, M.J.

    2008-01-01

    Modern product development strategies call for a more proactive approach to fight intense global competition in terms of technological innovation, shorter time to market, quality and reliability and accommodative price. From a reliability engineering perspective, development managers would like to

  10. NHPP-Based Software Reliability Models Using Equilibrium Distribution

    Science.gov (United States)

    Xiao, Xiao; Okamura, Hiroyuki; Dohi, Tadashi

    Non-homogeneous Poisson processes (NHPPs) have gained much popularity in actual software testing phases to estimate the software reliability, the number of remaining faults in software and the software release timing. In this paper, we propose a new modeling approach for the NHPP-based software reliability models (SRMs) to describe the stochastic behavior of software fault-detection processes. The fundamental idea is to apply the equilibrium distribution to the fault-detection time distribution in NHPP-based modeling. We also develop efficient parameter estimation procedures for the proposed NHPP-based SRMs. Through numerical experiments, it can be concluded that the proposed NHPP-based SRMs outperform the existing ones in many data sets from the perspective of goodness-of-fit and prediction performance.
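
    The central construction referred to above is the equilibrium distribution Fe(t) = (1/mu) * integral from 0 to t of (1 - F(x)) dx of a fault-detection time distribution F with mean mu; the NHPP mean value function is then taken proportional to Fe. A minimal numerical sketch using an Erlang(2) detection time and invented parameters is shown below; the paper's exact models and estimation procedures may differ:

      import numpy as np

      def equilibrium_cdf(F, mean, t):
          """Fe(t) = (1/mean) * integral_0^t (1 - F(x)) dx, by trapezoidal integration."""
          survival = 1.0 - F(t)
          integral = np.concatenate(([0.0], np.cumsum(
              0.5 * (survival[1:] + survival[:-1]) * np.diff(t))))
          return integral / mean

      # Hypothetical Erlang(2) fault-detection time with rate b = 0.2 per day:
      # F(t) = 1 - exp(-b*t)*(1 + b*t), mean = 2 / b
      b = 0.2
      F = lambda t: 1.0 - np.exp(-b * t) * (1.0 + b * t)
      mean_detection_time = 2.0 / b

      t = np.linspace(0.0, 60.0, 601)
      omega = 120.0                                     # expected total number of faults
      mean_value = omega * equilibrium_cdf(F, mean_detection_time, t)
      print(f"expected faults detected by day 30: {mean_value[300]:.1f}")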

  11. Reliability analysis of a sensitive and independent stabilometry parameter set.

    Science.gov (United States)

    Nagymáté, Gergely; Orlovits, Zsanett; Kiss, Rita M

    2018-01-01

    Recent studies have suggested reduced independent and sensitive parameter sets for stabilometry measurements based on correlation and variance analyses. However, the reliability of these recommended parameter sets has not been studied in the literature or not in every stance type used in stabilometry assessments, for example, single leg stances. The goal of this study is to evaluate the test-retest reliability of different time-based and frequency-based parameters that are calculated from the center of pressure (CoP) during bipedal and single leg stance for 30- and 60-second measurement intervals. Thirty healthy subjects performed repeated standing trials in a bipedal stance with eyes open and eyes closed conditions and in a single leg stance with eyes open for 60 seconds. A force distribution measuring plate was used to record the CoP. The reliability of the CoP parameters was characterized by using the intraclass correlation coefficient (ICC), standard error of measurement (SEM), minimal detectable change (MDC), coefficient of variation (CV) and CV compliance rate (CVCR). Based on the ICC, SEM and MDC results, many parameters yielded fair to good reliability values, while the CoP path length yielded the highest reliability (smallest ICC > 0.67 (0.54-0.79), largest SEM% = 19.2%). Usually, frequency type parameters and extreme value parameters yielded poor reliability values. There were differences in the reliability of the maximum CoP velocity (better with 30 seconds) and mean power frequency (better with 60 seconds) parameters between the different sampling intervals.

  12. Reliability analysis of a sensitive and independent stabilometry parameter set

    Science.gov (United States)

    Nagymáté, Gergely; Orlovits, Zsanett

    2018-01-01

    Recent studies have suggested reduced independent and sensitive parameter sets for stabilometry measurements based on correlation and variance analyses. However, the reliability of these recommended parameter sets has not been studied in the literature or not in every stance type used in stabilometry assessments, for example, single leg stances. The goal of this study is to evaluate the test-retest reliability of different time-based and frequency-based parameters that are calculated from the center of pressure (CoP) during bipedal and single leg stance for 30- and 60-second measurement intervals. Thirty healthy subjects performed repeated standing trials in a bipedal stance with eyes open and eyes closed conditions and in a single leg stance with eyes open for 60 seconds. A force distribution measuring plate was used to record the CoP. The reliability of the CoP parameters was characterized by using the intraclass correlation coefficient (ICC), standard error of measurement (SEM), minimal detectable change (MDC), coefficient of variation (CV) and CV compliance rate (CVCR). Based on the ICC, SEM and MDC results, many parameters yielded fair to good reliability values, while the CoP path length yielded the highest reliability (smallest ICC > 0.67 (0.54–0.79), largest SEM% = 19.2%). Usually, frequency type parameters and extreme value parameters yielded poor reliability values. There were differences in the reliability of the maximum CoP velocity (better with 30 seconds) and mean power frequency (better with 60 seconds) parameters between the different sampling intervals. PMID:29664938

  13. Statistical reliability analyses of two wood plastic composite extrusion processes

    International Nuclear Information System (INIS)

    Crookston, Kevin A.; Mark Young, Timothy; Harper, David; Guess, Frank M.

    2011-01-01

    Estimates of the reliability of wood plastic composites (WPC) are explored for two industrial extrusion lines. The goal of the paper is to use parametric and non-parametric analyses to examine potential differences in the WPC metrics of reliability for the two extrusion lines that may be helpful for use by the practitioner. A parametric analysis of the extrusion lines reveals some similarities and disparities in the best models; however, a non-parametric analysis reveals unique and insightful differences between Kaplan-Meier survival curves for the modulus of elasticity (MOE) and modulus of rupture (MOR) of the WPC industrial data. The distinctive non-parametric comparisons indicate the source of the differences in strength between the 10.2% and 48.0% fractiles [3,183-3,517 MPa] for MOE and for MOR between the 2.0% and 95.1% fractiles [18.9-25.7 MPa]. Distribution fitting as related to selection of the proper statistical methods is discussed with relevance to estimating the reliability of WPC. The ability to detect statistical differences in the product reliability of WPC between extrusion processes may benefit WPC producers in improving product reliability and safety of this widely used house-decking product. The approach can be applied to many other safety and complex system lifetime comparisons.
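
    The non-parametric comparison mentioned above rests on Kaplan-Meier survival curves of the strength data. A minimal Kaplan-Meier sketch with invented MOE samples is given below; the data and threshold are illustrative, not the industrial data of the study:

      import numpy as np

      def kaplan_meier(values, observed=None):
          """Kaplan-Meier estimate of the survival function S(x) = P(X > x).
             'observed' flags right-censored observations as False; if omitted,
             all observations are treated as exact (no censoring)."""
          values = np.asarray(values, dtype=float)
          observed = np.ones_like(values, dtype=bool) if observed is None else np.asarray(observed)
          order = np.argsort(values)
          values, observed = values[order], observed[order]
          n = len(values)
          s, surv, xs = 1.0, [], []
          for i, (x, obs) in enumerate(zip(values, observed)):
              at_risk = n - i
              if obs:
                  s *= (at_risk - 1) / at_risk
                  xs.append(x)
                  surv.append(s)
          return np.array(xs), np.array(surv)

      # Hypothetical MOE samples (MPa) from two extrusion lines
      line1 = [3100, 3250, 3300, 3410, 3520, 3600]
      line2 = [2950, 3150, 3200, 3380, 3450, 3700]
      for name, data in (("line 1", line1), ("line 2", line2)):
          x, s = kaplan_meier(data)
          idx = np.searchsorted(x, 3300.0, side="right") - 1
          print(name, "estimated P(MOE > 3300 MPa) =", round(float(s[idx]), 2))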

  14. DUAL-PROCESS, a highly reliable process control system

    International Nuclear Information System (INIS)

    Buerger, L.; Gossanyi, A.; Parkanyi, T.; Szabo, G.; Vegh, E.

    1983-02-01

    A multiprocessor process control system is described. During its development the reliability was the most important aspect because it is used in the computerized control of a 5 MW research reactor. DUAL-PROCESS is fully compatible with the earlier single processor control system PROCESS-24K. The paper deals in detail with the communication, synchronization, error detection and error recovery problems of the operating system. (author)

  15. Optimal, Reliability-Based Code Calibration

    DEFF Research Database (Denmark)

    Sørensen, John Dalsgaard

    2002-01-01

    Reliability based code calibration is considered in this paper. It is described how the results of FORM based reliability analysis may be related to the partial safety factors and characteristic values. The code calibration problem is presented in a decision theoretical form and it is discussed how...... of reliability based code calibration of LRFD based design codes....
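
    For the simplest case of a linear limit state g = R - S with independent normal resistance R and load S, the relation between FORM results and partial safety factors reduces to closed form: beta = (mu_R - mu_S) / sqrt(sigma_R^2 + sigma_S^2), sensitivity factors alpha_i = sigma_i / sqrt(sigma_R^2 + sigma_S^2), and partial safety factors obtained by relating the design-point values to the characteristic values. A minimal sketch with invented numbers (not from the paper):

      import math

      def form_linear_normal(mu_r, sig_r, mu_s, sig_s, r_char, s_char):
          """FORM for the linear limit state g = R - S with independent normal R and S.
             Returns reliability index, failure probability, design point and the
             partial safety factors implied by the characteristic values."""
          denom = math.hypot(sig_r, sig_s)
          beta = (mu_r - mu_s) / denom
          alpha_r, alpha_s = sig_r / denom, sig_s / denom      # sensitivity factors
          r_star = mu_r - alpha_r * beta * sig_r               # design-point resistance
          s_star = mu_s + alpha_s * beta * sig_s               # design-point load
          pf = 0.5 * math.erfc(beta / math.sqrt(2.0))          # Phi(-beta)
          return beta, pf, (r_star, s_star), (r_char / r_star, s_star / s_char)

      # Invented example: R ~ N(500, 50) kN, S ~ N(300, 40) kN,
      # characteristic values taken as the 5% / 95% fractiles.
      r_char, s_char = 500 - 1.645 * 50, 300 + 1.645 * 40
      beta, pf, dp, (gamma_m, gamma_f) = form_linear_normal(500, 50, 300, 40, r_char, s_char)
      print(f"beta = {beta:.2f}, Pf = {pf:.1e}, gamma_m = {gamma_m:.2f}, gamma_f = {gamma_f:.2f}")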

  16. Structural reliability assessment capability in NESSUS

    Science.gov (United States)

    Millwater, H.; Wu, Y.-T.

    1992-07-01

    The principal capabilities of NESSUS (Numerical Evaluation of Stochastic Structures Under Stress), an advanced computer code developed for probabilistic structural response analysis, are reviewed, and its structural reliability assessment capability is demonstrated. The code combines flexible structural modeling tools with advanced probabilistic algorithms in order to compute probabilistic structural response and resistance, component reliability and risk, and system reliability and risk. An illustrative numerical example is presented.

  17. 78 FR 58492 - Generator Verification Reliability Standards

    Science.gov (United States)

    2013-09-24

    ... power capability that is available for planning models and bulk electric system reliability assessments... of generator equipment needed to support Bulk-Power System reliability and enhance coordination of... support Bulk-Power System reliability and will ensure that accurate data is verified and made available...

  18. 77 FR 26714 - Transmission Planning Reliability Standards

    Science.gov (United States)

    2012-05-07

    ... Reliability Standards for the Bulk-Power System, Order No. 693, FERC Stats. & Regs. ] 31,242, order on reh'g... Standards for the Bulk Power System, 130 FERC ] 61,200 (2010) (March 2010 Order). \\12\\ Mandatory Reliability... excluded from future planning assessments and its potential impact to bulk electric system reliability...

  19. 18 CFR 39.11 - Reliability reports.

    Science.gov (United States)

    2010-04-01

    ... Electric Reliability Organization shall conduct assessments of the adequacy of the Bulk-Power System in... assessments as determined by the Commission of the reliability of the Bulk-Power System in North America and... 18 Conservation of Power and Water Resources 1 2010-04-01 2010-04-01 false Reliability reports. 39...

  20. 2015 NREL Photovoltaic Module Reliability Workshops

    Energy Technology Data Exchange (ETDEWEB)

    Kurtz, Sarah [National Renewable Energy Laboratory (NREL), Golden, CO (United States)

    2017-09-14

    NREL's Photovoltaic (PV) Module Reliability Workshop (PVMRW) brings together PV reliability experts to share information, leading to the improvement of PV module reliability. Such improvement reduces the cost of solar electricity and promotes investor confidence in the technology--both critical goals for moving PV technologies deeper into the electricity marketplace.

  1. 2016 NREL Photovoltaic Module Reliability Workshop

    Energy Technology Data Exchange (ETDEWEB)

    Kurtz, Sarah [National Renewable Energy Laboratory (NREL), Golden, CO (United States)

    2017-09-07

    NREL's Photovoltaic (PV) Module Reliability Workshop (PVMRW) brings together PV reliability experts to share information, leading to the improvement of PV module reliability. Such improvement reduces the cost of solar electricity and promotes investor confidence in the technology - both critical goals for moving PV technologies deeper into the electricity marketplace.

  2. Space transportation main engine reliability and safety

    Science.gov (United States)

    Monk, Jan C.

    1991-01-01

    Viewgraphs are used to illustrate the reliability engineering and aerospace safety of the Space Transportation Main Engine (STME). The development approach applied is Total Quality Management (TQM). The goal is to develop a robust design. Reducing process variability produces a product with improved reliability and safety. Some engine system design characteristics are identified which improve reliability.

  3. Reliability models for Space Station power system

    Science.gov (United States)

    Singh, C.; Patton, A. D.; Kim, Y.; Wagner, H.

    1987-01-01

    This paper presents a methodology for the reliability evaluation of Space Station power system. The two options considered are the photovoltaic system and the solar dynamic system. Reliability models for both of these options are described along with the methodology for calculating the reliability indices.

  4. Monitoring Travel Time Reliability on Freeways

    NARCIS (Netherlands)

    Tu, Huizhao

    2008-01-01

    Travel time and travel time reliability are important attributes of a trip. The current measures of reliability have in common that in general they all relate to the variability of travel times. However, travel time reliability does not only rely on variability but also on the stability of travel times.

  5. 46 CFR 169.619 - Reliability.

    Science.gov (United States)

    2010-10-01

    ... 46 Shipping 7 2010-10-01 2010-10-01 false Reliability. 169.619 Section 169.619 Shipping COAST... Electrical Steering Systems § 169.619 Reliability. (a) Except where the OCMI judges it impracticable, the... be below that necessary for the safe navigation of the vessel. (c) The strength and reliability of...

  6. Human reliability assessment in context

    International Nuclear Information System (INIS)

    Hollnagel, Erik

    2005-01-01

    Human Reliability Assessment (HRA) is conducted on the unspoken premise that 'human error' is a meaningful concept and that it can be associated with individual actions. The basis for this assumption is found in the origin of HRA as a necessary extension of PSA to account for the impact of failures emanating from human actions. Although it was natural to model HRA on PSA, a large number of studies have shown that the premises are wrong, specifically that human and technological functions cannot be decomposed in the same manner. The general experience from accident studies also indicates that action failures are a function of the context, and that it is the variability of the context rather than the 'human error probability' that is the much sought-after signal. Accepting this will have significant consequences for the way in which HRA, and ultimately also PSA, should be pursued

  7. Human Reliability Analysis: session summary

    International Nuclear Information System (INIS)

    Hall, R.E.

    1985-01-01

    The use of Human Reliability Analysis (HRA) to identify and resolve human factors issues has significantly increased over the past two years. Today, utilities, research institutions, consulting firms, and the regulatory agency have found a common application of HRA tools and Probabilistic Risk Assessment (PRA). The ''1985 IEEE Third Conference on Human Factors and Power Plants'' devoted three sessions to the discussion of these applications and a review of the insights so gained. This paper summarizes the three sessions and presents those common conclusions that were discussed during the meeting. The paper concludes that session participants supported the use of an adequately documented ''living PRA'' to address human factors issues in design and procedural changes, regulatory compliance, and training and that the techniques can produce cost effective qualitative results that are complementary to more classical human factors methods

  8. Interim reliability evaluation program (IREP)

    International Nuclear Information System (INIS)

    Carlson, D.D.; Murphy, J.A.

    1981-01-01

    The Interim Reliability Evaluation Program (IREP), sponsored by the Office of Nuclear Regulatory Research of the US Nuclear Regulatory Commission, is currently applying probabilistic risk analysis techniques to two PWR and two BWR type power plants. Emphasis was placed on the systems analysis portion of the risk assessment, as opposed to accident phenomenology or consequence analysis, since the identification of risk significant plant features was of primary interest. Traditional event tree/fault tree modeling was used for the analysis. However, the study involved a more thorough investigation of transient initiators and of support system faults than studies in the past and substantially improved techniques were used to quantify accident sequence frequencies. This study also attempted to quantify the potential for operator recovery actions in the course of each significant accident

  9. Reliability evaluation programmable logic devices

    International Nuclear Information System (INIS)

    Srivani, L.; Murali, N.; Thirugnana Murthy, D.; Satya Murty, S.A.V.

    2014-01-01

    Programmable Logic Devices (PLD) are widely used as basic building modules in high integrity systems, considering their robust features such as gate density, performance, speed etc. PLDs are used to implement digital design such as bus interface logic, control logic, sequencing logic, glue logic etc. Due to semiconductor evolution, new PLDs with state-of-the-art features are arriving to the market. Since these devices are reliable as per the manufacturer's specification, they were used in the design of safety systems. But due to their reduced market life, the availability of performance data is limited. So evaluating the PLD before deploying in a safety system is very important. This paper presents a survey on the use of PLDs in the nuclear domain and the steps involved in the evaluation of PLD using Quantitative Accelerated Life Testing. (author)

  10. Requirements of safety and reliability

    International Nuclear Information System (INIS)

    Franzen, L.F.

    1977-01-01

    The safety strategy for nuclear power plants is characterized by the fact that the high level of safety was attained not as a result of experience, but on the basis of preventive accident analyses and the findings derived from such analyses. Although, in these accident analyses, the deterministic approach is predominant it is supplemented by reliability analyses. The accidents analyzed in nuclear licensing procedures cover a wide spectrum from minor incidents to the design basis accidents which determine the design of the safety devices. The initial and boundary conditions, which are essential for accident analyses, and the determination of the loads occurring in various states during regular operation and in accidents flow into the design of the individual systems and components. The inevitable residual risk and its origins are discussed. (orig./HP) [de

  11. System Reliability of Timber Structures

    DEFF Research Database (Denmark)

    Kirkegaard, Poul Henning; Sørensen, John Dalsgaard

    2010-01-01

    For reduction of the risk of collapse in the event of loss of structural element(s), a structural engineer may take necessary steps to design a collapse-resistant structure that is insensitive to accidental circumstances, e.g. by incorporating characteristics like redundancy, ties, ductility, key elements, alternate load path(s) etc. in the structural design. In general these characteristics can have a positive influence on the system reliability of a structure; however, in Eurocodes ductility is only awarded for concrete and steel structures but not for timber structures. It is well-known that structural systems can redistribute internal forces due to ductility of a connection, i.e. some additional loads can be carried by the structure. The same effect is also possible for reinforced concrete structures and structures of steel. However, for timber structures the codes do not award that ductility.

  12. Towards higher safety and reliability

    Energy Technology Data Exchange (ETDEWEB)

    Takekuro, I. [Tokyo Electric Power Company, Tokyo (Japan)

    2001-06-01

    Japanese electric power companies are now positioning themselves to gain a stronger position in the liberalised electricity market. Nuclear power in particular plays an important role in satisfying a large part of domestic electricity demand and its performance has continued to improve as a result of enhanced safety operation and tough maintenance programmes. Although the criticality accident which occurred in 1999 shocked not only the public but also the nuclear industry itself, the accident provided an opportunity for the industry and the regulators to learn lessons and look again at safety issues. Japanese electric power companies are now eager to be seen as front-runners in the safe, reliable, and efficient generation of nuclear power for the twenty-first century. (author)

  13. Reliability on ISS Talk Outline

    Science.gov (United States)

    Misiora, Mike

    2015-01-01

    1. Overview of ISS
    2. Space Environment and its effects
       a. Radiation
       b. Microgravity
    3. How we ensure reliability
       a. Requirements
       b. Component Selection
          i. Note: I plan to stay away from talking about Rad Hardened components and talk about why we use older processors, because they are less susceptible to SEUs.
       c. Testing
       d. Redundancy / Failure Tolerance
       e. Sparing strategies
    4. Operational Examples
       a. Multiple MDM Failures on 6A due to hard drive failure
    In general, my plan is to only talk about data that is currently available via normal internet sources, to ensure that I stay away from any topics that would be Export Controlled, ITAR, or NDA-controlled. The operational example has been well reported on in the media and those are the details that I plan to cover. Additionally, I am not planning on using any slides or showing any photos during the talk.

  14. Operator reliability assessment system (OPERAS)

    International Nuclear Information System (INIS)

    Spurgin, A.J.; Hallam, J.W.; Spurgin, J.P.; Singh, A.

    1991-01-01

    The paper gives an overview of the OPERAS project. It discusses the background which led to the design of the PC-based data collection and analysis system connected to plant training simulators including those used for nuclear power plants. The usefulness of a system like OPERAS was perceived during an earlier EPRI project, the Operator Reliability Experiments project, by EPRI and PG&E. The data collection and analysis approaches used in OPERAS were developed during the ORE project. The paper not only discusses the design of OPERAS but discusses the functions performed and the current experiences with the two prototype systems. Also listed are potential uses of OPERAS by utility personnel in Operations, Training and PRA groups

  15. Investigation of reliability indicators of information analysis systems based on Markov’s absorbing chain model

    Science.gov (United States)

    Gilmanshin, I. R.; Kirpichnikov, A. P.

    2017-09-01

    As a result of studying the algorithm of the functioning of the early detection module for excessive losses, it is shown that the module can be modelled using absorbing Markov chains. Of particular interest is the study of the probability characteristics of the early loss-detection module's functioning algorithm, in order to identify the relationship between the reliability indicators of individual elements, or the probability of occurrence of certain events, and the likelihood of transmission of reliable information. The relations identified during the analysis allow thresholds to be set for the reliability characteristics of the system components.
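
    The standard analysis of an absorbing Markov chain uses the fundamental matrix N = (I - Q)^(-1), where Q is the transient-to-transient transition block: N gives expected visit counts, its row sums give the expected number of steps to absorption, and B = N R gives the absorption probabilities. A minimal sketch with an invented three-state detection chain (the states and numbers are illustrative, not from the paper):

      import numpy as np

      # Transient-to-transient block Q and transient-to-absorbing block R of an
      # absorbing Markov chain; transient states: idle, measuring, analysing;
      # absorbing states: "loss reported", "loss missed" (all values invented).
      Q = np.array([[0.0, 0.9, 0.0],
                    [0.1, 0.0, 0.8],
                    [0.0, 0.2, 0.0]])
      R = np.array([[0.0, 0.1],
                    [0.0, 0.1],
                    [0.7, 0.1]])

      N = np.linalg.inv(np.eye(3) - Q)     # fundamental matrix: expected visits
      expected_steps = N.sum(axis=1)       # expected steps to absorption per start state
      B = N @ R                            # absorption probabilities per absorbing state

      print("expected steps to absorption:", expected_steps.round(2))
      print("P(loss reported | start in 'idle'):", B[0, 0].round(3))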

  16. Reliability Testing the Die-Attach of CPV Cell Assemblies

    Energy Technology Data Exchange (ETDEWEB)

    Bosco, N.; Sweet, C.; Kurtz, S.

    2011-02-01

    Results and progress are reported for a course of work to establish an efficient reliability test for the die-attach of CPV cell assemblies. Test vehicle design consists of a ~1 cm2 multijunction cell attached to a substrate via several processes. A thermal cycling sequence is developed in a test-to-failure protocol. Methods of detecting a failed or failing joint are prerequisite for this work; therefore both in-situ and non-destructive methods, including infrared imaging techniques, are being explored as a method to quickly detect non-ideal or failing bonds.

  17. A study of operational and testing reliability in software reliability analysis

    International Nuclear Information System (INIS)

    Yang, B.; Xie, M.

    2000-01-01

    Software reliability is an important aspect of any complex equipment today. Software reliability is usually estimated based on reliability models such as nonhomogeneous Poisson process (NHPP) models. Software systems improve during the testing phase, while they normally do not change during the operational phase. Depending on whether the reliability is to be predicted for the testing phase or the operational phase, different measures should be used. In this paper, two different reliability concepts, namely the operational reliability and the testing reliability, are clarified and studied in detail. These concepts have been mixed up or even misused in some existing literature. Using a different reliability concept will lead to different reliability values obtained and will further lead to different reliability-based decisions being made. The difference between the estimated reliabilities is studied and the effect on the optimal release time is investigated
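
    For NHPP-based models the two concepts can be written down directly: with mean value function m(t), the testing reliability over (t, t+x] is exp(-[m(t+x) - m(t)]) because debugging continues, whereas the operational reliability freezes the failure intensity at release, exp(-m'(t) x). A minimal sketch using the Goel-Okumoto model with invented parameters; this is a simplified reading of the distinction, not necessarily the authors' exact formulation:

      import math

      def goel_okumoto_m(t, a=100.0, b=0.05):
          """Mean value function of the Goel-Okumoto NHPP model."""
          return a * (1.0 - math.exp(-b * t))

      def testing_reliability(t, x, a=100.0, b=0.05):
          """Reliability over (t, t+x] while debugging continues during testing."""
          return math.exp(-(goel_okumoto_m(t + x, a, b) - goel_okumoto_m(t, a, b)))

      def operational_reliability(t, x, a=100.0, b=0.05):
          """Reliability over a mission of length x with the failure intensity
             frozen at its value at release time t (no further fault removal)."""
          lam = a * b * math.exp(-b * t)      # m'(t)
          return math.exp(-lam * x)

      t_release, mission = 60.0, 10.0
      print("testing reliability:    ", round(testing_reliability(t_release, mission), 3))
      print("operational reliability:", round(operational_reliability(t_release, mission), 3))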

  18. Fundamentals and applications of systems reliability analysis

    International Nuclear Information System (INIS)

    Boesebeck, K.; Heuser, F.W.; Kotthoff, K.

    1976-01-01

    The lecture gives a survey on the application of methods of reliability analysis to assess the safety of nuclear power plants. Possible statements of reliability analysis in connection with specifications of the atomic licensing procedure are especially dealt with. Existing specifications of safety criteria are additionally discussed with the help of reliability analysis by the example of the reliability analysis of a reactor protection system. Beyond the limited application to single safety systems, the significance of reliability analysis for a closed risk concept is explained in the last part of the lecture. (orig./LH) [de

  19. Human reliability. Is probabilistic human reliability assessment possible?

    International Nuclear Information System (INIS)

    Mosneron Dupin, F.

    1996-01-01

    The possibility of carrying out Probabilistic Human Reliability Assessments (PHRA) is often doubted. Basing ourselves on the experience Electricite de France (EDF) has acquired in Probabilistic Safety Assessments for nuclear power plants, we show why the uncertainty of PHRA is very high. We then specify the limits of generic data and models for PHRA: very important factors are often poorly taken into account. To account for them, you need to have proper understanding of the actual context in which operators work. This demands surveys on the field (power plant and simulator) all of which must be carried out with behaviours science skills. The idea of estimating the probabilities of operator failure must not be abandoned, but probabilities must be given less importance, for they are only approximate indications. The qualitative aspects of PHRA should be given greater value (analysis process and qualitative insights). That is why the description (illustrated by case histories) of the main mechanisms of human behaviour, and of their manifestations in the nuclear power plant context (in terms of habits, attitudes, and informal methods and organization in particular) should be an important part of PHRA handbooks. These handbooks should also insist more on methods for gathering information on the actual context of the work of operators. Under these conditions, the PHRA should be possible and even desirable as a process for systematic analysis and assessment of human intervention. (author). 24 refs, 2 figs, 1 tab

  20. Human reliability. Is probabilistic human reliability assessment possible?

    Energy Technology Data Exchange (ETDEWEB)

    Mosneron Dupin, F

    1997-12-31

    The possibility of carrying out Probabilistic Human Reliability Assessments (PHRA) is often doubted. Basing ourselves on the experience Electricite de France (EDF) has acquired in Probabilistic Safety Assessments for nuclear power plants, we show why the uncertainty of PHRA is very high. We then specify the limits of generic data and models for PHRA: very important factors are often poorly taken into account. To account for them, you need to have proper understanding of the actual context in which operators work. This demands surveys on the field (power plant and simulator) all of which must be carried out with behaviours science skills. The idea of estimating the probabilities of operator failure must not be abandoned, but probabilities must be given less importance, for they are only approximate indications. The qualitative aspects of PHRA should be given greater value (analysis process and qualitative insights). That is why the description (illustrated by case histories) of the main mechanisms of human behaviour, and of their manifestations in the nuclear power plant context (in terms of habits, attitudes, and informal methods and organization in particular) should be an important part of PHRA handbooks. These handbooks should also insist more on methods for gathering information on the actual context of the work of operators. Under these conditions, the PHRA should be possible and even desirable as a process for systematic analysis and assessment of human intervention. (author). 24 refs, 2 figs, 1 tab.

  1. Problem of nuclear power plant reliability

    International Nuclear Information System (INIS)

    Popyrin, L.S.; Nefedov, Yu.V.

    1989-01-01

    The problem of substantiating rational levels and methods of ensuring NPP reliability at the design stage has been studied. It is shown that the optimal level of NPP reliability is determined by a coordinated solution of the problems of optimizing the reliability of the power industry, heat and power supply and nuclear power generation systems comprising NPPs, and of the problems of reliability optimization of the NPP proper as a complex engineering system. The conclusion is made that the greatest attention should be paid to the development of mathematical models of reliability taking into account different methods of equipment redundancy as well as the dependence of failures on various factors, improvement of NPP reliability indices, development of a data base, and working out of a complex of consistent reliability standards. 230 refs.; 2 figs.; 1 tab

  2. Software reliability models for critical applications

    Energy Technology Data Exchange (ETDEWEB)

    Pham, H.; Pham, M.

    1991-12-01

    This report presents the results of the first phase of the ongoing EG&G Idaho, Inc. Software Reliability Research Program. The program is studying the existing software reliability models and proposes a state-of-the-art software reliability model that is relevant to the nuclear reactor control environment. This report consists of three parts: (1) summaries of the literature review of existing software reliability and fault tolerant software reliability models and their related issues, (2) proposed technique for software reliability enhancement, and (3) general discussion and future research. The development of this proposed state-of-the-art software reliability model will be performed in the second place. 407 refs., 4 figs., 2 tabs.

  3. Software reliability models for critical applications

    Energy Technology Data Exchange (ETDEWEB)

    Pham, H.; Pham, M.

    1991-12-01

    This report presents the results of the first phase of the ongoing EG&G Idaho, Inc. Software Reliability Research Program. The program is studying the existing software reliability models and proposes a state-of-the-art software reliability model that is relevant to the nuclear reactor control environment. This report consists of three parts: (1) summaries of the literature review of existing software reliability and fault tolerant software reliability models and their related issues, (2) proposed technique for software reliability enhancement, and (3) general discussion and future research. The development of this proposed state-of-the-art software reliability model will be performed in the second place. 407 refs., 4 figs., 2 tabs.

  4. Reliability tasks from prediction to field use

    International Nuclear Information System (INIS)

    Guyot, Christian.

    1975-01-01

    This tutorial paper is part of a series intended to raise awareness of reliability problems. Reliability, a probabilistic concept, is an important parameter of availability. Reliability prediction is an estimation process for evaluating design progress. It is only by the application of a reliability program that reliability objectives can be attained through the different stages of work: conception, fabrication, field use. The user is mainly interested in operational reliability. Indications are given on the support and the treatment of data in the case of electronic equipment at the C.E.A. Reliability engineering requires a special state of mind which must be formed and developed in a company in the same way as it may be done, for example, for safety [fr

  5. Reliability and optimization of structural systems

    International Nuclear Information System (INIS)

    Thoft-Christensen, P.

    1987-01-01

    The proceedings contain 28 papers presented at the 1st working conference. The working conference was organized by the IFIP Working Group 7.5. The proceedings also include 4 papers which were submitted, but for various reasons not presented at the working conference. The working conference was attended by 50 participants from 18 countries. The conference was the first scientific meeting of the new IFIP Working Group 7.5 on 'Reliability and Optimization of Structural Systems'. The purpose of the Working Group 7.5 is to promote modern structural system optimization and reliability theory, to advance international cooperation in the field of structural system optimization and reliability theory, to stimulate research, development and application of structural system optimization and reliability theory, to further the dissemination and exchange of information on structural system optimization and reliability theory, and to encourage education in structural system optimization and reliability theory. (orig./HP)

  6. Human reliability in complex systems: an overview

    International Nuclear Information System (INIS)

    Embrey, D.E.

    1976-07-01

    A detailed analysis is presented of the main conceptual background underlying the areas of human reliability and human error. The concept of error is examined and generalized to that of human reliability, and some of the practical and methodological difficulties of reconciling the different standpoints of the human factors specialist and the engineer discussed. Following a survey of general reviews available on human reliability, quantitative techniques for prediction of human reliability are considered. An in-depth critical analysis of the various quantitative methods is then presented, together with the data bank requirements for human reliability prediction. Reliability considerations in process control and nuclear plant, and also areas of design, maintenance, testing and emergency situations are discussed. The effects of stress on human reliability are analysed and methods of minimizing these effects discussed. Finally, a summary is presented and proposals for further research are set out. (author)

  7. Reliability and radiation effects in compound semiconductors

    CERN Document Server

    Johnston, Allan

    2010-01-01

    This book discusses reliability and radiation effects in compound semiconductors, which have evolved rapidly during the last 15 years. Johnston's perspective in the book focuses on high-reliability applications in space, but his discussion of reliability is applicable to high reliability terrestrial applications as well. The book is important because there are new reliability mechanisms present in compound semiconductors that have produced a great deal of confusion. They are complex, and appear to be major stumbling blocks in the application of these types of devices. Many of the reliability problems that were prominent research topics five to ten years ago have been solved, and the reliability of many of these devices has been improved to the level where they can be used for ten years or more with low failure rates. There is also considerable confusion about the way that space radiation affects compound semiconductors. Some optoelectronic devices are so sensitive to damage in space that they are very difficu...

  8. Reliability modeling of digital component in plant protection system with various fault-tolerant techniques

    International Nuclear Information System (INIS)

    Kim, Bo Gyung; Kang, Hyun Gook; Kim, Hee Eun; Lee, Seung Jun; Seong, Poong Hyun

    2013-01-01

    Highlights: • Integrated fault coverage is introduced to reflect the characteristics of fault-tolerant techniques in the reliability model of the digital protection system in NPPs. • The integrated fault coverage considers the process of fault-tolerant techniques from detection to the fail-safe generation process. • With integrated fault coverage, the unavailability of a repairable component of the DPS can be estimated. • The newly developed reliability model can reveal the effects of fault-tolerant techniques explicitly for risk analysis. • The reliability model makes it possible to confirm changes of unavailability according to the variation of diverse factors. - Abstract: With the improvement of digital technologies, digital protection systems (DPS) employ multiple sophisticated fault-tolerant techniques (FTTs) in order to increase fault detection and to help the system safely perform the required functions in spite of the possible presence of faults. Fault detection coverage is a vital factor of an FTT in reliability. However, fault detection coverage alone is insufficient to reflect the effects of various FTTs in a reliability model. To reflect the characteristics of FTTs in the reliability model, integrated fault coverage is introduced. The integrated fault coverage considers the process of an FTT from detection to the fail-safe generation process. A model has been developed to estimate the unavailability of a repairable component of the DPS using the integrated fault coverage. The newly developed model can quantify unavailability under a diversity of conditions. Sensitivity studies are performed to ascertain the important variables which affect the integrated fault coverage and unavailability
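
    A rough feel for why fault coverage matters for the unavailability of a periodically tested, repairable component is given by the standard approximation in which covered faults are revealed immediately by the FTT and repaired within the MTTR, while uncovered faults stay latent until the next surveillance test. This is only an illustrative approximation with invented numbers, not the integrated-fault-coverage model of the paper:

      def unavailability(lmbda, coverage, mttr, test_interval):
          """Steady-state unavailability of a repairable component (valid for
             lambda * test_interval << 1):
             - a fraction 'coverage' of faults is detected immediately by the
               fault-tolerant technique and repaired within MTTR,
             - the rest stays latent until the next periodic surveillance test."""
          detected = coverage * lmbda * mttr
          latent = (1.0 - coverage) * lmbda * (test_interval / 2.0 + mttr)
          return detected + latent

      lmbda = 1e-5           # failures per hour (illustrative)
      mttr = 8.0             # hours
      test_interval = 730.0  # hours (roughly monthly surveillance test)

      for c in (0.0, 0.9, 0.99):
          print(f"coverage = {c:.2f}: U = {unavailability(lmbda, c, mttr, test_interval):.2e}")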

  9. Equipment Reliability Program in NPP Krsko

    International Nuclear Information System (INIS)

    Skaler, F.; Djetelic, N.

    2006-01-01

    Operation that is safe, reliable, effective and acceptable to the public is the common message in the mission statements of commercial nuclear power plants (NPPs). To fulfill these goals, the nuclear industry, among other areas, has to focus on: (1) Human Performance (HU) and (2) Equipment Reliability (EQ). The performance objective of HU is as follows: the behaviors of all personnel result in safe and reliable station operation. While unwanted human behaviors in operations mostly result directly in events, behavior flaws in the areas of maintenance or engineering usually cause decreased equipment reliability. Unsatisfactory human performance has led even the best designed power plants into significant operating events, well-known examples of which can be found in the nuclear industry. Equipment reliability is today recognized as the key to success. While human performance at most NPPs has been improving since the start of WANO / INPO / IAEA evaluations, the open energy market has forced nuclear plants to reduce production costs and operate more reliably and effectively. The balance between these two (opposite) goals has made equipment reliability even more important for safe, reliable and efficient production. Nowadays, in a well-developed safety culture and human performance environment, the cost of insisting on on-line operation while ignoring some principles of safety could exceed the cost of lost electricity production. In the last decade the leading USA nuclear companies have put a lot of effort into improving equipment reliability at their NPP stations, primarily based on the INPO Equipment Reliability Program AP-913. The Equipment Reliability Program is the key program not only for safe and reliable operation, but also for Life Cycle Management and Aging Management on the way to nuclear power plant life extension. The purpose of the Equipment Reliability process is to identify, organize, integrate and coordinate equipment reliability activities (preventive and predictive maintenance, maintenance

  10. French power system reliability report 2008

    International Nuclear Information System (INIS)

    Tesseron, J.M.

    2009-06-01

    The reliability of the French power system was fully under control in 2008, despite the power outage in the eastern part of the Provence-Alpes-Cote d'Azur region on November 3, which had been dreaded for several years because it had not been possible to build a structurally adequate network there. Following a consultation meeting, the reinforcement solution proposed by RTE was approved by the Minister of Energy, boding well for greater reliability in the future. Based on the observations presented in this 2008 report, RTE's Power System Reliability Audit Mission considers that no new recommendations are needed beyond those expressed in previous reliability reports and during reliability audits. The publication of this yearly report is in keeping with RTE's goal of tracking the evolution of reliability in its various aspects over time. RTE thus aims to contribute to the development of a reliability culture by encouraging the different players (both RTE and network users) to better assess the role they play in building reliability, and by advocating that reliability and benchmarking be taken into account in the European organisations of Transmission System Operators. Contents: 1 - Brief overview of the evolution of the internal and external environment; 2 - Operating situations encountered: climatic conditions, supply/demand balance management, operation of interconnections, management of internal congestion, contingencies affecting the transmission facilities; 3 - Evolution of the reliability reference guide: external reference guide (directives, laws, decrees, etc.), ETSO, UCTE, ENTSO-E, contracting contributing to reliability, RTE internal reference guide; 4 - Evolution of measures contributing to reliability in the equipment field: intrinsic performance of components (generating sets, protection systems, operation PLCs, instrumentation and control, automatic frequency and voltage controls, transmission facilities, control systems, load

  11. 76 FR 23801 - North American Electric Reliability Corporation; Order Approving Reliability Standard

    Science.gov (United States)

    2011-04-28

    ... have an operating plan and facilities for backup functionality to ensure Bulk-Power System reliability... entity's primary control center on the reliability of the Bulk-Power System. \\1\\ Mandatory Reliability... potential impact of a violation of the Requirement on the reliability of the Bulk-Power System. The...

  12. 76 FR 73608 - Reliability Technical Conference, North American Electric Reliability Corporation, Public Service...

    Science.gov (United States)

    2011-11-29

    ... or municipal authority play in forming your bulk power system reliability plans? b. Do you support..., North American Electric Reliability Corporation (NERC) Nick Akins, CEO of American Electric Power (AEP..., EL11-62-000] Reliability Technical Conference, North American Electric Reliability Corporation, Public...

  13. 76 FR 23222 - Electric Reliability Organization Interpretation of Transmission Operations Reliability

    Science.gov (United States)

    2011-04-26

    ....3d 1342 (DC Cir. 2009). \\5\\ Mandatory Reliability Standards for the Bulk-Power System, Order No. 693... Reliability Standards for the Bulk-Power System. Action: FERC-725A. OMB Control No.: 1902-0244. Respondents...] Electric Reliability Organization Interpretation of Transmission Operations Reliability AGENCY: Federal...

  14. 76 FR 42534 - Mandatory Reliability Standards for Interconnection Reliability Operating Limits; System...

    Science.gov (United States)

    2011-07-19

    ... Reliability Operating Limits; System Restoration Reliability Standards AGENCY: Federal Energy Regulatory... data necessary to analyze and monitor Interconnection Reliability Operating Limits (IROL) within its... Interconnection Reliability Operating Limits, Order No. 748, 134 FERC ] 61,213 (2011). \\2\\ The term ``Wide-Area...

  15. Control system reliability at Jefferson Lab

    International Nuclear Information System (INIS)

    White, K.S.; Areti, H.; Garza, O.

    1997-01-01

    At Thomas Jefferson National Accelerator Facility (Jefferson Lab), the availability of the control system is crucial to the operation of the accelerator for experimental programs. Jefferson Lab's control system uses 68040-based microprocessors running VxWorks, Unix workstations, and a variety of VME, CAMAC, GPIB, and serial devices. The software consists of control system toolkit software, commercial packages, and over 200 custom and generic applications, some of which are highly complex. The challenge is to keep this highly diverse and still growing system, with over 162,000 control points, operating reliably while managing changes and upgrades to both the hardware and software. Downtime attributable to the control system includes the time to troubleshoot and repair problems and the time to restore the machine to operation for the scheduled program. This paper describes the availability of the control system during the last year, the heaviest contributors to downtime, and the response to problems. Strategies for improving the robustness of the control system are detailed and include changes in hardware, software, procedures and processes. The improvements range from routine preventive hardware maintenance to improving the ability to detect, predict and prevent problems. This paper also describes the software tools used to assist in control system troubleshooting, maintenance and failure recovery processes.
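
    As a trivial illustration of the bookkeeping implied above (overall availability and the heaviest contributors to downtime), the following sketch tallies a hypothetical downtime log; the causes, hours, and scheduled time are invented, not Jefferson Lab data.

      # Illustration only: tally control-system downtime records to get overall
      # availability and rank the heaviest contributors. Data are hypothetical.
      from collections import defaultdict

      downtime_log = [            # (cause, hours lost)
          ("IOC crash", 6.5),
          ("network switch", 2.0),
          ("application bug", 4.25),
          ("IOC crash", 3.0),
          ("CAMAC crate power", 1.5),
      ]
      scheduled_hours = 6000.0    # hypothetical scheduled machine time

      total_down = sum(h for _, h in downtime_log)
      by_cause = defaultdict(float)
      for cause, hours in downtime_log:
          by_cause[cause] += hours

      print(f"control-system availability: {1.0 - total_down / scheduled_hours:.4f}")
      for cause, hours in sorted(by_cause.items(), key=lambda kv: -kv[1]):
          print(f"{cause:20s} {hours:6.2f} h")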

  16. Reliability analysis of safety systems of nuclear power plant and utility experience with reliability safeguarding of systems during specified normal operation

    International Nuclear Information System (INIS)

    Balfanz, H.P.

    1989-01-01

    The paper gives an outline of the methods applied for reliability analysis of safety systems in nuclear power plants. The main tasks are to check the system design for weak points and to find possibilities for optimizing inspection strategies, inspection intervals, and maintenance periods. Reliability safeguarding measures include the determination and verification of the boundary conditions of the analysis with regard to the reliability and maintenance parameters used in the analysis, and the analysis of data feedback reflecting the plant response during operation. (orig.) [de
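
    The record does not state how inspection intervals are optimized; as a generic sketch of the trade-off involved, the fragment below uses the textbook approximation for the time-averaged unavailability of a periodically tested standby component (a latent-failure term growing with the test interval, a test-outage term shrinking with it). The rates and times are hypothetical.

      # Generic sketch, not the paper's method: mean unavailability of a
      # periodically tested standby component as a function of test interval T.
      import math

      lam  = 2.0e-5   # standby failure rate (per hour), hypothetical
      tau  = 1.5      # component outage per test (hours), hypothetical
      mttr = 12.0     # mean repair time after a revealed failure (hours)

      def mean_unavailability(T):
          # lam*T/2: average latent-failure contribution; tau/T: test outage
          return lam * T / 2.0 + tau / T + lam * mttr

      best = min((mean_unavailability(T), T) for T in range(24, 8761, 24))
      print(f"best interval ~{best[1]} h, mean unavailability {best[0]:.2e}")
      print(f"analytic optimum sqrt(2*tau/lam) = {math.sqrt(2 * tau / lam):.0f} h")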

  17. Reliability analysis of operator's monitoring behavior in digital main control room of nuclear power plants and its application

    International Nuclear Information System (INIS)

    Zhang Li; Hu Hong; Li Pengcheng; Jiang Jianjun; Yi Cannan; Chen Qingqing

    2015-01-01

    In order to build a quantitative model for analyzing operators' monitoring behavior reliability in the digital main control room of nuclear power plants, and based on an analysis of the design characteristics of the digital main control room of a nuclear power plant, the operator's monitoring behavior and the monitoring process, monitoring behavior reliability was divided into three parts: information transfer reliability among screens, inside-screen information sampling reliability, and information detection reliability. A quantitative calculation model of information transfer reliability among screens was established based on Senders's monitoring theory; the inside-screen information sampling reliability model was established based on the theory of attention resource allocation; and, taking the causality of performance shaping factors into account, a fuzzy Bayesian method was presented to quantify information detection reliability, with an example of its application. The results show that the established model of monitoring behavior reliability gives an objective description of the monitoring process, can quantify monitoring reliability, and overcomes the shortcomings of traditional methods. It therefore provides theoretical support for operator monitoring behavior reliability analysis in the digital main control room of nuclear power plants and improves the precision of human reliability analysis. (authors)
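
    The abstract names the three parts of monitoring reliability but not the rule for combining them; the sketch below simply assumes a series (multiplicative) combination, which is an assumption of this illustration rather than the authors' model, and uses invented values.

      # Assumed series combination of the three monitoring-reliability parts:
      # transfer among screens, inside-screen sampling, information detection.
      def monitoring_reliability(r_transfer, r_sampling, r_detection):
          return r_transfer * r_sampling * r_detection

      r = monitoring_reliability(r_transfer=0.995, r_sampling=0.98, r_detection=0.97)
      print(f"overall monitoring reliability ~ {r:.4f}  (error probability ~ {1 - r:.4f})")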

  18. Reliability analysis using network simulation

    International Nuclear Information System (INIS)

    Engi, D.

    1985-01-01

    The models that can be used to provide estimates of the reliability of nuclear power systems operate at many different levels of sophistication. The least sophisticated models treat failure processes that entail only time-independent phenomena (such as demand failure). More advanced models treat processes that also include time-dependent phenomena such as run failure and possibly repair. However, many of these dynamic models are deficient in some respects, either because they disregard the time-dependent phenomena that cannot be expressed in closed-form analytic terms or because they treat these phenomena in quasi-static terms. The next level of modeling requires a dynamic approach that incorporates procedures not only for treating all significant time-dependent phenomena but also for treating these phenomena when they are conditionally linked or characterized by arbitrarily selected probability distributions. The required level of sophistication is provided by a dynamic Monte Carlo modeling approach. A computer code that uses a dynamic Monte Carlo modeling approach is Q-GERT (Graphical Evaluation and Review Technique - with Queueing), and the present study has demonstrated the feasibility of using Q-GERT for modeling time-dependent, unconditionally and conditionally linked phenomena that are characterized by arbitrarily selected probability distributions
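
    As a generic illustration of the dynamic Monte Carlo approach described above (not of Q-GERT itself), the sketch below simulates a single repairable component with a time-dependent Weibull failure process and lognormal repair times to estimate mission availability; every distribution parameter is hypothetical.

      # Dynamic Monte Carlo sketch: alternate Weibull times-to-failure with
      # lognormal repair durations over a mission and estimate availability.
      import math
      import random

      def simulate(mission=8760.0, shape=1.5, scale=4.0e4,
                   mu=math.log(8.0), sigma=0.5):
          t, up_time = 0.0, 0.0
          while t < mission:
              ttf = scale * random.weibullvariate(1.0, shape)  # time to failure
              up = min(ttf, mission - t)
              up_time += up
              t += up
              if t >= mission:
                  break
              t += random.lognormvariate(mu, sigma)            # repair duration
          return up_time / mission

      random.seed(1)
      runs = [simulate() for _ in range(2000)]
      print(f"estimated availability: {sum(runs) / len(runs):.4f}")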

  19. Reliability of analog quantum simulation

    Energy Technology Data Exchange (ETDEWEB)

    Sarovar, Mohan [Sandia National Laboratories, Digital and Quantum Information Systems, Livermore, CA (United States); Zhang, Jun; Zeng, Lishan [Shanghai Jiao Tong University, Joint Institute of UMich-SJTU, Key Laboratory of System Control and Information Processing (MOE), Shanghai (China)

    2017-12-15

    Analog quantum simulators (AQS) will likely be the first nontrivial application of quantum technology for predictive simulation. However, there remain questions regarding the degree of confidence that can be placed in the results of AQS since they do not naturally incorporate error correction. Specifically, how do we know whether an analog simulation of a quantum model will produce predictions that agree with the ideal model in the presence of inevitable imperfections? At the same time there is a widely held expectation that certain quantum simulation questions will be robust to errors and perturbations in the underlying hardware. Resolving these two points of view is a critical step in making the most of this promising technology. In this work we formalize the notion of AQS reliability by determining sensitivity of AQS outputs to underlying parameters, and formulate conditions for robust simulation. Our approach naturally reveals the importance of model symmetries in dictating the robust properties. To demonstrate the approach, we characterize the robust features of a variety of quantum many-body models. (orig.)
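
    As a toy numerical illustration of the sensitivity idea (not the formalism of the cited work), the fragment below estimates, by central finite difference, how strongly a simulated ground-state observable depends on an underlying Hamiltonian parameter for a single qubit with H(theta) = theta*Z + X.

      # Toy sensitivity check: d<Z>/dtheta for the ground state of theta*Z + X.
      import numpy as np

      X = np.array([[0.0, 1.0], [1.0, 0.0]])
      Z = np.array([[1.0, 0.0], [0.0, -1.0]])

      def ground_state_expectation(theta, observable=Z):
          H = theta * Z + X
          _, vecs = np.linalg.eigh(H)      # eigenvalues in ascending order
          psi = vecs[:, 0]                 # ground state
          return float(psi @ observable @ psi)

      def sensitivity(theta, eps=1.0e-6):
          return (ground_state_expectation(theta + eps)
                  - ground_state_expectation(theta - eps)) / (2.0 * eps)

      for theta in (0.0, 0.5, 2.0):
          print(f"theta={theta:4.1f}  <Z>={ground_state_expectation(theta):+.4f}"
                f"  d<Z>/dtheta={sensitivity(theta):+.4f}")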

  20. Experience based reliability centered maintenance

    International Nuclear Information System (INIS)

    Haenninen, S.; Laakso, K.

    1993-03-01

    The systematic analysis and documentation of operating experience should be included in a living NPP life management program. Failure mode and effects analysis and maintenance effects analysis are suitable methods for analyzing the failure and corrective maintenance experience of equipment. Combined use of information on functional failures that have occurred and the decision tree logic of reliability centered maintenance identifies applicable and effective preventive maintenance tasks for equipment in an old plant. In this study the electrical motor drives of closing and isolation valves (MOVs) of the TVO and Loviisa nuclear power plants were selected as pilot study objects. The study was limited to valve drives with actuators manufactured by AUMA in Germany. The fault and maintenance history of MOVs in different safety and process systems at the TVO 1 and 2 nuclear power units from 1981 up to and including October 1991 was first analyzed in a systematic way. The scope of the components studied was 81 MOVs in safety-related systems and 127 other MOVs per TVO unit. In the case of the Loviisa plant, the observation period was limited to three years, i.e. from February 1989 to February 1992. The scope of the Loviisa 1 and 2 components studied was 44 and 95 MOVs, respectively. (25 refs., 22 figs., 8 tabs.)
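
    As a hypothetical illustration of the kind of screening such a study supports, the sketch below estimates a failure rate per MOV group from an invented maintenance history and flags groups exceeding a simple threshold for review of their preventive maintenance tasks; the group sizes echo the TVO figures above, but the failure counts and the threshold are invented.

      # Hypothetical data: functional failures per group over the observation
      # period, converted to failures per valve-year and screened against a
      # simple (invented) decision criterion.
      observation_years = 10.0
      history = {
          # group: (number of MOVs, functional failures recorded)
          "safety-related MOVs": (81, 12),
          "other MOVs":          (127, 35),
      }
      threshold = 0.02  # failures per valve-year, hypothetical

      for group, (n_valves, n_failures) in history.items():
          rate = n_failures / (n_valves * observation_years)
          flag = "review PM task" if rate > threshold else "PM programme adequate"
          print(f"{group:22s} {rate:.3f} failures/valve-year -> {flag}")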