WorldWideScience

Sample records for statistical failure analysis

  1. Uncertainty analysis with statistically correlated failure data

    International Nuclear Information System (INIS)

    Modarres, M.; Dezfuli, H.; Roush, M.L.

    1987-01-01

    The likelihood of occurrence of the top event of a fault tree, or of the sequences of an event tree, is estimated from the failure probabilities of the components that constitute the events of the fault/event tree. Component failure probabilities are subject to statistical uncertainties. In addition, there are cases where the failure data are statistically correlated. At present most fault tree calculations are based on uncorrelated component failure data. This chapter describes a methodology for assessing the probability intervals for the top event failure probability of fault trees, or the frequency of occurrence of event tree sequences, when event failure data are statistically correlated. To estimate the mean and variance of the top event, a second-order system moment method based on a Taylor series expansion is presented, which provides an alternative to the commonly used Monte Carlo method. For cases where component failure probabilities are statistically correlated, the Taylor expansion terms are treated properly. A moment-matching technique is used to obtain the probability distribution function of the top event by fitting a Johnson S_B distribution. The computer program CORRELATE was developed to perform the calculations necessary for the implementation of the method. (author)
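
    The second-order moment method sketched in this abstract can be illustrated compactly. The following Python fragment is a minimal sketch: the means, standard deviations and correlation are assumed illustrative values, and the CORRELATE program itself is not reproduced. It propagates correlated component failure probabilities through a two-component OR gate using the Taylor-expansion terms; a Johnson S_B distribution could then be matched to the resulting two moments, as the abstract describes.

      import numpy as np

      # Top event of a two-component OR gate: P = 1 - (1-p1)(1-p2) = p1 + p2 - p1*p2.
      mu = np.array([1e-3, 2e-3])                # mean component failure probabilities
      sd = np.array([3e-4, 5e-4])                # their standard deviations
      rho = 0.5                                  # assumed statistical correlation
      cov = np.array([[sd[0]**2, rho * sd[0] * sd[1]],
                      [rho * sd[0] * sd[1], sd[1]**2]])

      def top(p):
          return 1.0 - (1.0 - p[0]) * (1.0 - p[1])

      grad = np.array([1.0 - mu[1], 1.0 - mu[0]])   # first-order Taylor term
      hess = np.array([[0.0, -1.0], [-1.0, 0.0]])   # second-order (cross) term

      mean_top = top(mu) + 0.5 * np.trace(hess @ cov)   # second-order mean
      var_top = float(grad @ cov @ grad)                # first-order variance
      print(f"top event: mean ~ {mean_top:.4e}, variance ~ {var_top:.4e}")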

  2. Statistical trend analysis methodology for rare failures in changing technical systems

    International Nuclear Information System (INIS)

    Ott, K.O.; Hoffmann, H.J.

    1983-07-01

    A methodology for a statistical trend analysis (STA) of failure rates is presented. It applies primarily to relatively rare events in changing technologies or components. The formulation is more general and the assumptions are less restrictive than in a previously published version. Relations between the statistical trend analysis and probabilistic risk assessment (PRA) are discussed in terms of the categorization of decisions for action following particular failure events. The significance of tentatively identified trends is explored. In addition to statistical tests for trend significance, a combination of STA and PRA results quantifying the trend complement is proposed. The STA approach is compared with other concepts for trend characterization. (orig.)

  3. Statistical Analysis Of Failure Strength Of Material Using Weibull Distribution

    International Nuclear Information System (INIS)

    Entin Hartini; Mike Susmikanti; Antonius Sitompul

    2008-01-01

    In the evaluation of the strength of ceramic and glass materials, a statistical approach is necessary. The strength of ceramics and glass depends on the specimen size and on the size distribution of flaws in the material. The distribution of strength for a ductile material is narrow and close to a Gaussian distribution, while the strength of brittle materials such as ceramics and glass follows a Weibull distribution. The Weibull distribution is an indicator of the failure of material strength resulting from a distribution of flaw sizes. In this paper, the cumulative failure probability as a function of material strength, the cumulative probability of failure versus fracture stress, and the cumulative reliability of the material were calculated. The statistical criteria calculations supporting the strength analysis of a silicon nitride material were carried out using MATLAB. (author)
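
    As a rough Python analogue of the MATLAB calculation described above (a sketch only; the fracture-stress values are hypothetical stand-ins, not the silicon nitride data of the paper):

      import numpy as np
      from scipy import stats

      # Hypothetical fracture-stress data (MPa) for a brittle ceramic.
      strength = np.array([402., 438., 455., 471., 490., 502., 515.,
                           528., 541., 557., 569., 588., 604., 633.])

      # Fit a two-parameter Weibull (location fixed at zero).
      m, loc, sigma0 = stats.weibull_min.fit(strength, floc=0.0)
      print(f"Weibull modulus m ~ {m:.2f}, characteristic strength ~ {sigma0:.1f} MPa")

      # Cumulative probability of failure versus applied stress, and reliability.
      for s in (350., 450., 550., 650.):
          pf = stats.weibull_min.cdf(s, m, loc=0.0, scale=sigma0)
          print(f"stress {s:5.0f} MPa -> failure probability {pf:.3f}, "
                f"reliability {1 - pf:.3f}")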

  4. Statistical analysis of human maintenance failures of a nuclear power plant

    International Nuclear Information System (INIS)

    Pyy, P.

    2000-01-01

    In this paper, a statistical study of faults caused by maintenance activities is presented. The objective of the study was to draw conclusions on the unplanned effects of maintenance on nuclear power plant safety and system availability. More than 4400 maintenance history reports from the years 1992-1994 of the Olkiluoto BWR nuclear power plant (NPP) were analysed together with the maintenance personnel. The human action induced faults were classified, e.g., according to their multiplicity and effects. This paper presents and discusses the results of a statistical analysis of the data. Instrumentation and electrical components are especially prone to human failures. Many human failures were found in safety related systems. Similarly, several failures remained latent from outages to power operation. The safety significance was generally small. Modifications are an important source of multiple human failures. Plant maintenance data is a good source of human reliability data and it should be used more in the future. (orig.)

  5. Statistical analysis of events related to emergency diesel generators failures in the nuclear industry

    Energy Technology Data Exchange (ETDEWEB)

    Kančev, Duško, E-mail: dusko.kancev@ec.europa.eu [European Commission, DG-JRC, Institute for Energy and Transport, P.O. Box 2, NL-1755 ZG Petten (Netherlands); Duchac, Alexander; Zerger, Benoit [European Commission, DG-JRC, Institute for Energy and Transport, P.O. Box 2, NL-1755 ZG Petten (Netherlands); Maqua, Michael [Gesellschaft für Anlagen- und Reaktorsicherheit (GRS) mbH, Schwertnergasse 1, 50667 Köln (Germany); Wattrelos, Didier [Institut de Radioprotection et de Sûreté Nucléaire (IRSN), BP 17 - 92262 Fontenay-aux-Roses Cedex (France)]

    2014-07-01

    Highlights: • Analysis of operating experience related to emergency diesel generator events at NPPs. • Four abundant operating experience databases screened. • Delineating important insights and conclusions based on the operating experience. - Abstract: This paper is aimed at studying the operating experience related to emergency diesel generator (EDG) events at nuclear power plants collected from the past 20 years. Events related to EDG failures and/or unavailability, as well as to all the supporting equipment, are the focus of the analysis. The selected operating experience was analyzed in detail in order to identify the types of failures, the attributes that contributed to the failures, and the failure modes (potential or real), discuss risk relevance, summarize important lessons learned, and provide recommendations. The study in this particular paper is closely tied to the statistical analysis of the operating experience. For the purpose of this study, EDG failure is defined as an EDG failure to function on demand (i.e. fail to start, fail to run) or during testing, or an unavailability of an EDG, except for unavailability due to regular maintenance. The Gesellschaft für Anlagen- und Reaktorsicherheit mbH (GRS) and Institut de Radioprotection et de Sûreté Nucléaire (IRSN) databases, as well as the operating experience contained in the IAEA/NEA International Reporting System for Operating Experience and the U.S. Licensee Event Reports, were screened. The screening methodology applied to each of the four different databases is presented. Further on, analyses aimed at delineating the causes, root causes, contributing factors and consequences are performed. A statistical analysis was performed on the chronology of events, the types of failures, the operational circumstances of detection of the failure and the affected components/subsystems. The conclusions and results of the statistical analysis are discussed. The main findings concerning the testing

  6. Statistical analysis of events related to emergency diesel generators failures in the nuclear industry

    International Nuclear Information System (INIS)

    Kančev, Duško; Duchac, Alexander; Zerger, Benoit; Maqua, Michael; Wattrelos, Didier

    2014-01-01

    Highlights: • Analysis of operating experience related to emergency diesel generator events at NPPs. • Four abundant operating experience databases screened. • Delineating important insights and conclusions based on the operating experience. - Abstract: This paper is aimed at studying the operating experience related to emergency diesel generator (EDG) events at nuclear power plants collected from the past 20 years. Events related to EDG failures and/or unavailability, as well as to all the supporting equipment, are the focus of the analysis. The selected operating experience was analyzed in detail in order to identify the types of failures, the attributes that contributed to the failures, and the failure modes (potential or real), discuss risk relevance, summarize important lessons learned, and provide recommendations. The study in this particular paper is closely tied to the statistical analysis of the operating experience. For the purpose of this study, EDG failure is defined as an EDG failure to function on demand (i.e. fail to start, fail to run) or during testing, or an unavailability of an EDG, except for unavailability due to regular maintenance. The Gesellschaft für Anlagen- und Reaktorsicherheit mbH (GRS) and Institut de Radioprotection et de Sûreté Nucléaire (IRSN) databases, as well as the operating experience contained in the IAEA/NEA International Reporting System for Operating Experience and the U.S. Licensee Event Reports, were screened. The screening methodology applied to each of the four different databases is presented. Further on, analyses aimed at delineating the causes, root causes, contributing factors and consequences are performed. A statistical analysis was performed on the chronology of events, the types of failures, the operational circumstances of detection of the failure and the affected components/subsystems. The conclusions and results of the statistical analysis are discussed. The main findings concerning the testing

  7. Beyond reliability, multi-state failure analysis of satellite subsystems: A statistical approach

    International Nuclear Information System (INIS)

    Castet, Jean-Francois; Saleh, Joseph H.

    2010-01-01

    Reliability is widely recognized as a critical design attribute for space systems. In recent articles, we conducted nonparametric analyses and Weibull fits of satellite and satellite subsystems reliability for 1584 Earth-orbiting satellites launched between January 1990 and October 2008. In this paper, we extend our investigation of failures of satellites and satellite subsystems beyond the binary concept of reliability to the analysis of their anomalies and multi-state failures. In reliability analysis, the system or subsystem under study is considered to be either in an operational or failed state; multi-state failure analysis introduces 'degraded states' or partial failures, and thus provides more insights through finer resolution into the degradation behavior of an item and its progression towards complete failure. The database used for the statistical analysis in the present work identifies five states for each satellite subsystem: three degraded states, one fully operational state, and one failed state (complete failure). Because our dataset is right-censored, we calculate the nonparametric probability of transitioning between states for each satellite subsystem with the Kaplan-Meier estimator, and we derive confidence intervals for each probability of transitioning between states. We then conduct parametric Weibull fits of these probabilities using the Maximum Likelihood Estimation (MLE) approach. After validating the results, we compare the reliability versus multi-state failure analyses of three satellite subsystems: the thruster/fuel; the telemetry, tracking, and control (TTC); and the gyro/sensor/reaction wheel subsystems. The results are particularly revealing of the insights that can be gleaned from multi-state failure analysis and the deficiencies, or blind spots, of the traditional reliability analysis. In addition to the specific results provided here, which should prove particularly useful to the space industry, this work highlights the importance
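
    The nonparametric step described above, estimating transition probabilities from right-censored data, can be sketched with a hand-rolled Kaplan-Meier product-limit estimator; the times and censoring flags below are invented for illustration, not the 1584-satellite dataset:

      import numpy as np

      def kaplan_meier(time, observed):
          """Product-limit estimate of the survivor function for right-censored data."""
          t = np.asarray(time, float)
          d = np.asarray(observed, int)
          surv, s = [], 1.0
          for u in np.unique(t[d == 1]):            # distinct transition times
              at_risk = np.sum(t >= u)              # units still under observation
              events = np.sum((t == u) & (d == 1))  # transitions at this time
              s *= 1.0 - events / at_risk
              surv.append((u, s))
          return surv

      # Hypothetical on-orbit times (years) to a degraded-state transition;
      # observed=0 marks satellites still healthy at last contact (censored).
      time = [0.5, 1.2, 2.0, 2.0, 3.5, 4.1, 5.0, 6.3, 7.7, 9.0]
      observed = [1, 1, 0, 1, 1, 0, 1, 0, 1, 0]
      for u, s in kaplan_meier(time, observed):
          print(f"t = {u:4.1f} yr   S(t) = {s:.3f}")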

  8. Statistical analysis of nuclear power plant pump failure rate variability: some preliminary results

    International Nuclear Information System (INIS)

    Martz, H.F.; Whiteman, D.E.

    1984-02-01

    In-Plant Reliability Data System (IPRDS) pump failure data on over 60 selected pumps in four nuclear power plants are statistically analyzed using the Failure Rate Analysis Code (FRAC). A major purpose of the analysis is to determine which environmental, system, and operating factors adequately explain the variability in the failure data. Catastrophic, degraded, and incipient failure severity categories are considered for both demand-related and time-dependent failures. For catastrophic demand-related pump failures, the variability is explained by the following factors listed in their order of importance: system application, pump driver, operating mode, reactor type, pump type, and unidentified plant-specific influences. Quantitative failure rate adjustments are provided for the effects of these factors. In the case of catastrophic time-dependent pump failures, the failure rate variability is explained by three factors: reactor type, pump driver, and unidentified plant-specific influences. Finally, point and confidence interval failure rate estimates are provided for each selected pump by considering the influential factors. Both types of estimates represent an improvement over the estimates computed exclusively from the data on each pump

  9. Statistical analysis on failure-to-open/close probability of motor-operated valve in sodium system

    International Nuclear Information System (INIS)

    Kurisaka, Kenichi

    1998-08-01

    The objective of this work is to develop basic data for an examination of the efficiency of preventive maintenance and actuation tests from the standpoint of failure probability. The work consists of a statistical trend analysis of the valve failure probability in the failure-to-open/close mode as a function of time since installation and time since the last open/close action, based on field data of operating and failure experience. Terms both dependent on and independent of time were considered in the failure probability. The linear aging model was modified and applied to the time-dependent term; in this model there are two terms, with failure rates proportional to time since installation and to time since the last open/close demand. Because of the sufficient statistical population, motor-operated valves (MOVs) in sodium systems were selected for analysis from the CORDS database, which contains operating data and failure data of components in fast reactors and sodium test facilities. From these data, the functional parameters were statistically estimated to quantify the valve failure probability in the failure-to-open/close mode, with consideration of uncertainty. (J.P.N.)

  10. Uncertainty analysis of reactor safety systems with statistically correlated failure data

    International Nuclear Information System (INIS)

    Dezfuli, H.; Modarres, M.

    1985-01-01

    The probability of occurrence of the top event of a fault tree is estimated from the failure probability of components that constitute the fault tree. Component failure probabilities are subject to statistical uncertainties. In addition, there are cases where the failure data are statistically correlated. Most fault tree evaluations have so far been based on uncorrelated component failure data. The subject of this paper is the description of a method of assessing the probability intervals for the top event failure probability of fault trees when component failure data are statistically correlated. To estimate the mean and variance of the top event, a second-order system moment method is presented through Taylor series expansion, which provides an alternative to the normally used Monte-Carlo method. For cases where component failure probabilities are statistically correlated, the Taylor expansion terms are treated properly. A moment matching technique is used to obtain the probability distribution function of the top event through fitting a Johnson S_B distribution. The computer program CORRELATE was developed to perform the calculations necessary for the implementation of the method. The CORRELATE code is very efficient and consumes minimal computer time, primarily because it does not employ the time-consuming Monte-Carlo method. (author)

  11. A statistical analysis on failure-to open/close probability of pneumatic valve in sodium cooling systems

    International Nuclear Information System (INIS)

    Kurisaka, Kenichi

    1999-11-01

    The objective of this study is to develop fundamental data for an examination of the efficiency of preventive maintenance and surveillance tests from the standpoint of failure probability. In this study, a pneumatic valve in sodium cooling systems was selected as a major standby component. A statistical analysis was made of the trend of the valve failure-to-open/close (FTOC) probability depending on the number of demands ('n'), the time since installation ('t') and the standby time since the last open/close action ('T'). The analysis is based on the field data of operating and failure experience stored in the Component Reliability Database and Statistical Analysis System for LMFBRs (CORDS). In the analysis, the FTOC probability ('P') was expressed as follows: P = 1 - exp{-C - En - F/n - λT - aT(t - T/2) - AT²/2}. The functional parameters 'C', 'E', 'F', 'λ', 'a' and 'A' were estimated with the maximum likelihood estimation method. As a result, the FTOC probability is well approximated by the failure probability derived from the failure rate under the assumption of a Poisson distribution only when the valve cycle (i.e. the open-close-open cycle) exceeds about 100 days. When the valve cycle is shorter than about 100 days, the FTOC probability can be adequately estimated with the parametric model proposed in this study. The results obtained from this study may make it possible to derive an adequate frequency of surveillance testing for a given target FTOC probability. (author)
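
    The parametric model quoted in the abstract is easy to evaluate once its parameters are known; in the sketch below the parameter values are placeholders, since the actual maximum-likelihood estimates from the CORDS data are not given here:

      import math

      def ftoc_probability(n, t, T, C, E, F, lam, a, A):
          """FTOC probability from the abstract's model:
          P = 1 - exp(-C - E*n - F/n - lam*T - a*T*(t - T/2) - A*T**2/2)."""
          expo = C + E * n + F / n + lam * T + a * T * (t - T / 2.0) + A * T**2 / 2.0
          return 1.0 - math.exp(-expo)

      # Placeholder parameters and operating history (n demands, t and T in days).
      p = ftoc_probability(n=50, t=2000.0, T=30.0,
                           C=1e-4, E=1e-5, F=5e-3, lam=1e-5, a=1e-8, A=1e-8)
      print(f"estimated FTOC probability ~ {p:.4f}")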

  12. A statistical analysis of pellet-clad interaction failures in water reactor fuel

    International Nuclear Information System (INIS)

    McDonald, S.G.; Fardo, R.D.; Sipush, P.J.; Kaiser, R.S.

    1981-01-01

    The primary objective of the statistical analysis was to develop a mathematical function that would predict PCI fuel rod failures as a function of the imposed operating conditions. Linear discriminant analysis of data from both test and commercial reactors was performed. The initial data base encompassed 713 data points (117 failures and 596 non-failures) representing a wide variety of water cooled reactor fuel (PWR, BWR, CANDU, and SGHWR). When applied on a best-estimate basis, the resulting function simultaneously predicts approximately 80 percent of both the failure and non-failure data correctly. One of the most significant predictions of the analysis is that relatively large changes in power can be tolerated when the pre-ramp irradiation power is low, but that only small changes in power can be tolerated when the pre-ramp irradiation power is high. However, it is also predicted that fuel rods irradiated at low power will fail at lower final powers than those irradiated at high powers. Other results of the analysis are that fuel rods with high clad operating temperatures can withstand larger power increases than fuel rods with low clad operating temperatures, and that burnup has only a minimal effect on PCI performance after levels of approximately 10000 MWD/MTU have been exceeded. These trends in PCI performance and the operating parameters selected are believed to be consistent with mechanistic considerations. Published PCI data indicate that BWR fuel usually operates at higher local powers, larger changes in power, lower clad temperatures, and higher local ramp rates than PWR fuel.
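
    Linear discriminant analysis of this kind is straightforward to reproduce in outline. The sketch below uses scikit-learn on synthetic data whose failure rule merely echoes the trend reported above (large power changes tolerated only at low pre-ramp power); none of the numbers correspond to the 713-point database:

      import numpy as np
      from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

      rng = np.random.default_rng(0)
      n = 300
      pre_ramp = rng.uniform(10, 40, n)   # pre-ramp power (kW/m), synthetic
      delta_p = rng.uniform(0, 25, n)     # power increase (kW/m), synthetic
      # Assumed failure rule: tolerable power change shrinks as pre-ramp power grows.
      fail = (delta_p > 30 - 0.6 * pre_ramp + rng.normal(0, 3, n)).astype(int)

      X = np.column_stack([pre_ramp, delta_p])
      lda = LinearDiscriminantAnalysis().fit(X, fail)
      print("classification accuracy:", lda.score(X, fail))
      print("discriminant coefficients:", lda.coef_, "intercept:", lda.intercept_)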

  13. Analysis of Statistical Distributions Used for Modeling Reliability and Failure Rate of Temperature Alarm Circuit

    International Nuclear Information System (INIS)

    El-Shanshoury, G.I.

    2011-01-01

    Several statistical distributions are used to model various reliability and maintainability parameters. The applied distribution depends on the nature of the data being analyzed. This paper deals with the analysis of some statistical distributions used in reliability in order to determine the best-fitting distribution. The calculations rely on circuit quantity parameters obtained by using the Relex 2009 computer program. The statistical analysis of ten different distributions indicated that the Weibull distribution gives the best fit for modeling the reliability of the data set of the Temperature Alarm Circuit (TAC). However, the Exponential distribution is found to be the best fit for modeling the failure rate.
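
    A best-fit comparison across candidate reliability distributions, in the spirit of the analysis above, might look like the following sketch (synthetic data and an assumed four-distribution candidate set; the commercial Relex 2009 tool is not reproduced):

      import numpy as np
      from scipy import stats

      rng = np.random.default_rng(1)
      # Hypothetical times-to-failure (hours), Weibull-ish for illustration.
      data = stats.weibull_min.rvs(1.8, scale=5000.0, size=200, random_state=rng)

      candidates = {"weibull": stats.weibull_min, "exponential": stats.expon,
                    "lognormal": stats.lognorm, "gamma": stats.gamma}

      for name, dist in candidates.items():
          params = dist.fit(data, floc=0.0)           # location pinned at zero
          loglik = np.sum(dist.logpdf(data, *params))
          aic = 2 * (len(params) - 1) - 2 * loglik    # floc is fixed, not fitted
          ks = stats.kstest(data, dist.cdf, args=params).statistic
          print(f"{name:12s} AIC = {aic:10.1f} KS = {ks:.4f}")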

  14. Failure rate analysis using GLIMMIX

    International Nuclear Information System (INIS)

    Moore, L.M.; Hemphill, G.M.; Martz, H.F.

    1998-01-01

    This paper illustrates use of a recently developed SAS macro, GLIMMIX, for implementing an analysis suggested by Wolfinger and O'Connell (1993) in modeling failure count data with random as well as fixed factor effects. Interest in this software tool arose from consideration of modernizing the Failure Rate Analysis Code (FRAC), developed at Los Alamos National Laboratory in the early 1980's by Martz, Beckman and McInteer (1982). FRAC is a FORTRAN program developed to analyze Poisson distributed failure count data as a log-linear model, possibly with random as well as fixed effects. These statistical modeling assumptions are a special case of generalized linear mixed models, identified as GLMM in the current statistics literature. In the nearly 15 years since FRAC was developed, there have been considerable advances in computing capability, statistical methodology and available statistical software tools allowing worthwhile consideration of the tasks of modernizing FRAC. In this paper, the approaches to GLMM estimation implemented in GLIMMIX and in FRAC are described and a comparison of results for the two approaches is made with data on catastrophic time-dependent pump failures from a report by Martz and Whiteman (1984). Additionally, statistical and graphical model diagnostics are suggested and illustrated with the GLIMMIX analysis results
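
    GLIMMIX is a SAS macro, but the fixed-effects core of the Poisson log-linear model it fits can be sketched in Python with statsmodels; random effects would need a dedicated mixed-model tool, and the counts and exposures below are simulated placeholders:

      import numpy as np
      import statsmodels.api as sm

      rng = np.random.default_rng(2)
      n = 120
      system = rng.integers(0, 3, n)      # fixed factor: 3 system applications
      hours = rng.uniform(1e3, 1e4, n)    # exposure time per pump
      rate = np.exp(-8.0 + 0.5 * (system == 1) + 1.0 * (system == 2))
      counts = rng.poisson(rate * hours)  # simulated failure counts

      # Log-linear Poisson GLM with log(exposure) as offset.
      X = sm.add_constant(np.column_stack([system == 1, system == 2]).astype(float))
      result = sm.GLM(counts, X, family=sm.families.Poisson(),
                      offset=np.log(hours)).fit()
      print(result.summary())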

  15. The Statistical Analysis of Failure Time Data

    CERN Document Server

    Kalbfleisch, John D

    2011-01-01

    Contains additional discussion and examples on left truncation as well as material on more general censoring and truncation patterns. Introduces the martingale and counting process formulation in a new chapter. Develops multivariate failure time data in a separate chapter and extends the material on Markov and semi-Markov formulations. Presents new examples and applications of data analysis.

  16. Statistical characteristics of serious network failures in Japan

    International Nuclear Information System (INIS)

    Uchida, Masato

    2014-01-01

    Due to significant environmental changes in the telecommunications market, network failures affect socioeconomic activities more than ever before. However, the health of public networks at a national level has not been investigated in detail. In this paper, we investigate the statistical characteristics of interval, duration, and the number of users affected for serious network failures, which are defined as network failures that last for more than two hours and affect more than 30,000 users, that occurred in Japan during Japanese fiscal years 2008–2012 (April 2008–March 2013). The results show that (i) the interval follows a Poisson process, (ii) the duration follows a Pareto distribution, (iii) the number of users affected follows a piecewise Pareto distribution, (iv) the product of duration and the number of users affected roughly follow a distribution that can be derived from a convolution of two distributions of duration and the number of users affected, and (v) the relationship between duration and the number of users affected differs from service to service. - Highlights: • The statistical characteristics of serious network failures in Japan are analyzed. • The analysis is based on public information that is available at the moment. • The interval follows a Poisson process. • The duration follows a Pareto distribution. • The number of users affected follows a piecewise Pareto distribution
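
    The distributional findings above suggest a simple fitting recipe: exponential inter-occurrence intervals (the signature of a Poisson process) and heavy-tailed Pareto durations. A scipy sketch on simulated stand-in data (not the Japanese incident records):

      import numpy as np
      from scipy import stats

      rng = np.random.default_rng(3)
      intervals = stats.expon.rvs(scale=30.0, size=60, random_state=rng)      # days
      durations = stats.pareto.rvs(1.5, scale=2.0, size=60, random_state=rng) # hours

      # Poisson process => exponential intervals: fit and test.
      loc, scale = stats.expon.fit(intervals, floc=0.0)
      ks_int = stats.kstest(intervals, stats.expon.cdf, args=(0.0, scale))
      print(f"exponential scale = {scale:.1f} days, KS p-value = {ks_int.pvalue:.3f}")

      # Heavy-tailed durations: Pareto with fixed location.
      b, loc_d, scale_d = stats.pareto.fit(durations, floc=0.0)
      ks_dur = stats.kstest(durations, stats.pareto.cdf, args=(b, loc_d, scale_d))
      print(f"Pareto shape = {b:.2f}, KS p-value = {ks_dur.pvalue:.3f}")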

  17. Importance of competing risks in the analysis of anti-epileptic drug failure

    Directory of Open Access Journals (Sweden)

    Sander Josemir W

    2007-03-01

    Abstract Background Retention time (time to treatment failure) is a commonly used outcome in antiepileptic drug (AED) studies. Methods Two datasets are used to demonstrate the issues in a competing risks analysis of AEDs. First, data collection and follow-up considerations are discussed with reference to information from 15 monotherapy trials. Recommendations for improved data collection and cumulative incidence analysis are then illustrated using the SANAD trial dataset. The results are compared to the more common approach using standard survival analysis methods. Results A non-significant difference in overall treatment failure time between gabapentin and topiramate (logrank test statistic = 0.01, 1 degree of freedom, p-value = 0.91) masked highly significant differences in opposite directions, with gabapentin resulting in fewer withdrawals due to side effects (Gray's test statistic = 11.60, 1 degree of freedom, p = 0.0007) but more due to poor seizure control (Gray's test statistic = 14.47, 1 degree of freedom, p-value = 0.0001). The significant difference in overall treatment failure time between lamotrigine and carbamazepine (logrank test statistic = 5.6, 1 degree of freedom, p-value = 0.018) was due entirely to a significant benefit of lamotrigine in terms of side effects (Gray's test statistic = 10.27, 1 degree of freedom, p = 0.001). Conclusion Treatment failure time can be measured reliably, but care is needed to collect sufficient information on reasons for drug withdrawal to allow a competing risks analysis. Important differences between the profiles of AEDs may be missed unless appropriate statistical methods are used to fully investigate treatment failure time. Cumulative incidence analysis allows comparison of the probability of failure between two AEDs and is likely to be a more powerful approach than logrank analysis for most comparisons of standard and new anti-epileptic drugs.
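
    The cumulative incidence analysis recommended above can be sketched with a short Aalen-Johansen-style estimator; the retention times and withdrawal causes below are invented (not the SANAD data), and Gray's test itself is omitted:

      import numpy as np

      def cumulative_incidence(time, cause, k):
          """Nonparametric cumulative incidence of failure cause k;
          cause=0 denotes censoring, other integers are competing causes."""
          t = np.asarray(time, float)
          c = np.asarray(cause, int)
          s, cif, out = 1.0, 0.0, []
          for u in np.unique(t[c > 0]):
              at_risk = np.sum(t >= u)
              d_all = np.sum((t == u) & (c > 0))
              d_k = np.sum((t == u) & (c == k))
              cif += s * d_k / at_risk       # increment for cause k at time u
              s *= 1.0 - d_all / at_risk     # update overall survival
              out.append((u, cif))
          return out

      # Months on drug; cause 1 = side effects, 2 = poor seizure control, 0 = censored.
      time = [2, 3, 3, 5, 6, 8, 8, 10, 12, 12, 15, 18, 20, 24, 24]
      cause = [1, 2, 0, 1, 2, 1, 0, 2, 1, 0, 2, 0, 1, 2, 0]
      for u, cif in cumulative_incidence(time, cause, k=1):
          print(f"t = {u:4.0f} months   CIF(side effects) = {cif:.3f}")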

  18. Establishment, maintenance and application of failure statistics as a basis for availability optimization

    International Nuclear Information System (INIS)

    Poll, H.

    1989-01-01

    The purpose of failure statistics is to obtain hints on weak points due to operation and design. The present failure statistics of Rheinisch-Westfaelisches Elektrizitaetswerk (RWE) are based on reductions in the availability of power station units. If damage or trouble occurs with a unit, data are recorded in order to calculate the unavailability and to describe the occurrence, the extent, and the removal of the damage. Following a survey of the most important data, a short explanation is given of how the failure statistics are updated, and some problems of this task are mentioned. Finally, some examples are given of how the failure statistics can be used for analyses. (orig.) [de]

  19. Constructing Ontology for Knowledge Sharing of Materials Failure Analysis

    Directory of Open Access Journals (Sweden)

    Peng Shi

    2014-01-01

    Materials failure denotes a fault in materials or components during their service. To avoid the recurrence of similar failures, materials failure analysis is carried out to investigate the reasons for the failure and to propose improved strategies. The whole procedure needs sufficient domain knowledge and also produces valuable new knowledge. However, the information about a materials failure analysis is usually retained by the domain expert, and sharing it is technically difficult. This may seriously reduce the efficiency and decrease the veracity of failure analysis. To solve this problem, this paper adopts ontology, a technology from the Semantic Web, as a tool for knowledge representation and sharing, and describes the construction of an ontology covering information on the failure analysis, application area, materials, and failure cases. Ontology-represented information is machine-understandable and can be easily shared through the Internet. At the same time, intelligent retrieval of failure cases, advanced statistics, and even automatic reasoning can be accomplished based on ontology-represented knowledge. This can promote the knowledge sharing of materials service safety and improve the efficiency of failure analysis. The case of a nuclear power plant area is presented to show the details and benefits of this method.

  20. VDEW statistics of failures and damage 1972. VDEW Stoerungs- und Schadensstatistik 1972

    Energy Technology Data Exchange (ETDEWEB)

    1975-01-01

    Results of the VDEW's statistics on failures and damage concerning the high-voltage network of the FRG and West Berlin in 1972 are presented. The tables, columns, and standard charts published in this brochure were elaborated by the VDEW working group 'Failures and damage statistics' under the leadership of Dipl.-Ing. H. Reisner.

  1. A COCAP program for the statistical analysis of common cause failure parameters

    Energy Technology Data Exchange (ETDEWEB)

    Kwon, Baehyeuk; Jae, Moosung [Hanyang Univ., Seoul (Korea, Republic of). Dept. of Nuclear Engineering

    2016-03-15

    Probabilistic Safety Assessment (PSA) based applications and regulations are becoming more important in the field of nuclear energy. According to the results of PSAs in Korea, common cause failure (CCF) is evaluated as one of the significant factors affecting the CDF (Core Damage Frequency), given the redundancy of NPPs. The purpose of this study is to develop the COCAP (Common Cause Failure parameter Analysis for PSA) program, both for the accurate use of the alpha factor model parameter data provided by other countries and for obtaining indigenous CCF data for NPPs in Korea through Bayesian updating.

  2. System reliability analysis using dominant failure modes identified by selective searching technique

    International Nuclear Information System (INIS)

    Kim, Dong-Seok; Ok, Seung-Yong; Song, Junho; Koh, Hyun-Moo

    2013-01-01

    The failure of a redundant structural system is often described by innumerable system failure modes such as combinations or sequences of local failures. An efficient approach is proposed to identify dominant failure modes in the space of random variables, and then perform system reliability analysis to compute the system failure probability. To identify dominant failure modes in the decreasing order of their contributions to the system failure probability, a new simulation-based selective searching technique is developed using a genetic algorithm. The system failure probability is computed by a multi-scale matrix-based system reliability (MSR) method. Lower-scale MSR analyses evaluate the probabilities of the identified failure modes and their statistical dependence. A higher-scale MSR analysis evaluates the system failure probability based on the results of the lower-scale analyses. Three illustrative examples demonstrate the efficiency and accuracy of the approach through comparison with existing methods and Monte Carlo simulations. The results show that the proposed method skillfully identifies the dominant failure modes, including those neglected by existing approaches. The multi-scale MSR method accurately evaluates the system failure probability with statistical dependence fully considered. The decoupling between the failure mode identification and the system reliability evaluation allows for effective applications to larger structural systems

  3. Prediction of failure enthalpy and reliability of irradiated fuel rod under reactivity-initiated accidents by means of statistical approach

    International Nuclear Information System (INIS)

    Nam, Cheol; Choi, Byeong Kwon; Jeong, Yong Hwan; Jung, Youn Ho

    2001-01-01

    During the last decade, the failure behaviour of high-burnup fuel rods under RIA conditions has been an extensive concern since observations of fuel rod failures at low enthalpy. Great importance is placed on the failure prediction of fuel rods from the standpoint of licensing criteria and safety in extending burnup. To address the issue, a statistics-based methodology is introduced to predict the failure probability of irradiated fuel rods. Based on RIA simulation results in the literature, a failure enthalpy correlation for irradiated fuel rods is constructed as a function of oxide thickness, fuel burnup, and pulse width. From the failure enthalpy correlation, a single damage parameter, the equivalent enthalpy, is defined to reflect the effects of the three primary factors as well as the peak fuel enthalpy. Moreover, the failure distribution function with equivalent enthalpy is derived by applying a two-parameter Weibull statistical model. Using these equations, a sensitivity analysis is carried out to estimate the effects of burnup, corrosion, peak fuel enthalpy, pulse width and the cladding materials used.
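
    With the two Weibull parameters in hand, the failure distribution over the equivalent-enthalpy damage parameter reduces to a one-line CDF; the scale and shape values below are assumed for illustration, not the fitted values of the paper:

      import math

      def failure_probability(equivalent_enthalpy, eta, beta):
          """Two-parameter Weibull failure distribution over the single
          damage parameter (equivalent enthalpy, cal/g)."""
          return 1.0 - math.exp(-((equivalent_enthalpy / eta) ** beta))

      eta, beta = 150.0, 4.0   # assumed scale (cal/g) and shape
      for h in (60.0, 100.0, 140.0, 180.0):
          print(f"equivalent enthalpy {h:5.1f} cal/g -> "
                f"failure probability {failure_probability(h, eta, beta):.3f}")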

  4. Electric propulsion reliability: Statistical analysis of on-orbit anomalies and comparative analysis of electric versus chemical propulsion failure rates

    Science.gov (United States)

    Saleh, Joseph Homer; Geng, Fan; Ku, Michelle; Walker, Mitchell L. R.

    2017-10-01

    With a few hundred spacecraft launched to date with electric propulsion (EP), it is possible to conduct an epidemiological study of EP's on orbit reliability. The first objective of the present work was to undertake such a study and analyze EP's track record of on orbit anomalies and failures by different covariates. The second objective was to provide a comparative analysis of EP's failure rates with those of chemical propulsion. Satellite operators, manufacturers, and insurers will make reliability- and risk-informed decisions regarding the adoption and promotion of EP on board spacecraft. This work provides evidence-based support for such decisions. After a thorough data collection, 162 EP-equipped satellites launched between January 1997 and December 2015 were included in our dataset for analysis. Several statistical analyses were conducted, at the aggregate level and then with the data stratified by severity of the anomaly, by orbit type, and by EP technology. Mean Time To Anomaly (MTTA) and the distribution of the time to (minor/major) anomaly were investigated, as well as anomaly rates. The important findings in this work include the following: (1) Post-2005, EP's reliability has outperformed that of chemical propulsion; (2) Hall thrusters have robustly outperformed chemical propulsion, and they maintain a small but shrinking reliability advantage over gridded ion engines. Other results were also provided, for example the differentials in MTTA of minor and major anomalies for gridded ion engines and Hall thrusters. It was shown that: (3) Hall thrusters exhibit minor anomalies very early on orbit, which might be indicative of infant anomalies, and thus would benefit from better ground testing and acceptance procedures; (4) Strong evidence exists that EP anomalies (onset and likelihood) and orbit type are dependent, a dependence likely mediated by either the space environment or differences in thrusters duty cycles; (5) Gridded ion thrusters exhibit both

  5. Analysis of dependent failures in risk assessment and reliability evaluation

    International Nuclear Information System (INIS)

    Fleming, K.N.; Mosleh, A.; Kelley, A.P. Jr. (Gas-Cooled Reactors Associates, La Jolla, CA)

    1983-01-01

    The ability to estimate the risk of potential reactor accidents is largely determined by the ability to analyze statistically dependent multiple failures. The importance of dependent failures has been indicated in recent probabilistic risk assessment (PRA) studies as well as in reports of reactor operating experiences. This article highlights the importance of several different types of dependent failures from the perspective of the risk and reliability analyst and provides references to the methods and data available for their analysis. In addition to describing the current state of the art, some recent advances, pitfalls, misconceptions, and limitations of some approaches to dependent failure analysis are addressed. A summary is included of the discourse on this subject, which is presented in the Institute of Electrical and Electronics Engineers/American Nuclear Society PRA Procedures Guide

  6. Statistical evaluations concerning the failure behaviour of formed parts with superheated steam flow. Pt. 1

    International Nuclear Information System (INIS)

    Oude-Hengel, H.H.; Vorwerk, K.; Heuser, F.W.; Boesebeck, K.

    1976-01-01

    Statistical evaluations concerning the failure behaviour of formed parts with superheated-steam flow were carried out using data from the VdTUEV inventory and failure statistics. Owing to the great number of results, the findings will be published in two volumes. This first part describes and classifies the stock of data and makes preliminary quantitative statements on failure behaviour. More differentiated statements are made possible by including the operation time and the number of start-ups per failed part. On the basis of time-constant failure rates, some materials-specific statements are given. (orig./ORU) [de]

  7. PV System Component Fault and Failure Compilation and Analysis.

    Energy Technology Data Exchange (ETDEWEB)

    Klise, Geoffrey Taylor; Lavrova, Olga; Gooding, Renee Lynne

    2018-02-01

    This report describes data collection and analysis of solar photovoltaic (PV) equipment events, which consist of faults and failures that occur during the normal operation of a distributed PV system or PV power plant. We present summary statistics from locations where maintenance data is being collected at various intervals, as well as reliability statistics gathered from that data, consisting of fault/failure distributions and repair distributions for a wide range of PV equipment types.

  8. Probabilistic analysis of 'common mode failures'

    International Nuclear Information System (INIS)

    Easterling, R.G.

    1978-01-01

    Common mode failure is a topic of considerable interest in reliability and safety analyses of nuclear reactors. Common mode failures are often discussed in terms of examples: two systems fail simultaneously due to an external event such as an earthquake; two components in redundant channels fail because of a common manufacturing defect; two systems fail because a component common to both fails; the failure of one system increases the stress on other systems and they fail. The common thread running through these is a dependence of some sort--statistical or physical--among multiple failure events. However, the nature of the dependence is not the same in all these examples. An attempt is made to model situations, such as the above examples, which have been termed 'common mode failures'. In doing so, it is found that standard probability concepts and terms, such as statistically dependent and independent events, and conditional and unconditional probabilities, suffice. Thus, it is proposed that the term 'common mode failures' be dropped, at least from technical discussions of these problems. A corollary is that the complementary term, 'random failures', should also be dropped. The mathematical model presented may not cover all situations which have been termed 'common mode failures', but provides insight into the difficulty of obtaining estimates of the probabilities of these events.

  9. Statistical analysis of fuel failures in large break loss-of-coolant accident (LBLOCA) in EPR type nuclear power plant

    International Nuclear Information System (INIS)

    Arkoma, Asko; Hänninen, Markku; Rantamäki, Karin; Kurki, Joona; Hämäläinen, Anitta

    2015-01-01

    Highlights: • The number of failing fuel rods in a LB-LOCA in an EPR is evaluated. • 59 scenarios are simulated with the system code APROS. • 1000 rods per scenario are simulated with the fuel performance code FRAPTRAN-GENFLO. • All the rods in the reactor are simulated in the worst scenario. • Results suggest that the regulations set by the Finnish safety authority are met. - Abstract: In this paper, the number of failing fuel rods in a large break loss-of-coolant accident (LB-LOCA) in EPR-type nuclear power plant is evaluated using statistical methods. For this purpose, a statistical fuel failure analysis procedure has been developed. The developed method utilizes the results of nonparametric statistics, the Wilks’ formula in particular, and is based on the selection and variation of parameters that are important in accident conditions. The accident scenario is simulated with the coupled fuel performance – thermal hydraulics code FRAPTRAN-GENFLO using various parameter values and thermal hydraulic and power history boundary conditions between the simulations. The number of global scenarios is 59 (given by the Wilks’ formula), and 1000 rods are simulated in each scenario. The boundary conditions are obtained from a new statistical version of the system code APROS. As a result, in the worst global scenario, 1.2% of the simulated rods failed, and it can be concluded that the Finnish safety regulations are hereby met (max. 10% of the rods allowed to fail)
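
    The count of 59 scenarios follows from the first-order, one-sided Wilks formula: the smallest n with 1 - 0.95**n >= 0.95. A minimal check in Python:

      import math

      def wilks_sample_size(coverage=0.95, confidence=0.95):
          """Smallest n such that the maximum of n random runs is a
          one-sided coverage/confidence tolerance bound (first-order Wilks)."""
          return math.ceil(math.log(1.0 - confidence) / math.log(coverage))

      print(wilks_sample_size())   # -> 59, matching the abstract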

  10. Statistical analysis of fuel failures in large break loss-of-coolant accident (LBLOCA) in EPR type nuclear power plant

    Energy Technology Data Exchange (ETDEWEB)

    Arkoma, Asko, E-mail: asko.arkoma@vtt.fi; Hänninen, Markku; Rantamäki, Karin; Kurki, Joona; Hämäläinen, Anitta

    2015-04-15

    Highlights: • The number of failing fuel rods in a LB-LOCA in an EPR is evaluated. • 59 scenarios are simulated with the system code APROS. • 1000 rods per scenario are simulated with the fuel performance code FRAPTRAN-GENFLO. • All the rods in the reactor are simulated in the worst scenario. • Results suggest that the regulations set by the Finnish safety authority are met. - Abstract: In this paper, the number of failing fuel rods in a large break loss-of-coolant accident (LB-LOCA) in EPR-type nuclear power plant is evaluated using statistical methods. For this purpose, a statistical fuel failure analysis procedure has been developed. The developed method utilizes the results of nonparametric statistics, the Wilks’ formula in particular, and is based on the selection and variation of parameters that are important in accident conditions. The accident scenario is simulated with the coupled fuel performance – thermal hydraulics code FRAPTRAN-GENFLO using various parameter values and thermal hydraulic and power history boundary conditions between the simulations. The number of global scenarios is 59 (given by the Wilks’ formula), and 1000 rods are simulated in each scenario. The boundary conditions are obtained from a new statistical version of the system code APROS. As a result, in the worst global scenario, 1.2% of the simulated rods failed, and it can be concluded that the Finnish safety regulations are hereby met (max. 10% of the rods allowed to fail)

  11. Statistical analysis of field data for aircraft warranties

    Science.gov (United States)

    Lakey, Mary J.

    Air Force and Navy maintenance data collection systems were researched to determine their scientific applicability to the warranty process. New and unique algorithms were developed to extract failure distributions, which were then used to characterize how selected families of equipment typically fail. Families of similar equipment were identified in terms of function, technology and failure patterns. Statistical analyses and applications such as goodness-of-fit tests, maximum likelihood estimation and derivation of confidence intervals for the probability density function parameters were applied to characterize the distributions and their failure patterns. Statistical and reliability theory, with relevance to equipment design and operational failures, were also determining factors in characterizing the failure patterns of the equipment families. Inferences about the families with relevance to warranty needs were then made.

  12. Statistical analysis of failure time in stress corrosion cracking of fuel tube in light water reactor

    International Nuclear Information System (INIS)

    Hirao, Keiichi; Yamane, Toshimi; Minamino, Yoritoshi

    1991-01-01

    This report shows how the stress corrosion cracking life of fuel cladding tubes is evaluated by applying statistical techniques to lives examined by a few testing methods. The statistical distribution of the limiting values of constant-load stress corrosion cracking life, a statistical analysis based on a probabilistic interpretation of constant-load stress corrosion cracking life, and a statistical analysis of stress corrosion cracking life obtained by the slow strain rate test (SSRT) method are described. (K.I.)

  13. Nuclear reactor component populations, reliability data bases, and their relationship to failure rate estimation and uncertainty analysis

    International Nuclear Information System (INIS)

    Martz, H.F.; Beckman, R.J.

    1981-12-01

    Probabilistic risk analyses are used to assess the risks inherent in the operation of existing and proposed nuclear power reactors. In performing such risk analyses the failure rates of various components which are used in a variety of reactor systems must be estimated. These failure rate estimates serve as input to fault trees and event trees used in the analyses. Component failure rate estimation is often based on relevant field failure data from different reliability data sources such as LERs, NPRDS, and the In-Plant Data Program. Various statistical data analysis and estimation methods have been proposed over the years to provide the required estimates of the component failure rates. This report discusses the basis and extent to which statistical methods can be used to obtain component failure rate estimates. The report is expository in nature and focuses on the general philosophical basis for such statistical methods. Various terms and concepts are defined and illustrated by means of numerous simple examples

  14. Failure analysis: Status and future trends

    International Nuclear Information System (INIS)

    Anderson, R.E.; Soden, J.M.; Henderson, C.L.

    1995-01-01

    Failure analysis is a critical element in the integrated circuit manufacturing industry. This paper reviews the changing role of failure analysis and describes major techniques employed in the industry today. Several advanced failure analysis techniques that meet the challenges imposed by advancements in integrated circuit technology are described and their applications are discussed. Future trends in failure analysis needed to keep pace with the continuing advancements in integrated circuit technology are anticipated

  15. Reliability Analysis of Fatigue Failure of Cast Components for Wind Turbines

    Directory of Open Access Journals (Sweden)

    Hesam Mirzaei Rafsanjani

    2015-04-01

    Fatigue failure is one of the main failure modes for wind turbine drivetrain components made of cast iron. The wind turbine drivetrain consists of a variety of heavily loaded components, like the main shaft, the main bearings, the gearbox and the generator. The failure of each component will lead to substantial economic losses such as cost of lost energy production and cost of repairs. During the design lifetime, the drivetrain components are exposed to variable loads from winds and waves and other sources of loads that are uncertain and have to be modeled as stochastic variables. The types of loads are different for offshore and onshore wind turbines. Moreover, uncertainties about the fatigue strength play an important role in modeling and assessment of the reliability of the components. In this paper, a generic stochastic model for fatigue failure of cast iron components based on fatigue test data and a limit state equation for fatigue failure based on the SN-curve approach and Miner’s rule is presented. The statistical analysis of the fatigue data is performed using the Maximum Likelihood Method which also gives an estimate of the statistical uncertainties. Finally, illustrative examples are presented with reliability analyses depending on various stochastic models and partial safety factors.

  16. Failure analysis of a repairable system: The case study of a cam-driven reciprocating pump

    Science.gov (United States)

    Dudenhoeffer, Donald D.

    1994-09-01

    This thesis supplies a statistical and economic tool for analysis of the failure characteristics of one typical piece of equipment under evaluation: a cam-driven reciprocating pump used in the submarine's distillation system. Comprehensive statistical techniques and parametric modeling are employed to identify and quantify pump failure characteristics. Specific areas of attention include: the derivation of an optimal maximum replacement interval based on costs, an evaluation of the mission reliability for the pump as a function of pump age, and a calculation of the expected times between failures. The purpose of this analysis is to evaluate current maintenance practices of time-based replacement and examine the consequences of different replacement intervals in terms of costs and mission reliability. Tradeoffs exist between cost savings and system reliability that must be fully understood prior to making any policy decisions.
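
    The optimal replacement interval mentioned above is the classic age-replacement trade-off: minimize the long-run cost rate c(T) = (c_p*R(T) + c_f*F(T)) / (integral of R(t) from 0 to T). A sketch under an assumed Weibull wear-out life and illustrative costs, not the thesis's pump data:

      from scipy import stats, integrate, optimize

      shape, scale = 2.5, 4000.0   # assumed Weibull life (hours)
      c_p, c_f = 1.0, 10.0         # planned swap vs in-service failure cost

      def cost_rate(T):
          """Long-run cost per hour when replacing at age T or on failure."""
          F = stats.weibull_min.cdf(T, shape, scale=scale)
          cycle, _ = integrate.quad(
              lambda t: stats.weibull_min.sf(t, shape, scale=scale), 0.0, T)
          return (c_p * (1.0 - F) + c_f * F) / cycle

      res = optimize.minimize_scalar(cost_rate, bounds=(100.0, 20000.0),
                                     method="bounded")
      print(f"optimal replacement interval ~ {res.x:.0f} h, "
            f"cost rate ~ {res.fun:.5f} per h")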

  17. Electrical failure analysis for root-cause determination

    International Nuclear Information System (INIS)

    Riddle, J.

    1990-01-01

    This paper outlines a practical failure analysis sequence. Several technical definitions are required. A failure is defined as a component that was operating in a system where the system malfunctioned and the replacement of the device restored system functionality. The failure mode is the malfunctioning behavior of the device. The failure mechanism is the underlying cause or source of the failure mode. The failure mechanism is the root cause of the failure mode. The failure analysis procedure needs to be adequately refined to result in the determination of the cause of failure to the degree that corrective action or design changes will prevent recurrence of the failure mode or mechanism. An example of a root-cause determination analysis performed for a nuclear power industry customer serves to illustrate the analysis methodology

  18. Quantification of Solar Cell Failure Signatures Based on Statistical Analysis of Electroluminescence Images

    DEFF Research Database (Denmark)

    Spataru, Sergiu; Parikh, Harsh; Hacke, Peter

    2017-01-01

    We demonstrate a method to quantify the extent of solar cell cracks, shunting, or damaged cell interconnects, present in crystalline silicon photovoltaic (PV) modules by statistical analysis of the electroluminescence (EL) intensity distributions of individual cells within the module. From the EL intensity distributions (ELID) of each cell, we calculated summary statistics such as standard deviation, median, skewness and kurtosis, and analyzed how they correlate with the magnitude of the solar cell degradation. We found that the dispersion of the ELID increases with the size and severity

  19. Statistical analysis and planning of multihundred-watt impact tests

    International Nuclear Information System (INIS)

    Martz, H.F. Jr.; Waterman, M.S.

    1977-10-01

    Modular multihundred-watt (MHW) radioisotope thermoelectric generators (RTG's) are used as a power source for spacecraft. Due to possible environmental contamination by radioactive materials, numerous tests are required to determine and verify the safety of the RTG. There are results available from 27 fueled MHW impact tests regarding hoop failure, fingerprint failure, and fuel failure. Data from the 27 tests are statistically analyzed for relationships that exist between the test design variables and the failure types. Next, these relationships are used to develop a statistical procedure for planning and conducting either future MHW impact tests or similar tests on other RTG fuel sources. Finally, some conclusions are given

  20. Quantification of solar cell failure signatures based on statistical analysis of electroluminescence images

    DEFF Research Database (Denmark)

    Spataru, Sergiu; Parikh, Harsh; Benatto, Gisele Alves dos Reis

    2017-01-01

    We propose a method to identify and quantify the extent of solar cell cracks, shunting, or damaged cell interconnects, present in crystalline silicon photovoltaic (PV) modules by statistical analysis of the electroluminescence (EL) intensity distributions of individual cells within the module. From the EL intensity distributions (ELID) of each cell, we calculated summary statistics such as standard deviation, median, skewness and kurtosis, and analyzed how they correlate with the type of the solar cell degradation. We found that the dispersion of the ELID increases with the size and severity

  1. Statistical evaluation of failures and repairs of the V-1 measuring and control system

    International Nuclear Information System (INIS)

    Laurinec, R.; Korec, J.; Mitosinka, J.; Zarnovican, V.

    1984-01-01

    A failure record card system was introduced for evaluating the reliability of the measurement and control equipment of the V-1 nuclear power plant. The SPU-800 microcomputer system is used for recording data on magnetic tape and transmitting them to the central data processing department. The data are used for evaluating the reliability of components and circuits; the most failure-prone components are selected, and the causes of failures are evaluated, as are failure identification, repairs and the causes of outages. The system has provided monthly, annual and total assessment data since it was commissioned. The results of the statistical evaluation of failures are used for planning preventive maintenance and for determining optimal repair intervals. (E.S.)

  2. Some calculations of the failure statistics of coated fuel particles

    International Nuclear Information System (INIS)

    Martin, D.G.; Hobbs, J.E.

    1977-03-01

    Statistical variations of coated fuel particle parameters were considered in stress model calculations, and the resulting particle failure fraction versus burn-up was evaluated. Variations in the following parameters were considered simultaneously: the kernel diameter and porosity and the thicknesses of the buffer, seal, silicon carbide and inner and outer pyrocarbon layers, which were all assumed to be normally distributed, together with the silicon carbide fracture stress, which was assumed to follow a Weibull distribution. Two methods, based respectively on random sampling and on convolution of the variations, were employed and applied to particles manufactured by the Dragon Project and RFL Springfields. The convolution calculations proved the more satisfactory. In the present calculations, variations in the silicon carbide fracture stress caused the greatest spread in burn-up for a given change in failure fraction; kernel porosity is the next most important parameter. (author)
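
    The random-sampling variant of the two methods can be sketched as a Monte Carlo loop over particle parameters; the layer-thickness scatter, the Weibull fracture-stress parameters, and the toy stress model below are all assumptions standing in for the real pressure-vessel calculation:

      import numpy as np

      rng = np.random.default_rng(4)
      N = 100_000                                # sampled particles

      sic_thickness = rng.normal(35.0, 2.0, N)   # SiC thickness (um), assumed normal
      frac_stress = 500.0 * rng.weibull(6.0, N)  # fracture stress (MPa), assumed Weibull

      def sic_stress(burnup_fima, thickness_um):
          """Toy model: hoop stress grows with burnup, falls with layer thickness."""
          return 12.0 * burnup_fima / (thickness_um / 35.0)

      for burnup in (5.0, 10.0, 15.0, 20.0):     # % FIMA
          stress = sic_stress(burnup, sic_thickness)
          print(f"burnup {burnup:4.1f}% FIMA -> "
                f"failure fraction {np.mean(stress > frac_stress):.4f}")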

  3. Common cause failure analysis methodology for complex systems

    International Nuclear Information System (INIS)

    Wagner, D.P.; Cate, C.L.; Fussell, J.B.

    1977-01-01

    Common cause failure analysis, also called common mode failure analysis, is an integral part of a complex system reliability analysis. This paper extends existing methods of computer aided common cause failure analysis by allowing analysis of the complex systems often encountered in practice. The methods presented here aid in identifying potential common cause failures and also address quantitative common cause failure analysis

  4. Trend and pattern analysis of failures of main feedwater system components in United States commercial nuclear power plants

    International Nuclear Information System (INIS)

    Gentillon, C.D.; Meachum, T.R.; Brady, B.M.

    1987-01-01

    The goal of the trend and pattern analysis of MFW (main feedwater) component failure data is to identify component attributes that are associated with relatively high incidences of failure. Manufacturer, valve type, and pump rotational speed are examples of component attributes under study; in addition, the pattern of failures among NPP units is studied. A series of statistical methods is applied to identify trends and patterns in failures and trends in occurrences in time with regard to these component attributes or variables. This process is followed by an engineering evaluation of the statistical results. In the remainder of this paper, the characteristics of the NPRDS that facilitate its use in reliability and risk studies are highlighted, the analysis methods are briefly described, and the lessons learned thus far for improving MFW system availability and reliability are summarized (orig./GL)

  5. Failure mode analysis using state variables derived from fault trees with application

    International Nuclear Information System (INIS)

    Bartholomew, R.J.

    1982-01-01

    Fault Tree Analysis (FTA) is used extensively to assess both the qualitative and quantitative reliability of engineered nuclear power systems employing many subsystems and components. FTA is very useful, but the method is limited by its inability to account for failure mode rate-of-change interdependencies (coupling) of statistically independent failure modes. The state variable approach (using FTA-derived failure modes as states) overcomes these difficulties and is applied to the determination of the lifetime distribution function for a heat pipe-thermoelectric nuclear power subsystem. Analyses are made using both Monte Carlo and deterministic methods and compared with a Markov model of the same subsystem.

  6. Sensor Failure Detection of FASSIP System using Principal Component Analysis

    Science.gov (United States)

    Sudarno; Juarsa, Mulya; Santosa, Kussigit; Deswandri; Sunaryo, Geni Rina

    2018-02-01

    In the nuclear reactor accident at Fukushima Daiichi in Japan, the damage to the core and pressure vessel was caused by the failure of the active cooling system (the diesel generators were inundated by the tsunami). Research on passive cooling systems for nuclear power plants is therefore performed to improve the safety aspects of nuclear reactors. The FASSIP system (Passive System Simulation Facility) is an installation used to study the characteristics of passive cooling systems at nuclear power plants. The accuracy of the FASSIP system's sensor measurements is essential, because they are the basis for determining the characteristics of a passive cooling system. In this research, a sensor failure detection method for the FASSIP system is developed, so that indications of sensor failures can be detected early. The method used is Principal Component Analysis (PCA) to reduce the dimensionality of the sensor data, with the Squared Prediction Error (SPE) and Hotelling's T² statistic as criteria for detecting sensor failure indications. The results show that the PCA method is capable of detecting the occurrence of a failure at any sensor.
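
    A minimal sketch of the PCA-based detection scheme described above: a PCA model is fitted to measurements taken during normal operation, and a new sample is flagged when its SPE (Q statistic) or Hotelling's T² exceeds a threshold. The synthetic data, component count, and percentile-based thresholds are all illustrative assumptions, not the FASSIP configuration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative "normal operation" data: 500 samples from 8 correlated sensors.
latent = rng.normal(size=(500, 3))
mixing = rng.normal(size=(3, 8))
X = latent @ mixing + 0.1 * rng.normal(size=(500, 8))

# Fit PCA by hand: center, then keep the k leading principal directions.
mu = X.mean(axis=0)
Xc = X - mu
U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
k = 3
P = Vt[:k].T                       # loadings (8 x k)
lam = (s[:k] ** 2) / (len(X) - 1)  # variances of the retained components

def t2_spe(x):
    xc = x - mu
    t = xc @ P                     # scores in the PCA subspace
    t2 = np.sum(t**2 / lam)        # Hotelling's T^2
    resid = xc - t @ P.T           # part not explained by the model
    spe = resid @ resid            # squared prediction error (Q statistic)
    return t2, spe

# Thresholds taken as the empirical 99th percentile of the training data.
stats = np.array([t2_spe(x) for x in X])
t2_lim, spe_lim = np.percentile(stats, 99, axis=0)

faulty = X[0].copy()
faulty[4] += 5.0                   # inject a bias fault on sensor 4
t2, spe = t2_spe(faulty)
print(f"T2={t2:.1f} (limit {t2_lim:.1f}), SPE={spe:.3f} (limit {spe_lim:.3f})")
```

    A bias fault on one sensor breaks the learned correlation structure, so it shows up primarily in the SPE, which is why both statistics are monitored together.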

  7. An analysis of human maintenance failures of a nuclear power plant

    International Nuclear Information System (INIS)

    Pyy, P.

    2000-01-01

    In the report, a study of faults caused by maintenance activities is presented. The objective of the study was to draw conclusions on the unplanned effects of maintenance on nuclear power plant safety and system availability. More than 4400 maintenance history reports from the years 1992-1994 of Olkiluoto BWR nuclear power plant (NPP) were analysed together with the maintenance personnel. The human action induced faults were classified, e.g., according to their multiplicity and effects. This paper presents and discusses the results of a statistical analysis of the data. Instrumentation and electrical components appeared to be especially prone to human failures. Many human failures were found in safety related systems. Several failures also remained latent from outages to power operation. However, the safety significance of failures was generally small. Modifications were an important source of multiple human failures. Plant maintenance data is a good source of human reliability data and it should be used more in the future. (orig.)

  8. Isogeometric failure analysis

    NARCIS (Netherlands)

    Verhoosel, C.V.; Scott, M.A.; Borden, M.J.; Borst, de R.; Hughes, T.J.R.; Mueller-Hoeppe, D.; Loehnert, S.; Reese, S.

    2011-01-01

    Isogeometric analysis is a versatile tool for failure analysis. On the one hand, the excellent control over the inter-element continuity conditions enables a natural incorporation of continuum constitutive relations that incorporate higher-order strain gradients, as in gradient plasticity or damage.

  9. Analysis of cause-specific failure endpoints using simple proportions: an example from a randomized controlled clinical trial in early breast cancer

    International Nuclear Information System (INIS)

    Panzarella, Tony; Meakin, J. William

    1998-01-01

    Purpose: To describe a statistically valid method for analyzing cause-specific failure data based on simple proportions that is easy to understand and apply, and to outline the conditions under which its implementation is well suited. Methods and Materials: In the comparison of treatment groups, time to first failure (in any site) was analyzed first, followed by an analysis of the pattern of first failure, preferably at the latest complete follow-up time common to each group. Results: A retrospective analysis of time to contralateral breast cancer in 777 early breast cancer patients was undertaken. Patients previously treated by mastectomy plus radiation therapy to the chest wall and regional nodal areas were randomized to receive further radiation and prednisone (R+P), radiation alone (R), or no further treatment (NT). Those randomized to R+P had a statistically significantly delayed time to first failure compared to the group randomized to NT (p = 0.0008). Patients randomized to R also experienced a delayed time to first failure compared to NT, but the difference was not statistically significant (p = 0.14). At 14 years from the date of surgery (the latest common complete follow-up time) the distribution of first failures was statistically significantly different between R+P and NT (p = 0.005), but not between R and NT (p = 0.09). The contralateral breast cancer first failure rate at 14 years from surgery was 7.2% for NT, 4.6% for R, and 3.7% for R+P. The corresponding Kaplan-Meier estimates were 13.2%, 8.2%, and 5.4%, respectively. Conclusion: Analyzing cause-specific failure data using methods developed for survival endpoints is problematic. We encourage the use of the two-step analysis strategy described when, as in the example presented, competing causes of failure are not likely to be statistically independent, and when a treatment comparison at a single time-point is clinically relevant and feasible; that is, when all patients have complete follow-up to this point.
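
    The contrast drawn in this record, a naive Kaplan-Meier estimate overstating cause-specific failure under competing risks versus a simple proportion at a common follow-up time, can be reproduced numerically. The sketch below uses fabricated event times and rates; only the study's structure (first-failure analysis at a 14-year horizon) is taken from the abstract.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 777
t_breast = rng.exponential(80.0, n)    # years to contralateral breast cancer
t_other  = rng.exponential(25.0, n)    # years to a competing first failure
t_first  = np.minimum(t_breast, t_other)
is_breast = t_breast < t_other
horizon = 14.0                          # common complete follow-up time (years)

# Simple proportion: share of patients whose *first* failure by 14 years
# was contralateral breast cancer.
prop = np.mean((t_first <= horizon) & is_breast)

# Naive 1 - Kaplan-Meier for breast cancer, treating competing first failures
# as censoring; this removes them from the risk set and inflates the estimate.
times = np.sort(t_first[(t_first <= horizon) & is_breast])
surv = 1.0
for t in times:
    at_risk = np.sum(t_first >= t)
    surv *= 1.0 - 1.0 / at_risk

print(f"simple proportion at 14 y: {prop:.3f}")
print(f"1 - KM estimate at 14 y:   {1 - surv:.3f}")
```

    The 1-KM figure exceeds the simple proportion for exactly the reason the authors give: patients removed by a competing failure are treated as if they could still fail from the cause of interest.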

  10. Failure Analysis

    International Nuclear Information System (INIS)

    Iorio, A.F.; Crespi, J.C.

    1987-01-01

    After ten years of operation at the Atucha I Nuclear Power Station a gear belonging to a pressurized heavy water reactor refuelling machine, failed. The gear box was used to operate the inlet-outlet heavy-water valve of the machine. Visual examination of the gear device showed an absence of lubricant and that several gear teeth were broken at the root. Motion was transmitted with a speed-reducing device with controlled adjustable times in order to produce a proper fitness of the valve closure. The aim of this paper is to discuss the results of the gear failure analysis in order to recommend the proper solution to prevent further failures. (Author)

  11. Statistical analysis of absorptive laser damage in dielectric thin films

    International Nuclear Information System (INIS)

    Budgor, A.B.; Luria-Budgor, K.F.

    1978-01-01

    The Weibull distribution arises as an example of the theory of extreme events. It is commonly used to fit statistical data arising in the failure analysis of electrical components and in DC breakdown of materials. This distribution is employed to analyze time-to-damage and intensity-to-damage statistics obtained when irradiating thin-film-coated samples of SiO2, ZrO2, and Al2O3 with tightly focused laser beams. The data used were furnished by Milam. The fit to the data is excellent, and least-squares correlation coefficients greater than 0.9 are often obtained.
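
    A sketch of the kind of Weibull fit described: time-to-damage samples are ordered, median ranks are assigned, and a least-squares line through the linearized Weibull CDF yields the shape, the scale, and the correlation coefficient used to judge the fit. The sample data are invented, not Milam's measurements.

```python
import numpy as np

# Invented time-to-damage observations (e.g., shots to damage), sorted.
t = np.sort(np.array([12., 19., 25., 31., 38., 47., 55., 68., 84., 110.]))
n = len(t)

# Bernard's approximation to the median rank of the i-th ordered failure.
i = np.arange(1, n + 1)
F = (i - 0.3) / (n + 0.4)

# Weibull CDF linearization: ln(-ln(1-F)) = m*ln(t) - m*ln(eta).
x = np.log(t)
y = np.log(-np.log(1.0 - F))
m, c = np.polyfit(x, y, 1)          # slope = shape m, intercept = -m*ln(eta)
eta = np.exp(-c / m)
r = np.corrcoef(x, y)[0, 1]

print(f"shape m = {m:.2f}, scale eta = {eta:.1f}")
print(f"least-squares correlation coefficient r^2 = {r**2:.3f}")
```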

  12. The application of Petri nets to failure analysis

    International Nuclear Information System (INIS)

    Liu, T.S.; Chiou, S.B.

    1997-01-01

    Unlike the technique of fault tree analysis that has been widely applied to system failure analysis in reliability engineering, this study presents a Petri net approach to failure analysis. It is essentially a graphical method for describing relations between conditions and events. The use of Petri nets in failure analysis makes it possible to replace logic gate functions in fault trees, to obtain minimal cut sets efficiently, and to absorb models. It is demonstrated that Petri nets are more efficient than fault trees for failure analysis. In addition, this study devises an alternative, a trapezoidal graph method, in order to account for failure scenarios. Examples validate this novel method in dealing with failure analysis.

  13. Statistical analysis of operating efficiency and failures of a medical linear accelerator for ten years

    International Nuclear Information System (INIS)

    Ju, Sang Gyu; Huh, Seung Jae; Han, Young Yih

    2005-01-01

    To improve the management of a medical linear accelerator, the records of operational failures of a Varian CL2100C over a ten-year period were retrospectively analyzed. The failures were classified according to the functional subunits involved, with each class rated into one of three levels depending on the operational conditions. The relationships between the failure rate and working ratio and between the failure rate and outside temperature were investigated. In addition, the average lifetime of the main parts and the operating efficiency over the last 4 years were analyzed. Among the recorded failures (587 in total), the most frequent failures were observed in the parts related to the collimation system, including the monitor chamber, which accounted for 20% of all failures. With regard to the operational conditions, second-level failures, which temporarily interrupted treatments, were the most frequent. Third-level failures, which interrupted treatment for more than several hours, were mostly caused by the accelerating subunit. The number of failures increased with the number of treatments and operating time. The average lifetimes of the klystron and thyratron became shorter as the working ratio increased, and were 42 and 83% of the expected values, respectively. The operating efficiency was maintained at 95% or higher, although this value slightly decreased over the period. There was no significant correlation between the number of failures and the outside temperature. Maintaining detailed records of equipment problems and failures over a long period can provide good knowledge of equipment function as well as the capability to predict future failures. More rigorous equipment maintenance is required for old medical linear accelerators, to avoid serious failures in advance and to improve the quality of patient treatment.

  14. The statistical analysis of failure of a MEVATRON77 DX67 linear accelerator over a ten year period

    International Nuclear Information System (INIS)

    Aoyama, Hideki; Inamura, Keiji; Tahara, Seiji; Uno, Hirofumi; Kadohisa, Shigefumi; Azuma, Yoshiharu; Nakagiri, Yoshitada; Hiraki, Yoshio

    2003-01-01

    A linear accelerator (linac) plays a leading role in radiation therapy. A linac consists of complicated main parts and systems, and highly accurate operational procedures must be maintained. Operational failures occur for various reasons. In this report, the failure occurrences of one linac over a ten-year period were recorded and analyzed. The subject model was a MEVATRON77 DX67 (Siemens, Inc.). The failure rate for each system, the classification of the contents of failure, the operating situation at the time of failure, and the average service life of the main parts were totaled. Moreover, the relation between the number of therapies that patients received (operating efficiency) and the failure rate, and the relation between the environment (temperature and humidity) and the failure rate attributed to other systems, were analyzed. Irradiation interruptions, as well as situations where treatment could not begin at all, were included in the total number of failure cases. The cases of failure were classified into three kinds: irradiation possible, irradiation capacity decreased, and irradiation impossible. The total number of failure cases over ten years and eight months was 1,036, and the number of cases/rate for each kind was: irradiation possible, 49/4.7%; irradiation capacity decreased, 919/88.7%; and irradiation impossible, 68/6.6%. In the classification according to system, the acceleration section accounted for 59.0% and the pulse section for 23.2% of the total number of failure cases. An operating efficiency of 95% or higher was maintained every year. The average lives of the thyratron, the klystron, and the radio frequency (RF) driver were 4,886 hours, 17,383 hours, and 5,924 hours, respectively. Moreover, although the relation between the number of therapies performed (or operating time) and the number of failures was examined for each main machine part, no clear association was found.

  15. The interaction of NDE and failure analysis

    International Nuclear Information System (INIS)

    Nichols, R.W.

    1988-01-01

    This paper deals with the use of Non-Destructive Examination (NDE) and failure analysis for the assessment of structural integrity. Failure analysis makes it possible to know whether NDE is required, and can help to direct NDE in the most useful directions by identifying the areas where it is most important that defects be absent. Failure analysis can also help the operator decide which NDE method is best suited to the component studied, and it provides detailed specifications for that NDE method. The interaction between failure analysis and NDE is then described. (TEC)

  16. The interaction of NDE and failure analysis

    Energy Technology Data Exchange (ETDEWEB)

    Nichols, R W

    1988-12-31

    This paper deals with the use of Non-Destructive Examination (NDE) and failure analysis for the assessment of structural integrity. Failure analysis makes it possible to know whether NDE is required, and can help to direct NDE in the most useful directions by identifying the areas where it is most important that defects be absent. Failure analysis can also help the operator decide which NDE method is best suited to the component studied, and it provides detailed specifications for that NDE method. The interaction between failure analysis and NDE is then described. (TEC).

  17. Analysis of failures in concrete containments

    International Nuclear Information System (INIS)

    Moreno-Gonzalez, A.

    1989-09-01

    The function of the containment, in an accident event, is to avoid the release of radioactive substances into the surroundings. Containment failure, therefore, is defined as the appearance of leak paths to the external environment. These leak paths may appear either as a result of loss of leaktightness due to degradation of design conditions or as structural failure with containment material break. This document is a survey of the state of the art of containment failure analysis. It gives a detailed description of all failure mechanisms, indicating all the possible failure modes and their causes, from failure resulting from degradation of the materials to structural failure and liner break failure. Following the description of failure modes, possible failure criteria are identified, with special emphasis on structural failure criteria. These criteria have been obtained not only from existing codes but also from the latest experimental results. A chapter has been dedicated exclusively to failure criteria in conventional structures, for the purpose of evaluating their possible application to the case of containment. As the structural behaviour of the containment building is very complex, it is not possible to define failure through a single parameter. It is therefore advisable to define a methodology for containment failure analysis which can be applied to a particular containment. This methodology should include the prevailing load and material conditions together with the behaviour of complex conditions such as the liner-anchorage-cracked concrete interaction.

  18. Analysis of risk factors for cluster behavior of dental implant failures.

    Science.gov (United States)

    Chrcanovic, Bruno Ramos; Kisch, Jenö; Albrektsson, Tomas; Wennerberg, Ann

    2017-08-01

    Some studies have indicated that implant failures are commonly concentrated in a few patients. The aim was to identify and analyze cluster behavior of dental implant failures among subjects of a retrospective study. This retrospective study included only patients who received at least three implants. Patients presenting at least three implant failures were classified as presenting cluster behavior. Univariate and multivariate logistic regression models and generalized estimating equations analysis evaluated the effect of explanatory variables on the cluster behavior. There were 1406 patients with three or more implants (8337 implants, 592 failures). Sixty-seven (4.77%) patients presented cluster behavior, accounting for 56.8% of all implant failures. The intake of antidepressants and bruxism were identified as potential negative factors exerting a statistically significant influence on cluster behavior at the patient level. The negative factors at the implant level were turned implants, short implants, poor bone quality, patient age, the intake of medicaments to reduce gastric acid production, smoking, and bruxism. A cluster pattern among patients with implant failure is highly probable. Factors of interest as predictors for implant failures could be a number of systemic and local factors, although a direct causal relationship cannot be ascertained. © 2017 Wiley Periodicals, Inc.

  19. FRAC (failure rate analysis code): a computer program for analysis of variance of failure rates. An application user's guide

    International Nuclear Information System (INIS)

    Martz, H.F.; Beckman, R.J.; McInteer, C.R.

    1982-03-01

    Probabilistic risk assessments (PRAs) require estimates of the failure rates of various components whose failure modes appear in the event and fault trees used to quantify accident sequences. Several reliability data bases have been designed to provide the reliability data used in constructing these estimates. In the nuclear industry, the Nuclear Plant Reliability Data System (NPRDS) and the In-Plant Reliability Data System (IPRDS), among others, were designed for this purpose. An important characteristic of such data bases is the selection and identification of numerous factors used to classify each component that is reported and the subsequent failures of each component. However, the presence of such factors often complicates the analysis of reliability data, in the sense that it is inappropriate to group (that is, pool) data for those combinations of factors that yield significantly different failure rate values. These types of data can be analyzed by analysis of variance. FRAC (Failure Rate Analysis Code) is a computer code that performs an analysis of variance of failure rates. In addition, FRAC provides failure rate estimates.

  20. A pragmatic approach to estimate alpha factors for common cause failure analysis

    International Nuclear Information System (INIS)

    Hassija, Varun; Senthil Kumar, C.; Velusamy, K.

    2014-01-01

    Highlights: • Estimation of coefficients in the alpha factor model for common cause analysis. • A derivation of plant-specific alpha factors is demonstrated. • We examine the sensitivity of the common cause contribution to total system failure. • We compare the beta factor and alpha factor models for various redundant configurations. • The use of alpha factors is preferable, especially for large redundant systems. - Abstract: Most modern technological systems are deployed with high redundancy, but they still fail, mainly on account of common cause failures (CCF). Various models such as Beta Factor, Multiple Greek Letter, Binomial Failure Rate and Alpha Factor exist for estimating the risk from common cause failures. Among them, the alpha factor model is considered most suitable for highly redundant systems, as it arrives at common cause failure probabilities from a set of ratios of failures and the total component failure probability Q_T. In the present study, the alpha factor model is applied to the assessment of CCF of safety systems deployed at two nuclear power plants. A method to overcome the difficulties in estimating the coefficients (the alpha factors) of the model, the importance of deriving plant-specific alpha factors, and the sensitivity of the common cause contribution to the total system failure probability with respect to the hazard imposed by various CCF events are highlighted. An approach described in NUREG/CR-5500 is extended in this study to provide more explicit guidance for a statistical approach to derive plant-specific coefficients for CCF analysis, especially for highly redundant systems. The procedure is expected to aid regulators in independent safety assessment.
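
    A sketch of the alpha factor bookkeeping for a redundancy group of m components, using the commonly quoted estimator alpha_k = n_k / sum_j n_j and the standard non-staggered-testing relation between alpha factors and common cause basic-event probabilities (as given in NUREG/CR-5485-style guidance, not necessarily this paper's exact formulation). The event counts and Q_T below are invented.

```python
from math import comb

# Invented failure-event counts n_k: number of events involving exactly
# k of the m redundant components (here m = 4).
m = 4
n = {1: 120, 2: 9, 3: 3, 4: 2}
Q_T = 1.0e-3            # total failure probability of one component (assumed)

total = sum(n.values())
alpha = {k: n_k / total for k, n_k in n.items()}           # alpha_k estimates
alpha_t = sum(k * a for k, a in alpha.items())             # sum of k*alpha_k

# Probability of a basic event failing exactly k specific components
# (non-staggered testing form): Q_k = k / C(m-1, k-1) * alpha_k / alpha_t * Q_T
for k in range(1, m + 1):
    Q_k = k / comb(m - 1, k - 1) * alpha[k] / alpha_t * Q_T
    print(f"alpha_{k} = {alpha[k]:.4f}, Q_{k} = {Q_k:.2e}")
```

    Deriving plant-specific counts n_k, rather than reusing generic ones, is exactly the step the record argues changes the estimated CCF contribution.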

  1. Failure analysis for ultrasound machines in a radiology department after implementation of predictive maintenance method

    Directory of Open Access Journals (Sweden)

    Greg Chu

    2018-01-01

    Objective: The objective of the study was to perform quantitative failure and fault analysis of the diagnostic ultrasound (US) scanners in a radiology department after the implementation of the predictive maintenance (PdM) method; to study the reduction trend of machine failures; to understand the machine operating parameters affecting failure; and to further optimize the method to maximize the machines' clinical service time. Materials and Methods: The PdM method has been applied to the 5 US machines since 2013. Log books were used to record machine failures and their root causes together with the time spent on repair, all of which were retrieved, categorized, and analyzed for the period between 2013 and 2016. Results: A total of 108 failures occurred in these 5 US machines during the 4-year study period. The average number of failures per month for all these machines was 2.4. Failure analysis showed that 33 cases (30.5%) were due to software, 44 cases (40.7%) due to hardware, and 31 cases (28.7%) due to the US probes. There was a statistically significant negative correlation between the time spent on regular quality assurance (QA) by hospital physicists and the time spent on faulty part replacement over the study period (P = 0.007). However, there was no statistically significant correlation between regular QA time and total yearly breakdown cases (P = 0.12), although a decreasing trend in the yearly total breakdowns was observed. Conclusion: There has been a significant improvement in the failure performance of the US machines, attributed to the concerted effort of sonographers and physicists in our department in practising the PdM method: system component repair time has been reduced, and a decreasing trend in the number of system breakdowns has been observed.

  2. Dependent failure analysis of NPP data bases

    International Nuclear Information System (INIS)

    Cooper, S.E.; Lofgren, E.V.; Samanta, P.K.; Wong Seemeng

    1993-01-01

    A technical approach for analyzing plant-specific data bases for vulnerabilities to dependent failures has been developed and applied. Since the focus of this work is to aid in the formulation of defenses to dependent failures, rather than to quantify dependent failure probabilities, the approach of this analysis is critically different. For instance, the determination of component failure dependencies has been based upon identical failure mechanisms related to component piecepart failures, rather than failure modes. Also, component failures involving all types of component function loss (e.g., catastrophic, degraded, incipient) are equally important to the predictive purposes of dependent failure defense development. Consequently, dependent component failures are identified with a different dependent failure definition which uses a component failure mechanism categorization scheme in this study. In this context, clusters of component failures which satisfy the revised dependent failure definition are termed common failure mechanism (CFM) events. Motor-operated valves (MOVs) in two nuclear power plant data bases have been analyzed with this approach. The analysis results include seven different failure mechanism categories; identified potential CFM events; an assessment of the risk-significance of the potential CFM events using existing probabilistic risk assessments (PRAs); and postulated defenses to the identified potential CFM events. (orig.)

  3. 14 CFR 417.224 - Probability of failure analysis.

    Science.gov (United States)

    2010-01-01

    14 CFR Aeronautics and Space, Department of Transportation, Licensing, Launch Safety, Flight Safety Analysis, § 417.224 Probability of failure analysis: ... must account for launch vehicle failure probability in a consistent manner. A launch vehicle failure...

  4. Overview and statistical failure analyses of the electrical insulation system for the SSC long dipole magnets from an industrialization point of view

    International Nuclear Information System (INIS)

    Roach, J.F.

    1992-01-01

    The electrical insulation system of the SSC long dipole magnets is reviewed and potential dielectric failure modes are discussed. Electrical insulation fabrication and assembly issues with respect to rate-production manufacturability are addressed. The automation required for rate assembly of electrical insulation components will require critical online visual and dielectric screening tests to ensure production quality. Storage and assembly areas must be designed to prevent foreign particles from becoming entrapped in the insulation during critical coil winding, molding, and collaring operations. All hand assembly procedures involving dielectrics must be performed with rigorous attention to their impact on insulation integrity. Individual dipole magnets must have a sufficiently low probability of electrical insulation failure under all normal and fault-mode voltage conditions that the series of magnets in the SSC rings has an acceptable Mean Time Between Failure (MTBF) with respect to dielectric failure events. Statistical models appropriate for large electrical system breakdown failure analysis are applied to the SSC magnet rings. The MTBF of the SSC system is related to the failure data base for individual dipole magnet samples.

  5. Comparative analysis of positive and negative attitudes toward statistics

    Science.gov (United States)

    Ghulami, Hassan Rahnaward; Ab Hamid, Mohd Rashid; Zakaria, Roslinazairimah

    2015-02-01

    Many statistics lecturers and statistics education researchers are interested to know how their students perceive statistics during the statistics course. In a statistics course, a positive attitude toward statistics is vital because it encourages students to take an interest in the course and to master its core content. Students who have negative attitudes toward statistics, by contrast, may feel depressed, especially in group assignments, are at risk of failure, are often highly emotional, and struggle to move forward. Therefore, this study investigates students' attitudes towards learning statistics. Six latent constructs were used to measure students' attitudes toward learning statistics: affect, cognitive competence, value, difficulty, interest, and effort. The questionnaire was adopted and adapted from the reliable and validated instrument Survey of Attitudes Toward Statistics (SATS). This study was conducted among undergraduate engineering students at Universiti Malaysia Pahang (UMP). The respondents consisted of students taking the applied statistics course from different faculties. From the analysis, it was found that the questionnaire is acceptable, and the relationships among the constructs were proposed and investigated. The students show full effort to master the statistics course, find the course enjoyable, are confident in their intellectual capacity, and hold more positive than negative attitudes towards statistics learning. In conclusion, positive attitudes were mostly exhibited in terms of the affect, cognitive competence, value, interest and effort constructs, while negative attitudes were mostly exhibited in the difficulty construct.

  6. Statistical data analysis using SAS intermediate statistical methods

    CERN Document Server

    Marasinghe, Mervyn G

    2018-01-01

    The aim of this textbook (previously titled SAS for Data Analytics) is to teach the use of SAS for statistical analysis of data for advanced undergraduate and graduate students in statistics, data science, and disciplines involving analyzing data. The book begins with an introduction beyond the basics of SAS, illustrated with non-trivial, real-world, worked examples. It proceeds to SAS programming and applications, SAS graphics, statistical analysis of regression models, analysis of variance models, analysis of variance with random and mixed effects models, and then takes the discussion beyond regression and analysis of variance to conclude. Pedagogically, the authors introduce theory and methodological basis topic by topic, present a problem as an application, followed by a SAS analysis of the data provided and a discussion of results. The text focuses on applied statistical problems and methods. Key features include: end of chapter exercises, downloadable SAS code and data sets, and advanced material suitab...

  7. Bruxism and dental implant failures: a multilevel mixed effects parametric survival analysis approach.

    Science.gov (United States)

    Chrcanovic, B R; Kisch, J; Albrektsson, T; Wennerberg, A

    2016-11-01

    Recent studies have suggested that the insertion of dental implants in patients diagnosed with bruxism negatively affects implant failure rates. The aim of the present study was to investigate the association between bruxism and the risk of dental implant failure. This retrospective study is based on 2670 patients who received 10 096 implants at one specialist clinic. Implant- and patient-related data were collected. Descriptive statistics were used to describe the patients and implants. Multilevel mixed effects parametric survival analysis was used to test the association between bruxism and risk of implant failure, adjusting for several potential confounders. Criteria from a recent international consensus (Lobbezoo et al., J Oral Rehabil, 40, 2013, 2) and from the International Classification of Sleep Disorders (International classification of sleep disorders, revised: diagnostic and coding manual, American Academy of Sleep Medicine, Chicago, 2014) were used to define and diagnose the condition. The number of implants with information available for all variables totalled 3549, placed in 994 patients, with 179 implants reported as failures. The implant failure rates were 13·0% (24/185) for bruxers and 4·6% (155/3364) for non-bruxers. Bruxism was a statistically significant risk factor for implant failure (HR 3·396; 95% CI 1·314, 8·777; P = 0·012), as were implant length, implant diameter, implant surface, bone quantity D in relation to quantity A, bone quality 4 in relation to quality 1 (Lekholm and Zarb classification), smoking and the intake of proton pump inhibitors. It is suggested that bruxism may be associated with an increased risk of dental implant failure. © 2016 John Wiley & Sons Ltd.

  8. Failure Analysis Of Industrial Boiler Pipe

    International Nuclear Information System (INIS)

    Natsir, Muhammad; Soedardjo, B.; Arhatari, Dewi; Andryansyah; Haryanto, Mudi; Triyadi, Ari

    2000-01-01

    Failure analysis of an industrial boiler pipe has been done. The tested pipe material is carbon steel SA 178 Grade A, per the specification data obtained from the fertilizer company. The steps in the analysis were: collection of operational background and material specification, visual inspection, dye penetrant testing, radiography testing, chemical composition testing, hardness testing, and metallography. The tests and analysis show that the pipe failure was caused by erosion, and that the welds showed porosity and incomplete penetration. The main cause of the pipe failure is erosion due to cavitation, which decreases the pipe wall thickness; rupture can then occur as the wall thins. To address this problem, the pipe will be replaced with a new pipe.

  9. Combinatorial analysis of systems with competing failures subject to failure isolation and propagation effects

    International Nuclear Information System (INIS)

    Xing Liudong; Levitin, Gregory

    2010-01-01

    This paper considers the reliability analysis of binary-state systems, subject to propagated failures with global effect, and failure isolation phenomena. Propagated failures with global effect are common-cause failures originated from a component of a system/subsystem causing the failure of the entire system/subsystem. Failure isolation occurs when the failure of one component (referred to as a trigger component) causes other components (referred to as dependent components) within the same system to become isolated from the system. On the one hand, failure isolation makes the isolated dependent components unusable; on the other hand, it prevents the propagation of failures originated from those dependent components. However, the failure isolation effect does not exist if failures originated in the dependent components already propagate globally before the trigger component fails. In other words, there exists a competition in the time domain between the failure of the trigger component that causes failure isolation and propagated failures originated from the dependent components. This paper presents a combinatorial method for the reliability analysis of systems subject to such competing propagated failures and failure isolation effect. Based on the total probability theorem, the proposed method is analytical, exact, and has no limitation on the type of time-to-failure distributions for the system components. An illustrative example is given to demonstrate the basics and advantages of the proposed method.
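
    Under the simplest distributional assumption (exponential failure times, which the paper's combinatorial method does not actually require), the time-domain race between the trigger failure and a PFGE from a dependent component can be written down directly and checked by simulation. All rates below are invented.

```python
import numpy as np

# Invented constant failure rates (per hour).
lam_trigger = 2.0e-4   # trigger component whose failure isolates the dependents
lam_pfge    = 5.0e-5   # PFGE originating from a dependent component
t = 10_000.0           # mission time (hours)

# Probability that a dependent's PFGE occurs, and occurs BEFORE the trigger
# failure, within (0, t]: only then does the failure propagate globally.
lam = lam_trigger + lam_pfge
p_propagate = lam_pfge / lam * (1.0 - np.exp(-lam * t))
print(f"closed form: P(global propagation) = {p_propagate:.4f}")

# Monte Carlo check of the same race in the time domain.
rng = np.random.default_rng(7)
n = 1_000_000
t_trig = rng.exponential(1.0 / lam_trigger, n)
t_prop = rng.exponential(1.0 / lam_pfge, n)
p_sim = np.mean((t_prop < t_trig) & (t_prop <= t))
print(f"simulation:  P(global propagation) = {p_sim:.4f}")
```

    Conditioning on which event wins the race is one application of the total probability theorem on which the proposed combinatorial method rests.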

  10. Failure analysis and failure prevention in electric power systems

    International Nuclear Information System (INIS)

    Rau, C.A. Jr.; Becker, D.G.; Besuner, P.M.; Cipolla, R.C.; Egan, G.R.; Gupta, P.; Johnson, D.P.; Omry, U.; Tetelman, A.S.; Rettig, T.W.; Peters, D.C.

    1977-01-01

    New methods have been developed and applied to better quantify and increase the reliability, safety, and availability of electric power plants. Present and potential problem areas have been identified both by development of an improved computerized data base of malfunctions in nuclear power plants and by detailed metallurgical and mechanical failure analyses of selected problems. Significant advances in the accuracy and speed of structural analyses have been made through development and application of the boundary integral equation and influence function methods of stress and fracture mechanics analyses. The currently specified flaw evaluation procedures of the ASME Boiler and Pressure Vessel Code have been computerized. Results obtained from these procedures for evaluation of specific in-service inspection indications have been compared with results obtained utilizing the improved analytical methods. Mathematical methods have also been developed to describe and analyze the statistical variations in materials properties and in component loading, and uncertainties in the flaw size that might be passed by quality assurance systems. These new methods have been combined to develop accurate failure rate predictions based upon probabilistic fracture mechanics. Improved failure prevention strategies have been formulated by combining probabilistic fracture mechanics and cost optimization techniques. The approach has been demonstrated by optimizing the nondestructive inspection level with regard to both reliability and cost. (Auth.)

  11. submitter Methodologies for the Statistical Analysis of Memory Response to Radiation

    CERN Document Server

    Bosser, Alexandre L; Tsiligiannis, Georgios; Frost, Christopher D; Zadeh, Ali; Jaatinen, Jukka; Javanainen, Arto; Puchner, Helmut; Saigne, Frederic; Virtanen, Ari; Wrobel, Frederic; Dilillo, Luigi

    2016-01-01

    Methodologies are proposed for in-depth statistical analysis of Single Event Upset data. The motivation for using these methodologies is to obtain precise information on the intrinsic defects and weaknesses of the tested devices, and to gain insight on their failure mechanisms, at no additional cost. The case study is a 65 nm SRAM irradiated with neutrons, protons and heavy ions. This publication is an extended version of a previous study [1].

  12. Methods for dependency estimation and system unavailability evaluation based on failure data statistics

    International Nuclear Information System (INIS)

    Azarm, M.A.; Hsu, F.; Martinez-Guridi, G.; Vesely, W.E.

    1993-07-01

    This report introduces a new perspective on the basic concept of dependent failures, where the definition of dependency is based on clustering in the failure times of similar components. This perspective has two significant implications: first, it relaxes the conventional assumption that dependent failures must be simultaneous and result from a severe shock; second, it allows the analyst to use all the failures in a time continuum to estimate the potential for multiple failures in a window of time (e.g., a test interval), therefore arriving at a more accurate value for system unavailability. In addition, the models developed here provide a method for plant-specific analysis of dependency, reflecting the plant-specific maintenance practices that reduce or increase the contribution of dependent failures to system unavailability. The proposed methodology can be used for screening analysis of failure data to estimate the fraction of dependent failures among the failures. In addition, the proposed method can evaluate the impact of the observed dependency on system unavailability and plant risk. The formulations derived in this report have undergone various levels of validation through computer simulation studies and pilot applications. The pilot applications of these methodologies showed that the contribution of dependent failures of diesel generators in one plant was negligible, while in another plant it was quite significant. They also showed that in the plant with a significant contribution of dependency to Emergency Power System (EPS) unavailability, the contribution changed with time. Similar findings were reported for the Containment Fan Cooler breakers. Drawing such conclusions about system performance would not have been possible with any other reported dependency methodologies.

  13. Failure diagnosis and fault tree analysis

    International Nuclear Information System (INIS)

    Weber, G.

    1982-07-01

    In this report a methodology of failure diagnosis for complex systems is presented. Systems which can be represented by fault trees are considered. This methodology is based on switching algebra, failure diagnosis of digital circuits, and fault tree analysis. Relations between these disciplines are shown; these relations arise from the Boolean algebra and Boolean functions used throughout. It is shown on this basis that techniques of failure diagnosis and fault tree analysis are useful for solving the following problems: (1) describing an efficient search for all failed components if the system has failed; (2) describing an efficient search for all states which are close to a system failure while the system is still operating. The first technique improves availability, the second reliability and safety. For these problems, the relation to methods of failure diagnosis for combinational circuits is required. Moreover, the techniques are demonstrated for a number of systems which can be represented by fault trees. (orig./RW) [de]
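
    The Boolean connection invoked above can be made concrete with a small top-down cut-set computation: the fault tree is expanded gate by gate into a disjunction of AND-terms, and non-minimal terms are absorbed. The tree below is a made-up example, and the routine is a sketch of the classic MOCUS-style expansion, not the report's own algorithm.

```python
from itertools import product

# A made-up fault tree: gates map to ('AND'|'OR', [inputs]); leaves are events.
tree = {
    "TOP": ("OR",  ["G1", "C"]),
    "G1":  ("AND", ["A", "G2"]),
    "G2":  ("OR",  ["B", "C"]),
}

def cut_sets(node):
    """Return the cut sets of `node` as a list of frozensets of basic events."""
    if node not in tree:                       # basic event
        return [frozenset([node])]
    op, inputs = tree[node]
    child = [cut_sets(i) for i in inputs]
    if op == "OR":                             # union of the children's cut sets
        return [cs for sets in child for cs in sets]
    # AND: every combination of one cut set per child, merged together.
    return [frozenset().union(*combo) for combo in product(*child)]

def minimize(sets):
    """Absorption: drop any cut set that strictly contains another cut set."""
    return [s for s in sets if not any(o < s for o in sets)]

for cs in minimize(cut_sets("TOP")):
    print(sorted(cs))   # -> ['A', 'B'] and ['C']
```

    The minimal cut sets are exactly the prime implicants of the top-event Boolean function, which is the bridge to digital-circuit diagnosis the report describes.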

  14. Failure and damage analysis of advanced materials

    CERN Document Server

    Sadowski, Tomasz

    2015-01-01

    The papers in this volume present basic concepts and new developments in failure and damage analysis with focus on advanced materials such as composites, laminates, sandwiches and foams, and also new metallic materials. Starting from some mathematical foundations (limit surfaces, symmetry considerations, invariants) new experimental results and their analysis are shown. Finally, new concepts for failure prediction and analysis will be introduced and discussed as well as new methods of failure and damage prediction for advanced metallic and non-metallic materials. Based on experimental results the traditional methods will be revised.

  15. Failure analysis of real-time systems

    International Nuclear Information System (INIS)

    Jalashgar, A.; Stoelen, K.

    1998-01-01

    This paper highlights essential aspects of real-time software systems that are strongly related to failures and their course of propagation. The significant influence of means-oriented and goal-oriented system views on describing, understanding, and analysing those aspects is elaborated. The importance of performing failure analysis prior to reliability analysis of real-time systems is equally addressed. Problems of software reliability growth models that take the properties of such systems into account are discussed. Finally, the paper presents a preliminary study of a goal-oriented approach to model the static and dynamic characteristics of real-time systems, so that the corresponding analysis can be based on a more descriptive and informative picture of failures, their effects, and the possibility of their occurrence. (author)

  16. Cube or block. Statistical analysis, historical review, failure mode and behaviour; Cubo o bloque. Ajuste estadistico, analisis historico, modo de fallo y comportamiento

    Energy Technology Data Exchange (ETDEWEB)

    Negro, V.; Varela, O.; Campo, J. M. del; Lopez Gutierrez, J. S.

    2010-07-01

    Many different concrete shapes have been developed as armour units for rubble-mound breakwaters. Nearly all are mass concrete constructions and can be classified as randomly placed or regular-pattern placed. The majority of artificial armour units are placed in two layers and are massive; they are intended to function in a similar way to natural rock (cubes, blocks, antifer cubes, ...). More complex armour units were designed to achieve greater stability by obtaining a high degree of interlock (dolosse, accropode, Xbloc, core-loc, ...). Finally, the third group comprises the regular-pattern placed units with a greater percentage of voids, giving stronger dissipation of the heat of cement hydration (cob, shed, hollow cubes, ...). This research deals with the comparison between two massive concrete units, cubes and blocks, and the analysis of their geometry, porosity, construction process and failure mode. The first stage is the statistical analysis, whose scope is based on the historical record of Spanish breakwaters with main layers of cubes and blocks (Ministry of Public Works, General Directorate of Ports, 1988). (Author) 9 refs.

  17. Metallized Film Capacitor Lifetime Evaluation and Failure Mode Analysis

    CERN Document Server

    Gallay, R.

    2015-06-15

    One of the main concerns for power electronic engineers regarding capacitors is to predict their remaining lifetime in order to anticipate costly failures or system unavailability. This may be achieved using a Weibull statistical law combined with acceleration factors for the temperature, the voltage, and the humidity. This paper discusses the different capacitor failure modes and their effects and consequences.
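
    A sketch of the lifetime bookkeeping described above, combining a Weibull reliability law with multiplicative acceleration factors. The doubling-per-10-°C temperature rule and the voltage power law are common industry conventions rather than formulas quoted from this paper, and every numeric value is an illustrative assumption.

```python
import math

# Rated-condition Weibull parameters (assumed): scale in hours, shape beta.
eta_rated, beta = 200_000.0, 2.5
T_rated, V_rated = 70.0, 500.0      # deg C, volts

def acceleration(T, V, n=7.0, dT=10.0):
    """Combined acceleration factor: lifetime halves every dT degrees above
    T_rated (Arrhenius-like rule of thumb) and scales with voltage as (V/V_rated)^n."""
    af_T = 2.0 ** ((T - T_rated) / dT)
    af_V = (V / V_rated) ** n
    return af_T * af_V

def reliability(t, T, V):
    """Weibull survival probability at t hours under operating stress (T, V)."""
    eta = eta_rated / acceleration(T, V)
    return math.exp(-((t / eta) ** beta))

t = 50_000.0  # hours of service
for T, V in [(70.0, 500.0), (85.0, 500.0), (85.0, 550.0)]:
    print(f"T={T:.0f}C V={V:.0f}V -> R({t:.0f} h) = {reliability(t, T, V):.3f}")
```

    A humidity factor would enter the same way, as one more multiplier on the acceleration, which is why the Weibull scale alone carries all the stress dependence in this formulation.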

  18. Comparison of stress-based and strain-based creep failure criteria for severe accident analysis

    International Nuclear Information System (INIS)

    Chavez, S.A.; Kelly, D.L.; Witt, R.J.; Stirn, D.P.

    1995-01-01

    We conducted a parametric analysis of stress-based and strain-based creep failure criteria to determine whether there is a significant difference between the two criteria for SA533B vessel steel under severe accident conditions. Parametric variables include debris composition, system pressure, and creep strain histories derived from different testing programs and mathematically fit, with and without tertiary creep. Results indicate significant differences between the two criteria. The stress gradient plays an important role in determining which criterion will predict failure first. Creep failure was not very sensitive to the different creep strain histories, except near the transition temperature of the vessel steel (900 K to 1000 K). Statistical analyses of creep failure data from four independent sources indicate that these data may be pooled, with a spline point at 1000 K. We found the Manson-Haferd parameter to have better failure predictive capability than the Larson-Miller parameter for the data studied. (orig.)
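
    For reference, the two time-temperature parameters compared in this record have simple closed forms; the sketch below evaluates both for an invented stress-rupture data point. The constant C = 20 and the Manson-Haferd anchor point (T_a, log10 t_a) are typical textbook values, not values fitted in the paper.

```python
import math

T = 950.0        # temperature, K (invented data point)
t_r = 12_000.0   # time to creep rupture, hours (invented)

# Larson-Miller parameter: LMP = T * (C + log10 t_r), with C ~ 20 for many steels.
C = 20.0
lmp = T * (C + math.log10(t_r))

# Manson-Haferd parameter: P = (log10 t_r - log10 t_a) / (T - T_a),
# with (T_a, log10 t_a) a material-specific convergence point (assumed here).
T_a, log_ta = 500.0, 12.0
mh = (math.log10(t_r) - log_ta) / (T - T_a)

print(f"Larson-Miller parameter: {lmp:.0f}")
print(f"Manson-Haferd parameter: {mh:.4f}")
```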

  19. VALIDATION OF SPRING OPERATED PRESSURE RELIEF VALVE TIME TO FAILURE AND THE IMPORTANCE OF STATISTICALLY SUPPORTED MAINTENANCE INTERVALS

    Energy Technology Data Exchange (ETDEWEB)

    Gross, R; Stephen Harris, S

    2009-02-18

    The Savannah River Site operates a Relief Valve Repair Shop certified by the National Board of Pressure Vessel Inspectors to NB-23, The National Board Inspection Code. Local maintenance forces perform inspection, testing, and repair of approximately 1200 spring-operated relief valves (SORV) each year as the valves are cycled in from the field. The Site now has over 7000 certified test records in the Computerized Maintenance Management System (CMMS); a summary of those data is presented in this paper. In previous papers, several statistical techniques were used to investigate failure on demand and failure rates, including a quantal response method for predicting the failure probability as a function of time in service. The non-conservative failure mode for SORV is commonly termed 'stuck shut', defined by industry as the valve opening at greater than or equal to 1.5 times the cold set pressure. The actual time to failure is typically not known, only that failure occurred some time since the last proof test (censored data). This paper attempts to validate the assumptions underlying the statistical lifetime prediction results using Monte Carlo simulation. It employs an aging model for lift pressure as a function of set pressure, valve manufacturer, and a time-related aging effect. This paper attempts to answer two questions: (1) what is the predicted failure rate over the chosen maintenance/inspection interval; and (2) do we understand aging well enough to estimate risk when basing proof test intervals on proof test results?
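
    A toy version of the Monte Carlo validation described: each valve's lift pressure drifts with time in service under an assumed linear aging model with scatter, valves are "proof tested" at the end of the interval, and the stuck-shut criterion (lift >= 1.5x the cold set pressure) gives the predicted failure fraction. The drift and noise parameters are invented, not the Site's fitted model.

```python
import numpy as np

rng = np.random.default_rng(3)
n_valves = 10_000
set_pressure = 100.0            # cold set pressure (arbitrary units)

def stuck_shut_fraction(years, drift=2.0, sigma0=5.0, sigma_t=1.5):
    """Fraction of valves whose lift pressure at proof test meets the
    stuck-shut criterion after `years` in service (assumed aging model:
    linear mean drift plus scatter that grows with time)."""
    scatter = np.sqrt(sigma0**2 + (sigma_t * years)**2)
    lift = set_pressure + drift * years + rng.normal(0.0, scatter, n_valves)
    return np.mean(lift >= 1.5 * set_pressure)

for interval in (2, 5, 10, 15):   # candidate proof-test intervals, years
    p = stuck_shut_fraction(interval)
    print(f"{interval:>2} y interval -> predicted stuck-shut fraction {p:.4f}")
```

    Running the same simulation with only end-of-interval (censored) observations shows how much information the proof-test data can and cannot recover, which is the validation question the paper poses.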

  20. Competing failure analysis in phased-mission systems with functional dependence in one of phases

    International Nuclear Information System (INIS)

    Wang, Chaonan; Xing, Liudong; Levitin, Gregory

    2012-01-01

    This paper proposes an algorithm for the reliability analysis of non-repairable phased-mission systems (PMS) subject to competing failure propagation and isolation effects. A failure originating from a system component which causes extensive damage to other system components is a propagated failure. When the propagated failure affects all the system components, causing the entire system failure, a propagated failure with global effect (PFGE) is said to occur. However, the failure propagation can be isolated in systems subject to functional dependence (FDEP) behavior, where the failure of a component (referred to as trigger component) causes some other components (referred to as dependent components) to become inaccessible or unusable (isolated from the system), and thus further failures from these dependent components have no effect on the system failure behavior. On the other hand, if any PFGE from dependent components occurs before the trigger failure, the failure propagation effect takes place, causing the overall system failure. In summary, there are two distinct consequences of a PFGE due to the competition between the failure isolation and failure propagation effects in the time domain. Existing works on such competing failures focus only on single-phase systems. However, many real-world systems are phased-mission systems (PMS), which involve multiple, consecutive and non-overlapping phases of operations or tasks. Consideration of competing failures for PMS is a challenging and difficult task because PMS exhibit dynamics in the system configuration and component behavior as well as statistical dependencies across phases for a given component. This paper proposes a combinatorial method to address the competing failure effects in the reliability analysis of binary non-repairable PMS. The proposed method is verified using a Markov-based method through a numerical example. Different from the Markov-based approach that is limited to exponential distribution, the

  1. Indoor Soiling Method and Outdoor Statistical Risk Analysis of Photovoltaic Power Plants

    Science.gov (United States)

    Rajasekar, Vidyashree

    This is a two-part thesis. Part 1 presents an approach for working towards the development of a standardized artificial soiling method for laminated photovoltaic (PV) cells or mini-modules. The construction of an artificial chamber to maintain controlled environmental conditions and the components/chemicals used in artificial soil formulation are briefly explained. Both poly-Si mini-modules and single-cell mono-Si coupons were soiled, and characterization tests such as I-V, reflectance and quantum efficiency (QE) were carried out on both soiled and cleaned coupons. From the results obtained, poly-Si mini-modules proved to be a good measure of soil uniformity, as any non-uniformity present would not result in a smooth curve during I-V measurements. The challenges faced while executing reflectance and QE characterization tests on poly-Si, due to its smaller cells, were eliminated on the mono-Si coupons with large cells, yielding highly repeatable measurements. This study indicates that reflectance measurements between 600-700 nm wavelengths can be used as a direct measure of soil density on the modules. Part 2 determines the most dominant failure modes of field-aged PV modules using experimental data obtained in the field and statistical analysis, FMECA (Failure Mode, Effect, and Criticality Analysis). The failure and degradation modes of about 744 poly-Si glass/polymer frameless modules fielded for 18 years in the cold-dry climate of New York were evaluated. A defect chart, degradation rates (at both string and module levels) and a safety map were generated using the field-measured data. A statistical reliability tool, FMECA, which uses the Risk Priority Number (RPN), is used to determine the dominant failure or degradation modes in the strings and modules by ranking and prioritizing the modes. This study of PV power plants considers all the failure and degradation modes from both safety and performance perspectives. The indoor and outdoor soiling studies were jointly
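
    A minimal sketch of the RPN arithmetic used in FMECA: each failure mode is given severity, occurrence, and detection ratings, RPN = S x O x D, and the modes are ranked by RPN to prioritize them. The modes and ratings below are invented placeholders, not the thesis data.

```python
# (failure mode, severity, occurrence, detection) -- illustrative ratings 1-10.
modes = [
    ("encapsulant discoloration", 4, 8, 3),
    ("solder bond fatigue",       7, 5, 6),
    ("glass breakage",            9, 2, 2),
    ("bypass diode failure",      8, 3, 7),
]

# Rank by Risk Priority Number, highest first.
ranked = sorted(modes, key=lambda m: m[1] * m[2] * m[3], reverse=True)
for name, s, o, d in ranked:
    print(f"RPN {s*o*d:>3}  (S={s}, O={o}, D={d})  {name}")
```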

  2. Data needs for common cause failure analysis

    International Nuclear Information System (INIS)

    Parry, G.W.; Paula, H.M.; Rasmuson, D.; Whitehead, D.

    1990-01-01

    The procedures guide for common cause failure analysis published jointly by the USNRC and EPRI requires a detailed historical event analysis. Recent work on the further development of the cause-defense picture of common cause failures introduced in that guide identified the information that is necessary to perform the detailed analysis in an objective manner. This paper summarizes these information needs.

  3. An analysis of distribution transformer failure using the statistical package for the social sciences (SPSS software

    Directory of Open Access Journals (Sweden)

    María Gabriela Mago Ramos

    2012-05-01

    A methodology was developed for analysing faults in distribution transformers using the Statistical Package for the Social Sciences (SPSS). It consisted of organising and creating a database of failed equipment, incorporating such data into the processing programme, and converting all the information into numerical variables to be processed, thereby obtaining descriptive statistics and enabling factor and discriminant analysis. The research was based on information provided by companies in areas served by Corpoelec (Valencia, Venezuela) and Codensa (Bogotá, Colombia).

  4. Integrated failure probability estimation based on structural integrity analysis and failure data: Natural gas pipeline case

    International Nuclear Information System (INIS)

    Dundulis, Gintautas; Žutautaitė, Inga; Janulionis, Remigijus; Ušpuras, Eugenijus; Rimkevičius, Sigitas; Eid, Mohamed

    2016-01-01

    In this paper, the authors present an approach as an overall framework for the estimation of the failure probability of pipelines based on: the results of the deterministic-probabilistic structural integrity analysis (taking into account loads, material properties, geometry, boundary conditions, crack size, and defected zone thickness), the corrosion rate, the number of defects, and failure data (incorporated into the model via the Bayesian method). The proposed approach is applied to estimate the failure probability of a selected part of the Lithuanian natural gas transmission network. The presented approach for the estimation of integrated failure probability is a combination of several different analyses allowing us to obtain: the critical crack length and depth, the failure probability for the defected zone thickness, and the dependency of the failure probability on the age of the natural gas transmission pipeline. Model uncertainty and uncertainty propagation analyses are performed as well. - Highlights: • Degradation mechanisms of natural gas transmission pipelines. • Fracture mechanics analysis of the pipe with a crack. • Stress evaluation of the pipe with a critical crack. • Deterministic-probabilistic structural integrity analysis of a gas pipeline. • Integrated estimation of pipeline failure probability by the Bayesian method.
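
    A sketch of the Bayesian step mentioned above under a common conjugate choice: a Beta prior on the failure probability is updated with observed failure counts, which is one standard way to fold operating-experience data into a structural model's estimate. The prior and counts are invented, and the conjugate form is a general statistical fact rather than necessarily the paper's exact formulation.

```python
# Beta-binomial conjugate update of a failure probability estimate.
a0, b0 = 1.0, 99.0             # prior Beta(a, b): mean 0.01 (assumed prior)
failures, exposures = 3, 800   # invented operating record

a1, b1 = a0 + failures, b0 + (exposures - failures)
prior_mean = a0 / (a0 + b0)
post_mean = a1 / (a1 + b1)

print(f"prior mean failure probability:     {prior_mean:.4f}")
print(f"posterior mean failure probability: {post_mean:.4f}")
```

    The posterior pulls the structural-analysis prior toward the observed failure frequency, with the prior's weight set by its pseudo-counts a0 + b0.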

  5. Preliminary failure mode and effect analysis

    International Nuclear Information System (INIS)

    Addison, J.V.

    1972-01-01

    A preliminary Failure Mode and Effect Analysis (FMEA) was made of the overall 5 kWe system. A general discussion of the system and failure effects is given, in addition to the tabulated FMEA and a primary block diagram of the system. (U.S.)

  6. ANALYSIS OF RELIABILITY OF NONRECTORABLE REDUNDANT POWER SYSTEMS TAKING INTO ACCOUNT COMMON FAILURES

    Directory of Open Access Journals (Sweden)

    V. A. Anischenko

    2014-01-01

    A reliability analysis of nonrestorable redundant power systems of industrial plants and other consumers of electric energy was carried out. The main attention was paid to the influence of failures in which all elements of a system fail due to one general reason. The main possible causes of such common failures are noted. Two main indicators of the reliability of nonrestorable systems are considered: the average time of no-failure operation and the mean probability of no-failure operation. Failures were modelled by dividing the investigated system into two subsystems connected in series, one representing independent failures and the other representing common failures. Owing to the joint modelling of single and common failures, the resulting failure intensity is the sum of incompatible components: the intensity of statistically independent failures and the intensity of common failures of the elements and the system as a whole. The influence of common failures of elements on the average time of no-failure operation of the system is shown. A scale of preference of systems is built according to the criterion of the maximum average time of no-failure operation, depending on the share of common failures. It is noted that such common failures do not influence the scale of preference, but they change the time intervals determining the moments of system failures, excluding those systems from the set being compared. Two problems of conditional optimization of the choice of system redundancy are discussed, taking into account reliability and cost. The first problem is solved by the criterion of minimum system cost while providing a given mean probability of no-failure operation; the second by the criterion of maximum mean probability of no-failure operation under a system cost constraint.
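
    A sketch of the series decomposition described above for the simplest case, a nonrestorable duplicated (1-out-of-2) system: independent failures with rate lambda_i act on each element, a common failure with rate lambda_c fails both at once, and the average time of no-failure operation follows by integrating the survival function. The rates are invented.

```python
import numpy as np

lam_i = 1.0e-4   # independent failure rate of each element (per hour, assumed)
lam_c = 1.0e-5   # common failure rate hitting both elements at once (assumed)

def survival(t):
    """R(t) for a 1-out-of-2 system: the common-failure 'series' subsystem
    multiplied by the independent-failure parallel pair."""
    r1 = np.exp(-lam_i * t)
    return np.exp(-lam_c * t) * (2.0 * r1 - r1**2)

# Average time of no-failure operation: integrating R(t) gives the closed form
# MTTF = 2/(lam_i + lam_c) - 1/(2*lam_i + lam_c).
mttf = 2.0 / (lam_i + lam_c) - 1.0 / (2.0 * lam_i + lam_c)
print(f"R(5000 h) = {survival(5000.0):.4f}")
print(f"MTTF = {mttf:,.0f} h (vs {1.0 / lam_i:,.0f} h for a single element "
      f"without common failures)")
```

    Even a common-failure rate ten times smaller than the independent rate visibly erodes the benefit of duplication, which is the effect the record quantifies.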

  7. Risk assessment of the emergency processes: Healthcare failure mode and effect analysis.

    Science.gov (United States)

    Taleghani, Yasamin Molavi; Rezaei, Fatemeh; Sheikhbardsiri, Hojat

    2016-01-01

    Ensuring patient safety is the first vital step in improving the quality of care, and the emergency ward is known as a high-risk area in health care. The present study was conducted to evaluate selected high-risk processes of the emergency surgery department of the treatment-educational Qaem center in Mashhad, using the healthcare failure mode and effects analysis method. In this combined study (qualitative action research and quantitative cross-sectional), the failure modes and effects of 5 high-risk procedures of the emergency surgery department were identified and analyzed according to Healthcare Failure Mode and Effects Analysis (HFMEA). Failure modes were classified using the "nursing errors in clinical management" model (NECM), the causes of error were classified using the Eindhoven model, and improvement strategies were determined using the theory of inventive problem solving. Quantitative data were analyzed with descriptive statistics (total points); qualitative data were analyzed through content analysis and agreement among the members. In the 5 processes selected by a rating-based voting method, 23 steps, 61 sub-processes and 217 potential failure modes were identified by HFMEA. Twenty-five (11.5%) failure modes were detected as high-risk errors and transferred to the decision tree. The most and the fewest failure modes fell into the categories of care errors (54.7%) and knowledge and skill (9.5%), respectively. Also, 29.4% of preventive measures were in the category of human resource management strategy. "Revision and re-engineering of processes", "continuous monitoring of the works", "preparation and revision of operating procedures and policies", "developing the criteria for evaluating the performance of the personnel", "designing a suitable educational content for needs of employee", "training patients", "reducing the workload and power shortage", "improving team

  8. X-framework: Space system failure analysis framework

    Science.gov (United States)

    Newman, John Steven

    Space program and space systems failures result in financial losses in the multi-hundred-million-dollar range every year. In addition to financial loss, space system failures may also represent the loss of opportunity; the loss of critical scientific, commercial, and/or national defense capabilities; and the loss of public confidence. The need exists to improve learning and expand the scope of lessons documented and offered to the space industry project team. One of the barriers to incorporating lessons learned is the way in which space system failures are documented. Multiple classes of space system failure information are identified, ranging from "sound bite" summaries in space insurance compendia, to articles in journals, lengthy data-oriented (what happened) reports, and in some rare cases, reports that treat not only the what, but also the why. In addition, there are periodically published "corporate crisis" reports, typically issued after multiple or highly visible failures, that explore management roles in the failure, often within a politically oriented context. Given the general lack of consistency, it is clear that a good multi-level space system/program failure framework with analytical and predictive capability is needed. This research effort set out to develop such a model. The X-Framework (x-fw) is proposed as an innovative forensic failure analysis approach, providing a multi-level understanding of the space system failure event, beginning with the proximate cause, extending to the directly related work or operational processes, and upward through successive management layers. The x-fw focus is on capability and control at the process level and examines: (1) management accountability and control, (2) resource and requirement allocation, and (3) planning, analysis, and risk management at each level of management. The x-fw model provides an innovative failure analysis approach for acquiring a multi-level perspective, direct and indirect causation of

  9. Importance analysis for the systems with common cause failures

    International Nuclear Information System (INIS)

    Pan Zhijie; Nonaka, Yasuo

    1995-01-01

    This paper extends the importance analysis technique to the field of common cause failures, evaluating the structure importance, probability importance, and β-importance of systems with common cause failures. These importance measures should help reliability analysts to scope the common cause failure analysis framework and to find efficient defence strategies against common cause failures.
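
    As a simple illustration of a probability-importance measure of the kind discussed (the paper's β-importance is not reproduced here), the sketch below computes the Birnbaum importance I_B(i) = Q(1_i, q) - Q(0_i, q) for a two-component parallel system with hypothetical unavailabilities.

        # Birnbaum importance for a parallel system: the system is unavailable
        # only if both components are unavailable, Q_sys = q1 * q2.
        def system_unavail(q1, q2):
            return q1 * q2

        q1, q2 = 0.01, 0.05
        ib1 = system_unavail(1.0, q2) - system_unavail(0.0, q2)  # importance of component 1
        ib2 = system_unavail(q1, 1.0) - system_unavail(q1, 0.0)  # importance of component 2
        print(ib1, ib2)  # 0.05, 0.01: the component whose partner is less reliable matters more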

  10. Statistical process control methods allow the analysis and improvement of anesthesia care.

    Science.gov (United States)

    Fasting, Sigurd; Gisvold, Sven E

    2003-10-01

    Quality aspects of the anesthetic process are reflected in the rate of intraoperative adverse events. The purpose of this report is to illustrate how the quality of the anesthesia process can be analyzed using statistical process control methods, and exemplify how this analysis can be used for quality improvement. We prospectively recorded anesthesia-related data from all anesthetics for five years. The data included intraoperative adverse events, which were graded into four levels, according to severity. We selected four adverse events, representing important quality and safety aspects, for statistical process control analysis. These were: inadequate regional anesthesia, difficult emergence from general anesthesia, intubation difficulties and drug errors. We analyzed the underlying process using 'p-charts' for statistical process control. In 65,170 anesthetics we recorded adverse events in 18.3%; mostly of lesser severity. Control charts were used to define statistically the predictable normal variation in problem rate, and then used as a basis for analysis of the selected problems with the following results: Inadequate plexus anesthesia: stable process, but unacceptably high failure rate; Difficult emergence: unstable process, because of quality improvement efforts; Intubation difficulties: stable process, rate acceptable; Medication errors: methodology not suited because of low rate of errors. By applying statistical process control methods to the analysis of adverse events, we have exemplified how this allows us to determine if a process is stable, whether an intervention is required, and if quality improvement efforts have the desired effect.
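
    For reference, a p-chart of the kind used in the study places its control limits at p-bar ± 3·sqrt(p-bar·(1 − p-bar)/n). A minimal sketch with hypothetical monthly adverse-event counts, not the study's data:

        import math

        events = [55, 48, 62, 51, 70, 58]        # adverse events per month (hypothetical)
        n = 1100                                 # anesthetics per month (equal subgroups)

        p_bar = sum(events) / (len(events) * n)  # centre line: mean problem rate
        sigma = math.sqrt(p_bar * (1.0 - p_bar) / n)
        ucl, lcl = p_bar + 3 * sigma, max(0.0, p_bar - 3 * sigma)

        # Points outside the limits signal special-cause (non-random) variation.
        for e in events:
            print(round(e / n, 4), lcl <= e / n <= ucl)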

  11. Universal avalanche statistics and triggering close to failure in a mean-field model of rheological fracture

    Science.gov (United States)

    Baró, Jordi; Davidsen, Jörn

    2018-03-01

    The hypothesis of critical failure relates the presence of an ultimate stability point in the structural constitutive equation of materials to a divergence of characteristic scales in the microscopic dynamics responsible for deformation. Avalanche models involving critical failure have determined common universality classes for stick-slip processes and fracture. However, not all empirical failure processes exhibit the trademarks of criticality. The rheological properties of materials introduce dissipation, usually reproduced in conceptual models as a hardening of the coarse grained elements of the system. Here, we investigate the effects of transient hardening on (i) the activity rate and (ii) the statistical properties of avalanches. We find the explicit representation of transient hardening in the presence of generalized viscoelasticity and solve the corresponding mean-field model of fracture. In the quasistatic limit, the accelerated energy release is invariant with respect to rheology and the avalanche propagation can be reinterpreted in terms of a stochastic counting process. A single universality class can be defined from such analogy, and all statistical properties depend only on the distance to criticality. We also prove that interevent correlations emerge due to the hardening—even in the quasistatic limit—that can be interpreted as "aftershocks" and "foreshocks."

  12. Goal-oriented failure analysis - a systems analysis approach to hazard identification

    International Nuclear Information System (INIS)

    Reeves, A.B.; Davies, J.; Foster, J.; Wells, G.L.

    1990-01-01

    Goal-Oriented Failure Analysis, GOFA, is a methodology being developed to identify and analyse the potential failure modes of a hazardous plant or process. The technique will adopt a structured top-down approach, with a particular failure goal being systematically analysed. A systems analysis approach is used, with the analysis organised around a systems diagram of the plant or process under study. GOFA will also use checklists to supplement the analysis - these checklists will be prepared in advance of a group session and will help to guide the analysis, avoiding unnecessary time spent identifying obvious failure modes and reducing the risk of failing to identify certain hazards or failures. GOFA is being developed with the aim of providing a hazard identification methodology which is more efficient and stimulating than the conventional approach to HAZOP. The top-down approach should ensure that the analysis is more focused, and the use of a systems diagram will help to pull the analysis together at an early stage whilst also helping to structure the sessions in a more stimulating way than the conventional techniques. GOFA will be, essentially, an extension of the HAZOP methodology. GOFA is currently being computerised using a knowledge-based systems approach for implementation; the Goldworks II expert systems development tool is being used. (author)

  13. Reliability analysis for the creep rupture mode of failure

    International Nuclear Information System (INIS)

    Vaidyanathan, S.

    1975-01-01

    An analytical study has been carried out to relate the factors of safety employed in the design of a component to the probability of failure in the thermal creep rupture mode. The analysis considers the statistical variations in the operating temperature, stress, and rupture time, and applies the life-fraction damage criterion as the indicator of failure. Typical results have been obtained for solution-annealed Type 304 stainless steel under the temperature and stress variations expected in an LMFBR environment. The analytical problem was solved by considering the joint distribution of the independent variables and deriving the distribution of the function associated with the probability of failure by integrating over the proper regions, as dictated by the deterministic design rule. This leads to a triple integral for the final probability of failure, in which the coefficients of variation associated with the temperature, stress, and rupture time distributions can be specified by the user. The derivation is general and can be used for time-varying stress histories and for irradiated material, where the rupture time varies with accumulated fluence. Example calculations for solution-annealed Type 304 stainless steel have been carried out for an assumed coefficient of variation of 2% for temperature and 6% for stress. The results show that the probability of failure associated with the time-dependent stress intensity limits specified in the ASME Boiler and Pressure Vessel Code Section III Code Case 1592 is less than 5x10^-8. Rupture under thermal creep conditions is a highly complicated phenomenon. It is believed that the present study will help in quantifying the reliability to be expected with deterministic design factors of safety.
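
    The paper evaluates this probability analytically via the triple integral; as a purely illustrative alternative, the sketch below estimates a creep-rupture failure probability by Monte Carlo sampling. The rupture-time law and all constants are hypothetical; only the coefficients of variation (2% on temperature, 6% on stress) follow the example above.

        import math, random

        def rupture_time(T, S):       # hypothetical Larson-Miller-style law, hours
            return 10.0 ** (30.0 / (1e-3 * T) - 20.0 - 5.0 * math.log10(S))

        random.seed(1)
        T0, S0, t_service = 800.0, 100.0, 1e5     # K, MPa, hours (hypothetical)
        N, fails = 100_000, 0
        for _ in range(N):
            T = random.gauss(T0, 0.02 * T0)       # 2% coefficient of variation
            S = random.gauss(S0, 0.06 * S0)       # 6% coefficient of variation
            if t_service / rupture_time(T, S) >= 1.0:   # life-fraction damage criterion
                fails += 1
        print(fails / N)              # estimated probability of creep rupture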

  14. Improved methods for dependent failure analysis in PSA

    International Nuclear Information System (INIS)

    Ballard, G.M.; Games, A.M.

    1988-01-01

    The basic design principle used in ensuring the safe operation of nuclear power plant is defence in depth. This normally takes the form of redundant equipment and systems which provide protection even if a number of equipment failures occur. Such redundancy is particularly effective in ensuring that multiple, independent equipment failures with the potential for jeopardising reactor safety will be rare events. However, the achievement of high reliability has served to highlight the potentially dominant role of multiple, dependent failures of equipment and systems. Analysis of reactor operating experience has shown that dependent failure events are the major contributors to safety system failures and reactor incidents and accidents. In parallel, PSA studies have shown that the results of a safety analysis are sensitive to the assumptions made about the dependent failure (CCF) probability for safety systems. Thus, a Westinghouse analysis showed that increasing system dependent failure probabilities by a factor of 5 led to a factor of 4 increase in core damage frequency. This paper refers particularly to the engineering concepts underlying dependent failure assessment, touching briefly on aspects of data. It is specifically not the intent of our work to develop a new mathematical model of CCF but to aid the use of existing models.

  15. SU-E-T-205: MLC Predictive Maintenance Using Statistical Process Control Analysis.

    Science.gov (United States)

    Able, C; Hampton, C; Baydush, A; Bright, M

    2012-06-01

    MLC failure increases accelerator downtime and negatively affects the clinic treatment delivery schedule. This study investigates the use of Statistical Process Control (SPC), a modern quality control methodology, to retrospectively evaluate MLC performance data, thereby predicting the impending failure of individual MLC leaves. SPC, a methodology which detects exceptional variability in a process, was used to analyze MLC leaf velocity data. An MLC velocity test is performed weekly on all leaves during morning QA. The leaves sweep 15 cm across the radiation field with the gantry pointing down. The leaf speed is analyzed from the generated dynalog file using quality assurance software. MLC leaf speeds in which a known motor failure occurred (8) and those in which no motor replacement was performed (11) were retrospectively evaluated over a 71 week period. SPC individual and moving range (I/MR) charts were used in the analysis. The I/MR chart limits were calculated using the first twenty weeks of data and set at 3 standard deviations from the mean. The MLCs in which a motor failure occurred followed two general trends: (a) no data indicating a change in leaf speed prior to failure (5 of 8) and (b) a series of data points exceeding the limit prior to motor failure (3 of 8). I/MR charts for a high percentage (8 of 11) of the non-replaced MLC motors indicated that only a single point exceeded the limit. These single-point excesses were deemed false positives. SPC analysis using MLC performance data may be helpful in detecting a significant percentage of impending failures of MLC motors. The ability to detect MLC failure may depend on the mode of failure (i.e. gradual or catastrophic). Further study is needed to determine if increasing the sampling frequency could increase reliability. This project was supported by a grant from Varian Medical Systems, Inc. © 2012 American Association of Physicists in Medicine.
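
    A minimal sketch of the I/MR construction described above, using synthetic weekly leaf-speed data rather than the study's measurements: the limits come from the first 20 weeks, with sigma estimated from the mean moving range via the conventional d2 = 1.128 constant for ranges of two.

        import random

        random.seed(0)
        speeds = [random.gauss(2.5, 0.01) for _ in range(71)]   # cm/s, synthetic
        speeds[60:] = [s - 0.06 for s in speeds[60:]]           # simulate a degrading motor

        baseline = speeds[:20]                                  # first twenty weeks
        x_bar = sum(baseline) / len(baseline)
        mrs = [abs(b - a) for a, b in zip(baseline, baseline[1:])]
        sigma = (sum(mrs) / len(mrs)) / 1.128                   # sigma from mean moving range
        ucl, lcl = x_bar + 3 * sigma, x_bar - 3 * sigma

        alarms = [week for week, s in enumerate(speeds) if not (lcl <= s <= ucl)]
        print(alarms)                                           # weeks flagged as exceptional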

  16. Mechanistic considerations used in the development of the probability of failure in transient increases in power (PROFIT) pellet-zircaloy cladding (thermo-mechanical-chemical) interactions (pci) fuel failure model

    International Nuclear Information System (INIS)

    Pankaskie, P.J.

    1980-05-01

    A fuel Pellet-Zircaloy Cladding (thermo-mechanical-chemical) interactions (PCI) failure model for estimating the Probability of Failure in Transient Increases in Power (PROFIT) was developed. PROFIT is based on (1) standard statistical methods applied to available PCI fuel failure data and (2) a mechanistic analysis of the environmental and strain-rate-dependent stress versus strain characteristics of Zircaloy cladding. The statistical analysis of fuel failures attributable to PCI suggested that parameters in addition to power, transient increase in power, and burnup are needed to define PCI fuel failures in terms of probability estimates with known confidence limits. The PROFIT model, therefore, introduces an environmental and strain-rate dependent Strain Energy Absorption to Failure (SEAF) concept to account for the stress versus strain anomalies attributable to interstitial-dislocation interaction effects in the Zircaloy cladding

  17. Does Bruxism Contribute to Dental Implant Failure? A Systematic Review and Meta-Analysis.

    Science.gov (United States)

    Zhou, Yi; Gao, Jinxia; Luo, Le; Wang, Yining

    2016-04-01

    Bruxism has usually been considered a contraindication for oral implant treatment, but the causal relationship between bruxism and dental implant failure has remained controversial in the existing literature. This meta-analysis was performed to investigate the relationship between them. This review conducted an electronic systematic literature search in MEDLINE (PubMed) and EmBase in November 2013, without time or language restrictions. A hand search of all relevant references of the included studies was also conducted. Study information extraction and methodological quality assessments were accomplished by two reviewers independently. A discussion ensued if any disagreement occurred, and unresolved issues were settled by consulting a third reviewer. Methodological quality was assessed using the Newcastle-Ottawa Scale tool. Odds ratios (OR) with 95% confidence intervals (CI) were pooled to estimate the relative effect of bruxism on dental implant failures. A fixed effects model was used initially; if the heterogeneity was high, a random effects model was chosen for the meta-analysis. Statistical analyses were carried out using Review Manager 5.1. In this meta-analysis review, the extracted data were classified into two groups based on different units: the number of prostheses (group A) and the number of patients (group B). In group A, the total pooled OR of bruxers versus nonbruxers for all subgroups was 4.72 (95% CI: 2.66-8.36, p = .07). In group B, the total pooled OR of bruxers versus nonbruxers for all subgroups was 3.83 (95% CI: 2.12-6.94, p = .22). This meta-analysis was performed to evaluate the relationship between bruxism and dental implant failure. In contrast to nonbruxers, prostheses in bruxers had a higher failure rate. This suggests that bruxism is a contributing factor in the occurrence of dental implant technical/biological complications and plays a role in dental implant failure. © 2015 Wiley Periodicals, Inc.
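
    For reference, pooled odds ratios of this kind are obtained by inverse-variance weighting on the log-OR scale. The sketch below shows the fixed-effect construction with hypothetical 2x2 counts, not the review's data.

        import math

        studies = [(12, 88, 20, 380), (8, 42, 15, 235)]   # (a, b, c, d) per study
        num = den = 0.0
        for a, b, c, d in studies:
            log_or = math.log((a * d) / (b * c))          # log odds ratio
            weight = 1.0 / (1/a + 1/b + 1/c + 1/d)        # inverse Woolf variance
            num += weight * log_or
            den += weight

        pooled_log, se = num / den, math.sqrt(1.0 / den)  # pooled log-OR and its SE
        lo, hi = math.exp(pooled_log - 1.96 * se), math.exp(pooled_log + 1.96 * se)
        print(f"pooled OR {math.exp(pooled_log):.2f} (95% CI {lo:.2f}-{hi:.2f})")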

  18. The Statistical Analysis Techniques to Support the NGNP Fuel Performance Experiments

    International Nuclear Information System (INIS)

    Pham, Bihn T.; Einerson, Jeffrey J.

    2010-01-01

    This paper describes the development and application of statistical analysis techniques to support the AGR experimental program on NGNP fuel performance. The experiments conducted in the Idaho National Laboratory's Advanced Test Reactor employ fuel compacts placed in a graphite cylinder shrouded by a steel capsule. The tests are instrumented with thermocouples embedded in graphite blocks and the target quantity (fuel/graphite temperature) is regulated by the He-Ne gas mixture that fills the gap volume. Three techniques for statistical analysis, namely control charting, correlation analysis, and regression analysis, are implemented in the SAS-based NGNP Data Management and Analysis System (NDMAS) for automated processing and qualification of the AGR measured data. The NDMAS also stores daily neutronic (power) and thermal (heat transfer) code simulation results along with the measurement data, allowing for their combined use and comparative scrutiny. The ultimate objective of this work includes (a) a multi-faceted system for data monitoring and data accuracy testing, (b) identification of possible modes of diagnostics deterioration and changes in experimental conditions, (c) qualification of data for use in code validation, and (d) identification and use of data trends to support effective control of test conditions with respect to the test target. Analysis results and examples given in the paper show the three statistical analysis techniques providing a complementary capability to warn of thermocouple failures. It also suggests that the regression analysis models relating calculated fuel temperatures and thermocouple readings can enable online regulation of experimental parameters (i.e. gas mixture content), to effectively maintain the target quantity (fuel temperature) within a given range.
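
    As an illustration of the regression-based warning described above (not the NDMAS implementation), the sketch below regresses thermocouple readings on code-calculated temperatures from a known-good period and flags later readings with large residuals; all numbers are synthetic.

        calc = [1005.0, 1010.0, 1020.0, 1015.0, 1030.0, 1025.0]  # calculated temps, C
        meas = [998.0, 1004.0, 1013.0, 1009.0, 1023.0, 940.0]    # readings; last one drifts

        tx, ty = calc[:5], meas[:5]               # fit on the known-good history
        m = len(tx)
        xb, yb = sum(tx) / m, sum(ty) / m
        slope = sum((x - xb) * (y - yb) for x, y in zip(tx, ty)) / \
                sum((x - xb) ** 2 for x in tx)
        intercept = yb - slope * xb

        for x, y in zip(calc, meas):
            resid = y - (intercept + slope * x)   # large residual -> suspect thermocouple
            print(f"calc {x:7.1f}  meas {y:7.1f}  resid {resid:7.1f}"
                  + ("  <- suspect" if abs(resid) > 10 else ""))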

  19. The statistical analysis techniques to support the NGNP fuel performance experiments

    Energy Technology Data Exchange (ETDEWEB)

    Pham, Binh T., E-mail: Binh.Pham@inl.gov; Einerson, Jeffrey J.

    2013-10-15

    This paper describes the development and application of statistical analysis techniques to support the Advanced Gas Reactor (AGR) experimental program on Next Generation Nuclear Plant (NGNP) fuel performance. The experiments conducted in the Idaho National Laboratory’s Advanced Test Reactor employ fuel compacts placed in a graphite cylinder shrouded by a steel capsule. The tests are instrumented with thermocouples embedded in graphite blocks and the target quantity (fuel temperature) is regulated by the He–Ne gas mixture that fills the gap volume. Three techniques for statistical analysis, namely control charting, correlation analysis, and regression analysis, are implemented in the NGNP Data Management and Analysis System for automated processing and qualification of the AGR measured data. The neutronic and thermal code simulation results are used for comparative scrutiny. The ultimate objective of this work includes (a) a multi-faceted system for data monitoring and data accuracy testing, (b) identification of possible modes of diagnostics deterioration and changes in experimental conditions, (c) qualification of data for use in code validation, and (d) identification and use of data trends to support effective control of test conditions with respect to the test target. Analysis results and examples given in the paper show the three statistical analysis techniques providing a complementary capability to warn of thermocouple failures. It also suggests that the regression analysis models relating calculated fuel temperatures and thermocouple readings can enable online regulation of experimental parameters (i.e. gas mixture content), to effectively maintain the fuel temperature within a given range.

  20. Service reliability assessment using failure mode and effect analysis ...

    African Journals Online (AJOL)

    Statistical Process Control Teng and Ho (1996) .... are still remaining left on modelling the interaction between impact of internal service failure and ..... Design error proofing: development of automated error-proofing information systems, Proceedings of.

  1. Failure of the statistical hypothesis for compound nucleus decay

    International Nuclear Information System (INIS)

    Chrien, R.E.

    1976-01-01

    In the past five years, conclusive evidence has accumulated that channel correlations are important in resonance reactions. Experiments showing the failure of the statistical hypothesis for compound nucleus decay are described. The emphasis is on the radiative neutron capture reaction, where much detailed work has been done. A short summary of the theory of the (n,γ) reaction is presented; it is demonstrated that this reaction is a sensitive probe of the wave functions of highly excited nuclear states. Various experimental techniques using reactor and accelerator-based neutron sources are presented. The experiments have shown that both resonant and non-resonant reactions can show single-particle effects where the external part of configuration space is dominant. In the non-resonant case, hard-sphere capture is important; on resonances, valence particle motion and the contributions of 1- and 3-quasi-particle doorway states make up a significant fraction of the radiative width.

  2. Data and Statistics: Heart Failure

    Science.gov (United States)

  3. Lessons learned from failure analysis

    International Nuclear Information System (INIS)

    Le May, I.

    2006-01-01

    Failure analysis can be a very useful tool to designers and operators of plant and equipment. It is not simply something that is done for lawyers and insurance companies, but is a tool from which lessons can be learned and by means of which the 'breed' can be improved. Several failure investigations that have contributed to such understanding are presented. Specifically, the following cases are discussed: 1) A fire at a refinery that occurred in a desulphurization unit. 2) The failure of a pipeline before it was even put into operation. 3) Failures in locomotive axles that took place during winter operation. The refinery fire was initially blamed on defective Type 321 seamless stainless steel tubing, but there were conflicting views among the 'experts' involved as to the mechanism of failure, and the writer was called upon to make an in-depth study. This showed that a variety of failure mechanisms were involved, including high-temperature fracture, environmentally induced cracking, and possible manufacturing defects. The unravelling of the failure sequence is described and illustrated. The failure of an oil transmission line was discovered when the line was pressure tested some months after it had been installed and before it was put into service. Repairs were made, and failure occurred at another location upon the next pressure test. After several more repairs had been made, the line was abandoned and a lawsuit was commenced on the basis that the steel was defective. An investigation disclosed that the material was sensitive to embrittlement, and the causes of this were determined. As a result, changes were made in the microstructural control of the product to avoid similar problems in future. A series of axle failures occurred in diesel electric locomotives during winter. An investigation was made to determine the nature of the failures, which were not by classical fatigue, nor did they correspond to published illustrations of Cu

  4. Beginning statistics with data analysis

    CERN Document Server

    Mosteller, Frederick; Rourke, Robert EK

    2013-01-01

    This introduction to the world of statistics covers exploratory data analysis, methods for collecting data, formal statistical inference, and techniques of regression and analysis of variance. 1983 edition.

  5. Failure analysis for WWER-fuel elements

    International Nuclear Information System (INIS)

    Boehmert, J.; Huettig, W.

    1986-10-01

    If the fuel defect rate proves significantly high, failure analysis has to be performed in order to trace the defect causes, implement corrective actions, and take measures for failure prevention. Such analyses are labour-intensive and highly skill-demanding technical tasks, which require excellently developed examination methods and devices and a rich stock of experience in evaluating the features of damage. This work therefore specifies the procedure of failure analysis in detail. Moreover, the prerequisites and experimental equipment for the investigation of WWER-type fuel elements are described. (author)

  6. failure analysis of a uav flight control system using markov analysis

    African Journals Online (AJOL)

    Failure analysis of a flight control system proposed for the Air Force Institute of Technology (AFIT) Unmanned Aerial Vehicle (UAV) was studied using Markov Analysis (MA). It was perceived that understanding the number of failure states and the probability of being in those states is of paramount importance in order to ...
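
    A minimal sketch of the kind of Markov reliability model the abstract refers to, with hypothetical states and hourly transition probabilities: the state-probability vector is propagated through the transition matrix to give the probability of being in each failure state over a mission.

        # States: 0 = healthy, 1 = degraded, 2 = failed (absorbing).
        P = [
            [0.995, 0.004, 0.001],   # healthy -> healthy / degraded / failed
            [0.000, 0.990, 0.010],   # degraded can only persist or fail
            [0.000, 0.000, 1.000],   # failed is absorbing
        ]

        state = [1.0, 0.0, 0.0]      # start healthy
        for _ in range(10):          # ten one-hour mission steps
            state = [sum(state[i] * P[i][j] for i in range(3)) for j in range(3)]
        print(state)                 # probability of each state after 10 h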

  7. Failure analysis of prestressed concrete beam under impact loading

    International Nuclear Information System (INIS)

    Ishikawa, N.; Sonoda, Y.; Kobayashi, N.

    1993-01-01

    This paper presents a failure analysis of a prestressed concrete (PC) beam under impact loading. First, a failure analysis of the PC beam section is performed using the discrete section element method in order to obtain the dynamic bending moment-curvature relation. Second, a failure analysis of the PC beam is performed using the rigid panel-spring model. Finally, the numerical calculation is executed and compared with the experimental results. It is found that this approach simulates well both the local and overall failure of the PC beam observed in the experiments, as well as the impact load and displacement-time relations. (author)

  8. Micromechanics Based Failure Analysis of Heterogeneous Materials

    Science.gov (United States)

    Sertse, Hamsasew M.

    are performed for both brittle failure/high cycle fatigue (HCF) for negligible plastic strain and ductile failure/low cycle fatigue (LCF) for large plastic strain. The proposed approach is incorporated in SwiftComp and used to predict the initial failure envelope, the stress-strain curve for various loading conditions, and the fatigue life of heterogeneous materials. The combined effects of strain hardening and progressive fatigue damage on the effective properties of heterogeneous materials are also studied. The capability of the current approach is validated using several representative examples of heterogeneous materials, including binary composites, continuous fiber-reinforced composites, particle-reinforced composites, discontinuous fiber-reinforced composites, and woven composites. The predictions of MSG are also compared with the predictions obtained using various micromechanics approaches such as the Generalized Method of Cells (GMC), Mori-Tanaka (MT), and Double Inclusions (DI), and with Representative Volume Element (RVE) analysis (referred to as 3-dimensional finite element analysis (3D FEA) in this document). This study demonstrates that a micromechanics-based failure analysis has great potential to rigorously and more accurately analyze the initiation and progression of damage in heterogeneous materials. However, this approach requires material properties specific to damage analysis, which need to be independently calibrated for each constituent.

  9. Fracture criterion for brittle materials based on statistical cells of finite volume

    International Nuclear Information System (INIS)

    Cords, H.; Kleist, G.; Zimmermann, R.

    1986-06-01

    An analytical consideration of the Weibull statistical analysis of brittle materials established the necessity of including one additional material constant for a more comprehensive description of the failure behaviour. The Weibull analysis is restricted to infinitesimal volume elements as a consequence of the differential calculus applied. It was found that infinitesimally small elements are in conflict with the basic statistical assumption, and that the differential calculus is in fact not needed, since most stress analyses nowadays are based on finite element calculations, which are well suited to a subsequent statistical analysis of strength. The size of a finite statistical cell has been introduced as the third material parameter. It should represent the minimum volume containing all statistical features of the material, such as the distribution of pores, flaws and grains. The new approach also contains a unique treatment of failure under multiaxial stresses. The quantity responsible for failure under multiaxial stresses is introduced as a modified strain energy. Sixteen different tensile specimens, including CT-specimens, have been investigated experimentally and analyzed with the probabilistic fracture criterion. As a result, it can be stated that the failure rates of all types of specimens made from three different grades of graphite are predictable. The accuracy of the prediction is one standard deviation. (orig.)
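
    For reference, the weakest-link Weibull failure probability underlying this kind of analysis, evaluated cell by cell rather than over infinitesimal volume elements, is P_f = 1 - exp(-(V/V0)(sigma/sigma0)^m). A sketch with hypothetical graphite-like parameters:

        import math

        def p_fail(sigma, V, m=8.0, sigma0=30.0, V0=1.0):   # MPa, cm^3 (hypothetical)
            return 1.0 - math.exp(-(V / V0) * (sigma / sigma0) ** m)

        # Weakest-link size effect: the more statistical cells a specimen
        # contains, the higher its failure probability at the same stress.
        for n_cells in (1, 10, 100):
            print(n_cells, round(p_fail(20.0, n_cells * 1.0), 4))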

  10. Research design and statistical analysis

    CERN Document Server

    Myers, Jerome L; Lorch Jr, Robert F

    2013-01-01

    Research Design and Statistical Analysis provides comprehensive coverage of the design principles and statistical concepts necessary to make sense of real data.  The book's goal is to provide a strong conceptual foundation to enable readers to generalize concepts to new research situations.  Emphasis is placed on the underlying logic and assumptions of the analysis and what it tells the researcher, the limitations of the analysis, and the consequences of violating assumptions.  Sampling, design efficiency, and statistical models are emphasized throughout. As per APA recommendations

  11. Debugging Nondeterministic Failures in Linux Programs through Replay Analysis

    Directory of Open Access Journals (Sweden)

    Shakaiba Majeed

    2018-01-01

    Full Text Available Reproducing a failure is the first and most important step in debugging because it enables us to understand the failure and track down its source. However, many programs are susceptible to nondeterministic failures that are hard to reproduce, which makes debugging extremely difficult. We first address the reproducibility problem by proposing an OS-level replay system for a uniprocessor environment that can capture and replay the nondeterministic events needed to reproduce a failure in Linux interactive and event-based programs. We then present an analysis method, called replay analysis, based on the proposed record and replay system to diagnose concurrency bugs in such programs. The replay analysis method uses a combination of static analysis, dynamic tracing during replay, and delta debugging to identify failure-inducing memory access patterns that lead to concurrency failure. The experimental results show that the presented record and replay system has low recording overhead and hence can be safely used in production systems to catch rarely occurring bugs. We also present a few concurrency-bug case studies from real-world applications to demonstrate the effectiveness of the proposed bug diagnosis framework.

  12. A meta-analysis of the association between diabetic patients and AVF failure in dialysis.

    Science.gov (United States)

    Yan, Yan; Ye, Dan; Yang, Liu; Ye, Wen; Zhan, Dandan; Zhang, Li; Xiao, Jun; Zeng, Yan; Chen, Qinkai

    2018-11-01

    The preferred vascular access for patients with end-stage renal failure needing hemodialysis is the native arteriovenous fistula (AVF), on account of its longevity, lower patient morbidity, lower hospitalization costs, lower risk of infection, and lower incidence of thrombotic complications. Accordingly, under the National Kidney Foundation (NKF)/Dialysis Outcomes Quality Initiative (DOQI) guidelines, AVF is used more than before. However, a significant percentage of AVFs fail to support dialysis therapy owing to a lack of adequate maturation. Among all factors, the presence of diabetes mellitus has been identified by some authors as a risk factor for the development of vascular access failure. Therefore, this review evaluates the current evidence concerning the correlation between diabetes and AVF failure. A search was conducted using MEDLINE, SCIENCE DIRECT, SPRINGER, WILEY-BLACKWELL, KARGER, EMbase, CNKI and WanFang Data, from database inception to January 2016. The analysis involved studies that contained subgroups of diabetic patients and compared their outcomes with those of non-diabetic adults. In total, 23 articles were retrieved and included in the review. The meta-analysis revealed a statistically significantly higher rate of AVF failure in diabetic patients compared with non-diabetic patients (OR = 1.682; 95% CI, 1.429-1.981; test of OR = 1: z = 6.25, p < .001). This review found an increased risk of AVF failure in diabetic patients. If confirmed by further prospective studies, preventive measures should be considered when planning AVF in diabetic patients.

  13. Accelerated Testing with Multiple Failure Modes under Several Temperature Conditions

    Directory of Open Access Journals (Sweden)

    Zongyue Yu

    2014-01-01

    Full Text Available A complicated device may have multiple failure modes, some of which are sensitive to low temperatures. To assess the reliability of a product with multiple failure modes, this paper presents an accelerated test in which both high and low temperatures are applied. First, an acceleration model based on the Arrhenius model, but accounting for the influence of both high and low temperatures, is proposed. Accordingly, an accelerated testing plan including both high and low temperatures is designed, and a statistical analysis method is developed. The reliability function of the product with multiple failure modes under variable working conditions is given by the proposed statistical analysis method. Finally, a numerical example is studied to illustrate the proposed accelerated testing. The results show that the proposed accelerated test is rather efficient.
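
    For reference, the classical Arrhenius acceleration factor the proposed model starts from is AF = exp((Ea/k)(1/T_use - 1/T_stress)); the paper's low-temperature extension is not reproduced here, and the activation energy and temperatures below are hypothetical.

        import math

        k = 8.617e-5                            # Boltzmann constant, eV/K

        def acceleration_factor(Ea, T_use, T_stress):   # temperatures in kelvin
            return math.exp((Ea / k) * (1.0 / T_use - 1.0 / T_stress))

        print(acceleration_factor(0.7, 298.15, 358.15)) # ~96x at 85 C versus 25 C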

  14. Failure Modes and Effects Analysis (FMEA): A Bibliography

    Science.gov (United States)

    2000-01-01

    Failure modes and effects analysis (FMEA) is a bottom-up analytical process that identifies process hazards, which helps managers understand the vulnerabilities of systems, as well as assess and mitigate risk. It is one of several engineering tools and techniques available to program and project managers aimed at increasing the likelihood of safe and successful NASA programs and missions. This bibliography references 465 documents in the NASA STI Database that contain the major concepts, failure modes or failure analysis, in either the basic index or the major subject terms.

  15. BACFIRE, Minimal Cut Sets Common Cause Failure Fault Tree Analysis

    International Nuclear Information System (INIS)

    Fussell, J.B.

    1983-01-01

    1 - Description of problem or function: BACFIRE, designed to aid in common cause failure analysis, searches among the basic events of a minimal cut set of the system logic model for common potential causes of failure. A potential cause of failure is called a qualitative failure characteristic. The algorithm searches the qualitative failure characteristics (supplied as program input) of the basic events contained in a set to find those characteristics common to all basic events. This search is repeated for all cut sets input to the program. Common cause failure analysis is thereby performed without including secondary failures in the system logic model. By using BACFIRE, a common cause failure analysis can be added to an existing system safety and reliability analysis. 2 - Method of solution: BACFIRE searches the qualitative failure characteristics of the basic events contained in a fault tree minimal cut set to find those characteristics common to all basic events, by either of two criteria. The first criterion can be met if all the basic events in a minimal cut set are associated by a condition which alone may increase the probability of multiple component malfunction. The second criterion is met if all the basic events in a minimal cut set are susceptible to the same secondary failure cause and are located in the same domain for that cause of secondary failure. 3 - Restrictions on the complexity of the problem - Maxima of: 1001 secondary failure maps, 101 basic events, 10 cut sets
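
    The core of the search is a set intersection over the characteristics of each cut set's basic events. A minimal sketch of that idea (the event names and characteristic tags are hypothetical and do not follow BACFIRE's input format):

        # For each minimal cut set, intersect the qualitative failure
        # characteristics of its basic events; a non-empty intersection
        # flags a potential common cause of failure.
        traits = {
            "PUMP-A":  {"room-101", "vendor-X", "humidity"},
            "PUMP-B":  {"room-101", "vendor-Y", "humidity"},
            "VALVE-C": {"room-204", "vendor-X"},
        }
        cut_sets = [{"PUMP-A", "PUMP-B"}, {"PUMP-A", "VALVE-C"}]

        for cs in cut_sets:
            common = set.intersection(*(traits[e] for e in cs))
            if common:
                print(sorted(cs), "share", sorted(common))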

  16. Failure analysis of vise jaw holders for hacksaw machine

    Directory of Open Access Journals (Sweden)

    Essam Ali Al-Bahkali

    2018-01-01

    Full Text Available Failure analysis of mechanical components has been investigated in many studies in the last few years. Failure analysis and prevention are important functions in all engineering disciplines, and materials engineers often take the lead role in the analysis of failures, whether a component or product fails in service or a failure occurs during manufacturing or production processing. In any case, one must determine the cause of the failure to prevent future occurrences and/or to improve the performance of the device, component or structure. For example, the vise jaw holders of hacksaws can break due to accidental heavy loads or machine misuse. The parts that break are the stationary and movable vise jaw holders and the connector power screw between the holders. To investigate the failure of these components, a three-dimensional finite element stress analysis was performed. First, the broken components of the hacksaw machine were identified. In addition, the materials of the broken parts were identified, a CAD model was built, and the hacksaw mechanism was analyzed to determine the actual loads applied to the broken parts. After analyzing the model using Abaqus CAE software, the results showed that the locations of high stress coincided with the failure locations in the original broken parts. Furthermore, the power screw was subjected to a high load, which deformed it. Also, the stationary vise jaw holder was broken by impact, because it was not touched by the power screw until the movable vise jaw holder broke. Conclusions are drawn from the failure analysis, and a way to improve the design of the broken parts is suggested.

  17. Pipe failure probability - the Thomas paper revisited

    International Nuclear Information System (INIS)

    Lydell, B.O.Y.

    2000-01-01

    Almost twenty years ago, in Volume 2 of Reliability Engineering (the predecessor of Reliability Engineering and System Safety), a paper by H. M. Thomas of Rolls Royce and Associates Ltd. presented a generalized approach to the estimation of piping and vessel failure probability. The 'Thomas approach' used insights from actual failure statistics to calculate the probability of leakage and the conditional probability of rupture given leakage. It was intended for practitioners without access to data on the service experience with piping and piping system components. This article revisits the Thomas paper by drawing on insights from the development of a new database on piping failures in commercial nuclear power plants worldwide (SKI-PIPE). Partially sponsored by the Swedish Nuclear Power Inspectorate (SKI), the R and D leading up to this note was performed during 1994-1999. Motivated by the data requirements of reliability analysis and probabilistic safety assessment (PSA), the new database supports statistical analysis of piping failure data. Against the background of this database development program, the article reviews the applicability of the 'Thomas approach' in applied risk and reliability analysis. It addresses the question of whether a new and expanded database on the service experience with piping systems would alter the original piping reliability correlation suggested by H. M. Thomas.

  18. Failure Propagation Modeling and Analysis via System Interfaces

    Directory of Open Access Journals (Sweden)

    Lin Zhao

    2016-01-01

    Full Text Available Safety-critical systems must be shown to be acceptably safe to deploy and use in their operational environment. One of the key concerns in developing safety-critical systems is to understand how the system behaves in the presence of failures, regardless of whether a failure is triggered by the external environment or caused by internal errors. Safety assessment at the early stages of system development involves the analysis of potential failures and their consequences. Increasingly, for complex systems, model-based safety assessment is becoming more widely used. In this paper we propose an approach for safety analysis based on system interface models. By extending interaction models at the system interface level with failure modes, as well as relevant portions of the physical system to be controlled, automated support can be provided for much of the failure analysis. We focus on fault modeling and on how to compute minimal cut sets. In particular, we explore a state-space reconstruction strategy and a bounded searching technique to reduce the number of states that need to be analyzed, which markedly improves the efficiency of the cut-set searching algorithm.

  19. Use of fuel failure correlations in accident analysis

    International Nuclear Information System (INIS)

    O'Dell, L.D.; Baars, R.E.; Waltar, A.E.

    1975-05-01

    The MELT-III code for analysis of a Transient Overpower (TOP) accident in an LMFBR is briefly described, including failure criteria currently applied in the code. Preliminary results of calculations exploring failure patterns in time and space in the reactor core are reported and compared for the two empirical fuel failure correlations employed in the code. (U.S.)

  20. Analysis of the reliability of quality assurance of welded nuclear pressure vessels with regard to catastrophic failure

    Energy Technology Data Exchange (ETDEWEB)

    Ostberg, G [Lund Institute of Technology, Dept. of Materials Engineering (Sweden); Klingenstierna, B [FTL, Military Electronics Laboratory, National Defence Research Institute, Stockholm (Sweden); Sjoberg, L [Goteborg Univ., Dept. of Psychology (Sweden)

    1976-07-01

    The project is described as an analysis of the reliability of quality assurance of welded nuclear pressure vessels with regard to catastrophic failure. Its scope extends beyond previous statistical evaluations of the risk of catastrophic failure, beyond previous studies of human malfunction, and beyond current studies of probabilistic fracture mechanics. The latter deal only with 'normal' data and 'normal' processes and procedures according to established rules and regulations, whereas the present study concerns deficiencies or more or less complete failures of normal procedures and processes. Hopefully such events will prove to be rare enough to be characterized as 'unique'; this, in turn, means that the result of the investigation is not a new statistical figure but rather a survey of the types and frequencies of errors and error-producing conditions. The emphasis is on the main pressure vessel; related information on the primary circuit is included only when this can be done without excessive effort or cost. The avenues of approach, in terms of technical-academic disciplines, are reliability techniques and the psychology of analysis work and control processes.

  1. Statistical data analysis handbook

    National Research Council Canada - National Science Library

    Wall, Francis J

    1986-01-01

    It must be emphasized that this is not a text book on statistics. Instead it is a working tool that presents data analysis in clear, concise terms which can be readily understood even by those without formal training in statistics...

  2. Light water reactor lower head failure analysis

    International Nuclear Information System (INIS)

    Rempe, J.L.; Chavez, S.A.; Thinnes, G.L.

    1993-10-01

    This document presents the results from a US Nuclear Regulatory Commission-sponsored research program to investigate the mode and timing of vessel lower head failure. Major objectives of the analysis were to identify plausible failure mechanisms and to develop a method for determining which failure mode would occur first in different light water reactor designs and accident conditions. Failure mechanisms, such as tube ejection, tube rupture, global vessel failure, and localized vessel creep rupture, were studied. Newly developed models and existing models were applied to predict which failure mechanism would occur first in various severe accident scenarios. So that a broader range of conditions could be considered simultaneously, calculations relied heavily on models with closed-form or simplified numerical solution techniques. Finite element techniques were employed for analytical model verification and for examining more detailed phenomena. High-temperature creep and tensile data were obtained for predicting vessel and penetration structural response.

  3. Light water reactor lower head failure analysis

    Energy Technology Data Exchange (ETDEWEB)

    Rempe, J.L.; Chavez, S.A.; Thinnes, G.L. [EG and G Idaho, Inc., Idaho Falls, ID (United States)] [and others]

    1993-10-01

    This document presents the results from a US Nuclear Regulatory Commission-sponsored research program to investigate the mode and timing of vessel lower head failure. Major objectives of the analysis were to identify plausible failure mechanisms and to develop a method for determining which failure mode would occur first in different light water reactor designs and accident conditions. Failure mechanisms, such as tube ejection, tube rupture, global vessel failure, and localized vessel creep rupture, were studied. Newly developed models and existing models were applied to predict which failure mechanism would occur first in various severe accident scenarios. So that a broader range of conditions could be considered simultaneously, calculations relied heavily on models with closed-form or simplified numerical solution techniques. Finite element techniques were employed for analytical model verification and for examining more detailed phenomena. High-temperature creep and tensile data were obtained for predicting vessel and penetration structural response.

  4. Observations in the statistical analysis of NBG-18 nuclear graphite strength tests

    International Nuclear Information System (INIS)

    Hindley, Michael P.; Mitchell, Mark N.; Blaine, Deborah C.; Groenwold, Albert A.

    2012-01-01

    Highlights: ► Statistical analysis of NBG-18 nuclear graphite strength tests. ► A Weibull distribution and a normal distribution are tested for all data. ► A bimodal distribution in the CS data is confirmed. ► The CS data set has the lowest variance. ► A combined data set is formed and has a Weibull distribution. - Abstract: The purpose of this paper is to report on the selection of a statistical distribution chosen to represent the experimental material strength of NBG-18 nuclear graphite. Three large sets of samples were tested during the material characterisation of the Pebble Bed Modular Reactor and Core Structure Ceramics materials. These sets of samples are tensile strength, flexural strength and compressive strength (CS) measurements. A relevant statistical fit is determined, and the goodness of fit is also evaluated for each data set. The data sets are also normalised for ease of comparison and combined into one representative data set, and the validity of this approach is demonstrated. A second failure mode distribution is found in the CS test data. Identifying this failure mode supports similar observations made in the past. The success of fitting the Weibull distribution through the normalised data sets improves the basis for the estimates of the variability. This could also imply that the variability in graphite strength for the different strength measures is based on the same flaw distribution and is thus a property of the material.
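
    For reference, the Weibull modulus is commonly estimated from strength data via the linearisation ln(-ln(1-F)) = m·ln(sigma) - m·ln(sigma0), with median-rank plotting positions. The sketch below uses synthetic strength values, not the NBG-18 data.

        import math

        strengths = sorted([24.1, 25.3, 26.0, 26.8, 27.5,
                            28.1, 28.9, 29.6, 30.4, 31.8])   # MPa, synthetic
        n = len(strengths)
        xs = [math.log(s) for s in strengths]
        ys = [math.log(-math.log(1.0 - (i - 0.3) / (n + 0.4)))  # median ranks
              for i in range(1, n + 1)]

        xb, yb = sum(xs) / n, sum(ys) / n
        m = sum((x - xb) * (y - yb) for x, y in zip(xs, ys)) / \
            sum((x - xb) ** 2 for x in xs)                   # Weibull modulus (slope)
        sigma0 = math.exp(xb - yb / m)                       # characteristic strength
        print(round(m, 2), round(sigma0, 2))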

  5. Two-Sample Statistics for Testing the Equality of Survival Functions Against Improper Semi-parametric Accelerated Failure Time Alternatives: An Application to the Analysis of a Breast Cancer Clinical Trial

    Science.gov (United States)

    BROËT, PHILIPPE; TSODIKOV, ALEXANDER; DE RYCKE, YANN; MOREAU, THIERRY

    2010-01-01

    This paper presents two-sample statistics suited for testing equality of survival functions against improper semi-parametric accelerated failure time alternatives. These tests are designed for comparing either the short- or the long-term effect of a prognostic factor, or both. These statistics are obtained as partial likelihood score statistics from a time-dependent Cox model. As a consequence, the proposed tests can be very easily implemented using widely available software. A breast cancer clinical trial is presented as an example to demonstrate the utility of the proposed tests. PMID:15293627

  6. Two-sample statistics for testing the equality of survival functions against improper semi-parametric accelerated failure time alternatives: an application to the analysis of a breast cancer clinical trial.

    Science.gov (United States)

    Broët, Philippe; Tsodikov, Alexander; De Rycke, Yann; Moreau, Thierry

    2004-06-01

    This paper presents two-sample statistics suited for testing equality of survival functions against improper semi-parametric accelerated failure time alternatives. These tests are designed for comparing either the short- or the long-term effect of a prognostic factor, or both. These statistics are obtained as partial likelihood score statistics from a time-dependent Cox model. As a consequence, the proposed tests can be very easily implemented using widely available software. A breast cancer clinical trial is presented as an example to demonstrate the utility of the proposed tests.

  7. The statistical analysis of failure of a MEVATRON77 DX67 linear accelerator over a ten year period

    CERN Document Server

    Aoyama, H; Tahara, S; Uno, H; Kadohisa, S; Azuma, Y; Nakagiri, Y; Hiraki, Y

    2003-01-01

    A linear accelerator (linac) plays a leading role in radiation therapy. A linac consists of complicated main parts and systems, and highly accurate operational procedures must be maintained. Operational failures occur for various reasons. In this report, the failure occurrences of one linac over a ten-year period were recorded and analyzed. The subject model was a MEVATRON77 DX67 (Siemens, Inc.). The failure rate of each system, the classification of the types of failure, the operating situation at the time of failure, and the average service life of the main parts were totaled. Moreover, the relation between the number of treatments delivered (operating efficiency) and the corresponding failure rate, and the relation between the environment (temperature and humidity) and the failure rate attributed to other systems, were analyzed. In this report, irradiation interruption was also included with situations where treatment was unable to begin in total for the number o...

  8. A streamlined failure mode and effects analysis

    International Nuclear Information System (INIS)

    Ford, Eric C.; Smith, Koren; Terezakis, Stephanie; Croog, Victoria; Gollamudi, Smitha; Gage, Irene; Keck, Jordie; DeWeese, Theodore; Sibley, Greg

    2014-01-01

    Purpose: Explore the feasibility and impact of a streamlined failure mode and effects analysis (FMEA) using a structured process that is designed to minimize staff effort. Methods: FMEA for the external beam process was conducted at an affiliate radiation oncology center that treats approximately 60 patients per day. A structured FMEA process was developed which included clearly defined roles and goals for each phase. A core group of seven people was identified and a facilitator was chosen to lead the effort. Failure modes were identified and scored according to the FMEA formalism. A risk priority number, RPN, was calculated and used to rank failure modes. Failure modes with RPN > 150 received safety improvement interventions. Staff effort was carefully tracked throughout the project. Results: Fifty-two failure modes were identified, 22 collected during meetings, and 30 from take-home worksheets. The four top-ranked failure modes were: delay in film check, missing pacemaker protocol/consent, critical structures not contoured, and pregnant patient simulated without the team's knowledge of the pregnancy. These four failure modes had RPN > 150 and received safety interventions. The FMEA was completed in one month in four 1-h meetings. A total of 55 staff hours were required and, additionally, 20 h by the facilitator. Conclusions: Streamlined FMEA provides a means of accomplishing a relatively large-scale analysis with modest effort. One potential value of FMEA is that it potentially provides a means of measuring the impact of quality improvement efforts through a reduction in risk scores. Future study of this possibility is needed.
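
    For reference, the RPN in the FMEA formalism is the product of the occurrence, severity, and detectability scores, compared against the action threshold used here (RPN > 150). A minimal sketch with hypothetical scores attached to some of the failure modes named above:

        modes = {
            "delay in film check":               (6, 7, 5),  # occurrence, severity, detectability
            "critical structures not contoured": (4, 9, 5),
            "wrong accessory mounted":           (3, 6, 4),
        }

        ranked = sorted(modes.items(),
                        key=lambda kv: kv[1][0] * kv[1][1] * kv[1][2],
                        reverse=True)
        for name, (occ, sev, det) in ranked:
            rpn = occ * sev * det
            print(f"{name}: RPN = {rpn}" + ("  <- intervene" if rpn > 150 else ""))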

  9. A streamlined failure mode and effects analysis.

    Science.gov (United States)

    Ford, Eric C; Smith, Koren; Terezakis, Stephanie; Croog, Victoria; Gollamudi, Smitha; Gage, Irene; Keck, Jordie; DeWeese, Theodore; Sibley, Greg

    2014-06-01

    Explore the feasibility and impact of a streamlined failure mode and effects analysis (FMEA) using a structured process that is designed to minimize staff effort. FMEA for the external beam process was conducted at an affiliate radiation oncology center that treats approximately 60 patients per day. A structured FMEA process was developed which included clearly defined roles and goals for each phase. A core group of seven people was identified and a facilitator was chosen to lead the effort. Failure modes were identified and scored according to the FMEA formalism. A risk priority number, RPN, was calculated and used to rank failure modes. Failure modes with RPN > 150 received safety improvement interventions. Staff effort was carefully tracked throughout the project. Fifty-two failure modes were identified, 22 collected during meetings, and 30 from take-home worksheets. The four top-ranked failure modes were: delay in film check, missing pacemaker protocol/consent, critical structures not contoured, and pregnant patient simulated without the team's knowledge of the pregnancy. These four failure modes had RPN > 150 and received safety interventions. The FMEA was completed in one month in four 1-h meetings. A total of 55 staff hours were required and, additionally, 20 h by the facilitator. Streamlined FMEA provides a means of accomplishing a relatively large-scale analysis with modest effort. One potential value of FMEA is that it potentially provides a means of measuring the impact of quality improvement efforts through a reduction in risk scores. Future study of this possibility is needed.

  10. A streamlined failure mode and effects analysis

    Energy Technology Data Exchange (ETDEWEB)

    Ford, Eric C., E-mail: eford@uw.edu; Smith, Koren; Terezakis, Stephanie; Croog, Victoria; Gollamudi, Smitha; Gage, Irene; Keck, Jordie; DeWeese, Theodore; Sibley, Greg [Department of Radiation Oncology and Molecular Radiation Sciences, Johns Hopkins University, Baltimore, MD 21287 (United States)

    2014-06-15

    Purpose: Explore the feasibility and impact of a streamlined failure mode and effects analysis (FMEA) using a structured process that is designed to minimize staff effort. Methods: FMEA for the external beam process was conducted at an affiliate radiation oncology center that treats approximately 60 patients per day. A structured FMEA process was developed which included clearly defined roles and goals for each phase. A core group of seven people was identified and a facilitator was chosen to lead the effort. Failure modes were identified and scored according to the FMEA formalism. A risk priority number, RPN, was calculated and used to rank failure modes. Failure modes with RPN > 150 received safety improvement interventions. Staff effort was carefully tracked throughout the project. Results: Fifty-two failure modes were identified, 22 collected during meetings, and 30 from take-home worksheets. The four top-ranked failure modes were: delay in film check, missing pacemaker protocol/consent, critical structures not contoured, and pregnant patient simulated without the team's knowledge of the pregnancy. These four failure modes had RPN > 150 and received safety interventions. The FMEA was completed in one month in four 1-h meetings. A total of 55 staff hours were required and, additionally, 20 h by the facilitator. Conclusions: Streamlined FMEA provides a means of accomplishing a relatively large-scale analysis with modest effort. One potential value of FMEA is that it potentially provides a means of measuring the impact of quality improvement efforts through a reduction in risk scores. Future study of this possibility is needed.
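
    The RPN arithmetic used in the three records above is simple enough to script. The sketch below (Python; the failure-mode names and 1-10 scores are invented for illustration, not the study's data) ranks failure modes by RPN = severity x occurrence x detectability and flags those above the RPN > 150 cutoff the authors used:

        # Minimal FMEA ranking sketch: RPN = severity x occurrence x detectability.
        # Failure modes and 1-10 scores are illustrative, not the study's data.
        failure_modes = {
            "delay in film check": (7, 6, 5),
            "missing pacemaker protocol": (9, 3, 7),
            "critical structure not contoured": (8, 4, 6),
            "wrong patient chart pulled": (9, 2, 3),
        }

        CUTOFF = 150  # threshold above which safety interventions were assigned

        ranked = sorted(
            ((name, s * o * d) for name, (s, o, d) in failure_modes.items()),
            key=lambda item: item[1],
            reverse=True,
        )
        for name, rpn in ranked:
            print(f"{rpn:4d}  {name}" + ("  -> intervene" if rpn > CUTOFF else ""))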

  11. Risk analysis of geothermal power plants using Failure Modes and Effects Analysis (FMEA) technique

    International Nuclear Information System (INIS)

    Feili, Hamid Reza; Akar, Navid; Lotfizadeh, Hossein; Bairampour, Mohammad; Nasiri, Sina

    2013-01-01

    Highlights: • Using Failure Modes and Effects Analysis (FMEA) to find potential failures in geothermal power plants. • We considered 5 major parts of geothermal power plants for risk analysis. • Risk Priority Number (RPN) is calculated for all failure modes. • Corrective actions are recommended to eliminate or decrease the risk of failure modes. - Abstract: Renewable energy plays a key role in the transition toward a low carbon economy and the provision of a secure supply of energy. Geothermal energy is a versatile source as a form of renewable energy that meets popular demand. Since Geothermal Power Plants (GPPs) face various failures, a team-engineering technique to eliminate or reduce potential failures is needed. Because no published record of an FMEA applied to GPPs with common failure modes has been found, this paper considers the utilization of Failure Modes and Effects Analysis (FMEA) as a convenient technique for determining, classifying and analyzing common failures in typical GPPs. As a result, an appropriate risk scoring of the occurrence, detection and severity of failure modes, and computation of the Risk Priority Number (RPN) for detecting high-potential failures, is achieved. To improve the accuracy and efficiency of the analysis, XFMEA software is utilized. Moreover, 5 major parts of a GPP are studied to propose a suitable approach for developing GPPs and increasing reliability by recommending corrective actions for each failure mode.

  12. Association of sleep bruxism with ceramic restoration failure: A systematic review and meta-analysis.

    Science.gov (United States)

    de Souza Melo, Gilberto; Batistella, Elis Ângela; Bertazzo-Silveira, Eduardo; Simek Vega Gonçalves, Thais Marques; Mendes de Souza, Beatriz Dulcineia; Porporatti, André Luís; Flores-Mir, Carlos; De Luca Canto, Graziela

    2018-03-01

    Ceramic restorations are popular because of their excellent optical properties. However, failures are still a major concern, and dentists are confronted with the following question: is sleep bruxism (SB) associated with an increased frequency of ceramic restoration failures? The purpose of this systematic review and meta-analysis was to assess whether the presence of SB is associated with increased ceramic restoration failure. Observational studies and clinical trials that evaluated the short- and long-term survival rate of ceramic restorations in SB participants were selected. Sleep bruxism diagnostic criteria must have included at least 1 of the following: questionnaire, clinical evaluation, or polysomnography. Seven databases, in addition to 3 nonpeer-reviewed literature databases, were searched. The risk of bias was assessed by using the meta-analysis of statistics assessment and review instrument (MAStARI) checklist. Eight studies were included for qualitative synthesis, but only 5 for the meta-analysis. Three studies were categorized as moderate risk and 5 as high risk of bias. Clinical and methodological heterogeneity across studies were considered high. Increased hazard ratio (HR=7.74; 95% confidence interval [CI]=2.50 to 23.95) and odds ratio (OR=2.52; 95% CI=1.24 to 5.12) were observed considering only anterior ceramic veneers. Nevertheless, limited data from the meta-analysis and from the restricted number of included studies suggested that differences in the overall odds of failure concerning SB and other types of ceramic restorations did not favor or disfavor any association (OR=1.10; 95% CI=0.43 to 2.8). The overall quality of evidence was considered very low according to the GRADE criteria. Within the limitations of this systematic review, the overall result from the meta-analysis did not favor any association between SB and increased odds of failure for ceramic restorations. Copyright © 2017 Editorial Council for the Journal of Prosthetic Dentistry

  13. Failure analysis of a helicopter's main rotor bearing

    International Nuclear Information System (INIS)

    Shahzad, M.; Qureshi, A.H.; Waqas, H.; Hussain, N.; Ali, N.

    2011-01-01

    Presented results report some of the findings of a detailed failure analysis carried out on a main rotor hub assembly, which had symptoms of burning and mechanical damage. The analysis suggests environmental degradation of the grease which causes pitting on bearing-balls. The consequent inefficient lubrication raises the temperature which leads to the smearing of cage material (brass) on the bearing-balls and ultimately causes the failure. The analysis has been supported by the microstructural studies, thermal analysis and micro-hardness testing performed on the affected main rotor bearing parts. (author)

  14. Failure probability analysis of optical grid

    Science.gov (United States)

    Zhong, Yaoquan; Guo, Wei; Sun, Weiqiang; Jin, Yaohui; Hu, Weisheng

    2008-11-01

    Optical grid, an integrated computing environment based on an optical network, is expected to be an efficient infrastructure to support advanced data-intensive grid applications. In an optical grid, faults of both computational and network resources are inevitable due to the large scale and high complexity of the system. As optical-network-based distributed computing systems are extensively applied to data processing, the application failure probability has become an important indicator of application quality and an important aspect that operators consider. This paper presents a task-based analysis method for the application failure probability in an optical grid. The failure probability of the entire application can then be quantified, and the performance of different backup strategies in reducing the application failure probability can be compared, so that the different requirements of different clients can be satisfied. In an optical grid, when a DAG-based (directed acyclic graph) application is executed under different backup strategies, the application failure probability and the application completion time differ. This paper proposes a new multi-objective differentiated services algorithm (MDSA). The new application scheduling algorithm can guarantee the failure probability requirement and improve network resource utilization, realizing a compromise between the network operator and the application submitter. Differentiated services can thus be achieved in an optical grid.

  15. Prophylactic antibiotic regimen and dental implant failure: a meta-analysis.

    Science.gov (United States)

    Chrcanovic, B R; Albrektsson, T; Wennerberg, A

    2014-12-01

    The aim of this meta-analysis was to investigate whether there are any positive effects of prophylactic antibiotic regimen on implant failure rates and post-operative infection when performing dental implant treatment in healthy individuals. An electronic search without time or language restrictions was undertaken in March 2014. Eligibility criteria included clinical human studies, either randomised or not. The search strategy resulted in 14 publications. The I(2) statistic was used to express the percentage of the total variation across studies due to heterogeneity. The inverse variance method was used with a fixed- or random-effects model, depending on the heterogeneity. The estimates of relative effect were expressed in risk ratio (RR) with 95% confidence interval. Six studies were judged to be at high risk of bias, whereas one study was considered at moderate risk, and six studies were considered at low risk of bias. The test for overall effect showed that the difference between the procedures (use versus non-use of antibiotics) significantly affected the implant failure rates (P = 0.0002), with a RR of 0.55 (95% CI 0.41-0.75). The number needed to treat (NNT) to prevent one patient having an implant failure was 50 (95% CI 33-100). There were no apparent significant effects of prophylactic antibiotics on the occurrence of post-operative infections in healthy patients receiving implants (P = 0.520). A sensitivity analysis did not reveal difference when studies judged as having high risk of bias were not considered. The results have to be interpreted with caution due to the presence of several confounding factors in the included studies. © 2014 John Wiley & Sons Ltd.
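
    The pooling arithmetic behind figures such as the RR of 0.55 (95% CI 0.41-0.75) and the NNT of 50 can be sketched in a few lines. A minimal fixed-effect, inverse-variance sketch in Python, using invented per-study risk ratios rather than the meta-analysis data:

        import math

        # Fixed-effect inverse-variance pooling of risk ratios (invented data).
        # Each tuple: (risk ratio, lower 95% CI, upper 95% CI) from one study.
        studies = [(0.48, 0.25, 0.92), (0.61, 0.34, 1.10), (0.52, 0.30, 0.90)]

        weights, weighted_logs = [], []
        for rr, lo, hi in studies:
            se = (math.log(hi) - math.log(lo)) / (2 * 1.96)  # SE from CI width
            w = 1.0 / se ** 2
            weights.append(w)
            weighted_logs.append(w * math.log(rr))

        pooled = sum(weighted_logs) / sum(weights)
        se_pooled = math.sqrt(1.0 / sum(weights))
        lo, hi = (math.exp(pooled - 1.96 * se_pooled),
                  math.exp(pooled + 1.96 * se_pooled))
        print(f"pooled RR = {math.exp(pooled):.2f} (95% CI {lo:.2f}-{hi:.2f})")

        # NNT = 1 / absolute risk reduction, e.g. control risk 4% -> treated 2%:
        print(f"NNT = {1 / (0.04 - 0.02):.0f}")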

  16. Failure Bounding And Sensitivity Analysis Applied To Monte Carlo Entry, Descent, And Landing Simulations

    Science.gov (United States)

    Gaebler, John A.; Tolson, Robert H.

    2010-01-01

    In the study of entry, descent, and landing, Monte Carlo sampling methods are often employed to study the uncertainty in the designed trajectory. The large number of uncertain inputs and outputs, coupled with complicated non-linear models, can make interpretation of the results difficult. Three methods that provide statistical insights are applied to an entry, descent, and landing simulation. The advantages and disadvantages of each method are discussed in terms of the insights gained versus the computational cost. The first method investigated was failure domain bounding which aims to reduce the computational cost of assessing the failure probability. Next a variance-based sensitivity analysis was studied for the ability to identify which input variable uncertainty has the greatest impact on the uncertainty of an output. Finally, probabilistic sensitivity analysis is used to calculate certain sensitivities at a reduced computational cost. These methods produce valuable information that identifies critical mission parameters and needs for new technology, but generally at a significant computational cost.
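
    Of the three methods, the variance-based sensitivity analysis is the easiest to show in miniature. Below is a pick-freeze (Saltelli-style) sketch of first-order Sobol indices for a toy response standing in for an EDL simulation output; the model and input distributions are invented:

        import numpy as np

        # Pick-freeze estimate of first-order Sobol indices for a toy model.
        rng = np.random.default_rng(0)
        N = 100_000

        def model(x):
            x1, x2, x3 = x[:, 0], x[:, 1], x[:, 2]
            return x1 + 0.5 * x2**2 + 0.1 * x1 * x3

        A = rng.normal(size=(N, 3))  # two independent input sample blocks
        B = rng.normal(size=(N, 3))
        yA, yB = model(A), model(B)
        var_y = np.concatenate([yA, yB]).var()

        for i in range(3):
            ABi = A.copy()
            ABi[:, i] = B[:, i]  # replace column i of A with column i of B
            yABi = model(ABi)
            # Saltelli (2010) first-order estimator: V_i = mean(yB * (yABi - yA))
            S_i = np.mean(yB * (yABi - yA)) / var_y
            print(f"first-order Sobol index S_{i + 1} ~= {S_i:.3f}")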

  17. Launch Vehicle Failure Dynamics and Abort Triggering Analysis

    Science.gov (United States)

    Hanson, John M.; Hill, Ashely D.; Beard, Bernard B.

    2011-01-01

    Launch vehicle ascent is a time of high risk for an on-board crew. There are many types of failures that can kill the crew if the crew is still on-board when the failure becomes catastrophic. For some failure scenarios, there is plenty of time for the crew to be warned and to depart, whereas in some there is insufficient time for the crew to escape. There is a large fraction of possible failures for which time is of the essence and a successful abort is possible if the detection and action happens quickly enough. This paper focuses on abort determination based primarily on data already available from the GN&C system. This work is the result of failure analysis efforts performed during the Ares I launch vehicle development program. Derivation of attitude and attitude rate abort triggers to ensure that abort occurs as quickly as possible when needed, but that false positives are avoided, forms a major portion of the paper. Some of the potential failure modes requiring use of these triggers are described, along with analysis used to determine the success rate of getting the crew off prior to vehicle demise.

  18. Failures to further developing orphan medicinal products after designation granted in Europe: an analysis of marketing authorisation failures and abandoned drugs.

    Science.gov (United States)

    Giannuzzi, Viviana; Landi, Annalisa; Bosone, Enrico; Giannuzzi, Floriana; Nicotri, Stefano; Torrent-Farnell, Josep; Bonifazi, Fedele; Felisi, Mariagrazia; Bonifazi, Donato; Ceci, Adriana

    2017-09-11

    The research and development process in the field of rare diseases is characterised by many well-known difficulties, and a large percentage of orphan medicinal products do not reach marketing approval. This work aims at identifying orphan medicinal products that failed the developmental process and investigating reasons for and possible factors influencing failures. Drugs designated in Europe under Regulation (European Commission) 141/2000 in the period 2000-2012 were investigated in terms of the following failures: (1) marketing authorisation failures (refused or withdrawn) and (2) drugs abandoned by sponsors during development. Possible risk factors for failure were analysed using statistically validated methods. This study points out that 437 out of 788 designations are still under development, while 219 failed the developmental process. Among the latter, 34 failed the marketing authorisation process and 185 were abandoned during the developmental process. In the first group of drugs (marketing authorisation failures), 50% reached phase II, 47% reached phase III and 3% reached phase I, while in the second group (abandoned drugs), the majority of orphan medicinal products apparently never started the development process, since no data on 48.1% of them were published and 3.2% did not progress beyond the non-clinical stage. The reasons for failures of marketing authorisation were: efficacy/safety issues (26), insufficient data (12), quality issues (7), regulatory issues on trials (4) and commercial reasons (1). The main causes for abandoned drugs were efficacy/safety issues (reported in 54 cases), inactive companies (25.4%), change of company strategy (8.1%) and drug competition (10.8%). No information concerning reasons for failure was available for 23.2% of the analysed products. This analysis shows that failures occurred in 27.8% of all designations granted in Europe, the main reasons being safety and efficacy issues. Moreover, the stage of development

  19. Failure analysis on a ruptured petrochemical pipe

    Energy Technology Data Exchange (ETDEWEB)

    Harun, Mohd [Industrial Technology Division, Malaysian Nuclear Agency, Ministry of Science, Technology and Innovation Malaysia, Bangi, Kajang, Selangor (Malaysia); Shamsudin, Shaiful Rizam; Kamardin, A. [Univ. Malaysia Perlis, Jejawi, Arau (Malaysia). School of Materials Engineering

    2010-08-15

    The failure took place on a welded elbow pipe which exhibited a catastrophic transverse rupture. The failure was located in the welding HAZ region, parallel to the welding path. Branching cracks were detected at the edge of the rupture area. Deposits of corrosion products were also spotted. The optical microscope analysis showed the presence of transgranular failures which were related to stress corrosion cracking (SCC) and were predominantly caused by the welding residual stress. The significant difference in hardness between the welded area and the pipe confirmed the findings. Moreover, the failure was also promoted by the low Mo content of the stainless steel pipe, which was detected by means of a spark emission spectrometer. (orig.)

  20. Corrosion induced failure analysis of subsea pipelines

    International Nuclear Information System (INIS)

    Yang, Yongsheng; Khan, Faisal; Thodi, Premkumar; Abbassi, Rouzbeh

    2017-01-01

    Pipeline corrosion is one of the main causes of subsea pipeline failure. It is necessary to monitor and analyze pipeline condition to effectively predict likely failure. This paper presents an approach to analyze the observed abnormal events to assess the condition of subsea pipelines. First, it focuses on establishing a systematic corrosion failure model by Bow-Tie (BT) analysis, and subsequently the BT model is mapped into a Bayesian Network (BN) model. The BN model facilitates the modelling of interdependency of identified corrosion causes, as well as the updating of failure probabilities depending on the arrival of new information. Furthermore, an Object-Oriented Bayesian Network (OOBN) has been developed to better structure the network and to provide an efficient updating algorithm. Based on this OOBN model, probability updating and probability adaptation are performed at regular intervals to estimate the failure probabilities due to corrosion and potential consequences. This results in an interval-based condition assessment of subsea pipeline subjected to corrosion. The estimated failure probabilities would help prioritize action to prevent and control failures. Practical application of the developed model is demonstrated using a case study. - Highlights: • A Bow-Tie (BT) based corrosion failure model linking causation with the potential losses. • A novel Object-Oriented Bayesian Network (OOBN) based corrosion failure risk model. • Probability of failure updating and adaptation with respect to time using OOBN model. • Application of the proposed model to develop and test strategies to minimize failure risk.

  1. Statistical Power in Meta-Analysis

    Science.gov (United States)

    Liu, Jin

    2015-01-01

    Statistical power is important in a meta-analysis study, although few studies have examined the performance of simulated power in meta-analysis. The purpose of this study is to inform researchers about statistical power estimation on two sample mean difference test under different situations: (1) the discrepancy between the analytical power and…

  2. Analysis Method of Common Cause Failure on Non-safety Digital Control System

    Energy Technology Data Exchange (ETDEWEB)

    Kim, Yun Goo; Oh, Eun Gse [KHNP, Daejeon (Korea, Republic of)

    2014-08-15

    The effects of common cause failure on safety digital instrumentation and control systems have been considered in defense-in-depth analysis with safety analysis methods. However, the effects of common cause failure on non-safety digital instrumentation and control systems should also be evaluated, since common cause failure can be included among the credible failures of a non-safety system. In the I and C architecture of a nuclear power plant, many design features have been applied for the functional integrity of the control system. One of them is segmentation, which limits the propagation of faults in the I and C architecture. Some effects of common cause failure can also be limited by segmentation. Therefore, this paper considers two types of failure modes: failures within one segmented control group, and failures across multiple control groups, because segmentation cannot defend against all effects of common cause failure. For each type, the worst failure scenario needs to be determined, and an analysis method for doing so is proposed in this paper. The evaluation can be qualitative when there is sufficient justification that the effects are bounded by the previous safety analysis. When they are not so bounded, additional analysis should be done, either with the conservative assumptions of the previous safety analysis or with a best-estimate method using realistic assumptions.

  3. Dependent failures of diesel generators

    International Nuclear Information System (INIS)

    Mankamo, T.; Pulkkinen, U.

    1982-01-01

    This survey of dependent failures (common-cause failures) is based on the data of diesel generator failures in U.S. nuclear power plants as reported in Licensee Event Reports. Failures were classified into random and potentially dependent failures. All failures due to design errors, manufacturing or installation errors, maintenance errors, or deviations in the operational environment were classified as potentially dependent failures. The statistical dependence between failures was estimated from the relative portion of multiple failures. The results confirm the earlier view of the significance of statistical dependence; a strong dependence on the age of the diesel generator was found in each failure class except random failures and maintenance errors, which had a nearly constant frequency independent of diesel generator age.
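
    Estimating dependence from "the relative portion of multiple failures" is essentially a beta-factor calculation. A minimal sketch with invented counts (not the LER data):

        # Beta-factor style estimate of dependence from failure counts.
        # beta = fraction of failures that are common-cause (multiple-unit) events.
        single_failures = 180   # events affecting one diesel generator
        multiple_failures = 14  # events affecting two or more generators at once

        beta = multiple_failures / (single_failures + multiple_failures)
        print(f"estimated beta factor: {beta:.3f}")

        # If the total failure-on-demand probability of one generator is Qt,
        # the common-cause contribution is approximately beta * Qt:
        Qt = 2e-2  # illustrative failure-on-demand probability
        print(f"common-cause failure probability ~ {beta * Qt:.2e} per demand")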

  4. Computer aided approach to qualitative and quantitative common cause failure analysis for complex systems

    International Nuclear Information System (INIS)

    Cate, C.L.; Wagner, D.P.; Fussell, J.B.

    1977-01-01

    Common cause failure analysis, also called common mode failure analysis, is an integral part of a complete system reliability analysis. Existing methods of computer aided common cause failure analysis are extended by allowing analysis of the complex systems often encountered in practice. The methods aid in identifying potential common cause failures and also address quantitative common cause failure analysis

  5. Role of scanning electron microscope (SEM) in metal failure analysis

    International Nuclear Information System (INIS)

    Shaiful Rizam Shamsudin; Hafizal Yazid; Mohd Harun; Siti Selina Abd Hamid; Nadira Kamarudin; Zaiton Selamat; Mohd Shariff Sattar; Muhamad Jalil

    2005-01-01

    The scanning electron microscope (SEM) is a scientific instrument that uses a beam of highly energetic electrons to examine the surface and phase distribution of specimens on a micro scale through live imaging of secondary electron (SE) and back-scattered electron (BSE) images. One of the main activities of the SEM Laboratory at MINT is failure analysis of metal parts and components. The capability of SEM is excellent for determining the root cause of metal failures such as ductile or brittle fracture, stress corrosion, fatigue and other types of failures. Most of the customers who request failure analysis are local petrochemical plants, manufacturers of automotive components, pipeline maintenance personnel and engineers involved in the development of metal parts and components. This paper discusses some of the technical concepts in failure analysis associated with SEM. (Author)

  6. Failure analysis of medical Linac (LMR-15)

    International Nuclear Information System (INIS)

    Kato, Kiyotaka; Nakamura, Katsumi; Ogihara, Kiyoshi; Takahashi, Katsuhiko; Sato, Kazuhisa.

    1994-01-01

    In August 1978, a Linac (LMR-15, Z4 Toshiba) was installed at our hospital and was in use for 12 years, up to September 1990. Recently, we compiled the working and failure records of this apparatus over the 12-year period, for the purpose of analyzing them on the basis of reliability engineering. The results revealed an operation rate of 97.85% on average, a mean time between failures (MTBF) increasing from 40-70 hours at the beginning of operation to 280 hours in the 2 years before renewal, and practically satisfactory mean lives of limited-life parts such as the magnetron, thyratron and electron gun; these values exceeded those reported in the literature. We also classified the failures by the system in which they occurred and recorded the number of failures and the temperature and humidity at the time of each failure, to examine the correlation between working environment and failure. The results indicated that changes in humidity governed failures in the dosimetric system, especially the monitoring chamber; the strength of this correlation was supported by a correlation coefficient of 0.84. (author)
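
    The quantities reported here (operation rate, MTBF, a correlation coefficient between environment and failures) are straightforward bookkeeping on a failure log. A sketch with made-up log data, not the LMR-15 records; statistics.correlation requires Python 3.10+:

        import statistics

        # Times between failures from an invented maintenance log (hours).
        tbf_hours = [55, 140, 210, 95, 260, 180]
        print(f"MTBF = {statistics.mean(tbf_hours):.0f} h")

        # Operation rate = uptime / scheduled time (illustrative totals).
        scheduled_hours, downtime_hours = 2000, 43
        print(f"operation rate = {100 * (1 - downtime_hours / scheduled_hours):.2f}%")

        # Correlation between monthly humidity and dosimetric-system failures.
        humidity = [60, 65, 72, 80, 85, 78]  # monthly mean humidity (%)
        failures = [1, 2, 3, 5, 6, 4]        # monthly failure counts
        r = statistics.correlation(humidity, failures)  # Python 3.10+
        print(f"humidity vs failures: r = {r:.2f}")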

  7. Stochastic failure modelling of unidirectional composite ply failure

    International Nuclear Information System (INIS)

    Whiteside, M.B.; Pinho, S.T.; Robinson, P.

    2012-01-01

    Stochastic failure envelopes are generated through parallelised Monte Carlo Simulation of a physically based failure criteria for unidirectional carbon fibre/epoxy matrix composite plies. Two examples are presented to demonstrate the consequence on failure prediction of both statistical interaction of failure modes and uncertainty in global misalignment. Global variance-based Sobol sensitivity indices are computed to decompose the observed variance within the stochastic failure envelopes into contributions from physical input parameters. The paper highlights a selection of the potential advantages stochastic methodologies offer over the traditional deterministic approach.

  8. Failure analysis of the boiler water-wall tube

    Directory of Open Access Journals (Sweden)

    S.W. Liu

    2017-10-01

    Failure analysis of the boiler water-wall tube is presented in this work. In order to examine the causes of failure, various techniques including visual inspection, chemical analysis, optical microscopy, scanning electron microscopy and energy dispersive spectroscopy were carried out. Tube wall thickness measurements were performed on the ruptured tube. The fire-facing side of the tube was observed to have experienced significant wall thinning. The composition of the matrix material of the tube meets the requirements of the relevant standards. Microscopic examinations showed that the spheroidization of pearlite is not very obvious. The failure mechanism is identified as a result of the significant localized wall thinning of the boiler water-wall tube due to oxidation.

  9. Failure analysis of the boiler water-wall tube

    OpenAIRE

    S.W. Liu; W.Z. Wang; C.J. Liu

    2017-01-01

    Failure analysis of the boiler water-wall tube is presented in this work. In order to examine the causes of failure, various techniques including visual inspection, chemical analysis, optical microscopy, scanning electron microscopy and energy dispersive spectroscopy were carried out. Tube wall thickness measurements were performed on the ruptured tube. The fire-facing side of the tube was observed to have experienced significant wall thinning. The composition of the matrix material of the tu...

  10. Fuzzy logic prioritization of failures in a system failure mode, effects and criticality analysis

    International Nuclear Information System (INIS)

    Bowles, John B.; Pelaez, C.E.

    1995-01-01

    This paper describes a new technique, based on fuzzy logic, for prioritizing failures for corrective actions in a Failure Mode, Effects and Criticality Analysis (FMECA). As in a traditional criticality analysis, the assessment is based on the severity, frequency of occurrence, and detectability of an item failure. However, these parameters are here represented as members of a fuzzy set, combined by matching them against rules in a rule base, evaluated with min-max inferencing, and then defuzzified to assess the riskiness of the failure. This approach resolves some of the problems in traditional methods of evaluation and it has several advantages compared to strictly numerical methods: 1) it allows the analyst to evaluate the risk associated with item failure modes directly using the linguistic terms that are employed in making the criticality assessment; 2) ambiguous, qualitative, or imprecise information, as well as quantitative data, can be used in the assessment and they are handled in a consistent manner; and 3) it gives a more flexible structure for combining the severity, occurrence, and detectability parameters. Two fuzzy logic based approaches for assessing criticality are presented. The first is based on the numerical rankings used in a conventional Risk Priority Number (RPN) calculation and uses crisp inputs gathered from the user or extracted from a reliability analysis. The second, which can be used early in the design process when less detailed information is available, allows fuzzy inputs and also illustrates the direct use of the linguistic rankings defined for the RPN calculations
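
    A minimal sketch of the min-max inference and centroid defuzzification steps described above; the triangular membership functions, 0-10 scales and five-rule base are invented for illustration and are far smaller than a practical rule base:

        import numpy as np

        # Fuzzy criticality sketch: min for AND, max for aggregation, centroid
        # defuzzification. All sets, scales and rules are illustrative.
        x = np.linspace(0, 10, 101)  # universe for all variables (0-10 scale)
        sets = {"low": (-5, 0, 5), "med": (0, 5, 10), "high": (5, 10, 15)}

        def tri(u, a, b, c):
            """Triangular membership function over an array."""
            return np.maximum(np.minimum((u - a) / (b - a), (c - u) / (c - b)), 0.0)

        def mu(v, a, b, c):
            """Membership degree of a crisp value v in a triangular set."""
            if v <= a or v >= c:
                return 0.0
            return (v - a) / (b - a) if v <= b else (c - v) / (c - b)

        # Tiny rule base: (severity, occurrence, detectability) -> riskiness
        rules = [
            (("high", "high", "high"), "high"),
            (("high", "med", "med"), "high"),
            (("med", "med", "med"), "med"),
            (("med", "low", "low"), "low"),
            (("low", "low", "low"), "low"),
        ]

        def assess(sev, occ, det):
            agg = np.zeros_like(x)
            for (s, o, d), out in rules:
                strength = min(mu(sev, *sets[s]), mu(occ, *sets[o]), mu(det, *sets[d]))
                # clip each fired output set, aggregate with max
                agg = np.maximum(agg, np.minimum(strength, tri(x, *sets[out])))
            return float((x * agg).sum() / agg.sum()) if agg.any() else float("nan")

        print(f"fuzzy riskiness of (S=8, O=7, D=9): {assess(8, 7, 9):.1f} / 10")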

  11. Analysis of dependent failures in the ORNL precursor study

    International Nuclear Information System (INIS)

    Ballard, G.M.

    1985-01-01

    The study of dependent failures (or common cause/mode failures) in the safety assessment of potentially hazardous plant is one of the significant areas of uncertainty in performing probabilistic safety studies. One major reason for this uncertainty is that data on dependent failures is apparently not readily available in sufficient quantity to assist in the development and validation of models. The incident reports that were compiled for the ORNL study on Precursors to Severe Core Damage Accidents (NUREG/CR-2497) provide an opportunity to look at the importance of dependent failures in the most significant incidents of recent reactor operations, to look at the success of probabilistic risk assessment (PRA) methods in accounting for the contribution of dependent failures, and to look at the dependent failure incidents with the aim of identifying the most significant problem areas. In this paper an analysis has been made of the incidents compiled in NUREG/CR-2497, and events involving multiple failures which were not independent have been identified. From this analysis it is clear that dependent failures are a very significant contributor to the precursor incidents. The method of enumeration of accident frequency used in NUREG/CR-2497 can be shown to take account of dependent failures, and this may be a significant factor contributing to the apparent difference between the precursor accident frequency and typical PRA frequencies.

  12. Rweb:Web-based Statistical Analysis

    Directory of Open Access Journals (Sweden)

    Jeff Banfield

    1999-03-01

    Rweb is a freely accessible statistical analysis environment that is delivered through the World Wide Web (WWW). It is based on R, a well-known statistical analysis package. The only requirement to run the basic Rweb interface is a WWW browser that supports forms. If you want graphical output you must, of course, have a browser that supports graphics. The interface provides access to WWW-accessible data sets, so you may run Rweb on your own data. Rweb can provide a four-window statistical computing environment (code input, text output, graphical output, and error information) through browsers that support Javascript. There is also a set of point-and-click modules under development for use in introductory statistics courses.

  13. Failure modes and effects analysis of fusion magnet systems

    International Nuclear Information System (INIS)

    Zimmermann, M.; Kazimi, M.S.; Siu, N.O.; Thome, R.J.

    1988-12-01

    A failure modes and consequence analysis of fusion magnet systems is an important contributor towards enhancing the design by improving the reliability and reducing the risk associated with the operation of magnet systems. In the first part of this study, a failure mode analysis of a superconducting magnet system is performed. Building on the functional breakdown and the fault tree analysis of the Toroidal Field (TF) coils of the Next European Torus (NET), several subsystem levels are added and an overview of potential sources of failures in a magnet system is provided. The failure analysis is extended to the Poloidal Field (PF) magnet system. Furthermore, an extensive analysis of interactions within the fusion device caused by the operation of the PF magnets is presented in the form of an Interaction Matrix. A number of these interactions may have significant consequences for the TF magnet system, particularly interactions triggered by electrical failures in the PF magnet system. In the second part of this study, two basic categories of electrical failures in the PF magnet system are examined: short circuits between the terminals of external PF coils, and faults with a constant voltage applied at external PF coil terminals. An electromagnetic model of the Compact Ignition Tokamak (CIT) is used to examine the mechanical load conditions for the PF and the TF coils resulting from these fault scenarios. It is found that shorts do not pose large threats to the PF coils. Also, the type of plasma disruption has little impact on the net forces on the PF and the TF coils. 39 refs., 30 figs., 12 tabs

  14. Regularized Statistical Analysis of Anatomy

    DEFF Research Database (Denmark)

    Sjöstrand, Karl

    2007-01-01

    This thesis presents the application and development of regularized methods for the statistical analysis of anatomical structures. Focus is on structure-function relationships in the human brain, such as the connection between early onset of Alzheimer’s disease and shape changes of the corpus...... and mind. Statistics represents a quintessential part of such investigations as they are preluded by a clinical hypothesis that must be verified based on observed data. The massive amounts of image data produced in each examination pose an important and interesting statistical challenge...... efficient algorithms which make the analysis of large data sets feasible, and gives examples of applications....

  15. Failure analysis of fractured dental zirconia implants.

    Science.gov (United States)

    Gahlert, M; Burtscher, D; Grunert, I; Kniha, H; Steinhauser, E

    2012-03-01

    The purpose of the present study was the macroscopic and microscopic failure analysis of fractured zirconia dental implants. Thirteen fractured one-piece zirconia implants (Z-Look3) out of 170 inserted implants with an average in situ period of 36.75±5.34 months (range from 20 to 56 months, median 38 months) were prepared for macroscopic and microscopic (scanning electron microscopy [SEM]) failure analysis. These 170 implants were inserted in 79 patients. The patient histories were compared with fracture incidences to identify the reasons for the failure of the implants. Twelve of these fractured implants had a diameter of 3.25 mm and one implant had a diameter of 4 mm. All fractured implants were located in the anterior side of the maxilla and mandibula. The patient with the fracture of the 4 mm diameter implant was adversely affected by strong bruxism. By failure analysis (SEM), it could be demonstrated that in all cases, mechanical overloading caused the fracture of the implants. Inhomogeneities and internal defects of the ceramic material could be excluded, but notches and scratches due to sandblasting of the surface led to local stress concentrations that led to the mentioned mechanical overloading by bending loads. The present study identified a fracture rate of nearly 10% within a follow-up period of 36.75 months after prosthetic loading. Ninety-two per cent of the fractured implants were so-called diameter reduced implants (diameter 3.25 mm). These diameter reduced implants cannot be recommended for further clinical use. Improvement of the ceramic material and modification of the implant geometry has to be carried out to reduce the failure rate of small-sized ceramic implants. Nevertheless, due to the lack of appropriate laboratory testing, only clinical studies will demonstrate clearly whether and how far the failure rate can be reduced. © 2011 John Wiley & Sons A/S.

  16. Dependency Defence and Dependency Analysis Guidance. Volume 2: Appendix 3-8. How to analyse and protect against dependent failures. Summary report of the Nordic Working Group on Common Cause Failure Analysis

    International Nuclear Information System (INIS)

    Johanson, Gunnar; Hellstroem, Per; Makamo, Tuomas; Bento, Jean-Pierre; Knochenhauer, Michael; Poern, Kurt

    2003-10-01

    The safety systems in Nordic nuclear power plants are characterised by substantial redundancy and/or diversification in safety critical functions, as well as by physical separation of critical safety systems, including their support functions. Viewed together with the evident additional fact that the single failure criterion has been systematically applied in the design of safety systems, this means that the plant risk profile as calculated in existing PSAs is usually strongly dominated by failures caused by dependencies resulting in the loss of more than one system sub. The overall objective of the working group is to support safety by studying potential and real CCF events, processing statistical data, and reporting conclusions and recommendations that can improve the understanding of these events, eventually resulting in increased safety. The results are intended for application in NPP operation, maintenance, inspection and risk assessments. The NAFCS project is part of the activities of the Nordic PSA Group (NPSAG), and is financed jointly by the Nordic utilities and authorities. The work is divided into one quantitative and one qualitative part with the following specific objectives. Qualitative objectives - The goal of the qualitative analysis is to compile experience data and generate insights in terms of relevant failure mechanisms and effective CCF protection measures. The results shall be presented as a guide with checklists and recommendations on how to identify the current CCF protection standard and possibilities for improving CCF defences to decrease CCF vulnerability. Quantitative objectives - The goal of the quantitative analysis is to prepare a Nordic C-book where quantitative insights such as Impact Vectors and CCF parameters for different redundancy levels are presented. Uncertainties in CCF data shall be reduced as much as possible. The sensitivity of high-redundancy systems to CCF events demands a well-structured quantitative analysis in support of

  17. Reliability of piping system components. Volume 4: The pipe failure event database

    Energy Technology Data Exchange (ETDEWEB)

    Nyman, R; Erixon, S [Swedish Nuclear Power Inspectorate, Stockholm (Sweden); Tomic, B [ENCONET Consulting GmbH, Vienna (Austria); Lydell, B [RSA Technologies, Visat, CA (United States)

    1996-07-01

    Available public and proprietary databases on piping system failures were searched for relevant information. Using a relational database to identify groupings of piping failure modes and failure mechanisms, together with insights from published PSAs, the project team determined why, how and where piping systems fail. This report represents a compendium of technical issues important to the analysis of pipe failure events, and statistical estimation of failure rates. Inadequacies of traditional PSA methodology are addressed, with directions for PSA methodology enhancements. A 'data driven and systems oriented' analysis approach is proposed to enable assignment of unique identities to risk-significant piping system component failure. Sufficient operating experience does exist to generate quality data on piping failures. Passive component failures should be addressed by today's PSAs to allow for aging analysis and effective, on-line risk management. 42 refs, 25 figs.

  18. Reliability of piping system components. Volume 4: The pipe failure event database

    International Nuclear Information System (INIS)

    Nyman, R.; Erixon, S.; Tomic, B.; Lydell, B.

    1996-07-01

    Available public and proprietary databases on piping system failures were searched for relevant information. Using a relational database to identify groupings of piping failure modes and failure mechanisms, together with insights from published PSAs, the project team determined why, how and where piping systems fail. This report represents a compendium of technical issues important to the analysis of pipe failure events, and statistical estimation of failure rates. Inadequacies of traditional PSA methodology are addressed, with directions for PSA methodology enhancements. A 'data driven and systems oriented' analysis approach is proposed to enable assignment of unique identities to risk-significant piping system component failure. Sufficient operating experience does exist to generate quality data on piping failures. Passive component failures should be addressed by today's PSAs to allow for aging analysis and effective, on-line risk management. 42 refs, 25 figs
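
    Statistical estimation of failure rates from such an event database commonly treats pipe failures as a Poisson process over the accumulated exposure. A sketch with invented counts; the Jeffreys-prior credible interval is one common convention, not necessarily the report's:

        from scipy.stats import gamma

        # Poisson failure-rate estimate from an event database (invented numbers).
        failures = 7         # failure events recorded for one pipe class
        exposure = 1.2e6     # accumulated exposure, e.g. pipe-section-years

        rate = failures / exposure  # maximum-likelihood point estimate
        # Jeffreys prior -> posterior Gamma(failures + 0.5, scale = 1/exposure)
        lo, hi = gamma.ppf([0.025, 0.975], failures + 0.5, scale=1 / exposure)
        print(f"rate = {rate:.2e} per unit exposure, "
              f"95% interval ({lo:.2e}, {hi:.2e})")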

  19. Failure mode and effects analysis: an empirical comparison of failure mode scoring procedures.

    Science.gov (United States)

    Ashley, Laura; Armitage, Gerry

    2010-12-01

    To empirically compare 2 different commonly used failure mode and effects analysis (FMEA) scoring procedures with respect to their resultant failure mode scores and prioritization: a mathematical procedure, where scores are assigned independently by FMEA team members and averaged, and a consensus procedure, where scores are agreed on by the FMEA team via discussion. A multidisciplinary team undertook a Healthcare FMEA of chemotherapy administration. This included mapping the chemotherapy process, identifying and scoring failure modes (potential errors) for each process step, and generating remedial strategies to counteract them. Failure modes were scored using both an independent mathematical procedure and a team consensus procedure. Almost three-fifths of the 30 failure modes generated were scored differently by the 2 procedures, and for just more than one-third of cases, the score discrepancy was substantial. Using the Healthcare FMEA prioritization cutoff score, almost twice as many failure modes were prioritized by the consensus procedure than by the mathematical procedure. This is the first study to empirically demonstrate that different FMEA scoring procedures can score and prioritize failure modes differently. It found considerable variability in individual team members' opinions on scores, which highlights the subjective and qualitative nature of failure mode scoring. A consensus scoring procedure may be most appropriate for FMEA as it allows variability in individuals' scores and rationales to become apparent and to be discussed and resolved by the team. It may also yield team learning and communication benefits unlikely to result from a mathematical procedure.
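
    The difference between the two procedures is easy to demonstrate in miniature: an arithmetic mean can hide exactly the disagreement a consensus discussion would surface. An illustrative sketch with invented member scores:

        import statistics

        # Each row: one failure mode; values: RPN scores from individual members.
        individual_scores = {
            "wrong drug dose calculated": [120, 380, 90, 410],     # disagreement
            "infusion pump mis-programmed": [200, 210, 190, 205],  # agreement
        }

        CUTOFF = 150
        for mode, scores in individual_scores.items():
            mean = statistics.mean(scores)
            spread = statistics.stdev(scores)
            print(f"{mode}: mean RPN {mean:.0f} "
                  f"(flagged: {mean > CUTOFF}), member spread sd = {spread:.0f}")
        # A large spread suggests resolving scores by consensus discussion
        # rather than relying on the arithmetic mean alone.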

  20. Failure analysis of stainless steel femur fixation plate.

    Science.gov (United States)

    Hussain, P B; Mohammad, M

    2004-05-01

    Failure analysis was performed to investigate the failure of the femur fixation plate which was previously fixed on the femur of a girl. Radiography, metallography, fractography and mechanical testing were conducted in this study. The results show that the failure was due to the formation of notches on the femur plate. These notches act as stress raisers from where the cracks start to propagate. Finally fracture occurred on the femur plate and subsequently, the plate failed.

  1. Failure analysis of a UAV flight control system using Markov analysis

    African Journals Online (AJOL)

    eobe

    2016-01-01

    Fault Tree Analysis (FTA), Dependence Diagram Analysis (DDA) and Markov Analysis (MA) are the most widely-used methods of probabilistic safety and reliability analysis for airborne systems [1]. Fault tree analysis is a backward failure-searching ...
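
    Markov analysis of the kind named here represents the system as discrete states linked by failure and repair rates. A minimal continuous-time sketch for a duplex (two-redundant) unit with illustrative rates, solving for the steady-state probabilities:

        import numpy as np

        # States: 0 = both healthy, 1 = one failed, 2 = both failed (system loss).
        lam, mu = 1e-3, 1e-1  # illustrative per-hour failure and repair rates

        Q = np.array([
            [-2 * lam,      2 * lam,  0.0],
            [      mu, -(mu + lam),   lam],
            [     0.0,           mu,  -mu],
        ])

        # Steady state: pi @ Q = 0 subject to sum(pi) = 1.
        A = np.vstack([Q.T, np.ones(3)])
        b = np.array([0.0, 0.0, 0.0, 1.0])
        pi, *_ = np.linalg.lstsq(A, b, rcond=None)
        print(f"steady-state unavailability (both failed): {pi[2]:.2e}")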

  2. Early failure analysis of machining centers: a case study

    International Nuclear Information System (INIS)

    Wang Yiqiang; Jia Yazhou; Jiang Weiwei

    2001-01-01

    To eliminate early failures and improve reliability, nine ex-factory machining centers were traced under field conditions in workshops, and their early failure information throughout the ex-factory run-in test was collected. A field early-failure database was constructed from the collected data and their codification. Early failure mode and effects analysis was performed to identify the weak subsystems of a machining center, i.e. the main troublemakers. The distribution of the time between early failures is analyzed, and the optimal ex-factory run-in test time for a machining center, which sufficiently exposes early failures at minimum cost, is discussed. Suggestions on how to arrange the ex-factory run-in test and what actions to take to reduce early failures of machining centers are proposed.
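
    The distribution of time between early failures is commonly checked with a Weibull fit: a shape parameter below 1 indicates a decreasing failure rate, i.e. genuine infant-mortality behaviour that run-in testing can screen out. A sketch on simulated data (not the traced machining-center data):

        import numpy as np
        from scipy.stats import weibull_min

        rng = np.random.default_rng(1)
        # Simulated times between early failures (hours); true shape < 1.
        tbf = weibull_min.rvs(0.7, scale=120, size=200, random_state=rng)

        shape, loc, scale = weibull_min.fit(tbf, floc=0)  # fix location at zero
        print(f"Weibull shape = {shape:.2f}, scale = {scale:.0f} h")
        if shape < 1:
            print("decreasing hazard: early failures dominate; "
                  "run-in testing pays off")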

  3. Statistics Analysis Measures Painting of Cooling Tower

    Directory of Open Access Journals (Sweden)

    A. Zacharopoulou

    2013-01-01

    This study refers to the cooling tower of Megalopolis (constructed in 1975) and its protection from a corrosive environment. The maintenance of the cooling tower took place in 2008. The cooling tower was badly damaged by corrosion of its reinforcement. Parabolic cooling towers (at electrical power plants) are a typical example of construction with a particularly aggressive environment. The protection of cooling towers is usually achieved through organic coatings. Because of the different environmental impacts on the internal and external sides of the cooling tower, different paint application systems are required. The present study refers to the damage caused by the corrosion process. The corrosive environments, the application of the painting, the quality control process, the measurements and statistical analysis, and the results are discussed in this study. In the process of quality control the following measurements were taken into consideration: (1) examination of adhesion with the cross-cut test, (2) examination of film thickness, and (3) control of pull-off resistance for concrete substrates and paintings. Finally, this study refers to the correlations of measurements, analysis of failures in relation to the quality of repair, and rehabilitation of the cooling tower. This study also made a first attempt to apply specific corrosion inhibitors in such a large structure.

  4. Probabilistic Design Analysis (PDA) Approach to Determine the Probability of Cross-System Failures for a Space Launch Vehicle

    Science.gov (United States)

    Shih, Ann T.; Lo, Yunnhon; Ward, Natalie C.

    2010-01-01

    Quantifying the probability of significant launch vehicle failure scenarios for a given design, while still in the design process, is critical to mission success and to the safety of the astronauts. Probabilistic risk assessment (PRA) is chosen from many system safety and reliability tools to verify the loss of mission (LOM) and loss of crew (LOC) requirements set by the NASA Program Office. To support the integrated vehicle PRA, probabilistic design analysis (PDA) models are developed by using vehicle design and operation data to better quantify failure probabilities and to better understand the characteristics of a failure and its outcome. This PDA approach uses a physics-based model to describe the system behavior and response for a given failure scenario. Each driving parameter in the model is treated as a random variable with a distribution function. Monte Carlo simulation is used to perform probabilistic calculations to statistically obtain the failure probability. Sensitivity analyses are performed to show how input parameters affect the predicted failure probability, providing insight for potential design improvements to mitigate the risk. The paper discusses the application of the PDA approach in determining the probability of failure for two scenarios from the NASA Ares I project
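
    The Monte Carlo core of the PDA approach, stripped to essentials: sample the driving parameters from their distributions, evaluate the physics-based limit state, and count the failing fraction. A toy sketch; the capacity/demand model and distributions are invented:

        import numpy as np

        rng = np.random.default_rng(42)
        N = 1_000_000

        # Invented driving parameters for one failure scenario.
        burst_pressure = rng.normal(50.0, 3.0, N)              # capacity (MPa)
        peak_pressure = rng.lognormal(np.log(38.0), 0.12, N)   # demand (MPa)

        failed = peak_pressure > burst_pressure  # limit state: demand > capacity
        p_f = failed.mean()
        se = np.sqrt(p_f * (1 - p_f) / N)  # standard error of the MC estimate
        print(f"P(failure) ~ {p_f:.2e} +/- {1.96 * se:.1e} (95%)")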

  5. Reliability analysis based on the losses from failures.

    Science.gov (United States)

    Todinov, M T

    2006-04-01

    The conventional reliability analysis is based on the premise that increasing the reliability of a system will decrease the losses from failures. On the basis of counterexamples, it is demonstrated that this is valid only if all failures are associated with the same losses. In the case of failures associated with different losses, a system with larger reliability is not necessarily characterized by smaller losses from failures. Consequently, a theoretical framework and models are proposed for a reliability analysis linking reliability and the losses from failures. Equations related to the distributions of the potential losses from failure have been derived. It is argued that the classical risk equation only estimates the average value of the potential losses from failure and does not provide insight into the variability associated with the potential losses. Equations have also been derived for determining the potential and the expected losses from failures for nonrepairable and repairable systems with components arranged in series, with arbitrary life distributions. The equations are also valid for systems/components with multiple mutually exclusive failure modes. The expected loss given failure is a linear combination of the expected losses from failure associated with the separate failure modes, scaled by the conditional probabilities with which the failure modes initiate failure. On this basis, an efficient method for simplifying complex reliability block diagrams has been developed. Branches of components arranged in series whose failures are mutually exclusive can be reduced to single components with equivalent hazard rate, downtime, and expected costs associated with intervention and repair. A model for estimating the expected losses from early-life failures has also been developed. For a specified time interval, the expected losses from early-life failures are a sum of the products of the expected number of failures in the specified time intervals covering the
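
    The linear-combination relation stated above translates directly into code. A sketch with invented mode probabilities and losses:

        # Expected loss given failure as a linear combination over mutually
        # exclusive failure modes (illustrative probabilities and losses).
        modes = {
            # mode: (P(mode initiates failure), expected loss given that mode)
            "seal leak":       (0.55, 12_000.0),
            "bearing seizure": (0.30, 48_000.0),
            "shaft fracture":  (0.15, 150_000.0),
        }

        assert abs(sum(p for p, _ in modes.values()) - 1.0) < 1e-9
        expected_loss = sum(p * loss for p, loss in modes.values())
        print(f"expected loss given failure: {expected_loss:,.0f}")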

  6. Extending Failure Modes and Effects Analysis Approach for Reliability Analysis at the Software Architecture Design Level

    NARCIS (Netherlands)

    Sözer, Hasan; Tekinerdogan, B.; Aksit, Mehmet; de Lemos, Rogerio; Gacek, Cristina

    2007-01-01

    Several reliability engineering approaches have been proposed to identify and recover from failures. A well-known and mature approach is the Failure Mode and Effect Analysis (FMEA) method that is usually utilized together with Fault Tree Analysis (FTA) to analyze and diagnose the causes of failures.

  7. Machinery failure analysis and troubleshooting practical machinery management for process plants

    CERN Document Server

    Bloch, Heinz P

    2012-01-01

    Solve the machinery failure problems costing you time and money with this classic, comprehensive guide to analysis and troubleshooting  Provides detailed, complete and accurate information on anticipating risk of component failure and avoiding equipment downtime Includes numerous photographs of failed parts to ensure you are familiar with the visual evidence you need to recognize Covers proven approaches to failure definition and offers failure identification and analysis methods that can be applied to virtually all problem situations Demonstr

  8. Root cause of failure analysis and the system engineer

    International Nuclear Information System (INIS)

    Coppock, M.S.; Hartwig, A.W.

    1990-01-01

    In an industry where ever-increasing emphasis is being placed on root cause of failure determination, it is imperative that a successful nuclear utility have an effective means of identifying failures and performing the necessary analyses. The current Institute of Nuclear Power Operations (INPO) good practice, OE-907, root-cause analysis, gives references to methodology that will help determine breakdowns in procedures, programs, or design but gives very little guidance on how or when to perform component root cause of failure analyses. The system engineers of nuclear utilities are considered the focal point for their respective systems and are required by most programs to investigate component failures. The problem that the system engineer faces in determining a component root cause of failures lies in acquisition of the necessary data to identify the need to perform the analysis and in having the techniques and equipment available to perform it. The system engineers at the Palo Verde nuclear generating station routinely perform detailed component root cause of failure analyses. The Palo Verde program provides the system engineers with the information necessary to identify when a component root cause of failure is required. Palo Verde also has the necessary equipment on-site to perform the analyses

  9. Statistical analysis of manufacturing defects on fatigue life of wind turbine casted Component

    DEFF Research Database (Denmark)

    Rafsanjani, Hesam Mirzaei; Sørensen, John Dalsgaard; Mukherjee, Krishnendu

    2014-01-01

    Wind turbine components experience heavily variable loads during their lifetime, and fatigue failure is a main failure mode of casted components during their design working life. The fatigue life is highly dependent on the microstructure (grain size and graphite form and size), number, type, location...... and size of defects in the casted components and is therefore rather uncertain and needs to be described by stochastic models. Uncertainties related to such defects influence prediction of the fatigue strengths and are therefore important in modelling and assessment of the reliability of wind turbine...... for the fatigue life, namely LogNormal and Weibull distributions. The statistical analyses are performed using the Maximum Likelihood Method and the statistical uncertainty is estimated. Further, stochastic models for the fatigue life obtained from the statistical analyses are used for illustration to assess
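
    Choosing between the LogNormal and Weibull fatigue-life models by the Maximum Likelihood Method can be sketched as follows; the simulated fatigue lives and the AIC comparison are illustrative, not the casted-component data:

        import numpy as np
        from scipy.stats import lognorm, weibull_min

        rng = np.random.default_rng(7)
        # Simulated fatigue lives in cycles (stand-in for test data).
        lives = lognorm.rvs(0.4, scale=2e6, size=150, random_state=rng)

        fits = {
            "LogNormal": (lognorm, lognorm.fit(lives, floc=0)),
            "Weibull": (weibull_min, weibull_min.fit(lives, floc=0)),
        }
        for name, (dist, params) in fits.items():
            loglik = np.sum(dist.logpdf(lives, *params))
            aic = 2 * 2 - 2 * loglik  # two free parameters each (location fixed)
            print(f"{name}: log-likelihood = {loglik:.1f}, AIC = {aic:.1f}")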

  10. Statistical methods for astronomical data analysis

    CERN Document Server

    Chattopadhyay, Asis Kumar

    2014-01-01

    This book introduces “Astrostatistics” as a subject in its own right with rewarding examples, including work by the authors with galaxy and Gamma Ray Burst data to engage the reader. This includes a comprehensive blending of Astrophysics and Statistics. The first chapter’s coverage of preliminary concepts and terminologies for astronomical phenomenon will appeal to both Statistics and Astrophysics readers as helpful context. Statistics concepts covered in the book provide a methodological framework. A unique feature is the inclusion of different possible sources of astronomical data, as well as software packages for converting the raw data into appropriate forms for data analysis. Readers can then use the appropriate statistical packages for their particular data analysis needs. The ideas of statistical inference discussed in the book help readers determine how to apply statistical tests. The authors cover different applications of statistical techniques already developed or specifically introduced for ...

  11. Simple estimation procedures for regression analysis of interval-censored failure time data under the proportional hazards model.

    Science.gov (United States)

    Sun, Jianguo; Feng, Yanqin; Zhao, Hui

    2015-01-01

    Interval-censored failure time data occur in many fields, including epidemiological and medical studies as well as financial and sociological studies, and many authors have investigated their analysis (Sun, The statistical analysis of interval-censored failure time data, 2006; Zhang, Stat Modeling 9:321-343, 2009). In particular, a number of procedures have been developed for regression analysis of interval-censored data arising from the proportional hazards model (Finkelstein, Biometrics 42:845-854, 1986; Huang, Ann Stat 24:540-568, 1996; Pan, Biometrics 56:199-203, 2000). One drawback of most of these procedures, however, is that they involve estimation of both the regression parameters and the baseline cumulative hazard function. In this paper, we propose two simple estimation approaches that do not need estimation of the baseline cumulative hazard function. The asymptotic properties of the resulting estimates are given, and an extensive simulation study indicates that they work well in practical situations.
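
    To make the interval-censoring idea concrete, here is a minimal sketch that maximizes the likelihood of interval observations under a parametric Weibull model, where each failure contributes S(L) - S(R) to the likelihood. The paper's own estimators are semiparametric, so this illustrates the data structure, not the proposed method; the inspection times are invented.

        # Hedged sketch: parametric Weibull MLE for interval-censored data; each
        # failure is only known to lie in (L, R]. Inspection times are invented.
        import numpy as np
        from scipy.optimize import minimize

        L = np.array([0.5, 1.0, 2.0, 0.0, 3.0])   # last inspection before failure
        R = np.array([1.5, 2.5, 4.0, 1.0, 6.0])   # first inspection after failure

        def neg_loglik(theta):
            shape, scale = np.exp(theta)           # log-parametrization keeps both > 0
            S = lambda t: np.exp(-(t / scale) ** shape)
            return -np.sum(np.log(S(L) - S(R)))    # interval likelihood S(L) - S(R)

        res = minimize(neg_loglik, x0=np.log([1.0, 2.0]), method="Nelder-Mead")
        shape_hat, scale_hat = np.exp(res.x)
        print(f"shape = {shape_hat:.2f}, scale = {scale_hat:.2f}")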

  12. Process Equipment Failure Mode Analysis in a Chemical Industry

    Directory of Open Access Journals (Sweden)

    J. Nasl Seraji

    2008-04-01

    Background and aims: Prevention of potential accidents and safety promotion in chemical processes requires systematic safety management. The main objective of this study was to analyse the failure modes and effects of important process equipment components in the process that isolates H2S and CO2 from extracted natural gas. Methods: This study was done in the sweetening unit of an Iranian gas refinery. Failure Mode and Effect Analysis (FMEA) was used to identify process equipment failures. Results: In total, 30 failures were identified and evaluated using FMEA. Breaking of the P-1 blower's blades and tight movement of the sour-gas pressure control valve bearing had the maximum risk priority numbers (RPN); P-1 body corrosion and an increasing plug lower-side angle of the rich DEA level control valve in tower-1 had the minimum calculated RPN. Conclusion: A reliable documentation system for recording equipment failures and incidents would preserve the basic information needed for later safety assessments. In addition, the probability of failures and their effects could be minimized by conducting preventive maintenance.
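
    A minimal sketch of the arithmetic behind such an FMEA ranking follows: each failure mode gets a risk priority number, RPN = severity x occurrence x detection, and modes are sorted by it. The component names echo the abstract, but the 1-10 scores are invented.

        # Hedged sketch: ranking failure modes by risk priority number,
        # RPN = severity * occurrence * detection. Scores are invented.
        failure_modes = [
            ("P-1 blower blade breaking",           8, 5, 6),
            ("sour-gas pressure control valve",     7, 6, 5),
            ("P-1 body corrosion",                  4, 3, 3),
            ("DEA level control valve plug wear",   3, 3, 4),
        ]

        for name, s, o, d in sorted(failure_modes, key=lambda m: m[1] * m[2] * m[3],
                                    reverse=True):
            print(f"{name:38s} RPN = {s * o * d}")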

  13. Dissimilar weld failure analysis and development program

    International Nuclear Information System (INIS)

    Holko, K.H.; Li, C.C.

    1982-01-01

    The problem of dissimilar weld cracking and failure is examined. This problem occurs in boiler superheater and reheater sections as well as main steam piping. Typically, a dissimilar weld joins low-alloy steel tubing such as Fe-2-1/4 Cr-1Mo to stainless steel tubing such as 321H and 304H. Cracking and failure occur in the low-alloy steel heat-affected zone very close to the weld interface. The 309 stainless steel filler previously used has been replaced with nickel-base fillers such as Inconel 132, Inconel 182, and Incoweld A. This change has extended the time to cracking and failure, but has not solved the problem. To illustrate and define the problem, the metallography of damaged and failed dissimilar welds is described. Results of mechanical tests of dissimilar welds removed from service are presented, and factors believed to be influential in causing damage and failure are discussed. In addition, the importance of dissimilar weldment service history is demonstrated, and the Dissimilar Weld Failure Analysis and Development Program is described. 15 figures

  14. Analysis of reactor trips involving balance-of-plant failures

    International Nuclear Information System (INIS)

    Seth, S.; Skinner, L.; Ettlinger, L.; Lay, R.

    1986-01-01

    The relatively high frequency of plant transients leading to reactor trips at nuclear power plants in the US is of economic and safety concern to the industry. The majority of such transients are due to failures in the balance-of-plant (BOP) systems. As part of a study conducted for the US Nuclear Regulatory Commission, Mitre has carried out a further analysis of the BOP failures associated with reactor trips. The major objectives of the analysis were to examine plant-to-plant variations in BOP-related trips, to understand the causes of failures, and to determine the extent of any associated safety system challenges. The analysis was based on the Licensee Event Reports submitted for all commercial light water reactors during the 2-yr period 1984-1985.

  15. Study on shielded pump system failure analysis method based on Bayesian network

    International Nuclear Information System (INIS)

    Bao Yilan; Huang Gaofeng; Tong Lili; Cao Xuewu

    2012-01-01

    This paper applies Bayesian networks to system failure analysis, with the aim of improving the representation of uncertain logic and multiple fault states. A Bayesian network for shielded pump failure analysis is presented, with fault parameter learning used to update the network parameters from new samples. Finally, through Bayesian network inference, the vulnerabilities of the system, the most likely failure modes, and the fault probabilities are obtained. The power of Bayesian networks for analyzing system faults is illustrated by examples. (authors)
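
    The inference step of such an analysis can be sketched with a toy two-cause network, solved here by brute-force enumeration rather than a dedicated library. The structure (bearing wear and seal leak feeding a pump-failure node) and all probabilities are assumptions for illustration only, not values from the shielded-pump study.

        # Hedged sketch: exact inference by enumeration in a toy fault network.
        # Structure and CPT values are invented.
        from itertools import product

        P_B = {1: 0.05, 0: 0.95}            # prior: bearing wear
        P_S = {1: 0.10, 0: 0.90}            # prior: seal leak
        P_F = {(1, 1): 0.95, (1, 0): 0.70,  # P(pump fails | B, S)
               (0, 1): 0.40, (0, 0): 0.01}

        # posterior P(B = 1 | F = 1), marginalizing over the seal-leak state
        num = sum(P_B[1] * P_S[s] * P_F[(1, s)] for s in (0, 1))
        den = sum(P_B[b] * P_S[s] * P_F[(b, s)] for b, s in product((0, 1), repeat=2))
        print(f"P(bearing wear | pump failed) = {num / den:.3f}")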

  16. Comprehensive method of common-mode failure analysis for LMFBR safety systems

    International Nuclear Information System (INIS)

    Unione, A.J.; Ritzman, R.L.; Erdmann, R.C.

    1976-01-01

    A technique is demonstrated which allows the systematic treatment of common-mode failures in safety system performance. The technique uses logic analysis in the form of fault and success trees to qualitatively assess the sources of common-mode failure and to quantitatively estimate their contribution to the overall risk of system failure. The analysis is applied to the secondary control rod system of an early-size LMFBR.
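
    The quantitative side of such a fault-tree treatment can be sketched as follows: the top-event probability is computed from minimal cut sets of independent basic events, with a common-mode failure appearing as a single one-element cut set that can dominate the redundancy. Event names and probabilities are invented.

        # Hedged sketch: top-event probability from minimal cut sets of
        # independent basic events. C plays the role of a common-mode failure
        # defeating the redundant pair {A, B}.
        import math

        p = {"A": 1e-3, "B": 1e-3, "C": 1e-4}
        cut_sets = [{"A", "B"}, {"C"}]

        q = [math.prod(p[e] for e in cs) for cs in cut_sets]
        top = 1.0 - math.prod(1.0 - qi for qi in q)  # exact here: cut sets share no events
        print(f"top event: {top:.3e} (rare-event approximation {sum(q):.3e})")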

  17. Challenges in Resolution for IC Failure Analysis

    Science.gov (United States)

    Martinez, Nick

    1999-10-01

    Resolution is becoming more and more of a challenge in the world of failure analysis of integrated circuits, a result of the ongoing size reduction in microelectronics. Determining the cause of a failure depends upon being able to find the responsible defect, and the time it takes to locate a given defect is extremely important so that proper corrective actions can be taken. The limits of current microscopy tools are being pushed. With sub-micron feature sizes and even smaller killing defects, optical microscopes are becoming obsolete. With scanning electron microscopy (SEM), the resolution is high, but the voltage involved can make these small defects transparent due to the large mean free path of incident electrons. In this presentation, I will give an overview of the use of inspection methods in failure analysis and show example studies from my work as an intern student at Texas Instruments. 1. Work at Texas Instruments, Stafford, TX, was supported by TI. 2. Work at Texas Tech University was supported by NSF Grant DMR9705498.

  18. Probabilistic analysis on the failure of reactivity control for the PWR

    Science.gov (United States)

    Sony Tjahyani, D. T.; Deswandri; Sunaryo, G. R.

    2018-02-01

    The fundamental safety functions of a power reactor are to control reactivity, to remove heat from the reactor, and to confine radioactive material. Safety analysis is used to ensure that each function is fulfilled in the design, and is done by deterministic and probabilistic methods. The analysis of reactivity control is important because this function affects the other fundamental safety functions. The purpose of this research is to determine the failure probability of reactivity control and the contributors to its failure in a PWR design. The analysis is carried out by determining the intermediate events which cause the failure of reactivity control. Furthermore, the basic events are determined by a deductive method using fault tree analysis. The AP1000 is used as the object of research. The probability data for component failures and human errors used in the analysis are collected from IAEA, Westinghouse, NRC and other published documents. The results show that there are six intermediate events which can cause the failure of reactivity control. These intermediate events are uncontrolled rod bank withdrawal at low power or at full power, malfunction of boron dilution, misalignment of control rod withdrawal, improper positioning of a fuel assembly, and ejection of a control rod. The failure probability of reactivity control is 1.49E-03 per year. The causes of failure affected by human factors are boron dilution, misalignment of control rod withdrawal and improper positioning of a fuel assembly. Based on the assessment, it is concluded that the failure probability of reactivity control in this PWR is still within the IAEA criteria.

  19. Inverse statistical approach in heartbeat time series

    International Nuclear Information System (INIS)

    Ebadi, H; Shirazi, A H; Mani, Ali R; Jafari, G R

    2011-01-01

    We present an investigation of heart cycle time series using inverse statistical analysis, a concept borrowed from the study of turbulence. Using this approach, we studied the distribution of the exit times needed to achieve a predefined level of heart rate alteration. Such analysis uncovers the most likely waiting time needed to reach a certain change in the heart rate. This analysis showed a significant difference between the raw data and shuffled data when the heart rate accelerates or decelerates to a rare event. We also report that inverse statistical analysis can distinguish between the electrocardiograms of healthy volunteers and those of patients with heart failure.
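
    A minimal sketch of the exit-time computation follows, applied to a synthetic random-walk series standing in for real heart rate data: for each starting point, record the waiting time until the rate has changed by a predefined level, then take the mode of the resulting histogram.

        # Hedged sketch: inverse statistics on a surrogate series. rho is the
        # predefined level of heart rate alteration; real RR-interval data would
        # replace the random walk below.
        import numpy as np

        rng = np.random.default_rng(0)
        rate = 60.0 + np.cumsum(rng.normal(0.0, 0.5, 5000))
        rho = 2.0

        exits = []
        for t0 in range(len(rate) - 1):
            moved = np.abs(rate[t0 + 1:] - rate[t0]) >= rho
            if moved.any():
                exits.append(np.argmax(moved) + 1)   # first beat reaching the level

        counts = np.bincount(exits)
        print("most likely waiting time:", int(np.argmax(counts)), "beats")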

  20. A quantitative method for Failure Mode and Effects Analysis

    NARCIS (Netherlands)

    Braaksma, Anne Johannes Jan; Meesters, A.J.; Klingenberg, W.; Hicks, C.

    2012-01-01

    Failure Mode and Effects Analysis (FMEA) is commonly used for designing maintenance routines by analysing potential failures, predicting their effect and facilitating preventive action. It is used to make decisions on operational and capital expenditure. The literature has reported that despite its

  1. Failure mode analysis of a PCRV. Influence of some hypotheses

    International Nuclear Information System (INIS)

    Zimmermann, T.; Saugy, B.; Rebora, B.

    1975-01-01

    This paper is concerned with the most recent developments and results obtained using a mathematical model for the non-linear analysis of massive reinforced and prestressed concrete structures developed by the IPEN at the Swiss Federal Institute of Technology in Lausanne. The method is based on three-dimensional isoparametric finite elements. A linear solution is adapted step by step to the idealized behavior laws of the materials up to the failure of the structure. The laws proposed here for the non-linear behavior of concrete and steel have been described elsewhere, but a simple extension to time-dependent behavior is presented. A numerical algorithm for the superposition of creep deformations is also proposed, the basic creep law being assumed to follow a power expression. Time-dependent failure is discussed. The calculation of a PCRV of a helium-cooled fast reactor is then performed and the influence of the liner on the failure mode is analyzed. The failure analysis under increasing internal pressure is running at the present time, and the influence of a possible pressure in the cracks is being investigated. The paper aims mainly to demonstrate the accuracy of a failure analysis by three-dimensional finite elements and to compare it with a model test, in particular when complete deformation and failure tests of the materials are available. The proposed model has already been extensively tested on simple structures and has proved to be useful for the analysis of different simplifying hypotheses.

  2. Failure analysis of a Francis turbine runner

    Energy Technology Data Exchange (ETDEWEB)

    Frunzaverde, D; Campian, V [Research Center in Hydraulics, Automation and Heat Transfer, 'Eftimie Murgu' University of Resita, P-ta Traian Vuia 1-4, RO-320085, Resita (Romania); Muntean, S [Centre of Advanced Research in Engineering Sciences, Romanian Academy - Timisoara Branch, Bv. Mihai Viteazu 24, RO-300223, Timisoara (Romania); Marginean, G [University of Applied Sciences Gelsenkirchen, Neidenburger Str. 10, 45877 Gelsenkirchen (Germany); Marsavina, L [Department of Strength, 'Politehnica' University of Timisoara, Bv. Mihai Viteazu 1, RO-300222, Timisoara (Romania); Terzi, R; Serban, V, E-mail: gabriela.marginean@fh-gelsenkirchen.d, E-mail: d.frunzaverde@uem.r [Ramnicu Valcea Subsidiary, S.C. Hidroelectrica S.A., Str. Decebal 11, RO-240255, Ramnicu Valcea (Romania)]

    2010-08-15

    The variable demand on the energy market requires great flexibility in operating hydraulic turbines. Therefore, turbines are frequently operated over an extended range of regimes. Francis turbines operating at partial load present pressure fluctuations due to the vortex rope in the draft tube cone. This phenomenon generates strong vibrations and noise that may produce failures of the mechanical elements of the machine. This paper presents the failure analysis of a broken Francis turbine runner blade. The failure appeared some months after welding repair work performed in situ on fatigue cracks initiated near the trailing edge at the junction with the crown, where stress concentration occurs. In order to determine the causes that led to the fracture of the runner blade, metallographic investigations of a sample obtained from the blade are carried out. The metallographic investigations included macroscopic and microscopic examinations, both performed with light and scanning electron microscopy, as well as EDX analyses. These investigations led to the conclusion that the cracking of the blade was caused by fatigue, initiated by the surface unevenness of the welding seam. The failure was accelerated by hydrogen embrittlement of the filler material, which appeared as a consequence of improper welding conditions. In addition to the metallographic investigations, numerical computations with finite element analysis are performed in order to evaluate the deformation and stress distribution on the blade.

  3. Failure analysis of a Francis turbine runner

    International Nuclear Information System (INIS)

    Frunzaverde, D; Campian, V; Muntean, S; Marginean, G; Marsavina, L; Terzi, R; Serban, V

    2010-01-01

    The variable demand on the energy market requires great flexibility in operating hydraulic turbines. Therefore, turbines are frequently operated over an extended range of regimes. Francis turbines operating at partial load present pressure fluctuations due to the vortex rope in the draft tube cone. This phenomenon generates strong vibrations and noise that may produce failures of the mechanical elements of the machine. This paper presents the failure analysis of a broken Francis turbine runner blade. The failure appeared some months after welding repair work performed in situ on fatigue cracks initiated near the trailing edge at the junction with the crown, where stress concentration occurs. In order to determine the causes that led to the fracture of the runner blade, metallographic investigations of a sample obtained from the blade are carried out. The metallographic investigations included macroscopic and microscopic examinations, both performed with light and scanning electron microscopy, as well as EDX analyses. These investigations led to the conclusion that the cracking of the blade was caused by fatigue, initiated by the surface unevenness of the welding seam. The failure was accelerated by hydrogen embrittlement of the filler material, which appeared as a consequence of improper welding conditions. In addition to the metallographic investigations, numerical computations with finite element analysis are performed in order to evaluate the deformation and stress distribution on the blade.

  4. TU-AB-BRD-02: Failure Modes and Effects Analysis

    Energy Technology Data Exchange (ETDEWEB)

    Huq, M. [University of Pittsburgh Medical Center (United States)

    2015-06-15

    Current quality assurance and quality management guidelines provided by various professional organizations are prescriptive in nature, focusing principally on performance characteristics of planning and delivery devices. However, published analyses of events in radiation therapy show that most events are often caused by flaws in clinical processes rather than by device failures. This suggests the need for the development of a quality management program that is based on integrated approaches to process and equipment quality assurance. Industrial engineers have developed various risk assessment tools that are used to identify and eliminate potential failures from a system or a process before a failure impacts a customer. These tools include, but are not limited to, process mapping, failure modes and effects analysis, and fault tree analysis. Task Group 100 of the American Association of Physicists in Medicine has developed these tools and used them to formulate an example risk-based quality management program for intensity-modulated radiotherapy. This is a prospective risk assessment approach that analyzes potential error pathways inherent in a clinical process and then ranks them according to relative risk, typically before implementation, followed by the design of a new process or modification of the existing process. Appropriate controls are then put in place to ensure that failures are less likely to occur and, if they do, they will more likely be detected before they propagate through the process, compromising treatment outcome and causing harm to the patient. Such a prospective approach forms the basis of the work of Task Group 100 that has recently been approved by the AAPM. This session will be devoted to a discussion of these tools and practical examples of how these tools can be used in a given radiotherapy clinic to develop a risk-based quality management program. Learning Objectives: (1) Learn how to design a process map for a radiotherapy process; (2) Learn how to

  5. TU-AB-BRD-02: Failure Modes and Effects Analysis

    International Nuclear Information System (INIS)

    Huq, M.

    2015-01-01

    Current quality assurance and quality management guidelines provided by various professional organizations are prescriptive in nature, focusing principally on performance characteristics of planning and delivery devices. However, published analyses of events in radiation therapy show that most events are often caused by flaws in clinical processes rather than by device failures. This suggests the need for the development of a quality management program that is based on integrated approaches to process and equipment quality assurance. Industrial engineers have developed various risk assessment tools that are used to identify and eliminate potential failures from a system or a process before a failure impacts a customer. These tools include, but are not limited to, process mapping, failure modes and effects analysis, and fault tree analysis. Task Group 100 of the American Association of Physicists in Medicine has developed these tools and used them to formulate an example risk-based quality management program for intensity-modulated radiotherapy. This is a prospective risk assessment approach that analyzes potential error pathways inherent in a clinical process and then ranks them according to relative risk, typically before implementation, followed by the design of a new process or modification of the existing process. Appropriate controls are then put in place to ensure that failures are less likely to occur and, if they do, they will more likely be detected before they propagate through the process, compromising treatment outcome and causing harm to the patient. Such a prospective approach forms the basis of the work of Task Group 100 that has recently been approved by the AAPM. This session will be devoted to a discussion of these tools and practical examples of how these tools can be used in a given radiotherapy clinic to develop a risk-based quality management program. Learning Objectives: (1) Learn how to design a process map for a radiotherapy process; (2) Learn how to

  6. Failure mode and effects analysis of software-based automation systems

    International Nuclear Information System (INIS)

    Haapanen, P.; Helminen, A.

    2002-08-01

    Failure mode and effects analysis (FMEA) is one of the well-known analysis methods with an established position in traditional reliability analysis. The purpose of FMEA is to identify possible failure modes of the system components, evaluate their influences on system behaviour and propose proper countermeasures to suppress these effects. The generic nature of FMEA has enabled its wide use in various branches of industry, ranging from business management to the design of spaceships. The popularity and diverse use of the analysis method have led to multiple interpretations, practices and standards presenting the same analysis method. FMEA is well understood at the systems and hardware levels, where the potential failure modes usually are known and the task is to analyse their effects on system behaviour. Nowadays, more and more system functions are realised at the software level, which has prompted efforts to apply the FMEA methodology to software-based systems as well. Software failure modes generally are unknown - 'software modules do not fail, they only display incorrect behaviour' - and depend on the dynamic behaviour of the application. These facts set special requirements on the FMEA of software-based systems and make it difficult to realise. In this report, failure mode and effects analysis is studied for use in the reliability analysis of software-based systems. More precisely, the target system of the FMEA is defined to be a safety-critical software-based automation application in a nuclear power plant, implemented on an industrial automation system platform. Through a literature study the report tries to clarify the intriguing questions related to the practical use of software failure mode and effects analysis. The study is a part of the research project 'Programmable Automation System Safety Integrity assessment (PASSI)', belonging to the Finnish Nuclear Safety Research Programme (FINNUS, 1999-2002). In the project various safety assessment methods and tools for

  7. Improving failure analysis efficiency by combining FTA and FMEA in a recursive manner

    NARCIS (Netherlands)

    Peeters, J.F.W.; Basten, R.J.I.; Tinga, Tiedo

    2018-01-01

    When designing a maintenance programme for a capital good, especially a new one, it is of key importance to accurately understand its failure behaviour. Failure mode and effects analysis (FMEA) and fault tree analysis (FTA) are two commonly used methods for failure analysis. FMEA is a bottom-up

  8. Improving failure analysis efficiency by combining FTA and FMEA in a recursive manner

    NARCIS (Netherlands)

    Peeters, J.F.W.; Basten, R.J.I.; Tinga, T.

    When designing a maintenance programme for a capital good, especially a new one, it is of key importance to accurately understand its failure behaviour. Failure mode and effects analysis (FMEA) and fault tree analysis (FTA) are two commonly used methods for failure analysis. FMEA is a bottom-up

  9. Spacecraft electrical power subsystem: Failure behavior, reliability, and multi-state failure analyses

    International Nuclear Information System (INIS)

    Kim, So Young; Castet, Jean-Francois; Saleh, Joseph H.

    2012-01-01

    This article investigates the degradation and failure behavior of the spacecraft electrical power subsystem (EPS) on orbit. First, this work provides updated statistical reliability and multi-state failure analyses of the spacecraft EPS and its different constituents, namely the batteries, the power distribution, and the solar arrays. The EPS is shown to suffer from infant mortality and to be a major driver of spacecraft unreliability: over 25% of all spacecraft failures are the result of EPS failures. As a result, satellite manufacturers may wish to pursue targeted improvements to this subsystem, either through better testing or burn-in procedures, better design or parts selection, or additional redundancy. Second, this work investigates potential differences in EPS degradation and failure behavior for spacecraft in low earth orbit (LEO) and geosynchronous orbit (GEO). This analysis was motivated by the recognition that the power/load cycles and the space environment are significantly different in LEO and GEO, and as such they may result in different failure behavior of the EPS in these two types of orbits. The results indicate, and quantify the extent to which, the EPS fails differently in LEO and GEO, both in terms of frequency and severity of failure events. A casual summary of the findings can be stated as follows: the EPS fails less frequently but harder (with fatal consequences for the spacecraft) in LEO than in GEO.
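
    The nonparametric side of such a statistical reliability analysis can be sketched with a hand-rolled Kaplan-Meier estimator over right-censored lifetimes (satellites retired or still operating at the end of observation). The times below are illustrative, not from the satellite database used in the article.

        # Hedged sketch: Kaplan-Meier reliability estimate with right-censoring
        # (1 = EPS failure, 0 = censored). Times in years, invented.
        import numpy as np

        times  = np.array([0.1, 0.3, 1.0, 2.5, 4.0, 6.0, 7.5, 9.0, 10.0, 12.0])
        failed = np.array([1,   1,   0,   1,   0,   1,   0,   0,   1,    0])

        order = np.argsort(times)
        at_risk, R = len(times), 1.0
        for t, d in zip(times[order], failed[order]):
            if d:
                R *= 1.0 - 1.0 / at_risk        # product-limit step at each failure
                print(f"t = {t:5.1f} yr   R(t) = {R:.3f}")
            at_risk -= 1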

  10. Pipework failures - a review of historical incidents

    International Nuclear Information System (INIS)

    Blything, K.W.; Parry, S.T.

    1988-01-01

    A description is presented of the gathering of historical pipework incident data and its analysis to determine the causes and underlying reasons for failure. The following terms of reference were agreed: (a) To review data on failures associated with pipework to establish the principal causes of failure. This should include not only rupture of the pipe itself, but also pipework-induced failures, such as severe flange leaks and excessive strains resulting in failure of connected equipment. (b) To suggest an incident classification for pipework systems which will alert design, construction, maintenance, and operating personnel to the need for special care. (c) To advise non-piping specialists of the types of situation which could result in failure if not allowed for in the design, e.g. dynamic and transient conditions. (d) To recommend, possibly as a result of (a) above, areas where present procedures and codes of practice may require amplification. Brief descriptions are given of selected incidents where the consequences are considered to be serious in terms of damage and financial loss. For consequence analysis, the release rate is an important parameter and, where possible, the proportion of incidents in the failure mode categories 'leaks' and 'ruptures/severances' is given. Although not one of the agreed objectives, the determination of failure rates was recognised as an important requirement in the risk assessment of pipework systems. The quality of the data gathered, however, was found to be inadequate for any statistical analysis, and no failure rate values are given in this report. (author)

  11. Reliability Evaluation of Machine Center Components Based on Cascading Failure Analysis

    Science.gov (United States)

    Zhang, Ying-Zhi; Liu, Jin-Tong; Shen, Gui-Xiang; Long, Zhe; Sun, Shu-Guang

    2017-07-01

    To address two problems in the traditional reliability evaluation of machine center components, namely that the component reliability model exhibits deviations and that the evaluation result is biased low because failure propagation is overlooked, a new reliability evaluation method based on cascading failure analysis and assessment of the failure influence degree is proposed. A directed graph model of cascading failures among components is established according to cascading failure mechanism analysis and graph theory. The failure influence degrees of the system components are assessed using the adjacency matrix and its transpose, combined with the PageRank algorithm. Based on the comprehensive failure probability function and the total probability formula, the inherent failure probability function is determined to realize the reliability evaluation of the system components. Finally, the method is applied to a machine center, showing the following: 1) the reliability evaluation values of the proposed method are at least 2.5% higher than those of the traditional method; 2) the difference between the comprehensive and inherent reliability of a system component is positively correlated with its failure influence degree, which provides a theoretical basis for reliability allocation of the machine center system.
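
    The ranking step can be sketched with a plain power-iteration PageRank over a small directed failure-propagation graph. The paper works with both the adjacency matrix and its transpose; this sketch shows a single pass over an invented four-component example.

        # Hedged sketch: PageRank-style power iteration on a directed
        # failure-propagation graph (edge i -> j: a failure of i can propagate
        # to j). The adjacency matrix is invented.
        import numpy as np

        A = np.array([[0, 1, 1, 0],
                      [0, 0, 1, 0],
                      [0, 0, 0, 1],
                      [1, 0, 0, 0]], dtype=float)

        d, n = 0.85, A.shape[0]
        rowsum = A.sum(axis=1, keepdims=True)
        # row-normalize (uniform rows for dangling nodes), then transpose
        M = np.divide(A, rowsum, out=np.full_like(A, 1.0 / n), where=rowsum > 0).T

        r = np.full(n, 1.0 / n)
        for _ in range(100):                     # power iteration with damping d
            r = (1.0 - d) / n + d * (M @ r)

        print("influence ranking:", np.argsort(r)[::-1], "scores:", np.round(r, 3))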

  12. Reliability prediction system based on the failure rate model for electronic components

    International Nuclear Information System (INIS)

    Lee, Seung Woo; Lee, Hwa Ki

    2008-01-01

    Although many methodologies for predicting the reliability of electronic components have been developed, their predictions can be subjective under a particular set of circumstances, and it is therefore not easy to quantify reliability. Among the reliability prediction methods are the statistical-analysis-based method, the similarity analysis method based on an external failure rate database, and the method based on the physics-of-failure model. In this study, we developed a system by which the reliability of electronic components can be predicted, implementing the statistical analysis method as the most readily applied approach. The failure rate models applied are MIL-HDBK-217F N2, PRISM, and Telcordia (Bellcore), and these were compared with a general-purpose system in order to validate the effectiveness of the developed system. Because it can predict the reliability of electronic components from the design stage, the system we have developed is expected to contribute to enhancing the reliability of electronic components.
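
    The part-stress bookkeeping that such a system automates can be sketched as a base failure rate scaled by pi factors, in the spirit of the handbook models named above. The factor values and part list below are placeholders, not numbers from MIL-HDBK-217F, PRISM, or Telcordia.

        # Hedged sketch: part-stress style prediction, lambda_p = lambda_b * pi
        # factors. All numbers are illustrative placeholders, not handbook values.
        parts = [
            # (name, base rate [failures per 1e6 h], pi_T, pi_Q, pi_E)
            ("ceramic capacitor", 0.0020, 1.2, 1.0, 2.0),
            ("film resistor",     0.0010, 1.1, 3.0, 2.0),
            ("opamp IC",          0.0150, 2.5, 1.0, 4.0),
        ]

        total = 0.0
        for name, lam_b, pi_t, pi_q, pi_e in parts:
            lam_p = lam_b * pi_t * pi_q * pi_e
            total += lam_p
            print(f"{name:18s} lambda_p = {lam_p:.4f} per 1e6 h")

        # series-system (sum of rates) assumption for the board total
        print(f"board: {total:.4f} per 1e6 h, MTBF ~ {1e6 / total:,.0f} h")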

  13. Failure rate modeling using fault tree analysis and Bayesian network: DEMO pulsed operation turbine study case

    International Nuclear Information System (INIS)

    Dongiovanni, Danilo Nicola; Iesmantas, Tomas

    2016-01-01

    Highlights: • RAMI (Reliability, Availability, Maintainability and Inspectability) assessment of the secondary heat transfer loop for a DEMO nuclear fusion plant. • Definition of a fault tree for a nuclear steam turbine operated in pulsed mode. • Turbine failure rate models updated by means of a Bayesian network reflecting the fault tree analysis in the considered scenario. • Sensitivity analysis on system availability performance. - Abstract: Availability will play an important role in the Demonstration Power Plant (DEMO) success from an economic and safety perspective. Availability performance is commonly assessed by Reliability Availability Maintainability Inspectability (RAMI) analysis, strongly relying on the accurate definition of system components' failure modes (FM) and failure rates (FR). Little component experience is available in fusion applications, therefore requiring the adaptation of literature FR to fusion plant operating conditions, which may differ in several aspects. As a possible solution to this problem, a new methodology to extrapolate/estimate component failure rates under different operating conditions is presented. The DEMO Balance of Plant nuclear steam turbine component operated in pulsed mode is considered as the study case. The methodology starts from the definition of a fault tree taking into account failure modes possibly enhanced by pulsed operation. The fault tree is then translated into a Bayesian network. A statistical model for the turbine system failure rate in terms of subcomponents' FR is hence obtained, allowing for sensitivity analyses on the structured mixture of literature and unknown FR data, for which plausible value intervals are investigated to assess their impact on the whole turbine system FR. Finally, the impact of the resulting turbine system FR on plant availability is assessed exploiting a Reliability Block Diagram (RBD) model for a typical secondary cooling system implementing a Rankine cycle. Mean inherent availability

  14. Failure rate modeling using fault tree analysis and Bayesian network: DEMO pulsed operation turbine study case

    Energy Technology Data Exchange (ETDEWEB)

    Dongiovanni, Danilo Nicola, E-mail: danilo.dongiovanni@enea.it [ENEA, Nuclear Fusion and Safety Technologies Department, via Enrico Fermi 45, Frascati 00040 (Italy); Iesmantas, Tomas [LEI, Breslaujos str. 3 Kaunas (Lithuania)

    2016-11-01

    Highlights: • RAMI (Reliability, Availability, Maintainability and Inspectability) assessment of the secondary heat transfer loop for a DEMO nuclear fusion plant. • Definition of a fault tree for a nuclear steam turbine operated in pulsed mode. • Turbine failure rate models updated by means of a Bayesian network reflecting the fault tree analysis in the considered scenario. • Sensitivity analysis on system availability performance. - Abstract: Availability will play an important role in the Demonstration Power Plant (DEMO) success from an economic and safety perspective. Availability performance is commonly assessed by Reliability Availability Maintainability Inspectability (RAMI) analysis, strongly relying on the accurate definition of system components' failure modes (FM) and failure rates (FR). Little component experience is available in fusion applications, therefore requiring the adaptation of literature FR to fusion plant operating conditions, which may differ in several aspects. As a possible solution to this problem, a new methodology to extrapolate/estimate component failure rates under different operating conditions is presented. The DEMO Balance of Plant nuclear steam turbine component operated in pulsed mode is considered as the study case. The methodology starts from the definition of a fault tree taking into account failure modes possibly enhanced by pulsed operation. The fault tree is then translated into a Bayesian network. A statistical model for the turbine system failure rate in terms of subcomponents' FR is hence obtained, allowing for sensitivity analyses on the structured mixture of literature and unknown FR data, for which plausible value intervals are investigated to assess their impact on the whole turbine system FR. Finally, the impact of the resulting turbine system FR on plant availability is assessed exploiting a Reliability Block Diagram (RBD) model for a typical secondary cooling system implementing a Rankine cycle. Mean inherent availability

  15. A Statistical Toolkit for Data Analysis

    International Nuclear Information System (INIS)

    Donadio, S.; Guatelli, S.; Mascialino, B.; Pfeiffer, A.; Pia, M.G.; Ribon, A.; Viarengo, P.

    2006-01-01

    The present project aims to develop an open-source and object-oriented software Toolkit for statistical data analysis. Its statistical testing component contains a variety of Goodness-of-Fit tests, from Chi-squared to Kolmogorov-Smirnov, to lesser-known but generally much more powerful tests such as Anderson-Darling, Goodman, Fisz-Cramer-von Mises, Kuiper and Tiku. Thanks to the component-based design and the usage of standard abstract interfaces for data analysis, this tool can be used by other data analysis systems or integrated in experimental software frameworks. The Toolkit has been released and is downloadable from the web. In this paper we describe the statistical details of the algorithms, the computational features of the Toolkit, and the code validation.
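
    For a feel of the tests such a toolkit bundles, here is a sketch using SciPy's implementations of two of them, Kolmogorov-Smirnov and Anderson-Darling, applied to a simulated sample; note the caveat in the comment about estimating parameters from the same sample.

        # Hedged sketch: two of the bundled test types, via SciPy. Estimating the
        # normal parameters from the same sample biases the KS p-value (the
        # Lilliefors issue); shown for illustration only.
        import numpy as np
        from scipy import stats

        rng = np.random.default_rng(1)
        sample = rng.normal(loc=5.0, scale=2.0, size=200)

        ks = stats.kstest(sample, "norm", args=(sample.mean(), sample.std(ddof=1)))
        ad = stats.anderson(sample, dist="norm")
        print(f"KS: statistic = {ks.statistic:.3f}, p = {ks.pvalue:.3f}")
        print(f"AD: statistic = {ad.statistic:.3f}, critical values = {ad.critical_values}")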

  16. Robustness Analysis of Real Network Topologies Under Multiple Failure Scenarios

    DEFF Research Database (Denmark)

    Manzano, M.; Marzo, J. L.; Calle, E.

    2012-01-01

    Nowadays the ubiquity of telecommunication networks, which underpin and fulfill key aspects of modern day living, is taken for granted. Significant large-scale failures have occurred in recent years affecting telecommunication networks. Traditionally, network robustness analysis has been focused on topological characteristics; recent approaches also consider the services supported by such networks. In this paper we carry out a robustness analysis of five real backbone telecommunication networks under defined multiple failure scenarios, taking into account the consequences of the loss of established connections. Results show which networks are more robust in response to a specific type of failure.
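
    One such robustness experiment can be sketched with networkx: remove nodes, either at random or highest-degree first, from a synthetic topology (a stand-in for the real backbones studied) and track the fraction of nodes left in the largest connected component.

        # Hedged sketch: multiple-failure robustness on a synthetic topology.
        # The Barabasi-Albert graph is an assumption of this illustration; the
        # paper analyzes five real backbone networks.
        import random
        import networkx as nx

        random.seed(0)
        G0 = nx.barabasi_albert_graph(200, 2)

        def giant_fraction(G):
            # size of the largest connected component, as a fraction of 200 nodes
            return max(len(c) for c in nx.connected_components(G)) / 200.0

        for strategy in ("random", "targeted"):
            G = G0.copy()
            doomed = (random.sample(list(G), 100) if strategy == "random"
                      else sorted(G, key=G.degree, reverse=True)[:100])
            for i, v in enumerate(doomed, start=1):
                G.remove_node(v)
                if i % 25 == 0:
                    print(f"{strategy:8s} removed {i:3d} nodes: "
                          f"giant component {giant_fraction(G):.2f}")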

  17. Statistical considerations on safety analysis

    International Nuclear Information System (INIS)

    Pal, L.; Makai, M.

    2004-01-01

    The authors have investigated the statistical methods applied to the safety analysis of nuclear reactors and arrived at alarming conclusions. A series of calculations with the generally appreciated safety code ATHLET was carried out to ascertain the stability of the results against input uncertainties in a simple experimental situation. Scrutinizing those calculations, the authors came to the conclusion that the ATHLET results may exhibit chaotic behavior. A further conclusion is that the technological limits are incorrectly set when the output variables are correlated. Another formerly unnoticed conclusion of the previous ATHLET calculations is that certain innocent-looking parameters (like the wall roughness factor, the number of bubbles per unit volume, or the number of droplets per unit volume) can considerably influence such output parameters as water levels. The authors are concerned with the statistical foundation of present-day safety analysis practices and can only hope that their own misjudgment will be dispelled. Until then, the authors suggest applying correct statistical methods in safety analysis even if it makes the analysis more expensive. It would be desirable to continue exploring the role of internal parameters (wall roughness factor, steam-water surface in thermal-hydraulics codes, homogenization methods in neutronics codes) in system safety codes and to study their effects on the analysis. In the validation and verification process of a code one carries out a series of computations. The input data are not precisely determined because measured data have errors and calculated data are often obtained from more or less accurate models. Some users of large codes are content with comparing the nominal output obtained from the nominal input, whereas all possible inputs should be taken into account when judging safety. At the same time, any statement concerning safety must be aleatory, and its merit can be judged only when the probability is known with which the

  18. Statistical shape analysis with applications in R

    CERN Document Server

    Dryden, Ian L

    2016-01-01

    A thoroughly revised and updated edition of this introduction to modern statistical methods for shape analysis Shape analysis is an important tool in the many disciplines where objects are compared using geometrical features. Examples include comparing brain shape in schizophrenia; investigating protein molecules in bioinformatics; and describing growth of organisms in biology. This book is a significant update of the highly regarded 'Statistical Shape Analysis' by the same authors. The new edition lays the foundations of landmark shape analysis, including geometrical concepts and statistical techniques, and extends to include analysis of curves, surfaces, images and other types of object data. Key definitions and concepts are discussed throughout, and the relative merits of different approaches are presented. The authors have included substantial new material on recent statistical developments and offer numerous examples throughout the text. Concepts are introduced in an accessible manner, while reta...

  19. Spatial analysis statistics, visualization, and computational methods

    CERN Document Server

    Oyana, Tonny J

    2015-01-01

    An introductory text for the next generation of geospatial analysts and data scientists, Spatial Analysis: Statistics, Visualization, and Computational Methods focuses on the fundamentals of spatial analysis using traditional, contemporary, and computational methods. Outlining both non-spatial and spatial statistical concepts, the authors present practical applications of geospatial data tools, techniques, and strategies in geographic studies. They offer a problem-based learning (PBL) approach to spatial analysis, containing hands-on problem sets that can be worked out in MS Excel or ArcGIS, as well as detailed illustrations and numerous case studies. The book enables readers to: Identify types and characterize non-spatial and spatial data Demonstrate their competence to explore, visualize, summarize, analyze, optimize, and clearly present statistical data and results Construct testable hypotheses that require inferential statistical analysis Process spatial data, extract explanatory variables, conduct statisti...

  20. Reliability analysis of multi-trigger binary systems subject to competing failures

    International Nuclear Information System (INIS)

    Wang, Chaonan; Xing, Liudong; Levitin, Gregory

    2013-01-01

    This paper suggests two combinatorial algorithms for the reliability analysis of multi-trigger binary systems subject to competing failure propagation and failure isolation effects. A propagated failure with global effect (PFGE) is a failure that not only causes an outage of the component from which it originates, but also propagates through all other system components, causing the entire system to fail. The propagation effect of a PFGE can, however, be isolated in systems with functional dependence (FDEP) behavior. This paper studies two distinct consequences of PFGE resulting from a competition in the time domain between the failure isolation and failure propagation effects. Compared to existing works on competing failures, which are limited to systems with a single FDEP group, this paper considers more complicated cases where the systems have multiple dependent FDEP groups. Analysis of such systems is more challenging because both the occurrence order between the trigger failure event and a PFGE from the dependent components, and the occurrence order among the multiple trigger failure events, have to be considered. Two combinatorial and analytical algorithms are proposed. Neither places any limitation on the type of time-to-failure distributions of the system components. Their correctness is verified using a Markov-based method. An example of memory systems is analyzed to demonstrate and compare the applications and advantages of the two proposed algorithms. - Highlights: ► Reliability of binary systems with multiple dependent functional dependence groups is analyzed. ► Competing failure propagation and failure isolation effects are considered. ► The proposed algorithms are combinatorial and applicable to any arbitrary type of time-to-failure distributions for system components.
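
    The time-domain competition at the core of these analyses reduces, for exponential lifetimes, to a closed form: isolation wins the race with probability lam_t / (lam_t + lam_p). The sketch below cross-checks this against Monte Carlo using invented rates.

        # Hedged sketch: with exponential lifetimes, isolation wins iff the
        # trigger fails before the propagated failure. Rates are invented.
        import numpy as np

        lam_t, lam_p = 2.0e-4, 0.5e-4          # trigger and PFGE rates, per hour
        closed_form = lam_t / (lam_t + lam_p)

        rng = np.random.default_rng(2)
        t_trig = rng.exponential(1.0 / lam_t, 100_000)
        t_pfge = rng.exponential(1.0 / lam_p, 100_000)
        print(f"closed form {closed_form:.3f}, simulated {np.mean(t_trig < t_pfge):.3f}")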

  1. Recognition and Analysis of Corrosion Failure Mechanisms

    Directory of Open Access Journals (Sweden)

    Steven Suess

    2006-02-01

    Corrosion has a vast impact on the global and domestic economy, and currently incurs losses of nearly $300 billion annually to the U.S. economy alone. Because of the huge impact of corrosion, it is imperative to have a systematic approach to recognizing and mitigating corrosion problems as soon as possible after they become apparent. A proper failure analysis includes collection of pertinent background data and service history, followed by visual inspection, photographic documentation, material evaluation, data review and conclusion procurement. In analyzing corrosion failures, one must recognize the wide range of common corrosion mechanisms. The features of any corrosion failure give strong clues as to the most likely cause of the corrosion. This article details a proven approach to properly determining the root cause of a failure, and includes pictographic illustrations of the most common corrosion mechanisms, including general corrosion, pitting, galvanic corrosion, dealloying, crevice corrosion, microbiologically influenced corrosion (MIC), corrosion fatigue, stress corrosion cracking (SCC), intergranular corrosion, fretting, erosion corrosion and hydrogen damage.

  2. Statistics of the Von Mises Stress Response For Structures Subjected To Random Excitations

    Directory of Open Access Journals (Sweden)

    Mu-Tsang Chen

    1998-01-01

    Finite element-based random vibration analysis is increasingly used in computer-aided engineering software for computing statistics (e.g., root-mean-square values) of structural responses such as displacements, stresses and strains. However, these statistics can often be computed only for Cartesian responses. For the design of metal structures, a failure criterion based on an equivalent stress response, commonly known as the von Mises stress, is more appropriate and often used. This paper presents an approach for computing the statistics of the von Mises stress response for structures subjected to random excitations. Random vibration analysis is first performed to compute the covariance matrices of the Cartesian stress responses. Monte Carlo simulation is then used to perform scatter and failure analyses using the von Mises stress response.
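
    The scatter-analysis step can be sketched as follows: draw correlated plane-stress components from a covariance matrix of the kind a random-vibration run produces, evaluate the von Mises stress for each draw, and estimate its RMS value and an exceedance probability. The mean vector, covariance, and limit stress below are invented.

        # Hedged sketch: Monte Carlo scatter analysis of the von Mises stress for
        # a plane-stress state. A random-vibration analysis would supply the real
        # covariance matrix; these numbers are placeholders.
        import numpy as np

        mean = np.array([80.0, 40.0, 10.0])      # sx, sy, txy [MPa]
        cov = np.array([[400.0, 120.0,  30.0],
                        [120.0, 250.0,  20.0],
                        [ 30.0,  20.0, 100.0]])

        rng = np.random.default_rng(3)
        sx, sy, txy = rng.multivariate_normal(mean, cov, 100_000).T
        vm = np.sqrt(sx**2 - sx * sy + sy**2 + 3.0 * txy**2)  # plane-stress von Mises

        print(f"RMS von Mises: {np.sqrt(np.mean(vm**2)):.1f} MPa")
        print(f"P(vm > 150 MPa): {np.mean(vm > 150.0):.4f}")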

  3. Propagated failure analysis for non-repairable systems considering both global and selective effects

    International Nuclear Information System (INIS)

    Wang Chaonan; Xing Liudong; Levitin, Gregory

    2012-01-01

    This paper proposes an algorithm for the reliability analysis of non-repairable binary systems subject to competing failure propagation and failure isolation events with both global and selective failure effects. A propagated failure that originates from a system component causes extensive damage to the rest of the system. A global effect occurs when the propagated failure causes the entire system to fail, whereas a selective effect occurs when the propagated failure causes only the failure of a subset of system components. In both cases, failure propagation originating from some system components (referred to as dependent components) can be isolated, owing to the functional dependence between the dependent components and a trigger component, when the failure of the trigger component happens before the occurrence of the propagated failure. Most existing studies focus on the analysis of propagated failures with global effect. However, in many cases, propagated failures affect only a subset of system components, not the entire system. Existing approaches for analyzing propagated failures with selective effect are limited to series-parallel systems. This paper proposes a combinatorial method for propagated failure analysis considering both global and selective effects as well as the competition with failure isolation in the time domain. The proposed method is not limited to series-parallel systems and places no limitation on the type of time-to-failure distributions of the system components. The method is verified using the Markov-based method. An example of computer memory systems is analyzed to demonstrate the application of the proposed method.

  4. Economic impact of heart failure according to the effects of kidney failure.

    Science.gov (United States)

    Sicras Mainar, Antoni; Navarro Artieda, Ruth; Ibáñez Nolla, Jordi

    2015-01-01

    To evaluate the use of health care resources and their cost according to the effects of kidney failure in heart failure patients during a 2-year follow-up in a population setting. Observational retrospective study based on a review of medical records. The study included patients ≥ 45 years treated for heart failure from 2008 to 2010. The patients were divided into 2 groups according to the presence/absence of kidney failure. Main outcome variables were comorbidity, clinical status (functional class, etiology), metabolic syndrome, costs, and new cases of cardiovascular events and kidney failure. The cost model included direct and indirect health care costs. Statistical analysis included multiple regression models. The study recruited 1600 patients (prevalence, 4.0%; mean age, 72.4 years; women, 59.7%). Of these patients, 70.1% had hypertension, 47.1% had dyslipidemia, and 36.2% had diabetes mellitus. We analyzed 433 patients (27.1%) with kidney failure and 1167 (72.9%) without kidney failure. Patients with kidney failure were associated with functional class III-IV (54.1% vs 40.8%) and metabolic syndrome (65.3% vs 51.9%, P<.01). The average unit cost was €10,711.40. The corrected cost in the presence of kidney failure was €14,868.20 vs €9,364.50 (P=.001). During follow-up, 11.7% of patients developed ischemic heart disease, 18.8% developed kidney failure, and 36.1% developed heart failure exacerbation. Comorbidity associated with heart failure is high. The presence of kidney failure increases the use of health resources and leads to higher costs within the National Health System. Copyright © 2014 Sociedad Española de Cardiología. Published by Elsevier España. All rights reserved.

  5. Study of deformation evolution during failure of rock specimens using laser-based vibration measurements

    Science.gov (United States)

    Smolin, I. Yu.; Kulkov, A. S.; Makarov, P. V.; Tunda, V. A.; Krasnoveikin, V. A.; Eremin, M. O.; Bakeev, R. A.

    2017-12-01

    The aim of the paper is to analyze experimental data on the dynamic response of a marble specimen in uniaxial compression. To do so, we use methods of mathematical statistics. The evolution of the lateral surface velocity obtained by a laser Doppler vibrometer provides the data for analysis. The registered data were regarded as a time series that reflects the deformation evolution of the specimen loaded up to failure. The revealed changes in statistical parameters were considered precursors of failure. It is shown that before failure the deformation response is autocorrelated and reflects states of dynamic chaos and self-organized criticality.
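
    One simple precursor statistic of this kind, the lag-1 autocorrelation in a sliding window, can be sketched on a synthetic signal whose correlation grows toward failure; the data here are simulated, not the vibrometer record.

        # Hedged sketch: lag-1 autocorrelation in a sliding window as a failure
        # precursor. The signal is synthetic AR(1) noise whose coefficient grows
        # with time, mimicking increasing order before failure.
        import numpy as np

        rng = np.random.default_rng(4)
        n = 4000
        x, phi = np.empty(n), np.linspace(0.0, 0.9, n)
        x[0] = 0.0
        for i in range(1, n):
            x[i] = phi[i] * x[i - 1] + rng.normal()

        for start in range(0, n - 500, 500):
            w = x[start:start + 500]
            r1 = np.corrcoef(w[:-1], w[1:])[0, 1]   # lag-1 autocorrelation
            print(f"window {start:4d}-{start + 500:4d}: r1 = {r1:+.2f}")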

  6. Safety shutdowns and failures of the RA reactor equipment; Sigurnosna zaustavljanja i kvarovi opreme na reaktoru RA

    Energy Technology Data Exchange (ETDEWEB)

    Mitrovic, S [Institut za nuklearne nauke 'Boris Kidric', Vinca, Belgrade (Yugoslavia)]

    1966-07-01

    This report presents an attempt at a statistical analysis of the failures that occurred during RA reactor operation. A list of failures of the RA equipment during 1965 is included. The failures were related to the following systems: dosimetry system (22%), safety and control system (7%), heavy water system (2%), technical water system (4%), helium system (2%), measuring instruments (30%), and transport, ventilation and power supply systems (32%). A review of safety shutdowns from 1962 to 1966 is included as well, together with a comparison with three similar reactors. Although the number of events available for statistical analysis was not adequate, it was concluded that RA reactor operation was stable and reliable.

  7. Application of descriptive statistics in analysis of experimental data

    OpenAIRE

    Mirilović Milorad; Pejin Ivana

    2008-01-01

    Statistics today represents a group of scientific methods for the quantitative and qualitative investigation of variations in mass phenomena. In fact, statistics presents a group of methods that are used for the accumulation, analysis, presentation and interpretation of data necessary for reaching certain conclusions. Statistical analysis is divided into descriptive statistical analysis and inferential statistics. The values which represent the results of an experiment, and which are the subj...

  8. Analysis of calculating methods for failure distribution function based on maximal entropy principle

    International Nuclear Information System (INIS)

    Guo Chunying; Lin Yuangen; Jiang Meng; Wu Changli

    2009-01-01

    The computation of failure distribution functions of electronic devices exposed to gamma rays is discussed here. First, the possible failure distribution models of the devices are determined through tests of statistical hypotheses using the test data. The results show that the devices' failure distribution can be consistent with multiple distributions when the test data are few. In order to decide on the optimum failure distribution model, the maximal entropy principle is used and the elementary failure models are determined. Then, the Bootstrap estimation method is used to simulate the interval estimation of the mean and the standard deviation. On this basis, the maximal entropy principle is used again and the simulated annealing method is applied to find the optimum values of the mean and the standard deviation. Accordingly, the optimum failure distributions of the electronic devices are finally determined and the survival probabilities are calculated. (authors)
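
    The Bootstrap step described above can be sketched in a few lines: resample the data with replacement, recompute the mean and standard deviation each time, and read percentile intervals off the resulting distributions. The sample values are placeholders, not the device test data.

        # Hedged sketch: percentile-bootstrap intervals for the mean and standard
        # deviation of a small failure-threshold sample (placeholder values).
        import numpy as np

        rng = np.random.default_rng(5)
        doses = np.array([12.1, 9.8, 14.3, 11.0, 13.5, 10.2, 12.9, 11.7])

        means, stds = [], []
        for _ in range(10_000):
            boot = rng.choice(doses, size=doses.size, replace=True)
            means.append(boot.mean())
            stds.append(boot.std(ddof=1))

        lo_m, hi_m = np.percentile(means, [2.5, 97.5])
        lo_s, hi_s = np.percentile(stds, [2.5, 97.5])
        print(f"mean 95% CI ({lo_m:.2f}, {hi_m:.2f}); std 95% CI ({lo_s:.2f}, {hi_s:.2f})")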

  9. The Study of Influencing Maintenance Factors on Failures of Two Gypsum Kilns by Failure Modes and Effects Analysis (FMEA)

    Directory of Open Access Journals (Sweden)

    Iraj Alimohammadi

    2014-06-01

    The growing use of technology and equipment in Iranian industries has made maintenance systems increasingly important. Using proper management techniques not only increases the performance of the production system but also reduces failures and costs. The aim of this study was to determine the quality of the maintenance system and the effects of its components on the failures of kilns in two gypsum production companies, using Failure Modes and Effects Analysis (FMEA). Furthermore, the costs of failures were studied. After studying the gypsum production steps in the factories, the FMEA was conducted by determining the scope of the analysis, gathering information, listing the kilns' components and filling in the FMEA tables. The effects of failures on production, the manner of failure, failure rate, failure severity, and control measures were studied. The maintenance system was evaluated using a checklist of questions related to the system components. The costs of failures were determined from the accounting records and interviews with the head of the accounting department. It was found that the overall quality of the maintenance system in company No. 1 was higher than in company No. 2, but because of the lower quality of No. 1's kiln design, its number of failures and their costs were higher. In addition, it was determined that the repair costs of No. 2's kiln were about one third of No. 1's. The low-severity failures caused the highest costs in comparison with the moderate- and high-severity ones. The technical characteristics of the kilns appeared to be the most important factor in reducing failures and costs.

  10. Failure criterion of concrete type material and punching failure analysis of thick mortar plate

    International Nuclear Information System (INIS)

    Ohno, T.; Kuroiwa, M.; Irobe, M.

    1979-01-01

    In this paper a failure surface for concrete-type materials is proposed and its validity for structural analysis is examined. The study is an introductory part of the evaluation of the ultimate strength of reinforced and prestressed concrete structures in reactor technology. The failure surface is expressed in a linear form in terms of the octahedral normal and shear stresses. The coefficient of the latter stress is given by a trigonometric series in the threefold angle of similarity. Hence, its meridians are multilinear and the traces of its deviatoric sections are smooth curves with a periodicity of 2π/3 around the space diagonal in principal stress space. The mathematical expression of the surface has an arbitrary number of parameters, so that material test results are well reflected. To confirm the effectiveness of the proposed failure criterion, experiments and a numerical analysis by the finite element method of the punching failure of a thick mortar plate in axial symmetry are compared. In the numerical procedure the yield surface of the material is assumed to exist mainly in the compression region, since brittle cleavage or elastic fracture occurs in concrete-type materials under stress states with tension, while ductile or plastic fracture occurs under compressive stress states. (orig.)
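
    The stress measures the criterion is written in can be computed directly from a principal-stress state, as in the sketch below: the octahedral normal and shear stresses, plus the angle of similarity obtained from the deviatoric invariants. The stress values are arbitrary.

        # Hedged sketch: octahedral stresses and the threefold similarity angle
        # from a principal-stress state (arbitrary values, compression negative).
        import numpy as np

        s1, s2, s3 = -10.0, -25.0, -40.0        # principal stresses [MPa]

        s_oct = (s1 + s2 + s3) / 3.0
        t_oct = np.sqrt((s1 - s2)**2 + (s2 - s3)**2 + (s3 - s1)**2) / 3.0

        dev = np.array([s1, s2, s3]) - s_oct     # deviatoric principal stresses
        J2, J3 = np.sum(dev**2) / 2.0, np.prod(dev)
        cos3t = 1.5 * np.sqrt(3.0) * J3 / J2**1.5
        theta = np.degrees(np.arccos(np.clip(cos3t, -1.0, 1.0))) / 3.0

        print(f"s_oct = {s_oct:.1f} MPa, t_oct = {t_oct:.1f} MPa, theta = {theta:.1f} deg")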

  11. Statistical Analysis of Research Data | Center for Cancer Research

    Science.gov (United States)

    Recent advances in cancer biology have resulted in the need for increased statistical analysis of research data. The Statistical Analysis of Research Data (SARD) course will be held on April 5-6, 2018, from 9 a.m. to 5 p.m. at the National Institutes of Health's Natcher Conference Center, Balcony C, on the Bethesda campus. SARD is designed to provide an overview of the general principles of statistical analysis of research data. The first day will feature univariate data analysis, including descriptive statistics, probability distributions, and one- and two-sample inferential statistics.

  12. Probabilistic Design in a Sheet Metal Stamping Process under Failure Analysis

    International Nuclear Information System (INIS)

    Buranathiti, Thaweepat; Cao, Jian; Chen, Wei; Xia, Z. Cedric

    2005-01-01

    Sheet metal stamping processes have been widely implemented in many industries due to their repeatability and productivity. In general, simulations of a sheet metal forming process involve nonlinearity, complex material behavior and tool-material interaction. Instabilities in the form of tearing and wrinkling are major concerns in many sheet metal stamping processes. In this work, a sheet metal stamping process of a mild steel for a wheelhouse used in the automobile industry is studied by using an explicit nonlinear finite element code and incorporating failure analysis (tearing and wrinkling) and design under uncertainty. Margins of tearing and wrinkling are quantitatively defined via stress-based criteria for system-level design. The forming process utilizes drawbeads instead of the blank holder force to restrain the blank. The main parameters of interest in this work are friction conditions, drawbead configurations, sheet metal properties, and numerical errors. A robust design model is created to conduct a probabilistic design, which is made possible for this complex engineering process via an efficient uncertainty propagation technique. The method, called the weighted three-point-based method, estimates the statistical characteristics (mean and variance) of the responses of interest (margins of failure), and provides a systematic approach to designing a sheet metal forming process under the framework of design under uncertainty.
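
    The paper's weighted three-point-based method is its own; as a rough generic sketch of three-point uncertainty propagation (Pearson-Tukey percentile weights, which are an assumption here, and a hypothetical tearing margin), the mean and variance of a response can be estimated from three simulation runs per uncertain input:

        import numpy as np

        # Pearson-Tukey three-point weights for runs at the 5th/50th/95th
        # percentiles of an uncertain input (an assumption, not the paper's weights)
        WEIGHTS = np.array([0.185, 0.630, 0.185])

        def three_point_moments(y_low, y_mid, y_high):
            """Estimate mean and variance of a response (e.g., a tearing margin)
            from three deterministic forming simulations."""
            y = np.array([y_low, y_mid, y_high])
            mean = WEIGHTS @ y
            var = WEIGHTS @ (y - mean) ** 2
            return mean, var

        # Hypothetical margins from runs at low/nominal/high friction
        mean, var = three_point_moments(0.12, 0.21, 0.35)
        print(f"margin mean = {mean:.3f}, std = {var ** 0.5:.3f}")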

  13. Statistical analysis with Excel for dummies

    CERN Document Server

    Schmuller, Joseph

    2013-01-01

    Take the mystery out of statistical terms and put Excel to work! If you need to create and interpret statistics in business or classroom settings, this easy-to-use guide is just what you need. It shows you how to use Excel's powerful tools for statistical analysis, even if you've never taken a course in statistics. Learn the meaning of terms like mean and median, margin of error, standard deviation, and permutations, and discover how to interpret the statistics of everyday life. You'll learn to use Excel formulas, charts, PivotTables, and other tools to make sense of everything fro

  14. Failure rate of piping in hydrogen sulphide systems

    International Nuclear Information System (INIS)

    Hare, M.G.

    1993-08-01

    The objective of this study is to provide information about piping failures in hydrogen sulphide service that could be used to establish failure rates for piping in 'sour service'. Information obtained from the open literature, various petrochemical industries and the Bruce Heavy Water Plant (BHWP) was used to quantify the failure data. On the basis of this background information, conclusions from the study and recommendations for measures that could reduce the frequency of failures in piping systems at heavy water plants are presented. In general, BHWP staff should continue carrying out their present integrity and leak detection programmes. The failure rate used in the safety studies for the BHWP appears to be based on the rupture statistics for pipelines carrying sweet natural gas; it should instead be based on the rupture rate for sour gas lines, adjusted for the unique conditions at Bruce.

  15. Common Cause Failure Analysis for the Digital Plant Protection System

    International Nuclear Information System (INIS)

    Kang, Hyun Gook; Jang, Seung Cheol

    2005-01-01

    Safety-critical systems such as nuclear power plants adopt multiple-redundancy designs in order to reduce the risk from single component failures. The digitalized safety-signal generation system is also designed on the multiple-redundancy strategy and consists of more redundant components. The level of redundancy in digital systems is usually higher than that of conventional mechanical systems. This higher redundancy clearly reduces the risk from single component failures, but it raises the importance of common cause failure (CCF) analysis. This research aims to develop a practical and realistic method for modeling CCF in digital safety-critical systems. We propose a simple and practical framework for assessing the CCF probability of digital equipment. A higher level of redundancy complicates CCF analysis because it results in an impractically large number of CCF events in the fault tree model when conventional CCF modeling methods are used. We apply the simplified alpha-factor (SAF) method to digital system CCF analysis. A precedent study has shown that the SAF method is quite realistic yet simple when the system success criteria are considered carefully. The first step in using the SAF method is the analysis of the target system to determine the function failure cases; that is, the success criteria of the system are derived from the target system's function and configuration. Based on this analysis, we can calculate the probability of a single CCF event which represents the CCF events resulting in system failure. In addition to the application of the SAF method, in order to accommodate other characteristics of digital technology, we develop a simple concept and several equations for practical use.
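
    The simplified alpha-factor (SAF) method above is the authors' own; for context, a minimal sketch of standard (non-staggered) alpha-factor quantification in the style of NUREG/CR-5485, with hypothetical numbers for a four-channel digital group:

        from math import comb

        def alpha_factor_ccf(alphas, q_total):
            """Basic-event probabilities Q_k for a common cause group of size m.
            alphas[k-1]: fraction of failure events involving exactly k components.
            q_total:     total failure probability of one component (all causes)."""
            m = len(alphas)
            alpha_t = sum(k * a for k, a in enumerate(alphas, start=1))
            return {k: (k / comb(m - 1, k - 1)) * (alphas[k - 1] / alpha_t) * q_total
                    for k in range(1, m + 1)}

        # Hypothetical 4-channel group: alpha_1..alpha_4 and per-channel probability
        qs = alpha_factor_ccf([0.95, 0.03, 0.015, 0.005], q_total=1e-4)
        print(qs)  # qs[4] is the all-channel CCF basic event probability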

  16. Signal analysis for failure detection

    International Nuclear Information System (INIS)

    Parpaglione, M.C.; Perez, L.V.; Rubio, D.A.; Czibener, D.; D'Attellis, C.E.; Brudny, P.I.; Ruzzante, J.E.

    1994-01-01

    Several methods for the analysis of acoustic emission signals are presented. They are mainly oriented to the detection of changes in noisy signals and the characterization of higher-amplitude discrete pulses or bursts. The aim was to relate changes and events to failure, cracking or wear in materials, the final goal being to obtain automatic means of detecting such changes and/or events. Performance evaluation was made using both simulated and laboratory test signals. The methods presented are the following: 1. Application of the Hopfield neural network (NN) model for classifying faults in pipes and detecting wear of a bearing. 2. Application of the Kohonen and back-propagation neural network models to the same problem. 3. Application of Kalman filtering to determine the time of occurrence of bursts. 4. Application of a bank of Kalman filters (KF) for failure detection in pipes. 5. Study of the amplitude distribution of signals for detecting changes in their shape. 6. Application of the entropy distance to measure differences between signals. (author). 10 refs, 11 figs
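
    As a minimal sketch of method 3 in the list above (burst timing via Kalman filtering), assuming a scalar random-walk signal model and an innovation threshold, neither of which is taken from the paper:

        import numpy as np

        def detect_bursts(signal, q=1e-4, r=1.0, k_sigma=4.0):
            """Flag bursts as innovation outliers of a scalar random-walk
            Kalman filter (q: process noise variance, r: measurement noise variance)."""
            x, p = signal[0], 1.0          # state estimate and its variance
            hits = []
            for t, z in enumerate(signal):
                p += q                     # predict
                s = p + r                  # innovation variance
                innov = z - x
                if abs(innov) > k_sigma * np.sqrt(s):
                    hits.append(t)         # measurement inconsistent with noise model
                k = p / s                  # update
                x += k * innov
                p *= 1 - k
            return hits

        rng = np.random.default_rng(0)
        sig = rng.normal(0.0, 1.0, 2000)
        sig[700] += 12.0                   # simulated acoustic emission burst
        print(detect_bursts(sig))          # expected: a single hit near t = 700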

  17. NDT in failure analysis - some case studies [Paper IIIA-g

    International Nuclear Information System (INIS)

    Raj, Baldev; Bhattacharya, D.K.; Lopez, E.C.; Jayakumar, T.

    1986-01-01

    The effective uses of several non-destructive techniques in failure analysis are discussed. The techniques considered are: dye penetrant testing, radiography, ultrasonic testing, hardness measurement and in-situ metallography. A few failure cases are discussed to highlight the usefulness of the techniques. (author)

  18. A statistical model for prediction of fuel element failure using the Markov process and entropy minimax principles

    International Nuclear Information System (INIS)

    Choi, K.Y.; Yoon, Y.K.; Chang, S.H.

    1991-01-01

    This paper reports on a new statistical fuel failure model developed to take into account the effects of damaging environmental conditions and the overall operating history of the fuel elements. The degradation of the material properties and damage resistance of the fuel cladding is mainly caused by the combined effects of accumulated dynamic stresses, neutron irradiation, and chemical and stress corrosion at operating temperature. Since the degradation of material properties due to these effects can be considered a stochastic process, a dynamic reliability function is derived based on the Markov process. Four damage parameters, namely, dynamic stresses, magnitude of power increase from the preceding power level, ramp rate, and fatigue cycles, are used to build the model. The dynamic reliability function and damage parameters are used to obtain effective damage parameters. The entropy maximization principle is used to generate a probability density function of the effective damage parameters. The entropy minimization principle is applied to determine weighting factors for the amalgamation of the failure probabilities due to the respective failure modes. In this way, the effects of operating history, damaging environmental conditions, and damage sequence are taken into account.
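
    As a hedged sketch of the kind of dynamic reliability function such a Markov formulation yields (the paper's exact damage-dependent transition rate is not given in the abstract), with the failure rate λ driven by the accumulated effective damage D(s):

        $$R(t) = \exp\!\left(-\int_0^t \lambda\big(D(s)\big)\,ds\right)$$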

  19. [Hazard function and life table: an introduction to the failure time analysis].

    Science.gov (United States)

    Matsushita, K; Inaba, H

    1987-04-01

    Failure time analysis has become popular in demographic studies. It can be viewed as a part of regression analysis with limited dependent variables as well as a special case of event history analysis and multistate demography. The ideas of the hazard function and failure time analysis, however, have not been properly introduced to, nor commonly discussed by, demographers in Japan. The concept of the hazard function is briefly described in comparison with life tables, where the force of mortality is interchangeable with the hazard rate. The basic ideas of failure time analysis are summarized for the cases of the exponential distribution, the normal distribution, and proportional hazards models. The multiple decrement life table is also introduced as an example of lifetime data analysis with cause-specific hazard rates.
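
    For readers new to the terminology, the textbook relations the abstract alludes to (not specific to this paper) are

        $$h(t) = \frac{f(t)}{S(t)} = -\frac{d}{dt}\ln S(t), \qquad S(t) = \exp\!\left(-\int_0^t h(u)\,du\right)$$

    so that the constant-hazard (exponential) case h(t) = λ gives S(t) = e^{-λt}, and the life table's force of mortality μ(x) plays exactly the role of the hazard rate h.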

  20. A Big Data Analysis Approach for Rail Failure Risk Assessment.

    Science.gov (United States)

    Jamshidi, Ali; Faghih-Roohi, Shahrzad; Hajizadeh, Siamak; Núñez, Alfredo; Babuska, Robert; Dollevoet, Rolf; Li, Zili; De Schutter, Bart

    2017-08-01

    Railway infrastructure monitoring is a vital task to ensure rail transportation safety. A rail failure could result not only in considerable train delays and maintenance costs, but also in compromised passenger safety. In this article, the aim is to assess the risk of a rail failure by analyzing a type of rail surface defect called squats, which are detected automatically among the huge number of records from video cameras. We propose an image processing approach for the automatic detection of squats, especially severe types that are prone to rail breaks. We measure the visual length of the squats and use it to model the failure risk. For the assessment of the rail failure risk, we estimate the probability of rail failure based on the growth of squats. Moreover, we perform severity and crack growth analyses to consider the impact of rail traffic loads on defects in three different growth scenarios. The failure risk estimations are provided for several samples of squats with different crack growth lengths on a busy rail track of the Dutch railway network. The results illustrate the practicality and efficiency of the proposed approach. © 2017 The Authors Risk Analysis published by Wiley Periodicals, Inc. on behalf of Society for Risk Analysis.

  1. Perspectives on the application of order-statistics in best-estimate plus uncertainty nuclear safety analysis

    International Nuclear Information System (INIS)

    Martin, Robert P.; Nutt, William T.

    2011-01-01

    Research highlights: → Historical recitation on the application of order-statistics models to nuclear power plant thermal-hydraulics safety analysis. → Interpretation of regulatory language regarding the 10 CFR 50.46 reference to a 'high level of probability'. → Derivation and explanation of order-statistics-based evaluation methodologies considering multi-variate acceptance criteria. → Summary of order-statistics models and recommendations to the nuclear power plant thermal-hydraulics safety analysis community. - Abstract: The application of order-statistics in best-estimate plus uncertainty nuclear safety analysis has received a considerable amount of attention from methodology practitioners, regulators, and academia. At the root of the debate are two questions: (1) what is an appropriate quantitative interpretation of 'high level of probability' in the regulatory language appearing in the LOCA rule, 10 CFR 50.46, and (2) how best to mathematically characterize the multi-variate case. An original derivation is offered to provide a quantitative basis for 'high level of probability'. At the root of the second question is whether one should recognize a probability statement based on the tolerance region method of Wald and Guba et al. for multi-variate problems, one explicitly based on the regulatory limits, best articulated in the Wallis-Nutt 'Testing Method', or something else entirely. This paper reviews the origins of the different positions, key assumptions, limitations, and the relationship to addressing acceptance criteria. It presents a mathematical interpretation of the regulatory language, including a complete derivation of uni-variate order-statistics (as credited in AREVA's Realistic Large Break LOCA methodology) and an extension to multi-variate situations. Lastly, it provides recommendations for LOCA applications, endorsing the 'Testing Method' and addressing acceptance methods allowing for limited sample failures.
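
    As a concrete illustration of the uni-variate first-order result discussed above (the classic 95/95 case; the paper's multi-variate extensions are more involved), a minimal sketch:

        from math import ceil, log

        def wilks_sample_size(beta=0.95, gamma=0.95):
            """Smallest N such that the sample maximum bounds the beta-quantile
            with confidence gamma (first-order, one-sided): 1 - beta**N >= gamma."""
            return ceil(log(1 - gamma) / log(beta))

        print(wilks_sample_size())  # 59, the familiar 95/95 single-output run count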

  2. Risk of renal failure with the non-vitamin K antagonist oral anticoagulants: systematic review and meta-analysis.

    Science.gov (United States)

    Caldeira, Daniel; Gonçalves, Nilza; Pinto, Fausto J; Costa, João; Ferreira, Joaquim J

    2015-07-01

    Vitamin K antagonist (VKA)-related nephropathy is a novel entity characterized by acute kidney injury related to supratherapeutic International Normalized Ratio levels. Non-vitamin K antagonist oral anticoagulants (NOACs) have a predictable dose-response relationship and an improved safety profile. We hypothesized that these drugs do not carry an increased risk of incident renal failure, which would be detrimental to the use of NOACs. Systematic review and meta-analysis of phase III randomized controlled trials (RCTs). Trials were searched through Medline, the Cochrane Library and public assessment reports in August 2014. The primary outcome was renal failure. NOACs were evaluated against any comparator. Random-effects meta-analysis was performed by default, and pooled estimates were expressed as risk ratio (RR) and 95% CI. Heterogeneity was evaluated with the I² test. Ten RCTs fulfilled the inclusion criteria (one apixaban RCT, three dabigatran RCTs, and six rivaroxaban RCTs), enrolling 75,100 patients. Overall, NOACs did not increase the risk of renal failure, with an RR of 0.96 (95% CI 0.88-1.05) compared with VKA or low-molecular-weight heparin (LMWH), without significant statistical heterogeneity (I² = 3.5%). Compared with VKA, NOACs did not increase the risk of renal failure (RR 0.96, 95% CI 0.87-1.07; I² = 17.8%; six RCTs). Rivaroxaban did not show differences in the incidence of renal failure compared with LMWH (RR 1.20, 95% CI 0.37-3.94; four trials), but there was an increased risk of creatinine elevation (RR 1.25, 95% CI 1.08-1.45; I² = 0%). NOACs had a similar risk of renal failure compared with VKA/LMWH in phase III RCTs. Post-marketing surveillance is warranted. Copyright © 2015 John Wiley & Sons, Ltd.
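
    As a generic sketch of the pooling step described above (DerSimonian-Laird random-effects meta-analysis on hypothetical log risk ratios, not the paper's trial data):

        import numpy as np

        def random_effects_rr(log_rr, var_log_rr):
            """Pooled risk ratio, 95% CI and I-squared via DerSimonian-Laird."""
            y, v = np.asarray(log_rr), np.asarray(var_log_rr)
            w = 1.0 / v                                  # fixed-effect weights
            q = np.sum(w * (y - np.sum(w * y) / w.sum()) ** 2)
            df = len(y) - 1
            tau2 = max(0.0, (q - df) / (w.sum() - np.sum(w ** 2) / w.sum()))
            w_re = 1.0 / (v + tau2)                      # random-effects weights
            mu = np.sum(w_re * y) / w_re.sum()
            se = np.sqrt(1.0 / w_re.sum())
            i2 = max(0.0, (q - df) / q) * 100 if q > 0 else 0.0
            return np.exp(mu), np.exp(mu - 1.96 * se), np.exp(mu + 1.96 * se), i2

        rr, lo, hi, i2 = random_effects_rr([-0.05, 0.02, -0.10], [0.010, 0.020, 0.015])
        print(f"RR {rr:.2f}, 95% CI {lo:.2f}-{hi:.2f}, I2 = {i2:.1f}%")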

  3. Statistical analysis of dynamic parameters of the core

    International Nuclear Information System (INIS)

    Ionov, V.S.

    2007-01-01

    Transients of various types were investigated for the cores of zero-power critical facilities at RRC KI and at NPPs. The dynamic parameters of the neutron transients were explored with statistical analysis tools. The recorded transients have sufficient duration and comprise a few channels for chamber currents and reactivity, as well as some channels for technological parameters. From these records the inverse period, reactivity, neutron lifetime, reactivity coefficients and some reactivity effects are determined, and the values of the measured dynamic parameters are reconstructed as a result of the analysis. The mathematical means of statistical analysis used were: approximation (A), filtration (F), rejection (R), estimation of descriptive statistics parameters (DSP), correlation characteristics (kk), regression analysis (KP), prognosis (P), and statistical criteria (SC). The calculation procedures were implemented in MATLAB. The sources of methodical and statistical errors are presented: inadequacy of the model, the precision of the neutron-physical parameters, features of the registered processes, the mathematical model used in reactivity meters, the processing technique for the registered data, etc. Examples of the results of the statistical analysis are given. Problems of the validity of the methods used for the definition and certification of the values of statistical parameters and dynamic characteristics are considered (Authors)

  4. Statistical cluster analysis and diagnosis of nuclear system level performance

    International Nuclear Information System (INIS)

    Teichmann, T.; Levine, M.M.; Samanta, P.K.; Kato, W.Y.

    1985-01-01

    The complexity of individual nuclear power plants and the importance of maintaining reliable and safe operations makes it desirable to complement the deterministic analyses of these plants by corresponding statistical surveys and diagnoses. Based on such investigations, one can then explore, statistically, the anticipation, prevention, and when necessary, the control of such failures and malfunctions. This paper, and the accompanying one by Samanta et al., describe some of the initial steps in exploring the feasibility of setting up such a program on an integrated and global (industry-wide) basis. The conceptual statistical and data framework was originally outlined in BNL/NUREG-51609, NUREG/CR-3026, and the present work aims at showing how some important elements might be implemented in a practical way (albeit using hypothetical or simulated data)

  5. Failure analysis of PB-1 (EBTS Be/Cu mockup)

    International Nuclear Information System (INIS)

    Odegard, B.C. Jr.; Cadden, C.H.

    1996-11-01

    Failure analysis was done on PB-1 (series of Be tiles joined to Cu alloy) following a tile failure during a high heat flux experiment in EBTS (electron beam test system). This heat flux load simulated ambient conditions inside ITER; the Be tiles were bonded to the Cu alloy using low-temperature diffusion bonding, which is being considered for fabricating plasma facing components in ITER. Results showed differences between the EBTS failure and a failure during a room temperature tensile test. The latter occurred at the Cu-Be interface in an intermetallic phase formed by reaction of the two metals at the bonding temperature. Fracture strengths measured by these tests were over 300 MPa. The high heat flux specimens failed at the Cu-Cu diffusion bond. Fracture morphology in both cases was a mixed mode of dimple rupture and transgranular cleavage. Several explanations for this difference in failure mechanism are suggested

  6. CONFIDENCE LEVELS AND/VS. STATISTICAL HYPOTHESIS TESTING IN STATISTICAL ANALYSIS. CASE STUDY

    Directory of Open Access Journals (Sweden)

    ILEANA BRUDIU

    2009-05-01

    Full Text Available Parameters estimated with confidence intervals and statistical hypothesis tests are used in statistical analysis to draw conclusions about a population from an extracted sample. The case study presented in this paper aims to highlight the importance of the sample size used in a study and how it is reflected in the results obtained when using confidence intervals and hypothesis tests. While statistical hypothesis testing only gives a "yes" or "no" answer to certain questions, statistical estimation using confidence intervals provides more information than a test statistic: it shows the high degree of uncertainty arising from small samples and from findings that are "marginally significant" or "almost significant" (p very close to 0.05).
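
    A minimal sketch of the contrast the abstract draws, on hypothetical data (scipy is assumed here; any statistics package would do):

        import numpy as np
        from scipy import stats

        rng = np.random.default_rng(1)
        x = rng.normal(loc=0.4, scale=1.0, size=12)   # small sample, modest effect

        t, p = stats.ttest_1samp(x, popmean=0.0)      # test: bare yes/no at alpha = 0.05
        ci = stats.t.interval(0.95, df=len(x) - 1,
                              loc=x.mean(), scale=stats.sem(x))

        print(f"t = {t:.2f}, p = {p:.3f}")
        print(f"95% CI for the mean: ({ci[0]:.2f}, {ci[1]:.2f})")  # width shows the uncertainty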

  7. Sequentially linear analysis for simulating brittle failure

    NARCIS (Netherlands)

    van de Graaf, A.V.

    2017-01-01

    The numerical simulation of brittle failure at structural level with nonlinear finite element analysis (NLFEA) remains a challenge due to robustness issues. We attribute these problems to the dimensions of real-world structures combined with softening behavior and negative tangent stiffness at

  8. Collecting operational event data for statistical analysis

    International Nuclear Information System (INIS)

    Atwood, C.L.

    1994-09-01

    This report gives guidance for collecting operational data to be used for statistical analysis, especially analysis of event counts. It discusses how to define the purpose of the study, the unit (system, component, etc.) to be studied, events to be counted, and demand or exposure time. Examples are given of classification systems for events in the data sources. A checklist summarizes the essential steps in data collection for statistical analysis

  9. Lower head failure analysis

    International Nuclear Information System (INIS)

    Rempe, J.L.; Thinnes, G.L.; Allison, C.M.; Cronenberg, A.W.

    1991-01-01

    The US Nuclear Regulatory Commission is sponsoring a lower vessel head research program to investigate plausible modes of reactor vessel failure in order to determine (a) which modes have the greatest likelihood of occurrence during a severe accident and (b) the range of core debris and accident conditions that lead to these failures. This paper presents the methodology and preliminary results of an investigation of reactor designs and thermodynamic conditions using analytic closed-form approximations to assess the important governing parameters in non-dimensional form. Preliminary results illustrate the importance of vessel and tube geometrical parameters, material properties, and external boundary conditions on predicting vessel failure. Thermal analyses indicate that steady-state temperature distributions will occur in the vessel within several hours, although the exact time is dependent upon vessel thickness. In-vessel tube failure is governed by the tube-to-debris mass ratio within the lower head, where most penetrations are predicted to fail if surrounded by molten debris. Melt penetration distance is dependent upon the effective flow diameter of the tube. Molten debris is predicted to penetrate through tubes with a larger effective flow diameter, such as a boiling water reactor (BWR) drain nozzle. Ex-vessel tube failure for depressurized reactor vessels is predicted to be more likely for a BWR drain nozzle penetration because of its larger effective diameter. At high pressures (between ∼0.1 MPa and ∼12 MPa) ex-vessel tube rupture becomes a dominant failure mechanism, although tube ejection dominates control rod guide tube failure at lower temperatures. However, tube ejection and tube rupture predictions are sensitive to the vessel and tube radial gap size and material coefficients of thermal expansion

  10. A prospective analysis of 179 type 2 superior labrum anterior and posterior repairs: outcomes and factors associated with success and failure.

    Science.gov (United States)

    Provencher, Matthew T; McCormick, Frank; Dewing, Christopher; McIntire, Sean; Solomon, Daniel

    2013-04-01

    There is a paucity of prospective data on type 2 superior labrum anterior and posterior (SLAP) surgical outcomes. To prospectively analyze the clinical outcomes of the arthroscopic treatment of type 2 SLAP tears in a young, active patient population, and to determine factors associated with treatment success and failure. Case-control study; Level of evidence, 3. Over a 4-year period, 225 patients with a type 2 SLAP tear were prospectively enrolled. Two sports/shoulder-fellowship-trained orthopaedic surgeons performed repairs with suture anchors and a vertical suture construct. Patients were excluded if they underwent any additional repairs, including rotator cuff repair, labrum repair outside of the SLAP region, biceps tenodesis or tenotomy, or distal clavicle excision. Dependent variables were preoperative and postoperative assessments with the American Shoulder and Elbow Surgeons (ASES), Single Assessment Numeric Evaluation (SANE), and Western Ontario Shoulder Instability (WOSI) scores and independent physical examinations. A failure analysis was conducted to determine factors associated with failure: age, mechanism of injury, preoperative outcome scores, and smoking. Failure was defined as revision surgery, a mean ASES score below 70, or an inability to return to sports and work duties, and was assessed statistically with the Student t test and stepwise logarithmic regression. There were 179 of 225 patients who completed the follow-up for the study (80%) at a mean of 40.4 months (range, 26-62 months). The mean preoperative scores (WOSI, 54%; SANE, 50%; ASES, 65) improved postoperatively (WOSI, 82%; SANE, 85%; ASES, 88); a subset of patients met the failure criteria. Fifty patients elected revision surgery. Advanced age within the cohort (>36 years) was the only factor associated with a statistically significant increase in the incidence of failure. Those who were deemed failed had a mean age of 39.2 years (range, 29-45 years) versus those who were deemed healed with a mean age of 29

  11. Failure analysis of buried tanks

    International Nuclear Information System (INIS)

    Watkins, R.K.

    1994-01-01

    Failure of a buried tank can be hazardous. A failure may be a leak through which product is lost from the tank, but also through which contamination can occur. Failures are epidemic, partly because buried tanks are out of sight, but also because designers of buried tanks have adopted analyses developed for pressure tanks. So why do pressure tanks fail when they are buried? Most failures of buried tanks are really soil failures. Soil compresses, slips, or liquefies. Soil is not only a load; it is a support without which the tank deforms. A high water table adds to the load on the tank. It also reduces the strength of the soil. Based on tests, structural analyses are proposed for empty tanks buried in soils of various quality, with the water table at various levels, and with internal vacuum. Failure may be collapse of the tank. Such a collapse is a sudden, audible inversion of the cylinder that occurs when the sidefill soil slips. Failure may be flotation. Failure may be a leak. Most leaks are fractures in the welds at overlap seams at flat spots. Flat spots are caused by a hard bedding or a heavy surface wheel load. Because the tank wall is double thick at the overlap, shearing stress in the weld is increased. Other weld failures occur when an end plate shears down past a cylinder, or when the tank is supported only at its ends like a beam. These and other failures can be analyzed with justifiable accuracy using basic principles of mechanics of materials. 10 figs

  12. Reliability test and failure analysis of high power LED packages

    International Nuclear Information System (INIS)

    Chen Zhaohui; Zhang Qin; Wang Kai; Luo Xiaobing; Liu Sheng

    2011-01-01

    A new type of application-specific light emitting diode (LED) package (ASLP) with a freeform polycarbonate lens for street lighting is developed, whose manufacturing processes are compatible with a typical LED packaging process. The reliability test methods and failure criteria from different vendors are reviewed and compared. It is found that the test methods and failure criteria are quite different; rapid reliability assessment standards are urgently needed by the LED industry. An 85°C/85% RH test with 700 mA drive current is used to test our LED modules alongside those of three other vendors for 1000 h; our modules show no visible degradation in optical performance, while those of two other vendors show significant degradation. Failure analysis methods such as C-SAM, nano X-ray CT and optical microscopy are used on the LED packages. Failure mechanisms such as delaminations and cracks are detected in the LED packages after the accelerated reliability testing. The finite element simulation method is helpful for failure analysis and for reliability design of the LED packaging. One example shows that a module currently used in industry is vulnerable and may not easily pass harsh thermal cycle testing. (semiconductor devices)

  13. Failure rates in Barsebaeck-1 reactor coolant pressure boundary piping. An application of a piping failure database

    International Nuclear Information System (INIS)

    Lydell, B.

    1999-05-01

    This report documents an application of a piping failure database to estimate the frequency of leak and rupture in reactor coolant pressure boundary piping. The study used Barsebaeck-1 as the reference plant. The study tried two different approaches to piping failure rate estimation: 1) PSA-style, simple estimation using Bayesian statistics, and 2) fitting of a statistical distribution to the failure data. A large, validated database on piping failures (like the SKI-PIPE database) supports both approaches. In addition to documenting leak and rupture frequencies, the SKI report describes the use of piping failure data to estimate the frequency of medium and large loss of coolant accidents (LOCAs). This application study was co-sponsored by Barsebaeck Kraft AB and SKI Research.
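
    The report's own estimates come from its validated database; as a generic sketch of approach 1) above (Bayesian update of a failure rate, here with a Jeffreys gamma prior and hypothetical counts):

        from scipy import stats

        def bayes_failure_rate(n_failures, exposure_hours, a0=0.5, b0=0.0):
            """Gamma-Poisson update: posterior is Gamma(a0 + n, b0 + T).
            Returns posterior mean and a 90% credible interval (per hour)."""
            post = stats.gamma(a0 + n_failures, scale=1.0 / (b0 + exposure_hours))
            return post.mean(), post.ppf(0.05), post.ppf(0.95)

        # Hypothetical data: 2 leak events over 5e6 pipe-section-hours
        mean, lo, hi = bayes_failure_rate(2, 5e6)
        print(f"posterior mean {mean:.2e}/h, 90% interval [{lo:.2e}, {hi:.2e}]")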

  14. Statistics and analysis of scientific data

    CERN Document Server

    Bonamente, Massimiliano

    2013-01-01

    Statistics and Analysis of Scientific Data covers the foundations of probability theory and statistics, and a number of numerical and analytical methods that are essential for the present-day analyst of scientific data. Topics covered include probability theory, distribution functions of statistics, fits to two-dimensional datasheets and parameter estimation, Monte Carlo methods and Markov chains. Equal attention is paid to the theory and its practical application, and results from classic experiments in various fields are used to illustrate the importance of statistics in the analysis of scientific data. The main pedagogical method is a theory-then-application approach, where emphasis is placed first on a sound understanding of the underlying theory of a topic, which becomes the basis for an efficient and proactive use of the material for practical applications. The level is appropriate for undergraduates and beginning graduate students, and as a reference for the experienced researcher. Basic calculus is us...

  15. Method for statistical data analysis of multivariate observations

    CERN Document Server

    Gnanadesikan, R

    1997-01-01

    A practical guide for multivariate statistical techniques-- now updated and revised In recent years, innovations in computer technology and statistical methodologies have dramatically altered the landscape of multivariate data analysis. This new edition of Methods for Statistical Data Analysis of Multivariate Observations explores current multivariate concepts and techniques while retaining the same practical focus of its predecessor. It integrates methods and data-based interpretations relevant to multivariate analysis in a way that addresses real-world problems arising in many areas of inte

  16. Advances in statistical models for data analysis

    CERN Document Server

    Minerva, Tommaso; Vichi, Maurizio

    2015-01-01

    This edited volume focuses on recent research results in classification, multivariate statistics and machine learning and highlights advances in statistical models for data analysis. The volume provides both methodological developments and contributions to a wide range of application areas such as economics, marketing, education, social sciences and environment. The papers in this volume were first presented at the 9th biannual meeting of the Classification and Data Analysis Group (CLADAG) of the Italian Statistical Society, held in September 2013 at the University of Modena and Reggio Emilia, Italy.

  17. Progressive Damage and Failure Analysis of Composite Laminates

    Science.gov (United States)

    Joseph, Ashith P. K.

    Composite materials are widely used in various industries for making structural parts due to their higher strength-to-weight ratio, better fatigue life, corrosion resistance and material property tailorability. To fully exploit the capability of composites, it is necessary to know the load carrying capacity of the parts made of them. Unlike metals, composites are orthotropic in nature and fail in a complex manner under various loading conditions, which makes them hard to analyze. The lack of reliable and efficient failure analysis tools for composites has led industries to rely more on coupon and component level testing to estimate the design space. Due to the complex failure mechanisms, composite materials require a very large number of coupon level tests to fully characterize their behavior. This makes the entire testing process very time consuming and costly. The alternative is to use virtual testing tools which can predict the complex failure mechanisms accurately, reducing the cost to the associated computational expenses and making significant savings. Some of the most desired features in a virtual testing tool are: (1) Accurate representation of failure mechanisms: the failure progression predicted by the virtual tool must match that observed in experiments; a tool has to be assessed based on the mechanisms it can capture. (2) Computational efficiency: the greatest advantages of virtual tools are the savings in time and money, and hence computational efficiency is one of the most needed features. (3) Applicability to a wide range of problems: structural parts are subjected to a variety of loading conditions, including static, dynamic and fatigue conditions, and a good virtual testing tool should be able to make good predictions for all of them. The aim of this PhD thesis is to develop a computational tool which can model the progressive failure of composite laminates under different quasi-static loading conditions. The analysis

  18. Dependency Analysis Guidance Nordic/German Working Group on Common Cause Failure analysis. Phase 2, Development of Harmonized Approach and Applications for Common Cause Failure Quantification

    Energy Technology Data Exchange (ETDEWEB)

    Becker, Guenter; Johanson, Gunnar; Lindberg, Sandra; Vaurio, Jussi

    2009-03-15

    The Regulatory Code SSMFS 2008:1 of the Swedish Radiation Safety Authority (SSM) includes requirements regarding the performance of probabilistic safety assessments (PSA), as well as PSA activities in general. Therefore, the follow-up of these activities is part of the inspection tasks of SSM. According to SSMFS 2008:1, the safety analyses shall be based on a systematic identification and evaluation of such events, event sequences and other conditions which may lead to a radiological accident. The research report Nordic/German Working Group on Common Cause Failure analysis. Phase 2 project report: Development of Harmonized Approach and Applications for Common Cause Failure Quantification has been developed under a contract with the Nordic PSA Group (NPSAG) and its German counterpart VGB, with the aim of creating a common experience base for the defence against and analysis of dependent failures, i.e., common cause failures (CCF). Phase 2 of this project is a deepened data analysis of CCF events and a demonstration of how the so-called impact vectors can be constructed and how CCF parameters are estimated. The word Guidance in the report title indicates a common methodological guidance accepted by the NPSAG, based on the current state of the art concerning the analysis of dependent failures and adapted to conditions relevant for Nordic sites. This will make it possible for the utilities to perform cost-effective improvements and analyses. The report presents a common attempt by the authorities and the utilities to create a methodology and experience base for the defence against and analysis of dependent failures. The benchmark application performed has shown how important the interpretation of base data is for obtaining robust CCF data and data analysis results. Good features were found in all benchmark approaches. The obtained experiences and approaches should now be used in harmonised procedures. A next step could be to develop and agree on event and formula driven impact vector

  19. Dependency Analysis Guidance Nordic/German Working Group on Common Cause Failure analysis. Phase 2, Development of Harmonized Approach and Applications for Common Cause Failure Quantification

    International Nuclear Information System (INIS)

    Becker, Guenter; Johanson, Gunnar; Lindberg, Sandra; Vaurio, Jussi

    2009-03-01

    The Regulatory Code SSMFS 2008:1 of the Swedish Radiation Safety Authority (SSM) includes requirements regarding the performance of probabilistic safety assessments (PSA), as well as PSA activities in general. Therefore, the follow-up of these activities is part of the inspection tasks of SSM. According to SSMFS 2008:1, the safety analyses shall be based on a systematic identification and evaluation of such events, event sequences and other conditions which may lead to a radiological accident. The research report Nordic/German Working Group on Common Cause Failure analysis. Phase 2 project report: Development of Harmonized Approach and Applications for Common Cause Failure Quantification has been developed under a contract with the Nordic PSA Group (NPSAG) and its German counterpart VGB, with the aim of creating a common experience base for the defence against and analysis of dependent failures, i.e., common cause failures (CCF). Phase 2 of this project is a deepened data analysis of CCF events and a demonstration of how the so-called impact vectors can be constructed and how CCF parameters are estimated. The word Guidance in the report title indicates a common methodological guidance accepted by the NPSAG, based on the current state of the art concerning the analysis of dependent failures and adapted to conditions relevant for Nordic sites. This will make it possible for the utilities to perform cost-effective improvements and analyses. The report presents a common attempt by the authorities and the utilities to create a methodology and experience base for the defence against and analysis of dependent failures. The benchmark application performed has shown how important the interpretation of base data is for obtaining robust CCF data and data analysis results. Good features were found in all benchmark approaches. The obtained experiences and approaches should now be used in harmonised procedures. A next step could be to develop and agree on event and formula driven impact vector

  20. Failure analysis of the cement mantle in total hip arthroplasty with an efficient probabilistic method.

    Science.gov (United States)

    Kaymaz, Irfan; Bayrak, Ozgu; Karsan, Orhan; Celik, Ayhan; Alsaran, Akgun

    2014-04-01

    Accurate prediction of the long-term behaviour of cemented hip implants is very important, not only for patient comfort but also for the elimination of revision operations due to implant failure. Therefore, a more realistic computer model was generated and then used for both deterministic and probabilistic analyses of the hip implant in this study. The deterministic failure analysis was carried out for the most common failure states of the cement mantle. On the other hand, most of the design parameters of the cemented hip are inherently uncertain quantities. Therefore, a probabilistic failure analysis was also carried out, considering the fatigue failure of the cement mantle since it is the most critical failure state. However, probabilistic analysis generally requires a large amount of computation time; thus, a response surface method proposed in this study was used to reduce the computation time for the analysis of the cemented hip implant. The results demonstrate that using an efficient probabilistic approach can significantly reduce the computation time for the failure probability of the cement from several hours to minutes. The results also show that even though the deterministic failure analyses do not indicate any failure of the cement mantle, with high safety factors, the probabilistic analysis predicts a failure probability of the cement mantle of 8%, which must be considered when evaluating the success of cemented hip implants.
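
    As a generic sketch of the response-surface idea above (a quadratic surrogate fitted to a handful of runs of a stand-in model, then Monte Carlo on the cheap surrogate; all numbers and the margin function are hypothetical):

        import numpy as np

        def fe_model(e_cement, load):                # stand-in for an expensive FE run
            return 25.0 - 4.0 * load / e_cement      # hypothetical fatigue margin

        # Design points: cement modulus (GPa) x joint load (kN), hypothetical ranges
        X = np.array([[e, f] for e in (2.0, 2.5, 3.0) for f in (2.0, 3.0, 4.0)])
        y = np.array([fe_model(e, f) for e, f in X])

        # Least-squares quadratic response surface in the two inputs
        A = np.column_stack([np.ones(len(X)), X[:, 0], X[:, 1],
                             X[:, 0] * X[:, 1], X[:, 0] ** 2, X[:, 1] ** 2])
        coef, *_ = np.linalg.lstsq(A, y, rcond=None)

        # Monte Carlo on the surrogate: failure = margin below a threshold
        rng = np.random.default_rng(0)
        e = rng.normal(2.5, 0.25, 1_000_000)
        f = rng.normal(3.0, 0.50, 1_000_000)
        m = (coef[0] + coef[1] * e + coef[2] * f
             + coef[3] * e * f + coef[4] * e ** 2 + coef[5] * f ** 2)
        print("P(failure) ~", np.mean(m < 18.0))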

  1. Systematic analysis of coding and noncoding DNA sequences using methods of statistical linguistics

    Science.gov (United States)

    Mantegna, R. N.; Buldyrev, S. V.; Goldberger, A. L.; Havlin, S.; Peng, C. K.; Simons, M.; Stanley, H. E.

    1995-01-01

    We compare the statistical properties of coding and noncoding regions in eukaryotic and viral DNA sequences by adapting two tests developed for the analysis of natural languages and symbolic sequences. The data set comprises all 30 sequences of length above 50 000 base pairs in GenBank Release No. 81.0, as well as the recently published sequences of C. elegans chromosome III (2.2 Mbp) and yeast chromosome XI (661 Kbp). We find that for the three chromosomes we studied the statistical properties of noncoding regions appear to be closer to those observed in natural languages than those of coding regions. In particular, (i) an n-tuple Zipf analysis of noncoding regions reveals a regime close to power-law behavior whereas the coding regions show logarithmic behavior over a wide interval, and (ii) an n-gram entropy measurement shows that the noncoding regions have a lower n-gram entropy (and hence a larger "n-gram redundancy") than the coding regions. In contrast to the three chromosomes, we find that for vertebrates such as primates and rodents and for viral DNA, the difference between the statistical properties of coding and noncoding regions is not pronounced, and therefore the results of the analyses of the investigated sequences are less conclusive. After noting the intrinsic limitations of the n-gram redundancy analysis, we also briefly discuss the failure of the zeroth- and first-order Markovian models or simple nucleotide repeats to account fully for these "linguistic" features of DNA. Finally, we emphasize that our results by no means prove the existence of a "language" in noncoding DNA.
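
    As a toy sketch of measurement (ii) above (Shannon n-gram entropy, where lower entropy means larger n-gram redundancy; the sequences here are synthetic, not GenBank data):

        import random
        from collections import Counter
        from math import log2

        def ngram_entropy(seq, n):
            """Shannon entropy (bits) of the n-gram distribution of a sequence."""
            grams = Counter(seq[i:i + n] for i in range(len(seq) - n + 1))
            total = sum(grams.values())
            return -sum(c / total * log2(c / total) for c in grams.values())

        random.seed(0)
        repetitive = "ACGT" * 250                                   # highly redundant
        random_seq = "".join(random.choice("ACGT") for _ in range(1000))
        for n in (1, 2, 4):
            print(n, ngram_entropy(repetitive, n), ngram_entropy(random_seq, n))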

  2. Failure rates in Barsebaeck-1 reactor coolant pressure boundary piping. An application of a piping failure database

    Energy Technology Data Exchange (ETDEWEB)

    Lydell, B. [RSA Technologies, Vista, CA (United States)

    1999-05-01

    This report documents an application of a piping failure database to estimate the frequency of leak and rupture in reactor coolant pressure boundary piping. The study used Barsebaeck-1 as the reference plant. The study tried two different approaches to piping failure rate estimation: 1) PSA-style, simple estimation using Bayesian statistics, and 2) fitting of a statistical distribution to the failure data. A large, validated database on piping failures (like the SKI-PIPE database) supports both approaches. In addition to documenting leak and rupture frequencies, the SKI report describes the use of piping failure data to estimate the frequency of medium and large loss of coolant accidents (LOCAs). This application study was co-sponsored by Barsebaeck Kraft AB and SKI Research. 41 refs, figs, tabs

  3. Statistical models and methods for reliability and survival analysis

    CERN Document Server

    Couallier, Vincent; Huber-Carol, Catherine; Mesbah, Mounir; Huber -Carol, Catherine; Limnios, Nikolaos; Gerville-Reache, Leo

    2013-01-01

    Statistical Models and Methods for Reliability and Survival Analysis brings together contributions by specialists in statistical theory as they discuss their applications providing up-to-date developments in methods used in survival analysis, statistical goodness of fit, stochastic processes for system reliability, amongst others. Many of these are related to the work of Professor M. Nikulin in statistics over the past 30 years. The authors gather together various contributions with a broad array of techniques and results, divided into three parts - Statistical Models and Methods, Statistical

  4. Classification, (big) data analysis and statistical learning

    CERN Document Server

    Conversano, Claudio; Vichi, Maurizio

    2018-01-01

    This edited book focuses on the latest developments in classification, statistical learning, data analysis and related areas of data science, including statistical analysis of large datasets, big data analytics, time series clustering, integration of data from different sources, as well as social networks. It covers both methodological aspects as well as applications to a wide range of areas such as economics, marketing, education, social sciences, medicine, environmental sciences and the pharmaceutical industry. In addition, it describes the basic features of the software behind the data analysis results, and provides links to the corresponding codes and data sets where necessary. This book is intended for researchers and practitioners who are interested in the latest developments and applications in the field. The peer-reviewed contributions were presented at the 10th Scientific Meeting of the Classification and Data Analysis Group (CLADAG) of the Italian Statistical Society, held in Santa Margherita di Pul...

  5. Statistical hot spot analysis of reactor cores

    International Nuclear Information System (INIS)

    Schaefer, H.

    1974-05-01

    This report is an introduction to statistical hot spot analysis. After the definition of the term 'hot spot', a statistical analysis is outlined. The mathematical method is presented; in particular, the formula for the probability of no hot spots in a reactor core is evaluated. A discussion of the boundary conditions of a statistical hot spot analysis is given (technological limits, nominal situation, uncertainties). The application of hot spot analysis to the linear power of pellets and the temperature rise in cooling channels is demonstrated with respect to the test zone of KNK II. Basic quantities, such as the probability of no hot spots, the hot spot potential, the expected hot spot diagram and the cumulative distribution function of hot spots, are discussed. It is shown that the risk of hot channels can be spread equally over all subassemblies by an adequate choice of the nominal temperature distribution in the core
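
    As a hedged illustration of the kind of formula evaluated (the textbook independent-channel form, not necessarily the report's): if each of the N hot-spot candidates independently exceeds its limit with probability p_i, the probability of no hot spot in the core is

        $$P_0 = \prod_{i=1}^{N} \left(1 - p_i\right) \approx \exp\!\Big(-\sum_{i=1}^{N} p_i\Big) \quad \text{for small } p_i.$$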

  6. Natural disaster risk analysis for critical infrastructure systems: An approach based on statistical learning theory

    International Nuclear Information System (INIS)

    Guikema, Seth D.

    2009-01-01

    Probabilistic risk analysis has historically been developed for situations in which measured data about the overall reliability of a system are limited and expert knowledge is the best source of information available. There continue to be a number of important problem areas characterized by a lack of hard data. However, in other important problem areas the emergence of information technology has transformed the situation from one characterized by little data to one characterized by data overabundance. Natural disaster risk assessments for events impacting large-scale, critical infrastructure systems such as electric power distribution systems, transportation systems, water supply systems, and natural gas supply systems are important examples of problems characterized by data overabundance. There are often substantial amounts of information collected and archived about the behavior of these systems over time. Yet it can be difficult to effectively utilize these large data sets for risk assessment. Using this information for estimating the probability or consequences of system failure requires a different approach and analysis paradigm than risk analysis for data-poor systems does. Statistical learning theory, a diverse set of methods designed to draw inferences from large, complex data sets, can provide a basis for risk analysis for data-rich systems. This paper provides an overview of statistical learning theory methods and discusses their potential for greater use in risk analysis

  7. PCI fuel failure analysis: a report on a cooperative program undertaken by Pacific Northwest Laboratory and Chalk River Nuclear Laboratories

    International Nuclear Information System (INIS)

    Mohr, C.L.; Pankaskie, P.J.; Heasler, P.G.; Wood, J.C.

    1979-12-01

    Reactor fuel failure data sets in the form of initial power (Pi), final power (Pf), transient increase in power (ΔP), and burnup (Bu) were obtained for pressurized heavy water reactors (PHWRs), boiling water reactors (BWRs), and pressurized water reactors (PWRs). These data sets were evaluated and used as the basis for developing two predictive fuel failure models: a graphical concept called the PCI-OGRAM, and a nonlinear-regression-based model called PROFIT. The PCI-OGRAM is an extension of the FUELOGRAM developed by AECL. It is based on a critical threshold concept for stress-dependent stress corrosion cracking. The PROFIT model, developed at Pacific Northwest Laboratory, is the result of applying standard statistical regression methods to the available PCI fuel failure data and of an analysis of the environmental and strain-rate dependent stress-strain properties of the Zircaloy cladding
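
    PROFIT's actual nonlinear regression form is not given in the abstract; as a loosely related generic sketch (a logistic regression of failure on the same covariates, with synthetic data; a swapped-in technique for illustration only):

        import numpy as np
        from sklearn.linear_model import LogisticRegression

        rng = np.random.default_rng(0)
        # Synthetic (Pi, dP, Bu) records in kW/m, kW/m, MWd/kgU (hypothetical units)
        X = rng.uniform([10.0, 0.0, 0.0], [40.0, 20.0, 30.0], size=(500, 3))
        logit = -8.0 + 0.15 * X[:, 1] + 0.20 * X[:, 2]   # synthetic ground truth
        y = rng.random(500) < 1.0 / (1.0 + np.exp(-logit))

        model = LogisticRegression(max_iter=1000).fit(X, y)
        print(model.predict_proba([[30.0, 15.0, 25.0]])[:, 1])  # failure probability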

  8. Tests and analysis on steam generator tube failure propagation

    International Nuclear Information System (INIS)

    Tanabe, Hiromi

    1990-01-01

    The understanding of leak enlargement and failure propagation behavior is essential for selecting a design basis leak (DBL) for LMFBR steam generators. Therefore, various series of experiments, such as self-enlargement tests, target wastage tests and failure propagation tests, were conducted over a wide range of leak rates using the SWAT test facilities at PNC/OEC. In particular, in the large-leak tests, the potential for overheating failure was investigated under a prototypical steam cooling condition inside the target tubes. For small leaks, the difference in wastage resistivity was clarified among several tube materials such as 9-chrome steels. As an analytical approach, a computer code LEAP (Leak Enlargement and Propagation) was developed on the basis of all of these experimental results. The code was used to validate the previously selected DBL of the prototype reactor, Monju, steam generator. This approach proved to be successful in spite of some over-conservatism in the analysis. Moreover, LEAP clarified the effectiveness of a rapid steam dump and an enhanced leak detection system. Code improvement toward a more realistic analysis is desired, however, to lessen the DBL for a future large plant, and the re-evaluation of experimental data such as the size of secondary failures is under way. (author). 4 refs, 8 figs, 1 tab

  9. Failure analysis of high strength pipeline with single and multiple corrosions

    International Nuclear Information System (INIS)

    Chen, Yanfei; Zhang, Hong; Zhang, Juan; Li, Xin; Zhou, Jing

    2015-01-01

    Highlights: • We study the failure of high strength pipelines with single corrosion defects. • We give regression equations for failure pressure prediction. • We propose an assessment procedure for pipelines with multiple corrosion defects. - Abstract: Corrosion compromises the safe operation of oil and gas pipelines, so accurate determination of the failure pressure is important for residual strength assessment and corrosion allowance design of onshore and offshore pipelines. This paper investigates the failure pressure of high strength pipeline with single and multiple corrosion defects using nonlinear finite element analysis. On the basis of the developed regression equations for failure pressure prediction of high strength pipeline with a single corrosion defect, the paper proposes an assessment procedure for predicting the failure pressure of high strength pipeline with multiple corrosion defects. Furthermore, failure pressures predicted by the proposed solutions are compared with experimental results and various assessment methods available in the literature, where accuracy and versatility are demonstrated
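
    The paper's regression equations are not reproduced in the abstract; as context, a textbook single-defect burst estimate in the spirit of the classic ASME B31G criterion (parabolic metal-loss area, short defects), with hypothetical pipe data:

        from math import sqrt

        def b31g_failure_pressure(d_over_t, length_mm, D_mm, t_mm, smys_mpa):
            """Burst pressure (MPa) of a single corrosion defect, classic B31G form."""
            m = sqrt(1.0 + 0.8 * length_mm ** 2 / (D_mm * t_mm))   # Folias factor
            flow = 1.1 * smys_mpa                                  # flow stress
            num = 1.0 - (2.0 / 3.0) * d_over_t
            den = 1.0 - (2.0 / 3.0) * d_over_t / m
            return (2.0 * flow * t_mm / D_mm) * num / den

        # Hypothetical X80 pipe: OD 914 mm, wall 12.7 mm, 50% deep, 200 mm long defect
        print(b31g_failure_pressure(0.5, 200.0, 914.0, 12.7, 555.0))  # ~13.7 MPa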

  10. The statistical analysis of anisotropies

    International Nuclear Information System (INIS)

    Webster, A.

    1977-01-01

    One of the many uses to which a radio survey may be put is an analysis of the distribution of the radio sources on the celestial sphere, to find out whether they are bunched into clusters or lie in preferred regions of space. There are many methods of testing for clustering in point processes, and since they are not all equally good this contribution is presented as a brief guide to what seem to be the best of them. The radio sources certainly do not show very strong clustering and may well be entirely unclustered, so if a statistical method is to be useful it must be both powerful and flexible. A statistic is powerful in this context if it can efficiently distinguish a weakly clustered distribution of sources from an unclustered one, and it is flexible if it can be applied in a way which avoids mistaking defects in the survey for true peculiarities in the distribution of sources. The paper divides clustering statistics into two classes: number density statistics and log N/log S statistics. (Auth.)

  11. Basic statistical tools in research and data analysis

    Directory of Open Access Journals (Sweden)

    Zulfiqar Ali

    2016-01-01

    Full Text Available Statistical methods involved in carrying out a study include planning, designing, collecting data, analysing, drawing meaningful interpretations and reporting the research findings. Statistical analysis gives meaning to meaningless numbers, thereby breathing life into lifeless data. The results and inferences are precise only if proper statistical tests are used. This article will try to acquaint the reader with the basic research tools that are utilised while conducting various studies. The article covers a brief outline of the variables, an understanding of quantitative and qualitative variables and the measures of central tendency. An idea of the sample size estimation, power analysis and the statistical errors is given. Finally, there is a summary of parametric and non-parametric tests used for data analysis.

  12. Failure analysis on a chemical waste pipe

    International Nuclear Information System (INIS)

    Ambler, J.R.

    1985-01-01

    A failure analysis of a chemical waste pipe illustrates how nuclear technology can spin off metallurgical consultant services. The pipe, made of zirconium alloy (Zr-2.5 wt percent Nb, UNS 60705), had cracked in several places, all at butt welds. A combination of fractography and metallography indicated delayed hydride cracking

  13. PACC information management code for common cause failures analysis

    International Nuclear Information System (INIS)

    Ortega Prieto, P.; Garcia Gay, J.; Mira McWilliams, J.

    1987-01-01

    The purpose of this paper is to present the PACC code which, through adequate data management, makes the task of computerized common-mode failure analysis easier. PACC processes and generates information to carry out the corresponding qualitative analysis, by means of the boolean technique of transformation of variables, and the quantitative analysis, either using one of several parametric methods or a direct data base. As far as the qualitative analysis is concerned, the code creates several functional forms of the transformation equations according to the user's choice. These equations are subsequently processed by boolean manipulation codes, such as SETS. The quantitative calculations of the code can be carried out in two different ways: either starting from a common cause data base, or through parametric methods, such as the Binomial Failure Rate Method, the Basic Parameters Method or the Multiple Greek Letter Method, among others. (orig.)

  14. A STATISTICAL ANALYSIS OF LARYNGEAL MALIGNANCIES AT OUR INSTITUTION

    Directory of Open Access Journals (Sweden)

    Bharathi Mohan Mathan

    2017-03-01

    Full Text Available BACKGROUND Malignancies of the larynx are an increasing global burden, accounting for approximately 2-5% of all malignancies, with an incidence of 3.6/100,000 for men and 1.3/100,000 for women and a male-to-female ratio of 4:1. Smoking and alcohol are the major established risk factors. More than 90-95% of all laryngeal malignancies are of squamous cell type. The three main subsites of laryngeal malignancy are the glottis, supraglottis and subglottis. Improved surgical techniques and advanced chemoradiotherapy have increased the overall 5-year survival rate. This study is a statistical analysis of laryngeal malignancies at our institution over a period of one year, analysing the pattern of distribution, aetiology, sites and subsites, and causes of recurrence. MATERIALS AND METHODS Based on the statistical data available in the institution for the period of one year from January 2016-December 2016, all laryngeal malignancies were analysed with respect to demographic pattern, age, gender, site, subsite, aetiology, staging, treatment received and probable cause for failure of treatment. Patients were followed up for a 12-month period during the study. RESULTS The total number of cases studied was 27 (twenty-seven). There were 23 male and 4 female cases, a male-to-female ratio of 5.7:1; the most common age was above 60 years, the most common site was the supraglottis, the most common type was moderately-differentiated squamous cell carcinoma, and the most common causes of relapse or recurrence were advanced stage of disease and poor differentiation. CONCLUSION The commonest age of occurrence at the end of the study was above 60 years and the male-to-female ratio was 5.7:1, which is slightly above the international standards. The most common site was the supraglottis and not the glottis. The relapses and recurrences were higher compared to the international standards.

  15. The analysis of failure data in the presence of critical and degraded failures

    International Nuclear Information System (INIS)

    Haugen, Knut; Hokstad, Per; Sandtorv, Helge

    1997-01-01

    Reported failures are often classified into severity classes, e.g., as critical or degraded. The critical failures correspond to loss of function(s) and are those of main concern. The rate of critical failures is usually estimated as the number of observed critical failures divided by the exposure time, thus ignoring the observed degraded failures. In the present paper, failure data are analyzed applying an alternative estimate for the critical failure rate that also takes the number of observed degraded failures into account. The model includes two alternative failure mechanisms: one of the shock type, immediately leading to a critical failure, and another resulting in gradual deterioration, leading to a degraded failure before the critical failure occurs. Failure data on safety valves from the OREDA (Offshore REliability DAta) database are analyzed using this model. The estimate for the critical failure rate is obtained and compared with the standard estimate.
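
    The abstract does not give the alternative estimator in closed form; the sketch below only illustrates the idea under an assumed simple model, in which each observed degraded failure is credited with a hypothetical probability p_prog of progressing to a critical failure if left unrepaired. The counts, exposure time, and p_prog are all made up.

```python
# Illustrative comparison of the standard critical-failure-rate estimate with
# one that also credits observed degraded failures (hypothetical model, not
# the estimator from the paper).

T = 5.0e5        # total exposure time in component-hours (hypothetical)
n_crit = 12      # observed critical failures
n_deg = 30       # observed degraded failures
p_prog = 0.25    # assumed probability that a degraded failure would have
                 # progressed to critical before detection or repair

lam_standard = n_crit / T                     # ignores degraded failures
lam_adjusted = (n_crit + p_prog * n_deg) / T  # counts expected progressions

print(f"standard estimate : {lam_standard:.3e} per hour")
print(f"adjusted estimate : {lam_adjusted:.3e} per hour")
```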

  16. Failure mode and effects analysis and fault tree analysis of surface image guided cranial radiosurgery.

    Science.gov (United States)

    Manger, Ryan P; Paxton, Adam B; Pawlicki, Todd; Kim, Gwe-Ya

    2015-05-01

    Surface image guided, Linac-based radiosurgery (SIG-RS) is a modern approach for delivering radiosurgery that uses optical stereoscopic imaging to monitor the patient's surface during treatment in lieu of a head frame for immobilization. Considering the novelty of the SIG-RS approach and the severity of errors associated with the delivery of large doses per fraction, a risk assessment should be conducted to identify potential hazards, determine their causes, and formulate mitigation strategies. The purpose of this work is to investigate SIG-RS using the combined application of failure modes and effects analysis (FMEA) and fault tree analysis (FTA), report on the effort required to complete the analysis, and evaluate the use of FTA in conjunction with FMEA. A multidisciplinary team was assembled to conduct the FMEA of the SIG-RS process. A process map detailing the steps of SIG-RS was created to guide the FMEA. Failure modes were determined for each step in the SIG-RS process, and risk priority numbers (RPNs) were estimated for each failure mode to facilitate risk stratification. The failure modes were ranked by RPN, and FTA was used to determine the root factors contributing to the riskiest failure modes. Using the FTA, mitigation strategies were formulated to address the root factors and reduce the risk of the process. The RPNs were re-estimated based on the mitigation strategies to determine the margin of risk reduction. The FMEA and the FTAs for the top two failure modes required an effort of 36 person-hours (30 for the FMEA and 6 for the two FTAs). The SIG-RS process consisted of 13 major subprocesses and 91 steps, which amounted to 167 failure modes. Of the 91 steps, 16 were directly related to surface imaging. Twenty-five failure modes had an RPN of 100 or greater, and only one of these top 25 was specific to surface imaging. The riskiest surface imaging failure mode ranked eighth overall by RPN.
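
    For independent basic events, the FTA step reduces to simple gate arithmetic: an AND gate multiplies probabilities, and an OR gate combines them as one minus the product of complements. The sketch below evaluates a small hypothetical tree of that form; it is generic, not the SIG-RS fault tree from the study.

```python
from math import prod

# Evaluate a fault tree of AND/OR gates over independent basic events.
# Gates are nested tuples: ("AND" | "OR", [children]); leaves are floats.

def gate_prob(node):
    if isinstance(node, float):          # basic event probability
        return node
    kind, children = node
    p = [gate_prob(c) for c in children]
    if kind == "AND":                    # all children must occur
        return prod(p)
    if kind == "OR":                     # at least one child occurs
        return 1.0 - prod(1.0 - q for q in p)
    raise ValueError(f"unknown gate: {kind}")

# Hypothetical top event: camera misaligned AND (setup error OR software fault)
tree = ("AND", [0.01, ("OR", [0.05, 0.02])])
print(f"P(top event) = {gate_prob(tree):.5f}")  # 0.01 * (1 - 0.95 * 0.98)
```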

  17. Reproducible statistical analysis with multiple languages

    DEFF Research Database (Denmark)

    Lenth, Russell; Højsgaard, Søren

    2011-01-01

    This paper describes a system for making reproducible statistical analyses. The system differs from other systems for reproducible analysis in several ways. The two main differences are: (1) several statistics programs can be used in the same document; (2) documents can be prepared using OpenOffice or \LaTeX. The main part of this paper is an example showing how to use the system in an OpenOffice text document. The paper also contains some practical considerations on the use of literate programming in statistics.

  18. Clinical risk analysis with failure mode and effect analysis (FMEA) model in a dialysis unit.

    Science.gov (United States)

    Bonfant, Giovanna; Belfanti, Pietro; Paternoster, Giuseppe; Gabrielli, Danila; Gaiter, Alberto M; Manes, Massimo; Molino, Andrea; Pellu, Valentina; Ponzetti, Clemente; Farina, Massimo; Nebiolo, Pier E

    2010-01-01

    The aim of clinical risk management is to improve the quality of care provided by health care organizations and to assure patients' safety. Failure mode and effect analysis (FMEA) is a tool employed for clinical risk reduction. We applied FMEA to chronic hemodialysis outpatients. The FMEA steps were: (i) process study: we recorded phases and activities. (ii) Hazard analysis: we listed activity-related failure modes and their effects, described control measures, assigned severity, occurrence and detection scores to each failure mode, and calculated risk priority numbers (RPNs) by multiplying the 3 scores; the total RPN is calculated by adding the single failure mode RPNs. (iii) Planning: we prioritized the RPNs on a priority matrix taking the 3 scores into account, analyzed the causes of the failure modes, made recommendations and planned new control measures. (iv) Monitoring: after failure mode elimination or reduction, we compared the resulting RPN with the previous one. The failure modes with the highest RPNs came from communication and organization problems. Two tools were created to improve information flow: "dialysis agenda" software and nursing datasheets. We scheduled nephrological examinations, and we changed both the medical and the nursing organization. The total RPN decreased from 892 to 815 (8.6%) after reorganization. Employing FMEA, we worked on a few critical activities and reduced patients' clinical risk. A priority matrix also takes into account the weight of the control measures: we believe this evaluation is quick, because of the simple priority selection, and that it decreases action times.
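
    The RPN bookkeeping described here is straightforward to reproduce. A minimal sketch with hypothetical dialysis-unit failure modes and scores:

```python
# Risk Priority Number (RPN) = severity * occurrence * detection, each
# scored here on a 1-10 scale. Failure modes and scores are hypothetical.

failure_modes = [
    # (description, severity, occurrence, detection)
    ("wrong dialysate prescription transcribed", 8, 4, 5),
    ("missed inter-unit communication of lab results", 7, 5, 6),
    ("vascular access disconnection undetected", 9, 2, 4),
    ("scheduling conflict delays session", 4, 6, 3),
]

scored = [(desc, s * o * d) for desc, s, o, d in failure_modes]
scored.sort(key=lambda x: x[1], reverse=True)  # rank by RPN, highest first

total_rpn = sum(rpn for _, rpn in scored)      # process-level total RPN
for desc, rpn in scored:
    print(f"RPN {rpn:4d}  {desc}")
print(f"Total RPN: {total_rpn}")
```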

  19. rpsftm: An R Package for Rank Preserving Structural Failure Time Models.

    Science.gov (United States)

    Allison, Annabel; White, Ian R; Bond, Simon

    2017-12-04

    Treatment switching in a randomised controlled trial occurs when participants change from their randomised treatment to the other trial treatment during the study. Failure to account for treatment switching in the analysis (i.e. by performing a standard intention-to-treat analysis) can lead to biased estimates of treatment efficacy. The rank preserving structural failure time model (RPSFTM) is a method used to adjust for treatment switching in trials with survival outcomes. The RPSFTM is due to Robins and Tsiatis (1991) and has been developed by White et al. (1997, 1999). The method is randomisation based and uses only the randomised treatment group, observed event times, and treatment history to estimate a causal treatment effect. The treatment effect, ψ, is estimated by balancing counterfactual event times (those that would be observed if no treatment were received) between treatment groups. G-estimation is used to find the value of ψ such that a test statistic Z(ψ) = 0. This is usually the test statistic used in the intention-to-treat analysis, for example the log rank test statistic. We present rpsftm, an R package that implements the method.
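
    The g-estimation step can be sketched in a few lines. Under the RPSFTM the counterfactual time is U = T_off + exp(psi) * T_on, and psi is chosen so that a between-arm test statistic computed on U is zero. The toy version below uses a Mann-Whitney statistic on fully observed (uncensored) data from a no-switching trial; the actual rpsftm package uses the log-rank or other tests and handles censoring and recensoring, which this sketch ignores.

```python
import numpy as np
from scipy.stats import mannwhitneyu

rng = np.random.default_rng(0)

# Toy trial with no switching: the treated arm is on treatment throughout,
# the control arm never is. Under the RPSFTM the untreated (counterfactual)
# time is U = t_off + exp(psi) * t_on, so a treated subject with latent
# untreated time U is observed at T = exp(-psi) * U.
n = 400
arm = rng.integers(0, 2, n)          # 0 = control, 1 = treated
true_psi = -0.5                      # negative psi => treatment prolongs life
u_latent = rng.exponential(1.0, n)   # untreated event times (randomised arms)
t_obs = np.where(arm == 1, np.exp(-true_psi) * u_latent, u_latent)
t_on = np.where(arm == 1, t_obs, 0.0)
t_off = t_obs - t_on

def z_stat(psi):
    """Between-arm rank statistic on counterfactual times U(psi), centred at 0."""
    u = t_off + np.exp(psi) * t_on
    stat = mannwhitneyu(u[arm == 1], u[arm == 0]).statistic
    return stat - (arm == 1).sum() * (arm == 0).sum() / 2.0

# G-estimation: pick the psi whose counterfactual times balance the arms.
grid = np.linspace(-2.0, 2.0, 401)
psi_hat = grid[np.argmin([abs(z_stat(p)) for p in grid])]
print(f"estimated psi = {psi_hat:.2f} (true value {true_psi})")
```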

  20. Feasibility and acceptability of a self-measurement using a portable bioelectrical impedance analysis, by the patient with chronic heart failure, in acute decompensated heart failure.

    Science.gov (United States)

    Huguel, Benjamin; Vaugrenard, Thibaud; Saby, Ludivine; Benhamou, Lionel; Arméro, Sébastien; Camilleri, Élise; Langar, Aida; Alitta, Quentin; Grino, Michel; Retornaz, Frédérique

    2018-06-01

    Chronic heart failure (CHF) is a major public health issue. Mainly affecting the elderly, it is responsible for a high rate of hospitalization owing to the frequency of acute decompensated heart failure (ADHF). It is a disabling pathology for the patient and a very costly one for the health care system. Our study was designed to assess a connected, portable bioelectrical impedance analysis (BIA) device that could reduce these hospitalizations by detecting ADHF early. This prospective study included patients hospitalized in cardiology for ADHF. Patients performed 3 self-measurements with the BIA device during their hospitalization and answered a questionnaire evaluating the acceptability of this self-measurement. The results of these measurements were compared with the clinical, biological and echocardiographic findings recorded at the same time. Twenty-three patients were included; self-measurement over the whole duration of the hospitalization was carried out autonomously by more than 80% of the patients. Acceptability of the portable BIA device was excellent (90%). Some correlations were statistically significant, such as that between the change in total body water and the change in weight (p=0.001). There were common trends between the variation of the impedance measures and the other evaluation criteria. The feasibility and acceptability of self-measured bioelectrical impedance analysis by patients in ADHF open up major prospects for the monitoring of patients with CHF. The value of this tool in preventing ADHF leading to hospitalization or re-hospitalization now needs to be confirmed by further studies.

  1. Application of failure mode and effect analysis in a radiology department.

    Science.gov (United States)

    Thornton, Eavan; Brook, Olga R; Mendiratta-Lala, Mishal; Hallett, Donna T; Kruskal, Jonathan B

    2011-01-01

    With increasing deployment, complexity, and sophistication of equipment and related processes within the clinical imaging environment, system failures are more likely to occur. These failures may have varying effects on the patient, ranging from no harm to devastating harm. Failure mode and effect analysis (FMEA) is a tool that permits the proactive identification of possible failures in complex processes and provides a basis for continuous improvement. This overview of the basic principles and methodology of FMEA provides an explanation of how FMEA can be applied to clinical operations in a radiology department to reduce, predict, or prevent errors. The six sequential steps in the FMEA process are explained, and clinical magnetic resonance imaging services are used as an example for which FMEA is particularly applicable. A modified version of traditional FMEA called Healthcare Failure Mode and Effect Analysis, which was introduced by the U.S. Department of Veterans Affairs National Center for Patient Safety, is briefly reviewed. In conclusion, FMEA is an effective and reliable method to proactively examine complex processes in the radiology department. FMEA can be used to highlight the high-risk subprocesses and allows these to be targeted to minimize the future occurrence of failures, thus improving patient safety and streamlining the efficiency of the radiology department.

  2. Exact combinatorial reliability analysis of dynamic systems with sequence-dependent failures

    International Nuclear Information System (INIS)

    Xing Liudong; Shrestha, Akhilesh; Dai Yuanshun

    2011-01-01

    Many real-life fault-tolerant systems are subject to sequence-dependent failure behavior, in which the order in which the fault events occur is important to the system reliability. Such systems can be modeled by dynamic fault trees (DFT) with priority-AND (pAND) gates. Existing approaches for the reliability analysis of systems subject to sequence-dependent failures are typically state-space-based, simulation-based or inclusion-exclusion-based methods. Those methods either suffer from the state-space explosion problem or require long computation times, especially when results with a high degree of accuracy are desired. In this paper, an analytical method based on sequential binary decision diagrams is proposed. The proposed approach can analyze the exact reliability of non-repairable dynamic systems subject to sequence-dependent failure behavior. The approach is also combinatorial and applicable to systems with arbitrary component time-to-failure distributions. The application and advantages of the proposed approach are illustrated through the analysis of several examples. - Highlights: → We analyze the sequence-dependent failure behavior using combinatorial models. → The method has no limitation on the type of time-to-failure distributions. → The method is analytical and based on sequential binary decision diagrams (SBDD). → The method is computationally more efficient than existing methods.
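
    The semantics the analytical method has to capture are easy to state for a two-input priority-AND gate: the gate output occurs within the mission time only if both inputs fail and the first input fails before the second. A Monte Carlo cross-check of that definition, with hypothetical Weibull and lognormal time-to-failure distributions, fits in a few lines (the paper's SBDD method is analytical; this is only a numerical sanity check):

```python
import numpy as np

rng = np.random.default_rng(42)

# Two-input priority-AND (pAND) gate: the output occurs within mission time
# t_mission only if A fails, B fails, and A fails strictly before B.
# Arbitrary time-to-failure distributions are allowed; these are hypothetical.
n = 1_000_000
t_mission = 1000.0
t_a = rng.weibull(1.5, n) * 800.0                  # component A: Weibull
t_b = rng.lognormal(mean=6.5, sigma=0.5, size=n)   # component B: lognormal

pand = (t_a < t_b) & (t_b <= t_mission)   # A before B, both within mission
p_hat = pand.mean()
se = pand.std(ddof=1) / np.sqrt(n)        # Monte Carlo standard error
print(f"P(pAND by t={t_mission:g}) ~ {p_hat:.4f} +/- {1.96 * se:.4f}")
```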

  3. Common pitfalls in statistical analysis: "P" values, statistical significance and confidence intervals

    Directory of Open Access Journals (Sweden)

    Priya Ranganathan

    2015-01-01

    In the second part of a series on pitfalls in statistical analysis, we look at the various ways in which a statistically significant study result can be expressed. We debunk some of the myths regarding the 'P' value, explain the importance of 'confidence intervals', and clarify the importance of including both values in a paper.
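
    The point carries beyond the editorial: a P value alone says nothing about the size or precision of an effect, which is what a confidence interval conveys. A minimal sketch contrasting the two quantities on synthetic two-group data:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)

# Two synthetic treatment groups with a small true difference in means.
a = rng.normal(10.0, 2.0, 120)
b = rng.normal(10.8, 2.0, 120)

t, p = stats.ttest_ind(a, b)                 # P value: evidence against H0
diff = b.mean() - a.mean()
se = np.sqrt(a.var(ddof=1) / a.size + b.var(ddof=1) / b.size)
dof = a.size + b.size - 2                    # adequate for equal n and variance
lo, hi = diff + np.array([-1, 1]) * stats.t.ppf(0.975, dof) * se

# Report both: the CI shows the magnitude and precision the P value hides.
print(f"P = {p:.4f}; difference = {diff:.2f}, 95% CI [{lo:.2f}, {hi:.2f}]")
```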

  4. Efficient surrogate models for reliability analysis of systems with multiple failure modes

    International Nuclear Information System (INIS)

    Bichon, Barron J.; McFarland, John M.; Mahadevan, Sankaran

    2011-01-01

    Despite many advances in the field of computational reliability analysis, the efficient estimation of the reliability of a system with multiple failure modes remains a persistent challenge. Various sampling and analytical methods are available, but they typically require accepting a tradeoff between accuracy and computational efficiency. In this work, a surrogate-based approach is presented that simultaneously addresses the issues of accuracy, efficiency, and unimportant failure modes. The method is based on the creation of Gaussian process surrogate models that are required to be locally accurate only in the regions of the component limit states that contribute to system failure. This approach to constructing surrogate models is demonstrated to be both an efficient and accurate method for system-level reliability analysis. - Highlights: → Extends efficient global reliability analysis to systems with multiple failure modes. → Constructs locally accurate Gaussian process models of each response. → Highly efficient and accurate method for assessing system reliability. → Effectiveness is demonstrated on several test problems from the literature.
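
    The core idea, replacing an expensive limit-state function with a cheap Gaussian process surrogate and then sampling the surrogate, can be sketched with scikit-learn. The toy version below trains on a fixed design rather than refining adaptively near the limit state, which is where the method described above gets its efficiency; the limit-state function and design are hypothetical.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

rng = np.random.default_rng(7)

# Hypothetical limit-state function: failure occurs where g(x) < 0.
def g(x):
    return 3.0 - x[:, 0]**2 - 0.5 * x[:, 1]

# Train a GP surrogate on a small design of "expensive" evaluations.
x_train = rng.uniform(-3, 3, size=(40, 2))
gp = GaussianProcessRegressor(kernel=RBF(length_scale=1.0), normalize_y=True)
gp.fit(x_train, g(x_train))

# Monte Carlo on the cheap surrogate (standard normal input distribution).
x_mc = rng.standard_normal((50_000, 2))
p_fail_surrogate = (gp.predict(x_mc) < 0).mean()
p_fail_true = (g(x_mc) < 0).mean()   # available here only because g is cheap
print(f"P(failure): surrogate {p_fail_surrogate:.4f} vs true {p_fail_true:.4f}")
```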

  5. Development of component failure data for seismic risk analysis

    International Nuclear Information System (INIS)

    Fray, R.R.; Moulia, T.A.

    1981-01-01

    This paper describes the quantification and utilization of seismic failure data used in the Diablo Canyon Seismic Risk Study. A single-variable representation of earthquake severity, using peak horizontal ground acceleration, was employed. A multiple-variable representation would have allowed direct consideration of vertical accelerations and the spectral nature of earthquakes, but would have added such complexity that the study would not have been feasible. Vertical accelerations and spectral nature were indirectly considered because component failure data were derived from design analyses, qualification tests and engineering judgment that did include such considerations. Two types of functions were used to describe component failure probabilities. Ramp functions were used for components, such as piping and structures, qualified by stress analysis. 'Anchor points' for ramp functions were selected by assuming a zero probability of failure at code allowable stress levels and unity probability of failure at ultimate stress levels. The accelerations corresponding to allowable and ultimate stress levels were determined by conservatively assuming a linear relationship between seismic stress and ground acceleration. Step functions were used for components, such as mechanical and electrical equipment, qualified by testing. Anchor points for step functions were selected by assuming a unity probability of failure above the qualification acceleration. (orig./HP)
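
    Both fragility forms described here reduce to one-line functions of peak ground acceleration. A sketch with hypothetical anchor accelerations:

```python
import numpy as np

# Ramp fragility for analysis-qualified components: zero failure probability
# at the acceleration producing code-allowable stress, unity at the
# acceleration producing ultimate stress, linear in between (per the study's
# assumption that seismic stress is proportional to ground acceleration).
def ramp_fragility(a, a_allowable, a_ultimate):
    return np.clip((a - a_allowable) / (a_ultimate - a_allowable), 0.0, 1.0)

# Step fragility for test-qualified equipment: failure probability jumps
# from 0 to 1 above the qualification acceleration.
def step_fragility(a, a_qualification):
    return np.where(a > a_qualification, 1.0, 0.0)

# Hypothetical anchor points in units of g (peak horizontal acceleration).
accels = np.linspace(0.0, 1.2, 7)
print(ramp_fragility(accels, a_allowable=0.4, a_ultimate=1.0))
print(step_fragility(accels, a_qualification=0.67))
```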

  6. Statistical models for competing risk analysis

    International Nuclear Information System (INIS)

    Sather, H.N.

    1976-08-01

    This report presents research results on three new models with potential applications to competing risks problems. One section covers the basic statistical relationships underlying the subsequent competing risks model development. Another discusses the problem of comparing cause-specific risk structure, by competing risks theory, in two homogeneous populations, P1 and P2. Weibull models, which allow more generality than the Berkson and Elveback models, are studied for the effect of time on the hazard function. The use of concomitant information for modeling single-risk survival is extended to the multiple-failure-mode domain of competing risks. The model used to illustrate this methodology is a life table model with constant hazards within pre-designated intervals of the time scale. Two parametric models for bivariate dependent competing risks, which provide interesting alternatives, are proposed and examined.

  7. Timing analysis of PWR fuel pin failures

    International Nuclear Information System (INIS)

    Jones, K.R.; Wade, N.L.; Katsma, K.R.; Siefken, L.J.; Straka, M.

    1992-09-01

    Research has been conducted to develop and demonstrate a methodology for calculating the time interval between receipt of the containment isolation signals and the first fuel pin failure for loss-of-coolant accidents (LOCAs). Demonstration calculations were performed for a Babcock and Wilcox (B&W) design (Oconee) and a Westinghouse (W) four-loop design (Seabrook). Sensitivity studies were performed to assess the impacts of fuel pin burnup, axial peaking factor, break size, emergency core cooling system availability, and main coolant pump trip on these times. The analysis was performed using the following codes: FRAPCON-2, for the calculation of steady-state fuel behavior; SCDAP/RELAP5/MOD3 and TRAC-PF1/MOD1, for the calculation of the transient thermal-hydraulic conditions in the reactor system; and FRAP-T6, for the calculation of transient fuel behavior. In addition to the calculation of fuel pin failure timing, this analysis provides a comparison of the predicted results of SCDAP/RELAP5/MOD3 and TRAC-PF1/MOD1 for large-break LOCA analysis. Using SCDAP/RELAP5/MOD3 thermal-hydraulic data, the shortest time intervals calculated between initiation of containment isolation and fuel pin failure are 10.4 seconds and 19.1 seconds for the B&W and W plants, respectively. Using data generated by TRAC-PF1/MOD1, the shortest intervals are 10.3 seconds and 29.1 seconds for the B&W and W plants, respectively. These intervals are for a double-ended, offset-shear, cold leg break, using the technical specification maximum peaking factor and applied to fuel with maximum design burnup. Using peaking factors commensurate with actual burnups would result in longer intervals for both reactor designs. Appendices A through J of this report are also included.

  8. Failure Mode and Effect Analysis for Wind Turbine Systems in China

    DEFF Research Database (Denmark)

    Zhu, Jiangsheng; Ma, Kuichao; N. Soltani, Mohsen

    2017-01-01

    This paper discusses a cost-based Failure Mode and Effect Analysis (FMEA) approach for wind turbines (WT) with condition monitoring systems in China. Traditional FMEA uses the Risk Priority Number (RPN) to rank failure modes, but the RPN can change when a Condition Monitoring System (CMS) is installed, owing to the change in the detection score. The cost of each failure mode should also be considered, because faults can be detected at an incipient level and condition-based maintenance can be scheduled. The results show that the proposed failure mode priorities considering their cost consequences...

  9. Retrospective analysis of 56 edentulous dental arches restored with 344 single-stage implants using an immediate loading fixed provisional protocol: statistical predictors of implant failure.

    Science.gov (United States)

    Kinsel, Richard P; Liss, Mindy

    2007-01-01

    The purpose of this retrospective study was to evaluate the effects of implant dimensions, surface treatment, location in the dental arch, number of supporting implant abutments, surgical technique, and generally recognized risk factors on the survival of a series of single-stage Straumann dental implants placed into edentulous arches using an immediate loading protocol. Each patient received between 4 and 18 implants in one or both dental arches. Periapical radiographs were obtained over a 2- to 10-year follow-up period to evaluate crestal bone loss following insertion of the definitive metal-ceramic fixed prostheses. Univariate tests for failure rates as a function of age (≤60 or >60 years), gender, smoking, bone grafting, dental arch, surface type, anterior versus posterior placement, number of implants per arch, and surgical technique were made using Fisher exact tests. The Cochran-Armitage test for trend was used to evaluate the presence of a linear trend in failure rates with regard to implant length and implant diameter. Logistic regression modeling was used to determine which, if any, of the aforementioned factors would predict patient and implant failure. A significance criterion of P = .05 was utilized. Data were collected for 344 single-stage implants placed into 56 edentulous arches (39 maxillae and 17 mandibles) of 43 patients and immediately loaded with a 1-piece provisional fixed prosthesis. A total of 16 implants failed to successfully integrate, for a survival rate of 95.3%. Increased rates of failure were associated with reduced implant length, placement in the posterior region of the jaw, increased implant diameter, and surface treatment. Implant length emerged as the sole significant predictor of implant failure. In this retrospective analysis of 56 consecutively treated edentulous arches with multiple single-stage dental implants loaded immediately, reduced implant length was the sole significant predictor of failure.
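
    The two analysis steps, univariate screening with Fisher exact tests followed by logistic regression, can be reproduced with standard tools. The sketch below uses synthetic stand-in data with an assumed effect of implant length and site, not the study's 344 implants:

```python
import numpy as np
import statsmodels.api as sm
from scipy.stats import fisher_exact

rng = np.random.default_rng(3)

n = 344
length = rng.choice([8, 10, 12, 14], n)   # implant length in mm
posterior = rng.integers(0, 2, n)         # 1 = posterior site

# Assumed true model: short implants and posterior sites fail more often.
lin = np.log(0.05 / 0.95) - 0.35 * (length - 10) + 0.5 * posterior
fail = (rng.random(n) < 1.0 / (1.0 + np.exp(-lin))).astype(int)

# Univariate screen: Fisher exact test of failure by site (2x2 table).
table = [[int(((posterior == s) & (fail == f)).sum()) for f in (0, 1)]
         for s in (0, 1)]
odds_ratio, p = fisher_exact(table)
print(f"Fisher exact by site: OR = {odds_ratio:.2f}, P = {p:.3f}")

# Multivariate step: logistic regression of failure on length and site.
X = sm.add_constant(np.column_stack([length, posterior]).astype(float))
res = sm.Logit(fail, X).fit(disp=0)
print(res.params)  # the length coefficient should come out negative
```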

  10. Statistics and analysis of scientific data

    CERN Document Server

    Bonamente, Massimiliano

    2017-01-01

    The revised second edition of this textbook provides the reader with a solid foundation in probability theory and statistics as applied to the physical sciences, engineering and related fields. It covers a broad range of numerical and analytical methods that are essential for the correct analysis of scientific data, including probability theory, distribution functions of statistics, fits to two-dimensional data and parameter estimation, Monte Carlo methods and Markov chains. Features new to this edition include: • a discussion of statistical techniques employed in business science, such as multiple regression analysis of multivariate datasets. • a new chapter on the various measures of the mean, including logarithmic averages. • new chapters on systematic errors and intrinsic scatter, and on the fitting of data with bivariate errors. • a new case study and additional worked examples. • mathematical derivations and theoretical background material have been appropriately marked, to improve the readability...

  11. Statistical evaluation of diagnostic performance topics in ROC analysis

    CERN Document Server

    Zou, Kelly H; Bandos, Andriy I; Ohno-Machado, Lucila; Rockette, Howard E

    2016-01-01

    Statistical evaluation of diagnostic performance in general and Receiver Operating Characteristic (ROC) analysis in particular are important for assessing the performance of medical tests and statistical classifiers, as well as for evaluating predictive models or algorithms. This book presents innovative approaches in ROC analysis, which are relevant to a wide variety of applications, including medical imaging, cancer research, epidemiology, and bioinformatics. Statistical Evaluation of Diagnostic Performance: Topics in ROC Analysis covers areas including monotone-transformation techniques in parametric ROC analysis, ROC methods for combined and pooled biomarkers, Bayesian hierarchical transformation models, sequential designs and inferences in the ROC setting, predictive modeling, multireader ROC analysis, and free-response ROC (FROC) methodology. The book is suitable for graduate-level students and researchers in statistics, biostatistics, epidemiology, public health, biomedical engineering, radiology, medicine...

  12. Bayesian Inference in Statistical Analysis

    CERN Document Server

    Box, George E P

    2011-01-01

    The Wiley Classics Library consists of selected books that have become recognized classics in their respective fields. With these new unabridged and inexpensive editions, Wiley hopes to extend the life of these important works by making them available to future generations of mathematicians and scientists. Currently available in the Series: T. W. Anderson The Statistical Analysis of Time Series T. S. Arthanari & Yadolah Dodge Mathematical Programming in Statistics Emil Artin Geometric Algebra Norman T. J. Bailey The Elements of Stochastic Processes with Applications to the Natural Sciences Rob

  13. Optimal tread design for agricultural lug tires determined through failure analysis

    Directory of Open Access Journals (Sweden)

    Hyun Seok Song

    2018-04-01

    Agricultural lug tires, commonly used on tractors, must provide safe and stable support for the body of the vehicle and bear any additional load while effectively traversing rough, poor-quality ground surfaces. Many agricultural lug tires fail unexpectedly. In this study, we optimised and validated a tread design for agricultural lug tires, intended to increase their durability, using failure analysis. Specifically, we identified tire failure modes using indoor driving tests and failure mode effects analysis. Next, we developed a three-dimensional tire model using the Ogden material model and the finite element method. Using sensitivity analysis and response surface methodology, we optimised the tread design. Finally, we evaluated the durability of the new design using a tire prototype and drum test equipment. The results indicated that the optimised tread design decreased the tire tread stress by 16% and increased the time until cracking by 38% compared to conventional agricultural lug tires.

  14. Software failure events derivation and analysis by frame-based technique

    International Nuclear Information System (INIS)

    Huang, H.-W.; Shih, C.; Yih, Swu; Chen, M.-H.

    2007-01-01

    A frame-based technique, comprising a physical frame, a logical frame, and a cognitive frame, was adopted to derive and analyze digital I&C failure events for a generic ABWR. The physical frame was structured with a modified PCTran-ABWR plant simulation code, which was extended and enhanced for the feedwater system, recirculation system, and steam line system. The logical frame was structured with MATLAB, which was incorporated into PCTran-ABWR to improve the pressure control system, feedwater control system, recirculation control system, and automated power regulation control system. As a result, software failures of these digital control systems can be properly simulated and analyzed. The cognitive frame was simulated by the operator awareness status in the scenarios. Moreover, via an internal characteristics tuning technique, the modified PCTran-ABWR can precisely reflect the characteristics of the power-core flow relationship; hence, in addition to the transient plots, the analysis results can be demonstrated on the power-core flow map. A number of postulated I&C system software failure events were derived for the dynamic analyses. The basis for event derivation includes the published classification of software anomalies, the digital I&C design data for the ABWR, the chapter 15 accident analysis of the generic SAR, and reported NPP I&C software failure events. The case study of this research includes: (1) software CMF analysis for the major digital control systems; and (2) postulated ABWR digital I&C software failure events derived from actual non-ABWR digital I&C software failure events reported to the LER system of the USNRC or the IRS of the IAEA. These events were analyzed with PCTran-ABWR. Conflicts among plant status, computer status, and human cognitive status were successfully identified. The operator might not easily recognize the abnormal condition, because the computer status seems to progress normally.

  15. Failure mode, effect and criticality analysis (FMECA) on mechanical subsystems of diesel generator at NPP

    International Nuclear Information System (INIS)

    Kim, Tae Woon; Singh, Brijendra; Sung, Tae Yong; Park, Jin Hee; Lee, Yoon Hwan

    1996-06-01

    Broadly, the RCM approach can be divided into three phases: (1) functional failure analysis (FFA) of the selected system or subsystem; (2) failure mode, effect and criticality analysis (FMECA) to identify the impact of failures on plant safety or economics; and (3) logical tree analysis (LTA) to select appropriate preventive maintenance and surveillance tasks. This report presents FMECA results for six mechanical subsystems of the diesel generators of nuclear power plants. The six mechanical subsystems are the starting air, lube oil, governor, jacket water cooling, fuel, and engine subsystems. Generic and plant-specific failure and maintenance records were reviewed to identify critical components and failure modes, and FMECA was performed for these critical component/failure modes. After reviewing the current preventive maintenance activities of Wolsung unit 1, draft RCM recommendations were developed. 6 tabs., 16 refs. (Author)

  17. IEEE Std 101-1987: IEEE guide for the statistical analysis of thermal life test data

    International Nuclear Information System (INIS)

    Anon.

    1992-01-01

    This revision of IEEE Std 101-1972 describes statistical analyses for data from thermally accelerated aging tests. It explains the basis and use of the statistical calculations for an engineer or scientist. Accelerated test procedures usually call for a number of specimens to be aged at each of several temperatures appreciably above the normal operating temperature. High temperatures are chosen to produce specimen failures (according to specified failure criteria) in, typically, one week to one year. The test objective is to determine the dependence of median life on temperature from the data and to estimate, by extrapolation, the median life to be expected at the service temperature. This guide presents methods for analyzing such data and for comparing test data on different materials.
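
    The extrapolation the guide describes is typically an Arrhenius fit: log median life is regressed on inverse absolute temperature and then evaluated at the service temperature. A minimal sketch with hypothetical aging-test data (not taken from the standard):

```python
import numpy as np

# Hypothetical thermal aging results: median life (hours) at each oven
# temperature, chosen to fail within one week to one year as the guide
# suggests.
temps_c = np.array([180.0, 200.0, 220.0, 240.0])     # aging temperatures
median_life_h = np.array([7800.0, 2600.0, 950.0, 380.0])

# Arrhenius model: log10(life) = a + b / T, with T in kelvin.
inv_T = 1.0 / (temps_c + 273.15)
b, a = np.polyfit(inv_T, np.log10(median_life_h), 1)  # slope b, intercept a

# Extrapolate median life to a service temperature of 90 C.
T_service = 90.0 + 273.15
life_service = 10 ** (a + b / T_service)
print(f"extrapolated median life at 90 C: {life_service:,.0f} hours")
```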

  18. Failure Modes and Effects Analysis (FMEA) Assistant Tool Feasibility Study

    Science.gov (United States)

    Flores, Melissa D.; Malin, Jane T.; Fleming, Land D.

    2013-09-01

    An effort to determine the feasibility of a software tool to assist in Failure Modes and Effects Analysis (FMEA) has been completed. This new and unique approach to FMEA uses model based systems engineering concepts to recommend failure modes, causes, and effects to the user after they have made several selections from pick lists about a component's functions and inputs/outputs. Recommendations are made based on a library using common failure modes identified over the course of several major human spaceflight programs. However, the tool could be adapted for use in a wide range of applications from NASA to the energy industry.

  20. Causes of liver failure and impact analysis of prognostic risk factors

    Directory of Open Access Journals (Sweden)

    WU Xiaoqing

    2013-04-01

    Objective: To perform a retrospective analysis of patients with liver failure to investigate the causative factors and related risk factors that may affect patient prognosis. Methods: The clinical, demographic, and laboratory data of 79 consecutive patients diagnosed with liver failure and treated at our hospital between January 2010 and January 2012 (58 males and 21 females; age range: 16-74 years) were collected from the medical records. To identify risk factors for liver failure, patient variables were assessed by Student's t-test (continuous variables) or the chi-squared test (categorical variables). Multivariate logistic regression analysis was used to investigate the relation between patient outcome and independent risk factors. Results: The 79 cases of liver failure were grouped according to disease severity: acute liver failure (n=6; 5 died), subacute liver failure (n=35; 19 died), and chronic liver failure (n=38; 28 died). The overall rate of death was 66%. The majority of cases (81%) were related to hepatitis B virus infection. While the three severity groups did not show significant differences in sex, mean age, occupation, presence of potassium disorder, total bilirubin (TBil) or total cholesterol (CHO) at admission, or lowest recorded level of CHO during hospitalization, there were significant intergroup differences in the highest recorded TBil level, prothrombin activity (PTA) at admission, highest and lowest recorded PTA, and highest recorded level of CHO. Five independent risk factors were identified: the highest recorded TBil level during hospitalization, presence of infection, hepatorenal syndrome, gastrointestinal bleeding, and hepatic encephalopathy. Conclusion: The major cause of liver failure in this cohort of patients was hepatitis infection, and common biomarkers of liver function, such as TBil, CHO and PTA, may identify patients with poor prognosis despite clinical intervention. Complications should be addressed...

  1. Analysis of Variance: What Is Your Statistical Software Actually Doing?

    Science.gov (United States)

    Li, Jian; Lomax, Richard G.

    2011-01-01

    Users assume statistical software packages produce accurate results. In this article, the authors systematically examined the Statistical Package for the Social Sciences (SPSS) and the Statistical Analysis System (SAS) for 3 analysis of variance (ANOVA) designs: mixed-effects ANOVA, fixed-effects analysis of covariance (ANCOVA), and nested ANOVA. For each…

  2. Dependency Defence and Dependency Analysis Guidance. Volume 1: Summary and Guidance (Appendix 1-2). How to analyse and protect against dependent failures. Summary report of the Nordic Working group on Common Cause Failure Analysis

    International Nuclear Information System (INIS)

    Johanson, Gunnar; Hellstroem, Per; Makamo, Tuomas; Bento, Jean-Pierre; Knochenhauer, Michael; Poern, Kurt

    2003-10-01

    The safety systems in Nordic nuclear power plants are characterised by substantial redundancy and/or diversification of safety-critical functions, as well as by physical separation of critical safety systems, including their support functions. Viewed together with the fact that the single failure criterion has been systematically applied in the design of safety systems, this means that the plant risk profile as calculated in existing PSAs is usually strongly dominated by failures caused by dependencies resulting in the loss of more than one sub-system. The overall objective of the working group is to support safety by studying potential and real CCF events, processing statistical data, and reporting conclusions and recommendations that can improve the understanding of these events, eventually resulting in increased safety. The results are intended for application in NPP operation, maintenance, inspection and risk assessments. The NAFCS project is part of the activities of the Nordic PSA Group (NPSAG), and is financed jointly by the Nordic utilities and authorities. The work is divided into one quantitative and one qualitative part with the following specific objectives. Qualitative objectives: the goal of the qualitative analysis is to compile experience data and generate insights in terms of relevant failure mechanisms and effective CCF protection measures. The results shall be presented as a guide with checklists and recommendations on how to identify the current CCF protection standard and possibilities for improving CCF defences, thereby decreasing CCF vulnerability. Quantitative objectives: the goal of the quantitative analysis is to prepare a Nordic C-book in which quantitative insights such as impact vectors and CCF parameters for different redundancy levels are presented. Uncertainties in CCF data shall be reduced as much as possible. The sensitivity of high-redundancy systems to CCF events demands a well-structured quantitative analysis in support of...

  3. Failure analysis a practical guide for manufacturers of electronic components and systems

    CERN Document Server

    Bâzu, Marius

    2011-01-01

    Failure analysis is the preferred method to investigate product or process reliability and to ensure optimum performance of electrical components and systems. The physics-of-failure approach is the only internationally accepted solution for continuously improving the reliability of materials, devices and processes. The models have been developed from the physical and chemical phenomena that are responsible for degradation or failure of electronic components and materials, and now replace popular distribution models for failure mechanisms such as the Weibull or lognormal. Reliability engineers need...

  4. Comparing Visual and Statistical Analysis of Multiple Baseline Design Graphs.

    Science.gov (United States)

    Wolfe, Katie; Dickenson, Tammiee S; Miller, Bridget; McGrath, Kathleen V

    2018-04-01

    A growing number of statistical analyses are being developed for single-case research. One important factor in evaluating these methods is the extent to which each corresponds to visual analysis. Few studies have compared statistical and visual analysis, and information about more recently developed statistics is scarce. Therefore, our purpose was to evaluate the agreement between visual analysis and four statistical analyses: improvement rate difference (IRD); Tau-U; Hedges, Pustejovsky, Shadish (HPS) effect size; and between-case standardized mean difference (BC-SMD). Results indicate that IRD and BC-SMD had the strongest overall agreement with visual analysis. Although Tau-U had strong agreement with visual analysis on raw values, it had poorer agreement when those values were dichotomized to represent the presence or absence of a functional relation. Overall, visual analysis appeared to be more conservative than statistical analysis, but further research is needed to evaluate the nature of these disagreements.

  5. Analysis of the failure of a vacuum spin-pit drive turbine spindle shaft

    OpenAIRE

    Pettitt, Jason M.

    2005-01-01

    The Naval Postgraduate School's Rotor Spin Research Facility experienced a failure in the Spring of 2005 in which the rotor dropped from the drive turbine and caused extensive damage. A failure analysis of the drive turbine spindle shaft was conducted in order to determine the cause of failure: whether due to a material or design flaw. Also, a dynamic analysis was conducted in order to determine the natural modes present in the system and the associated frequencies that could have contributed...

  6. Practical guidance for statistical analysis of operational event data

    International Nuclear Information System (INIS)

    Atwood, C.L.

    1995-10-01

    This report presents ways to avoid mistakes that are sometimes made in analysis of operational event data. It then gives guidance on what to do when a model is rejected, a list of standard types of models to consider, and principles for choosing one model over another. For estimating reliability, it gives advice on which failure modes to model, and moment formulas for combinations of failure modes. The issues are illustrated with many examples and case studies
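
    One standard calculation in this setting is a failure-rate point estimate with a confidence interval obtained from the chi-square relation for Poisson event counts. A sketch using generic formulas and hypothetical data, not reproduced from the report:

```python
from scipy.stats import chi2

# Failure rate estimate from n events in exposure time T (hypothetical data),
# with a two-sided 90% confidence interval from the Poisson/chi-square link.
n, T = 4, 2.0e5          # 4 failures in 200,000 component-hours

lam_hat = n / T
lo = chi2.ppf(0.05, 2 * n) / (2 * T)          # lower 90% bound
hi = chi2.ppf(0.95, 2 * (n + 1)) / (2 * T)    # upper 90% bound

print(f"rate = {lam_hat:.2e}/h, 90% CI [{lo:.2e}, {hi:.2e}]")
```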

  7. Sensitivity analysis and related analysis : A survey of statistical techniques

    NARCIS (Netherlands)

    Kleijnen, J.P.C.

    1995-01-01

    This paper reviews the state of the art in five related types of analysis, namely (i) sensitivity or what-if analysis, (ii) uncertainty or risk analysis, (iii) screening, (iv) validation, and (v) optimization. The main question is: when should which type of analysis be applied, and which statistical techniques should then be used?

  8. A Statistical-Probabilistic Pattern for Determination of Tunnel Advance Step by Quantitative Risk Analysis

    Directory of Open Access Journals (Sweden)

    sasan ghorbani

    2017-12-01

    One of the main challenges faced in the design and construction phases of tunneling projects is the determination of the maximum allowable advance step, so as to maximize the excavation rate and reduce the project delivery time. Considering the complexity of determining this factor and the unexpected risks associated with its inappropriate determination, it is necessary to employ a method capable of accounting for the interactions between uncertain geotechnical parameters and the advance step. The main objective of the present research is to optimize, and manage the risk of, the advance step length in the water diversion tunnel at Shahriar Dam, based on the uncertainty of the geotechnical parameters, following a statistical-probabilistic approach. Two hybrid methods were used to determine the optimum advance step for the excavation operation: strength reduction method/discrete element method/Monte Carlo simulation (SRM/DEM/MCS) and strength reduction method/discrete element method/point estimate method (SRM/DEM/PEM). Moreover, Taguchi analysis was used to investigate the sensitivity of the advance step to changes in the statistical distribution functions of the input parameters under three tunneling scenarios in sections of poor to good quality (as per the RMR classification system). The final results implied the optimality of the advance step defined in Scenario 2, where an advance of 2 m per excavation round was proposed, according to the shear strain criterion and SRM/DEM/MCS, with a minimum failure probability of 8.05% and a risk of 75,281.56 $, at the 95% confidence level. Moreover, for each of the normal, lognormal, and gamma distributions, the failure probability increased at a lower rate when the advance step was increased from Scenario 1 to Scenario 2 than when it was increased from Scenario 2 to Scenario 3. In addition, the Taguchi tests were subjected to signal-to-noise analysis, and the results indicated that, considering the three statistical...

  9. Failure analysis of motor bearing of sea water pump in nuclear power plant

    International Nuclear Information System (INIS)

    Bian Chunhua; Zhang Wei

    2015-01-01

    The motor bearing of a sea water pump at Qinshan Phase II Nuclear Power Plant failed after only one year in service. This paper describes the failure analysis process for the motor bearing. Chemical composition analysis, metallographic analysis, micrographic examination, hardness analysis and dimensional analysis of each part of the bearing, as well as high- and low-temperature performance analysis of the lubricating grease, were performed. Based on these analyses, the failure mode of the bearing was wear, and the cause of the wear was inappropriate installation of the bearing. (authors)

  10. A Costing Analysis for Decision Making Grid Model in Failure-Based Maintenance

    Directory of Open Access Journals (Sweden)

    Burhanuddin M. A.

    2011-01-01

    Background. In the current economic downturn, industries have to keep tight control of production costs to maintain their profit margins. The maintenance department, as an imperative unit in industry, should capture all maintenance data, process the information instantaneously, and subsequently transform it into useful decisions, then act on the alternatives to reduce production cost. The Decision Making Grid model is used to identify strategies for maintenance decisions. However, the model has a limitation in that it considers only two factors: downtime and frequency of failures. In this study we consider a third factor, cost, for failure-based maintenance. The objective of this paper is to introduce formulae to estimate maintenance cost. Methods. Fishbone analysis conducted with the Ishikawa model and the Decision Making Grid method are used in this study to reveal underlying risk factors that delay failure-based maintenance; the goal is to estimate this risk factor, repair cost, and fit it into the Decision Making Grid model. The Decision Making Grid model considers two variables, frequency of failure and downtime, in the analysis; this paper introduces a third variable, repair cost. This approach gives better results in categorizing the machines, reducing cost, and boosting earnings for the manufacturing plant. Results. We collected data from one of the food processing factories in Malaysia. From our empirical results, and based on the costing analysis, Machine C, Machine D, Machine F, and Machine I must be included in the Decision Making Grid model even though their frequencies of failure and downtimes are lower than those of Machine B and Machine N. The case study and experimental results show that the cost analysis in the Decision Making Grid model yields more promising strategies for failure-based maintenance. Conclusions. The improvement of the Decision Making Grid model for decision analysis with costing analysis is the contribution of this paper.
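
    The grid logic can be sketched as follows: machines are binned into low/medium/high bands of downtime and failure frequency, and the cost extension adds repair cost as a criterion for pulling a machine into the analysis even when its other two scores are modest. The thresholds, bands, and data below are illustrative only.

```python
# Decision Making Grid sketch: bin machines by downtime and failure frequency,
# and use repair cost as an extra admission criterion (illustrative only).

machines = {
    # name: (downtime_hours, failures_per_year, repair_cost)
    "B": (120, 18, 1500), "C": (40, 6, 9800), "D": (35, 5, 8700),
    "F": (30, 7, 9100), "I": (25, 4, 7600), "N": (110, 15, 1200),
}

def band(value, low, high):
    """Classify a value into low/medium/high bands (hypothetical cut-offs)."""
    return "low" if value < low else ("medium" if value < high else "high")

COST_THRESHOLD = 5000   # assumed cut-off for cost-based admission to the grid

for name, (downtime, freq, cost) in sorted(machines.items()):
    cell = (band(downtime, 50, 100), band(freq, 5, 12))
    flagged = cost > COST_THRESHOLD
    print(f"{name}: downtime={cell[0]:6s} frequency={cell[1]:6s} "
          f"cost-flagged={flagged}")
```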

  11. Equipment Maintenance management support system based on statistical analysis of maintenance history data

    International Nuclear Information System (INIS)

    Shimizu, S.; Ando, Y.; Morioka, T.

    1990-01-01

    Plant maintenance is becoming increasingly important with the growth in the number of nuclear power stations and in plant operating time. Various requirements for plant maintenance have been proposed, such as countermeasures against equipment degradation and reductions in maintenance cost while maintaining plant reliability and productivity. For this purpose, plant maintenance programs should be improved on the basis of equipment reliability estimated from field data. In order to meet these requirements, an equipment maintenance management support system for nuclear power plants is planned, based on statistical analysis of equipment maintenance history data. The main difference between this proposed method and current similar methods is that it evaluates not only failure data but also maintenance data, which include normal termination data and data on partial degradation or functional disorder of equipment and parts. It is therefore possible to utilize these field data to improve maintenance schedules and to evaluate actual equipment and part reliability under the current maintenance schedule. In the present paper, the authors describe the objectives of this system, an outline of the system and its functions, and the basic technique for collecting and managing maintenance history data for statistical analysis. The results of feasibility tests using simulated maintenance history data show that this system can provide useful information for maintenance and design enhancement.

  12. A multivariate statistical methodology for detection of degradation and failure trends using nuclear power plant operational data

    International Nuclear Information System (INIS)

    Samanta, P.K.; Teichmann, T.

    1990-01-01

    In this paper, a multivariate statistical method is presented and demonstrated as a means of analyzing nuclear power plant transients (or events) and safety system performance, for the detection of malfunctions and degradations within the course of the event, based on operational data. The study provides the methodology and illustrative examples based on data gathered from simulations of nuclear power plant transients (owing to the lack of easily accessible operational data). Such an approach, once fully developed, can be used to detect failure trends and patterns and thus can lead to the prevention of conditions with serious safety implications.

  13. Failure analysis of parameter-induced simulation crashes in climate models

    Science.gov (United States)

    Lucas, D. D.; Klein, R.; Tannahill, J.; Ivanova, D.; Brandon, S.; Domyancic, D.; Zhang, Y.

    2013-08-01

    Simulations using IPCC (Intergovernmental Panel on Climate Change)-class climate models are subject to failure or crashes for a variety of reasons. Quantitative analysis of the failures can yield useful insights for better understanding and improving the models. During the course of uncertainty quantification (UQ) ensemble simulations to assess the effects of ocean model parameter uncertainties on climate simulations, we experienced a series of simulation crashes within the Parallel Ocean Program (POP2) component of the Community Climate System Model (CCSM4). About 8.5% of our CCSM4 simulations failed for numerical reasons at particular combinations of POP2 parameter values. We applied support vector machine (SVM) classification from machine learning to quantify and predict the probability of failure as a function of the values of 18 POP2 parameters. A committee of SVM classifiers readily predicted model failures in an independent validation ensemble, as assessed by the area under the receiver operating characteristic (ROC) curve metric (AUC > 0.96). The causes of the simulation failures were determined through a global sensitivity analysis. Combinations of 8 parameters related to ocean mixing and viscosity, from three different POP2 parameterizations, were the major sources of the failures. This information can be used to improve POP2 and CCSM4 by incorporating correlations across the relevant parameters. Our method can also be used to quantify, predict, and understand simulation crashes in other complex geoscientific models.
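
    The classification step can be sketched with scikit-learn. The stand-in below uses synthetic parameter vectors rather than the CCSM4/POP2 ensemble, but the pipeline (scale the inputs, fit an SVM with probability outputs, score held-out runs with ROC AUC) mirrors the workflow described:

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(11)

# Synthetic stand-in for the UQ ensemble: 18 parameters per simulation,
# with crashes concentrated where two "mixing/viscosity" parameters interact.
X = rng.uniform(0, 1, size=(1200, 18))
y = (X[:, 0] * X[:, 1] > 0.62).astype(int)   # roughly 8-9% failure rate

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", probability=True))
clf.fit(X_tr, y_tr)

# Evaluate on the held-out "validation ensemble" with ROC AUC.
auc = roc_auc_score(y_te, clf.predict_proba(X_te)[:, 1])
print(f"failure-prediction AUC = {auc:.3f}")
```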

  14. Pressure Load Analysis during Severe Accidents for the Evaluation of Late Containment Failure in OPR-1000

    Energy Technology Data Exchange (ETDEWEB)

    Park, S. Y.; Ahn, K. I. [Korea Atomic Energy Research Institute, Daejeon (Korea, Republic of)

    2014-05-15

    The MAAP code is a system-level computer code capable of performing integral analyses of potential severe accident progressions in nuclear power plants, whose main purpose is to support level 2 probabilistic safety assessments or the development of severe accident management strategies. The code provides numerous user options to support sensitivity and uncertainty analysis. The present application is mainly focused on estimating the containment building pressure load caused by severe accident sequences. The key modeling parameters and phenomenological models employed for the present uncertainty analysis are closely related to in-vessel hydrogen generation, gas combustion in the containment, corium distribution in the containment after a reactor vessel failure, corium coolability in the reactor cavity, and molten corium interaction with concrete. The phenomenology of severe accidents is extremely complex. In this paper, a sampling-based phenomenological uncertainty analysis was performed to statistically quantify the uncertainties associated with the pressure load of the containment building for a late containment failure evaluation, based on the key modeling parameters employed in the MAAP code and random samples of those parameters. The phenomenological issues surrounding the late containment failure mode are highly complex; they include pressurization owing to steam generation in the cavity, molten corium-concrete interaction, late hydrogen burn in the containment, and the availability of secondary heat removal. The methodology and calculation results can be applied to an optimum assessment of a late containment failure model. The accident sequences considered were loss-of-coolant accidents and loss-of-offsite-power accidents expected in the OPR-1000 plant. As a result, the uncertainties in the pressure load of the containment building were quantified as a function of time. A realistic evaluation of the mean and variance estimates provides a more complete...

  15. Statistical uncertainties and unrecognized relationships

    International Nuclear Information System (INIS)

    Rankin, J.P.

    1985-01-01

    Hidden relationships in specific designs directly contribute to inaccuracies in reliability assessments. Uncertainty factors at the system level may sometimes be applied in attempts to compensate for the impact of such unrecognized relationships. Often uncertainty bands are used to relegate unknowns to a miscellaneous category of low-probability occurrences. However, experience and modern analytical methods indicate that perhaps the dominant, most probable and significant events are sometimes overlooked in statistical reliability assurances. The author discusses the utility of two unique methods of identifying the otherwise often unforeseeable system interdependencies for statistical evaluations. These methods are sneak circuit analysis and a checklist form of common cause failure analysis. Unless these techniques (or a suitable equivalent) are also employed along with the more widely-known assurance tools, high reliability of complex systems may not be adequately assured. This concern is indicated by specific illustrations. 8 references, 5 figures

  16. Online Statistical Modeling (Regression Analysis) for Independent Responses

    Science.gov (United States)

    Made Tirta, I.; Anggraeni, Dian; Pandutama, Martinus

    2017-06-01

    Regression analysis (statistical modelling) is among the statistical methods most frequently needed for analyzing quantitative data, especially to model the relationship between response and explanatory variables. Statistical models have now been developed in various directions to handle various types of data and complex relationships. Rich varieties of advanced and recent statistical models are mostly available in open source software (one of them being R). However, these advanced statistical models are not very friendly to novice R users, since they are based on programming scripts or a command line interface. Our research aims to develop a web interface (based on R and Shiny), so that the most recent and advanced statistical models are readily available, accessible and applicable on the web. We have previously made interfaces in the form of e-tutorials for several modern and advanced statistical models in R, especially for independent responses (including linear models/LM, generalized linear models/GLM, generalized additive models/GAM and generalized additive models for location, scale and shape/GAMLSS). In this research we unified them in the form of data analysis, including modeling using computer-intensive statistics (bootstrap and Markov chain Monte Carlo/MCMC). All are readily accessible in our online Virtual Statistics Laboratory. The web interface makes statistical modeling easier to apply and makes it easier to compare models in order to find the most appropriate model for the data.

  17. Application of Ontology Technology in Health Statistic Data Analysis.

    Science.gov (United States)

    Guo, Minjiang; Hu, Hongpu; Lei, Xingyun

    2017-01-01

    Research Purpose: to establish a health management ontology for the analysis of health statistic data. Proposed Methods: this paper established a health management ontology based on the analysis of the concepts in the China Health Statistics Yearbook, and used Protégé to define the syntactic and semantic structure of health statistical data. Six classes of top-level ontology concepts and their subclasses were extracted, and the object properties and data properties were defined to establish the construction of these classes. By ontology instantiation, we can integrate multi-source heterogeneous data and enable administrators to have an overall understanding and analysis of the health statistic data. Ontology technology provides a comprehensive and unified information integration structure for the health management domain and lays a foundation for the efficient analysis of multi-source and heterogeneous health system management data and the enhancement of management efficiency.

  18. Patterns of Failure After MammoSite Brachytherapy Partial Breast Irradiation: A Detailed Analysis

    International Nuclear Information System (INIS)

    Chen, Sea; Dickler, Adam; Kirk, Michael; Shah, Anand; Jokich, Peter; Solmos, Gene; Strauss, Jonathan; Dowlatshahi, Kambiz; Nguyen, Cam; Griem, Katherine

    2007-01-01

    Purpose: To report the results of a detailed analysis of treatment failures after MammoSite breast brachytherapy for partial breast irradiation from our single-institution experience. Methods and Materials: Between October 14, 2002 and October 23, 2006, 78 patients with early-stage breast cancer were treated with breast-conserving surgery and accelerated partial breast irradiation using the MammoSite brachytherapy applicator. We identified five treatment failures in the 70 patients with >6 months' follow-up. Pathologic data, breast imaging, and radiation treatment plans were reviewed. For in-breast failures more than 2 cm away from the original surgical bed, the doses delivered to the areas of recurrence by partial breast irradiation were calculated. Results: At a median follow-up time of 26.1 months, five treatment failures were identified. There were three in-breast failures more than 2 cm away from the original surgical bed, one failure directly adjacent to the original surgical bed, and one failure in the axilla with synchronous distant metastases. The crude failure rate was 7.1% (5 of 70), and the crude local failure rate was 5.7% (4 of 70). Estimated progression-free survival at 48 months was 89.8% (standard error 4.5%). Conclusions: Our case series of 70 patients with >6 months' follow-up and a median follow-up of 26 months is the largest single-institution report to date with detailed failure analysis associated with MammoSite brachytherapy. Our failure data emphasize the importance of patient selection when offering partial breast irradiation

  19. Explorations in Statistics: The Analysis of Change

    Science.gov (United States)

    Curran-Everett, Douglas; Williams, Calvin L.

    2015-01-01

    Learning about statistics is a lot like learning about science: the learning is more meaningful if you can actively explore. This tenth installment of "Explorations in Statistics" explores the analysis of a potential change in some physiological response. As researchers, we often express absolute change as percent change so we can…

  20. Laboratory and 3-D-distinct element analysis of failure mechanism of slope under external surcharge

    Science.gov (United States)

    Li, N.; Cheng, Y. M.

    2014-09-01

    Landslides are a major disaster resulting in considerable loss of human lives and property damage in hilly terrain in Hong Kong, China and many other countries. The factor of safety and the critical slip surface for slope stabilization have been the main considerations in slope stability analysis in the past, while the detailed post-failure conditions of slopes have not been considered in sufficient detail. There is, however, increasing interest in the consequences after the initiation of failure, which include the development and propagation of the failure surfaces, the amount of failed mass and runoff, and the affected region. To assess the development of slope failure in more detail and to consider the potential danger of slopes after failure has initiated, the slope stability problem under external surcharge is analyzed by the distinct element method (DEM) and laboratory model tests in the present research. A more refined study of the development of failure, the microscopic failure mechanism and the post-failure mechanism of slopes is carried out. The numerical modeling method and the various findings from the present work provide an alternative method of analysis of slope failure which can give additional information not available from the classical methods of analysis.

  1. The distributed failure probability approach to dependent failure analysis, and its application

    International Nuclear Information System (INIS)

    Hughes, R.P.

    1989-01-01

    The Distributed Failure Probability (DFP) approach to the problem of dependent failures in systems is presented. The basis of the approach is that the failure probability of a component is a variable. The source of this variability is the change in the 'environment' of the component, where the term 'environment' is used to mean not only obvious environmental factors such as temperature etc., but also such factors as the quality of maintenance and manufacture. The failure probability is distributed among these various 'environments' giving rise to the Distributed Failure Probability method. Within the framework which this method represents, modelling assumptions can be made, based both on engineering judgment and on the data directly. As such, this DFP approach provides a soundly based and scrutable technique by which dependent failures can be quantitatively assessed. (orig.)
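
    The core of the DFP idea can be illustrated with a small numerical sketch: a component's failure probability is distributed over discrete 'environment' states, and marginalizing over a shared environment makes the failures of redundant components dependent. The environment states and probabilities below are hypothetical, assumed only for illustration.

        import numpy as np

        # Hypothetical discretization of the 'environment' (e.g., maintenance
        # quality), with a probability weight and a per-demand failure
        # probability for each state.
        env_weight = np.array([0.7, 0.25, 0.05])   # P(environment)
        p_fail_env = np.array([1e-4, 1e-3, 5e-2])  # P(component fails | env)

        # 1-out-of-2 redundant pair. If both components share the same
        # environment, their failures are conditionally independent but
        # marginally dependent.
        p_both_shared = np.sum(env_weight * p_fail_env**2)

        # Naive calculation assuming full independence (marginal p squared).
        p_marginal = np.sum(env_weight * p_fail_env)
        p_both_indep = p_marginal**2

        print(f"P(both fail), shared environment : {p_both_shared:.2e}")
        print(f"P(both fail), assumed independent: {p_both_indep:.2e}")
        # The shared-environment result is much larger: distributing the
        # failure probability captures the dependent (common cause) part.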

  2. Common pitfalls in statistical analysis: “P” values, statistical significance and confidence intervals

    Science.gov (United States)

    Ranganathan, Priya; Pramesh, C. S.; Buyse, Marc

    2015-01-01

    In the second part of a series on pitfalls in statistical analysis, we look at various ways in which a statistically significant study result can be expressed. We debunk some of the myths regarding the ‘P’ value, explain the importance of ‘confidence intervals’ and clarify the importance of including both values in a paper. PMID:25878958

  3. Failure analysis of multiple delaminated composite plates due to transverse static load and impact

    Indian Academy of Sciences (India)

    The present work aims at the first ply failure analysis of laminated composite plates with arbitrarily located multiple delaminations subjected to transverse static load as well as impact. The theoretical formulation is based on a simple multiple delamination model. Conventional first order shear deformation is assumed using ...

  4. Prognostic factors for patients with hepatitis B virus-related acute-on-chronic liver failure

    Directory of Open Access Journals (Sweden)

    LI Ying

    2017-03-01

    Full Text Available Objective: To investigate the prognostic factors for patients with hepatitis B virus-related acute-on-chronic liver failure, and to provide a basis for clinical diagnosis and treatment. Methods: A total of 172 patients with hepatitis B virus (HBV)-related acute-on-chronic liver failure who were admitted to The First Hospital of Jilin University from January 1, 2006 to January 1, 2016 and had complete medical records and follow-up data were enrolled, and a retrospective analysis was performed on their clinical data and laboratory markers to determine prognostic factors. The independent-samples t test was used for comparison of continuous data between groups, the chi-square test was used for comparison of categorical data between groups, and a multivariate logistic regression analysis was performed on the indices found to be statistically significant in the univariate analysis, to screen out independent risk factors for the prognosis of patients with HBV-related acute-on-chronic liver failure. Results: The multivariate logistic regression analysis performed on the indices found to be statistically significant in the univariate analysis showed that the prognostic factors were total bilirubin (TBil), prothrombin time activity (PTA), Na+, total cholesterol (TC), Child-Turcotte-Pugh (CTP) score, age ≥50 years, the presence of liver cirrhosis, bilirubin-enzyme separation, and complications. The multivariate regression analysis performed on the complications found to affect prognosis in the univariate analysis showed that the complications acting as risk factors were hepatic encephalopathy, hepatorenal syndrome, and infection. Conclusion: TBil, PTA, Na+, TC, CTP score, age ≥50 years, the presence of liver cirrhosis, bilirubin-enzyme separation, and complications are independent risk factors for the prognosis of patients with HBV-related acute-on-chronic liver failure. Liver failure patients with hepatic
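
    A minimal sketch of the kind of multivariate logistic regression screening described above, using synthetic data and the statsmodels package. The variable names mirror the reported predictors, but all values and coefficients are fabricated for illustration and are not taken from the study.

        import numpy as np
        import pandas as pd
        import statsmodels.api as sm

        rng = np.random.default_rng(0)
        n = 172  # same cohort size as the study; the data here are synthetic

        # Synthetic stand-ins for the reported predictors (units illustrative).
        df = pd.DataFrame({
            "TBil":      rng.normal(300, 100, n),   # total bilirubin, umol/L
            "PTA":       rng.normal(35, 10, n),     # prothrombin time activity, %
            "Na":        rng.normal(135, 5, n),     # serum sodium, mmol/L
            "age_ge_50": rng.integers(0, 2, n),     # age >= 50 years
            "cirrhosis": rng.integers(0, 2, n),
        })
        # Synthetic outcome loosely tied to the predictors (1 = poor prognosis).
        logit = (0.01 * df.TBil - 0.08 * df.PTA - 0.05 * df.Na
                 + 0.8 * df.age_ge_50 + 7)
        df["poor_outcome"] = (rng.random(n) < 1 / (1 + np.exp(-logit))).astype(int)

        X = sm.add_constant(df[["TBil", "PTA", "Na", "age_ge_50", "cirrhosis"]])
        fit = sm.Logit(df["poor_outcome"], X).fit(disp=False)
        print(fit.summary2())  # coefficients and p-values; odds ratios: np.exp(fit.params)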

  5. Solar activity and transformer failures in the Greek national electric grid

    Directory of Open Access Journals (Sweden)

    Zois Ioannis Panayiotis

    2013-11-01

    Full Text Available Aims: We study both the short-term and long-term effects of solar activity on the large transformers (150 kV and 400 kV) of the Greek national electric grid. Methods: We use data analysis and various statistical methods and models. Results: Contrary to common belief in PPC Greece, we see that there are considerable short-term (immediate) and long-term effects of solar activity on large transformers in a mid-latitude country like Greece. Our results can be summarised as follows: For the short-term effects: During 1989–2010 there were 43 “stormy days” (namely days with, for example, Ap ≥ 100), and we had 19 failures occurring during a stormy day plus or minus 3 days and 51 failures occurring during a stormy day plus or minus 7 days. All these failures can be directly related to Geomagnetically Induced Currents (GICs). Explicit cases are briefly presented. For the long-term effects, again for the same period 1989–2010, we have two main results: The annual number of transformer failures seems to follow the solar activity pattern, yet the maximum number of transformer failures occurs about half a solar cycle after the maximum of solar activity. There is statistical correlation between solar activity, expressed using various newly defined long-term solar activity indices, and the annual number of transformer failures. These new long-term solar activity indices were defined using both local (from the geomagnetic station in Greece) and global (planetary averages) geomagnetic data. Applying both linear and non-linear statistical regression we compute the regression equations and the corresponding coefficients of determination.
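
    The long-term correlation described above can be screened with ordinary linear regression and the coefficient of determination. The sketch below uses synthetic stand-ins for the solar-activity index and the annual failure counts; it is not the PPC data.

        import numpy as np

        rng = np.random.default_rng(1)

        # Synthetic stand-ins: a yearly solar-activity index and annual
        # transformer failure counts for 1989-2010 (illustrative values only).
        years = np.arange(1989, 2011)
        solar_index = 80 + 60 * np.sin(2 * np.pi * (years - 1989) / 11.0)  # ~11-yr cycle
        failures = 5 + 0.05 * solar_index + rng.normal(0, 1.5, years.size)

        # Linear regression and coefficient of determination R^2.
        slope, intercept = np.polyfit(solar_index, failures, 1)
        pred = slope * solar_index + intercept
        ss_res = np.sum((failures - pred) ** 2)
        ss_tot = np.sum((failures - failures.mean()) ** 2)
        r2 = 1.0 - ss_res / ss_tot
        print(f"failures ≈ {slope:.3f} * index + {intercept:.2f},  R^2 = {r2:.2f}")

        # A lagged fit (failures vs the index half a cycle earlier) can be
        # screened the same way, mirroring the reported half-solar-cycle delay.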

  6. Exploitation of a component event data bank for common cause failure analysis

    International Nuclear Information System (INIS)

    Games, A.M.; Amendola, A.; Martin, P.

    1985-01-01

    Investigations into using the European Reliability Data System Component Event Data Bank for common cause failure analysis have been carried out. Starting from early exercises where data were analyzed without computer aid, different types of linked multiple failures have been identified. A classification system is proposed based on this experience. It defines a multiple failure event space wherein each category defines causal, modal, temporal and structural links between failures. It is shown that a search algorithm which incorporates the specific interrogative procedures of the data bank can be developed in conjunction with this classification system. It is concluded that the classification scheme and the search algorithm are useful organizational tools in the field of common cause failure studies. However, it is also suggested that the use of the term common cause failure should be avoided, since it embodies too many different types of linked multiple failures

  7. RESEARCH OF REFRIGERATION SYSTEMS FAILURES IN POLISH FISHING VESSELS

    Directory of Open Access Journals (Sweden)

    Waldemar KOSTRZEWA

    2013-07-01

    Full Text Available Temperature is a basic climatic parameter determining the quality of fishery products. The time during which qualitative changes in caught fish do not exceed the established acceptable range is above all a function of temperature. Temperature reduction by the refrigeration system of the cargo hold is the basic technical method of extending transport time. Failures of refrigeration systems in fishing vessels also have a negative impact on the environment through the emission of harmful refrigerants. The paper presents a statistical analysis of failures that occurred in the refrigeration systems of Polish fishing vessels in the years 2007–2011. The analysis results described in the paper can serve as a basis for drawing up guidelines, both for designers and for operators of marine refrigeration systems.

  8. Finite Element Creep-Fatigue Analysis of a Welded Furnace Roll for Identifying Failure Root Cause

    Science.gov (United States)

    Yang, Y. P.; Mohr, W. C.

    2015-11-01

    Creep-fatigue induced failures are often observed in engineering components operating under high temperature and cyclic loading. Understanding the creep-fatigue damage process and identifying the failure root cause are very important for preventing such failures and improving the lifetime of engineering components. Finite element analyses, including a heat transfer analysis and a creep-fatigue analysis, were conducted to model the cyclic thermal and mechanical process of a furnace roll in a continuous hot-dip coating line; typically, such a roll has a short life. The heat transfer analysis modeled heat convection from hot air inside the furnace, and the creep-fatigue analysis was performed by inputting the predicted temperature history and applying mechanical loads. The analysis results showed that the failure resulted from a creep-fatigue mechanism rather than a creep mechanism. The difference in material properties between the filler metal and the base metal is the root cause of the roll failure, as it induces higher creep strain and stress at the interface between the weld and the heat-affected zone (HAZ).

  9. Analysis of valve failures from the NUCLARR data base

    International Nuclear Information System (INIS)

    Moore, L.M.

    1997-11-01

    The Nuclear Computerized Library for Assessing Reactor Reliability (NUCLARR) contains data on component failures with categorical and qualifying information such as component design, normal operating state, system application and safety grade information which is important to the development of risk-based component surveillance testing requirements. This report presents descriptions and results of analyses of valve component failure data and covariate information available in the document Nuclear Computerized Library for Assessing Reactor Reliability Data Manual, Part 3: Hardware Component Failure Data (NUCLARR Data Manual). Although there are substantial records on valve performance, there are many categories of the corresponding descriptors and qualifying information for which specific values are missing. Consequently, this limits the data available for analysis of covariate effects. This report presents cross tabulations by different covariate categories and limited modeling of covariate effects for data subsets with substantive non-missing covariate information

  10. Quantitative Approach to Failure Mode and Effect Analysis for Linear Accelerator Quality Assurance

    Energy Technology Data Exchange (ETDEWEB)

    O'Daniel, Jennifer C., E-mail: jennifer.odaniel@duke.edu; Yin, Fang-Fang

    2017-05-01

    Purpose: To determine clinic-specific linear accelerator quality assurance (QA) TG-142 test frequencies, to maximize physicist time efficiency and patient treatment quality. Methods and Materials: A novel quantitative approach to failure mode and effect analysis is proposed. Nine linear accelerator-years of QA records provided data on failure occurrence rates. The severity of test failure was modeled by introducing corresponding errors into head and neck intensity modulated radiation therapy treatment plans. The relative risk of daily linear accelerator QA was calculated as a function of frequency of test performance. Results: Although the failure severity was greatest for daily imaging QA (imaging vs treatment isocenter and imaging positioning/repositioning), the failure occurrence rate was greatest for output and laser testing. The composite ranking results suggest that performing output and lasers tests daily, imaging versus treatment isocenter and imaging positioning/repositioning tests weekly, and optical distance indicator and jaws versus light field tests biweekly would be acceptable for non-stereotactic radiosurgery/stereotactic body radiation therapy linear accelerators. Conclusions: Failure mode and effect analysis is a useful tool to determine the relative importance of QA tests from TG-142. Because there are practical time limitations on how many QA tests can be performed, this analysis highlights which tests are the most important and suggests the frequency of testing based on each test's risk priority number.
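
    A generic sketch of how failure occurrence rates and modeled severities can be combined into a quantitative FMEA ranking as a function of test frequency. The numbers and the scoring formula below are illustrative placeholders in the spirit of a risk priority number, not the paper's actual data or formulation.

        # Illustrative quantitative FMEA ranking for linac QA tests.
        tests = {
            # name: (failures per year, modeled dosimetric severity 1-10)
            "output":                  (6.0, 5),
            "lasers":                  (5.0, 4),
            "imaging vs tx isocenter": (1.0, 9),
            "imaging repositioning":   (0.8, 9),
            "ODI":                     (0.5, 3),
            "jaws vs light field":     (0.4, 3),
        }

        def risk(occurrence_per_year: float, severity: int,
                 tests_per_year: int) -> float:
            # Less frequent testing -> longer exposure to an undetected failure.
            exposure = occurrence_per_year / tests_per_year
            return exposure * severity

        for freq_name, per_year in [("daily", 250), ("weekly", 52), ("biweekly", 26)]:
            print(f"--- if tested {freq_name} ---")
            for name, (occ, sev) in sorted(tests.items(),
                                           key=lambda kv: -risk(*kv[1], per_year)):
                print(f"{name:25s} risk = {risk(occ, sev, per_year):.3f}")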

  11. Quantitative Approach to Failure Mode and Effect Analysis for Linear Accelerator Quality Assurance.

    Science.gov (United States)

    O'Daniel, Jennifer C; Yin, Fang-Fang

    2017-05-01

    To determine clinic-specific linear accelerator quality assurance (QA) TG-142 test frequencies, to maximize physicist time efficiency and patient treatment quality. A novel quantitative approach to failure mode and effect analysis is proposed. Nine linear accelerator-years of QA records provided data on failure occurrence rates. The severity of test failure was modeled by introducing corresponding errors into head and neck intensity modulated radiation therapy treatment plans. The relative risk of daily linear accelerator QA was calculated as a function of frequency of test performance. Although the failure severity was greatest for daily imaging QA (imaging vs treatment isocenter and imaging positioning/repositioning), the failure occurrence rate was greatest for output and laser testing. The composite ranking results suggest that performing output and lasers tests daily, imaging versus treatment isocenter and imaging positioning/repositioning tests weekly, and optical distance indicator and jaws versus light field tests biweekly would be acceptable for non-stereotactic radiosurgery/stereotactic body radiation therapy linear accelerators. Failure mode and effect analysis is a useful tool to determine the relative importance of QA tests from TG-142. Because there are practical time limitations on how many QA tests can be performed, this analysis highlights which tests are the most important and suggests the frequency of testing based on each test's risk priority number.

  12. TECHNIQUE OF THE STATISTICAL ANALYSIS OF INVESTMENT APPEAL OF THE REGION

    Directory of Open Access Journals (Sweden)

    А. А. Vershinina

    2014-01-01

    Full Text Available A technique for the statistical analysis of the investment appeal of a region with respect to foreign direct investment is presented in this scientific article. A definition of the technique of statistical analysis is given, the stages of the analysis are described, and the mathematical-statistical tools are considered.

  13. A NEW APPROACH TO DETECT CONGESTIVE HEART FAILURE USING DETRENDED FLUCTUATION ANALYSIS OF ELECTROCARDIOGRAM SIGNALS

    Directory of Open Access Journals (Sweden)

    CHANDRAKAR KAMATH

    2015-02-01

    Full Text Available The aim of this study is to evaluate how far the detrended fluctuation analysis (DFA) approach helps to characterize the short-term and intermediate-term fractal correlations in raw electrocardiogram (ECG) signals and thereby discriminate between normal and congestive heart failure (CHF) subjects. The DFA-1 calculations were performed on normal and CHF short-term ECG segments of the order of 20 seconds in duration. Differences were found in the short-term and intermediate-term correlation properties and the corresponding scaling exponents of the two groups (normal and CHF). The statistical analyses show that the short-term fractal scaling exponent alone is sufficient to distinguish between normal and CHF subjects. The receiver operating characteristic curve (ROC) analysis confirms the robustness of this new approach and exhibits an average accuracy that exceeds 98.2%, an average sensitivity of about 98.4%, a positive predictivity of 98.00%, and an average specificity of 98.00%.
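
    For readers unfamiliar with DFA, the sketch below implements the generic DFA-1 procedure: integrate the mean-subtracted signal, linearly detrend non-overlapping windows, and read the scaling exponent off the log-log slope of the RMS fluctuation. It is a textbook implementation, not the authors' code, and the classification thresholds of the study are not reproduced.

        import numpy as np

        def dfa1(x: np.ndarray, scales: np.ndarray) -> float:
            """Generic DFA-1: returns the scaling exponent alpha of signal x."""
            y = np.cumsum(x - np.mean(x))          # integrated (profile) series
            fluct = []
            for n in scales:
                n_win = len(y) // n                # non-overlapping windows
                f2 = 0.0
                for i in range(n_win):
                    seg = y[i * n:(i + 1) * n]
                    t = np.arange(n)
                    trend = np.polyval(np.polyfit(t, seg, 1), t)  # linear detrend
                    f2 += np.mean((seg - trend) ** 2)
                fluct.append(np.sqrt(f2 / n_win))  # RMS fluctuation F(n)
            # alpha is the slope of log F(n) vs log n
            alpha, _ = np.polyfit(np.log(scales), np.log(fluct), 1)
            return alpha

        # Sanity check on white noise: alpha should be close to 0.5.
        rng = np.random.default_rng(7)
        scales = np.array([4, 8, 16, 32, 64])
        print(f"alpha(white noise) = {dfa1(rng.normal(size=4000), scales):.2f}")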

  14. Statistical analyses of incidents on onshore gas transmission pipelines based on PHMSA database

    International Nuclear Information System (INIS)

    Lam, Chio; Zhou, Wenxing

    2016-01-01

    This article reports statistical analyses of the mileage and pipe-related incident data for onshore gas transmission pipelines in the US between 2002 and 2013, collected by the Pipeline and Hazardous Materials Safety Administration of the US Department of Transportation. The analysis indicates that there are approximately 480,000 km of gas transmission pipelines in the US, approximately 60% of which were more than 45 years old as of 2013. Eighty percent of the pipelines are Class 1 pipelines, and about 20% are Classes 2 and 3 pipelines. It is found that third-party excavation, external corrosion, material failure and internal corrosion are the four leading failure causes, responsible for more than 75% of the total incidents. The 12-year average rate of rupture equals 3.1 × 10⁻⁵ per km-year due to all failure causes combined. External corrosion is the leading cause of ruptures: the 12-year average rupture rate due to external corrosion equals 1.0 × 10⁻⁵ per km-year and is twice the rupture rate due to third-party excavation or material failure. The study provides insights into the current state of gas transmission pipelines in the US and baseline failure statistics for quantitative risk assessments of such pipelines. - Highlights: • Analyze PHMSA pipeline mileage and incident data between 2002 and 2013. • Focus on gas transmission pipelines. • Leading causes of pipeline failures are identified. • Provide baseline failure statistics for risk assessments of gas transmission pipelines.

  15. Statistical analysis of network data with R

    CERN Document Server

    Kolaczyk, Eric D

    2014-01-01

    Networks have permeated everyday life through realities like the Internet, social networks, and viral marketing. As such, network analysis is an important growth area in the quantitative sciences, with roots in social network analysis going back to the 1930s and graph theory going back centuries. Measurement and analysis are integral components of network research. As a result, statistical methods play a critical role in network analysis. This book is the first of its kind in network research. It can be used as a stand-alone resource in which multiple R packages are used to illustrate how to conduct a wide range of network analyses, from basic manipulation and visualization, to summary and characterization, to modeling of network data. The central package is igraph, which provides extensive capabilities for studying network graphs in R. This text builds on Eric D. Kolaczyk’s book Statistical Analysis of Network Data (Springer, 2009).

  16. Failure analysis and modeling of a multicomputer system. M.S. Thesis

    Science.gov (United States)

    Subramani, Sujatha Srinivasan

    1990-01-01

    This thesis describes the results of an extensive measurement-based analysis of real error data collected from a 7-machine DEC VAXcluster multicomputer system. In addition to evaluating basic system error and failure characteristics, we develop reward models to analyze the impact of failures and errors on the system. The results show that, although 98 percent of errors in the shared resources recover, they result in 48 percent of all system failures. The analysis of rewards shows that the expected reward rate for the VAXcluster decreases to 0.5 in 100 days for a 3-out-of-7 model, which is well over 100 times that for a 7-out-of-7 model. A comparison of the reward rates for a range of k-out-of-n models indicates that the maximum increase in reward rate (0.25) occurs in going from the 6-out-of-7 model to the 5-out-of-7 model. The analysis also shows that software errors have the lowest reward (0.2 vs. 0.91 for network errors). The large loss in reward rate for software errors is due to the fact that a large proportion (94 percent) of software errors lead to failure. In comparison, the high reward rate for network errors is due to fast recovery from a majority of these errors (the median recovery duration is 0 seconds).
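
    The k-out-of-n comparison above can be sketched with elementary binomial arithmetic, assuming (unlike the measurement-based reward models of the thesis) independent and identically available machines. The per-machine availability below is an assumed illustrative value.

        from math import comb

        def k_out_of_n_availability(k: int, n: int, p_up: float) -> float:
            """P(at least k of n i.i.d. machines are up): a simplified
            stand-in for the thesis's reward models, which use measured data."""
            return sum(comb(n, m) * p_up**m * (1 - p_up)**(n - m)
                       for m in range(k, n + 1))

        p = 0.95  # assumed per-machine availability (illustrative)
        for k in range(7, 0, -1):
            print(f"{k}-out-of-7 availability: "
                  f"{k_out_of_n_availability(k, 7, p):.4f}")
        # Availability falls steeply as the model becomes stricter; comparing
        # adjacent k values mirrors the reward-rate comparison above.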

  17. Failure characteristics analysis and fault diagnosis for liquid rocket engines

    CERN Document Server

    Zhang, Wei

    2016-01-01

    This book concentrates on the subject of health monitoring technology for the Liquid Rocket Engine (LRE), including its failure analysis, fault diagnosis and fault prediction. Since no similar work has been published, the failure pattern and mechanism analysis of the LRE at the system level are of particular interest to readers. Furthermore, the application cases used to validate the efficacy of the fault diagnosis and prediction methods of the LRE differ from those in other publications. Readers can learn the system-level modeling, analysis and testing methods of the LRE as well as the corresponding fault diagnosis and prediction methods. This book will benefit researchers and students who are pursuing aerospace technology, fault detection, diagnostics and corresponding applications.

  18. Quantitative functional failure analysis of a thermal-hydraulic passive system by means of bootstrapped Artificial Neural Networks

    International Nuclear Information System (INIS)

    Zio, E.; Apostolakis, G.E.; Pedroni, N.

    2010-01-01

    The estimation of the functional failure probability of a thermal-hydraulic (T-H) passive system can be done by Monte Carlo (MC) sampling of the epistemic uncertainties affecting the system model and the numerical values of its parameters, followed by the computation of the system response by a mechanistic T-H code, for each sample. The computational effort associated to this approach can be prohibitive because a large number of lengthy T-H code simulations must be performed (one for each sample) for accurate quantification of the functional failure probability and the related statistics. In this paper, the computational burden is reduced by replacing the long-running, original T-H code by a fast-running, empirical regression model: in particular, an Artificial Neural Network (ANN) model is considered. It is constructed on the basis of a limited-size set of data representing examples of the input/output nonlinear relationships underlying the original T-H code; once the model is built, it is used for performing, in an acceptable computational time, the numerous system response calculations needed for an accurate failure probability estimation, uncertainty propagation and sensitivity analysis. The empirical approximation of the system response provided by the ANN model introduces an additional source of (model) uncertainty, which needs to be evaluated and accounted for. A bootstrapped ensemble of ANN regression models is here built for quantifying, in terms of confidence intervals, the (model) uncertainties associated with the estimates provided by the ANNs. For demonstration purposes, an application to the functional failure analysis of an emergency passive decay heat removal system in a simple steady-state model of a Gas-cooled Fast Reactor (GFR) is presented. The functional failure probability of the system is estimated together with global Sobol sensitivity indices. The bootstrapped ANN regression model built with low computational time on few (e.g., 100) data
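
    A minimal sketch of the bootstrapped-ensemble idea, with a toy limit-state function standing in for the T-H code and scikit-learn's MLPRegressor standing in for the ANN; all numbers (training set size, threshold, ensemble size) are illustrative assumptions.

        import numpy as np
        from sklearn.neural_network import MLPRegressor

        rng = np.random.default_rng(3)

        # Toy stand-in for the T-H code: a system response (e.g. peak
        # temperature) as a function of two uncertain inputs; the system
        # 'fails' if the response exceeds a threshold.
        def th_code(x):
            return 600 + 80 * x[:, 0] + 50 * x[:, 1] ** 2

        X_train = rng.uniform(-1, 1, size=(100, 2))   # few expensive code runs
        y_train = th_code(X_train)
        threshold = 680.0

        X_mc = rng.uniform(-1, 1, size=(20000, 2))    # cheap MC samples for the ANN

        # Bootstrap ensemble: refit the ANN on resampled training sets and
        # collect one failure-probability estimate per bootstrap replicate.
        estimates = []
        for b in range(50):
            idx = rng.integers(0, len(X_train), len(X_train))
            ann = MLPRegressor(hidden_layer_sizes=(20,), max_iter=5000,
                               random_state=b).fit(X_train[idx], y_train[idx])
            estimates.append(np.mean(ann.predict(X_mc) > threshold))

        lo, hi = np.percentile(estimates, [2.5, 97.5])
        print(f"P(failure) ≈ {np.mean(estimates):.4f} "
              f"(95% bootstrap CI: {lo:.4f}-{hi:.4f})")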

  19. Quantitative functional failure analysis of a thermal-hydraulic passive system by means of bootstrapped Artificial Neural Networks

    Energy Technology Data Exchange (ETDEWEB)

    Zio, E., E-mail: enrico.zio@polimi.it [Energy Department, Politecnico di Milano, Via Ponzio 34/3, 20133 Milan (Italy); Apostolakis, G.E., E-mail: apostola@mit.edu [Department of Nuclear Science and Engineering, Massachusetts Institute of Technology, 77 Massachusetts Avenue, Cambridge, MA 02139-4307 (United States); Pedroni, N. [Energy Department, Politecnico di Milano, Via Ponzio 34/3, 20133 Milan (Italy)

    2010-05-15

    The estimation of the functional failure probability of a thermal-hydraulic (T-H) passive system can be done by Monte Carlo (MC) sampling of the epistemic uncertainties affecting the system model and the numerical values of its parameters, followed by the computation of the system response by a mechanistic T-H code, for each sample. The computational effort associated to this approach can be prohibitive because a large number of lengthy T-H code simulations must be performed (one for each sample) for accurate quantification of the functional failure probability and the related statistics. In this paper, the computational burden is reduced by replacing the long-running, original T-H code by a fast-running, empirical regression model: in particular, an Artificial Neural Network (ANN) model is considered. It is constructed on the basis of a limited-size set of data representing examples of the input/output nonlinear relationships underlying the original T-H code; once the model is built, it is used for performing, in an acceptable computational time, the numerous system response calculations needed for an accurate failure probability estimation, uncertainty propagation and sensitivity analysis. The empirical approximation of the system response provided by the ANN model introduces an additional source of (model) uncertainty, which needs to be evaluated and accounted for. A bootstrapped ensemble of ANN regression models is here built for quantifying, in terms of confidence intervals, the (model) uncertainties associated with the estimates provided by the ANNs. For demonstration purposes, an application to the functional failure analysis of an emergency passive decay heat removal system in a simple steady-state model of a Gas-cooled Fast Reactor (GFR) is presented. The functional failure probability of the system is estimated together with global Sobol sensitivity indices. The bootstrapped ANN regression model built with low computational time on few (e.g., 100) data

  20. Advanced approaches to failure mode and effect analysis (FMEA) applications

    Directory of Open Access Journals (Sweden)

    D. Vykydal

    2015-10-01

    Full Text Available The present paper explores advanced approaches to the FMEA method (Failure Mode and Effect Analysis) which take into account the costs associated with the occurrence of failures during the manufacture of a product. Different approaches are demonstrated using an example FMEA application to the production of drawn wire. Their purpose is to determine risk levels while taking account of the above-mentioned costs. Finally, the resulting priority levels are compared for developing actions mitigating the risks.

  1. Failure analysis of storage tank component in LNG regasification unit using fault tree analysis method (FTA)

    Science.gov (United States)

    Mulyana, Cukup; Muhammad, Fajar; Saad, Aswad H.; Mariah, Riveli, Nowo

    2017-03-01

    Storage tank components are the most critical components in an LNG regasification terminal. They carry a risk of failure and accidents which impact human health and the environment. Risk assessment is conducted to detect and reduce the risk of failure in the storage tank. The aim of this research is to determine and calculate the probability of failure in the regasification unit of an LNG terminal. In this case, the failure considered is caused by a Boiling Liquid Expanding Vapor Explosion (BLEVE) or a jet fire in the storage tank component. The failure probability can be determined by using Fault Tree Analysis (FTA). Besides that, the impact of the heat radiation generated is calculated. Fault trees for BLEVE and jet fire on the storage tank component were determined, giving a failure probability for BLEVE of 5.63 × 10⁻¹⁹ and for jet fire of 9.57 × 10⁻³. The failure probability for jet fire is high enough that it needs to be reduced, by customizing the PID scheme of the LNG regasification unit in pipeline number 1312 and unit 1. The failure probability after customization was found to be 4.22 × 10⁻⁶.
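
    The FTA arithmetic behind such top-event estimates reduces to combining basic-event probabilities through AND and OR gates, assuming independent events. The sketch below uses hypothetical basic events and probabilities, not the actual BLEVE/jet-fire trees of the cited study.

        # Minimal fault-tree gate arithmetic for independent basic events.
        # The events and probabilities are hypothetical placeholders.

        def and_gate(*p):          # all inputs must occur
            out = 1.0
            for pi in p:
                out *= pi
            return out

        def or_gate(*p):           # at least one input occurs
            out = 1.0
            for pi in p:
                out *= (1.0 - pi)
            return 1.0 - out

        p_leak        = 1e-3   # LNG leak from pipework
        p_ignition    = 1e-2   # ignition given a release path exists
        p_valve_fail  = 5e-3   # isolation valve fails to close
        p_detect_fail = 1e-2   # gas detection fails

        # Jet fire requires a leak AND ignition AND failure to isolate,
        # where isolation fails if the valve OR the detection system fails.
        p_isolation_fail = or_gate(p_valve_fail, p_detect_fail)
        p_jet_fire = and_gate(p_leak, p_ignition, p_isolation_fail)
        print(f"P(jet fire, top event) = {p_jet_fire:.2e}")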

  2. Semiclassical analysis, Witten Laplacians, and statistical mechanics

    CERN Document Server

    Helffer, Bernard

    2002-01-01

    This important book explains how the technique of Witten Laplacians may be useful in statistical mechanics. It considers the problem of analyzing the decay of correlations, after presenting its origin in statistical mechanics. In addition, it compares the Witten Laplacian approach with other techniques, such as the transfer matrix approach and its semiclassical analysis. The author concludes by providing a complete proof of the uniform Log-Sobolev inequality. Contents: Witten Laplacians Approach; Problems in Statistical Mechanics with Discrete Spins; Laplace Integrals and Transfer Operators; S

  3. A novel statistic for genome-wide interaction analysis.

    Directory of Open Access Journals (Sweden)

    Xuesen Wu

    2010-09-01

    Full Text Available Although great progress in genome-wide association studies (GWAS) has been made, the significant SNP associations identified by GWAS account for only a few percent of the genetic variance, leading many to question where and how we can find the missing heritability. There is increasing interest in genome-wide interaction analysis as a possible source of the heritability unexplained by current GWAS. However, the existing statistics for testing interaction have low power for genome-wide interaction analysis. To meet the challenges raised by genome-wide interaction analysis, we have developed a novel statistic for testing interaction between two loci (either linked or unlinked). The null distribution and the type I error rates of the new statistic for testing interaction are validated using simulations. Extensive power studies show that the developed statistic has much higher power to detect interaction than classical logistic regression. The results identified 44 and 211 pairs of SNPs showing significant evidence of interactions with FDR < 0.001 and 0.001 ≤ FDR < 0.05, respectively. Genome-wide interaction analysis is a valuable tool for finding the remaining missing heritability unexplained by current GWAS, and the developed novel statistic is able to search for significant interactions between SNPs across the genome. Real data analysis showed that the results of genome-wide interaction analysis can be replicated in two independent studies.

  4. Strength on cut edge and ground edge glass beams with the failure analysis method

    Directory of Open Access Journals (Sweden)

    Stefano Agnetti

    2013-10-01

    Full Text Available The aim of this work is to study the effect of the finishing of the edge of glass when it has a structural function. Experimental investigations carried out on glass specimens are presented. Various series of annealed glass beams were tested, with cut edges and with ground edges. The glass specimens were tested in four-point bending, with flaw detection performed on the specimens after failure in order to determine the glass strength. As a result, bending strength values were obtained for each specimen. By determining physical parameters such as the depth of the flaw and the mirror radius of the fracture after the failure of a glass element, it is possible to calculate the failure strength of that element. The experimental results were analyzed with LEFM theory, and the glass strength was analyzed in a statistical study using the two-parameter Weibull distribution, which fitted the failure stress data quite well. The results obtained constitute a validation of the theoretical models and show the influence of edge processing on the failure strength of glass. Furthermore, series with different sizes were tested in order to evaluate the size effect.
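
    A two-parameter Weibull fit of the kind mentioned above can be reproduced generically with scipy by fixing the location parameter at zero, which leaves the Weibull modulus (shape) and characteristic strength (scale) to be estimated. The strength values below are synthetic, assumed only to illustrate the procedure.

        import numpy as np
        from scipy import stats

        rng = np.random.default_rng(5)

        # Synthetic bending strengths [MPa] for two edge finishes (illustrative
        # values; real data would come from the four-point bending tests).
        cut_edge    = stats.weibull_min.rvs(c=5.0, scale=55.0, size=30,
                                            random_state=rng)
        ground_edge = stats.weibull_min.rvs(c=8.0, scale=70.0, size=30,
                                            random_state=rng)

        for name, data in [("cut edge", cut_edge), ("ground edge", ground_edge)]:
            # Two-parameter Weibull: fix location at zero, fit the shape m
            # (Weibull modulus) and scale sigma_0 (characteristic strength).
            m, _, sigma0 = stats.weibull_min.fit(data, floc=0)
            print(f"{name:12s} Weibull modulus m = {m:4.1f}, "
                  f"characteristic strength = {sigma0:5.1f} MPa")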

  5. 1988 failure rate screening data for fusion reliability and risk analysis

    International Nuclear Information System (INIS)

    Cadwallader, L.C.; Piet, S.J.

    1988-01-01

    This document contains failure rate screening data for application to fusion components. The screening values are generally fission or aerospace industry failure rate estimates that can be extrapolated for use by fusion system designers, reliability engineers and risk analysts. Failure rate estimates for tritium-bearing systems, liquid metal-cooled systems, gas-cooled systems, water-cooled systems and containment systems are given. Preliminary system availability estimates and selected initiating event frequency estimates are presented. This first edition document is valuable for design and safety analysis of the Compact Ignition Tokamak and the International Thermonuclear Experimental Reactor. 20 refs., 28 tabs

  6. Analysis approach for common cause failure on non-safety digital control system

    Energy Technology Data Exchange (ETDEWEB)

    Kim, Yun Goo; Oh, Eungse [Korea Hydro and Nuclear Power Co. Ltd., Daejeon (Korea, Republic of)

    2014-05-15

    The effects of common cause failure (CCF) on safety digital instrumentation and control (I and C) systems have been considered in defense-in-depth and diversity coping analyses using safety analysis methods. For non-safety systems, single failures have been considered in safety analysis. IEEE Std. 603-1991, Clause 5.6.3.1(2), 'Isolation', states that no credible failure on the non-safety side of an isolation device shall prevent any portion of a safety system from meeting its minimum performance requirements during and following any design basis event requiring that safety function. Software CCF is one of the credible failures on the non-safety side. In advanced digital I and C systems, the same hardware components are used in different control systems, and a manufacturing defect or a common external event can generate CCF. Moreover, non-safety I and C systems use complex software for their various functions, and software quality assurance for the development process is less stringent than for safety software, for the sake of cost-effective design. Therefore the potential defects in software cannot be ignored, and the effect of software CCF on non-safety I and C systems needs to be evaluated. This paper proposes a general process and considerations for the analysis of CCF in non-safety I and C systems.

  7. Phenomenological uncertainty analysis of early containment failure at severe accident of nuclear power plant

    Energy Technology Data Exchange (ETDEWEB)

    Lee, Su Won

    2011-02-15

    Severe accidents have inherently significant uncertainty due to the wide range of conditions involved, and performing experiments, validation and practical application is extremely difficult because of the high temperatures and pressures. Although domestic and international research has been carried out, the references used in Korean nuclear plants were foreign data from the 1980s, and safety analysis such as probabilistic safety assessment has not applied the newest methodology. Also, the containment pressure used to identify the probability of containment failure in Level 2 PSA has been a point value taken from thermal-hydraulic analysis. In this paper, uncertainty analysis methods for the phenomena of severe accidents influencing early containment failure were developed, an uncertainty analysis applying them to Korean nuclear plants using the MELCOR code was performed, and the distribution of containment pressure is presented as a result of the uncertainty analysis. Because early containment failure is an important factor in Large Early Release Frequency (LERF), which is used as a representative criterion for decision-making in nuclear power plants, it was selected in this paper among the various modes of containment failure. Important phenomena of early containment failure at severe accident, based on previous research, were identified, and a seven-step methodology to evaluate uncertainty was developed. The MELCOR input for analysis of the severe accident, reflecting natural circulation flow, was developed, and the accident scenario was determined to be station blackout, the representative initiating event for early containment failure. By reviewing the internal models and correlations of the MELCOR code relevant to the important phenomena of early containment failure, the factors which could affect the uncertainty were found, and the major factors were finally identified through sensitivity analysis. In order to determine the total number of MELCOR calculations which can

  8. Sensitivity analysis of fuel pin failure performance under slow-ramp type transient overpower condition by using a fuel performance analysis code FEMAXI-FBR

    International Nuclear Information System (INIS)

    Tsuboi, Yasushi; Ninokata, Hisashi; Endo, Hiroshi; Ishizu, Tomoko; Tatewaki, Isao; Saito, Hiroaki

    2012-01-01

    The FEMAXI-FBR is a fuel performance analysis code and has been developed as one module of the core disruptive accident evaluation system ASTERIA-FBR. The FEMAXI-FBR has reproduced failed-pin behavior during slow transient overpower. The axial location of pin failure affects the power and reactivity behavior during a core disruptive accident, and a failure model in which pin failure occurs at the upper part of the pin is used, reflecting the results of the CABRI-2 test. Using the FEMAXI-FBR, a sensitivity analysis of the uncertainty of design parameters such as irradiation conditions and fuel fabrication tolerances was performed to clarify their effect on the axial location of pin failure during slow transient overpower. The sensitivity analysis showed that the uncertainty of the design parameters does not affect the failure location. This suggests that the failure model in which failure occurs at the upper part of the pin can be adopted for core disruptive accident calculations while taking design uncertainties into consideration. (author)

  9. A statistical approach to plasma profile analysis

    International Nuclear Information System (INIS)

    Kardaun, O.J.W.F.; McCarthy, P.J.; Lackner, K.; Riedel, K.S.

    1990-05-01

    A general statistical approach to the parameterisation and analysis of tokamak profiles is presented. The modelling of the profile dependence on both the radius and the plasma parameters is discussed, and pertinent, classical as well as robust, methods of estimation are reviewed. Special attention is given to statistical tests for discriminating between the various models, and to the construction of confidence intervals for the parameterised profiles and the associated global quantities. The statistical approach is shown to provide a rigorous approach to the empirical testing of plasma profile invariance. (orig.)

  10. Preliminary failure modes and effects analysis on Korean HCCR TBS to be tested in ITER

    International Nuclear Information System (INIS)

    Ahn, Mu-Young; Cho, Seungyon; Jin, Hyung Gon; Lee, Dong Won; Park, Yi-Hyun; Lee, Youngmin

    2015-01-01

    Highlights: • Postulated initiating events are identified through failure modes and effects analysis on the current HCCR TBS design. • A set of postulated initiating events is selected for consideration in deterministic analysis. • Accident evolutions for the selected postulated initiating events are qualitatively described for deterministic analysis. - Abstract: The Korean Helium cooled ceramic reflector (HCCR) Test blanket system (TBS), which comprises the Test blanket module (TBM) and ancillary systems in various locations of the ITER building, is operated at high temperature and pressure with decay heat. Therefore, safety is of utmost concern in the design process, and it is required to demonstrate that the HCCR TBS is designed to comply with the safety requirements and guidelines of ITER. Due to the complexity of the system, with many interfaces with ITER, a systematic approach is necessary for safety analysis. This paper presents a preliminary failure modes and effects analysis (FMEA) study performed for the HCCR TBS. FMEA is a systematic methodology in which failure modes for components in the system and their consequences are studied from the bottom up. Over eighty failure modes have been investigated for the HCCR TBS. The failure modes that have similar consequences are grouped as postulated initiating events (PIEs), and a total of seven reference accident scenarios are derived from the FMEA study for deterministic accident analysis. Failure modes not covered here, due to the evolving design of the HCCR TBS and uncertainty in maintenance procedures, will be studied further in the near future.

  11. Preliminary failure modes and effects analysis on Korean HCCR TBS to be tested in ITER

    Energy Technology Data Exchange (ETDEWEB)

    Ahn, Mu-Young, E-mail: myahn74@nfri.re.kr [National Fusion Research Institute, Daejeon (Korea, Republic of); Cho, Seungyon [National Fusion Research Institute, Daejeon (Korea, Republic of); Jin, Hyung Gon; Lee, Dong Won [Korea Atomic Energy Research Institute, Daejeon (Korea, Republic of); Park, Yi-Hyun; Lee, Youngmin [National Fusion Research Institute, Daejeon (Korea, Republic of)

    2015-10-15

    Highlights: • Postulated initiating events are identified through failure modes and effects analysis on the current HCCR TBS design. • A set of postulated initiating events is selected for consideration in deterministic analysis. • Accident evolutions for the selected postulated initiating events are qualitatively described for deterministic analysis. - Abstract: The Korean Helium cooled ceramic reflector (HCCR) Test blanket system (TBS), which comprises the Test blanket module (TBM) and ancillary systems in various locations of the ITER building, is operated at high temperature and pressure with decay heat. Therefore, safety is of utmost concern in the design process, and it is required to demonstrate that the HCCR TBS is designed to comply with the safety requirements and guidelines of ITER. Due to the complexity of the system, with many interfaces with ITER, a systematic approach is necessary for safety analysis. This paper presents a preliminary failure modes and effects analysis (FMEA) study performed for the HCCR TBS. FMEA is a systematic methodology in which failure modes for components in the system and their consequences are studied from the bottom up. Over eighty failure modes have been investigated for the HCCR TBS. The failure modes that have similar consequences are grouped as postulated initiating events (PIEs), and a total of seven reference accident scenarios are derived from the FMEA study for deterministic accident analysis. Failure modes not covered here, due to the evolving design of the HCCR TBS and uncertainty in maintenance procedures, will be studied further in the near future.

  12. Study designs, use of statistical tests, and statistical analysis software choice in 2015: Results from two Pakistani monthly Medline indexed journals.

    Science.gov (United States)

    Shaikh, Masood Ali

    2017-09-01

    Assessment of research articles in terms of the study designs used, statistical tests applied and statistical analysis programmes employed helps determine the research activity profile and trends in the country. In this descriptive study, all original articles published by the Journal of Pakistan Medical Association (JPMA) and the Journal of the College of Physicians and Surgeons Pakistan (JCPSP) in the year 2015 were reviewed in terms of the study designs used, the application of statistical tests, and the use of statistical analysis programmes. JPMA and JCPSP published 192 and 128 original articles, respectively, in the year 2015. The results of this study indicate that the cross-sectional study design, bivariate inferential statistical analysis entailing comparison between two variables/groups, and the statistical software programme SPSS were the most common study design, inferential statistical analysis, and statistical analysis software programme, respectively. These results echo the previously published assessment of these two journals for the year 2014.

  13. Statistical analysis of brake squeal noise

    Science.gov (United States)

    Oberst, S.; Lai, J. C. S.

    2011-06-01

    Despite substantial research efforts applied to the prediction of brake squeal noise since the early 20th century, the mechanisms behind its generation are still not fully understood. Squealing brakes are of significant concern to the automobile industry, mainly because of the costs associated with warranty claims. In order to remedy the problems inherent in designing quieter brakes and, therefore, to understand the mechanisms, a design of experiments study, using a noise dynamometer, was performed by a brake system manufacturer to determine the influence of geometrical parameters (namely, the number and location of slots) of brake pads on brake squeal noise. The experimental results were evaluated with a noise index and ranked for warm and cold brake stops. These data are analysed here using statistical descriptors based on population distributions, and a correlation analysis, to gain greater insight into the functional dependency between the time-averaged friction coefficient as the input and the peak sound pressure level data as the output quantity. The correlation analysis between the time-averaged friction coefficient and peak sound pressure data is performed by applying a semblance analysis and a joint recurrence quantification analysis. Linear measures are compared with complexity measures (nonlinear) based on statistics from the underlying joint recurrence plots. Results show that linear measures cannot be used to rank the noise performance of the four test pad configurations. On the other hand, the ranking of the noise performance of the test pad configurations based on the noise index agrees with that based on nonlinear measures: the higher the nonlinearity between the time-averaged friction coefficient and peak sound pressure, the worse the squeal. These results highlight the nonlinear character of brake squeal and indicate the potential of using nonlinear statistical analysis tools to analyse disc brake squeal.

  14. The Statistical Analysis of Time Series

    CERN Document Server

    Anderson, T W

    2011-01-01

    The Wiley Classics Library consists of selected books that have become recognized classics in their respective fields. With these new unabridged and inexpensive editions, Wiley hopes to extend the life of these important works by making them available to future generations of mathematicians and scientists. Currently available in the Series: T. W. Anderson Statistical Analysis of Time Series T. S. Arthanari & Yadolah Dodge Mathematical Programming in Statistics Emil Artin Geometric Algebra Norman T. J. Bailey The Elements of Stochastic Processes with Applications to the Natural Sciences George

  15. Analysis of room transfer function and reverberant signal statistics

    DEFF Research Database (Denmark)

    Georganti, Eleftheria; Mourjopoulos, John; Jacobsen, Finn

    2008-01-01

    For some time now, statistical analysis has been a valuable tool in analyzing room transfer functions (RTFs). This work examines existing statistical time-frequency models and techniques for RTF analysis (e.g., Schroeder's stochastic model and the standard deviation over frequency bands for the RTF magnitude and phase). RTF fractional octave smoothing, as with 1/3-octave analysis, may lead to RTF simplifications that can be useful for several audio applications, like room compensation, room modeling and auralisation purposes. The aim of this work is to identify the relationship of the optimal response and the corresponding ratio of the direct and reverberant signal. In addition, this work examines the statistical quantities for speech and audio signals prior to their reproduction within rooms and when recorded in rooms. Histograms and other statistical distributions are used to compare RTF minima of typical

  16. Standard guide for corrosion-related failure analysis

    CERN Document Server

    American Society for Testing and Materials. Philadelphia

    2000-01-01

    1.1 This guide covers key issues to be considered when examining metallic failures when corrosion is suspected as either a major or minor causative factor. 1.2 Corrosion-related failures could include one or more of the following: change in surface appearance (for example, tarnish, rust, color change), pin hole leak, catastrophic structural failure (for example, collapse, explosive rupture, implosive rupture, cracking), weld failure, loss of electrical continuity, and loss of functionality (for example, seizure, galling, spalling, swelling). 1.3 Issues covered include overall failure site conditions, operating conditions at the time of failure, history of equipment and its operation, corrosion product sampling, environmental sampling, metallurgical and electrochemical factors, morphology (mode) or failure, and by considering the preceding, deducing the cause(s) of corrosion failure. This standard does not purport to address all of the safety concerns, if any, associated with its use. It is the responsibili...

  17. Local buckling failure analysis of high-strength pipelines

    Institute of Scientific and Technical Information of China (English)

    Yan Li; Jian Shuai; Zhong-Li Jin; Ya-Tong Zhao; Kui Xu

    2017-01-01

    Pipelines in geological disaster regions typically face the risk of local buckling failure because of their slender structure and complex loads. This paper aims to reveal the local buckling behavior of buried pipelines with large diameter and high strength under different conditions, including pure bending and bending combined with internal pressure. A finite element analysis was built according to previous data to study the local buckling behavior of pressurized and unpressurized pipes under bending conditions and the differences in their local buckling failure modes. In the parametric analysis, a series of parameters, including pipe geometrical dimensions, pipe material properties and internal pressure, were selected to study their influence on the critical bending moment, critical compressive stress and critical compressive strain of pipes. In particular, the hardening exponent of the pipe material was introduced into the parameter analysis by using the Ramberg-Osgood constitutive model. Results showed that geometrical dimensions, material and internal pressure exert similar effects on the critical bending moment and critical compressive stress, but different, even opposite, effects on the critical compressive strain. Based on these analyses, more accurate design models for the critical bending moment and critical compressive stress have been proposed for high-strength pipelines under bending conditions, which provide theoretical methods for high-strength pipeline engineering.

  18. Recognition and Analysis of Corrosion Failure Mechanisms

    OpenAIRE

    Steven Suess

    2006-01-01

    Corrosion has a vast impact on the global and domestic economy, and currently incurs losses of nearly $300 billion annually to the U.S. economy alone. Because of the huge impact of corrosion, it is imperative to have a systematic approach to recognizing and mitigating corrosion problems as soon as possible after they become apparent. A proper failure analysis includes collection of pertinent background data and service history, followed by visual inspection, photographic documentation, materi...

  19. Transit safety & security statistics & analysis 2002 annual report (formerly SAMIS)

    Science.gov (United States)

    2004-12-01

    The Transit Safety & Security Statistics & Analysis 2002 Annual Report (formerly SAMIS) is a compilation and analysis of mass transit accident, casualty, and crime statistics reported under the Federal Transit Administration's (FTA's) National Tr...

  20. Transit safety & security statistics & analysis 2003 annual report (formerly SAMIS)

    Science.gov (United States)

    2005-12-01

    The Transit Safety & Security Statistics & Analysis 2003 Annual Report (formerly SAMIS) is a compilation and analysis of mass transit accident, casualty, and crime statistics reported under the Federal Transit Administration's (FTA's) National Tr...

  1. A meta-analysis of the effects of β-adrenergic blockers in chronic heart failure.

    Science.gov (United States)

    Zhang, Xiaojian; Shen, Chengwu; Zhai, Shujun; Liu, Yukun; Yue, Wen-Wei; Han, Li

    2016-10-01

    Adrenergic β-blockers are drugs that bind to, but do not activate, β-adrenergic receptors. Instead they block the actions of β-adrenergic agonists and are used for the treatment of various diseases such as cardiac arrhythmias, angina pectoris, myocardial infarction, hypertension, headache, migraines, stress, anxiety, prostate cancer, and heart failure. Several meta-analysis studies have shown that β-blockers improve heart function and reduce the risks of cardiovascular events, mortality, and sudden death in patients with chronic heart failure (CHF). The present study identified results from recent meta-analyses of β-adrenergic blockers and their usefulness in CHF. Databases including Medline, Embase, the Cochrane Central Register of Controlled Trials (CENTRAL), and PubMed were searched for the periods May 1985 to March 2011 and June 2013 to August 2015, and a number of studies were identified. Results of those studies showed that use of β-blockers was associated with decreased sudden cardiac death in patients with heart failure. However, contradictory results have also been reported. The present meta-analysis aimed to determine the efficacy of β-blockers on mortality and morbidity in patients with heart failure. The results showed that mortality was significantly reduced by β-blocker treatment given prior to surgery in heart failure patients. The results from the meta-analysis studies showed that β-blocker treatment in heart failure patients correlated with a significant decrease in long-term mortality, even in patients that meet one or more exclusion criteria of the MERIT-HF study. In summary, the findings of the current meta-analysis revealed the beneficial effects that different β-blockers have on patients with heart failure or related heart disease.

  2. Statistical Modelling of Wind Profiles - Data Analysis and Modelling

    DEFF Research Database (Denmark)

    Jónsson, Tryggvi; Pinson, Pierre

    The aim of the analysis presented in this document is to investigate whether statistical models can be used to make very short-term predictions of wind profiles.

  3. Corrosion failure analysis as related to prevention of corrosion failures

    International Nuclear Information System (INIS)

    Suss, H.

    1977-10-01

    The factors and conditions which have contributed to many of the corrosion-related service failures are discussed based on a review of actual case histories. The anti-corrosion devices which were developed as a result of these failure analyses are reviewed, and the method which must be adopted and used to take advantage of the available corrosion prevention techniques is discussed.

  4. Failure probability analysis on mercury target vessel

    International Nuclear Information System (INIS)

    Ishikura, Syuichi; Futakawa, Masatoshi; Kogawa, Hiroyuki; Sato, Hiroshi; Haga, Katsuhiro; Ikeda, Yujiro

    2005-03-01

    Failure probability analysis was carried out to estimate the lifetime of the mercury target which will be installed into the JSNS (Japan Spallation Neutron Source) in J-PARC (Japan Proton Accelerator Research Complex). The lifetime was estimated taking the loading conditions and materials degradation into account. The loads imposed on the target vessel were the static stresses due to thermal expansion and static pre-pressure on the He gas and mercury, and the dynamic stresses due to the thermally shocked pressure waves generated repeatedly at 25 Hz. Materials used in the target vessel will be degraded by fatigue, neutron and proton irradiation, mercury immersion, pitting damage, etc. The imposed stresses were evaluated through static and dynamic structural analyses. The material degradation was deduced from published experimental data. As a result, it was quantitatively confirmed that the failure probability over the design lifetime is very low for the safety hull, about 10^-11, meaning that it will hardly fail during the design lifetime. On the other hand, the beam window of the mercury vessel, subjected to high-pressure waves, exhibits a failure probability of 12%. It was concluded, therefore, that mercury leaking from a failed area at the beam window is adequately contained in the space between the safety hull and the mercury vessel by using mercury-leakage sensors. (author)

  5. Dam failure analysis for the Lago El Guineo Dam, Orocovis, Puerto Rico

    Science.gov (United States)

    Gómez-Fragoso, Julieta; Heriberto Torres-Sierra,

    2016-08-09

    The U.S. Geological Survey, in cooperation with the Puerto Rico Electric Power Authority, completed hydrologic and hydraulic analyses to assess the potential hazard to human life and property associated with the hypothetical failure of the Lago El Guineo Dam. The Lago El Guineo Dam is within the headwaters of the Río Grande de Manatí and impounds a drainage area of about 4.25 square kilometers. The hydrologic assessment was designed to determine the outflow hydrographs and peak discharges for Lago El Guineo and other subbasins in the Río Grande de Manatí hydrographic basin for three extreme rainfall events: (1) a 6-hour probable maximum precipitation event, (2) a 24-hour probable maximum precipitation event, and (3) a 24-hour, 100-year recurrence rainfall event. The hydraulic study simulated a dam failure of Lago El Guineo Dam using flood hydrographs generated from the hydrologic study. The simulated dam failure generated a hydrograph that was routed downstream from Lago El Guineo Dam through the lower reaches of the Río Toro Negro and the Río Grande de Manatí to determine water-surface profiles developed from the event-based hydrologic scenarios and “sunny day” conditions. The Hydrologic Engineering Center’s Hydrologic Modeling System (HEC–HMS) and Hydrologic Engineering Center’s River Analysis System (HEC–RAS) computer programs, developed by the U.S. Army Corps of Engineers, were used for the hydrologic and hydraulic modeling, respectively. The flow routing in the hydraulic analyses was completed using the unsteady flow module available in the HEC–RAS model. Above the Lago El Guineo Dam, the simulated inflow peak discharges from HEC–HMS resulted in about 550 and 414 cubic meters per second for the 6- and 24-hour probable maximum precipitation events, respectively. The 24-hour, 100-year recurrence storm simulation resulted in a peak discharge of about 216 cubic meters per second. For the hydrologic analysis, no dam failure conditions are

  6. Statistical analysis of long term spatial and temporal trends of ...

    Indian Academy of Sciences (India)

    Statistical analysis of long term spatial and temporal trends of temperature ... CGCM3; HadCM3; modified Mann–Kendall test; statistical analysis; Sutlej basin. ... Water Resources Systems Division, National Institute of Hydrology, Roorkee 247 ...

  7. Development of severe accident analysis code - Development of a finite element code for lower head failure analysis

    Energy Technology Data Exchange (ETDEWEB)

    Huh, Hoon; Lee, Choong Ho; Choi, Tae Hoon; Kim, Hyun Sup; Kim, Se Ho; Kang, Woo Jong; Seo, Chong Kwan [Korea Atomic Energy Research Institute, Taejon (Korea, Republic of)

    1995-08-01

    The study concerns the development of analysis models and computer codes for lower head failure analysis when a severe accident occurs in a nuclear reactor system. Although lower head failure comprises several failure modes, this year's study focused on global rupture, with the collapse pressure and mode obtained by limit analysis and elastic deformation analysis. The behavior of the molten core causes an elevation of temperature in the reactor vessel wall and deterioration of the load-carrying capacity of the reactor vessel. The behavior of the molten core and the heat transfer modes were therefore postulated in several types, and the temperature distributions according to the assumed heat flux modes were calculated. The collapse pressure of a nuclear reactor lower head decreases rapidly with elevation of temperature as time passes. The calculation shows that the safety of a nuclear reactor is enhanced, with a larger collapse pressure, when the hot spot is located far from the pole. 42 refs., 2 tabs., 31 figs. (author)

  8. CORSSA: The Community Online Resource for Statistical Seismicity Analysis

    Science.gov (United States)

    Michael, Andrew J.; Wiemer, Stefan

    2010-01-01

    Statistical seismology is the application of rigorous statistical methods to earthquake science with the goal of improving our knowledge of how the earth works. Within statistical seismology there is a strong emphasis on the analysis of seismicity data in order to improve our scientific understanding of earthquakes and to improve the evaluation and testing of earthquake forecasts, earthquake early warning, and seismic hazards assessments. Given the societal importance of these applications, statistical seismology must be done well. Unfortunately, a lack of educational resources and available software tools make it difficult for students and new practitioners to learn about this discipline. The goal of the Community Online Resource for Statistical Seismicity Analysis (CORSSA) is to promote excellence in statistical seismology by providing the knowledge and resources necessary to understand and implement the best practices, so that the reader can apply these methods to their own research. This introduction describes the motivation for and vision of CORSSA. It also describes its structure and contents.

  9. Analysis of Failure Causes and the Criticality Degree of Elements of Motor Vehicle’s Drum Brakes

    Directory of Open Access Journals (Sweden)

    D. Ćatić

    2014-09-01

    The introduction of the paper gives the basic concepts and historical development of the methods of Fault Tree Analysis (FTA) and Failure Modes, Effects and Criticality Analysis (FMECA) for analyzing the reliability and safety of technical systems, and the importance of applying these methods is highlighted. Failure analysis is particularly important for systems whose failures endanger people's safety, such as, for example, the braking system of motor vehicles. For the failure analysis of the considered device, it is necessary to know the structure, functioning, working conditions and all factors that have a greater or lesser influence on its reliability. By forming the fault tree of drum brakes in the braking systems of commercial vehicles, a causal relation was established between the different events that lead to a reduction in performance or complete failure of the braking system. Based on data from exploitation, FMECA methods were used to determine the criticality degree of the drum brake's elements for the reliable and safe operation of the braking system.

  10. [Examination of safety improvement by failure record analysis that uses reliability engineering].

    Science.gov (United States)

    Kato, Kyoichi; Sato, Hisaya; Abe, Yoshihisa; Ishimori, Yoshiyuki; Hirano, Hiroshi; Higashimura, Kyoji; Amauchi, Hiroshi; Yanakita, Takashi; Kikuchi, Kei; Nakazawa, Yasuo

    2010-08-20

    This study verified how maintenance checks of medical systems, including start-of-work and end-of-work checks, contribute to preventive maintenance and safety improvement. In this research, data on the failure of devices in multiple facilities were collected, and the trouble-repair records were analyzed using reliability engineering techniques. An analysis of data on the systems (8 general systems, 6 angio systems, 11 CT systems, 8 MRI systems, 8 RI systems, and 9 radiation therapy systems) used in eight hospitals was performed. The data collection period was the nine months from April to December 2008. The analyzed items included: (1) mean time between failures (MTBF); (2) mean time to repair (MTTR); (3) mean down time (MDT); (4) number of failures found by the morning check; (5) failure generation time according to modality. The classification of the breakdowns per device, their incidence, and their tendency could be understood by introducing reliability engineering. Analysis, evaluation, and feedback on the failure generation history are useful to keep downtime to a minimum and to ensure safety.
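
    As a minimal sketch of the reliability indices listed above, the Python snippet below computes MTBF, MTTR and availability from a hypothetical repair log; the numbers are invented for illustration and do not come from the study.

        import numpy as np

        # Hypothetical repair log for one modality: operating hours between
        # successive failures and the corresponding repair durations (hours).
        uptimes = np.array([310.0, 225.5, 402.0, 188.0])  # time between failures
        repairs = np.array([4.5, 2.0, 6.0, 3.5])          # time to repair

        mtbf = uptimes.mean()                # mean time between failures
        mttr = repairs.mean()                # mean time to repair
        availability = mtbf / (mtbf + mttr)  # steady-state availability
        print(f"MTBF = {mtbf:.1f} h, MTTR = {mttr:.1f} h, "
              f"availability = {availability:.3f}")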

  11. Multivariate statistical analysis a high-dimensional approach

    CERN Document Server

    Serdobolskii, V

    2000-01-01

    In the last few decades the accumulation of large amounts of information in numerous applications has stimulated an increased interest in multivariate analysis. Computer technologies allow one to use multi-dimensional and multi-parametric models successfully. At the same time, an interest arose in statistical analysis with a deficiency of sample data. Nevertheless, it is difficult to describe the recent state of affairs in applied multivariate methods as satisfactory. Unimprovable (dominating) statistical procedures are still unknown except for a few specific cases. The simplest problem of estimating the mean vector with minimum quadratic risk is unsolved, even for normal distributions. Commonly used standard linear multivariate procedures based on the inversion of sample covariance matrices can lead to unstable results or provide no solution depending on the data. Programs included in standard statistical packages cannot process 'multi-collinear data' and there are no theoretical recommen...

  12. Prediction of hospital failure: a post-PPS analysis.

    Science.gov (United States)

    Gardiner, L R; Oswald, S L; Jahera, J S

    1996-01-01

    This study investigates the ability of discriminant analysis to provide accurate predictions of hospital failure. Using data from the period following the introduction of the Prospective Payment System, we developed discriminant functions for each of two hospital ownership categories: not-for-profit and proprietary. The resulting discriminant models contain six and seven variables, respectively. For each ownership category, the variables represent four major aspects of financial health (liquidity, leverage, profitability, and efficiency) plus county market share and length of stay. The proportion of closed hospitals misclassified as open one year before closure does not exceed 0.05 for either ownership type. Our results show that discriminant functions based on a small set of financial and nonfinancial variables provide the capability to predict hospital failure reliably for both not-for-profit and proprietary hospitals.

  13. Analysis of failure and maintenance experiences of motor operated valves in a Finnish nuclear power plant

    International Nuclear Information System (INIS)

    Simola, K.; Laakso, K.

    1992-01-01

    Operating experiences from 1981 to 1989 of a total of 104 motor-operated closing valves (MOVs) in different safety systems at the TVO I and II nuclear power units were analysed in a systematic way. The qualitative methods used were failure mode and effects analysis (FMEA) and maintenance effects and criticality analysis (MECA). The failure descriptions were obtained from the power plant's computerized failure reporting system. The reported 181 failure events were reanalysed and sorted according to specific classifications developed for the MOV function. Completed FMEA and MECA sheets on individual valves were stored in a microcomputer database for further analyses. Analyses were performed for the failed mechanical and electrical valve parts, ways of detection of failure modes, failure effects, and repair and unavailability times.

  14. Weighing of risk factors for penetrating keratoplasty graft failure: application of Risk Score System

    Directory of Open Access Journals (Sweden)

    Abdo Karim Tourkmani

    2017-03-01

    AIM: To analyze the relationship between the score obtained in the Risk Score System (RSS) proposed by Hicks et al and penetrating keratoplasty (PKP) graft failure at 1y postoperatively, and between each factor in the RSS and the risk of PKP graft failure, using univariate and multivariate analysis. METHODS: The retrospective cohort study had 152 PKPs from 152 patients. Eighteen cases were excluded from our study due to primary failure (10 cases), incomplete medical notes (5 cases) and follow-up less than 1y (3 cases). We included 134 PKPs from 134 patients stratified by preoperative risk score. The Spearman coefficient was calculated for the relationship between the score obtained and the risk of failure at 1y. Univariate and multivariate analyses were calculated for the impact of every single risk factor included in the RSS on graft failure at 1y. RESULTS: The Spearman coefficient showed a statistically significant correlation between the score in the RSS and graft failure (P<0.05). No statistically significant relationship (P>0.05) was found between diagnosis or lens status and graft failure. The relationship between the other risk factors studied and graft failure was significant (P<0.05), although the results for previous grafts and graft failure were unreliable. None of our patients had previous blood transfusion, thus it had no impact. CONCLUSION: After the application of multivariate analysis techniques, some risk factors do not show the expected impact on graft failure at 1y.
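
    As an illustration of the correlation statistic named in the abstract, the following Python sketch computes a Spearman rank correlation between hypothetical preoperative risk scores and a binary 1-year graft-failure outcome; the data and variable names are invented, not taken from the study.

        import numpy as np
        from scipy.stats import spearmanr

        # Hypothetical data: preoperative RSS scores and 1-year outcome
        # (1 = graft failure, 0 = graft survival) for ten grafts.
        risk_score = np.array([1, 2, 2, 3, 4, 4, 5, 6, 7, 8])
        failed = np.array([0, 0, 0, 0, 0, 1, 0, 1, 1, 1])

        rho, p_value = spearmanr(risk_score, failed)
        print(f"Spearman rho = {rho:.2f}, P = {p_value:.3f}")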

  15. Applied multivariate statistical analysis

    CERN Document Server

    Härdle, Wolfgang Karl

    2015-01-01

    Focusing on high-dimensional applications, this 4th edition presents the tools and concepts used in multivariate data analysis in a style that is also accessible for non-mathematicians and practitioners.  It surveys the basic principles and emphasizes both exploratory and inferential statistics; a new chapter on Variable Selection (Lasso, SCAD and Elastic Net) has also been added.  All chapters include practical exercises that highlight applications in different multivariate data analysis fields: in quantitative financial studies, where the joint dynamics of assets are observed; in medicine, where recorded observations of subjects in different locations form the basis for reliable diagnoses and medication; and in quantitative marketing, where consumers’ preferences are collected in order to construct models of consumer behavior.  All of these examples involve high to ultra-high dimensions and represent a number of major fields in big data analysis. The fourth edition of this book on Applied Multivariate ...

  16. Application of failure mode and effect analysis in an assisted reproduction technology laboratory.

    Science.gov (United States)

    Intra, Giulia; Alteri, Alessandra; Corti, Laura; Rabellotti, Elisa; Papaleo, Enrico; Restelli, Liliana; Biondo, Stefania; Garancini, Maria Paola; Candiani, Massimo; Viganò, Paola

    2016-08-01

    Assisted reproduction technology laboratories have a very high degree of complexity. Mismatches of gametes or embryos can occur, with catastrophic consequences for patients. To minimize the risk of error, a multi-institutional working group applied failure mode and effects analysis (FMEA) to each critical activity/step as a method of risk assessment. This analysis led to the identification of the potential failure modes, together with their causes and effects, using the risk priority number (RPN) scoring system. In total, 11 individual steps and 68 different potential failure modes were identified. The highest ranked failure modes, with an RPN score of 25, encompassed 17 failures and pertained to "patient mismatch" and "biological sample mismatch". The maximum reduction in risk, with RPN reduced from 25 to 5, was mostly related to the introduction of witnessing. The critical failure modes in sample processing were improved by 50% in the RPN by focusing on staff training. Three indicators of FMEA success, based on technical skill, competence and traceability, have been evaluated after FMEA implementation. Witnessing by a second human operator should be introduced in the laboratory to avoid sample mix-ups. These findings confirm that FMEA can effectively reduce errors in assisted reproduction technology laboratories.
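
    As a minimal sketch of the RPN scoring system referred to above, the Python snippet below ranks hypothetical failure modes by the product of severity, occurrence and detectability scores; the failure modes and the 1-5 scoring convention are illustrative assumptions, not the working group's actual worksheet.

        # Each failure mode: (description, severity, occurrence, detectability),
        # each factor scored on an assumed 1-5 scale.
        failure_modes = [
            ("patient mismatch at sample reception", 5, 1, 5),
            ("biological sample mismatch", 5, 1, 5),
            ("incorrect culture medium used", 4, 2, 2),
        ]

        for name, sev, occ, det in sorted(
                failure_modes, key=lambda fm: fm[1] * fm[2] * fm[3], reverse=True):
            rpn = sev * occ * det  # risk priority number
            print(f"RPN = {rpn:2d}  {name}")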

  17. Modeling of fast reactor cladding failure for hypothetical accident transient analysis

    International Nuclear Information System (INIS)

    Kramer, J.M.; DiMelfi, R.J.; Hughes, T.H.; Deitrich, L.W.

    1979-01-01

    An analysis is made of burst experiments performed on neutron irradiated cladding tubes. This is done by employing a generalized Voce equation to describe the mechanical deformation of type 316 stainless steel, combined with an empirical creep crack growth law, each modified to account for the effects of irradiation matrix hardening, and irradiation induced grain boundary embrittlement, respectively. The results of this analysis indicate that for large initial hoop stress, failure occurs at relatively low temperature and is controlled by the onset of plastic instability. The increase in failure temperature of irradiated material, in this low temperature region, is due to irradiation strengthening. Failure in the case of relatively small initial hoop stress occurs at high temperature where the Voce equation reduces to a power law creep formula. The ductility of irradiated material, in this high temperature region, is adequately described through the use of an empirical intergranular crack growth law used in conjunction with the creep law. The effect of neutron irradiation is to reduce the activation energy for crack propagation from the value for creep to some lower value correlated to independent Dorn rupture parameter measurements. The result is a predicted reduced ductility which translates into a reduction in failure temperature at a given hoop stress value for irradiated material. (orig.)

  18. Elastic-plastic failure analysis of pressure burst tests of thin toroidal shells

    International Nuclear Information System (INIS)

    Jones, D.P.; Holliday, J.E.; Larson, L.D.

    1998-07-01

    This paper provides a comparison between test and analysis results for bursting of thin toroidal shells. Testing was done by pressurizing two toroidal shells until failure by bursting. An analytical criterion for bursting is developed based on good agreement between structural instability predicted by large strain-large displacement elastic-plastic finite element analysis and observed burst pressure obtained from test. The failures were characterized by loss of local stability of the membrane section of the shells consistent with the predictions from the finite element analysis. Good agreement between measured and predicted burst pressure suggests that incipient structural instability as calculated by an elastic-plastic finite element analysis is a reasonable way to calculate the bursting pressure of thin membrane structures

  19. Statistical evaluation of vibration analysis techniques

    Science.gov (United States)

    Milner, G. Martin; Miller, Patrice S.

    1987-01-01

    An evaluation methodology is presented for a selection of candidate vibration analysis techniques applicable to machinery representative of the environmental control and life support system of advanced spacecraft; illustrative results are given. Attention is given to the statistical analysis of small sample experiments, the quantification of detection performance for diverse techniques through the computation of probability of detection versus probability of false alarm, and the quantification of diagnostic performance.

  20. HistFitter software framework for statistical data analysis

    CERN Document Server

    Baak, M.; Côte, D.; Koutsman, A.; Lorenz, J.; Short, D.

    2015-01-01

    We present a software framework for statistical data analysis, called HistFitter, that has been used extensively by the ATLAS Collaboration to analyze big datasets originating from proton-proton collisions at the Large Hadron Collider at CERN. Since 2012 HistFitter has been the standard statistical tool in searches for supersymmetric particles performed by ATLAS. HistFitter is a programmable and flexible framework to build, book-keep, fit, interpret and present results of data models of nearly arbitrary complexity. Starting from an object-oriented configuration, defined by users, the framework builds probability density functions that are automatically fitted to data and interpreted with statistical tests. A key innovation of HistFitter is its design, which is rooted in core analysis strategies of particle physics. The concepts of control, signal and validation regions are woven into its very fabric. These are progressively treated with statistically rigorous built-in methods. Being capable of working with mu...

  1. Failure analysis of carbide fuels under transient overpower (TOP) conditions

    International Nuclear Information System (INIS)

    Nguyen, D.H.

    1980-06-01

    The failure of carbide fuels in the Fast Test Reactor (FTR) under transient overpower (TOP) conditions has been examined. The beginning-of-cycle-four (BOC-4) all-oxide base case, at a $0.50/sec ramp rate, was selected as the reference case. A coupling between the advanced fuel performance code UNCLE-T and the HCDA code MELT-IIIA was necessary for the analysis. UNCLE-T was used to determine cladding failure and fuel preconditioning, which served as initial conditions for the MELT-IIIA calculations. MELT-IIIA determined the time of molten fuel ejection from the fuel pin.

  2. The surgical treatment of failure in cervical lymph nodes after radiotherapy for nasopharyngeal carcinoma: an analysis of 83 patients

    International Nuclear Information System (INIS)

    Gu Wendong; Ji Qinghai; Lu Xueguan; Feng Yan

    2003-01-01

    Objective: To analyze the results of neck dissection in patients with cervical lymph node failure after radiotherapy for nasopharyngeal carcinoma. Methods: Eighty-three patients who received neck dissection due to lymph node persistence or recurrence after definitive radiotherapy were analyzed retrospectively according to the following relevant factors: age, sex, the interval between completion of radiotherapy and surgery, rN stage, whether postoperative radiotherapy was given, whether adjacent tissues were involved, and the number of positive nodes. The Kaplan-Meier method, log-rank test and Cox model were used in the statistical analysis. Results: The 1-, 3- and 5-year overall survival rates were 80.7%, 47.1% and 34.9%. The interval between completion of radiotherapy and surgery, whether postoperative radiotherapy was given, and whether adjacent tissues were involved were significant prognostic factors in the statistical analysis. Conclusions: Neck dissection can be applied in the management of cervical lymph node failure in nasopharyngeal carcinoma after radiotherapy. Postoperative radiotherapy should be considered in patients with capsular invasion and/or adjacent tissue involvement.

  3. Statistical analysis on extreme wave height

    Digital Repository Service at National Institute of Oceanography (India)

    Teena, N.V.; SanilKumar, V.; Sudheesh, K.; Sajeev, R.

    WAFO (2000) – A MATLAB toolbox for analysis of random waves and loads, Lund University, Sweden, homepage http://www.maths.lth.se/matstat/wafo/. [Table 1: Statistical results of data and fitted distribution for cumulative distribution...]

  4. Failure analysis of boiler tubes in lakhra coal power plant

    International Nuclear Information System (INIS)

    Shah, A.; Baluch, M.M.; Ali, A.

    2010-01-01

    The present work deals with the failure analysis of a boiler tube in the Lakhra fluidized bed combustion power station. Initially, visual inspection was used to analyse the fractured surface. Detailed microstructural investigations of the burst boiler tube were carried out using light optical microscopy and scanning electron microscopy. Hardness tests were also performed. A 50 percent decrease in hardness was measured between the intact portion of the tube material and the area adjacent to the failure, which was found to be in good agreement with the wall thicknesses measured on the burst boiler tube, i.e. 4 mm in the unaffected portion and 2 mm in the ruptured area. It was concluded that the major cause of failure of the boiler tube is erosion of material, which occurs due to coal particles striking the surface of the tube material. Since the boiler temperature is not maintained uniformly, the variations in boiler temperature can also affect the material and could be another reason for the failure of the tube. (author)

  5. Probabilistic physics-of-failure models for component reliabilities using Monte Carlo simulation and Weibull analysis: a parametric study

    International Nuclear Information System (INIS)

    Hall, P.L.; Strutt, J.E.

    2003-01-01

    In reliability engineering, component failures are generally classified in one of three ways: (1) early life failures; (2) failures having random onset times; and (3) late life or 'wear out' failures. When the time-distribution of failures of a population of components is analysed in terms of a Weibull distribution, these failure types may be associated with shape parameters β having values less than 1, equal to 1, and greater than 1, respectively. Early life failures are frequently attributed to poor design (e.g. poor materials selection) or problems associated with manufacturing or assembly processes. We describe a methodology for the implementation of physics-of-failure models of component lifetimes in the presence of parameter and model uncertainties. This treats uncertain parameters as random variables described by some appropriate statistical distribution, which may be sampled using Monte Carlo methods. The number of simulations required depends upon the desired accuracy of the predicted lifetime. Provided that the number of sampled variables is relatively small, an accuracy of 1-2% can be obtained using typically 1000 simulations. The resulting collection of times-to-failure is then sorted into ascending order and fitted to a Weibull distribution to obtain a shape factor β and a characteristic lifetime η. Examples are given of the results obtained using three different models: (1) the Eyring-Peck (EP) model for corrosion of printed circuit boards; (2) a power-law corrosion growth (PCG) model which represents the progressive deterioration of oil and gas pipelines; and (3) a random shock-loading model of mechanical failure. It is shown that for any specific model the values of the Weibull shape parameters obtained may be strongly dependent on the degree of uncertainty of the underlying input parameters. Both the EP and PCG models can yield a wide range of values of β, from β>1, characteristic of wear-out behaviour, to β<1, characteristic of early-life failure, depending on the degree of uncertainty in the underlying input parameters.
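
    The following Python sketch illustrates the general procedure described above: sample uncertain model parameters, compute times-to-failure from a degradation model, and fit a Weibull distribution to the results. The toy degradation model and all parameter values are invented for illustration; they are not the EP, PCG or shock-loading models from the paper.

        import numpy as np
        from scipy import stats

        rng = np.random.default_rng(1)
        n_sim = 1000

        # Uncertain inputs treated as random variables (assumed distributions).
        rate = rng.lognormal(mean=0.0, sigma=0.3, size=n_sim)    # degradation rate
        threshold = rng.normal(loc=10.0, scale=1.0, size=n_sim)  # failure threshold

        # Toy physics-of-failure model: damage grows as d(t) = rate * t**2,
        # so the time-to-failure is t = sqrt(threshold / rate).
        ttf = np.sqrt(threshold / rate)

        # Fit a two-parameter Weibull distribution (location fixed at zero).
        beta, _, eta = stats.weibull_min.fit(ttf, floc=0)
        print(f"shape beta = {beta:.2f}, characteristic life eta = {eta:.2f}")
        # beta < 1 suggests early-life failures; beta > 1 suggests wear-out.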

  6. [Retrieval and failure analysis of surgical implants in Brazil: the need for proper regulation].

    Science.gov (United States)

    Azevedo, Cesar R de Farias; Hippert, Eduardo

    2002-01-01

    This paper summarizes several cases of metallurgical failure analysis of surgical implants conducted at the Laboratory of Failure Analysis, Instituto de Pesquisas Tecnológicas (IPT), in Brazil. Failures with two stainless steel femoral compression plates, one stainless steel femoral nail plate, one Ti-6Al-4V alloy maxillary reconstruction plate, and five Nitinol wires were investigated. The results showed that the implants were not in accordance with ISO standards and presented evidence of corrosion-assisted fracture. Furthermore, some of the implants presented manufacturing/processing defects which also contributed to their premature failure. Implantation of materials that are not biocompatible may cause several types of adverse effects in the human body and lead to premature implant failure. A review of prevailing health legislation is needed in Brazil, along with the adoption of regulatory mechanisms to assure the quality of surgical implants on the market, providing for compulsory procedures in the reporting and investigation of surgical implants which have failed in service.

  7. Statistical Analysis of Zebrafish Locomotor Response.

    Science.gov (United States)

    Liu, Yiwen; Carmer, Robert; Zhang, Gaonan; Venkatraman, Prahatha; Brown, Skye Ashton; Pang, Chi-Pui; Zhang, Mingzhi; Ma, Ping; Leung, Yuk Fai

    2015-01-01

    Zebrafish larvae display rich locomotor behaviour upon external stimulation. The movement can be simultaneously tracked from many larvae arranged in multi-well plates. The resulting time-series locomotor data have been used to reveal new insights into neurobiology and pharmacology. However, the data are of large scale, and the corresponding locomotor behavior is affected by multiple factors. These issues pose a statistical challenge for comparing larval activities. To address this gap, this study has analyzed a visually-driven locomotor behaviour named the visual motor response (VMR) by the Hotelling's T-squared test. This test is congruent with comparing locomotor profiles from a time period. Different wild-type (WT) strains were compared using the test, which shows that they responded differently to light change at different developmental stages. The performance of this test was evaluated by a power analysis, which shows that the test was sensitive for detecting differences between experimental groups with sample numbers that were commonly used in various studies. In addition, this study investigated the effects of various factors that might affect the VMR by multivariate analysis of variance (MANOVA). The results indicate that the larval activity was generally affected by stage, light stimulus, their interaction, and location in the plate. Nonetheless, different factors affected larval activity differently over time, as indicated by a dynamical analysis of the activity at each second. Intriguingly, this analysis also shows that biological and technical repeats had negligible effect on larval activity. This finding is consistent with that from the Hotelling's T-squared test, and suggests that experimental repeats can be combined to enhance statistical power. Together, these investigations have established a statistical framework for analyzing VMR data, a framework that should be generally applicable to other locomotor data with similar structure.
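
    As a minimal sketch of the two-sample Hotelling's T-squared test used in the study, the Python function below compares the mean multivariate activity profiles of two groups; the simulated data, group sizes and time bins are illustrative assumptions.

        import numpy as np
        from scipy import stats

        def hotelling_t2(X, Y):
            """Two-sample Hotelling's T-squared test for (n, p) data arrays."""
            nx, p = X.shape
            ny = Y.shape[0]
            diff = X.mean(axis=0) - Y.mean(axis=0)
            # Pooled covariance matrix of the two samples.
            S = ((nx - 1) * np.cov(X, rowvar=False)
                 + (ny - 1) * np.cov(Y, rowvar=False)) / (nx + ny - 2)
            t2 = nx * ny / (nx + ny) * diff @ np.linalg.solve(S, diff)
            # Transform T^2 to an F statistic to obtain the p-value.
            f = t2 * (nx + ny - p - 1) / (p * (nx + ny - 2))
            return t2, stats.f.sf(f, p, nx + ny - p - 1)

        rng = np.random.default_rng(0)
        wt = rng.normal(1.0, 0.3, size=(24, 5))   # activity over 5 time bins
        mut = rng.normal(1.3, 0.3, size=(24, 5))  # a group with shifted response
        t2, p_value = hotelling_t2(wt, mut)
        print(f"T2 = {t2:.1f}, P = {p_value:.4f}")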

  8. Time Series Analysis Based on Running Mann Whitney Z Statistics

    Science.gov (United States)

    A sensitive and objective time series analysis method based on the calculation of Mann Whitney U statistics is described. This method samples data rankings over moving time windows, converts those samples to Mann-Whitney U statistics, and then normalizes the U statistics to Z statistics using Monte-...
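
    A minimal sketch of the running statistic described above, assuming two adjacent moving windows are compared and using the standard large-sample normal approximation to convert U to Z (the abstract's Monte Carlo normalization is not reproduced here); the window size and data are illustrative.

        import numpy as np
        from scipy.stats import mannwhitneyu

        def running_mw_z(series, window):
            """Z-normalized Mann-Whitney U over adjacent moving windows."""
            z = []
            for i in range(len(series) - 2 * window + 1):
                a = series[i:i + window]
                b = series[i + window:i + 2 * window]
                u = mannwhitneyu(a, b, alternative="two-sided").statistic
                mu = window * window / 2.0  # mean of U under H0
                sigma = np.sqrt(window * window * (2 * window + 1) / 12.0)
                z.append((u - mu) / sigma)
            return np.array(z)

        t = np.arange(200)
        data = np.sin(t / 20.0) + np.random.default_rng(3).normal(0, 0.2, t.size)
        print(running_mw_z(data, window=25)[:5])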

  9. Sensitivity analysis of ranked data: from order statistics to quantiles

    NARCIS (Netherlands)

    Heidergott, B.F.; Volk-Makarewicz, W.

    2015-01-01

    In this paper we provide the mathematical theory for sensitivity analysis of order statistics of continuous random variables, where the sensitivity is with respect to a distributional parameter. Sensitivity analysis of order statistics over a finite number of observations is discussed before

  10. Failure mode and effects analysis outputs: are they valid?

    Science.gov (United States)

    Shebl, Nada Atef; Franklin, Bryony Dean; Barber, Nick

    2012-06-10

    Failure Mode and Effects Analysis (FMEA) is a prospective risk assessment tool that has been widely used within the aerospace and automotive industries and has been utilised within healthcare since the early 1990s. The aim of this study was to explore the validity of FMEA outputs within a hospital setting in the United Kingdom. Two multidisciplinary teams each conducted an FMEA for the use of vancomycin and gentamicin. Four different validity tests were conducted: Face validity: by comparing the FMEA participants' mapped processes with observational work. Content validity: by presenting the FMEA findings to other healthcare professionals. Criterion validity: by comparing the FMEA findings with data reported on the trust's incident report database. Construct validity: by exploring the relevant mathematical theories involved in calculating the FMEA risk priority number. Face validity was positive as the researcher documented the same processes of care as mapped by the FMEA participants. However, other healthcare professionals identified potential failures missed by the FMEA teams. Furthermore, the FMEA groups failed to include failures related to omitted doses; yet these were the failures most commonly reported in the trust's incident database. Calculating the RPN by multiplying severity, probability and detectability scores was deemed invalid because it is based on calculations that breach the mathematical properties of the scales used. There are significant methodological challenges in validating FMEA. It is a useful tool to aid multidisciplinary groups in mapping and understanding a process of care; however, the results of our study cast doubt on its validity. FMEA teams are likely to need different sources of information, besides their personal experience and knowledge, to identify potential failures. As for FMEA's methodology for scoring failures, there were discrepancies between the teams' estimates and similar incidents reported on the trust's incident

  11. Structures for common-cause failure analysis

    International Nuclear Information System (INIS)

    Vaurio, J.K.

    1981-01-01

    Common-cause failure methodology and terminology have been reviewed and structured to provide a systematical basis for addressing and developing models and methods for quantification. The structure is based on (1) a specific set of definitions, (2) categories based on the way faults are attributable to a common cause, and (3) classes based on the time of entry and the time of elimination of the faults. The failure events are then characterized by their likelihood or frequency and the average residence time. The structure provides a basis for selecting computational models, collecting and evaluating data and assessing the importance of various failure types, and for developing effective defences against common-cause failure. The relationships of this and several other structures are described

  12. Quantification of a decision-making failure probability of the accident management using cognitive analysis model

    Energy Technology Data Exchange (ETDEWEB)

    Yoshida, Yoshitaka; Ohtani, Masanori [Institute of Nuclear Safety System, Inc., Mihama, Fukui (Japan); Fujita, Yushi [TECNOVA Corp., Tokyo (Japan)

    2002-09-01

    In nuclear power plants, much knowledge is acquired through probabilistic safety assessment (PSA) of severe accidents, and accident management (AM) is prepared. It is necessary to evaluate the effectiveness of AM in PSA using the decision-making failure probability of the emergency organization, the operation failure probability of operators, the success criteria of AM, and the reliability of AM equipment. However, there has been no suitable quantification method for PSA so far to obtain the decision-making failure probability, because the decision-making failure of an emergency organization involves knowledge-based error. In this work, we developed a new method for quantification of the decision-making failure probability of an emergency organization deciding an AM strategy in a nuclear power plant during a severe accident, using a cognitive analysis model, and tried to apply it to a typical pressurized water reactor (PWR) plant. As a result: (1) It could quantify the decision-making failure probability adjusted to PSA for general analysts, who do not necessarily possess professional human factors knowledge, by choosing suitable values of a basic failure probability and an error factor. (2) The decision-making failure probabilities of six AMs were in the range of 0.23 to 0.41 using the screening evaluation method and in the range of 0.10 to 0.19 using the detailed evaluation method, as the result of a trial evaluation based on severe accident analysis of a typical PWR plant; as a result of a sensitivity analysis of the conservative assumptions, the failure probability decreased by about 50%. (3) Theoretically, the failure probability using the screening evaluation method exceeds that using the detailed evaluation method with 99% probability, and in this study the failure probabilities of the AMs exceeded it in 100% of cases. From this result, it was shown that the screening evaluation method was more conservative than the detailed evaluation method, and the screening evaluation method satisfied

  13. Quantification of a decision-making failure probability of the accident management using cognitive analysis model

    International Nuclear Information System (INIS)

    Yoshida, Yoshitaka; Ohtani, Masanori; Fujita, Yushi

    2002-01-01

    In nuclear power plants, much knowledge is acquired through probabilistic safety assessment (PSA) of severe accidents, and accident management (AM) is prepared. It is necessary to evaluate the effectiveness of AM in PSA using the decision-making failure probability of the emergency organization, the operation failure probability of operators, the success criteria of AM, and the reliability of AM equipment. However, there has been no suitable quantification method for PSA so far to obtain the decision-making failure probability, because the decision-making failure of an emergency organization involves knowledge-based error. In this work, we developed a new method for quantification of the decision-making failure probability of an emergency organization deciding an AM strategy in a nuclear power plant during a severe accident, using a cognitive analysis model, and tried to apply it to a typical pressurized water reactor (PWR) plant. As a result: (1) It could quantify the decision-making failure probability adjusted to PSA for general analysts, who do not necessarily possess professional human factors knowledge, by choosing suitable values of a basic failure probability and an error factor. (2) The decision-making failure probabilities of six AMs were in the range of 0.23 to 0.41 using the screening evaluation method and in the range of 0.10 to 0.19 using the detailed evaluation method, as the result of a trial evaluation based on severe accident analysis of a typical PWR plant; as a result of a sensitivity analysis of the conservative assumptions, the failure probability decreased by about 50%. (3) Theoretically, the failure probability using the screening evaluation method exceeds that using the detailed evaluation method with 99% probability, and in this study the failure probabilities of the AMs exceeded it in 100% of cases. From this result, it was shown that the screening evaluation method was more conservative than the detailed evaluation method, and the screening evaluation method satisfied

  14. Safety Management in an Oil Company through Failure Mode Effects and Critical Analysis

    Directory of Open Access Journals (Sweden)

    Benedictus Rahardjo

    2016-06-01

    This study applies Failure Mode Effects and Criticality Analysis (FMECA) to improve the safety of a production system, specifically the production process of an oil company. Since food processing is a worldwide issue and the self-management of a food company is more important than relying on government regulations, this study focused on that matter. The initial step of this study is to identify and analyze the criticality of the potential failure modes of the production process. Corrective action is then taken to minimize the probability of repeating the same failure mode, followed by a re-analysis of its criticality. The results of the corrective actions were compared with the conditions before improvement by testing the significance of the difference using a two-sample t-test. The final measured result is the Criticality Priority Number (CPN), which refers to the severity category of the failure mode and the probability of occurrence of the same failure mode. The recommended actions proposed by the FMECA significantly reduced the CPN compared with the value before improvement, with an improvement of 38.46% for the palm olein case study.
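
    As a minimal sketch of the before/after comparison described above, the following Python snippet applies a two-sample t-test to hypothetical criticality scores recorded before and after corrective actions; the values are invented for illustration.

        import numpy as np
        from scipy.stats import ttest_ind

        # Hypothetical CPN scores for the same failure modes before and
        # after corrective actions (invented values).
        before = np.array([20, 16, 25, 18, 22, 15, 24])
        after = np.array([12, 10, 15, 11, 14, 9, 13])

        t_stat, p_value = ttest_ind(before, after)
        print(f"t = {t_stat:.2f}, P = {p_value:.4f}")
        # A small P suggests the reduction in criticality is significant.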

  15. Preliminary Failure Modes and Effects Analysis of the US DCLL Test Blanket Module

    Energy Technology Data Exchange (ETDEWEB)

    Lee C. Cadwallader

    2010-06-01

    This report presents the results of a preliminary failure modes and effects analysis (FMEA) of a small tritium-breeding test blanket module design for the International Thermonuclear Experimental Reactor. The FMEA was quantified with “generic” component failure rate data, and the failure events are binned into postulated initiating event families and frequency categories for safety assessment. An appendix to this report contains repair time data to support an occupational radiation exposure assessment for test blanket module maintenance.

  16. Preliminary Failure Modes and Effects Analysis of the US DCLL Test Blanket Module

    Energy Technology Data Exchange (ETDEWEB)

    Lee C. Cadwallader

    2007-08-01

    This report presents the results of a preliminary failure modes and effects analysis (FMEA) of a small tritium-breeding test blanket module design for the International Thermonuclear Experimental Reactor. The FMEA was quantified with “generic” component failure rate data, and the failure events are binned into postulated initiating event families and frequency categories for safety assessment. An appendix to this report contains repair time data to support an occupational radiation exposure assessment for test blanket module maintenance.

  17. Preliminary Failure Modes and Effects Analysis of the US DCLL Test Blanket Module

    International Nuclear Information System (INIS)

    Lee C. Cadwallader

    2007-01-01

    This report presents the results of a preliminary failure modes and effects analysis (FMEA) of a small tritium-breeding test blanket module design for the International Thermonuclear Experimental Reactor. The FMEA was quantified with 'generic' component failure rate data, and the failure events are binned into postulated initiating event families and frequency categories for safety assessment. An appendix to this report contains repair time data to support an occupational radiation exposure assessment for test blanket module maintenance

  18. Study and analysis of failure modes of the electrolytic capacitors and thyristors, applied to the protection system of the LHC (Large Hadron Collider)

    International Nuclear Information System (INIS)

    Perisse, F.

    2003-07-01

    The study presented in this thesis is a contribution to the analysis of the failure modes of electrolytic capacitors and thyristors. The components studied are the main elements of the protection system of the superconducting magnets of the LHC. The study of the ageing of electrolytic capacitors has shown that their reliability is strongly related to their technological characteristics. The evolution of their principal ageing indicator (the equivalent series resistance, ESR) can be modelled according to different laws chosen according to the operating mode. It appears that the prediction of failures of these components, other than those due to wear, can only be statistical, taking into account the many causes of failure involving various failure modes. In order to evaluate the influence of the ageing of electrolytic capacitors on a system, simple models taking these parameters into account, as well as the effective temperature of the component, are proposed. An acceptable precision is obtained, considering the simplicity of the models. The study of the thyristors has shown that these components exhibit little parameter drift under static ageing; on the other hand, many failures by short-circuit were observed. These failures always have a local origin and are due to defects in the components. The breakdown voltage strongly depends on the quality of the thyristor as well as the technology employed. (author)

  19. Characterization of the Failure Site Distribution in MIM Devices Using Zoomed Wavelet Analysis

    Science.gov (United States)

    Muñoz-Gorriz, J.; Monaghan, S.; Cherkaoui, K.; Suñé, J.; Hurley, P. K.; Miranda, E.

    2018-05-01

    The angular wavelet analysis is applied to the study of the spatial distribution of breakdown (BD) spots in Pt/HfO2/Pt capacitors with square and circular areas. The method was originally developed for rectangular areas, so a zoomed approach needs to be considered when the observation window does not coincide with the device area. The BD spots appear as a consequence of the application of electrical stress to the device. The stress generates defects within the dielectric film, a process that ends with the formation of a percolation path between the electrodes and the melting of the top metal layer because of the high release of energy. The BD spots have lateral sizes ranging from 1 μm to 3 μm and they appear as a point pattern that can be studied using spatial statistics methods. In this paper, we report the application of the angular wavelet method as a complementary tool for the analysis of the distribution of failure sites in large-area metal-insulator-metal (MIM) devices. The differences between considering a continuous or a discrete wavelet and the role played by the number of BD spots are also investigated.

  20. Feature-Based Statistical Analysis of Combustion Simulation Data

    Energy Technology Data Exchange (ETDEWEB)

    Bennett, J; Krishnamoorthy, V; Liu, S; Grout, R; Hawkes, E; Chen, J; Pascucci, V; Bremer, P T

    2011-11-18

    We present a new framework for feature-based statistical analysis of large-scale scientific data and demonstrate its effectiveness by analyzing features from Direct Numerical Simulations (DNS) of turbulent combustion. Turbulent flows are ubiquitous and account for transport and mixing processes in combustion, astrophysics, fusion, and climate modeling among other disciplines. They are also characterized by coherent structure or organized motion, i.e. nonlocal entities whose geometrical features can directly impact molecular mixing and reactive processes. While traditional multi-point statistics provide correlative information, they lack nonlocal structural information, and hence, fail to provide mechanistic causality information between organized fluid motion and mixing and reactive processes. Hence, it is of great interest to capture and track flow features and their statistics together with their correlation with relevant scalar quantities, e.g. temperature or species concentrations. In our approach we encode the set of all possible flow features by pre-computing merge trees augmented with attributes, such as statistical moments of various scalar fields, e.g. temperature, as well as length-scales computed via spectral analysis. The computation is performed in an efficient streaming manner in a pre-processing step and results in a collection of meta-data that is orders of magnitude smaller than the original simulation data. This meta-data is sufficient to support a fully flexible and interactive analysis of the features, allowing for arbitrary thresholds, providing per-feature statistics, and creating various global diagnostics such as Cumulative Density Functions (CDFs), histograms, or time-series. We combine the analysis with a rendering of the features in a linked-view browser that enables scientists to interactively explore, visualize, and analyze the equivalent of one terabyte of simulation data. We highlight the utility of this new framework for combustion

  1. Preliminary Analysis of the Common Cause Failure Events for Domestic Nuclear Power Plants

    International Nuclear Information System (INIS)

    Kang, Daeil; Han, Sanghoon

    2007-01-01

    It is known that common cause failure (CCF) events have a great effect on the safety and probabilistic safety assessment (PSA) results of nuclear power plants (NPPs). However, domestic studies have mainly focused on the analysis method and modeling of CCF events. Thus, an analysis of the CCF events for domestic NPPs was performed to establish a domestic database of CCF events and to deliver them to the operation office of the International Common Cause Failure Data Exchange (ICDE) project. This paper presents the analysis results of the CCF events for domestic nuclear power plants.

  2. Statistical learning methods in high-energy and astrophysics analysis

    Energy Technology Data Exchange (ETDEWEB)

    Zimmermann, J. [Forschungszentrum Juelich GmbH, Zentrallabor fuer Elektronik, 52425 Juelich (Germany) and Max-Planck-Institut fuer Physik, Foehringer Ring 6, 80805 Munich (Germany)]. E-mail: zimmerm@mppmu.mpg.de; Kiesling, C. [Max-Planck-Institut fuer Physik, Foehringer Ring 6, 80805 Munich (Germany)

    2004-11-21

    We discuss several popular statistical learning methods used in high-energy- and astro-physics analysis. After a short motivation for statistical learning we present the most popular algorithms and discuss several examples from current research in particle- and astro-physics. The statistical learning methods are compared with each other and with standard methods for the respective application.

  3. Statistical learning methods in high-energy and astrophysics analysis

    International Nuclear Information System (INIS)

    Zimmermann, J.; Kiesling, C.

    2004-01-01

    We discuss several popular statistical learning methods used in high-energy- and astro-physics analysis. After a short motivation for statistical learning we present the most popular algorithms and discuss several examples from current research in particle- and astro-physics. The statistical learning methods are compared with each other and with standard methods for the respective application

  4. Nurses' decision making in heart failure management based on heart failure certification status.

    Science.gov (United States)

    Albert, Nancy M; Bena, James F; Buxbaum, Denise; Martensen, Linda; Morrison, Shannon L; Prasun, Marilyn A; Stamp, Kelly D

    Research findings on the value of nurse certification were based on subjective perceptions or biased by correlations of certification status and global clinical factors. In heart failure, the value of certification is unknown. Examine the value of certification based on nurses' decision-making. Cross-sectional study of nurses who completed heart failure clinical vignettes that reflected decision-making in clinical heart failure scenarios. Statistical tests included multivariable linear, logistic and proportional odds logistic regression models. Of the nurses (N = 605), 29.1% were heart failure certified, 35.0% were certified in another specialty/job role and 35.9% were not certified. In multivariable modeling, nurses certified in heart failure (versus not heart failure certified) had higher clinical vignette scores (p = 0.002), reflecting higher evidence-based decision making; nurses with another specialty/role certification (versus no certification) did not (p = 0.62). Heart failure certification, but not certification in other specialty/job roles, was associated with decisions that reflected delivery of high-quality care.

  5. The fuzzy approach to statistical analysis

    NARCIS (Netherlands)

    Coppi, Renato; Gil, Maria A.; Kiers, Henk A. L.

    2006-01-01

    For the last few decades, research studies have been developed in which a coalition of Fuzzy Sets Theory and Statistics has been established with different purposes. These are, namely: (i) to introduce new data analysis problems in which the objective involves either fuzzy relationships or fuzzy terms;

  6. Statistical analysis applied to safety culture self-assessment

    International Nuclear Information System (INIS)

    Macedo Soares, P.P.

    2002-01-01

    Interviews and opinion surveys are instruments used to assess the safety culture in an organization as part of the Safety Culture Enhancement Programme. Specific statistical tools are used to analyse the survey results. This paper presents an example of an opinion survey with the corresponding application of the statistical analysis and the conclusions obtained. Survey validation, frequency statistics, the Kolmogorov-Smirnov non-parametric test, Student's t-test and ANOVA mean-comparison tests, and the LSD post-hoc multiple comparison test are discussed. (author)
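
    As a minimal sketch of the tests listed above, the Python snippet below runs a Kolmogorov-Smirnov normality check and a one-way ANOVA on hypothetical survey scores from three departments; the data and group labels are invented.

        import numpy as np
        from scipy import stats

        rng = np.random.default_rng(7)
        # Hypothetical mean Likert-style scores per respondent, by department.
        ops = rng.normal(3.8, 0.5, 40)
        maint = rng.normal(3.5, 0.5, 35)
        eng = rng.normal(4.0, 0.5, 30)

        # Kolmogorov-Smirnov test of one group against a fitted normal.
        ks_stat, ks_p = stats.kstest(ops, "norm", args=(ops.mean(), ops.std(ddof=1)))
        print(f"KS: D = {ks_stat:.3f}, P = {ks_p:.3f}")

        # One-way ANOVA comparing department means.
        f_stat, anova_p = stats.f_oneway(ops, maint, eng)
        print(f"ANOVA: F = {f_stat:.2f}, P = {anova_p:.4f}")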

  7. A model-based prognostic approach to predict interconnect failure using impedance analysis

    Energy Technology Data Exchange (ETDEWEB)

    Kwon, Dae Il; Yoon, Jeong Ah [Dept. of System Design and Control Engineering. Ulsan National Institute of Science and Technology, Ulsan (Korea, Republic of)

    2016-10-15

    The reliability of electronic assemblies is largely affected by the health of interconnects, such as solder joints, which provide mechanical, electrical and thermal connections between circuit components. During field lifecycle conditions, interconnects are often subjected to a DC open circuit, one of the most common interconnect failure modes, due to cracking. An interconnect damaged by cracking is sometimes extremely hard to detect when it is a part of a daisy-chain structure, neighboring with other healthy interconnects that have not yet cracked. This cracked interconnect may seem to provide a good electrical contact due to the compressive load applied by the neighboring healthy interconnects, but it can cause the occasional loss of electrical continuity under operational and environmental loading conditions in field applications. Thus, cracked interconnects can lead to the intermittent failure of electronic assemblies and eventually to permanent failure of the product or the system. This paper introduces a model-based prognostic approach to quantitatively detect and predict interconnect failure using impedance analysis and particle filtering. Impedance analysis was previously reported as a sensitive means of detecting incipient changes at the surface of interconnects, such as cracking, based on the continuous monitoring of RF impedance. To predict the time to failure, particle filtering was used as a prognostic approach using the Paris model to address the fatigue crack growth. To validate this approach, mechanical fatigue tests were conducted with continuous monitoring of RF impedance while degrading the solder joints under test due to fatigue cracking. The test results showed the RF impedance consistently increased as the solder joints were degraded due to the growth of cracks, and particle filtering predicted the time to failure of the interconnects similarly to their actual times-to-failure based on the early sensitivity of RF impedance.
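
    The prognostic recipe described here, a Paris-law crack-growth model tracked by a particle filter, can be sketched compactly. The following is a minimal, self-contained illustration under invented parameters, not the authors' implementation; the noisy crack-size measurement stands in for the RF-impedance-derived damage indicator:

      import numpy as np

      rng = np.random.default_rng(3)
      Np = 2000                     # number of particles
      dS, Y = 80.0, 1.0             # stress range (MPa) and geometry factor (hypothetical)
      C_true, m = 1e-10, 3.0        # "true" Paris constants, unknown to the filter

      def grow(a, C, cycles=1000.0):
          """Integrate Paris' law da/dN = C*(Y*dS*sqrt(pi*a))^m over 'cycles'."""
          for _ in range(10):       # coarse explicit integration, 100 cycles per step
              a = a + C * (Y * dS * np.sqrt(np.pi * a)) ** m * (cycles / 10)
          return a

      a_true = 1e-3                                        # true crack size (m)
      a_p = np.full(Np, 1e-3)                              # particle crack sizes
      C_p = np.exp(rng.normal(np.log(1e-10), 0.5, Np))     # particle guesses of C

      for step in range(5):
          a_true = grow(a_true, C_true)
          z = a_true * (1.0 + rng.normal(0.0, 0.05))       # noisy size measurement
          a_p = grow(a_p, C_p)
          w = np.exp(-0.5 * ((z - a_p) / (0.05 * z)) ** 2) # measurement likelihood
          idx = rng.choice(Np, size=Np, p=w / w.sum())     # resample by weight
          a_p, C_p = a_p[idx], C_p[idx]
          print(f"step {step}: measured {z:.3e} m, estimate {a_p.mean():.3e} m")

    Time to failure is then read off by propagating the surviving particles forward until a critical crack size is reached.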

  8. Studies on failure kind analysis of the radiologic medical equipment in general hospital

    International Nuclear Information System (INIS)

    Lee, Woo Cheul; Kim, Jeong Lae

    1999-01-01

    This paper presents an analysis of failure data for medical devices, based on maintenance record cards covering unit failure modes, failure modes by hospital, and MTBF. The results of the analysis were as follows: 1. Among unit failure modes, QC/PM failures were the most frequent (A hospital 33.9%, B hospital 30.9%, C hospital 30.3%), followed by electrical and electronic failures (A hospital 23.5%, B hospital 25.3%, C hospital 28%) and mechanical failures (A hospital 19.6%, B hospital 22.5%, C hospital 25.4%). 2. By device, failures were most frequent for mobile X-ray devices (A hospital 62.5%, B hospital 69.5%, C hospital 37.4%) and least frequent for ultrasound (sono) devices (A hospital 16.76%, B hospital 8.4%, C hospital 7%). 3. Mean time between failures (MTBF) was highest for ultrasound devices and lowest for mobile X-ray devices, at 200-400 hours between failures. 4. The average failure ratio was highest for mobile X-ray devices (A hospital 31.3%, B hospital 34.8%, C hospital 18.7%) and lowest for ultrasound devices (A hospital 8.4%, B hospital 4.2%, C hospital 3.5%). 5. For the QC/PM share of unit failure modes, hospital A was highest in mammography X-ray devices (50%) and lowest in gastro X-ray devices (26.4%); hospital B was highest in mobile X-ray devices (56%) and lowest in gastro X-ray devices (12%); hospital C was highest in R/F X-ray devices (60%) and lowest in universal X-ray devices (21%). It was found that failures of the units responsible for most failures decreased under systematic management. A preventive maintenance schedule was drawn up focusing on adjustment of operating conditions and dust removal.

  9. Advanced composites structural concepts and materials technologies for primary aircraft structures: Structural response and failure analysis

    Science.gov (United States)

    Dorris, William J.; Hairr, John W.; Huang, Jui-Tien; Ingram, J. Edward; Shah, Bharat M.

    1992-01-01

    Non-linear analysis methods were adapted and incorporated in the finite element based DIAL code. These methods are necessary to evaluate the global response of a stiffened structure under combined in-plane and out-of-plane loading. They include the arc-length method and a target point analysis procedure. A new interface material model was implemented that can model elastic-plastic behavior of the bond adhesive; its direct application is in skin/stiffener interface failure assessment. Addition of the AML (angle minus longitudinal, or load) failure procedure and Hashin's failure criteria provides added capability in the failure predictions. Interactive Stiffened Panel Analysis modules were developed as interactive pre- and post-processors. Each module provides the means of performing self-initiated finite element based analysis of primary structures such as a flat or curved stiffened panel, a corrugated flat sandwich panel, or a curved geodesic fuselage panel. These modules bring finite element analysis into the design of composite structures without requiring the user to know much about the techniques and procedures needed to perform a finite element analysis from scratch. An interactive finite element code was developed to predict bolted joint strength considering material and geometrical non-linearity. The developed method conducts an ultimate strength failure analysis using a set of material degradation models.

  10. Laboratory and 3-D distinct element analysis of the failure mechanism of a slope under external surcharge

    Science.gov (United States)

    Li, N.; Cheng, Y. M.

    2015-01-01

    Landslides are a major disaster resulting in considerable loss of human lives and property damage in hilly terrain in Hong Kong, China and many other countries. The factor of safety and the critical slip surface have been the main considerations in slope stability analysis in the past, while the detailed post-failure conditions of slopes have not been considered in sufficient detail. There is, however, increasing interest in the consequences after the initiation of failure, including the development and propagation of the failure surfaces, the amount of failed mass and runout, and the affected region. To assess the development of slope failure in more detail and to consider the potential danger of slopes after failure has initiated, the slope stability problem under external surcharge is analyzed by the distinct element method (DEM) and a laboratory model test in the present research. A more refined study of the development of failure, the microscopic failure mechanisms and the post-failure mechanisms of slopes is carried out. The numerical modeling method and the various findings of the present work provide an alternative method of analysis of slope failure, which can give additional information not available from the classical methods of analysis.

  11. Foundation of statistical energy analysis in vibroacoustics

    CERN Document Server

    Le Bot, A

    2015-01-01

    This title deals with the statistical theory of sound and vibration. The foundation of statistical energy analysis is presented in great detail. In the modal approach, an introduction to random vibration with application to complex systems having a large number of modes is provided. For the wave approach, the phenomena of propagation, group speed, and energy transport are extensively discussed. Particular emphasis is given to the emergence of diffuse field, the central concept of the theory.

  12. Comprehensive Deployment Method for Technical Characteristics Base on Multi-failure Modes Correlation Analysis

    Science.gov (United States)

    Zheng, W.; Gao, J. M.; Wang, R. X.; Chen, K.; Jiang, Y.

    2017-12-01

    This paper puts forward a new method of technical characteristics deployment based on Reliability Function Deployment (RFD), developed by analysing the advantages and shortcomings of related research on mechanical reliability design. The matrix decomposition structure of RFD is used to describe the correlation between failure mechanisms, soft failures and hard failures. Considering the correlation of multiple failure modes, the reliability loss of one failure mode with respect to the whole part is defined, and a calculation and analysis model for reliability loss is presented. According to the reliability loss, the reliability index value of the whole part is allocated to each failure mode. On the basis of the deployment of reliability index values, the inverse reliability method is employed to acquire the values of the technical characteristics. The feasibility and validity of the proposed method are illustrated by a development case of a machining centre's transmission system.

  13. Concepts for measuring maintenance performance and methods for analysing competing failure modes

    International Nuclear Information System (INIS)

    Cooke, Roger; Paulsen, Jette

    1997-01-01

    Measurement of maintenance performance is done on the basis of component history data in which service sojourns are distinguished according to whether they terminate in corrective or preventive maintenance. From the viewpoint of data analysis, corrective and preventive maintenance constitute competing failure modes. This article examines ways to assess maintenance performance without introducing statistical assumptions, then introduces a plausible statistical model for describing the interaction of preventive and corrective maintenance, and finally illustrates these with examples from the Nordic TUD data system.
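
    As a toy illustration of the competing-failure-modes view of service sojourns (the data are hypothetical, not from the TUD system), cause-specific rates for corrective (CM) and preventive (PM) terminations can be estimated as events per unit of accumulated service time:

      # Each sojourn: (service duration in hours, terminating maintenance type).
      sojourns = [
          (1200.0, "CM"), (800.0, "PM"), (1500.0, "PM"),
          (400.0, "CM"), (2000.0, "PM"), (950.0, "CM"),
      ]

      total_time = sum(t for t, _ in sojourns)
      for mode in ("CM", "PM"):
          n = sum(1 for _, m in sojourns if m == mode)
          # Cause-specific rate: terminations of this type per service hour.
          print(f"{mode}: {n} events, rate = {n / total_time:.2e} per hour")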

  14. A quasi-independence model to estimate failure rates

    International Nuclear Information System (INIS)

    Colombo, A.G.

    1988-01-01

    The use of a quasi-independence model to estimate failure rates is investigated. Gate valves of nuclear plants are considered, and two qualitative covariates are taken into account: plant location and reactor system. Independence between the two covariates and an exponential failure model are assumed. The failure rate of the components of a given system and plant is assumed to be a constant, but it may vary from one system to another and from one plant to another. This leads to the analysis of a contingency table. A particular feature of the model is the different operating time of the components in the various cells which can also be equal to zero. The concept of independence of the covariates is then replaced by that of quasi-independence. The latter definition, however, is used in a broader sense than usual. Suitable statistical tests are discussed and a numerical example illustrates the use of the method. (author)
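
    A minimal numerical sketch of such a quasi-independence fit, assuming a multiplicative plant x system rate and Poisson failure counts (all data below are hypothetical), could look like this; cells with zero operating time simply drop out of the updates:

      import numpy as np

      # Hypothetical failures n[i,j] and operating hours T[i,j] for
      # plants (rows) x systems (columns); T = 0 marks an empty cell.
      n = np.array([[3, 1, 0], [2, 0, 4]], dtype=float)
      T = np.array([[1e4, 5e3, 0.0], [8e3, 0.0, 2e4]], dtype=float)

      a = np.ones(n.shape[0])   # plant effects
      b = np.ones(n.shape[1])   # system effects
      for _ in range(200):      # alternating Poisson maximum-likelihood updates
          a = n.sum(axis=1) / (T * b).sum(axis=1)
          b = n.sum(axis=0) / (T.T * a).sum(axis=1)

      lam = np.outer(a, b)      # fitted failure rate per cell (failures/hour)
      print(lam)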

  15. Statistical Analysis of Big Data on Pharmacogenomics

    Science.gov (United States)

    Fan, Jianqing; Liu, Han

    2013-01-01

    This paper discusses statistical methods for estimating complex correlation structure from large pharmacogenomic datasets. We selectively review several prominent statistical methods for estimating large covariance matrix for understanding correlation structure, inverse covariance matrix for network modeling, large-scale simultaneous tests for selecting significantly differently expressed genes and proteins and genetic markers for complex diseases, and high dimensional variable selection for identifying important molecules for understanding molecule mechanisms in pharmacogenomics. Their applications to gene network estimation and biomarker selection are used to illustrate the methodological power. Several new challenges of Big data analysis, including complex data distribution, missing data, measurement error, spurious correlation, endogeneity, and the need for robust statistical methods, are also discussed. PMID:23602905
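
    For instance, sparse inverse covariance estimation of the kind reviewed here can be run with scikit-learn's graphical lasso; the data below are random stand-ins for expression measurements:

      import numpy as np
      from sklearn.covariance import GraphicalLasso

      rng = np.random.default_rng(0)
      X = rng.standard_normal((200, 20))   # 200 samples x 20 hypothetical genes

      # L1-penalized inverse covariance: zero off-diagonal entries of the
      # precision matrix suggest conditional independence between genes,
      # the basis of network modeling.
      model = GraphicalLasso(alpha=0.1).fit(X)
      edges = np.abs(model.precision_) > 1e-6
      print("non-zero partial correlations:", (edges.sum() - 20) // 2)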

  16. HistFitter software framework for statistical data analysis

    Energy Technology Data Exchange (ETDEWEB)

    Baak, M. [CERN, Geneva (Switzerland); Besjes, G.J. [Radboud University Nijmegen, Nijmegen (Netherlands); Nikhef, Amsterdam (Netherlands); Cote, D. [University of Texas, Arlington (United States); Koutsman, A. [TRIUMF, Vancouver (Canada); Lorenz, J. [Ludwig-Maximilians-Universitaet Muenchen, Munich (Germany); Excellence Cluster Universe, Garching (Germany); Short, D. [University of Oxford, Oxford (United Kingdom)

    2015-04-15

    We present a software framework for statistical data analysis, called HistFitter, that has been used extensively by the ATLAS Collaboration to analyze big datasets originating from proton-proton collisions at the Large Hadron Collider at CERN. Since 2012 HistFitter has been the standard statistical tool in searches for supersymmetric particles performed by ATLAS. HistFitter is a programmable and flexible framework to build, book-keep, fit, interpret and present results of data models of nearly arbitrary complexity. Starting from an object-oriented configuration, defined by users, the framework builds probability density functions that are automatically fit to data and interpreted with statistical tests. Internally HistFitter uses the statistics packages RooStats and HistFactory. A key innovation of HistFitter is its design, which is rooted in analysis strategies of particle physics. The concepts of control, signal and validation regions are woven into its fabric. These are progressively treated with statistically rigorous built-in methods. Being capable of working with multiple models at once that describe the data, HistFitter introduces an additional level of abstraction that allows for easy bookkeeping, manipulation and testing of large collections of signal hypotheses. Finally, HistFitter provides a collection of tools to present results with publication quality style through a simple command-line interface. (orig.)

  17. HistFitter software framework for statistical data analysis

    International Nuclear Information System (INIS)

    Baak, M.; Besjes, G.J.; Cote, D.; Koutsman, A.; Lorenz, J.; Short, D.

    2015-01-01

    We present a software framework for statistical data analysis, called HistFitter, that has been used extensively by the ATLAS Collaboration to analyze big datasets originating from proton-proton collisions at the Large Hadron Collider at CERN. Since 2012 HistFitter has been the standard statistical tool in searches for supersymmetric particles performed by ATLAS. HistFitter is a programmable and flexible framework to build, book-keep, fit, interpret and present results of data models of nearly arbitrary complexity. Starting from an object-oriented configuration, defined by users, the framework builds probability density functions that are automatically fit to data and interpreted with statistical tests. Internally HistFitter uses the statistics packages RooStats and HistFactory. A key innovation of HistFitter is its design, which is rooted in analysis strategies of particle physics. The concepts of control, signal and validation regions are woven into its fabric. These are progressively treated with statistically rigorous built-in methods. Being capable of working with multiple models at once that describe the data, HistFitter introduces an additional level of abstraction that allows for easy bookkeeping, manipulation and testing of large collections of signal hypotheses. Finally, HistFitter provides a collection of tools to present results with publication quality style through a simple command-line interface. (orig.)

  18. Failure propagation tests and analysis at PNC

    International Nuclear Information System (INIS)

    Tanabe, H.; Miyake, O.; Daigo, Y.; Sato, M.

    1984-01-01

    Failure propagation tests have been conducted using the Large Leak Sodium Water Reaction Test Rig (SWAT-1) and the Steam Generator Safety Test Facility (SWAT-3) at PNC in order to establish the safety design of the LMFBR prototype Monju steam generators. Test objectives are to provide data for selecting a design basis leak (DBL), data on the time history of failure propagations, data on the mechanism of the failures, and data on re-use of tubes in the steam generators that have suffered leaks. Eighteen fundamental tests have been performed in an intermediate leak region using the SWAT-1 test rig, and ten failure propagation tests have been conducted in the region from a small leak to a large leak using the SWAT-3 test facility. From the test results it was concluded that a dominant mechanism was tube wastage, and it took more than one minute until each failure propagation occurred. Also, the total leak rate in full sequence simulation tests including a water dump was far less than that of one double-ended-guillotine (DEG) failure. Using such experimental data, a computer code, LEAP (Leak Enlargement and Propagation), has been developed for the purpose of estimating the possible maximum leak rate due to failure propagation. This paper describes the results of the failure propagation tests and the model structure and validation studies of the LEAP code. (author)

  19. Robust statistics and geochemical data analysis

    International Nuclear Information System (INIS)

    Di, Z.

    1987-01-01

    Advantages of robust procedures over ordinary least-squares procedures in geochemical data analysis is demonstrated using NURE data from the Hot Springs Quadrangle, South Dakota, USA. Robust principal components analysis with 5% multivariate trimming successfully guarded the analysis against perturbations by outliers and increased the number of interpretable factors. Regression with SINE estimates significantly increased the goodness-of-fit of the regression and improved the correspondence of delineated anomalies with known uranium prospects. Because of the ubiquitous existence of outliers in geochemical data, robust statistical procedures are suggested as routine procedures to replace ordinary least-squares procedures

  20. Failure analysis of radioisotopic heat source capsules tested under multi-axial conditions

    International Nuclear Information System (INIS)

    Zielinski, R.E.; Stacy, E.; Burgan, C.E.

    In order to qualify small radioisotopic heat sources for a 25-yr design life, multi-axial mechanical tests were performed on the structural components of the heat source. The results of these tests indicated that failure predominantly occurred in the middle of the weld ramp-down zone. Examination of the failure zone by standard metallographic techniques failed to indicate the true cause of failure. A modified technique utilizing chemical etching, scanning electron microscopy, and energy dispersive x-ray analysis was employed and dramatically indicated the true cause of failure, impurity concentration in the ramp-down zone. As a result of the initial investigation, weld parameters for the heat sources were altered. Example welds made with a pulse arc technique did not have this impurity buildup in the ramp-down zone

  1. Circulating Tumor Cell Analysis: Technical and Statistical Considerations for Application to the Clinic

    Directory of Open Access Journals (Sweden)

    Alison L. Allan

    2010-01-01

    Solid cancers are a leading cause of death worldwide, primarily due to the failure of effective clinical detection and treatment of metastatic disease in distant sites. There is growing evidence that the presence of circulating tumor cells (CTCs) in the blood of cancer patients may be an important indicator of the potential for metastatic disease and poor prognosis. Technological advances have now facilitated the enumeration and characterization of CTCs using methods such as PCR, flow cytometry, image-based immunologic approaches, immunomagnetic techniques, and microchip technology. However, the rare nature of these cells requires that very sensitive and robust detection/enumeration methods be developed and validated in order to implement CTC analysis for widespread use in the clinic. This review will focus on the important technical and statistical considerations that must be taken into account when designing and implementing CTC assays, as well as the subsequent interpretation of these results for the purposes of clinical decision making.

  2. Using Pre-Statistical Analysis to Streamline Monitoring Assessments

    International Nuclear Information System (INIS)

    Reed, J.K.

    1999-01-01

    A variety of statistical methods exist to aid evaluation of groundwater quality and subsequent decision making in regulatory programs. These methods are applied because of the large temporal and spatial extrapolations commonly applied to these data. In short, statistical conclusions often serve as a surrogate for knowledge. However, facilities with mature monitoring programs that have generated abundant data have inherently less uncertainty because of the sheer quantity of analytical results. In these cases, statistical tests can be less important, and "expert" data analysis should assume an important screening role. The WSRC Environmental Protection Department, working with the General Separations Area BSRI Environmental Restoration project team, has developed a method for an Integrated Hydrogeological Analysis (IHA) of historical water quality data from the F and H Seepage Basins groundwater remediation project. The IHA combines common-sense analytical techniques and a GIS presentation that force direct interactive evaluation of the data. The IHA can perform multiple data analysis tasks required by the RCRA permit. These include: (1) development of a groundwater quality baseline prior to remediation startup, (2) targeting of constituents for removal from the RCRA GWPS, (3) targeting of constituents for removal from the UIC permit, (4) targeting of constituents for reduced monitoring, (5) targeting of monitoring wells not producing representative samples, (6) reduction in statistical evaluation, and (7) identification of contamination from other facilities.

  3. Photovoltaic module reliability improvement through application testing and failure analysis

    Science.gov (United States)

    Dumas, L. N.; Shumka, A.

    1982-01-01

    During the first four years of the U.S. Department of Energy (DOE) National Photovoltaic Program, the Jet Propulsion Laboratory Low-Cost Solar Array (LSA) Project purchased about 400 kW of photovoltaic modules for tests and experiments. In order to identify, report, and analyze test and operational problems with the Block Procurement modules, a problem/failure reporting and analysis system was implemented by the LSA Project, with the main purpose of providing manufacturers with feedback from test and field experience needed for the improvement of product performance and reliability. A description of the more significant types of failures is presented, covering interconnects, cracked cells, dielectric breakdown, delamination, and corrosion. Current design practices and reliability evaluations are also discussed. The evaluation indicates that current module designs incorporate damage-resistant and fault-tolerant features which address the field failure mechanisms observed to date.

  4. Failure trend analysis for safety related components of Korean standard NPPs

    International Nuclear Information System (INIS)

    Choi, Sun Yeong; Han, Sang Hoon

    2005-01-01

    Component reliability data that reflect plant-specific characteristics are essential for the PSA of Korean nuclear power plants. We have performed a project to develop a component reliability database (KIND, Korea Integrated Nuclear Reliability Database) and software for database management and component reliability analysis. Based on this system, we collected component operation data and failure/repair data from the start of plant operation to 2002 for the YGN 3,4 and UCN 3,4 plants. Recently, we provided component failure rate data from KIND for the UCN 3,4 standard PSA model. We evaluated the components with the highest failure rates, using component reliability data from the start of plant operation to 1998 for YGN 3,4 and to 2000 for UCN 3,4, and identified their most frequently occurring failure modes. In this study, we analyze component failure trends and perform a site comparison against generic data, using the component reliability data extended to 2002 for UCN 3,4 and YGN 3,4. We focus on major safety-related rotating components such as pumps and EDGs.

  5. FAILPROB-A Computer Program to Compute the Probability of Failure of a Brittle Component; TOPICAL

    International Nuclear Information System (INIS)

    WELLMAN, GERALD W.

    2002-01-01

    FAILPROB is a computer program that applies the Weibull statistics characteristic of brittle failure of a material, along with the stress field resulting from a finite element analysis, to determine the probability of failure of a component. FAILPROB uses the statistical techniques for fast fracture prediction (but not the coding) from the NASA CARES/Life ceramic reliability package. FAILPROB provides the analyst at Sandia with a more convenient tool than CARES/Life because it is designed to behave in the tradition of structural analysis post-processing software such as ALGEBRA, in which the standard finite element database format EXODUS II is both read and written. This maintains compatibility with the entire SEACAS suite of post-processing software. A new technique to deal with the high local stresses computed for structures with singularities, such as glass-to-metal seals and ceramic-to-metal braze joints, is proposed and implemented. This technique provides failure probability computation that is insensitive to the finite element mesh employed in the underlying stress analysis. Included in this report are a brief discussion of the computational algorithms employed, user instructions, and example problems that both demonstrate the operation of FAILPROB and provide a starting point for verification and validation.
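
    The core computation described above is a weakest-link Weibull integration over the finite element stress field. A minimal sketch of that formula, with made-up material parameters and element data, and ignoring FAILPROB's mesh-insensitivity treatment, is:

      import numpy as np

      # Weakest-link Weibull failure probability from element stresses/volumes:
      #   Pf = 1 - exp( -sum_i (sigma_i / sigma_0)^m * V_i / V_0 )
      # m (Weibull modulus), sigma_0 and V_0 are material inputs; all values
      # below are hypothetical.
      m, sigma_0, V_0 = 10.0, 300.0, 1.0           # -, MPa, mm^3
      sigma = np.array([150.0, 220.0, 280.0])      # element principal stresses (MPa)
      vol = np.array([2.0, 1.5, 0.5])              # element volumes (mm^3)

      # Only tensile stresses contribute; compressive values are clipped to zero.
      risk = np.sum((np.clip(sigma, 0, None) / sigma_0) ** m * vol / V_0)
      Pf = 1.0 - np.exp(-risk)
      print(f"probability of failure: {Pf:.3e}")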

  6. Conjunction analysis and propositional logic in fMRI data analysis using Bayesian statistics.

    Science.gov (United States)

    Rudert, Thomas; Lohmann, Gabriele

    2008-12-01

    To evaluate logical expressions over different effects in data analyses using the general linear model (GLM) and to evaluate logical expressions over different posterior probability maps (PPMs). In functional magnetic resonance imaging (fMRI) data analysis, the GLM is applied to estimate unknown regression parameters. Based on the GLM, Bayesian statistics can be used to determine the probability of conjunction, disjunction, implication, or any other arbitrary logical expression over different effects or contrasts. For second-level inferences, PPMs from individual sessions or subjects are utilized. These PPMs can be combined into a logical expression and its probability can be computed. The methods proposed in this article are applied to data from a STROOP experiment and compared to conjunction analysis approaches based on test statistics. The combination of Bayesian statistics with propositional logic provides a new approach for data analyses in fMRI. Two different methods are introduced for propositional logic: the first for analyses using the GLM and the second for common inferences about different probability maps. The methods introduced extend the idea of conjunction analysis to a full propositional logic and adapt it from test statistics to Bayesian statistics. The new approaches allow inferences that are not possible with known standard methods in fMRI. (c) 2008 Wiley-Liss, Inc.
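
    For the second-level case, the probability of a logical expression over PPMs is straightforward once voxel-wise posteriors are available. The sketch below assumes, for simplicity, posterior independence across the maps (the paper works from the joint posterior, so this is only an approximation, and the values are invented):

      import numpy as np

      # Voxel-wise posterior probability maps for two effects.
      pA = np.array([0.95, 0.40, 0.80])
      pB = np.array([0.90, 0.85, 0.30])

      # Under the independence assumption:
      p_and = pA * pB                  # conjunction  A AND B
      p_or  = pA + pB - pA * pB        # disjunction  A OR B
      p_imp = 1.0 - pA * (1.0 - pB)    # implication  A -> B  (NOT A OR B)
      print(p_and, p_or, p_imp)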

  7. Failure Analysis of PRDS Pipe in a Thermal Power Plant Boiler

    Science.gov (United States)

    Ghosh, Debashis; Ray, Subrata; Mandal, Jiten; Mandal, Nilrudra; Shukla, Awdhesh Kumar

    2018-04-01

    The pressure reducer desuperheater (PRDS) pipeline is used to reduce the pressure and desuperheat the steam in various auxiliary pipelines. When the PRDS pipeline fails, the reliability of the boiler is affected. This paper investigates the probable cause or causes of failure of the PRDS tapping line. In that context, visual inspection, outside diameter and wall thickness measurement, chemical analysis, metallographic examination and hardness measurement were conducted as part of the investigative studies. Apart from these tests, mechanical testing and fractographic analysis were also conducted as supplements. It was concluded that the PRDS pipeline failed mainly due to graphitization caused by prolonged exposure of the pipe to elevated temperature. The use of improper material was chiefly responsible for the premature failure of the pipe.

  8. A quantitative impact analysis of sensor failures on human operator's decision making in nuclear power plants

    International Nuclear Information System (INIS)

    Seong, Poong Hyun

    2004-01-01

    In emergency or accident situations in nuclear power plants, human operators play important roles in generating appropriate control signals to mitigate the accident situation. In human reliability analysis (HRA) within the framework of probabilistic safety assessment (PSA), the failure probabilities of such appropriate actions are estimated and used for the safety analysis of nuclear power plants. Even though understanding the status of the plant is basically a process of information seeking and processing by human operators, conventional HRA methods such as THERP, HCR, and ASEP do not pay much attention to the possibility that wrong information is provided to human operators. In this paper, a quantitative impact analysis of providing wrong information to human operators due to instrument faults or sensor failures is performed. The quantitative impact analysis is performed based on a quantitative situation assessment model. By comparing the situation in which there are sensor failures with the situation in which there are none, the impact of sensor failures can be evaluated quantitatively. It is concluded that the impact of sensor failures is quite significant at the initial stages, but is gradually reduced as human operators make more and more observations. Even though the impact analysis is highly dependent on the situation assessment model, it is expected that conclusions based on other situation assessment models will be consistent with the conclusion made in this paper. (author)
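
    The flavor of such a quantitative situation assessment can be conveyed by a small Bayesian-updating sketch (not the paper's model; the two-state plant and the sensor failure probability q are invented for illustration):

      import numpy as np

      def update(prior, obs, q):
          # A healthy sensor reports the true state; a failed sensor (prob. q)
          # reports either state with probability 0.5.
          like = np.array([(1 - q) * (obs == s) + q * 0.5 for s in (0, 1)])
          post = prior * like
          return post / post.sum()

      belief = np.array([0.5, 0.5])      # prior over states {0: normal, 1: accident}
      for obs in [1, 1, 1, 1, 1]:        # repeated readings pointing to state 1
          belief = update(belief, obs, q=0.2)
          print(belief)

    Consistent with the paper's conclusion, a nonzero q distorts the early posteriors most, while repeated observations progressively wash the distortion out.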

  9. Gearbox Reliability Collaborative Gearbox 1 Failure Analysis Report: December 2010 - January 2011

    Energy Technology Data Exchange (ETDEWEB)

    Errichello, R.; Muller, J.

    2012-02-01

    Unintended gearbox failures have a significant impact on the cost of wind farm operations. In 2007, NREL initiated the Gearbox Reliability Collaborative (GRC). The project combines analysis, field testing, dynamometer testing, condition monitoring, and the development and population of a gearbox failure database in a multi-pronged approach to determine why wind turbine gearboxes do not achieve their expected design life. The collaborative of manufacturers, owners, researchers, and consultants focuses on gearbox testing and modeling and the development of a gearbox failure database. Collaborative members also investigate gearbox condition monitoring techniques. Data gained from the GRC will enable designers, developers, and manufacturers to improve gearbox designs and testing standards and create more robust modeling tools. GRC project essentials include the development of two identical, heavily instrumented representative gearbox designs. Knowledge gained from the field and dynamometer tests conducted on these gearboxes builds an understanding of how the selected loads and events translate into bearing and gear response. This report contains the analysis of the first gearbox design.

  10. Advances on the Failure Analysis of the Dam—Foundation Interface of Concrete Dams

    Directory of Open Access Journals (Sweden)

    Luis Altarejos-García

    2015-12-01

    Failure analysis of the dam-foundation interface in concrete dams is characterized by complexity, uncertainties in models and parameters, and strongly non-linear softening behavior. In practice, these uncertainties are dealt with through a well-structured mixture of experience, best practices and prudent, conservative design approaches based on the safety factor concept. Yet a sound, deep knowledge of some aspects of this failure mode remains lacking, as these aspects have been offset in practical applications by the use of this conservative approach. In this paper we show a strategy to analyse this failure mode under a reliability-based approach. The proposed methodology integrates epistemic uncertainty on the spatial variability of strength parameters with data from dam monitoring. The purpose is to produce meaningful and useful information regarding the probability of occurrence of this failure mode that can be incorporated in risk-informed dam safety reviews. In addition, relationships between probability of failure and factors of safety are obtained. This research is supported by more than a decade of intensive professional practice on real-world cases, and its final purpose is to bring some clarity and guidance and to contribute to the improvement of current knowledge and best practices on such an important dam safety concern.
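
    A reliability-based treatment of this failure mode typically reduces to Monte Carlo sampling of a sliding limit state. The sketch below is a bare-bones illustration with invented loads and strength distributions, not the paper's methodology (which adds spatial variability and monitoring data):

      import numpy as np

      rng = np.random.default_rng(1)
      N = 200_000
      c    = rng.lognormal(mean=np.log(0.3), sigma=0.3, size=N)  # cohesion (MPa)
      tanf = rng.normal(1.0, 0.15, size=N)                       # tan(friction angle)
      W, U, H, A = 50.0, 12.0, 20.0, 40.0  # weight, uplift, horizontal load (MN), area (m^2)

      # Mohr-Coulomb resisting force vs. driving force along the interface.
      resisting = c * A + (W - U) * tanf   # MN
      fs = resisting / H                   # factor of safety per sample
      pf = np.mean(fs < 1.0)
      print(f"P(failure) ~= {pf:.2e}, median FS = {np.median(fs):.2f}")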

  11. Failure mode and effects analysis of witnessing protocols for ensuring traceability during IVF.

    Science.gov (United States)

    Rienzi, Laura; Bariani, Fiorenza; Dalla Zorza, Michela; Romano, Stefania; Scarica, Catello; Maggiulli, Roberta; Nanni Costa, Alessandro; Ubaldi, Filippo Maria

    2015-10-01

    Traceability of cells during IVF is a fundamental aspect of treatment, and involves witnessing protocols. Failure mode and effects analysis (FMEA) is a method of identifying real or potential breakdowns in processes, and allows strategies to mitigate risks to be developed. To examine the risks associated with witnessing protocols, an FMEA was carried out in a busy IVF centre, before and after implementation of an electronic witnessing system (EWS). A multidisciplinary team was formed and moderated by human factors specialists. Possible causes of failures, and their potential effects, were identified and risk priority number (RPN) for each failure calculated. A second FMEA analysis was carried out after implementation of an EWS. The IVF team identified seven main process phases, 19 associated process steps and 32 possible failure modes. The highest RPN was 30, confirming the relatively low risk that mismatches may occur in IVF when a manual witnessing system is used. The introduction of the EWS allowed a reduction in the moderate-risk failure mode by two-thirds (highest RPN = 10). In our experience, FMEA is effective in supporting multidisciplinary IVF groups to understand the witnessing process, identifying critical steps and planning changes in practice to enable safety to be enhanced. Copyright © 2015 Reproductive Healthcare Ltd. Published by Elsevier Ltd. All rights reserved.
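
    The RPN bookkeeping behind an FMEA of this kind is simple to reproduce; in the sketch below, the failure modes and their severity/occurrence/detectability scores are invented for illustration:

      # Each failure mode is rated for severity (S), occurrence (O) and
      # detectability (D); the risk priority number is RPN = S * O * D.
      failure_modes = [
          ("sample tube mislabelled",     5, 2, 3),
          ("witness step skipped",        5, 3, 2),
          ("wrong patient record opened", 4, 2, 2),
      ]

      # Rank failure modes by descending RPN to prioritise corrective actions.
      for name, s, o, d in sorted(failure_modes, key=lambda f: -(f[1] * f[2] * f[3])):
          print(f"RPN={s * o * d:3d}  {name}")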

  12. Common-Cause Failure Analysis in Event Assessment

    International Nuclear Information System (INIS)

    Rasmuson, D.M.; Kelly, D.L.

    2008-01-01

    This paper reviews the basic concepts of modeling common-cause failures (CCFs) in reliability and risk studies and then applies these concepts to the treatment of CCF in event assessment. The cases of a failed component (with and without shared CCF potential) and a component being unavailable due to preventive maintenance or testing are addressed. The treatment of two related failure modes (e.g. failure to start and failure to run) is a new feature of this paper, as is the treatment of asymmetry within a common-cause component group
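
    As a pointer to the simplest of the parametric CCF models touched on in such reviews, the beta-factor model splits a component's total failure rate into independent and common-cause parts (values below are hypothetical):

      # Beta-factor model: a fraction beta of the total failure rate lam_t
      # fails all components of the common-cause group simultaneously.
      lam_t = 1e-5    # total failure rate per hour
      beta = 0.05     # beta factor

      lam_ccf = beta * lam_t          # common-cause failure rate
      lam_ind = (1 - beta) * lam_t    # independent failure rate
      print(lam_ccf, lam_ind)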

  13. SU-F-T-246: Evaluation of Healthcare Failure Mode And Effect Analysis For Risk Assessment

    International Nuclear Information System (INIS)

    Harry, T; Manger, R; Cervino, L; Pawlicki, T

    2016-01-01

    Purpose: To evaluate the differences between the Veterans Affairs Healthcare Failure Mode and Effects Analysis (HFMEA) and the AAPM Task Group 100 Failure Mode and Effects Analysis (FMEA) risk assessment techniques, as applied to a stereotactic radiosurgery (SRS) procedure. Understanding the differences in the techniques' methodologies and outcomes will provide further insight into the applicability and utility of risk assessment exercises in radiation therapy. Methods: An HFMEA risk assessment was performed on a stereotactic radiosurgery procedure. A previous study from our institution completed an FMEA of our SRS procedure, and the process map generated from this work was used for the HFMEA. The process of performing the HFMEA scoring was analyzed, and the results from both analyses were compared. Results: The key differences between the two risk assessments are the scoring criteria for failure modes and the identification of critical failure modes for potential hazards. The general consensus among the team performing the analyses was that scoring for the HFMEA was simpler and more intuitive than for the FMEA. The FMEA identified 25 critical failure modes while the HFMEA identified 39. Seven of the FMEA critical failure modes were not identified by the HFMEA, and 21 of the HFMEA critical failure modes were not identified by the FMEA. HFMEA, as described by the Veterans Affairs, provides guidelines on which failure modes to address first. Conclusion: HFMEA is a more efficient model than FMEA for identifying gross risks in a process. Clinics with minimal staff, time and resources can benefit from this type of risk assessment to eliminate or mitigate high-risk hazards with nominal effort. FMEA can provide more in-depth details, but at the cost of elevated effort.

  14. SU-F-T-246: Evaluation of Healthcare Failure Mode And Effect Analysis For Risk Assessment

    Energy Technology Data Exchange (ETDEWEB)

    Harry, T [Oregon State University, Corvallis, OR (United States); University of California, San Diego, La Jolla, CA (United States); Manger, R; Cervino, L; Pawlicki, T [University of California, San Diego, La Jolla, CA (United States)

    2016-06-15

    Purpose: To evaluate the differences between the Veterans Affairs Healthcare Failure Mode and Effects Analysis (HFMEA) and the AAPM Task Group 100 Failure Mode and Effects Analysis (FMEA) risk assessment techniques, as applied to a stereotactic radiosurgery (SRS) procedure. Understanding the differences in the techniques' methodologies and outcomes will provide further insight into the applicability and utility of risk assessment exercises in radiation therapy. Methods: An HFMEA risk assessment was performed on a stereotactic radiosurgery procedure. A previous study from our institution completed an FMEA of our SRS procedure, and the process map generated from this work was used for the HFMEA. The process of performing the HFMEA scoring was analyzed, and the results from both analyses were compared. Results: The key differences between the two risk assessments are the scoring criteria for failure modes and the identification of critical failure modes for potential hazards. The general consensus among the team performing the analyses was that scoring for the HFMEA was simpler and more intuitive than for the FMEA. The FMEA identified 25 critical failure modes while the HFMEA identified 39. Seven of the FMEA critical failure modes were not identified by the HFMEA, and 21 of the HFMEA critical failure modes were not identified by the FMEA. HFMEA, as described by the Veterans Affairs, provides guidelines on which failure modes to address first. Conclusion: HFMEA is a more efficient model than FMEA for identifying gross risks in a process. Clinics with minimal staff, time and resources can benefit from this type of risk assessment to eliminate or mitigate high-risk hazards with nominal effort. FMEA can provide more in-depth details, but at the cost of elevated effort.

  15. Failure and Maintenance Analysis Using Web-Based Reliability Database System

    International Nuclear Information System (INIS)

    Hwang, Seok Won; Kim, Myoung Su; Seong, Ki Yeoul; Na, Jang Hwan; Jerng, Dong Wook

    2007-01-01

    Korea Hydro and Nuclear Power Company has launched the development of a database system for PSA and Maintenance Rule implementation. It focuses on easy processing of raw data into a credible and useful database for the risk-informed environment of nuclear power plant operation and maintenance. Even though KHNP had recently completed PSAs for all domestic NPPs as a requirement of the severe accident mitigation strategy, the component failure data were gathered only for quantification purposes within that project, so the data were not efficient enough for living PSA or other generic purposes. Another reason to build a real-time database is the newly adopted Maintenance Rule, which requires the utility to continuously monitor plant risk based on its operation and maintenance performance. Furthermore, as a pre-condition for risk-informed regulation and application, the nuclear regulatory agency of Korea requests the development and management of a domestic database system. KHNP has been accumulating operation and maintenance data in its Enterprise Resource Planning (ERP) system since its opening in July 2003, but so far a systematic review has not been performed to apply the component failure and maintenance history to PSA and other reliability analyses. The data stored in PUMAS before the ERP system was introduced also need to be converted into, and managed under, the new database structure and methodology. This reliability database system is a web-based interface on a UNIX server with an Oracle relational database. It is designed to be applicable to all domestic NPPs with a common database structure and web interfaces, so additional program development should not be necessary for data acquisition and processing in the near future. Categorization standards for systems and components have been implemented to analyze all domestic NPPs. For example, SysCode (for a system code) and CpCode (for a component code) were newly defined.

  16. SU-F-R-20: Image Texture Features Correlate with Time to Local Failure in Lung SBRT Patients

    Energy Technology Data Exchange (ETDEWEB)

    Andrews, M; Abazeed, M; Woody, N; Stephans, K; Videtic, G; Xia, P; Zhuang, T [The Cleveland Clinic Foundation, Cleveland, OH (United States)

    2016-06-15

    Purpose: To explore possible correlations between CT image-based texture and histogram features and time-to-local-failure in early stage non-small cell lung cancer (NSCLC) patients treated with stereotactic body radiotherapy (SBRT). Methods and Materials: From an IRB-approved lung SBRT registry for patients treated between 2009-2013, we selected 48 patients (20 male, 28 female) with local failure. Median patient age was 72.3±10.3 years. Mean time to local failure was 15±7.1 months. Physician-contoured gross tumor volumes (GTV) on the planning CT images were processed, and 3D gray-level co-occurrence matrix (GLCM) based texture and histogram features were calculated in Matlab. Data were exported to R, and a multiple linear regression model was used to examine the relationship between texture features and time-to-local-failure. Results: Multiple linear regression revealed that entropy (p=0.0233, multiple R2=0.60) from the GLCM-based texture analysis and the standard deviation (p=0.0194, multiple R2=0.60) from the histogram-based features were statistically significantly correlated with time-to-local-failure. Conclusion: Image-based texture analysis can be used to predict certain aspects of treatment outcomes of NSCLC patients treated with SBRT. We found that entropy and standard deviation calculated for the GTV on the CT images displayed a statistically significant correlation with time-to-local-failure in lung SBRT patients.

  17. SU-F-R-20: Image Texture Features Correlate with Time to Local Failure in Lung SBRT Patients

    International Nuclear Information System (INIS)

    Andrews, M; Abazeed, M; Woody, N; Stephans, K; Videtic, G; Xia, P; Zhuang, T

    2016-01-01

    Purpose: To explore possible correlations between CT image-based texture and histogram features and time-to-local-failure in early stage non-small cell lung cancer (NSCLC) patients treated with stereotactic body radiotherapy (SBRT). Methods and Materials: From an IRB-approved lung SBRT registry for patients treated between 2009-2013, we selected 48 patients (20 male, 28 female) with local failure. Median patient age was 72.3±10.3 years. Mean time to local failure was 15±7.1 months. Physician-contoured gross tumor volumes (GTV) on the planning CT images were processed, and 3D gray-level co-occurrence matrix (GLCM) based texture and histogram features were calculated in Matlab. Data were exported to R, and a multiple linear regression model was used to examine the relationship between texture features and time-to-local-failure. Results: Multiple linear regression revealed that entropy (p=0.0233, multiple R2=0.60) from the GLCM-based texture analysis and the standard deviation (p=0.0194, multiple R2=0.60) from the histogram-based features were statistically significantly correlated with time-to-local-failure. Conclusion: Image-based texture analysis can be used to predict certain aspects of treatment outcomes of NSCLC patients treated with SBRT. We found that entropy and standard deviation calculated for the GTV on the CT images displayed a statistically significant correlation with time-to-local-failure in lung SBRT patients.
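
    Both predictive features reported here are easy to compute. The sketch below does so for a random stand-in patch using scikit-image's graycomatrix; the study itself used Matlab and 3D GLCMs over the GTV, so this 2D version is only illustrative:

      import numpy as np
      from skimage.feature import graycomatrix

      # Hypothetical 6-bit intensity patch standing in for a CT region.
      patch = np.random.default_rng(2).integers(0, 64, size=(32, 32), dtype=np.uint8)

      glcm = graycomatrix(patch, distances=[1], angles=[0], levels=64, normed=True)
      p = glcm[:, :, 0, 0]
      entropy = -np.sum(p[p > 0] * np.log2(p[p > 0]))   # GLCM entropy
      std_dev = patch.std()                             # histogram standard deviation
      print(entropy, std_dev)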

  18. Random safety auditing, root cause analysis, failure mode and effects analysis.

    Science.gov (United States)

    Ursprung, Robert; Gray, James

    2010-03-01

    Improving quality and safety in health care is a major concern for health care providers, the general public, and policy makers. Errors and quality issues are leading causes of morbidity and mortality across the health care industry. There is evidence that patients in the neonatal intensive care unit (NICU) are at high risk for serious medical errors. To facilitate compliance with safe practices, many institutions have established quality-assurance monitoring procedures. Three techniques that have been found useful in the health care setting are failure mode and effects analysis, root cause analysis, and random safety auditing. When used together, these techniques are effective tools for system analysis and redesign focused on providing safe delivery of care in the complex NICU system. Copyright 2010 Elsevier Inc. All rights reserved.

  19. Failure mode and effects analysis outputs: are they valid?

    Directory of Open Access Journals (Sweden)

    Shebl Nada

    2012-06-01

    Background: Failure Mode and Effects Analysis (FMEA) is a prospective risk assessment tool that has been widely used within the aerospace and automotive industries and has been utilised within healthcare since the early 1990s. The aim of this study was to explore the validity of FMEA outputs within a hospital setting in the United Kingdom. Methods: Two multidisciplinary teams each conducted an FMEA for the use of vancomycin and gentamicin. Four different validity tests were conducted: · Face validity: by comparing the FMEA participants' mapped processes with observational work. · Content validity: by presenting the FMEA findings to other healthcare professionals. · Criterion validity: by comparing the FMEA findings with data reported on the trust's incident report database. · Construct validity: by exploring the relevant mathematical theories involved in calculating the FMEA risk priority number. Results: Face validity was positive, as the researcher documented the same processes of care as mapped by the FMEA participants. However, other healthcare professionals identified potential failures missed by the FMEA teams. Furthermore, the FMEA groups failed to include failures related to omitted doses, yet these were the failures most commonly reported in the trust's incident database. Calculating the RPN by multiplying severity, probability and detectability scores was deemed invalid because it is based on calculations that breach the mathematical properties of the scales used. Conclusion: There are significant methodological challenges in validating FMEA. It is a useful tool to aid multidisciplinary groups in mapping and understanding a process of care; however, the results of our study cast doubt on its validity. FMEA teams are likely to need different sources of information, besides their personal experience and knowledge, to identify potential failures. As for FMEA's methodology for scoring failures, there were discrepancies.

  20. Failure analysis of boiler tube

    International Nuclear Information System (INIS)

    Mehmood, K.; Siddiqui, A.R.

    2007-01-01

    Boiler tubes are energy conversion components in which heat energy is used to convert water into high-pressure superheated steam, which is then delivered to a turbine for electric power generation in thermal power plants or used to run plant and machinery in a process or manufacturing industry. It was reported that one of the tubes of a fire-tube boiler used in a local industry had leaked after the formation of pits on the external surface of the tube. The inner side of the fire tube carried hot flue gases at a pressure of 10 kg/cm² and a temperature of 225 °C. The outside of the tube was surrounded by feed water. The purpose of this study was to determine the cause of the pits that developed on the external surface of the failed boiler tube sample. In the present work, boiler tube samples of steel grade ASTM A161/ASTM A192 were analyzed using metallographic analysis, chemical analysis, and mechanical testing. It was concluded that the appearance of defects on the boiler tube sample indicates a cavitation-type corrosion failure. Cavitation damage superficially resembles pitting, but the surface appears considerably rougher and has many closely spaced pits. (author)

  1. Sensitivity analysis and optimization of system dynamics models : Regression analysis and statistical design of experiments

    NARCIS (Netherlands)

    Kleijnen, J.P.C.

    1995-01-01

    This tutorial discusses what-if analysis and optimization of System Dynamics models. These problems are solved, using the statistical techniques of regression analysis and design of experiments (DOE). These issues are illustrated by applying the statistical techniques to a System Dynamics model for

  2. Multivariate Statistical Methods as a Tool of Financial Analysis of Farm Business

    Czech Academy of Sciences Publication Activity Database

    Novák, J.; Sůvová, H.; Vondráček, Jiří

    2002-01-01

    Roč. 48, č. 1 (2002), s. 9-12 ISSN 0139-570X Institutional research plan: AV0Z1030915 Keywords : financial analysis * financial ratios * multivariate statistical methods * correlation analysis * discriminant analysis * cluster analysis Subject RIV: BB - Applied Statistics, Operational Research

  3. Approaches to statistical analysis of repeated echocardiographic measurements after myocardial infarction and its relation to heart failure : Application of a random-effects model

    NARCIS (Netherlands)

    de Kam, PJ; Voors, AA; Brouwer, J; van Gilst, WH

    Background: Extensive left ventricular (LV) dilatation after myocardial infarction (MI) is associated with increased heart failure risk. Aims: To investigate whether the power to demonstrate the relation between LV dilatation and heart failure depends on the method applied to predict LV dilatation

  4. Technical evaluation report on the seven main transformer failures at the North Anna Power Station, Units 1 and 2 (Docket Nos. 50-338, 50-339)

    International Nuclear Information System (INIS)

    Dalton, K.J.; Kresser, J.V.; Savage, J.W.; Selan, J.C.

    1984-01-01

    This report documents technical evaluations of various aspects of the seven main transformer failures at the North Anna Power Station, Units 1 and 2. The evaluations cover Probabilistic Risk Assessment (PRA), Failure Modes and Effects Analysis (FMEA), root causes, protection systems, modifications, failure statistics, and generic aspects. The PRA determined that the contribution of a main transformer failure affecting plant safety systems, and thereby increasing the risk to public health and safety, is negligible. The FMEA determined that a main transformer failure can have primary and secondary effects on plant safety system operation. The evaluation of the root causes found that no single common cause contributed to the seven failures; each failure was found to have specific initiating circumstances. Both the generator and transformer primary protection systems were found to perform correctly and were designed within industry standards and practices. The proposed modifications resulting from the analyses of the failures will improve system reliability and integrity and will reduce potentially damaging effects. The failure statistics survey found very limited databases from which a meaningful correlation could be ascertained; the statistical comparison found no appreciable anomalies with the NAPS failures. The evaluation of all the available information and the results of the separate reports on the main transformer failures found that several generic concerns exist.

  5. BILAM: a composite laminate failure-analysis code using bilinear stress-strain approximations

    Energy Technology Data Exchange (ETDEWEB)

    McLaughlin, P.V. Jr.; Dasgupta, A.; Chun, Y.W.

    1980-10-01

    The BILAM code, which uses constant strain laminate analysis to generate the in-plane load/deformation or stress/strain history of composite laminates up to the point of laminate failure, is described. The program uses bilinear stress-strain curves to model layer stress-strain behavior. Composite laminates are used for flywheels, and the use of this computer code will help to develop data on the behavior of fiber composite materials that can be used by flywheel designers. In this program the stress-strain curves are modelled by assuming linear response in axial tension, while using bilinear approximations (two linear segments) for the stress-strain response to axial compressive, transverse tensile, transverse compressive and axial shear loadings. It should be noted that the program attempts to empirically simulate the effects of the phenomena which cause nonlinear stress-strain behavior, instead of mathematically modelling the micromechanics involved. The code therefore performs a bilinear laminate analysis and, in conjunction with several user-defined failure interaction criteria, is designed to provide sequential information on all layer failures up to and including the first fiber failure. The modus operandi is described. Code BILAM can be used to: predict the load-deformation/stress-strain behavior of a composite laminate subjected to a given combination of in-plane loads, and make analytical predictions of laminate strength.
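
    The bilinear layer model at the heart of BILAM can be written as a two-segment stress-strain function. The sketch below, with hypothetical moduli and knee strain, evaluates the transverse compressive branch:

      # Bilinear stress-strain approximation: linear up to a knee strain,
      # then a second linear segment with a reduced tangent modulus.
      def bilinear_stress(strain, e1, e_knee, e2):
          """Stress for a two-segment (bilinear) stress-strain curve."""
          if abs(strain) <= e_knee:
              return e1 * strain
          sign = 1.0 if strain > 0 else -1.0
          return sign * (e1 * e_knee + e2 * (abs(strain) - e_knee))

      # Hypothetical layer: 10 GPa initial modulus, knee at 0.4% strain,
      # 4 GPa tangent modulus beyond the knee (stresses in MPa).
      for eps in (0.002, 0.004, 0.008):
          print(eps, bilinear_stress(-eps, e1=10e3, e_knee=0.004, e2=4e3), "MPa")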

  6. Statistical analysis and interpretation of prenatal diagnostic imaging studies, Part 2: descriptive and inferential statistical methods.

    Science.gov (United States)

    Tuuli, Methodius G; Odibo, Anthony O

    2011-08-01

    The objective of this article is to discuss the rationale for common statistical tests used for the analysis and interpretation of prenatal diagnostic imaging studies. Examples from the literature are used to illustrate descriptive and inferential statistics. The uses and limitations of linear and logistic regression analyses are discussed in detail.

  7. Analysis of Service Recovery Failure: From Minority Perspective

    OpenAIRE

    Yasemin Öcal Atınç

    2016-01-01

    We investigate service failures towards diverse customer groups with the aim of bringing insightful proposals to managers for recovering from these failures. The previous literature provided insights into the perception of service failures by minorities and the challenge of recovery due to racial implications arising from the failure, but it lacked suggestions for managers so that they could either take corrective steps toward service failure recovery or prevent service fail...

  8. Statistics and thermodynamics of fracture

    Science.gov (United States)

    Chudnovsky, A.

    1984-01-01

    A probabilistic model of the fracture processes unifying the phenomenological study of long term strength of materials, fracture mechanics, and statistical approaches to fracture is briefly outlined. The general framework of irreversible thermodynamics is employed to model the deterministic side of the failure phenomenon. The stochastic calculus is used to account for the failure mechanisms controlled by chance, particularly the random roughness of fracture surfaces.

  9. [Failure mode and effects analysis (FMEA) of insulin in a mother-child university-affiliated health center].

    Science.gov (United States)

    Berruyer, M; Atkinson, S; Lebel, D; Bussières, J-F

    2016-01-01

    Insulin is a high-alert drug. The main objective of this descriptive cross-sectional study was to evaluate the risks associated with insulin use in healthcare centers. The secondary objective was to propose corrective measures to reduce the main risks associated with the most critical failure modes in the analysis. We conducted a failure mode and effects analysis (FMEA) in obstetrics-gynecology, neonatology and pediatrics. Five multidisciplinary meetings occurred in August 2013. A total of 44 out of 49 failure modes were analyzed. Nine out of 44 (20%) failure modes were deemed critical, with a criticality score ranging from 540 to 720. Following the multidisciplinary meetings, everybody agreed that an FMEA was a useful tool to identify failure modes and their relative importance. This approach identified many corrective measures. This shared experience increased awareness of safety issues with insulin in our mother-child center. This study identified the main failure modes and associated corrective measures.

  10. Statistical analysis of environmental data

    International Nuclear Information System (INIS)

    Beauchamp, J.J.; Bowman, K.O.; Miller, F.L. Jr.

    1975-10-01

    This report summarizes the analyses of data obtained by the Radiological Hygiene Branch of the Tennessee Valley Authority from samples taken around the Browns Ferry Nuclear Plant located in Northern Alabama. The data collection was begun in 1968 and a wide variety of types of samples have been gathered on a regular basis. The statistical analysis of environmental data involving very low-levels of radioactivity is discussed. Applications of computer calculations for data processing are described

  11. Failure Analysis of a Modern High Performance Diesel Engine Cylinder Head

    Directory of Open Access Journals (Sweden)

    Bingbin Guo

    2014-05-01

    This paper presents a failure analysis of a modern high-performance diesel engine cylinder head made of gray cast iron. Cracks appeared intensively at the intersection of two exhaust passages in the cylinder head. A metallurgical examination was conducted in the crack origin zone and other zones. Meanwhile, the load state of the failed part of the cylinder head was determined through finite element analysis. The results showed that neither the point of maximum temperature nor the point of maximum thermal-mechanical coupling stress was at the crack position; excessive load was not the main cause of the failure. The large cooling rate in the casting process created an abnormal graphite zone below the surface of the exhaust passage (about 1.1 mm deep), which led to the fracture of the cylinder head. In the fractured area, there were a large number of casting defects (dip sand, voids, etc.) and inferior graphite structures (types D and E) which caused stress concentration. Moreover, high-temperature gas entered the cracks, which caused material corrosion, material oxidization, and crack propagation. Finally, premature fracture of the cylinder head took place.

  12. Root cause analysis of SI line-seated thermal sleeve separation failures

    International Nuclear Information System (INIS)

    Jo, Jong Chull; Jhung, Myung Jo; Kim, Hho Jung

    2004-01-01

    At conventional pressurized water reactors, a thermal sleeve (named simply 'sleeve' hereafter) is seated inside the nozzle part of each Safety Injection (SI) branch pipe to prevent and relieve potential excessive transient thermal stress in the nozzle wall when cold water is injected during the safety injection mode. Recently, mechanical failures in which the sleeves separated from the SI branch pipe and fell into the connected cold leg main pipe occurred in sequence at Yonggwang units 5 and 6 and Ulchin unit 5. There were many activities and efforts to figure out the causes of those failures with experts' reasoning, but the proposed causes were derived from superficial views rather than physically concrete grounds or analysis results. The prerequisites for finding the root causes of the failure mechanism are to identify the flow situation in the pipe junction area connecting the cold leg with the SI pipe and to know the vibration characteristics of the sleeves. This paper investigates the flow field in the pipe junction through a numerical simulation and the vibration characteristics of the thermal sleeves through a modal analysis, from which the root causes of the sleeve separation mechanism are analyzed

  13. Failure modes and effects analysis (FMEA) for Gamma Knife radiosurgery.

    Science.gov (United States)

    Xu, Andy Yuanguang; Bhatnagar, Jagdish; Bednarz, Greg; Flickinger, John; Arai, Yoshio; Vacsulka, Jonet; Feng, Wenzheng; Monaco, Edward; Niranjan, Ajay; Lunsford, L Dade; Huq, M Saiful

    2017-11-01

    Gamma Knife radiosurgery is a highly precise and accurate treatment technique for treating brain diseases, with a low risk of serious error that nevertheless could potentially be reduced further. We applied the AAPM Task Group 100 recommended failure modes and effects analysis (FMEA) tool to develop a risk-based quality management program for Gamma Knife radiosurgery. A team consisting of medical physicists, radiation oncologists, neurosurgeons, radiation safety officers, nurses, operating room technologists, and schedulers at our institution and an external physicist expert on Gamma Knife was formed for the FMEA study. A process tree and a failure mode table were created for the Gamma Knife radiosurgery procedures using the Leksell Gamma Knife Perfexion and 4C units. Three scores for the probability of occurrence (O), the severity (S), and the probability of no detection for a failure mode (D) were assigned to each failure mode by 8 professionals on a scale from 1 to 10. An overall risk priority number (RPN) for each failure mode was then calculated from the averaged O, S, and D scores. The coefficient of variation for each O, S, or D score was also calculated. The failure modes identified were prioritized in terms of both the RPN scores and the severity scores. The established process tree for Gamma Knife radiosurgery consists of 10 subprocesses and 53 steps, including a subprocess for frame placement and 11 steps that are directly related to the frame-based nature of the Gamma Knife radiosurgery. Out of the 86 failure modes identified, 40 Gamma Knife specific failure modes were caused by the potential for inappropriate use of the radiosurgery head frame, the imaging fiducial boxes, the Gamma Knife helmets and plugs, the skull definition tools, as well as other features of the GammaPlan treatment planning system. The other 46 failure modes are associated with the registration, imaging, image transfer, and contouring processes that are common for all external beam radiation therapy.
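
    The scoring scheme described above reduces to a short calculation. A minimal sketch with invented scores for a single failure mode follows; it shows the averaged O, S and D scores, the RPN as their product, and the coefficient of variation used to gauge the spread among the 8 evaluators. These numbers are illustrative, not the study's data.

        import statistics

        # Hypothetical scores from 8 evaluators for one failure mode (1-10 scale).
        occurrence = [2, 3, 2, 4, 3, 2, 3, 3]
        severity   = [8, 9, 8, 7, 9, 8, 8, 9]
        detection  = [5, 4, 6, 5, 5, 4, 5, 6]   # probability of NO detection

        def mean_and_cv(scores):
            m = statistics.mean(scores)
            return m, statistics.stdev(scores) / m   # coefficient of variation

        o, cv_o = mean_and_cv(occurrence)
        s, cv_s = mean_and_cv(severity)
        d, cv_d = mean_and_cv(detection)

        rpn = o * s * d   # risk priority number from the averaged scores
        print(f"RPN = {rpn:.1f}; CVs: O = {cv_o:.2f}, S = {cv_s:.2f}, D = {cv_d:.2f}")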

  14. Failure analysis of top nozzle holddown spring screw for nuclear fuel assembly

    International Nuclear Information System (INIS)

    Koh, S. K.; Ryu, C. H.; Na, E. G.; Baek, T. H.; Jeon, K. L.

    2003-01-01

    A failure analysis of a holddown spring screw was performed using a fracture mechanics approach. The spring screw was designed such that it was capable of sustaining the loads imposed by the initial tensile preload and operational loads. In order to investigate the cause of failure, a stress analysis of the top nozzle spring assembly was done using finite element analysis and a life prediction of the screw was made using a fracture mechanics approach. The elastic-plastic finite element analysis showed that the local stresses at the critical regions of the head-shank fillet and thread root significantly exceeded the yield strength of the screw material, resulting in local plastic deformation. The primary water stress corrosion cracking life of the Inconel 600 screw was predicted by integrating the Scott model; the result, 1.42 years, was fairly close to the actual service life of the holddown spring screw.

  15. Modular titanium alloy neck adapter failures in hip replacement - failure mode analysis and influence of implant material

    Directory of Open Access Journals (Sweden)

    Bloemer Wilhelm

    2010-01-01

    Background: Modular neck adapters for hip arthroplasty stems allow the surgeon to modify CCD angle, offset and femoral anteversion intraoperatively. Fretting or crevice corrosion may lead to failure of such a modular device due to high loads or surface contamination inside the modular coupling. Unfortunately we have experienced such failures and now report our clinical experience with them in order to advance orthopaedic material research and joint replacement surgery. The failed neck adapters were implanted between August 2004 and November 2006, a period in which a total of about 5000 devices were implanted. After this period, the titanium neck adapters were replaced by adapters made of cobalt-chromium. By the end of 2008, in total 1.4% (n = 68) of the implanted titanium alloy neck adapters had failed, at an average of 2.0 years (0.7 to 4.0 years) postoperatively. All but one of the patients were male, their average age being 57.4 years (36 to 75 years) and their average weight 102.3 kg (75 to 130 kg). Among the failed neck adapters, 66% had a small CCD angle of 130° and 60% had head lengths of L or larger. Assuming an average time to failure of 2.8 years, the cumulative failure rate was calculated as 2.4%. Methods: A series of adapter failures of titanium alloy modular neck adapters in combination with a titanium alloy modular short hip stem was investigated. For patients having received this particular implant combination, risk factors were identified which were associated with the occurrence of implant failure. A Kaplan-Meier survival-failure analysis was conducted. The retrieved implants were analysed using microscopic and chemical methods. Modes of failure were simulated in biomechanical tests. Comparative tests included modular neck adapters made of titanium alloy and of cobalt-chrome alloy. Results: Retrieval examinations and biomechanical simulation revealed that primary micromotions initiated fretting within the modular tapered neck
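
    The Kaplan-Meier analysis named in the Methods is a product-limit recursion over failure and censoring times. A minimal sketch on invented (time, failed) retrieval data follows; a real analysis would use a survival library, and none of these numbers are the study's.

        # (years in service, True = adapter failed / False = censored), sorted.
        data = [(0.7, True), (1.2, False), (2.0, True), (2.8, False),
                (3.1, True), (4.0, True), (4.3, False)]

        at_risk = len(data)
        survival = 1.0
        for time, failed in sorted(data):
            if failed:
                survival *= (at_risk - 1) / at_risk   # Kaplan-Meier product-limit step
                print(f"t = {time:.1f} y: S(t) = {survival:.3f}")
            at_risk -= 1   # the device leaves the risk set either way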

  16. FAILURE MODE AND EFFECT ANALYSIS (FMEA OF BUTTERFLY VALVE IN OIL AND GAS INDUSTRY

    Directory of Open Access Journals (Sweden)

    MUHAMMAD AMIRUL BIN YUSOF

    2016-04-01

    Butterfly valves are widely used in various industries such as oil and gas plants. This valve operates with a rotating motion using a pneumatic system; a rotating actuator turns the disc either parallel or perpendicular to the flow. When the valve is fully open, the disc is rotated a quarter turn so that it allows free passage of the fluid, and when fully closed, the disc is rotated a quarter turn to block the fluid. The primary failure modes for valves are: the valve leaks to the environment through flanges or seals on the valve body, the valve stem packing is not properly protected, the packing nuts are over-tightened, the valve cracks, and the valve leaks over the seat. To identify valve failures, Failure Mode and Effects Analysis (FMEA) was chosen. FMEA is one technique for performing failure analysis. It involves reviewing as many components as possible to identify failure modes and their causes and effects. For each component, the failure modes and their resulting effects on the rest of the system are recorded in a specific FMEA form. The risk priority number and its factors - severity, detection and occurrence - were determined in this study. The risk priority number helps to find the most hazardous activities, which need more attention than the others. The component with the highest risk priority number in this research is the seat. An action plan was proposed to reduce the risk priority number so that potential failures will also be reduced.

  17. Highly Robust Statistical Methods in Medical Image Analysis

    Czech Academy of Sciences Publication Activity Database

    Kalina, Jan

    2012-01-01

    Roč. 32, č. 2 (2012), s. 3-16 ISSN 0208-5216 R&D Projects: GA MŠk(CZ) 1M06014 Institutional research plan: CEZ:AV0Z10300504 Keywords : robust statistics * classification * faces * robust image analysis * forensic science Subject RIV: BB - Applied Statistics, Operational Research Impact factor: 0.208, year: 2012 http://www.ibib.waw.pl/bbe/bbefulltext/BBE_32_2_003_FT.pdf

  18. Common mode and coupled failure

    International Nuclear Information System (INIS)

    Taylor, J.R.

    1975-10-01

    Based on examples and data from Abnormal Occurrence Reports for nuclear reactors, a classification of common mode or coupled failures is given, and some simple statistical models are investigated. (author)

  19. Statistical Power Analysis with Missing Data A Structural Equation Modeling Approach

    CERN Document Server

    Davey, Adam

    2009-01-01

    Statistical power analysis has revolutionized the ways in which we conduct and evaluate research.  Similar developments in the statistical analysis of incomplete (missing) data are gaining more widespread applications. This volume brings statistical power and incomplete data together under a common framework, in a way that is readily accessible to those with only an introductory familiarity with structural equation modeling.  It answers many practical questions such as: How missing data affects the statistical power in a study How much power is likely with different amounts and types

  20. Statistical Analysis of Data for Timber Strengths

    DEFF Research Database (Denmark)

    Sørensen, John Dalsgaard

    2003-01-01

    Statistical analyses are performed for material strength parameters from a large number of specimens of structural timber. Non-parametric statistical analysis and fits have been investigated for the following distribution types: Normal, Lognormal, 2-parameter Weibull and 3-parameter Weibull...... fits to the data available, especially if tail fits are used, whereas the Lognormal distribution generally gives a poor fit and larger coefficients of variation, especially if tail fits are used. The implications on the reliability level of typical structural elements and on partial safety factors...... for timber are investigated....
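
    The record does not include the fitting procedure, but distribution fits of this kind are straightforward to sketch. The following illustration, assuming scipy is available, fits 2-parameter and 3-parameter Weibull distributions (and a Lognormal for comparison) to synthetic strength data; every value here is invented.

        import numpy as np
        from scipy import stats

        rng = np.random.default_rng(1)
        # Synthetic bending strengths (MPa) standing in for the timber test data.
        strengths = stats.weibull_min.rvs(c=4.0, scale=40.0, size=200,
                                          random_state=rng)

        # 2-parameter fit pins the location at zero; 3-parameter leaves it free.
        c2, _, scale2 = stats.weibull_min.fit(strengths, floc=0.0)
        c3, loc3, scale3 = stats.weibull_min.fit(strengths)
        s_ln, _, scale_ln = stats.lognorm.fit(strengths, floc=0.0)

        print(f"2-p Weibull: shape = {c2:.2f}, scale = {scale2:.1f}")
        print(f"3-p Weibull: shape = {c3:.2f}, loc = {loc3:.1f}, scale = {scale3:.1f}")
        print(f"Lognormal:   sigma = {s_ln:.2f}, scale = {scale_ln:.1f}")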

  1. A review of the progress with statistical models of passive component reliability

    Energy Technology Data Exchange (ETDEWEB)

    Lydell, Bengt O. Y. [Sigma-Phase Inc., Vail (United States)

    2017-03-15

    During the past 25 years, in the context of probabilistic safety assessment, efforts have been directed towards establishment of comprehensive pipe failure event databases as a foundation for exploratory research to better understand how to effectively organize a piping reliability analysis task. The focused pipe failure database development efforts have progressed well with the development of piping reliability analysis frameworks that utilize the full body of service experience data, fracture mechanics analysis insights, expert elicitation results that are rolled into an integrated and risk-informed approach to the estimation of piping reliability parameters with full recognition of the embedded uncertainties. The discussion in this paper builds on a major collection of operating experience data (more than 11,000 pipe failure records) and the associated lessons learned from data analysis and data applications spanning three decades. The piping reliability analysis lessons learned have been obtained from the derivation of pipe leak and rupture frequencies for corrosion resistant piping in a raw water environment, loss-of-coolant-accident frequencies given degradation mitigation, high-energy pipe break analysis, moderate-energy pipe break analysis, and numerous plant-specific applications of a statistical piping reliability model framework. Conclusions are presented regarding the feasibility of determining and incorporating aging effects into probabilistic safety assessment models.

  2. A Review of the Progress with Statistical Models of Passive Component Reliability

    Directory of Open Access Journals (Sweden)

    Bengt O.Y. Lydell

    2017-03-01

    During the past 25 years, in the context of probabilistic safety assessment, efforts have been directed towards establishment of comprehensive pipe failure event databases as a foundation for exploratory research to better understand how to effectively organize a piping reliability analysis task. The focused pipe failure database development efforts have progressed well with the development of piping reliability analysis frameworks that utilize the full body of service experience data, fracture mechanics analysis insights, expert elicitation results that are rolled into an integrated and risk-informed approach to the estimation of piping reliability parameters with full recognition of the embedded uncertainties. The discussion in this paper builds on a major collection of operating experience data (more than 11,000 pipe failure records) and the associated lessons learned from data analysis and data applications spanning three decades. The piping reliability analysis lessons learned have been obtained from the derivation of pipe leak and rupture frequencies for corrosion resistant piping in a raw water environment, loss-of-coolant-accident frequencies given degradation mitigation, high-energy pipe break analysis, moderate-energy pipe break analysis, and numerous plant-specific applications of a statistical piping reliability model framework. Conclusions are presented regarding the feasibility of determining and incorporating aging effects into probabilistic safety assessment models.

  3. A review of the progress with statistical models of passive component reliability

    International Nuclear Information System (INIS)

    Lydell, Bengt O. Y.

    2017-01-01

    During the past 25 years, in the context of probabilistic safety assessment, efforts have been directed towards establishment of comprehensive pipe failure event databases as a foundation for exploratory research to better understand how to effectively organize a piping reliability analysis task. The focused pipe failure database development efforts have progressed well with the development of piping reliability analysis frameworks that utilize the full body of service experience data, fracture mechanics analysis insights, expert elicitation results that are rolled into an integrated and risk-informed approach to the estimation of piping reliability parameters with full recognition of the embedded uncertainties. The discussion in this paper builds on a major collection of operating experience data (more than 11,000 pipe failure records) and the associated lessons learned from data analysis and data applications spanning three decades. The piping reliability analysis lessons learned have been obtained from the derivation of pipe leak and rupture frequencies for corrosion resistant piping in a raw water environment, loss-of-coolant-accident frequencies given degradation mitigation, high-energy pipe break analysis, moderate-energy pipe break analysis, and numerous plant-specific applications of a statistical piping reliability model framework. Conclusions are presented regarding the feasibility of determining and incorporating aging effects into probabilistic safety assessment models

  4. Numeric computation and statistical data analysis on the Java platform

    CERN Document Server

    Chekanov, Sergei V

    2016-01-01

    Numerical computation, knowledge discovery and statistical data analysis integrated with powerful 2D and 3D graphics for visualization are the key topics of this book. The Python code examples powered by the Java platform can easily be transformed to other programming languages, such as Java, Groovy, Ruby and BeanShell. This book equips the reader with a computational platform which, unlike other statistical programs, is not limited by a single programming language. The author focuses on practical programming aspects and covers a broad range of topics, from basic introduction to the Python language on the Java platform (Jython), to descriptive statistics, symbolic calculations, neural networks, non-linear regression analysis and many other data-mining topics. He discusses how to find regularities in real-world data, how to classify data, and how to process data for knowledge discoveries. The code snippets are so short that they easily fit into single pages. Numeric Computation and Statistical Data Analysis ...

  5. Geotechnical Failure of a Concrete Crown Wall on a Rubble Mound Breakwater Considering Sliding Failure and Rupture Failure of Foundation

    DEFF Research Database (Denmark)

    Christiani, E.; Burcharth, H. F.; Sørensen, John Dalsgaard

    1995-01-01

    Sliding and rupture failure in the rubble mound are considered in this paper. In order to describe these failure modes the wave breaking forces have to be accounted for. Wave breaking forces on a crown wall are determined from Burcharth's wave force formula Burcharth (1992). Overtopping rates...... are calculated for a given design by Bradbury et al. (1988a,b) and compared to acceptable overtopping rates, prior to a deterministic design. The method of foundation stability analysis is presented by the example of a translation slip failure involving kinematically correct slip surfaces and failure zones...... in friction-based soil. Rupture failure modes for a crown wall with a plane base and a crown wall with an extended leg on the seaward side will be formulated. The failure modes are described by limit state functions. This allows a deterministic analysis to be performed....

  6. A Divergence Statistics Extension to VTK for Performance Analysis

    Energy Technology Data Exchange (ETDEWEB)

    Pebay, Philippe Pierre [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Bennett, Janine Camille [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States)

    2015-02-01

    This report follows the series of previous documents [PT08, BPRT09b, PT09, BPT09, PT10, PB13], where we presented the parallel descriptive, correlative, multi-correlative, principal component analysis, contingency, k-means, order and auto-correlative statistics engines which we developed within the Visualization Tool Kit (VTK) as a scalable, parallel and versatile statistics package. We now report on a new engine which we developed for the calculation of divergence statistics, a concept which we hereafter explain and whose main goal is to quantify the discrepancy, in a statistical manner akin to measuring a distance, between an observed empirical distribution and a theoretical, "ideal" one. The ease of use of the new divergence statistics engine is illustrated by means of C++ code snippets. Although this new engine does not yet have a parallel implementation, it has already been applied to HPC performance analysis, of which we provide an example.
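
    The record does not specify which divergence measure the engine computes, so the sketch below uses the Kullback-Leibler divergence as one standard choice for quantifying the discrepancy between an observed empirical distribution and a theoretical "ideal" one. It is an illustration in Python, not the VTK engine's API.

        import numpy as np

        def kl_divergence(p, q):
            """Kullback-Leibler divergence D(p || q) for discrete distributions.

            Assumes q > 0 wherever p > 0; terms with p_i = 0 contribute nothing.
            """
            p = np.asarray(p, dtype=float)
            q = np.asarray(q, dtype=float)
            mask = p > 0
            return float(np.sum(p[mask] * np.log(p[mask] / q[mask])))

        # Observed empirical histogram vs a theoretical "ideal" distribution.
        observed = [0.10, 0.40, 0.35, 0.15]
        ideal = [0.25, 0.25, 0.25, 0.25]
        print(kl_divergence(observed, ideal))   # 0 if and only if they match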

  7. Basic factors to forecast maintenance cost and failure processes for nuclear power plants

    International Nuclear Information System (INIS)

    Popova, Elmira; Yu, Wei; Kee, Ernie; Sun, Alice; Richards, Drew; Grantom, Rick

    2006-01-01

    Two types of maintenance interventions are usually administered at nuclear power plants: planned and corrective. The cost incurred includes the labor (manpower) cost and the cost of new parts or the emergency ordering of expensive items. At the plant management level there is a budgeted amount of money to be spent every year for such operations. It is very important to have a good forecast for this cost, since unexpected events can drive it to a very high level. In this research we present a statistical factor model to forecast the maintenance cost for the coming month. One of the factors is the expected number of unplanned (due to failure) maintenance interventions. We introduce a Bayesian model for the failure rate of the equipment, which is input to the cost forecasting model. The importance of equipment reliability and prediction in the commercial nuclear power plant is presented along with applicable governmental and industry organization requirements. A detailed statistical analysis is performed on a set of maintenance cost and failure data gathered at the South Texas Project Nuclear Operating Company (STPNOC) in Bay City, Texas, USA.
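
    The abstract does not reproduce the Bayesian model itself. A standard conjugate sketch of the idea, assuming a Gamma prior on a Poisson failure rate and entirely invented cost figures:

        # Conjugate Gamma-Poisson update: prior failure rate ~ Gamma(a, b),
        # then observe k unplanned interventions over t months of exposure.
        a_prior, b_prior = 2.0, 10.0   # illustrative prior: mean 0.2 per month
        k, t = 3, 12.0                 # observed failures and exposure (months)

        a_post = a_prior + k           # posterior is Gamma(a + k, b + t)
        b_post = b_prior + t

        rate_mean = a_post / b_post    # posterior mean failure rate per month
        expected_unplanned = rate_mean * 1.0   # forecast for the coming month

        # Hypothetical cost factors: $40k per corrective job plus planned work.
        cost_forecast = 40_000 * expected_unplanned + 15_000
        print(f"posterior rate {rate_mean:.3f}/month, forecast ${cost_forecast:,.0f}")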

  8. Failure Mode and Effect Analysis of Subsea Multiphase Pump Equipment

    Directory of Open Access Journals (Sweden)

    Oluwatoyin Shobowale Kafayat

    2014-07-01

    As oil and gas reserves are found in deep and harsh environments with challenging reservoir and field conditions, subsea multiphase pumping has found its way to providing solutions to these issues. Challenges such as failure issues still surge through the industry, and with the current practice of information hiding, these issues become even more difficult to tackle. Although there are some joint industry projects, they are only accessible to their members, so there is still a need for a clear understanding of these equipment groups in order to know which issues to focus attention on. A failure mode and effect analysis (FMEA) is a potential first aid in understanding these equipment groups. A survey questionnaire/interview was conducted with an oil and gas operating company and an equipment manufacturer, based on the literature review. The results indicate that these equipment groups are similar to their onshore counterparts, but the difference is the robustness built into the equipment's internal subsystems for subsea applications. The results from the manufacturer's perspective indicate that the helico-axial multiphase pump has a mean time to failure of more than 10 years, while twin-screw and electrical submersible pumps are still struggling with a mean time to failure of less than 5 years.

  9. Developments in statistical analysis in quantitative genetics

    DEFF Research Database (Denmark)

    Sorensen, Daniel

    2009-01-01

    A remarkable research impetus has taken place in statistical genetics since the last World Conference. This has been stimulated by breakthroughs in molecular genetics, automated data-recording devices and computer-intensive statistical methods. The latter were revolutionized by the bootstrap and by Markov chain Monte Carlo (McMC). In this overview a number of specific areas are chosen to illustrate the enormous flexibility that McMC has provided for fitting models and exploring features of data that were previously inaccessible. The selected areas are inferences of the trajectories over time of genetic means and variances, models for the analysis of categorical and count data, the statistical genetics of a model postulating that environmental variance is partly under genetic control, and a short discussion of models that incorporate massive genetic marker information. We provide an overview......

  10. On the Statistical Validation of Technical Analysis

    Directory of Open Access Journals (Sweden)

    Rosane Riera Freire

    2007-06-01

    Technical analysis, or charting, aims at visually identifying geometrical patterns in price charts in order to anticipate price "trends". In this paper we revisit the issue of technical analysis validation, which has been tackled in the literature without taking care of (i) the presence of heterogeneity and (ii) statistical dependence in the analyzed data - various agglutinated return time series from distinct financial securities. The main purpose here is to address the first cited problem by suggesting a validation methodology that also "homogenizes" the securities according to the finite-dimensional probability distribution of their return series. The general steps go through the identification of the stochastic processes for the securities' returns, the clustering of similar securities and, finally, the identification of the presence, or absence, of informational content obtained from those price patterns. We illustrate the proposed methodology with a real-data exercise including several securities of the global market. Our investigation shows that there is statistically significant informational content in two out of three common patterns usually found through technical analysis, namely: triangle, rectangle, and head and shoulders.

  11. Reliability analysis of Markov history-dependent repairable systems with neglected failures

    International Nuclear Information System (INIS)

    Du, Shijia; Zeng, Zhiguo; Cui, Lirong; Kang, Rui

    2017-01-01

    Markov history-dependent repairable systems are Markov repairable systems in which some states are changeable and depend on the recent evolutional history of the system. In practice, many Markov history-dependent repairable systems are subject to neglected failures, i.e., some failures do not affect system performance if they can be repaired promptly. In this paper, we develop a model based on the theory of aggregated stochastic processes to describe the history-dependent behavior and the effect of neglected failures on Markov history-dependent repairable systems. Based on the developed model, instantaneous and steady-state availabilities are derived to characterize the reliability of the system. Four reliability-related time distributions, i.e., the distribution for the kth working period, the distribution for the kth failure period, the distribution for the real working time in an effective working period, and the distribution for the neglected failure time in an effective working period, are also derived to provide a more comprehensive description of the system's reliability. Thanks to the power of the theory of aggregated stochastic processes, closed-form expressions are obtained for all the reliability indexes and time distributions. Finally, the developed indexes and analysis methods are demonstrated by a numerical example. - Highlights: • Markov history-dependent repairable systems with neglected failures are modeled. • Aggregated stochastic processes are used to derive reliability indexes and time distributions. • Closed-form expressions are derived for the considered indexes and distributions.
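
    For orientation, the availabilities have a simple closed form in the baseline two-state Markov repairable unit (no history dependence, no neglected failures); the paper's aggregated-process model generalizes this. A sketch with illustrative rates:

        import math

        # Two-state unit: failure rate lam (up -> down), repair rate mu (down -> up),
        # starting in the up state. Rates below are invented.
        lam, mu = 0.01, 0.5   # per hour

        def instantaneous_availability(t):
            s = lam + mu
            return mu / s + (lam / s) * math.exp(-s * t)

        steady_state = mu / (lam + mu)   # limit of A(t) as t grows
        print(f"A(10 h) = {instantaneous_availability(10.0):.4f}, "
              f"A(inf) = {steady_state:.4f}")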

  12. Rooting out causes in failure analysis; Risk analysis

    Energy Technology Data Exchange (ETDEWEB)

    Keith, Graeme

    2010-07-01

    The Deepwater Horizon disaster was a terrible reminder of the consequences of equipment failure on facilities operating in challenging environments. Thankfully, catastrophes on the scale of the Deepwater Horizon are rare, but equipment failure is a daily occurrence on installations around the globe. The consequences range from short unexpected downtime to a total stop on production, from a brief burst of flaring to lasting environmental damage, and from the momentary discomfiture of a worker to incapacity or death. (Author)

  13. An Integrated Model to Predict Corporate Failure of Listed Companies in Sri Lanka

    Directory of Open Access Journals (Sweden)

    Nisansala Wijekoon

    2015-07-01

    The primary objective of this study is to develop an integrated model to predict corporate failure of listed companies in Sri Lanka. Logistic regression analysis was applied to a data set of 70 matched pairs of failed and non-failed companies listed on the Colombo Stock Exchange (CSE) in Sri Lanka over the period 2002 to 2010. A total of fifteen financial ratios and eight corporate governance variables were used as predictor variables of corporate failure. The statistical testing results indicated that the model combining corporate governance variables with financial ratios improved the prediction accuracy to 88.57 per cent one year prior to failure. Furthermore, the predictive accuracy of this model in all three years prior to failure is above 80 per cent; hence the model is robust in obtaining accurate results for up to three years prior to failure. It was further found that two financial ratios, working capital to total assets and cash flow from operating activities to total assets, and two corporate governance variables, outside director ratio and presence of a company audit committee, have the most explanatory power to predict corporate failure. Therefore, the model developed in this study can assist investors, managers, shareholders, financial institutions, auditors and regulatory agents in Sri Lanka to forecast corporate failure of listed companies.
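
    The paper's estimated coefficients are not given in the abstract, so the following only illustrates the modeling setup: a logistic regression over a few of the predictor types named above, on invented matched-pair data, with scikit-learn assumed available.

        import numpy as np
        from sklearn.linear_model import LogisticRegression

        # Invented predictors per firm: [working capital / total assets,
        # operating cash flow / total assets, outside director ratio,
        # audit committee present (0/1)].
        X = np.array([[-0.10, -0.05, 0.2, 0],
                      [ 0.25,  0.12, 0.5, 1],
                      [-0.02, -0.01, 0.3, 0],
                      [ 0.30,  0.15, 0.6, 1],
                      [-0.15, -0.08, 0.1, 0],
                      [ 0.20,  0.10, 0.4, 1]])
        y = np.array([1, 0, 1, 0, 1, 0])   # 1 = failed within one year

        model = LogisticRegression().fit(X, y)
        # Probability of failure for a new firm with weak ratios, no committee.
        print(model.predict_proba([[0.05, 0.02, 0.25, 0]])[:, 1])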

  14. WE-H-BRC-02: Failure Mode and Effect Analysis of Liver Stereotactic Body Radiotherapy

    Energy Technology Data Exchange (ETDEWEB)

    Rusu, I; Thomas, T; Roeske, J; Price, J; Perino, C; Surucu, M [Loyola University Chicago, Maywood, IL (United States); Mescioglu, I [Lewis University, Romeoville, IL (United States)

    2016-06-15

    Purpose: To identify areas of improvement in our liver stereotactic body radiation therapy (SBRT) program, using failure mode and effect analysis (FMEA). Methods: A multidisciplinary group consisting of one physician, three physicists, one dosimetrist and two therapists was formed. A process map covering 10 major stages of the liver SBRT program from the initial diagnosis to post treatment follow-up was generated. A total of 102 failure modes, together with their causes and effects, were identified. The occurrence (O), severity (S) and lack of detectability (D) were independently scored. The ranking was done using the risk priority number (RPN), defined as the product of the average O, S and D numbers for each mode. The scores were normalized to remove inter-observer variability, while preserving individual ranking order. Further, a correlation analysis on the overall agreement on rank order of all failure modes resulted in positive values for successive pairs of evaluators. The failure modes with the highest RPN values were considered for further investigation. Results: The average normalized RPN value for all modes was 39, with a range of 9 to 103. The FMEA analysis resulted in the identification of the top 10 critical failure modes as: incorrect CT-MR registration, MR scan not performed in treatment position, patient movement between CBCT acquisition and treatment, daily IGRT QA not verified, incorrect or incomplete ITV delineation, OAR contours not verified, inaccurate normal liver effective dose (Veff) calculation, failure of bolus tracking for 4D CT scan, setup instructions not followed for treatment, and plan evaluation metrics missed. Conclusion: The application of FMEA to our liver SBRT program led to the identification and possible improvement of areas affecting patient safety.

  15. WE-H-BRC-02: Failure Mode and Effect Analysis of Liver Stereotactic Body Radiotherapy

    International Nuclear Information System (INIS)

    Rusu, I; Thomas, T; Roeske, J; Price, J; Perino, C; Surucu, M; Mescioglu, I

    2016-01-01

    Purpose: To identify areas of improvement in our liver stereotactic body radiation therapy (SBRT) program, using failure mode and effect analysis (FMEA). Methods: A multidisciplinary group consisting of one physician, three physicists, one dosimetrist and two therapists was formed. A process map covering 10 major stages of the liver SBRT program from the initial diagnosis to post treatment follow-up was generated. A total of 102 failure modes, together with their causes and effects, were identified. The occurrence (O), severity (S) and lack of detectability (D) were independently scored. The ranking was done using the risk priority number (RPN), defined as the product of the average O, S and D numbers for each mode. The scores were normalized to remove inter-observer variability, while preserving individual ranking order. Further, a correlation analysis on the overall agreement on rank order of all failure modes resulted in positive values for successive pairs of evaluators. The failure modes with the highest RPN values were considered for further investigation. Results: The average normalized RPN value for all modes was 39, with a range of 9 to 103. The FMEA analysis resulted in the identification of the top 10 critical failure modes as: incorrect CT-MR registration, MR scan not performed in treatment position, patient movement between CBCT acquisition and treatment, daily IGRT QA not verified, incorrect or incomplete ITV delineation, OAR contours not verified, inaccurate normal liver effective dose (Veff) calculation, failure of bolus tracking for 4D CT scan, setup instructions not followed for treatment, and plan evaluation metrics missed. Conclusion: The application of FMEA to our liver SBRT program led to the identification and possible improvement of areas affecting patient safety.

  16. Lecture notes: meantime to failure analysis

    International Nuclear Information System (INIS)

    Hanlen, R.C.

    1976-01-01

    A method is presented which affects the Quality Assurance Engineer's place in management decision making by giving him a working parameter on which to base sound engineering and management decisions. The theory used in reliability engineering to determine the mean time to failure of a component or system is reviewed. The method presented derives the probability density function for the parameter of the exponential distribution. The exponential distribution is commonly used by industry to determine the reliability of a component or system when the failure rate is assumed to be constant. Some examples of N Reactor performance data are used. To be specific: the ball system data, with 4.9 x 10^6 unit-hours of service and 7 individual failures, indicates a demonstrated 98.8 percent reliability at a 95 percent confidence level for a 12-month mission period, and the diesel starts data, with 7.2 x 10^5 unit-hours of service and 1 failure, indicates a demonstrated 94.4 percent reliability at a 95 percent confidence level for a 12-month mission period.
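
    The note's exact procedure is not spelled out here, but a common route from exponential test data to a demonstrated reliability at a confidence level is the chi-square lower confidence limit on MTBF. A sketch assuming scipy, using the ball-system figures above; it prints a value in the same neighborhood as the reported 98.8 percent, though not necessarily by the report's own method.

        import math
        from scipy.stats import chi2

        def demonstrated_reliability(total_hours, failures, mission_hours,
                                     confidence=0.95):
            """Lower-bound reliability for a constant-failure-rate model.

            Chi-square lower confidence limit on MTBF for a time-terminated
            test, then R = exp(-mission / MTBF_lower).
            """
            dof = 2 * failures + 2
            mtbf_lower = 2.0 * total_hours / chi2.ppf(confidence, dof)
            return math.exp(-mission_hours / mtbf_lower)

        # Ball system: 4.9e6 unit-hours, 7 failures, 12-month mission (~8760 h).
        print(demonstrated_reliability(4.9e6, 7, 8760))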

  17. Data management and statistical analysis for environmental assessment

    International Nuclear Information System (INIS)

    Wendelberger, J.R.; McVittie, T.I.

    1995-01-01

    Data management and statistical analysis for environmental assessment are important issues on the interface of computer science and statistics. Data collection for environmental decision making can generate large quantities of various types of data. A database/GIS system that was developed is described; it provides efficient data storage as well as visualization tools which may be integrated into the data analysis process. FIMAD is a living database and GIS system. The system has changed and developed over time to meet the needs of the Los Alamos National Laboratory Restoration Program. The system provides a repository for data which may be accessed by different individuals for different purposes. The database structure is driven by the large amount and varied types of data required for environmental assessment. The integration of the database with the GIS system provides the foundation for powerful visualization and analysis capabilities

  18. FEAT - FAILURE ENVIRONMENT ANALYSIS TOOL (UNIX VERSION)

    Science.gov (United States)

    Pack, G.

    1994-01-01

    The Failure Environment Analysis Tool, FEAT, enables people to see and better understand the effects of failures in a system. FEAT uses digraph models to determine what will happen to a system if a set of failure events occurs and to identify the possible causes of a selected set of failures. Failures can be user-selected from either engineering schematic or digraph model graphics, and the effects or potential causes of the failures will be color highlighted on the same schematic or model graphic. As a design tool, FEAT helps design reviewers understand exactly what redundancies have been built into a system and where weaknesses need to be protected or designed out. A properly developed digraph will reflect how a system functionally degrades as failures accumulate. FEAT is also useful in operations, where it can help identify causes of failures after they occur. Finally, FEAT is valuable both in conceptual development and as a training aid, since digraphs can identify weaknesses in scenarios as well as hardware. Digraph models for use with FEAT are generally built with the Digraph Editor, a Macintosh-based application which is distributed with FEAT. The Digraph Editor was developed specifically with the needs of FEAT users in mind and offers several time-saving features. It includes an icon toolbox of components required in a digraph model and a menu of functions for manipulating these components. It also offers FEAT users a convenient way to attach a formatted textual description to each digraph node. FEAT needs these node descriptions in order to recognize nodes and propagate failures within the digraph. FEAT users store their node descriptions in modelling tables using any word processing or spreadsheet package capable of saving data to an ASCII text file. From within the Digraph Editor they can then interactively attach a properly formatted textual description to each node in a digraph. Once descriptions are attached to them, a selected set of nodes can be
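
    FEAT's digraph models are richer than this (they capture redundancy and the logic of combined causes), but the core idea of propagating a set of failure events through a directed graph can be sketched in a few lines. The graph below is hypothetical, and every edge is treated as sufficient on its own to fail its target.

        from collections import deque

        # Hypothetical digraph: an edge u -> v means "failure of u fails v".
        effects = {
            "power_bus": ["pump_A", "pump_B"],
            "pump_A": ["coolant_loop"],
            "pump_B": ["coolant_loop"],
            "coolant_loop": ["reactor_trip"],
        }

        def propagate(failed):
            """Return every node that fails once the given nodes have failed."""
            reached, frontier = set(failed), deque(failed)
            while frontier:
                node = frontier.popleft()
                for downstream in effects.get(node, []):
                    if downstream not in reached:
                        reached.add(downstream)
                        frontier.append(downstream)
            return reached

        print(propagate({"power_bus"}))   # the bus failure takes out everything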

  19. Failure probabilistic model of CNC lathes

    International Nuclear Information System (INIS)

    Wang Yiqiang; Jia Yazhou; Yu Junyi; Zheng Yuhua; Yi Shangfeng

    1999-01-01

    A field failure analysis of computerized numerical control (CNC) lathes is described. Field failure data were collected over a period of two years on approximately 80 CNC lathes. A coding system to code failure data was devised and a failure analysis data bank of CNC lathes was established. The failure position and subsystem, failure mode and cause were analyzed to indicate the weak subsystems of a CNC lathe. Also, a failure probabilistic model of CNC lathes was analyzed by fuzzy multicriteria comprehensive evaluation.

  20. The Astringency of the GP Algorithm for Forecasting Software Failure Data Series

    Directory of Open Access Journals (Sweden)

    Yong-qiang Zhang

    2007-05-01

    The forecasting of software failure data series by Genetic Programming (GP) can be realized without any assumptions before modeling. This discovery has transformed traditional statistical modeling methods and improved the consistency of model applicability. The individuals' different characteristics during the evolution of generations, which change randomly, are treated as Markov random processes. This paper also proposes that a GP algorithm with an "optimal individuals reserved strategy" is the best solution to this problem, so that the best-adapted individuals will finally be evolved. This allows practical applications in software reliability modeling, analysis and forecasting of failure behaviors. Moreover, it can verify the feasibility and availability of the GP algorithm, which is applied to software failure data series forecasting, on a theoretical basis. The results show that the GP algorithm is the best solution for software failure behaviors in a variety of disciplines.

  1. Wind Turbine Gearbox Condition Monitoring with AAKR and Moving Window Statistic Methods

    Directory of Open Access Journals (Sweden)

    Peng Guo

    2011-11-01

    Condition Monitoring (CM) of wind turbines can greatly reduce maintenance costs for wind farms, especially offshore wind farms. A new condition monitoring method for a wind turbine gearbox using temperature trend analysis is proposed. Autoassociative Kernel Regression (AAKR) is used to construct a normal-behavior model of the gearbox temperature. With a proper construction of the memory matrix, the AAKR model can cover the normal working space of the gearbox. When the gearbox has an incipient failure, the residuals between the AAKR model estimates and the measured temperature become significant. A moving-window statistical method is used to detect changes in the residual mean value and standard deviation in a timely manner. When one of these parameters exceeds predefined thresholds, an incipient failure is flagged. In order to simulate a gearbox fault, manual temperature drift is added to the initial Supervisory Control and Data Acquisition (SCADA) data. Analysis of the simulated gearbox failures shows that the new condition monitoring method is effective.
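
    A minimal sketch of the moving-window residual test described above, with an invented window size and thresholds (the paper's values are not given in the record); the simulated drift stands in for the manual temperature drift added to the SCADA data.

        import numpy as np

        def flag_incipient_failure(residuals, window=50,
                                   mean_limit=2.0, std_limit=3.0):
            """Index of the first window whose residual mean or std exceeds limits.

            residuals: measured minus model-estimated gearbox temperature.
            """
            r = np.asarray(residuals, dtype=float)
            for start in range(len(r) - window + 1):
                w = r[start:start + window]
                if abs(w.mean()) > mean_limit or w.std(ddof=1) > std_limit:
                    return start   # alarm: incipient failure flagged here
            return None

        rng = np.random.default_rng(0)
        healthy = rng.normal(0.0, 1.0, 500)            # normal-behavior residuals
        fault = rng.normal(0.0, 1.0, 200) + np.linspace(0.0, 5.0, 200)  # drift
        print(flag_incipient_failure(np.concatenate([healthy, fault])))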

  2. Compliance strategy for statistically based neutron overpower protection safety analysis methodology

    International Nuclear Information System (INIS)

    Holliday, E.; Phan, B.; Nainer, O.

    2009-01-01

    The methodology employed in the safety analysis of the slow Loss of Regulation (LOR) event in the OPG and Bruce Power CANDU reactors, referred to as Neutron Overpower Protection (NOP) analysis, is a statistically based methodology. Further enhancement to this methodology includes the use of Extreme Value Statistics (EVS) for the explicit treatment of aleatory and epistemic uncertainties, and probabilistic weighting of the initial core states. A key aspect of this enhanced NOP methodology is to demonstrate adherence, or compliance, with the analysis basis. This paper outlines a compliance strategy capable of accounting for the statistical nature of the enhanced NOP methodology. (author)

  3. Failure mode and effect analysis: improving intensive care unit risk management processes.

    Science.gov (United States)

    Askari, Roohollah; Shafii, Milad; Rafiei, Sima; Abolhassani, Mohammad Sadegh; Salarikhah, Elaheh

    2017-04-18

    Purpose: Failure modes and effects analysis (FMEA) is a practical tool to evaluate risks, discover failures in a proactive manner and propose corrective actions to reduce or eliminate potential risks. The purpose of this paper is to apply the FMEA technique to examine the hazards associated with the process of service delivery in the intensive care unit (ICU) of a tertiary hospital in Yazd, Iran. Design/methodology/approach: This was a before-after study conducted between March 2013 and December 2014. By forming an FMEA team, all potential hazards associated with ICU services - their frequency and severity - were identified. Then a risk priority number was calculated for each activity as an indicator representing high priority areas that need special attention and resource allocation. Findings: Eight failure modes with the highest priority scores, including endotracheal tube defect, wrong placement of endotracheal tube, EVD interface, aspiration failure during suctioning, chest tube failure, tissue injury and deep vein thrombosis, were selected for improvement. Findings affirmed that improvement strategies were generally satisfying and significantly decreased total failures. Practical implications: Application of FMEA in ICUs proved to be effective in proactively decreasing the risk of failures and corrected the control measures up to acceptable levels in all eight areas of function. Originality/value: Using a prospective risk assessment approach, such as FMEA, could be beneficial in dealing with potential failures through proposing preventive actions in a proactive manner. The method could be used as a tool for healthcare continuous quality improvement so that the method identifies both systemic and human errors, and offers practical advice to deal effectively with them.

  4. Competing failure analysis in phased-mission systems with multiple functional dependence groups

    International Nuclear Information System (INIS)

    Wang, Chaonan; Xing, Liudong; Peng, Rui; Pan, Zhusheng

    2017-01-01

    A phased-mission system (PMS) involves multiple, consecutive, non-overlapping phases of operation. The system structure function and component failure behavior in a PMS can change from phase to phase, posing big challenges to the system reliability analysis. Further complicating the problem is the functional dependence (FDEP) behavior, where the failure of certain component(s) causes other component(s) to become unusable, inaccessible or isolated. Previous studies have shown that FDEP can cause competition between failure propagation and failure isolation in the time domain. While such competing failure effects have been well addressed in single-phase systems, only a little work has focused on PMSs, with the restrictive assumption that a single FDEP group exists in one phase of the mission. Many practical systems (e.g., computer systems and networks), however, may involve multiple FDEP groups during the mission. Moreover, different FDEP groups can be dependent due to sharing some common components; they may appear in a single phase or in multiple phases. This paper makes new contributions by modeling and analyzing the reliability of PMSs subject to multiple FDEP groups through a Markov chain-based methodology. Propagated failures with both global and selective effects are considered. Four case studies are presented to demonstrate the application of the proposed method. - Highlights: • Reliability of phased-mission systems subject to competing failure propagation and isolation effects is modeled. • Multiple independent or dependent functional dependence groups are considered. • Propagated failures with global effects and selective effects are studied. • Four case studies demonstrate the generality and application of the proposed Markov-based method.

  5. Risk-Cost Estimation of On-Site Wastewater Treatment System Failures Using Extreme Value Analysis.

    Science.gov (United States)

    Kohler, Laura E; Silverstein, JoAnn; Rajagopalan, Balaji

    2017-05-01

    Owner resistance to increasing regulation of on-site wastewater treatment systems (OWTS), including obligatory inspections and upgrades, moratoriums and cease-and-desist orders in communities around the U.S., demonstrates the challenges associated with managing the risks of inadequate performance of owner-operated wastewater treatment systems. As a result, determining appropriate and enforceable performance measures in an industry with little history of such requirements is challenging. To better support such measures, we develop a statistical method to predict lifetime failure risks, expressed as costs, in order to identify operational factors associated with costly repairs and replacement. A binomial logistic regression is used to fit data from public records of reported OWTS failures in Boulder County, Colorado, which has 14 300 OWTS, to determine the probability that an OWTS will be in a low- or high-risk category for lifetime repair and replacement costs. High-performing or low-risk OWTS, with repairs and replacements below the threshold of $9000 over a 40-year life, are associated with more frequent inspections and upgrades following home additions. OWTS with a high risk of exceeding the repair cost threshold of $18 000 are further analyzed in a variation of extreme value analysis (EVA), Points Over Threshold (POT), where the distribution of risk-cost exceedance values is represented by a generalized Pareto distribution. The resulting threshold cost exceedance estimates for OWTS in the high-risk category over a 40-year expected life ranged from $18 000 to $44 000.
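
    A sketch of the Points Over Threshold step, assuming scipy: exceedances over the $18 000 high-risk threshold are fitted with a generalized Pareto distribution and used to estimate a high quantile of the exceedance cost. The cost data here are synthetic, not the Boulder County records.

        import numpy as np
        from scipy.stats import genpareto

        rng = np.random.default_rng(42)
        # Synthetic lifetime repair-and-replacement costs ($) for many systems.
        costs = rng.lognormal(mean=9.5, sigma=0.8, size=1000)

        threshold = 18_000.0
        exceedances = costs[costs > threshold] - threshold

        # Points Over Threshold: fit a generalized Pareto to the exceedances.
        shape, _, scale = genpareto.fit(exceedances, floc=0.0)

        # Cost exceeded by only 1 percent of the over-threshold systems.
        print(threshold + genpareto.ppf(0.99, shape, loc=0.0, scale=scale))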

  6. Diagnosis checking of statistical analysis in RCTs indexed in PubMed.

    Science.gov (United States)

    Lee, Paul H; Tse, Andy C Y

    2017-11-01

    Statistical analysis is essential for reporting the results of randomized controlled trials (RCTs), as well as for evaluating their effectiveness. However, the validity of a statistical analysis also depends on whether the assumptions of that analysis are valid. To review all RCTs published in journals indexed in PubMed during December 2014 to provide a complete picture of how RCTs handle assumptions of statistical analysis. We reviewed all RCTs published in December 2014 that appeared in journals indexed in PubMed using the Cochrane highly sensitive search strategy. The 2014 impact factors of the journals were used as proxies for their quality. The type of statistical analysis used and whether the assumptions of the analysis were tested were reviewed. In total, 451 papers were included. Of the 278 papers that reported a crude analysis for the primary outcomes, 31 (27·2%) reported whether the outcome was normally distributed. Of the 172 papers that reported an adjusted analysis for the primary outcomes, diagnosis checking was rarely conducted, with only 20%, 8·6% and 7% checked for the generalized linear model, Cox proportional hazard model and multilevel model, respectively. Study characteristics (study type, drug trial, funding sources, journal type and endorsement of CONSORT guidelines) were not associated with the reporting of diagnosis checking. The diagnosis of statistical analyses in RCTs published in PubMed-indexed journals was usually absent. Journals should provide guidelines about the reporting of a diagnosis of assumptions.

  7. Construct validity of the Heart Failure Screening Tool (Heart-FaST) to identify heart failure patients at risk of poor self-care: Rasch analysis.

    Science.gov (United States)

    Reynolds, Nicholas A; Ski, Chantal F; McEvedy, Samantha M; Thompson, David R; Cameron, Jan

    2018-02-14

    The aim of this study was to psychometrically evaluate the Heart Failure Screening Tool (Heart-FaST) via: (1) examination of internal construct validity; (2) testing of scale function in accordance with design; and (3) recommendation for change/s, if items are not well adjusted, to improve psychometric credential. Self-care is vital to the management of heart failure. The Heart-FaST may provide a prospective assessment of risk, regarding the likelihood that patients with heart failure will engage in self-care. Psychometric validation of the Heart-FaST using Rasch analysis. The Heart-FaST was administered to 135 patients (median age = 68, IQR = 59-78 years; 105 males) enrolled in a multidisciplinary heart failure management program. The Heart-FaST is a nurse-administered tool for screening patients with HF at risk of poor self-care. A Rasch analysis of responses was conducted which tested data against Rasch model expectations, including whether items serve as unbiased, non-redundant indicators of risk and measure a single construct and that rating scales operate as intended. The results showed that data met Rasch model expectations after rescoring or deleting items due to poor discrimination, disordered thresholds, differential item functioning, or response dependence. There was no evidence of multidimensionality which supports the use of total scores from Heart-FaST as indicators of risk. Aggregate scores from this modified screening tool rank heart failure patients according to their "risk of poor self-care" demonstrating that the Heart-FaST items constitute a meaningful scale to identify heart failure patients at risk of poor engagement in heart failure self-care.

  8. A κ-generalized statistical mechanics approach to income analysis

    Science.gov (United States)

    Clementi, F.; Gallegati, M.; Kaniadakis, G.

    2009-02-01

    This paper proposes a statistical mechanics approach to the analysis of income distribution and inequality. A new distribution function, having its roots in the framework of κ-generalized statistics, is derived that is particularly suitable for describing the whole spectrum of incomes, from the low-middle income region up to the high income Pareto power-law regime. Analytical expressions for the shape, moments and some other basic statistical properties are given. Furthermore, several well-known econometric tools for measuring inequality, which all exist in a closed form, are considered. A method for parameter estimation is also discussed. The model is shown to fit remarkably well the data on personal income for the United States, and the analysis of inequality performed in terms of its parameters is revealed as very powerful.
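
    A sketch of the distribution's building block: the kappa-exponential, which behaves like an ordinary exponential for small arguments and produces the Pareto power-law tail for large incomes. The survival-function form below, S(x) = exp_k(-(x/beta)^alpha), follows the kappa-generalized income literature, but the parameter values are invented and the exact parameterization should be checked against the paper.

        import math

        def exp_kappa(x, kappa):
            """Kaniadakis kappa-exponential; reduces to exp(x) as kappa -> 0."""
            if kappa == 0:
                return math.exp(x)
            return (math.sqrt(1.0 + kappa**2 * x**2) + kappa * x) ** (1.0 / kappa)

        def survival(x, alpha, beta, kappa):
            """P(income > x): exponential-like body, ~ x**(-alpha/kappa) tail."""
            return exp_kappa(-((x / beta) ** alpha), kappa)

        # Illustrative parameters only, not fitted U.S. income values.
        print(survival(100_000, alpha=2.2, beta=25_000, kappa=0.6))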

  9. A κ-generalized statistical mechanics approach to income analysis

    International Nuclear Information System (INIS)

    Clementi, F; Gallegati, M; Kaniadakis, G

    2009-01-01

    This paper proposes a statistical mechanics approach to the analysis of income distribution and inequality. A new distribution function, having its roots in the framework of κ-generalized statistics, is derived that is particularly suitable for describing the whole spectrum of incomes, from the low–middle income region up to the high income Pareto power-law regime. Analytical expressions for the shape, moments and some other basic statistical properties are given. Furthermore, several well-known econometric tools for measuring inequality, which all exist in a closed form, are considered. A method for parameter estimation is also discussed. The model is shown to fit remarkably well the data on personal income for the United States, and the analysis of inequality performed in terms of its parameters is revealed as very powerful.
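
    For context, the κ-generalized distribution proposed in these records is built on the κ-exponential function. In this framework the complementary cumulative distribution of income x > 0 is

      \exp_\kappa(x) = \left(\sqrt{1 + \kappa^2 x^2} + \kappa x\right)^{1/\kappa},
      \qquad
      P(X > x) = \exp_\kappa\!\left(-\beta x^{\alpha}\right)

    with shape parameters α > 0 and 0 ≤ κ < 1, and scale β > 0. As κ → 0 the ordinary exponential is recovered (a Weibull income distribution), while for large x the survival function decays as a power law ∼ x^{-α/κ}, reproducing the Pareto regime.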

  10. Normality Tests for Statistical Analysis: A Guide for Non-Statisticians

    Science.gov (United States)

    Ghasemi, Asghar; Zahediasl, Saleh

    2012-01-01

    Statistical errors are common in scientific literature and about 50% of the published articles have at least one error. The assumption of normality needs to be checked for many statistical procedures, namely parametric tests, because their validity depends on it. The aim of this commentary is to provide an overview of how to check for normality in statistical analysis using SPSS. PMID:23843808
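
    For readers working outside SPSS, a minimal sketch of the same check in Python (scipy; the sample is simulated purely for the example):

      # Hypothetical example: testing a sample for normality before a t-test.
      import numpy as np
      from scipy import stats

      rng = np.random.default_rng(42)
      sample = rng.normal(loc=100.0, scale=15.0, size=80)

      w, p = stats.shapiro(sample)          # Shapiro-Wilk test
      k2, p2 = stats.normaltest(sample)     # D'Agostino-Pearson omnibus test
      print(f"Shapiro-Wilk: W={w:.3f}, p={p:.3f}")
      print(f"D'Agostino-Pearson: K2={k2:.3f}, p={p2:.3f}")
      # p > 0.05: no evidence against normality; parametric tests are defensible.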

  11. Proceedings of the 1980 DOE statistical symposium

    International Nuclear Information System (INIS)

    Truett, T.; Margolies, D.; Mensing, R.W.

    1981-04-01

    Separate abstracts were prepared for 8 of the 16 papers presented at the DOE Statistical Symposium in California in October 1980. The topics of those papers not included cover the relative detection efficiency of sets of irradiated fuel elements, estimating failure rates for pumps in nuclear reactors, estimating fragility functions, application of bounded-influence regression, the influence function method applied to energy time series data, reliability problems in power generation systems and uncertainty analysis associated with radioactive waste disposal. The other 8 papers have previously been added to the database.

  12. Failure modes and effects criticality analysis and accelerated life testing of LEDs for medical applications

    Science.gov (United States)

    Sawant, M.; Christou, A.

    2012-12-01

    While the use of LEDs in fiber optics and lighting applications is common, their use in medical diagnostic applications is not very extensive. Since the precise value of light intensity will be used to interpret patient results, understanding failure modes [1-4] is very important. We used the Failure Modes and Effects Criticality Analysis (FMECA) tool to identify the critical failure modes of the LEDs. FMECA involves identification of the various failure modes, their effects on the system (LED optical output in this context), their frequency of occurrence, severity and the criticality of the failure modes. The competing failure modes/mechanisms were degradation of: the active layer (where electron-hole recombination occurs to emit light), the electrodes (providing electrical contact to the semiconductor chip), the Indium Tin Oxide (ITO) surface layer (used to improve current spreading and light extraction), the plastic encapsulation (protective polymer layer) and packaging failures (bond wires, heat sink separation). An FMECA table is constructed and the criticality is calculated by estimating the failure effect probability (β), the failure mode ratio (α), the failure rate (λ) and the operating time. Once the critical failure modes were identified, the next steps were generating prior time-to-failure distributions and comparing them with our accelerated life test data. To generate the prior distributions, data and results from previous investigations [5-33], where reliability test results of similar LEDs were reported, were utilized. From the graphs or tabular data, we extracted the time required for the optical power output to reach 80% of its initial value. This is our failure criterion for the medical diagnostic application. Analysis of published data for different LED materials (AlGaInP, GaN, AlGaAs), semiconductor structures (DH, MQW) and modes of testing (DC, Pulsed) was carried out. The data was categorized according to the materials system and LED structure such as AlGaInP-DH-DC, Al
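
    A minimal sketch of the criticality calculation described above, following the MIL-STD-1629A convention that the symbols β, α and λ suggest (all numbers are invented placeholders, not the paper's data):

      # Hypothetical FMECA criticality calculation: Cm = beta * alpha * lambda_p * t
      # beta: failure effect probability, alpha: failure mode ratio,
      # lambda_p: part failure rate (failures/hour), t: operating time (hours).
      failure_modes = [
          # (name, beta, alpha, lambda_p, t)
          ("active layer degradation", 1.0, 0.40, 2e-7, 50_000),
          ("electrode degradation",    0.8, 0.25, 2e-7, 50_000),
          ("ITO layer degradation",    0.6, 0.15, 2e-7, 50_000),
          ("encapsulation yellowing",  0.5, 0.10, 2e-7, 50_000),
          ("packaging/bond failures",  1.0, 0.10, 2e-7, 50_000),
      ]
      for name, beta, alpha, lam, t in failure_modes:
          cm = beta * alpha * lam * t
          print(f"{name:28s} Cm = {cm:.4f}")
      # Item criticality is the sum of the mode criticalities:
      print("Cr =", sum(b * a * l * t for _, b, a, l, t in failure_modes))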

  13. Development of computer-assisted instruction application for statistical data analysis android platform as learning resource

    Science.gov (United States)

    Hendikawati, P.; Arifudin, R.; Zahid, M. Z.

    2018-03-01

    This study aims to design an Android statistics data analysis application that can be accessed through mobile devices, making access easier for users. The application covers various basic statistics topics along with parametric statistical data analysis. The output of the system is a parametric statistical data analysis that can be used by students, lecturers, and other users who need the results of statistical calculations quickly and in an easily understood form. The Android application is developed in the Java programming language. The server side uses PHP with the CodeIgniter framework, and the database is MySQL. The system development methodology used is the waterfall methodology, with stages of analysis, design, coding, testing, implementation, and system maintenance. This statistical data analysis application is expected to support statistics lectures and to make it easier for students to understand statistical analysis on mobile devices.

  14. Prognostic Role of Hypothyroidism in Heart Failure: A Meta-Analysis.

    Science.gov (United States)

    Ning, Ning; Gao, Dengfeng; Triggiani, Vincenzo; Iacoviello, Massimo; Mitchell, Judith E; Ma, Rui; Zhang, Yan; Kou, Huijuan

    2015-07-01

    Hypothyroidism is a risk factor for heart failure (HF) in the general population. However, the relationship between hypothyroidism and clinical outcomes in patients with established HF is still inconclusive. We conducted a systematic review and meta-analysis to clarify the association of hypothyroidism with all-cause mortality as well as cardiac death and/or hospitalization in patients with HF. We searched the MEDLINE (via PubMed), EMBASE, and Scopus databases for studies of hypothyroidism and clinical outcomes in patients with HF published up to the end of January 2015. Random-effects models were used to estimate summary relative risk (RR) statistics. We included 13 articles that reported RR estimates and 95% confidence intervals (95% CIs) for hypothyroidism with outcomes in patients with HF. For the association of hypothyroidism with all-cause mortality and with cardiac death and/or hospitalization, the pooled RR was 1.44 (95% CI: 1.29-1.61) and 1.37 (95% CI: 1.22-1.55), respectively. However, the association disappeared on adjustment for B-type natriuretic peptide level (RR 1.17, 95% CI: 0.90-1.52) and in subgroups defined by mean patient age. Overall, hypothyroidism was associated with increased all-cause mortality as well as cardiac death and/or hospitalization in patients with HF. Further diagnostic and therapeutic procedures for hypothyroidism may be needed for patients with HF.
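
    A minimal sketch of the random-effects pooling used in this kind of meta-analysis (the DerSimonian-Laird estimator; the study values below are invented placeholders, not those of the paper):

      # Hypothetical DerSimonian-Laird random-effects pooling of relative risks.
      import numpy as np

      rr = np.array([1.5, 1.3, 1.7, 1.2])          # per-study relative risks
      ci_low = np.array([1.1, 0.9, 1.2, 0.8])      # lower 95% CI bounds
      ci_high = np.array([2.0, 1.9, 2.4, 1.8])     # upper 95% CI bounds

      y = np.log(rr)                                # log-RR per study
      se = (np.log(ci_high) - np.log(ci_low)) / (2 * 1.96)
      w = 1 / se**2                                 # fixed-effect weights

      # Between-study variance (tau^2) via the DL moment estimator.
      ybar_fe = np.sum(w * y) / np.sum(w)
      q = np.sum(w * (y - ybar_fe) ** 2)
      c = np.sum(w) - np.sum(w**2) / np.sum(w)
      tau2 = max(0.0, (q - (len(y) - 1)) / c)

      w_re = 1 / (se**2 + tau2)                     # random-effects weights
      ybar = np.sum(w_re * y) / np.sum(w_re)
      se_bar = np.sqrt(1 / np.sum(w_re))
      print(f"pooled RR = {np.exp(ybar):.2f} "
            f"(95% CI {np.exp(ybar - 1.96*se_bar):.2f}-{np.exp(ybar + 1.96*se_bar):.2f})")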

  15. Failure mode and effect analysis-based quality assurance for dynamic MLC tracking systems

    Energy Technology Data Exchange (ETDEWEB)

    Sawant, Amit; Dieterich, Sonja; Svatos, Michelle; Keall, Paul [Stanford University, Stanford, California 94394 (United States); Varian Medical Systems, Palo Alto, California 94304 (United States); Stanford University, Stanford, California 94394 (United States)

    2010-12-15

    Purpose: To develop and implement a failure mode and effect analysis (FMEA)-based commissioning and quality assurance framework for dynamic multileaf collimator (DMLC) tumor tracking systems. Methods: A systematic failure mode and effect analysis was performed for a prototype real-time tumor tracking system that uses implanted electromagnetic transponders for tumor position monitoring and a DMLC for real-time beam adaptation. A detailed process tree of DMLC tracking delivery was created and potential tracking-specific failure modes were identified. For each failure mode, a risk probability number (RPN) was calculated from the product of the probability of occurrence, the severity of effect, and the detectability of the failure. Based on the insights obtained from the FMEA, commissioning and QA procedures were developed to check (i) the accuracy of coordinate system transformation, (ii) system latency, (iii) spatial and dosimetric delivery accuracy, (iv) delivery efficiency, and (v) accuracy and consistency of system response to error conditions. The frequency of testing for each failure mode was determined from the RPN value. Results: Failure modes with RPN ≥ 125 were recommended to be tested monthly. Failure modes with RPN < 125 were assigned to be tested during comprehensive evaluations, e.g., during commissioning, annual quality assurance, and after major software/hardware upgrades. System latency was determined to be ∼193 ms. The system showed consistent and accurate response to erroneous conditions. Tracking accuracy was within 3%-3 mm gamma (100% pass rate) for sinusoidal as well as a wide variety of patient-derived respiratory motions. The total time taken for monthly QA was ∼35 min, while that taken for comprehensive testing was ∼3.5 h. Conclusions: FMEA proved to be a powerful and flexible tool to develop and implement a quality management (QM) framework for DMLC tracking. The authors conclude that the use of FMEA-based QM ensures

  16. Failure mode and effect analysis-based quality assurance for dynamic MLC tracking systems.

    Science.gov (United States)

    Sawant, Amit; Dieterich, Sonja; Svatos, Michelle; Keall, Paul

    2010-12-01

    To develop and implement a failure mode and effect analysis (FMEA)-based commissioning and quality assurance framework for dynamic multileaf collimator (DMLC) tumor tracking systems. A systematic failure mode and effect analysis was performed for a prototype real-time tumor tracking system that uses implanted electromagnetic transponders for tumor position monitoring and a DMLC for real-time beam adaptation. A detailed process tree of DMLC tracking delivery was created and potential tracking-specific failure modes were identified. For each failure mode, a risk probability number (RPN) was calculated from the product of the probability of occurrence, the severity of effect, and the detectability of the failure. Based on the insights obtained from the FMEA, commissioning and QA procedures were developed to check (i) the accuracy of coordinate system transformation, (ii) system latency, (iii) spatial and dosimetric delivery accuracy, (iv) delivery efficiency, and (v) accuracy and consistency of system response to error conditions. The frequency of testing for each failure mode was determined from the RPN value. Failure modes with RPN ≥ 125 were recommended to be tested monthly. Failure modes with RPN < 125 were assigned to be tested during comprehensive evaluations, e.g., during commissioning, annual quality assurance, and after major software/hardware upgrades. System latency was determined to be approximately 193 ms. The system showed consistent and accurate response to erroneous conditions. Tracking accuracy was within 3%-3 mm gamma (100% pass rate) for sinusoidal as well as a wide variety of patient-derived respiratory motions. The total time taken for monthly QA was approximately 35 min, while that taken for comprehensive testing was approximately 3.5 h. FMEA proved to be a powerful and flexible tool to develop and implement a quality management (QM) framework for DMLC tracking. The authors conclude that the use of FMEA-based QM ensures efficient allocation

  17. Failure mode and effect analysis-based quality assurance for dynamic MLC tracking systems

    International Nuclear Information System (INIS)

    Sawant, Amit; Dieterich, Sonja; Svatos, Michelle; Keall, Paul

    2010-01-01

    Purpose: To develop and implement a failure mode and effect analysis (FMEA)-based commissioning and quality assurance framework for dynamic multileaf collimator (DMLC) tumor tracking systems. Methods: A systematic failure mode and effect analysis was performed for a prototype real-time tumor tracking system that uses implanted electromagnetic transponders for tumor position monitoring and a DMLC for real-time beam adaptation. A detailed process tree of DMLC tracking delivery was created and potential tracking-specific failure modes were identified. For each failure mode, a risk probability number (RPN) was calculated from the product of the probability of occurrence, the severity of effect, and the detectability of the failure. Based on the insights obtained from the FMEA, commissioning and QA procedures were developed to check (i) the accuracy of coordinate system transformation, (ii) system latency, (iii) spatial and dosimetric delivery accuracy, (iv) delivery efficiency, and (v) accuracy and consistency of system response to error conditions. The frequency of testing for each failure mode was determined from the RPN value. Results: Failure modes with RPN ≥ 125 were recommended to be tested monthly. Failure modes with RPN < 125 were assigned to be tested during comprehensive evaluations, e.g., during commissioning, annual quality assurance, and after major software/hardware upgrades. System latency was determined to be ∼193 ms. The system showed consistent and accurate response to erroneous conditions. Tracking accuracy was within 3%-3 mm gamma (100% pass rate) for sinusoidal as well as a wide variety of patient-derived respiratory motions. The total time taken for monthly QA was ∼35 min, while that taken for comprehensive testing was ∼3.5 h. Conclusions: FMEA proved to be a powerful and flexible tool to develop and implement a quality management (QM) framework for DMLC tracking. The authors conclude that the use of FMEA-based QM ensures efficient allocation
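
    A minimal sketch of the RPN bookkeeping used across the three records above (the failure-mode names and 1-10 scores are invented placeholders; the 125 threshold follows the abstracts):

      # Hypothetical RPN calculation: RPN = occurrence * severity * detectability,
      # each scored 1-10, so RPN spans 1-1000. Modes with RPN >= 125 -> monthly QA.
      failure_modes = {
          "coordinate transform error": (4, 8, 5),
          "excess system latency":      (3, 7, 6),
          "leaf position error":        (5, 6, 4),
          "missed error condition":     (2, 9, 3),
      }
      for name, (occ, sev, det) in failure_modes.items():
          rpn = occ * sev * det
          schedule = "monthly QA" if rpn >= 125 else "comprehensive QA only"
          print(f"{name:28s} RPN={rpn:4d} -> {schedule}")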

  18. Analysis of Moderator System Failure Accidents by Using New Method for Wolsong-1 CANDU 6 Reactor

    Energy Technology Data Exchange (ETDEWEB)

    Jin, Dongsik; Kim, Jonghyun; Cho, Cheonhwey [Atomic Creative Technology Co., Ltd., Daejeon (Korea, Republic of); Kim, Sungmin [Korea Hydro and Nuclear Power Co., Ltd., Daejeon (Korea, Republic of)

    2013-05-15

    To reconfirm the safety of the plant under moderator system failure accidents, an additional safety analysis was performed using the reactor physics code RFSP-IST coupled with the thermal hydraulics code CATHENA. In the present paper, the newly developed analysis method is briefly described and the results obtained from moderator system failure accident simulations for the Wolsong-1 CANDU 6 reactor using the new method are summarized. The safety analysis of moderator system failure accidents for the Wolsong-1 CANDU 6 reactor was carried out using the new code system, i.e., CATHENA and RFSP-IST, instead of the old non-IST codes SMOKIN G-2 and MODSTBOIL. The results obtained with the new method confirmed, as with the old method, that fuel integrity is ensured because the localized power peak remains well below the limits and, most importantly, the reactor enters a self-shutdown mode due to the substantial loss of moderator D₂O inventory from the moderator system. The old method predicted that ROP trip conditions occurred for the transient cases also studied in the present paper, whereas with the new method it was found that ROP trip conditions did not occur. Consequently, the additional safety analysis using the new method reconfirmed the safety of moderator system failure accidents. For the future, the new analysis method using the IST codes instead of the old non-IST codes is strongly recommended for moderator system failure accidents.

  19. Potential failure mode and effects analysis for the ITER NB injector

    International Nuclear Information System (INIS)

    Boldrin, M.; De Lorenzi, A.; Fiorentin, A.; Grando, L.; Marcuzzi, D.; Peruzzo, S.; Pomaro, N.; Rigato, W.; Serianni, G.

    2009-01-01

    Failure mode and effects analysis (FMEA) is a widely used analytical technique that helps in identifying and reducing the risks of failure in a system, component or process. The application of a systematic method like FMEA was deemed necessary and adequate to support the design process of the ITER NBI (neutral beam injector). The approach adopted was to develop an FMEA at a general 'system level', focusing the study on the main functions of the system and ensuring that all the interfaces and interactions among the various subsystems are covered. The FMEA was extended to the whole NBI system taking into account the present design status. The FMEA procedure will then be applied at the component level in the detailed design phase, in particular to identify (or define) the ITER Class of Risk. Several important failure modes were identified, and estimates of subsystem and component reliability are now available. The FMEA procedure proved essential for identifying and confirming the diagnostic systems required for protection and control, and the outcome of this analysis will represent the baseline document for the design of the NBI and NBTF integrated protection system. In the paper, the rationale and background of the FMEA for the ITER NBI are presented, the methods employed are described, and the most interesting results are reported and discussed.

  20. Statistical analysis of metallicity in spiral galaxies

    Energy Technology Data Exchange (ETDEWEB)

    Galeotti, P [Consiglio Nazionale delle Ricerche, Turin (Italy). Lab. di Cosmo-Geofisica; Turin Univ. (Italy). Ist. di Fisica Generale)

    1981-04-01

    A principal component analysis of metallicity and other integral properties of 33 spiral galaxies is presented; the parameters involved are: morphological type, diameter, luminosity and metallicity. From the statistical analysis it is concluded that the sample has only two significant dimensions, and additional tests, involving different parameters, show similar results. Thus it seems that only type and luminosity are independent variables, the other integral properties of spiral galaxies being correlated with them.
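
    A minimal sketch of such a principal component analysis (synthetic data standing in for the 33-galaxy sample; the latent-factor construction is an assumption made purely for illustration):

      # Hypothetical PCA of galaxy parameters: type, diameter, luminosity, metallicity.
      import numpy as np

      rng = np.random.default_rng(1)
      n = 33
      lum = rng.normal(0, 1, n)                       # latent luminosity factor
      gtype = rng.normal(0, 1, n)                     # latent morphological-type factor
      X = np.column_stack([
          gtype,
          0.8 * lum + 0.2 * rng.normal(0, 1, n),      # diameter tracks luminosity
          lum,
          0.6 * lum + 0.3 * gtype + 0.1 * rng.normal(0, 1, n),  # metallicity
      ])

      Xs = (X - X.mean(axis=0)) / X.std(axis=0)       # standardize columns
      # Principal components via SVD of the standardized data matrix.
      _, s, _ = np.linalg.svd(Xs, full_matrices=False)
      explained = s**2 / np.sum(s**2)
      print("variance explained per component:", np.round(explained, 2))
      # Two dominant components would indicate only two independent dimensions,
      # as the abstract concludes for type and luminosity.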

  1. A biomechanical analysis of point of failure during lateral-row tensioning in transosseous-equivalent rotator cuff repair.

    Science.gov (United States)

    Dierckman, Brian D; Goldstein, Jordan L; Hammond, Kyle E; Karas, Spero G

    2012-01-01

    The purpose of this study was to determine the maximum load and point of failure of the construct during tensioning of the lateral row of a transosseous-equivalent (TOE) rotator cuff repair. In 6 fresh-frozen human shoulders, a TOE rotator cuff repair was performed, with 1 suture from each medial anchor passed through the tendon and tied in a horizontal mattress pattern. One of 2 limbs from each of 2 medial anchors was pulled laterally over the tendon. After preparation of the lateral bone for anchor placement, the 2 limbs were passed through the polyether ether ketone (PEEK) eyelet of a knotless anchor and tied to a tensiometer. The lateral anchor was placed into the prepared bone tunnel but not fully seated. Tensioning of the lateral-row repair was simulated by pulling the tensiometer to tighten the suture limbs as they passed through the eyelet of the knotless anchor. The mode of failure and maximum tension were recorded. The procedure was then repeated for the second lateral-row anchor. The mean load to failure during lateral-row placement in the TOE model was 80.8 ± 21.0 N (median, 83 N; range, 27.2 to 115.8 N). There was no statistically significant difference between load to failure during lateral-row tensioning for the anterior and posterior anchors (P = .84). Each of the 12 constructs failed at the eyelet of the lateral anchor. Retrieval analysis showed no failure of the medial anchors, no medial suture cutout through the rotator cuff tendon, and no signs of gapping at the repair site. Our results suggest that the medial-row repair does not appear vulnerable during tensioning of the lateral row of a TOE rotator cuff repair with the implants tested. However, surgeons should exercise caution when tensioning the lateral row, especially when lateral-row anchors with PEEK eyelets are implemented. For this repair construct, the findings suggest that although the medial row is not vulnerable during lateral-row tensioning of a TOE rotator cuff repair, lateral

  2. Failure Mode and Effect Analysis using Soft Set Theory and COPRAS Method

    Directory of Open Access Journals (Sweden)

    Ze-Ling Wang

    2017-01-01

    Full Text Available Failure mode and effect analysis (FMEA) is a risk management technique frequently applied to enhance system performance and safety. In recent years, many researchers have shown an intense interest in improving FMEA due to inherent weaknesses associated with the classical risk priority number (RPN) method. In this study, we develop a new risk ranking model for FMEA based on soft set theory and the COPRAS method, which can deal with the limitations and enhance the performance of the conventional FMEA. First, a trapezoidal fuzzy soft set is adopted to manage FMEA team members' linguistic assessments of failure modes. Then, a modified COPRAS method is utilized for determining the ranking order of the failure modes recognized in FMEA. In particular, we treat the risk factors as interdependent and employ the Choquet integral to obtain the aggregate risk of failures in the new FMEA approach. Finally, a practical FMEA problem is analyzed via the proposed approach to demonstrate its applicability and effectiveness. The result shows that the FMEA model developed in this study outperforms the traditional RPN method and provides a more reasonable risk assessment of failure modes.
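
    For orientation, a textbook COPRAS sketch (crisp values only; this is not the paper's trapezoidal fuzzy soft set or Choquet-integral extension, and all numbers are placeholders). The risk factors O, S, D are treated as non-beneficial criteria, so the failure mode with the lowest relative significance Q is the riskiest:

      # Textbook COPRAS ranking of failure modes on risk factors O, S, D.
      import numpy as np

      X = np.array([
          [6.0, 8.0, 4.0],
          [4.0, 6.0, 7.0],
          [8.0, 5.0, 5.0],
      ])
      w = np.array([0.35, 0.40, 0.25])     # assumed risk-factor weights

      D = w * X / X.sum(axis=0)            # weighted normalized matrix
      s_minus = D.sum(axis=1)              # all criteria are non-beneficial here
      s_plus = np.zeros(len(X))            # no beneficial criteria in this setup

      # Relative significance: Q_i = S+_i + sum(S-) / (S-_i * sum(1/S-))
      q = s_plus + s_minus.sum() / (s_minus * (1.0 / s_minus).sum())
      # Higher Q = lower risk, so risk priority is ascending order of Q.
      ranking = np.argsort(q)
      print("risk priority (most risky first):", ranking + 1, "Q =", np.round(q, 3))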

  3. An integrated model of statistical process control and maintenance based on the delayed monitoring

    International Nuclear Information System (INIS)

    Yin, Hui; Zhang, Guojun; Zhu, Haiping; Deng, Yuhao; He, Fei

    2015-01-01

    This paper develops an integrated model of statistical process control and maintenance decision-making. The proposed delayed monitoring policy postpones sampling until a scheduled time and gives rise to ten scenarios of the production process, in which equipment failure may occur in addition to a quality shift. Equipment failure triggers corrective maintenance, while a control chart alert triggers predictive maintenance. The occurrence probability, cycle time and cycle cost of each scenario are obtained by integral calculation; a mathematical model is then established to minimize the expected cost using a genetic algorithm. A Monte Carlo simulation experiment is conducted and compared with the integral calculation in order to validate the analysis of the ten-scenario model. An ordinary integrated model without delayed monitoring is also established for comparison. The results of a numerical example indicate satisfactory economic performance of the proposed model. Finally, a sensitivity analysis is performed to investigate the effect of the model parameters. - Highlights: • We develop an integrated model of statistical process control and maintenance. • We propose a delayed monitoring policy and derive an economic model with 10 scenarios. • We consider two deterioration mechanisms, quality shift and equipment failure. • The delayed monitoring policy helps reduce the expected cost.
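
    The economic core of such models is a renewal-reward calculation: the long-run expected cost rate is the probability-weighted cycle cost divided by the probability-weighted cycle length. A minimal sketch with invented numbers (five placeholder scenarios standing in for the paper's ten):

      # Generic renewal-reward sketch: expected cost per unit time over scenarios.
      import numpy as np

      p = np.array([0.30, 0.25, 0.20, 0.10, 0.15])   # scenario probabilities (sum to 1)
      T = np.array([10.0, 8.0, 12.0, 6.0, 9.0])      # expected cycle lengths
      C = np.array([50.0, 80.0, 40.0, 120.0, 70.0])  # expected cycle costs

      cost_rate = (p @ C) / (p @ T)                  # E[cycle cost] / E[cycle length]
      print(f"expected cost per unit time: {cost_rate:.2f}")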

  4. Coupled Mechanical-Electrochemical-Thermal Analysis of Failure Propagation in Lithium-ion Batteries

    Energy Technology Data Exchange (ETDEWEB)

    Zhang, Chao; Santhanagopalan, Shriram; Pesaran, Ahmad

    2016-07-28

    This is a presentation given at the 12th World Congress for Computational Mechanics on coupled mechanical-electrochemical-thermal analysis of failure propagation in lithium-ion batteries for electric vehicles.

  5. Improved GLR method to instrument failure detection

    International Nuclear Information System (INIS)

    Jeong, Hak Yeoung; Chang, Soon Heung

    1985-01-01

    The generalized likelihood ratio (GLR) method performs statistical tests on the innovations sequence of a Kalman-Bucy filter state estimator for system failure detection and identification. However, the major drawback of the conventional GLR is that it must hypothesize a particular failure type in each case. In this paper, a method to overcome this drawback is proposed. The improved GLR method is applied to a PWR pressurizer and gives successful results in the detection and identification of any failure. Furthermore, some benefit in the processing time per cycle of failure detection and identification is obtained. (Author)
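
    A minimal sketch of a GLR test of the kind described: scanning the innovations sequence for a mean shift with unknown onset time. All data and the threshold are invented placeholders; a real implementation would also match each hypothesized failure type's signature, which is exactly the step the conventional method requires and the improved method avoids.

      # Minimal GLR sketch: detect a mean shift (e.g., sensor bias) in the
      # innovations of a state estimator. Under H0, innovations ~ N(0, sigma^2).
      import numpy as np

      rng = np.random.default_rng(7)
      sigma = 1.0
      nu = rng.normal(0, sigma, 200)
      nu[120:] += 1.5                       # simulated failure: bias appears at k=120

      N = len(nu)
      best_llr, best_onset = -np.inf, None
      for theta in range(N - 5):            # candidate failure onset times
          window = nu[theta:]
          s = window.sum()
          # GLR for an unknown constant shift over [theta, N): LLR = S^2 / (2 sigma^2 n)
          llr = s**2 / (2 * sigma**2 * len(window))
          if llr > best_llr:
              best_llr, best_onset = llr, theta

      threshold = 10.0                      # chosen for a desired false-alarm rate
      if best_llr > threshold:
          print(f"failure declared, estimated onset k={best_onset}, GLR={best_llr:.1f}")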

  6. The failure trace archive : enabling comparative analysis of failures in diverse distributed systems

    NARCIS (Netherlands)

    Kondo, D.; Javadi, B.; Iosup, A.; Epema, D.H.J.

    2010-01-01

    With the increasing functionality and complexity of distributed systems, resource failures are inevitable. While numerous models and algorithms for dealing with failures exist, the lack of public trace data sets and tools has prevented meaningful comparisons. To facilitate the design, validation,

  7. Failure Mode and Effect Analysis in Increasing the Revenue of Emergency Department

    Directory of Open Access Journals (Sweden)

    Farhad Rahmati

    2015-02-01

    Full Text Available Introduction: Successful performance of the emergency department (ED) is one of the important indicators of satisfaction among those referred to it. Ensuring such performance requires fiscal discipline and the avoidance of non-beneficial activities in this department. Therefore, increasing the revenue of the emergency department is one of the goals of the hospital management system. Accordingly, the researchers assessed the problems leading to revenue loss in the ED and eliminated them using failure mode and effects analysis (FMEA). Methods: This prospective cohort study was performed over 18 months in 6 phases. In the first phase, the failures were determined and solutions suggested to eliminate them. During phases 2-5, solutions were implemented in order of problem priority. In the sixth phase, the final assessment of the study was done. Finally, the feedback on the system's revenue was evaluated and the data analyzed using repeated measures ANOVA. Results: Failure to record consumed instruments and the lack of separate codes for emergency services of hospitalized patients were the most important failures leading to decreased ED revenue. Their elimination led to a 75.9% increase in revenue within a month (df = 1.6; F = 84.0; p<0.0001). In total, the 18 months following the elimination of failures brought a 328.2% increase in ED revenue (df = 15.9; F = 215; p<0.0001). Conclusion: The findings of the present study show that failure mode and effects analysis can be used as a safe and effective method to reduce the expenses of the ED and increase its revenue.

  8. Statistical Analysis of Protein Ensembles

    Science.gov (United States)

    Máté, Gabriell; Heermann, Dieter

    2014-04-01

    As 3D protein-configuration data piles up, there is an ever-increasing need for well-defined, mathematically rigorous analysis approaches, especially as the vast majority of the currently available methods rely heavily on heuristics. We propose an analysis framework which stems from topology, the field of mathematics which studies properties preserved under continuous deformations. First, we calculate a barcode representation of the molecules employing computational topology algorithms. Bars in this barcode represent different topological features. Molecules are compared through their barcodes by statistically determining the difference in the sets of their topological features. As a proof-of-principle application, we analyze a dataset compiled from ensembles of different proteins, obtained from the Ensemble Protein Database. We demonstrate that our approach correctly detects the different protein groupings.
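
    A minimal sketch of computing such a barcode, using the gudhi library as an assumed stand-in for the authors' computational-topology pipeline (the point cloud is a synthetic placeholder for atomic coordinates):

      # Hypothetical persistence-barcode computation on a 3D point cloud
      # (e.g., C-alpha coordinates of one protein configuration).
      import numpy as np
      import gudhi

      rng = np.random.default_rng(3)
      points = rng.normal(0, 1, size=(60, 3))        # placeholder coordinates

      rips = gudhi.RipsComplex(points=points, max_edge_length=2.0)
      st = rips.create_simplex_tree(max_dimension=2)
      barcode = st.persistence()                     # list of (dim, (birth, death))

      # Each bar is a topological feature; long bars are robust features,
      # short bars are noise. Ensembles can be compared via their bar sets.
      for dim, (birth, death) in barcode[:10]:
          print(f"H{dim}: [{birth:.2f}, {death:.2f})")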

  9. State analysis of BOP using statistical and heuristic methods

    International Nuclear Information System (INIS)

    Heo, Gyun Young; Chang, Soon Heung

    2003-01-01

    Under the deregulation environment, the performance enhancement of the balance of plant (BOP) in nuclear power plants is being highlighted. To analyze the performance level of the BOP, the performance test procedures provided by an authorized institution such as ASME are used. However, plant investigation showed that the requirements of these procedures on the reliability and quantity of sensors are difficult to satisfy. As a solution, a state analysis method, an expanded concept of signal validation, was proposed on the basis of statistical and heuristic approaches. The authors recommended a statistical linear regression model, obtained by analyzing correlations among BOP parameters, as the reference state analysis method. Its advantages are that its derivation is not heuristic, its model uncertainty can be calculated, and it is easy to apply to an actual plant. The error of the statistical linear regression model is below 3% under normal as well as abnormal system states. Additionally, a neural network model was recommended, since the statistical model cannot be applied to the validation of all of the sensors and is sensitive to outliers, i.e., signals lying outside the statistical distribution. Because there are many sensors to be validated in the BOP, wavelet analysis (WA) was applied as a pre-processor to reduce the input dimension and to enhance training accuracy. The outlier localization capability of WA enhanced the robustness of the neural network. The trained neural network restored the degraded signals to values within ±3% of the true signals.
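
    A minimal sketch of regression-based state analysis of the kind described: predict one sensor's reading from correlated plant parameters and flag readings outside the model band. All variable names and data are invented placeholders, not the plant's parameters.

      # Hypothetical sensor validation via a linear regression reference model.
      import numpy as np

      rng = np.random.default_rng(5)
      n = 500
      flow = rng.normal(100, 5, n)                   # correlated plant parameters
      pressure = 0.5 * flow + rng.normal(0, 1, n)
      temp = 0.3 * flow + 0.4 * pressure + rng.normal(0, 1, n)  # sensor to validate

      A = np.column_stack([np.ones(n), flow, pressure])
      coef, *_ = np.linalg.lstsq(A, temp, rcond=None)
      resid_std = np.std(temp - A @ coef)            # model uncertainty band

      estimate = np.array([1.0, 102.0, 52.0]) @ coef # model estimate for a new state
      measured = estimate * 1.05                     # simulated 5% drifted sensor
      if abs(measured - estimate) > 3 * resid_std:
          print(f"sensor flagged: measured {measured:.1f} vs estimate {estimate:.1f}")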

  10. Sensitivity analysis of repairable redundant system with switching failure and geometric reneging

    Directory of Open Access Journals (Sweden)

    Chandra Shekhar

    2017-09-01

    Full Text Available This study deals with the performance modeling and reliability analysis of a redundant machining system composed of several functional machines. To capture more realistic scenarios, the concepts of switching failure and geometric reneging are included. The times to breakdown and the repair times of operating and standby machines are assumed to follow exponential distributions. For the quantitative assessment of the machine interference problem, various performance measures such as mean time to failure, reliability and reneging rate have been formulated. To show the practicability of the developed model, a numerical illustration is presented. For the practical justification and validation of the results established, a sensitivity analysis of the reliability indices is presented by varying different system descriptors.
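
    A minimal sketch of how mean time to failure is typically computed for such Markov reliability models: a small hypothetical two-machine chain with one absorbing failure state, not the paper's full model with switching failure and reneging.

      # Hypothetical CTMC mean-time-to-failure: states 0,1 are transient (0 or 1
      # machine failed, repair ongoing); system failure is absorbing.
      # The MTTF vector tau solves Q_T @ tau = -1 over the transient states.
      import numpy as np

      lam, mu = 0.01, 0.5                  # failure and repair rates (per hour)
      # Generator restricted to the transient states {0, 1}:
      Q_T = np.array([
          [-2 * lam,        2 * lam],      # both machines up; either may fail
          [      mu, -(mu + lam)],         # one failed: repair (mu) or second failure
      ])
      tau = np.linalg.solve(Q_T, -np.ones(2))
      print(f"MTTF from the all-up state: {tau[0]:.1f} hours")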

  11. Local Failure in Resected N1 Lung Cancer: Implications for Adjuvant Therapy

    International Nuclear Information System (INIS)

    Higgins, Kristin A.; Chino, Junzo P.; Berry, Mark; Ready, Neal; Boyd, Jessamy; Yoo, David S.; Kelsey, Chris R.

    2012-01-01

    Purpose: To evaluate actuarial rates of local failure in patients with pathologic N1 non–small-cell lung cancer and to identify clinical and pathologic factors associated with an increased risk of local failure after resection. Methods and Materials: All patients who underwent surgery for non–small-cell lung cancer with pathologically confirmed N1 disease at Duke University Medical Center from 1995–2008 were identified. Patients receiving any preoperative therapy or postoperative radiotherapy or with positive surgical margins were excluded. Local failure was defined as disease recurrence within the ipsilateral hilum, mediastinum, or bronchial stump/staple line. Actuarial rates of local failure were calculated with the Kaplan-Meier method. A Cox multivariate analysis was used to identify factors independently associated with a higher risk of local recurrence. Results: Among 1,559 patients who underwent surgery during the time interval, 198 met the inclusion criteria. Of these patients, 50 (25%) received adjuvant chemotherapy. Actuarial (5-year) rates of local failure, distant failure, and overall survival were 40%, 55%, and 33%, respectively. On multivariate analysis, factors associated with an increased risk of local failure included a video-assisted thoracoscopic surgery approach (hazard ratio [HR], 2.5; p = 0.01), visceral pleural invasion (HR, 2.1; p = 0.04), and increasing number of positive N1 lymph nodes (HR, 1.3 per involved lymph node; p = 0.02). Chemotherapy was associated with a trend toward decreased risk of local failure that was not statistically significant (HR, 0.61; p = 0.2). Conclusions: Actuarial rates of local failure in pN1 disease are high. Further investigation of conformal postoperative radiotherapy may be warranted.

  12. Local Failure in Resected N1 Lung Cancer: Implications for Adjuvant Therapy

    Energy Technology Data Exchange (ETDEWEB)

    Higgins, Kristin A., E-mail: kristin.higgins@duke.edu [Department of Radiation Oncology, Duke University Medical Center, Durham, NC (United States); Chino, Junzo P [Department of Radiation Oncology, Duke University Medical Center, Durham, NC (United States); Berry, Mark [Department of Surgery, Division of Cardiovascular and Thoracic Surgery, Duke University Medical Center, Durham, NC (United States); Ready, Neal [Department of Medicine, Division of Medical Oncology, Duke University Medical Center, Durham, NC (United States); Boyd, Jessamy [US Oncology, Dallas, TX (United States); Yoo, David S; Kelsey, Chris R [Department of Radiation Oncology, Duke University Medical Center, Durham, NC (United States)

    2012-06-01

    Purpose: To evaluate actuarial rates of local failure in patients with pathologic N1 non-small-cell lung cancer and to identify clinical and pathologic factors associated with an increased risk of local failure after resection. Methods and Materials: All patients who underwent surgery for non-small-cell lung cancer with pathologically confirmed N1 disease at Duke University Medical Center from 1995-2008 were identified. Patients receiving any preoperative therapy or postoperative radiotherapy or with positive surgical margins were excluded. Local failure was defined as disease recurrence within the ipsilateral hilum, mediastinum, or bronchial stump/staple line. Actuarial rates of local failure were calculated with the Kaplan-Meier method. A Cox multivariate analysis was used to identify factors independently associated with a higher risk of local recurrence. Results: Among 1,559 patients who underwent surgery during the time interval, 198 met the inclusion criteria. Of these patients, 50 (25%) received adjuvant chemotherapy. Actuarial (5-year) rates of local failure, distant failure, and overall survival were 40%, 55%, and 33%, respectively. On multivariate analysis, factors associated with an increased risk of local failure included a video-assisted thoracoscopic surgery approach (hazard ratio [HR], 2.5; p = 0.01), visceral pleural invasion (HR, 2.1; p = 0.04), and increasing number of positive N1 lymph nodes (HR, 1.3 per involved lymph node; p = 0.02). Chemotherapy was associated with a trend toward decreased risk of local failure that was not statistically significant (HR, 0.61; p = 0.2). Conclusions: Actuarial rates of local failure in pN1 disease are high. Further investigation of conformal postoperative radiotherapy may be warranted.
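
    A minimal sketch of the two analyses named in these records (Kaplan-Meier actuarial estimates and Cox multivariate regression), assuming the Python lifelines package and invented placeholder data rather than the study's cohort:

      # Hypothetical Kaplan-Meier + Cox analysis of local failure.
      import pandas as pd
      from lifelines import KaplanMeierFitter, CoxPHFitter

      df = pd.DataFrame({
          "years":         [1.2, 3.4, 5.0, 0.8, 4.1, 2.7, 3.9, 1.5],  # follow-up
          "local_failure": [1,   0,   1,   1,   0,   1,   0,   1  ],  # event flag
          "vats":          [1,   0,   1,   0,   1,   1,   0,   0  ],  # VATS approach
          "n_pos_nodes":   [2,   1,   3,   1,   2,   4,   1,   2  ],  # positive N1 nodes
      })

      kmf = KaplanMeierFitter()
      kmf.fit(df["years"], event_observed=df["local_failure"])
      print(kmf.survival_function_)        # actuarial freedom from local failure

      cph = CoxPHFitter()
      cph.fit(df, duration_col="years", event_col="local_failure")
      cph.print_summary()                  # hazard ratios for VATS and nodal burden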

  13. Adaptive Failure Identification for Healthcare Risk Analysis and Its Application on E-Healthcare

    Directory of Open Access Journals (Sweden)

    Kuo-Chung Chu

    2014-01-01

    Full Text Available To satisfy the requirement for diverse risk preferences, we propose a generic risk priority number (GRPN) function that assigns a risk weight to each parameter such that the weights represent individual organization/department/process preferences for the parameters. This research applies a GRPN function-based model to differentiate the types of risk, and primary data are generated through simulation. We also conduct sensitivity analyses on correlation and regression to compare the model with the traditional RPN (TRPN). The proposed model outperforms the TRPN model and provides a practical, effective, and adaptive method for risk evaluation. In particular, the defined GRPN function offers a new way to prioritize failure modes in failure mode and effect analysis (FMEA). The different risk preferences considered in the healthcare example show that the modified FMEA model can take the various risk factors into account and prioritize failure modes more accurately. In addition, the model can also be applied to a generic e-healthcare service environment with a hierarchical architecture.
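
    To illustrate the idea of preference weights only: the record does not give the GRPN's functional form, so the log-weighted expression below is an assumption made purely for illustration, not the paper's definition.

      # Illustrative weighted risk-priority sketch: each organization chooses
      # weights for O, S, D to reflect its own risk preferences.
      import math

      def weighted_rpn(o, s, d, w_o, w_s, w_d):
          # Hypothetical log-weighted form; weights must sum to 1.
          assert abs(w_o + w_s + w_d - 1.0) < 1e-9
          return w_o * math.log10(o) + w_s * math.log10(s) + w_d * math.log10(d)

      # A severity-averse department weights S heavily:
      print(weighted_rpn(5, 9, 3, w_o=0.2, w_s=0.6, w_d=0.2))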

  14. Using the failure mode and effects analysis model to improve parathyroid hormone and adrenocorticotropic hormone testing

    Directory of Open Access Journals (Sweden)

    Magnezi R

    2016-12-01

    Full Text Available Racheli Magnezi,1 Asaf Hemi,1 Rina Hemi2 1Department of Management, Public Health and Health Systems Management Program, Bar Ilan University, Ramat Gan, 2Endocrine Service Unit, Sheba Medical Center, Tel Aviv, Israel Background: Risk management in health care systems applies to all hospital employees and directors as they deal with human life and emergency routines. There is a constant need to decrease risk and increase patient safety in the hospital environment. The purpose of this article is to review the laboratory testing procedures for parathyroid hormone and adrenocorticotropic hormone (which are characterized by short half-lives and to track failure modes and risks, and offer solutions to prevent them. During a routine quality improvement review at the Endocrine Laboratory in Tel Hashomer Hospital, we discovered these tests are frequently repeated unnecessarily due to multiple failures. The repetition of the tests inconveniences patients and leads to extra work for the laboratory and logistics personnel as well as the nurses and doctors who have to perform many tasks with limited resources.Methods: A team of eight staff members accompanied by the Head of the Endocrine Laboratory formed the team for analysis. The failure mode and effects analysis model (FMEA was used to analyze the laboratory testing procedure and was designed to simplify the process steps and indicate and rank possible failures.Results: A total of 23 failure modes were found within the process, 19 of which were ranked by level of severity. The FMEA model prioritizes failures by their risk priority number (RPN. For example, the most serious failure was the delay after the samples were collected from the department (RPN =226.1.Conclusion: This model helped us to visualize the process in a simple way. After analyzing the information, solutions were proposed to prevent failures, and a method to completely avoid the top four problems was also developed. Keywords: failure mode

  15. WWER expert system for fuel failure analysis using the RTOP-CA code

    International Nuclear Information System (INIS)

    Likhanskii, V.; Evdokimov, I.; Sorokin, A.; Khromov, A.; Kanukova, V.; Apollonova, O.; Ugryumov, A.

    2008-01-01

    The computer expert system for fuel failure analysis of WWER reactors during operation is presented. The diagnostics is based on measurement of the specific activity of reference nuclides in the reactor primary coolant and application of a computer code for data interpretation. The data analysis includes an evaluation of the tramp uranium mass in the reactor core, detection of failures by iodine and caesium spikes, and evaluation of the burnup of defective fuel. The burnup of defective fuel was evaluated using the ratio of caesium nuclide activities in spikes and the ratios of gaseous fission product activities under steady-state operational conditions. The method of evaluating defective fuel burnup from fission gas activity is presented in detail. A neural-network analysis is performed to determine the number of failed fuel rods and the defect size. Results of the expert system application are illustrated for several fuel campaigns at operating WWER NPPs. (authors)

  16. Precision Statistical Analysis of Images Based on Brightness Distribution

    Directory of Open Access Journals (Sweden)

    Muzhir Shaban Al-Ani

    2017-07-01

    Full Text Available Studying the content of images is an important topic through which reasonable and accurate image analysis is achieved. Image analysis has recently become a vital field because of the huge number of images transferred via transmission media in our daily life, and these image-crowded media have highlighted this research area. In this paper, the implemented system proceeds in several steps to compute the statistical measures of standard deviation and mean values of both color and grey images; the final step compares the obtained results across the different cases of the test phase. The statistical parameters are implemented to characterize the content of an image and its texture. Standard deviation, mean and correlation values are used to study the intensity distribution of the tested images. Reasonable results are obtained for both standard deviation and mean values via the implementation of the system. The major issue addressed in this work is brightness distribution via statistical measures under different types of lighting.
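
    A minimal sketch of the brightness statistics described: per-channel mean and standard deviation of a color image and its grayscale version, plus a channel correlation. The input path is an assumed placeholder.

      # Hypothetical brightness statistics for a color and a grayscale image.
      import numpy as np
      from PIL import Image

      img = Image.open("test.jpg")              # assumed input image path
      rgb = np.asarray(img.convert("RGB"), dtype=float)
      grey = np.asarray(img.convert("L"), dtype=float)

      for i, ch in enumerate("RGB"):
          print(f"{ch}: mean={rgb[..., i].mean():.1f}, std={rgb[..., i].std():.1f}")
      print(f"grey: mean={grey.mean():.1f}, std={grey.std():.1f}")

      # Correlation between channels, as used to study intensity distribution:
      r = np.corrcoef(rgb[..., 0].ravel(), rgb[..., 1].ravel())[0, 1]
      print(f"R-G correlation: {r:.2f}")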

  17. Evaluation of Safety in a Radiation Oncology Setting Using Failure Mode and Effects Analysis

    International Nuclear Information System (INIS)

    Ford, Eric C.; Gaudette, Ray; Myers, Lee; Vanderver, Bruce; Engineer, Lilly; Zellars, Richard; Song, Danny Y.; Wong, John; DeWeese, Theodore L.

    2009-01-01

    Purpose: Failure mode and effects analysis (FMEA) is a widely used tool for prospectively evaluating safety and reliability. We report our experiences in applying FMEA in the setting of radiation oncology. Methods and Materials: We performed an FMEA analysis for our external beam radiation therapy service, which consisted of the following tasks: (1) create a visual map of the process, (2) identify possible failure modes and assign risk probability numbers (RPN) to each failure mode based on tabulated scores for the severity, frequency of occurrence, and detectability, each on a scale of 1 to 10, and (3) identify improvements that are both feasible and effective. The RPN scores can span a range of 1 to 1000, with higher scores indicating the relative importance of a given failure mode. Results: Our process map consisted of 269 different nodes. We identified 127 possible failure modes with RPN scores ranging from 2 to 160. Fifteen of the top-ranked failure modes, representing RPN scores of 75 or more, were considered for process improvements. These specific improvement suggestions were incorporated into our practice, with review and implementation by each department team responsible for the process. Conclusions: The FMEA technique provides a systematic method for finding vulnerabilities in a process before they result in an error. The FMEA framework can naturally incorporate further quantification and monitoring. A general-use system for incident and near-miss reporting would be useful in this regard.

  18. Fisher statistics for analysis of diffusion tensor directional information.

    Science.gov (United States)

    Hutchinson, Elizabeth B; Rutecki, Paul A; Alexander, Andrew L; Sutula, Thomas P

    2012-04-30

    A statistical approach is presented for the quantitative analysis of diffusion tensor imaging (DTI) directional information using Fisher statistics, which were originally developed for the analysis of vectors in the field of paleomagnetism. In this framework, descriptive and inferential statistics have been formulated based on the Fisher probability density function, a spherical analogue of the normal distribution. The Fisher approach was evaluated for investigation of rat brain DTI maps to characterize tissue orientation in the corpus callosum, fornix, and hilus of the dorsal hippocampal dentate gyrus, and to compare directional properties in these regions following status epilepticus (SE) or traumatic brain injury (TBI) with values in healthy brains. Direction vectors were determined for each region of interest (ROI) for each brain sample, and Fisher statistics were applied to calculate the mean direction vector and variance parameters in the corpus callosum, fornix, and dentate gyrus of normal rats and rats that experienced TBI or SE. Hypothesis testing was performed by calculation of Watson's F-statistic and the associated p-value giving the likelihood that grouped observations were from the same directional distribution. In the fornix and midline corpus callosum, no directional differences were detected between groups; however, in the hilus, significant group differences were detected, demonstrating the utility of the approach for statistical comparison of tissue structural orientation. Copyright © 2012 Elsevier B.V. All rights reserved.
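
    A minimal sketch of the descriptive Fisher statistics named above: the mean direction and the concentration parameter κ of a set of unit direction vectors, in the paleomagnetism convention. The data are invented placeholders.

      # Hypothetical Fisher statistics for unit direction vectors.
      import numpy as np

      rng = np.random.default_rng(9)
      # Placeholder data: unit vectors scattered around a true direction.
      v = rng.normal(0, 0.1, size=(20, 3)) + np.array([0.0, 0.0, 1.0])
      v /= np.linalg.norm(v, axis=1, keepdims=True)

      n = len(v)
      resultant = v.sum(axis=0)
      R = np.linalg.norm(resultant)         # resultant vector length
      mean_dir = resultant / R              # Fisher mean direction
      kappa = (n - 1) / (n - R)             # concentration (large-kappa approximation)

      print("mean direction:", np.round(mean_dir, 3))
      print(f"kappa = {kappa:.1f}  (higher = more tightly clustered)")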

  19. Statistical analysis of RHIC beam position monitors performance

    Science.gov (United States)

    Calaga, R.; Tomás, R.

    2004-04-01

    A detailed statistical analysis of beam position monitor (BPM) performance at RHIC is a critical factor in improving regular operations and future runs. Robust identification of malfunctioning BPMs plays an important role in any orbit or turn-by-turn analysis. Singular value decomposition and Fourier transform methods, which have evolved as powerful numerical techniques in signal processing, will aid in such identification from BPM data. This is the first attempt at RHIC to use a large set of data to statistically enhance the capability of these two techniques and determine BPM performance. A comparison using run 2003 data shows striking agreement between the two methods, which can hence be used to improve BPM functioning at RHIC and possibly other accelerators.

  20. Statistical analysis of RHIC beam position monitors performance

    Directory of Open Access Journals (Sweden)

    R. Calaga

    2004-04-01

    Full Text Available A detailed statistical analysis of beam position monitor (BPM) performance at RHIC is a critical factor in improving regular operations and future runs. Robust identification of malfunctioning BPMs plays an important role in any orbit or turn-by-turn analysis. Singular value decomposition and Fourier transform methods, which have evolved as powerful numerical techniques in signal processing, will aid in such identification from BPM data. This is the first attempt at RHIC to use a large set of data to statistically enhance the capability of these two techniques and determine BPM performance. A comparison using run 2003 data shows striking agreement between the two methods, which can hence be used to improve BPM functioning at RHIC and possibly other accelerators.
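
    A minimal sketch of the SVD idea behind these records: in a turn-by-turn data matrix (turns x BPMs), healthy BPMs share the coherent betatron signal captured by the leading singular modes, while a malfunctioning BPM contributes little there. All data are invented placeholders.

      # Hypothetical SVD-based identification of a malfunctioning BPM.
      import numpy as np

      rng = np.random.default_rng(11)
      turns, n_bpm = 1000, 40
      phase = rng.uniform(0, 2 * np.pi, n_bpm)
      tune = 0.22
      # Coherent betatron signal seen by all healthy BPMs, plus noise:
      data = (np.sin(2 * np.pi * tune * np.arange(turns)[:, None] + phase)
              + 0.05 * rng.normal(size=(turns, n_bpm)))
      data[:, 7] = rng.normal(size=turns)   # BPM 7 malfunctioning: pure noise

      data -= data.mean(axis=0)
      U, s, Vt = np.linalg.svd(data, full_matrices=False)
      # Per-BPM weight in the two dominant (betatron) modes; the faulty BPM
      # projects weakly onto them and strongly onto the noise modes.
      coherent = np.sum(Vt[:2] ** 2, axis=0)
      print("suspect BPM index:", int(np.argmin(coherent)))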