WorldWideScience

Sample records for minimum failure probability

  1. Failure probability under parameter uncertainty.

    Science.gov (United States)

    Gerrard, R; Tsanakas, A

    2011-05-01

In many problems of risk analysis, failure is equivalent to the event of a random risk factor exceeding a given threshold. Failure probabilities can be controlled if a decision-maker is able to set the threshold at an appropriate level. This abstract situation applies, for example, to environmental risks with infrastructure controls; to supply chain risks with inventory controls; and to insurance solvency risks with capital controls. However, uncertainty around the distribution of the risk factor implies that parameter error will be present and the measures taken to control failure probabilities may not be effective. We show that parameter uncertainty increases the probability (understood as expected frequency) of failures. For a large class of loss distributions, arising from increasing transformations of location-scale families (including the log-normal, Weibull, and Pareto distributions), the article shows that failure probabilities can be exactly calculated, as they are independent of the true (but unknown) parameters. Hence it is possible to obtain an explicit measure of the effect of parameter uncertainty on failure probability. Failure probability can be controlled in two different ways: (1) by reducing the nominal required failure probability, depending on the size of the available data set, and (2) by modifying the distribution itself that is used to calculate the risk control. Approach (1) corresponds to a frequentist/regulatory view of probability, while approach (2) is consistent with a Bayesian/personalistic view. We furthermore show that the two approaches are consistent in achieving the required failure probability. Finally, we briefly discuss the effects of data pooling and its systemic risk implications. © 2010 Society for Risk Analysis.
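
As a concrete illustration of this effect (a minimal sketch, not the authors' derivation; the log-normal parameters, sample size and nominal level are arbitrary choices), the following Python snippet lets a decision-maker set the threshold from a plug-in log-normal fit to a small sample and then measures the realized failure frequency under the true distribution:

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(0)

mu_true, sigma_true = 0.0, 1.0   # true (unknown) log-normal parameters
n, eps = 30, 0.01                # sample size and nominal failure probability
n_trials = 20_000                # repeated "estimate-then-operate" experiments

failures = 0
for _ in range(n_trials):
    # decision-maker observes a small sample and fits the log-normal from the log-data
    logs = rng.normal(mu_true, sigma_true, size=n)
    mu_hat, sigma_hat = logs.mean(), logs.std(ddof=1)
    # threshold set so that the *estimated* exceedance probability equals eps
    threshold = np.exp(mu_hat + sigma_hat * norm.ppf(1 - eps))
    # realized outcome: does a new draw from the true distribution exceed the threshold?
    loss = np.exp(rng.normal(mu_true, sigma_true))
    failures += loss > threshold

print(f"nominal failure probability: {eps:.4f}")
print(f"realized failure frequency : {failures / n_trials:.4f}")  # typically larger than eps
```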

  2. Link importance incorporated failure probability measuring solution for multicast light-trees in elastic optical networks

    Science.gov (United States)

    Li, Xin; Zhang, Lu; Tang, Ying; Huang, Shanguo

    2018-03-01

The light-tree-based optical multicasting (LT-OM) scheme provides a spectrum- and energy-efficient method to accommodate emerging multicast services. Some studies focus on survivability technologies for LTs against a fixed number of link failures, such as a single-link failure. However, few studies consider failure probability constraints when building LTs. It is worth noting that each link of an LT plays a role of different importance under failure scenarios. When calculating the failure probability of an LT, the importance of each of its links should be considered. We design a link importance incorporated failure probability measuring solution (LIFPMS) for multicast LTs under an independent failure model and a shared risk link group failure model. Based on the LIFPMS, we put forward the minimum failure probability (MFP) problem for the LT-OM scheme. Heuristic approaches are developed to address the MFP problem in elastic optical networks. Numerical results show that the LIFPMS provides an accurate metric for calculating the failure probability of multicast LTs and enhances the reliability of the LT-OM scheme while accommodating multicast services.
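
For background, the independent failure model mentioned above reduces, in its simplest form, to a product over the links of the tree; the sketch below is a generic illustration only (the tree topology and per-link failure probabilities are invented, and the link-importance weighting of the LIFPMS itself is not reproduced):

```python
from math import prod

# hypothetical light-tree: edge -> independent failure probability of that link
tree_links = {("s", "a"): 0.002, ("a", "d1"): 0.001,
              ("a", "b"): 0.003, ("b", "d2"): 0.001, ("b", "d3"): 0.002}

# under an independent failure model, the tree fails if any of its links fails
p_tree_survive = prod(1.0 - p for p in tree_links.values())
p_tree_fail = 1.0 - p_tree_survive
print(f"light-tree failure probability: {p_tree_fail:.6f}")
```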

  3. Failure probability analysis of optical grid

    Science.gov (United States)

    Zhong, Yaoquan; Guo, Wei; Sun, Weiqiang; Jin, Yaohui; Hu, Weisheng

    2008-11-01

Optical grid, the integrated computing environment based on optical networks, is expected to be an efficient infrastructure to support advanced data-intensive grid applications. In an optical grid, faults of both computational and network resources are inevitable due to the large scale and high complexity of the system. As optical-network-based distributed computing systems are widely applied to data processing, the application failure probability has become an important indicator of application quality and an important aspect that operators consider. This paper presents a task-based method for analyzing the application failure probability in an optical grid. The failure probability of the entire application can then be quantified, and the reduction in application failure probability achieved by different backup strategies can be compared, so that the different requirements of different clients can be satisfied. In an optical grid, when an application modeled as a DAG (directed acyclic graph) is executed under different backup strategies, the application failure probability and the application completion time differ. This paper proposes a new multi-objective differentiated services algorithm (MDSA). The new application scheduling algorithm can guarantee the failure probability requirement, improve network resource utilization, and realize a compromise between the network operator and the application submitter. Differentiated services can thus be achieved in an optical grid.
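
A minimal numerical illustration of the task-based view (not the paper's MDSA algorithm; the task set, per-task failure probabilities and backup assignment are hypothetical): if an application succeeds only when every task succeeds, and a backed-up task fails only when both its primary and backup copies fail, the application failure probability follows directly:

```python
# hypothetical per-task failure probabilities on their primary resources
p_task = {"t1": 0.01, "t2": 0.02, "t3": 0.015, "t4": 0.01}
# tasks protected by an independent backup resource with the same failure probability
backed_up = {"t2", "t3"}

p_success = 1.0
for task, p in p_task.items():
    p_fail_task = p * p if task in backed_up else p   # backup: both copies must fail
    p_success *= 1.0 - p_fail_task

print(f"application failure probability: {1.0 - p_success:.5f}")
```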

  4. 14 CFR 417.224 - Probability of failure analysis.

    Science.gov (United States)

    2010-01-01

    ... 14 Aeronautics and Space 4 2010-01-01 2010-01-01 false Probability of failure analysis. 417.224..., DEPARTMENT OF TRANSPORTATION LICENSING LAUNCH SAFETY Flight Safety Analysis § 417.224 Probability of failure..., must account for launch vehicle failure probability in a consistent manner. A launch vehicle failure...

  5. The distributed failure probability approach to dependent failure analysis, and its application

    International Nuclear Information System (INIS)

    Hughes, R.P.

    1989-01-01

    The Distributed Failure Probability (DFP) approach to the problem of dependent failures in systems is presented. The basis of the approach is that the failure probability of a component is a variable. The source of this variability is the change in the 'environment' of the component, where the term 'environment' is used to mean not only obvious environmental factors such as temperature etc., but also such factors as the quality of maintenance and manufacture. The failure probability is distributed among these various 'environments' giving rise to the Distributed Failure Probability method. Within the framework which this method represents, modelling assumptions can be made, based both on engineering judgment and on the data directly. As such, this DFP approach provides a soundly based and scrutable technique by which dependent failures can be quantitatively assessed. (orig.)
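
The mechanism behind the DFP approach can be shown with a three-line calculation (illustrative numbers only): when the failure probability of two nominally identical components is distributed over shared 'environments', the joint failure probability exceeds the product of the marginals, i.e. a dependent-failure contribution appears:

```python
# hypothetical environments with their probabilities and the component failure
# probability conditional on each environment (same type of component twice)
environments = [
    ("good maintenance", 0.80, 0.001),
    ("poor maintenance", 0.15, 0.010),
    ("harsh conditions", 0.05, 0.100),
]

p_single = sum(w * p for _, w, p in environments)      # marginal failure probability
p_both   = sum(w * p * p for _, w, p in environments)  # both fail; independent *given* the environment

print(f"P(single component fails)             = {p_single:.5f}")
print(f"P(both fail), DFP model               = {p_both:.2e}")
print(f"P(both fail), independence assumption = {p_single**2:.2e}")  # noticeably smaller
```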

  6. Evaluation of nuclear power plant component failure probability and core damage probability using simplified PSA model

    International Nuclear Information System (INIS)

    Shimada, Yoshio

    2000-01-01

It is anticipated that changes in the frequency of surveillance tests, preventive maintenance or parts replacement of safety-related components may change the component failure probability and, in turn, the core damage probability. It is also anticipated that the change differs depending on the initiating event frequency and the component type. This study assessed the change in core damage probability using a simplified PSA model capable of calculating core damage probability in a short time, developed by the US NRC to process accident sequence precursors, when the failure probability of various components is varied between 0 and 1 and when Japanese or American initiating event frequency data are used. As a result of the analysis: (1) It was clarified that the frequency of surveillance tests, preventive maintenance or parts replacement of motor-driven pumps (high pressure injection pumps, residual heat removal pumps, auxiliary feedwater pumps) should be changed carefully, since the change in core damage probability is large when the base failure probability increases. (2) The core damage probability is insensitive to changes in surveillance test frequency for motor-operated valves and the turbine-driven auxiliary feedwater pump, since the change in core damage probability is small even when their failure probabilities change by about one order of magnitude. (3) The change in core damage probability is small when Japanese failure probability data are applied to the emergency diesel generator, even if the failure probability changes by one order of magnitude from the base value. On the other hand, when American failure probability data are applied, the increase in core damage probability is large when the failure probability increases. Therefore, when Japanese failure probability data are applied, the core damage probability is insensitive to changes in surveillance test frequency, etc. (author)

  7. Failure frequencies and probabilities applicable to BWR and PWR piping

    International Nuclear Information System (INIS)

    Bush, S.H.; Chockie, A.D.

    1996-03-01

This report deals with failure probabilities and failure frequencies of nuclear plant piping and the failure frequencies of flanges and bellows. Piping failure probabilities are derived from Piping Reliability Analysis Including Seismic Events (PRAISE) computer code calculations based on fatigue and intergranular stress corrosion as failure mechanisms. Values for both failure probabilities and failure frequencies are cited from several sources to yield a better evaluation of the spread in mean and median values as well as the widths of the uncertainty bands. A general conclusion is that the numbers from WASH-1400 often used in PRAs are unduly conservative. Failure frequencies for both leaks and large breaks tend to be higher than would be calculated using the failure probabilities, primarily because the frequencies are based on a relatively small number of operating years. Also, failure probabilities are substantially lower because of the probability distributions used in PRAISE calculations. A general conclusion is that large LOCA probability values calculated using PRAISE will be quite small, on the order of less than 1E-8 per year. The values in this report should be recognized as having inherent limitations and should be considered as estimates and not absolute values. 24 refs

  8. PROBABILITY CALIBRATION BY THE MINIMUM AND MAXIMUM PROBABILITY SCORES IN ONE-CLASS BAYES LEARNING FOR ANOMALY DETECTION

    Data.gov (United States)

    National Aeronautics and Space Administration — PROBABILITY CALIBRATION BY THE MINIMUM AND MAXIMUM PROBABILITY SCORES IN ONE-CLASS BAYES LEARNING FOR ANOMALY DETECTION GUICHONG LI, NATHALIE JAPKOWICZ, IAN HOFFMAN,...

  9. Probability of Failure in Random Vibration

    DEFF Research Database (Denmark)

    Nielsen, Søren R.K.; Sørensen, John Dalsgaard

    1988-01-01

Close approximations to the first-passage probability of failure in random vibration can be obtained by integral equation methods. A simple relation exists between the first-passage probability density function and the distribution function for the time interval spent below a barrier before out-crossing. An integral equation for the probability density function of the time interval is formulated, and adequate approximations for the kernel are suggested. The kernel approximation results in approximate solutions for the probability density function of the time interval and thus for the first-passage probability...

  10. Incorporation of various uncertainties in dependent failure-probability estimation

    International Nuclear Information System (INIS)

    Samanta, P.K.; Mitra, S.P.

    1982-01-01

    This paper describes an approach that allows the incorporation of various types of uncertainties in the estimation of dependent failure (common mode failure) probability. The types of uncertainties considered are attributable to data, modeling and coupling. The method developed is applied to a class of dependent failures, i.e., multiple human failures during testing, maintenance and calibration. Estimation of these failures is critical as they have been shown to be significant contributors to core melt probability in pressurized water reactors

  11. Integrated failure probability estimation based on structural integrity analysis and failure data: Natural gas pipeline case

    International Nuclear Information System (INIS)

    Dundulis, Gintautas; Žutautaitė, Inga; Janulionis, Remigijus; Ušpuras, Eugenijus; Rimkevičius, Sigitas; Eid, Mohamed

    2016-01-01

In this paper, the authors present an approach as an overall framework for the estimation of the failure probability of pipelines based on: the results of the deterministic-probabilistic structural integrity analysis (taking into account loads, material properties, geometry, boundary conditions, crack size, and defected zone thickness), the corrosion rate, the number of defects and failure data (incorporated into the model via application of the Bayesian method). The proposed approach is applied to estimate the failure probability of a selected part of the Lithuanian natural gas transmission network. The presented approach for the estimation of integrated failure probability is a combination of several different analyses allowing us to obtain: the critical crack length and depth, the failure probability for the defected zone thickness, and the dependency of the failure probability on the age of the natural gas transmission pipeline. A model uncertainty analysis and an uncertainty propagation analysis are performed as well. - Highlights: • Degradation mechanisms of natural gas transmission pipelines. • Fracture mechanics analysis of the pipe with a crack. • Stress evaluation of the pipe with a critical crack. • Deterministic-probabilistic structural integrity analysis of the gas pipeline. • Integrated estimation of pipeline failure probability by the Bayesian method.
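
To make the role of the Bayesian step concrete, the following sketch (not the authors' model; the prior parameters and observed service data are invented) shows a conjugate gamma-Poisson update of a pipeline failure rate from observed failure counts, the kind of update that can then be combined with the structural integrity results:

```python
from scipy.stats import gamma

# hypothetical gamma prior on the failure rate [failures per km-year],
# e.g. taken from generic pipeline failure statistics
alpha0, beta0 = 1.5, 2000.0        # shape, rate

# hypothetical observed service data for the pipeline section of interest
failures, exposure = 2, 5500.0     # failures observed over km-years of service

# conjugate update: the posterior is again a gamma distribution
alpha1, beta1 = alpha0 + failures, beta0 + exposure

post_mean = alpha1 / beta1
lo, hi = gamma.ppf([0.05, 0.95], a=alpha1, scale=1.0 / beta1)
print(f"posterior mean failure rate: {post_mean:.2e} per km-year")
print(f"90% credible interval      : [{lo:.2e}, {hi:.2e}]")
```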

  12. Distinguishing mixed quantum states: Minimum-error discrimination versus optimum unambiguous discrimination

    International Nuclear Information System (INIS)

    Herzog, Ulrike; Bergou, Janos A.

    2004-01-01

    We consider two different optimized measurement strategies for the discrimination of nonorthogonal quantum states. The first is ambiguous discrimination with a minimum probability of inferring an erroneous result, and the second is unambiguous, i.e., error-free, discrimination with a minimum probability of getting an inconclusive outcome, where the measurement fails to give a definite answer. For distinguishing between two mixed quantum states, we investigate the relation between the minimum-error probability achievable in ambiguous discrimination, and the minimum failure probability that can be reached in unambiguous discrimination of the same two states. The latter turns out to be at least twice as large as the former for any two given states. As an example, we treat the case where the state of the quantum system is known to be, with arbitrary prior probability, either a given pure state, or a uniform statistical mixture of any number of mutually orthogonal states. For this case we derive an analytical result for the minimum probability of error and perform a quantitative comparison with the minimum failure probability
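
For the special case of two pure states with equal prior probabilities, the standard closed-form expressions for both quantities are known, and the factor-of-two relation can be checked numerically; the sketch below is a textbook illustration of that special case, not the mixed-state analysis of the paper:

```python
import numpy as np

# overlap |<psi1|psi2>| between the two pure states, swept over (0, 1)
for s in np.linspace(0.05, 0.95, 10):
    p_error   = 0.5 * (1.0 - np.sqrt(1.0 - s**2))  # Helstrom minimum-error probability
    p_failure = s                                   # minimum failure probability, equal priors
    assert p_failure >= 2.0 * p_error - 1e-12       # failure prob. at least twice the error prob.
    print(f"overlap {s:.2f}: P_err = {p_error:.4f}, P_fail = {p_failure:.4f}, "
          f"ratio = {p_failure / p_error:.2f}")
```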

  13. Pipe failure probability - the Thomas paper revisited

    International Nuclear Information System (INIS)

    Lydell, B.O.Y.

    2000-01-01

    Almost twenty years ago, in Volume 2 of Reliability Engineering (the predecessor of Reliability Engineering and System Safety), a paper by H. M. Thomas of Rolls Royce and Associates Ltd. presented a generalized approach to the estimation of piping and vessel failure probability. The 'Thomas-approach' used insights from actual failure statistics to calculate the probability of leakage and conditional probability of rupture given leakage. It was intended for practitioners without access to data on the service experience with piping and piping system components. This article revisits the Thomas paper by drawing on insights from development of a new database on piping failures in commercial nuclear power plants worldwide (SKI-PIPE). Partially sponsored by the Swedish Nuclear Power Inspectorate (SKI), the R and D leading up to this note was performed during 1994-1999. Motivated by data requirements of reliability analysis and probabilistic safety assessment (PSA), the new database supports statistical analysis of piping failure data. Against the background of this database development program, the article reviews the applicability of the 'Thomas approach' in applied risk and reliability analysis. It addresses the question whether a new and expanded database on the service experience with piping systems would alter the original piping reliability correlation as suggested by H. M. Thomas

  14. Finding upper bounds for software failure probabilities - experiments and results

    International Nuclear Information System (INIS)

    Kristiansen, Monica; Winther, Rune

    2005-09-01

This report looks into some aspects of using Bayesian hypothesis testing to find upper bounds for software failure probabilities. In the first part, the report evaluates the Bayesian hypothesis testing approach for finding upper bounds for failure probabilities of single software components. The report shows how different choices of prior probability distributions for a software component's failure probability influence the number of tests required to obtain adequate confidence in a software component. The evaluation investigated both the effect of the shape of the prior distribution and the effect of one's prior confidence in the software component. In addition, different choices of prior probability distributions are discussed based on their relevance in a software context. In the second part, ideas on how the Bayesian hypothesis testing approach can be extended to assess systems consisting of multiple software components are given. One of the main challenges when assessing systems consisting of multiple software components is to include dependency aspects in the software reliability models. However, different types of failure dependencies between software components must be modelled differently. Identifying different types of failure dependencies is therefore an important condition for choosing a prior probability distribution which correctly reflects one's prior belief in the probability of software components failing dependently. In this report, software components include both general in-house software components and pre-developed software components (e.g. COTS, SOUP, etc). (Author)
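
The single-component case can be written down in a few lines: with a Beta prior on the failure probability per demand and n failure-free test runs, the posterior is again a Beta distribution, and one can read off how many tests are needed before the posterior probability that p exceeds a target bound falls below a chosen risk level. The prior, bound and risk level below are illustrative assumptions, not values from the report:

```python
from scipy.stats import beta

a0, b0  = 1.0, 1.0    # Beta prior on the failure probability per demand (uniform here)
p_bound = 1e-3        # required upper bound on the failure probability
risk    = 0.05        # acceptable posterior probability that p exceeds the bound

n = 0
while True:
    # n successful tests, 0 failures -> posterior Beta(a0, b0 + n)
    p_exceed = beta.sf(p_bound, a0, b0 + n)
    if p_exceed <= risk:
        break
    n += 1

print(f"failure-free tests needed: {n}")
print(f"posterior P(p > {p_bound}) after {n} tests: {p_exceed:.4f}")
```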

  15. Human error recovery failure probability when using soft controls in computerized control rooms

    International Nuclear Information System (INIS)

    Jang, Inseok; Kim, Ar Ryum; Seong, Poong Hyun; Jung, Wondea

    2014-01-01

Many studies have categorized the recovery process into three phases: detection of the problem situation, explanation of problem causes or countermeasures against the problem, and end of recovery. Although the focus of recovery promotion has been on categorizing recovery phases and modeling the recovery process, research related to human recovery failure probabilities has not been performed actively. Only a few studies of recovery failure probabilities have been carried out empirically. In summary, the research performed so far has several problems in terms of its use in human reliability analysis (HRA). By adopting new human-system interfaces based on computer technologies, the operating environment of MCRs in NPPs has changed from conventional MCRs to advanced MCRs. Because of the different interfaces between conventional and advanced MCRs, different recovery failure probabilities should be considered in the HRA for advanced MCRs. Therefore, this study carries out an empirical analysis of human error recovery probabilities using an advanced MCR mockup called the compact nuclear simulator (CNS). The aim of this work is not only to compile a recovery failure probability database using the simulator for advanced MCRs but also to collect recovery failure probabilities for defined human error modes in order to compare which human error mode has the highest recovery failure probability. The results show that the recovery failure probability for wrong screen selection was the lowest among the human error modes, which means that most human errors related to wrong screen selection can be recovered. On the other hand, the recovery failure probabilities of operation selection omission and delayed operation were 1.0. These results imply that once subjects omitted a task in the procedure, they had difficulty finding and recovering from their errors without a supervisor's assistance. Also, wrong screen selection had an effect on delayed operation. That is, wrong screen

  16. Estimation of component failure probability from masked binomial system testing data

    International Nuclear Information System (INIS)

    Tan Zhibin

    2005-01-01

The component failure probability estimates obtained from analysis of binomial system testing data are very useful because they reflect the operational failure probability of components in field conditions similar to the test environment. In practice, this type of analysis is often confounded by the problem of data masking: the status of tested components is unknown. Methods that account for this type of uncertainty are usually computationally intensive and not practical for complex systems. In this paper, we consider masked binomial system testing data and develop a probabilistic model to efficiently estimate component failure probabilities. In the model, all system tests are classified into test categories based on component coverage. Component coverage of test categories is modeled by a bipartite graph. Test category failure probabilities conditional on the status of covered components are defined. An EM algorithm to estimate component failure probabilities is developed based on a simple but powerful concept: equivalent failures and tests. By simulation we not only demonstrate the convergence and accuracy of the algorithm but also show that the probabilistic model is capable of analyzing systems in series, parallel and any other user-defined structures. A case study illustrates an application in test case prioritization

  17. Main factors for fatigue failure probability of pipes subjected to fluid thermal fluctuation

    International Nuclear Information System (INIS)

    Machida, Hideo; Suzuki, Masaaki; Kasahara, Naoto

    2015-01-01

It is very important to grasp the failure probability and failure mode appropriately to carry out risk reduction measures at nuclear power plants. To clarify the important factors for the failure probability and failure mode of pipes subjected to fluid thermal fluctuation, failure probability analyses were performed by changing the values of the stress range, stress ratio, stress components and threshold of the stress intensity factor range. The important factors for the failure probability are the stress range, the stress ratio (mean stress condition) and the threshold of the stress intensity factor range. The important factor for the failure mode is the circumferential angle range of the fluid thermal fluctuation. When a large fluid thermal fluctuation acts on the entire circumferential surface of the pipe, the probability of pipe breakage increases, calling for measures to prevent such a failure and reduce the risk to the plant. When the circumferential angle subjected to fluid thermal fluctuation is small, the failure mode of the piping is leakage, and corrective maintenance might be applicable from the viewpoint of risk to the plant. (author)

  18. Failure probability analysis on mercury target vessel

    International Nuclear Information System (INIS)

    Ishikura, Syuichi; Futakawa, Masatoshi; Kogawa, Hiroyuki; Sato, Hiroshi; Haga, Katsuhiro; Ikeda, Yujiro

    2005-03-01

Failure probability analysis was carried out to estimate the lifetime of the mercury target which will be installed into the JSNS (Japan spallation neutron source) in J-PARC (Japan Proton Accelerator Research Complex). The lifetime was estimated taking the loading conditions and materials degradation into account. The loads considered on the target vessel were the static stresses due to thermal expansion and the static pre-pressure of the He gas and mercury, and the dynamic stresses due to the thermal-shock pressure waves generated repeatedly at 25 Hz. Materials used in the target vessel will be degraded by fatigue, neutron and proton irradiation, mercury immersion, pitting damage, etc. The imposed stresses were evaluated through static and dynamic structural analyses. The material degradations were deduced based on published experimental data. As a result, it was quantitatively confirmed that the failure probability over the lifetime expected in the design is very low, about 10^-11 for the safety hull, meaning that it will hardly fail during the design lifetime. On the other hand, the beam window of the mercury vessel, which suffers high-pressure waves, exhibits a failure probability of 12%. It was concluded, therefore, that mercury leaking from a failed area at the beam window is adequately kept in the space between the safety hull and the mercury vessel, monitored by mercury-leakage sensors. (author)

  19. Sensitivity of the probability of failure to probability of detection curve regions

    International Nuclear Information System (INIS)

    Garza, J.; Millwater, H.

    2016-01-01

    Non-destructive inspection (NDI) techniques have been shown to play a vital role in fracture control plans, structural health monitoring, and ensuring availability and reliability of piping, pressure vessels, mechanical and aerospace equipment. Probabilistic fatigue simulations are often used in order to determine the efficacy of an inspection procedure with the NDI method modeled as a probability of detection (POD) curve. These simulations can be used to determine the most advantageous NDI method for a given application. As an aid to this process, a first order sensitivity method of the probability-of-failure (POF) with respect to regions of the POD curve (lower tail, middle region, right tail) is developed and presented here. The sensitivity method computes the partial derivative of the POF with respect to a change in each region of a POD or multiple POD curves. The sensitivities are computed at no cost by reusing the samples from an existing Monte Carlo (MC) analysis. A numerical example is presented considering single and multiple inspections. - Highlights: • Sensitivities of probability-of-failure to a region of probability-of-detection curve. • The sensitivities are computed with negligible cost. • Sensitivities identify the important region of a POD curve. • Sensitivities can be used as a guide to selecting the optimal POD curve.
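
The idea of reusing Monte Carlo samples for such sensitivities can be illustrated on a stripped-down crack-growth problem (all distributions, the logistic POD form and the parameter values below are invented, and only a single inspection is modeled): because the probability of missing the crack enters the failure estimator as an explicit weight, the derivative with respect to a POD parameter is just another average over the same samples:

```python
import numpy as np

rng = np.random.default_rng(1)
N = 200_000

# hypothetical crack-growth problem (all distributions/parameters invented)
a0 = rng.lognormal(mean=np.log(0.5), sigma=0.4, size=N)   # initial crack size [mm]
g  = rng.normal(0.04, 0.015, size=N)                      # exponential growth rate
t_insp, t_end, a_crit = 30.0, 60.0, 12.0

a_insp = a0 * np.exp(g * t_insp)                          # crack size at inspection
a_end  = a0 * np.exp(g * t_end)                           # crack size at end of interval

def pod(a, a50=2.0, b=0.5):
    """Logistic probability-of-detection curve (parametric form assumed here)."""
    return 1.0 / (1.0 + np.exp(-(a - a50) / b))

# a sampled crack leads to failure only if it is large at t_end AND was missed at inspection
miss = 1.0 - pod(a_insp)
fail = (a_end > a_crit).astype(float)
pof = np.mean(fail * miss)

# sensitivity of POF w.r.t. the POD parameter a50, reusing the same samples:
# d(1 - POD)/da50 = +POD*(1 - POD)/b for the logistic curve above
b = 0.5
dpof_da50 = np.mean(fail * pod(a_insp) * (1.0 - pod(a_insp)) / b)

print(f"POF with one inspection : {pof:.3e}")
print(f"dPOF/da50 (sample reuse): {dpof_da50:.3e}")
```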

  20. Estimation of functional failure probability of passive systems based on subset simulation method

    International Nuclear Information System (INIS)

    Wang Dongqing; Wang Baosheng; Zhang Jianmin; Jiang Jing

    2012-01-01

In order to solve the problem of multi-dimensional epistemic uncertainties and the small functional failure probability of passive systems, an innovative reliability analysis algorithm called subset simulation based on Markov chain Monte Carlo was presented. The method is founded on the idea that a small failure probability can be expressed as a product of larger conditional failure probabilities by introducing a proper choice of intermediate failure events. Markov chain Monte Carlo simulation was implemented to efficiently generate conditional samples for estimating the conditional failure probabilities. Taking the AP1000 passive residual heat removal system as an example, the uncertainties related to the model of a passive system and the numerical values of its input parameters were considered in this paper. The probability of functional failure was then estimated with the subset simulation method. The numerical results demonstrate that the subset simulation method has high computing efficiency and excellent computing accuracy compared with traditional probability analysis methods. (authors)
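
The core of subset simulation can be sketched in a few dozen lines. The toy limit-state function, thresholds and tuning constants below are illustrative only (this is not the AP1000 model), but the structure, expressing a small failure probability as a product of conditional probabilities estimated with Markov chain Monte Carlo, is the one described above:

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(0)

def g(x):
    """Toy limit-state function: 'failure' when g(x) exceeds b_target."""
    return x.sum(axis=-1) / np.sqrt(x.shape[-1])

d, b_target = 10, 3.5      # dimension and failure threshold; exact P_f = Phi(-3.5)
N, p0 = 2000, 0.1          # samples per level, conditional level probability

x = rng.standard_normal((N, d))
y = g(x)
p_f = 1.0

while True:
    b_level = np.quantile(y, 1.0 - p0)        # intermediate threshold for this level
    if b_level >= b_target:                    # final level reached
        p_f *= np.mean(y > b_target)
        break
    p_f *= p0
    seeds = x[y > b_level]                     # samples already in the intermediate failure domain
    # grow new conditional samples with a Metropolis random walk targeting N(0, I) restricted to g > b_level
    x_new, y_new = [], []
    per_seed = int(np.ceil(N / len(seeds)))
    for s in seeds:
        cur = s.copy()
        for _ in range(per_seed):
            cand = cur + 0.8 * rng.standard_normal(d)
            if rng.random() < np.exp(0.5 * (cur @ cur - cand @ cand)):
                if g(cand) > b_level:          # reject moves that leave the conditional domain
                    cur = cand
            x_new.append(cur.copy())
            y_new.append(g(cur))
    x, y = np.array(x_new)[:N], np.array(y_new)[:N]

print(f"subset simulation estimate: {p_f:.2e}")
print(f"exact value               : {norm.cdf(-b_target):.2e}")
```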

  1. A method for estimating failure rates for low probability events arising in PSA

    International Nuclear Information System (INIS)

    Thorne, M.C.; Williams, M.M.R.

    1995-01-01

The authors develop a method for predicting failure rates and failure probabilities per event when, over a given test period or number of demands, no failures have occurred. A Bayesian approach is adopted to calculate a posterior probability distribution for the failure rate or failure probability per event subsequent to the test period. This posterior is then used to estimate effective failure rates or probabilities over a subsequent period of time or number of demands. In special circumstances, the authors' results reduce to the well-known rules of thumb, viz. 1/N and 1/T, where N is the number of failure-free demands during the test period and T is the length of the failure-free test period. However, the authors are able to give strict conditions on the validity of these rules of thumb and to improve on them when necessary
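
For the failure-rate case the mechanics can be made explicit: with a flat (improper) gamma prior on the rate and zero failures observed over a test period T, the posterior is exponential with mean 1/T, which reproduces the rule of thumb quoted above. The prior choice and the numbers below are illustrative assumptions, not the authors' prescription:

```python
from scipy.stats import gamma

T = 5000.0                   # failure-free test period [hours]
alpha0, beta0 = 1.0, 0.0     # improper flat prior on the failure rate (an assumption here)

# Poisson likelihood with 0 failures in time T -> posterior is Gamma(alpha0 + 0, beta0 + T)
alpha1, beta1 = alpha0, beta0 + T

print(f"posterior mean failure rate: {alpha1 / beta1:.2e} per hour   (the 1/T rule of thumb)")
print(f"posterior 95th percentile  : {gamma.ppf(0.95, a=alpha1, scale=1.0 / beta1):.2e} per hour")
```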

  2. Failure Probability Estimation of Wind Turbines by Enhanced Monte Carlo

    DEFF Research Database (Denmark)

    Sichani, Mahdi Teimouri; Nielsen, Søren R.K.; Naess, Arvid

    2012-01-01

This paper discusses the estimation of the failure probability of wind turbines required by codes of practice for designing them. Standard Monte Carlo (SMC) simulation may conceptually be used for this purpose as an alternative to the popular Peaks-Over-Threshold (POT) method. However, estimation of very low failure probabilities with SMC simulations leads to unacceptably high computational costs. In this study, an Enhanced Monte Carlo (EMC) method is proposed that overcomes this obstacle. The method has advantages over both POT and SMC in terms of its low computational cost and accuracy... is controlled by the pitch controller. This provides a fair framework for comparison of the behavior and failure event of the wind turbine with emphasis on the effect of the pitch controller. The Enhanced Monte Carlo method is then applied to the model and the failure probabilities of the model are estimated...

  3. Failure probability of PWR reactor coolant loop piping

    International Nuclear Information System (INIS)

    Lo, T.; Woo, H.H.; Holman, G.S.; Chou, C.K.

    1984-02-01

    This paper describes the results of assessments performed on the PWR coolant loop piping of Westinghouse and Combustion Engineering plants. For direct double-ended guillotine break (DEGB), consideration was given to crack existence probability, initial crack size distribution, hydrostatic proof test, preservice inspection, leak detection probability, crack growth characteristics, and failure criteria based on the net section stress failure and tearing modulus stability concept. For indirect DEGB, fragilities of major component supports were estimated. The system level fragility was then calculated based on the Boolean expression involving these fragilities. Indirect DEGB due to seismic effects was calculated by convolving the system level fragility and the seismic hazard curve. The results indicate that the probability of occurrence of both direct and indirect DEGB is extremely small, thus, postulation of DEGB in design should be eliminated and replaced by more realistic criteria

  4. Hydra-Ring: a computational framework to combine failure probabilities

    Science.gov (United States)

    Diermanse, Ferdinand; Roscoe, Kathryn; IJmker, Janneke; Mens, Marjolein; Bouwer, Laurens

    2013-04-01

This presentation discusses the development of a new computational framework for the safety assessment of flood defence systems: Hydra-Ring. Hydra-Ring computes the failure probability of a flood defence system, which is composed of a number of elements (e.g., dike segments, dune segments or hydraulic structures), taking all relevant uncertainties explicitly into account. This is a major step forward in comparison with the current Dutch practice, in which the safety assessment is done separately per individual flood defence section. The main advantage of the new approach is that it will result in a more balanced prioritization of required mitigating measures ('more value for money'). Failure of the flood defence system occurs if any element within the system fails. Hydra-Ring thus computes and combines failure probabilities of the following elements: - Failure mechanisms: a flood defence system can fail due to different failure mechanisms. - Time periods: failure probabilities are first computed for relatively small time scales (...). Besides the assessment of flood defence systems, Hydra-Ring can also be used to derive fragility curves, to assess the efficiency of flood mitigating measures, and to quantify the impact of climate change and land subsidence on flood risk. Hydra-Ring is being developed in the context of the Dutch situation. However, the computational concept is generic and the model is set up in such a way that it can be applied to other areas as well. The presentation will focus on the model concept and probabilistic computation techniques.
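
At its simplest, combining element failure probabilities into a system failure probability (the system fails if any element fails) looks as follows; the element list, the probabilities and the independence assumption are illustrative simplifications and not how Hydra-Ring itself combines mechanisms, sections and time periods:

```python
from math import prod

# hypothetical flood defence system: element -> annual failure probability
elements = {"dike segment A": 1e-3, "dike segment B": 5e-4,
            "dune segment":   2e-4, "sluice":         1e-3}

# series system: the system fails if any element fails (independence assumed here)
p_system = 1.0 - prod(1.0 - p for p in elements.values())
print(f"system failure probability per year: {p_system:.2e}")
```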

  5. A probability model for the failure of pressure containing parts

    International Nuclear Information System (INIS)

    Thomas, H.M.

    1978-01-01

    The model provides a method of estimating the order of magnitude of the leakage failure probability of pressure containing parts. It is a fatigue based model which makes use of the statistics available for both specimens and vessels. Some novel concepts are introduced but essentially the model simply quantifies the obvious i.e. that failure probability increases with increases in stress levels, number of cycles, volume of material and volume of weld metal. A further model based on fracture mechanics estimates the catastrophic fraction of leakage failures. (author)

  6. Probabilistic analysis of Millstone Unit 3 ultimate containment failure probability given high pressure: Chapter 14

    International Nuclear Information System (INIS)

    Bickel, J.H.

    1983-01-01

    The quantification of the containment event trees in the Millstone Unit 3 Probabilistic Safety Study utilizes a conditional probability of failure given high pressure which is based on a new approach. The generation of this conditional probability was based on a weakest link failure mode model which considered contributions from a number of overlapping failure modes. This overlap effect was due to a number of failure modes whose mean failure pressures were clustered within a 5 psi range and which had uncertainties due to variances in material strengths and analytical uncertainties which were between 9 and 15 psi. Based on a review of possible probability laws to describe the failure probability of individual structural failure modes, it was determined that a Weibull probability law most adequately described the randomness in the physical process of interest. The resultant conditional probability of failure is found to have a median failure pressure of 132.4 psia. The corresponding 5-95 percentile values are 112 psia and 146.7 psia respectively. The skewed nature of the conditional probability of failure vs. pressure results in a lower overall containment failure probability for an appreciable number of the severe accident sequences of interest, but also probabilities which are more rigorously traceable from first principles

  7. Estimation of failure probabilities of linear dynamic systems by ...

    Indian Academy of Sciences (India)

    An iterative method for estimating the failure probability for certain time-variant reliability problems has been developed. In the paper, the focus is on the displacement response of a linear oscillator driven by white noise. Failure is then assumed to occur when the displacement response exceeds a critical threshold.
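
A brute-force reference for such problems is a direct Monte Carlo estimate of the first-passage probability. The sketch below (an illustration with invented oscillator parameters, not the iterative method of the paper) integrates a linear oscillator driven by white noise with the Euler-Maruyama scheme and counts trajectories whose displacement exceeds a critical threshold:

```python
import numpy as np

rng = np.random.default_rng(2)

# SDOF linear oscillator  x'' + 2*zeta*w0*x' + w0^2*x = w(t),  w(t) white noise
w0, zeta, S = 2.0 * np.pi, 0.05, 1.0    # natural frequency [rad/s], damping ratio, noise intensity
T, dt = 10.0, 2e-3                      # duration [s] and time step
barrier = 0.5                           # critical displacement threshold
n_sim = 5000

x = np.zeros(n_sim)
v = np.zeros(n_sim)
crossed = np.zeros(n_sim, dtype=bool)

for _ in range(int(T / dt)):
    # Euler-Maruyama step; the white-noise increment has standard deviation sqrt(S*dt)
    dW = rng.normal(0.0, np.sqrt(S * dt), size=n_sim)
    a = -2.0 * zeta * w0 * v - w0**2 * x
    x = x + v * dt
    v = v + a * dt + dW
    crossed |= np.abs(x) > barrier      # record whether the barrier has been crossed

print(f"first-passage failure probability over {T} s: {crossed.mean():.3f}")
```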

  8. Quantification of a decision-making failure probability of the accident management using cognitive analysis model

    Energy Technology Data Exchange (ETDEWEB)

    Yoshida, Yoshitaka; Ohtani, Masanori [Institute of Nuclear Safety System, Inc., Mihama, Fukui (Japan); Fujita, Yushi [TECNOVA Corp., Tokyo (Japan)

    2002-09-01

In nuclear power plants, much knowledge is acquired through probabilistic safety assessment (PSA) of severe accidents, and accident management (AM) is prepared. It is necessary to evaluate the effectiveness of AM in PSA using the decision-making failure probability of the emergency organization, the operation failure probability of operators, the success criteria of AM and the reliability of AM equipment. However, there has been no suitable quantification method for PSA so far to obtain the decision-making failure probability, because the decision-making failure of an emergency organization involves knowledge-based errors. In this work, we developed a new method for quantification of the decision-making failure probability of an emergency organization that decides the AM strategy in a nuclear power plant during a severe accident, using a cognitive analysis model, and tried to apply it to a typical pressurized water reactor (PWR) plant. As a result: (1) The method allows general analysts, who do not necessarily possess professional human factors knowledge, to quantify the decision-making failure probability for PSA by choosing suitable values of a basic failure probability and an error factor. (2) In a trial evaluation based on severe accident analysis of a typical PWR plant, the decision-making failure probabilities of six AMs were in the range of 0.23 to 0.41 using the screening evaluation method and in the range of 0.10 to 0.19 using the detailed evaluation method; in a sensitivity analysis of the conservative assumptions, the failure probability decreased by about 50%. (3) Theoretically, the failure probability obtained with the screening evaluation method exceeds that obtained with the detailed evaluation method with 99% probability, and for the AMs in this study it did so in 100% of cases. From this result, it was shown that the screening evaluation method gives more conservative decision-making failure probabilities than the detailed evaluation method, and the screening evaluation method satisfied

  9. Quantification of a decision-making failure probability of the accident management using cognitive analysis model

    International Nuclear Information System (INIS)

    Yoshida, Yoshitaka; Ohtani, Masanori; Fujita, Yushi

    2002-01-01

In nuclear power plants, much knowledge is acquired through probabilistic safety assessment (PSA) of severe accidents, and accident management (AM) is prepared. It is necessary to evaluate the effectiveness of AM in PSA using the decision-making failure probability of the emergency organization, the operation failure probability of operators, the success criteria of AM and the reliability of AM equipment. However, there has been no suitable quantification method for PSA so far to obtain the decision-making failure probability, because the decision-making failure of an emergency organization involves knowledge-based errors. In this work, we developed a new method for quantification of the decision-making failure probability of an emergency organization that decides the AM strategy in a nuclear power plant during a severe accident, using a cognitive analysis model, and tried to apply it to a typical pressurized water reactor (PWR) plant. As a result: (1) The method allows general analysts, who do not necessarily possess professional human factors knowledge, to quantify the decision-making failure probability for PSA by choosing suitable values of a basic failure probability and an error factor. (2) In a trial evaluation based on severe accident analysis of a typical PWR plant, the decision-making failure probabilities of six AMs were in the range of 0.23 to 0.41 using the screening evaluation method and in the range of 0.10 to 0.19 using the detailed evaluation method; in a sensitivity analysis of the conservative assumptions, the failure probability decreased by about 50%. (3) Theoretically, the failure probability obtained with the screening evaluation method exceeds that obtained with the detailed evaluation method with 99% probability, and for the AMs in this study it did so in 100% of cases. From this result, it was shown that the screening evaluation method gives more conservative decision-making failure probabilities than the detailed evaluation method, and the screening evaluation method satisfied

  10. Estimation of probability of failure for damage-tolerant aerospace structures

    Science.gov (United States)

    Halbert, Keith

The majority of aircraft structures are designed to be damage-tolerant such that safe operation can continue in the presence of minor damage. It is necessary to schedule inspections so that minor damage can be found and repaired. It is generally not possible to perform structural inspections prior to every flight. The scheduling is traditionally accomplished through a deterministic set of methods referred to as Damage Tolerance Analysis (DTA). DTA has proven to produce safe aircraft but does not provide estimates of the probability of failure of future flights or the probability of repair of future inspections. Without these estimates, maintenance costs cannot be accurately predicted. Also, estimation of failure probabilities is now a regulatory requirement for some aircraft. The set of methods concerned with the probabilistic formulation of this problem are collectively referred to as Probabilistic Damage Tolerance Analysis (PDTA). The goal of PDTA is to control the failure probability while holding maintenance costs to a reasonable level. This work focuses specifically on PDTA for fatigue cracking of metallic aircraft structures. The growth of a crack (or cracks) must be modeled using all available data and engineering knowledge. The length of a crack can be assessed only indirectly through evidence such as non-destructive inspection results, failures or lack of failures, and the observed severity of usage of the structure. The current set of industry PDTA tools is lacking in several ways: they may in some cases yield poor estimates of failure probabilities, they cannot realistically represent the variety of possible failure and maintenance scenarios, and they do not allow for model updates which incorporate observed evidence. A PDTA modeling methodology must be flexible enough to estimate accurately the failure and repair probabilities under a variety of maintenance scenarios, and be capable of incorporating observed evidence as it becomes available. This

  11. Impact of proof test interval and coverage on probability of failure of safety instrumented function

    International Nuclear Information System (INIS)

    Jin, Jianghong; Pang, Lei; Hu, Bin; Wang, Xiaodong

    2016-01-01

Highlights: • Introduction of proof test coverage makes the calculation of the probability of failure for a SIF more accurate. • The probability of failure undetected by the proof test is independently defined as P_TIF and calculated. • P_TIF is quantified using reliability block diagrams and the simple formula for PFD_avg. • Improving proof test coverage and adopting a reasonable test period can reduce the probability of failure of a SIF. - Abstract: Imperfection of the proof test can result in failure of the safety function of a safety instrumented system (SIS) at any time in its life period. IEC 61508 and other references ignore or only superficially analyze the imperfection of proof tests. In order to further study the impact of the imperfection of proof tests on the probability of failure of a safety instrumented function (SIF), the necessity of proof tests and the influence of their imperfection on system performance were first analyzed theoretically. The probability of failure of the safety instrumented function resulting from the imperfection of the proof test was defined as the probability of test-independent failures (P_TIF), and P_TIF was calculated separately by introducing proof test coverage and adopting reliability block diagrams, with reference to the simplified calculation formula for the average probability of failure on demand (PFD_avg). Research results show that the shorter the proof test period and the higher the proof test coverage, the smaller the probability of failure of the safety instrumented function. The probability of failure of the safety instrumented function calculated by introducing proof test coverage will be more accurate.
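
As a simple numerical illustration of the effect described (not the paper's reliability-block-diagram derivation), a commonly used single-channel approximation splits the dangerous undetected failure rate into a part revealed by the proof test and a part that stays hidden until overhaul; the parameter values are invented, and the P_TIF line is only a plausible analogue of the paper's definition:

```python
# single-channel (1oo1) safety function, simple averaged approximation
lambda_du = 2.0e-6    # dangerous undetected failure rate [1/h] (assumed)
ptc       = 0.9       # proof test coverage: fraction of DU failures the proof test reveals
t_proof   = 8760.0    # proof test interval [h] (1 year)
t_life    = 87600.0   # overhaul / replacement interval [h] (10 years)

# failures covered by the proof test stay hidden for t_proof/2 on average,
# the uncovered remainder stays hidden for t_life/2 on average
pfd_avg = lambda_du * (ptc * t_proof / 2.0 + (1.0 - ptc) * t_life / 2.0)
p_tif   = lambda_du * (1.0 - ptc) * t_life / 2.0   # contribution of test-independent failures

print(f"PFD_avg ~ {pfd_avg:.2e}   (of which P_TIF ~ {p_tif:.2e})")
```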

  12. Unbiased multi-fidelity estimate of failure probability of a free plane jet

    Science.gov (United States)

    Marques, Alexandre; Kramer, Boris; Willcox, Karen; Peherstorfer, Benjamin

    2017-11-01

Estimating failure probability related to fluid flows is a challenge because it requires a large number of evaluations of expensive models. We address this challenge by leveraging multiple low fidelity models of the flow dynamics to create an optimal unbiased estimator. In particular, we investigate the effects of uncertain inlet conditions on the width of a free plane jet. We classify a condition as failure when the corresponding jet width is below a small threshold, such that failure is a rare event (failure probability is smaller than 0.001). We estimate failure probability by combining the frameworks of multi-fidelity importance sampling and optimal fusion of estimators. Multi-fidelity importance sampling uses a low fidelity model to explore the parameter space and create a biasing distribution. An unbiased estimate is then computed with a relatively small number of evaluations of the high fidelity model. In the presence of multiple low fidelity models, this framework offers multiple competing estimators. Optimal fusion combines all competing estimators into a single estimator with minimal variance. We show that this combined framework can significantly reduce the cost of estimating failure probabilities, and thus can have a large impact in fluid flow applications. This work was funded by DARPA.

  13. A new reliability measure based on specified minimum distances before the locations of random variables in a finite interval

    International Nuclear Information System (INIS)

    Todinov, M.T.

    2004-01-01

    A new reliability measure is proposed and equations are derived which determine the probability of existence of a specified set of minimum gaps between random variables following a homogeneous Poisson process in a finite interval. Using the derived equations, a method is proposed for specifying the upper bound of the random variables' number density which guarantees that the probability of clustering of two or more random variables in a finite interval remains below a maximum acceptable level. It is demonstrated that even for moderate number densities the probability of clustering is substantial and should not be neglected in reliability calculations. In the important special case where the random variables are failure times, models have been proposed for determining the upper bound of the hazard rate which guarantees a set of minimum failure-free operating intervals before the random failures, with a specified probability. A model has also been proposed for determining the upper bound of the hazard rate which guarantees a minimum availability target. Using the models proposed, a new strategy, models and reliability tools have been developed for setting quantitative reliability requirements which consist of determining the intersection of the hazard rate envelopes (hazard rate upper bounds) which deliver a minimum failure-free operating period before random failures, a risk of premature failure below a maximum acceptable level and a minimum required availability. It is demonstrated that setting reliability requirements solely based on an availability target does not necessarily mean a low risk of premature failure. Even at a high availability level, the probability of premature failure can be substantial. For industries characterised by a high cost of failure, the reliability requirements should involve a hazard rate envelope limiting the risk of failure below a maximum acceptable level
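
For the constant hazard rate (homogeneous Poisson) case, the upper bound that guarantees a minimum failure-free operating period with a specified probability follows from a one-line inequality; the numbers below are illustrative:

```python
import math

s_mffop = 2000.0   # required minimum failure-free operating period [h]
p_min   = 0.95     # required probability of surviving that period

# for a constant hazard rate, P(no failure in s) = exp(-lambda * s) >= p_min
lambda_max = -math.log(p_min) / s_mffop
print(f"hazard rate upper bound: {lambda_max:.3e} per hour")
print(f"check: exp(-lambda_max * s) = {math.exp(-lambda_max * s_mffop):.3f}")
```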

  14. Sensitivity of probability-of-failure estimates with respect to probability of detection curve parameters

    Energy Technology Data Exchange (ETDEWEB)

    Garza, J. [University of Texas at San Antonio, Mechanical Engineering, 1 UTSA circle, EB 3.04.50, San Antonio, TX 78249 (United States); Millwater, H., E-mail: harry.millwater@utsa.edu [University of Texas at San Antonio, Mechanical Engineering, 1 UTSA circle, EB 3.04.50, San Antonio, TX 78249 (United States)

    2012-04-15

A methodology has been developed and demonstrated that can be used to compute the sensitivity of the probability-of-failure (POF) with respect to the parameters of inspection processes that are simulated using probability of detection (POD) curves. The formulation is such that the probabilistic sensitivities can be obtained at negligible cost using sampling methods by reusing the samples used to compute the POF. As a result, the methodology can be implemented for negligible cost in a post-processing non-intrusive manner thereby facilitating implementation with existing or commercial codes. The formulation is generic and not limited to any specific random variables, fracture mechanics formulation, or any specific POD curve as long as the POD is modeled parametrically. Sensitivity estimates for the cases of different POD curves at multiple inspections, and the same POD curves at multiple inspections have been derived. Several numerical examples are presented and show excellent agreement with finite difference estimates with significant computational savings. - Highlights: ► Sensitivity of the probability-of-failure with respect to the probability-of-detection curve. ► The sensitivities are computed with negligible cost using Monte Carlo sampling. ► The change in the POF due to a change in the POD curve parameters can be easily estimated.

  15. Sensitivity of probability-of-failure estimates with respect to probability of detection curve parameters

    International Nuclear Information System (INIS)

    Garza, J.; Millwater, H.

    2012-01-01

    A methodology has been developed and demonstrated that can be used to compute the sensitivity of the probability-of-failure (POF) with respect to the parameters of inspection processes that are simulated using probability of detection (POD) curves. The formulation is such that the probabilistic sensitivities can be obtained at negligible cost using sampling methods by reusing the samples used to compute the POF. As a result, the methodology can be implemented for negligible cost in a post-processing non-intrusive manner thereby facilitating implementation with existing or commercial codes. The formulation is generic and not limited to any specific random variables, fracture mechanics formulation, or any specific POD curve as long as the POD is modeled parametrically. Sensitivity estimates for the cases of different POD curves at multiple inspections, and the same POD curves at multiple inspections have been derived. Several numerical examples are presented and show excellent agreement with finite difference estimates with significant computational savings. - Highlights: ► Sensitivity of the probability-of-failure with respect to the probability-of-detection curve. ►The sensitivities are computed with negligible cost using Monte Carlo sampling. ► The change in the POF due to a change in the POD curve parameters can be easily estimated.

  16. Failure-probability driven dose painting

    International Nuclear Information System (INIS)

    Vogelius, Ivan R.; Håkansson, Katrin; Due, Anne K.; Aznar, Marianne C.; Kristensen, Claus A.; Rasmussen, Jacob; Specht, Lena; Berthelsen, Anne K.; Bentzen, Søren M.

    2013-01-01

Purpose: To demonstrate a data-driven dose-painting strategy based on the spatial distribution of recurrences in previously treated patients. The result is a quantitative way to define a dose prescription function, optimizing the predicted local control at constant treatment intensity. A dose planning study using the optimized dose prescription in 20 patients is performed. Methods: For patients treated at our center, five tumor subvolumes, from the center of the tumor (the PET-positive volume) outward, are delineated. The spatial distribution of 48 failures in patients with complete clinical response after (chemo)radiation is used to derive a model for tumor control probability (TCP). The total TCP is fixed to the clinically observed 70% actuarial TCP at five years. Additionally, the authors match the distribution of failures between the five subvolumes to the observed distribution. The steepness of the dose–response is extracted from the literature and the authors assume 30% and 20% risk of subclinical involvement in the elective volumes. The result is a five-compartment dose response model matching the observed distribution of failures. The model is used to optimize the distribution of dose in individual patients, while keeping the treatment intensity constant and the maximum prescribed dose below 85 Gy. Results: The vast majority of failures occur centrally despite the small volumes of the central regions. Thus, optimizing the dose prescription yields higher doses to the central target volumes and lower doses to the elective volumes. The dose planning study shows that the modified prescription is clinically feasible. The optimized TCP is 89% (range: 82%–91%) as compared to the observed TCP of 70%. Conclusions: The observed distribution of locoregional failures was used to derive an objective, data-driven dose prescription function. The optimized dose is predicted to result in a substantial increase in local control without increasing the predicted risk of toxicity

  17. Personality Changes as a Function of Minimum Competency Test Success or Failure.

    Science.gov (United States)

    Richman, Charles L.; And Others

    1987-01-01

    The psychological effects of success and failure on the North Carolina Minimum Competency Test (MCT) were examined. Subjects were high school students, who were pre- and post-tested using the Rosenberg Self Esteem Scale and the High School Personality Questionnaire. Self-esteem decreased following knowledge of MCT failure. (LMO)

  18. Calculating failure probabilities for TRISO-coated fuel particles using an integral formulation

    International Nuclear Information System (INIS)

    Miller, Gregory K.; Maki, John T.; Knudson, Darrell L.; Petti, David A.

    2010-01-01

    The fundamental design for a gas-cooled reactor relies on the safe behavior of the coated particle fuel. The coating layers surrounding the fuel kernels in these spherical particles, termed the TRISO coating, act as a pressure vessel that retains fission products. The quality of the fuel is reflected in the number of particle failures that occur during reactor operation, where failed particles become a source for fission products that can then diffuse through the fuel element. The failure probability for any batch of particles, which has traditionally been calculated using the Monte Carlo method, depends on statistical variations in design parameters and on variations in the strengths of coating layers among particles in the batch. An alternative approach to calculating failure probabilities is developed herein that uses direct numerical integration of a failure probability integral. Because this is a multiple integral where the statistically varying parameters become integration variables, a fast numerical integration approach is also developed. In sample cases analyzed involving multiple failure mechanisms, results from the integration methods agree closely with Monte Carlo results. Additionally, the fast integration approach, particularly, is shown to significantly improve efficiency of failure probability calculations. These integration methods have been implemented in the PARFUME fuel performance code along with the Monte Carlo method, where each serves to verify accuracy of the others.
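
The contrast between the two approaches can be seen on a stripped-down stress-strength problem (purely illustrative; this is not the PARFUME particle model): the failure probability is an integral of the strength CDF weighted by the stress density, which can be evaluated by quadrature and cross-checked against Monte Carlo:

```python
import numpy as np
from scipy import integrate, stats

rng = np.random.default_rng(3)

# illustrative stress-strength problem: failure occurs when stress > strength
stress   = stats.norm(loc=180.0, scale=20.0)        # e.g. internal-pressure stress [MPa]
strength = stats.weibull_min(c=8.0, scale=300.0)    # e.g. coating-layer strength [MPa]

# direct numerical integration:  P_f = integral of f_stress(s) * P(strength < s) ds
integrand = lambda s: stress.pdf(s) * strength.cdf(s)
p_f_int, _ = integrate.quad(integrand, stress.ppf(1e-10), stress.ppf(1 - 1e-10))

# Monte Carlo cross-check
n = 2_000_000
p_f_mc = np.mean(strength.rvs(n, random_state=rng) < stress.rvs(n, random_state=rng))

print(f"failure probability by integration : {p_f_int:.3e}")
print(f"failure probability by Monte Carlo : {p_f_mc:.3e}")
```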

  19. Input-profile-based software failure probability quantification for safety signal generation systems

    International Nuclear Information System (INIS)

    Kang, Hyun Gook; Lim, Ho Gon; Lee, Ho Jung; Kim, Man Cheol; Jang, Seung Cheol

    2009-01-01

The approaches for software failure probability estimation are mainly based on the results of testing. Test cases represent the inputs which are encountered in actual use. The test inputs for a safety-critical application such as the reactor protection system (RPS) of a nuclear power plant are the inputs which cause the activation of a protective action such as a reactor trip. A digital system treats inputs from instrumentation sensors as discrete digital values by using an analog-to-digital converter. The input profile must be determined in consideration of these characteristics for effective software failure probability quantification. Another important characteristic of software testing is that we do not have to repeat the test for the same input value, since the software response is deterministic for each specific digital input. With these considerations, we propose an effective software testing method for quantifying the failure probability. As an example application, the input profile of the digital RPS is developed based on typical plant data. The proposed method is expected to provide a simple but realistic means to quantify the software failure probability based on the input profile and system dynamics.

  20. Evaluations of Structural Failure Probabilities and Candidate Inservice Inspection Programs

    Energy Technology Data Exchange (ETDEWEB)

    Khaleel, Mohammad A.; Simonen, Fredric A.

    2009-05-01

    The work described in this report applies probabilistic structural mechanics models to predict the reliability of nuclear pressure boundary components. These same models are then applied to evaluate the effectiveness of alternative programs for inservice inspection to reduce these failure probabilities. Results of the calculations support the development and implementation of risk-informed inservice inspection of piping and vessels. Studies have specifically addressed the potential benefits of ultrasonic inspections to reduce failure probabilities associated with fatigue crack growth and stress-corrosion cracking. Parametric calculations were performed with the computer code pc-PRAISE to generate an extensive set of plots to cover a wide range of pipe wall thicknesses, cyclic operating stresses, and inspection strategies. The studies have also addressed critical inputs to fracture mechanics calculations such as the parameters that characterize the number and sizes of fabrication flaws in piping welds. Other calculations quantify the uncertainties associated with the inputs to the calculations, the uncertainties in the fracture mechanics models, and the uncertainties in the resulting calculated failure probabilities. A final set of calculations addresses the effects of flaw sizing errors on the effectiveness of inservice inspection programs.

  1. Automatic Monitoring System Design and Failure Probability Analysis for River Dikes on Steep Channel

    Science.gov (United States)

    Chang, Yin-Lung; Lin, Yi-Jun; Tung, Yeou-Koung

    2017-04-01

    The purposes of this study include: (1) designing an automatic monitoring system for river dikes; and (2) developing a framework which enables the determination of dike failure probabilities for various failure modes during a rainstorm. The historical dike failure data collected in this study indicate that most dikes in Taiwan collapsed under the 20-year return period discharge, which means the probability of dike failure is much higher than that of overtopping. We installed the dike monitoring system on the Chiu-She Dike, which is located on the middle reach of the Dajia River, Taiwan. The system includes: (1) vertically distributed pore water pressure sensors in front of and behind the dike; (2) Time Domain Reflectometry (TDR) to measure the displacement of the dike; (3) a wireless floating device to measure the scouring depth at the toe of the dike; and (4) a water level gauge. The monitoring system recorded the variation of pore pressure inside the Chiu-She Dike and the scouring depth during Typhoon Megi. The recorded data showed that the highest groundwater level inside the dike occurred 15 hours after the peak discharge. We developed a framework which accounts for the uncertainties from return period discharge, Manning's n, scouring depth, soil cohesion, and friction angle and enables the determination of dike failure probabilities for various failure modes such as overtopping, surface erosion, mass failure, toe sliding and overturning. The framework was applied to the Chiu-She, Feng-Chou, and Ke-Chuang Dikes on the Dajia River. The results indicate that toe sliding or overturning has a higher probability than the other failure modes. Furthermore, the overall failure probability (integrating the different failure modes) reaches 50% under the 10-year return period flood, which agrees with the historical failure data for the study reaches.
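    A minimal sketch of the final combination step, under the simplifying assumption that the individual failure modes are statistically independent (the uncertainty propagation over discharge, Manning's n, scouring depth and soil parameters described above is omitted, and the per-mode values are invented):

    # Invented per-mode failure probabilities for one dike and one flood event.
    p_modes = {
        "overtopping":     0.02,
        "surface erosion": 0.10,
        "mass failure":    0.08,
        "toe sliding":     0.30,
        "overturning":     0.25,
    }

    # Assuming the modes are independent, the dike survives only if every mode is
    # avoided, so the overall failure probability is one minus the product of the
    # per-mode survival probabilities.
    p_survive = 1.0
    for p in p_modes.values():
        p_survive *= (1.0 - p)
    print(f"Overall failure probability: {1.0 - p_survive:.2f}")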

  2. Statistical analysis on failure-to-open/close probability of motor-operated valve in sodium system

    International Nuclear Information System (INIS)

    Kurisaka, Kenichi

    1998-08-01

    The objective of this work is to develop basic data for examination of the efficiency of preventive maintenance and actuation tests from the standpoint of failure probability. This work consists of a statistical trend analysis of the valve failure probability in a failure-to-open/close mode as a function of time since installation and time since the last open/close action, based on field data of operating and failure experience. In this work, terms both dependent on and independent of time were considered in the failure probability. The linear aging model was modified and applied to the first term. In this model there are two terms, with failure rates proportional to time since installation and to time since the last open/close demand. Because of their sufficient statistical population, motor-operated valves (MOVs) in sodium systems were selected for analysis from the CORDS database, which contains operating data and failure data of components in fast reactors and sodium test facilities. According to these data, the functional parameters were statistically estimated to quantify the valve failure probability in a failure-to-open/close mode, with consideration of uncertainty. (J.P.N.)

  3. Reliability of structures by using probability and fatigue theories

    International Nuclear Information System (INIS)

    Lee, Ouk Sub; Kim, Dong Hyeok; Park, Yeon Chang

    2008-01-01

    Methodologies to calculate failure probability and to estimate the reliability of fatigue loaded structures are developed. The applicability of the methodologies is evaluated with the help of the fatigue crack growth models suggested by Paris and Walker. Probability theories such as the FORM (first order reliability method), the SORM (second order reliability method) and the MCS (Monte Carlo simulation) are utilized. It is found that the failure probability decreases with an increase of the design fatigue life and the applied minimum stress, and with a decrease of the initial edge crack size, the applied maximum stress and the slope of the Paris equation. Furthermore, according to the sensitivity analysis of the random variables, the slope of the Paris equation affects the failure probability most dominantly among the random variables in the Paris and Walker models.
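    A hedged Monte Carlo sketch of this type of calculation, using a constant-geometry-factor Paris law and invented parameter values rather than the FORM/SORM machinery of the paper: the closed-form number of cycles to grow an edge crack from a random initial size to a critical size is compared with a design life.

    import numpy as np

    rng = np.random.default_rng(1)
    n = 200_000

    # Invented random variables (units: m and MPa).
    a0 = rng.lognormal(mean=np.log(1.0e-3), sigma=0.4, size=n)   # initial edge crack size
    dsig = rng.normal(120.0, 20.0, size=n)                       # applied stress range
    C, m, Y = 1.0e-11, 3.0, 1.12                                 # Paris constants and geometry factor
    a_c = 0.02                                                   # critical crack size
    design_life = 1.0e5                                          # design life in cycles

    # Closed-form Paris-law life for constant Y and m != 2:
    # N_f = (a_c^(1-m/2) - a0^(1-m/2)) / (C * (Y*dsig*sqrt(pi))^m * (1 - m/2)).
    expo = 1.0 - m / 2.0
    N_f = (a_c**expo - a0**expo) / (C * (Y * dsig * np.sqrt(np.pi))**m * expo)

    pf = np.mean(N_f < design_life)
    print(f"Estimated failure probability: {pf:.3e}")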

  4. Estimation of Extreme Response and Failure Probability of Wind Turbines under Normal Operation using Probability Density Evolution Method

    DEFF Research Database (Denmark)

    Sichani, Mahdi Teimouri; Nielsen, Søren R.K.; Liu, W. F.

    2013-01-01

    Estimation of extreme response and failure probability of structures subjected to ultimate design loads is essential for structural design of wind turbines according to the new standard IEC61400-1. This task is focused on in the present paper by virtue of the probability density evolution method (PDEM), which underlies the schemes of random vibration analysis and structural reliability assessment. The short-term rare failure probability of 5-megawatt wind turbines, for illustrative purposes, in case of given mean wind speeds and turbulence levels is investigated through the scheme of extreme value distribution instead of any other approximate schemes of fitted distribution currently used in statistical extrapolation techniques. Besides, comparative studies against the classical fitted distributions and the standard Monte Carlo techniques are carried out. Numerical results indicate that PDEM exhibits...

  5. Variation of Time Domain Failure Probabilities of Jack-up with Wave Return Periods

    Science.gov (United States)

    Idris, Ahmad; Harahap, Indra S. H.; Ali, Montassir Osman Ahmed

    2018-04-01

    This study evaluated failure probabilities of jack-up units within the framework of time-dependent reliability analysis, using uncertainty from different sea states representing different return periods of the design wave. The surface elevation for each sea state was represented by the Karhunen-Loeve expansion method using the eigenfunctions of prolate spheroidal wave functions in order to obtain the wave load. The stochastic wave load was propagated through a simplified jack-up model developed in commercial software to obtain the structural response due to the wave loading. Analysis of the stochastic response to determine the failure probability for excessive deck displacement within the framework of time-dependent reliability analysis was performed by developing Matlab codes on a personal computer. Results from the study indicated that the failure probability increases with increasing severity of the sea state, which represents a longer return period. Although the results obtained agree with those of a study of a similar jack-up model using a time-independent method at higher values of the maximum allowable deck displacement, they are in contrast at lower values of the criterion, where that study reported that the failure probability decreases with increasing severity of the sea state.

  6. Uncertainties and quantification of common cause failure rates and probabilities for system analyses

    International Nuclear Information System (INIS)

    Vaurio, Jussi K.

    2005-01-01

    Simultaneous failures of multiple components due to common causes at random times are modelled by constant multiple-failure rates. A procedure is described for quantification of common cause failure (CCF) basic event probabilities for system models using plant-specific and multiple-plant failure-event data. Methodology is presented for estimating CCF-rates from event data contaminated with assessment uncertainties. Generalised impact vectors determine the moments for the rates of individual systems or plants. These moments determine the effective numbers of events and observation times to be input to a Bayesian formalism to obtain plant-specific posterior CCF-rates. The rates are used to determine plant-specific common cause event probabilities for the basic events of explicit fault tree models depending on test intervals, test schedules and repair policies. Three methods are presented to determine these probabilities such that the correct time-average system unavailability can be obtained with single fault tree quantification. Recommended numerical values are given and examples illustrate different aspects of the methodology

  7. Estimation of functional failure probability of passive systems based on adaptive importance sampling method

    International Nuclear Information System (INIS)

    Wang Baosheng; Wang Dongqing; Zhang Jianmin; Jiang Jing

    2012-01-01

    In order to estimate the functional failure probability of passive systems, an innovative adaptive importance sampling methodology is presented. In the proposed methodology, information on the variables is extracted with some pre-sampling of points in the failure region. An importance sampling density is then constructed from the sample distribution in the failure region. Taking the AP1000 passive residual heat removal system as an example, the uncertainties related to the model of a passive system and the numerical values of its input parameters are considered in this paper. The probability of functional failure is then estimated with the combination of the response surface method and the adaptive importance sampling method. The numerical results demonstrate the high computational efficiency and excellent accuracy of the methodology compared with traditional probability analysis methods. (authors)
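    A minimal sketch of the importance sampling step by itself, with the passive-system model and response surface replaced by a toy limit state in standard normal space (the sampling density location is simply assumed here rather than adapted from pre-samples):

    import numpy as np

    rng = np.random.default_rng(2)

    # Toy limit state in standard normal space: failure when g(x) = 5 - (x1 + x2) < 0.
    def g(x):
        return 5.0 - x.sum(axis=1)

    def std_normal_pdf(x):
        return np.exp(-0.5 * np.sum(x**2, axis=1)) / (2.0 * np.pi)

    n = 20_000

    # Importance sampling density: a unit-variance normal centred near the failure
    # region, here at (2.5, 2.5), close to the most likely failure point of this g.
    mu_is = np.array([2.5, 2.5])
    x_is = rng.normal(loc=mu_is, scale=1.0, size=(n, 2))
    is_pdf = np.exp(-0.5 * np.sum((x_is - mu_is)**2, axis=1)) / (2.0 * np.pi)

    # Reweight the failure indicator by the likelihood ratio f(x)/q(x).
    weights = std_normal_pdf(x_is) / is_pdf
    pf_is = np.mean((g(x_is) < 0.0) * weights)

    # Crude Monte Carlo with the same budget, for comparison.
    x_mc = rng.standard_normal((n, 2))
    pf_mc = np.mean(g(x_mc) < 0.0)

    print(f"Importance sampling estimate: {pf_is:.3e}")
    print(f"Crude Monte Carlo estimate:   {pf_mc:.3e}")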

  8. Determination of the failure probability in the weld region of ap-600 vessel for transient condition

    International Nuclear Information System (INIS)

    Wahyono, I.P.

    1997-01-01

    The failure probability in the weld region of the AP-600 vessel was determined for a transient condition scenario. The transient considered is an increase of heat removal from the primary cooling system due to the sudden opening of safety valves or steam relief valves on the secondary cooling system or the steam generator. Temperature and pressure in the vessel were taken as the basis of the deterministic calculation of the stress intensity factor. The film coefficient of convective heat transfer is calculated as a function of transient time and water parameters. Pressure, material temperature, flaw depth and transient time are the variables for the stress intensity factor. The failure probability was assessed using the above information together with the flaw and probability distributions of Octavia II and Marshall. The failure probability is calculated by probabilistic fracture mechanics simulation applied to the weld region. Failure of the vessel is assumed to be failure of the weld material containing one crack whose applied stress intensity factor exceeds the critical stress intensity factor. The VISA II code (Vessel Integrity Simulation Analysis II) was used for the deterministic calculation and simulation. The failure probability of the material is 1.E-5 for the Octavia II distribution and 4.E-6 for the Marshall distribution for each transient event postulated. The failure occurred at 1.7 minutes into the transient, at a pressure of 12.53 ksi.

  9. Reactor materials program process water component failure probability

    International Nuclear Information System (INIS)

    Daugherty, W. L.

    1988-01-01

    The maximum rate loss of coolant accident for the Savannah River Production Reactors is presently specified as the abrupt double-ended guillotine break (DEGB) of a large process water pipe. This accident is not considered credible in light of the low applied stresses and the inherent ductility of the piping materials. The Reactor Materials Program was initiated to provide the technical basis for an alternate, credible maximum rate LOCA. The major thrust of this program is to develop an alternate worst case accident scenario by deterministic means. In addition, the probability of a DEGB is also being determined, to show that in addition to being mechanistically incredible, it is also highly improbable. The probability of a DEGB of the process water piping is evaluated in two parts: failure by direct means, and indirectly-induced failure. These two areas have been discussed in other reports. In addition, the frequency of a large break (equivalent to a DEGB) in other process water system components is assessed. This report reviews the large break frequency for each component as well as the overall large break frequency for the reactor system.

  10. Failure Probability Calculation Method Using Kriging Metamodel-based Importance Sampling Method

    Energy Technology Data Exchange (ETDEWEB)

    Lee, Seunggyu [Korea Aerospace Research Institue, Daejeon (Korea, Republic of); Kim, Jae Hoon [Chungnam Nat’l Univ., Daejeon (Korea, Republic of)

    2017-05-15

    The kernel density was determined based on sampling points obtained in a Markov chain simulation and was taken as the importance sampling function. A Kriging metamodel was constructed in more detail in the vicinity of the limit state. The failure probability was calculated based on importance sampling, which was performed for the Kriging metamodel. A pre-existing method was modified to obtain more sampling points for the kernel density in the vicinity of the limit state. A stable numerical method was proposed to find a parameter of the kernel density. To assess the completeness of the Kriging metamodel, the possibility of changes in the calculated failure probability due to the uncertainty of the Kriging metamodel was calculated.

  11. Calculation of parameter failure probability of thermodynamic system by response surface and importance sampling method

    International Nuclear Information System (INIS)

    Shang Yanlong; Cai Qi; Chen Lisheng; Zhang Yangwei

    2012-01-01

    In this paper, the combined method of response surface and importance sampling was applied to the calculation of the parameter failure probability of a thermodynamic system. A mathematical model was presented for the parameter failure of the physical process in the thermodynamic system, from which the combined algorithm of response surface and importance sampling was established; the performance degradation model of the components and the simulation process of parameter failure in the physical process of the thermodynamic system were also presented. The parameter failure probability of the purification water system in a nuclear reactor was obtained by the combined method. The results show that the combined method is an effective method for the calculation of the parameter failure probability of a thermodynamic system with high dimensionality and non-linear characteristics, achieving satisfactory precision with less computing time than the direct sampling method while avoiding the drawbacks of the response surface method. (authors)

  12. Failure probability analyses for PWSCC in Ni-based alloy welds

    International Nuclear Information System (INIS)

    Udagawa, Makoto; Katsuyama, Jinya; Onizawa, Kunio; Li, Yinsheng

    2015-01-01

    A number of cracks due to primary water stress corrosion cracking (PWSCC) in pressurized water reactors and Ni-based alloy stress corrosion cracking (NiSCC) in boiling water reactors have been detected around Ni-based alloy welds. The causes of crack initiation and growth due to stress corrosion cracking include weld residual stress, operating stress, the materials, and the environment. We have developed the analysis code PASCAL-NP for calculating the failure probability and assessment of the structural integrity of cracked components on the basis of probabilistic fracture mechanics (PFM) considering PWSCC and NiSCC. This PFM analysis code has functions for calculating the incubation time of PWSCC and NiSCC crack initiation, evaluation of crack growth behavior considering certain crack location and orientation patterns, and evaluation of failure behavior near Ni-based alloy welds due to PWSCC and NiSCC in a probabilistic manner. Herein, actual plants affected by PWSCC have been analyzed using PASCAL-NP. Failure probabilities calculated by PASCAL-NP are in reasonable agreement with the detection data. Furthermore, useful knowledge related to leakage due to PWSCC was obtained through parametric studies using this code

  13. Modeling Stress Strain Relationships and Predicting Failure Probabilities For Graphite Core Components

    Energy Technology Data Exchange (ETDEWEB)

    Duffy, Stephen [Cleveland State Univ., Cleveland, OH (United States)

    2013-09-09

    This project will implement inelastic constitutive models that will yield the requisite stress-strain information necessary for graphite component design. Accurate knowledge of stress states (both elastic and inelastic) is required to assess how close a nuclear core component is to failure. Strain states are needed to assess deformations in order to ascertain serviceability issues relating to failure, e.g., whether too much shrinkage has taken place for the core to function properly. Failure probabilities, as opposed to safety factors, are required in order to capture the variability in failure strength in tensile regimes. The current stress state is used to predict the probability of failure. Stochastic failure models will be developed that can accommodate possible material anisotropy. This work will also model material damage (i.e., degradation of mechanical properties) due to radiation exposure. The team will design tools for components fabricated from nuclear graphite. These tools must readily interact with finite element software--in particular, COMSOL, the software currently being utilized by the Idaho National Laboratory. For the elastic response of graphite, the team will adopt anisotropic stress-strain relationships available in COMSOL. Data from the literature will be utilized to characterize the appropriate elastic material constants.

  14. Modeling Stress Strain Relationships and Predicting Failure Probabilities For Graphite Core Components

    International Nuclear Information System (INIS)

    Duffy, Stephen

    2013-01-01

    This project will implement inelastic constitutive models that will yield the requisite stress-strain information necessary for graphite component design. Accurate knowledge of stress states (both elastic and inelastic) is required to assess how close a nuclear core component is to failure. Strain states are needed to assess deformations in order to ascertain serviceability issues relating to failure, e.g., whether too much shrinkage has taken place for the core to function properly. Failure probabilities, as opposed to safety factors, are required in order to capture the variability in failure strength in tensile regimes. The current stress state is used to predict the probability of failure. Stochastic failure models will be developed that can accommodate possible material anisotropy. This work will also model material damage (i.e., degradation of mechanical properties) due to radiation exposure. The team will design tools for components fabricated from nuclear graphite. These tools must readily interact with finite element software--in particular, COMSOL, the software currently being utilized by the Idaho National Laboratory. For the elastic response of graphite, the team will adopt anisotropic stress-strain relationships available in COMSOL. Data from the literature will be utilized to characterize the appropriate elastic material constants.

  15. Determination of bounds on failure probability in the presence of ...

    Indian Academy of Sciences (India)

    In particular, fuzzy set theory provides a more rational framework for ... indicating that the random variations in T and O2 do not affect the failure probability significantly. ... The upper bound for PF shown in figure 6 can be used in decision-making.

  16. Approximative determination of failure probabilities in probabilistic fracture mechanics

    International Nuclear Information System (INIS)

    Riesch-Oppermann, H.; Brueckner, A.

    1987-01-01

    The possibility of using FORM in probabilistic fracture mechanics (PFM) is investigated. After a short review of the method and a description of some specific problems occurring in PFM applications, results obtained with FORM for the failure probabilities in a typical PFM problem (fatigue crack growth) are compared with those determined by a Monte Carlo simulation. (orig./HP)
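    A compact sketch of the FORM idea on a toy limit state (not the fatigue crack growth problem of the report): the Hasofer-Lind reliability index beta is found with the standard HL-RF iteration for the most probable failure point in standard normal space, and the failure probability is approximated as Phi(-beta).

    import numpy as np
    from math import erf, sqrt

    def std_normal_cdf(x):
        return 0.5 * (1.0 + erf(x / sqrt(2.0)))

    # Toy limit state in standard normal space u = (u1, u2): failure when g(u) <= 0.
    def g(u):
        return 3.0 - u[0] - 0.5 * u[1] ** 2

    def grad_g(u):
        return np.array([-1.0, -u[1]])

    # HL-RF iteration for the design point (most probable failure point).
    u = np.zeros(2)
    for _ in range(50):
        grad = grad_g(u)
        u_new = grad * (grad @ u - g(u)) / (grad @ grad)
        if np.linalg.norm(u_new - u) < 1e-8:
            u = u_new
            break
        u = u_new

    beta = np.linalg.norm(u)           # Hasofer-Lind reliability index
    pf_form = std_normal_cdf(-beta)    # first-order failure probability estimate
    print(f"beta = {beta:.3f}, FORM failure probability ~ {pf_form:.3e}")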

  17. Cladding failure probability modeling for risk evaluations of fast reactors

    International Nuclear Information System (INIS)

    Mueller, C.J.; Kramer, J.M.

    1987-01-01

    This paper develops the methodology to incorporate cladding failure data and associated modeling into risk evaluations of liquid metal-cooled fast reactors (LMRs). Current US innovative designs for metal-fueled pool-type LMRs take advantage of inherent reactivity feedback mechanisms to limit reactor temperature increases in response to classic anticipated-transient-without-scram (ATWS) initiators. Final shutdown without reliance on engineered safety features can then be accomplished if sufficient time is available for operator intervention to terminate fission power production and/or provide auxiliary cooling prior to significant core disruption. Coherent cladding failure under the sustained elevated temperatures of ATWS events serves as one indicator of core disruption. In this paper we combine uncertainties in cladding failure data with uncertainties in calculations of ATWS cladding temperature conditions to calculate probabilities of cladding failure as a function of the time for accident recovery

  18. NESTEM-QRAS: A Tool for Estimating Probability of Failure

    Science.gov (United States)

    Patel, Bhogilal M.; Nagpal, Vinod K.; Lalli, Vincent A.; Pai, Shantaram; Rusick, Jeffrey J.

    2002-10-01

    An interface between two NASA GRC specialty codes, NESTEM and QRAS, has been developed. This interface enables users to estimate, in advance, the risk of failure of a component, a subsystem, and/or a system under given operating conditions. This capability would provide a needed input for estimating the success rate for any mission. The NESTEM code, under development for the last 15 years at NASA Glenn Research Center, has the capability of estimating the probability of failure of components under varying loading and environmental conditions. This code performs sensitivity analysis of all the input variables and provides their influence on the response variables in the form of cumulative distribution functions. QRAS, also developed by NASA, assesses the risk of failure of a system or a mission based on the quantitative information provided by NESTEM or other similar codes, and a user-provided fault tree and modes of failure. This paper briefly describes the capabilities of NESTEM, QRAS and the interface. The stepwise process the interface uses is also described using an example.

  19. NESTEM-QRAS: A Tool for Estimating Probability of Failure

    Science.gov (United States)

    Patel, Bhogilal M.; Nagpal, Vinod K.; Lalli, Vincent A.; Pai, Shantaram; Rusick, Jeffrey J.

    2002-01-01

    An interface between two NASA GRC specialty codes, NESTEM and QRAS, has been developed. This interface enables users to estimate, in advance, the risk of failure of a component, a subsystem, and/or a system under given operating conditions. This capability would provide a needed input for estimating the success rate for any mission. The NESTEM code, under development for the last 15 years at NASA Glenn Research Center, has the capability of estimating the probability of failure of components under varying loading and environmental conditions. This code performs sensitivity analysis of all the input variables and provides their influence on the response variables in the form of cumulative distribution functions. QRAS, also developed by NASA, assesses the risk of failure of a system or a mission based on the quantitative information provided by NESTEM or other similar codes, and a user-provided fault tree and modes of failure. This paper briefly describes the capabilities of NESTEM, QRAS and the interface. The stepwise process the interface uses is also described using an example.

  20. Reactor Materials Program probability of indirectly--induced failure of L and P reactor process water piping

    International Nuclear Information System (INIS)

    Daugherty, W.L.

    1988-01-01

    The design basis accident for the Savannah River Production Reactors is the abrupt double-ended guillotine break (DEGB) of a large process water pipe. This accident is not considered credible in light of the low applied stresses and the inherent ductility of the piping material. The Reactor Materials Program was initiated to provide the technical basis for an alternate credible design basis accident. One aspect of this work is to determine the probability of the DEGB, to show that in addition to being incredible, it is also highly improbable. The probability of a DEGB is broken into two parts: failure by direct means, and indirectly-induced failure. Failure of the piping by direct means can only be postulated to occur if an undetected crack grows to the point of instability, causing a large pipe break. While this accident is not as severe as a DEGB, it provides a conservative upper bound on the probability of a direct DEGB of the piping. The second part of this evaluation calculates the probability of piping failure by indirect causes. Indirect failure of the piping can be triggered by an earthquake which causes other reactor components or the reactor building to fall on the piping or pull it from its supports. Since indirectly-induced failure of the piping will not always produce consequences as severe as a DEGB, this gives a conservative estimate of the probability of an indirectly-induced DEGB. This second part, indirectly-induced pipe failure, is the subject of this report. Failure by seismic loads in the piping itself will be covered in a separate report on failure by direct causes. This report provides a detailed evaluation of L reactor. A walkdown of P reactor and an analysis of the P reactor building provide the basis for extending the L reactor results to P reactor.

  1. Probabilistic Design Analysis (PDA) Approach to Determine the Probability of Cross-System Failures for a Space Launch Vehicle

    Science.gov (United States)

    Shih, Ann T.; Lo, Yunnhon; Ward, Natalie C.

    2010-01-01

    Quantifying the probability of significant launch vehicle failure scenarios for a given design, while still in the design process, is critical to mission success and to the safety of the astronauts. Probabilistic risk assessment (PRA) is chosen from many system safety and reliability tools to verify the loss of mission (LOM) and loss of crew (LOC) requirements set by the NASA Program Office. To support the integrated vehicle PRA, probabilistic design analysis (PDA) models are developed by using vehicle design and operation data to better quantify failure probabilities and to better understand the characteristics of a failure and its outcome. This PDA approach uses a physics-based model to describe the system behavior and response for a given failure scenario. Each driving parameter in the model is treated as a random variable with a distribution function. Monte Carlo simulation is used to perform probabilistic calculations to statistically obtain the failure probability. Sensitivity analyses are performed to show how input parameters affect the predicted failure probability, providing insight for potential design improvements to mitigate the risk. The paper discusses the application of the PDA approach in determining the probability of failure for two scenarios from the NASA Ares I project

  2. A statistical analysis on failure-to open/close probability of pneumatic valve in sodium cooling systems

    International Nuclear Information System (INIS)

    Kurisaka, Kenichi

    1999-11-01

    The objective of this study is to develop fundamental data for examination of the efficiency of preventive maintenance and surveillance tests from the standpoint of failure probability. In this study, a pneumatic valve in sodium cooling systems was selected as a major standby component. A statistical analysis was made of the trend of the valve failure-to-open/close (FTOC) probability depending on the number of demands ('n'), the time since installation ('t') and the standby time since the last open/close action ('T'). The analysis is based on the field data of operating and failure experiences stored in the Component Reliability Database and Statistical Analysis System for LMFBR's (CORDS). In the analysis, the FTOC probability ('P') was expressed as follows: P = 1 - exp{-C - E·n - F/n - λ·T - a·T·(t - T/2) - A·T²/2}. The functional parameters 'C', 'E', 'F', 'λ', 'a' and 'A' were estimated with the maximum likelihood estimation method. As a result, the FTOC probability is adequately expressed by the failure probability derived from the failure rate under the assumption of the Poisson distribution only when the valve cycle (i.e. open-close-open cycle) exceeds about 100 days. When the valve cycle is shorter than about 100 days, the FTOC probability can be adequately estimated with the parameter model proposed in this study. The results obtained from this study may make it possible to derive an adequate frequency of surveillance testing for a given target of the FTOC probability. (author)
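    A small sketch that simply evaluates the FTOC model quoted above; the parameter values are purely illustrative, whereas the paper estimates C, E, F, λ, a and A from the CORDS field data by maximum likelihood.

    import math

    def ftoc_probability(n, t, T, C, E, F, lam, a, A):
        """Failure-to-open/close probability after n demands, time t since
        installation and standby time T since the last open/close action."""
        exponent = C + E * n + F / n + lam * T + a * T * (t - T / 2.0) + A * T**2 / 2.0
        return 1.0 - math.exp(-exponent)

    # Illustrative (not estimated) parameter values and operating history.
    params = dict(C=1.0e-4, E=1.0e-6, F=5.0e-4, lam=1.0e-5, a=1.0e-8, A=1.0e-7)
    for T in (10.0, 50.0, 100.0, 200.0):          # standby time (days) since last action
        p = ftoc_probability(n=200, t=2000.0, T=T, **params)
        print(f"T = {T:5.0f} d  ->  P(FTOC) = {p:.2e}")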

  3. Cladding failure probability modeling for risk evaluations of fast reactors

    International Nuclear Information System (INIS)

    Mueller, C.J.; Kramer, J.M.

    1987-01-01

    This paper develops the methodology to incorporate cladding failure data and associated modeling into risk evaluations of liquid metal-cooled fast reactors (LMRs). Current U.S. innovative designs for metal-fueled pool-type LMRs take advantage of inherent reactivity feedback mechanisms to limit reactor temperature increases in response to classic anticipated-transient-without-scram (ATWS) initiators. Final shutdown without reliance on engineered safety features can then be accomplished if sufficient time is available for operator intervention to terminate fission power production and/or provide auxiliary cooling prior to significant core disruption. Coherent cladding failure under the sustained elevated temperatures of ATWS events serves as one indicator of core disruption. In this paper we combine uncertainties in cladding failure data with uncertainties in calculations of ATWS cladding temperature conditions to calculate probabilities of cladding failure as a function of the time for accident recovery. (orig.)

  4. A method for the calculation of the cumulative failure probability distribution of complex repairable systems

    International Nuclear Information System (INIS)

    Caldarola, L.

    1976-01-01

    A method is proposed for the analytical evaluation of the cumulative failure probability distribution of complex repairable systems. The method is based on a set of integral equations, each one referring to a specific minimal cut set of the system. Each integral equation links the unavailability of a minimal cut set to its failure probability density distribution and to the probability that the minimal cut set is down at time t under the condition that it was down at time t' (t' ≤ t). The limitations for the applicability of the method are also discussed. It has been concluded that the method is applicable if the process describing the failure of a minimal cut set is a 'delayed semi-regenerative process'. (Auth.)

  5. Evaluation and comparison of estimation methods for failure rates and probabilities

    Energy Technology Data Exchange (ETDEWEB)

    Vaurio, Jussi K. [Fortum Power and Heat Oy, P.O. Box 23, 07901 Loviisa (Finland)]. E-mail: jussi.vaurio@fortum.com; Jaenkaelae, Kalle E. [Fortum Nuclear Services, P.O. Box 10, 00048 Fortum (Finland)

    2006-02-01

    An updated parametric robust empirical Bayes (PREB) estimation methodology is presented as an alternative to several two-stage Bayesian methods used to assimilate failure data from multiple units or plants. PREB is based on prior-moment matching and avoids multi-dimensional numerical integrations. The PREB method is presented for failure-truncated and time-truncated data. Erlangian and Poisson likelihoods with gamma prior are used for failure rate estimation, and Binomial data with beta prior are used for failure probability per demand estimation. Combined models and assessment uncertainties are accounted for. One objective is to compare several methods with numerical examples and show that PREB works as well if not better than the alternative more complex methods, especially in demanding problems of small samples, identical data and zero failures. False claims and misconceptions are straightened out, and practical applications in risk studies are presented.
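    The conjugate building blocks mentioned in the abstract can be illustrated as follows (values are invented, and this is not the full PREB prior-moment-matching procedure): a gamma prior updated with Poisson failure data for a rate, and a beta prior updated with binomial demand data for a failure probability.

    # Conjugate updates for the two data types mentioned above (illustrative values only).

    # Failure rate: Poisson likelihood with a Gamma(alpha, beta) prior.
    alpha0, beta0 = 0.5, 100.0           # prior shape and inverse-scale (hours)
    k, T = 2, 8760.0                      # observed failures and exposure time (h)
    alpha1, beta1 = alpha0 + k, beta0 + T
    print(f"Posterior mean failure rate: {alpha1 / beta1:.2e} per hour")

    # Failure probability per demand: binomial likelihood with a Beta(a, b) prior.
    a0, b0 = 0.5, 50.0                    # prior pseudo-counts
    f, n = 1, 400                         # observed failures and demands
    a1, b1 = a0 + f, b0 + n - f
    print(f"Posterior mean failure probability per demand: {a1 / (a1 + b1):.2e}")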

  6. Use of probabilistic methods for estimating failure probabilities and directing ISI-efforts

    Energy Technology Data Exchange (ETDEWEB)

    Nilsson, F; Brickstad, B [University of Uppsala (Sweden)

    1988-12-31

    Some general aspects of the role of Non Destructive Testing (NDT) efforts in the resulting probability of core damage are discussed. A simple model for the estimation of the pipe break probability due to IGSCC is discussed. It is partly based on analytical procedures, partly on service experience from the Swedish BWR program. Estimates of the break probabilities indicate that further studies are urgently needed. It is found that the uncertainties about the initial crack configuration are large contributors to the total uncertainty. Some effects of the inservice inspection are studied and it is found that the detection probabilities influence the failure probabilities. (authors).

  7. Long-Term Fatigue and Its Probability of Failure Applied to Dental Implants

    Directory of Open Access Journals (Sweden)

    María Prados-Privado

    2016-01-01

    It is well known that dental implants have a high success rate, but even so there are many factors that can cause dental implant failure. Fatigue is very sensitive to the many variables involved in this phenomenon. This paper takes a close look at fatigue analysis and explains a new method to study fatigue from a probabilistic point of view, based on a cumulative damage model and probabilistic finite elements, with the goal of obtaining the expected life and the probability of failure. Two different dental implants were analysed. The model simulated a load of 178 N applied at angles of 0°, 15°, and 20° and a force of 489 N at the same angles. The von Mises stress distribution was evaluated and, with the methodology proposed here, the statistics of the fatigue life and the cumulative probability function were obtained. This function allows each fatigue life (in cycles) to be related to its probability of failure. The cylindrical implant shows worse behaviour under the same loading than the conical implant analysed here. The methodology employed in the present study provides very accurate results because all possible uncertainties have been taken into account from the beginning.

  8. VISA-2, Reactor Vessel Failure Probability Under Thermal Shock

    International Nuclear Information System (INIS)

    Simonen, F.; Johnson, K.

    1992-01-01

    1 - Description of program or function: VISA2 (Vessel Integrity Simulation Analysis) was developed to estimate the failure probability of nuclear reactor pressure vessels under pressurized thermal shock conditions. The deterministic portion of the code performs heat transfer, stress, and fracture mechanics calculations for a vessel subjected to a user-specified temperature and pressure transient. The probabilistic analysis performs a Monte Carlo simulation to estimate the probability of vessel failure. Parameters such as initial crack size and position, copper and nickel content, fluence, and the fracture toughness values for crack initiation and arrest are treated as random variables. Linear elastic fracture mechanics methods are used to model crack initiation and growth. This includes cladding effects in the heat transfer, stress, and fracture mechanics calculations. The simulation procedure treats an entire vessel and recognizes that more than one flaw can exist in a given vessel. The flaw model allows random positioning of the flaw within the vessel wall thickness, and the user can specify either flaw length or length-to-depth aspect ratio for crack initiation and arrest predictions. The flaw size distribution can be adjusted on the basis of different inservice inspection techniques and inspection conditions. The toughness simulation model includes a menu of alternative equations for predicting the shift in the reference temperature of the nil-ductility transition. 2 - Method of solution: The solution method uses closed form equations for temperatures, stresses, and stress intensity factors. A polynomial fitting procedure approximates the specified pressure and temperature transient. Failure probabilities are calculated by a Monte Carlo simulation. 3 - Restrictions on the complexity of the problem: Maximum of 30 welds. VISA2 models only the belt-line (cylindrical) region of a reactor vessel. The stresses are a function of the radial (through-wall) coordinate only.

  9. Minimum Probability of Error-Based Equalization Algorithms for Fading Channels

    Directory of Open Access Journals (Sweden)

    Janos Levendovszky

    2007-06-01

    Novel channel equalizer algorithms are introduced for wireless communication systems to combat channel distortions resulting from multipath propagation. The novel algorithms are based on newly derived bounds on the probability of error (PE) and guarantee better performance than the traditional zero forcing (ZF) or minimum mean square error (MMSE) algorithms. The new equalization methods require channel state information, which is obtained by a fast adaptive channel identification algorithm. As a result, the combined convergence time needed for channel identification and PE minimization still remains smaller than the convergence time of traditional adaptive algorithms, yielding real-time equalization. The performance of the new algorithms is tested by extensive simulations on standard mobile channels.
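    For reference, a minimal sketch of the traditional MMSE linear equalizer that the abstract uses as a baseline; the channel taps, noise level, equalizer length and BPSK signalling are illustrative assumptions, and the minimum-PE algorithms of the paper are not reproduced here.

    import numpy as np

    rng = np.random.default_rng(3)

    h = np.array([1.0, 0.5, 0.2])   # illustrative multipath channel impulse response
    sigma2 = 0.05                    # noise variance
    L, delay = 11, 6                 # equalizer length and decision delay

    # Channel convolution matrix: the window of received samples
    # [r[n], r[n-1], ..., r[n-L+1]] equals H @ [s[n], s[n-1], ...] + noise.
    n_s = L + len(h) - 1
    H = np.zeros((L, n_s))
    for i in range(L):
        H[i, i:i + len(h)] = h

    # MMSE equalizer taps w = (H H^T + sigma2 I)^-1 H e_delay,
    # assuming unit-power, uncorrelated symbols.
    e = np.zeros(n_s)
    e[delay] = 1.0
    w = np.linalg.solve(H @ H.T + sigma2 * np.eye(L), H @ e)

    # Quick bit-error-rate check on random BPSK data.
    N = 20_000
    s = rng.choice([-1.0, 1.0], size=N)
    r = np.convolve(s, h, mode="full")[:N] + np.sqrt(sigma2) * rng.standard_normal(N)
    y = np.convolve(r, w, mode="full")
    s_hat = np.sign(y[delay:delay + N])
    print(f"MMSE equalizer bit error rate: {np.mean(s_hat != s):.4f}")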

  10. Evolution of thermal stress and failure probability during reduction and re-oxidation of solid oxide fuel cell

    Science.gov (United States)

    Wang, Yu; Jiang, Wenchun; Luo, Yun; Zhang, Yucai; Tu, Shan-Tung

    2017-12-01

    The reduction and re-oxidation of the anode have significant effects on the integrity of a solid oxide fuel cell (SOFC) sealed by glass-ceramic (GC). Mechanical failure is mainly controlled by the stress distribution. Therefore, a three-dimensional model of the SOFC is established in this paper to investigate the stress evolution during reduction and re-oxidation by the finite element method (FEM), and the failure probability is calculated using the Weibull method. The results demonstrate that the reduction of the anode can decrease the thermal stresses and reduce the failure probability, owing to the volumetric contraction and increasing porosity. The re-oxidation can result in a remarkable increase of the thermal stresses, and the failure probabilities of the anode, cathode, electrolyte and GC all increase to 1, which is mainly due to the large linear strain rather than the decreasing porosity. The cathode and electrolyte fail as soon as the linear strains reach about 0.03% and 0.07%, respectively. Therefore, the re-oxidation should be controlled to ensure the integrity, and a lower re-oxidation temperature can decrease the stress and failure probability.
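    A minimal sketch of a Weibull weakest-link evaluation of the kind referred to above; the modulus, characteristic strength, element stresses and volumes are invented, and the volume integral over the finite element stress field is reduced to a sum over a few elements.

    import numpy as np

    # Invented Weibull parameters for a brittle cell component.
    m = 7.0            # Weibull modulus
    sigma_0 = 80.0     # characteristic strength (MPa) for the reference volume V0
    V0 = 1.0           # reference volume (mm^3)

    # Invented element-wise maximum principal stresses (MPa) and element volumes (mm^3),
    # as they might be extracted from a finite element solution.
    stress = np.array([35.0, 48.0, 52.0, 60.0, 41.0])
    volume = np.array([0.8, 1.2, 0.5, 0.3, 1.0])

    # Two-parameter Weibull weakest-link failure probability,
    # Pf = 1 - exp(-sum_i (V_i/V0) * (sigma_i/sigma_0)^m), tensile stresses only.
    tensile = np.clip(stress, 0.0, None)
    risk = np.sum((volume / V0) * (tensile / sigma_0) ** m)
    print(f"Weibull failure probability: {1.0 - np.exp(-risk):.3e}")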

  11. An optimized Line Sampling method for the estimation of the failure probability of nuclear passive systems

    International Nuclear Information System (INIS)

    Zio, E.; Pedroni, N.

    2010-01-01

    The quantitative reliability assessment of a thermal-hydraulic (T-H) passive safety system of a nuclear power plant can be obtained by (i) Monte Carlo (MC) sampling the uncertainties of the system model and parameters, (ii) computing, for each sample, the system response by a mechanistic T-H code and (iii) comparing the system response with pre-established safety thresholds, which define the success or failure of the safety function. The computational effort involved can be prohibitive because of the large number of (typically long) T-H code simulations that must be performed (one for each sample) for the statistical estimation of the probability of success or failure. In this work, Line Sampling (LS) is adopted for efficient MC sampling. In the LS method, an 'important direction' pointing towards the failure domain of interest is determined and a number of conditional one-dimensional problems are solved along such a direction; this allows for a significant reduction of the variance of the failure probability estimator, with respect, for example, to standard random sampling. Two issues are still open with respect to LS: first, the method relies on the determination of the 'important direction', which requires additional runs of the T-H code; second, although the method has been shown to improve the computational efficiency by reducing the variance of the failure probability estimator, no evidence has been given yet that accurate and precise failure probability estimates can be obtained with a number of samples reduced to below a few hundred, which may be required in case of long-running models. The work presented in this paper addresses the first issue by (i) quantitatively comparing the efficiency of the methods proposed in the literature to determine the LS important direction; (ii) employing artificial neural network (ANN) regression models as fast-running surrogates of the original, long-running T-H code to reduce the computational cost associated with the
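    A minimal sketch of the Line Sampling estimator on a toy linear limit state in standard normal space; the T-H code, the ANN surrogate and the procedures for determining the important direction are outside the scope of this illustration, so the direction is simply assumed known.

    import numpy as np
    from math import erf, sqrt

    def std_normal_cdf(x):
        return 0.5 * (1.0 + erf(x / sqrt(2.0)))

    # Toy limit state in standard normal space: failure when g(u) = 3.5 - u1 - u2 < 0.
    def g(u):
        return 3.5 - u[0] - u[1]

    alpha = np.array([1.0, 1.0]) / np.sqrt(2.0)   # assumed important direction (unit vector)

    rng = np.random.default_rng(4)
    n_lines = 50
    p_partial = []
    for _ in range(n_lines):
        u = rng.standard_normal(2)
        u_perp = u - (u @ alpha) * alpha          # component orthogonal to the direction
        # Find the distance c to the limit surface along the line u_perp + c*alpha
        # by bisection (g decreases monotonically along alpha for this example).
        lo, hi = 0.0, 10.0
        for _ in range(60):
            mid = 0.5 * (lo + hi)
            if g(u_perp + mid * alpha) > 0.0:
                lo = mid
            else:
                hi = mid
        p_partial.append(std_normal_cdf(-0.5 * (lo + hi)))   # conditional failure probability

    exact = std_normal_cdf(-3.5 / np.sqrt(2.0))
    print(f"Line Sampling estimate: {np.mean(p_partial):.3e} (exact for this g: {exact:.3e})")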

  12. Failure Probability Estimation Using Asymptotic Sampling and Its Dependence upon the Selected Sampling Scheme

    Directory of Open Access Journals (Sweden)

    Martinásková Magdalena

    2017-12-01

    The article examines the use of Asymptotic Sampling (AS) for the estimation of failure probability. The AS algorithm requires samples of multidimensional Gaussian random vectors, which may be obtained by many alternative means that influence the performance of the AS method. Several reliability problems (test functions) have been selected in order to test AS with various sampling schemes: (i) Monte Carlo designs; (ii) LHS designs optimized using the Periodic Audze-Eglājs (PAE) criterion; (iii) designs prepared using Sobol' sequences. All results are compared with the exact failure probability value.

  13. Assessing changes in failure probability of dams in a changing climate

    Science.gov (United States)

    Mallakpour, I.; AghaKouchak, A.; Moftakhari, H.; Ragno, E.

    2017-12-01

    Dams are crucial infrastructures and provide resilience against hydrometeorological extremes (e.g., droughts and floods). In 2017, California experienced a series of flooding events terminating a 5-year drought and leading to incidents such as the structural failure of Oroville Dam's spillway. Because of the large socioeconomic repercussions of such incidents, it is of paramount importance to evaluate dam failure risks associated with projected shifts in the streamflow regime. This becomes even more important as the current procedures for design of hydraulic structures (e.g., dams, bridges, spillways) are based on the so-called stationarity assumption. Yet, changes in climate are anticipated to result in changes in the statistics of river flow (e.g., more extreme floods) and possibly to increase the failure probability of already aging dams. Here, we examine changes in discharge under two representative concentration pathways (RCPs): RCP4.5 and RCP8.5. In this study, we used routed daily streamflow data from ten global climate models (GCMs) in order to investigate possible climate-induced changes in streamflow in northern California. Our results show that while the average flow does not show a significant change, extreme floods are projected to increase in the future. Using extreme value theory, we estimate changes in the return periods of 50-year and 100-year floods in the current and future climates. Finally, we use the historical and future return periods to quantify changes in the failure probability of dams in a warming climate.
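    A much simplified illustration of the return-period arithmetic behind such an assessment: synthetic annual-maximum flows stand in for the routed GCM streamflow, and a Gumbel distribution fitted by the method of moments stands in for the full extreme value analysis.

    import numpy as np

    rng = np.random.default_rng(5)

    # Synthetic annual-maximum daily flows (m^3/s) standing in for one model/scenario.
    ann_max = rng.gumbel(loc=800.0, scale=250.0, size=60)

    # Method-of-moments fit of a Gumbel distribution to the annual maxima.
    euler_gamma = 0.5772156649
    scale = np.std(ann_max, ddof=1) * np.sqrt(6.0) / np.pi
    loc = np.mean(ann_max) - euler_gamma * scale

    def return_level(T_years):
        """Flow exceeded on average once every T_years under the fitted Gumbel."""
        p_non_exceed = 1.0 - 1.0 / T_years
        return loc - scale * np.log(-np.log(p_non_exceed))

    for T in (50, 100):
        print(f"{T:3d}-year flood estimate: {return_level(T):7.1f} m^3/s")

    # Annual exceedance probability of a fixed design flow under the same fit.
    design_flow = 2000.0
    p_exceed = 1.0 - np.exp(-np.exp(-(design_flow - loc) / scale))
    print(f"Annual exceedance probability of {design_flow:.0f} m^3/s: {p_exceed:.3f}")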

  14. The role of minimum supply and social vulnerability assessment for governing critical infrastructure failure: current gaps and future agenda

    Directory of Open Access Journals (Sweden)

    M. Garschagen

    2018-04-01

    Increased attention has lately been given to the resilience of critical infrastructure in the context of natural hazards and disasters. The major focus therein is on the sensitivity of critical infrastructure technologies and their management contingencies. However, strikingly little attention has been given to assessing and mitigating social vulnerabilities towards the failure of critical infrastructure and to the development, design and implementation of minimum supply standards in situations of major infrastructure failure. Addressing this gap and contributing to a more integrative perspective on critical infrastructure resilience is the objective of this paper. It asks which role social vulnerability assessments and minimum supply considerations can, should and do – or do not – play for the management and governance of critical infrastructure failure. In its first part, the paper provides a structured review on achievements and remaining gaps in the management of critical infrastructure and the understanding of social vulnerabilities towards disaster-related infrastructure failures. Special attention is given to the current state of minimum supply concepts with a regional focus on policies in Germany and the EU. In its second part, the paper then responds to the identified gaps by developing a heuristic model on the linkages of critical infrastructure management, social vulnerability and minimum supply. This framework helps to inform a vision of a future research agenda, which is presented in the paper's third part. Overall, the analysis suggests that the assessment of socially differentiated vulnerabilities towards critical infrastructure failure needs to be undertaken more stringently to inform the scientifically and politically difficult debate about minimum supply standards and the shared responsibilities for securing them.

  15. The role of minimum supply and social vulnerability assessment for governing critical infrastructure failure: current gaps and future agenda

    Science.gov (United States)

    Garschagen, Matthias; Sandholz, Simone

    2018-04-01

    Increased attention has lately been given to the resilience of critical infrastructure in the context of natural hazards and disasters. The major focus therein is on the sensitivity of critical infrastructure technologies and their management contingencies. However, strikingly little attention has been given to assessing and mitigating social vulnerabilities towards the failure of critical infrastructure and to the development, design and implementation of minimum supply standards in situations of major infrastructure failure. Addressing this gap and contributing to a more integrative perspective on critical infrastructure resilience is the objective of this paper. It asks which role social vulnerability assessments and minimum supply considerations can, should and do - or do not - play for the management and governance of critical infrastructure failure. In its first part, the paper provides a structured review on achievements and remaining gaps in the management of critical infrastructure and the understanding of social vulnerabilities towards disaster-related infrastructure failures. Special attention is given to the current state of minimum supply concepts with a regional focus on policies in Germany and the EU. In its second part, the paper then responds to the identified gaps by developing a heuristic model on the linkages of critical infrastructure management, social vulnerability and minimum supply. This framework helps to inform a vision of a future research agenda, which is presented in the paper's third part. Overall, the analysis suggests that the assessment of socially differentiated vulnerabilities towards critical infrastructure failure needs to be undertaken more stringently to inform the scientifically and politically difficult debate about minimum supply standards and the shared responsibilities for securing them.

  16. Estimation of submarine mass failure probability from a sequence of deposits with age dates

    Science.gov (United States)

    Geist, Eric L.; Chaytor, Jason D.; Parsons, Thomas E.; ten Brink, Uri S.

    2013-01-01

    The empirical probability of submarine mass failure is quantified from a sequence of dated mass-transport deposits. Several different techniques are described to estimate the parameters for a suite of candidate probability models. The techniques, previously developed for analyzing paleoseismic data, include maximum likelihood and Type II (Bayesian) maximum likelihood methods derived from renewal process theory and Monte Carlo methods. The estimated mean return time from these methods, unlike estimates from a simple arithmetic mean of the center age dates and standard likelihood methods, includes the effects of age-dating uncertainty and of open time intervals before the first and after the last event. The likelihood techniques are evaluated using Akaike’s Information Criterion (AIC) and Akaike’s Bayesian Information Criterion (ABIC) to select the optimal model. The techniques are applied to mass transport deposits recorded in two Integrated Ocean Drilling Program (IODP) drill sites located in the Ursa Basin, northern Gulf of Mexico. Dates of the deposits were constrained by regional bio- and magnetostratigraphy from a previous study. Results of the analysis indicate that submarine mass failures in this location occur primarily according to a Poisson process in which failures are independent and return times follow an exponential distribution. However, some of the model results suggest that submarine mass failures may occur quasiperiodically at one of the sites (U1324). The suite of techniques described in this study provides quantitative probability estimates of submarine mass failure occurrence, for any number of deposits and age uncertainty distributions.
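    As a stripped-down illustration of the Poisson end-member of this analysis (the age-dating uncertainty, open time intervals and Bayesian model selection of the paper are not reproduced, and the deposit ages are invented), a mean return time and the probability of at least one failure in a future window can be obtained as follows.

    import numpy as np

    # Hypothetical mass-transport deposit ages (ka before present), youngest first.
    ages_ka = np.array([12.0, 21.0, 27.5, 39.0, 50.0, 58.5])

    # Simple point estimate for a Poisson (exponential inter-event time) model:
    # mean return time = length of the dated record / number of inter-event intervals.
    record_length = ages_ka.max() - ages_ka.min()
    n_intervals = len(ages_ka) - 1
    mean_return = record_length / n_intervals
    rate = 1.0 / mean_return                      # events per ka

    # Probability of at least one mass failure in the next t ka under this model.
    t = 10.0
    p_at_least_one = 1.0 - np.exp(-rate * t)

    print(f"Mean return time: {mean_return:.1f} ka")
    print(f"P(at least one failure in next {t:.0f} ka): {p_at_least_one:.2f}")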

  17. Formulating informative, data-based priors for failure probability estimation in reliability analysis

    International Nuclear Information System (INIS)

    Guikema, Seth D.

    2007-01-01

    Priors play an important role in the use of Bayesian methods in risk analysis, and using all available information to formulate an informative prior can lead to more accurate posterior inferences. This paper examines the practical implications of using five different methods for formulating an informative prior for a failure probability based on past data. These methods are the method of moments, maximum likelihood (ML) estimation, maximum entropy estimation, starting from a non-informative 'pre-prior', and fitting a prior based on confidence/credible interval matching. The priors resulting from the use of these different methods are compared qualitatively, and the posteriors are compared quantitatively based on a number of different scenarios of observed data used to update the priors. The results show that the amount of information assumed in the prior makes a critical difference in the accuracy of the posterior inferences. For situations in which the data used to formulate the informative prior is an accurate reflection of the data that is later observed, the ML approach yields the minimum variance posterior. However, the maximum entropy approach is more robust to differences between the data used to formulate the prior and the observed data because it maximizes the uncertainty in the prior subject to the constraints imposed by the past data

  18. Bounds on survival probability given mean probability of failure per demand; and the paradoxical advantages of uncertainty

    International Nuclear Information System (INIS)

    Strigini, Lorenzo; Wright, David

    2014-01-01

    When deciding whether to accept into service a new safety-critical system, or choosing between alternative systems, uncertainty about the parameters that affect future failure probability may be a major problem. This uncertainty can be extreme if there is the possibility of unknown design errors (e.g. in software), or wide variation between nominally equivalent components. We study the effect of parameter uncertainty on future reliability (survival probability), for systems required to have low risk of even only one failure or accident over the long term (e.g. their whole operational lifetime) and characterised by a single reliability parameter (e.g. probability of failure per demand – pfd). A complete mathematical treatment requires stating a probability distribution for any parameter with uncertain value. This is hard, so calculations are often performed using point estimates, like the expected value. We investigate conditions under which such simplified descriptions yield reliability values that are sure to be pessimistic (or optimistic) bounds for a prediction based on the true distribution. Two important observations are (i) using the expected value of the reliability parameter as its true value guarantees a pessimistic estimate of reliability, a useful property in most safety-related decisions; (ii) with a given expected pfd, broader distributions (in a formally defined meaning of “broader”), that is, systems that are a priori “less predictable”, lower the risk of failures or accidents. Result (i) justifies the simplification of using a mean in reliability modelling; we discuss within which scope this justification applies, and explore related scenarios, e.g. how things improve if we can test the system before operation. Result (ii) not only offers more flexible ways of bounding reliability predictions, but also has important, often counter-intuitive implications for decision making in various areas, like selection of components, project management
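    Observation (i) can be illustrated numerically with an assumed two-point distribution for the probability of failure per demand Q: the survival probability over n demands, E[(1-Q)^n], is never smaller than the value (1-E[Q])^n obtained by plugging in the mean pfd, so the point-estimate prediction is pessimistic (conservative).

    import numpy as np

    # Assumed two-point distribution for the pfd Q: with probability 0.9 the system
    # is very good, with probability 0.1 it has a much higher pfd.
    q_values = np.array([1.0e-5, 1.0e-3])
    q_probs = np.array([0.9, 0.1])
    mean_q = np.sum(q_probs * q_values)

    for n in (100, 1_000, 10_000):
        surv_true = np.sum(q_probs * (1.0 - q_values) ** n)   # E[(1 - Q)^n]
        surv_mean = (1.0 - mean_q) ** n                       # plug-in with the mean pfd
        print(f"n = {n:6d}: E[(1-Q)^n] = {surv_true:.4f} >= (1-E[Q])^n = {surv_mean:.4f}")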

  19. Optimum principle for a vehicular traffic network: minimum probability of congestion

    International Nuclear Information System (INIS)

    Kerner, Boris S

    2011-01-01

    We introduce an optimum principle for a vehicular traffic network with road bottlenecks. This network breakdown minimization (BM) principle states that the network optimum is reached when link flow rates are assigned in the network in such a way that the probability for spontaneous occurrence of traffic breakdown in at least one of the network bottlenecks during a given observation time reaches the minimum possible value. Based on numerical simulations with a stochastic three-phase traffic flow model, we show that in comparison to the well-known Wardrop's principles, the application of the BM principle permits considerably greater network inflow rates at which no traffic breakdown occurs and, therefore, free flow remains in the whole network. (fast track communication)

  20. Research on Probability for Failures in VW Cars During Warranty and Post-Warranty Periods

    Directory of Open Access Journals (Sweden)

    Dainius Luneckas

    2014-12-01

    The present paper examines the distribution of failures in Volkswagen cars during warranty and post-warranty periods. A statistical mathematical model has been developed upon collecting distribution data on car failures. Considering mileage rates, probabilities for a failure in the systems, including suspension and transmission, cooling, electrical, etc., have been determined for the warranty and post-warranty periods. The obtained results of the conducted research have been compared, and the conclusions reached have been formulated and summarized.

  1. Probability of failure prediction for step-stress fatigue under sine or random stress

    Science.gov (United States)

    Lambert, R. G.

    1979-01-01

    A previously proposed cumulative fatigue damage law is extended to predict the probability of failure or fatigue life for structural materials with S-N fatigue curves represented as a scatterband of failure points. The proposed law applies to structures subjected to sinusoidal or random stresses and includes the effect of initial crack (i.e., flaw) sizes. The corrected cycle ratio damage function is shown to have physical significance.

  2. Fishnet model for failure probability tail of nacre-like imbricated lamellar materials

    Science.gov (United States)

    Luo, Wen; Bažant, Zdeněk P.

    2017-12-01

    Nacre, the iridescent material of the shells of pearl oysters and abalone, consists mostly of aragonite (a form of CaCO3), a brittle constituent of relatively low strength (≈10 MPa). Yet it has astonishing mean tensile strength (≈150 MPa) and fracture energy (≈350 to 1,240 J/m2). The reasons have recently become well understood: (i) the nanoscale thickness (≈300 nm) of nacre's building blocks, the aragonite lamellae (or platelets), and (ii) the imbricated, or staggered, arrangement of these lamellae, bound by biopolymer layers only ≈25 nm thick that occupy only a small fraction of the volume. In engineering applications, however, a failure probability of ≤10^-6 is generally required. To guarantee it, the type of probability density function (pdf) of strength, including its tail, must be determined. This objective, not pursued previously, is hardly achievable by experiments alone, since >10^8 tests of specimens would be needed. Here we outline a statistical model of strength that resembles a fishnet pulled diagonally, captures the tail of the pdf of strength and, importantly, allows analytical safety assessments of nacreous materials. The analysis shows that, in terms of safety, the imbricated lamellar structure provides a major additional advantage: a ≈10% strength increase at a tail failure probability of 10^-6 and a 1 to 2 orders of magnitude decrease in tail probability at fixed stress. Another advantage is that a high scatter of microstructure properties diminishes the strength difference between the mean and the probability tail, compared with the weakest-link model. These advantages of nacre-like materials are here justified analytically and supported by millions of Monte Carlo simulations.

  3. Approximations to the Probability of Failure in Random Vibration by Integral Equation Methods

    DEFF Research Database (Denmark)

    Nielsen, Søren R.K.; Sørensen, John Dalsgaard

    Close approximations to the first passage probability of failure in random vibration can be obtained by integral equation methods. A simple relation exists between the first passage probability density function and the distribution function for the time interval spent below a barrier before outcrossing. An integral equation for the probability density function of the time interval is formulated, and adequate approximations for the kernel are suggested. The kernel approximation results in approximate solutions for the probability density function of the time interval, and hence for the first passage probability density. The results of the theory agree well with simulation results for narrow banded processes dominated by a single frequency, as well as for bimodal processes with 2 dominating frequencies in the structural response.

  4. The probability and the management of human error

    International Nuclear Information System (INIS)

    Duffey, R.B.; Saull, J.W.

    2004-01-01

    Embedded within modern technological systems, human error is the largest, and indeed dominant, contributor to accident cause. The consequences dominate the risk profiles for nuclear power and for many other technologies. We need to quantify the probability of human error for the system as an integral contribution within the overall system failure, as it is generally not separable or predictable for actual events. We also need to provide a means to manage and effectively reduce the failure (error) rate. The fact that humans learn from their mistakes allows a new determination of the dynamic probability and human failure (error) rate in technological systems. The result is consistent with and derived from the available world data for modern technological systems. Comparisons are made to actual data from large technological systems and recent catastrophes. Best estimate values and relationships can be derived for both the human error rate and the probability. We describe the potential for new approaches to the management of human error and safety indicators, based on the principles of error state exclusion and of the systematic effect of learning. A new equation is given for the probability of human error (λ) that combines the influences of early inexperience, learning from experience (ε) and stochastic occurrences, with a finite minimum rate: λ = 5×10^-5 + ((1/ε) - 5×10^-5) exp(-3ε). The future failure rate is entirely determined by the experience: thus the past defines the future.
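
    A small worked evaluation of the quoted relation, assuming ε is the accumulated experience expressed in the units the authors use; the experience values chosen below are purely illustrative.

```python
import math

def human_error_rate(experience):
    """Human error rate from the quoted relation:
    lambda = 5e-5 + (1/eps - 5e-5) * exp(-3*eps),
    where eps is the accumulated experience (in the authors' units)."""
    return 5e-5 + (1.0 / experience - 5e-5) * math.exp(-3.0 * experience)

# The rate falls from roughly 1/eps at low experience towards the
# asymptotic minimum of 5e-5 as experience accumulates.
for eps in (0.1, 1.0, 5.0, 10.0):
    print(f"eps = {eps:5.1f}  ->  lambda = {human_error_rate(eps):.3e}")
```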

  5. Optimum principle for a vehicular traffic network: minimum probability of congestion

    Energy Technology Data Exchange (ETDEWEB)

    Kerner, Boris S, E-mail: boris.kerner@daimler.com [Daimler AG, GR/PTF, HPC: G021, 71059 Sindelfingen (Germany)

    2011-03-04

    We introduce an optimum principle for a vehicular traffic network with road bottlenecks. This network breakdown minimization (BM) principle states that the network optimum is reached when link flow rates are assigned in the network in such a way that the probability for spontaneous occurrence of traffic breakdown in at least one of the network bottlenecks during a given observation time reaches the minimum possible value. Based on numerical simulations with a stochastic three-phase traffic flow model, we show that in comparison to the well-known Wardrop's principles, the application of the BM principle permits considerably greater network inflow rates at which no traffic breakdown occurs and, therefore, free flow remains in the whole network. (fast track communication)

  6. Application of a few orthogonal polynomials to the assessment of the fracture failure probability of a spherical tank

    International Nuclear Information System (INIS)

    Cao Tianjie; Zhou Zegong

    1993-01-01

    This paper presents some methods to assess the fracture failure probability of a spherical tank. These methods convert the assessment of the fracture failure probability into the calculation of the moments of cracks and a one-dimensional integral. In the paper, we first derive series formulae to calculate the moments of cracks in the presence of fatigue crack growth and the moments of crack opening displacements according to the JWES-2805 code. We then use the first n moments of crack opening displacements and a few orthogonal polynomials to construct the probability density function of the crack opening displacement. Lastly, the fracture failure probability is obtained according to interference theory. An example shows that these methods are simpler, quicker and more accurate. At the same time, these methods avoid the disadvantage of Edgeworth's series method. (author)

  7. Failure probability assessment of wall-thinned nuclear pipes using probabilistic fracture mechanics

    International Nuclear Information System (INIS)

    Lee, Sang-Min; Chang, Yoon-Suk; Choi, Jae-Boong; Kim, Young-Jin

    2006-01-01

    The integrity of nuclear piping system has to be maintained during operation. In order to maintain the integrity, reliable assessment procedures including fracture mechanics analysis, etc., are required. Up to now, this has been performed using conventional deterministic approaches even though there are many uncertainties to hinder a rational evaluation. In this respect, probabilistic approaches are considered as an appropriate method for piping system evaluation. The objectives of this paper are to estimate the failure probabilities of wall-thinned pipes in nuclear secondary systems and to propose limited operating conditions under different types of loadings. To do this, a probabilistic assessment program using reliability index and simulation techniques was developed and applied to evaluate failure probabilities of wall-thinned pipes subjected to internal pressure, bending moment and combined loading of them. The sensitivity analysis results as well as prototypal integrity assessment results showed a promising applicability of the probabilistic assessment program, necessity of practical evaluation reflecting combined loading condition and operation considering limited condition

  8. An empirical study on the human error recovery failure probability when using soft controls in NPP advanced MCRs

    International Nuclear Information System (INIS)

    Jang, Inseok; Kim, Ar Ryum; Jung, Wondea; Seong, Poong Hyun

    2014-01-01

    Highlights: • Many researchers have tried to understand human recovery process or step. • Modeling human recovery process is not sufficient to be applied to HRA. • The operation environment of MCRs in NPPs has changed by adopting new HSIs. • Recovery failure probability in a soft control operation environment is investigated. • Recovery failure probability here would be important evidence for expert judgment. - Abstract: It is well known that probabilistic safety assessments (PSAs) today consider not just hardware failures and environmental events that can impact upon risk, but also human error contributions. Consequently, the focus on reliability and performance management has been on the prevention of human errors and failures rather than the recovery of human errors. However, the recovery of human errors is as important as the prevention of human errors and failures for the safe operation of nuclear power plants (NPPs). For this reason, many researchers have tried to find a human recovery process or step. However, modeling the human recovery process is not sufficient enough to be applied to human reliability analysis (HRA), which requires human error and recovery probabilities. In this study, therefore, human error recovery failure probabilities based on predefined human error modes were investigated by conducting experiments in the operation mockup of advanced/digital main control rooms (MCRs) in NPPs. To this end, 48 subjects majoring in nuclear engineering participated in the experiments. In the experiments, using the developed accident scenario based on tasks from the standard post trip action (SPTA), the steam generator tube rupture (SGTR), and predominant soft control tasks, which are derived from the loss of coolant accident (LOCA) and the excess steam demand event (ESDE), all error detection and recovery data based on human error modes were checked with the performance sheet and the statistical analysis of error recovery/detection was then

  9. The probability of containment failure by steam explosion in a PWR

    International Nuclear Information System (INIS)

    Briggs, A.J.

    1983-12-01

    The study of the risk associated with operation of a PWR includes assessment of severe accidents in which a combination of faults results in melting of the core. Probabilistic methods are used in such assessment, hence it is necessary to estimate the probability of key events. One such event is the occurrence of a large steam explosion when molten core debris slumps into the base of the reactor vessel. This report considers recent information, and recommends an upper limit to the range of probability values for containment failure by steam explosion for risk assessment for a plant such as the proposed Sizewell B station. (U.K.)

  10. Probability of Failure Analysis Standards and Guidelines for Expendable Launch Vehicles

    Science.gov (United States)

    Wilde, Paul D.; Morse, Elisabeth L.; Rosati, Paul; Cather, Corey

    2013-09-01

    Recognizing the central importance of probability of failure estimates to ensuring public safety for launches, the Federal Aviation Administration (FAA), Office of Commercial Space Transportation (AST), the National Aeronautics and Space Administration (NASA), and U.S. Air Force (USAF), through the Common Standards Working Group (CSWG), developed a guide for conducting valid probability of failure (POF) analyses for expendable launch vehicles (ELV), with an emphasis on POF analysis for new ELVs. A probability of failure analysis for an ELV produces estimates of the likelihood of occurrence of potentially hazardous events, which are critical inputs to launch risk analysis of debris, toxic, or explosive hazards. This guide is intended to document a framework for POF analyses commonly accepted in the US, and should be useful to anyone who performs or evaluates launch risk analyses for new ELVs. The CSWG guidelines provide performance standards and definitions of key terms, and are being revised to address allocation to flight times and vehicle response modes. The POF performance standard allows a launch operator to employ alternative, potentially innovative methodologies so long as the results satisfy the performance standard. Current POF analysis practice at US ranges includes multiple methodologies described in the guidelines as accepted methods, but not necessarily the only methods available to demonstrate compliance with the performance standard. The guidelines include illustrative examples for each POF analysis method, which are intended to illustrate an acceptable level of fidelity for ELV POF analyses used to ensure public safety. The focus is on providing guiding principles rather than "recipe lists." Independent reviews of these guidelines were performed to assess their logic, completeness, accuracy, self- consistency, consistency with risk analysis practices, use of available information, and ease of applicability. The independent reviews confirmed the

  11. Mechanistic considerations used in the development of the probability of failure in transient increases in power (PROFIT) pellet-zircaloy cladding (thermo-mechanical-chemical) interactions (pci) fuel failure model

    International Nuclear Information System (INIS)

    Pankaskie, P.J.

    1980-05-01

    A fuel Pellet-Zircaloy Cladding (thermo-mechanical-chemical) interactions (PCI) failure model for estimating the Probability of Failure in Transient Increases in Power (PROFIT) was developed. PROFIT is based on (1) standard statistical methods applied to available PCI fuel failure data and (2) a mechanistic analysis of the environmental and strain-rate-dependent stress versus strain characteristics of Zircaloy cladding. The statistical analysis of fuel failures attributable to PCI suggested that parameters in addition to power, transient increase in power, and burnup are needed to define PCI fuel failures in terms of probability estimates with known confidence limits. The PROFIT model, therefore, introduces an environmental and strain-rate dependent Strain Energy Absorption to Failure (SEAF) concept to account for the stress versus strain anomalies attributable to interstitial-dislocation interaction effects in the Zircaloy cladding

  12. Differentiated protection services with failure probability guarantee for workflow-based applications

    Science.gov (United States)

    Zhong, Yaoquan; Guo, Wei; Jin, Yaohui; Sun, Weiqiang; Hu, Weisheng

    2010-12-01

    A cost-effective and service-differentiated provisioning strategy is very desirable to service providers so that they can offer users satisfactory services, while optimizing network resource allocation. Providing differentiated protection services to connections for surviving link failure has been extensively studied in recent years. However, the differentiated protection services for workflow-based applications, which consist of many interdependent tasks, have scarcely been studied. This paper investigates the problem of providing differentiated services for workflow-based applications in optical grid. In this paper, we develop three differentiated protection services provisioning strategies which can provide security level guarantee and network-resource optimization for workflow-based applications. The simulation demonstrates that these heuristic algorithms provide protection cost-effectively while satisfying the applications' failure probability requirements.

  13. Failure probability estimate of type 304 stainless steel piping

    International Nuclear Information System (INIS)

    Daugherty, W.L.; Awadalla, N.G.; Sindelar, R.L.; Mehta, H.S.; Ranganath, S.

    1989-01-01

    The primary source of in-service degradation of the SRS production reactor process water piping is intergranular stress corrosion cracking (IGSCC). IGSCC has occurred in a limited number of weld heat affected zones, areas known to be susceptible to IGSCC. A model has been developed to combine crack growth rates, crack size distributions, in-service examination reliability estimates and other considerations to estimate the pipe large-break frequency. This frequency estimates the probability that an IGSCC crack will initiate, escape detection by ultrasonic (UT) examination, and grow to instability prior to extending through-wall and being detected by the sensitive leak detection system. These events are combined as the product of four factors: (1) the probability that a given weld heat affected zone contains IGSCC; (2) the conditional probability, given the presence of IGSCC, that the cracking will escape detection during UT examination; (3) the conditional probability, given a crack escapes detection by UT, that it will not grow through-wall and be detected by leakage; (4) the conditional probability, given a crack is not detected by leakage, that it grows to instability prior to the next UT exam. These four factors estimate the occurrence of several conditions that must coexist in order for a crack to lead to a large break of the process water piping. When evaluated for the SRS production reactors, they produce an extremely low break frequency. The objective of this paper is to present the assumptions, methodology, results and conclusions of a probabilistic evaluation for the direct failure of the primary coolant piping resulting from normal operation and seismic loads. This evaluation was performed to support the ongoing PRA effort and to complement deterministic analyses addressing the credibility of a double-ended guillotine break
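
    The abstract above describes the large-break frequency as the product of four conditional probabilities per weld heat affected zone. The sketch below only illustrates that arithmetic with invented numbers; the actual SRS values are not given in the record.

```python
# Hypothetical per-weld probabilities; the actual SRS numbers are not stated in the abstract.
p_igscc       = 1e-2   # (1) weld heat affected zone contains IGSCC
p_miss_ut     = 1e-1   # (2) cracking escapes detection during UT examination
p_no_leak     = 1e-2   # (3) crack does not grow through-wall and reveal itself by leakage
p_instability = 1e-3   # (4) crack grows to instability before the next UT exam

n_welds = 200          # hypothetical number of susceptible weld heat affected zones

per_weld_break_probability = p_igscc * p_miss_ut * p_no_leak * p_instability
system_break_frequency = n_welds * per_weld_break_probability  # rare-event approximation

print(f"per-weld break probability : {per_weld_break_probability:.1e}")
print(f"system large-break estimate: {system_break_frequency:.1e}")
```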

  14. A Computable Plug-In Estimator of Minimum Volume Sets for Novelty Detection

    KAUST Repository

    Park, Chiwoo; Huang, Jianhua Z.; Ding, Yu

    2010-01-01

    A minimum volume set of a probability density is a region of minimum size among the regions covering a given probability mass of the density. Effective methods for finding the minimum volume sets are very useful for detecting failures or anomalies in commercial and security applications-a problem known as novelty detection. One theoretical approach of estimating the minimum volume set is to use a density level set where a kernel density estimator is plugged into the optimization problem that yields the appropriate level. Such a plug-in estimator is not of practical use because solving the corresponding minimization problem is usually intractable. A modified plug-in estimator was proposed by Hyndman in 1996 to overcome the computation difficulty of the theoretical approach but is not well studied in the literature. In this paper, we provide theoretical support to this estimator by showing its asymptotic consistency. We also show that this estimator is very competitive to other existing novelty detection methods through an extensive empirical study. ©2010 INFORMS.
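
    A minimal sketch of the Hyndman-style plug-in idea described above: estimate the density with a kernel density estimator, take the level as a density quantile over the sample, and flag points falling below that level as novelties. The data and the α value are arbitrary; this is not the authors' implementation.

```python
import numpy as np
from scipy.stats import gaussian_kde

rng = np.random.default_rng(0)
normal_data = rng.normal(loc=0.0, scale=1.0, size=(2, 500))  # 2-D training sample

kde = gaussian_kde(normal_data)

# Plug-in level: the alpha-quantile of the density evaluated at the sample points.
# Points with density below this level lie outside the (1 - alpha) minimum volume set.
alpha = 0.05
level = np.quantile(kde(normal_data), alpha)

def is_novelty(points):
    """points: array of shape (2, n); True where the point falls outside the set."""
    return kde(points) < level

test_points = np.array([[0.1, 4.0],
                        [0.2, 4.0]])  # column 0: typical point, column 1: outlier
print(is_novelty(test_points))        # expected: [False, True]
```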

  15. A Computable Plug-In Estimator of Minimum Volume Sets for Novelty Detection

    KAUST Repository

    Park, Chiwoo

    2010-10-01

    A minimum volume set of a probability density is a region of minimum size among the regions covering a given probability mass of the density. Effective methods for finding the minimum volume sets are very useful for detecting failures or anomalies in commercial and security applications-a problem known as novelty detection. One theoretical approach of estimating the minimum volume set is to use a density level set where a kernel density estimator is plugged into the optimization problem that yields the appropriate level. Such a plug-in estimator is not of practical use because solving the corresponding minimization problem is usually intractable. A modified plug-in estimator was proposed by Hyndman in 1996 to overcome the computation difficulty of the theoretical approach but is not well studied in the literature. In this paper, we provide theoretical support to this estimator by showing its asymptotic consistency. We also show that this estimator is very competitive to other existing novelty detection methods through an extensive empirical study. ©2010 INFORMS.

  16. Reactor pressure vessel failure probability following through-wall cracks due to pressurized thermal shock events

    International Nuclear Information System (INIS)

    Simonen, F.A.; Garnich, M.R.; Simonen, E.P.; Bian, S.H.; Nomura, K.K.; Anderson, W.E.; Pedersen, L.T.

    1986-04-01

    A fracture mechanics model was developed at the Pacific Northwest Laboratory (PNL) to predict the behavior of a reactor pressure vessel following a through-wall crack that occurs during a pressurized thermal shock (PTS) event. This study, which contributed to a US Nuclear Regulatory Commission (NRC) program to study PTS risk, was coordinated with the Integrated Pressurized Thermal Shock (IPTS) Program at Oak Ridge National Laboratory (ORNL). The PNL fracture mechanics model uses the critical transients and probabilities of through-wall cracks from the IPTS Program. The PNL model predicts the arrest, reinitiation, and direction of crack growth for a postulated through-wall crack and thereby predicts the mode of vessel failure. A Monte-Carlo type of computer code was written to predict the probabilities of the alternative failure modes. This code treats the fracture mechanics properties of the various welds and plates of a vessel as random variables. Plant-specific calculations were performed for the Oconee-1, Calvert Cliffs-1, and H.B. Robinson-2 reactor pressure vessels for the conditions of postulated transients. The model predicted that 50% or more of the through-wall axial cracks will turn to follow a circumferential weld. The predicted failure mode is a complete circumferential fracture of the vessel, which results in a potential vertically directed missile consisting of the upper head assembly. Missile arrest calculations for the three nuclear plants predict that such vertical missiles, as well as all potential horizontally directed fragmentation-type missiles, will be confined to the vessel enclosure cavity. The PNL failure mode model is recommended for use in future evaluations of other plants, to determine the failure modes that are most probable for postulated PTS events.

  17. The influence of frequency and reliability of in-service inspection on reactor pressure vessel disruptive failure probability

    International Nuclear Information System (INIS)

    Jordan, G.M.

    1977-01-01

    A simple probabilistic methodology is used to investigate the benefit, in terms of reduction of disruptive failure probability, which comes from the application of periodic In Service Inspection (ISI) to nuclear pressure vessels. The analysis indicates the strong interaction between inspection benefit and the intrinsic quality of the structure. In order to quantify the inspection benefit, assumptions are made which allow the quality to be characterized in terms of the parameters governing a Log Normal distribution of time-to-failure. Using these assumptions, it is shown that the overall benefit of ISI is unlikely to exceed an order of magnitude in terms of reduction of disruptive failure probability. The method is extended to evaluate the effect of the periodicity and reliability of the inspection process itself. (author)

  18. The influence of frequency and reliability of in-service inspection on reactor pressure vessel disruptive failure probability

    International Nuclear Information System (INIS)

    Jordan, G.M.

    1978-01-01

    A simple probabilistic methodology is used to investigate the benefit, in terms of reduction of disruptive failure probability, which comes from the application of periodic in-service inspection to nuclear pressure vessels. The analysis indicates the strong interaction between inspection benefit and the intrinsic quality of the structure. In order to quantify the inspection benefit, assumptions are made which allow the quality to be characterised in terms of the parameters governing a log-normal distribution of time-to-failure. Using these assumptions, it is shown that the overall benefit of in-service inspection is unlikely to exceed an order of magnitude in terms of reduction of disruptive failure probability. The method is extended to evaluate the effect of the periodicity and reliability of the inspection process itself. (author)

  19. An analysis of the annual probability of failure of the waste hoist brake system at the Waste Isolation Pilot Plant (WIPP)

    Energy Technology Data Exchange (ETDEWEB)

    Greenfield, M.A. [Univ. of California, Los Angeles, CA (United States); Sargent, T.J.

    1995-11-01

    The Environmental Evaluation Group (EEG) previously analyzed the probability of a catastrophic accident in the waste hoist of the Waste Isolation Pilot Plant (WIPP) and published the results in Greenfield (1990; EEG-44) and Greenfield and Sargent (1993; EEG-53). The most significant safety element in the waste hoist is the hydraulic brake system, whose possible failure was identified in these studies as the most important contributor in accident scenarios. Westinghouse Electric Corporation, Waste Isolation Division has calculated the probability of an accident involving the brake system based on studies utilizing extensive fault tree analyses. This analysis conducted for the U.S. Department of Energy (DOE) used point estimates to describe the probability of failure and includes failure rates for the various components comprising the brake system. An additional controlling factor in the DOE calculations is the mode of operation of the brake system. This factor enters for the following reason. The basic failure rate per annum of any individual element is called the Event Probability (EP), and is expressed as the probability of failure per annum. The EP in turn is the product of two factors. One is the "reported" failure rate, usually expressed as the probability of failure per hour, and the other is the expected number of hours that the element is in use, called the "mission time". In many instances the "mission time" will be the number of operating hours of the brake system per annum. However, since the operation of the waste hoist system includes regular "reoperational check" tests, the "mission time" for standby components is reduced in accordance with the specifics of the operational time table.

  20. An analysis of the annual probability of failure of the waste hoist brake system at the Waste Isolation Pilot Plant (WIPP)

    International Nuclear Information System (INIS)

    Greenfield, M.A.; Sargent, T.J.

    1995-11-01

    The Environmental Evaluation Group (EEG) previously analyzed the probability of a catastrophic accident in the waste hoist of the Waste Isolation Pilot Plant (WIPP) and published the results in Greenfield (1990; EEG-44) and Greenfield and Sargent (1993; EEG-53). The most significant safety element in the waste hoist is the hydraulic brake system, whose possible failure was identified in these studies as the most important contributor in accident scenarios. Westinghouse Electric Corporation, Waste Isolation Division has calculated the probability of an accident involving the brake system based on studies utilizing extensive fault tree analyses. This analysis conducted for the U.S. Department of Energy (DOE) used point estimates to describe the probability of failure and includes failure rates for the various components comprising the brake system. An additional controlling factor in the DOE calculations is the mode of operation of the brake system. This factor enters for the following reason. The basic failure rate per annum of any individual element is called the Event Probability (EP), and is expressed as the probability of failure per annum. The EP in turn is the product of two factors. One is the "reported" failure rate, usually expressed as the probability of failure per hour, and the other is the expected number of hours that the element is in use, called the "mission time". In many instances the "mission time" will be the number of operating hours of the brake system per annum. However, since the operation of the waste hoist system includes regular "reoperational check" tests, the "mission time" for standby components is reduced in accordance with the specifics of the operational time table.

  1. The HYDROMED model and its application to semi-arid Mediterranean catchments with hill reservoirs 3: Reservoir storage capacity and probability of failure model

    Directory of Open Access Journals (Sweden)

    R. Ragab

    2001-01-01

    This paper addresses the issue of "what reservoir storage capacity is required to maintain a yield with a given probability of failure?". It is an important issue in terms of construction and cost. HYDROMED offers a solution based on the modified Gould probability matrix method. This method has the advantage of sampling all years' data without reference to the sequence and is therefore particularly suitable for catchments with patchy data. In the HYDROMED model, the probability of failure is calculated on a monthly basis. The model has been applied to the El-Gouazine catchment in Tunisia using a long rainfall record from Kairouan together with the estimated Hortonian runoff, class A pan evaporation data and estimated abstraction data. Generally, the probability of failure differed from winter to summer, and it approaches zero when the reservoir capacity is 500,000 m3. The 25% probability of failure (75% success) is achieved with a reservoir capacity of 58,000 m3 in June and 95,000 m3 in January. The probability of failure for a 240,000 m3 capacity reservoir (close to the storage capacity of El-Gouazine, 233,000 m3) is approximately 5% in November, December and January, 3% in March, and 1.1% in May and June. Consequently there is no high risk of El-Gouazine being unable to meet its requirements at a capacity of 233,000 m3. Accordingly, the benefit, in terms of probability of failure, of increasing the reservoir volume of El-Gouazine beyond 250,000 m3 is not high. This is important for the design engineers and the funding organizations. However, the analysis is based on the existing water abstraction policy, absence of siltation rate data and on the assumption that the present climate will prevail during the lifetime of the reservoir. Should these conditions change, a new analysis should be carried out. Keywords: HYDROMED, reservoir, storage capacity, probability of failure, Mediterranean
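
    A simple monthly behaviour simulation (not the modified Gould probability matrix method used by HYDROMED) can illustrate how the probability of failure falls as reservoir capacity grows; all inflow, evaporation and demand figures below are invented.

```python
import numpy as np

def monthly_failure_probability(inflow, demand, evaporation, capacity):
    """Fraction of months in which the reservoir cannot supply the demand
    (a simple behaviour simulation, not the modified Gould matrix method)."""
    storage, failures = capacity, 0
    for q_in, q_ev in zip(inflow, evaporation):
        storage = min(storage + q_in - q_ev, capacity)  # fill, spill any excess
        if storage >= demand:
            storage -= demand
        else:
            failures += 1                               # month in which the yield is not met
            storage = max(storage, 0.0)
    return failures / len(inflow)

rng = np.random.default_rng(1)
months = 12 * 50                                             # 50 years of synthetic data
inflow = rng.gamma(shape=0.6, scale=30_000, size=months)     # m3/month, invented
evap = np.full(months, 4_000.0)                              # m3/month, invented
demand = 10_000.0                                            # m3/month, invented

for capacity in (60_000, 120_000, 240_000, 500_000):
    p_fail = monthly_failure_probability(inflow, demand, evap, capacity)
    print(f"capacity {capacity:>7,} m3 -> monthly failure probability {p_fail:.3f}")
```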

  2. A combined Importance Sampling and Kriging reliability method for small failure probabilities with time-demanding numerical models

    International Nuclear Information System (INIS)

    Echard, B.; Gayton, N.; Lemaire, M.; Relun, N.

    2013-01-01

    Applying reliability methods to a complex structure is often delicate for two main reasons. First, such a structure is fortunately designed with codified rules leading to a large safety margin which means that failure is a small probability event. Such a probability level is difficult to assess efficiently. Second, the structure mechanical behaviour is modelled numerically in an attempt to reproduce the real response and numerical model tends to be more and more time-demanding as its complexity is increased to improve accuracy and to consider particular mechanical behaviour. As a consequence, performing a large number of model computations cannot be considered in order to assess the failure probability. To overcome these issues, this paper proposes an original and easily implementable method called AK-IS for active learning and Kriging-based Importance Sampling. This new method is based on the AK-MCS algorithm previously published by Echard et al. [AK-MCS: an active learning reliability method combining Kriging and Monte Carlo simulation. Structural Safety 2011;33(2):145–54]. It associates the Kriging metamodel and its advantageous stochastic property with the Importance Sampling method to assess small failure probabilities. It enables the correction or validation of the FORM approximation with only a very few mechanical model computations. The efficiency of the method is, first, proved on two academic applications. It is then conducted for assessing the reliability of a challenging aerospace case study submitted to fatigue.
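
    The sketch below illustrates only the Importance Sampling ingredient of the approach described above, for a toy limit state whose most probable failure point is known analytically; in AK-IS that point would come from FORM and the limit state would be evaluated through a Kriging surrogate of the expensive model.

```python
import numpy as np

# Limit state: failure when g(u) <= 0; u are independent standard normal variables.
def g(u):
    return 5.0 - (u[0] + u[1])   # exact P_f = Phi(-5/sqrt(2)) ~ 2.0e-4

rng = np.random.default_rng(2)
n = 20_000

# Importance density: standard normal shifted to the (here analytically known)
# most probable failure point u* = (2.5, 2.5).
u_star = np.array([2.5, 2.5])
samples = rng.standard_normal((n, 2)) + u_star

# Importance weights: ratio of the original density to the shifted density.
log_w = -0.5 * np.sum(samples**2, axis=1) + 0.5 * np.sum((samples - u_star)**2, axis=1)
weights = np.exp(log_w)

indicator = np.array([g(u) <= 0 for u in samples], dtype=float)
p_f = np.mean(indicator * weights)
print(f"importance sampling estimate of P_f: {p_f:.2e}")   # close to 2.0e-4
```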

  3. Sensitivity analysis on the effect of software-induced common cause failure probability in the computer-based reactor trip system unavailability

    International Nuclear Information System (INIS)

    Kamyab, Shahabeddin; Nematollahi, Mohammadreza; Shafiee, Golnoush

    2013-01-01

    Highlights: ► Importance and sensitivity analysis has been performed for a digitized reactor trip system. ► The results show acceptable trip unavailability for software failure probabilities below 1E-4. ► However, the value of Fussell–Vesely indicates that software common cause failure is still risk significant. ► Diversity and effective testing are found to be beneficial in reducing the software contribution. - Abstract: The reactor trip system has been digitized in advanced nuclear power plants, since the programmable nature of computer-based systems has a number of advantages over non-programmable systems. However, software is still vulnerable to common cause failure (CCF). Residual software faults represent a CCF concern, which threatens the implemented achievements. This study attempts to assess the effectiveness of so-called defensive strategies against software CCF with respect to reliability. Sensitivity analysis has been performed by re-quantifying the models upon changing the software failure probability. Importance measures have then been estimated in order to reveal the specific contribution of software CCF to the trip failure probability. The results reveal the importance and effectiveness of signal and software diversity as applicable strategies to ameliorate inefficiencies due to software CCF in the reactor trip system (RTS). No significant change has been observed in the RTS failure probability for a basic software CCF probability greater than 1 × 10^-4. However, the related Fussell–Vesely importance was greater than 0.005 for lower values. The study concludes that consideration of the risk associated with software-based systems depends on multiple factors, which requires the trade-offs among them to be examined in more precise and comprehensive studies.

  4. Modeling tumor control probability for spatially inhomogeneous risk of failure based on clinical outcome data

    DEFF Research Database (Denmark)

    Lühr, Armin; Löck, Steffen; Jakobi, Annika

    2017-01-01

    PURPOSE: Objectives of this work are (1) to derive a general clinically relevant approach to model tumor control probability (TCP) for spatially variable risk of failure and (2) to demonstrate its applicability by estimating TCP for patients planned for photon and proton irradiation. METHODS AND ...

  5. Probability of failure of the watershed algorithm for peak detection in comprehensive two-dimensional chromatography

    NARCIS (Netherlands)

    Vivó-Truyols, G.; Janssen, H.-G.

    2010-01-01

    The watershed algorithm is the most common method used for peak detection and integration in two-dimensional chromatography. However, the retention time variability in the second dimension may cause the algorithm to fail. A study calculating the probabilities of failure of the watershed algorithm was carried out.

  6. Personnel reliability impact on petrochemical facilities monitoring system's failure skipping probability

    Science.gov (United States)

    Kostyukov, V. N.; Naumenko, A. P.

    2017-08-01

    The paper dwells upon the urgent issue of evaluating the impact of actions by operators of complex technological systems on their safe operation, considering the application of condition monitoring systems to elements and sub-systems of petrochemical production facilities. The main task of the research is to distinguish factors and criteria for describing monitoring system properties that allow the impact of errors made by personnel on the operation of real-time condition monitoring and diagnostic systems for petrochemical machinery to be evaluated, and to find objective criteria for the monitoring system class that take the human factor into account. On the basis of real-time condition monitoring concepts (sudden failure skipping risk, static and dynamic error, monitoring system class), one may evaluate the impact that personnel qualification has on monitoring system operation in terms of errors in personnel or operators' actions while receiving information from monitoring systems and operating a technological system. The operator is considered a part of the technological system. Personnel behaviour is usually described as a combination of the following parameters: input signal (information perception), reaction (decision making), and response (decision implementation). Based on several studies of the behaviour of nuclear power station operators in the USA, Italy and other countries, as well as on research conducted by Russian scientists, data on operator reliability were selected for the analysis of operator behaviour with respect to the diagnostics and monitoring systems of technological facilities. The calculations revealed that for the monitoring system selected as an example, the failure skipping risk for the set values of static (less than 0.01) and dynamic (less than 0.001) errors, considering all related factors of data on reliability of information perception, decision making, and reaction, is 0.037, in the case when all the facilities and error probability are under

  7. FAILPROB-A Computer Program to Compute the Probability of Failure of a Brittle Component; TOPICAL

    International Nuclear Information System (INIS)

    WELLMAN, GERALD W.

    2002-01-01

    FAILPROB is a computer program that applies the Weibull statistics characteristic of brittle failure of a material along with the stress field resulting from a finite element analysis to determine the probability of failure of a component. FAILPROB uses the statistical techniques for fast fracture prediction (but not the coding) from the N.A.S.A. - CARES/life ceramic reliability package. FAILPROB provides the analyst at Sandia with a more convenient tool than CARES/life because it is designed to behave in the tradition of structural analysis post-processing software such as ALGEBRA, in which the standard finite element database format EXODUS II is both read and written. This maintains compatibility with the entire SEACAS suite of post-processing software. A new technique to deal with the high local stresses computed for structures with singularities such as glass-to-metal seals and ceramic-to-metal braze joints is proposed and implemented. This technique provides failure probability computation that is insensitive to the finite element mesh employed in the underlying stress analysis. Included in this report are a brief discussion of the computational algorithms employed, user instructions, and example problems that both demonstrate the operation of FAILPROB and provide a starting point for verification and validation
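
    FAILPROB's actual algorithms are not reproduced in the record, but the underlying weakest-link Weibull postulate can be sketched as below: each element's tensile stress and volume contribute to a risk-of-rupture sum, from which the component failure probability follows. The element data and Weibull parameters are invented, and the code is not FAILPROB itself.

```python
import numpy as np

def weibull_failure_probability(stresses, volumes, m, sigma_0):
    """Weakest-link Weibull estimate of component failure probability:
    P_f = 1 - exp( - sum_i V_i * (sigma_i / sigma_0)**m )
    stresses : maximum principal stress in each element (MPa)
    volumes  : element volumes (mm^3)
    m        : Weibull modulus
    sigma_0  : characteristic strength scaled to unit volume
    """
    sigma = np.clip(stresses, 0.0, None)   # compressive stresses do not contribute
    risk_of_rupture = np.sum(volumes * (sigma / sigma_0) ** m)
    return 1.0 - np.exp(-risk_of_rupture)

# Hypothetical element data standing in for a finite element stress field.
stresses = np.array([120.0, 95.0, 60.0, -40.0, 150.0])   # MPa
volumes = np.array([2.0, 3.0, 5.0, 4.0, 1.0])            # mm^3
p_f = weibull_failure_probability(stresses, volumes, m=10.0, sigma_0=400.0)
print(f"P_f = {p_f:.3e}")
```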

  8. Estimation of failure probability of the end induced current depending on uncertain parameters of a transmission line

    International Nuclear Information System (INIS)

    Larbi, M.; Besnier, P.; Pecqueux, B.

    2014-01-01

    This paper deals with the risk analysis of an EMC failure using a statistical approach based on reliability methods. The probability of failure (i.e., the probability of exceeding a threshold) of a current induced by crosstalk is computed by taking into account uncertainties in the input parameters that influence extreme levels of interference in the context of transmission lines. Results are compared with Monte Carlo simulation (MCS). (authors)

  9. Probability of Accurate Heart Failure Diagnosis and the Implications for Hospital Readmissions.

    Science.gov (United States)

    Carey, Sandra A; Bass, Kyle; Saracino, Giovanna; East, Cara A; Felius, Joost; Grayburn, Paul A; Vallabhan, Ravi C; Hall, Shelley A

    2017-04-01

    Heart failure (HF) is a complex syndrome with inherent diagnostic challenges. We studied the scope of possibly inaccurately documented HF in a large health care system among patients assigned a primary diagnosis of HF at discharge. Through a retrospective record review and a classification schema developed from published guidelines, we assessed the probability of the documented HF diagnosis being accurate and determined factors associated with HF-related and non-HF-related hospital readmissions. An arbitration committee of 3 experts reviewed a subset of records to corroborate the results. We assigned a low probability of accurate diagnosis to 133 (19%) of the 712 patients. A subset of patients were also reviewed by an expert panel, which concluded that 13% to 35% of patients probably did not have HF (inter-rater agreement, kappa = 0.35). Low-probability HF was predictive of being readmitted more frequently for non-HF causes (p = 0.018), as well as documented arrhythmias (p = 0.023), and age >60 years (p = 0.006). Documented sleep apnea (p = 0.035), percutaneous coronary intervention (p = 0.006), non-white race (p = 0.047), and B-type natriuretic peptide >400 pg/ml (p = 0.007) were determined to be predictive of HF readmissions in this cohort. In conclusion, approximately 1 in 5 patients documented to have HF were found to have a low probability of actually having it. Moreover, the determination of low-probability HF was twice as likely to result in readmission for non-HF causes and, thus, should be considered a determinant for all-cause readmissions in this population. Copyright © 2017 Elsevier Inc. All rights reserved.

  10. The probability of containment failure by direct containment heating in Zion

    International Nuclear Information System (INIS)

    Pilch, M.M.; Yan, H.; Theofanous, T.G.

    1994-12-01

    This report is the first step in the resolution of the Direct Containment Heating (DCH) issue for the Zion Nuclear Power Plant using the Risk Oriented Accident Analysis Methodology (ROAAM). This report includes the definition of a probabilistic framework that decomposes the DCH problem into three probability density functions that reflect the most uncertain initial conditions (UO 2 mass, zirconium oxidation fraction, and steel mass). Uncertainties in the initial conditions are significant, but our quantification approach is based on establishing reasonable bounds that are not unnecessarily conservative. To this end, we also make use of the ROAAM ideas of enveloping scenarios and ''splintering.'' Two causal relations (CRs) are used in this framework: CR1 is a model that calculates the peak pressure in the containment as a function of the initial conditions, and CR2 is a model that returns the frequency of containment failure as a function of pressure within the containment. Uncertainty in CR1 is accounted for by the use of two independently developed phenomenological models, the Convection Limited Containment Heating (CLCH) model and the Two-Cell Equilibrium (TCE) model, and by probabilistically distributing the key parameter in both, which is the ratio of the melt entrainment time to the system blowdown time constant. The two phenomenological models have been compared with an extensive database including recent integral simulations at two different physical scales. The containment load distributions do not intersect the containment strength (fragility) curve in any significant way, resulting in containment failure probabilities less than 10 -3 for all scenarios considered. Sensitivity analyses did not show any areas of large sensitivity

  11. Advanced RESTART method for the estimation of the probability of failure of highly reliable hybrid dynamic systems

    International Nuclear Information System (INIS)

    Turati, Pietro; Pedroni, Nicola; Zio, Enrico

    2016-01-01

    The efficient estimation of system reliability characteristics is of paramount importance for many engineering applications. Real world system reliability modeling calls for the capability of treating systems that are: i) dynamic, ii) complex, iii) hybrid and iv) highly reliable. Advanced Monte Carlo (MC) methods offer a way to solve these types of problems, which are feasible according to the potentially high computational costs. In this paper, the REpetitive Simulation Trials After Reaching Thresholds (RESTART) method is employed, extending it to hybrid systems for the first time (to the authors’ knowledge). The estimation accuracy and precision of RESTART highly depend on the choice of the Importance Function (IF) indicating how close the system is to failure: in this respect, proper IFs are here originally proposed to improve the performance of RESTART for the analysis of hybrid systems. The resulting overall simulation approach is applied to estimate the probability of failure of the control system of a liquid hold-up tank and of a pump-valve subsystem subject to degradation induced by fatigue. The results are compared to those obtained by standard MC simulation and by RESTART with classical IFs available in the literature. The comparison shows the improvement in the performance obtained by our approach. - Highlights: • We consider the issue of estimating small failure probabilities in dynamic systems. • We employ the RESTART method to estimate the failure probabilities. • New Importance Functions (IFs) are introduced to increase the method performance. • We adopt two dynamic, hybrid, highly reliable systems as case studies. • A comparison with literature IFs proves the effectiveness of the new IFs.

  12. Probability distribution of machining center failures

    International Nuclear Information System (INIS)

    Jia Yazhou; Wang Molin; Jia Zhixin

    1995-01-01

    Through field tracing research for 24 Chinese cutter-changeable CNC machine tools (machining centers) over a period of one year, a database of operation and maintenance for machining centers was built, the failure data was fitted to the Weibull distribution and the exponential distribution, the effectiveness was tested, and the failure distribution pattern of machining centers was found. Finally, the reliability characterizations for machining centers are proposed
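
    A hedged sketch of the kind of distribution fitting and goodness-of-fit testing described above, using invented time-between-failure data rather than the authors' field data.

```python
import numpy as np
from scipy import stats

# Hypothetical time-between-failure data (hours) for a machining center.
tbf = np.array([35., 82., 110., 147., 60., 201., 95., 12., 160., 74.,
                240., 55., 130., 44., 180., 27., 300., 66., 150., 90.])

# Fit a two-parameter Weibull (location fixed at 0) and an exponential distribution.
shape, loc, scale = stats.weibull_min.fit(tbf, floc=0.0)
exp_loc, exp_scale = stats.expon.fit(tbf, floc=0.0)

# Kolmogorov-Smirnov goodness-of-fit check for both candidates.
ks_weibull = stats.kstest(tbf, 'weibull_min', args=(shape, loc, scale))
ks_expon = stats.kstest(tbf, 'expon', args=(exp_loc, exp_scale))

print(f"Weibull fit     : shape={shape:.2f}, scale={scale:.1f} h, KS p={ks_weibull.pvalue:.2f}")
print(f"Exponential fit : mean={exp_scale:.1f} h, KS p={ks_expon.pvalue:.2f}")
```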

  13. Estimation of the common cause failure probabilities on the component group with mixed testing scheme

    International Nuclear Information System (INIS)

    Hwang, Meejeong; Kang, Dae Il

    2011-01-01

    Highlights: ► This paper presents a method to estimate the common cause failure probabilities on the common cause component group with mixed testing schemes. ► The CCF probabilities are dependent on the testing schemes such as staggered testing or non-staggered testing. ► There are many CCCGs with specific mixed testing schemes in real plant operation. ► Therefore, a general formula which is applicable to both alternate periodic testing scheme and train level mixed testing scheme was derived. - Abstract: This paper presents a method to estimate the common cause failure (CCF) probabilities on the common cause component group (CCCG) with mixed testing schemes such as the train level mixed testing scheme or the alternate periodic testing scheme. In the train level mixed testing scheme, the components are tested in a non-staggered way within the same train, but the components are tested in a staggered way between the trains. The alternate periodic testing scheme indicates that all components in the same CCCG are tested in a non-staggered way during the planned maintenance period, but they are tested in a staggered way during normal plant operation. Since the CCF probabilities are dependent on the testing schemes such as staggered testing or non-staggered testing, CCF estimators have two kinds of formulas in accordance with the testing schemes. Thus, there are general formulas to estimate the CCF probability on the staggered testing scheme and non-staggered testing scheme. However, in real plant operation, there are many CCCGs with specific mixed testing schemes. Recently, Barros () and Kang () proposed a CCF factor estimation method to reflect the alternate periodic testing scheme and the train level mixed testing scheme. In this paper, a general formula which is applicable to both the alternate periodic testing scheme and the train level mixed testing scheme was derived.

  14. The probability of containment failure by direct containment heating in zion

    International Nuclear Information System (INIS)

    Pilch, M.M.; Yan, H.; Theofanous, T.G.

    1994-01-01

    This report is the first step in the resolution of the Direct Containment Heating (DCH) issue for the Zion Nuclear Power Plant using the Risk Oriented Accident Analysis Methodology (ROAAM). This report includes the definition of a probabilistic framework that decomposes the DCH problem into three probability density functions that reflect the most uncertain initial conditions (UO 2 mass, zirconium oxidation fraction, and steel mass). Uncertainties in the initial conditions are significant, but the quantification approach is based on establishing reasonable bounds that are not unnecessarily conservative. To this end, the authors also make use of the ROAAM ideas of enveloping scenarios and open-quotes splinteringclose quotes. Two casual relations (CRs) are used in this framework: CR1 is a model that calculates the peak pressure in the containment as a function of the initial conditions, and CR2 is a model that returns the frequency of containment failure as a function of pressure within the containment. Uncertainty in CR1 is accounted for by the use of two independently developed phenomenological models, the Convection Limited Containment Heating (CLCH) model and the Two-Cell Equilibrium (TCE) model, and by probabilistically distributing the key parameter in both, which is the ratio of the melt entrainment time to the system blowdown time constant. The two phenomenological models have been compared with an extensive data base including recent integral simulations at two different physical scales (1/10th scale in the Surtsey facility at Sandia National Laboratories and 1/40th scale in the COREXIT facility at Argonne National Laboratory). The loads predicted by these models were significantly lower than those from previous parametric calculations. The containment load distributions do not intersect the containment strength curve in any significant way, resulting in containment failure probabilities less than 10 -3 for all scenarios considered

  15. Classification of resistance to passive motion using minimum probability of error criterion.

    Science.gov (United States)

    Chan, H C; Manry, M T; Kondraske, G V

    1987-01-01

    Neurologists diagnose many muscular and nerve disorders by classifying the resistance to passive motion of patients' limbs. Over the past several years, a computer-based instrument has been developed for automated measurement and parameterization of this resistance. In the device, a voluntarily relaxed lower extremity is moved at constant velocity by a motorized driver. The torque exerted on the extremity by the machine is sampled, along with the angle of the extremity. In this paper a computerized technique is described for classifying a patient's condition as 'Normal' or 'Parkinson disease' (rigidity), from the torque versus angle curve for the knee joint. A Legendre polynomial, fit to the curve, is used to calculate a set of eight normally distributed features of the curve. The minimum probability of error approach is used to classify the curve as being from a normal or Parkinson disease patient. Data collected from 44 different subjects were processed, and the results were compared with an independent physician's subjective assessment of rigidity. There is agreement in better than 95% of the cases when all of the features are used.
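
    The sketch below shows a generic minimum-probability-of-error (Bayes) rule with Gaussian class-conditional densities on eight features, in the spirit of the approach described above; the diagonal covariances and the toy data are assumptions, not the authors' implementation.

```python
import numpy as np

class GaussianMinErrorClassifier:
    """Minimum-probability-of-error (Bayes) rule with Gaussian class-conditional
    densities and diagonal covariances; feature vectors stand in for the curve
    features described above, and the data below are made up."""

    def fit(self, X, y):
        self.classes_ = np.unique(y)
        self.means_, self.vars_, self.priors_ = {}, {}, {}
        for c in self.classes_:
            Xc = X[y == c]
            self.means_[c] = Xc.mean(axis=0)
            self.vars_[c] = Xc.var(axis=0) + 1e-9
            self.priors_[c] = len(Xc) / len(X)
        return self

    def predict(self, X):
        scores = []
        for c in self.classes_:
            # log posterior up to a constant: log prior + log Gaussian likelihood
            log_lik = -0.5 * np.sum(np.log(2 * np.pi * self.vars_[c])
                                    + (X - self.means_[c]) ** 2 / self.vars_[c], axis=1)
            scores.append(np.log(self.priors_[c]) + log_lik)
        return self.classes_[np.argmax(np.stack(scores), axis=0)]

# Toy two-class data with 8 features per subject (0 = Normal, 1 = Parkinson rigidity).
rng = np.random.default_rng(3)
X = np.vstack([rng.normal(0.0, 1.0, size=(30, 8)),
               rng.normal(1.5, 1.0, size=(30, 8))])
y = np.array([0] * 30 + [1] * 30)

clf = GaussianMinErrorClassifier().fit(X, y)
print("training accuracy:", np.mean(clf.predict(X) == y))
```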

  16. Estimation of the common cause failure probabilities of the components under mixed testing schemes

    International Nuclear Information System (INIS)

    Kang, Dae Il; Hwang, Mee Jeong; Han, Sang Hoon

    2009-01-01

    For the case where trains or channels of standby safety systems consisting of more than two redundant components are tested in a staggered manner, the standby safety components within a train can be tested simultaneously or consecutively. In this case, mixed testing schemes, staggered and non-staggered testing schemes, are used for testing the components. Approximate formulas, based on the basic parameter method, were developed for the estimation of the common cause failure (CCF) probabilities of the components under mixed testing schemes. The developed formulas were applied to the four redundant check valves of the auxiliary feed water system as a demonstration study for their appropriateness. For a comparison, we estimated the CCF probabilities of the four redundant check valves for the mixed, staggered, and non-staggered testing schemes. The CCF probabilities of the four redundant check valves for the mixed testing schemes were estimated to be higher than those for the staggered testing scheme, and lower than those for the non-staggered testing scheme.
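
    The record's mixed-testing-scheme formulas are not reproduced here; as a much simpler point of reference, the sketch below shows a generic beta-factor treatment of common cause failure for a four-valve group, with invented numbers.

```python
# A generic beta-factor illustration of common cause failure, not the
# mixed-testing-scheme formulas derived in the record above.
q_total = 1.0e-3   # total per-demand failure probability of one check valve (invented)
beta = 0.05        # assumed fraction of failures that are common cause

q_independent = (1.0 - beta) * q_total
q_ccf = beta * q_total

# Four redundant valves: the group fails either through four independent
# failures or through a single common cause event (rare-event approximation).
p_group_failure = q_independent ** 4 + q_ccf

print(f"independent contribution: {q_independent**4:.2e}")
print(f"CCF contribution        : {q_ccf:.2e}")
print(f"group failure (approx)  : {p_group_failure:.2e}")
```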

  17. Estimation of Partial Safety Factors and Target Failure Probability Based on Cost Optimization of Rubble Mound Breakwaters

    DEFF Research Database (Denmark)

    Kim, Seung-Woo; Suh, Kyung-Duck; Burcharth, Hans F.

    2010-01-01

    The breakwaters are designed by considering the cost optimization because a human risk is seldom considered. Most breakwaters, however, were constructed without considering the cost optimization. In this study, the optimum return period, target failure probability and the partial safety factors...

  18. Modelling the impact of creep on the probability of failure of a solid oxidefuel cell stack

    DEFF Research Database (Denmark)

    Greco, Fabio; Frandsen, Henrik Lund; Nakajo, Arata

    2014-01-01

    In solid oxide fuel cell (SOFC) technology a major challenge lies in balancing thermal stresses from an inevitable thermal field. The cells are known to creep, changing over time the stress field. The main objective of this study was to assess the influence of creep on the failure probability of ...

  19. The Human Bathtub: Safety and Risk Predictions Including the Dynamic Probability of Operator Errors

    International Nuclear Information System (INIS)

    Duffey, Romney B.; Saull, John W.

    2006-01-01

    Reactor safety and risk are dominated by the potential and major contribution for human error in the design, operation, control, management, regulation and maintenance of the plant, and hence to all accidents. Given the possibility of accidents and errors, now we need to determine the outcome (error) probability, or the chance of failure. Conventionally, reliability engineering is associated with the failure rate of components, or systems, or mechanisms, not of human beings in and interacting with a technological system. The probability of failure requires a prior knowledge of the total number of outcomes, which for any predictive purposes we do not know or have. Analysis of failure rates due to human error and the rate of learning allow a new determination of the dynamic human error rate in technological systems, consistent with and derived from the available world data. The basis for the analysis is the 'learning hypothesis' that humans learn from experience, and consequently the accumulated experience defines the failure rate. A new 'best' equation has been derived for the human error, outcome or failure rate, which allows for calculation and prediction of the probability of human error. We also provide comparisons to the empirical Weibull parameter fitting used in and by conventional reliability engineering and probabilistic safety analysis methods. These new analyses show that arbitrary Weibull fitting parameters and typical empirical hazard function techniques cannot be used to predict the dynamics of human errors and outcomes in the presence of learning. Comparisons of these new insights show agreement with human error data from the world's commercial airlines, the two shuttle failures, and from nuclear plant operator actions and transient control behavior observed in transients in both plants and simulators. The results demonstrate that the human error probability (HEP) is dynamic, and that it may be predicted using the learning hypothesis and the minimum

  20. Flexural strength and the probability of failure of cold isostatic pressed zirconia core ceramics.

    Science.gov (United States)

    Siarampi, Eleni; Kontonasaki, Eleana; Papadopoulou, Lambrini; Kantiranis, Nikolaos; Zorba, Triantafillia; Paraskevopoulos, Konstantinos M; Koidis, Petros

    2012-08-01

    The flexural strength of zirconia core ceramics must predictably withstand the high stresses developed during oral function. The in-depth interpretation of strength parameters and the probability of failure during clinical performance could assist the clinician in selecting the optimum materials while planning treatment. The purpose of this study was to evaluate the flexural strength based on survival probability and Weibull statistical analysis of 2 zirconia cores for ceramic restorations. Twenty bar-shaped specimens were milled from 2 core ceramics, IPS e.max ZirCAD and Wieland ZENO Zr, and were loaded until fracture according to ISO 6872 (3-point bending test). An independent samples t test was used to assess significant differences of fracture strength (α=.05). Weibull statistical analysis of the flexural strength data provided 2 parameter estimates: Weibull modulus (m) and characteristic strength (σ(0)). The fractured surfaces of the specimens were evaluated by scanning electron microscopy (SEM) and energy dispersive spectroscopy (EDS). The investigation of the crystallographic state of the materials was performed with x-ray diffraction analysis (XRD) and Fourier transform infrared (FTIR) spectroscopy. Higher mean flexural strength (Plines zones). Both groups primarily sustained the tetragonal phase of zirconia and a negligible amount of the monoclinic phase. Although both zirconia ceramics presented similar fractographic and crystallographic properties, the higher flexural strength of WZ ceramics was associated with a lower m and more voids in their microstructure. These findings suggest a greater scattering of strength values and a flaw distribution that are expected to increase failure probability. Copyright © 2012 The Editorial Council of the Journal of Prosthetic Dentistry. Published by Mosby, Inc. All rights reserved.
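    A two-parameter Weibull analysis of flexural-strength data, of the kind reported above, is commonly done by linearizing the Weibull CDF and fitting by least squares. The sketch below uses median-rank plotting positions and invented strength values; it is not the authors' dataset or their exact estimation procedure.

```python
import numpy as np

def weibull_fit(strengths):
    """Estimate the Weibull modulus m and characteristic strength sigma0 from
    fracture strengths via the linearized CDF:
    ln(-ln(1-F)) = m*ln(sigma) - m*ln(sigma0)."""
    s = np.sort(np.asarray(strengths, dtype=float))
    n = len(s)
    ranks = np.arange(1, n + 1)
    F = (ranks - 0.5) / n            # median-rank estimator (one common choice)
    y = np.log(-np.log(1.0 - F))
    x = np.log(s)
    m, c = np.polyfit(x, y, 1)       # slope = m, intercept = -m*ln(sigma0)
    sigma0 = np.exp(-c / m)
    return m, sigma0

if __name__ == "__main__":
    # Hypothetical 3-point-bend strengths in MPa (not the study's data).
    data = [780, 812, 845, 870, 902, 915, 940, 968, 990, 1025]
    m, sigma0 = weibull_fit(data)
    print(f"Weibull modulus m = {m:.2f}, characteristic strength = {sigma0:.0f} MPa")
    # A lower m implies more scatter and a higher failure probability at a given stress:
    stress = 850.0
    pf = 1.0 - np.exp(-(stress / sigma0) ** m)
    print(f"P(failure at {stress} MPa) = {pf:.3f}")
```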

  1. Efficient Probability of Failure Calculations for QMU using Computational Geometry LDRD 13-0144 Final Report

    Energy Technology Data Exchange (ETDEWEB)

    Mitchell, Scott A. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Ebeida, Mohamed Salah [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Romero, Vicente J. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Swiler, Laura Painton [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Rushdi, Ahmad A. [Univ. of Texas, Austin, TX (United States); Abdelkader, Ahmad [Univ. of Maryland, College Park, MD (United States)

    2015-09-01

    This SAND report summarizes our work on the Sandia National Laboratory LDRD project titled "Efficient Probability of Failure Calculations for QMU using Computational Geometry" which was project #165617 and proposal #13-0144. This report merely summarizes our work. Those interested in the technical details are encouraged to read the full published results, and contact the report authors for the status of the software and follow-on projects.

  2. Retrieval system for emplaced spent unreprocessed fuel (SURF) in salt bed depository: accident event analysis and mechanical failure probabilities. Final report

    International Nuclear Information System (INIS)

    Bhaskaran, G.; McCleery, J.E.

    1979-10-01

    This report provides support in developing an accident prediction event tree diagram, with an analysis of the baseline design concept for the retrieval of emplaced spent unreprocessed fuel (SURF) contained in a degraded Canister. The report contains an evaluation check list, accident logic diagrams, accident event tables, fault trees/event trees and discussions of failure probabilities for the following subsystems as potential contributors to a failure: (a) Canister extraction, including the core and ram units; (b) Canister transfer at the hoist area; and (c) Canister hoisting. This report is the second volume of a series. It continues and expands upon the report Retrieval System for Emplaced Spent Unreprocessed Fuel (SURF) in Salt Bed Depository: Baseline Concept Criteria Specifications and Mechanical Failure Probabilities. This report draws upon the baseline conceptual specifications contained in the first report

  3. ANALYSIS OF RELIABILITY OF NONRESTORABLE REDUNDANT POWER SYSTEMS TAKING INTO ACCOUNT COMMON FAILURES

    Directory of Open Access Journals (Sweden)

    V. A. Anischenko

    2014-01-01

    Full Text Available A reliability analysis of non-restorable redundant power systems of industrial plants and other consumers of electric energy is carried out. The main attention is paid to the influence of common cause failures, i.e. failures of all elements of the system due to one shared cause; the main possible causes of such common failures are noted. Two main reliability indicators of non-restorable systems are considered: the mean time of no-failure operation and the mean probability of no-failure operation. Failures are modelled by dividing the investigated system into two subsystems connected in series, one of which represents independent failures and the other common cause failures. With this joint modelling of single and common failures, the resulting failure intensity is the sum of two components: the intensity of statistically independent failures and the intensity of common failures of the elements and of the system as a whole. The influence of common element failures on the mean time of no-failure operation of the system is shown. A preference ranking of systems is built according to the criterion of maximum mean time of no-failure operation, depending on the portion of common failures. It is noted that such common failures do not change the preference ranking, but they do change the time intervals determining the moments of system failures, excluding some systems from the comparison. Two problems of conditional optimization of system redundancy are discussed, taking reliability and cost into account. The first problem is solved using the criterion of minimum system cost while providing a required mean probability of no-failure operation; the second using the criterion of maximum mean probability of no-failure operation under a cost limitation.
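    The series decomposition described above (independent failures in one subsystem, common cause failures in the other, with the intensities adding) can be sketched for a non-restorable parallel group. A beta-factor-style split of the element failure rate is assumed here purely for illustration; the rates are hypothetical.

```python
import numpy as np

def system_reliability(t, n=2, lam=1e-4, beta=0.1):
    """Reliability of n parallel, non-restorable elements with total element
    failure rate lam, of which a fraction beta acts as a common cause failing
    all elements at once (modelled in series with the independent part)."""
    lam_i = (1.0 - beta) * lam        # independent part of each element's rate
    lam_c = beta * lam                # common cause rate (fails the whole group)
    r_indep = 1.0 - (1.0 - np.exp(-lam_i * t)) ** n
    r_common = np.exp(-lam_c * t)
    return r_indep * r_common

def mean_time_to_failure(n=2, lam=1e-4, beta=0.1, t_max=2e6, steps=20000):
    """MTTF = integral of R(t) dt, evaluated with a numerical trapezoid rule."""
    t = np.linspace(0.0, t_max, steps)
    r = system_reliability(t, n, lam, beta)
    return float(np.sum(0.5 * (r[:-1] + r[1:]) * np.diff(t)))

if __name__ == "__main__":
    # A growing share of common cause failures erodes the benefit of redundancy.
    for beta in (0.0, 0.05, 0.1, 0.2):
        print(f"beta={beta:.2f}: MTTF ~ {mean_time_to_failure(beta=beta):,.0f} h")
```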

  4. Temperature Analysis and Failure Probability of the Fuel Element in HTR-PM

    International Nuclear Information System (INIS)

    Yang Lin; Liu Bing; Tang Chunhe

    2014-01-01

    Spherical fuel elements are used in the 200-MW High Temperature Reactor-Pebble-bed Modular (HTR-PM). Each spherical fuel element contains approximately 12,000 coated fuel particles in an inner graphite matrix with a diameter of 50 mm, which forms the fuel zone, while the outer shell, with a thickness of 5 mm, is a fuel-free zone made of the same graphite material. Under high-burnup irradiation, the temperature of the fuel element rises and the resulting stress can damage the fuel element. The purpose of this study is to analyze the temperature of the fuel element and to discuss the stress and failure probability. (author)

  5. Integrating Preventive Maintenance Scheduling As Probability Machine Failure And Batch Production Scheduling

    Directory of Open Access Journals (Sweden)

    Zahedi Zahedi

    2016-06-01

    Full Text Available This paper discusses an integrated model of batch production scheduling and machine maintenance scheduling. Batch production scheduling uses the criterion of minimizing total actual flow time, and machine maintenance scheduling uses the probability of machine failure based on a Weibull distribution. The model assumes no nonconforming parts within the planning horizon. The model shows that an increase in the number of batches (length of the production run) up to a certain limit will minimize the total actual flow time. Meanwhile, an increase in the length of the production run implies an increase in the number of PM actions. An example is given to show how the model and algorithm work.
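    The machine failure probability over one production run, when the time to failure is Weibull distributed as assumed above, follows directly from the Weibull CDF. A small sketch with hypothetical shape and scale parameters (not values from the paper):

```python
import math

def failure_probability(run_length, shape=2.0, scale=500.0, age=0.0):
    """Probability that a machine of the given age fails during a production
    run of length run_length, for a Weibull time-to-failure distribution
    (conditional on having survived to 'age'). Parameters are hypothetical."""
    surv = lambda t: math.exp(-((t / scale) ** shape))
    return 1.0 - surv(age + run_length) / surv(age)

if __name__ == "__main__":
    # Longer runs raise the chance that the run is interrupted by a failure,
    # which is the trade-off the integrated model balances against flow time.
    for run in (50, 100, 200, 400):
        print(f"run length {run:>3} h: P(failure) = {failure_probability(run):.3f}")
```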

  6. Probability of Loss of Assured Safety in Systems with Multiple Time-Dependent Failure Modes: Incorporation of Delayed Link Failure in the Presence of Aleatory Uncertainty.

    Energy Technology Data Exchange (ETDEWEB)

    Helton, Jon C. [Arizona State Univ., Tempe, AZ (United States); Brooks, Dusty Marie [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Sallaberry, Cedric Jean-Marie. [Engineering Mechanics Corp. of Columbus, OH (United States)

    2018-02-01

    Probability of loss of assured safety (PLOAS) is modeled for weak link (WL)/strong link (SL) systems in which one or more WLs or SLs could potentially degrade into a precursor condition to link failure that will be followed by an actual failure after some amount of elapsed time. The following topics are considered: (i) Definition of precursor occurrence time cumulative distribution functions (CDFs) for individual WLs and SLs, (ii) Formal representation of PLOAS with constant delay times, (iii) Approximation and illustration of PLOAS with constant delay times, (iv) Formal representation of PLOAS with aleatory uncertainty in delay times, (v) Approximation and illustration of PLOAS with aleatory uncertainty in delay times, (vi) Formal representation of PLOAS with delay times defined by functions of link properties at occurrence times for failure precursors, (vii) Approximation and illustration of PLOAS with delay times defined by functions of link properties at occurrence times for failure precursors, and (viii) Procedures for the verification of PLOAS calculations for the three indicated definitions of delayed link failure.
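    A Monte Carlo illustration of the PLOAS idea is sketched below. It assumes, for the purpose of the sketch only, that loss of assured safety occurs when every strong link fails before any weak link fails, that precursor occurrence times are exponentially distributed, and that delay times between precursor and failure are fixed constants; none of these modelling choices, nor the parameter values, are taken from the cited report.

```python
import random

def ploas_monte_carlo(n_samples=200_000,
                      wl_rates=(1.0, 1.2),      # weak-link precursor rates (1/h), hypothetical
                      sl_rates=(0.5, 0.6),      # strong-link precursor rates (1/h), hypothetical
                      wl_delay=0.05, sl_delay=0.02, seed=1):
    """Estimate the probability that all strong links fail before any weak
    link fails, with constant delays between precursor occurrence and failure."""
    rng = random.Random(seed)
    count = 0
    for _ in range(n_samples):
        wl_fail = [rng.expovariate(r) + wl_delay for r in wl_rates]
        sl_fail = [rng.expovariate(r) + sl_delay for r in sl_rates]
        if max(sl_fail) < min(wl_fail):   # every SL fails before the first WL failure
            count += 1
    return count / n_samples

if __name__ == "__main__":
    print("PLOAS estimate:", round(ploas_monte_carlo(), 4))
```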

  7. Failure detection system risk reduction assessment

    Science.gov (United States)

    Aguilar, Robert B. (Inventor); Huang, Zhaofeng (Inventor)

    2012-01-01

    A process includes determining a probability of a failure mode of a system being analyzed reaching a failure limit as a function of time to failure limit, determining a probability of a mitigation of the failure mode as a function of a time to failure limit, and quantifying a risk reduction based on the probability of the failure mode reaching the failure limit and the probability of the mitigation.

  8. The relative impact of sizing errors on steam generator tube failure probability

    International Nuclear Information System (INIS)

    Cizelj, L.; Dvorsek, T.

    1998-01-01

    The Outside Diameter Stress Corrosion Cracking (ODSCC) at tube support plates is currently the major degradation mechanism affecting the steam generator tubes made of Inconel 600. This caused development and licensing of degradation specific maintenance approaches, which addressed two main failure modes of the degraded piping: tube rupture; and excessive leakage through degraded tubes. A methodology aiming at assessing the efficiency of a given set of possible maintenance approaches has already been proposed by the authors. It pointed out better performance of the degradation specific over generic approaches in (1) lower probability of single and multiple steam generator tube rupture (SGTR), (2) lower estimated accidental leak rates and (3) less tubes plugged. A sensitivity analysis was also performed pointing out the relative contributions of uncertain input parameters to the tube rupture probabilities. The dominant contribution was assigned to the uncertainties inherent to the regression models used to correlate the defect size and tube burst pressure. The uncertainties, which can be estimated from the in-service inspections, are further analysed in this paper. The defect growth was found to have significant and to some extent unrealistic impact on the probability of single tube rupture. Since the defect growth estimates were based on the past inspection records they strongly depend on the sizing errors. Therefore, an attempt was made to filter out the sizing errors and to arrive at more realistic estimates of the defect growth. The impact of different assumptions regarding sizing errors on the tube rupture probability was studied using a realistic numerical example. The data used is obtained from a series of inspection results from Krsko NPP with 2 Westinghouse D-4 steam generators. The results obtained are considered useful in safety assessment and maintenance of affected steam generators. (author)

  9. Failure probability calculation of the power supply of the Angra-1 reactor rod assemblies

    International Nuclear Information System (INIS)

    Borba, P.R.

    1978-01-01

    This work analyses the electric power system of the Angra I PWR plant. It is demonstrated that this system is closely coupled with the safety engineering features, which are the equipments provided to prevent, limit, or mitigate the release of radioactive material and to permit the safe reactor shutdown. Event trees are used to analyse the operation of those systems which can lead to the release of radioactivity following a specified initial event. The fault trees technique is used to calculate the failure probability of the on-site electric power system [pt

  10. A Rare Case of Acute Renal Failure Secondary to Rhabdomyolysis Probably Induced by Donepezil

    Directory of Open Access Journals (Sweden)

    Osman Zikrullah Sahin

    2014-01-01

    Full Text Available Introduction. Acute renal failure (ARF) develops in 33% of the patients with rhabdomyolysis. The main etiologic factors are alcoholism, trauma, exercise overexertion, and drugs. In this report we present a rare case of ARF secondary to rhabdomyolysis probably induced by donepezil. Case Presentation. An 84-year-old male patient was admitted to the emergency department with a complaint of generalized weakness and reduced consciousness for two days. He had a history of Alzheimer’s disease for one year and had taken donepezil 5 mg daily for two months. The patient’s physical examination revealed apathy, loss of cooperation, and decreased muscle strength. Laboratory studies revealed the following: urea: 128 mg/dL; creatinine: 6.06 mg/dL; creatine kinase: 3613 mg/dL. Donepezil was discontinued and the patient’s renal function tests improved gradually. Conclusion. Rhabdomyolysis-induced acute renal failure may develop secondary to donepezil therapy.

  11. Optimisation of the link volume for weakest link failure prediction in NBG-18 nuclear graphite

    International Nuclear Information System (INIS)

    Hindley, Michael P.; Groenwold, Albert A.; Blaine, Deborah C.; Becker, Thorsten H.

    2014-01-01

    This paper describes the process for approximating the optimal size of a link volume required for weakest link failure calculation in nuclear graphite, with NBG-18 used as an example. As part of the failure methodology, the link volume is defined in terms of two grouping criteria. The first criterion is a factor of the maximum grain size and the second criterion is a function of an equivalent stress limit. A methodology for approximating these grouping criteria is presented. The failure methodology employs finite element analysis (FEA) in order to predict the failure load, at 50% probability of failure. The average experimental failure load, as determined for 26 test geometries, is used to evaluate the accuracy of the weakest link failure calculations. The influence of the two grouping criteria on the failure load prediction is evaluated by defining an error in prediction across all test cases. Mathematical optimisation is used to find the minimum error across a range of test case failure predictions. This minimum error is shown to deliver the most accurate failure prediction across a whole range of components, although some test cases in the range predict conservative failure load. The mathematical optimisation objective function is penalised to account for non-conservative prediction of the failure load for any test case. The optimisation is repeated and a link volume found for conservative failure prediction. The failure prediction for each test case is evaluated, in detail, for the proposed link volumes. Based on the analysis, link design volumes for NBG-18 are recommended for either accurate or conservative failure prediction
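    Weakest-link failure calculations of the kind described above typically combine element-level stresses and volumes through a Weibull integral. The generic sketch below is not the NBG-18 grouping criteria or material parameters of the paper; the Weibull modulus, reference stress and reference volume are illustrative values.

```python
import math

def weakest_link_failure_probability(element_stress, element_volume,
                                     m=8.0, sigma0=25.0, v0=1.0):
    """Weakest-link (Weibull) failure probability for a component discretised
    into link volumes: Pf = 1 - exp(-sum_i (V_i/V0) * (sigma_i/sigma0)^m).
    Only tensile stresses contribute. m, sigma0 and v0 are illustrative."""
    risk = 0.0
    for sigma, vol in zip(element_stress, element_volume):
        if sigma > 0.0:                      # compressive elements do not contribute
            risk += (vol / v0) * (sigma / sigma0) ** m
    return 1.0 - math.exp(-risk)

if __name__ == "__main__":
    # Hypothetical stresses (MPa) and link volumes (mm^3) from a coarse FEA mesh.
    stresses = [18.0, 22.0, 24.0, 10.0, -5.0]
    volumes = [2.0, 1.5, 1.0, 3.0, 2.5]
    pf = weakest_link_failure_probability(stresses, volumes)
    print(f"Probability of failure: {pf:.3f}")
```

    Changing how elements are grouped into link volumes changes the sum above, which is why the choice of grouping criteria is treated as an optimisation problem in the paper.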

  12. Process Equipment Failure Mode Analysis in a Chemical Industry

    Directory of Open Access Journals (Sweden)

    J. Nasl Seraji

    2008-04-01

    Full Text Available Background and aims: Prevention of potential accidents and safety promotion in chemical processes require systematic safety management. The main objective of this study was to analyse the failure modes and effects of important process equipment components in the process of isolating H2S and CO2 from extracted natural gas. Methods: This study was done in the sweetening unit of an Iranian gas refinery. Failure Mode and Effect Analysis (FMEA) was used to identify process equipment failures. Results: In total, 30 failures were identified and evaluated using FMEA. Breaking of the P-1 blower blades and tight movement of the sour gas pressure control valve bearing had the maximum risk priority numbers (RPN); P-1 body corrosion and the increasing lower-side plug angle of the rich DEA level control valve in tower 1 had the minimum calculated RPN. Conclusion: By providing a reliable documentation system for recording equipment failures and incidents, basic information would be maintained for later safety assessments. Also, the probability of failures and their effects could be minimized by conducting preventive maintenance.
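    FMEA rankings such as those reported above use the risk priority number, the product of severity, occurrence and detectability scores. A minimal sketch follows; the failure items and scores are invented for illustration and are not the study's data.

```python
def rpn(severity, occurrence, detection):
    """Risk priority number = severity x occurrence x detection (each scored 1-10)."""
    return severity * occurrence * detection

if __name__ == "__main__":
    # Hypothetical failure modes with (severity, occurrence, detection) scores.
    failures = {
        "blower blade breaking":       (9, 4, 6),
        "control valve bearing stuck": (7, 5, 6),
        "vessel body corrosion":       (6, 2, 3),
    }
    ranked = sorted(failures.items(), key=lambda kv: rpn(*kv[1]), reverse=True)
    for name, scores in ranked:
        print(f"{name:<30} RPN = {rpn(*scores)}")
```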

  13. Prediction of accident sequence probabilities in a nuclear power plant due to earthquake events

    International Nuclear Information System (INIS)

    Hudson, J.M.; Collins, J.D.

    1980-01-01

    This paper presents a methodology to predict accident probabilities in nuclear power plants subject to earthquakes. The resulting computer program accesses response data to compute component failure probabilities using fragility functions. Using logical failure definitions for systems, and the calculated component failure probabilities, initiating event and safety system failure probabilities are synthesized. The incorporation of accident sequence expressions allows the calculation of terminal event probabilities. Accident sequences, with their occurrence probabilities, are finally coupled to a specific release category. A unique aspect of the methodology is an analytical procedure for calculating top event probabilities based on the correlated failure of primary events
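    Fragility functions of the kind referenced above are often taken to be lognormal in the ground motion parameter, and the component failure probabilities are then combined through the logical failure definition of each system. The sketch below uses invented median capacities and dispersions, and, unlike the paper, treats component failures as independent.

```python
from math import log, sqrt, erf

def lognormal_fragility(a, median=0.8, beta=0.4):
    """P(component failure | peak ground acceleration a), lognormal fragility:
    Phi(ln(a/median)/beta). median (in g) and beta are illustrative values."""
    z = log(a / median) / beta
    return 0.5 * (1.0 + erf(z / sqrt(2.0)))

def system_failure(a):
    """Example logical failure definition: the system fails if component 1 fails
    OR both components 2 and 3 fail (independence assumed for this sketch)."""
    p1 = lognormal_fragility(a, median=0.9, beta=0.35)
    p2 = lognormal_fragility(a, median=0.6, beta=0.45)
    p3 = lognormal_fragility(a, median=0.7, beta=0.40)
    return 1.0 - (1.0 - p1) * (1.0 - p2 * p3)

if __name__ == "__main__":
    for pga in (0.2, 0.4, 0.8):
        print(f"PGA={pga:.1f} g: P(system failure)={system_failure(pga):.3f}")
```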

  14. Probability of failure of the waste hoist brake system at the Waste Isolation Pilot Plant (WIPP)

    International Nuclear Information System (INIS)

    Greenfield, M.A.; Sargent, T.J.; Stanford Univ., CA

    1998-01-01

    In its most recent report on the annual probability of failure of the waste hoist brake system at the Waste Isolation Pilot Plant (WIPP), the annual failure rate is calculated to be 1.3E(-7)(1/yr), rounded off from 1.32E(-7). A calculation by the Environmental Evaluation Group (EEG) produces a result that is about 4% higher, namely 1.37E(-7)(1/yr). The difference is due to a minor error in the US Department of Energy (DOE) calculations in the Westinghouse 1996 report. WIPP's hoist safety relies on a braking system consisting of a number of components including two crucial valves. The failure rate of the system needs to be recalculated periodically to accommodate new information on component failure, changes in maintenance and inspection schedules, occasional incidents such as a hoist traveling out-of-control, either up or down, and changes in the design of the brake system. This report examines DOE's last two reports on the redesigned waste hoist system. In its calculations, the DOE has accepted one EEG recommendation and is using more current information about the component failures rates, the Nonelectronic Parts Reliability Data (NPRD). However, the DOE calculations fail to include the data uncertainties which are described in detail in the NPRD reports. The US Nuclear Regulatory Commission recommended that a system evaluation include mean estimates of component failure rates and take into account the potential uncertainties that exist so that an estimate can be made on the confidence level to be ascribed to the quantitative results. EEG has made this suggestion previously and the DOE has indicated why it does not accept the NRC recommendation. Hence, this EEG report illustrates the importance of including data uncertainty using a simple statistical example

  15. Fuzzy Failure Probability of Transmission Pipelines in the Niger ...

    African Journals Online (AJOL)

    We undertake the apportioning of failure possibility on twelve identified third party activities contributory to failure of transmission pipelines in the Niger Delta region of Nigeria, using the concept of fuzzy possibility scores. Expert elicitation technique generates linguistic variables that are transformed using fuzzy set theory ...

  16. Calculation of the pipe failure probability of the RCIC system of a nuclear power station by means of the software WinPRAISE 07

    International Nuclear Information System (INIS)

    Jasso G, J.; Diaz S, A.; Mendoza G, G.; Sainz M, E.; Garcia de la C, F. M.

    2014-10-01

    The growth and propagation of cracks by fatigue is a typical degradation mechanism present in the nuclear industry as well as in conventional industry; the unstable propagation of a crack can cause the catastrophic failure of a metallic component, even one with high ductility. For this reason, programmed maintenance activities have been established in industry, using visual and/or ultrasonic inspection techniques at an established periodicity, allowing these growths to be followed up and the undesirable effects controlled; however, these activities increase operating costs, and in the peculiar case of the nuclear industry, they increase the radiation exposure of the participating personnel. The use of mathematical processes that integrate concepts of uncertainty, material properties and the probability associated with inspection results has become a powerful tool for evaluating component reliability, reducing costs and exposure levels. In this work, the evaluation of the failure probability due to fatigue growth of preexisting cracks is presented, for pipes of a Reactor Core Isolation Cooling (RCIC) system in a nuclear power station. The software WinPRAISE 07 (Piping Reliability Analysis Including Seismic Events) was used, supported by probabilistic fracture mechanics principles. The obtained failure probability values evidenced good behavior of the analyzed pipes, with a maximum on the order of 1.0E-6; therefore it is concluded that the performance of these pipe lines is reliable, even when the calculations are extrapolated to 10, 20, 30 and 40 years of service. (Author)

  17. Use of fault tree technique to determine the failure probability of electrical systems of IE class in nuclear installations

    International Nuclear Information System (INIS)

    Cruz S, W.D.

    1988-01-01

    This paper refers to the emergency safety systems of the Angra I NPP (Brazil, 1626 MW(e)), such as containment, heat removal, the emergency removal system, removal of radioactive elements from the containment environment, borated water injection, etc. Associated with these systems, the failure probability calculation of Class IE buses is carried out; this is a safety classification for electrical equipment essential for the systems mentioned above

  18. Uncertainty about probability: a decision analysis perspective

    International Nuclear Information System (INIS)

    Howard, R.A.

    1988-01-01

    The issue of how to think about uncertainty about probability is framed and analyzed from the viewpoint of a decision analyst. The failure of nuclear power plants is used as an example. The key idea is to think of probability as describing a state of information on an uncertain event, and to pose the issue of uncertainty in this quantity as uncertainty about a number that would be definitive: it has the property that you would assign it as the probability if you knew it. Logical consistency requires that the probability to assign to a single occurrence in the absence of further information be the mean of the distribution of this definitive number, not the median as is sometimes suggested. Any decision that must be made without the benefit of further information must also be made using the mean of the definitive number's distribution. With this formulation, we find further that the probability of r occurrences in n exchangeable trials will depend on the first n moments of the definitive number's distribution. In making decisions, the expected value of clairvoyance on the occurrence of the event must be at least as great as that on the definitive number. If one of the events in question occurs, then the increase in probability of another such event is readily computed. This means, in terms of coin tossing, that unless one is absolutely sure of the fairness of a coin, seeing a head must increase the probability of heads, in distinction to usual thought. A numerical example for nuclear power shows that the failure of one plant of a group with a low probability of failure can significantly increase the probability that must be assigned to failure of a second plant in the group
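    The coin-tossing point above (that observing a head must increase the probability assigned to heads unless one is certain of fairness) can be illustrated with a Beta distribution over the "definitive" probability. The specific prior below is an arbitrary choice made for illustration, not one from the paper.

```python
def beta_mean(alpha, beta):
    """Mean of a Beta(alpha, beta) distribution over the definitive probability."""
    return alpha / (alpha + beta)

if __name__ == "__main__":
    # Uncertain about fairness: Beta(50, 50) concentrates near 0.5 but is not a point mass.
    alpha, beta = 50.0, 50.0
    p_head = beta_mean(alpha, beta)                 # probability assigned before tossing
    p_head_after = beta_mean(alpha + 1.0, beta)     # posterior mean after observing one head
    print(f"P(head) before: {p_head:.4f}")
    print(f"P(head) after one head: {p_head_after:.4f}  (strictly larger)")
    # Only a degenerate prior (complete certainty about the coin) leaves it unchanged.
```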

  19. Planetary tides during the Maunder sunspot minimum

    International Nuclear Information System (INIS)

    Smythe, C.M.; Eddy, J.A.

    1977-01-01

    Sun-centered planetary conjunctions and tidal potentials are here constructed for the AD1645 to 1715 period of sunspot absence, referred to as the 'Maunder Minimum'. These are found to be effectively indistinguishable from patterns of conjunctions and power spectra of tidal potential in the present era of a well established 11 year sunspot cycle. This places a new and difficult restraint on any tidal theory of sunspot formation. Problems arise in any direct gravitational theory due to the apparently insufficient forces and tidal heights involved. Proponents of the tidal hypothesis usually revert to trigger mechanisms, which are difficult to criticise or test by observation. Any tidal theory rests on the evidence of continued sunspot periodicity and the substantiation of a prolonged period of solar anomaly in the historical past. The 'Maunder Minimum' was the most drastic change in the behaviour of solar activity in the last 300 years; sunspots virtually disappeared for a 70 year period and the 11 year cycle was probably absent. During that time, however, the nine planets were all in their orbits, and planetary conjunctions and tidal potentials were indistinguishable from those of the present era, in which the 11 year cycle is well established. This provides good evidence against the tidal theory. The pattern of planetary tidal forces during the Maunder Minimum was reconstructed to investigate the possibility that the multiple planet forces somehow fortuitously cancelled at the time, that is that the positions of the slower moving planets in the 17th and early 18th centuries were such that conjunctions and tidal potentials were at the time reduced in number and force. There was no striking dissimilarity between the time of the Maunder Minimum and any period investigated. The failure of planetary conjunction patterns to reflect the drastic drop in sunspots during the Maunder Minimum casts doubt on the tidal theory of solar activity, but a more quantitative test

  20. Methods, apparatus and system for notification of predictable memory failure

    Energy Technology Data Exchange (ETDEWEB)

    Cher, Chen-Yong; Andrade Costa, Carlos H.; Park, Yoonho; Rosenburg, Bryan S.; Ryu, Kyung D.

    2017-01-03

    A method for providing notification of a predictable memory failure includes the steps of: obtaining information regarding at least one condition associated with a memory; calculating a memory failure probability as a function of the obtained information; calculating a failure probability threshold; and generating a signal when the memory failure probability exceeds the failure probability threshold, the signal being indicative of a predicted future memory failure.
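    The claimed steps can be read as a simple monitoring loop: estimate a failure probability from observed memory conditions, compare it with a threshold, and raise a signal when the probability exceeds the threshold. The probability model and threshold rule below are placeholders invented for illustration; they are not the patented method.

```python
from dataclasses import dataclass

@dataclass
class MemoryCondition:
    correctable_errors: int   # ECC corrections observed in the last interval
    temperature_c: float      # module temperature

def failure_probability(cond: MemoryCondition) -> float:
    """Placeholder model: probability grows with error count and temperature."""
    p = 0.001 * cond.correctable_errors + 0.002 * max(0.0, cond.temperature_c - 70.0)
    return min(p, 1.0)

def failure_threshold(criticality: float = 0.5) -> float:
    """Placeholder threshold: stricter (lower) for more critical workloads."""
    return 0.05 * (1.0 - 0.5 * criticality)

def check_memory(cond: MemoryCondition, criticality: float = 0.5) -> bool:
    """Return True (i.e. emit a predicted-failure signal) when the estimated
    failure probability exceeds the threshold."""
    return failure_probability(cond) > failure_threshold(criticality)

if __name__ == "__main__":
    cond = MemoryCondition(correctable_errors=60, temperature_c=78.0)
    print("signal predicted memory failure:", check_memory(cond, criticality=0.8))
```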

  1. Dependency models and probability of joint events

    International Nuclear Information System (INIS)

    Oerjasaeter, O.

    1982-08-01

    Probabilistic dependencies between components/systems are discussed with reference to a broad classification of potential failure mechanisms. Further, a generalized time-dependency model, based on conditional probabilities for estimation of the probability of joint events and event sequences is described. The applicability of this model is clarified/demonstrated by various examples. It is concluded that the described model of dependency is a useful tool for solving a variety of practical problems concerning the probability of joint events and event sequences where common cause and time-dependent failure mechanisms are involved. (Auth.)

  2. Corrosion induced failure analysis of subsea pipelines

    International Nuclear Information System (INIS)

    Yang, Yongsheng; Khan, Faisal; Thodi, Premkumar; Abbassi, Rouzbeh

    2017-01-01

    Pipeline corrosion is one of the main causes of subsea pipeline failure. It is necessary to monitor and analyze pipeline condition to effectively predict likely failure. This paper presents an approach to analyze the observed abnormal events to assess the condition of subsea pipelines. First, it focuses on establishing a systematic corrosion failure model by Bow-Tie (BT) analysis, and subsequently the BT model is mapped into a Bayesian Network (BN) model. The BN model facilitates the modelling of interdependency of identified corrosion causes, as well as the updating of failure probabilities depending on the arrival of new information. Furthermore, an Object-Oriented Bayesian Network (OOBN) has been developed to better structure the network and to provide an efficient updating algorithm. Based on this OOBN model, probability updating and probability adaptation are performed at regular intervals to estimate the failure probabilities due to corrosion and potential consequences. This results in an interval-based condition assessment of subsea pipeline subjected to corrosion. The estimated failure probabilities would help prioritize action to prevent and control failures. Practical application of the developed model is demonstrated using a case study. - Highlights: • A Bow-Tie (BT) based corrosion failure model linking causation with the potential losses. • A novel Object-Oriented Bayesian Network (OOBN) based corrosion failure risk model. • Probability of failure updating and adaptation with respect to time using OOBN model. • Application of the proposed model to develop and test strategies to minimize failure risk.
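    The probability updating described for the OOBN model is, at its core, Bayesian conditioning on newly arrived information. The two-variable toy example below, with invented probabilities and far simpler than the paper's network, shows how an observed abnormal event shifts the estimated corrosion probability.

```python
def update_failure_probability(p_corrosion, p_evidence_given_corrosion,
                               p_evidence_given_no_corrosion):
    """Bayes rule: P(corrosion | evidence)."""
    p_e = (p_evidence_given_corrosion * p_corrosion
           + p_evidence_given_no_corrosion * (1.0 - p_corrosion))
    return p_evidence_given_corrosion * p_corrosion / p_e

if __name__ == "__main__":
    prior = 0.02                      # hypothetical prior probability of severe internal corrosion
    # Evidence: an abnormal wall-thickness reading from an inline inspection tool.
    posterior = update_failure_probability(prior,
                                           p_evidence_given_corrosion=0.85,
                                           p_evidence_given_no_corrosion=0.05)
    print(f"prior = {prior:.3f}, posterior after abnormal reading = {posterior:.3f}")
```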

  3. Uncertainty analysis with statistically correlated failure data

    International Nuclear Information System (INIS)

    Modarres, M.; Dezfuli, H.; Roush, M.L.

    1987-01-01

    Likelihood of occurrence of the top event of a fault tree or sequences of an event tree is estimated from the failure probability of components that constitute the events of the fault/event tree. Component failure probabilities are subject to statistical uncertainties. In addition, there are cases where the failure data are statistically correlated. At present most fault tree calculations are based on uncorrelated component failure data. This chapter describes a methodology for assessing the probability intervals for the top event failure probability of fault trees or frequency of occurrence of event tree sequences when event failure data are statistically correlated. To estimate mean and variance of the top event, a second-order system moment method is presented through Taylor series expansion, which provides an alternative to the normally used Monte Carlo method. For cases where component failure probabilities are statistically correlated, the Taylor expansion terms are treated properly. Moment matching technique is used to obtain the probability distribution function of the top event through fitting the Johnson S_B distribution. The computer program, CORRELATE, was developed to perform the calculations necessary for the implementation of the method developed. (author)
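    A Taylor-series moment propagation of the kind described above can be sketched numerically: the top-event mean picks up a second-order correction from the input covariances, and the variance follows from the gradient and the covariance matrix. The fault-tree function and covariance values below are illustrative, and the Johnson S_B fitting step of the paper is not reproduced.

```python
import numpy as np

def top_event(p):
    """Example top event: (component 1 AND component 2) OR component 3."""
    return 1.0 - (1.0 - p[0] * p[1]) * (1.0 - p[2])

def taylor_moments(f, mean, cov, h=1e-5):
    """Approximate mean (to second order) and variance (to first order) of f(p)
    for correlated inputs with given mean vector and covariance matrix,
    using finite-difference derivatives."""
    mean = np.asarray(mean, float)
    n = len(mean)
    grad = np.zeros(n)
    hess = np.zeros((n, n))
    f0 = f(mean)
    for i in range(n):
        ei = np.zeros(n); ei[i] = h
        grad[i] = (f(mean + ei) - f(mean - ei)) / (2 * h)
        for j in range(n):
            ej = np.zeros(n); ej[j] = h
            hess[i, j] = (f(mean + ei + ej) - f(mean + ei - ej)
                          - f(mean - ei + ej) + f(mean - ei - ej)) / (4 * h * h)
    mean_f = f0 + 0.5 * np.sum(hess * cov)          # second-order mean correction
    var_f = grad @ cov @ grad                        # first-order variance
    return mean_f, var_f

if __name__ == "__main__":
    mu = [0.01, 0.02, 0.001]                         # hypothetical component probabilities
    sd = np.array([0.003, 0.005, 0.0004])
    corr = np.array([[1.0, 0.6, 0.0],                # components 1 and 2 correlated
                     [0.6, 1.0, 0.0],
                     [0.0, 0.0, 1.0]])
    cov = np.outer(sd, sd) * corr
    m, v = taylor_moments(top_event, mu, cov)
    print(f"top event mean ~ {m:.6f}, std ~ {np.sqrt(v):.6f}")
```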

  4. A conservative bound for the probability of failure of a 1-out-of-2 protection system with one hardware-only and one software-based protection train

    International Nuclear Information System (INIS)

    Bishop, Peter; Bloomfield, Robin; Littlewood, Bev; Popov, Peter; Povyakalo, Andrey; Strigini, Lorenzo

    2014-01-01

    Redundancy and diversity have long been used as means to obtain high reliability in critical systems. While it is easy to show that, say, a 1-out-of-2 diverse system will be more reliable than each of its two individual “trains”, assessing the actual reliability of such systems can be difficult because the trains cannot be assumed to fail independently. If we cannot claim independence of train failures, the computation of system reliability is difficult, because we would need to know the probability of failure on demand (pfd) for every possible demand. These are unlikely to be known in the case of software. Claims for software often concern its marginal pfd, i.e. its average across all possible demands. In this paper we consider the case of a 1-out-of-2 safety protection system in which one train contains software (and hardware), and the other train contains only hardware equipment. We show that a useful upper (i.e. conservative) bound can be obtained for the system pfd using only the unconditional pfd for software together with information about the variation of hardware failure probability across demands, which is likely to be known or estimatable. The worst-case result is obtained by “allocating” software failure probability among demand “classes” so as to maximize system pfd
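    Because the system pfd is linear in the per-class software pfd, the worst-case allocation saturates the demand classes where the hardware train is weakest first. The sketch below implements that greedy allocation under these stated assumptions; the demand classes and numbers are invented.

```python
def conservative_system_pfd(class_probs, hardware_pfd, software_marginal_pfd):
    """Upper bound on the pfd of a 1-out-of-2 system (hardware train + software train).
    Choose per-class software pfd s_c in [0,1], subject to
    sum_c q_c * s_c = software_marginal_pfd, to maximize sum_c q_c * h_c * s_c."""
    budget = software_marginal_pfd
    bound = 0.0
    # Put software failures on the demand classes where hardware is weakest.
    for q, h in sorted(zip(class_probs, hardware_pfd), key=lambda qh: qh[1], reverse=True):
        s = min(1.0, budget / q) if q > 0 else 0.0
        bound += q * h * s
        budget -= q * s
        if budget <= 0:
            break
    return bound

if __name__ == "__main__":
    q = [0.7, 0.2, 0.1]            # demand class probabilities (hypothetical)
    h = [1e-4, 1e-3, 5e-3]         # hardware train pfd per class (hypothetical)
    s_marg = 1e-3                  # claimed marginal software pfd
    naive = sum(qi * hi for qi, hi in zip(q, h)) * s_marg   # independence assumption
    print("independence estimate:", naive)
    print("conservative bound   :", conservative_system_pfd(q, h, s_marg))
```

    The gap between the two printed numbers shows why an independence assumption can be optimistic when train failures may be correlated across demands.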

  5. The use of lifetime functions in the optimization of interventions on existing bridges considering maintenance and failure costs

    Energy Technology Data Exchange (ETDEWEB)

    Yang, Seung-Ie [Department of Civil, Enviromental, and Architectural Enginnering, University of Colorado, Campus Box 428, Boulder, CO 80309-0428 (United States)]. E-mail: yangsione@dreamwiz.com; Frangopol, Dan M. [Department of Civil, Enviromental, and Architectural Enginnering, University of Colorado, Campus Box 428, Boulder, CO 80309-0428 (United States)]. E-mail: dan.frangopol@colorado.edu; Kawakami, Yoriko [Hanshin Expressway Public Corporation, Kobe Maintenance Department, 16-1 Shinko-cho Chuo-ku Kobe City, Hyogo, 650-0041 (Japan)]. E-mail: yoriko-kawakami@hepc.go.jp; Neves, Luis C. [Department of Civil, Enviromental, and Architectural Enginnering, University of Colorado, Campus Box 428, Boulder, CO 80309-0428 (United States)]. E-mail: lneves@civil.uminho.pt

    2006-06-15

    In the last decade, it became clear that life-cycle cost analysis of existing civil infrastructure must be used to optimally manage the growing number of aging and deteriorating structures. The uncertainties associated with deteriorating structures require the use of probabilistic methods to properly evaluate their lifetime performance. In this paper, the deterioration and the effect of maintenance actions are analyzed considering the performance of existing structures characterized by lifetime functions. These functions allow, in a simple manner, the consideration of the effect of aging on the decrease of the probability of survival of a structure, as well as the effect of maintenance actions. Models for the effects of proactive and reactive preventive maintenance, and essential maintenance actions are presented. Since the probability of failure is different from zero during the entire service life of a deteriorating structure and depends strongly on the maintenance strategy, the cost of failure is included in this analysis. The failure of one component in a structure does not usually lead to failure of the structure and, as a result, the safety of existing structures must be analyzed using a system reliability framework. The optimization consists of minimizing the sum of the cumulative maintenance and expected failure cost during the prescribed time horizon. Two examples of application of the proposed methodology are presented. In the first example, the sum of the maintenance and failure costs of a bridge in Colorado is minimized considering essential maintenance only and a fixed minimum acceptable probability of failure. In the second example, the expected lifetime cost, including maintenance and expected failure costs, of a multi-girder bridge is minimized considering reactive preventive maintenance actions.

  6. The use of lifetime functions in the optimization of interventions on existing bridges considering maintenance and failure costs

    International Nuclear Information System (INIS)

    Yang, Seung-Ie; Frangopol, Dan M.; Kawakami, Yoriko; Neves, Luis C.

    2006-01-01

    In the last decade, it became clear that life-cycle cost analysis of existing civil infrastructure must be used to optimally manage the growing number of aging and deteriorating structures. The uncertainties associated with deteriorating structures require the use of probabilistic methods to properly evaluate their lifetime performance. In this paper, the deterioration and the effect of maintenance actions are analyzed considering the performance of existing structures characterized by lifetime functions. These functions allow, in a simple manner, the consideration of the effect of aging on the decrease of the probability of survival of a structure, as well as the effect of maintenance actions. Models for the effects of proactive and reactive preventive maintenance, and essential maintenance actions are presented. Since the probability of failure is different from zero during the entire service life of a deteriorating structure and depends strongly on the maintenance strategy, the cost of failure is included in this analysis. The failure of one component in a structure does not usually lead to failure of the structure and, as a result, the safety of existing structures must be analyzed using a system reliability framework. The optimization consists of minimizing the sum of the cumulative maintenance and expected failure cost during the prescribed time horizon. Two examples of application of the proposed methodology are presented. In the first example, the sum of the maintenance and failure costs of a bridge in Colorado is minimized considering essential maintenance only and a fixed minimum acceptable probability of failure. In the second example, the expected lifetime cost, including maintenance and expected failure costs, of a multi-girder bridge is minimized considering reactive preventive maintenance actions

  7. A technique for estimating the probability of radiation-stimulated failures of integrated microcircuits in low-intensity radiation fields: Application to the Spektr-R spacecraft

    Science.gov (United States)

    Popov, V. D.; Khamidullina, N. M.

    2006-10-01

    In developing radio-electronic devices (RED) of spacecraft operating in the fields of ionizing radiation in space, one of the most important problems is the correct estimation of their radiation tolerance. The “weakest link” in the element base of onboard microelectronic devices under radiation effect is the integrated microcircuits (IMC), especially of large scale (LSI) and very large scale (VLSI) degree of integration. The main characteristic of IMC, which is taken into account when making decisions on using some particular type of IMC in the onboard RED, is the probability of non-failure operation (NFO) at the end of the spacecraft’s lifetime. It should be noted that, until now, the NFO has been calculated only from the reliability characteristics, disregarding the radiation effect. This paper presents the so-called “reliability” approach to determination of radiation tolerance of IMC, which allows one to estimate the probability of non-failure operation of various types of IMC with due account of radiation-stimulated dose failures. The described technique is applied to RED onboard the Spektr-R spacecraft to be launched in 2007.

  8. Differential subsidence and its effect on subsurface infrastructure: predicting probability of pipeline failure (STOOP project)

    Science.gov (United States)

    de Bruijn, Renée; Dabekaussen, Willem; Hijma, Marc; Wiersma, Ane; Abspoel-Bukman, Linda; Boeije, Remco; Courage, Wim; van der Geest, Johan; Hamburg, Marc; Harmsma, Edwin; Helmholt, Kristian; van den Heuvel, Frank; Kruse, Henk; Langius, Erik; Lazovik, Elena

    2017-04-01

    Due to heterogeneity of the subsurface in the delta environment of the Netherlands, differential subsidence over short distances results in tension and subsequent wear of subsurface infrastructure, such as water and gas pipelines. Due to uncertainties in the build-up of the subsurface, however, it is unknown where this problem is the most prominent. This is a problem for asset managers deciding when a pipeline needs replacement: damaged pipelines endanger security of supply and pose a significant threat to safety, yet premature replacement raises needless expenses. In both cases, costs - financial or other - are high. Therefore, an interdisciplinary research team of geotechnicians, geologists and Big Data engineers from research institutes TNO, Deltares and SkyGeo developed a stochastic model to predict differential subsidence and the probability of consequent pipeline failure on a (sub-)street level. In this project pipeline data from company databases is combined with a stochastic geological model and information on (historical) groundwater levels and overburden material. Probability of pipeline failure is modelled by a coupling with a subsidence model and two separate models on pipeline behaviour under stress, using a probabilistic approach. The total length of pipelines (approx. 200,000 km operational in the Netherlands) and the complexity of the model chain needed to calculate a chance of failure result in large computational challenges, as massive evaluation of possible scenarios is required to reach the required level of confidence. To cope with this, a scalable computational infrastructure has been developed, composing a model workflow in which components have a heterogeneous technological basis. Three pilot areas covering an urban, a rural and a mixed environment, characterised by different groundwater-management strategies and different overburden histories, are used to evaluate the differences in subsidence and uncertainties that come with

  9. PWR reactor pressure vessel failure probabilities

    International Nuclear Information System (INIS)

    Dufresne, J.; Lanore, J.M.; Lucia, A.C.; Elbaz, J.; Brunnhuber, R.

    1980-05-01

    To evaluate the rupture probability of an LWR vessel, a probabilistic method applying fracture mechanics in probabilistic form was proposed previously, but it appears that a more accurate evaluation is possible. Consequently, a joint collaboration agreement signed in 1976 between CEA, EURATOM, JRC Ispra and FRAMATOME set up and started a research program covering three parts: computer code development, data acquisition and processing, and a supporting experimental program which aims at clarifying the most important parameters used in the COVASTOL computer code

  10. Least-cost failure diagnosis in uncertain reliability systems

    International Nuclear Information System (INIS)

    Cox, Louis Anthony; Chiu, Steve Y.; Sun Xiaorong

    1996-01-01

    In many textbook solutions for system failure diagnosis problems studied using reliability theory and artificial intelligence, the prior probabilities of different failure states can be estimated and used to guide the sequential search for failed components after the whole system fails. In practice, however, both the component failure probabilities and the structure function of the system being examined--i.e., the mapping between the states of its components and the state of the system--may not be known with certainty. At best, the probabilities of different hypothesized system descriptions, each specifying the component failure probabilities and the system's structure function, may be known to a useful approximation, perhaps based on sample data and previous experience. Cost-effective diagnosis of the system's failure state is then a challenging problem. Although the probabilities of component failures are aleatory, uncertainties about these probabilities and about the system structure function are epistemic. This paper examines how to make best use of both epistemic prior probabilities for system descriptions and the information gleaned from costly inspections of component states after the system fails, to minimize the average cost of identifying the failure state. Two approaches are introduced for systems dominated by aleatory uncertainties, one motivated by information theory and the other based on the idea of trying to prove a hypothesis about the identity of the failure state as efficiently as possible. While the general problem of cost-effective failure diagnosis is computationally intractable (NP-hard), both heuristics provide useful approximations on small- to moderate-sized problems and optimal results for certain common types of reliability systems, including series, parallel, parallel-series, and k-out-of-n systems. A hybrid heuristic that adaptively chooses which heuristic to apply next after any sequence of observations (component test results

  11. The Use of Conditional Probability Integral Transformation Method for Testing Accelerated Failure Time Models

    Directory of Open Access Journals (Sweden)

    Abdalla Ahmed Abdel-Ghaly

    2016-06-01

    Full Text Available This paper suggests the use of the conditional probability integral transformation (CPIT method as a goodness of fit (GOF technique in the field of accelerated life testing (ALT, specifically for validating the underlying distributional assumption in accelerated failure time (AFT model. The method is based on transforming the data into independent and identically distributed (i.i.d Uniform (0, 1 random variables and then applying the modified Watson statistic to test the uniformity of the transformed random variables. This technique is used to validate each of the exponential, Weibull and lognormal distributions' assumptions in AFT model under constant stress and complete sampling. The performance of the CPIT method is investigated via a simulation study. It is concluded that this method performs well in case of exponential and lognormal distributions. Finally, a real life example is provided to illustrate the application of the proposed procedure.
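    After the CPIT step transforms the data to (approximately) i.i.d. Uniform(0,1) variables, uniformity can be checked with Watson's U² statistic. The sketch below computes the classical (unmodified) U², so it stands in for, rather than reproduces, the modified statistic used in the paper.

```python
import numpy as np

def watson_u2(u):
    """Watson's U^2 statistic for testing that the sample u comes from Uniform(0,1).
    U^2 = W^2 - n*(mean(u) - 0.5)^2, where W^2 is the Cramer-von Mises statistic."""
    u = np.sort(np.asarray(u, dtype=float))
    n = len(u)
    i = np.arange(1, n + 1)
    w2 = np.sum((u - (2 * i - 1) / (2 * n)) ** 2) + 1.0 / (12 * n)
    return w2 - n * (u.mean() - 0.5) ** 2

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    uniform_sample = rng.uniform(size=50)        # consistent with uniformity: small U^2
    skewed_sample = rng.beta(2.0, 5.0, size=50)  # departs from uniformity: larger U^2
    print("U^2 uniform sample:", round(watson_u2(uniform_sample), 4))
    print("U^2 skewed sample :", round(watson_u2(skewed_sample), 4))
```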

  12. Reliability of piping system components. Framework for estimating failure parameters from service data

    International Nuclear Information System (INIS)

    Nyman, R.; Hegedus, D.; Tomic, B.; Lydell, B.

    1997-12-01

    This report summarizes results and insights from the final phase of a R and D project on piping reliability sponsored by the Swedish Nuclear Power Inspectorate (SKI). The technical scope includes the development of an analysis framework for estimating piping reliability parameters from service data. The R and D has produced a large database on the operating experience with piping systems in commercial nuclear power plants worldwide. It covers the period 1970 to the present. The scope of the work emphasized pipe failures (i.e., flaws/cracks, leaks and ruptures) in light water reactors (LWRs). Pipe failures are rare events. A data reduction format was developed to ensure that homogenous data sets are prepared from scarce service data. This data reduction format distinguishes between reliability attributes and reliability influence factors. The quantitative results of the analysis of service data are in the form of conditional probabilities of pipe rupture given failures (flaws/cracks, leaks or ruptures) and frequencies of pipe failures. Finally, the R and D by SKI produced an analysis framework in support of practical applications of service data in PSA. This, multi-purpose framework, termed 'PFCA'-Pipe Failure Cause and Attribute- defines minimum requirements on piping reliability analysis. The application of service data should reflect the requirements of an application. Together with raw data summaries, this analysis framework enables the development of a prior and a posterior pipe rupture probability distribution. The framework supports LOCA frequency estimation, steam line break frequency estimation, as well as the development of strategies for optimized in-service inspection strategies

  13. Reliability of piping system components. Framework for estimating failure parameters from service data

    Energy Technology Data Exchange (ETDEWEB)

    Nyman, R [Swedish Nuclear Power Inspectorate, Stockholm (Sweden); Hegedus, D; Tomic, B [ENCONET Consulting GesmbH, Vienna (Austria); Lydell, B [RSA Technologies, Vista, CA (United States)

    1997-12-01

    This report summarizes results and insights from the final phase of a R and D project on piping reliability sponsored by the Swedish Nuclear Power Inspectorate (SKI). The technical scope includes the development of an analysis framework for estimating piping reliability parameters from service data. The R and D has produced a large database on the operating experience with piping systems in commercial nuclear power plants worldwide. It covers the period 1970 to the present. The scope of the work emphasized pipe failures (i.e., flaws/cracks, leaks and ruptures) in light water reactors (LWRs). Pipe failures are rare events. A data reduction format was developed to ensure that homogenous data sets are prepared from scarce service data. This data reduction format distinguishes between reliability attributes and reliability influence factors. The quantitative results of the analysis of service data are in the form of conditional probabilities of pipe rupture given failures (flaws/cracks, leaks or ruptures) and frequencies of pipe failures. Finally, the R and D by SKI produced an analysis framework in support of practical applications of service data in PSA. This, multi-purpose framework, termed `PFCA`-Pipe Failure Cause and Attribute- defines minimum requirements on piping reliability analysis. The application of service data should reflect the requirements of an application. Together with raw data summaries, this analysis framework enables the development of a prior and a posterior pipe rupture probability distribution. The framework supports LOCA frequency estimation, steam line break frequency estimation, as well as the development of strategies for optimized in-service inspection strategies. 63 refs, 30 tabs, 22 figs.

  14. A procedure for estimation of pipe break probabilities due to IGSCC

    International Nuclear Information System (INIS)

    Bergman, M.; Brickstad, B.; Nilsson, F.

    1998-06-01

    A procedure has been developed for estimating the failure probability of weld joints in nuclear piping susceptible to intergranular stress corrosion cracking. The procedure aims at a robust and rapid estimate of the failure probability for a specific weld with a known stress state. Random properties are taken into account for the crack initiation rate, the initial crack length, the in-service inspection efficiency and the leak rate. A computer realization of the procedure has been developed for user-friendly application by design engineers. Some examples are considered to investigate the sensitivity of the failure probability to different input quantities. (au)

  15. System reliability analysis using dominant failure modes identified by selective searching technique

    International Nuclear Information System (INIS)

    Kim, Dong-Seok; Ok, Seung-Yong; Song, Junho; Koh, Hyun-Moo

    2013-01-01

    The failure of a redundant structural system is often described by innumerable system failure modes such as combinations or sequences of local failures. An efficient approach is proposed to identify dominant failure modes in the space of random variables, and then perform system reliability analysis to compute the system failure probability. To identify dominant failure modes in the decreasing order of their contributions to the system failure probability, a new simulation-based selective searching technique is developed using a genetic algorithm. The system failure probability is computed by a multi-scale matrix-based system reliability (MSR) method. Lower-scale MSR analyses evaluate the probabilities of the identified failure modes and their statistical dependence. A higher-scale MSR analysis evaluates the system failure probability based on the results of the lower-scale analyses. Three illustrative examples demonstrate the efficiency and accuracy of the approach through comparison with existing methods and Monte Carlo simulations. The results show that the proposed method skillfully identifies the dominant failure modes, including those neglected by existing approaches. The multi-scale MSR method accurately evaluates the system failure probability with statistical dependence fully considered. The decoupling between the failure mode identification and the system reliability evaluation allows for effective applications to larger structural systems

  16. A probabilistic approach for RIA fuel failure criteria

    International Nuclear Information System (INIS)

    Carlo Vitanza, Dr.

    2008-01-01

    Substantial experimental data have been produced in support of the definition of the RIA safety limits for water reactor fuels at high burn up. Based on these data, fuel failure enthalpy limits can be derived based on methods having a varying degree of complexity. However, regardless of sophistication, it is unlikely that any deterministic approach would result in perfect predictions of all failure and non failure data obtained in RIA tests. Accordingly, a probabilistic approach is proposed in this paper, where in addition to a best estimate evaluation of the failure enthalpy, a RIA fuel failure probability distribution is defined within an enthalpy band surrounding the best estimate failure enthalpy. The band width and the failure probability distribution within this band are determined on the basis of the whole data set, including failure and non failure data and accounting for the actual scatter of the database. The present probabilistic approach can be used in conjunction with any deterministic model or correlation. For deterministic models or correlations having good prediction capability, the probability distribution will be sharply increasing within a narrow band around the best estimate value. For deterministic predictions of lower quality, instead, the resulting probability distribution will be broad and coarser

  17. Evaluation of Failure Probability of BWR Vessel Under Cool-down and LTOP Transient Conditions Using PROFAS-RV PFM Code

    Energy Technology Data Exchange (ETDEWEB)

    Kim, Jong-Min; Lee, Bong-Sang [Korea Atomic Energy Research Institute, Daejeon (Korea, Republic of)

    2016-10-15

    The round robin project was proposed by the PFM Research Subcommittee of the Japan Welding Engineering Society to the members of the Asian Society for Integrity of Nuclear Components (ASINCO), and is designated in Korea as Phase 2 of A-Pro2. The objective of this phase 2 round robin analysis is to compare the schemes and results related to the assessment of the structural integrity of the RPV for events that are important to safety in the design consideration but have relatively low fracture probability. In this study, probabilistic fracture mechanics analysis was performed for the round robin cases using the PROFAS-RV code. The effects of key parameters such as different transients, fluence level, Cu and Ni content, initial RT_NDT and the RT_NDT shift model on the failure probability were systematically compared and reviewed. These efforts can minimize the uncertainty of the integrity evaluation for the reactor pressure vessel.

  18. Fragility estimation for seismically isolated nuclear structures by high confidence low probability of failure values and bi-linear regression

    International Nuclear Information System (INIS)

    Carausu, A.

    1996-01-01

    A method for the fragility estimation of seismically isolated nuclear power plant structure is proposed. The relationship between the ground motion intensity parameter (e.g. peak ground velocity or peak ground acceleration) and the response of isolated structures is expressed in terms of a bi-linear regression line, whose coefficients are estimated by the least-square method in terms of available data on seismic input and structural response. The notion of high confidence low probability of failure (HCLPF) value is also used for deriving compound fragility curves for coupled subsystems. (orig.)

  19. Rate based failure detection

    Science.gov (United States)

    Johnson, Brett Emery Trabun; Gamage, Thoshitha Thanushka; Bakken, David Edward

    2018-01-02

    This disclosure describes, in part, a system management component and failure detection component for use in a power grid data network to identify anomalies within the network and systematically adjust the quality of service of data published by publishers and subscribed to by subscribers within the network. In one implementation, subscribers may identify a desired data rate, a minimum acceptable data rate, desired latency, minimum acceptable latency and a priority for each subscription. The failure detection component may identify an anomaly within the network and a source of the anomaly. Based on the identified anomaly, data rates and/or data paths may be adjusted in real-time to ensure that the power grid data network does not become overloaded and/or fail.

  20. Probability and containment of turbine missiles

    International Nuclear Information System (INIS)

    Yeh, G.C.K.

    1976-01-01

    With the trend toward ever larger power generating plants with large high-speed turbines, an important plant design consideration is the potential for and consequences of mechanical failure of turbine rotors. Such rotor failure could result in high-velocity disc fragments (turbine missiles) perforating the turbine casing and jeopardizing vital plant systems. The designer must first estimate the probability of any turbine missile damaging any safety-related plant component for his turbine and his plant arrangement. If the probability is not low enough to be acceptable to the regulatory agency, he must design a shield to contain the postulated turbine missiles. Alternatively, the shield could be designed to retard (to reduce the velocity of) the missiles such that they would not damage any vital plant system. In this paper, some of the presently available references that can be used to evaluate the probability, containment and retardation of turbine missiles are reviewed; various alternative methods are compared; and subjects for future research are recommended. (Auth.)

  1. Simulation of Daily Weather Data Using Theoretical Probability Distributions.

    Science.gov (United States)

    Bruhn, J. A.; Fry, W. E.; Fick, G. W.

    1980-09-01

    A computer simulation model was constructed to supply daily weather data to a plant disease management model for potato late blight. In the weather model Monte Carlo techniques were employed to generate daily values of precipitation, maximum temperature, minimum temperature, minimum relative humidity and total solar radiation. Each weather variable is described by a known theoretical probability distribution but the values of the parameters describing each distribution are dependent on the occurrence of rainfall. Precipitation occurrence is described by a first-order Markov chain. The amount of rain, given that rain has occurred, is described by a gamma probability distribution. Maximum and minimum temperature are simulated with a trivariate normal probability distribution involving maximum temperature on the previous day, maximum temperature on the current day and minimum temperature on the current day. Parameter values for this distribution are dependent on the occurrence of rain on the previous day. Both minimum relative humidity and total solar radiation are assumed to be normally distributed. The values of the parameters describing the distribution of minimum relative humidity are dependent on rainfall occurrence on the previous and current day. Parameter values for total solar radiation are dependent on the occurrence of rain on the current day. The assumptions made during model construction were found to be appropriate for actual weather data from Geneva, New York. The performance of the weather model was evaluated by comparing the cumulative frequency distributions of simulated weather data with the distributions of actual weather data from Geneva, New York and Fort Collins, Colorado. For each location, simulated weather data were similar to actual weather data in terms of mean response, variability and autocorrelation. The possible applications of this model when used with models of other components of the agro-ecosystem are discussed.
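
    As a minimal sketch of the Monte Carlo scheme described above, the snippet below simulates daily precipitation with a first-order Markov chain for rain occurrence and a gamma distribution for rain amount; the transition probabilities and gamma parameters are illustrative assumptions, not the values fitted to the Geneva, New York data.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    # Illustrative parameters (not fitted to the Geneva, NY data)
    P_WET_GIVEN_DRY = 0.25   # P(rain today | no rain yesterday)
    P_WET_GIVEN_WET = 0.55   # P(rain today | rain yesterday)
    GAMMA_SHAPE, GAMMA_SCALE = 0.8, 8.0   # rain amount (mm) on wet days

    def simulate_precipitation(n_days, wet_yesterday=False):
        """Monte Carlo generation of daily precipitation (mm)."""
        amounts = np.zeros(n_days)
        wet = wet_yesterday
        for day in range(n_days):
            p_wet = P_WET_GIVEN_WET if wet else P_WET_GIVEN_DRY
            wet = rng.random() < p_wet              # first-order Markov chain step
            if wet:
                amounts[day] = rng.gamma(GAMMA_SHAPE, GAMMA_SCALE)
        return amounts

    rain = simulate_precipitation(365)
    print(f"wet days: {(rain > 0).sum()}, total rain: {rain.sum():.1f} mm")
    ```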

  2. Selection of risk reduction portfolios under interval-valued probabilities

    International Nuclear Information System (INIS)

    Toppila, Antti; Salo, Ahti

    2017-01-01

    A central problem in risk management is that of identifying the optimal combination (or portfolio) of improvements that enhance the reliability of the system most through reducing failure event probabilities, subject to the availability of resources. This optimal portfolio can be sensitive with regard to epistemic uncertainties about the failure events' probabilities. In this paper, we develop an optimization model to support the allocation of resources to improvements that mitigate risks in coherent systems in which interval-valued probabilities defined by lower and upper bounds are employed to capture epistemic uncertainties. Decision recommendations are based on portfolio dominance: a resource allocation portfolio is dominated if there exists another portfolio that improves system reliability (i) at least as much for all feasible failure probabilities and (ii) strictly more for some feasible probabilities. Based on non-dominated portfolios, recommendations about improvements to implement are derived by inspecting in how many non-dominated portfolios a given improvement is contained. We present an exact method for computing the non-dominated portfolios. We also present an approximate method that simplifies the reliability function using total order interactions so that larger problem instances can be solved with reasonable computational effort. - Highlights: • Reliability allocation under epistemic uncertainty about probabilities. • Comparison of alternatives using dominance. • Computational methods for generating the non-dominated alternatives. • Deriving decision recommendations that are robust with respect to epistemic uncertainty.
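
    The dominance rule described in the abstract can be illustrated with a small sketch. The series system, the interval bounds, the assumed effect of an improvement (halving a failure probability), and the sampling-based dominance check below are all simplifying assumptions made for illustration; the paper itself uses an exact method plus an approximation based on total order interactions.

    ```python
    import numpy as np

    rng = np.random.default_rng(1)

    # Interval-valued failure probabilities [lower, upper] of three basic events
    INTERVALS = np.array([[0.02, 0.08],
                          [0.05, 0.15],
                          [0.01, 0.04]])

    def system_reliability(p):
        """Coherent series system: it works only if no basic event occurs."""
        return np.prod(1.0 - p)

    def apply_portfolio(p, portfolio):
        q = p.copy()
        q[list(portfolio)] *= 0.5          # assumed effect of one improvement
        return q

    def dominates(a, b, n_samples=20_000):
        """Approximate dominance: a dominates b if it is at least as good for every
        sampled feasible probability vector and strictly better for some."""
        samples = rng.uniform(INTERVALS[:, 0], INTERVALS[:, 1], size=(n_samples, 3))
        ra = np.array([system_reliability(apply_portfolio(p, a)) for p in samples])
        rb = np.array([system_reliability(apply_portfolio(p, b)) for p in samples])
        return bool(np.all(ra >= rb - 1e-12) and np.any(ra > rb + 1e-12))

    portfolios = [frozenset([i]) for i in range(3)]   # budget allows one improvement
    non_dominated = [set(a) for a in portfolios
                     if not any(dominates(b, a) for b in portfolios if b != a)]
    print(non_dominated)   # improvements worth considering under the interval bounds
    ```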

  3. Failure Diameter Resolution Study

    Energy Technology Data Exchange (ETDEWEB)

    Menikoff, Ralph [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)

    2017-12-19

    Previously the SURFplus reactive burn model was calibrated for the TATB based explosive PBX 9502. The calibration was based on fitting Pop plot data, the failure diameter and the limiting detonation speed, and curvature effect data for small curvature. The model failure diameter is determined utilizing 2-D simulations of an unconfined rate stick to find the minimum diameter for which a detonation wave propagates. Here we examine the effect of mesh resolution on an unconfined rate stick with a diameter (10mm) slightly greater than the measured failure diameter (8 to 9 mm).

  4. Uncertainty analysis of reactor safety systems with statistically correlated failure data

    International Nuclear Information System (INIS)

    Dezfuli, H.; Modarres, M.

    1985-01-01

    The probability of occurrence of the top event of a fault tree is estimated from failure probability of components that constitute the fault tree. Component failure probabilities are subject to statistical uncertainties. In addition, there are cases where the failure data are statistically correlated. Most fault tree evaluations have so far been based on uncorrelated component failure data. The subject of this paper is the description of a method of assessing the probability intervals for the top event failure probability of fault trees when component failure data are statistically correlated. To estimate the mean and variance of the top event, a second-order system moment method is presented through Taylor series expansion, which provides an alternative to the normally used Monte-Carlo method. For cases where component failure probabilities are statistically correlated, the Taylor expansion terms are treated properly. A moment matching technique is used to obtain the probability distribution function of the top event through fitting a Johnson S_B distribution. The computer program (CORRELATE) was developed to perform the calculations necessary for the implementation of the method developed. The CORRELATE code is very efficient and consumes minimal computer time. This is primarily because it does not employ the time-consuming Monte-Carlo method. (author)
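
    A minimal sketch of a second-order moment propagation of this kind is given below for a small illustrative fault tree; the tree structure, the mean failure probabilities, and the correlation matrix are assumptions, and the Johnson S_B fitting step of the CORRELATE code is not reproduced.

    ```python
    import numpy as np

    def top_event(p):
        """Top-event probability of a small illustrative fault tree:
        TOP = (B1 AND B2) OR B3, with the basic events treated as independent."""
        p1, p2, p3 = p
        return 1.0 - (1.0 - p1 * p2) * (1.0 - p3)

    def moments_second_order(g, mu, cov, h=1e-5):
        """Mean and variance of g(P) from a Taylor expansion about mu, allowing
        statistically correlated basic-event probabilities through cov."""
        n = len(mu)
        grad = np.zeros(n)
        hess = np.zeros((n, n))
        eye = np.eye(n) * h
        for i in range(n):
            grad[i] = (g(mu + eye[i]) - g(mu - eye[i])) / (2 * h)
            for j in range(n):
                hess[i, j] = (g(mu + eye[i] + eye[j]) - g(mu + eye[i] - eye[j])
                              - g(mu - eye[i] + eye[j]) + g(mu - eye[i] - eye[j])) / (4 * h * h)
        mean = g(mu) + 0.5 * np.trace(hess @ cov)   # second-order mean correction
        var = grad @ cov @ grad                     # first-order variance term
        return mean, var

    mu = np.array([1e-3, 2e-3, 5e-4])               # assumed mean failure probabilities
    sd = 0.3 * mu                                   # assumed relative uncertainties
    corr = np.array([[1.0, 0.8, 0.0],               # events 1 and 2 correlated
                     [0.8, 1.0, 0.0],
                     [0.0, 0.0, 1.0]])
    cov = np.outer(sd, sd) * corr
    print(moments_second_order(top_event, mu, cov))
    ```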

  5. Probabilistic analysis of "common mode failures"

    International Nuclear Information System (INIS)

    Easterling, R.G.

    1978-01-01

    Common mode failure is a topic of considerable interest in reliability and safety analyses of nuclear reactors. Common mode failures are often discussed in terms of examples: two systems fail simultaneously due to an external event such as an earthquake; two components in redundant channels fail because of a common manufacturing defect; two systems fail because a component common to both fails; the failure of one system increases the stress on other systems and they fail. The common thread running through these is a dependence of some sort--statistical or physical--among multiple failure events. However, the nature of the dependence is not the same in all these examples. An attempt is made to model situations, such as the above examples, which have been termed "common mode failures." In doing so, it is found that standard probability concepts and terms, such as statistically dependent and independent events, and conditional and unconditional probabilities, suffice. Thus, it is proposed that the term "common mode failures" be dropped, at least from technical discussions of these problems. A corollary is that the complementary term, "random failures," should also be dropped. The mathematical model presented may not cover all situations which have been termed "common mode failures," but provides insight into the difficulty of obtaining estimates of the probabilities of these events

  6. Reactor instrumentation. Definition of the single failure criterion

    International Nuclear Information System (INIS)

    1980-12-01

    The standard defines the single failure criterion which is used in other IEC publications on reactor safety systems. The purpose of the single failure criterion is the assurance of minimum redundancy. (orig./HP) [de

  7. Age replacement policy based on imperfect repair with random probability

    International Nuclear Information System (INIS)

    Lim, J.H.; Qu, Jian; Zuo, Ming J.

    2016-01-01

    In most of the literature on age replacement policies, failures before the planned replacement age can be either minimally repaired or perfectly repaired, depending on the type of failure, the cost of repair, and so on. In this paper, we propose an age replacement policy based on imperfect repair with random probability. The proposed policy incorporates the case in which such an intermittent failure can be either minimally repaired or perfectly repaired with random probabilities. The mathematical formulas of the expected cost rate per unit time are derived for both the infinite-horizon case and the one-replacement-cycle case. For each case, we show that the optimal replacement age exists and is finite. - Highlights: • We propose a new age replacement policy with random probability of perfect repair. • We develop the expected cost per unit time. • We discuss the optimal replacement age minimizing the expected cost rate.
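
    As a rough illustration of how an optimal replacement age is obtained from an expected-cost-rate formula, the sketch below minimizes the classical age-replacement cost rate numerically; the lifetime distribution and the costs are assumptions, and the paper's formulas additionally account for minimal or perfect repairs occurring with random probabilities.

    ```python
    import numpy as np
    from scipy.integrate import quad
    from scipy.optimize import minimize_scalar
    from scipy.stats import weibull_min

    # Classical age-replacement cost rate, shown only to illustrate how an optimal
    # replacement age is found numerically.
    C_FAILURE, C_PREVENTIVE = 10.0, 1.0          # illustrative costs
    life = weibull_min(c=2.5, scale=100.0)       # assumed lifetime distribution

    def cost_rate(T):
        """Expected cost per unit time when replacing at age T or at failure."""
        expected_cycle_cost = C_FAILURE * life.cdf(T) + C_PREVENTIVE * life.sf(T)
        expected_cycle_length, _ = quad(life.sf, 0.0, T)
        return expected_cycle_cost / expected_cycle_length

    res = minimize_scalar(cost_rate, bounds=(1.0, 300.0), method="bounded")
    print(f"optimal replacement age ≈ {res.x:.1f}, cost rate ≈ {res.fun:.4f}")
    ```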

  8. Limited test data: The choice between confidence limits and inverse probability

    International Nuclear Information System (INIS)

    Nichols, P.

    1975-01-01

    For a unit which has been successfully designed to a high standard of reliability, any test programme of reasonable size will result in only a small number of failures. In these circumstances the failure rate estimated from the tests will depend on the statistical treatment applied. When a large number of units is to be manufactured, an unexpected high failure rate will certainly result in a large number of failures, so it is necessary to guard against optimistic unrepresentative test results by using a confidence limit approach. If only a small number of production units is involved, failures may not occur even with a higher than expected failure rate, and so one may be able to accept a method which allows for the possibility of either optimistic or pessimistic test results, and in this case an inverse probability approach, based on Bayes' theorem, might be used. The paper first draws attention to an apparently significant difference in the numerical results from the two methods, particularly for the overall probability of several units arranged in redundant logic. It then discusses a possible objection to the inverse method, followed by a demonstration that, for a large population and a very reasonable choice of prior probability, the inverse probability and confidence limit methods give the same numerical result. Finally, it is argued that a confidence limit approach is overpessimistic when a small number of production units is involved, and that both methods give the same answer for a large population. (author)

  9. Minimum resolvable power contrast model

    Science.gov (United States)

    Qian, Shuai; Wang, Xia; Zhou, Jingjing

    2018-01-01

    Signal-to-noise ratio and MTF are important indices for evaluating the performance of optical systems. However, neither used alone nor assessed jointly can they intuitively describe the overall performance of the system. Therefore, an index is proposed to reflect the comprehensive system performance: the Minimum Resolvable Radiation Performance Contrast (MRP) model. MRP is an evaluation model that does not involve human eyes. It starts from the radiance of the target and the background, transforms the target and background into equivalent strips, and considers attenuation by the atmosphere, the optical imaging system, and the detector. Combining the signal-to-noise ratio and the MTF, the Minimum Resolvable Radiation Performance Contrast is obtained. Finally, the detection probability model of MRP is given.

  10. Estimating the probability of rare events: addressing zero failure data.

    Science.gov (United States)

    Quigley, John; Revie, Matthew

    2011-07-01

    Traditional statistical procedures for estimating the probability of an event result in an estimate of zero when no events are realized. Alternative inferential procedures have been proposed for the situation where zero events have been realized but often these are ad hoc, relying on selecting methods dependent on the data that have been realized. Such data-dependent inference decisions violate fundamental statistical principles, resulting in estimation procedures whose benefits are difficult to assess. In this article, we propose estimating the probability of an event occurring through minimax inference on the probability that future samples of equal size realize no more events than that in the data on which the inference is based. Although motivated by inference on rare events, the method is not restricted to zero event data and closely approximates the maximum likelihood estimate (MLE) for nonzero data. The use of the minimax procedure provides a risk averse inferential procedure where there are no events realized. A comparison is made with the MLE and regions of the underlying probability are identified where this approach is superior. Moreover, a comparison is made with three standard approaches to supporting inference where no event data are realized, which we argue are unduly pessimistic. We show that for situations of zero events the estimator can be simply approximated by 1/(2.5n), where n is the number of trials. © 2011 Society for Risk Analysis.
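
    The closing approximation can be stated in a couple of lines. The sketch below returns the 1/(2.5n) value for zero-event data and the MLE otherwise; it is only the simple approximation quoted in the abstract, not the full minimax procedure.

    ```python
    def rare_event_estimate(failures, trials):
        """Point estimate of a failure probability.

        For zero observed failures the maximum likelihood estimate is 0; the
        approximation quoted in the abstract is 1 / (2.5 * n). For non-zero
        data the MLE k/n is returned.
        """
        if failures == 0:
            return 1.0 / (2.5 * trials)
        return failures / trials

    print(rare_event_estimate(0, 200))   # 0.002 rather than an estimate of 0
    print(rare_event_estimate(3, 200))   # 0.015 (MLE)
    ```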

  11. Component failure data base of TRIGA reactors

    International Nuclear Information System (INIS)

    Djuricic, M.

    2004-10-01

    This compilation provides failure data such as first criticality, component type description (reactor component, population, cumulative calendar time, cumulative operating time, demands, failure mode, failures, failure rate, failure probability) and specific information on each type of component of TRIGA Mark-II reactors in Austria, Bangladesh, Germany, Finland, Indonesia, Italy, Slovenia and Romania. (nevyjel)

  12. Minimum wage hikes and the wage growth of low-wage workers

    OpenAIRE

    Joanna K Swaffield

    2012-01-01

    This paper presents difference-in-differences estimates of the impact of the British minimum wage on the wage growth of low-wage employees. Estimates of the probability of low-wage employees receiving positive wage growth have been significantly increased by the minimum wage upratings or hikes. However, whether the actual wage growth of these workers has been significantly raised or not depends crucially on the magnitude of the minimum wage hike considered. Findings are consistent with employ...

  13. Probability elicitation to inform early health economic evaluations of new medical technologies: a case study in heart failure disease management.

    Science.gov (United States)

    Cao, Qi; Postmus, Douwe; Hillege, Hans L; Buskens, Erik

    2013-06-01

    Early estimates of the commercial headroom available to a new medical device can assist producers of health technology in making appropriate product investment decisions. The purpose of this study was to illustrate how this quantity can be captured probabilistically by combining probability elicitation with early health economic modeling. The technology considered was a novel point-of-care testing device in heart failure disease management. First, we developed a continuous-time Markov model to represent the patients' disease progression under the current care setting. Next, we identified the model parameters that are likely to change after the introduction of the new device and interviewed three cardiologists to capture the probability distributions of these parameters. Finally, we obtained the probability distribution of the commercial headroom available per measurement by propagating the uncertainty in the model inputs to uncertainty in modeled outcomes. For a willingness-to-pay value of €10,000 per life-year, the median headroom available per measurement was €1.64 (interquartile range €0.05-€3.16) when the measurement frequency was assumed to be daily. In the subsequently conducted sensitivity analysis, this median value increased to a maximum of €57.70 for different combinations of the willingness-to-pay threshold and the measurement frequency. Probability elicitation can successfully be combined with early health economic modeling to obtain the probability distribution of the headroom available to a new medical technology. Subsequently feeding this distribution into a product investment evaluation method enables stakeholders to make more informed decisions regarding to which markets a currently available product prototype should be targeted. Copyright © 2013. Published by Elsevier Inc.

  14. Probability of inadvertent operation of electrical components in harsh environments

    International Nuclear Information System (INIS)

    Knoll, A.

    1989-01-01

    Harsh environment, which means humidity and high temperature, may and will affect unsealed electrical components by causing leakage ground currents in ungrounded direct current systems. The concern in a nuclear power plant is that such harsh environment conditions could cause inadvertent operation of normally deenergized components, which may have a safety-related isolation function. Harsh environment is a common cause failure, and one way to approach the problem is to assume that all the unsealed electrical components will simultaneously and inadvertently energize as a result of the environmental common cause failure. This assumption is unrealistically conservative. Test results indicated that insulating resistances of any terminal block in harsh environments have a random distribution in the range of 1 to 270 kΩ, with a mean value ∼59 kΩ. The objective of this paper is to evaluate a realistic conditional failure probability for inadvertent operation of electrical components in harsh environments. This value will be used thereafter in probabilistic safety evaluations of harsh environment events and will replace both the overconservative common cause probability of 1 and the random failure probability used for mild environments

  15. average probability of failure on demand estimation for burner

    African Journals Online (AJOL)

    HOD

    P_ij – Probability from state i to j. 1. INTRODUCTION. In the process ... the numerical value of the PFD as result of components, sub-system ... ignored in probabilistic risk assessment it may lead to ... Markov chains for a holistic modeling of SIS.

  16. The probability of containment failure by direct containment heating in Surry

    International Nuclear Information System (INIS)

    Pilch, M.M.; Allen, M.D.; Bergeron, K.D.; Tadios, E.L.; Stamps, D.W.; Spencer, B.W.; Quick, K.S.; Knudson, D.L.

    1995-05-01

    In a light-water reactor core melt accident, if the reactor pressure vessel (RPV) fails while the reactor coolant system (RCS) is at high pressure, the expulsion of molten core debris may pressurize the reactor containment building (RCB) beyond its failure pressure. A failure in the bottom head of the RPV, followed by melt expulsion and blowdown of the RCS, will entrain molten core debris in the high-velocity steam blowdown gas. This chain of events is called a high-pressure melt ejection (HPME). Four mechanisms may cause a rapid increase in pressure and temperature in the reactor containment: (1) blowdown of the RCS, (2) efficient debris-to-gas heat transfer, (3) exothermic metal-steam and metal-oxygen reactions, and (4) hydrogen combustion. These processes, which lead to increased loads on the containment building, are collectively referred to as direct containment heating (DCH). It is necessary to understand factors that enhance or mitigate DCH because the pressure load imposed on the RCB may lead to early failure of the containment

  17. Estimating Recovery Failure Probabilities in Off-normal Situations from Full-Scope Simulator Data

    Energy Technology Data Exchange (ETDEWEB)

    Kim, Yochan; Park, Jinkyun; Kim, Seunghwan; Choi, Sun Yeong; Jung, Wondea [Korea Atomic Research Institute, Daejeon (Korea, Republic of)

    2016-10-15

    As part of this effort, KAERI developed the Human Reliability data EXtraction (HuREX) framework and is collecting full-scope simulator-based human reliability data into the OPERA (Operator PErformance and Reliability Analysis) database. In this study, continuing the series of estimation studies on HEPs and PSF effects, recovery failure probabilities (RFPs), which are significant information for a quantitative HRA, were produced from the OPERA database. Unsafe acts can occur at any time in safety-critical systems, and operators often manage the systems by discovering their errors and eliminating or mitigating them. Several studies have categorized recovery behaviors in order to model recovery processes or recovery strategies. Because recent human error trends need to be considered during a human reliability analysis, the work of Jang et al. can be seen as an essential data collection effort. However, since their empirical results regarding soft controls were produced in a controlled laboratory environment with student participants, it is necessary to analyze a wide range of operator behaviors using full-scope simulators. This paper presents statistics on human error recovery behaviors obtained from full-scope simulations in which on-site operators participated. In this study, the recovery effects of shift changes or technical support centers were not considered owing to a lack of simulation data.

  18. Development of component failure data for seismic risk analysis

    International Nuclear Information System (INIS)

    Fray, R.R.; Moulia, T.A.

    1981-01-01

    This paper describes the quantification and utilization of seismic failure data used in the Diablo Canyon Seismic Risk Study. A single variable representation of earthquake severity that uses peak horizontal ground acceleration to characterize earthquake severity was employed. The use of a multiple variable representation would allow direct consideration of vertical accelerations and the spectral nature of earthquakes but would have added such complexity that the study would not have been feasible. Vertical accelerations and spectral nature were indirectly considered because component failure data were derived from design analyses, qualification tests and engineering judgment that did include such considerations. Two types of functions were used to describe component failure probabilities. Ramp functions were used for components, such as piping and structures, qualified by stress analysis. 'Anchor points' for ramp functions were selected by assuming a zero probability of failure at code allowable stress levels and unity probability of failure at ultimate stress levels. The accelerations corresponding to allowable and ultimate stress levels were determined by conservatively assuming a linear relationship between seismic stress and ground acceleration. Step functions were used for components, such as mechanical and electrical equipment, qualified by testing. Anchor points for step functions were selected by assuming a unity probability of failure above the qualification acceleration. (orig./HP)
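
    The two fragility shapes described above translate directly into simple functions of peak ground acceleration. The anchor-point values in the sketch below are illustrative assumptions, not the Diablo Canyon study's values.

    ```python
    import numpy as np

    def ramp_fragility(pga, a_allowable, a_ultimate):
        """Ramp failure-probability function for analysis-qualified components:
        zero at the acceleration corresponding to code-allowable stress and one
        at the acceleration corresponding to ultimate stress."""
        pga = np.asarray(pga, dtype=float)
        return np.clip((pga - a_allowable) / (a_ultimate - a_allowable), 0.0, 1.0)

    def step_fragility(pga, a_qualification):
        """Step failure-probability function for test-qualified components:
        zero up to the qualification acceleration, one above it."""
        return np.where(np.asarray(pga, dtype=float) > a_qualification, 1.0, 0.0)

    # Illustrative anchor points in units of g
    for a in (0.2, 0.5, 0.8, 1.2):
        print(a, ramp_fragility(a, a_allowable=0.4, a_ultimate=1.0),
              step_fragility(a, a_qualification=0.75))
    ```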

  19. The impact of the minimum wage on health.

    Science.gov (United States)

    Andreyeva, Elena; Ukert, Benjamin

    2018-03-07

    This study evaluates the effect of minimum wage on risky health behaviors, healthcare access, and self-reported health. We use data from the 1993-2015 Behavioral Risk Factor Surveillance System, and employ a difference-in-differences strategy that utilizes time variation in new minimum wage laws across U.S. states. Results suggest that the minimum wage increases the probability of being obese and decreases daily fruit and vegetable intake, but also decreases days with functional limitations while having no impact on healthcare access. Subsample analyses reveal that the increase in weight and decrease in fruit and vegetable intake are driven by the older population, married, and whites. The improvement in self-reported health is especially strong among non-whites, females, and married.

  20. Statistical Analysis Of Failure Strength Of Material Using Weibull Distribution

    International Nuclear Information System (INIS)

    Entin Hartini; Mike Susmikanti; Antonius Sitompul

    2008-01-01

    In the evaluation of the strength of ceramic and glass materials, a statistical approach is necessary. The strength of ceramic and glass depends on specimen size and on the size distribution of flaws in the material. The distribution of strength for a ductile material is narrow and close to a Gaussian distribution, while the strength of brittle materials such as ceramic and glass follows a Weibull distribution. The Weibull distribution is an indicator of the failure of material strength resulting from a distribution of flaw sizes. In this paper, the cumulative probability of failure as a function of material strength, the cumulative probability of failure versus fracture stress, and the cumulative reliability of the material were calculated. Statistical criteria supporting the strength analysis of silicon nitride were computed using MATLAB. (author)
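
    The Weibull failure probability used in such an analysis is a one-line formula; the sketch below evaluates it for assumed parameters (the characteristic strength and Weibull modulus are illustrative, not measured silicon nitride values), using Python rather than the MATLAB implementation mentioned in the abstract.

    ```python
    import numpy as np

    def weibull_failure_probability(stress, scale, shape):
        """Cumulative probability of failure at a given fracture stress for a
        brittle material whose strength follows a two-parameter Weibull law:
        F(s) = 1 - exp(-(s / scale)**shape)."""
        s = np.asarray(stress, dtype=float)
        return 1.0 - np.exp(-(s / scale) ** shape)

    # Illustrative parameters: characteristic strength 600 MPa, Weibull modulus 10
    for s in (300, 450, 600, 750):
        p_fail = weibull_failure_probability(s, scale=600.0, shape=10.0)
        print(f"stress {s} MPa: P(failure) = {p_fail:.3f}, reliability = {1 - p_fail:.3f}")
    ```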

  1. Definition of containment failure

    International Nuclear Information System (INIS)

    Cybulskis, P.

    1982-01-01

    Core meltdown accidents of the types considered in probabilistic risk assessments (PRA's) have been predicted to lead to pressures that will challenge the integrity of containment structures. Review of a number of PRA's indicates considerable variation in the predicted probability of containment failure as a function of pressure. Since the results of PRA's are sensitive to the prediction of the occurrence and the timing of containment failure, better understanding of realistic containment capabilities and a more consistent approach to the definition of containment failure pressures are required. Additionally, since the size and location of the failure can also significantly influence the prediction of reactor accident risk, further understanding of likely failure modes is required. The thresholds and modes of containment failure may not be independent

  2. Importance Sampling for Failure Probabilities in Computing and Data Transmission

    DEFF Research Database (Denmark)

    Asmussen, Søren

    We study efficient simulation algorithms for estimating P(Χ > χ), where Χ is the total time of a job with ideal time T that needs to be restarted after a failure. The main tool is importance sampling, where one tries to identify a good importance distribution via an asymptotic description of the conditional distribution of T given Χ > χ. If T ≡ t is constant, the problem reduces to the efficient simulation of geometric sums, and a standard algorithm involving a Cramér-type root γ(t) is available. However, we also discuss an algorithm avoiding the rootfinding. If T is random, particular attention is given to T having either a gamma-like tail or a regularly varying tail, and to failures at Poisson times. Different types of conditional limits occur, in particular exponentially tilted Gumbel distributions and Pareto distributions. The algorithms based upon importance distributions for T using ...

  3. Importance sampling for failure probabilities in computing and data transmission

    DEFF Research Database (Denmark)

    Asmussen, Søren

    2009-01-01

    In this paper we study efficient simulation algorithms for estimating P(X > x), where X is the total time of a job with ideal time T that needs to be restarted after a failure. The main tool is importance sampling, where a good importance distribution is identified via an asymptotic description of the conditional distribution of T given X > x. If T ≡ t is constant, the problem reduces to the efficient simulation of geometric sums, and a standard algorithm involving a Cramér-type root, γ(t), is available. However, we also discuss an algorithm that avoids finding the root. If T is random, particular attention is given to T having either a gamma-like tail or a regularly varying tail, and to failures at Poisson times. Different types of conditional limit occur, in particular exponentially tilted Gumbel distributions and Pareto distributions. The algorithms based upon importance distributions for T using ...

  4. Evaluation of the probability of arrester failure in a high-voltage transmission line using a Q learning artificial neural network model

    International Nuclear Information System (INIS)

    Ekonomou, L; Karampelas, P; Vita, V; Chatzarakis, G E

    2011-01-01

    One of the most popular methods of protecting high voltage transmission lines against lightning strikes and internal overvoltages is the use of arresters. The installation of arresters in high voltage transmission lines can prevent or even reduce the lines' failure rate. Several studies based on simulation tools have been presented in order to estimate the critical currents that exceed the arresters' rated energy stress and to specify the arresters' installation interval. In this work artificial intelligence, and more specifically a Q-learning artificial neural network (ANN) model, is addressed for evaluating the arresters' failure probability. The aims of the paper are to describe in detail the developed Q-learning ANN model and to compare the results obtained by its application in operating 150 kV Greek transmission lines with those produced using a simulation tool. The satisfactory and accurate results of the proposed ANN model can make it a valuable tool for designers of electrical power systems seeking more effective lightning protection, reducing operational costs and better continuity of service

  5. Evaluation of the probability of arrester failure in a high-voltage transmission line using a Q learning artificial neural network model

    Science.gov (United States)

    Ekonomou, L.; Karampelas, P.; Vita, V.; Chatzarakis, G. E.

    2011-04-01

    One of the most popular methods of protecting high voltage transmission lines against lightning strikes and internal overvoltages is the use of arresters. The installation of arresters in high voltage transmission lines can prevent or even reduce the lines' failure rate. Several studies based on simulation tools have been presented in order to estimate the critical currents that exceed the arresters' rated energy stress and to specify the arresters' installation interval. In this work artificial intelligence, and more specifically a Q-learning artificial neural network (ANN) model, is addressed for evaluating the arresters' failure probability. The aims of the paper are to describe in detail the developed Q-learning ANN model and to compare the results obtained by its application in operating 150 kV Greek transmission lines with those produced using a simulation tool. The satisfactory and accurate results of the proposed ANN model can make it a valuable tool for designers of electrical power systems seeking more effective lightning protection, reducing operational costs and better continuity of service.

  6. Reliability model for common mode failures in redundant safety systems

    International Nuclear Information System (INIS)

    Fleming, K.N.

    1974-12-01

    A method is presented for computing the reliability of redundant safety systems, considering both independent and common mode type failures. The model developed for the computation is a simple extension of classical reliability theory. The feasibility of the method is demonstrated with the use of an example. The probability of failure of a typical diesel-generator emergency power system is computed based on data obtained from U. S. diesel-generator operating experience. The results are compared with reliability predictions based on the assumption that all failures are independent. The comparison shows a significant increase in the probability of redundant system failure, when common failure modes are considered. (U.S.)
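
    A common way to extend classical reliability theory with a common mode contribution is a beta-factor style split of the per-train failure probability; the sketch below uses that split for a two-train system with illustrative numbers. It is offered only as an example of the effect described in the abstract, not as the paper's specific model or the diesel-generator data.

    ```python
    def redundant_system_failure(p_total, beta, n_trains=2):
        """Failure probability of an n-train redundant system using a simple
        beta-factor split: a fraction beta of the per-train failure probability
        is common cause (fails all trains), the rest is independent."""
        p_independent = (1.0 - beta) * p_total
        p_common = beta * p_total
        # system fails if all trains fail independently, or the common cause occurs
        return p_independent ** n_trains + p_common - p_independent ** n_trains * p_common

    p_train = 2e-2   # illustrative per-demand failure probability of one train
    print(redundant_system_failure(p_train, beta=0.0))    # independence assumption
    print(redundant_system_failure(p_train, beta=0.1))    # with a common mode fraction
    ```

    With these illustrative numbers the common mode contribution raises the system failure probability by roughly a factor of five, which mirrors the significant increase reported when common failure modes are considered.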

  7. Reliability Evaluation of Machine Center Components Based on Cascading Failure Analysis

    Science.gov (United States)

    Zhang, Ying-Zhi; Liu, Jin-Tong; Shen, Gui-Xiang; Long, Zhe; Sun, Shu-Guang

    2017-07-01

    In order to address the problems that the component reliability model exhibits deviation and that the evaluation result is low because failure propagation is overlooked in traditional reliability evaluation of machine center components, a new reliability evaluation method based on cascading failure analysis and failure influence degree assessment is proposed. A directed graph model of cascading failure among components is established according to cascading failure mechanism analysis and graph theory. The failure influence degrees of the system components are assessed by the adjacency matrix and its transpose, combined with the PageRank algorithm. Based on the comprehensive failure probability function and the total probability formula, the inherent failure probability function is determined to realize the reliability evaluation of the system components. Finally, the method is applied to a machine center, and the results show the following: 1) The reliability evaluation values of the proposed method are at least 2.5% higher than those of the traditional method; 2) The difference between the comprehensive and inherent reliability of a system component is positively correlated with the failure influence degree of that component, which provides a theoretical basis for reliability allocation of the machine center system.
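
    A minimal sketch of ranking components by PageRank on a cascading-failure digraph is shown below; the four-node adjacency matrix and the damping factor are assumptions, and the mapping from rank scores to the paper's failure influence degree is not reproduced.

    ```python
    import numpy as np

    def pagerank(adjacency, damping=0.85, tol=1e-10, max_iter=1000):
        """Rank the nodes of a cascading-failure digraph by power iteration.
        adjacency[i, j] = 1 means a failure of component i propagates to j."""
        a = np.asarray(adjacency, dtype=float)
        n = a.shape[0]
        out_degree = a.sum(axis=1)
        # rows with no outgoing edges distribute their rank uniformly
        transition = np.where(out_degree[:, None] > 0,
                              a / np.maximum(out_degree, 1)[:, None], 1.0 / n)
        rank = np.full(n, 1.0 / n)
        for _ in range(max_iter):
            new_rank = (1 - damping) / n + damping * transition.T @ rank
            if np.abs(new_rank - rank).sum() < tol:
                break
            rank = new_rank
        return rank

    # Illustrative 4-component cascading-failure graph (not the machine center data)
    adj = np.array([[0, 1, 1, 0],
                    [0, 0, 1, 0],
                    [0, 0, 0, 1],
                    [0, 0, 0, 0]])
    # Higher score = more exposed to propagated failures; running pagerank(adj.T)
    # instead ranks the components whose failures propagate most widely.
    print(pagerank(adj))
    ```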

  8. FRELIB, Failure Reliability Index Calculation

    International Nuclear Information System (INIS)

    Parkinson, D.B.; Oestergaard, C.

    1984-01-01

    1 - Description of problem or function: Calculation of the reliability index given the failure boundary. A linearization point (design point) is found on the failure boundary for a stationary reliability index (min) and a stationary failure probability density function along the failure boundary, provided that the basic variables are normally distributed. 2 - Method of solution: Iteration along the failure boundary which must be specified - together with its partial derivatives with respect to the basic variables - by the user in a subroutine FSUR. 3 - Restrictions on the complexity of the problem: No distribution information included (first-order-second-moment-method). 20 basic variables (could be extended)
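
    The iteration such a code performs can be sketched in a few lines for standard normal basic variables: the user supplies the limit state function and its gradient (the role the FSUR subroutine plays in FRELIB), and the design point is found by the standard first-order iteration. The limit state below is an assumed linear example, chosen so the exact answer is known.

    ```python
    import numpy as np

    def reliability_index(g, grad_g, u0, tol=1e-8, max_iter=100):
        """First-order (Hasofer-Lind) reliability index for standard normal
        basic variables: iterate to the design point on the failure boundary
        g(u) = 0 and return beta = ||u*|| together with u*."""
        u = np.asarray(u0, dtype=float)
        for _ in range(max_iter):
            grad = np.asarray(grad_g(u), dtype=float)
            # standard update formula for the next design-point estimate
            u_new = (grad @ u - g(u)) / (grad @ grad) * grad
            if np.linalg.norm(u_new - u) < tol:
                u = u_new
                break
            u = u_new
        return np.linalg.norm(u), u

    # Assumed linear limit state in standardized variables; failure when g(u) <= 0
    g = lambda u: 3.0 - u[0] - 0.5 * u[1]
    grad_g = lambda u: np.array([-1.0, -0.5])
    beta, design_point = reliability_index(g, grad_g, u0=np.zeros(2))
    print(beta, design_point)   # exact beta for this linear case is 3 / sqrt(1.25)
    ```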

  9. Landslide Probability Assessment by the Derived Distributions Technique

    Science.gov (United States)

    Muñoz, E.; Ochoa, A.; Martínez, H.

    2012-12-01

    Landslides are potentially disastrous events that bring human and economic losses, especially in cities where accelerated and unorganized growth leads to settlements on steep and potentially unstable areas. Among the main causes of landslides are geological, geomorphological, geotechnical, climatological, and hydrological conditions and anthropic intervention. This paper studies landslides triggered by rain, commonly known as "soil-slip", which are characterized by having a superficial failure surface (typically between 1 and 1.5 m deep) parallel to the slope face and by being triggered by intense and/or sustained periods of rain. This type of landslide is caused by changes in the pore pressure produced by a decrease in suction when a humid front enters, as a consequence of the infiltration initiated by rain and ruled by the hydraulic characteristics of the soil. Failure occurs when this front reaches a critical depth and the shear strength of the soil is not enough to guarantee the stability of the mass. Critical rainfall thresholds in combination with a slope stability model are widely used for assessing landslide probability. In this paper we present a model for the estimation of the occurrence of landslides based on the derived distributions technique. Since the works of Eagleson in the 1970s the derived distributions technique has been widely used in hydrology to estimate the probability of occurrence of extreme flows. The model estimates the probability density function (pdf) of the Factor of Safety (FOS) from the statistical behavior of the rainfall process and some slope parameters. The stochastic character of the rainfall is transformed by means of a deterministic failure model into the FOS pdf. Exceedance probability and return period estimation is then straightforward. The rainfall process is modeled as a Rectangular Pulses Poisson Process (RPPP) with independent exponential pdf for mean intensity and duration of the storms. The Philip infiltration model

  10. Evaluation of Brace Treatment for Infant Hip Dislocation in a Prospective Cohort: Defining the Success Rate and Variables Associated with Failure.

    Science.gov (United States)

    Upasani, Vidyadhar V; Bomar, James D; Matheney, Travis H; Sankar, Wudbhav N; Mulpuri, Kishore; Price, Charles T; Moseley, Colin F; Kelley, Simon P; Narayanan, Unni; Clarke, Nicholas M P; Wedge, John H; Castañeda, Pablo; Kasser, James R; Foster, Bruce K; Herrera-Soto, Jose A; Cundy, Peter J; Williams, Nicole; Mubarak, Scott J

    2016-07-20

    The use of a brace has been shown to be an effective treatment for hip dislocation in infants; however, previous studies of such treatment have been single-center or retrospective. The purpose of the current study was to evaluate the success rate for brace use in the treatment of infant hip dislocation in an international, multicenter, prospective cohort, and to identify the variables associated with brace failure. All dislocations were verified with use of ultrasound or radiography prior to the initiation of treatment, and patients were followed prospectively for a minimum of 18 months. Successful treatment was defined as the use of a brace that resulted in a clinically and radiographically reduced hip, without surgical intervention. The Mann-Whitney test, chi-square analysis, and Fisher exact test were used to identify risk factors for brace failure. A multivariate logistic regression model was used to determine the probability of brace failure according to the risk factors identified. Brace treatment was successful in 162 (79%) of the 204 dislocated hips in this series. Six variables were found to be significant risk factors for failure: developing femoral nerve palsy during brace treatment (p = 0.001), treatment with a static brace (p failure, whereas hips with 4 or 5 risk factors had a 100% probability of failure. These data provide valuable information for patient families and their providers regarding the important variables that influence successful brace treatment for dislocated hips in infants. Prognostic Level I. See Instructions for Authors for a complete description of levels of evidence. Copyright © 2016 by The Journal of Bone and Joint Surgery, Incorporated.

  11. Most Probable Failures in LHC Magnets and Time Constants of their Effects on the Beam.

    CERN Document Server

    Gomez Alonso, Andres

    2006-01-01

    During the LHC operation, energies up to 360 MJ will be stored in each proton beam and over 10 GJ in the main electrical circuits. With such high energies, beam losses can quickly lead to important equipment damage. The Machine Protection Systems have been designed to provide reliable protection of the LHC through detection of the failures leading to beam losses and fast dumping of the beams. In order to determine the protection strategies, it is important to know the time constants of the failure effects on the beam. In this report, we give an estimation of the time constants of quenches and powering failures in LHC magnets. The most critical failures are powering failures in certain normal conducting circuits, leading to relevant effects on the beam in ~1 ms. The failures on superconducting magnets leading to the fastest losses are quenches. In this case, the effects on the beam can be significant ~10 ms after the quench occurs.

  12. Minimum risk trigger indices

    International Nuclear Information System (INIS)

    Tingey, F.H.

    1979-01-01

    A viable safeguards system includes among other things the development and use of indices which trigger various courses of action. The usual limit of error calculation provides such an index. The classical approach is one of constructing tests which, under certain assumptions, make the likelihood of a false alarm small. Of concern also is the test's failure to indicate a loss (diversion) when in fact one has occurred. Since false alarms are usually costly and losses both costly and of extreme strategic significance, there remains the task of balancing the probability of false alarm and its consequences against the probability of undetected loss and its consequences. The application of other than classical hypothesis testing procedures is considered in this paper. Using various consequence models, trigger indices are derived which have certain optimum properties. Application of the techniques would enhance the material control function

  13. [Survival analysis with competing risks: estimating failure probability].

    Science.gov (United States)

    Llorca, Javier; Delgado-Rodríguez, Miguel

    2004-01-01

    To show the impact of competing risks of death on survival analysis. We provide an example of survival time without chronic rejection after heart transplantation, where death before rejection acts as a competing risk. Using a computer simulation, we compare the Kaplan-Meier estimator and the multiple decrement model. The Kaplan-Meier method overestimated the probability of rejection. Next, we illustrate the use of the multiple decrement model to analyze secondary end points (in our example: death after rejection). Finally, we discuss Kaplan-Meier assumptions and why they fail in the presence of competing risks. Survival analysis should be adjusted for competing risks of death to avoid overestimation of the risk of rejection produced with the Kaplan-Meier method.
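
    The overestimation produced by the Kaplan-Meier method in the presence of a competing risk can be reproduced with a small simulation: the sketch below compares the naive 1 - KM estimate (death treated as censoring) with the multiple-decrement (cumulative incidence) estimate on simulated exponential data. All rates and the evaluation horizon are illustrative assumptions, not the transplantation data.

    ```python
    import numpy as np

    rng = np.random.default_rng(2)

    # Simulated competing risks: time to chronic rejection vs time to death;
    # only the first event is observed (illustrative exponential rates).
    n = 5000
    t_reject = rng.exponential(10.0, n)
    t_death = rng.exponential(8.0, n)
    time = np.minimum(t_reject, t_death)
    event = np.where(t_reject < t_death, 1, 2)      # 1 = rejection, 2 = death

    def estimators(times, events, horizon):
        """Naive 1 - Kaplan-Meier (death treated as censoring) versus the
        multiple-decrement / cumulative-incidence estimate for rejection."""
        order = np.argsort(times)
        times, events = times[order], events[order]
        at_risk = len(times)
        km_surv, overall_surv, cif = 1.0, 1.0, 0.0
        for t, e in zip(times, events):
            if t > horizon:
                break
            if e == 1:
                cif += overall_surv / at_risk       # increment of cumulative incidence
                km_surv *= 1.0 - 1.0 / at_risk
            overall_surv *= 1.0 - 1.0 / at_risk     # any event removes a subject
            at_risk -= 1
        return 1.0 - km_surv, cif

    naive, cumulative_incidence = estimators(time, event, horizon=5.0)
    print(naive, cumulative_incidence)   # the naive value overestimates rejection risk
    ```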

  14. An estimation method of system failure frequency using both structure and component failure data

    International Nuclear Information System (INIS)

    Takaragi, Kazuo; Sasaki, Ryoichi; Shingai, Sadanori; Tominaga, Kenji

    1981-01-01

    In recent years, the importance of reliability analysis is appreciated for large systems such as nuclear power plants. A reliability analysis method is described for a whole system, using structure failure data for its main working subsystem and component failure data for its safety protection subsystem. The subsystem named main working system operates normally, and the subsystem named safety protection system acts as standby or protection. Thus the main and the protection systems are given mutually different failure data; then, between the subsystems, there exists common mode failure, i.e. the component failure affecting the reliability of both. A calculation formula for system failure frequency is first derived. Then, a calculation method with digraphs is proposed for conditional system failure probability. Finally the results of numerical calculation are given for the purpose of explanation. (J.P.N.)

  15. Failure probabilities of SiC clad fuel during a LOCA in public acceptable simple SMR (PASS)

    Energy Technology Data Exchange (ETDEWEB)

    Lee, Youho, E-mail: euo@kaist.ac.kr; Kim, Ho Sik, E-mail: hskim25@kaist.ac.kr; NO, Hee Cheon, E-mail: hcno@kaist.ac.kr

    2015-10-15

    Highlights: • Graceful operating conditions of SMRs markedly lower SiC cladding stress. • Steady-state fracture probabilities of SiC cladding are below 10⁻⁷ in SMRs. • PASS demonstrates fuel coolability (T < 1300 °C) with sole radiation in LOCA. • SiC cladding failure probabilities of PASS are ∼10⁻² in LOCA. • Cold gas gap pressure controls SiC cladding tensile stress level in LOCA. - Abstract: Structural integrity of SiC clad fuels in reference Small Modular Reactors (SMRs) (NuScale, SMART, IRIS) and a commercial pressurized water reactor (PWR) are assessed with a multi-layered SiC cladding structural analysis code. Featured with low fuel pin power and temperature, SMRs demonstrate markedly reduced incore-residence fracture probabilities below ∼10⁻⁷, compared to those of commercial PWRs ∼10⁻⁶–10⁻¹. This demonstrates that SMRs can serve as a near-term deployment fit to SiC cladding with a sound management of its statistical brittle fracture. We proposed a novel SMR named Public Acceptable Simple SMR (PASS), which is featured with 14 × 14 assemblies of SiC clad fuels arranged in a square ring layout. PASS aims to rely on radiative cooling of fuel rods during a loss of coolant accident (LOCA) by fully leveraging high temperature tolerance of SiC cladding. An overarching assessment of SiC clad fuel performance in PASS was conducted with a combined methodology: (1) FRAPCON-SiC for steady-state performance analysis of PASS fuel rods, (2) the computational fluid dynamics code FLUENT for the radiative cooling rate of fuel rods during a LOCA, and (3) the multi-layered SiC cladding structural analysis code with previously developed SiC recession correlations under steam environments for both steady-state and LOCA. The results show that PASS simultaneously maintains desirable fuel cooling rate with the sole radiation and sound structural integrity of fuel rods for over 36 days of a LOCA without water supply. The stress level of

  16. On the relationship between stress intensity factor (K) and minimum ...

    African Journals Online (AJOL)

    Studies on crack-tip plastic zones are of fundamental importance in describing the process of failure and in formulating various fracture criteria. Minimum plastic zone radius (MPZR) theory is widely used in prediction of crack initiation angle in mixed mode fracture analysis of engineering materials. In this study, shape and ...

  17. Reliability-based fatigue life estimation of shear riveted connections considering dependency of rivet hole failures

    Directory of Open Access Journals (Sweden)

    Leonetti, Davide

    2018-01-01

    Standards and guidelines for the fatigue design of riveted connections make use of a stress range-endurance (S-N) curve based on the net section stress range regardless of the number and the position of the rivets. Almost all tests on which S-N curves are based are performed with a minimum number of rivets. However, the number of rivets in a row is expected to increase the fail-safe behaviour of the connection, whereas the number of rows is supposed to decrease the theoretical stress concentration at the critical locations, and hence these aspects are not considered in the S-N curves. This paper presents a numerical model predicting the fatigue life of riveted connections by performing a system reliability analysis on a double cover plated riveted butt joint. The connection is considered in three geometries, with different numbers of rivets in a row and different numbers of rows. The stress state in the connection is evaluated using a finite element model in which the friction coefficient and the clamping force in the rivets are considered in a deterministic manner. The probability of failure is evaluated for the main plate, and fatigue failure is assumed to originate at the sides of the rivet holes, the critical locations, or hot-spots. The notch stress approach is applied to assess the fatigue life, considered to be a stochastic quantity. Unlike other system reliability models available in the literature, the evaluation of the probability of failure takes into account the stochastic dependence between the failures at each critical location modelled as a parallel system, which means considering the change of the state of stress in the connection when a ligament between two rivets fails. A sensitivity study is performed to evaluate the effect of the pretension in the rivet and the friction coefficient on the fatigue life.

  18. Failure analysis of storage tank component in LNG regasification unit using fault tree analysis method (FTA)

    Science.gov (United States)

    Mulyana, Cukup; Muhammad, Fajar; Saad, Aswad H.; Mariah, Riveli, Nowo

    2017-03-01

    The storage tank is the most critical component in an LNG regasification terminal. It carries a risk of failure and accidents that impact human health and the environment. A risk assessment is conducted to detect and reduce the risk of failure in the storage tank. The aim of this research is to determine and calculate the probability of failure in an LNG regasification unit. In this case, the failure is caused by a Boiling Liquid Expanding Vapor Explosion (BLEVE) and a jet fire in the LNG storage tank component. The failure probability can be determined using Fault Tree Analysis (FTA). In addition, the impact of the heat radiation that is generated is calculated. Fault trees for BLEVE and jet fire on the storage tank component have been constructed, giving a failure probability of 5.63 × 10⁻¹⁹ for BLEVE and 9.57 × 10⁻³ for jet fire. The failure probability for jet fire is high enough that it needs to be reduced by customizing the PID scheme of the LNG regasification unit in pipeline number 1312 and unit 1. The failure probability after customization is 4.22 × 10⁻⁶.
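
    The gate algebra behind such a fault tree evaluation is simple for independent basic events; the sketch below combines OR and AND gates for a made-up jet-fire tree. The event names and probabilities are assumptions for illustration and are unrelated to the values reported for the LNG study.

    ```python
    def or_gate(*probabilities):
        """Probability that at least one independent input event occurs."""
        q = 1.0
        for p in probabilities:
            q *= (1.0 - p)
        return 1.0 - q

    def and_gate(*probabilities):
        """Probability that all independent input events occur."""
        q = 1.0
        for p in probabilities:
            q *= p
        return q

    # Illustrative basic-event probabilities (not the values of the LNG study)
    p_leak = or_gate(1e-3, 5e-4)                 # flange leak OR pipe crack
    p_ignition = 1e-2                            # ignition source present
    p_isolation_fails = and_gate(2e-2, 3e-2)     # both isolation valves fail
    p_jet_fire = and_gate(p_leak, p_ignition, or_gate(p_isolation_fails, 1e-3))
    print(f"illustrative jet-fire top-event probability: {p_jet_fire:.2e}")
    ```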

  19. Bounding probabilistic safety assessment probabilities by reality

    International Nuclear Information System (INIS)

    Fragola, J.R.; Shooman, M.L.

    1991-01-01

    The investigation of the failure in systems where failure is a rare event makes the continual comparisons between the developed probabilities and empirical evidence difficult. The comparison of the predictions of rare event risk assessments with historical reality is essential to prevent probabilistic safety assessment (PSA) predictions from drifting into fantasy. One approach to performing such comparisons is to search out and assign probabilities to natural events which, while extremely rare, have a basis in the history of natural phenomena or human activities. For example the Segovian aqueduct and some of the Roman fortresses in Spain have existed for several millennia and in many cases show no physical signs of earthquake damage. This evidence could be used to bound the probability of earthquakes above a certain magnitude to less than 10⁻³ per year. On the other hand, there is evidence that some repetitive actions can be performed with extremely low historical probabilities when operators are properly trained and motivated, and sufficient warning indicators are provided. The point is not that low probability estimates are impossible, but continual reassessment of the analysis assumptions, and a bounding of the analysis predictions by historical reality. This paper reviews the probabilistic predictions of PSA in this light, attempts to develop, in a general way, the limits which can be historically established and the consequent bounds that these limits place upon the predictions, and illustrates the methodology used in computing such limits. Further, the paper discusses the use of empirical evidence and the requirement for disciplined systematic approaches within the bounds of reality and the associated impact on PSA probabilistic estimates

  20. Statin Treatment and Clinical Outcomes of Heart Failure Among Africans: An Inverse Probability Treatment Weighted Analysis.

    Science.gov (United States)

    Bonsu, Kwadwo Osei; Owusu, Isaac Kofi; Buabeng, Kwame Ohene; Reidpath, Daniel D; Kadirvelu, Amudha

    2017-04-01

    Randomized control trials of statins have not demonstrated significant benefits in outcomes of heart failure (HF). However, randomized control trials may not always be generalizable. The aim was to determine whether statin and statin type-lipophilic or -hydrophilic improve long-term outcomes in Africans with HF. This was a retrospective longitudinal study of HF patients aged ≥18 years hospitalized at a tertiary healthcare center between January 1, 2009 and December 31, 2013 in Ghana. Patients were eligible if they were discharged from first admission for HF (index admission) and followed up to time of all-cause, cardiovascular, and HF mortality or end of study. Multivariable time-dependent Cox model and inverse-probability-of-treatment weighting of marginal structural model were used to estimate associations between statin treatment and outcomes. Adjusted hazard ratios were also estimated for lipophilic and hydrophilic statin compared with no statin use. The study included 1488 patients (mean age 60.3±14.2 years) with 9306 person-years of observation. Using the time-dependent Cox model, the 5-year adjusted hazard ratios with 95% CI for statin treatment on all-cause, cardiovascular, and HF mortality were 0.68 (0.55-0.83), 0.67 (0.54-0.82), and 0.63 (0.51-0.79), respectively. Use of inverse-probability-of-treatment weighting resulted in estimates of 0.79 (0.65-0.96), 0.77 (0.63-0.96), and 0.77 (0.61-0.95) for statin treatment on all-cause, cardiovascular, and HF mortality, respectively, compared with no statin use. Among Africans with HF, statin treatment was associated with significant reduction in mortality. © 2017 The Authors. Published on behalf of the American Heart Association, Inc., by Wiley Blackwell.

  1. Collective probabilities algorithm for surface hopping calculations

    International Nuclear Information System (INIS)

    Bastida, Adolfo; Cruz, Carlos; Zuniga, Jose; Requena, Alberto

    2003-01-01

    General equations that transition probabilities of the hopping algorithms in surface hopping calculations must obey to assure the equality between the average quantum and classical populations are derived. These equations are solved for two particular cases. In the first it is assumed that probabilities are the same for all trajectories and that the number of hops is kept to a minimum. These assumptions specify the collective probabilities (CP) algorithm, for which the transition probabilities depend on the average populations for all trajectories. In the second case, the probabilities for each trajectory are supposed to be completely independent of the results from the other trajectories. There is, then, a unique solution of the general equations assuring that the transition probabilities are equal to the quantum population of the target state, which is referred to as the independent probabilities (IP) algorithm. The fewest switches (FS) algorithm developed by Tully is accordingly understood as an approximate hopping algorithm which takes elements from the accurate CP and IP solutions. A numerical test of all these hopping algorithms is carried out for a one-dimensional two-state problem with two avoiding crossings which shows the accuracy and computational efficiency of the collective probabilities algorithm proposed, the limitations of the FS algorithm and the similarity between the results offered by the IP algorithm and those obtained with the Ehrenfest method

  2. Importance analysis for the systems with common cause failures

    International Nuclear Information System (INIS)

    Pan Zhijie; Nonaka, Yasuo

    1995-01-01

    This paper extends the importance analysis technique to the field of common cause failures in order to evaluate the structure importance, probability importance, and β-importance for systems with common cause failures. These importance measures would help reliability analysts to limit the common cause failure analysis framework and to find efficient defence strategies against common cause failures.

  3. PROBABILITY OF FAILURE OF THE TRUDOCK CRANE SYSTEM AT THE WASTE ISOLATION PILOT PLANT (WIPP)

    International Nuclear Information System (INIS)

    Greenfield, M.A.; Sargent, T.J.

    2000-01-01

    This probabilistic analysis of WIPP TRUDOCK crane failure is based on two sources of failure data. The source for operator errors is the report by Swain and Guttman, NUREG/CR-1278-F, August 1983. The crane cable/hook break rate was initially taken from WIPP/WID-96-2196, Rev. 0, which used relatively old (1970s) U.S. Navy data (NUREG-0612). However, a helpful analysis by R.K. Deremer of PLG guided the authors to values that were more realistic and more conservative, with the recommendation that the crane cable/hook failure rate should be 2.5 x 10^-6 per demand. This value was adopted and used. Based on these choices, a mean failure rate of 9.70 x 10^-3 (1/yr) was calculated. However, a mean rate by itself does not reveal the level of confidence to be associated with this number. Guidance for making confidence calculations came from the report by Swain and Guttman, who stated that failure data could be described by lognormal distributions. This is in agreement with the widely used reports (by DOE and others) NPRD-95 and NPRD-91 on failure data. The calculations of confidence levels showed that the mean failure rate of 9.70 x 10^-3 (1/yr) corresponded to a percentile value of approximately 71; i.e., there is a 71% likelihood that the failure rate is less than 9.70 x 10^-3 (1/yr). It was also calculated that there is a 95% likelihood that the failure rate is less than 29.6 x 10^-3 (1/yr). Or, as stated previously, there is a 71% likelihood that not more than one dropped load will occur in 103 years, and a 95% likelihood that not more than one dropped load will occur in approximately 34 years. It is the responsibility of DOE to select the confidence level at which it desires to operate.
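
    As a rough illustration of the lognormal confidence calculation described above (not the report's actual computation), the sketch below assumes a lognormal failure-rate distribution; the shape parameter is an illustrative assumption chosen only to roughly reproduce the ~71st-percentile figure, and the mean rate is the only value taken from the abstract.

```python
# Hedged sketch: percentile of the mean and a 95% upper bound for an assumed
# lognormal failure-rate distribution. Only mean_rate comes from the abstract;
# sigma is an assumed, illustrative shape parameter.
from math import exp, log
from scipy.stats import norm

mean_rate = 9.70e-3          # mean failure rate (1/yr) quoted in the abstract
sigma = 1.1                  # assumed lognormal shape parameter (illustrative)

# For X ~ Lognormal(mu, sigma): E[X] = exp(mu + sigma**2 / 2)
mu = log(mean_rate) - sigma**2 / 2

# Percentile of the mean: P(X <= E[X]) = Phi((ln E[X] - mu)/sigma) = Phi(sigma/2)
pctl_of_mean = norm.cdf(sigma / 2.0)

# 95th-percentile failure rate
rate_95 = exp(mu + norm.ppf(0.95) * sigma)

print(f"mean rate sits near the {100 * pctl_of_mean:.0f}th percentile")
print(f"95% upper bound: {rate_95:.2e} per year")
```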

  4. The association of minimum wage change on child nutritional status in LMICs: A quasi-experimental multi-country study.

    Science.gov (United States)

    Ponce, Ninez; Shimkhada, Riti; Raub, Amy; Daoud, Adel; Nandi, Arijit; Richter, Linda; Heymann, Jody

    2017-08-02

    There is recognition that social protection policies such as raising the minimum wage can favourably impact health, but little evidence links minimum wage increases to child health outcomes. We used multi-year data (2003-2012) on national minimum wages linked to individual-level data from the Demographic and Health Surveys (DHS) from 23 low- and middle-income countries (LMICs) that had at least two DHS surveys to establish pre- and post-observation periods. Over a pre- and post-interval ranging from 4 to 8 years, we examined minimum wage growth and four nutritional status outcomes among children under 5 years: stunting, wasting, underweight, and anthropometric failure. Using a differences-in-differences framework with country and time fixed effects, a 10% increase in minimum wage growth over time was associated with a 0.5 percentage point decline in stunting (-0.054, 95% CI (-0.084,-0.025)) and a 0.3 percentage point decline in anthropometric failure (-0.031, 95% CI (-0.057,-0.005)). We did not observe statistically significant associations between minimum wage growth and underweight or wasting. We found similar results for the poorest households working in non-agricultural and non-professional jobs, where minimum wage growth may have the most leverage. Modest increases in minimum wage over a 4- to 8-year period might be effective in reducing child undernutrition in LMICs.
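
    A minimal sketch of the kind of fixed-effects differences-in-differences regression the abstract describes; this is not the authors' code, and the data file, column names, and clustering choice are all hypothetical assumptions.

```python
# Hypothetical sketch of a differences-in-differences style regression with
# country and time fixed effects; the data frame and column names are assumed.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("dhs_minwage_panel.csv")   # hypothetical analysis file

# stunting: binary child-level outcome; minwage_growth: country-level change
model = smf.ols("stunting ~ minwage_growth + C(country) + C(year)", data=df)
result = model.fit(cov_type="cluster", cov_kwds={"groups": df["country"]})
print(result.params["minwage_growth"], result.conf_int().loc["minwage_growth"])
```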

  5. Retrieval system for emplaced spent unreprocessed fuel (SURF) in salt bed depository. Baseline concept criteria specifications and mechanical failure probabilities

    International Nuclear Information System (INIS)

    Hudson, E.E.; McCleery, J.E.

    1979-05-01

    One of the integral elements of the Nuclear Waste Management Program is the material handling task of retrieving Canisters containing spent unreprocessed fuel from their emplacement in a deep geologic salt bed Depository. A study of the retrieval concept data base predicated this report. In this report, alternative concepts for the tasks are illustrated and critiqued, a baseline concept in scenario form is derived and basic retrieval subsystem specifications are presented with cyclic failure probabilities predicted. The report is based on the following assumptions: (a) during retrieval, a temporary radiation seal is placed over each Canister emplacement; (b) a sleeve, surrounding the Canister, was initially installed during the original emplacement; (c) the emplacement room's physical and environmental conditions established in this report are maintained while the task is performed

  6. Early failure analysis of machining centers: a case study

    International Nuclear Information System (INIS)

    Wang Yiqiang; Jia Yazhou; Jiang Weiwei

    2001-01-01

    To eliminate early failures and improve reliability, nine ex-factory machining centers are tracked under field conditions in workshops, and their early failure information throughout the ex-factory run-in test is collected. A field early-failure database is constructed from the collected field data after codification. An early failure mode and effects analysis is performed to identify the weak subsystems of a machining center, i.e. the main troublemakers. The distribution of the time between early failures is analyzed, and the optimal ex-factory run-in test time that sufficiently exposes early failures at minimum cost is discussed. Suggestions on how to arrange the ex-factory run-in test and how to take action to reduce early failures of machining centers are proposed.

  7. Risk-based decision making to manage water quality failures caused by combined sewer overflows

    Science.gov (United States)

    Sriwastava, A. K.; Torres-Matallana, J. A.; Tait, S.; Schellart, A.

    2017-12-01

    Regulatory authorities set environmental permits for water utilities such that the combined sewer overflows (CSOs) managed by these companies conform to the regulations. These utility companies face the risk of penalties or negative publicity if they breach the environmental permit. These risks can be addressed by designing appropriate solutions, such as investing in additional infrastructure which improves system capacity and reduces the impact of CSO spills. The performance of these solutions is often estimated using urban drainage models; hence, any uncertainty in these models can have a significant effect on the decision making process. This study outlines a risk-based decision making approach to address water quality failure caused by CSO spills. A calibrated lumped urban drainage model is used to simulate CSO spill quality in the Haute-Sûre catchment in Luxembourg. Uncertainty in rainfall and model parameters is propagated through Monte Carlo simulations to quantify uncertainty in the concentration of ammonia in the CSO spill. A combination of decision alternatives, such as the construction of a storage tank at the CSO and the reduction of the flow contribution of catchment surfaces, is selected as planning measures to avoid the water quality failure. Failure is defined as exceedance, with a certain frequency, of a concentration-duration threshold based on Austrian emission standards for ammonia (De Toffol, 2006). For each decision alternative, uncertainty quantification results in a probability distribution of the number of annual CSO spill events which exceed the threshold. For each alternative, a buffered failure probability, as defined in Rockafellar & Royset (2010), is estimated. The buffered failure probability (pbf) is a conservative estimate of the failure probability (pf); unlike the failure probability, however, it includes information about the upper tail of the distribution. A Pareto-optimal set of solutions is obtained by performing mean
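
    A minimal Monte Carlo sketch of the ordinary versus buffered failure probability for a threshold on annual exceedance counts; the buffered value is computed as the tail fraction whose conditional mean (CVaR) equals the threshold, and the distribution of exceedance counts and the threshold itself are assumptions for illustration, not values from the study.

```python
import numpy as np

def failure_prob(samples, z):
    """Ordinary failure probability P(X > z)."""
    return np.mean(samples > z)

def buffered_failure_prob(samples, z):
    """Buffered failure probability: the tail fraction whose conditional mean
    (CVaR) equals the threshold z (in the sense of Rockafellar & Royset, 2010)."""
    x = np.sort(samples)[::-1]                          # descending order
    tail_means = np.cumsum(x) / np.arange(1, len(x) + 1)
    # number of leading (worst) samples whose running mean is still >= z
    k = np.searchsorted(-tail_means, -z, side="right")
    return min(1.0, k / len(x))

# Hypothetical example: annual counts of CSO spill events exceeding the
# ammonia concentration-duration threshold, from a Monte Carlo simulation.
rng = np.random.default_rng(1)
annual_exceedances = rng.poisson(lam=2.0, size=10_000)  # assumed distribution
z = 4                                                    # permitted events/year
print(failure_prob(annual_exceedances, z), buffered_failure_prob(annual_exceedances, z))
```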

  8. Mining of high utility-probability sequential patterns from uncertain databases.

    Directory of Open Access Journals (Sweden)

    Binbin Zhang

    Full Text Available High-utility sequential pattern mining (HUSPM) has become an important issue in the field of data mining. Several HUSPM algorithms have been designed to mine high-utility sequential patterns (HUSPs). They have been applied in several real-life situations, such as consumer behavior analysis and event detection in sensor networks. Nonetheless, most studies on HUSPM have focused on mining HUSPs in precise data. But in real life, uncertainty is an important factor, as data is collected using various types of sensors that are more or less accurate. Hence, data collected in a real-life database can be annotated with existence probabilities. This paper presents a novel pattern mining framework called high utility-probability sequential pattern mining (HUPSPM) for mining high utility-probability sequential patterns (HUPSPs) in uncertain sequence databases. A baseline algorithm with three optional pruning strategies is presented to mine HUPSPs. Moreover, to speed up the mining process, a projection mechanism is designed to create a database projection for each processed sequence, which is smaller than the original database. Thus, the number of unpromising candidates can be greatly reduced, as well as the execution time for mining HUPSPs. Substantial experiments both on real-life and synthetic datasets show that the designed algorithm performs well in terms of runtime, number of candidates, memory usage, and scalability for different minimum utility and minimum probability thresholds.

  9. Improved methods for dependent failure analysis in PSA

    International Nuclear Information System (INIS)

    Ballard, G.M.; Games, A.M.

    1988-01-01

    The basic design principle used in ensuring the safe operation of nuclear power plants is defence in depth. This normally takes the form of redundant equipment and systems which provide protection even if a number of equipment failures occur. Such redundancy is particularly effective in ensuring that multiple, independent equipment failures with the potential for jeopardising reactor safety will be rare events. However, the achievement of high reliability has served to highlight the potentially dominant role of multiple, dependent failures of equipment and systems. Analysis of reactor operating experience has shown that dependent failure events are the major contributors to safety system failures and reactor incidents and accidents. In parallel, PSA studies have shown that the results of a safety analysis are sensitive to assumptions made about the dependent failure (CCF) probability for safety systems. Thus a Westinghouse analysis showed that increasing system dependent failure probabilities by a factor of 5 led to a factor of 4 increase in core. This paper particularly refers to the engineering concepts underlying dependent failure assessment, touching briefly on aspects of data. It is specifically not the intent of our work to develop a new mathematical model of CCF but to aid the use of existing models.

  10. Binomial Test Method for Determining Probability of Detection Capability for Fracture Critical Applications

    Science.gov (United States)

    Generazio, Edward R.

    2011-01-01

    The capability of an inspection system is established by application of various methodologies to determine the probability of detection (POD). One accepted metric of an adequate inspection system is that, for a minimum flaw size and all greater flaw sizes, there is a 0.90 probability of detection with 95% confidence (90/95 POD). Directed design of experiments for probability of detection (DOEPOD) has been developed to provide an efficient and accurate methodology that yields estimates of POD and confidence bounds for both hit-miss and signal-amplitude testing, where signal amplitudes are reduced to hit-miss data by using a signal threshold. Directed DOEPOD uses a nonparametric approach for the analysis of inspection data that does not require any assumptions about the particular functional form of the POD function. The DOEPOD procedure identifies, for a given sample set, whether or not the minimum requirement of 0.90 probability of detection with 95% confidence is demonstrated for a minimum flaw size and for all greater flaw sizes (90/95 POD). The DOEPOD procedures are executed sequentially in order to minimize the number of samples needed to demonstrate that there is a 90/95 POD lower confidence bound at a given flaw size and that the POD is monotonic for flaw sizes exceeding that 90/95 POD flaw size. The conservativeness of the DOEPOD methodology results is discussed. Validated guidelines for binomial estimation of POD for fracture critical inspection are established.
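
    A small worked illustration of the binomial 90/95 requirement mentioned above: the number of consecutive hits, with no misses, needed so that a true POD below 0.90 can be rejected at 95% confidence (the classical 29-of-29 result). This is a generic binomial calculation, not the DOEPOD procedure itself.

```python
# Smallest number n of flawed specimens that must all be detected (zero misses)
# so that P(n hits in n trials | POD = 0.90) drops below the 5% significance level.
def min_hits_for_90_95(pod=0.90, alpha=0.05):
    n = 1
    while pod ** n > alpha:
        n += 1
    return n

print(min_hits_for_90_95())   # prints 29: 29 consecutive hits with no misses
```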

  11. Combinatorial analysis of systems with competing failures subject to failure isolation and propagation effects

    International Nuclear Information System (INIS)

    Xing Liudong; Levitin, Gregory

    2010-01-01

    This paper considers the reliability analysis of binary-state systems, subject to propagated failures with global effect, and failure isolation phenomena. Propagated failures with global effect are common-cause failures originated from a component of a system/subsystem causing the failure of the entire system/subsystem. Failure isolation occurs when the failure of one component (referred to as a trigger component) causes other components (referred to as dependent components) within the same system to become isolated from the system. On the one hand, failure isolation makes the isolated dependent components unusable; on the other hand, it prevents the propagation of failures originated from those dependent components. However, the failure isolation effect does not exist if failures originated in the dependent components already propagate globally before the trigger component fails. In other words, there exists a competition in the time domain between the failure of the trigger component that causes failure isolation and propagated failures originated from the dependent components. This paper presents a combinatorial method for the reliability analysis of systems subject to such competing propagated failures and failure isolation effect. Based on the total probability theorem, the proposed method is analytical, exact, and has no limitation on the type of time-to-failure distributions for the system components. An illustrative example is given to demonstrate the basics and advantages of the proposed method.

  12. Probability of extreme interference levels computed from reliability approaches: application to transmission lines with uncertain parameters

    International Nuclear Information System (INIS)

    Larbi, M.; Besnier, P.; Pecqueux, B.

    2014-01-01

    This paper deals with the risk analysis of an EMC failure using a statistical approach based on reliability methods from probabilistic engineering mechanics. The probability of failure (i.e., the probability of exceeding a threshold) of a current induced by crosstalk is computed by taking into account uncertainties in the input parameters that influence the interference levels, in the context of transmission lines. The study allowed us to evaluate the probability of failure of the induced current by using reliability methods with a relatively low computational cost compared to Monte Carlo simulation. (authors)

  13. Optimal selective renewal policy for systems subject to propagated failures with global effect and failure isolation phenomena

    International Nuclear Information System (INIS)

    Maaroufi, Ghofrane; Chelbi, Anis; Rezg, Nidhal

    2013-01-01

    This paper considers a selective maintenance policy for multi-component systems for which a minimum level of reliability is required for each mission. Such systems need to be maintained between consecutive missions. The proposed strategy aims at selecting the components to be maintained (renewed) after the completion of each mission such that a required reliability level is warranted up to the next stop with the minimum cost, taking into account the time period allotted for maintenance between missions and the possibility to extend it while paying a penalty cost. This strategy is applied to binary-state systems subject to propagated failures with global effect, and failure isolation phenomena. A set of rules to reduce the solutions space for such complex systems is developed. A numerical example is presented to illustrate the modeling approach and the use of the reduction rules. Finally, the Monte-Carlo simulation is used in combination with the selective maintenance optimization model to deal with a number of successive missions

  14. Probabilistic analysis on the failure of reactivity control for the PWR

    Science.gov (United States)

    Sony Tjahyani, D. T.; Deswandri; Sunaryo, G. R.

    2018-02-01

    The fundamental safety functions of a power reactor are to control reactivity, to remove heat from the reactor, and to confine radioactive material. Safety analysis is used to ensure that each parameter is fulfilled during the design, and is carried out by deterministic and probabilistic methods. The analysis of reactivity control is important because its failure affects the other fundamental safety functions. The purpose of this research is to determine the failure probability of reactivity control and the contributions to that failure for a PWR design. The analysis is carried out by determining the intermediate events that cause the failure of reactivity control. Furthermore, the basic events are determined deductively using fault tree analysis. The AP1000 is used as the object of research. The component failure and human error probability data used in the analysis are collected from IAEA, Westinghouse, NRC, and other published documents. The results show that there are six intermediate events which can cause the failure of reactivity control: uncontrolled rod bank withdrawal at low power or at full power, malfunction of boron dilution, misalignment of control rod withdrawal, improper positioning of a fuel assembly, and ejection of a control rod. The failure probability of reactivity control is 1.49E-03 per year. The failure causes affected by human factors are boron dilution, misalignment of control rod withdrawal, and improper positioning of fuel assemblies. Based on the assessment, it is concluded that the failure probability of reactivity control for this PWR is still within the IAEA criteria.
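
    For illustration only, the sketch below shows how intermediate-event probabilities are combined through an OR gate in a fault tree quantification of this kind; the event names follow the abstract, but every numerical value is a hypothetical placeholder, not a result of the study.

```python
def or_gate(probs):
    """Probability that at least one of several independent input events occurs."""
    q = 1.0
    for p in probs:
        q *= (1.0 - p)
    return 1.0 - q

# Intermediate events named in the abstract; all probabilities are hypothetical.
intermediate_events = {
    "uncontrolled rod bank withdrawal at low power":  3.0e-4,
    "uncontrolled rod bank withdrawal at full power": 3.0e-4,
    "boron dilution malfunction":                     4.0e-4,
    "control rod withdrawal misalignment":            2.0e-4,
    "improper fuel assembly position":                2.0e-4,
    "control rod ejection":                           1.0e-4,
}
print(f"top event (per year): {or_gate(intermediate_events.values()):.2e}")
```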

  15. Clan structure analysis and rapidity gap probability

    International Nuclear Information System (INIS)

    Lupia, S.; Giovannini, A.; Ugoccioni, R.

    1995-01-01

    Clan structure analysis in rapidity intervals is generalized from the negative binomial multiplicity distribution to the wide class of compound Poisson distributions. The link of generalized clan structure analysis with correlation functions is also established. These theoretical results are then applied to minimum bias events and reveal new interesting features, which can be inspiring and useful for discussing data on rapidity gap probability at the TEVATRON and HERA. (orig.)

  16. Clan structure analysis and rapidity gap probability

    Energy Technology Data Exchange (ETDEWEB)

    Lupia, S. [Turin Univ. (Italy). Ist. di Fisica Teorica; Istituto Nazionale di Fisica Nucleare, Turin (Italy)]; Giovannini, A. [Turin Univ. (Italy). Ist. di Fisica Teorica; Istituto Nazionale di Fisica Nucleare, Turin (Italy)]; Ugoccioni, R. [Turin Univ. (Italy). Ist. di Fisica Teorica; Istituto Nazionale di Fisica Nucleare, Turin (Italy)]

    1995-03-01

    Clan structure analysis in rapidity intervals is generalized from the negative binomial multiplicity distribution to the wide class of compound Poisson distributions. The link of generalized clan structure analysis with correlation functions is also established. These theoretical results are then applied to minimum bias events and reveal new interesting features, which can be inspiring and useful for discussing data on rapidity gap probability at the TEVATRON and HERA. (orig.)

  17. Development and testing of an algorithm to detect implantable cardioverter-defibrillator lead failure.

    Science.gov (United States)

    Gunderson, Bruce D; Gillberg, Jeffrey M; Wood, Mark A; Vijayaraman, Pugazhendhi; Shepard, Richard K; Ellenbogen, Kenneth A

    2006-02-01

    Implantable cardioverter-defibrillator (ICD) lead failures often present as inappropriate shock therapy. An algorithm that can reliably discriminate between ventricular tachyarrhythmias and noise due to lead failure may prevent patient discomfort and anxiety and avoid device-induced proarrhythmia by preventing inappropriate ICD shocks. The goal of this analysis was to test an ICD tachycardia detection algorithm that differentiates noise due to lead failure from ventricular tachyarrhythmias. We tested an algorithm that uses a measure of the ventricular intracardiac electrogram baseline to discriminate the sinus rhythm isoelectric line from the right ventricular coil-can (i.e., far-field) electrogram during oversensing of noise caused by a lead failure. The baseline measure was defined as the product of the sum (mV) and standard deviation (mV) of the voltage samples for a 188-ms window centered on each sensed electrogram. If the minimum baseline measure of the last 12 beats was below a threshold, the detection was classified as oversensing due to lead failure rather than a true ventricular tachyarrhythmia; stored episodes were used to test the ability of the algorithm to detect lead failures. The minimum baseline measure for the 24 lead failure episodes (0.28 +/- 0.34 mV-mV) was smaller than that for the 135 ventricular tachycardia episodes (40.8 +/- 43.0 mV-mV, P <.0001) and the 55 ventricular fibrillation episodes (19.1 +/- 22.8 mV-mV, P <.05). A minimum baseline <0.35 mV-mV threshold had a sensitivity of 83% (20/24) with a 100% (190/190) specificity. A baseline measure of the far-field electrogram had a high sensitivity and specificity for detecting lead failure noise compared with ventricular tachycardia or fibrillation.
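
    A minimal sketch of the baseline measure described above (sum times standard deviation of far-field electrogram samples in a 188-ms window around each sensed beat, with the minimum over the last 12 beats compared against 0.35 mV-mV). The sampling rate and the input arrays are assumptions; this is not the device implementation.

```python
import numpy as np

FS = 256                          # assumed sampling rate in Hz (illustrative)
HALF_WIN = int(0.188 * FS / 2)    # half of the 188-ms analysis window, in samples

def baseline_measure(far_field_egm, sense_idx):
    """Sum (mV) times standard deviation (mV) in the window around a sensed beat."""
    w = far_field_egm[sense_idx - HALF_WIN : sense_idx + HALF_WIN]
    return np.sum(w) * np.std(w)

def lead_failure_suspected(far_field_egm, last_12_sense_idx, threshold=0.35):
    """Minimum baseline measure over the last 12 beats below threshold -> noise."""
    measures = [baseline_measure(far_field_egm, i) for i in last_12_sense_idx]
    return min(measures) < threshold
```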

  18. Impact of Dual-Link Failures on Impairment-Aware Routed Networks

    DEFF Research Database (Denmark)

    Georgakilas, Konstantinos N; Katrinis, Kostas; Tzanakaki, Anna

    2010-01-01

    This paper evaluates the impact of dual-link failures on single-link failure resilient networks, while physical layer constraints are taken into consideration during demand routing, as dual link failures and equivalent situations appear to be quite probable in core optical networks. In particular...

  19. Evaluation of burst probability for tubes by Weibull distributions

    International Nuclear Information System (INIS)

    Kao, S.

    1975-10-01

    The investigation of candidate distributions that best describe the burst pressure failure probability characteristics of nuclear power steam generator tubes has been continued. To date it has been found that the Weibull distribution provides an acceptable fit for the available data from both the statistical and physical viewpoints. The reasons for the acceptability of the Weibull distribution are stated, together with the results of tests for the suitability of the fit. In exploring the acceptability of the Weibull distribution for the fitting, a graphical method called the "density-gram" is employed instead of the usual histogram. With this method a more sensible graphical observation of the empirical density may be made for cases where the available data are very limited. Based on these methods, estimates of failure pressure are made for the left-tail probabilities.
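
    A hedged sketch of the left-tail failure probability implied by a two-parameter Weibull fit of burst pressures; the shape and scale values below are illustrative assumptions, not fitted steam generator tube data.

```python
import numpy as np

beta, eta = 8.0, 50.0            # assumed Weibull shape and scale (MPa), illustrative

def burst_failure_prob(p):
    """P(burst pressure <= p) for a Weibull(beta, eta) fit: 1 - exp(-(p/eta)**beta)."""
    return 1.0 - np.exp(-(p / eta) ** beta)

for p in (20.0, 30.0, 40.0):     # example pressures in the left tail
    print(p, burst_failure_prob(p))
```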

  20. Two viewpoints for software failures and their relation in probabilistic safety assessment of digital instrumentation and control systems

    International Nuclear Information System (INIS)

    Kim, Man Cheol

    2015-01-01

    As the use of digital systems in nuclear power plants increases, the reliability of the software becomes one of the important issues in probabilistic safety assessment. In this paper, two viewpoints for a software failure during the operation of a digital system or a statistical software test are identified, and the relation between them is provided. In conventional software reliability analysis, a failure is mainly viewed with respect to the system operation. A new viewpoint with respect to the system input is suggested. The failure probability density functions for the two viewpoints are defined, and the relation between the two failure probability density functions is derived. Each failure probability density function can be derived from the other failure probability density function by applying the derived relation between the two failure probability density functions. The usefulness of the derived relation is demonstrated by applying it to the failure data obtained from the software testing of a real system. The two viewpoints and their relation, as identified in this paper, are expected to help us extend our understanding of the reliability of safety-critical software. (author)

  1. Imaging multipole gravity anomaly sources by 3D probability tomography

    International Nuclear Information System (INIS)

    Alaia, Raffaele; Patella, Domenico; Mauriello, Paolo

    2009-01-01

    We present a generalized theory of the probability tomography applied to the gravity method, assuming that any Bouguer anomaly data set can be caused by a discrete number of monopoles, dipoles, quadrupoles and octopoles. These elementary sources are used to characterize, in an as detailed as possible way and without any a priori assumption, the shape and position of the most probable minimum structure of the gravity sources compatible with the observed data set, by picking out the location of their centres and peculiar points of their boundaries related to faces, edges and vertices. A few synthetic examples using simple geometries are discussed in order to demonstrate the notably enhanced resolution power of the new approach, compared with a previous formulation that used only monopoles and dipoles. A field example related to a gravity survey carried out in the volcanic area of Mount Etna (Sicily, Italy) is presented, aimed at imaging the geometry of the minimum gravity structure down to 8 km of depth bsl

  2. Parameters governing the failure of steel components

    International Nuclear Information System (INIS)

    Schmitt, W.

    1977-01-01

    The most important feature of any component is the ability to safely carry the load it is designed for. The strength of the component is influenced mainly by three groups of parameters: 1. the loading of the structure: the possible loading cases are normal operation, testing, emergency and faulted conditions, and the kinds of loading can be divided into internal pressure, external forces and moments, and temperature loading; 2. the defects in the structure: cavities and inclusions, pores, flaws or cracks; 3. the material properties: Young's modulus, yield and ultimate strength, absorbed Charpy energy, fracture toughness, etc. For different failure modes one has to take into account different material properties; the loading and the defects are assumed to be within certain deterministic bounds, from which deterministic safety factors can be determined with respect to any failure mode and failure criterion. However, since all parameters have a certain scatter about a mean value, there is a probability of exceeding the given bounds. From the extrapolation of the distributions a value for the failure probability can be estimated. (orig.) [de]

  3. A multiple shock model for common cause failures using discrete Markov chain

    International Nuclear Information System (INIS)

    Chung, Dae Wook; Kang, Chang Soon

    1992-01-01

    The most widely used models in common cause analysis are (single) shock models such as the BFR and the MFR. However, a single shock model cannot treat individual common causes separately and relies on some unrealistic assumptions. A multiple shock model for common cause failures is developed using Markov chain theory. This model treats each common cause shock as a separately and sequentially occurring event, so as to capture the change in the failure probability distribution due to each common cause shock. The final failure probability distribution is evaluated and compared with that from the BFR model. The results show that the multiple shock model, which minimizes the assumptions in the BFR model, is more realistic and conservative than the BFR model. Further work for application is the estimation of parameters such as the common cause shock rate and the component failure probability given a shock, p, through data analysis.

  4. Reliability-based failure cause assessment of collapsed bridge during construction

    International Nuclear Information System (INIS)

    Choi, Hyun-Ho; Lee, Sang-Yoon; Choi, Il-Yoon; Cho, Hyo-Nam; Mahadevan, Sankaran

    2006-01-01

    Until now, the failure cause assessments in many forensic reports have usually been carried out using a deterministic approach. However, such a forensic investigation may lead to unreasonable results far from the real collapse scenario, because the deterministic approach does not systematically take into account information on the uncertainties involved in the failures of structures. A reliability-based failure cause assessment (reliability-based forensic engineering) methodology is developed which can incorporate the uncertainties involved in structures and structural failures, and it is applied to a collapsed bridge in order to identify the most critical failure scenario and find the cause that triggered the bridge collapse. Moreover, to save evaluation time and cost, an algorithm for automated event tree analysis (ETA) is proposed which makes it possible to automatically calculate the failure probabilities of the failure events and the occurrence probabilities of failure scenarios. Also, for the reliability analysis, uncertainties are estimated more reasonably by using a Bayesian approach based on the experimental laboratory testing data in the forensic report. To demonstrate its applicability, the proposed approach is applied to the Hang-ju Grand Bridge, which collapsed during construction, and compared with the deterministic approach.

  5. Locally Minimum Storage Regenerating Codes in Distributed Cloud Storage Systems

    Institute of Scientific and Technical Information of China (English)

    Jing Wang; Wei Luo; Wei Liang; Xiangyang Liu; Xiaodai Dong

    2017-01-01

    In distributed cloud storage systems, inevitably there exist multiple node failures at the same time. The existing methods of regenerating codes, including minimum storage regenerating (MSR) codes and minimum bandwidth regenerating (MBR) codes, are mainly designed to repair one single or several failed nodes, and are unable to meet the repair needs of distributed cloud storage systems. In this paper, we present locally minimum storage regenerating (LMSR) codes to recover multiple failed nodes at the same time. Specifically, the nodes in distributed cloud storage systems are divided into multiple local groups, and in each local group (4, 2) or (5, 3) MSR codes are constructed. Moreover, the grouping method of storage nodes and the repairing process of failed nodes in local groups are studied. Theoretical analysis shows that LMSR codes can achieve the same storage overhead as MSR codes. Furthermore, we verify by means of simulation that, compared with MSR codes, LMSR codes can reduce the repair bandwidth and disk I/O overhead effectively.

  6. Reliability analysis for the creep rupture mode of failure

    International Nuclear Information System (INIS)

    Vaidyanathan, S.

    1975-01-01

    An analytical study has been carried out to relate the factors of safety employed in the design of a component to the probability of failure in the thermal creep rupture mode. The analysis considers the statistical variations in the operating temperature, stress and rupture time, and applies the life fraction damage criterion as the indicator of failure. Typical results for solution annealed type 304 stainless steel for the temperature and stress variations expected in an LMFBR environment have been obtained. The analytical problem was solved by considering the joint distribution of the independent variables and deriving the distribution of the function associated with the probability of failure by integrating over the proper regions as dictated by the deterministic design rule. This leads to a triple integral for the final probability of failure in which the coefficients of variation associated with the temperature, stress and rupture time distributions can be specified by the user. The derivation is general, and can be used for time-varying stress histories and for the case of irradiated material where the rupture time varies with accumulated fluence. Example calculations applied to solution annealed type 304 stainless steel have been carried out for an assumed coefficient of variation of 2% for temperature and 6% for stress. The results show that the probability of failure associated with the time-dependent stress intensity limits specified in ASME Boiler and Pressure Vessel Code Section III, Code Case 1592, is less than 5 x 10^-8. Rupture under thermal creep conditions is a highly complicated phenomenon. It is believed that the present study will help in quantifying the reliability to be expected with deterministic design factors of safety.

  7. Methods for estimating drought streamflow probabilities for Virginia streams

    Science.gov (United States)

    Austin, Samuel H.

    2014-01-01

    Maximum likelihood logistic regression model equations used to estimate drought flow probabilities for Virginia streams are presented for 259 hydrologic basins in Virginia. Winter streamflows were used to estimate the likelihood of streamflows during the subsequent drought-prone summer months. The maximum likelihood logistic regression models identify probable streamflows from 5 to 8 months in advance. More than 5 million daily streamflow values collected over the period of record (January 1, 1900 through May 16, 2012) were compiled and analyzed over a minimum 10-year (maximum 112-year) period of record. The analysis yielded 46,704 equations with statistically significant fit statistics and parameter ranges, published in two tables in this report. These model equations produce summer month (July, August, and September) drought flow threshold probabilities as a function of streamflows during the previous winter months (November, December, January, and February). Example calculations are provided, demonstrating how to use the equations to estimate probable streamflows as much as 8 months in advance.
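
    An illustrative sketch of the form of model the report publishes (a logistic equation giving the probability that summer flow falls below a drought threshold as a function of a winter streamflow statistic); the coefficients here are hypothetical and do not correspond to any of the 46,704 fitted equations.

```python
import numpy as np

b0, b1 = 2.5, -0.08   # hypothetical intercept and winter-flow coefficient

def drought_probability(winter_mean_flow):
    """P(summer flow below the drought threshold | winter mean flow)."""
    z = b0 + b1 * winter_mean_flow
    return 1.0 / (1.0 + np.exp(-z))

print(drought_probability(40.0))   # example winter mean flow (assumed units)
```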

  8. Clinical Investigation of Treatment Failure in Type 2 Diabetic ...

    African Journals Online (AJOL)

    HP

    contributory factors in treatment failure in type 2 diabetic patients taking metformin and glibenclamide in a tertiary ... that took metformin and glibenclamide for a minimum of 1 year were examined. Patients ..... obesity in adults and children.

  9. Application of importance sampling method in sliding failure simulation of caisson breakwaters

    Science.gov (United States)

    Wang, Yu-chi; Wang, Yuan-zhan; Li, Qing-mei; Chen, Liang-zhi

    2016-06-01

    It is assumed that a storm wave takes place once a year during the design period, and N histories of storm waves are generated on the basis of the wave spectrum corresponding to the N-year design period. The responses of the breakwater to the N histories of storm waves in the N-year design period are calculated with a mass-spring-dashpot model and taken as a set of samples. The failure probability of caisson breakwaters during the design period of N years is obtained by the statistical analysis of many sets of samples. A key issue is to improve the efficiency of the common Monte Carlo simulation method in the failure probability estimation of caisson breakwaters over the complete life cycle. In this paper, the kernel method of importance sampling, which can greatly increase the efficiency of the failure probability calculation for caisson breakwaters, is proposed to estimate the failure probability of caisson breakwaters over the complete life cycle. The effectiveness of the kernel method is investigated by an example, and it is indicated that the calculation efficiency of the kernel method is over 10 times that of the common Monte Carlo simulation method.
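
    A generic importance-sampling sketch of a small failure probability (shifting the sampling density toward the failure region and re-weighting by the likelihood ratio); this illustrates the general idea rather than the kernel method proposed in the paper, and the limit-state function and distributions are hypothetical.

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(0)

def g(x):
    """Hypothetical limit-state function: failure when g(x) <= 0."""
    return 4.0 - x

# Nominal density f (standard normal load) and a proposal density h shifted
# toward the failure region; samples from h are re-weighted by f/h.
x = rng.normal(loc=4.0, scale=1.0, size=100_000)          # samples from h
weights = norm.pdf(x, 0.0, 1.0) / norm.pdf(x, 4.0, 1.0)   # likelihood ratios
pf = np.mean((g(x) <= 0) * weights)
print(pf)   # compare with the exact value 1 - Phi(4) ~ 3.2e-5
```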

  10. Application of Probability Calculations to the Study of the Permissible Step and Touch Potentials to Ensure Personnel Safety

    International Nuclear Information System (INIS)

    Eisawy, E.A.

    2011-01-01

    The aim of this paper is to develop a practical method to evaluate the actual step and touch potential distributions in order to determine the risk of failure of a grounding system. The failure probability, indicating the safety level of the grounding system, is related to both the applied (stress) and withstand (strength) step or touch potentials. The probability distributions of the applied step and touch potentials, as well as of the corresponding withstand step and touch potentials which represent the capability of the human body to resist stress potentials, are presented. These two distributions are used to evaluate the failure probability of the grounding system, which denotes the probability that the applied potential exceeds the withstand potential. The method treats the resistance of the human body, the foot contact resistance, and the fault clearing time as independent random variables, rather than as the fixed values used in previous analyses to determine the safety requirements for a given grounding system.
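
    A minimal stress-strength sketch of the failure probability P(applied potential > withstand potential), with body resistance, foot contact resistance and clearing time drawn as random variables; the withstand voltage uses a Dalziel-type tolerable-current form, and every distribution and numeric value is an assumption for illustration only, not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(42)
n = 200_000

body_res = rng.lognormal(mean=np.log(1000.0), sigma=0.3, size=n)  # ohms, assumed
foot_res = rng.uniform(500.0, 3000.0, size=n)                     # ohms, assumed
clear_t  = rng.uniform(0.1, 0.5, size=n)                          # seconds, assumed

# Withstand (strength): Dalziel-type tolerable touch voltage for a body current
# limit of 0.116/sqrt(t) amperes.
withstand = (body_res + 1.5 * foot_res) * 0.116 / np.sqrt(clear_t)

# Applied (stress): touch potential during a fault, assumed normally distributed.
applied = rng.normal(loc=600.0, scale=150.0, size=n)               # volts

print(f"failure probability: {np.mean(applied > withstand):.3e}")
```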

  11. Calculating the albedo characteristics by the method of transmission probabilities

    International Nuclear Information System (INIS)

    Lukhvich, A.A.; Rakhno, I.L.; Rubin, I.E.

    1983-01-01

    The possibility of using the method of transmission probabilities for calculating the albedo characteristics of homogeneous and heterogeneous zones is studied. The transmission probabilities method is a numerical method for solving the transport equation in integral form. All calculations have been carried out in a one-group approximation for planes and rods with different optical thicknesses and capture-to-scattering ratios. The calculations for plane and cylindrical geometries have shown that the numerical method of transmission probabilities can be used to calculate the albedo characteristics of homogeneous and heterogeneous zones with high accuracy. In this case the computer time consumption is minimal, even for the cylindrical geometry, if the interpolation calculation of characteristics is used for the neutrons of the first path.

  12. Probability problems in seismic risk analysis and load combinations for nuclear power plants

    International Nuclear Information System (INIS)

    George, L.L.

    1983-01-01

    This workshop describes some probability problems in power plant reliability and maintenance analysis. The problems are seismic risk analysis, loss of load probability, load combinations, and load sharing. The seismic risk problem is to compute power plant reliability given an earthquake and the resulting risk. Component survival occurs if its peak random response to the earthquake does not exceed its strength. Power plant survival is a complicated Boolean function of component failures and survivals. The responses and strengths of components are dependent random processes, and the peak responses are maxima of random processes. The resulting risk is the expected cost of power plant failure

  13. A Novel RSPF Approach to Prediction of High-Risk, Low-Probability Failure Events

    Data.gov (United States)

    National Aeronautics and Space Administration — Particle filters (PF) have been established as the de facto state of the art in failure prognosis, and particularly in the representation and management of...

  14. ESPEN guidelines on chronic intestinal failure in adults

    NARCIS (Netherlands)

    Pironi, L; Arends, J.; Bozzetti, F.; Cuerda, C.; Gillanders, L.; Jeppesen, P.B.; Joly, F.; Kelly, D.; Lal, S.; Staun, M.; Szczepanek, K.; Gossum, A. van; Wanten, G.J.A.; Schneider, S.M.

    2016-01-01

    BACKGROUND & AIMS: Chronic Intestinal Failure (CIF) is the long-lasting reduction of gut function, below the minimum necessary for the absorption of macronutrients and/or water and electrolytes, such that intravenous supplementation is required to maintain health and/or growth. CIF is the rarest

  15. Conditional probability of the tornado missile impact given a tornado occurrence

    International Nuclear Information System (INIS)

    Goodman, J.; Koch, J.E.

    1982-01-01

    Using an approach based on statistical mechanics, an expression for the probability of the first missile strike is developed. The expression depends on two generic parameters (the injection probability eta(F) and the height distribution psi(Z,F)), which are developed in this study, and one plant-specific parameter (the number of potential missiles N_p). The expression for the joint probability of simultaneous impact on multiple targets is also developed. This expression is applicable to the calculation of the probability of common cause failure due to tornado missiles. It is shown that the probability of the first missile strike can be determined using a uniform missile distribution model. It is also shown that the conditional probability of the second strike, given the first, is underestimated by the uniform model. The probability of the second strike is greatly increased if the missiles are in clusters large enough to cover both targets.

  16. A procedure to identify and to assess risk parameters in a SCR (Steel Catenary Riser) due to the fatigue failure

    Energy Technology Data Exchange (ETDEWEB)

    Stefane, Wania [Universidade Estadual de Campinas (UNICAMP), Campinas, SP (Brazil). Faculdade de Engenharia Mecanica; Morooka, Celso K. [Universidade Estadual de Campinas (UNICAMP), Campinas, SP (Brazil). Dept. de Engenharia de Petroleo. Centro de Estudos de Petroleo; Pezzi Filho, Mario [PETROBRAS S.A., Rio de Janeiro, RJ (Brazil). E and P. ENGP/IPMI/ES; Matt, Cyntia G.C.; Franciss, Ricardo [PETROBRAS S.A., Rio de Janeiro, RJ (Brazil). Centro de Pesquisas (CENPES)

    2009-12-19

    The discovery of offshore fields in ultra deep water and the presence of reservoirs located at great depths below the seabed require innovative solutions for offshore oil production systems. Many riser configurations have emerged as economically viable technological solutions for these scenarios. Therefore, the study and development of methodologies applied to riser design, and of procedures to calculate and dimension production risers taking into account the effects of metocean conditions such as waves, current and platform motion on fatigue failure, are fundamental. The random nature of these conditions, as well as of the mechanical characteristics of the riser components, calls for a probabilistic treatment to ensure the greatest reliability for risers and minimum risks associated with different aspects of the operation, such as the safety of the installation, economic concerns and the environment. The current work presents a procedure for the identification and assessment of the main risk parameters when considering fatigue failure. The static and dynamic behavior of a steel catenary riser (SCR) under the effects of metocean conditions, and the uncertainties related to total cumulative damage (the Miner-Palmgren rule), are taken into account. The methodology adopted is probabilistic and the approach is analytical. The procedure is based on the First Order Reliability Method (FORM), which usually requires low computational effort and offers acceptable accuracy. The suggested procedure is applied to two practical cases, one using data available from the literature and the second using data collected from an actual Brazilian offshore field operation. For both cases, results for the probability of failure due to fatigue were obtained at different locations along the length of an SCR connected to a semi-submersible platform. From these results, the sensitivity of the probability of fatigue failure of an SCR could be verified, and the most influential parameters could also be identified.

  17. Understanding failures in petascale computers

    International Nuclear Information System (INIS)

    Schroeder, Bianca; Gibson, Garth A

    2007-01-01

    With petascale computers only a year or two away there is a pressing need to anticipate and compensate for a probable increase in failure and application interruption rates. Researchers, designers and integrators have available to them far too little detailed information on the failures and interruptions that even smaller terascale computers experience. The information that is available suggests that application interruptions will become far more common in the coming decade, and the largest applications may surrender large fractions of the computer's resources to taking checkpoints and restarting from a checkpoint after an interruption. This paper reviews sources of failure information for compute clusters and storage systems, projects failure rates and the corresponding decrease in application effectiveness, and discusses coping strategies such as application-level checkpoint compression and system level process-pairs fault-tolerance for supercomputing. The need for a public repository for detailed failure and interruption records is particularly concerning, as projections from one architectural family of machines to another are widely disputed. To this end, this paper introduces the Computer Failure Data Repository and issues a call for failure history data to publish in it

  18. Re‐estimated effects of deep episodic slip on the occurrence and probability of great earthquakes in Cascadia

    Science.gov (United States)

    Beeler, Nicholas M.; Roeloffs, Evelyn A.; McCausland, Wendy

    2013-01-01

    Mazzotti and Adams (2004) estimated that rapid deep slip during typically two-week-long episodes beneath northern Washington and southern British Columbia increases the probability of a great Cascadia earthquake by 30–100 times relative to the probability during the ∼58 weeks between slip events. Because the corresponding absolute probability remains very low at ∼0.03% per week, their conclusion is that though it is more likely that a great earthquake will occur during a rapid slip event than during other times, a great earthquake is unlikely to occur during any particular rapid slip event. This previous estimate used a failure model in which great earthquakes initiate instantaneously at a stress threshold. We refine the estimate, assuming a delayed failure model that is based on laboratory‐observed earthquake initiation. Laboratory tests show that failure of intact rock in shear and the onset of rapid slip on pre‐existing faults do not occur at a threshold stress. Instead, slip onset is gradual and shows a damped response to stress and loading rate changes. The characteristic time of failure depends on loading rate and effective normal stress. Using this model, the probability enhancement during the period of rapid slip in Cascadia is negligible for effective normal stresses of 10 MPa or more and only increases by 1.5 times for an effective normal stress of 1 MPa. We present arguments that the hypocentral effective normal stress exceeds 1 MPa. In addition, the probability enhancement due to rapid slip extends into the interevent period. With this delayed failure model, for effective normal stresses greater than or equal to 50 kPa, it is more likely that a great earthquake will occur between the periods of rapid deep slip than during them. Our conclusion is that great earthquake occurrence is not significantly enhanced by episodic deep slip events.

  19. Multi-state systems with selective propagated failures and imperfect individual and group protections

    International Nuclear Information System (INIS)

    Levitin, Gregory; Xing Liudong; Ben-Haim, Hanoch; Da, Yuanshun

    2011-01-01

    The paper presents an algorithm for evaluating performance distribution of complex series–parallel multi-state systems with propagated failures and imperfect protections. The failure propagation can have a selective effect, which means that the failures originated from different system elements can cause failures of different subsets of elements. Individual elements or some disjoint groups of elements can be protected from propagation of failures originated outside the group. The protections can fail with given probabilities. The suggested algorithm is based on the universal generating function approach and a generalized reliability block diagram method. The performance distribution evaluation procedure is repeated for each combination of propagated failures and protection failures. Both an analytical example and a numerical example are provided to illustrate the suggested algorithm. - Highlights: ► Systems with propagated failures and imperfect protections are considered. ► Failures originated from different elements can affect different subsets of elements. ► Protections of individual elements or groups of elements can fail with given probabilities. ► An algorithm for evaluating multi-state system performance distribution is suggested.
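
    A generic universal generating function sketch (series-parallel composition of discrete performance distributions) to illustrate the kind of computation the algorithm builds on; it does not include the paper's propagated-failure and protection logic, and the element distributions below are hypothetical.

```python
from collections import defaultdict
from itertools import product

def compose(u1, u2, op):
    """Combine two performance distributions {performance: probability} with op."""
    out = defaultdict(float)
    for (g1, p1), (g2, p2) in product(u1.items(), u2.items()):
        out[op(g1, g2)] += p1 * p2
    return dict(out)

# Hypothetical element performance distributions (performance level: probability)
e1 = {0: 0.05, 5: 0.95}
e2 = {0: 0.10, 5: 0.90}
e3 = {0: 0.02, 8: 0.98}

parallel = compose(e1, e2, lambda a, b: a + b)   # capacities add in parallel
system   = compose(parallel, e3, min)            # series: bottleneck capacity
print(system)                                     # system performance distribution
```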

  20. Introduction to probability and statistics for ecosystem managers simulation and resampling

    CERN Document Server

    Haas, Timothy C

    2013-01-01

    Explores computer-intensive probability and statistics for ecosystem management decision making Simulation is an accessible way to explain probability and stochastic model behavior to beginners. This book introduces probability and statistics to future and practicing ecosystem managers by providing a comprehensive treatment of these two areas. The author presents a self-contained introduction for individuals involved in monitoring, assessing, and managing ecosystems and features intuitive, simulation-based explanations of probabilistic and statistical concepts. Mathematical programming details are provided for estimating ecosystem model parameters with Minimum Distance, a robust and computer-intensive method. The majority of examples illustrate how probability and statistics can be applied to ecosystem management challenges. There are over 50 exercises - making this book suitable for a lecture course in a natural resource and/or wildlife management department, or as the main text in a program of self-stud...

  1. Dependent failure analysis of NPP data bases

    International Nuclear Information System (INIS)

    Cooper, S.E.; Lofgren, E.V.; Samanta, P.K.; Wong Seemeng

    1993-01-01

    A technical approach for analyzing plant-specific data bases for vulnerabilities to dependent failures has been developed and applied. Since the focus of this work is to aid in the formulation of defenses to dependent failures, rather than to quantify dependent failure probabilities, the approach of this analysis is critically different. For instance, the determination of component failure dependencies has been based upon identical failure mechanisms related to component piecepart failures, rather than failure modes. Also, component failures involving all types of component function loss (e.g., catastrophic, degraded, incipient) are equally important to the predictive purposes of dependent failure defense development. Consequently, dependent component failures are identified with a different dependent failure definition which uses a component failure mechanism categorization scheme in this study. In this context, clusters of component failures which satisfy the revised dependent failure definition are termed common failure mechanism (CFM) events. Motor-operated valves (MOVs) in two nuclear power plant data bases have been analyzed with this approach. The analysis results include seven different failure mechanism categories; identified potential CFM events; an assessment of the risk-significance of the potential CFM events using existing probabilistic risk assessments (PRAs); and postulated defenses to the identified potential CFM events. (orig.)

  2. A pragmatic approach to estimate alpha factors for common cause failure analysis

    International Nuclear Information System (INIS)

    Hassija, Varun; Senthil Kumar, C.; Velusamy, K.

    2014-01-01

    Highlights: • Estimation of coefficients in the alpha factor model for common cause analysis. • A derivation of plant-specific alpha factors is demonstrated. • We examine the sensitivity of the common cause contribution to total system failure. • We compare beta factor and alpha factor models for various redundant configurations. • The use of alpha factors is preferable, especially for large redundant systems. - Abstract: Most modern technological systems are deployed with high redundancy, but they still fail mainly on account of common cause failures (CCF). Various models such as the Beta Factor, Multiple Greek Letter, Binomial Failure Rate and Alpha Factor models exist for estimating the risk from common cause failures. Amongst these, the alpha factor model is considered most suitable for highly redundant systems, as it arrives at common cause failure probabilities from a set of ratios of failures and the total component failure probability Q_T. In the present study, the alpha factor model is applied to the assessment of CCF of safety systems deployed at two nuclear power plants. A method to overcome the difficulties in estimating the coefficients of the model (viz., the alpha factors), the importance of deriving plant-specific alpha factors, and the sensitivity of the common cause contribution to the total system failure probability with respect to the hazard imposed by various CCF events are highlighted. An approach described in NUREG/CR-5500 is extended in this study to provide more explicit guidance for a statistical approach to derive plant-specific coefficients for CCF analysis, especially for highly redundant systems. The procedure is expected to aid regulators in independent safety assessment.
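
    A sketch of the standard alpha-factor quantification that this kind of CCF assessment rests on, using the common non-staggered-testing formula (as given, for example, in NUREG/CR-5485); the alpha values and the total component failure probability below are hypothetical.

```python
from math import comb

def ccf_probabilities(alphas, q_total):
    """Q_k for a common cause group of size m = len(alphas), non-staggered testing:
    Q_k = (k / C(m-1, k-1)) * (alpha_k / alpha_t) * Q_T, alpha_t = sum(k * alpha_k)."""
    m = len(alphas)
    alpha_t = sum(k * a for k, a in enumerate(alphas, start=1))
    return {
        k: (k / comb(m - 1, k - 1)) * (alphas[k - 1] / alpha_t) * q_total
        for k in range(1, m + 1)
    }

alphas = [0.95, 0.03, 0.015, 0.005]   # alpha_1..alpha_4, hypothetical values
print(ccf_probabilities(alphas, q_total=1.0e-3))
```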

  3. Single shell tank sluicing history and failure frequency

    International Nuclear Information System (INIS)

    HERTZEL, J.S.

    1998-01-01

    This document assesses the potential for failure of the single-shell tanks (SSTs) that are presumably sound and helps to establish the retrieval priorities for these and the assumed leakers. Furthermore, this report examines probabilities of SST failure as a function of age and operational history, and provides a simple statistical summary of historical leak volumes, leak rates, and corrosion factor

  4. Temporal-varying failures of nodes in networks

    Science.gov (United States)

    Knight, Georgie; Cristadoro, Giampaolo; Altmann, Eduardo G.

    2015-08-01

    We consider networks in which random walkers are removed because of the failure of specific nodes. We interpret the rate of loss as a measure of the importance of nodes, a notion we denote as failure centrality. We show that the degree of the node is not sufficient to determine this measure and that, in a first approximation, the shortest loops through the node have to be taken into account. We propose approximations of the failure centrality which are valid for temporal-varying failures, and we dwell on the possibility of externally changing the relative importance of nodes in a given network by exploiting the interference between the loops of a node and the cycles of the temporal pattern of failures. In the limit of long failure cycles we show analytically that the escape in a node is larger than the one estimated from a stochastic failure with the same failure probability. We test our general formalism in two real-world networks (air-transportation and e-mail users) and show how communities lead to deviations from predictions for failures in hubs.

  5. Use on non-conjugate prior distributions in compound failure models. Final technical report

    International Nuclear Information System (INIS)

    Shultis, J.K.; Johnson, D.E.; Milliken, G.A.; Eckhoff, N.D.

    1981-12-01

    Several theoretical and computational techniques are presented for compound failure models in which the failure rate or failure probability for a class of components is considered to be a random variable. Both the failure-on-demand and failure-rate situations are considered. Ten different prior families are presented for describing the variation or uncertainty of the failure parameter. Methods considered for estimating values for the prior parameters from a given set of failure data are (1) matching data moments to those of the prior distribution, (2) matching data moments to those of the compound marginal distribution, and (3) the marginal maximum likelihood method. Numerical methods for computing the parameter estimators for all ten prior families are presented, as well as methods for obtaining estimates of the variances and covariance of the parameter estimators, and it is shown that various confidence, probability, and tolerance intervals can be evaluated. Finally, to test the resulting failure models against the given failure data, generalized chi-square and Kolmogorov-Smirnov goodness-of-fit tests are proposed together with a test to eliminate outliers from the failure data. Computer codes based on the results presented here have been prepared and are presented in a companion report

  6. Beyond reliability, multi-state failure analysis of satellite subsystems: A statistical approach

    International Nuclear Information System (INIS)

    Castet, Jean-Francois; Saleh, Joseph H.

    2010-01-01

    Reliability is widely recognized as a critical design attribute for space systems. In recent articles, we conducted nonparametric analyses and Weibull fits of satellite and satellite subsystems reliability for 1584 Earth-orbiting satellites launched between January 1990 and October 2008. In this paper, we extend our investigation of failures of satellites and satellite subsystems beyond the binary concept of reliability to the analysis of their anomalies and multi-state failures. In reliability analysis, the system or subsystem under study is considered to be either in an operational or failed state; multi-state failure analysis introduces 'degraded states' or partial failures, and thus provides more insights through finer resolution into the degradation behavior of an item and its progression towards complete failure. The database used for the statistical analysis in the present work identifies five states for each satellite subsystem: three degraded states, one fully operational state, and one failed state (complete failure). Because our dataset is right-censored, we calculate the nonparametric probability of transitioning between states for each satellite subsystem with the Kaplan-Meier estimator, and we derive confidence intervals for each probability of transitioning between states. We then conduct parametric Weibull fits of these probabilities using the Maximum Likelihood Estimation (MLE) approach. After validating the results, we compare the reliability versus multi-state failure analyses of three satellite subsystems: the thruster/fuel; the telemetry, tracking, and control (TTC); and the gyro/sensor/reaction wheel subsystems. The results are particularly revealing of the insights that can be gleaned from multi-state failure analysis and the deficiencies, or blind spots, of the traditional reliability analysis. In addition to the specific results provided here, which should prove particularly useful to the space industry, this work highlights the importance
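
    Because the dataset is right-censored, the state-transition probabilities mentioned above are estimated nonparametrically; a minimal sketch of the Kaplan-Meier estimator on hypothetical subsystem data is shown below (the numbers are invented, not drawn from the satellite database).

```python
import numpy as np

def kaplan_meier(times, observed):
    """Kaplan-Meier estimate of the survivor function for right-censored data.

    times    -- time to the transition (e.g., entry into a degraded state) or
                to censoring for each unit
    observed -- 1 if the transition was observed, 0 if the unit was censored
    Returns the distinct event times and the estimated survival probabilities."""
    times = np.asarray(times, dtype=float)
    observed = np.asarray(observed, dtype=int)
    event_times = np.unique(times[observed == 1])
    surv, s = [], 1.0
    for t in event_times:
        at_risk = np.sum(times >= t)                     # units still under observation
        events = np.sum((times == t) & (observed == 1))  # transitions at time t
        s *= 1.0 - events / at_risk
        surv.append(s)
    return event_times, np.array(surv)

# Hypothetical on-orbit years to a degraded-state transition (0 = still fully
# operational at the end of observation, i.e. right-censored).
t = [0.5, 1.2, 2.0, 3.1, 4.0, 4.0, 5.5, 6.0]
d = [1, 1, 0, 1, 0, 1, 0, 1]
print(kaplan_meier(t, d))
```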

  7. Fire behavior simulation in Mediterranean forests using the minimum travel time algorithm

    Science.gov (United States)

    Kostas Kalabokidis; Palaiologos Palaiologou; Mark A. Finney

    2014-01-01

    Recent large wildfires in Greece exemplify the need for pre-fire burn probability assessment and possible landscape fire flow estimation to enhance fire planning and resource allocation. The Minimum Travel Time (MTT) algorithm, incorporated as FlamMap's version five module, provides valuable fire behavior functions, while enabling multi-core utilization for the...

  8. A case of multiple organ failure induced by postoperative radiation therapy probably evoking oxidative stress

    International Nuclear Information System (INIS)

    Soejima, Akinori; Ishizuka, Shynji; Suzuki, Michihiko; Minoshima, Shinobu; Nakabayashi, Kimimasa; Kitamoto, Kiyoshi; Nagasawa, Toshihiko

    1995-01-01

    In recent years, several laboratories have suggested that serum levels of antioxidant activity and redox balance are reduced in patients with chronic renal failure. Some clinical reports have also proposed that defective serum antioxidative enzymes may contribute to a certain uremic toxicity through peroxidative cell damage. A 48-year-old woman was referred to us from the surgical department of our hospital because of consciousness disturbance, pancytopenia and acute acceleration of chronic azotemia after postoperative radiation therapy. We diagnosed acute acceleration of chronic renal failure with severe acidemia and started hemodialysis therapy immediately. Two days after admission to our department, she developed upper abdominal sharp pain and bradyarrhythmia. Serum amylase activity was elevated markedly and the ECG finding showed myocardial ischemia. On the 24th hospital day these complications were treated successfully with conservative therapy and hemodialysis. We considered that radiation therapy in this patient with chronic renal failure evoked marked oxidative stress and that deficiency of transferrin played an important role in peroxidative cell damage. (author)

  9. Recurrent implantation failure: definition and management.

    Science.gov (United States)

    Coughlan, C; Ledger, W; Wang, Q; Liu, Fenghua; Demirol, Aygul; Gurgan, Timur; Cutting, R; Ong, K; Sallam, H; Li, T C

    2014-01-01

    Recurrent implantation failure refers to failure to achieve a clinical pregnancy after transfer of at least four good-quality embryos in a minimum of three fresh or frozen cycles in a woman under the age of 40 years. The failure to implant may be a consequence of embryo or uterine factors. Thorough investigations should be carried out to ascertain whether there is any underlying cause of the condition. Ovarian function should be assessed by measurement of antral follicle count, FSH and anti-Müllerian hormone. Increased sperm DNA fragmentation may be a contributory cause. Various uterine pathologies including fibroids, endometrial polyps, congenital anomalies and intrauterine adhesions should be excluded by ultrasonography and hysteroscopy. Hydrosalpinges are a recognized cause of implantation failure and should be excluded by hysterosalpingogram; if necessary, laparoscopy should be performed to confirm or refute the diagnosis. Treatment offered should be evidence based, aimed at improving embryo quality or endometrial receptivity. Gamete donation or surrogacy may be necessary if there is no realistic chance of success with further IVF attempts. Copyright © 2013. Published by Elsevier Ltd.

  10. A Framework for Final Drive Simultaneous Failure Diagnosis Based on Fuzzy Entropy and Sparse Bayesian Extreme Learning Machine

    Directory of Open Access Journals (Sweden)

    Qing Ye

    2015-01-01

    This research proposes a novel framework for final drive simultaneous failure diagnosis containing feature extraction, training of paired diagnostic models, generation of a decision threshold, and recognition of simultaneous failure modes. In the feature extraction module, wavelet packet transform and fuzzy entropy are adopted to reduce noise interference and extract representative features of each failure mode. Single-failure samples are used to construct probability classifiers based on paired sparse Bayesian extreme learning machines, which are trained only on single failure modes and inherit the high generalization and sparsity of the sparse Bayesian learning approach. To generate an optimal decision threshold that converts the probability outputs obtained from the classifiers into final simultaneous failure modes, this research proposes using samples containing both single and simultaneous failure modes together with a grid search method, which is superior to traditional techniques in global optimization. Compared with other frequently used diagnostic approaches based on support vector machines and probabilistic neural networks, experimental results based on the F1-measure verify that the diagnostic accuracy and efficiency of the proposed framework, which are crucial for simultaneous failure diagnosis, are superior to those of the existing approaches.
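
    To make the thresholding step concrete, the sketch below chooses a single decision threshold by grid search on a validation set containing both single and simultaneous failure modes, scoring each candidate with the micro-averaged F1-measure. scikit-learn is assumed for the metric, and the arrays are hypothetical stand-ins for the classifiers' probability outputs.

```python
import numpy as np
from sklearn.metrics import f1_score

def choose_threshold(prob_val, y_val, grid=np.linspace(0.05, 0.95, 19)):
    """Grid search for the decision threshold that converts per-mode
    probability outputs into (possibly simultaneous) failure decisions,
    maximizing micro-averaged F1 on the validation set."""
    best_t, best_f1 = None, -1.0
    for t in grid:
        pred = (prob_val >= t).astype(int)
        f1 = f1_score(y_val, pred, average="micro", zero_division=0)
        if f1 > best_f1:
            best_t, best_f1 = t, f1
    return best_t, best_f1

# Hypothetical validation data: rows = samples, columns = failure modes.
probs = np.array([[0.9, 0.1, 0.2], [0.6, 0.7, 0.1], [0.2, 0.1, 0.8]])
labels = np.array([[1, 0, 0], [1, 1, 0], [0, 0, 1]])
print(choose_threshold(probs, labels))
```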

  11. A comparative study of failure criteria in probabilistic fields and stochastic failure envelopes of composite materials

    International Nuclear Information System (INIS)

    Nakayasu, Hidetoshi; Maekawa, Zen'ichiro

    1997-01-01

    One of the major objectives of this paper is to offer a practical tool for materials design of unidirectional composite laminates under in-plane multiaxial load. Design-oriented failure criteria of composite materials are applied to construct the evaluation model of probabilistic safety based on the extended structural reliability theory. Typical failure criteria such as maximum stress, maximum strain and quadratic polynomial failure criteria are compared from the viewpoint of reliability-oriented materials design of composite materials. The new design diagram which shows the feasible region on in-plane strain space and corresponds to safety index or failure probability is also proposed. These stochastic failure envelope diagrams which are drawn in in-plane strain space enable one to evaluate the stochastic behavior of a composite laminate with any lamination angle under multi-axial stress or strain condition. Numerical analysis for a graphite/epoxy laminate of T300/5208 is shown for the comparative verification of failure criteria under the various combinations of multi-axial load conditions and lamination angles. The stochastic failure envelopes of T300/5208 were also described in in-plane strain space

  12. Exact results for survival probability in the multistate Landau-Zener model

    International Nuclear Information System (INIS)

    Volkov, M V; Ostrovsky, V N

    2004-01-01

    An exact formula is derived for survival probability in the multistate Landau-Zener model in the special case where the initially populated state corresponds to the extremal (maximum or minimum) slope of a linear diabatic potential curve. The formula was originally guessed by S Brundobler and V Elser (1993 J. Phys. A: Math. Gen. 26 1211) based on numerical calculations. It is a simple generalization of the expression for the probability of diabatic passage in the famous two-state Landau-Zener model. Our result is obtained via analysis and summation of the entire perturbation theory series
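
    For orientation, the Brundobler-Elser expression referred to above can be written as follows, in an assumed notation where state 1 is the initially populated state with extremal slope beta_1, the other diabatic levels have slopes beta_n, V_1n are the constant couplings, and hbar = 1; this is a sketch of the formula's structure, not the paper's own notation.

```latex
P_{1 \to 1} \;=\; \exp\!\left(-\,2\pi \sum_{n \neq 1} \frac{|V_{1n}|^{2}}{|\beta_{1}-\beta_{n}|}\right)
```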

  13. Estimating the probability that the Taser directly causes human ventricular fibrillation.

    Science.gov (United States)

    Sun, H; Haemmerich, D; Rahko, P S; Webster, J G

    2010-04-01

    This paper describes the first methodology and results for estimating the order of probability for Tasers directly causing human ventricular fibrillation (VF). The probability of an X26 Taser causing human VF was estimated using: (1) current density near the human heart estimated by using 3D finite-element (FE) models; (2) prior data of the maximum dart-to-heart distances that caused VF in pigs; (3) minimum skin-to-heart distances measured in erect humans by echocardiography; and (4) dart landing distribution estimated from police reports. The estimated mean probability of human VF was 0.001 for data from a pig having a chest wall resected to the ribs and 0.000006 for data from a pig with no resection when inserting a blunt probe. The VF probability for a given dart location decreased with the dart-to-heart horizontal distance (radius) on the skin surface.

  14. Family of probability distributions derived from maximal entropy principle with scale invariant restrictions.

    Science.gov (United States)

    Sonnino, Giorgio; Steinbrecher, György; Cardinali, Alessandro; Sonnino, Alberto; Tlidi, Mustapha

    2013-01-01

    Using statistical thermodynamics, we derive a general expression of the stationary probability distribution for thermodynamic systems driven out of equilibrium by several thermodynamic forces. The local equilibrium is defined by imposing the minimum entropy production and the maximum entropy principle under the scale invariance restrictions. The obtained probability distribution presents a singularity that has immediate physical interpretation in terms of the intermittency models. The derived reference probability distribution function is interpreted as time and ensemble average of the real physical one. A generic family of stochastic processes describing noise-driven intermittency, where the stationary density distribution coincides exactly with the one resulting from entropy maximization, is presented.

  15. Influence of reinforcement's corrosion into hyperstatic reinforced concrete beams: a probabilistic failure scenarios analysis

    Directory of Open Access Journals (Sweden)

    G. P. PELLIZZER

    This work aims to study the mechanical effects of reinforcement corrosion in hyperstatic reinforced concrete beams. The focus is the probabilistic determination of changes in individual failure scenarios as well as changes in global failure along time. The assumed limit state functions analytically describe the bending and shear resistance of reinforced concrete rectangular cross sections as functions of steel and concrete strength and section dimensions. Empirical laws that penalize the steel yield stress and the reinforcement area over time were incorporated, in addition to Fick's law, which models chloride penetration into the concrete pores. Reliability theory was applied using the Monte Carlo simulation method, which assesses each individual probability of failure. The probability of global structural failure was determined based on the concept of a failure tree. The results for a hyperstatic reinforced concrete beam showed that reinforcement corrosion changes the failure scenario modes: failure modes that are unimportant in the design phase become important after corrosion starts.
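
    A toy version of this workflow is sketched below: a Monte Carlo estimate of the time-dependent failure probability of a two-section (series) member whose reinforcement area and yield stress are penalized after a corrosion initiation time obtained from a Fick's-law-type model. Every distribution, rate and dimension in the sketch is invented for illustration and is not taken from the paper.

```python
import numpy as np

def failure_probability(t_years, n_sim=100_000, seed=0):
    """Toy Monte Carlo estimate of P(failure) at time t for a hyperstatic RC
    member idealized as two critical sections in series (an OR-gate in the
    failure tree).  Corrosion initiation time, loss rates, loads and
    resistances are illustrative placeholders."""
    rng = np.random.default_rng(seed)
    t_init = 8.0                                   # corrosion initiation time [years]
    loss = 0.005 * max(t_years - t_init, 0.0)      # corroded fraction of bar area
    fy = rng.normal(500.0, 30.0, n_sim) * (1.0 - 0.5 * loss)  # penalized yield stress [MPa]
    As = 1.0e-3 * (1.0 - loss)                     # penalized reinforcement area [m^2]
    load = rng.gumbel(150.0, 20.0, n_sim)          # bending demand [kN*m]
    resist1 = 0.9 * As * fy * 1e3 * 0.50           # section 1 capacity [kN*m]
    resist2 = 0.9 * As * fy * 1e3 * 0.48           # section 2 capacity [kN*m]
    failed = (resist1 < load) | (resist2 < load)   # series system fails if any section fails
    return failed.mean()

for t in (0, 10, 25, 50):
    print(t, failure_probability(t))
```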

  16. Dynamic Minimum Spanning Forest with Subpolynomial Worst-case Update Time

    DEFF Research Database (Denmark)

    Nanongkai, Danupon; Saranurak, Thatchaphol; Wulff-Nilsen, Christian

    2017-01-01

    Abstract: We present a Las Vegas algorithm for dynamically maintaining a minimum spanning forest of an n-node graph undergoing edge insertions and deletions. Our algorithm guarantees an O(n^{o(1)}) worst-case update time with high probability. This significantly improves the two recent Las Vegas algo...... the previous approach in [2], [3] which is based on Frederickson's 2-dimensional topology tree [6] and illustrates a new application to this old technique....

  17. ERG review of containment failure probability and repository functional design criteria

    International Nuclear Information System (INIS)

    Gopal, S.

    1986-06-01

    The Engineering Review Group (ERG) was established by the Office of Nuclear Waste Isolation (ONWI) to help evaluate engineering-related issues in the US Department of Energy's nuclear waste repository program. The June 1984 meeting of the ERG considered two topics: (1) statistical probability for containment of nuclides within the waste package and (2) repository design criteria. This report documents the ERG's comments and recommendations on these two subjects and the ONWI response to the specific points raised by ERG

  18. Performance Based Failure Criteria of the Base Isolation System for Nuclear Power Plants

    International Nuclear Information System (INIS)

    Kim, Jung Han; Kim, Min Kyu; Choi, In Kil

    2013-01-01

    A realistic approach to evaluating the failure state of the base isolation system is necessary. From this point of view, several concerns are reviewed and discussed in this study. This is the preliminary study for the performance based risk assessment of a base isolated nuclear power plant. The items to evaluate the capacity and response of an individual base isolator and a base isolation system were briefly outlined. However, the methodology to evaluate the realistic fragility of a base isolation system still needs to be specified. For the quantification of the seismic risk for a nuclear power plant structure, the failure probabilities of the structural component for the various seismic intensity levels need to be calculated. The failure probability is evaluated as the probability when the seismic response of a structure exceeds the failure criteria. Accordingly, the failure mode of the structural system caused by an earthquake vibration should be defined first. The type of a base isolator appropriate for a nuclear power plant structure is regarded as an elastomeric rubber bearing with a lead core. The failure limit of the lead-rubber bearing (LRB) is not easy to predict because of its high nonlinearity and a complex loading condition by an earthquake excitation. Furthermore, the failure mode of the LRB system installed below the nuclear island cannot be simply determined because the basemat can be sufficiently supported if the number of damaged isolators is not large

  19. A short walk in quantum probability

    Science.gov (United States)

    Hudson, Robin

    2018-04-01

    This is a personal survey of aspects of quantum probability related to the Heisenberg commutation relation for canonical pairs. Using the failure, in general, of non-negativity of the Wigner distribution for canonical pairs to motivate a more satisfactory quantum notion of joint distribution, we visit a central limit theorem for such pairs and a resulting family of quantum planar Brownian motions which deform the classical planar Brownian motion, together with a corresponding family of quantum stochastic areas. This article is part of the themed issue `Hilbert's sixth problem'.

  20. A short walk in quantum probability.

    Science.gov (United States)

    Hudson, Robin

    2018-04-28

    This is a personal survey of aspects of quantum probability related to the Heisenberg commutation relation for canonical pairs. Using the failure, in general, of non-negativity of the Wigner distribution for canonical pairs to motivate a more satisfactory quantum notion of joint distribution, we visit a central limit theorem for such pairs and a resulting family of quantum planar Brownian motions which deform the classical planar Brownian motion, together with a corresponding family of quantum stochastic areas. This article is part of the themed issue 'Hilbert's sixth problem'. © 2018 The Author(s).

  1. Effect of Preconditioning and Soldering on Failures of Chip Tantalum Capacitors

    Science.gov (United States)

    Teverovsky, Alexander A.

    2014-01-01

    Soldering of molded case tantalum capacitors can result in damage to the Ta2O5 dielectric and first turn-on failures due to thermo-mechanical stresses caused by CTE mismatch between materials used in the capacitors. It is also known that presence of moisture might cause damage to plastic cases due to the pop-corning effect. However, there are only scarce literature data on the effect of moisture content on the probability of post-soldering electrical failures. In this work, which is based on a case history, different groups of similar types of CWR tantalum capacitors from two lots were prepared for soldering by bake, moisture saturation, and long-term storage at room conditions. Results of the testing showed that both factors, the initial quality of the lot and the preconditioning, affect the probability of failures. Baking before soldering was shown to be effective to prevent failures even in lots susceptible to pop-corning damage. The mechanism of failures is discussed and recommendations for pre-soldering bake are suggested based on analysis of moisture characteristics of materials used in the capacitors' design.

  2. Fuel element failures caused by iodine stress corrosion

    International Nuclear Information System (INIS)

    Videm, K.; Lunde, L.

    1976-01-01

    Sections of unirradiated cladding tubes were plugged in both ends by mechanical seals and internally pressurized with argon containing iodine. The time to failure and the strain at failure as a function of stress was determined for tubing with different heat treatments. Fully annealed tubes suffer cracking at the lowest stress but exhibit the largest strains at failure. Elementary iodine is not necessary for stress corrosion: small amounts of iodides of zirconium, iron and aluminium can also give cracking. Moisture, however, was found to act as an inhibitor. A deformation threshold exists below which stress corrosion failure does not occur regardless of the exposure time. This deformation limit is lower the harder the tube. The deformation at failure is dependent on the deformation rate and has a minimum at 0.1%/hr. At higher deformation rates the failure deformation increases, but only slightly for hard tubes. Fuel was over-power tested at ramp rates varying from 0.26 to 30 W/cm min. For one series of fuel pins the failure deformations of 0.8% at high ramp rates were in good agreement with predictions based on stress corrosion experiments. For another series of experiments the failure deformation was surprisingly low, about 0.2%. (author)

  3. First-passage Probability Estimation of an Earthquake Response of Seismically Isolated Containment Buildings

    International Nuclear Information System (INIS)

    Hahm, Dae-Gi; Park, Kwan-Soon; Koh, Hyun-Moo

    2008-01-01

    Awareness of seismic hazard and risk is increasing rapidly owing to frequent occurrences of huge earthquakes, such as the 2008 Sichuan earthquake, which caused about 70,000 confirmed casualties and an economic loss of about 20 billion U.S. dollars. Since an earthquake load naturally contains various uncertainties, the safety of a structural system under earthquake excitation has been assessed by probabilistic approaches. In many structural applications of probabilistic safety assessment, it is often assumed that failure of a system occurs when the response of the structure first crosses the limit barrier within a specified interval of time. The determination of such a failure probability is usually called the 'first-passage problem' and has been extensively studied during the last few decades. However, especially for structures which show significant nonlinear dynamic behavior, an effective and accurate method for the estimation of such a failure probability is not yet fully established. In this study, we present a new approach to evaluate the first-passage probability of the earthquake response of seismically isolated structures. The proposed method is applied to the seismic isolation system for the containment buildings of a nuclear power plant. From the numerical example, we verified that the proposed method gives accurate results with more efficient computational effort compared to the conventional approaches

  4. Uncertainty Analysis via Failure Domain Characterization: Unrestricted Requirement Functions

    Science.gov (United States)

    Crespo, Luis G.; Kenny, Sean P.; Giesy, Daniel P.

    2011-01-01

    This paper proposes an uncertainty analysis framework based on the characterization of the uncertain parameter space. This characterization enables the identification of worst-case uncertainty combinations and the approximation of the failure and safe domains with a high level of accuracy. Because these approximations are comprised of subsets of readily computable probability, they enable the calculation of arbitrarily tight upper and lower bounds to the failure probability. The methods developed herein, which are based on nonlinear constrained optimization, are applicable to requirement functions whose functional dependency on the uncertainty is arbitrary and whose explicit form may even be unknown. Some of the most prominent features of the methodology are the substantial desensitization of the calculations from the assumed uncertainty model (i.e., the probability distribution describing the uncertainty) as well as the accommodation for changes in such a model with a practically insignificant amount of computational effort.

  5. The prediction problems of VVER fuel element cladding failure theory

    International Nuclear Information System (INIS)

    Pelykh, S.N.; Maksimov, M.V.; Ryabchikov, S.D.

    2016-01-01

    Highlights: • Fuel cladding failure forecasting is based on the fuel load history and the damage distribution. • The limit damage parameter is exceeded, though limit stresses are not reached. • The damage parameter plays a significant role in predicting the cladding failure. • The proposed failure probability criterion can be used to control the cladding tightness. - Abstract: A method for forecasting of VVER fuel element (FE) cladding failure due to accumulation of the deformation damage parameter, taking into account the fuel assembly (FA) loading history and the damage parameter distribution among FEs included in the FA, has been developed. Using the concept of conservative FE groups, it is shown that the safety limit for the damage parameter is exceeded for some FA rearrangements, though the limits for circumferential and equivalent stresses are not reached. This new result contradicts the widespread idea that the damage parameter value plays a minor role when estimating the limiting state of the cladding. The necessary condition of rearrangement algorithm admissibility and the criterion for minimization of the probability of cladding failure due to damage parameter accumulation have been derived, for use in automated systems controlling the cladding tightness.

  6. Effect of a certain class of potential common mode failures on the reliability of redundant systems

    International Nuclear Information System (INIS)

    Apostolakis, G.E.

    1975-11-01

    This is a theoretical investigation of the importance of common mode failures on the reliability of redundant systems. These failures are assumed to be the result of fatal shocks (e.g., from earthquakes, explosions, etc.) which occur at a constant rate. This formulation makes it possible to predict analytically results obtained in the past which showed that the probability of a common mode failure of the redundant channels of the protection system of a typical nuclear power plant was orders of magnitude larger than the probability of failure from chance failures alone. Furthermore, since most reliability analyses of redundant systems do not include potential common mode failures in the probabilistic calculations, criteria are established which can be used to decide either that the common-mode-failure effects are indeed insignificant or that such calculations are meaningless, and more sophisticated methods of analysis are required, because common mode failures cannot be ignored
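
    To see why such shocks dominate, a minimal sketch comparing the unreliability of an n-channel redundant system with and without a constant-rate lethal common-cause shock is given below; the rates and mission time are illustrative assumptions, not values from the study.

```python
import numpy as np

def redundant_unreliability(t, lam, n, lam_shock=0.0):
    """Probability that an n-fold redundant system has failed by time t when
    each channel fails independently at rate lam and, in addition, a lethal
    shock (failing all channels at once) arrives at constant rate lam_shock."""
    p_indep = (1.0 - np.exp(-lam * t)) ** n     # all n channels fail by chance
    p_shock = 1.0 - np.exp(-lam_shock * t)      # at least one lethal shock occurs
    return 1.0 - (1.0 - p_indep) * (1.0 - p_shock)

t = 8760.0        # one year of operation [h], illustrative
lam = 1.0e-5      # per-channel chance failure rate [1/h], illustrative
print(redundant_unreliability(t, lam, n=4))                    # chance failures only
print(redundant_unreliability(t, lam, n=4, lam_shock=1.0e-7))  # with common-cause shocks
```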

  7. Reliability modelling for wear out failure period of a single unit system

    OpenAIRE

    Arekar, Kirti; Ailawadi, Satish; Jain, Rinku

    2012-01-01

    The present paper deals with two time-shifted density models for the wear-out failure period of a single unit system. The study considered the time-shifted Gamma and Normal distributions. Wear-out failures occur as a result of deterioration processes or mechanical wear, and their probability of occurrence increases with time. The failure rate as a function of time decreases in the early failure period and increases in the wear-out period. Failure rates for time-shifted distributions and expression for m...

  8. Balancing burn-in and mission times in environments with catastrophic and repairable failures

    International Nuclear Information System (INIS)

    Bebbington, Mark; Lai, C.-D.; Zitikis, Ricardas

    2009-01-01

    In a system subject to both repairable and catastrophic (i.e., nonrepairable) failures, 'mission success' can be defined as operating for a specified time without a catastrophic failure. We examine the effect of a burn-in process of duration τ on the mission time x, and also on the probability of mission success, by introducing several functions and surfaces on the (τ,x)-plane whose extrema represent suitable choices for the best burn-in time, and the best burn-in time for a desired mission time. The corresponding curvature functions and surfaces provide information about probabilities and expectations related to these burn-in and mission times. Theoretical considerations are illustrated with both parametric and, separating the failures by failure mode, nonparametric analyses of a data set, and graphical visualization of results.

  9. Some possible causes and probability of leakages in LMFBR steam generators

    International Nuclear Information System (INIS)

    Bolt, P.R.

    1984-01-01

    Relevant operational experience with steam generators for process and conventional plant and thermal and fast reactors is reviewed. Possible causes of water/steam leakages into sodium/gas are identified and data is given on the conditions necessary for failure, leakage probability and type of leakage path. (author)

  10. How Life History Can Sway the Fixation Probability of Mutants

    Science.gov (United States)

    Li, Xiang-Yi; Kurokawa, Shun; Giaimo, Stefano; Traulsen, Arne

    2016-01-01

    In this work, we study the effects of demographic structure on evolutionary dynamics when selection acts on reproduction, survival, or both. In contrast to the previously discovered pattern that the fixation probability of a neutral mutant decreases while the population becomes younger, we show that a mutant with a constant selective advantage may have a maximum or a minimum of the fixation probability in populations with an intermediate fraction of young individuals. This highlights the importance of life history and demographic structure in studying evolutionary dynamics. We also illustrate the fundamental differences between selection on reproduction and selection on survival when age structure is present. In addition, we evaluate the relative importance of size and structure of the population in determining the fixation probability of the mutant. Our work lays the foundation for also studying density- and frequency-dependent effects in populations when demographic structures cannot be neglected. PMID:27129737

  11. High-Temperature Graphitization Failure of Primary Superheater Tube

    Science.gov (United States)

    Ghosh, D.; Ray, S.; Roy, H.; Mandal, N.; Shukla, A. K.

    2015-12-01

    Failure of boiler tubes is the main cause of unit outages of the plant, which further affects the reliability, availability and safety of the unit. Failure analysis of boiler tubes is therefore essential to identify the root cause of a failure so that remedial steps can be taken to prevent similar failures in the near future. This paper investigates the probable cause or causes of failure of a primary superheater tube in a thermal power plant boiler. Visual inspection, dimensional measurement, chemical analysis, metallographic examination and hardness measurement are conducted as part of the investigative studies. Apart from these tests, mechanical testing and fractographic analysis are also conducted as supplements. Finally, it is concluded that the superheater tube failed due to graphitization caused by prolonged exposure of the tube to higher temperature.

  12. A new method for explicit modelling of single failure event within different common cause failure groups

    International Nuclear Information System (INIS)

    Kančev, Duško; Čepin, Marko

    2012-01-01

    Redundancy and diversity are the main principles of the safety systems in the nuclear industry. Implementation of safety component redundancy has been acknowledged as an effective approach for assuring high levels of system reliability. The existence of redundant components, identical in most cases, implies a probability of their simultaneous failure due to a shared cause, that is, a common cause failure. This paper presents a new method for explicit modelling of a single component failure event within multiple common cause failure groups simultaneously. The method is based on a modification of the frequently utilised Beta Factor parametric model. The motivation for development of this method lies in the fact that one of the most widespread software tools for fault tree and event tree modelling as part of probabilistic safety assessment does not provide the option for simultaneous assignment of a single failure event to multiple common cause failure groups. In that sense, the proposed method can be seen as an advantage of the explicit modelling of common cause failures. A standard standby safety system is selected as a case study for application and study of the proposed methodology. The results and insights indicate improved, more transparent and more comprehensive models within probabilistic safety assessment.

  13. [Storage of plant protection products in farms: minimum safety requirements].

    Science.gov (United States)

    Dutto, Moreno; Alfonzo, Santo; Rubbiani, Maristella

    2012-01-01

    Failure to comply with requirements for proper storage and use of pesticides in farms can be extremely hazardous and the risk of accidents involving farm workers, other persons and even animals is high. There are still wide differences in the interpretation of the concept of "securing or making safe" by workers in this sector. One of the critical points detected, particularly in the fruit sector, is the establishment of an adequate storage site for plant protection products. The definition of "safe storage of pesticides" is still unclear despite the recent enactment of Legislative Decree 81/2008 regulating health and work safety in Italy. In addition, there are no national guidelines setting clear minimum criteria for storage of plant protection products in farms. The authors, on the basis of their professional experience and through analysis of recent legislation, establish certain minimum safety standards for storage of pesticides in farms.

  14. An integrated approach to estimate storage reliability with initial failures based on E-Bayesian estimates

    International Nuclear Information System (INIS)

    Zhang, Yongjin; Zhao, Ming; Zhang, Shitao; Wang, Jiamei; Zhang, Yanjun

    2017-01-01

    Storage reliability, which measures the ability of products in a dormant state to keep their required functions, is studied in this paper. For certain types of products, storage reliability may not be 100% at the beginning of storage, unlike operational reliability: possible initial failures exist that are normally neglected in storage reliability models. In this paper, a new integrated technique, in which a non-parametric measure based on E-Bayesian estimates of current failure probabilities is combined with a parametric measure based on the exponential reliability function, is proposed to estimate and predict the storage reliability of products with possible initial failures; the non-parametric method is used to estimate the number of failed products and the reliability at each testing time, and the parametric method is used to estimate the initial reliability and the failure rate of the stored product. The proposed method takes into consideration reliability test data collected both before and during the storage process, which provides more accurate estimates of both the initial failure probability and the storage failure probability. When storage reliability prediction, the main concern in this field, is to be made, the non-parametric estimates of failure numbers can be used in the parametric models of the failure process in storage. For the case of exponential models, the assessment and prediction method for storage reliability is presented in this paper. Finally, a numerical example is given to illustrate the method. Furthermore, a detailed comparison between the proposed and traditional methods, examining the rationality of the assessment and prediction of storage reliability, is investigated. The results should be useful for planning a storage environment, decision-making concerning the maximum length of storage, and identifying the production quality.

  15. Uncertainty Analysis via Failure Domain Characterization: Polynomial Requirement Functions

    Science.gov (United States)

    Crespo, Luis G.; Munoz, Cesar A.; Narkawicz, Anthony J.; Kenny, Sean P.; Giesy, Daniel P.

    2011-01-01

    This paper proposes an uncertainty analysis framework based on the characterization of the uncertain parameter space. This characterization enables the identification of worst-case uncertainty combinations and the approximation of the failure and safe domains with a high level of accuracy. Because these approximations are comprised of subsets of readily computable probability, they enable the calculation of arbitrarily tight upper and lower bounds to the failure probability. A Bernstein expansion approach is used to size hyper-rectangular subsets while a sum of squares programming approach is used to size quasi-ellipsoidal subsets. These methods are applicable to requirement functions whose functional dependency on the uncertainty is a known polynomial. Some of the most prominent features of the methodology are the substantial desensitization of the calculations from the uncertainty model assumed (i.e., the probability distribution describing the uncertainty) as well as the accommodation for changes in such a model with a practically insignificant amount of computational effort.

  16. Properties of parameter estimation techniques for a beta-binomial failure model. Final technical report

    International Nuclear Information System (INIS)

    Shultis, J.K.; Buranapan, W.; Eckhoff, N.D.

    1981-12-01

    Of considerable importance in the safety analysis of nuclear power plants are methods to estimate the probability of failure-on-demand, p, of a plant component that normally is inactive and that may fail when activated or stressed. Properties of five methods for estimating from failure-on-demand data the parameters of the beta prior distribution in a compound beta-binomial probability model are examined. Simulated failure data generated from a known beta-binomial marginal distribution are used to estimate values of the beta parameters by (1) matching data moments to those of the prior distribution, (2) the maximum likelihood method based on the prior distribution, (3) a weighted marginal matching moments method, (4) an unweighted marginal matching moments method, and (5) the maximum likelihood method based on the marginal distribution. For small sample sizes (N ≤ 10) with data typical of low failure probability components, it was found that the simple prior matching moments method is often superior (e.g. smallest bias and mean squared error) while for larger sample sizes the marginal maximum likelihood estimators appear to be best
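
    As a pointer to what the simple "prior matching moments" idea looks like in practice, the sketch below fits a Beta(a, b) prior by matching the sample mean and variance of the observed per-component demand failure fractions; the failure-on-demand records are hypothetical and the code is not the report's.

```python
import numpy as np

def beta_prior_from_moments(failures, demands):
    """Method-of-moments fit of a Beta(a, b) prior for failure-on-demand data:
    match the sample mean and variance of p_i = k_i / n_i to the beta moments.
    Requires the sample variance to be smaller than m * (1 - m)."""
    p = np.asarray(failures, dtype=float) / np.asarray(demands, dtype=float)
    m, v = p.mean(), p.var(ddof=1)
    common = m * (1.0 - m) / v - 1.0
    return m * common, (1.0 - m) * common   # (a, b)

# Hypothetical failure-on-demand records for six similar components.
k = [0, 1, 0, 2, 1, 0]          # observed failures
n = [50, 60, 45, 80, 55, 40]    # demands
a, b = beta_prior_from_moments(k, n)
print(a, b, "prior mean =", a / (a + b))
```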

  17. Minimum Effective Volume of Lidocaine for Ultrasound-Guided Costoclavicular Block.

    Science.gov (United States)

    Sotthisopha, Thitipan; Elgueta, Maria Francisca; Samerchua, Artid; Leurcharusmee, Prangmalee; Tiyaprasertkul, Worakamol; Gordon, Aida; Finlayson, Roderick J; Tran, De Q

    This dose-finding study aimed to determine the minimum effective volume in 90% of patients (MEV90) of lidocaine 1.5% with epinephrine 5 μg/mL for ultrasound-guided costoclavicular block. Using an in-plane technique and a lateral-to-medial direction, the block needle was positioned in the middle of the 3 cords of the brachial plexus in the costoclavicular space. The entire volume of lidocaine was deposited in this location. Dose assignment was carried out using a biased-coin-design up-and-down sequential method, where the total volume of local anesthetic administered to each patient depended on the response of the previous one. In case of failure, the next subject received a higher volume (defined as the previous volume with an increment of 2.5 mL). If the previous patient had a successful block, the next subject was randomized to a lower volume (defined as the previous volume with a decrement of 2.5 mL), with a probability of b = 0.11, or the same volume, with a probability of 1 - b = 0.89. Success was defined, at 30 minutes, as a minimal score of 14 of 16 points using a sensorimotor composite scale. Patients undergoing surgery of the elbow, forearm, wrist, or hand were prospectively enrolled until 45 successful blocks were obtained. This clinical trial was registered with ClinicalTrials.gov (ID NCT02932670). Fifty-seven patients were included in the study. Using isotonic regression and bootstrap confidence interval, the MEV90 for ultrasound-guided costoclavicular block was estimated to be 34.0 mL (95% confidence interval, 33.4-34.4 mL). All patients with a minimal composite score of 14 points at 30 minutes achieved surgical anesthesia intraoperatively. For ultrasound-guided costoclavicular block, the MEV90 of lidocaine 1.5% with epinephrine 5 μg/mL is 34 mL. Further dose-finding studies are required for other concentrations of lidocaine, other local anesthetic agents, and multiple-injection techniques.
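
    The assignment rule is simple enough to sketch directly; the hypothetical helper below reproduces the biased-coin logic described above (2.5 mL step, b = 0.11), with a starting volume and dose bounds added purely for illustration.

```python
import random

def next_volume(prev_volume, prev_success, step=2.5, b=0.11,
                lo=20.0, hi=50.0, rng=random):
    """Biased-coin up-and-down assignment of the next patient's volume (mL):
    after a failed block, step up; after a successful block, step down with
    probability b, otherwise repeat the same volume."""
    if not prev_success:
        vol = prev_volume + step
    elif rng.random() < b:
        vol = prev_volume - step
    else:
        vol = prev_volume
    return min(max(vol, lo), hi)

# Tiny illustration: responses of successive patients (True = block success).
random.seed(0)
vol, history = 32.5, []
for success in [False, True, True, False, True, True, True]:
    history.append(vol)
    vol = next_volume(vol, success)
print(history)
```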

  18. The probability estimate of the defects of the asynchronous motors based on the complex method of diagnostics

    Science.gov (United States)

    Zhukovskiy, Yu L.; Korolev, N. A.; Babanova, I. S.; Boikov, A. V.

    2017-10-01

    This article is devoted to the development of a method for estimating the probability of failure of an asynchronous motor as a part of an electric drive with a frequency converter. The proposed method is based on a comprehensive method of diagnostics of vibration and electrical characteristics that takes into account the quality of the supply network and the operating conditions. The developed diagnostic system allows the accuracy and quality of diagnoses to be increased by determining the probability of failure-free operation of the electromechanical equipment when the parameters deviate from the norm. This system uses artificial neural networks (ANNs). The outputs of the system for estimating the technical condition are probability diagrams of the technical state and a quantitative evaluation of the defects of the asynchronous motor and its components.

  19. Evaluation of containment failure and cleanup time for Pu shots on the Z machine.

    Energy Technology Data Exchange (ETDEWEB)

    Darby, John L.

    2010-02-01

    Between November 30 and December 11, 2009 an evaluation was performed of the probability of containment failure and the time for cleanup of contamination of the Z machine given failure, for plutonium (Pu) experiments on the Z machine at Sandia National Laboratories (SNL). Due to the unique nature of the problem, there is little quantitative information available for the likelihood of failure of containment components or for the cleanup time. Information for the evaluation was obtained from Subject Matter Experts (SMEs) at the Z machine facility. The SMEs provided the State of Knowledge (SOK) for the evaluation. There is significant epistemic (state-of-knowledge) uncertainty associated with the events that comprise both failure of containment and cleanup. To capture epistemic uncertainty and to allow the SMEs to reason at the fidelity of the SOK, we used the belief/plausibility measure of uncertainty for this evaluation. We quantified two variables: the probability that the Pu containment system fails given a shot on the Z machine, and the time to clean up Pu contamination in the Z machine given failure of containment. We identified dominant contributors for both the cleanup time and the probability of containment failure. These results will be used by SNL management to decide the course of action for conducting the Pu experiments on the Z machine.

  20. CO2 maximum in the oxygen minimum zone (OMZ)

    OpenAIRE

    Paulmier, Aurélien; Ruiz-Pino, D.; Garcon, V.

    2011-01-01

    Oxygen minimum zones (OMZs), known as suboxic layers which are mainly localized in the Eastern Boundary Upwelling Systems, have been expanding since the 20th "high CO2" century, probably due to global warming. OMZs are also known to significantly contribute to the oceanic production of N2O, a greenhouse gas (GHG) more efficient than CO2. However, the contribution of the OMZs to the oceanic sources and sinks budget of CO2, the main GHG, still remains to be established. ...

  1. The minimum area requirements (MAR) for giant panda: an empirical study.

    Science.gov (United States)

    Qing, Jing; Yang, Zhisong; He, Ke; Zhang, Zejun; Gu, Xiaodong; Yang, Xuyu; Zhang, Wen; Yang, Biao; Qi, Dunwu; Dai, Qiang

    2016-12-08

    Habitat fragmentation can reduce population viability, especially for area-sensitive species. The Minimum Area Requirements (MAR) of a population is the area required for the population's long-term persistence. In this study, the response of giant panda occupancy probability to habitat patch size was studied in five of the six mountain ranges inhabited by the giant panda, which cover over 78% of the global distribution of giant panda habitat. The probability of giant panda occurrence was positively associated with habitat patch area, and the observed increase in occupancy probability with patch size was higher than that due to passive sampling alone. These results suggest that the giant panda is an area-sensitive species. The MAR for the giant panda was estimated to be 114.7 km² based on analysis of its occupancy probability. Giant panda habitats appear more fragmented in the three southern mountain ranges, while they are large and more continuous in the other two. Establishing corridors among habitat patches can mitigate habitat fragmentation, but expanding habitat patch sizes is necessary in mountain ranges where fragmentation is most intensive.

  2. Low-Probability High-Consequence (LPHC) Failure Events in Geologic Carbon Sequestration Pipelines and Wells: Framework for LPHC Risk Assessment Incorporating Spatial Variability of Risk

    Energy Technology Data Exchange (ETDEWEB)

    Oldenburg, Curtis M. [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Budnitz, Robert J. [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States)

    2016-08-31

    If Carbon dioxide Capture and Storage (CCS) is to be effective in mitigating climate change, it will need to be carried out on a very large scale. This will involve many thousands of miles of dedicated high-pressure pipelines in order to transport many millions of tonnes of CO2 annually, with the CO2 delivered to many thousands of wells that will inject the CO2 underground. The new CCS infrastructure could rival in size the current U.S. upstream natural gas pipeline and well infrastructure. This new infrastructure entails hazards for life, health, animals, the environment, and natural resources. Pipelines are known to rupture due to corrosion, from external forces such as impacts by vehicles or digging equipment, by defects in construction, or from the failure of valves and seals. Similarly, wells are vulnerable to catastrophic failure due to corrosion, cement degradation, or operational mistakes. While most accidents involving pipelines and wells will be minor, there is the inevitable possibility of accidents with very high consequences, especially to public health. The most important consequence of concern is CO2 release to the environment in concentrations sufficient to cause death by asphyxiation to nearby populations. Such accidents are thought to be very unlikely, but of course they cannot be excluded, even if major engineering effort is devoted (as it will be) to keeping their probability low and their consequences minimized. This project has developed a methodology for analyzing the risks of these rare but high-consequence accidents, using a step-by-step probabilistic methodology. A key difference between risks for pipelines and wells is that the former are spatially distributed along the pipe whereas the latter are confined to the vicinity of the well. Otherwise, the methodology we develop for risk assessment of pipeline and well failures is similar and provides an analysis both of the annual probabilities of

  3. An interval-valued reliability model with bounded failure rates

    DEFF Research Database (Denmark)

    Kozine, Igor; Krymsky, Victor

    2012-01-01

    The approach to deriving interval-valued reliability measures described in this paper is distinctive from other imprecise reliability models in that it overcomes the issue of having to impose an upper bound on time to failure. It rests on the presupposition that a constant interval-valued failure rate is known, possibly along with other reliability measures, precise or imprecise. The Lagrange method is used to solve the constrained optimization problem to derive new reliability measures of interest. The obtained results call for an exponential-wise approximation of failure probability density...

  4. Predictors of treatment failure in young patients undergoing in vitro fertilization.

    Science.gov (United States)

    Jacobs, Marni B; Klonoff-Cohen, Hillary; Agarwal, Sanjay; Kritz-Silverstein, Donna; Lindsay, Suzanne; Garzo, V Gabriel

    2016-08-01

    The purpose of the study was to evaluate whether routinely collected clinical factors can predict in vitro fertilization (IVF) failure among young, "good prognosis" patients predominantly with secondary infertility who are less than 35 years of age. Using de-identified clinic records from 414 women, a model predicted the probability of cycle failure. One hundred ninety-seven patients with both primary and secondary infertility had a failed IVF cycle, and 217 with secondary infertility had a successful live birth. None of the women with primary infertility had a successful live birth. The significant predictors of IVF cycle failure among young patients were fewer previous live births, history of biochemical pregnancies or spontaneous abortions, lower baseline antral follicle count, higher total gonadotropin dose, unknown infertility diagnosis, and lack of at least one fair to good quality embryo. The full model showed good predictive value (c = 0.885) for estimating risk of cycle failure; at ≥80 % predicted probability of failure, sensitivity = 55.4 %, specificity = 97.5 %, positive predictive value = 95.4 %, and negative predictive value = 69.8 %. If this predictive model is validated in future studies, it could be beneficial for predicting IVF failure in good prognosis women under the age of 35 years.

  5. A competing risk model of first failure site after definitive (chemo) radiation therapy for locally advanced non-small cell lung cancer

    DEFF Research Database (Denmark)

    Nygård, Lotte; Vogelius, Ivan R; Fischer, Barbara M

    2018-01-01

    INTRODUCTION: The aim of the study was to build a model of first failure site and lesion specific failure probability after definitive chemo-radiotherapy for inoperable non-small cell lung cancer (NSCLC). METHODS: We retrospectively analyzed 251 patients receiving definitive chemo......-regional failure, multivariable logistic regression was applied to assess risk of each lesion being first site of failure. The two models were used in combination to predict lesion failure probability accounting for competing events. RESULTS: Adenocarcinoma had a lower hazard ratio (HR) of loco-regional (LR...

  6. On estimating the fracture probability of nuclear graphite components

    International Nuclear Information System (INIS)

    Srinivasan, Makuteswara

    2008-01-01

    The properties of nuclear grade graphites exhibit anisotropy and could vary considerably within a manufactured block. Graphite strength is affected by the direction of alignment of the constituent coke particles, which is dictated by the forming method, coke particle size, and the size, shape, and orientation distribution of pores in the structure. In this paper, a Weibull failure probability analysis for components is presented using the American Society for Testing and Materials (ASTM) strength specification for nuclear grade graphites for core components in advanced high-temperature gas-cooled reactors. The risk of rupture (probability of fracture) and survival probability (reliability) of large graphite blocks are calculated for varying and discrete values of service tensile stresses. The limitations in these calculations are discussed from considerations of actual reactor environmental conditions that could potentially degrade the specification properties because of damage due to complex interactions between irradiation, temperature, stress, and variability in reactor operation
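
    For illustration, the risk-of-rupture calculation referred to above typically takes a two-parameter Weibull form; the sketch below uses invented values for the characteristic strength and Weibull modulus rather than specification data for any actual graphite grade.

```python
import numpy as np

def weibull_fracture(stress_mpa, sigma0=30.0, m=10.0, volume_ratio=1.0):
    """Two-parameter Weibull estimate of the fracture probability of a
    graphite component under a uniform tensile stress.  sigma0 (characteristic
    strength, MPa), m (Weibull modulus) and the stressed-volume ratio are
    illustrative placeholders."""
    stress = np.asarray(stress_mpa, dtype=float)
    risk_of_rupture = volume_ratio * (stress / sigma0) ** m
    pf = 1.0 - np.exp(-risk_of_rupture)       # probability of fracture
    return pf, 1.0 - pf                        # (fracture probability, reliability)

for s in (10.0, 15.0, 20.0, 25.0):
    pf, rel = weibull_fracture(s)
    print(f"stress = {s:5.1f} MPa   Pf = {pf:.3e}   reliability = {rel:.6f}")
```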

  7. Component fragility data base for reliability and probability studies

    International Nuclear Information System (INIS)

    Bandyopadhyay, K.; Hofmayer, C.; Kassier, M.; Pepper, S.

    1989-01-01

    Safety-related equipment in a nuclear plant plays a vital role in its proper operation and control, and failure of such equipment due to an earthquake may pose a risk to the safe operation of the plant. Therefore, in order to assess the overall reliability of a plant, the reliability of performance of the equipment should be studied first. The success of a reliability or a probability study depends to a great extent on the data base. To meet this demand, Brookhaven National Laboratory (BNL) has formed a test data base relating the seismic capacity of equipment specimens to the earthquake levels. Subsequently, the test data have been analyzed for use in reliability and probability studies. This paper describes the data base and discusses the analysis methods. The final results that can be directly used in plant reliability and probability studies are also presented in this paper

  8. Failure Predictions for Graphite Reflector Bricks in the Very High Temperature Reactor with the Prismatic Core Design

    Energy Technology Data Exchange (ETDEWEB)

    Singh, Gyanender, E-mail: sing0550@umn.edu [Department of Mechanical Engineering, University of Minnesota, 111, Church St. SE, Minneapolis, MN 55455 (United States); Fok, Alex [Minnesota Dental Research in Biomaterials and Biomechanics, School of Dentistry, University of Minnesota, 515, Delaware St. SE, Minneapolis, MN 55455 (United States); Department of Mechanical Engineering, University of Minnesota, 111, Church St. SE, Minneapolis, MN 55455 (United States); Mantell, Susan [Department of Mechanical Engineering, University of Minnesota, 111, Church St. SE, Minneapolis, MN 55455 (United States)

    2017-06-15

    Highlights: • Failure probability of VHTR reflector bricks predicted through crack modeling. • Criterion chosen for defining failure strongly affects the predictions. • Breaching of the CRC could be significantly delayed through crack arrest. • Capability to predict crack initiation and propagation demonstrated. - Abstract: Graphite is used in nuclear reactor cores as a neutron moderator, reflector and structural material. The dimensions and physical properties of graphite change when it is exposed to neutron irradiation. The non-uniform changes in the dimensions and physical properties lead to the build-up of stresses over the course of time in the core components. When the stresses reach the critical limit, i.e. the strength of the material, cracking occurs and ultimately the components fail. In this paper, an explicit crack modeling approach to predict the probability of failure of a VHTR prismatic reactor core reflector brick is presented. Firstly, a constitutive model for graphite is constructed and used to predict the stress distribution in the reflector brick under in-reactor conditions of high temperature and irradiation. Fracture simulations are performed as part of a Monte Carlo analysis to predict the probability of failure. Failure probability is determined based on two different criteria for defining failure time: A) crack initiation and B) crack extension to near the control rod channel. A significant difference is found between the failure probabilities based on the two criteria. It is predicted that the reflector bricks will start cracking during the time range of 5–9 years, while breaching of the control rod channels will occur during the period of 11–16 years. The results show that, due to crack arrest, there is a significant delay between crack initiation and breaching of the control rod channel.

  9. Neutron-Induced Failures in Semiconductor Devices

    Energy Technology Data Exchange (ETDEWEB)

    Wender, Stephen Arthur [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)

    2017-03-13

    Single Event Effects are a very significant failure mode in modern semiconductor devices that may limit their reliability. Accelerated testing is important for the semiconductor industry. Considerably more work is needed in this field to mitigate the problem. Mitigation of this problem will probably come from physicists and electrical engineers working together.

  10. Parameters affecting the resilience of scale-free networks to random failures.

    Energy Technology Data Exchange (ETDEWEB)

    Link, Hamilton E.; LaViolette, Randall A.; Lane, Terran (University of New Mexico, Albuquerque, NM); Saia, Jared (University of New Mexico, Albuquerque, NM)

    2005-09-01

    It is commonly believed that scale-free networks are robust to massive numbers of random node deletions. For example, Cohen et al. in (1) study scale-free networks including some which approximate the measured degree distribution of the Internet. Their results suggest that if each node in this network failed independently with probability 0.99, most of the remaining nodes would still be connected in a giant component. In this paper, we show that a large and important subclass of scale-free networks are not robust to massive numbers of random node deletions. In particular, we study scale-free networks which have minimum node degree of 1 and a power-law degree distribution beginning with nodes of degree 1 (power-law networks). We show that, in a power-law network approximating the Internet's reported distribution, when the probability of deletion of each node is 0.5 only about 25% of the surviving nodes in the network remain connected in a giant component, and the giant component does not persist beyond a critical failure rate of 0.9. The new result is partially due to improved analytical accommodation of the large number of degree-0 nodes that result after node deletions. Our results apply to power-law networks with a wide range of power-law exponents, including Internet-like networks. We give both analytical and empirical evidence that such networks are not generally robust to massive random node deletions.
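
    A minimal simulation sketch of the giant-component measurement described above: it builds a configuration-model network with a power-law degree distribution starting at degree 1, deletes each node independently, and reports the fraction of surviving nodes that remain in the largest connected component. The exponent, network size and deletion probability are illustrative assumptions, not the paper's settings.

```python
# Sketch: fraction of surviving nodes in the giant component after random
# node deletions in a power-law (configuration-model) network.
# Assumed parameters (exponent, size, deletion probability) are illustrative.
import random
import networkx as nx

def power_law_degree_sequence(n, exponent, k_min=1, k_max=1000):
    """Sample n node degrees from a discrete power law starting at k_min."""
    weights = [k ** -exponent for k in range(k_min, k_max + 1)]
    degrees = random.choices(range(k_min, k_max + 1), weights=weights, k=n)
    if sum(degrees) % 2:          # configuration model needs an even degree sum
        degrees[0] += 1
    return degrees

def surviving_giant_fraction(n=20000, exponent=2.5, p_delete=0.5, seed=1):
    random.seed(seed)
    g = nx.configuration_model(power_law_degree_sequence(n, exponent), seed=seed)
    g = nx.Graph(g)               # collapse parallel edges
    g.remove_edges_from(nx.selfloop_edges(g))
    g.remove_nodes_from([v for v in list(g) if random.random() < p_delete])
    if g.number_of_nodes() == 0:
        return 0.0
    giant = max(nx.connected_components(g), key=len)
    return len(giant) / g.number_of_nodes()

if __name__ == "__main__":
    print(f"giant-component fraction: {surviving_giant_fraction():.3f}")
```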

  11. Swarm of bees and particles algorithms in the problem of gradual failure reliability assurance

    Directory of Open Access Journals (Sweden)

    M. F. Anop

    2015-01-01

    Full Text Available The probability-statistical framework of reliability theory uses models based on the analysis of chance failures. These models are not functional and do not reflect the relation of reliability characteristics to the object's performance. At the same time, a significant part of technical system failures are gradual failures caused by degradation of the internal parameters of the system under the influence of various external factors. The paper shows how to provide the required level of reliability at the design stage using a functional model of a technical object. It describes the method for solving this problem under incomplete initial information, when there is no information about the patterns of technological deviations and parameter degradation, and the considered system model is a "black box" one. To this end, we formulate the problem of optimal parametric synthesis. It lies in the choice of the nominal values of the system parameters so as to satisfy the requirements for its operation while taking into account the unavoidable deviations of the parameters from their design values during operation. As an optimization criterion in this case we propose to use a deterministic geometric criterion, the "reliability reserve", which is the minimum distance measured along the coordinate directions from the nominal parameter value to the acceptability region boundary, rather than statistical values. The paper presents the results of the application of heuristic swarm intelligence methods to solving the formulated optimization problem. The efficiency of the particle swarm and bee swarm algorithms is compared with an undirected random search algorithm in solving a number of test optimal parametric synthesis problems in three areas: reliability, convergence rate and operating time. The study suggests that the use of a swarm of bees method for solving the problem of the technical systems gradual failure reliability ensuring is preferred because of the greater flexibility of the
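
    A minimal sketch of the deterministic "reliability reserve" criterion described above: the minimum distance, scanned along each coordinate direction, from the nominal parameter vector to the boundary of the acceptability region. The acceptability test, step size and search range are illustrative assumptions.

```python
# Sketch of the "reliability reserve" criterion: the minimum distance, measured
# along coordinate directions, from the nominal parameter vector to the boundary
# of the acceptability region. The region and search settings are assumptions.
import numpy as np

def reliability_reserve(x_nominal, is_acceptable, step=0.01, max_shift=10.0):
    """Scan each coordinate in both directions until the region boundary is hit."""
    x_nominal = np.asarray(x_nominal, dtype=float)
    reserve = np.inf
    for i in range(x_nominal.size):
        for sign in (+1.0, -1.0):
            shift = step
            while shift <= max_shift:
                x = x_nominal.copy()
                x[i] += sign * shift
                if not is_acceptable(x):
                    reserve = min(reserve, shift)
                    break
                shift += step
    return reserve

# Toy acceptability region: the system works while both parameters stay in a box.
inside_box = lambda x: (0.0 <= x[0] <= 2.0) and (0.5 <= x[1] <= 3.0)
print(reliability_reserve([1.2, 1.0], inside_box))   # ~0.5, the nearest boundary
```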

  12. Failure Analysis of a UAV Flight Control System Using Markov Analysis

    African Journals Online (AJOL)

    Failure analysis of a flight control system proposed for the Air Force Institute of Technology (AFIT) Unmanned Aerial Vehicle (UAV) was studied using Markov Analysis (MA). It was perceived that understanding of the number of failure states and the probability of being in those states are of paramount importance in order to ...

  13. BANK FAILURE PREDICTION WITH LOGISTIC REGRESSION

    Directory of Open Access Journals (Sweden)

    Taha Zaghdoudi

    2013-04-01

    Full Text Available In recent years the economic and financial world has been shaken by a wave of financial crises that resulted in fairly huge losses for banks. Several authors have focused on the study of these crises in order to develop early warning models, and our work takes its inspiration from the same path. Indeed, we have tried to develop a predictive model of Tunisian bank failures with the contribution of the binary logistic regression method. The specificity of our prediction model is that it takes into account microeconomic indicators of bank failures. The results obtained using our model show that a bank's ability to repay its debt, the coefficient of banking operations, bank profitability per employee and the financial leverage ratio have a negative impact on the probability of failure.
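
    A minimal sketch of fitting a binary logistic regression on bank-level ratios of the kind listed above. The feature names and the synthetic data are assumptions for illustration; the paper's Tunisian bank dataset is not reproduced here.

```python
# Sketch: binary logistic regression for bank-failure prediction on
# microeconomic ratios. Feature names and synthetic data are assumptions.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 200
X = np.column_stack([
    rng.normal(1.0, 0.3, n),   # debt repayment capacity ratio
    rng.normal(0.5, 0.1, n),   # banking operations coefficient
    rng.normal(0.2, 0.05, n),  # profitability per employee
    rng.normal(5.0, 1.0, n),   # financial leverage ratio
])
# Synthetic rule: weaker repayment capacity and higher leverage raise failure odds.
logit = -2.0 - 3.0 * (X[:, 0] - 1.0) + 0.8 * (X[:, 3] - 5.0)
y = rng.random(n) < 1.0 / (1.0 + np.exp(-logit))

model = LogisticRegression().fit(X, y)
print("coefficients:", model.coef_.round(2))
print("P(failure) for one bank:", model.predict_proba(X[:1])[0, 1].round(3))
```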

  14. Robust facility location: Hedging against failures

    International Nuclear Information System (INIS)

    Hernandez, Ivan; Emmanuel Ramirez-Marquez, Jose; Rainwater, Chase; Pohl, Edward; Medal, Hugh

    2014-01-01

    While few companies would be willing to sacrifice day-to-day operations to hedge against disruptions, designing for robustness can yield solutions that perform well before and after failures have occurred. Through a multi-objective optimization approach, this paper provides decision makers the option to trade off total weighted distance before and after disruptions in the Facility Location Problem. Additionally, this approach allows decision makers to understand the impact of the opening of facilities on total distance and on system robustness (considering the system as the set of located facilities). This approach differs from previous studies in that hedging against failures is done without having to elicit facility failure probabilities and without requiring the allocation of additional hardening/protection resources. The approach is applied to two datasets from the literature.

  15. Impact of the specialization from failures data in probability safety analysis for process plants

    International Nuclear Information System (INIS)

    Ribeiro, Antonio C.O.; Melo, P.F. Frutuoso e

    2005-01-01

    Full text: The aim of this paper is to show the use of Bayesian inference in reliability studies for updating failure rates in safety analyses, and to develop the impact of its use in quantitative risk assessments (QRA) for industrial process plants. With this approach we find a structured and auditable way of distinguishing an industrial installation with a good design and maintenance structure from one that shows a low level of quality in these areas. In general, the evidence on failure rates, and consequently the frequencies of occurrence of the scenarios whose risks are taken into account in the QRA, is taken from generic data banks instead of from the installation under analysis. This methodology is commonly used in probabilistic safety analysis (PSA) for nuclear plants when the final fault tree event evaluation applied to a scenario needs to be found, but it is not shown in a level III PSA. (author)

  16. Associations of serum potassium levels with mortality in chronic heart failure patients

    DEFF Research Database (Denmark)

    Aldahl, Mette; Caroline Jensen, Anne Sofie; Davidsen, Line

    2017-01-01

    Aims: Medication prescribed to patients suffering from chronic heart failure carries an increased risk of impaired potassium homeostasis. We examined the relation between different levels of serum potassium and mortality among patients with chronic heart failure. Methods and results: From Danish...... National registries, we identified 19 549 patients with a chronic heart failure diagnosis who had a measurement of potassium within a minimum of 90 days after initiating medical treatment with loop diuretics and angiotensin converting enzyme inhibitors or angiotensin-II receptor blockers. All-cause mortality......-cause mortality. Conclusion: Levels within the lower and upper parts of the normal serum potassium range (3.5-4.1 mmol/L and 4.8-5.0 mmol/L, respectively) were associated with a significantly increased short-term risk of death in chronic heart failure patients. Likewise, potassium below 3.5 mmol/L and above 5

  17. Calculation of the pipes failure probability of the Rcic system of a nuclear power station by means of software WinPRAISE 07; Calculo de la probabilidad de falla de tuberias del sistema RCIC de una central nuclear mediante el software WinPRAISE 07

    Energy Technology Data Exchange (ETDEWEB)

    Jasso G, J.; Diaz S, A.; Mendoza G, G.; Sainz M, E. [ININ, Carretera Mexico-Toluca s/n, 52750 Ocoyoacac, Estado de Mexico (Mexico); Garcia de la C, F. M., E-mail: angeles.diaz@inin.gob.mx [Comision Federal de Electricidad, Central Nucleoelectrica Laguna Verde, Km 44.5 Carretera Cardel-Nautla, 91476 Laguna Verde, Alto Lucero, Veracruz (Mexico)

    2014-10-15

    The growth and propagation of cracks by fatigue is a typical degradation mechanism present in the nuclear industry as well as in conventional industry; the unstable propagation of a crack can cause the catastrophic failure of a metallic component, even one with high ductility. For this reason, programmed maintenance activities have been established in industry, using visual inspection techniques and/or ultrasound with an established periodicity, allowing these crack growths to be followed up and their undesirable effects controlled; however, these activities increase operating costs and, in the particular case of the nuclear industry, they increase the radiation exposure of the participating personnel. The use of mathematical procedures that integrate concepts of uncertainty, material properties and the probability associated with inspection results has become a powerful tool for evaluating component reliability, reducing costs and exposure levels. In this work, the evaluation of the failure probability due to fatigue growth of preexisting cracks in pipes of the Reactor Core Isolation Cooling (RCIC) system of a nuclear power station is presented. The software WinPRAISE 07 (Piping Reliability Analysis Including Seismic Events) was used, supported by probabilistic fracture mechanics principles. The obtained values of failure probability evidenced good behavior of the analyzed pipes, with a maximum of the order of 1.0 E-6; therefore, it is concluded that the performance of these pipe lines is reliable even when extrapolating the calculations to 10, 20, 30 and 40 years of service. (Author)

  18. Probability of fracture and life extension estimate of the high-flux isotope reactor vessel

    International Nuclear Information System (INIS)

    Chang, S.J.

    1998-01-01

    The state of vessel steel embrittlement as a result of neutron irradiation can be measured by its increase in the ductile-brittle transition temperature (DBTT) for fracture, often denoted by RT(NDT) for carbon steel. This transition temperature can be calibrated by the drop-weight test and, sometimes, by the Charpy impact test. The life extension for the high-flux isotope reactor (HFIR) vessel is calculated by using the method of fracture mechanics incorporating the effect of the DBTT change. The failure probability of the HFIR vessel, which limits the life of the vessel, is bounded by the reactor core melt probability of 10^-4. The operating safety of the reactor is ensured by a periodic hydrostatic pressure test (hydrotest). The hydrotest is performed in order to determine a safe vessel static pressure. The fracture probability as a result of the hydrostatic pressure test is calculated and is used to determine the life of the vessel. Failure to perform the hydrotest imposes a limit on the life of the vessel. Conventional methods of fracture probability calculation, such as those used by the NRC-sponsored PRAISE code and the FAVOR code developed in this laboratory, are based on Monte Carlo simulation and require heavy computation. An alternative method of fracture probability calculation by direct probability integration is developed in this paper. The present approach offers simple and expedient ways to obtain numerical results without losing any generality. In this paper, numerical results on (1) the probability of vessel fracture, (2) the hydrotest time interval, and (3) the hydrotest pressure as a result of the DBTT increase are obtained.
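
    A sketch of computing a failure probability by direct numerical integration of a generic load-strength interference integral, with a Monte Carlo cross-check of the kind the record contrasts it with. The distributions and parameters are illustrative assumptions, not the HFIR vessel data.

```python
# Sketch of direct probability integration for a load-strength interference
# problem: P(failure) = integral of f_load(s) * P(strength < s) ds, evaluated by
# quadrature rather than Monte Carlo sampling. Parameters are illustrative.
import numpy as np
from scipy import stats
from scipy.integrate import quad

load = stats.norm(loc=100.0, scale=15.0)          # applied stress intensity (assumed)
strength = stats.lognorm(s=0.2, scale=160.0)      # fracture toughness (assumed)

def failure_probability():
    integrand = lambda s: load.pdf(s) * strength.cdf(s)
    p, _ = quad(integrand, load.ppf(1e-9), load.ppf(1 - 1e-9))
    return p

if __name__ == "__main__":
    print(f"P(fracture) by direct integration: {failure_probability():.2e}")

    # Monte Carlo cross-check (the heavier alternative mentioned in the abstract)
    n = 1_000_000
    rng = np.random.default_rng(0)
    mc = np.mean(strength.rvs(n, random_state=rng) < load.rvs(n, random_state=rng))
    print(f"Monte Carlo estimate:              {mc:.2e}")
```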

  19. Efficient Committed Budget for Implementing Target Audit Probability for Many Inspectees

    OpenAIRE

    Yim, Andrew

    2009-01-01

    Strategic models of auditor-inspectee interaction have neglected implementation details in multiple-inspectee settings. With multiple inspectees, the target audit probability derived from the standard analysis can be implemented with sampling plans differing in the budgets committed to support them. Overly committed audit budgets tie up unneeded resources that could have been allocated for better uses. This paper studies the minimum committed budget required to implement a target audit probab...

  20. Study on shielded pump system failure analysis method based on Bayesian network

    International Nuclear Information System (INIS)

    Bao Yilan; Huang Gaofeng; Tong Lili; Cao Xuewu

    2012-01-01

    This paper applies Bayesian networks to system failure analysis, with the aim of improving the representation of uncertainty logic and multi-fault states in system failure analysis. A Bayesian network for shielded pump failure analysis is presented, with fault parameter learning used to update the Bayesian network parameters based on new samples. Finally, through Bayesian network inference, the vulnerabilities of the system, the most likely failure modes, and the fault probabilities are obtained. The powerful ability of Bayesian networks to analyze system faults is illustrated by examples. (authors)

  1. Reliability and Availability Analysis of Some Systems with Common-Cause Failures Using SPICE Circuit Simulation Program

    Directory of Open Access Journals (Sweden)

    Muhammad Taher Abuelma'atti

    1999-01-01

    Full Text Available The effectiveness of SPICE circuit simulation program in calculating probabilities, reliability, steady-state availability and mean-time to failure of repairable systems described by Markov models is demonstrated. Two examples are presented. The first example is a warm standby system with common-cause failures and human errors. The second example is a non-identical unit parallel system with common-cause failures. In both cases recourse to numerical solution is inevitable to obtain the Laplace transforms of the probabilities. Results obtained using SPICE are compared with previously published results obtained using the Laplace transform method. Full SPICE listings are included.

  2. The less familiar side of heart failure: symptomatic diastolic dysfunction.

    Science.gov (United States)

    Morris, Spencer A; Van Swol, Mark; Udani, Bela

    2005-06-01

    Arrange for echocardiography or radionuclide angiography within 72 hours of a heart failure exacerbation. An ejection fraction >50% in the presence of signs and symptoms of heart failure makes the diagnosis of diastolic heart failure probable. To treat associated hypertension, use angiotensin receptor blockers (ARBs), angiotensin-converting enzyme (ACE) inhibitors, beta-blockers, calcium channel blockers, or diuretics to achieve a blood pressure goal of <130/80 mm Hg. When using beta-blockers to control heart rate, titrate doses more aggressively than would be done for systolic failure, to reach a goal of 60 to 70 bpm. Use ACE inhibitors/ARBs to decrease hospitalizations, decrease symptoms, and prevent left ventricular remodeling.

  3. Future changes over the Himalayas: Maximum and minimum temperature

    Science.gov (United States)

    Dimri, A. P.; Kumar, D.; Choudhary, A.; Maharana, P.

    2018-03-01

    An assessment of the projections of minimum and maximum air temperature over the Indian Himalayan region (IHR) from the COordinated Regional Climate Downscaling EXperiment - South Asia (hereafter, CORDEX-SA) regional climate model (RCM) experiments has been carried out under two different Representative Concentration Pathway (RCP) scenarios. The major aim of this study is to assess the probable future changes in the minimum and maximum temperature climatology and their long-term trends under different RCPs, along with the elevation-dependent warming over the IHR. A number of statistical analyses, such as changes in mean climatology, long-term spatial trends and probability distribution functions, are carried out to detect the signals of climate change. The study also tries to quantify the uncertainties associated with different model experiments and their ensemble in space, time and for different seasons. The model experiments and their ensemble show a prominent cold bias over the Himalayas for the present climate. However, a statistically significant higher warming rate (0.23-0.52 °C/decade) for both minimum and maximum air temperature (Tmin and Tmax) is observed for all the seasons under both RCPs. The rate of warming intensifies with the increase in radiative forcing under a range of greenhouse gas scenarios from RCP4.5 to RCP8.5. In addition to this, a wide range of spatial variability and disagreements in the magnitude of the trend between different models describe the uncertainty associated with the model projections and scenarios. The projected rate of increase of Tmin may destabilize snow formation at the higher altitudes in the northern and western parts of the Himalayan region, while the rising trend of Tmax over the southern flank may effectively melt more snow cover. Such a combined effect of the rising trends of Tmin and Tmax may pose a potential threat to the glacial deposits. The overall trend of the diurnal temperature range (DTR) portrays an increasing trend across the entire area with

  4. Effects of Stress Ratio and Microstructure on Fatigue Failure Behavior of Polycrystalline Nickel Superalloy

    Science.gov (United States)

    Zhang, H.; Guan, Z. W.; Wang, Q. Y.; Liu, Y. J.; Li, J. K.

    2018-05-01

    The effects of microstructure and stress ratio on the high cycle fatigue of the nickel superalloy Nimonic 80A were investigated. Stress ratios of 0.1, 0.5 and 0.8 were chosen, and fatigue tests were performed at a frequency of 110 Hz. Cleavage failure was observed, and three competing crack initiation modes were identified with a scanning electron microscope, classified as surface without facets, surface with facets and subsurface with facets. As the stress ratio was increased from 0.1 to 0.8, the occurrence probability of surface and subsurface initiation with facets increased, reaching its maximum value at R = 0.5, while the probability of surface initiation without facets decreased. The effect of microstructure on the fatigue fracture behavior at different stress ratios was also observed and discussed. Based on the Goodman diagram, it was concluded that the fatigue strength at 50% probability of failure at R = 0.1, 0.5 and 0.8 is lower than the modified Goodman line.
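
    A worked sketch of the modified Goodman correction commonly used to compare fatigue strengths at different stress ratios against a single line. The maximum stresses and ultimate tensile strength below are illustrative assumptions, not the Nimonic 80A measurements.

```python
# Sketch: modified Goodman relation mapping a (mean, amplitude) stress state at
# stress ratio R onto an equivalent fully reversed amplitude, so strengths at
# different R values can be compared on one diagram. Numbers are illustrative.

def equivalent_fully_reversed_amplitude(sigma_max, R, sigma_ultimate):
    """Goodman: sigma_a / sigma_ar + sigma_m / sigma_u = 1  ->  solve for sigma_ar."""
    sigma_min = R * sigma_max
    sigma_a = 0.5 * (sigma_max - sigma_min)   # stress amplitude
    sigma_m = 0.5 * (sigma_max + sigma_min)   # mean stress
    return sigma_a / (1.0 - sigma_m / sigma_ultimate)

sigma_u = 1250.0  # MPa, assumed ultimate tensile strength
for R, sigma_max in [(0.1, 700.0), (0.5, 800.0), (0.8, 900.0)]:
    s_ar = equivalent_fully_reversed_amplitude(sigma_max, R, sigma_u)
    print(f"R = {R:.1f}: equivalent fully reversed amplitude = {s_ar:.0f} MPa")
```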

  5. Modelling the failure risk for water supply networks with interval-censored data

    International Nuclear Information System (INIS)

    García-Mora, B.; Debón, A.; Santamaría, C.; Carrión, A.

    2015-01-01

    In reliability, some failures are not observed at the exact moment of occurrence. In that case it can be more convenient to approximate them by a time interval. In this study, we have used a generalized non-linear model developed for interval-censored data to treat the lifetime of a pipe from its time of installation until its failure. The aim of this analysis was to identify those network characteristics that may affect the risk of failure, and an exhaustive validation of the analysis is carried out. The results indicated that certain characteristics of the network negatively affected the risk of failure of the pipe: an increase in the length and pressure of the pipes, a small diameter, some materials used in the manufacture of pipes and the traffic on the street where the pipes are located. Once the model had been correctly fitted to our data, we also provided simple tables that will allow companies to easily calculate a pipe's probability of failure in the future. - Highlights: • We model the first failure time in a water supply company from Spain. • We fit arbitrarily interval-censored data with a generalized non-linear model. • The results are validated. We provide simple tables to easily calculate probabilities of no failure at different times.

  6. Fuel failure detection and location methods in CAGRs

    International Nuclear Information System (INIS)

    Harris, A.M.

    1982-06-01

    The release of fission products from AGR fuel failures and the way in which the signals from such failures must be detected against the background signal from uranium contamination of the fuel is considered. Theoretical assessments of failure detection are used to show the limitations of the existing Electrostatic Wire Precipitator Burst Can Detection system (BCD) and how its operating parameters can be optimised. Two promising alternative methods, the 'split count' technique and the use of iodine measurements, are described. The results of a detailed study of the mechanical and electronic performance of the present BCD trolleys are given. The limited experience of detection and location of two fuel failures in CAGR using conventional and alternative methods is reviewed. The larger failure was detected and located using the conventional BCD equipment with a high confidence level. It is shown that smaller failures may not be easy to detect and locate using the current BCD equipment, and the second smaller failure probably remained in the reactor for about a year before it was discharged. The split count technique used with modified BCD equipment was able to detect the smaller failure after careful inspection of the data. (author)

  7. SIMULATED HUMAN ERROR PROBABILITY AND ITS APPLICATION TO DYNAMIC HUMAN FAILURE EVENTS

    Energy Technology Data Exchange (ETDEWEB)

    Herberger, Sarah M.; Boring, Ronald L.

    2016-10-01

    Abstract Objectives: Human reliability analysis (HRA) methods typically analyze human failure events (HFEs) at the overall task level. For dynamic HRA, it is important to model human activities at the subtask level. There exists a disconnect between the dynamic subtask level and the static task level that presents issues when modeling dynamic scenarios. For example, the SPAR-H method is typically used to calculate the human error probability (HEP) at the task level. As demonstrated in this paper, quantification in SPAR-H does not translate to the subtask level. Methods: Two different discrete distributions were generated for each SPAR-H Performance Shaping Factor (PSF) to define the frequency of PSF levels. The first distribution was a uniform, or uninformed, distribution that assumed the frequency of each PSF level was equally likely. The second, non-continuous distribution took the frequency of each PSF level as identified from an assessment of the HERA database. These two approaches were created to identify the resulting distribution of the HEP. The resulting HEP that appears closer to the known distribution, a log-normal centered on 1E-3, is the more desirable. Each approach then has median, average and maximum HFE calculations applied. To calculate these three values, three events, A, B and C, are generated from the PSF level frequencies of the constituent subtasks. The median HFE selects the median PSF level from each PSF and calculates the HEP. The average HFE takes the mean PSF level, and the maximum takes the maximum PSF level. The same data set of subtask HEPs yields starkly different HEPs when aggregated to the HFE level in SPAR-H. Results: Assuming that each PSF level in each HFE is equally likely creates an unrealistic distribution of the HEP that is centered at 1. Next, the observed frequency of PSF levels was applied, with the resulting HEP behaving log-normally and with a majority of the values under 2.5% HEP. The median, average and maximum HFE calculations did yield
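
    A sketch of the HFE-level aggregation described above: subtask PSF levels are reduced with a median, average or maximum rule, and a SPAR-H-style HEP is formed as a nominal HEP multiplied by PSF multipliers. The multiplier table is an illustrative placeholder, not the official SPAR-H values.

```python
# Sketch of the HFE-level aggregation: reduce subtask PSF levels (median, mean
# or max per PSF), then form HEP = nominal_HEP x product of PSF multipliers.
# The multiplier table below is an assumed placeholder, not official SPAR-H data.
import statistics

NOMINAL_HEP = 1e-3
PSF_MULTIPLIERS = {            # level index -> multiplier (assumed values)
    "available_time": [0.1, 1.0, 10.0],
    "stress":         [1.0, 2.0, 5.0],
    "complexity":     [0.1, 1.0, 2.0, 5.0],
}

def hep_from_levels(levels_by_psf, reducer):
    """levels_by_psf maps a PSF name to the subtask level indices observed."""
    hep = NOMINAL_HEP
    for psf, levels in levels_by_psf.items():
        level = min(int(reducer(levels)), len(PSF_MULTIPLIERS[psf]) - 1)
        hep *= PSF_MULTIPLIERS[psf][level]
    return min(hep, 1.0)

subtask_levels = {"available_time": [1, 1, 2], "stress": [0, 2, 2], "complexity": [1, 3, 1]}
for name, reducer in [("median", statistics.median), ("average", statistics.mean), ("maximum", max)]:
    print(f"{name:>7} HFE HEP: {hep_from_levels(subtask_levels, reducer):.2e}")
```

    The same subtask data yield clearly different HFE-level HEPs under the three reduction rules, which is the disconnect the abstract points out.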

  8. Generalized Probability-Probability Plots

    NARCIS (Netherlands)

    Mushkudiani, N.A.; Einmahl, J.H.J.

    2004-01-01

    We introduce generalized Probability-Probability (P-P) plots in order to study the one-sample goodness-of-fit problem and the two-sample problem, for real valued data. These plots, that are constructed by indexing with the class of closed intervals, globally preserve the properties of classical P-P

  9. RISK CONNECTED WITH AIRCRAFT PRODUCTION IN ACCORDANCE WITH MINIMUM EQUIPMENT LIST (MEL)

    Directory of Open Access Journals (Sweden)

    R. V. Enikeev

    2014-01-01

    Full Text Available The article covers the problem of understanding the necessity of risk assessment connected with aircraft production in accordance with the Minimum Equipment List (MEL). The article presents a calculation of the fail-safe performance probability of the Airbus A320 Family fuel system in the event of a defect whose rectification is postponed in accordance with the MEL. The article also presents the results of the risk assessment connected with aircraft production in accordance with the MEL.

  10. Quantum Probabilities as Behavioral Probabilities

    Directory of Open Access Journals (Sweden)

    Vyacheslav I. Yukalov

    2017-03-01

    Full Text Available We demonstrate that behavioral probabilities of human decision makers share many common features with quantum probabilities. This does not imply that humans are some quantum objects, but just shows that the mathematics of quantum theory is applicable to the description of human decision making. The applicability of quantum rules for describing decision making is connected with the nontrivial process of making decisions in the case of composite prospects under uncertainty. Such a process involves deliberations of a decision maker when making a choice. In addition to the evaluation of the utilities of considered prospects, real decision makers also appreciate their respective attractiveness. Therefore, human choice is not based solely on the utility of prospects, but includes the necessity of resolving the utility-attraction duality. In order to justify that human consciousness really functions similarly to the rules of quantum theory, we develop an approach defining human behavioral probabilities as the probabilities determined by quantum rules. We show that quantum behavioral probabilities of humans do not merely explain qualitatively how human decisions are made, but they predict quantitative values of the behavioral probabilities. Analyzing a large set of empirical data, we find good quantitative agreement between theoretical predictions and observed experimental data.

  11. On the functional failures concept and probabilistic safety margins: challenges in application for evaluation of effectiveness of shutdown systems - 15318

    International Nuclear Information System (INIS)

    Serghiuta, D.; Tholammakkil, J.

    2015-01-01

    The use of a level-3 reliability approach and the concept of functional failure probability could provide the basis for defining a safety margin metric which would include a limit for the probability of functional failure, in line with the definition of a reliability-based design. It can also allow a quantification of the level of confidence, by explicit modeling and quantification of uncertainties, and provide a better framework for representation of the actual design and optimization of design margins within an integrated probabilistic-deterministic model. This paper reviews the attributes and challenges in the application of the functional failure concept to the evaluation of risk-informed safety margins, using as an illustrative example the case of CANDU reactor shutdown system effectiveness. A risk-informed formulation is first introduced for estimation of a reasonable limit for the functional failure probability using a Swiss cheese model. It is concluded that more research is needed in this area and that a deterministic-probabilistic approach may be a reasonable intermediate step for evaluation of the functional failure probability at the system level. The views expressed in this paper are those of the authors and do not necessarily reflect those of CNSC, or any part thereof. (authors)

  12. Seismic analysis for translational failure of landfills with retaining walls.

    Science.gov (United States)

    Feng, Shi-Jin; Gao, Li-Ya

    2010-11-01

    In the seismic impact zone, seismic force can be a major triggering mechanism for translational failures of landfills. The scope of this paper is to develop a three-part wedge method for seismic analysis of translational failures of landfills with retaining walls. The approximate solution of the factor of safety can be calculated. Unlike previous conventional limit equilibrium methods, the new method is capable of revealing the effects of both the solid waste shear strength and the retaining wall on the translational failures of landfills during earthquake. Parameter studies of the developed method show that the factor of safety decreases with the increase of the seismic coefficient, while it increases quickly with the increase of the minimum friction angle beneath waste mass for various horizontal seismic coefficients. Increasing the minimum friction angle beneath the waste mass appears to be more effective than any other parameters for increasing the factor of safety under the considered condition. Thus, selecting liner materials with higher friction angle will considerably reduce the potential for translational failures of landfills during earthquake. The factor of safety gradually increases with the increase of the height of retaining wall for various horizontal seismic coefficients. A higher retaining wall is beneficial to the seismic stability of the landfill. Simply ignoring the retaining wall will lead to serious underestimation of the factor of safety. Besides, the approximate solution of the yield acceleration coefficient of the landfill is also presented based on the calculated method. Copyright © 2010 Elsevier Ltd. All rights reserved.

  13. Failure Investigation of Radiant Platen Superheater Tube of Thermal Power Plant Boiler

    Science.gov (United States)

    Ghosh, D.; Ray, S.; Mandal, A.; Roy, H.

    2015-04-01

    This paper highlights a case study of a typical premature failure of a radiant platen superheater tube of a 210 MW thermal power plant boiler. Visual examination, dimensional measurement and chemical analysis are conducted as part of the investigation. Apart from these, metallographic analysis and fractography are also conducted to ascertain the probable cause of failure. Finally, it has been concluded that the premature failure of the superheater tube can be attributed to localized creep at high temperature. Corrective actions have also been suggested to avoid this type of failure in the near future.

  14. The probability of containment failure by direct containment heating in Zion. Supplement 1

    International Nuclear Information System (INIS)

    Pilch, M.M.; Allen, M.D.; Stamps, D.W.; Tadios, E.L.; Knudson, D.L.

    1994-12-01

    Supplement 1 of NUREG/CR-6075 brings to closure the DCH issue for the Zion plant. It includes the documentation of the peer review process for NUREG/CR-6075, the assessments of four new splinter scenarios defined in working group meetings, and modeling enhancements recommended by the working groups. In the four new scenarios, consistency of the initial conditions has been implemented by using insights from systems-level codes. SCDAP/RELAP5 was used to analyze three short-term station blackout cases with different leak rates. In all three cases, the hot leg or surge line failed well before the lower head, and thus the primary system depressurized to a point where DCH was no longer considered a threat. However, these calculations were continued to lower head failure in order to gain insights that were useful in establishing the initial and boundary conditions. The most useful insights are that the RCS pressure is low at vessel breach, metallic blockages in the core region do not melt and relocate into the lower plenum, and melting of upper plenum steel is correlated with hot leg failure. The SCDAP/RELAP output was used as input to CONTAIN to assess the containment conditions at vessel breach. The containment-side conditions predicted by CONTAIN are similar to those originally specified in NUREG/CR-6075.

  15. Probability Aggregates in Probability Answer Set Programming

    OpenAIRE

    Saad, Emad

    2013-01-01

    Probability answer set programming is a declarative programming language that has been shown effective for representing and reasoning about a variety of probability reasoning tasks. However, the lack of probability aggregates, e.g. expected values, in the language of disjunctive hybrid probability logic programs (DHPP) disallows the natural and concise representation of many interesting problems. In this paper, we extend DHPP to allow arbitrary probability aggregates. We introduce two types of p...

  16. The plant-specific impact of different pressurization rates in the probabilistic estimation of containment failure modes

    International Nuclear Information System (INIS)

    Ahn, Kwang Il; Yang, Joon Eon; Ha, Jae Joo

    2003-01-01

    The explicit consideration of different pressurization rates in estimating the probabilities of containment failure modes has a profound effect on the confidence of the containment performance evaluation that is so critical for risk assessment of nuclear power plants. Except for the sophisticated NUREG-1150 study, many of the recent containment performance analyses (through level 2 PSAs or IPE back-end analyses) did not take into account an explicit distinction between slow and fast pressurization. A careful investigation of both approaches shows that many of the approaches adopted in the recent containment performance analyses exactly correspond to the NUREG-1150 approach for the prediction of containment failure mode probabilities in the presence of fast pressurization. As a result, it was expected that the existing containment performance analysis results would be subject to greater or lesser conservatism in light of the ultimate failure mode of the containment. The main purpose of this paper is to assess the potential conservatism of a plant-specific containment performance analysis result in light of containment failure mode probabilities.

  17. Analysis of failure dependent test, repair and shutdown strategies for redundant trains

    International Nuclear Information System (INIS)

    Uryasev, S.; Samanta, P.

    1994-09-01

    Failure-dependent testing implies a test of the redundant components (or trains) when failure of one component has been detected. The purpose of such testing is to detect any common cause failures (CCFs) of multiple components so that a corrective action, such as repair or plant shutdown, can be taken to reduce the residence time of multiple failures, given that a failure has been detected. This type of testing focuses on reducing the conditional risk of CCFs. Formulas for calculating the conditional failure probability of a two-train system with different test, repair and shutdown strategies are developed. A methodology is presented with an example calculation showing the risk-effectiveness of failure-dependent strategies for emergency diesel generators (EDGs) in nuclear power plants (NPPs)

  18. α-Decomposition for estimating parameters in common cause failure modeling based on causal inference

    International Nuclear Information System (INIS)

    Zheng, Xiaoyu; Yamaguchi, Akira; Takata, Takashi

    2013-01-01

    The traditional α-factor model has focused on the occurrence frequencies of common cause failure (CCF) events. Global α-factors in the α-factor model are defined as fractions of failure probability for particular groups of components. However, there are unknown uncertainties in CCF parameter estimation owing to the scarcity of available failure data. Joint distributions of CCF parameters are actually determined by a set of possible causes, which are characterized by CCF-triggering abilities and occurrence frequencies. In the present paper, the process of α-decomposition (Kelly-CCF method) is developed to learn about sources of uncertainty in CCF parameter estimation. Moreover, it aims to evaluate the CCF risk significances of different causes, which are named decomposed α-factors. Firstly, a Hybrid Bayesian Network is adopted to reveal the relationship between potential causes and failures. Secondly, because all potential causes have different occurrence frequencies and abilities to trigger dependent or independent failures, a regression model is provided and proved by conditional probability. Global α-factors are expressed by explanatory variables (causes' occurrence frequencies) and parameters (decomposed α-factors). Finally, an example is provided to illustrate the process of hierarchical Bayesian inference for the α-decomposition process. This study shows that the α-decomposition method can integrate failure information from the cause, component and system levels. It can parameterize the CCF risk significance of possible causes and can update probability distributions of global α-factors. In addition, it can provide a reliable way to evaluate uncertainty sources and reduce the uncertainty in probabilistic risk assessment. It is recommended to build databases including CCF parameters and the corresponding occurrence frequency of each cause for each targeted system
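
    A sketch of the standard α-factor relation underlying the global α-factors discussed above: the probability that exactly k of m components in a common cause group fail together, in the non-staggered-testing form. The α values and the total failure probability are illustrative assumptions; the paper's decomposed α-factors are not reproduced here.

```python
# Sketch of the standard alpha-factor relation: for a common cause group of m
# components, Q_k = k / C(m-1, k-1) * (alpha_k / alpha_t) * Q_t with
# alpha_t = sum_k k * alpha_k (non-staggered testing).
# The alpha values and Q_t below are illustrative assumptions.
from math import comb

def ccf_probabilities(alphas, q_total):
    """alphas[k-1] is the global alpha-factor for k-of-m failures."""
    m = len(alphas)
    alpha_t = sum(k * a for k, a in enumerate(alphas, start=1))
    return {
        k: k / comb(m - 1, k - 1) * alphas[k - 1] / alpha_t * q_total
        for k in range(1, m + 1)
    }

alphas = [0.95, 0.04, 0.01]        # assumed alpha_1, alpha_2, alpha_3 for m = 3
q_t = 1e-3                         # assumed total failure probability per component
for k, q in ccf_probabilities(alphas, q_t).items():
    print(f"Q_{k} (exactly {k} of 3 fail together) = {q:.2e}")
```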

  19. An engineering approach to common mode failure analysis

    International Nuclear Information System (INIS)

    Gangloff, W.C.; Franke, T.H.

    1975-01-01

    Safety systems for nuclear reactors can be designed using standard reliability engineering techniques such that system failure due to random component faults is extremely unlikely. However, the common-mode failure where several components fail together from a common cause is not susceptible to prevention by the usual tactics. In systems where a high degree of redundancy has been employed, the actual reliability of the system in service may be limited by common-mode failures. A methodical and thorough procedure for evaluation of system vulnerability to common-mode failures is presented. This procedure was developed for use in nuclear reactor safety systems and has been applied specifically to reactor protection. The method offers a qualitative assessment of a system whereby weak points can be identified and the resistance to common-mode failure can be judged. It takes into account all factors influencing system performance including design, manufacturing, installation, operation, testing, and maintenance. It is not a guarantee or sure solution, but rather a practical tool which can provide good assurance that the probability of common-mode protection failure has been made acceptably low. (author)

  20. Lecture notes: meantime to failure analysis

    International Nuclear Information System (INIS)

    Hanlen, R.C.

    1976-01-01

    A method is presented which affects the Quality Assurance Engineer's place in management decision making by giving him a working parameter on which to base sound engineering and management decisions. The theory used in Reliability Engineering to determine the mean time to failure of a component or system is reviewed. The method presented derives the probability density function for the parameter of the exponential distribution. The exponential distribution is commonly used by industry to determine the reliability of a component or system when the failure rate is assumed to be constant. Some examples of N Reactor performance data are used. To be specific: the ball system data, with 4.9 x 10^6 unit hours of service and 7 individual failures, indicate a demonstrated 98.8 percent reliability at a 95 percent confidence level for a 12 month mission period, and the diesel starts data, with 7.2 x 10^5 unit hours of service and 1 failure, indicate a demonstrated 94.4 percent reliability at a 95 percent confidence level for a 12 month mission period
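
    A sketch of one common way to obtain a "demonstrated reliability at a confidence level" under the constant-failure-rate (exponential) model: a chi-square upper confidence bound on the failure rate applied over the mission time. Whether this reproduces the quoted N Reactor figures depends on assumptions (time- versus failure-truncated testing, mission hours) not stated in the record.

```python
# Sketch: demonstrated reliability at a confidence level for the exponential
# model, via a chi-square upper confidence bound on the failure rate
# (time-truncated test form). Assumed mission length: 12 months of 730 hours.
from math import exp
from scipy.stats import chi2

def demonstrated_reliability(unit_hours, failures, mission_hours, confidence=0.95):
    lam_upper = chi2.ppf(confidence, 2 * failures + 2) / (2.0 * unit_hours)
    return exp(-lam_upper * mission_hours)

# Figures quoted in the record: ball system 4.9e6 unit-hours / 7 failures,
# diesel starts 7.2e5 unit-hours / 1 failure, 12-month mission.
print(f"ball system:   {demonstrated_reliability(4.9e6, 7, 12 * 730):.3f}")
print(f"diesel starts: {demonstrated_reliability(7.2e5, 1, 12 * 730):.3f}")
```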

  1. Failure analysis of the cement mantle in total hip arthroplasty with an efficient probabilistic method.

    Science.gov (United States)

    Kaymaz, Irfan; Bayrak, Ozgu; Karsan, Orhan; Celik, Ayhan; Alsaran, Akgun

    2014-04-01

    Accurate prediction of the long-term behaviour of cemented hip implants is very important, not only for patient comfort but also for the elimination of any revision operation due to implant failure. Therefore, a more realistic computer model was generated and then used for both deterministic and probabilistic analyses of the hip implant in this study. The deterministic failure analysis was carried out for the most common failure states of the cement mantle. On the other hand, most of the design parameters of the cemented hip are inherently uncertain quantities. Therefore, the probabilistic failure analysis was also carried out considering the fatigue failure of the cement mantle, since it is the most critical failure state. However, the probabilistic analysis generally requires a large amount of time; thus, a response surface method proposed in this study was used to reduce the computation time for the analysis of the cemented hip implant. The results demonstrate that using an efficient probabilistic approach can significantly reduce the computation time for the failure probability of the cement from several hours to minutes. The results also show that, even though the deterministic failure analyses do not indicate any failure of the cement mantle and give high safety factors, the probabilistic analysis predicts the failure probability of the cement mantle as 8%, which must be considered during the evaluation of the success of cemented hip implants.

  2. Probability and Confidence Trade-space (PACT) Evaluation: Accounting for Uncertainty in Sparing Assessments

    Science.gov (United States)

    Anderson, Leif; Box, Neil; Carter, Katrina; DiFilippo, Denise; Harrington, Sean; Jackson, David; Lutomski, Michael

    2012-01-01

    There are two general shortcomings to the current annual sparing assessment: 1. The vehicle functions are currently assessed according to confidence targets, which can be misleading: overly conservative or optimistic. 2. The current confidence levels are arbitrarily determined and do not account for epistemic uncertainty (lack of knowledge) in the ORU failure rate. There are two major categories of uncertainty that impact the Sparing Assessment: (a) Aleatory Uncertainty: natural variability in the distribution of actual failures around a Mean Time Between Failures (MTBF); (b) Epistemic Uncertainty: lack of knowledge about the true value of an Orbital Replacement Unit's (ORU) MTBF. We propose an approach to revise confidence targets and account for both categories of uncertainty, an approach we call Probability and Confidence Trade-space (PACT) evaluation.

  3. Cascading failures with local load redistribution in interdependent Watts-Strogatz networks

    Science.gov (United States)

    Hong, Chen; Zhang, Jun; Du, Wen-Bo; Sallan, Jose Maria; Lordan, Oriol

    2016-05-01

    Cascading failures of loads in isolated networks have been studied extensively over the last decade. Since 2010, such research has extended to interdependent networks. In this paper, we study cascading failures with local load redistribution in interdependent Watts-Strogatz (WS) networks. The effects of rewiring probability and coupling strength on the resilience of interdependent WS networks have been extensively investigated. It has been found that, for small values of the tolerance parameter, interdependent networks are more vulnerable as rewiring probability increases. For larger values of the tolerance parameter, the robustness of interdependent networks firstly decreases and then increases as rewiring probability increases. Coupling strength has a different impact on robustness. For low values of coupling strength, the resilience of interdependent networks decreases with the increment of the coupling strength until it reaches a certain threshold value. For values of coupling strength above this threshold, the opposite effect is observed. Our results are helpful to understand and design resilient interdependent networks.

  4. Probability of liquid radionuclide release of a near surface repository; Probabilidade de liberacao liquida de radionuclideos de um repositorio proximo a superficie

    Energy Technology Data Exchange (ETDEWEB)

    Aguiar, Lais A.; Melo, P.F. Frutuoso e [Universidade Federal, Rio de Janeiro, RJ (Brazil). Coordenacao dos Programas de Pos-graduacao de Engenharia. Programa de Engenharia Nuclear]. E-mail: lais@con.ufrj.br; frutuoso@con.ufrj.br; Passos, Erivaldo; Alves, Antonio Sergio [ELETRONUCLEAR, Rio de Janeiro, RJ (Brazil). Div. de Seguranca Nuclear]. E-mail: epassos@eletronuclear.gov.br; asergi@eletronuclear.gov.br

    2005-07-01

    The safety analysis of a near surface repository for medium- and low-activity wastes leads to investigating accident scenarios related to water infiltration phenomena. The probability of radionuclide release through the infiltrating water can be estimated with the aid of suitable probabilistic models. For the analysis, the repository system is divided into two subsystems: the first comprising the barriers against water infiltration (backfill material and container), and the second comprising the barriers against the leaching of radionuclides to the biosphere (solid matrix and geosphere). The repository system is assumed to have its components (barriers) working in an active parallel mode. The probability of system failure is obtained from the logical structure of a fault tree. The study was based on the Probabilistic Safety Assessment (PSA) technique for the most significant radionuclides within the low- and medium-activity radioactive package system, and thus the probability of failure of the system for each radionuclide during the period of institutional control was obtained. (author)
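
    A minimal sketch of the barrier logic described above: each subsystem fails only if all of its active-parallel barriers fail, and a liquid release requires both subsystems to fail. Independence and the barrier failure probabilities are illustrative assumptions, not values from the study.

```python
# Sketch of the fault-tree logic: release = AND of two subsystem failures, and
# each subsystem (barriers in active parallel) fails only if all barriers fail.
# Independence assumed; barrier failure probabilities are placeholders.
from math import prod

def subsystem_failure(barrier_failure_probs):
    """Active parallel barriers: the subsystem fails only if every barrier fails."""
    return prod(barrier_failure_probs)

def release_probability(infiltration_barriers, leaching_barriers):
    """Top event (liquid release) = AND of the two subsystem failures."""
    return subsystem_failure(infiltration_barriers) * subsystem_failure(leaching_barriers)

# Assumed per-period failure probabilities: backfill, container | solid matrix, geosphere
p = release_probability([0.05, 0.02], [0.10, 0.01])
print(f"P(liquid release) = {p:.2e}")   # 1.00e-06 with these placeholder values
```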

  5. R3 Cup Does Not Have a High Failure Rate in Conventional Bearings: A Minimum of 5-Year Follow-Up.

    Science.gov (United States)

    Teoh, Kar H; Whitham, Robert D J; Golding, David M; Wong, Jenny F; Lee, Paul Y F; Evans, Aled R

    2018-02-01

    The R3 cementless acetabular system was first marketed in Australia and Europe in 2007. Previous papers have shown high failure rates of the R3 cup, of up to 24%, with metal-on-metal bearings. There are currently no medium-term clinical results on this cup. The aim of this study is to review our results of the R3 acetabular cup with conventional bearings with a minimum of 5-year follow-up. Patients who were implanted with the R3 acetabular cup were identified from our center's arthroplasty database. A total of 293 consecutive total hip arthroplasties were performed in 286 patients. The primary outcome was revision. The secondary outcomes were the Oxford Hip Score (OHS) and radiographic evaluation. The mean age of the patients was 69.4 years. The mean preoperative OHS was 23 (range 10-34) and the mean OHS was 40 (range 33-48) at the final follow-up. Radiological evaluation showed an excellent ARA score in all patients at 5 years. None of the R3 cups showed osteolysis at the final follow-up. There were 3 revisions in our series, of which 2 were R3 cups. The risk of revision was 1.11% at 5 years. Our experience of using the R3 acetabular system with conventional bearings showed high survivorship and is consistent with the allocated Orthopaedic Data Evaluation Panel rating of 5A* as rated in 2015 in the United Kingdom. Copyright © 2017 Elsevier Inc. All rights reserved.

  6. Validation of a model-based measurement of the minimum insert thickness of knee prostheses: a retrieval study.

    Science.gov (United States)

    van IJsseldijk, E A; Harman, M K; Luetzner, J; Valstar, E R; Stoel, B C; Nelissen, R G H H; Kaptein, B L

    2014-10-01

    Wear of polyethylene inserts plays an important role in the failure of total knee replacements and can be monitored in vivo by measuring the minimum joint space width in anteroposterior radiographs. The objective of this retrospective cross-sectional study was to compare the accuracy and precision of a new model-based method with the conventional method by analysing the difference between the minimum joint space width measurements and the actual thickness of retrieved polyethylene tibial inserts. Before revision, the minimum joint space width values and their locations on the insert were measured in 15 fully weight-bearing radiographs. These measurements were compared with the actual minimum thickness values and locations of the retrieved tibial inserts after revision. The mean error of the model-based minimum joint space width measurement was significantly smaller than that of the conventional method for the medial condyles (0.50 vs 0.94 mm). The location error of the model-based measurements was less than 10 mm in the medial direction in 12 cases and less in the lateral direction in 13 cases. The model-based minimum joint space width measurement method is more accurate than the conventional measurement with the same precision. Cite this article: Bone Joint Res 2014;3:289-96. ©2014 The British Editorial Society of Bone & Joint Surgery.

  7. Safety Management in an Oil Company through Failure Mode Effects and Critical Analysis

    Directory of Open Access Journals (Sweden)

    Benedictus Rahardjo

    2016-06-01

    Full Text Available This study attempts to apply Failure Mode, Effects and Criticality Analysis (FMECA) to improve the safety of a production system, specifically the production process of an oil company. Since food processing is a worldwide issue and self-management of a food company is more important than relying on government regulations, this study focused on that matter. The initial step of this study is to identify and analyze the criticality of the potential failure modes of the production process. The next step is to take corrective action to minimize the probability of repeating the same failure mode, followed by a re-analysis of its criticality. The results of the corrective actions were compared with the conditions before improvement by testing the significance of the difference using a two-sample t-test. The final measured result is the Criticality Priority Number (CPN), which refers to the severity category of the failure mode and the probability of occurrence of the same failure mode. The recommended actions proposed by the FMECA significantly reduce the CPN compared with the value before improvement, with a 38.46% improvement for the palm olein case study.
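
    A sketch of a criticality ranking and before/after comparison in the spirit of the study above. The record does not give the exact CPN formula, so the product form, the ratings and the data below are assumptions for illustration only.

```python
# Sketch: a CPN formed from a severity category and an occurrence rank, with the
# before/after comparison done by a two-sample t-test. The product form of the
# CPN and the ratings are assumptions, not the study's actual formula or data.
import numpy as np
from scipy import stats

def cpn(severity, occurrence):
    """Assumed form: criticality = severity category x occurrence rank."""
    return severity * occurrence

# Assumed failure-mode ratings before and after corrective actions.
before = np.array([cpn(s, o) for s, o in [(9, 6), (8, 5), (7, 7), (6, 4), (8, 6)]])
after  = np.array([cpn(s, o) for s, o in [(9, 3), (8, 2), (7, 4), (6, 2), (8, 3)]])

t_stat, p_value = stats.ttest_ind(before, after)
print(f"mean CPN before {before.mean():.1f}, after {after.mean():.1f}")
print(f"two-sample t-test: t = {t_stat:.2f}, p = {p_value:.4f}")
```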

  8. Treatment of the complex abdomen and acute intestinal failure

    NARCIS (Netherlands)

    de Vries, F.E.E.

    2018-01-01

    Management of the complex abdomen and acute intestinal failure (IF) is challenging and requires specialized multidisciplinary treatment. Due to the small numbers and heterogeneity of the patient group, high-quality evidence for some of the research questions is probably unachievable. Nevertheless,

  9. A model for the coupling of failure rates in a redundant system

    International Nuclear Information System (INIS)

    Kleppmann, W.G.; Wutschig, R.

    1986-01-01

    A model is developed which takes into account the coupling between the failure rates of identical components in different redundancies of a safety system, i.e., the fact that the failure rates of identical components subjected to the same operating conditions will scatter less than the failure rates of any two components of the same type. It is shown that with increasing coupling the expectation value and the variance of the distribution of the failure probability of the redundant system increase. A consistent way to incorporate operating experience in a Bayesian framework is developed and the results are presented. (orig.)

  10. Negative probability in the framework of combined probability

    OpenAIRE

    Burgin, Mark

    2013-01-01

    Negative probability has found diverse applications in theoretical physics. Thus, construction of sound and rigorous mathematical foundations for negative probability is important for physics. There are different axiomatizations of conventional probability. So, it is natural that negative probability also has different axiomatic frameworks. In the previous publications (Burgin, 2009; 2010), negative probability was mathematically formalized and rigorously interpreted in the context of extende...

  11. Is human failure a stochastic process?

    International Nuclear Information System (INIS)

    Dougherty, Ed M.

    1997-01-01

    Human performance results in failure events that occur with a risk-significant frequency. System analysts have taken for granted the random (stochastic) nature of these events in engineering assessments such as risk assessment. However, cognitive scientists and error technologists, at least those who have interest in human reliability, have, over the recent years, claimed that human error does not need this stochastic framework. Yet they still use the language appropriate to stochastic processes. This paper examines the potential for the stochastic nature of human failure production as the basis for human reliability analysis. It distinguishes and leaves to others, however, the epistemic uncertainties over the possible probability models for the real variability of human performance

  12. NASA Lewis Launch Collision Probability Model Developed and Analyzed

    Science.gov (United States)

    Bollenbacher, Gary; Guptill, James D

    1999-01-01

    There are nearly 10,000 tracked objects orbiting the earth. These objects encompass manned objects, active and decommissioned satellites, spent rocket bodies, and debris. They range from a few centimeters across to the size of the MIR space station. Anytime a new satellite is launched, the launch vehicle with its payload attached passes through an area of space in which these objects orbit. Although the population density of these objects is low, there always is a small but finite probability of collision between the launch vehicle and one or more of these space objects. Even though the probability of collision is very low, for some payloads even this small risk is unacceptable. To mitigate the small risk of collision associated with launching at an arbitrary time within the daily launch window, NASA performs a prelaunch mission assurance Collision Avoidance Analysis (or COLA). For the COLA of the Cassini spacecraft, the NASA Lewis Research Center conducted an in-house development and analysis of a model for launch collision probability. The model allows a minimum clearance criterion to be used with the COLA analysis to ensure an acceptably low probability of collision. If, for any given liftoff time, the nominal launch vehicle trajectory would pass a space object with less than the minimum required clearance, launch would not be attempted at that time. The model assumes that the nominal positions of the orbiting objects and of the launch vehicle can be predicted as a function of time, and therefore, that any tracked object that comes within close proximity of the launch vehicle can be identified. For any such pair, these nominal positions can be used to calculate a nominal miss distance. The actual miss distances may differ substantially from the nominal miss distance, due, in part, to the statistical uncertainty of the knowledge of the objects' positions. The model further assumes that these position uncertainties can be described with position covariance matrices
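
    A minimal sketch (not the NASA Lewis model) of turning a nominal miss vector and a combined position covariance into a collision probability for a spherical hard body, by sampling the relative-position uncertainty. The Gaussian treatment, hard-body radius and numbers below are illustrative assumptions.

```python
# Sketch: given a nominal miss vector and a combined position covariance for the
# relative position at closest approach, estimate the probability that the
# actual separation falls inside an assumed hard-body radius. Numbers are assumed.
import numpy as np

def collision_probability(nominal_miss, covariance, hard_body_radius, n=200_000, seed=0):
    rng = np.random.default_rng(seed)
    samples = rng.multivariate_normal(nominal_miss, covariance, size=n)
    return np.mean(np.linalg.norm(samples, axis=1) < hard_body_radius)

nominal_miss = np.array([200.0, 100.0, 50.0])        # metres, assumed
covariance = np.diag([300.0, 200.0, 100.0]) ** 2     # assumed 1-sigma of 300/200/100 m
print(f"P(collision) ~ {collision_probability(nominal_miss, covariance, 30.0):.1e}")
```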

  13. Dam failure analysis for the Lago El Guineo Dam, Orocovis, Puerto Rico

    Science.gov (United States)

    Gómez-Fragoso, Julieta; Heriberto Torres-Sierra,

    2016-08-09

    The U.S. Geological Survey, in cooperation with the Puerto Rico Electric Power Authority, completed hydrologic and hydraulic analyses to assess the potential hazard to human life and property associated with the hypothetical failure of the Lago El Guineo Dam. The Lago El Guineo Dam is within the headwaters of the Río Grande de Manatí and impounds a drainage area of about 4.25 square kilometers. The hydrologic assessment was designed to determine the outflow hydrographs and peak discharges for Lago El Guineo and other subbasins in the Río Grande de Manatí hydrographic basin for three extreme rainfall events: (1) a 6-hour probable maximum precipitation event, (2) a 24-hour probable maximum precipitation event, and (3) a 24-hour, 100-year recurrence rainfall event. The hydraulic study simulated a dam failure of Lago El Guineo Dam using flood hydrographs generated from the hydrologic study. The simulated dam failure generated a hydrograph that was routed downstream from Lago El Guineo Dam through the lower reaches of the Río Toro Negro and the Río Grande de Manatí to determine water-surface profiles developed from the event-based hydrologic scenarios and “sunny day” conditions. The Hydrologic Engineering Center’s Hydrologic Modeling System (HEC–HMS) and Hydrologic Engineering Center’s River Analysis System (HEC–RAS) computer programs, developed by the U.S. Army Corps of Engineers, were used for the hydrologic and hydraulic modeling, respectively. The flow routing in the hydraulic analyses was completed using the unsteady flow module available in the HEC–RAS model. Above the Lago El Guineo Dam, the simulated inflow peak discharges from HEC–HMS resulted in about 550 and 414 cubic meters per second for the 6- and 24-hour probable maximum precipitation events, respectively. The 24-hour, 100-year recurrence storm simulation resulted in a peak discharge of about 216 cubic meters per second. For the hydrologic analysis, no dam failure conditions are

  14. On the failure probability of the primary piping of the PWR

    International Nuclear Information System (INIS)

    Schueller, G.I.; Hampl, N.C.

    1984-01-01

    A methodology for quantification of the structural reliability of the primary piping (PP) of a PWR under operational and accidental conditions is developed. Biblis B is utilized as reference plant. The PP structure is modeled utilizing finite element procedures. Based on the properties of the operational and internal accidental conditions, a static analysis suffices. However, a dynamic analysis considering non-linear effects of the soil-structure-interaction is to be used to determine load effects due to earthquake induced loading. Considering realistically the presence of initial cracks in welds and considering annual frequencies of occurrence of the various loading conditions, a crack propagation calculation utilizing the Forman model is carried out. Simultaneously, leak and break probabilities are computed using the 'Two Criteria' approach. A Monte Carlo simulation procedure is used as the method of solution. (Author) [pt]

  15. Do Minimum Wages Fight Poverty?

    OpenAIRE

    David Neumark; William Wascher

    1997-01-01

    The primary goal of a national minimum wage floor is to raise the incomes of poor or near-poor families with members in the work force. However, estimates of employment effects of minimum wages tell us little about whether minimum wages can achieve this goal; even if the disemployment effects of minimum wages are modest, minimum wage increases could result in net income losses for poor families. We present evidence on the effects of minimum wages on family incomes from matched March CPS s...

  16. Maximizing probable oil field profit: uncertainties on well spacing

    International Nuclear Information System (INIS)

    MacKay, J.A.; Lerche, I.

    1997-01-01

    The influence of uncertainties in field development costs, well costs, lifting costs, selling price, discount factor, and oil field reserves is evaluated for its impact on assessing probable ranges of uncertainty on present day worth (PDW), oil field lifetime τ2/3, optimum number of wells (OWI), and the minimum (n-) and maximum (n+) number of wells to produce a PDW ≥ 0. The relative importance of different factors in contributing to the uncertainties in PDW, τ2/3, OWI, n- and n+ is also analyzed. Numerical illustrations indicate how the maximum PDW depends on the ranges of parameter values, drawn from probability distributions using Monte Carlo simulations. In addition, the procedure illustrates the relative importance of contributions of individual factors to the total uncertainty, so that one can assess where to place effort to improve ranges of uncertainty; while the volatility of each estimate allows one to determine when such effort is needful. (author)
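
    The record above propagates parameter uncertainty into present day worth and well-count decisions by Monte Carlo sampling. The sketch below illustrates only that general procedure; the cash-flow model, all distributions and every numerical value are toy assumptions, not the authors' model.

        import numpy as np

        rng = np.random.default_rng(1)
        YEARS, N_TRIALS = 20, 2000
        t = np.arange(1, YEARS + 1)

        # Draw the uncertain inputs once (common random numbers across well counts).
        reserves  = rng.normal(50e6, 10e6, N_TRIALS)    # barrels
        price     = rng.normal(60.0, 10.0, N_TRIALS)    # $/bbl
        dev_cost  = rng.normal(200e6, 30e6, N_TRIALS)   # field development, $
        well_cost = rng.normal(10e6, 2e6, N_TRIALS)     # per well, $
        lift_cost = rng.normal(12.0, 3.0, N_TRIALS)     # $/bbl
        disc      = rng.uniform(0.05, 0.12, N_TRIALS)   # discount rate

        def pdw(n_wells):
            """Toy present-day worth: an exponential-decline field drained faster by more wells."""
            annual = reserves[:, None] * (1 - np.exp(-0.1 * n_wells)) * 0.2 * np.exp(-0.2 * t)
            cash = (price - lift_cost)[:, None] * annual / (1 + disc[:, None]) ** t
            return cash.sum(axis=1) - dev_cost - n_wells * well_cost

        for n in (5, 15, 25):
            vals = pdw(n)
            print(f"{n:2d} wells: mean PDW ${vals.mean():.2e}, "
                  f"5-95% range ${np.percentile(vals, 5):.2e} to ${np.percentile(vals, 95):.2e}")

        best = max(range(1, 41), key=lambda n: pdw(n).mean())
        print("well count maximising expected PDW (toy model):", best)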

  17. Effect of Progressive Heart Failure on Cerebral Hemodynamics and Monoamine Metabolism in CNS.

    Science.gov (United States)

    Mamalyga, M L; Mamalyga, L M

    2017-07-01

    Compensated and decompensated heart failure are characterized by different associations of disorders in the brain and heart. In compensated heart failure, the blood flow in the common carotid and basilar arteries does not change. Exacerbation of heart failure leads to severe decompensation and is accompanied by a decrease in blood flow in the carotid and basilar arteries. Changes in monoamine content occurring in the brain at different stages of heart failure are determined by various factors. The functional exercise test showed unequal monoamine-synthesizing capacities of the brain in compensated and decompensated heart failure. Reduced capacity of the monoaminergic systems in decompensated heart failure probably leads to overstrain of the central regulatory mechanisms, their gradual exhaustion, and failure of the compensatory mechanisms, which contributes to progression of heart failure.

  18. Failure investigation of a secondary super heater tube in a 140 MW thermal power plant

    Directory of Open Access Journals (Sweden)

    Atanu Saha

    2017-04-01

    Full Text Available This article describes the findings of a detailed investigation into the failure of a secondary super heater tube in a 140 MW thermal power plant. Preliminary macroscopic examinations along with visual examination, dimensional measurement and chemical analysis were carried out to deduce the probable cause of failure. In addition, optical microscopy was a necessary supplement to understand the cause of failure. It was concluded that the tube had failed due to severe creep damage caused by high metal temperature during service. The probable causes of high metal temperature may be insufficient flow of steam due to partial blockage, presence of thick oxide scale on the ID surface, high flue gas temperature, etc.

  19. Review of the current understanding of the potential for containment failure from in-vessel steam explosions

    International Nuclear Information System (INIS)

    1985-06-01

    A group of experts was convened to review the current understanding of the potential for containment failure from in-vessel steam explosions during core meltdown accidents in LWRs. The Steam Explosion Review Group (SERG) was requested to provide assessments of: (1) the conditional probability of containment failure due to a steam explosion, (2) a Sandia National Laboratory (SNL) report entitled ''An Uncertainty Study of PWR Steam Explosions,'' NUREG/CR-3369, (3) a SNL proposed steam explosion research program. This report summarizes the results of the deliberations of the review group. It also presents the detailed response of each individual member to each of the issues. The consensus of the SERG is that the occurrence of a steam explosion of sufficient energetics which could lead to alpha-mode containment failure has a low probability. The SERG members disagreed with the methodology used in NUREG/CR-3369 for the purpose of establishing the uncertainty in the probability of containment failure by a steam explosion. A consensus was reached among SERG members on the need for a continuing steam explosion research program which would improve our understanding of certain aspects of steam explosion phenomenology

  20. Site-to-Source Finite Fault Distance Probability Distribution in Probabilistic Seismic Hazard and the Relationship Between Minimum Distances

    Science.gov (United States)

    Ortega, R.; Gutierrez, E.; Carciumaru, D. D.; Huesca-Perez, E.

    2017-12-01

    We present a method to compute the conditional and unconditional probability density function (PDF) of the finite fault distance distribution (FFDD). Two cases are described: lines and areas. The case of lines has a simple analytical solution while, in the case of areas, the geometrical probability of a fault based on the strike, dip, and fault segment vertices is obtained using the projection of spheres in a piecewise rectangular surface. The cumulative distribution is computed by measuring the projection of a sphere of radius r in an effective area using an algorithm that estimates the area of a circle within a rectangle. In addition, we introduce the finite fault distance metrics. This distance is the distance where the maximum stress release occurs within the fault plane and generates a peak ground motion. Later, we can apply the appropriate ground motion prediction equations (GMPE) for PSHA. The conditional probability of distance given magnitude is also presented using different scaling laws. A simple model of constant distribution of the centroid at the geometrical mean is discussed; in this model, hazard is reduced at the edges because the effective size is reduced. Nowadays there is a trend of using extended source distances in PSHA; however, it is not possible to separate the fault geometry from the GMPE. With this new approach, it is possible to add fault rupture models separating geometrical and propagation effects.

  1. Behavioral and physiological significance of minimum resting metabolic rate in king penguins.

    Science.gov (United States)

    Halsey, L G; Butler, P J; Fahlman, A; Woakes, A J; Handrich, Y

    2008-01-01

    Because fasting king penguins (Aptenodytes patagonicus) need to conserve energy, it is possible that they exhibit particularly low metabolic rates during periods of rest. We investigated the behavioral and physiological aspects of periods of minimum metabolic rate in king penguins under different circumstances. Heart rate (f(H)) measurements were recorded to estimate rate of oxygen consumption during periods of rest. Furthermore, apparent respiratory sinus arrhythmia (RSA) was calculated from the f(H) data to determine probable breathing frequency in resting penguins. The most pertinent results were that minimum f(H) achieved (over 5 min) was higher during respirometry experiments in air than during periods ashore in the field; that minimum f(H) during respirometry experiments on water was similar to that while at sea; and that RSA was apparent in many of the f(H) traces during periods of minimum f(H) and provides accurate estimates of breathing rates of king penguins resting in specific situations in the field. Inferences made from the results include that king penguins do not have the capacity to reduce their metabolism to a particularly low level on land; that they can, however, achieve surprisingly low metabolic rates at sea while resting in cold water; and that during respirometry experiments king penguins are stressed to some degree, exhibiting an elevated metabolism even when resting.

  2. A physical probabilistic model to predict failure rates in buried PVC pipelines

    International Nuclear Information System (INIS)

    Davis, P.; Burn, S.; Moglia, M.; Gould, S.

    2007-01-01

    For older water pipeline materials such as cast iron and asbestos cement, future pipe failure rates can be extrapolated from large volumes of existing historical failure data held by water utilities. However, for newer pipeline materials such as polyvinyl chloride (PVC), only limited failure data exists and confident forecasts of future pipe failures cannot be made from historical data alone. To solve this problem, this paper presents a physical probabilistic model, which has been developed to estimate failure rates in buried PVC pipelines as they age. The model assumes that under in-service operating conditions, crack initiation can occur from inherent defects located in the pipe wall. Linear elastic fracture mechanics theory is used to predict the time to brittle fracture for pipes with internal defects subjected to combined internal pressure and soil deflection loading together with through-wall residual stress. To include uncertainty in the failure process, inherent defect size is treated as a stochastic variable, and modelled with an appropriate probability distribution. Microscopic examination of fracture surfaces from field failures in Australian PVC pipes suggests that the 2-parameter Weibull distribution can be applied. Monte Carlo simulation is then used to estimate lifetime probability distributions for pipes with internal defects, subjected to typical operating conditions. As with inherent defect size, the 2-parameter Weibull distribution is shown to be appropriate to model uncertainty in predicted pipe lifetime. The Weibull hazard function for pipe lifetime is then used to estimate the expected failure rate (per pipe length/per year) as a function of pipe age. To validate the model, predicted failure rates are compared to aggregated failure data from 17 UK water utilities obtained from the United Kingdom Water Industry Research (UKWIR) National Mains Failure Database. In the absence of actual operating pressure data in the UKWIR database, typical
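
    The record above ends by converting a fitted 2-parameter Weibull lifetime distribution into an expected failure rate versus pipe age through the Weibull hazard function. A minimal sketch of that last step is given below; the shape and scale values are illustrative assumptions, not the fitted values from the paper.

        import numpy as np

        def weibull_hazard(t, shape, scale):
            """Weibull hazard h(t) = (k/lam) * (t/lam)**(k-1), expected failures per pipe per year."""
            return (shape / scale) * (t / scale) ** (shape - 1)

        # Illustrative parameters: shape > 1 means the failure rate grows as the pipes age.
        shape, scale = 2.5, 120.0                  # scale in years
        ages = np.array([10, 25, 50, 75, 100])
        for age, h in zip(ages, weibull_hazard(ages, shape, scale)):
            print(f"age {age:3d} yr: expected failure rate {h:.4f} per pipe-year")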

  3. Studying and comparing spectrum efficiency and error probability in GMSK and DBPSK modulation schemes

    Directory of Open Access Journals (Sweden)

    Juan Mario Torres Nova

    2008-09-01

    Full Text Available Gaussian minimum shift keying (GMSK) and differential binary phase shift keying (DBPSK) are two digital modulation schemes which are frequently used in radio communication systems; however, there is interdependence in the use of their benefits (spectral efficiency, low bit error rate, low inter-symbol interference, etc.). Optimising one parameter creates problems for another; for example, the GMSK scheme succeeds in reducing bandwidth when introducing a Gaussian filter into an MSK (minimum shift keying) modulator in exchange for increasing inter-symbol interference in the system. The DBPSK scheme leads to lower error probability, occupying more bandwidth; it likewise facilitates synchronous data transmission due to the receiver’s bit delay when recovering a signal.

  4. An approach for estimating the breach probabilities of moraine-dammed lakes in the Chinese Himalayas using remote-sensing data

    Directory of Open Access Journals (Sweden)

    X. Wang

    2012-10-01

    Full Text Available To make first-order estimates of the probability of moraine-dammed lake outburst flood (MDLOF) and prioritize the probabilities of breaching posed by potentially dangerous moraine-dammed lakes (PDMDLs) in the Chinese Himalayas, an objective approach is presented. We first select five indicators to identify PDMDLs according to four predesigned criteria. The climatic background was regarded as the climatic precondition of the moraine-dam failure, and under different climatic preconditions, we distinguish the trigger mechanisms of MDLOFs and subdivide them into 17 possible breach modes, with each mode having three or four components; we combined the precondition, modes and components to construct a decision-making tree of moraine-dam failure. Conversion guidelines were established so as to quantify the probabilities of components of a breach mode employing the historic performance method combined with expert knowledge and experience. The region of the Chinese Himalayas was chosen as a study area where there have been frequent MDLOFs in recent decades. The results show that the breaching probabilities (P) of 142 PDMDLs range from 0.037 to 0.345, and they can be further categorized as 43 lakes with very high breach probabilities (P ≥ 0.24), 47 lakes with high breach probabilities (0.18 ≤ P < 0.24), 24 lakes with mid-level breach probabilities (0.12 ≤ P < 0.18), 24 lakes with low breach probabilities (0.06 ≤ P < 0.12), and four lakes with very low breach probabilities (P < 0.06).

  5. Seismically induced common cause failures in PSA of nuclear power plants

    International Nuclear Information System (INIS)

    Ravindra, M.K.; Johnson, J.J.

    1991-01-01

    In this paper, a research project on the seismically induced common cause failures in nuclear power plants performed for Toshiba Corp. is described. The objective of this research was to develop the procedure for estimating the common cause failure probabilities of different nuclear power plant components using the combination of seismic experience data, the review of sources of dependency, sensitivity studies and engineering judgement. The research project consisted of three tasks: the investigation of damage instances in past earthquakes, the analysis of multiple failures and their root causes, and the development of the methodology for assessing seismically induced common cause failures. The details of these tasks are explained. In this paper, the works carried out in the third task are described. A methodology for treating common cause failures and the correlation between component failures is formulated; it highlights the modeling of event trees taking into account common cause failures and the development of fault trees considering the correlation between component failures. The overview of seismic PSA, the quantification methods for dependent failures and Latin Hypercube sampling method are described. (K.I.)

  6. A delay time model with imperfect and failure-inducing inspections

    International Nuclear Information System (INIS)

    Flage, Roger

    2014-01-01

    This paper presents an inspection-based maintenance optimisation model where the inspections are imperfect and potentially failure-inducing. The model is based on the basic delay-time model in which a system has three states: perfectly functioning, defective and failed. The system is deteriorating through these states and to reveal defective systems, inspections are performed periodically using a procedure by which the system fails with a fixed state-dependent probability; otherwise, an inspection identifies a functioning system as defective (false positive) with a fixed probability and a defective system as functioning (false negative) with a fixed probability. The system is correctively replaced upon failure or preventively replaced either at the N'th inspection time or when an inspection reveals the system as defective, whichever occurs first. Replacement durations are assumed to be negligible and costs are associated with inspections, replacements and failures. The problem is to determine the optimal inspection interval T and preventive age replacement limit N that jointly minimise the long run expected cost per unit of time. The system may also be thought of as a passive two-state system subject to random demands; the three states of the model are then functioning, undetected failed and detected failed; and to ensure the renewal property of replacement cycles the demand process generating the ‘delay time’ is then restricted to the Poisson process. The inspiration for the presented model has been passive safety critical valves as used in (offshore) oil and gas production and transportation systems. In light of this the passive system interpretation is highlighted, as well as the possibility that inspection-induced failures are associated with accidents. Two numerical examples are included, and some potential extensions of the model are indicated

  7. The debate over the effects of the minimum wage on employment ...

    International Development Research Centre (IDRC) Digital Library (Canada)

    13 Dec. 2012 ... The minimum wage is a subject that invariably stirs controversy. Its supporters argue that it helps workers at the bottom of the wage scale and moves society toward greater fairness. Its critics contend that businesses will probably hire fewer workers ...

  8. Outage Probability Analysis of FSO Links over Foggy Channel

    KAUST Repository

    Esmail, Maged Abdullah

    2017-02-22

    Outdoor free space optic (FSO) communication systems are sensitive to atmospheric impairments such as turbulence and fog, in addition to being subject to pointing errors. Fog is particularly severe because it induces an attenuation that may vary from a few dB up to a few hundred dB per kilometer. Pointing errors also distort the link alignment and cause signal fading. In this paper, we investigate and analyze FSO system performance under fog conditions and pointing errors in terms of outage probability. We then study the impact of several effective communication mitigation techniques that can improve the system performance, including multi-hop, transmit laser selection (TLS) and hybrid RF/FSO transmission. Closed-form expressions for the outage probability are derived, and practical and comprehensive numerical examples are suggested to assess the obtained results. We found that the FSO system has limited performance that prevents applying FSO in wireless microcells that have a 500 m minimum cell radius. The performance degrades more when pointing errors appear. Increasing the transmitted power can improve the performance under light to moderate fog. However, under thick and dense fog the improvement is negligible. Using mitigation techniques can play a major role in improving the range and outage probability.

  9. Outage Probability Analysis of FSO Links over Foggy Channel

    KAUST Repository

    Esmail, Maged Abdullah; Fathallah, Habib; Alouini, Mohamed-Slim

    2017-01-01

    Outdoor free space optic (FSO) communication systems are sensitive to atmospheric impairments such as turbulence and fog, in addition to being subject to pointing errors. Fog is particularly severe because it induces an attenuation that may vary from a few dB up to a few hundred dB per kilometer. Pointing errors also distort the link alignment and cause signal fading. In this paper, we investigate and analyze FSO system performance under fog conditions and pointing errors in terms of outage probability. We then study the impact of several effective communication mitigation techniques that can improve the system performance, including multi-hop, transmit laser selection (TLS) and hybrid RF/FSO transmission. Closed-form expressions for the outage probability are derived, and practical and comprehensive numerical examples are suggested to assess the obtained results. We found that the FSO system has limited performance that prevents applying FSO in wireless microcells that have a 500 m minimum cell radius. The performance degrades more when pointing errors appear. Increasing the transmitted power can improve the performance under light to moderate fog. However, under thick and dense fog the improvement is negligible. Using mitigation techniques can play a major role in improving the range and outage probability.

  10. A hybrid of fuzzy FMEA-AHP to determine factors affecting alternator failure causes

    Directory of Open Access Journals (Sweden)

    Reza Kiani Aslani

    2014-09-01

    Full Text Available This paper presents a method to determine factors influencing alternator failure causes. Failure Mode and Effects Analysis (FMEA) is one of the first systematic techniques for failure analysis, based on three factors including Probability (P), Severity (S) and Detection (D). The traditional FMEA method considers equal weights for all three factors; however, in real-world cases, one may wish to consider various weights. The proposed study develops a mathematical model to determine optimal weights based on the analytical hierarchy process technique. The implementation of the proposed study has been demonstrated for a real-world case study of alternator failure causes.
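
    The record above derives factor weights with the analytic hierarchy process (AHP) instead of treating Probability, Severity and Detection equally. The sketch below shows one common way to do this, not the paper's specific model: weights from the principal eigenvector of a pairwise-comparison matrix, combined into a weighted risk score; the comparison judgements, the weighted geometric-mean form and the example ratings are illustrative assumptions.

        import numpy as np

        def ahp_weights(pairwise):
            """Principal-eigenvector weights from a reciprocal pairwise-comparison matrix."""
            vals, vecs = np.linalg.eig(pairwise)
            w = np.real(vecs[:, np.argmax(np.real(vals))])
            return w / w.sum()

        # Pairwise judgements for (P, S, D); e.g. Severity judged 3x as important as Probability.
        A = np.array([[1.0, 1/3, 2.0],
                      [3.0, 1.0, 4.0],
                      [1/2, 1/4, 1.0]])
        w_p, w_s, w_d = ahp_weights(A)

        def weighted_rpn(p, s, d):
            """Weighted geometric-mean risk score on the usual 1-10 FMEA scales."""
            return (p ** w_p) * (s ** w_s) * (d ** w_d)

        print("weights (P, S, D):", np.round([w_p, w_s, w_d], 3))
        print("failure cause score:", round(weighted_rpn(p=4, s=8, d=6), 2))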

  11. Directionality and Orientation Effects on the Resistance to Propagating Shear Failure

    Science.gov (United States)

    Leis, B. N.; Barbaro, F. J.; Gray, J. M.

    Hydrocarbon pipelines transporting compressible products like methane or high-vapor-pressure (HVP) liquids under supercritical conditions can be susceptible to long-propagating failures. As the unplanned release of such hydrocarbons can lead to significant pollution and/or the horrific potential of explosion and/or a very large fire, design criteria to preclude such failures were essential to environmental and public safety. Thus, technology was developed to establish the minimum arrest requirements to avoid such failures shortly after this design concern was evident. Soon after this technology emerged in the early 1970s it became evident that its predictions were increasingly non-conservative as the toughness of line-pipe steel increased. A second potentially critical factor for what was a one-dimensional technology was that changes in steel processing led to directional dependence in both the flow and fracture properties. While recognized, this dependence was tacitly ignored in quantifying arrest, as were early observations that indicated propagating shear failure was controlled by plastic collapse rather than by fracture processes.

  12. Probability of brittle failure

    Science.gov (United States)

    Kim, A.; Bosnyak, C. P.; Chudnovsky, A.

    1991-01-01

    A methodology was developed for collecting statistically representative data for crack initiation and arrest from a small number of test specimens. An epoxy (based on bisphenol A diglycidyl ether and polyglycol extended diglycyl ether and cured with diethylene triamine) is selected as a model material. A compact tension specimen with displacement controlled loading is used to observe multiple crack initiation and arrests. The energy release rate at crack initiation is significantly higher than that at a crack arrest, as has been observed elsewhere. The difference between these energy release rates is found to depend on specimen size (scale effect), and is quantitatively related to the fracture surface morphology. The scale effect, similar to that in statistical strength theory, is usually attributed to the statistics of defects which control the fracture process. Triangular shaped ripples (deltoids) are formed on the fracture surface during the slow subcritical crack growth, prior to the smooth mirror-like surface characteristic of fast cracks. The deltoids are complementary on the two crack faces, which excludes any inelastic deformation from consideration. Presence of defects is also suggested by the observed scale effect. However, there are no defects at the deltoid apexes detectable down to the 0.1 micron level.

  13. Estimating the empirical probability of submarine landslide occurrence

    Science.gov (United States)

    Geist, Eric L.; Parsons, Thomas E.; Mosher, David C.; Shipp, Craig; Moscardelli, Lorena; Chaytor, Jason D.; Baxter, Christopher D. P.; Lee, Homa J.; Urgeles, Roger

    2010-01-01

    The empirical probability for the occurrence of submarine landslides at a given location can be estimated from age dates of past landslides. In this study, tools developed to estimate earthquake probability from paleoseismic horizons are adapted to estimate submarine landslide probability. In both types of estimates, one has to account for the uncertainty associated with age-dating individual events as well as the open time intervals before and after the observed sequence of landslides. For observed sequences of submarine landslides, we typically only have the age date of the youngest event and possibly of a seismic horizon that lies below the oldest event in a landslide sequence. We use an empirical Bayes analysis based on the Poisson-Gamma conjugate prior model specifically applied to the landslide probability problem. This model assumes that landslide events as imaged in geophysical data are independent and occur in time according to a Poisson distribution characterized by a rate parameter λ. With this method, we are able to estimate the most likely value of λ and, importantly, the range of uncertainty in this estimate. Examples considered include landslide sequences observed in the Santa Barbara Channel, California, and in Port Valdez, Alaska. We confirm that, given the uncertainties of age dating, landslide complexes can be treated as single events by performing a statistical test of age dates representing the main failure episode of the Holocene Storegga landslide complex.
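
    The Poisson-Gamma conjugate model named above has a closed-form posterior: with a Gamma(alpha, beta) prior on the rate λ and n events observed over a record of length T, the posterior is Gamma(alpha + n, beta + T). The sketch below shows that calculation; the prior, event count and record length are illustrative assumptions, not the Santa Barbara Channel or Port Valdez data.

        from scipy.stats import gamma

        # Observed record (illustrative): 4 dated landslides over a ~12,000-year open interval.
        n_events, record_years = 4, 12_000.0

        # Weak Gamma prior on the Poisson rate lambda (per year); shape/rate parameterisation.
        alpha0, beta0 = 0.5, 1.0

        alpha_post = alpha0 + n_events
        beta_post = beta0 + record_years

        post = gamma(a=alpha_post, scale=1.0 / beta_post)
        lo, hi = post.ppf([0.025, 0.975])

        print(f"posterior mean rate : {post.mean():.2e} events/yr")
        print(f"95% credible range  : {lo:.2e} to {hi:.2e} events/yr")
        # Predictive probability of at least one event in the next 100 years (negative binomial form).
        print(f"P(>=1 event in 100 yr) ~ {1 - (beta_post / (beta_post + 100.0)) ** alpha_post:.3f}")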

  14. Advances on the Failure Analysis of the Dam—Foundation Interface of Concrete Dams

    Directory of Open Access Journals (Sweden)

    Luis Altarejos-García

    2015-12-01

    Full Text Available Failure analysis of the dam-foundation interface in concrete dams is characterized by complexity, uncertainties on models and parameters, and a strong non-linear softening behavior. In practice, these uncertainties are dealt with through a well-structured mixture of experience, best practices and prudent, conservative design approaches based on the safety factor concept. Yet, a sound, deep knowledge of some aspects of this failure mode remains unveiled, as they have been offset in practical applications by the use of this conservative approach. In this paper we show a strategy to analyse this failure mode under a reliability-based approach. The proposed methodology of analysis integrates epistemic uncertainty on spatial variability of strength parameters and data from dam monitoring. The purpose is to produce meaningful and useful information regarding the probability of occurrence of this failure mode that can be incorporated in risk-informed dam safety reviews. In addition, relationships between probability of failure and factors of safety are obtained. This research is supported by more than a decade of intensive professional practice on real world cases, and its final purpose is to bring some clarity and guidance and to contribute to the improvement of current knowledge and best practices on such an important dam safety concern.

  15. Classification and calculation of primary failure modes in bread production line

    International Nuclear Information System (INIS)

    Tsarouhas, Panagiotis H.

    2009-01-01

    In this study, we describe the methodology for classifying the primary failure modes into categories over a 2-year period, based on failure data of a bread production line. We estimate the probabilities of these categories by applying the chi-square goodness-of-fit test, and we calculate their joint probability mass functions at workstation and line level. Then, we present numerical examples in order to predict the causes and frequencies of breakdowns for workstations and for the entire bread production line that will occur in the future. The methodology is meant to guide bread and bakery product manufacturers, improving the operation of the production lines. It can also be a useful tool to maintenance engineers, who wish to analyze and improve the reliability and efficiency of the manufacturing systems
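
    The record above checks the estimated category probabilities with a chi-square goodness-of-fit test. The sketch below shows that single step on made-up counts; the categories, observed counts and hypothesised shares are illustrative assumptions, not the bread-line data.

        from scipy.stats import chisquare

        # Illustrative failure counts per primary failure-mode category over the study period.
        observed = [62, 41, 27, 15, 5]                     # e.g. mechanical, electrical, jam, sensor, other
        expected_share = [0.40, 0.28, 0.18, 0.10, 0.04]    # hypothesised category probabilities
        expected = [sum(observed) * s for s in expected_share]

        stat, p_value = chisquare(f_obs=observed, f_exp=expected)
        print(f"chi-square = {stat:.2f}, p-value = {p_value:.3f}")
        if p_value < 0.05:
            print("reject the hypothesised category probabilities at the 5% level")
        else:
            print("no evidence against the hypothesised category probabilities")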

  16. Severe renal failure in acute bacterial pyelonephritis: Do not forget corticosteroids

    Directory of Open Access Journals (Sweden)

    Sqalli Tarik

    2010-01-01

    Full Text Available Acute renal failure (ARF) is a rare complication of acute pyelonephritis in adult immunocompetent patients. Recovery of renal function usually occurs if antibiotics are promptly initiated. However, long-term consequences of renal scarring due to acute pyelonephritis are probably underestimated, and some patients present with prolonged renal failure despite adequate antibiotic therapy. We report two cases of severe ARF complicating bacterial pyelonephritis successfully treated with corticosteroids in association with conventional antibiotics.

  17. Probability

    CERN Document Server

    Shiryaev, A N

    1996-01-01

    This book contains a systematic treatment of probability from the ground up, starting with intuitive ideas and gradually developing more sophisticated subjects, such as random walks, martingales, Markov chains, ergodic theory, weak convergence of probability measures, stationary stochastic processes, and the Kalman-Bucy filter. Many examples are discussed in detail, and there are a large number of exercises. The book is accessible to advanced undergraduates and can be used as a text for self-study. This new edition contains substantial revisions and updated references. The reader will find a deeper study of topics such as the distance between probability measures, metrization of weak convergence, and contiguity of probability measures. Proofs for a number of important results which were merely stated in the first edition have been added. The author included new material on the probability of large deviations, and on the central limit theorem for sums of dependent random variables.

  18. Structural and microstructural design in brittle materials

    International Nuclear Information System (INIS)

    Evans, A.G.

    1979-12-01

    Structural design with brittle materials requires that the stress level in the component correspond to a material survival probability that exceeds the minimum survival probability permitted in that application. This can be achieved by developing failure models that fully account for the probability of fracture from defects within the material (including considerations of fracture statistics, fracture mechanics and stress analysis) coupled with non-destructive techniques that determine the size of the large extreme of critical defects. Approaches for obtaining the requisite information are described. The results provide implications for the microstructural design of failure resistant brittle materials by reducing the size of deleterious defects and enhancing the fracture toughness
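
    The design rule described above amounts to choosing a stress level whose predicted survival probability meets the required minimum. Under two-parameter Weibull fracture statistics this can be inverted in closed form: from P_s = exp[-(σ/σ0)^m], the allowable stress is σ = σ0(-ln P_s)^(1/m). The sketch below does that arithmetic with an illustrative Weibull modulus and characteristic strength, not values from the report.

        import math

        def allowable_stress(survival_req, weibull_modulus, char_strength):
            """Largest stress whose Weibull survival probability still meets the requirement."""
            # P_s(sigma) = exp(-(sigma/sigma_0)**m)  =>  sigma = sigma_0 * (-ln P_s)**(1/m)
            return char_strength * (-math.log(survival_req)) ** (1.0 / weibull_modulus)

        m, sigma0 = 10.0, 400.0    # Weibull modulus and characteristic strength (MPa), illustrative
        for p_s in (0.999, 0.9999, 0.99999):
            print(f"required survival {p_s}: allowable stress ~ {allowable_stress(p_s, m, sigma0):.0f} MPa")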

  19. A Propensity-Matched Study of Hypertension and Increased Stroke-Related Hospitalization in Chronic Heart Failure

    NARCIS (Netherlands)

    G.S. Filippatos (Gerasimos); C. Adamopoulos (Chris); X. Sui (Xuemei); T.E. Love (Thomas); P.M. Pullicino (Patrick); J. Lubsen (Jacob); G. Bakris (George); S.D. Anker (Stefan); G. Howard (George); D.T. Kremastinos (Dimitrios); A. Ahmed (Ali)

    2008-01-01

    Hypertension is a risk factor for heart failure and stroke. However, the effect of hypertension on stroke in patients with heart failure has not been well studied. In the Digitalis Investigation Group trial, 3,674 (47%) of the 7,788 patients had a history of hypertension. Probability or

  20. Estimates o the risks associated with dam failure

    Energy Technology Data Exchange (ETDEWEB)

    Ayyaswamy, P.; Hauss, B.; Hseih, T.; Moscati, A.; Hicks, T.E.; Okrent, D.

    1974-03-01

    The probabilities and potential consequences of dam failure in California, primarily due to large earthquakes, were estimated, taking as examples eleven dams having a relatively large population downstream. Mortalities in the event of dam failure range from 11,000 to 260,000, while damage to property may be as high as $720 million. It was assumed that an intensity IX or X earthquake (on the Modified Mercalli Scale) would be sufficient to completely fail earthen dams. Predictions of dam failure were based on the recurrence times of such earthquakes. For the dams studied, the recurrence intervals for an intensity IX earthquake varied between 20 and 800 years; for an intensity X, between 50 and 30,000 years. For the Lake Chabot and San Pablo dams (respectively 20- and 30-year recurrence times for an intensity X earthquake) the associated consequences are: 34,000 (Lake Chabot) and 30,000 (San Pablo) people killed; damage $140 million and $77 million. Evacuation was found to ameliorate the consequences slightly in most cases because of the short time available. Calculations are based on demography, and assume 10-foot floodwaters will drown all in their path and destroy all one-unit homes in the flood area. Damage estimates reflect losses incurred by structural damage to buildings and do not include loss of income. Hence the economic impact is probably understated.

  1. What is the Minimum EROI that a Sustainable Society Must Have?

    Directory of Open Access Journals (Sweden)

    David J.R. Murphy

    2009-01-01

    Full Text Available Economic production and, more generally, most global societies, are overwhelmingly dependent upon depleting supplies of fossil fuels. There is considerable concern amongst resource scientists, if not most economists, as to whether market signals or cost benefit analysis based on today’s prices are sufficient to guide our decisions about our energy future. These suspicions and concerns were escalated during the oil price increase from 2005 – 2008 and the subsequent but probably related market collapse of 2008. We believe that Energy Return On Investment (EROI) analysis provides a useful approach for examining disadvantages and advantages of different fuels and also offers the possibility to look into the future in ways that markets seem unable to do. The goal of this paper is to review the application of EROI theory to both natural and economic realms, and to assess preliminarily the minimum EROI that a society must attain from its energy exploitation to support continued economic activity and social function. In doing so we calculate herein a basic first attempt at the minimum EROI for current society and some of the consequences when that minimum is approached. The theory of the minimum EROI discussed here, which describes the somewhat obvious but nonetheless important idea that for any being or system to survive or grow it must gain substantially more energy than it uses in obtaining that energy, may be especially important. Thus any particular being or system must abide by a “Law of Minimum EROI”, which we calculate for both oil and corn-based ethanol as about 3:1 at the mine-mouth/farm-gate. Since most biofuels have EROIs of less than 3:1, they must be subsidized by fossil fuels to be useful.

  2. Probabilistic analyses of failure in reactor coolant piping

    International Nuclear Information System (INIS)

    Holman, G.S.

    1984-01-01

    LLNL is performing probabilistic reliability analyses of PWR and BWR reactor coolant piping for the NRC Office of Nuclear Regulatory Research. Specifically, LLNL is estimating the probability of a double-ended guillotine break (DEGB) in the reactor coolant loop piping in PWR plants, and in the main steam, feedwater, and recirculation piping of BWR plants. In estimating the probability of DEGB, LLNL considers two causes of pipe break: pipe fracture due to the growth of cracks at welded joints (direct DEGB), and pipe rupture indirectly caused by the seismically-induced failure of critical supports or equipment (indirect DEGB)

  3. Towards a whole-network risk assessment for railway bridge failures caused by scour during flood events

    Directory of Open Access Journals (Sweden)

    Lamb Rob

    2016-01-01

    Full Text Available Localised erosion (scour) during flood flow conditions can lead to costly damage or catastrophic failure of bridges, and in some cases loss of life or significant disruption to transport networks. Here, we take a broad scale view to assess risk associated with bridge scour during flood events over an entire infrastructure network, illustrating the analysis with data from the British railways. There have been 54 recorded events since 1846 in which scour led to the failure of railway bridges in Britain. These events tended to occur during periods of extremely high river flow, although there is uncertainty about the precise conditions under which failures occur, which motivates a probabilistic analysis of the failure events. We show how data from the historical bridge failures, combined with hydrological analysis, have been used to construct fragility curves that quantify the conditional probability of bridge failure as a function of river flow, accompanied by estimates of the associated uncertainty. The new fragility analysis is tested using flood events simulated from a national, spatial joint probability model for extremes in river flows. The combined models appear robust in comparison with historical observations of the expected number of bridge failures in a flood event, and provide an empirical basis for further broad-scale network risk analysis.
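
    A fragility curve of the kind described above gives the conditional failure probability as a function of river flow. The sketch below evaluates a lognormal fragility, one common functional form, at a few discharges; the median capacity and dispersion are illustrative assumptions, not the fitted British-railways values, and the uncertainty bands mentioned in the record are not shown.

        import math

        def scour_fragility(flow, median_capacity, dispersion):
            """P(bridge failure | flow): lognormal fragility, Phi(ln(flow/median)/beta)."""
            z = math.log(flow / median_capacity) / dispersion
            return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

        median_q, beta = 800.0, 0.6        # m^3/s and lognormal dispersion, illustrative
        for q in (200.0, 500.0, 800.0, 1500.0):
            print(f"flow {q:6.0f} m^3/s -> conditional failure probability {scour_fragility(q, median_q, beta):.3f}")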

  4. The structure of water around the compressibility minimum

    Energy Technology Data Exchange (ETDEWEB)

    Skinner, L. B. [X-ray Science Division, Advanced Photon Source, Argonne National Laboratory, Argonne, Illinois 60439 (United States); Mineral Physics Institute, Stony Brook University, Stony Brook, New York, New York 11794-2100 (United States); Benmore, C. J., E-mail: benmore@aps.anl.gov [X-ray Science Division, Advanced Photon Source, Argonne National Laboratory, Argonne, Illinois 60439 (United States); Neuefeind, J. C. [Spallation Neutron Source, Oak Ridge National Laboratory, Oak Ridge, Tennessee 37922 (United States); Parise, J. B. [Mineral Physics Institute, Stony Brook University, Stony Brook, New York, New York 11794-2100 (United States); Department of Geosciences, Stony Brook University, Stony Brook, New York, New York 11794-2100 (United States); Photon Sciences Division, Brookhaven National Laboratory, Upton, New York 11973 (United States)

    2014-12-07

    Here we present diffraction data that yield the oxygen-oxygen pair distribution function, g_OO(r), over the range 254.2–365.9 K. The running O-O coordination number, which represents the integral of the pair distribution function as a function of radial distance, is found to exhibit an isosbestic point at 3.30(5) Å. The probability of finding an oxygen atom surrounding another oxygen at this distance is therefore shown to be independent of temperature and corresponds to an O-O coordination number of 4.3(2). Moreover, the experimental data also show a continuous transition associated with the second peak position in g_OO(r) concomitant with the compressibility minimum at 319 K.

  5. Rising above the Minimum Wage.

    Science.gov (United States)

    Even, William; Macpherson, David

    An in-depth analysis was made of how quickly most people move up the wage scale from minimum wage, what factors influence their progress, and how minimum wage increases affect wage growth above the minimum. Very few workers remain at the minimum wage over the long run, according to this study of data drawn from the 1977-78 May Current Population…

  6. Calculation of the Incremental Conditional Core Damage Probability on the Extension of Allowed Outage Time

    International Nuclear Information System (INIS)

    Kang, Dae Il; Han, Sang Hoon

    2006-01-01

    RG 1.177 requires that the conditional risk (incremental conditional core damage probability and incremental conditional large early release probability: ICCDP and ICLERP), given that a specific component is out of service (OOS), be quantified for a permanent change of the allowed outage time (AOT) of a safety system. An AOT is the length of time that a particular component or system is permitted to be OOS while the plant is operating. The ICCDP is defined as: ICCDP = [(conditional CDF with the subject equipment OOS) - (baseline CDF with nominal expected equipment unavailabilities)] × [duration of the single AOT under consideration]. Any event rendering the component OOS can initiate the time clock for the limiting condition of operation for a nuclear power plant. Thus, the largest ICCDP among the ICCDPs estimated from any occurrence of the basic events for the component fault tree should be selected for determining whether the AOT can be extended or not. If the component is under preventive maintenance, the conditional risk can be straightforwardly calculated without changing the CCF probability. The main concern is the estimation of the CCF probability because there are possibilities of failures of other similar components due to the same root causes. The quantifications of the risk, given that the subject equipment is in a failed state, are performed by setting the identified event of the subject equipment to TRUE. The CCF probabilities are also changed according to the identified failure cause. In the previous studies, however, the ICCDP was quantified with the consideration of the possibility of a simultaneous occurrence of two CCF events. Based on the above, we derived the formulas of the CCF probabilities for the cases where a specific component is in a failed state and we presented sample calculation results of the ICCDP for the low pressure safety injection system (LPSIS) of Ulchin Unit 3
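
    The ICCDP definition quoted above is the product of a frequency difference and the outage duration expressed in years. The arithmetic below is a worked example with made-up numbers; the frequencies and the 72-hour AOT are illustrative assumptions, not Ulchin Unit 3 values.

        baseline_cdf    = 2.0e-5   # core damage frequency with nominal unavailabilities (per year)
        conditional_cdf = 9.0e-5   # core damage frequency with the component out of service (per year)
        aot_hours       = 72.0     # proposed allowed outage time

        # ICCDP = (conditional CDF - baseline CDF) x (AOT duration in years)
        iccdp = (conditional_cdf - baseline_cdf) * (aot_hours / 8760.0)
        print(f"ICCDP = {iccdp:.2e}")   # about 5.8e-07 for these illustrative numbers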

  7. Hotspots & other hidden targets: Probability of detection, number, frequency and area

    International Nuclear Information System (INIS)

    Vita, C.L.

    1994-01-01

    Concepts and equations are presented for making probability-based estimates of the detection probability, and the number, frequency, and area of hidden targets, including hotspots, at a given site. Targets include hotspots, which are areas of extreme or particular contamination, and any object or feature that is hidden from direct visual observation--including buried objects and geologic or hydrologic details or anomalies. Being Bayesian, results are fundamentally consistent with observational methods. Results are tools for planning or interpreting exploration programs used in site investigation or characterization, remedial design, construction, or compliance monitoring, including site closure. Used skillfully and creatively, these tools can help streamline and expedite environmental restoration, reducing time and cost, making site exploration cost-effective, and providing acceptable risk at minimum cost. 14 refs., 4 figs
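
    One of the simplest probability relationships used in this kind of sampling-plan design is that a hidden target of area a inside a site of area A is hit by at least one of n randomly placed samples with probability 1 - (1 - a/A)^n; inverting it gives the sample count needed for a target detection probability. The sketch below uses that relationship as a simplified stand-in for the report's Bayesian treatment; the site and hotspot areas are illustrative assumptions.

        import math

        def p_detect(n_samples, hotspot_area, site_area):
            """P(at least one randomly placed sample falls inside the hotspot)."""
            return 1.0 - (1.0 - hotspot_area / site_area) ** n_samples

        def samples_needed(p_target, hotspot_area, site_area):
            """Smallest n with detection probability >= p_target."""
            return math.ceil(math.log(1.0 - p_target) / math.log(1.0 - hotspot_area / site_area))

        site, hotspot = 10_000.0, 100.0        # m^2, illustrative
        print(f"30 samples -> P(detect) = {p_detect(30, hotspot, site):.2f}")
        print(f"samples for 95% detection: {samples_needed(0.95, hotspot, site)}")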

  8. Sensitivity analysis of limit state functions for probability-based plastic design

    Science.gov (United States)

    Frangopol, D. M.

    1984-01-01

    The evaluation of the total probability of a plastic collapse failure P_f for a highly redundant structure of random interdependent plastic moments acted on by random interdependent loads is a difficult and computationally very costly process. The evaluation of reasonable bounds to this probability requires the use of second moment algebra which involves many statistical parameters. A computer program which selects the best strategy for minimizing the interval between upper and lower bounds of P_f is now in its final stage of development. The sensitivity of the resulting bounds of P_f to the various uncertainties involved in the computational process is analyzed. Response sensitivities for both mode and system reliability of an ideal plastic portal frame are shown.

  9. Non-Organic Failure-to-Thrive: Origins and Psychoeducational Implications.

    Science.gov (United States)

    Phelps, LeAdelle

    1991-01-01

    The passage of Public Law 99-457 relating to education for the handicapped increases the probability that the school psychologist will be asked to evaluate infants and children diagnosed as suffering from nonorganic failure-to-thrive (NOFT) disorder. Etiology, models, and interventions appropriate for NOFT children are discussed. (SLD)

  10. Top scores are possible, bottom scores are certain (and middle scores are not worth mentioning: A pragmatic view of verbal probabilities

    Directory of Open Access Journals (Sweden)

    Marie Juanchich

    2013-05-01

    Full Text Available In most previous studies of verbal probabilities, participants are asked to translate expressions such as possible and not certain into numeric probability values. This probabilistic translation approach can be contrasted with a novel which-outcome (WO) approach that focuses on the outcomes that people naturally associate with probability terms. The WO approach has revealed that, when given bell-shaped distributions of quantitative outcomes, people tend to associate certainty with minimum (unlikely) outcome magnitudes and possibility with (unlikely) maximal ones. The purpose of the present paper is to test the factors that foster these effects and the conditions in which they apply. Experiment 1 showed that the association of probability term and outcome was related to the association of scalar modifiers (i.e., it is certain that the battery will last at least..., it is possible that the battery will last up to...). Further, we tested whether this pattern was dependent on the frequency (e.g., increasing vs. decreasing distribution) or the nature of the outcomes presented (i.e., categorical vs. continuous). Results showed that despite being slightly affected by the shape of the distribution, participants continue to prefer to associate possible with maximum outcomes and certain with minimum outcomes. The final experiment provided a boundary condition to the effect, showing that it applies to verbal but not numerical probabilities.

  11. The probability of Mark-I containment failure by melt-attack of the liner

    International Nuclear Information System (INIS)

    Theofanous, T.G.; Yan, H.; Podowski, M.Z.

    1993-11-01

    This report is a followup to the work presented in NUREG/CR-5423 addressing early failure of a BWR Mark I containment by melt attack of the liner, and it constitutes a part of the implementation of the Risk-Oriented Accident Analysis Methodology (ROAAM) employed therein. In particular, it expands the quantification to include four independent evaluations carried out at Rensselaer Polytechnic Institute, Argonne National Laboratories, Sandia National Laboratories and ANATECH, Inc. on the various portions of the phenomenology involved. These independent evaluations are included here as Parts II through V. The results, and their integration in Part I, demonstrate the substantial synergism and convergence necessary to recognize that the issue has been resolved

  12. Solving portfolio selection problems with minimum transaction lots based on conditional-value-at-risk

    Science.gov (United States)

    Setiawan, E. P.; Rosadi, D.

    2017-01-01

    Portfolio selection problems conventionally mean ‘minimizing the risk, given a certain level of returns’ from some financial assets. This problem is frequently solved with quadratic or linear programming methods, depending on the risk measure that is used in the objective function. However, the solutions obtained by these methods are real numbers, which may cause problems in real applications because each asset usually has its minimum transaction lots. Classical approaches considering minimum transaction lots were developed based on linear Mean Absolute Deviation (MAD), variance (like Markowitz’s model), and semi-variance as the risk measure. In this paper we investigated portfolio selection methods with minimum transaction lots with conditional value at risk (CVaR) as the risk measure. The mean-CVaR methodology only involves the part of the tail of the distribution that contributes to high losses. This approach looks better when we work with non-symmetric return probability distributions. Solutions of this method can be found with Genetic Algorithm (GA) methods. We provide real examples using stocks from the Indonesian stock market.
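
    The risk measure used above, conditional value-at-risk, is the average loss in the worst (1 - alpha) fraction of scenarios. The sketch below computes it for a whole-lot portfolio from simulated scenario returns; the scenario distribution, lot counts, lot size and prices are illustrative assumptions, and the genetic-algorithm search over lots described in the record is not shown.

        import numpy as np

        def portfolio_cvar(scenario_returns, weights, alpha=0.95):
            """CVaR_alpha of the loss distribution (-portfolio return) over equally likely scenarios."""
            losses = -(scenario_returns @ weights)
            var = np.quantile(losses, alpha)
            return losses[losses >= var].mean()

        rng = np.random.default_rng(7)
        scenarios = rng.normal(loc=[0.01, 0.006, 0.012], scale=[0.05, 0.03, 0.08], size=(5000, 3))

        # Whole-lot holdings: lots * lot_size * price, converted to portfolio weights.
        lots = np.array([3, 5, 1])
        lot_size, prices = 100, np.array([50.0, 20.0, 75.0])
        value = lots * lot_size * prices
        weights = value / value.sum()

        print("weights:", np.round(weights, 3))
        print(f"95% CVaR of one-period loss: {portfolio_cvar(scenarios, weights):.4f}")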

  13. Reliability and Failure in NASA Missions: Blunders, Normal Accidents, High Reliability, Bad Luck

    Science.gov (United States)

    Jones, Harry W.

    2015-01-01

    NASA emphasizes crew safety and system reliability but several unfortunate failures have occurred. The Apollo 1 fire was mistakenly unanticipated. After that tragedy, the Apollo program gave much more attention to safety. The Challenger accident revealed that NASA had neglected safety and that management underestimated the high risk of shuttle. Probabilistic Risk Assessment was adopted to provide more accurate failure probabilities for shuttle and other missions. NASA's "faster, better, cheaper" initiative and government procurement reform led to deliberately dismantling traditional reliability engineering. The Columbia tragedy and Mars mission failures followed. Failures can be attributed to blunders, normal accidents, or bad luck. Achieving high reliability is difficult but possible.

  14. Influence of specimen design on the deformation and failure of zircaloy cladding

    International Nuclear Information System (INIS)

    Bates, D.W.; Koss, D.A.; Motta, A.T.; Majumdar, S.

    2000-01-01

    Experimental as well as computational analyses have been used to examine the deformation and failure behavior of ring-stretch specimens of Zircaloy-4 cladding tubes. The results show that, at least for plastically anisotropic unirradiated cladding, specimens with a small gauge length l to width w ratio (l/w ∼ 1) exhibit pronounced non-uniform deformation along their length. As a result, specimen necking occurs upon yielding when the specimen is fully plastic. Finite element analysis indicates a minimum l/w of 4 before a significant fraction of the gauge length deforms homogeneously. A brief examination of the contrasting deformation and failure behavior between uniaxial and plane-strain ring tension tests further supports the use of the latter geometry for determining cladding failure ductility data that are relevant to certain reactivity-initiated accident conditions

  15. Treatment failure of nelfinavir-containing triple therapy can largely be explained by low nelfinavir plasma concentrations

    NARCIS (Netherlands)

    Burger, David M.; Hugen, Patricia W. H.; Aarnoutse, Rob E.; Hoetelmans, Richard M. W.; Jambroes, Marielle; Nieuwkerk, Pythia T.; Schreij, Gerrit; Schneider, Margriet M. E.; van der Ende, Marchina E.; Lange, Joep M. A.

    2003-01-01

    The relationship between plasma concentrations of nelfinavir and virologic treatment failure was investigated to determine the minimum effective concentration of nelfinavir. Plasma samples were prospectively collected from treatment-naive patients who began taking nelfinavir, 1,250 mg BID + two

  16. Scaling Qualitative Probability

    OpenAIRE

    Burgin, Mark

    2017-01-01

    There are different approaches to qualitative probability, which include subjective probability. We developed a representation of qualitative probability based on relational systems, which allows modeling uncertainty by probability structures and is more coherent than existing approaches. This setting makes it possible to prove that any comparative probability is induced by some probability structure (Theorem 2.1), that classical probability is a probability structure (Theorem 2.2) and that i...

  17. Respiratory failure following anti-lung serum: study on mechanisms associated with surfactant system damage

    International Nuclear Information System (INIS)

    Lachmann, B.; Hallman, M.; Bergmann, K.C.

    1987-01-01

    Within 2 minutes, intravenous anti-lung serum (ALS) given to guinea pigs induces a respiratory failure that is fatal within 30 min. The relationship between surfactant, alveolar-capillary permeability and respiratory failure was studied. Within two minutes ALS induced a leak in the alveolar-capillary barrier. Within 30 minutes 28.3% (controls, given normal rabbit serum: 0.7%) of iv 131I-albumin, and 0.5% (controls 0.02%) of iv surfactant phospholipid tracer were recovered in bronchoalveolar lavage. Furthermore, 57% (controls 32%) of the endotracheally administered surfactant phospholipid became associated with lung tissue and only less than 0.5% left the lung. The distribution of proteins and phospholipids between the in vivo small volume bronchoalveolar lavages and the ex vivo bronchoalveolar lavages was dissimilar: 84% (controls 20%) of intravenously injected, lavageable 131I-albumin and 23% (controls 18%) of total lavageable phospholipid were recovered in the in vivo small volume bronchoalveolar lavages. ALS also decreased lavageable surfactant phospholipid by 41%. After ALS the minimum surface tension increased. The supernatant of the lavage increased the minimum surface tension of normal surfactant. In addition, the sediment fraction of the lavage had slow surface adsorption, and a marked reduction in 35,000 and 10,000 MW peptides. Exogenous surfactant ameliorated the ALS-induced respiratory failure. We propose that inhibition, altered intrapulmonary distribution, and dissociation of protein and phospholipid components of surfactant are important in the early pathogenesis of acute respiratory failure

  18. Dynamic decision making for dam-break emergency management - Part 2: Application to Tangjiashan landslide dam failure

    Science.gov (United States)

    Peng, M.; Zhang, L. M.

    2013-02-01

    Tangjiashan landslide dam, which was triggered by the Ms = 8.0 Wenchuan earthquake in 2008 in China, threatened 1.2 million people downstream of the dam. All people in Beichuan Town 3.5 km downstream of the dam and 197 thousand people in Mianyang City 85 km downstream of the dam were evacuated 10 days before the breaching of the dam. Making such an important decision under uncertainty was difficult. This paper applied a dynamic decision-making framework for dam-break emergency management (DYDEM) to support rational decision-making in the emergency management of the Tangjiashan landslide dam. Three stages are identified with different levels of hydrological, geological and social-economic information along the timeline of the landslide dam failure event. The probability of dam failure is taken as a time series. The dam breaching parameters are predicted with a set of empirical models in stage 1, when no soil property information is known, and a physical model in stages 2 and 3, when knowledge of soil properties has been obtained. The flood routing downstream of the dam in these three stages is analyzed to evaluate the population at risk (PAR). The flood consequences, including evacuation costs, flood damage and monetized loss of life, are evaluated as functions of warning time using a human risk analysis model based on Bayesian networks. Finally, dynamic decision analysis is conducted to find the optimal time to evacuate the population at risk with minimum total loss in each of these three stages.

  19. Employment effects of minimum wages

    OpenAIRE

    Neumark, David

    2014-01-01

    The potential benefits of higher minimum wages come from the higher wages for affected workers, some of whom are in low-income families. The potential downside is that a higher minimum wage may discourage employers from using the low-wage, low-skill workers that minimum wages are intended to help. Research findings are not unanimous, but evidence from many countries suggests that minimum wages reduce the jobs available to low-skill workers.

  20. Piping failures in United States nuclear power plants 1961-1995

    International Nuclear Information System (INIS)

    Bush, S.H.; Do, M.J.; Slavich, A.L.; Chockie, A.D.

    1996-01-01

    Over 1500 reported piping failures were identified and summarized based on an extensive review of tens of thousands of event reports that have been submitted to the US regulatory agencies over the last 35 years. The database contains only piping failures; failures in vessels, pumps, valves and steam generators, or any cracks that were not through-wall, are not included. It was observed that there has been a marked decrease in the number of failures after 1983 for almost all sizes of pipes. This is likely due to the changes in the reporting requirements at that time and the corrective actions taken by utilities to minimize fatigue failures of small lines and IGSCC in BWRs. One failure mechanism that continues to occur is erosion-corrosion, which accounts for most of the ruptures reported and is probably responsible for the absence of downward trends in ruptures. Fatigue-vibration is also a significant contributor to piping failures; however, most such events occur in lines approximately one inch or less in diameter. Together, erosion-corrosion and fatigue-vibration account for over 43 per cent of the failures. The overwhelming majority of failures have been leaks; over half of the failures occurred in pipes with a diameter of one inch or less. Included in the report is a listing of the number of welds in various systems in LWRs.

  1. Common Cause Failure Analysis for the Digital Plant Protection System

    International Nuclear Information System (INIS)

    Kang, Hyun Gook; Jang, Seung Cheol

    2005-01-01

    Safety-critical systems such as nuclear power plants adopt multiple-redundancy designs in order to reduce the risk from single component failures. The digitalized safety-signal generation system is also designed based on the multiple-redundancy strategy and consists of more redundant components. The level of redundancy in digital systems is usually higher than that of conventional mechanical systems. This higher redundancy clearly reduces the risk from single component failures, but it raises the importance of common cause failure (CCF) analysis. This research aims to develop a practical and realistic method for modeling CCF in digital safety-critical systems. We propose a simple and practical framework for assessing the CCF probability of digital equipment. A higher level of redundancy makes CCF analysis difficult because it results in an impractically large number of CCF events in the fault tree model when conventional CCF modeling methods are used. We apply the simplified alpha-factor (SAF) method to digital system CCF analysis. A preceding study has shown that the SAF method is simple yet realistic when system success criteria are considered carefully. The first step in using the SAF method is the analysis of the target system to determine the function failure cases; that is, the success criteria of the system can be derived from the target system's function and configuration. Based on this analysis, we can calculate the probability of the single CCF event that represents the CCF events resulting in system failure. In addition to the application of the SAF method, in order to accommodate other characteristics of digital technology, we develop a simple concept and several equations for practical use

  2. Investigation into Cause of High Temperature Failure of Boiler Superheater Tube

    Science.gov (United States)

    Ghosh, D.; Ray, S.; Roy, H.; Shukla, A. K.

    2015-04-01

    Failures of boiler tubes occur due to various reasons such as creep, fatigue, corrosion and erosion. This paper highlights a case study of a typical premature failure of a final superheater tube of a 210 MW thermal power plant boiler. Visual examination, dimensional measurement, chemical analysis, oxide scale thickness measurement and microstructural examination were conducted as part of the investigation. Apart from these investigations, sulfur printing, energy dispersive spectroscopy (EDS) and X-ray diffraction analysis (XRD) were also conducted to ascertain the probable cause of failure of the final superheater tube. It is concluded that the premature failure of the superheater tube can be attributed to the combination of localized high tube metal temperature and loss of metal from the outer surface due to high temperature corrosion. Corrective actions have also been suggested to avoid this type of failure in the near future.

  3. Competing fatigue failure behaviors of Ni-based superalloy FGH96 at elevated temperature

    Energy Technology Data Exchange (ETDEWEB)

    Miao, Guolei [School of Energy and Power Engineering, Beihang University, Beijing 100191 (China); Yang, Xiaoguang [School of Energy and Power Engineering, Beihang University, Beijing 100191 (China); Collaborative Innovation Center of Advanced Aero-engine(CICAAE), Beihang University, Beijing 100191 (China); Shi, Duoqi, E-mail: shdq@buaa.edu.cn [School of Energy and Power Engineering, Beihang University, Beijing 100191 (China); Collaborative Innovation Center of Advanced Aero-engine(CICAAE), Beihang University, Beijing 100191 (China)

    2016-06-21

    Fatigue experiments were performed on a polycrystalline P/M processed nickel-based superalloy, FGH96, at 600 °C to investigate the competing fatigue failure behaviors of the alloy. The experiments were performed at four levels of stress (from high cycle fatigue to low cycle fatigue) at a stress ratio of 0.05. There was large variability in fatigue life at both high and low stresses. Scanning electron microscopy (SEM) was used to analyze the failure surfaces. Three types of competing failure modes were observed (surface, sub-surface and internal initiated failures). Crack initiation sites gradually shifted from the surface to the interior as the stress level decreased. The roles of microstructures in the competing failure mechanisms were analyzed. Six kinds of fatigue crack initiation modes were identified: (1) surface inclusion initiated; (2) surface facet initiated; (3) sub-surface inclusion initiated; (4) sub-surface facet initiated; (5) internal inclusion initiated; (6) internal facet initiated. Inclusions at the surface were the life-limiting microstructural features at higher stress levels. The probability of inclusion-initiated failure gradually decreases with decreasing stress level, while the probability of facet-initiated failure increases. The existence of the inclusions resulted in large life variability at higher stress levels, while the heterogeneity of the material caused by random combinations of grains was the main cause of fatigue variability at lower stress levels.

  4. Competing fatigue failure behaviors of Ni-based superalloy FGH96 at elevated temperature

    International Nuclear Information System (INIS)

    Miao, Guolei; Yang, Xiaoguang; Shi, Duoqi

    2016-01-01

    Fatigue experiments were performed on a polycrystalline P/M processed nickel-based superalloy, FGH96, at 600 °C to investigate the competing fatigue failure behaviors of the alloy. The experiments were performed at four levels of stress (from high cycle fatigue to low cycle fatigue) at a stress ratio of 0.05. There was large variability in fatigue life at both high and low stresses. Scanning electron microscopy (SEM) was used to analyze the failure surfaces. Three types of competing failure modes were observed (surface, sub-surface and internal initiated failures). Crack initiation sites gradually shifted from the surface to the interior as the stress level decreased. The roles of microstructures in the competing failure mechanisms were analyzed. Six kinds of fatigue crack initiation modes were identified: (1) surface inclusion initiated; (2) surface facet initiated; (3) sub-surface inclusion initiated; (4) sub-surface facet initiated; (5) internal inclusion initiated; (6) internal facet initiated. Inclusions at the surface were the life-limiting microstructural features at higher stress levels. The probability of inclusion-initiated failure gradually decreases with decreasing stress level, while the probability of facet-initiated failure increases. The existence of the inclusions resulted in large life variability at higher stress levels, while the heterogeneity of the material caused by random combinations of grains was the main cause of fatigue variability at lower stress levels.

  5. DYNAMIC PARAMETER ESTIMATION BASED ON MINIMUM CROSS-ENTROPY METHOD FOR COMBINING INFORMATION SOURCES

    Czech Academy of Sciences Publication Activity Database

    Sečkárová, Vladimíra

    2015-01-01

    Roč. 24, č. 5 (2015), s. 181-188 ISSN 0204-9805. [XVI-th International Summer Conference on Probability and Statistics (ISCPS-2014). Pomorie, 21.6.-29.6.2014] R&D Projects: GA ČR GA13-13502S Grant - others: GA UK(CZ) SVV 260225/2015 Institutional support: RVO:67985556 Keywords: minimum cross-entropy principle * Kullback-Leibler divergence * dynamic diffusion estimation Subject RIV: BB - Applied Statistics, Operational Research http://library.utia.cas.cz/separaty/2015/AS/seckarova-0445817.pdf

  6. Spectrum-efficient multi-channel design for coexisting IEEE 802.15.4 networks: A stochastic geometry approach

    KAUST Repository

    Elsawy, Hesham; Hossain, Ekram; Camorlinga, Sergio

    2014-01-01

    of the intensity of the admitted networks decreases with the number of channels. By using graph theory, we obtain the minimum required number of channels to accommodate a certain intensity of coexisting networks under a self admission failure probability constraint

  7. Conversion and matched filter approximations for serial minimum-shift keyed modulation

    Science.gov (United States)

    Ziemer, R. E.; Ryan, C. R.; Stilwell, J. H.

    1982-01-01

    Serial minimum-shift keyed (MSK) modulation, a technique for generating and detecting MSK using series filtering, is ideally suited for high data rate applications provided the required conversion and matched filters can be closely approximated. Low-pass implementations of these filters as parallel inphase- and quadrature-mixer structures are characterized in this paper in terms of signal-to-noise ratio (SNR) degradation from ideal and envelope deviation. Several hardware implementation techniques utilizing microwave devices or lumped elements are presented. Optimization of parameter values results in realizations whose SNR degradation is less than 0.5 dB at error probabilities of .000001.

  8. Searching for top, Higgs, and supersymmetry: the minimum invariant mass technique

    International Nuclear Information System (INIS)

    Berger, E.L.

    1984-01-01

    Supersymmetric particles, Higgs mesons, the top quark and other heavy objects are expected to decay frequently into three or more body final states in which at least one particle, such as a neutrino or photino, is non-interacting. A method is described for obtaining an excellent estimate of both the mass and the longitudinal momentum of the parent state. The probable longitudinal momenta of the non-interacting particle and of the parent, and the minimum invariant mass of the parent, are derived from a minimization procedure. The distributions in these variables are shown to peak sharply at their true values

  9. Minimum Wages and Poverty

    OpenAIRE

    Fields, Gary S.; Kanbur, Ravi

    2005-01-01

    Textbook analysis tells us that in a competitive labor market, the introduction of a minimum wage above the competitive equilibrium wage will cause unemployment. This paper makes two contributions to the basic theory of the minimum wage. First, we analyze the effects of a higher minimum wage in terms of poverty rather than in terms of unemployment. Second, we extend the standard textbook model to allow for income sharing between the employed and the unemployed. We find that there are situation...

  10. Minimum area thresholds for rattlesnakes and colubrid snakes on islands in the Gulf of California, Mexico.

    Science.gov (United States)

    Meik, Jesse M; Makowsky, Robert

    2018-01-01

    We expand a framework for estimating minimum area thresholds to elaborate biogeographic patterns between two groups of snakes (rattlesnakes and colubrid snakes) on islands in the western Gulf of California, Mexico. The minimum area thresholds for supporting single species versus coexistence of two or more species relate to hypotheses of the relative importance of energetic efficiency and competitive interactions within groups, respectively. We used ordinal logistic regression probability functions to estimate minimum area thresholds after evaluating the influence of island area, isolation, and age on rattlesnake and colubrid occupancy patterns across 83 islands. Minimum area thresholds for islands supporting one species were nearly identical for rattlesnakes and colubrids (~1.7 km²), suggesting that selective tradeoffs for distinctive life history traits between rattlesnakes and colubrids did not result in any clear advantage of one life history strategy over the other on islands. However, the minimum area threshold for supporting two or more species of rattlesnakes (37.1 km²) was over five times greater than it was for supporting two or more species of colubrids (6.7 km²). The great differences between rattlesnakes and colubrids in minimum area required to support more than one species imply that for islands in the Gulf of California relative extinction risks are higher for coexistence of multiple species of rattlesnakes and that competition within and between species of rattlesnakes is likely much more intense than it is within and between species of colubrids.
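
    A minimal sketch of the threshold idea follows. The study itself used ordinal logistic regression across 83 islands; here, purely for illustration and with hypothetical data, a binary logistic fit of occupancy against log island area stands in, and the "minimum area threshold" is read off as the area at which the fitted occupancy probability reaches 0.5.

```python
# Illustrative sketch with hypothetical data: a binary logistic fit of
# occupancy vs. log island area; the "minimum area threshold" is taken as the
# area at which the fitted occupancy probability reaches 0.5.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
log_area = rng.uniform(-2, 4, 83)                        # log10 island area (km^2), hypothetical
p_true = 1 / (1 + np.exp(-2.0 * (log_area - 0.23)))      # assumed occupancy curve
occupied = rng.binomial(1, p_true)                       # presence/absence of the snake group

model = LogisticRegression().fit(log_area.reshape(-1, 1), occupied)
b0, b1 = model.intercept_[0], model.coef_[0, 0]

# threshold: log-area where the predicted probability equals 0.5, i.e. b0 + b1*x = 0
min_log_area = -b0 / b1
print(f"estimated minimum area threshold: {10 ** min_log_area:.2f} km^2")
```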

  11. Predictors of Failure in Infant Mandibular Distraction Osteogenesis.

    Science.gov (United States)

    Hammoudeh, Jeffrey A; Fahradyan, Artur; Brady, Colin; Tsuha, Michaela; Azadgoli, Beina; Ward, Sally; Urata, Mark M

    2018-03-15

    Mandibular distraction osteogenesis (MDO) has been shown to be successful in treating upper airway obstruction caused by micrognathia in pediatric patients. The purpose of this study was to assess the success rate of MDO and possible predictors of failure. The records of all neonates and infants who underwent MDO from 2008 to 2015 were retrospectively reviewed. Procedural failure was defined as patient death or the need for tracheostomy postoperatively. Details of distraction, length of stay, and failures were captured and elucidated. Of the 82 patients, 47 (57.3%) were male; 46 (56.1%) had sporadic Pierre Robin sequence; 33 (40.3%) had syndromic Pierre Robin sequence; and 3 (3.7%) had micrognathia, not otherwise specified. The average distraction length was 27.5 mm (range, 15 to 30 mm; SD, 4.4 mm), the average age at operation was 63.3 days (range, 3 to 342 days; SD, 71.4 days), and the average length of post-MDO hospital stay was 43 days (range, 9 to 219 days; SD, 35 days) with an average follow-up period of 4.3 years (range, 1.1 to 9.6 years; SD, 2.6 years). There were 7 failures (8.5%) (5 tracheostomies and 2 deaths) resulting in a 91.5% success rate. Regression analysis showed that the predicted probability of the need for tracheostomy was 45% (P = .02) when the patient had a central nervous system (CNS) anomaly. The predicted probability of the need for tracheostomy and death combined was 99.6% when the patient had laryngomalacia and a CNS anomaly and was preoperatively intubated (P < .05). This review confirms that MDO is an effective method of treating the upper airway obstruction caused by micrognathia with a high success rate. In our sample the presence of CNS abnormalities, laryngomalacia, and preoperative intubation had a significant impact on the failure rate. Copyright © 2018 American Association of Oral and Maxillofacial Surgeons. Published by Elsevier Inc. All rights reserved.

  12. Modelling software failures of digital I and C in probabilistic safety analyses based on the TELEPERM registered XS operating experience

    International Nuclear Information System (INIS)

    Jockenhoevel-Barttfeld, Mariana; Taurines Andre; Baeckstroem, Ola; Holmberg, Jan-Erik; Porthin, Markus; Tyrvaeinen, Tero

    2015-01-01

    Digital instrumentation and control (I and C) systems appear as upgrades in existing nuclear power plants (NPPs) and in new plant designs. In order to assess the impact of digital system failures, quantifiable reliability models are needed, along with data for digital systems that are compatible with existing probabilistic safety assessments (PSA). The paper focuses on the modelling of software failures of digital I and C systems in probabilistic assessments. An analysis of software faults, failures and effects is presented to derive relevant failure modes of system and application software for the PSA. The estimations of software failure probabilities are based on an analysis of the operating experience of TELEPERM registered XS (TXS). For the assessment of application software failures, the analysis combines the TXS operating experience at the application-function level with conservative engineering judgments. Failure probabilities for failure to actuate on demand and for spurious actuation of a typical reactor protection application are estimated. Moreover, the paper gives guidelines for the modelling of software failures in the PSA. The strategy presented in this paper is generic and can be applied to different software platforms and their applications.

  13. Minimum weight design of composite laminates for multiple loads

    International Nuclear Information System (INIS)

    Krikanov, A.A.; Soni, S.R.

    1995-01-01

    A new design method for constructing optimum weight composite laminates for multiple loads is proposed in this paper. A netting analysis approach is used to develop an optimization procedure. Three ply orientations permit the development of an optimum laminate design without using stress-strain relations. It is proved that, in the minimum weight laminate, the stresses in each ply reach their allowable values under the given load. The optimum ply thickness is defined by the maximum value among the tensile and compressive loads. Two examples are given to obtain optimum ply orientations, thicknesses and materials. For comparison purposes, stresses are calculated in the orthotropic material using classical lamination theory. Based upon these calculations, the matrix degrades at 30 to 50% of the ultimate load. There is no fiber failure, and therefore the laminates withstand all applied loads in both examples

  14. Reliability, failure probability, and strength of resin-based materials for CAD/CAM restorations

    Directory of Open Access Journals (Sweden)

    Kiatlin Lim

    Full Text Available ABSTRACT Objective: This study investigated the Weibull parameters and 5% fracture probability of direct and indirect composites, and CAD/CAM composites. Material and Methods: Disc-shaped (12 mm diameter x 1 mm thick) specimens were prepared for a direct composite [Z100 (ZO), 3M-ESPE], an indirect laboratory composite [Ceramage (CM), Shofu], and two CAD/CAM composites [Lava Ultimate (LU), 3M ESPE; Vita Enamic (VE), Vita Zahnfabrik] restorations (n=30 for each group). The specimens were polished and stored in distilled water for 24 hours at 37°C. Weibull parameters (m = Weibull modulus, σ0 = characteristic strength) and the flexural strength for 5% fracture probability (σ5%) were determined using a piston-on-three-balls device at 1 MPa/s in distilled water. Statistical analyses of biaxial flexural strength were performed either by one-way ANOVA with Tukey's post hoc test (α=0.05) or by Pearson's correlation test. Results: Ranking of m was: VE (19.5), LU (14.5), CM (11.7), and ZO (9.6). Ranking of σ0 (MPa) was: LU (218.1), ZO (210.4), CM (209.0), and VE (126.5). σ5% (MPa) was 177.9 for LU, 163.2 for CM, 154.7 for ZO, and 108.7 for VE. There was no significant difference in m for ZO, CM, and LU. VE presented the highest m value, significantly higher than ZO. For σ0 and σ5%, ZO, CM, and LU were similar but higher than VE. Conclusion: The strength characteristics of CAD/CAM composites vary according to their composition and microstructure. VE presented the lowest strength and highest Weibull modulus among the materials.
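
    The quantities reported above follow from the two-parameter Weibull strength model Pf = 1 − exp[−(σ/σ0)^m]. The sketch below, using hypothetical strength data rather than the study's measurements, fits m and σ0 and computes the stress at 5% fracture probability.

```python
# Sketch with hypothetical strength data (not the study's measurements):
# fit the two-parameter Weibull model Pf = 1 - exp[-(sigma/sigma0)^m] and
# compute the stress at 5% fracture probability,
# sigma_5% = sigma0 * (-ln(1 - 0.05))**(1/m).
import numpy as np
from scipy import stats

strengths = np.array([185., 196., 203., 210., 215., 221., 228., 235., 241., 252.])  # MPa, hypothetical

# weibull_min with the location fixed at 0: c is the Weibull modulus m, scale is sigma0
m, loc, sigma0 = stats.weibull_min.fit(strengths, floc=0)

sigma_5 = sigma0 * (-np.log(1 - 0.05)) ** (1 / m)
print(f"Weibull modulus m = {m:.1f}, characteristic strength sigma0 = {sigma0:.1f} MPa")
print(f"stress at 5% fracture probability = {sigma_5:.1f} MPa")
```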

  15. 75 FR 6151 - Minimum Capital

    Science.gov (United States)

    2010-02-08

    ... capital and reserve requirements to be issued by order or regulation with respect to a product or activity... minimum capital requirements. Section 1362(a) establishes a minimum capital level for the Enterprises... entities required under this section. The Bank Act's current minimum capital requirements apply to...

  16. A Pareto-Improving Minimum Wage

    OpenAIRE

    Eliav Danziger; Leif Danziger

    2014-01-01

    This paper shows that a graduated minimum wage, in contrast to a constant minimum wage, can provide a strict Pareto improvement over what can be achieved with an optimal income tax. The reason is that a graduated minimum wage requires high-productivity workers to work more to earn the same income as low-productivity workers, which makes it more difficult for the former to mimic the latter. In effect, a graduated minimum wage allows the low-productivity workers to benefit from second-degree pr...

  17. The Distribution of Minimum of Ratios of Two Random Variables and Its Application in Analysis of Multi-hop Systems

    Directory of Open Access Journals (Sweden)

    A. Stankovic

    2012-12-01

    Full Text Available The distributions of random variables are of interest in many areas of science. In this paper, recognizing the importance of multi-hop transmission in contemporary wireless communications systems operating over fading channels in the presence of cochannel interference, the probability density functions (PDFs) of the minimum of an arbitrary number of ratios of Rayleigh, Rician, Nakagami-m, Weibull and α-µ random variables are derived. These expressions can be used to study the outage probability as an important multi-hop system performance measure. Various numerical results complement the proposed mathematical analysis.
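
    As a rough illustration of how such results can be checked, the Monte Carlo sketch below (with assumed Rayleigh scale parameters and an assumed outage threshold) simulates the minimum of several per-hop signal-to-interference ratios and estimates the corresponding outage probability.

```python
# Monte Carlo sketch with assumed parameters: distribution of the minimum of
# several ratios of independent Rayleigh variables (desired signal over
# interferer per hop) and the resulting outage probability of a multi-hop link.
import numpy as np

rng = np.random.default_rng(1)
n_hops, n_samples = 3, 1_000_000
sigma_signal, sigma_interf = 1.0, 0.5           # Rayleigh scale parameters, assumed

signal = rng.rayleigh(sigma_signal, size=(n_samples, n_hops))
interf = rng.rayleigh(sigma_interf, size=(n_samples, n_hops))
min_ratio = (signal / interf).min(axis=1)       # the weakest hop dominates the link

threshold = 1.0                                 # outage threshold, assumed
print(f"estimated outage probability: {np.mean(min_ratio < threshold):.4f}")
```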

  18. Toward a generalized probability theory: conditional probabilities

    International Nuclear Information System (INIS)

    Cassinelli, G.

    1979-01-01

    The main mathematical object of interest in the quantum logic approach to the foundations of quantum mechanics is the orthomodular lattice and a set of probability measures, or states, defined by the lattice. This mathematical structure is studied per se, independently from the intuitive or physical motivation of its definition, as a generalized probability theory. It is thought that the building-up of such a probability theory could eventually throw light on the mathematical structure of Hilbert-space quantum mechanics as a particular concrete model of the generalized theory. (Auth.)

  19. Total time on test processes and applications to failure data analysis

    International Nuclear Information System (INIS)

    Barlow, R.E.; Campo, R.

    1975-01-01

    This paper describes a new method for analyzing data. The method applies to non-negative observations such as times to failure of devices and survival times of biological organisms and involves a plot of the data. These plots are useful in choosing a probabilistic model to represent the failure behavior of the data. They also furnish information about the failure rate function and aid in its estimation. An important feature of these data plots is that incomplete data can be analyzed. The underlying random variables are, however, assumed to be independent and identically distributed. The plots have a theoretical basis, and converge to a transform of the underlying probability distribution as the sample size increases
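
    For a complete, independent and identically distributed sample, the scaled total time on test (TTT) plot can be computed directly from the ordered failure times. The sketch below uses hypothetical failure times; a curve bending above the diagonal suggests an increasing failure rate, one below it a decreasing failure rate.

```python
# Sketch: scaled total time on test (TTT) plot from a complete sample of
# failure times (hypothetical data).
import numpy as np
import matplotlib.pyplot as plt

times = np.sort(np.array([12., 25., 31., 47., 58., 73., 90., 112., 140., 180.]))  # hours, hypothetical
n = len(times)

gaps = np.diff(np.concatenate(([0.0], times)))       # t_(j) - t_(j-1)
total_time = np.cumsum((n - np.arange(n)) * gaps)    # T_i = sum_j (n - j + 1) * gap_j
u = np.arange(1, n + 1) / n                          # i / n
phi = total_time / total_time[-1]                    # scaled TTT statistic

plt.plot(u, phi, marker="o", label="scaled TTT")
plt.plot([0, 1], [0, 1], "--", label="exponential reference")
plt.xlabel("i / n"); plt.ylabel("T_i / T_n"); plt.legend(); plt.show()
```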

  20. Minimum critical mass systems

    International Nuclear Information System (INIS)

    Dam, H. van; Leege, P.F.A. de

    1987-01-01

    An analysis is presented of thermal systems with minimum critical mass, based on the use of materials with optimum neutron moderating and reflecting properties. The optimum fissile material distributions in the systems are obtained by calculations with standard computer codes, extended with a routine for flat fuel importance search. It is shown that in the minimum critical mass configuration a considerable part of the fuel is positioned in the reflector region. For ²³⁹Pu a minimum critical mass of 87 g is found, which is the lowest value reported hitherto. (author)

  1. Failure modes and effects analysis (FMEA) for Gamma Knife radiosurgery.

    Science.gov (United States)

    Xu, Andy Yuanguang; Bhatnagar, Jagdish; Bednarz, Greg; Flickinger, John; Arai, Yoshio; Vacsulka, Jonet; Feng, Wenzheng; Monaco, Edward; Niranjan, Ajay; Lunsford, L Dade; Huq, M Saiful

    2017-11-01

    Gamma Knife radiosurgery is a highly precise and accurate treatment technique for treating brain diseases with low risk of serious error that nevertheless could potentially be reduced. We applied the AAPM Task Group 100 recommended failure modes and effects analysis (FMEA) tool to develop a risk-based quality management program for Gamma Knife radiosurgery. A team consisting of medical physicists, radiation oncologists, neurosurgeons, radiation safety officers, nurses, operating room technologists, and schedulers at our institution and an external physicist expert on Gamma Knife was formed for the FMEA study. A process tree and a failure mode table were created for the Gamma Knife radiosurgery procedures using the Leksell Gamma Knife Perfexion and 4C units. Three scores for the probability of occurrence (O), the severity (S), and the probability of no detection for failure mode (D) were assigned to each failure mode by 8 professionals on a scale from 1 to 10. An overall risk priority number (RPN) for each failure mode was then calculated from the averaged O, S, and D scores. The coefficient of variation for each O, S, or D score was also calculated. The failure modes identified were prioritized in terms of both the RPN scores and the severity scores. The established process tree for Gamma Knife radiosurgery consists of 10 subprocesses and 53 steps, including a subprocess for frame placement and 11 steps that are directly related to the frame-based nature of the Gamma Knife radiosurgery. Out of the 86 failure modes identified, 40 Gamma Knife specific failure modes were caused by the potential for inappropriate use of the radiosurgery head frame, the imaging fiducial boxes, the Gamma Knife helmets and plugs, the skull definition tools as well as other features of the GammaPlan treatment planning system. The other 46 failure modes are associated with the registration, imaging, image transfer, contouring processes that are common for all external beam radiation therapy
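
    The RPN arithmetic described above is simple to reproduce. The sketch below, with hypothetical failure modes and scores, averages the O, S and D ratings of several reviewers and ranks the failure modes by RPN = O × S × D.

```python
# Sketch with hypothetical failure modes and scores: average the O, S and D
# ratings given by several reviewers and rank the failure modes by
# RPN = O * S * D (scores on a 1-10 scale).
import numpy as np

# rows: failure modes, columns: reviewers (assumed values)
occurrence    = np.array([[2, 3, 2, 3], [5, 4, 6, 5], [1, 2, 1, 2]])
severity      = np.array([[8, 7, 9, 8], [4, 5, 4, 4], [9, 9, 8, 9]])
detectability = np.array([[3, 4, 3, 3], [6, 5, 6, 7], [2, 2, 3, 2]])
modes = ["frame placement error", "image transfer failure", "wrong plan exported"]  # hypothetical

rpn = occurrence.mean(axis=1) * severity.mean(axis=1) * detectability.mean(axis=1)
for mode, score in sorted(zip(modes, rpn), key=lambda x: -x[1]):
    print(f"{mode:24s} RPN = {score:6.1f}")
```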

  2. Outcomes of Patients with Intestinal Failure after the Development and Implementation of a Multidisciplinary Team

    Directory of Open Access Journals (Sweden)

    Sabrina Furtado

    2016-01-01

    Full Text Available Aim. A multidisciplinary team was created in our institution to manage patients with intestinal failure (INFANT: INtestinal Failure Advanced Nutrition Team). We aimed to evaluate the impact of the implementation of the team on the outcomes of this patient population. Methods. A retrospective chart review of patients with intestinal failure over a 6-year period was performed. Outcomes of patients followed up by INFANT (2010–2012) were compared to a historical cohort (2007–2009). Results. Twenty-eight patients with intestinal failure were followed up by INFANT, while the historical cohort comprised 27 patients. There was no difference between the groups regarding remaining length of small and large bowel, presence of ICV, or number of infants who reached full enteral feeds. Patients followed up by INFANT took longer to attain full enteral feeds and had longer duration of PN, probably reflecting more complex cases. Overall mortality (14.8%/7.1%) was lower than at other centers, probably illustrating our population of "early" intestinal failure patients. Conclusions. Our data demonstrate that the creation and implementation of a multidisciplinary program in a tertiary center without an intestinal and liver transplant program can lead to improvement in many aspects of their care.

  3. BACFIRE, Minimal Cut Sets Common Cause Failure Fault Tree Analysis

    International Nuclear Information System (INIS)

    Fussell, J.B.

    1983-01-01

    1 - Description of problem or function: BACFIRE, designed to aid in common cause failure analysis, searches among the basic events of a minimal cut set of the system logic model for common potential causes of failure. The potential cause of failure is called a qualitative failure characteristic. The algorithm searches the qualitative failure characteristics (that are part of the program input) of the basic events contained in a set to find those characteristics common to all basic events. This search is repeated for all cut sets input to the program. Common cause failure analysis is thereby performed without inclusion of secondary failure in the system logic model. By using BACFIRE, a common cause failure analysis can be added to an existing system safety and reliability analysis. 2 - Method of solution: BACFIRE searches the qualitative failure characteristics of the basic events contained in the fault tree minimal cut set to find those characteristics common to all basic events by either of two criteria. The first criterion can be met if all the basic events in a minimal cut set are associated by a condition which alone may increase the probability of multiple component malfunction. The second criterion is met if all the basic events in a minimal cut set are susceptible to the same secondary failure cause and are located in the same domain for that cause of secondary failure. 3 - Restrictions on the complexity of the problem - Maxima of: 1001 secondary failure maps, 101 basic events, 10 cut sets

  4. Minimum Time Search in Uncertain Dynamic Domains with Complex Sensorial Platforms

    Science.gov (United States)

    Lanillos, Pablo; Besada-Portas, Eva; Lopez-Orozco, Jose Antonio; de la Cruz, Jesus Manuel

    2014-01-01

    The minimum time search in uncertain domains is a searching task, which appears in real world problems such as natural disasters and sea rescue operations, where a target has to be found, as soon as possible, by a set of sensor-equipped searchers. The automation of this task, where the time to detect the target is critical, can be achieved by new probabilistic techniques that directly minimize the Expected Time (ET) to detect a dynamic target using the observation probability models and actual observations collected by the sensors on board the searchers. The selected technique, described in algorithmic form in this paper for completeness, has only been previously partially tested with an ideal binary detection model, in spite of being designed to deal with complex non-linear/non-differentiable sensorial models. This paper covers that gap, testing its performance and applicability over different searching tasks with searchers equipped with different complex sensors. The sensorial models under test vary from stepped detection probabilities to continuous/discontinuous differentiable/non-differentiable detection probabilities dependent on distance, orientation, and structured maps. The analysis of the simulated results of several static and dynamic scenarios performed in this paper validates the applicability of the technique with different types of sensor models. PMID:25093345
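
    The sketch below is not the paper's ET-minimizing planner but a much simpler one-step greedy stand-in, with an assumed distance-dependent detection model: the belief over the target's cell is updated after every missed detection and the searcher moves toward the cell with the highest posterior mass.

```python
# Simplified stand-in (not the paper's ET-minimizing planner): a one-step
# greedy Bayesian search on a grid with an assumed distance-dependent
# detection model.
import numpy as np

rng = np.random.default_rng(5)
size = 20
belief = np.full((size, size), 1.0 / size**2)        # uniform prior over the target location
target = tuple(rng.integers(0, size, 2))
pos = (0, 0)

def p_detect(cell, searcher):                        # assumed sensor model
    d = np.hypot(cell[0] - searcher[0], cell[1] - searcher[1])
    return 0.9 * np.exp(-d / 2.0)

for step in range(1, 500):
    if rng.random() < p_detect(target, pos):         # try to detect from the current cell
        print(f"target detected at step {step}")
        break
    # Bayes update after a miss: belief near the searcher is scaled down
    pd = np.fromfunction(
        lambda i, j: 0.9 * np.exp(-np.hypot(i - pos[0], j - pos[1]) / 2.0), (size, size))
    belief *= 1.0 - pd
    belief /= belief.sum()
    # greedy move: one step toward the cell with the highest posterior mass
    best = np.unravel_index(belief.argmax(), belief.shape)
    pos = (pos[0] + np.sign(best[0] - pos[0]), pos[1] + np.sign(best[1] - pos[1]))
else:
    print("target not found within the step budget")
```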

  5. Quantum probability measures and tomographic probability densities

    NARCIS (Netherlands)

    Amosov, GG; Man'ko

    2004-01-01

    Using a simple relation between the Dirac delta-function and the generalized theta-function, the relationship between the tomographic probability approach and the quantum probability measure approach to the description of quantum states is discussed. The quantum state tomogram expressed in terms of the

  6. Estimation of Extreme Responses and Failure Probability of Wind Turbines under Normal Operation by Controlled Monte Carlo Simulation

    DEFF Research Database (Denmark)

    Sichani, Mahdi Teimouri

    ... an alternative approach for estimation of the first excursion probability of any system is based on calculating the evolution of the Probability Density Function (PDF) of the process and integrating it on the specified domain. Clearly this provides the most accurate results among the three classes of methods. ... The solution of the Fokker-Planck-Kolmogorov (FPK) equation for systems governed by a stochastic differential equation driven by Gaussian white noise will give the sought time variation of the probability density function. However, the analytical solution of the FPK is available for only a few dynamic systems ... of the evolution of the PDF of a stochastic process; hence an alternative to the FPK. The considerable advantage of the introduced method over the FPK is that its solution does not require high computational cost, which extends its range of applicability to high-order structural dynamic problems.
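
    For orientation, the plain Monte Carlo estimator that such methods are usually compared against can be sketched as follows, for an assumed linear oscillator driven by Gaussian white noise and integrated with the Euler-Maruyama scheme; the controlled-simulation and PDF-evolution approaches referred to above are not reproduced here.

```python
# Plain Monte Carlo reference sketch (assumed system): first-excursion
# probability of a linear oscillator x'' + 2*zeta*wn*x' + wn^2*x = w(t)
# driven by Gaussian white noise, integrated with Euler-Maruyama.
import numpy as np

rng = np.random.default_rng(2)
wn, zeta, S0 = 2 * np.pi, 0.05, 1.0          # natural frequency, damping, noise intensity (assumed)
dt, T, n_sim = 1e-3, 5.0, 5_000
barrier = 0.8                                # excursion threshold (assumed)

x = np.zeros(n_sim)
v = np.zeros(n_sim)
crossed = np.zeros(n_sim, dtype=bool)
for _ in range(int(T / dt)):
    dW = rng.normal(0.0, np.sqrt(dt), n_sim)
    a = -2 * zeta * wn * v - wn**2 * x
    x = x + v * dt
    v = v + a * dt + np.sqrt(2 * np.pi * S0) * dW
    crossed |= np.abs(x) > barrier

print(f"estimated first-excursion probability over {T} s: {crossed.mean():.4f}")
```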

  7. Statistical study on applied stress dependence of failure time in stress corrosion cracking of Zircaloy-4 alloy

    International Nuclear Information System (INIS)

    Hirao, Keiichi; Yamane, Toshimi; Minamino, Yoritoshi; Tanaka, Akiei.

    1988-01-01

    The effects of applied stress on failure time in stress corrosion cracking of Zircaloy-4 alloy were investigated by the Weibull distribution method. Test pieces sealed in evacuated silica tubes were annealed at 1,073 K for 7.2 × 10³ s and then quenched into ice-water. These specimens, under constant applied stresses of 40–90% of the yield stress, were immersed in a CH₃OH–1 wt% I₂ solution at room temperature. The probability distribution of failure times under an applied stress of 40% of the yield stress was described by a single Weibull distribution, which had one shape parameter. The probability distributions of failure times under applied stresses above 60% of the yield stress were described by composite and mixed Weibull distributions, which had two shape parameters for the regions of shorter and longer failure times. The values of these shape parameters were larger than the value of 1 which corresponds to wear-out failure. The observation of fracture surfaces and the stress dependence of the shape parameters indicated that the shape parameters both for the failure times under 40% of the yield stress and for the longer failure times above 60% of the yield stress corresponded to intergranular cracking, and that for the shorter failure times corresponded to transgranular cracking and dimple fracture. (author)

  8. 5 CFR 551.301 - Minimum wage.

    Science.gov (United States)

    2010-01-01

    ... 5 Administrative Personnel 1 2010-01-01 2010-01-01 false Minimum wage. 551.301 Section 551.301... FAIR LABOR STANDARDS ACT Minimum Wage Provisions Basic Provision § 551.301 Minimum wage. (a)(1) Except... employees wages at rates not less than the minimum wage specified in section 6(a)(1) of the Act for all...

  9. Investigation on Shielding Failure of a Novel 400 kV Composite tower

    DEFF Research Database (Denmark)

    Wang, Qian; Jahangiri, Tohid; Bak, Claus Leth

    2018-01-01

    In this paper, the lightning shielding performance of a newly-designed 400 kV double-circuit composite tower is investigated. Based on a revised EGM method, the traditional shielding failure regions located at both sides of a conventional tower are no longer a major issue for the fully composite tower, due to its unusual 'Y' configuration. Instead, a new unprotected region exists in the tower center. The maximum lightning current that can lead to shielding failure and the shielding failure rate (SFR) of the new tower are calculated. To verify the results from the revised EGM method, a scale model test is conducted. The spatial shielding failure probability around the tower is calculated based on the ratio of discharge paths recorded in the test. Moreover, based on the test results, the maximum shielding failure lightning currents are obtained. Analysis and results derived from the revised EGM method and scale model...

  10. Gamma prior distribution selection for Bayesian analysis of failure rate and reliability

    International Nuclear Information System (INIS)

    Waler, R.A.; Johnson, M.M.; Waterman, M.S.; Martz, H.F. Jr.

    1977-01-01

    It is assumed that the phenomenon under study is such that the time-to-failure may be modeled by an exponential distribution with failure-rate parameter, lambda. For Bayesian analyses of the assumed model, the family of gamma distributions provides conjugate prior models for lambda. Thus, an experimenter needs to select a particular gamma model to conduct a Bayesian reliability analysis. The purpose of this paper is to present a methodology which can be used to translate engineering information, experience, and judgment into a choice of a gamma prior distribution. The proposed methodology assumes that the practicing engineer can provide percentile data relating to either the failure rate or the reliability of the phenomenon being investigated. For example, the methodology will select the gamma prior distribution which conveys an engineer's belief that the failure rate, lambda, simultaneously satisfies the probability statements, P(lambda less than 1.0 x 10⁻³) = 0.50 and P(lambda less than 1.0 x 10⁻⁵) = 0.05. That is, two percentiles provided by an engineer are used to determine a gamma prior model which agrees with the specified percentiles. For those engineers who prefer to specify reliability percentiles rather than the failure-rate percentiles illustrated above, one can use the induced negative-log gamma prior distribution which satisfies the probability statements, P(R(t₀) less than 0.99) = 0.50 and P(R(t₀) less than 0.99999) = 0.95 for some operating time t₀. Also, the paper includes graphs for selected percentiles which assist an engineer in applying the methodology
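
    Translating the two elicited percentiles into a gamma prior amounts to solving two CDF equations for the shape and scale parameters. A minimal numerical sketch, using the percentile pair quoted above (the solver setup and starting values are assumptions, not the paper's graphical procedure):

```python
# Sketch: solve for the gamma prior whose CDF matches the two elicited
# percentiles P(lambda < 1e-5) = 0.05 and P(lambda < 1e-3) = 0.50.
import numpy as np
from scipy import stats, optimize

quantiles = np.array([1.0e-5, 1.0e-3])      # elicited failure-rate values
probs = np.array([0.05, 0.50])              # associated cumulative probabilities

def residuals(params):
    log_shape, log_scale = params           # work in log space to keep parameters positive
    return stats.gamma.cdf(quantiles, np.exp(log_shape), scale=np.exp(log_scale)) - probs

sol = optimize.root(residuals, x0=[np.log(0.5), np.log(1.0e-3)])
shape, scale = np.exp(sol.x)
print(f"gamma prior: shape = {shape:.3f}, scale = {scale:.3e}")
print("check of the fitted CDF:", stats.gamma.cdf(quantiles, shape, scale=scale))
```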

  11. A test for the minimum scale of grooving on the Amatrice and Norcia earthquakes

    Science.gov (United States)

    Okamoto, K.; Brodsky, E. E.; Billi, A.

    2017-12-01

    As stress builds up along a fault, elastic strain energy accumulates until it can no longer be accommodated by small-scale ductile deformation, and the fault then fails brittlely. This brittle failure is associated with the grooving process that causes slickensides along fault planes. Therefore the scale at which slickensides disappear could be geological evidence of earthquake nucleation. Past studies found the minimum scale of grooving; however, the studied fault surfaces were not exposed by recent earthquakes. These measurements could have been a product of chemical or mechanical weathering. On August 24th and October 30th of 2016, Mw 6.0 and 6.5 earthquakes shook central Italy. The earthquakes caused decimeter- to meter-scale fault scarps along the Mt. Vettoretto Fault. Here, we analyze samples of a scarp using white light interferometry in order to determine if the minimum scale of grooving is present. Results suggest that grooving begins around 100 μm for these samples, which is consistent with previous findings from faults without any direct evidence of earthquakes. The measurement is also consistent with typical values of the frictional weakening distance Dc, which is also associated with a transition between ductile and brittle behavior. The measurements show that the minimum scale of grooving is a useful measure of the behavior of faults.

  12. COVAL, Compound Probability Distribution for Function of Probability Distribution

    International Nuclear Information System (INIS)

    Astolfi, M.; Elbaz, J.

    1979-01-01

    1 - Nature of the physical problem solved: Computation of the probability distribution of a function of variables, given the probability distribution of the variables themselves. 'COVAL' has been applied to reliability analysis of a structure subject to random loads. 2 - Method of solution: Numerical transformation of probability distributions
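
    The same task (obtaining the distribution of a function of random variables from the distributions of the variables themselves) can be approximated today by straightforward Monte Carlo sampling; the sketch below uses an assumed load/resistance margin as the function, not COVAL's numerical transformation.

```python
# Monte Carlo sketch with assumed inputs: the distribution of a function of
# random variables (here a safety margin g = resistance - load) obtained by
# sampling the input distributions.
import numpy as np

rng = np.random.default_rng(3)
n = 1_000_000
resistance = rng.lognormal(mean=np.log(300.0), sigma=0.10, size=n)   # MPa, assumed
load = rng.normal(loc=220.0, scale=30.0, size=n)                      # MPa, assumed

margin = resistance - load
print(f"failure probability P(margin < 0): {np.mean(margin < 0):.2e}")
print("5th / 50th / 95th percentiles of the margin:", np.percentile(margin, [5, 50, 95]))
```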

  13. Probability estimation with machine learning methods for dichotomous and multicategory outcome: theory.

    Science.gov (United States)

    Kruppa, Jochen; Liu, Yufeng; Biau, Gérard; Kohler, Michael; König, Inke R; Malley, James D; Ziegler, Andreas

    2014-07-01

    Probability estimation for binary and multicategory outcomes using logistic and multinomial logistic regression has a long-standing tradition in biostatistics. However, biases may occur if the model is misspecified. In contrast, outcome probabilities for individuals can be estimated consistently with machine learning approaches, including k-nearest neighbors (k-NN), bagged nearest neighbors (b-NN), random forests (RF), and support vector machines (SVM). Because machine learning methods are rarely used by applied biostatisticians, the primary goal of this paper is to explain the concept of probability estimation with these methods and to summarize recent theoretical findings. Probability estimation in k-NN, b-NN, and RF can be embedded into the class of nonparametric regression learning machines; therefore, we start with the construction of nonparametric regression estimates and review results on consistency and rates of convergence. In SVMs, outcome probabilities for individuals are estimated consistently by repeatedly solving classification problems. For SVMs, we review the classification problem and then dichotomous probability estimation. Next, we extend the algorithms for estimating probabilities using k-NN, b-NN, and RF to multicategory outcomes and discuss approaches for the multicategory probability estimation problem using SVM. In simulation studies for dichotomous and multicategory dependent variables, we demonstrate the general validity of the machine learning methods and compare them with logistic regression. However, each method fails in at least one simulation scenario. We conclude with a discussion of the failures and give recommendations for selecting and tuning the methods. Applications to real data and example code are provided in a companion article (doi:10.1002/bimj.201300077). © 2014 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
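
    A minimal sketch of the kind of comparison described above, on synthetic data and with illustrative settings only: predicted class probabilities from logistic regression and a random forest are scored with the Brier score.

```python
# Sketch on synthetic data with illustrative settings: probability estimation
# for a dichotomous outcome with logistic regression and a random forest,
# compared via the Brier score.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import brier_score_loss
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=2000, n_features=10, n_informative=4, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

models = {
    "logistic regression": LogisticRegression(max_iter=1000),
    "random forest": RandomForestClassifier(n_estimators=500, random_state=0),
}
for name, model in models.items():
    p = model.fit(X_tr, y_tr).predict_proba(X_te)[:, 1]
    print(f"{name:20s} Brier score = {brier_score_loss(y_te, p):.4f}")
```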

  14. Acromegaly Presenting as Cardiac Failure - A Case Report

    Directory of Open Access Journals (Sweden)

    Shohael Mahmud Arafat

    2011-09-01

    Full Text Available Acromegaly is characterized by chronic hypersecretion of growth hormone (GH) and is associated with an increased mortality rate because of potential complications such as cardiovascular disease, respiratory disease, or malignancy, which are probably caused by the long-term exposure of tissues to excess GH for at least 10 years before diagnosis and treatment. Here we report a case of acromegaly in a patient who initially presented with features of left ventricular failure, for which she was admitted to the CCU and treated conservatively. Later, after clinical examination and investigations, she was diagnosed with mitral regurgitation due to cardiomyopathy caused by acromegaly. After successful transsphenoidal resection of the pituitary microadenoma, the GH level normalized and the heart failure improved. Key words: acromegaly; heart failure; pituitary microadenoma. DOI: http://dx.doi.org/10.3329/bsmmuj.v4i2.8644 BSMMU J 2011; 4(2):122-124

  15. SU-E-T-627: Failure Modes and Effect Analysis for Monthly Quality Assurance of Linear Accelerator

    International Nuclear Information System (INIS)

    Xie, J; Xiao, Y; Wang, J; Peng, J; Lu, S; Hu, W

    2014-01-01

    Purpose: To develop and implement a failure mode and effect analysis (FMEA) on routine monthly Quality Assurance (QA) tests (physical tests part) of a linear accelerator. Methods: A systematic failure mode and effect analysis was performed for the monthly QA procedures. A detailed process tree of monthly QA was created and potential failure modes were defined. Each failure mode may have many influencing factors. For each factor, a risk probability number (RPN) was calculated from the product of the probability of occurrence (O), the severity of effect (S), and the probability of non-detection of the failure (D). The RPN scores are in a range of 1 to 1000, with higher scores indicating stronger correlation to a given influencing factor of a failure mode. Five medical physicists in our institution were responsible for discussing and defining the O, S, and D values. Results: 15 possible failure modes were identified, the RPN scores of all influencing factors of these 15 failure modes ranged from 8 to 150, and an FMEA checklist for monthly QA was drawn up. The system showed consistent and accurate response to erroneous conditions. Conclusion: Influencing factors with RPN greater than 50 were considered highly-correlated factors of a given out-of-tolerance monthly QA test. FMEA is a fast and flexible tool to develop and implement a quality management (QM) framework for monthly QA, which improved the efficiency of our QA team. The FMEA work may incorporate more quantification and monitoring functions in the future

  16. Assessing the Adequacy of Probability Distributions for Estimating the Extreme Events of Air Temperature in Dabaa Region

    International Nuclear Information System (INIS)

    El-Shanshoury, Gh.I.

    2015-01-01

    Assessing the adequacy of probability distributions for estimating the extreme events of air temperature in the Dabaa region is one of the prerequisites for any design purpose at the Dabaa site, and it can be achieved by a probability approach. In the present study, three extreme value distributions are considered and compared to estimate the extreme events of monthly and annual maximum and minimum temperature. These distributions include the Gumbel/Frechet distributions for estimating the extreme maximum values and the Gumbel/Weibull distributions for estimating the extreme minimum values. The Lieblein technique and the Method of Moments are applied for estimating the distribution parameters. Subsequently, the required design values with a given return period of exceedance are obtained. Goodness-of-fit tests involving Kolmogorov-Smirnov and Anderson-Darling are used for checking the adequacy of fitting the method/distribution for the estimation of maximum/minimum temperature. Mean Absolute Relative Deviation, Root Mean Square Error and Relative Mean Square Deviation are calculated, as performance indicators, to judge which distribution and method of parameter estimation are the most appropriate for estimating the extreme temperatures. The present study indicates that the Weibull distribution combined with Method of Moments estimators gives the best fit and the most reliable and accurate predictions for estimating the extreme monthly and annual minimum temperature. The Gumbel distribution combined with Method of Moments estimators shows the best fit and accurate predictions for the estimation of the extreme monthly and annual maximum temperature, except for July, August, October and November. The study shows that the combination of the Frechet distribution with the Method of Moments is the most accurate for estimating the extreme maximum temperature in July, August and November, while the Gumbel distribution and Lieblein technique are the best for October
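
    For the Gumbel/Method-of-Moments combination referred to above, the fit reduces to two closed-form expressions, scale β = s·√6/π and location μ = x̄ − γβ (γ ≈ 0.5772), after which the T-year return level is μ − β·ln(−ln(1 − 1/T)). A sketch with hypothetical annual maximum temperatures, not the Dabaa data:

```python
# Sketch with hypothetical annual maxima: Gumbel distribution fitted by the
# method of moments and the resulting T-year return level.
import numpy as np

annual_max = np.array([41.2, 42.5, 40.8, 43.1, 44.0, 41.9, 42.7, 43.6, 40.5, 42.2])  # deg C, hypothetical

euler_gamma = 0.5772156649
beta = annual_max.std(ddof=1) * np.sqrt(6) / np.pi   # scale parameter
mu = annual_max.mean() - euler_gamma * beta          # location parameter

T = 50.0                                             # return period in years
x_T = mu - beta * np.log(-np.log(1 - 1 / T))
print(f"Gumbel fit: mu = {mu:.2f}, beta = {beta:.2f}")
print(f"{T:.0f}-year return level: {x_T:.1f} deg C")
```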

  17. On Probability Leakage

    OpenAIRE

    Briggs, William M.

    2012-01-01

    The probability leakage of model M with respect to evidence E is defined. Probability leakage is a kind of model error. It occurs when M implies that events $y$, which are impossible given E, have positive probability. Leakage does not imply model falsification. Models with probability leakage cannot be calibrated empirically. Regression models, which are ubiquitous in statistical practice, often evince probability leakage.

  18. Scalable Failure Masking for Stencil Computations using Ghost Region Expansion and Cell to Rank Remapping

    International Nuclear Information System (INIS)

    Gamell, Marc; Kolla, Hemanth; Mayo, Jackson; Heroux, Michael A.

    2017-01-01

    In order to achieve exascale systems, application resilience needs to be addressed. Some programming models, such as task-DAG (directed acyclic graphs) architectures, currently embed resilience features whereas traditional SPMD (single program, multiple data) and message-passing models do not. Since a large part of the community's code base follows the latter models, it is still required to take advantage of application characteristics to minimize the overheads of fault tolerance. To that end, this paper explores how recovering from hard process/node failures in a local manner is a natural approach for certain applications to obtain resilience at lower costs in faulty environments. In particular, this paper targets enabling online, semitransparent local recovery for stencil computations on current leadership-class systems as well as presents programming support and scalable runtime mechanisms. Also described and demonstrated in this paper is the effect of failure masking, which allows the effective reduction of impact on total time to solution due to multiple failures. Furthermore, we discuss, implement, and evaluate ghost region expansion and cell-to-rank remapping to increase the probability of failure masking. To conclude, this paper shows the integration of all aforementioned mechanisms with the S3D combustion simulation through an experimental demonstration (using the Titan system) of the ability to tolerate high failure rates (i.e., node failures every five seconds) with low overhead while sustaining performance at large scales. In addition, this demonstration also displays the failure masking probability increase resulting from the combination of both ghost region expansion and cell-to-rank remapping.

  19. Failure of Grass Covered Flood Defences with Roads on Top Due to Wave Overtopping: A Probabilistic Assessment Method

    Directory of Open Access Journals (Sweden)

    Juan P. Aguilar-López

    2018-06-01

    Full Text Available Hard structures, i.e., roads, are commonly found over flood defences, such as dikes, in order to ensure access and connectivity between flood protected areas. Several climate change future scenario studies have concluded that flood defences will be required to withstand more severe storms than the ones used for their original design. Therefore, this paper presents a probabilistic methodology to assess the effect of a road on top of a dike: it gives the failure probability of the grass cover due to wave overtopping over a wide range of design storms. The methodology was developed by building two different dike configurations in computational fluid dynamics Navier–Stokes solution software; one with a road on top and one without a road. Both models were validated with experimental data collected from field-scale experiments. Later, both models were used to produce data sets for training simpler and faster emulators. These emulators were coupled to a simplified erosion model which allowed testing storm scenarios which resulted in local scouring conditioned statistical failure probabilities. From these results it was estimated that the dike with a road has higher probabilities of failure (5 × 10⁻⁵ < Pf < 1 × 10⁻⁴) than a dike without a road (Pf < 1 × 10⁻⁶) if realistic grass quality spatial distributions were assumed. The coupled emulator-erosion model was able to yield realistic probabilities, given all the uncertainties in the modelling process, and it seems to be a promising tool for quantifying grass cover erosion failure.
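
    A generic sketch of the emulator workflow (not the paper's CFD or erosion models): a cheap surrogate is fitted to a small number of "expensive" simulator runs and then sampled by Monte Carlo over storm conditions to estimate a cover-failure probability. All functions and parameters below are hypothetical.

```python
# Generic sketch of the emulator idea with hypothetical models and parameters:
# fit a cheap surrogate to a few "expensive" simulator runs, then Monte Carlo
# the surrogate over storm conditions to estimate a cover-failure probability.
import numpy as np

def expensive_simulator(q):                # stand-in for the hydrodynamic + erosion model
    return 0.002 * q ** 1.6                # erosion depth [m] vs. overtopping discharge q [l/s/m]

# 1. train a cheap polynomial emulator on a small design of simulator runs
q_design = np.linspace(1.0, 100.0, 12)
emulator = np.poly1d(np.polyfit(np.log(q_design), np.log(expensive_simulator(q_design)), deg=2))

# 2. Monte Carlo the emulator over storm-induced discharges
rng = np.random.default_rng(4)
q_storm = rng.lognormal(mean=np.log(10.0), sigma=0.8, size=500_000)
depth = np.exp(emulator(np.log(q_storm)))

cover_thickness = 0.15                     # erosion depth the grass cover can sustain [m], assumed
print(f"estimated failure probability: {np.mean(depth > cover_thickness):.2e}")
```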

  20. New research directions in flexural member failure at an interior support (Interaction of web crippling and bending moment)

    NARCIS (Netherlands)

    Hofmeyer, H.; Kerstens, J.G.M.; Snijder, H.H.; Bakker, M.C.M.

    1996-01-01

    Design rules describing failure at an interior support of cold-formed steel flexural members are of an empirical nature. This is probably due to the complex character of the failure mechanisms, which makes an analytical approach difficult. An overview of research on this subject has been made. The

  1. [Examination of safety improvement by failure record analysis that uses reliability engineering].

    Science.gov (United States)

    Kato, Kyoichi; Sato, Hisaya; Abe, Yoshihisa; Ishimori, Yoshiyuki; Hirano, Hiroshi; Higashimura, Kyoji; Amauchi, Hiroshi; Yanakita, Takashi; Kikuchi, Kei; Nakazawa, Yasuo

    2010-08-20

    We verified how maintenance checks of medical systems, including start-of-work and end-of-work checks, were effective for preventive maintenance and safety improvement. In this research, data on device failures in multiple facilities were collected, and the trouble-repair records were analyzed using reliability engineering techniques. An analysis was performed of data on the systems used in eight hospitals (8 general systems, 6 angiography systems, 11 CT systems, 8 MRI systems, 8 RI systems, and 9 radiation therapy systems). The data collection period was the nine months from April to December 2008. Seven items were analyzed, including: (1) mean time between failures (MTBF); (2) mean time to repair (MTTR); (3) mean down time (MDT); (4) number of failures found by the morning check; (5) failure occurrence time by modality. The classification of breakdowns per device, their incidence, and their trends could be understood by introducing reliability engineering. Analysis, evaluation, and feedback on the failure history are useful to keep downtime to a minimum and to ensure safety.
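
    The reliability-engineering quantities analyzed in the study can be computed directly from repair records. A sketch with hypothetical entries (operating hours before failure, active repair hours, total downtime hours):

```python
# Sketch with hypothetical repair records: MTBF, MTTR, MDT and availability
# computed from (operating hours before failure, active repair hours,
# total downtime hours) tuples.
failures = [
    (410.0, 2.0, 6.0),
    (655.0, 1.5, 3.0),
    (290.0, 4.0, 12.0),
    (820.0, 0.5, 1.0),
]

n = len(failures)
mtbf = sum(f[0] for f in failures) / n   # mean time between failures
mttr = sum(f[1] for f in failures) / n   # mean time to repair
mdt = sum(f[2] for f in failures) / n    # mean down time

availability = mtbf / (mtbf + mdt)
print(f"MTBF = {mtbf:.0f} h, MTTR = {mttr:.1f} h, MDT = {mdt:.1f} h")
print(f"availability = {availability:.4f}")
```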

  2. Results of Investigations of Failures of Geothermal Direct Use Well Pumps

    Energy Technology Data Exchange (ETDEWEB)

    Culver, G.

    1994-12-01

    Failures of 13 geothermal direct-use well pumps were investigated and information obtained about an additional 5 pumps that have been in service up to 23 years, but have not failed. Pumps with extra long lateral and variable-speed drives had the highest correlation with reduced time in service. There appears to be at least circumstantial evidence that recirculation may be a cause of reduced pump life. If recirculation is a cause of pump failures, pump specifiers will need to be more aware of minimum flow conditions as well as maximum flow conditions when specifying pumps. Over-sizing pumps and the tendency to specify pumps with high flow and low Net Positive Suction Head (NPSH) could lead to increased problems with recirculation.

  3. Monte Carlo methods to calculate impact probabilities

    Science.gov (United States)

    Rickman, H.; Wiśniowski, T.; Wajer, P.; Gabryszewski, R.; Valsecchi, G. B.

    2014-09-01

    Context. Unraveling the events that took place in the solar system during the period known as the late heavy bombardment requires the interpretation of the cratered surfaces of the Moon and terrestrial planets. This, in turn, requires good estimates of the statistical impact probabilities for different source populations of projectiles, a subject that has received relatively little attention, since the works of Öpik (1951, Proc. R. Irish Acad. Sect. A, 54, 165) and Wetherill (1967, J. Geophys. Res., 72, 2429). Aims: We aim to work around the limitations of the Öpik and Wetherill formulae, which are caused by singularities due to zero denominators under special circumstances. Using modern computers, it is possible to make good estimates of impact probabilities by means of Monte Carlo simulations, and in this work, we explore the available options. Methods: We describe three basic methods to derive the average impact probability for a projectile with a given semi-major axis, eccentricity, and inclination with respect to a target planet on an elliptic orbit. One is a numerical averaging of the Wetherill formula; the next is a Monte Carlo super-sizing method using the target's Hill sphere. The third uses extensive minimum orbit intersection distance (MOID) calculations for a Monte Carlo sampling of potentially impacting orbits, along with calculations of the relevant interval for the timing of the encounter allowing collision. Numerical experiments are carried out for an intercomparison of the methods and to scrutinize their behavior near the singularities (zero relative inclination and equal perihelion distances). Results: We find an excellent agreement between all methods in the general case, while there appear large differences in the immediate vicinity of the singularities. With respect to the MOID method, which is the only one that does not involve simplifying assumptions and approximations, the Wetherill averaging impact probability departs by diverging toward

  4. Failure Diagnosis and Prognosis of Rolling - Element Bearings using Artificial Neural Networks: A Critical Overview

    Science.gov (United States)

    Rao, B. K. N.; Srinivasa Pai, P.; Nagabhushana, T. N.

    2012-05-01

    Rolling - Element Bearings are extensively used in almost all global industries. Any critical failures in these vitally important components would not only affect the overall systems performance but also its reliability, safety, availability and cost-effectiveness. Proactive strategies do exist to minimise impending failures in real time and at a minimum cost. Continuous innovative developments are taking place in the field of Artificial Neural Networks (ANNs) technology. Significant research and development are taking place in many universities, private and public organizations and a wealth of published literature is available highlighting the potential benefits of employing ANNs in intelligently monitoring, diagnosing, prognosing and managing rolling-element bearing failures. This paper attempts to critically review the recent trends in this topical area of interest.

  5. Failure Diagnosis and Prognosis of Rolling - Element Bearings using Artificial Neural Networks: A Critical Overview

    International Nuclear Information System (INIS)

    Rao, B K N; Pai, P Srinivasa; Nagabhushana, T N

    2012-01-01

    Rolling - Element Bearings are extensively used in almost all global industries. Any critical failures in these vitally important components would not only affect the overall systems performance but also its reliability, safety, availability and cost-effectiveness. Proactive strategies do exist to minimise impending failures in real time and at a minimum cost. Continuous innovative developments are taking place in the field of Artificial Neural Networks (ANNs) technology. Significant research and development are taking place in many universities, private and public organizations and a wealth of published literature is available highlighting the potential benefits of employing ANNs in intelligently monitoring, diagnosing, prognosing and managing rolling-element bearing failures. This paper attempts to critically review the recent trends in this topical area of interest.

  6. Failure detection by adaptive lattice modelling using Kalman filtering methodology : application to NPP

    International Nuclear Information System (INIS)

    Ciftcioglu, O.

    1991-03-01

    Detection of failure in the operational status of an NPP is described. The method uses a lattice form of signal modelling established by means of Kalman filtering methodology. In this approach each lattice parameter is considered to be a state, and the minimum variance estimate of the states is obtained adaptively by optimal parameter estimation, with fast convergence and favourable statistical properties. In particular, the state covariance is also the covariance of the error committed by that estimate of the state value, and the Mahalanobis distance formed for pattern comparison follows a χ² distribution for normally distributed signals. Failure detection is performed through a decision-making process using probabilistic assessments based on the statistical information provided. The failure detection system is implemented in the multi-channel signal environment of the Borssele NPP and its favourable features are demonstrated. (author). 29 refs.; 7 figs
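    The decision step described above can be illustrated with a small sketch: a failure is flagged when the squared Mahalanobis distance between the current lattice-parameter estimate and its normal-operation reference exceeds a χ² threshold. The function name, reference values and covariance below are assumptions for illustration, not the paper's implementation.

```python
# Illustrative sketch only (assumed reference values and covariance).
import numpy as np
from scipy import stats

def mahalanobis_test(theta_hat, theta_ref, cov, alpha=0.01):
    """Flag a failure when the squared Mahalanobis distance exceeds a chi-squared threshold."""
    diff = theta_hat - theta_ref
    d2 = float(diff @ np.linalg.solve(cov, diff))           # squared Mahalanobis distance
    threshold = stats.chi2.ppf(1.0 - alpha, df=diff.size)   # chi-squared quantile for Gaussian signals
    return d2, threshold, d2 > threshold

theta_ref = np.zeros(3)                  # lattice parameters under normal operation (assumed)
cov = np.diag([0.010, 0.020, 0.015])     # their estimated covariance (assumed)
d2, thr, failed = mahalanobis_test(np.array([0.05, 0.30, -0.02]), theta_ref, cov)
print(f"d^2 = {d2:.2f}, threshold = {thr:.2f}, failure flagged: {failed}")
```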

  7. Probability and uncertainty in nuclear safety decisions

    International Nuclear Information System (INIS)

    Pate-Cornell, M.E.

    1986-01-01

    In this paper, we examine some problems posed by the use of probabilities in Nuclear Safety decisions. We discuss some of the theoretical difficulties due to the collective nature of regulatory decisions, and, in particular, the calibration and the aggregation of risk information (e.g., experts' opinions). We argue that, if one chooses numerical safety goals as a regulatory basis, one can reduce the constraints to an individual safety goal and a cost-benefit criterion. We show the relevance of risk uncertainties in this kind of regulatory framework. We conclude that, whereas expected values of future failure frequencies are adequate to show compliance with economic constraints, the use of a fractile (e.g., 95%) to be specified by the regulatory agency is justified to treat hazard uncertainties for the individual safety goal. (orig.)

  8. Effects of footwear and stride length on metatarsal strains and failure in running.

    Science.gov (United States)

    Firminger, Colin R; Fung, Anita; Loundagin, Lindsay L; Edwards, W Brent

    2017-11-01

    The metatarsal bones of the foot are particularly susceptible to stress fracture owing to the high strains they experience during the stance phase of running. Shoe cushioning and stride length reduction represent two potential interventions to decrease metatarsal strain and thus stress fracture risk. Fourteen male recreational runners ran overground at a 5-km pace while motion capture and plantar pressure data were collected during four experimental conditions: traditional shoe at preferred and 90% preferred stride length, and minimalist shoe at preferred and 90% preferred stride length. Combined musculoskeletal–finite element modeling based on motion analysis and computed tomography data was used to quantify metatarsal strains, and the probability of failure was determined using stress-life predictions. No significant interactions between footwear and stride length were observed. Running in minimalist shoes significantly increased strains for all metatarsals by 28.7% (SD 6.4%). Running at 90% preferred stride length decreased strains for metatarsal 4 by 4.2% (SD 2.0%; p ≤ 0.007), and no differences in probability of failure were observed. Significant increases in metatarsal strains and the probability of failure were observed for recreational runners acutely transitioning to minimalist shoes. Running with a 10% reduction in stride length did not appear to be a beneficial technique for reducing the risk of metatarsal stress fracture; however, the increased number of loading cycles for a given distance was not detrimental either. Copyright © 2017 Elsevier Ltd. All rights reserved.

  9. Quantum probabilities as Dempster-Shafer probabilities in the lattice of subspaces

    International Nuclear Information System (INIS)

    Vourdas, A.

    2014-01-01

    The orthocomplemented modular lattice of subspaces L[H(d)], of a quantum system with d-dimensional Hilbert space H(d), is considered. A generalized additivity relation which holds for Kolmogorov probabilities is violated by quantum probabilities in the full lattice L[H(d)] (it is only valid within the Boolean subalgebras of L[H(d)]). This suggests the use of more general (than Kolmogorov) probability theories, and here the Dempster-Shafer probability theory is adopted. An operator D(H1, H2), which quantifies deviations from Kolmogorov probability theory, is introduced, and it is shown to be intimately related to the commutator of the projectors P(H1), P(H2) onto the subspaces H1, H2. As an application, it is shown that the proof of the inequalities of Clauser, Horne, Shimony, and Holt for a system of two spin 1/2 particles is valid for Kolmogorov probabilities, but it is not valid for Dempster-Shafer probabilities. The violation of these inequalities in experiments supports the interpretation of quantum probabilities as Dempster-Shafer probabilities

  10. Probabilistic analysis for fatigue failure of leg-supported liquid containers under random earthquake-type excitation

    International Nuclear Information System (INIS)

    Fujita, Takafumi

    1981-01-01

    Leg-supported cylindrical containers, frequently used in nuclear power plants and chemical plants, and leg-supported rectangular containers, such as water and fuel tanks, are structures whose reliability during earthquakes is a concern. In this study, the structural reliability of such leg-supported liquid containers under earthquakes was analyzed from the viewpoint of fatigue failure at the joints between the tanks and the supporting legs and at the fixing parts of the legs. The second-order unsteady coupled probability density of response displacement and response velocity and the first- and second-order unsteady probability densities of the response displacement envelope were determined; using these results, the expected value, variance and unsteady probability density of cumulative damage were obtained on the basis of Miner's law, and the structural reliability of the system was thus analyzed. The result of the analysis was verified against vibration tests using many simulated earthquake waves, and a fatigue failure experiment on a model under sine wave excitation was carried out. The mechanical model for the analysis, the unsteady probability densities described above, the structural reliability analysis and the experiments are reported. (Kako, I.)

  11. Bidirectional Cardio-Respiratory Interactions in Heart Failure

    Directory of Open Access Journals (Sweden)

    Nikola N. Radovanović

    2018-03-01

    Full Text Available We investigated cardio-respiratory coupling in patients with heart failure by quantification of bidirectional interactions between cardiac (RR intervals) and respiratory signals with complementary measures of time series analysis. Heart failure patients were divided into three groups of twenty, age and gender matched, subjects: with sinus rhythm (HF-Sin), with sinus rhythm and ventricular extrasystoles (HF-VES), and with permanent atrial fibrillation (HF-AF). We included patients with an indication for implantation of an implantable cardioverter defibrillator or a cardiac resynchronization therapy device. ECG and respiratory signals were simultaneously acquired during 20 min in supine position at spontaneous breathing frequency in 20 healthy control subjects and in patients before device implantation. We used coherence, Granger causality and cross-sample entropy analysis as complementary measures of bidirectional interactions between RR intervals and respiratory rhythm. In heart failure patients with arrhythmias (HF-VES and HF-AF) there is no coherence between signals (p < 0.01), while in HF-Sin it is reduced (p < 0.05), compared with control subjects. In all heart failure groups causality between signals is diminished, but with significantly stronger causality of the RR signal on the respiratory signal in HF-VES. Cross-sample entropy analysis revealed the strongest synchrony between respiratory and RR signals in the HF-VES group. Beside respiratory sinus arrhythmia there is another type of cardio-respiratory interaction based on the synchrony between cardiac and respiratory rhythm. Both of them are altered in heart failure patients. Respiratory sinus arrhythmia is reduced in HF-Sin patients and vanishes in heart failure patients with arrhythmias. In contrast, in the HF-Sin and HF-VES groups, synchrony increased, probably as a consequence of dominant neural compensatory mechanisms. The coupling of cardiac and respiratory rhythm in heart failure patients varies depending on the presence of atrial/ventricular arrhythmias.

  12. Fracture strength and probability of survival of narrow and extra-narrow dental implants after fatigue testing: In vitro and in silico analysis.

    Science.gov (United States)

    Bordin, Dimorvan; Bergamo, Edmara T P; Fardin, Vinicius P; Coelho, Paulo G; Bonfante, Estevam A

    2017-07-01

    To assess the probability of survival (reliability) and failure modes of narrow implants with different diameters. For fatigue testing, 42 implants with the same macrogeometry and internal conical connection were divided, according to diameter, as follows: narrow (Ø3.3 × 10 mm) and extra-narrow (Ø2.9 × 10 mm) (21 per group). Identical abutments were torqued to the implants, and standardized maxillary incisor crowns were cemented and subjected to step-stress accelerated life testing (SSALT) in water. The use-level probability Weibull curves and the reliability for missions of 50,000 and 100,000 cycles at 50, 100, 150 and 180 N were calculated. For the finite element analysis (FEA), two virtual models simulating the samples tested in fatigue were constructed. Loads of 50 N and 100 N were applied 30° off-axis at the crown, and the von Mises stress was calculated for implant and abutment. The beta (β) values were 0.67 for narrow and 1.32 for extra-narrow implants, indicating that failure rates did not increase with fatigue in the former, but were more likely associated with damage accumulation and wear-out failures in the latter. Both groups showed high reliability (up to 97.5%) at 50 and 100 N. A decreased reliability was observed for both groups at 150 and 180 N (ranging from 0 to 82.3%), but no significant difference was observed between groups. Failure predominantly involved abutment fracture for both groups. In the FEA at the 50 N load, the Ø3.3 mm implant showed higher von Mises stress in the abutment (7.75%) and implant (2%) than the Ø2.9 mm implant. There was no significant difference between narrow and extra-narrow implants regarding probability of survival. The failure mode was similar for both groups, restricted to abutment fracture. Copyright © 2017 Elsevier Ltd. All rights reserved.
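    For readers unfamiliar with the Weibull terms quoted above, the sketch below evaluates mission reliability R(n) = exp(-(n/η)^β) for the two reported shape values; the characteristic life η is a made-up placeholder, so the numbers are illustrative only and do not reproduce the study's reliability estimates.

```python
# Illustrative sketch only; eta (characteristic life, in cycles) is a placeholder value.
import numpy as np

def weibull_reliability(n_cycles, beta, eta):
    """Probability of surviving n_cycles under a two-parameter Weibull life model."""
    return float(np.exp(-(n_cycles / eta) ** beta))

eta = 5.0e5  # assumed characteristic life; not reported in the abstract
for label, beta in [("narrow, Ø3.3 mm", 0.67), ("extra-narrow, Ø2.9 mm", 1.32)]:
    r50 = weibull_reliability(50_000, beta, eta)
    r100 = weibull_reliability(100_000, beta, eta)
    print(f"{label}: R(50,000) = {r50:.3f}, R(100,000) = {r100:.3f}")
```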

  13. Basic concepts in metal work failure after metastatic spine tumour surgery.

    Science.gov (United States)

    Kumar, Naresh; Patel, Ravish; Wadhwa, Anshuja Charvi; Kumar, Aravind; Milavec, Helena Maria; Sonawane, Dhiraj; Singh, Gurpal; Benneker, Lorin Michael

    2018-04-01

    The development of spinal implants marks a watershed in the evolution of metastatic spine tumour surgery (MSTS), which has evolved from standalone decompressive laminectomy to instrumented stabilization and decompression, with reconstruction when necessary. Fusion may not be feasible after MSTS owing to the poor quality of the graft host bed, along with adjunct chemotherapy and/or radiotherapy postoperatively. With an increase in the survival of patients with spinal tumours, there is a probability of an increase in the rate of implant failure. This review aims to help establish a clear understanding of the implants/constructs used in MSTS and to highlight the fundamental biomechanics of implant/construct failures. Published literature on implant failure after spine surgery and MSTS has been reviewed. The evolution of spinal implants and their role in MSTS is briefly described. The review defines implant/construct failures using radiological parameters that are practical, feasible, and derived from historical descriptions. We discuss common modes of implant/construct failure after MSTS to allow further understanding, interception, and prevention of catastrophic failure. Implant failure rates in MSTS are in the range of 2-8%. Variability in patterns of failure has been observed based on the anatomical region and the type of construct used. Patients with construct/implant failures may or may not be symptomatic and present either as early (< 3 months) or late (> 3 months) failures. It has been noted that not all implant failures after MSTS result in revisions. Based on the observed radiological criteria and clinical presentations, we propose a clinico-radiological classification for implant/construct failure after MSTS.

  14. PROBABILITY SURVEYS, CONDITIONAL PROBABILITIES AND ECOLOGICAL RISK ASSESSMENT

    Science.gov (United States)

    We show that probability-based environmental resource monitoring programs, such as the U.S. Environmental Protection Agency's (U.S. EPA) Environmental Monitoring and Assessment Program, and conditional probability analysis can serve as a basis for estimating ecological risk over ...

  15. Minimum income protection in the Netherlands

    NARCIS (Netherlands)

    van Peijpe, T.

    2009-01-01

    This article offers an overview of the Dutch legal system of minimum income protection through collective bargaining, social security, and statutory minimum wages. In addition to collective agreements, the Dutch statutory minimum wage offers income protection to a small number of workers. Its

  16. A Big Data Analysis Approach for Rail Failure Risk Assessment.

    Science.gov (United States)

    Jamshidi, Ali; Faghih-Roohi, Shahrzad; Hajizadeh, Siamak; Núñez, Alfredo; Babuska, Robert; Dollevoet, Rolf; Li, Zili; De Schutter, Bart

    2017-08-01

    Railway infrastructure monitoring is a vital task to ensure rail transportation safety. A rail failure could have a considerable impact not only on train delays and maintenance costs, but also on the safety of passengers. In this article, the aim is to assess the risk of a rail failure by analyzing a type of rail surface defect called squats, which are detected automatically among the huge number of records from video cameras. We propose an image processing approach for automatic detection of squats, especially severe types that are prone to rail breaks. We measure the visual length of the squats and use it to model the failure risk. For the assessment of the rail failure risk, we estimate the probability of rail failure based on the growth of squats. Moreover, we perform severity and crack growth analyses to consider the impact of rail traffic loads on defects in three different growth scenarios. The failure risk estimations are provided for several samples of squats with different crack growth lengths on a busy rail track of the Dutch railway network. The results illustrate the practicality and efficiency of the proposed approach. © 2017 The Authors Risk Analysis published by Wiley Periodicals, Inc. on behalf of Society for Risk Analysis.

  17. Quantitative functional failure analysis of a thermal-hydraulic passive system by means of bootstrapped Artificial Neural Networks

    International Nuclear Information System (INIS)

    Zio, E.; Apostolakis, G.E.; Pedroni, N.

    2010-01-01

    The estimation of the functional failure probability of a thermal-hydraulic (T-H) passive system can be done by Monte Carlo (MC) sampling of the epistemic uncertainties affecting the system model and the numerical values of its parameters, followed by the computation of the system response by a mechanistic T-H code, for each sample. The computational effort associated to this approach can be prohibitive because a large number of lengthy T-H code simulations must be performed (one for each sample) for accurate quantification of the functional failure probability and the related statistics. In this paper, the computational burden is reduced by replacing the long-running, original T-H code by a fast-running, empirical regression model: in particular, an Artificial Neural Network (ANN) model is considered. It is constructed on the basis of a limited-size set of data representing examples of the input/output nonlinear relationships underlying the original T-H code; once the model is built, it is used for performing, in an acceptable computational time, the numerous system response calculations needed for an accurate failure probability estimation, uncertainty propagation and sensitivity analysis. The empirical approximation of the system response provided by the ANN model introduces an additional source of (model) uncertainty, which needs to be evaluated and accounted for. A bootstrapped ensemble of ANN regression models is here built for quantifying, in terms of confidence intervals, the (model) uncertainties associated with the estimates provided by the ANNs. For demonstration purposes, an application to the functional failure analysis of an emergency passive decay heat removal system in a simple steady-state model of a Gas-cooled Fast Reactor (GFR) is presented. The functional failure probability of the system is estimated together with global Sobol sensitivity indices. The bootstrapped ANN regression model built with low computational time on few (e.g., 100) data

  18. Quantitative functional failure analysis of a thermal-hydraulic passive system by means of bootstrapped Artificial Neural Networks

    Energy Technology Data Exchange (ETDEWEB)

    Zio, E., E-mail: enrico.zio@polimi.i [Energy Department, Politecnico di Milano, Via Ponzio 34/3, 20133 Milan (Italy); Apostolakis, G.E., E-mail: apostola@mit.ed [Department of Nuclear Science and Engineering, Massachusetts Institute of Technology, 77 Massachusetts Avenue, Cambridge, MA 02139-4307 (United States); Pedroni, N. [Energy Department, Politecnico di Milano, Via Ponzio 34/3, 20133 Milan (Italy)

    2010-05-15

    The estimation of the functional failure probability of a thermal-hydraulic (T-H) passive system can be done by Monte Carlo (MC) sampling of the epistemic uncertainties affecting the system model and the numerical values of its parameters, followed by the computation of the system response by a mechanistic T-H code, for each sample. The computational effort associated to this approach can be prohibitive because a large number of lengthy T-H code simulations must be performed (one for each sample) for accurate quantification of the functional failure probability and the related statistics. In this paper, the computational burden is reduced by replacing the long-running, original T-H code by a fast-running, empirical regression model: in particular, an Artificial Neural Network (ANN) model is considered. It is constructed on the basis of a limited-size set of data representing examples of the input/output nonlinear relationships underlying the original T-H code; once the model is built, it is used for performing, in an acceptable computational time, the numerous system response calculations needed for an accurate failure probability estimation, uncertainty propagation and sensitivity analysis. The empirical approximation of the system response provided by the ANN model introduces an additional source of (model) uncertainty, which needs to be evaluated and accounted for. A bootstrapped ensemble of ANN regression models is here built for quantifying, in terms of confidence intervals, the (model) uncertainties associated with the estimates provided by the ANNs. For demonstration purposes, an application to the functional failure analysis of an emergency passive decay heat removal system in a simple steady-state model of a Gas-cooled Fast Reactor (GFR) is presented. The functional failure probability of the system is estimated together with global Sobol sensitivity indices. The bootstrapped ANN regression model built with low computational time on few (e.g., 100) data
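    A minimal sketch of the bootstrapped-surrogate idea described above follows, with a toy response function standing in for the T-H code and scikit-learn's MLPRegressor standing in for the ANN; the data, failure threshold, network size and number of replicates are assumptions, not the authors' setup.

```python
# Illustrative sketch only: a toy response function replaces the T-H code and a small
# scikit-learn MLP replaces the ANN surrogate described in the paper.
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.utils import resample

rng = np.random.default_rng(0)
X = rng.uniform(size=(100, 4))                     # 100 sampled epistemic parameter vectors
y = X.sum(axis=1) + 0.05 * rng.normal(size=100)    # stand-in for the T-H code output

def failure_prob(model, n_mc=20_000, threshold=2.5):
    """Monte Carlo estimate of P(response > threshold) using the surrogate."""
    samples = rng.uniform(size=(n_mc, 4))
    return float(np.mean(model.predict(samples) > threshold))

estimates = []
for _ in range(20):                                # bootstrap replicates of the training set
    Xb, yb = resample(X, y)
    ann = MLPRegressor(hidden_layer_sizes=(8,), max_iter=3000).fit(Xb, yb)
    estimates.append(failure_prob(ann))

lo, hi = np.percentile(estimates, [2.5, 97.5])
print(f"failure probability ~ {np.mean(estimates):.3f} (95% bootstrap interval {lo:.3f}-{hi:.3f})")
```

    The spread of the bootstrap estimates plays the role of the model-uncertainty confidence interval discussed in the abstract.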

  19. Analysis of calculating methods for failure distribution function based on maximal entropy principle

    International Nuclear Information System (INIS)

    Guo Chunying; Lin Yuangen; Jiang Meng; Wu Changli

    2009-01-01

    The computation of failure distribution functions of electronic devices exposed to gamma rays is discussed here. First, the possible failure distribution models for the devices are determined through tests of statistical hypotheses using the test data. The results show that the devices' failure behaviour can be fitted by several distribution models when the test data are few. In order to select the optimum failure distribution model, the maximal entropy principle is used and the elementary failure models are determined. Then, the bootstrap estimation method is used to obtain interval estimates of the mean and the standard deviation. On this basis, the maximal entropy principle is used again and the simulated annealing method is applied to find the optimum values of the mean and the standard deviation. Accordingly, the electronic devices' optimum failure distributions are finally determined and the survival probabilities are calculated. (authors)
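    The bootstrap interval-estimation step mentioned above can be sketched as follows; the data values are synthetic and only illustrate the resampling mechanics, not the paper's test results.

```python
# Illustrative sketch only (synthetic sample, not the paper's test results).
import numpy as np

rng = np.random.default_rng(1)
data = np.array([12.1, 9.8, 14.3, 11.0, 10.5, 13.2])   # assumed small sample

means, stds = [], []
for _ in range(10_000):
    sample = rng.choice(data, size=data.size, replace=True)  # resample with replacement
    means.append(sample.mean())
    stds.append(sample.std(ddof=1))

print("mean, 95% bootstrap interval:", np.percentile(means, [2.5, 97.5]))
print("std,  95% bootstrap interval:", np.percentile(stds, [2.5, 97.5]))
```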

  20. Minimum 1D P wave velocity model for the Cordillera Volcanica de Guanacaste, Costa Rica

    International Nuclear Information System (INIS)

    Araya, Maria C.; Linkimer, Lepolt; Taylor, Waldo

    2016-01-01

    A minimum 1D velocity model is derived from 475 local earthquakes registered by the Observatorio Vulcanologico y Sismologico Arenal Miravalles (OSIVAM) for the Cordillera Volcanica de Guanacaste between January 2006 and July 2014. The model consists of six layers from the surface down to 80 km depth, with velocities between 3.96 and 7.79 km/s. The station corrections vary between -0.28 and 0.45 and show a trend of positive values on the volcanic arc and negative values on the forearc, in agreement with the crustal thickness. The relocated earthquakes form three main groups of epicenters that could be associated with activity on inferred faults. The minimum 1D velocity model provides a simplified picture of the crustal structure and aims to improve the routine location of earthquakes performed by OSIVAM. (author) [es]

  1. Analysis of risk factors for cluster behavior of dental implant failures.

    Science.gov (United States)

    Chrcanovic, Bruno Ramos; Kisch, Jenö; Albrektsson, Tomas; Wennerberg, Ann

    2017-08-01

    Some studies have indicated that implant failures are commonly concentrated in a few patients. To identify and analyze cluster behavior of dental implant failures among subjects of a retrospective study. This retrospective study included only patients receiving at least three implants. Patients presenting at least three implant failures were classified as presenting cluster behavior. Univariate and multivariate logistic regression models and generalized estimating equations analysis evaluated the effect of explanatory variables on the cluster behavior. There were 1406 patients with three or more implants (8337 implants, 592 failures). Sixty-seven (4.77%) patients presented cluster behavior, accounting for 56.8% of all implant failures. The intake of antidepressants and bruxism were identified as potential negative factors exerting a statistically significant influence on cluster behavior at the patient level. The negative factors at the implant level were turned implants, short implants, poor bone quality, age of the patient, the intake of medications to reduce gastric acid production, smoking, and bruxism. A cluster pattern among patients with implant failure is highly probable. A number of systemic and local factors could be of interest as predictors of implant failure, although a direct causal relationship cannot be ascertained. © 2017 Wiley Periodicals, Inc.

  2. WE-G-BRA-08: Failure Modes and Effects Analysis (FMEA) for Gamma Knife Radiosurgery

    International Nuclear Information System (INIS)

    Xu, Y; Bhatnagar, J; Bednarz, G; Flickinger, J; Arai, Y; Huq, M Saiful; Vacsulka, J; Monaco, E; Niranjan, A; Lunsford, L Dade; Feng, W

    2015-01-01

    Purpose: To perform a failure modes and effects analysis (FMEA) study for Gamma Knife (GK) radiosurgery processes at our institution, based on our experience with the treatment of more than 13,000 patients. Methods: A team consisting of medical physicists, nurses, radiation oncologists and neurosurgeons at the University of Pittsburgh Medical Center, together with an external physicist expert, was formed for the FMEA study. A process tree and a failure mode table were created for the GK procedures using the Leksell GK Perfexion and 4C units. Scores for the probability of occurrence (O), the severity (S), and the probability of no detection (D) were assigned to each failure mode by each professional on a scale from 1 to 10. The risk priority number (RPN) for each failure mode was then calculated (RPN = O × S × D) using the average scores from all data sets collected. Results: The established process tree for GK radiosurgery consists of 10 sub-processes and 53 steps, including a sub-process for frame placement and 11 steps that are directly related to the frame-based nature of GK radiosurgery. Out of the 86 failure modes identified, 40 failure modes are GK specific, caused by the potential for inappropriate use of the radiosurgery head frame, the imaging fiducial boxes, the GK helmets and plugs, and the GammaPlan treatment planning system. The other 46 failure modes are associated with the registration, imaging, image transfer and contouring processes that are common to all radiation therapy techniques. The failure modes with the highest hazard scores are related to imperfect frame adaptor attachment, bad fiducial box assembly, overlooked target areas, inaccurate previous treatment information and excessive patient movement during the MRI scan. Conclusion: The implementation of the FMEA approach for Gamma Knife radiosurgery enabled deeper understanding of the overall process among all professionals involved in the care of the patient and helped identify potential

  3. Direct ultimate disposal of spent fuel. Simulation of shaft transport. Tests for removing operating failures (TA 6)

    International Nuclear Information System (INIS)

    Filbert, W.; Heda, M.; Neydek, J.

    1994-03-01

    Probable operating failures were analysed and appropriate means for recovery were planned. Based on the analysis of probable failures, three major countermeasures were defined and planned in detail for subsequent demonstration tests: recovery of a fully functioning plateau car, recovery of a plateau car that has become stuck, and recovery of a plateau car that has run off the rails. All failures investigated can be repaired by one countermeasure, or a combination of the measures provided. Trained personnel will be able to restore a loaded POLLUX plateau car that has run off the track and been damaged to full service within approx. 50 minutes. Dose calculations on a conservative basis indicate that personal doses resulting from recovery and repair work lie between 52 μSv (5.2 mrem) and 200 μSv (20 mrem). The collective dose is calculated to be approx. 250 μSv (25 mrem). (orig./HP) [de]

  4. Slope stability probability classification, Waikato Coal Measures, New Zealand

    Energy Technology Data Exchange (ETDEWEB)

    Lindsay, P.; Gillard, G.R.; Moore, T.A. [CRL Energy, PO Box 29-415, Christchurch (New Zealand); Campbell, R.N.; Fergusson, D.A. [Solid Energy North, Private Bag 502, Huntly (New Zealand)

    2001-01-01

    Ferm classified lithological units have been identified and described in the Waikato Coal Measures in open pits in the Waikato coal region. These lithological units have been classified geotechnically by mechanical tests and discontinuity measurements. Using these measurements slope stability probability classifications (SSPC) have been quantified based on an adaptation of Hack's [Slope Stability Probability Classification, ITC Delft Publication, Enschede, Netherlands, vol. 43, 1998, 273 pp.] SSPC system, which places less influence on rock quality designation and unconfined compressive strength than previous slope/rock mass rating systems. The Hack weathering susceptibility rating has been modified by using chemical index of alteration values determined from XRF major element analyses. Slaking is an important parameter in slope stability in the Waikato Coal Measures lithologies and hence, a non-subjective method of assessing slaking in relation to the chemical index of alteration has been introduced. Another major component of this adapted SSPC system is the inclusion of rock moisture content effects on slope stability. The main modifications of Hack's SSPC system are the introduction of rock intact strength derived from the modified Mohr-Coulomb failure criterion, which has been adapted for varying moisture content, weathering state and confining pressure. It is suggested that the subjectivity in assessing intact rock strength within broad bands in the initial SSPC system is a major weakness of the initial system. Initial results indicate a close relationship between rock mass strength values, calculated from rock mass friction angles and rock mass cohesion values derived from two established rock mass classification methods (modified Hoek-Brown failure criteria and MRMR) and the adapted SSPC system. The advantage of the modified SSPC system is that slope stability probabilities based on discontinuity-independent and discontinuity-dependent data and a

  5. Bidirectional Cardio-Respiratory Interactions in Heart Failure.

    Science.gov (United States)

    Radovanović, Nikola N; Pavlović, Siniša U; Milašinović, Goran; Kirćanski, Bratislav; Platiša, Mirjana M

    2018-01-01

    We investigated cardio-respiratory coupling in patients with heart failure by quantification of bidirectional interactions between cardiac (RR intervals) and respiratory signals with complementary measures of time series analysis. Heart failure patients were divided into three groups of twenty, age and gender matched, subjects: with sinus rhythm (HF-Sin), with sinus rhythm and ventricular extrasystoles (HF-VES), and with permanent atrial fibrillation (HF-AF). We included patients with an indication for implantation of an implantable cardioverter defibrillator or a cardiac resynchronization therapy device. ECG and respiratory signals were simultaneously acquired during 20 min in supine position at spontaneous breathing frequency in 20 healthy control subjects and in patients before device implantation. We used coherence, Granger causality and cross-sample entropy analysis as complementary measures of bidirectional interactions between RR intervals and respiratory rhythm. In heart failure patients with arrhythmias (HF-VES and HF-AF) there is no coherence between signals (p < 0.01), while in HF-Sin it is reduced (p < 0.05), compared with control subjects. In all heart failure groups causality between signals is diminished, but with significantly stronger causality of the RR signal on the respiratory signal in HF-VES. Cross-sample entropy analysis revealed the strongest synchrony between respiratory and RR signals in the HF-VES group. Beside respiratory sinus arrhythmia there is another type of cardio-respiratory interaction based on the synchrony between cardiac and respiratory rhythm. Both of them are altered in heart failure patients. Respiratory sinus arrhythmia is reduced in HF-Sin patients and vanishes in heart failure patients with arrhythmias. In contrast, in the HF-Sin and HF-VES groups, synchrony increased, probably as a consequence of dominant neural compensatory mechanisms. The coupling of cardiac and respiratory rhythm in heart failure patients varies depending on the presence of atrial/ventricular arrhythmias and it could be revealed by complementary methods of time series analysis.

  6. Gamma prior distribution selection for Bayesian analysis of failure rate and reliability

    International Nuclear Information System (INIS)

    Waller, R.A.; Johnson, M.M.; Waterman, M.S.; Martz, H.F. Jr.

    1976-07-01

    It is assumed that the phenomenon under study is such that the time-to-failure may be modeled by an exponential distribution with failure rate lambda. For Bayesian analyses of the assumed model, the family of gamma distributions provides conjugate prior models for lambda. Thus, an experimenter needs to select a particular gamma model to conduct a Bayesian reliability analysis. The purpose of this report is to present a methodology that can be used to translate engineering information, experience, and judgment into a choice of a gamma prior distribution. The proposed methodology assumes that the practicing engineer can provide percentile data relating to either the failure rate or the reliability of the phenomenon being investigated. For example, the methodology will select the gamma prior distribution which conveys an engineer's belief that the failure rate lambda simultaneously satisfies the probability statements P(lambda < 1.0 × 10⁻³) = 0.50 and P(lambda < 1.0 × 10⁻⁵) = 0.05. That is, two percentiles provided by an engineer are used to determine a gamma prior model which agrees with the specified percentiles. For those engineers who prefer to specify reliability percentiles rather than the failure rate percentiles illustrated above, it is possible to use the induced negative-log gamma prior distribution which satisfies the probability statements P(R(t0) < 0.99) = 0.50 and P(R(t0) < 0.99999) = 0.95, for some operating time t0. The report also includes graphs for selected percentiles which assist an engineer in applying the procedure. 28 figures, 16 tables
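    A small sketch of the percentile-matching idea, using the two failure-rate percentiles quoted above: for a fixed shape parameter the ratio of two gamma quantiles is independent of the scale, so the shape can be found by one-dimensional root finding and the scale recovered afterwards. This is an illustrative reconstruction of the general idea, not the report's own procedure or graphs.

```python
# Illustrative sketch only: match a gamma prior to the two failure-rate percentiles
# quoted in the abstract, P(lambda < 1.0e-3) = 0.50 and P(lambda < 1.0e-5) = 0.05.
import numpy as np
from scipy import stats, optimize

def gamma_from_percentiles(x1, p1, x2, p2):
    """Return (shape, scale) of the gamma distribution with CDF(x1) = p1 and CDF(x2) = p2."""
    def ratio_error(log_shape):
        a = np.exp(log_shape)
        # For a fixed shape, the scale cancels in the ratio of quantiles.
        return stats.gamma.ppf(p1, a) / stats.gamma.ppf(p2, a) - x1 / x2
    a = np.exp(optimize.brentq(ratio_error, -5.0, 5.0))
    scale = x1 / stats.gamma.ppf(p1, a)
    return a, scale

shape, scale = gamma_from_percentiles(1.0e-3, 0.50, 1.0e-5, 0.05)
print(f"shape = {shape:.3f}, scale = {scale:.3e}")
print("check:", stats.gamma.cdf(1.0e-3, shape, scale=scale),
      stats.gamma.cdf(1.0e-5, shape, scale=scale))
```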

  7. Factors attributing to the failure of endometrial sampling in women with postmenopausal bleeding

    NARCIS (Netherlands)

    Visser, Nicole C. M.; Breijer, Maria C.; Herman, Malou C.; Bekkers, Ruud L. M.; Veersema, Sebastiaan; Opmeer, Brent C.; Mol, Ben W. J.; Timmermans, Anne; Pijnenborg, Johanna M. A.

    2013-01-01

    To determine which doctor- and patient-related factors affect failure of outpatient endometrial sampling in women with postmenopausal bleeding, and to develop a multivariable prediction model to select women with a high probability of failed sampling. Prospective multicenter cohort study. Three

  8. Euthyroid sick syndrome in patients with acute renal failure

    International Nuclear Information System (INIS)

    Ilic Slobodan; Vlajkovic Marina; Rajic Milena; Bogicevic Momcilo

    2004-01-01

    Objectives: The purpose of this study was to evaluate the serum thyroid hormone profile in acute renal failure (ARF) patients according to the initial 131I-OIH clearance value as a predictor of ARF outcome. Patients and methods: Radioimmunoassays of T4, T3, FT4, FT3, rT3 and TSH were performed in 32 ARF patients within 7 days and 6 months after ARF onset. The patients were divided into three groups according to the kidney function recovery potential measured by 131I-OIH clearance, as follows: Group I: high probability of kidney recovery (131I-OIH clearance >250 ml/min), Group II: intermediate probability of kidney recovery (131I-OIH clearance 151-250 ml/min) and Group III: low probability of kidney recovery (131I-OIH clearance <150 ml/min). The results were compared with those obtained in 20 healthy subjects. Results: Total thyroid hormone and TSH values are displayed in the table. Total T4 and TSH values were slightly decreased in Group I, without reaching statistical significance, while the total T3 value was significantly decreased seven days after ARF onset. In the groups with intermediate and low probability of kidney recovery, both T3 and T4 values were significantly decreased, most prominently in Group III. After six months, thyroid hormone levels remained severely depressed without normalization only in Group III, while in Groups I and II total thyroid hormone levels normalized. At the end of the observation period, ARF patients with low probability of kidney recovery had significantly lower TSH values. Conclusion: Acute renal failure affects thyroid function, leading to a euthyroid sick syndrome characterized by decreased serum T3 and T4 without TSH elevation. The thyroid hormone disturbance parallels the impairment of renal function, being most pronounced in patients with low probability of kidney recovery. This pattern of altered thyroid hormone levels could be a result of

  9. Development of a new model to evaluate the probability of automatic plant trips for pressurized water reactors

    Energy Technology Data Exchange (ETDEWEB)

    Shimada, Yoshio [Institute of Nuclear Safety System Inc., Mihama, Fukui (Japan); Kawai, Katsunori; Suzuki, Hiroshi [Mitsubishi Heavy Industries Ltd., Tokyo (Japan)

    2001-09-01

    In order to improve the reliability of plant operations for pressurized water reactors, a new fault tree model was developed to evaluate the probability of automatic plant trips. This model consists of fault trees for sixteen systems and has the following features: (1) human errors and transmission line incidents are modeled using existing data, (2) the repair of failed components is taken into account when calculating component failure probabilities, and (3) uncertainty analysis is performed by an exact method. The results confirm that the upper and lower bound values obtained for the automatic plant trip probability lie within the bounds of existing data in Japan. The model is therefore applicable to the prediction of plant performance and reliability. (author)
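    As a toy illustration of how system-level results combine into an overall trip probability, the sketch below assumes sixteen independent system-level probabilities and takes their OR combination; the numbers are invented and the independence assumption is a simplification of a real fault tree.

```python
# Illustrative sketch only: sixteen made-up system-level trip probabilities combined
# with an OR gate under an independence assumption.
from math import prod

p_system = [0.002, 0.0015, 0.003, 0.001] + [0.0005] * 12   # 16 hypothetical values

p_trip = 1.0 - prod(1.0 - p for p in p_system)   # P(at least one system causes a trip)
print(f"automatic plant trip probability = {p_trip:.4f}")
```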

  10. Probability an introduction

    CERN Document Server

    Goldberg, Samuel

    1960-01-01

    Excellent basic text covers set theory, probability theory for finite sample spaces, binomial theorem, probability distributions, means, standard deviations, probability function of binomial distribution, more. Includes 360 problems with answers for half.

  11. XI Symposium on Probability and Stochastic Processes

    CERN Document Server

    Pardo, Juan; Rivero, Víctor; Bravo, Gerónimo

    2015-01-01

    This volume features lecture notes and a collection of contributed articles from the XI Symposium on Probability and Stochastic Processes, held at CIMAT Mexico in September 2013. Since the symposium was part of the activities organized in Mexico to celebrate the International Year of Statistics, the program included topics from the interface between statistics and stochastic processes. The book starts with notes from the mini-course given by Louigi Addario-Berry with an accessible description of some features of the multiplicative coalescent and its connection with random graphs and minimum spanning trees. It includes a number of exercises and a section on unanswered questions. Further contributions provide the reader with a broad perspective on the state-of-the art of active areas of research. Contributions by: Louigi Addario-Berry Octavio Arizmendi Fabrice Baudoin Jochen Blath Loïc Chaumont J. Armando Domínguez-Molina Bjarki Eldon Shui Feng Tulio Gaxiola Adrián González Casanova Evgueni Gordienko Daniel...

  12. Probability 1/e

    Science.gov (United States)

    Koo, Reginald; Jones, Martin L.

    2011-01-01

    Quite a number of interesting problems in probability feature an event with probability equal to 1/e. This article discusses three such problems and attempts to explain why this probability occurs with such frequency.
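    One classic example of such a problem (not necessarily one of the three discussed in the article) is the derangement problem: the probability that a random permutation leaves no element in place tends to 1/e, which the short simulation below checks numerically.

```python
# Monte Carlo check that the derangement probability approaches 1/e.
import math
import random

def no_fixed_point(n):
    p = list(range(n))
    random.shuffle(p)
    return all(p[i] != i for i in range(n))

n, trials = 20, 100_000
estimate = sum(no_fixed_point(n) for _ in range(trials)) / trials
print(f"simulated: {estimate:.4f}   1/e = {1 / math.e:.4f}")
```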

  13. On a method of evaluation of failure rate of equipment and pipings under excess-earthquake loadings

    International Nuclear Information System (INIS)

    Shibata, H.; Okamura, H.

    1979-01-01

    This paper deals with a method of evaluation of the failure rate of equipment and pipings in nuclear power plants under an earthquake exceeding the design basis earthquake. If we denote by n the ratio of the maximum ground acceleration of an earthquake to that of the design basis earthquake, then the failure rate, or probability of failure, is a function of n, p(n). The purpose of this study is to establish a procedure for evaluating the relation between n and p(n). (orig.)

  14. Testing for variation in taxonomic extinction probabilities: a suggested methodology and some results

    Science.gov (United States)

    Conroy, M.J.; Nichols, J.D.

    1984-01-01

    Several important questions in evolutionary biology and paleobiology involve sources of variation in extinction rates. In all cases of which we are aware, extinction rates have been estimated from data in which the probability that an observation (e.g., a fossil taxon) will occur is related both to extinction rates and to what we term encounter probabilities. Any statistical method for analyzing fossil data should at a minimum permit separate inferences on these two components. We develop a method for estimating taxonomic extinction rates from stratigraphic range data and for testing hypotheses about variability in these rates. We use this method to estimate extinction rates and to test the hypothesis of constant extinction rates for several sets of stratigraphic range data. The results of our tests support the hypothesis that extinction rates varied over the geologic time periods examined. We also present a test that can be used to identify periods of high or low extinction probabilities and provide an example using Phanerozoic invertebrate data. Extinction rates should be analyzed using stochastic models, in which it is recognized that stratigraphic samples are random variates and that sampling is imperfect

  15. Failure-probability driven dose painting

    DEFF Research Database (Denmark)

    Vogelius, Ivan R; Håkansson, Katrin; Due, Anne K

    2013-01-01

    To demonstrate a data-driven dose-painting strategy based on the spatial distribution of recurrences in previously treated patients. The result is a quantitative way to define a dose prescription function, optimizing the predicted local control at constant treatment intensity. A dose planning study...

  16. Methodology for probability of failure assessment of offshore pipelines; Metodologia qualitativa de avaliacao da probabilidade de falha de dutos rigidos submarinos estaticos

    Energy Technology Data Exchange (ETDEWEB)

    Pezzi Filho, Mario [PETROBRAS, Rio de Janeiro, RJ (Brazil)

    2005-07-01

    This study presents a methodology for assessing the likelihood of failure for every failure mechanism defined for carbon steel static offshore pipelines. The methodology is designed to comply with the Integrity Management policy established by the Company. Decision trees are used to develop the methodology and to evaluate the extent and significance of these failure mechanisms. Decision trees also enable visualization of the logical structure of the algorithms which will eventually be used in risk assessment software. The benefits of the proposed methodology are presented, and it is recommended that it be tested on static offshore pipelines installed in different assets for validation. (author)

  17. Failure mode and effect analysis-based quality assurance for dynamic MLC tracking systems

    Energy Technology Data Exchange (ETDEWEB)

    Sawant, Amit; Dieterich, Sonja; Svatos, Michelle; Keall, Paul [Stanford University, Stanford, California 94394 (United States); Varian Medical Systems, Palo Alto, California 94304 (United States); Stanford University, Stanford, California 94394 (United States)

    2010-12-15

    Purpose: To develop and implement a failure mode and effect analysis (FMEA)-based commissioning and quality assurance framework for dynamic multileaf collimator (DMLC) tumor tracking systems. Methods: A systematic failure mode and effect analysis was performed for a prototype real-time tumor tracking system that uses implanted electromagnetic transponders for tumor position monitoring and a DMLC for real-time beam adaptation. A detailed process tree of DMLC tracking delivery was created and potential tracking-specific failure modes were identified. For each failure mode, a risk probability number (RPN) was calculated from the product of the probability of occurrence, the severity of effect, and the detectibility of the failure. Based on the insights obtained from the FMEA, commissioning and QA procedures were developed to check (i) the accuracy of coordinate system transformation, (ii) system latency, (iii) spatial and dosimetric delivery accuracy, (iv) delivery efficiency, and (v) accuracy and consistency of system response to error conditions. The frequency of testing for each failure mode was determined from the RPN value. Results: Failure modes with RPN ≥ 125 were recommended to be tested monthly. Failure modes with RPN < 125 were assigned to be tested during comprehensive evaluations, e.g., during commissioning, annual quality assurance, and after major software/hardware upgrades. System latency was determined to be approximately 193 ms. The system showed consistent and accurate response to erroneous conditions. Tracking accuracy was within 3%-3 mm gamma (100% pass rate) for sinusoidal as well as a wide variety of patient-derived respiratory motions. The total time taken for monthly QA was approximately 35 min, while that taken for comprehensive testing was approximately 3.5 h. Conclusions: FMEA proved to be a powerful and flexible tool to develop and implement a quality management (QM) framework for DMLC tracking. The authors conclude that the use of FMEA-based QM ensures efficient allocation
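    A minimal sketch of the RPN bookkeeping described above, using the RPN ≥ 125 cut-off for monthly testing; the failure modes and scores are made up for illustration and are not the published process tree.

```python
# Illustrative sketch only: made-up failure modes scored 1-10 for occurrence (O),
# severity (S) and non-detectability (D), scheduled with the RPN >= 125 cut-off.
failure_modes = {
    "coordinate transformation error":   (4, 8, 5),
    "excessive system latency":          (3, 7, 4),
    "wrong response to error condition": (2, 9, 6),
}

for name, (occurrence, severity, detectability) in failure_modes.items():
    rpn = occurrence * severity * detectability
    schedule = "monthly QA" if rpn >= 125 else "comprehensive evaluations only"
    print(f"{name:35s} RPN = {rpn:3d} -> {schedule}")
```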

  18. Failure mode and effect analysis-based quality assurance for dynamic MLC tracking systems.

    Science.gov (United States)

    Sawant, Amit; Dieterich, Sonja; Svatos, Michelle; Keall, Paul

    2010-12-01

    To develop and implement a failure mode and effect analysis (FMEA)-based commissioning and quality assurance framework for dynamic multileaf collimator (DMLC) tumor tracking systems. A systematic failure mode and effect analysis was performed for a prototype real-time tumor tracking system that uses implanted electromagnetic transponders for tumor position monitoring and a DMLC for real-time beam adaptation. A detailed process tree of DMLC tracking delivery was created and potential tracking-specific failure modes were identified. For each failure mode, a risk probability number (RPN) was calculated from the product of the probability of occurrence, the severity of effect, and the detectibility of the failure. Based on the insights obtained from the FMEA, commissioning and QA procedures were developed to check (i) the accuracy of coordinate system transformation, (ii) system latency, (iii) spatial and dosimetric delivery accuracy, (iv) delivery efficiency, and (v) accuracy and consistency of system response to error conditions. The frequency of testing for each failure mode was determined from the RPN value. Failure modes with RPN ≥ 125 were recommended to be tested monthly. Failure modes with RPN < 125 were assigned to be tested during comprehensive evaluations, e.g., during commissioning, annual quality assurance, and after major software/hardware upgrades. System latency was determined to be approximately 193 ms. The system showed consistent and accurate response to erroneous conditions. Tracking accuracy was within 3%-3 mm gamma (100% pass rate) for sinusoidal as well as a wide variety of patient-derived respiratory motions. The total time taken for monthly QA was approximately 35 min, while that taken for comprehensive testing was approximately 3.5 h. FMEA proved to be a powerful and flexible tool to develop and implement a quality management (QM) framework for DMLC tracking. The authors conclude that the use of FMEA-based QM ensures efficient allocation

  19. Failure mode and effect analysis-based quality assurance for dynamic MLC tracking systems

    International Nuclear Information System (INIS)

    Sawant, Amit; Dieterich, Sonja; Svatos, Michelle; Keall, Paul

    2010-01-01

    Purpose: To develop and implement a failure mode and effect analysis (FMEA)-based commissioning and quality assurance framework for dynamic multileaf collimator (DMLC) tumor tracking systems. Methods: A systematic failure mode and effect analysis was performed for a prototype real-time tumor tracking system that uses implanted electromagnetic transponders for tumor position monitoring and a DMLC for real-time beam adaptation. A detailed process tree of DMLC tracking delivery was created and potential tracking-specific failure modes were identified. For each failure mode, a risk probability number (RPN) was calculated from the product of the probability of occurrence, the severity of effect, and the detectibility of the failure. Based on the insights obtained from the FMEA, commissioning and QA procedures were developed to check (i) the accuracy of coordinate system transformation, (ii) system latency, (iii) spatial and dosimetric delivery accuracy, (iv) delivery efficiency, and (v) accuracy and consistency of system response to error conditions. The frequency of testing for each failure mode was determined from the RPN value. Results: Failure modes with RPN ≥ 125 were recommended to be tested monthly. Failure modes with RPN < 125 were assigned to be tested during comprehensive evaluations, e.g., during commissioning, annual quality assurance, and after major software/hardware upgrades. System latency was determined to be ∼193 ms. The system showed consistent and accurate response to erroneous conditions. Tracking accuracy was within 3%-3 mm gamma (100% pass rate) for sinusoidal as well as a wide variety of patient-derived respiratory motions. The total time taken for monthly QA was ∼35 min, while that taken for comprehensive testing was ∼3.5 h. Conclusions: FMEA proved to be a powerful and flexible tool to develop and implement a quality management (QM) framework for DMLC tracking. The authors conclude that the use of FMEA-based QM ensures efficient allocation

  20. Distribution of the minimum path on percolation clusters: A renormalization group calculation

    International Nuclear Information System (INIS)

    Hipsh, Lior.

    1993-06-01

    This thesis uses the renormalization group to study the chemical distance, or minimal path, on percolation clusters on a two-dimensional square lattice. Our aims are to calculate analytically (by iterative calculation) the fractal dimension of the minimal path, d_min, and the distributions of the minimum path, l_min, for different lattice sizes and for different starting densities (including the threshold value p_c). For the distributions, we seek an analytic form which describes them. The probability of obtaining a given minimum path for each linear size L is calculated by iterating the distribution of l_min for the basic cell of size 2×2 to the next scale sizes, using the H-cell renormalization group. For the threshold value of p and for values near p_c, we confirm a scaling of the form P(l, L) = (1/l) f(l / L^d_min), where L is the linear size and l the minimum path. The distribution can also be represented in Fourier space, so we also attempt to solve the renormalization group equations in this space. A numerical fit is produced and compared to existing numerical results. In order to improve the agreement between the renormalization group and the numerical simulations, we also present attempts to generalize the renormalization group by adding more parameters, e.g. correlations between bonds in different directions or finite densities for occupation of bonds and sites. (author) 17 refs

  1. Apicotomy as Treatment for Failure of Orthodontic Traction

    Directory of Open Access Journals (Sweden)

    Leandro Berni Osório

    2013-01-01

    Full Text Available Objective. The purpose of this study was to present a case report demonstrating primary failure of tooth traction that was subsequently treated with the apicotomy technique. Case Report. A 10-year-old girl had an impacted upper right canine with increased pericoronal space, which was apparent on a radiographic image. The right maxillary sinus showed an opacity suggesting sinusitis. The presumptive diagnosis was a dentigerous cyst associated with maxillary sinus infection. The treatment plan included treatment of the sinus infection and cystic lesion and orthodontic traction of the canine after surgical exposure and bonding of an orthodontic appliance. The surgical procedure, canine position, root dilaceration, and probably apical ankylosis contributed to the primary failure of the orthodontic traction. A surgical apical cut of the displaced tooth was performed, and positioning of the tooth in the dental arch was achieved, with a positive response to the pulp vitality test. Conclusion. Apicotomy is an effective technique to treat severe canine displacement and primary orthodontic traction failure of palatally displaced canines.

  2. Fission product concentration evolution in sodium pool following a fuel subassembly failure in an LMFBR

    International Nuclear Information System (INIS)

    Natesan, K.; Velusamy, K.; Selvaraj, P.; Kasinathan, N.; Chellapandi, P.; Chetal, S.; Bhoje, S.

    2003-01-01

    During a fuel element failure in a liquid metal cooled fast breeder reactor, the fission products originating from the failed pins mix into the sodium pool. Delayed Neutron Detectors (DND) are provided in the sodium pool to detect such failures by way of detection of delayed neutrons emitted by the fission products. The transient evolution of fission product concentration is governed by the sodium flow distribution in the pool. Transient hydraulic analysis has been carried out using the CFD code PHOENICS to estimate the fission product concentration evolution in the hot pool. A k-ε turbulence model and zero laminar diffusivity for the fission product concentration have been considered in the analysis. Times at which the failures of various fuel subassemblies (SA) are detected by the DND are obtained. It has been found that in order to effectively detect the failure of every fuel SA, a minimum of 8 DNDs in the hot pool is essential

  3. Understanding the Minimum Wage: Issues and Answers.

    Science.gov (United States)

    Employment Policies Inst. Foundation, Washington, DC.

    This booklet, which is designed to clarify facts regarding the minimum wage's impact on marketplace economics, contains a total of 31 questions and answers pertaining to the following topics: relationship between minimum wages and poverty; impacts of changes in the minimum wage on welfare reform; and possible effects of changes in the minimum wage…

  4. A Competing Risk Model of First Failure Site after Definitive Chemoradiation Therapy for Locally Advanced Non-Small Cell Lung Cancer.

    Science.gov (United States)

    Nygård, Lotte; Vogelius, Ivan R; Fischer, Barbara M; Kjær, Andreas; Langer, Seppo W; Aznar, Marianne C; Persson, Gitte F; Bentzen, Søren M

    2018-04-01

    The aim of the study was to build a model of first failure site- and lesion-specific failure probability after definitive chemoradiotherapy for inoperable NSCLC. We retrospectively analyzed 251 patients receiving definitive chemoradiotherapy for NSCLC at a single institution between 2009 and 2015. All patients were scanned by fludeoxyglucose positron emission tomography/computed tomography for radiotherapy planning. Clinical patient data and fludeoxyglucose positron emission tomography standardized uptake values from primary tumor and nodal lesions were analyzed by using multivariate cause-specific Cox regression. In patients experiencing locoregional failure, multivariable logistic regression was applied to assess risk of each lesion being the first site of failure. The two models were used in combination to predict probability of lesion failure accounting for competing events. Adenocarcinoma had a lower hazard ratio (HR) of locoregional failure than squamous cell carcinoma (HR = 0.45, 95% confidence interval [CI]: 0.26-0.76, p = 0.003). Distant failures were more common in the adenocarcinoma group (HR = 2.21, 95% CI: 1.41-3.48). Lesion-level logistic regression of the first failure site showed that primary tumors were more likely to fail than lymph nodes (OR = 12.8, 95% CI: 5.10-32.17), and the probability of a lesion failing first increased with its standardized uptake value (OR = 1.26 per unit increase, 95% CI: 1.12-1.40). The combined result is a first failure site-specific competing risk model based on patient- and lesion-level characteristics. Failure patterns differed between adenocarcinoma and squamous cell carcinoma, illustrating the limitation of aggregating them into NSCLC. Failure site-specific models add complementary information to conventional prognostic models. Copyright © 2018 International Association for the Study of Lung Cancer. Published by Elsevier Inc. All rights reserved.
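
    The two-stage combination described above can be pictured with a small numerical sketch in Python. This is not the authors' fitted model: the patient-level failure probability, the lesion-level linear predictors, and the normalization of per-lesion odds into a "which lesion fails first" distribution are all illustrative assumptions.

      import numpy as np

      def lesion_first_failure_prob(p_locoregional, lesion_scores):
          """p_locoregional: patient-level probability of locoregional failure
          (e.g. from a cause-specific Cox model at a chosen time point).
          lesion_scores: linear predictors from a lesion-level logistic model."""
          odds = np.exp(lesion_scores)
          p_given_failure = odds / odds.sum()   # which lesion fails first, given failure
          return p_locoregional * p_given_failure

      # Hypothetical patient: 30% locoregional failure risk, primary tumor plus two nodes
      print(lesion_first_failure_prob(0.30, np.array([1.2, -0.4, 0.1])))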

  5. Youth minimum wages and youth employment

    NARCIS (Netherlands)

    Marimpi, Maria; Koning, Pierre

    2018-01-01

    This paper performs a cross-country level analysis on the impact of the level of specific youth minimum wages on the labor market performance of young individuals. We use information on the use and level of youth minimum wages, as compared to the level of adult minimum wages as well as to the median

  6. Fuzzy modeling of analytical redundancy for sensor failure detection

    International Nuclear Information System (INIS)

    Tsai, T.M.; Chou, H.P.

    1991-01-01

    Failure detection and isolation (FDI) in dynamic systems may be accomplished by testing the consistency of the system via analytically redundant relations. The redundant relation is basically a mathematical model relating system inputs and dissimilar sensor outputs from which information is extracted and subsequently examined for the presence of failure signatures. Performance of the approach is often jeopardized by inherent modeling error and noise interference. To mitigate such effects, techniques such as Kalman filtering, auto-regression-moving-average (ARMA) modeling in conjunction with probability tests are often employed. These conventional techniques treat the stochastic nature of uncertainties in a deterministic manner to generate best-estimated model and sensor outputs by minimizing uncertainties. In this paper, the authors present a different approach by treating the effect of uncertainties with fuzzy numbers. Coefficients in redundant relations derived from first-principle physical models are considered as fuzzy parameters and on-line updated according to system behaviors. Failure detection is accomplished by examining the possibility that a sensor signal occurred in an estimated fuzzy domain. To facilitate failure isolation, individual FDI monitors are designed for each interested sensor
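
    As a rough illustration of the idea (not the paper's algorithm), the sketch below treats the coefficients of a redundant relation as intervals, computes the envelope of predicted sensor outputs, and flags a reading that falls outside it. The relation, coefficient bounds, and margin are hypothetical.

      def predicted_envelope(inputs, coeff_ranges):
          """Interval-arithmetic envelope of sum_i a_i * x_i with a_i in [lo_i, hi_i]."""
          lo = hi = 0.0
          for (c_lo, c_hi), x in zip(coeff_ranges, inputs):
              terms = (c_lo * x, c_hi * x)
              lo += min(terms)
              hi += max(terms)
          return lo, hi

      def sensor_consistent(measurement, inputs, coeff_ranges, margin=0.0):
          lo, hi = predicted_envelope(inputs, coeff_ranges)
          return (lo - margin) <= measurement <= (hi + margin)

      # Redundant relation y ~ a1*x1 + a2*x2 with uncertain (fuzzy-support) coefficients
      print(sensor_consistent(5.4, [1.0, 2.0], [(0.9, 1.1), (1.9, 2.2)]))   # True
      print(sensor_consistent(7.5, [1.0, 2.0], [(0.9, 1.1), (1.9, 2.2)]))   # False -> suspect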

  7. Elastic Rock Heterogeneity Controls Brittle Rock Failure during Hydraulic Fracturing

    Science.gov (United States)

    Langenbruch, C.; Shapiro, S. A.

    2014-12-01

    For the interpretation and inversion of microseismic data it is important to understand which properties of the reservoir rock control the occurrence probability of brittle rock failure and associated seismicity during hydraulic stimulation. This is especially important when inverting for key properties like permeability and fracture conductivity. Although it has become accepted that seismic events are triggered by fluid flow and the resulting perturbation of the stress field in the reservoir rock, the magnitude of stress perturbations capable of triggering failure in rocks can be highly variable. The controlling physical mechanism of this variability is still under discussion. We compare the occurrence of microseismic events at the Cotton Valley gas field to elastic rock heterogeneity, obtained from measurements along the treatment wells. The heterogeneity is characterized by scale-invariant fluctuations of elastic properties. We observe that the elastic heterogeneity of the rock formation controls the occurrence of brittle failure. In particular, we find that the density of events increases with the Brittleness Index (BI) of the rock, which is defined as a combination of Young's modulus and Poisson's ratio. We evaluate the physical meaning of the BI. By applying geomechanical investigations we characterize the influence of fluctuating elastic properties in rocks on the probability of brittle rock failure. Our analysis is based on the computation of stress fluctuations caused by elastic heterogeneity of rocks. We find that elastic rock heterogeneity causes stress fluctuations of significant magnitude. Moreover, the stress changes necessary to open and reactivate fractures in rocks are strongly related to fluctuations of elastic moduli. Our analysis gives a physical explanation for the observed relation between elastic heterogeneity of the rock formation and the occurrence of brittle failure during hydraulic reservoir stimulations. A crucial factor for understanding
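
    The abstract only states that the brittleness index combines Young's modulus and Poisson's ratio; a commonly used form (assumed here, not necessarily the authors' exact definition) normalizes each to the range of the logged interval and averages the two terms:

      import numpy as np

      def brittleness_index(E, nu, E_min, E_max, nu_min, nu_max):
          """Assumed normalization: stiffer (high E) and less compliant (low nu)
          rock is scored as more brittle, giving BI in [0, 1]."""
          E_term = (E - E_min) / (E_max - E_min)
          nu_term = (nu_max - nu) / (nu_max - nu_min)
          return 0.5 * (E_term + nu_term)

      E = np.array([20e9, 45e9, 70e9])      # Pa, hypothetical log-derived values
      nu = np.array([0.35, 0.25, 0.18])
      print(brittleness_index(E, nu, E.min(), E.max(), nu.min(), nu.max()))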

  8. Sensor failure and multivariable control for airbreathing propulsion systems. Ph.D. Thesis - Dec. 1979 Final Report

    Science.gov (United States)

    Behbehani, K.

    1980-01-01

    A new sensor/actuator failure analysis technique for turbofan jet engines was developed. Three phases of failure analysis, namely detection, isolation, and accommodation, are considered. Failure detection and isolation techniques are developed by utilizing the concept of Generalized Likelihood Ratio (GLR) tests. These techniques are applicable to both time-varying and time-invariant systems. Three GLR detectors are developed for: (1) hard-over sensor failure; (2) hard-over actuator failure; and (3) brief disturbances in the actuators. The probability distribution of the GLR detectors and the detectability of sensor/actuator failures are established. Failure type is determined by the maximum of the GLR detectors. Failure accommodation is accomplished by extending the Multivariable Nyquist Array (MNA) control design techniques to nonsquare system designs. The performance and effectiveness of the failure analysis technique are studied by applying the technique to a turbofan jet engine, namely the Quiet Clean Short Haul Experimental Engine (QCSEE). Single and multiple sensor/actuator failures in the QCSEE are simulated and analyzed and the effects of model degradation are studied.
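
    As a minimal illustration of a GLR-type detector (a scalar special case, not the thesis implementation), consider a residual sequence that is N(0, sigma^2) under the no-failure hypothesis and acquires an unknown constant bias when a hard-over sensor failure occurs. Maximizing the likelihood ratio over the bias gives the statistic below; the threshold is a design parameter.

      import numpy as np

      def glr_bias_test(residuals, sigma, threshold):
          """Return (failure declared?, GLR statistic) for a constant-bias failure."""
          n = len(residuals)
          bias_hat = np.mean(residuals)                 # ML estimate of the bias
          glr = n * bias_hat**2 / (2.0 * sigma**2)      # maximized log-likelihood ratio
          return glr > threshold, glr

      rng = np.random.default_rng(0)
      healthy = rng.normal(0.0, 1.0, size=50)
      failed = rng.normal(0.5, 1.0, size=50)            # injected 0.5 bias
      print(glr_bias_test(healthy, sigma=1.0, threshold=5.0))
      print(glr_bias_test(failed, sigma=1.0, threshold=5.0))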

  9. The Weakest Link : Spatial Variability in the Piping Failure Mechanism of Dikes

    NARCIS (Netherlands)

    Kanning, W.

    2012-01-01

    Piping is an important failure mechanism of flood defense structures. A dike fails due to piping when a head difference causes first the uplift of an inland blanket layer, and subsequently soil erosion due to a ground water flow. Spatial variability of subsoil parameters causes the probability of

  10. Discretization of space and time: determining the values of minimum length and minimum time

    OpenAIRE

    Roatta , Luca

    2017-01-01

    Assuming that space and time can only have discrete values, we obtain the expression of the minimum length and the minimum time interval. These values are found to be exactly coincident with the Planck's length and the Planck's time but for the presence of h instead of ħ .
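
    A quick numerical check of the values referred to above, computed with h in place of h-bar (so each is a factor sqrt(2*pi) larger than the usual Planck length and time):

      import math

      h = 6.62607015e-34      # Planck constant, J s
      G = 6.67430e-11         # gravitational constant, m^3 kg^-1 s^-2
      c = 2.99792458e8        # speed of light, m/s

      l_min = math.sqrt(h * G / c**3)     # ~ 4.05e-35 m
      t_min = math.sqrt(h * G / c**5)     # ~ 1.35e-43 s
      print(l_min, t_min)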

  11. Minimum wage development in the Russian Federation

    OpenAIRE

    Bolsheva, Anna

    2012-01-01

    The aim of this paper is to analyze the effectiveness of the minimum wage policy at the national level in Russia and its impact on living standards in the country. The analysis showed that the national minimum wage in Russia does not serve its original purpose of protecting the lowest wage earners and has no substantial effect on poverty reduction. The national subsistence minimum is too low and cannot be considered an adequate criterion for the setting of the minimum wage. The minimum wage d...

  12. Error Probability Analysis of Hardware Impaired Systems with Asymmetric Transmission

    KAUST Repository

    Javed, Sidrah; Amin, Osama; Ikki, Salama S.; Alouini, Mohamed-Slim

    2018-01-01

    Error probability analysis of hardware impaired (HWI) systems depends strongly on the adopted model. Recent models have proved that the aggregate noise is equivalent to improper Gaussian signals. Therefore, considering the distinct noise nature and self-interfering (SI) signals, an optimal maximum likelihood (ML) receiver is derived. This renders the conventional minimum Euclidean distance (MED) receiver a sub-optimal receiver, because it is based on the assumptions of ideal hardware transceivers and proper Gaussian noise in communication systems. Next, the average error probability performance of the proposed optimal ML receiver is analyzed, and tight bounds and approximations are derived for various adopted systems, including transmitter and receiver I/Q imbalanced systems with or without transmitter distortions as well as transmitter-only or receiver-only impaired systems. Motivated by recent studies that shed light on the benefit of improper Gaussian signaling in mitigating the HWIs, asymmetric quadrature amplitude modulation or phase shift keying is optimized and adapted for transmission. Finally, different numerical and simulation results are presented to support the superiority of the proposed ML receiver over the MED receiver, the tightness of the derived bounds, and the effectiveness of asymmetric transmission in dampening HWIs and improving overall system performance.

  13. Error Probability Analysis of Hardware Impaired Systems with Asymmetric Transmission

    KAUST Repository

    Javed, Sidrah

    2018-04-26

    Error probability analysis of hardware impaired (HWI) systems depends strongly on the adopted model. Recent models have proved that the aggregate noise is equivalent to improper Gaussian signals. Therefore, considering the distinct noise nature and self-interfering (SI) signals, an optimal maximum likelihood (ML) receiver is derived. This renders the conventional minimum Euclidean distance (MED) receiver a sub-optimal receiver, because it is based on the assumptions of ideal hardware transceivers and proper Gaussian noise in communication systems. Next, the average error probability performance of the proposed optimal ML receiver is analyzed, and tight bounds and approximations are derived for various adopted systems, including transmitter and receiver I/Q imbalanced systems with or without transmitter distortions as well as transmitter-only or receiver-only impaired systems. Motivated by recent studies that shed light on the benefit of improper Gaussian signaling in mitigating the HWIs, asymmetric quadrature amplitude modulation or phase shift keying is optimized and adapted for transmission. Finally, different numerical and simulation results are presented to support the superiority of the proposed ML receiver over the MED receiver, the tightness of the derived bounds, and the effectiveness of asymmetric transmission in dampening HWIs and improving overall system performance.
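
    The gap between the optimal ML rule and the conventional MED rule can be illustrated with a toy detector in Python. Under Gaussian noise with an arbitrary (possibly improper, i.e. non-circular) covariance, ML detection of equiprobable symbols reduces to a Mahalanobis-distance rule in the real-composite domain, whereas MED ignores the noise covariance. The constellation, noise covariance, and received sample below are illustrative, not taken from the paper.

      import numpy as np

      def ml_detect(y, symbols, C):
          """y: received complex sample; symbols: candidate constellation points;
          C: 2x2 covariance of [Re(n), Im(n)] (improper noise has unequal diagonal
          entries and/or nonzero correlation)."""
          Cinv = np.linalg.inv(C)
          def metric(s):
              e = np.array([y.real - s.real, y.imag - s.imag])
              return e @ Cinv @ e
          return min(symbols, key=metric)

      def med_detect(y, symbols):
          return min(symbols, key=lambda s: abs(y - s))

      qpsk = [1 + 1j, 1 - 1j, -1 + 1j, -1 - 1j]
      C = np.array([[1.0, 0.6], [0.6, 0.4]])    # improper-noise example
      y = -0.3 + 0.2j
      print(ml_detect(y, qpsk, C), med_detect(y, qpsk))   # the two decisions differ here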

  14. Exploiting Outage and Error Probability of Cooperative Incremental Relaying in Underwater Wireless Sensor Networks

    Directory of Open Access Journals (Sweden)

    Hina Nasir

    2016-07-01

    This paper makes a two-fold contribution to Underwater Wireless Sensor Networks (UWSNs): a performance analysis of incremental relaying in terms of outage and error probability, and, based on that analysis, the proposition of two new cooperative routing protocols. For the first contribution, a three-step procedure is carried out: a system model is presented, the number of available relays is determined, and, based on the cooperative incremental retransmission methodology, closed-form expressions for outage and error probability are derived. For the second contribution, Adaptive Cooperation in Energy (ACE) efficient depth-based routing and Enhanced-ACE (E-ACE) are presented. In the proposed model, a feedback mechanism indicates success or failure of data transmission. If direct transmission is successful, there is no need for relaying by cooperative relay nodes. In case of failure, all the available relays retransmit the data one by one until the desired signal quality is achieved at the destination. Simulation results show that ACE and E-ACE significantly improve network performance, i.e., throughput, when compared with other incremental relaying protocols like Cooperative Automatic Repeat reQuest (CARQ). E-ACE and ACE achieve 69% and 63% more throughput, respectively, as compared to CARQ in a hard underwater environment.
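
    A stripped-down Monte Carlo sketch of the incremental-relaying idea (ignoring signal combining and the depth-based routing details, with Rayleigh-faded links whose instantaneous SNR is exponentially distributed): relays retransmit one by one only when the direct transmission fails, and an outage is declared when every attempt falls below the SNR threshold. All parameter values are hypothetical.

      import numpy as np

      rng = np.random.default_rng(1)

      def outage_prob(mean_snr_direct, mean_snr_relays, threshold, trials=100_000):
          outages = 0
          for _ in range(trials):
              if rng.exponential(mean_snr_direct) >= threshold:
                  continue                                  # direct link succeeded
              if any(rng.exponential(s) >= threshold for s in mean_snr_relays):
                  continue                                  # a relay retransmission succeeded
              outages += 1
          return outages / trials

      # Analytical value for independent links: prod_i (1 - exp(-threshold / mean_snr_i))
      print(outage_prob(5.0, [5.0, 5.0], threshold=3.0))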

  15. Minimum emittance of three-bend achromats

    International Nuclear Information System (INIS)

    Li Xiaoyu; Xu Gang

    2012-01-01

    The calculation of the minimum emittance of three-bend achromats (TBAs), carried out with mathematical software, can ignore the actual magnet lattice in the matching condition of the dispersion function in phase space. The minimum scaling factors of two kinds of widely used TBA lattices are obtained. The relationship between the lengths and the radii of the three dipoles in a TBA is then obtained, and so is the minimum scaling factor, when the TBA lattice achieves its minimum emittance. The procedure of analysis and the results can be widely used in achromat lattices, because the calculation is not restricted by the actual lattice. (authors)

  16. Heart failure rehospitalization of the Medicare FFS patient: a state-level analysis exploring 30-day readmission factors.

    Science.gov (United States)

    Schmeida, Mary; Savrin, Ronald A

    2012-01-01

    Heart failure readmission among the elderly is frequent and costly to both the patient and the Medicare trust fund. In this study, the authors explore the factors that are associated with states having heart failure readmission rates that are higher than the U.S. national rate. The setting is acute inpatient hospitals. Data from all 50 states and multivariate regression analysis are used. The dependent variable, Heart Failure 30-day Readmission Worse than U.S. Rate, is based on adult Medicare Fee-for-Service patients hospitalized with a primary discharge diagnosis of heart failure and for whom a subsequent inpatient readmission occurred within 30 days of their last discharge. One key variable--a higher share of state residents speaking a primary language other than English at home--is significantly associated with a decreased probability of a state ranking "worse" on heart failure 30-day readmission. In contrast, states with a higher median income, more total days of care per 1,000 Medicare enrollees, and a greater percentage of Medicare enrollees with prescription drug coverage have a greater probability of ranking "worse" than the U.S. national rate on heart failure 30-day readmission. Case management interventions targeting health literacy may be more effective than other factors at improving state-level hospital status on heart failure 30-day readmission. Factors such as total days of care per 1,000 Medicare enrollees and improving patient access to postdischarge medication(s) may not be as important as literacy. Interventions aimed at preventing disparities should consider higher income population groups as vulnerable to readmission.

  17. Foundations of probability

    International Nuclear Information System (INIS)

    Fraassen, B.C. van

    1979-01-01

    The interpretation of probabilities in physical theories is considered, whether quantum or classical. The following points are discussed: 1) the functions P(μ, Q), in terms of which states and propositions can be represented, are, formally speaking, classical (Kolmogoroff) probabilities; 2) these probabilities are generally interpreted as themselves conditional, and the conditions are mutually incompatible where the observables are maximal; and 3) testing of the theory typically takes the form of confronting the expectation values of an observable Q calculated with probability measures P(μ, Q) for states μ, hence of comparing the probabilities P(μ, Q)(E) with the frequencies of occurrence of the corresponding events. It seems that even the interpretation of quantum mechanics, in so far as it concerns what the theory says about the empirical (i.e. actual, observable) phenomena, deals with the confrontation of classical probability measures with observable frequencies. This confrontation is studied. (Auth./C.F.)

  18. Determination of slope failure using 2-D resistivity method

    Science.gov (United States)

    Muztaza, Nordiana Mohd; Saad, Rosli; Ismail, Nur Azwin; Bery, Andy Anderson

    2017-07-01

    Landslides and slope failure may have negative economic effects, including the cost to repair structures, loss of property value, and medical costs in the event of injury. To avoid landslides, slope failure, and disturbance of the ecosystem, good and detailed planning must be done when developing hilly areas. Slope failure classification and the various factors contributing to instability, investigated using 2-D resistivity surveys conducted in Selangor, Malaysia, are described. The study on landslide and slope failure was conducted at Site A and Site B, Selangor, using the 2-D resistivity method. The implications of the anticipated ground conditions as well as field observations of the actual conditions are discussed. Nine 2-D resistivity survey lines were conducted at Site A, and six 2-D resistivity survey lines, with 5 m minimum electrode spacing using a pole-dipole array, were performed at Site B. The data were processed using Res2Dinv and Surfer10 software to evaluate the subsurface characteristics. 2-D resistivity results from both locations show that the study areas consist of two main zones. The first zone is alluvium or highly weathered material with a resistivity of 100-1000 Ωm at 20-70 m depth. This zone contains a saturated area (1-100 Ωm) and boulders with resistivity values of 1200-3000 Ωm. The second zone, with resistivity values of > 3000 Ωm, was interpreted as granitic bedrock. The study area was characterized by saturated zones, a highly weathered zone, and a high content of sand and boulders that may trigger slope failure in the survey area. Based on the results obtained from the study findings, it can be concluded that the 2-D resistivity method is a useful method for determining slope failure.

  19. Epistemic-based investigation of the probability of hazard scenarios using Bayesian network for the lifting operation of floating objects

    Science.gov (United States)

    Toroody, Ahmad Bahoo; Abaiee, Mohammad Mahdi; Gholamnia, Reza; Ketabdari, Mohammad Javad

    2016-09-01

    Owing to the increase in unprecedented accidents with new root causes in almost all operational areas, the importance of risk management has dramatically risen. Risk assessment, one of the most significant aspects of risk management, has a substantial impact on the system-safety level of organizations, industries, and operations. If the causes of all kinds of failure and the interactions between them are considered, effective risk assessment can be highly accurate. A combination of traditional risk assessment approaches and modern scientific probability methods can help in realizing better quantitative risk assessment methods. Most researchers face the problem of minimal field data with respect to the probability and frequency of each failure. Because of this limitation in the availability of epistemic knowledge, it is important to conduct epistemic estimations by applying the Bayesian theory for identifying plausible outcomes. In this paper, we propose an algorithm and demonstrate its application in a case study for a light-weight lifting operation in the Persian Gulf of Iran. First, we identify potential accident scenarios and present them in an event tree format. Next, excluding human error, we use the event tree to roughly estimate the prior probability of other hazard-promoting factors using a minimal amount of field data. We then use the Success Likelihood Index Method (SLIM) to calculate the probability of human error. On the basis of the proposed event tree, we use the Bayesian network of the provided scenarios to compensate for the lack of data. Finally, we determine the resulting probability of each event based on its evidence in the epistemic estimation format by building on two Bayesian network types: the probability of hazard promotion factors and the Bayesian theory. The study results indicate that despite the lack of available information on the operation of floating objects, a satisfactory result can be achieved using epistemic data.
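
    The epistemic flavour of the approach, updating a scarce-data failure probability from a prior, can be illustrated with a simple Beta-Binomial update in Python (a generic Bayesian building block, not the paper's full Bayesian-network model; the prior and the counts are hypothetical):

      from scipy import stats

      a0, b0 = 1.0, 19.0                       # expert prior: mean failure probability 0.05
      failures, trials = 1, 12                 # scarce field observations

      posterior = stats.beta(a0 + failures, b0 + trials - failures)
      print(posterior.mean())                  # updated failure probability estimate
      print(posterior.interval(0.9))           # 90% credible interval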

  20. Introduction of a National Minimum Wage Reduced Depressive Symptoms in Low-Wage Workers: A Quasi-Natural Experiment in the UK.

    Science.gov (United States)

    Reeves, Aaron; McKee, Martin; Mackenbach, Johan; Whitehead, Margaret; Stuckler, David

    2017-05-01

    Does increasing incomes improve health? In 1999, the UK government implemented minimum wage legislation, increasing hourly wages to at least £3.60. This policy experiment created intervention and control groups that can be used to assess the effects of increasing wages on health. Longitudinal data were taken from the British Household Panel Survey. We compared the health effects of higher wages on recipients of the minimum wage with otherwise similar persons who were likely unaffected because (1) their wages were between 100 and 110% of the eligibility threshold or (2) their firms did not increase wages to meet the threshold. We assessed the probability of mental ill health using the 12-item General Health Questionnaire. We also assessed changes in smoking, blood pressure, as well as hearing ability (control condition). The intervention group, whose wages rose above the minimum wage, experienced lower probability of mental ill health compared with both control group 1 and control group 2. This improvement represents 0.37 of a standard deviation, comparable with the effect of antidepressants (0.39 of a standard deviation) on depressive symptoms. The intervention group experienced no change in blood pressure, hearing ability, or smoking. Increasing wages significantly improves mental health by reducing financial strain in low-wage workers. © 2016 The Authors. Health Economics published by John Wiley & Sons Ltd.

  1. A probabilistic analysis of the implications of instrument failures on ESA's Swarm mission for its individual satellite orbit deployments

    Science.gov (United States)

    Jackson, Andrew

    2015-07-01

    On launch, one of Swarm's absolute scalar magnetometers (ASMs) failed to function, leaving an asymmetrical arrangement of redundant spares on different spacecraft. A decision was required concerning the deployment of individual satellites into the low-orbit pair or the higher "lonely" orbit. I analyse the probabilities for successful operation of two of the science components of the Swarm mission in terms of a classical probabilistic failure analysis, with a view to concluding a favourable assignment for the satellite with the single working ASM. I concentrate on the following two science aspects: the east-west gradiometer aspect of the lower pair of satellites and the constellation aspect, which requires a working ASM in each of the two orbital planes. I use the so-called "expert solicitation" probabilities for instrument failure solicited from Mission Advisory Group (MAG) members. My conclusion from the analysis is that it is better to have redundancy of ASMs in the lonely satellite orbit. The opposite scenario, having redundancy (and thus four ASMs) in the lower orbit, increases the chance of a working gradiometer late in the mission, but it does so at the expense of a likely constellation. Although the results are presented based on actual MAG members' probabilities, the results are rather generic, excepting the case when the probability of individual ASM failure is very small; in this case, any arrangement will ensure a successful mission since there is essentially no failure expected at all. Since the very design of the lower pair is to enable common mode rejection of external signals, it is likely that its work can be successfully achieved during the first 5 years of the mission.
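
    The flavour of the calculation can be reproduced with a few lines of Python, assuming (as the abstract describes) that each satellite nominally carries a redundant pair of ASMs, that one satellite has only a single working unit, and that ASM failures are independent with some probability p over the mission; p = 0.2 below is purely illustrative and is not one of the MAG members' elicited values.

      def at_least_one_working(n_asms, p_fail):
          """Probability that at least one of n independent ASMs survives."""
          return 1.0 - p_fail**n_asms

      def scenario(p_fail, lonely_asms, lower_asms):
          """lonely_asms: ASMs on the lonely satellite; lower_asms: ASMs on the lower pair."""
          gradiometer = (at_least_one_working(lower_asms[0], p_fail)
                         * at_least_one_working(lower_asms[1], p_fail))
          lower_plane_ok = 1.0 - p_fail ** (lower_asms[0] + lower_asms[1])
          constellation = at_least_one_working(lonely_asms, p_fail) * lower_plane_ok
          return {"gradiometer": gradiometer, "constellation": constellation}

      p = 0.2   # hypothetical per-ASM failure probability
      print("redundancy in lonely orbit:", scenario(p, lonely_asms=2, lower_asms=(2, 1)))
      print("redundancy in lower pair:  ", scenario(p, lonely_asms=1, lower_asms=(2, 2)))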

  2. Non-Archimedean Probability

    NARCIS (Netherlands)

    Benci, Vieri; Horsten, Leon; Wenmackers, Sylvia

    We propose an alternative approach to probability theory closely related to the framework of numerosity theory: non-Archimedean probability (NAP). In our approach, unlike in classical probability theory, all subsets of an infinite sample space are measurable and only the empty set gets assigned

  3. Estimation and prediction of maximum daily rainfall at Sagar Island using best fit probability models

    Science.gov (United States)

    Mandal, S.; Choudhury, B. U.

    2015-07-01

    Sagar Island, situated on the continental shelf of the Bay of Bengal, is one of the deltas most vulnerable to the occurrence of extreme rainfall-driven climatic hazards. Information on the probability of occurrence of maximum daily rainfall will be useful in devising risk management for sustaining the rainfed agrarian economy vis-a-vis food and livelihood security. Using six probability distribution models and long-term (1982-2010) daily rainfall data, we studied the probability of occurrence of annual, seasonal and monthly maximum daily rainfall (MDR) in the island. To select the best-fit distribution models for the annual, seasonal and monthly time series, based on maximum rank with minimum value of the test statistics, three statistical goodness-of-fit tests, viz. the Kolmogorov-Smirnov test (K-S), the Anderson-Darling test (A2) and the Chi-Square test (X2), were employed. The best-fit probability distribution was identified from the highest overall score obtained from the three goodness-of-fit tests. Results revealed that the normal probability distribution was best fitted for the annual, post-monsoon and summer season MDR, while Lognormal, Weibull and Pearson 5 were best fitted for the pre-monsoon, monsoon and winter seasons, respectively. The estimated annual MDR were 50, 69, 86, 106 and 114 mm for return periods of 2, 5, 10, 20 and 25 years, respectively. The probabilities of getting an annual MDR of >50, >100, >150, >200 and >250 mm were estimated as 99, 85, 40, 12 and 3% levels of exceedance, respectively. The monsoon, summer and winter seasons exhibited comparatively higher probabilities (78 to 85%) for MDR of >100 mm and moderate probabilities (37 to 46%) for >150 mm. For different recurrence intervals, the percent probability of MDR varied widely across intra- and inter-annual periods. In the island, rainfall anomaly can pose a climatic threat to the sustainability of agricultural production and thus needs adequate adaptation and mitigation measures.
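
    A minimal sketch of the fit-and-test workflow (using SciPy and a placeholder data series, not the Sagar Island record): fit each candidate distribution, score it with a Kolmogorov-Smirnov test, and read off the return level for a chosen return period.

      import numpy as np
      from scipy import stats

      amdr = np.array([42., 55., 61., 48., 90., 73., 66., 58., 101., 84.])  # mm, placeholder

      candidates = {"normal": stats.norm, "lognormal": stats.lognorm,
                    "weibull": stats.weibull_min}
      for name, dist in candidates.items():
          params = dist.fit(amdr)
          ks_stat, p_value = stats.kstest(amdr, dist.name, args=params)
          rl20 = dist.ppf(1.0 - 1.0 / 20.0, *params)      # 20-year return level
          print(f"{name:9s} KS = {ks_stat:.3f}  p = {p_value:.2f}  20-yr MDR ~ {rl20:.0f} mm")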

  4. Handbook of probability

    CERN Document Server

    Florescu, Ionut

    2013-01-01

    THE COMPLETE COLLECTION NECESSARY FOR A CONCRETE UNDERSTANDING OF PROBABILITY Written in a clear, accessible, and comprehensive manner, the Handbook of Probability presents the fundamentals of probability with an emphasis on the balance of theory, application, and methodology. Utilizing basic examples throughout, the handbook expertly transitions between concepts and practice to allow readers an inclusive introduction to the field of probability. The book provides a useful format with self-contained chapters, allowing the reader easy and quick reference. Each chapter includes an introductio

  5. Probability-1

    CERN Document Server

    Shiryaev, Albert N

    2016-01-01

    This book contains a systematic treatment of probability from the ground up, starting with intuitive ideas and gradually developing more sophisticated subjects, such as random walks, martingales, Markov chains, the measure-theoretic foundations of probability theory, weak convergence of probability measures, and the central limit theorem. Many examples are discussed in detail, and there are a large number of exercises. The book is accessible to advanced undergraduates and can be used as a text for independent study. To accommodate the greatly expanded material in the third edition of Probability, the book is now divided into two volumes. This first volume contains updated references and substantial revisions of the first three chapters of the second edition. In particular, new material has been added on generating functions, the inclusion-exclusion principle, theorems on monotonic classes (relying on a detailed treatment of “π-λ” systems), and the fundamental theorems of mathematical statistics.

  6. Probability Machines: Consistent Probability Estimation Using Nonparametric Learning Machines

    Science.gov (United States)

    Malley, J. D.; Kruppa, J.; Dasgupta, A.; Malley, K. G.; Ziegler, A.

    2011-01-01

    Summary Background Most machine learning approaches only provide a classification for binary responses. However, probabilities are required for risk estimation using individual patient characteristics. It has been shown recently that every statistical learning machine known to be consistent for a nonparametric regression problem is a probability machine that is provably consistent for this estimation problem. Objectives The aim of this paper is to show how random forests and nearest neighbors can be used for consistent estimation of individual probabilities. Methods Two random forest algorithms and two nearest neighbor algorithms are described in detail for estimation of individual probabilities. We discuss the consistency of random forests, nearest neighbors and other learning machines in detail. We conduct a simulation study to illustrate the validity of the methods. We exemplify the algorithms by analyzing two well-known data sets on the diagnosis of appendicitis and the diagnosis of diabetes in Pima Indians. Results Simulations demonstrate the validity of the method. With the real data application, we show the accuracy and practicality of this approach. We provide sample code from R packages in which the probability estimation is already available. This means that all calculations can be performed using existing software. Conclusions Random forest algorithms as well as nearest neighbor approaches are valid machine learning methods for estimating individual probabilities for binary responses. Freely available implementations are available in R and may be used for applications. PMID:21915433
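
    For orientation only (a toy dataset rather than the appendicitis or Pima Indians data, and plain scikit-learn rather than the R packages the paper points to), reading individual probabilities from a random forest looks like this:

      from sklearn.datasets import make_classification
      from sklearn.ensemble import RandomForestClassifier
      from sklearn.model_selection import train_test_split

      X, y = make_classification(n_samples=500, n_features=8, random_state=0)
      X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

      rf = RandomForestClassifier(n_estimators=500, random_state=0)
      rf.fit(X_tr, y_tr)
      print(rf.predict_proba(X_te[:5]))   # per-subject class probabilities, not just labels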

  7. Serviceability Assessment for Cascading Failures in Water Distribution Network under Seismic Scenario

    Directory of Open Access Journals (Sweden)

    Qing Shuang

    2016-01-01

    The stability of water service is a key concern in industrial production, public safety, and academic research. The paper establishes a service evaluation model for the water distribution network (WDN). The serviceability is measured in three aspects: (1) the functionality of structural components in a disaster environment; (2) the recognition of the cascading failure process; and (3) the calculation of system reliability. The node and edge failures in the WDN are interrelated under seismic excitations. The cascading failure process is modeled through the balance of water supply and demand. The matrix-based system reliability (MSR) method is used to represent the system events and calculate the non-failure probability. An example is used to illustrate the proposed method. The cascading failure processes with different node failures are simulated. The serviceability is analyzed. The critical node can be identified. The results show that an aged network has a greater influence on the system service under a seismic scenario. Maintenance could improve the anti-disaster ability of the WDN. Priority should be given to controlling the time between the initial failure and the first secondary failure, for taking post-disaster emergency measures within this time period can largely cut down the spread of the cascade effect in the whole WDN.

  8. Exaggerated risk: prospect theory and probability weighting in risky choice.

    Science.gov (United States)

    Kusev, Petko; van Schaik, Paul; Ayton, Peter; Dent, John; Chater, Nick

    2009-11-01

    In 5 experiments, we studied precautionary decisions in which participants decided whether or not to buy insurance with specified cost against an undesirable event with specified probability and cost. We compared the risks taken for precautionary decisions with those taken for equivalent monetary gambles. Fitting these data to Tversky and Kahneman's (1992) prospect theory, we found that the weighting function required to model precautionary decisions differed from that required for monetary gambles. This result indicates a failure of the descriptive invariance axiom of expected utility theory. For precautionary decisions, people overweighted small, medium-sized, and moderately large probabilities; that is, they exaggerated risks. This effect is not anticipated by prospect theory or experience-based decision research (Hertwig, Barron, Weber, & Erev, 2004). We found evidence that exaggerated risk is caused by the accessibility of events in memory: The weighting function varies as a function of the accessibility of events. This suggests that people's experiences of events leak into decisions even when risk information is explicitly provided. Our findings highlight a need to investigate how variation in decision content produces variation in preferences for risk.
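
    For reference, the one-parameter weighting function of Tversky and Kahneman (1992) that the study fits has the form below; gamma < 1 produces the overweighting of small probabilities discussed above (gamma = 0.61 is the value Tversky and Kahneman reported for gains, shown here only as an example, not an estimate from this study).

      import numpy as np

      def tk_weight(p, gamma):
          return p**gamma / (p**gamma + (1.0 - p)**gamma) ** (1.0 / gamma)

      p = np.array([0.01, 0.10, 0.50, 0.90, 0.99])
      print(tk_weight(p, gamma=0.61))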

  9. 30 CFR 57.19021 - Minimum rope strength.

    Science.gov (United States)

    2010-07-01

    ... feet: Minimum Value=Static Load×(7.0−0.001L) For rope lengths 3,000 feet or greater: Minimum Value=Static Load×4.0. (b) Friction drum ropes. For rope lengths less than 4,000 feet: Minimum Value=Static Load×(7.0−0.0005L) For rope lengths 4,000 feet or greater: Minimum Value=Static Load×5.0. (c) Tail...
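
    The piecewise formulas quoted above translate directly into code; the sketch below evaluates the minimum required rope strength value (in the same units as the static load) for the drum-rope and friction-drum-rope cases.

      def min_drum_rope_value(static_load, rope_length_ft):
          if rope_length_ft < 3000:
              return static_load * (7.0 - 0.001 * rope_length_ft)
          return static_load * 4.0

      def min_friction_drum_rope_value(static_load, rope_length_ft):
          if rope_length_ft < 4000:
              return static_load * (7.0 - 0.0005 * rope_length_ft)
          return static_load * 5.0

      print(min_drum_rope_value(10.0, 1500))           # 10.0 * 5.5 = 55.0
      print(min_friction_drum_rope_value(10.0, 4500))  # 10.0 * 5.0 = 50.0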

  10. 30 CFR 56.19021 - Minimum rope strength.

    Science.gov (United States)

    2010-07-01

    ... feet: Minimum Value=Static Load×(7.0-0.001L) For rope lengths 3,000 feet or greater: Minimum Value=Static Load×4.0 (b) Friction drum ropes. For rope lengths less than 4,000 feet: Minimum Value=Static Load×(7.0-0.0005L) For rope lengths 4,000 feet or greater: Minimum Value=Static Load×5.0 (c) Tail ropes...

  11. Assessing the impact of heart failure specialist services on patient populations

    Directory of Open Access Journals (Sweden)

    Lyratzopoulos Georgios

    2004-05-01

    Background: The assessment of the impact of healthcare interventions may help commissioners of healthcare services to make optimal decisions. This can be particularly the case if the impact assessment relates to specific patient populations and uses timely local data. We examined the potential impact on readmissions and mortality of specialist heart failure services capable of delivering treatments such as b-blockers and Nurse-Led Educational Intervention (N-LEI). Methods: Statistical modelling of prevented or postponed events among previously hospitalised patients, using estimates of: treatment uptake and contraindications (based on local audit data); treatment effectiveness and intolerance (based on literature); and annual number of hospitalizations per patient and annual risk of death (based on routine data). Results: Optimal treatment uptake among eligible but untreated patients would over one year prevent or postpone 11% of all expected readmissions and 18% of all expected deaths for spironolactone, 13% of all expected readmissions and 22% of all expected deaths for b-blockers (carvedilol), and 20% of all expected readmissions and an uncertain number of deaths for N-LEI. Optimal combined treatment uptake for all three interventions during one year among all eligible but untreated patients would prevent or postpone 37% of all expected readmissions and a minimum of 36% of all expected deaths. Conclusion: In a population of previously hospitalised patients with low previous uptake of b-blockers and no uptake of N-LEI, optimal combined uptake of interventions through specialist heart failure services can potentially help prevent or postpone approximately four times as many readmissions and a minimum of twice as many deaths compared with simply optimising uptake of spironolactone (not necessarily requiring specialist services). Examination of the impact of different heart failure interventions can inform rational planning of relevant healthcare

  12. Assessing the impact of heart failure specialist services on patient populations.

    Science.gov (United States)

    Lyratzopoulos, Georgios; Cook, Gary A; McElduff, Patrick; Havely, Daniel; Edwards, Richard; Heller, Richard F

    2004-05-24

    The assessment of the impact of healthcare interventions may help commissioners of healthcare services to make optimal decisions. This can be particularly the case if the impact assessment relates to specific patient populations and uses timely local data. We examined the potential impact on readmissions and mortality of specialist heart failure services capable of delivering treatments such as b-blockers and Nurse-Led Educational Intervention (N-LEI). Statistical modelling of prevented or postponed events among previously hospitalised patients, using estimates of: treatment uptake and contraindications (based on local audit data); treatment effectiveness and intolerance (based on literature); and annual number of hospitalizations per patient and annual risk of death (based on routine data). Optimal treatment uptake among eligible but untreated patients would over one year prevent or postpone 11% of all expected readmissions and 18% of all expected deaths for spironolactone, 13% of all expected readmissions and 22% of all expected deaths for b-blockers (carvedilol) and 20% of all expected readmissions and an uncertain number of deaths for N-LEI. Optimal combined treatment uptake for all three interventions during one year among all eligible but untreated patients would prevent or postpone 37% of all expected readmissions and a minimum of 36% of all expected deaths. In a population of previously hospitalised patients with low previous uptake of b-blockers and no uptake of N-LEI, optimal combined uptake of interventions through specialist heart failure services can potentially help prevent or postpone approximately four times as many readmissions and a minimum of twice as many deaths compared with simply optimising uptake of spironolactone (not necessarily requiring specialist services). Examination of the impact of different heart failure interventions can inform rational planning of relevant healthcare services.
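
    The arithmetic behind such estimates can be sketched as follows (hypothetical inputs, not the study's audit or literature values): expected events among eligible-but-untreated patients multiplied by the relative reduction attributable to the treatment.

      def events_prevented(n_untreated_eligible, annual_event_rate, relative_risk_reduction):
          expected_events = n_untreated_eligible * annual_event_rate
          return expected_events * relative_risk_reduction

      # e.g. 400 eligible untreated patients, 0.8 readmissions per patient-year,
      # 35% relative reduction from treatment (all numbers hypothetical)
      print(events_prevented(400, 0.8, 0.35))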

  13. A hazard and risk classification system for catastrophic rock slope failures in Norway

    Science.gov (United States)

    Hermanns, R.; Oppikofer, T.; Anda, E.; Blikra, L. H.; Böhme, M.; Bunkholt, H.; Dahle, H.; Devoli, G.; Eikenæs, O.; Fischer, L.; Harbitz, C. B.; Jaboyedoff, M.; Loew, S.; Yugsi Molina, F. X.

    2012-04-01

    outburst floods. It became obvious that large rock slope failures cannot be evaluated on a slope scale with frequency analyses of historical and prehistorical events only, as multiple rockslides have occurred within one century on a single slope that, prior to the recent failures, had been inactive for several thousand years. In addition, a systematic analysis of the temporal distribution indicates that rockslide activity following deglaciation after the Last Glacial Maximum has been much higher than throughout the Holocene. Therefore the classification system has to be based primarily on the geological conditions on the deforming slope and on the deformation rates, and only with lesser weight on frequency analyses. Our hazard classification is therefore primarily based on several criteria: 1) development of the back-scarp, 2) development of the lateral release surfaces, 3) development of the potential basal sliding surface, 4) morphologic expression of the basal sliding surface, 5) kinematic feasibility tests for different displacement mechanisms, 6) landslide displacement rates, 7) change of displacement rates (acceleration), 8) increase of rockfall activity on the unstable rock slope, 9) presence of post-glacial events of similar size along the affected slope and in its vicinity. For each of these criteria there are several possible conditions to choose from (e.g. different velocity classes for the displacement rate criterion). A score is assigned to each condition and the sum of all scores gives the total susceptibility score. Since many of these observations are somewhat uncertain, the classification system is organized in a decision tree where probabilities can be assigned to each condition. All possibilities in the decision tree are computed and the individual probabilities giving the same total score are summed. Basic statistics show the minimum and maximum total scores of a scenario, as well as the mean and modal value. The final output is a cumulative frequency distribution of
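
    The decision-tree treatment of uncertain observations can be mimicked in a few lines: each criterion gets a set of possible conditions with scores and observer-assigned probabilities, every combination is enumerated, and the probabilities of combinations with the same total score are summed. The criteria, scores, and probabilities below are hypothetical, not the Norwegian scheme's actual values.

      from itertools import product

      criteria = {
          "back-scarp":        [(0, 0.2), (2, 0.5), (4, 0.3)],
          "displacement rate": [(1, 0.6), (3, 0.4)],
          "rockfall activity": [(0, 0.7), (2, 0.3)],
      }

      score_dist = {}
      for combo in product(*criteria.values()):
          total = sum(score for score, _ in combo)
          prob = 1.0
          for _, p in combo:
              prob *= p
          score_dist[total] = score_dist.get(total, 0.0) + prob

      for total in sorted(score_dist):
          print(total, round(score_dist[total], 3))   # distribution of the total hazard score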

  14. Estimating minimum polycrystalline aggregate size for macroscopic material homogeneity

    International Nuclear Information System (INIS)

    Kovac, M.; Simonovski, I.; Cizelj, L.

    2002-01-01

    During severe accidents the pressure boundary of the reactor coolant system can be subjected to extreme loadings, which might cause failure. Reliable estimation of the extreme deformations can be crucial for determining the consequences of severe accidents. An important drawback of classical continuum mechanics is its idealization of the inhomogeneous microstructure of materials. Classical continuum mechanics therefore cannot accurately predict the differences between measured responses of specimens that differ in size but are geometrically similar (the size effect). A numerical approach, which models elastic-plastic behavior on the mesoscopic level, is proposed to estimate the minimum size of a polycrystalline aggregate above which it can be considered macroscopically homogeneous. The main idea is to divide the continuum into a set of sub-continua. Analysis of a macroscopic element is divided into modeling the random grain structure (using Voronoi tessellation and random orientation of the crystal lattice) and calculation of the strain/stress field. The finite element method is used to obtain numerical solutions of the strain and stress fields. The analysis is limited to 2D models. (author)

  15. Dosing strategy based on prevailing aminoglycoside minimum inhibitory concentration in India: Evidence and issues

    Directory of Open Access Journals (Sweden)

    Balaji Veeraraghavan

    2017-01-01

    Aminoglycosides are important agents used for treating drug-resistant infections. The current dosing regimens of aminoglycosides do not achieve sufficient serum concentrations against infecting bacterial pathogens that are interpreted as susceptible based on laboratory testing. The minimum inhibitory concentration was determined for nearly 2000 isolates of Enterobacteriaceae and Pseudomonas aeruginosa by the broth microdilution method. Results were interpreted based on CLSI and EUCAST interpretative criteria, and the inconsistencies in the susceptibility profiles were noted. This study provides insights into the inconsistencies existing between laboratory interpretation and the corresponding clinical success rates. This underscores the need to revise clinical breakpoints for amikacin, to resolve underdosing leading to clinical failure.

  16. Fulminant Hepatic Failure in a 5-Year-Old Female after Inappropriate Acetaminophen Treatment

    Directory of Open Access Journals (Sweden)

    Irena Kasmi

    2015-09-01

    CONCLUSION: Healthcare providers should consider probable acetaminophen toxicity in any child who has received the drug and presents with liver failure. When there is a high index of suspicion of acetaminophen toxicity, NAC should be initiated and continued until there are no signs of hepatic dysfunction.

  17. Micromechanical local approach to brittle failure in bainite high resolution polycrystals: A short presentation

    International Nuclear Information System (INIS)

    N'Guyen, C.N.; Osipov, N.; Cailletaud, G.; Barbe, F.; Marini, B.; Petry, C.

    2012-01-01

    The problem of determining the probability of failure in a brittle material from a micromechanical local approach has recently been addressed in a few works, all related to bainite polycrystals at different temperatures and states of irradiation. They have separately paved the way for full-field modelling with high realism in terms of constitutive modelling and microstructural morphology. This work first contributes to enhancing this realism by assembling the most pertinent/valuable characteristics (dislocation-density-based model, large-deformation framework, fully controlled triaxiality conditions, explicit microstructure representation of grains and sub-grains, ...) and by accounting for a statistically representative Volume Element; this condition must indeed be fulfilled in order to capture rare events like brittle micro-fractures which, in the stress analysis, correspond to the tails of the distribution curves. The second original contribution of this work concerns the methodology for determining fracture probabilities: rather than classically, and abruptly, considering a polycrystal as broken as soon as an elementary link (grain or sub-grain) has failed, the possibility of microcrack arrest at microstructural barriers is introduced, which makes it possible to determine the probability of polycrystal failure according to different scenarios of multiple micro-fractures. (authors)
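
    The difference between the two failure criteria can be caricatured with a few lines of Python (purely illustrative numbers, not the paper's micromechanical computation): the classical weakest-link rule declares the polycrystal broken as soon as any link fails, whereas allowing microcrack arrest requires the initiating micro-fracture to also propagate past microstructural barriers.

      import numpy as np

      p_link = np.array([0.002, 0.010, 0.001, 0.005])   # per-link micro-fracture probabilities
      p_propagate = 0.3                                  # probability the crack is not arrested

      p_weakest_link = 1.0 - np.prod(1.0 - p_link)
      p_with_arrest = 1.0 - np.prod(1.0 - p_link * p_propagate)
      print(p_weakest_link, p_with_arrest)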

  18. 30 CFR 77.1431 - Minimum rope strength.

    Science.gov (United States)

    2010-07-01

    ... feet: Minimum Value=Static Load×(7.0−0.001L) For rope lengths 3,000 feet or greater: Minimum Value=Static Load×4.0 (b) Friction drum ropes. For rope lengths less than 4,000 feet: Minimum Value=Static Load×(7.0−0.0005L) For rope lengths 4,000 feet or greater: Minimum Value=Static Load×5.0 (c) Tail ropes...

  19. Systemic risk in dynamical networks with stochastic failure criterion

    Science.gov (United States)

    Podobnik, B.; Horvatic, D.; Bertella, M. A.; Feng, L.; Huang, X.; Li, B.

    2014-06-01

    We model complex non-linear interactions between banks and assets by two time-dependent Erdős-Rényi network models in which each node, representing a bank, can invest either in a single asset (model I) or in multiple assets (model II). We use a dynamical network approach to evaluate the collective financial failure (systemic risk), quantified by the fraction of active nodes. The systemic risk can be calculated over any future time period, divided into sub-periods, where within each sub-period banks may contiguously fail due to links to either i) assets or ii) other banks, controlled by two parameters, the probability of internal failure p and the threshold Th ("solvency" parameter). The systemic risk decreases with the average network degree faster when all assets are equally distributed across banks than if assets are randomly distributed. The more inactive banks each bank can sustain (smaller Th), the smaller the systemic risk; for some Th values in model I we report a discontinuity in systemic risk. When the contiguous spreading in case ii) becomes stochastic, controlled by probability p2, so that the condition for a bank to be solvent (active) is itself stochastic, the systemic risk decreases with decreasing p2. We analyse the asset allocation for the U.S. banks.
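
    A stripped-down cascade of this kind is easy to simulate. The sketch below is a caricature for intuition, not the paper's two models: a bank fails from an internal shock with probability p or once the number of failed neighbours reaches a threshold Th, and the systemic-risk measure is read off as the surviving fraction of nodes. All parameter values are hypothetical.

      import random
      import networkx as nx

      def surviving_fraction(n=200, avg_degree=4.0, p_internal=0.02, th=2, trials=200):
          total = 0.0
          for _ in range(trials):
              g = nx.gnp_random_graph(n, avg_degree / n)
              failed = {v for v in g if random.random() < p_internal}
              changed = True
              while changed:                               # propagate contagion to a fixed point
                  changed = False
                  for v in g:
                      if v not in failed and sum(u in failed for u in g[v]) >= th:
                          failed.add(v)
                          changed = True
              total += (n - len(failed)) / n
          return total / trials

      print(surviving_fraction())   # average fraction of active (non-failed) banks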

  20. Evaluating the Phoenix definition of biochemical failure after (125)I prostate brachytherapy: Can PSA kinetics distinguish PSA failures from PSA bounces?

    Science.gov (United States)

    Thompson, Anna; Keyes, Mira; Pickles, Tom; Palma, David; Moravan, Veronika; Spadinger, Ingrid; Lapointe, Vincent; Morris, W James

    2010-10-01

    To evaluate the prostate-specific antigen (PSA) kinetics of PSA failure (PSAf) and PSA bounce (PSAb) after permanent (125)I prostate brachytherapy (PB). The study included 1,006 consecutive low and "low tier" intermediate-risk patients treated with (125)I PB, with a potential minimum follow-up of 4 years. Patients who met the Phoenix definition of biochemical failure (nadir + 2 ng/mL) were identified. If the PSA subsequently fell to ≤0.5 ng/mL without intervention, this was considered a PSAb. All others were scored as true PSAf. Patient, tumor and dosimetric characteristics were compared between groups using the chi-square test and analysis of variance to evaluate factors associated with PSAf or PSAb. Median follow-up was 54 months. Of the 1,006 men, 57 patients triggered the Phoenix definition of PSA failure; 32 (56%) were true PSAf, and 25 (44%) were PSAb. The median time to trigger nadir + 2 was 20.6 months (range, 6-36) vs. 49 months (range, 12-83) for the PSAb vs. PSAf groups (p < 0.001). The PSAb patients were significantly younger (p < 0.0001), had a shorter time to reach the nadir (median 6 vs. 11.5 months, p = 0.001) and had a shorter PSA doubling time (p = 0.05). Men younger than age 70 who trigger nadir + 2 PSA failure within 38 months of implant have an 80% likelihood of having PSAb and a 20% chance of PSAf. With adequate follow-up, 44% of PSA failures by the Phoenix definition in our cohort were found to be benign PSA bounces. Our study reinforces the need for adequate follow-up when reporting PB PSA outcomes, to ensure accurate estimates of treatment efficacy and to avoid unnecessary secondary interventions. Copyright © 2010. Published by Elsevier Inc. All rights reserved.